WO2008032441A1 - Image processing device, electronic camera and image processing program - Google Patents
- Publication number
- WO2008032441A1 (PCT/JP2007/000978)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- unit
- images
- image processing
- resolution
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
Definitions
- Image processing apparatus, electronic camera, and image processing program
- the present invention relates to an image processing apparatus, an electronic camera, and an image processing program.
- Patent Document 1: Japanese Patent Laid-Open No. 2002-107787 (Claim 1, etc.)
- an object of the present invention is to provide a technique for improving the image quality after composition in the alignment composition of a plurality of images.
- An image processing apparatus of the present invention includes an image input unit, a positional deviation detection unit, and an image composition unit.
- the image input unit captures multiple images of the same subject.
- the positional deviation detection unit detects a positional deviation of the pattern between a plurality of images.
- the image compositing unit performs pattern alignment on a plurality of images based on the positional deviation and combines them.
- The image composition unit evaluates the high-frequency component of the spatial frequency for each of the plurality of images. The composition ratio of the alignment composition at edge portions of the pattern is then reduced for images judged to have fewer high-frequency components.
- The image composition unit sets the composition ratio of the alignment composition in flat parts of the pattern higher than the composition ratio at the edge portions.
- The image composition unit selects the images to be used for composition according to the amount of the high-frequency component.
- An electronic camera of the present invention includes the image processing apparatus according to any one of <<1>> to <<3>> and an imaging unit that continuously captures a subject to generate a plurality of images, and has a function of aligning and combining the plurality of images with the image processing apparatus.
- The image processing program of the present invention is a program for causing a computer to function as the image processing apparatus according to any one of <<1>> to <<3>>.
- A plurality of images are distinguished by the high-frequency component of the spatial frequency.
- For images with few high-frequency components, the composition ratio of the alignment composition is lowered at least at edge portions of the pattern.
- As a result, the composition is performed mainly from images rich in high-frequency components.
- FIG. 1 Block diagram showing electronic camera 10 (including image processing device 25)
- FIG. 2 is a block diagram schematically showing the configuration of the image processing device 25.
- FIG. 3 is a flowchart for explaining the operation of the electronic camera 10 in the first embodiment.
- FIG. 4 is a flowchart for explaining the operation of the electronic camera 10 in the first embodiment.
- FIG. 9 is a diagram for explaining the operation of generating rearranged images.
- FIG. 10 is a flowchart for explaining the operation of the electronic camera 10 in the second embodiment.
- FIG. 11 is a schematic diagram showing image composition processing in the second embodiment.
- FIG. 1 is a block diagram showing an electronic camera 10 (including an image processing device 25) of the present embodiment.
- A photographing lens 12 is attached to the electronic camera 10, and the imaging surface of the imaging device 11 is arranged in its image space.
- the imaging element 11 is controlled by the imaging control unit 14.
- the imaging device 11 has a mode for reading out a high resolution image and a mode for reading out a low resolution image by performing pixel thinning and pixel addition inside the device.
- the image signal output from the image sensor 11 is processed through the signal processing unit 15 and the A / D conversion unit 16 and then temporarily stored in the memory 17.
- the memory 17 is connected to the bus 18.
- An imaging control unit 14, a microprocessor 19, a recording unit 22, an image compression unit 24, a monitor display unit 30, and an image processing device 25 are connected to the bus 18.
- An operation unit 19a such as a release button is connected to the microprocessor 19.
- A recording medium 22a is detachably attached to the recording unit 22.
- FIG. 2 is a block diagram schematically showing the configuration of the image processing device 25.
- The high-resolution image read from the memory 17 is supplied to the reduced image generation unit 32, the feature amount extraction unit 33, and the image synthesis unit 34 via the gain correction unit 31.
- the output data of the reduced image generation unit 32 is supplied to the rough detection unit 36 via the feature amount extraction unit 35.
- the output data of the feature quantity extraction unit 33 is supplied to the precision detection unit 38 via the phase division unit 37.
- a plurality of low-resolution images read from the memory 17 are supplied to the feature amount extraction unit 39 and the image synthesis unit 34, respectively.
- the output data of the feature quantity extraction unit 39 is supplied to the rough detection unit 36 and the precise detection unit 38, respectively.
- the positional deviation roughly detected by the rough detection unit 36 is supplied to the fine detection unit 38.
- the positional deviation detected with high precision by the precision detection unit 38 is supplied to the image composition unit 34.
- the image synthesis unit 34 synthesizes a plurality of low-resolution images and high-resolution images based on the detection result of the positional deviation.
- Step S1: When the main power supply of the electronic camera 10 is turned on, the microprocessor 19 instructs the imaging control unit 14 to read out low-resolution images.
- the imaging control unit 14 drives the imaging device 11 in the low-resolution readout mode, and sequentially reads out the low-resolution images at, for example, 30 frames / second as shown in FIG.
- Step S2: The low-resolution images read from the image sensor 11 are processed through the signal processing unit 15 and the A/D converter 16 and then temporarily stored in the memory 17. The microprocessor 19 deletes low-resolution images exceeding the predetermined number of frames from the memory 17, in order from the oldest.
- The predetermined number of frames here corresponds to the number of low-resolution frames used for composition of the rearranged image described later, and is preferably set to (number of pixels of the high-resolution image) / (number of pixels of the low-resolution image).
- For example, the numbers of vertical and horizontal pixels of a low-resolution image are each 1/4 of those of a high-resolution image.
- Step S3: The monitor display unit 30 displays the low-resolution image (through image) on the monitor screen.
- the microprocessor 19 calculates the exposure based on the photometry result of the photometry unit (not shown) and the brightness of the low resolution image, and determines the exposure time of the high resolution image.
- Step S4: Here, the microprocessor 19 determines whether or not the release button has been fully pressed by the user.
- When the release button is fully pressed, the microprocessor 19 shifts the operation to step S5. Otherwise, the microprocessor 19 returns to step S1.
- Step S5: The microprocessor 19 determines whether or not the exposure time of the high-resolution image determined in step S3 is equal to or less than an allowable upper limit at which blur is not noticeable.
- For example, the allowable upper limit is set to about 1/(35 mm equivalent focal length of the photographing lens 12) seconds.
- If the exposure time setting is less than or equal to the allowable upper limit, the microprocessor 19 moves to step S6; otherwise, it moves to step S7.
- Step S6: The imaging control unit 14 controls the shutter of the imaging device 11 according to the set exposure time, drives it in the high-resolution readout mode, and reads out a high-resolution image.
- This high-resolution image (still image) is recorded on the recording medium 22a after undergoing image processing and image compression as in the conventional case.
- the electronic camera 10 completes the shooting operation.
- Step S7: On the other hand, if it is determined that the exposure time setting exceeds the allowable upper limit for blurring, the microprocessor 19 limits the exposure time to the allowable upper limit at which blur does not occur.
- the imaging control unit 14 performs shutter control of the imaging device 11 according to a short limited exposure time. In this state, the imaging control unit 14 drives the imaging device 11 in the high resolution readout mode to read out the high resolution image.
- This high-resolution image has a low signal level due to underexposure but is less likely to blur. This high resolution image is temporarily recorded in the memory 17.
- Step S8: The gain correction unit 31 in the image processing device 25 reads the high-resolution image from the memory 17.
- the gain correction unit 31 adjusts the gain of this high resolution image to match the signal level of the low resolution image.
- Step S9: The reduced image generation unit 32 converts the resolution of the gain-adjusted high-resolution image to match the pixel count of the low-resolution images.
- For example, the numbers of vertical and horizontal pixels of the high-resolution image are each reduced to 1/4.
- the high resolution image (hereinafter referred to as a reduced image) reduced in resolution in this way is transmitted to the feature quantity extraction unit 35.
- Step S10: FIG. 6 is a diagram for explaining image shift detection based on comparison of projected edges.
- the image misalignment detection process will be described with reference to FIG.
- The feature quantity extraction unit 35 applies the vertical edge extraction filter shown in the following equation (FIG. 6) to extract the vertical edge component gy from the reduced image f(x, y).
- Likewise, the feature quantity extraction unit 35 uses a horizontal edge extraction filter (FIG. 6) to extract the horizontal edge component gx.
- Preferably, the feature quantity extraction unit 35 replaces values of the vertical edge component gy and the horizontal edge component gx that fall within a predetermined minute amplitude with zero.
- The feature quantity extraction unit 35 then calculates the vertical projection waveform by cumulatively adding the vertical edge component gy row by row, and the horizontal projection waveform by cumulatively adding the horizontal edge component gx column by column.
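The projection-waveform computation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: simple central differences stand in for the unspecified edge extraction filters, and `eps` is a hypothetical small-amplitude threshold.

```python
import numpy as np

def projection_waveforms(img, eps=1.0):
    """Edge-projection features for shift detection: extract vertical and
    horizontal edge components, zero out minute amplitudes, then project."""
    img = img.astype(float)
    gy = img[2:, :] - img[:-2, :]      # vertical edge component (stand-in filter)
    gx = img[:, 2:] - img[:, :-2]      # horizontal edge component (stand-in filter)
    gy[np.abs(gy) < eps] = 0.0         # suppress minute amplitudes (noise)
    gx[np.abs(gx) < eps] = 0.0
    v_proj = np.abs(gy).sum(axis=1)    # accumulate row by row
    h_proj = np.abs(gx).sum(axis=0)    # accumulate column by column
    return v_proj, h_proj
```

A flat image yields all-zero waveforms, while a brightness step produces a peak at the corresponding row or column, which is what makes the waveforms usable for alignment.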
- the feature quantity extraction unit 39 fetches a plurality of low resolution images from the memory 17.
- the feature quantity extraction unit 39 performs the same processing as the feature quantity extraction unit 35 on each low-resolution image, and obtains a vertical projection waveform and a horizontal projection waveform, respectively.
- The coarse detection unit 36 takes differences while shifting the vertical projection waveform of the central area of the reduced image against that of each low-resolution image, and detects the waveform shift that minimizes the sum of the absolute values of the differences. This waveform shift corresponds to the vertical positional shift between the reduced image and the low-resolution image.
- Similarly, the coarse detection unit 36 takes differences between the horizontal projection waveform of the central area of the reduced image and that of the low-resolution image, and detects the waveform shift that minimizes the sum of the absolute values of the differences.
- This waveform shift corresponds to a positional shift in the horizontal direction between the reduced image and the low resolution image.
- The coarse detection unit 36 obtains the positional deviations (coarse detection results) of the plurality of low-resolution images using the reduced image as the position reference, and outputs them to the fine detection unit 38.
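The minimization of the sum of absolute differences over candidate shifts can be sketched as a simple exhaustive search. This is an illustrative sketch; the search range `max_shift` is a hypothetical parameter not given in the patent.

```python
import numpy as np

def waveform_shift(ref, target, max_shift=8):
    """Coarse position-deviation detection: find the integer offset s that
    minimizes sum_i |ref[i] - target[i - s]| over the overlapping samples."""
    n = min(len(ref), len(target))
    best_s, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)          # overlap region for this shift
        cost = np.abs(ref[lo:hi] - target[lo - s:hi - s]).sum()
        if cost < best_cost:
            best_cost, best_s = cost, s
    return best_s
```

Applying this once to the vertical waveforms and once to the horizontal waveforms yields the two components of the coarse positional deviation.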
- Step S11: The feature amount extraction unit 33 takes in the gain-corrected high-resolution image and extracts the vertical edge component gy and the horizontal edge component gx using an edge extraction filter.
- The edge extraction filter here is preferably switched as follows according to the low-resolution image readout method.
- gy(x, y) = [ -f(x, y-4) - f(x, y-3) - f(x, y-2) - f(x, y-1) + f(x, y+4) + f(x, y+5) + f(x, y+6) + f(x, y+7) ] / 4
- gx(x, y) = [ -f(x-4, y) - f(x-3, y) - f(x-2, y) - f(x-1, y) + f(x+4, y) + f(x+5, y) + f(x+6, y) + f(x+7, y) ] / 4
- The feature quantity extraction unit 33 replaces values of the vertical edge component gy and the horizontal edge component gx that fall within a predetermined minute amplitude with zero.
- The feature quantity extraction unit 33 calculates the vertical projection waveform by cumulatively adding the vertical edge component gy row by row, and the horizontal projection waveform by cumulatively adding the horizontal edge component gx column by column.
- the phase division unit 37 sub-samples the vertical projection waveform of the high-resolution image every four pixels. At this time, the phase division unit 37 shifts the phase of subsampling. By doing so, as shown in Fig. 7, four types of sampling information whose phases are shifted from each other are generated.
- the phase division unit 37 subsamples the horizontal projection waveform of the high-resolution image every four pixels. At this time, by shifting the phase of sub-sampling, four types of sampling information whose phases are shifted from each other are generated.
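The phase-divided subsampling can be sketched as follows: splitting a high-resolution waveform into `step` interleaved waveforms whose sampling phases differ by one high-resolution pixel each, as in FIG. 7. This is an illustrative sketch of the operation, not the patent's code.

```python
import numpy as np

def phase_subsample(waveform, step=4):
    """Sub-sample a waveform every `step` samples at `step` different
    starting phases, producing phase-shifted sampling information."""
    return [np.asarray(waveform[p::step]) for p in range(step)]
```

Matching each of the four phase-shifted waveforms against the low-resolution waveform and keeping the best-fitting one is what allows the precision detection unit to resolve displacements finer than the low-resolution pixel interval.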
- Step S12: Starting from the coarse detection result of the coarse detection unit 36, the precision detection unit 38 takes differences while shifting the sampling information of the vertical projection waveform obtained from the high-resolution image against the vertical projection waveform of the low-resolution image, and detects the waveform shift that minimizes the sum of the absolute values of the differences.
- the precise detection unit 38 performs the detection of the waveform deviation for each of the four types of sampling information, thereby obtaining the waveform deviation that most closely matches the feature of the pattern (in this case, the waveform).
- This waveform shift corresponds to a horizontal position shift in units smaller than the pixel interval of the low resolution image.
- the precision detection unit 38 similarly detects the positional deviation in the vertical direction in a unit smaller than the pixel interval of the low resolution image (for example, the unit of the pixel interval of the high resolution image).
- the precision detection unit 38 obtains positional shifts (precision detection results) of a plurality of low resolution images using the high resolution image as a position reference, and outputs them to the image composition unit 34.
- Step S13: The image composition unit 34 applies high-pass filtering to each low-resolution image, calculates the sum of the absolute values of the filter output, and obtains the amount of the high-frequency component. It then classifies the plurality of low-resolution images according to the obtained amounts.
- For example, the images may be ranked in descending order of high-frequency content, and those from the top down to a predetermined rank selected for composition.
- Alternatively, images whose high-frequency component exceeds a predetermined threshold may be selected for composition.
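Both selection options can be sketched in a few lines. This is an illustrative sketch: the Laplacian stands in for the unspecified high-pass filter, and `threshold`/`top_k` are hypothetical parameters.

```python
import numpy as np

def high_frequency_amount(im):
    """Sum of absolute responses of a simple Laplacian high-pass filter
    (a stand-in; the patent does not specify the filter)."""
    lap = (-4.0 * im + np.roll(im, 1, 0) + np.roll(im, -1, 0)
           + np.roll(im, 1, 1) + np.roll(im, -1, 1))
    return float(np.abs(lap).sum())

def select_for_composition(images, threshold=None, top_k=None):
    """Keep either the top_k sharpest frames or those above a threshold,
    mirroring the two selection options described in the text."""
    scores = [high_frequency_amount(im) for im in images]
    if threshold is not None:
        return [im for im, s in zip(images, scores) if s > threshold]
    order = np.argsort(scores)[::-1][:top_k]
    return [images[i] for i in order]
```

Blurred frames score low on either criterion and are excluded before alignment composition, which is the behavior the step describes.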
- Step S14: The image composition unit 34 applies high-pass filtering to the high-resolution image and divides it into edge regions and flat regions based on the filter output.
- Step S15: In the edge regions obtained in step S14, the composition ratio is reduced for images with fewer high-frequency components. In the flat regions, the composition ratio of an image with few high-frequency components is set higher than at the edge portions. For regions that are neither edge nor flat, the composition ratio is set to an intermediate value between the two.
- Step S16: The image composition unit 34 enlarges each of the plurality of low-resolution images.
- At this time, the image composition unit 34 does not perform pixel interpolation, so an enlarged image with wide pixel intervals is obtained.
- the image compositing unit 34 shifts the pixel position of the enlarged image of the low resolution image based on the precise detection result of the positional shift obtained by the precise detection unit 38, respectively.
- Step S17: The rearranged image after the mapping process contains unfilled gaps, pixels deviating from the regular pixel positions, and overlapping pixels.
- The image composition unit 34 therefore picks up, for each regular pixel position of the rearranged image, the neighboring pixels, takes the weighted average of their signal components according to the composition ratio set in step S15, and uses the weighted average value as the signal component (luminance, color difference, etc.) at that regular pixel position.
- Step S18: The image composition unit 34 performs the following filtering process on the luminance component of the high-resolution image.
- the image composition unit 34 extracts a luminance component from the high-resolution image after gain correction, and performs filter processing that combines median processing and Gaussian filtering.
- For example, with a filter size of 3 x 3 pixels, the three central values (in rank order) of the 9 pixels within the filter are extracted and a Gaussian filter is applied to them. This process reduces in advance the noise contained in the underexposed luminance component.
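The hybrid median/Gaussian filter can be sketched as below. This is a sketch under stated assumptions: the patent does not give the Gaussian weights, so the [0.25, 0.5, 0.25] weighting of the three central order statistics is illustrative.

```python
import numpy as np

def median3_gaussian(lum):
    """3x3 hybrid filter: take the three middle order statistics of the
    9-pixel neighborhood, then average them with Gaussian-like weights."""
    h, w = lum.shape
    out = lum.astype(float).copy()
    weights = np.array([0.25, 0.5, 0.25])      # illustrative Gaussian weights
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            block = np.sort(lum[y-1:y+2, x-1:x+2].ravel())
            out[y, x] = block[3:6] @ weights   # three central values of 9
    return out
```

Discarding the extreme values before the weighted average is what suppresses impulsive noise in the underexposed luminance component while still smoothing like a Gaussian.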
- Step S19: Next, a composition ratio is set between the luminance component A of the filtered high-resolution image and the luminance component B of the rearranged image.
- It is preferable to set the composition ratio according to the following rules.
- The ratio of the rearranged image is locally reduced at locations where the local signal change of the luminance component A of the high-resolution image is large.
- Where the patterns of the two images closely match, the composition ratio of the luminance component B of the rearranged image is increased overall.
- Smoothing with a Gaussian filter is applied.
- the luminance component A of the high-resolution image is locally given priority in places where the pattern of the high-resolution image and the rearranged image are significantly different.
- The composition ratio G(i, j, p) is not particularly changed depending on the reference distance of the pixel.
- As long as the luminance component B lies within the filter size m, it can be reflected in the luminance component A even when the reference distance is long. As a result, misalignment of the rearranged image can be tolerated to some extent.
- The parameter σ in the above equation is a numerical value that adjusts the composition ratio and its amount of decrease.
- This σ is preferably set locally smaller at locations where the variance of the 3 x 3 pixels around the luminance component A is larger.
- σ in the above equation is also set larger as the time interval and/or positional deviation between the high-resolution image and the low-resolution image becomes smaller.
- Step S20: The color difference components of the rearranged image generated in step S17 and the luminance component of the composite image generated in step S19 are combined to generate a high-resolution color image reflecting the information of the low-resolution images.
- The color image is recorded on the recording medium 22a via the image compression unit 24 and the recording unit 22.
- [Effects of this embodiment, etc.]
- the through image (low resolution image) discarded after the monitor display is effectively utilized for improving the image quality of the still image (high resolution image).
- the imaging performance of the electronic camera 10 can be improved.
- Low-resolution images captured at short time intervals are combined. The pattern differences between the images are therefore small to begin with, and a good image composition result can be obtained.
- a plurality of low resolution images are aligned using a high resolution image with a high pixel density as a position reference. Therefore, the image alignment accuracy of the low-resolution image is high, and a better image composition result can be obtained.
- a plurality of pieces of sampling information whose sampling phases are shifted from each other are generated from a high-resolution image.
- As a result, the positional shift can be detected in units smaller than the pixel interval of the low-resolution image, further improving alignment accuracy and yielding a better image composition result.
- By aligning and mapping the luminance components (signal components with high visual sensitivity) of a plurality of low-resolution images, a luminance component with higher resolution than the low-resolution images can be obtained.
- Likewise, the color difference components (signal components with low visual sensitivity) of a plurality of low-resolution images may be aligned and mapped to obtain a color difference component with higher resolution than the low-resolution images.
- the high-resolution luminance component (rearranged image) created by the alignment and the luminance component of the original high-resolution image are weighted and synthesized.
- This weighted composition can suppress uncorrelated luminance noise between the two images.
- Where the signal difference between the two images is large, the composition ratio of the rearranged image is locally reduced. If a misalignment occurs in the rearranged image, the signal difference between the two images widens significantly and the composition ratio of the rearranged image drops, so the misalignment is hardly reflected in the image composition result.
- the composition ratio of the rearranged image is locally lowered at a location where the local signal change of the high resolution image is large. Therefore, the image structure of the high-resolution image is given priority in the edge portion of the image. As a result, it is possible to avoid adverse effects such as multi-line edges.
- When the time interval and/or positional deviation between the images is small, the composition ratio of the rearranged image is increased overall. Under this condition the patterns of the high-resolution and low-resolution images can be assumed to be very close, so raising the composition ratio of the rearranged image further improves the image S/N without risk of corrupting the pattern.
- a plurality of low resolution images are ranked according to the amount of the high frequency component of the spatial frequency. At this time, images with few high-frequency components are excluded from the images used for composition because blurring is noticeable.
- the composition ratio of the alignment composition at the edge portion of the pattern is adjusted to be small for an image that is determined to have relatively few high-frequency components.
- As a result, edge portions from images with little subject blur are given priority in the alignment composition.
- Adverse effects such as dulled edges in the rearranged image are thereby suppressed.
- Furthermore, the composition ratio in flat portions is set higher than at edge portions.
- In flat portions, therefore, the multiple images are combined more evenly, and the image S/N after alignment composition can be increased.
- FIG. 10 is a flowchart for explaining the operation of the electronic camera 10 in the second embodiment.
- FIG. 11 is a schematic diagram showing the image composition processing in the second embodiment.
- the second embodiment is a modification of the first embodiment, and the electronic camera combines a plurality of images having the same resolution.
- Since the configuration of the electronic camera in the second embodiment is the same as that in the first embodiment (FIG. 1), duplicate description is omitted.
- S101 to S103 in FIG. 10 correspond to S1 to S3 described above.
- the operation of the electronic camera will be described below along the step numbers shown in FIG.
- Step S104: The microprocessor 19 determines whether or not the exposure time of the high-resolution image determined in step S103 is equal to or less than the allowable upper limit at which blur is not noticeable.
- The allowable upper limit is set in the same manner as in step S5 above.
- If the exposure time setting is less than or equal to the allowable upper limit, the microprocessor 19 moves to step S105; otherwise, it moves to step S106.
- Step S105: The imaging control unit 14 controls the image sensor 11 according to the set exposure time and captures a high-resolution image (still image). The operation of this step corresponds to step S6 above, so redundant description is omitted.
- Step S106: On the other hand, when it is determined that the exposure time setting exceeds the allowable upper limit for blurring, the microprocessor 19 switches the operation state of the electronic camera to the continuous shooting mode. The microprocessor 19 limits the exposure time of each frame in the continuous shooting mode to the allowable upper limit at which blur does not occur.
- The imaging control unit 14 drives the imaging device 11 in the high-resolution readout mode with the short, limited exposure time. As a result, multiple high-resolution images are captured continuously by the image sensor 11.
- Each high-resolution image has a low signal level due to underexposure, but is less likely to blur.
- Each of the above high-resolution images is temporarily recorded in memory 17.
- For example, the shooting operation is performed in either of the following manners (1) and (2).
- (1) The imaging control unit 14 starts continuous shooting of high-resolution images when the mode is switched to the continuous shooting mode, storing images for a certain period in the memory 17. The microprocessor 19 deletes high-resolution images exceeding the predetermined number of frames from the memory 17, in order from the oldest.
- When the microprocessor 19 receives a user instruction (such as pressing of the release button) during continuous shooting, it designates, among the high-resolution images in the memory 17, the image corresponding to the timing of the instruction as the reference image.
- The microprocessor 19 designates a predetermined number of high-resolution images before and after the reference image in the time-axis direction as targets of the synthesis process described later.
- A high-resolution image designated as a target of the synthesis process is referred to as a composite image. The reference image and the composite images are held in the memory 17 at least until the synthesis process is completed.
- (2) The microprocessor 19 causes the imaging control unit 14 to start continuous shooting in response to a user's continuous shooting start instruction (such as pressing of the release button). The microprocessor 19 stops the continuous shooting when it receives the user's continuous shooting end instruction (such as release of the release button) or when continuous shooting of a predetermined number of frames is completed.
- the microprocessor 19 displays the captured images on the monitor display unit 30 and allows the user to select the reference image.
- The microprocessor 19 uses the image designated by the user among the continuously shot high-resolution images as the reference image.
- The microprocessor 19 uses the images other than the reference image among the continuously shot high-resolution images as composite images.
- the microprocessor 19 may designate an image corresponding to the timing of the continuous shooting start instruction (the first high-resolution image shot) as the reference image. In this case, it is possible to omit the step of letting the user specify the reference image.
- Step S107: The image processing device 25 detects the positional shift of each composite image (S106) with respect to the reference image (S106) by the same processing as step S10 of the first embodiment. This shift is detected for all composite images.
- Step S108: The image composition unit 34 of the image processing device 25 applies high-pass filtering to each composite image (S106), calculates the sum of the absolute values of the filter output, and obtains the amount of the high-frequency component. The image composition unit 34 then classifies the plurality of composite images according to the obtained amounts.
- For example, the image composition unit 34 may rank the images in descending order of high-frequency content and select those from the top down to a predetermined rank, narrowing down the composite images actually used for composition.
- Alternatively, the image composition unit 34 may select images whose high-frequency component exceeds a predetermined threshold, and narrow down the composite images actually used for composition.
- Step S109: The image composition unit 34 applies high-pass filtering to the reference image and divides it into edge regions and flat regions based on the filter output. In the edge regions of the reference image, the image composition unit 34 reduces the composition ratio for composite images with fewer high-frequency components. In the flat regions, it sets the composition ratio of an image with few high-frequency components higher than at the edge portions. For regions that are neither edge nor flat, the composition ratio is set to an intermediate value between the two.
- Step S110: With the reference image as the position reference, the image composition unit 34 shifts the pixel positions of each composite image based on the positional shift detected in S107 and performs mapping (rearrangement) (see FIG. 11). In the second embodiment, a composite image after mapping is called a rearranged image.
- the resolution (number of pixels) of the reference image and each synthesized image are the same.
- the rearranged image group is treated as a three-dimensional image having a time axis t in addition to the X axis y axis indicating the position in the same image.
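For whole-pixel shifts at equal resolution, the mapping of step S110 and the x-y-t stacking above can be sketched as follows. The sign convention — the detected (dx, dy) being the displacement of the composite image relative to the reference — is an assumption, and uncovered border pixels are marked NaN so a later blend can ignore them.

```python
import numpy as np

def rearrange(image, dx, dy):
    """Map one composite image onto the reference grid: each output pixel
    is fetched from (x + dx, y + dy); positions with no source pixel are
    marked NaN (invalid)."""
    h, w = image.shape
    out = np.full((h, w), np.nan)
    ys, xs = np.mgrid[0:h, 0:w]
    sy, sx = ys + dy, xs + dx
    ok = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out[ys[ok], xs[ok]] = image[sy[ok], sx[ok]]
    return out

def stack_rearranged(images, shifts):
    """The rearranged frames form a 3-D volume: x, y, and a time axis t."""
    return np.stack([rearrange(img, dx, dy)
                     for img, (dx, dy) in zip(images, shifts)])
```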
- Step S111: The image compositing unit 34 combines the reference image (S106) and the rearranged images (S110) to generate the final composite image.
- For example, the image compositing unit 34 composites each rearranged image with the reference image in turn, repeating the compositing process as many times as there are rearranged images.
- Alternatively, the image compositing unit 34 may first combine the plurality of rearranged images by a median operation or the like, and then combine the resulting image with the reference image.
- In doing so, the image compositing unit 34 reads out the initial composition ratios obtained in S109. It then further adjusts the composition ratio of the reference image and each rearranged image according to the gradation value of the reference image and the gradation value of the rearranged image.
- Specifically, when the difference in gradation value between a target pixel of the reference image and the corresponding pixel of the rearranged image is at or above a threshold (the gradation difference is large), the image compositing unit 34 locally lowers the composition ratio at that pixel. Conversely, when the difference is below the threshold (the gradation difference is small), the image compositing unit 34 locally raises the composition ratio at that pixel. This strongly suppresses the effects of residual misalignment between the images.
- The image compositing unit 34 also adjusts the composition ratio according to local signal changes in the reference image. For example, at locations where the local signal change in the reference image is large, the image compositing unit 34 locally lowers the composition ratio of the rearranged image; at locations where it is small, the unit locally raises that ratio. This suppresses adverse effects such as multiple edge lines in the composite image. Finally, the image compositing unit 34 adds the corresponding pixels of the reference image and the rearranged image according to the composition ratio.
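One pass of the step S111 blend, with the gradation-difference gate, might look like the sketch below. The threshold (16) and the factor by which the gated ratio is cut (0.25) are hypothetical — the patent fixes only the direction of the adjustment — and the local-signal-change adjustment is omitted for brevity. NaN marks invalid rearranged pixels, which fall back to the reference.

```python
import numpy as np

def blend_once(ref, rearranged, base_ratio, grad_thresh=16.0, cut=0.25):
    """Weighted addition of the reference image and one rearranged image.
    Where the gradation difference is large (>= grad_thresh) the local
    composition ratio is reduced, suppressing residual misalignment."""
    diff = np.abs(ref - rearranged)
    ratio = np.where(diff >= grad_thresh, base_ratio * cut, base_ratio)
    out = (1.0 - ratio) * ref + ratio * rearranged
    # pixels the rearrangement could not fill keep the reference value
    return np.where(np.isnan(rearranged), ref, out)
```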
- Step S112: The microprocessor 19 records the final composite image (generated in S111) on the recording medium 22a via the image compression unit 24 and the recording unit 22. This concludes the description of Fig. 10.
- The configuration of the second embodiment can provide the same kind of effects as the first embodiment.
- In particular, when the resolution of the rearranged images is high and the amount of information is large, an even higher-definition color image can be acquired.
- The present inventor has disclosed a procedure for further speeding up positional-shift detection in Japanese Patent Application No. 2005-345715. The positional-shift detection in each of the above embodiments may be sped up according to this procedure.
- First, in step S10, the absolute positional shift between the reduced image of the high-resolution image and a low-resolution image is roughly detected.
- Next, the precision detection unit 38 can roughly know the remaining absolute positional shifts from the relative coarse detection results and the precision detection result of at least one positional shift.
- The precision detection unit 38 can then detect the precise positional shifts quickly by performing the positional-shift search using these rough absolute results as starting points.
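The speed-up can be illustrated with a one-dimensional sketch: a sum-of-absolute-differences search restricted to a small window around the coarse estimate, instead of scanning every possible shift. The window radius is an arbitrary choice here.

```python
import numpy as np

def best_shift(signal, template, center, radius):
    """SAD search of `template` inside `signal`, restricted to positions
    within +/-radius of the coarse estimate `center` (the starting point
    of the precise positional-shift search)."""
    n, m = len(signal), len(template)
    best, best_cost = None, np.inf
    for s in range(max(0, center - radius), min(n - m, center + radius) + 1):
        cost = np.abs(signal[s:s + m] - template).sum()
        if cost < best_cost:
            best, best_cost = s, cost
    return best
```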
- In the above embodiments, the positional shift between images is detected by comparing projection waveforms.
- However, the present invention is not limited to this.
- For example, the positional shift may be detected by spatially comparing the pixel arrangements of the two images.
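A minimal form of such a spatial comparison is exhaustive block matching by mean absolute difference over the overlap, sketched below. The search radius is arbitrary, and the sign convention (the detected (dx, dy) is the displacement of the second image relative to the reference) is an illustrative assumption.

```python
import numpy as np

def block_match(ref, img, search=2):
    """Try every whole-pixel displacement in a +/-search window and keep
    the one minimizing the mean absolute difference of the overlap."""
    h, w = ref.shape
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry0, ry1 = max(0, -dy), min(h, h - dy)
            rx0, rx1 = max(0, -dx), min(w, w - dx)
            if ry1 <= ry0 or rx1 <= rx0:
                continue
            a = ref[ry0:ry1, rx0:rx1]
            b = img[ry0 + dy:ry1 + dy, rx0 + dx:rx1 + dx]
            cost = np.abs(a - b).mean()
            if cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best
```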
- In the above embodiments, a plurality of low-resolution images are acquired before the high-resolution image is captured.
- However, the present invention is not limited to this.
- A plurality of low-resolution images may be acquired after the high-resolution image is captured.
- Alternatively, low-resolution images may be acquired both before and after the high-resolution image is captured.
- In the above embodiment, the color-difference components are obtained from the rearranged images and the luminance component from the composite image.
- However, the luminance and color-difference components may both be obtained from the rearranged images, or both from the composite image.
- Alternatively, the luminance component may be obtained from the reference image and the rearranged images, and the color-difference components from the rearranged images.
- The present invention is not limited to this; in general, it may be applied wherever RGB, Lab, or other image signals are handled. For an RGB image signal, the signal component with high visual sensitivity is the G component and the remaining signal components are the R and B components. For a Lab image signal, the signal component with high visual sensitivity is the L component and the remaining signal components are the a and b components.
- In the above embodiments, the positional shift of the pattern is detected by image processing.
- However, an acceleration sensor or the like may instead be mounted on the camera to determine the movement (shake) of the camera's shooting area, and the positional shift of the pattern among the plurality of images may be detected from this movement.
- As described above, the present invention is a technique that can be used in image processing apparatuses and the like.
Abstract
An image processing device is provided with an image input unit, a positional-shift detection unit, and an image compositing unit. The image input unit receives a plurality of images of the same subject. The positional-shift detection unit detects positional shifts of the pattern among the plurality of images. The image compositing unit aligns the patterns according to the detected shifts and composites the images. In this configuration, the image compositing unit evaluates the high-frequency components of the spatial frequencies of the images; the fewer high-frequency components an image is judged to have, the smaller the composition ratio it is given in the alignment composition at edge portions of the pattern.
Description
Specification
Image processing apparatus, electronic camera, and image processing program
Technical field
[0001] The present invention relates to an image processing apparatus, an electronic camera, and an image processing program.
Background art
[0002] Conventionally, a technique is known in which an electronic camera performs a plurality of divided exposures, and the resulting images are aligned and combined to repair blur in the captured image (for example, Patent Document 1 below).
Patent Document 1: Japanese Patent Laid-Open No. 2002-107787 (Claim 1, etc.)
Disclosure of the invention
Problems to be solved by the invention
[0003] In the prior art, however, individual differences among the images to be aligned and combined raise the concern of adverse effects such as blunting of the edge portions of the pattern after composition.
[0004] Accordingly, an object of the present invention is to provide a technique for improving the image quality after composition in the alignment and composition of a plurality of images.
Means for solving the problems
[0005] 《1》 An image processing apparatus of the present invention includes an image input unit, a positional-shift detection unit, and an image compositing unit. The image input unit captures a plurality of images of the same subject. The positional-shift detection unit detects the positional shift of the pattern among the plurality of images. The image compositing unit aligns the pattern of the plurality of images based on the positional shift and combines them. In this configuration, the image compositing unit evaluates the high-frequency components of the spatial frequency for the plurality of images; the fewer high-frequency components an image is judged to have, the smaller the composition ratio it is given at edge portions of the pattern in the alignment composition.
[0006] 《2》 Preferably, for an image judged to have few high-frequency components, the image compositing unit sets the composition ratio in flat portions of the pattern higher than the composition ratio at the edge portions.
[0007] 《3》 Also preferably, the image compositing unit selects the images to be used for composition according to the amount of the high-frequency components.
[0008] 《4》 An electronic camera of the present invention includes the image processing apparatus according to any one of 《1》 to 《3》 and an imaging unit that continuously captures a subject to generate a plurality of images, and has a function of aligning and combining the plurality of images with the image processing apparatus.
[0009] 《5》 An image processing program of the present invention is a program for causing a computer to function as the image processing apparatus according to any one of 《1》 to 《3》.
Effects of the invention
[0010] In the present invention, a plurality of images are distinguished by the high-frequency components of their spatial frequencies. That is, for an image judged to have few high-frequency components, the composition ratio of the alignment composition is lowered at least at the edge portions of the pattern. As a result, alignment composition at the edge portions is performed mainly with images rich in high-frequency components, which makes it possible to suppress adverse effects such as blunted edges after composition.
Brief description of the drawings
[0011] [Fig. 1] Block diagram showing the electronic camera 10 (including the image processing device 25)
[Fig. 2] Block diagram schematically showing the configuration of the image processing device 25
[Fig. 3] Flowchart explaining the operation of the electronic camera 10 in the first embodiment
[Fig. 4] Flowchart explaining the operation of the electronic camera 10 in the first embodiment
[Fig. 5] Diagram explaining the low-resolution and high-resolution images
[Fig. 6] Diagram showing image-shift detection by comparison of projected edges
[Fig. 7] Diagram showing sub-sampling
[Fig. 8] Diagram showing the adjustment of the composition ratio
[Fig. 9] Diagram explaining the generation of the rearranged image
[Fig. 10] Flowchart explaining the operation of the electronic camera 10 in the second embodiment
[Fig. 11] Schematic diagram showing the image composition processing in the second embodiment
Best mode for carrying out the invention
[0012] <Description of the first embodiment>
[Description of the configuration of the electronic camera]
Fig. 1 is a block diagram showing the electronic camera 10 (including the image processing device 25) of the present embodiment.
[0013] In Fig. 1, a photographic lens 12 is mounted on the electronic camera 10. The imaging surface of an image sensor 11 is arranged in the image space of the photographic lens 12. The image sensor 11 is controlled by an imaging control unit 14. The image sensor 11 has a mode for reading out a high-resolution image and a mode for reading out a low-resolution image by performing pixel thinning or pixel addition inside the sensor. The image signal output from the image sensor 11 is processed through a signal processing unit 15 and an A/D conversion unit 16 and then temporarily stored in a memory 17.
[0014] The memory 17 is connected to a bus 18. The imaging control unit 14, a microprocessor 19, a recording unit 22, an image compression unit 24, a monitor display unit 30, and the image processing device 25 are also connected to this bus 18. An operation unit 19a including a release button is connected to the microprocessor 19. A recording medium 22a is detachably attached to the recording unit 22.
[0015] [Description of the image processing device 25]
Fig. 2 is a block diagram schematically showing the configuration of the image processing device 25.
[0016] The high-resolution image read from the memory 17 is supplied via a gain correction unit 31 to a reduced-image generation unit 32, a feature extraction unit 33, and an image compositing unit 34. The output data of the reduced-image generation unit 32 is supplied to a coarse detection unit 36 via a feature extraction unit 35. The output data of the feature extraction unit 33 is supplied to a precision detection unit 38 via a phase division unit 37.
[0017] Meanwhile, the plurality of low-resolution images read from the memory 17 are supplied to a feature extraction unit 39 and to the image compositing unit 34. The output data of the feature extraction unit 39 is supplied to the coarse detection unit 36 and to the precision detection unit 38.
[0018] The positional shift roughly detected by the coarse detection unit 36 is supplied to the precision detection unit 38. The positional shift detected with high precision by the precision detection unit 38 is supplied to the image compositing unit 34. Based on this positional-shift detection result, the image compositing unit 34 combines the plurality of low-resolution images and the high-resolution image.
[0019] [Description of operation]
Figs. 3 and 4 are flowcharts explaining the operation of the electronic camera 10. This operation is described below along the step numbers shown in Fig. 3.
[0020] Step S1: When the main power of the electronic camera 10 is turned on, the microprocessor 19 instructs the imaging control unit 14 to read out low-resolution images. The imaging control unit 14 drives the image sensor 11 in the low-resolution readout mode and sequentially reads out low-resolution images at, for example, 30 frames per second, as shown in Fig. 5.
[0021] Step S2: The low-resolution images read out from the image sensor 11 are processed through the signal processing unit 15 and the A/D conversion unit 16 and then temporarily stored in the memory 17. The microprocessor 19 deletes low-resolution images exceeding a predetermined number of frames from the memory 17, oldest first.
[0022] The predetermined number of frames here corresponds to the number of low-resolution frames used for composition of the rearranged image described later, and is preferably set to at least (number of pixels of the high-resolution image / number of pixels of the low-resolution image).
[0023] For example, if the numbers of vertical and horizontal pixels of the low-resolution image are each 1/4 of those of the high-resolution image, the predetermined number of frames is preferably set to 4 × 4 = 16 frames or more.
[0024] Step S3: The monitor display unit 30 displays the low-resolution images (through images) on the monitor screen. Meanwhile, the microprocessor 19 performs an exposure calculation based on the photometry result of a photometry unit (not shown) and the brightness of the low-resolution images, and determines the exposure time for the high-resolution image.
[0025] Step S4: The microprocessor 19 then determines whether the user has fully pressed the release button.
[0026] If the release button has been fully pressed, the microprocessor 19 proceeds to step S5. Otherwise, it returns to step S1.
[0027] Step S5: The microprocessor 19 determines whether the exposure time of the high-resolution image determined in step S3 is at or below the allowable upper limit at which blur is not noticeable. For example, this allowable upper limit is set to about 1/(35 mm-equivalent focal length of the photographic lens 12) second.
[0028] If the exposure time setting is at or below the allowable upper limit, the microprocessor 19 proceeds to step S6. If it exceeds the allowable upper limit, the microprocessor 19 proceeds to step S7.
[0029] Step S6: The imaging control unit 14 performs shutter control of the image sensor 11 according to the set exposure time. It then drives the image sensor 11 in the high-resolution readout mode and reads out a high-resolution image. This high-resolution image (still image) undergoes conventional image processing and image compression and is then recorded on the recording medium 22a. With this, the electronic camera 10 completes the shooting operation.
[0030] Step S7: If, on the other hand, the exposure time setting is judged to exceed the allowable upper limit for blur, the microprocessor 19 limits the exposure time to an upper limit at which blur does not occur.
[0031] The imaging control unit 14 performs shutter control of the image sensor 11 according to this shortened exposure time. In this state, it drives the image sensor 11 in the high-resolution readout mode and reads out a high-resolution image. This high-resolution image has a low signal level because of the underexposure, but it is unlikely to be blurred. The high-resolution image is temporarily recorded in the memory 17.
[0032] Step S8: The gain correction unit 31 in the image processing device 25 takes in the high-resolution image from the memory 17 and adjusts its gain to match the signal level of the low-resolution images.
[0033] Step S9: The reduced-image generation unit 32 converts the resolution of the gain-adjusted high-resolution image to match the number of pixels of the low-resolution images.
[0034] For example, by extracting the average value of each 4 × 4 pixel block, the numbers of vertical and horizontal pixels of the high-resolution image can each be reduced to 1/4.
[0035] The high-resolution image reduced in this way (hereinafter called the reduced image) is passed to the feature extraction unit 35.
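Steps S8 and S9 can be sketched as follows; the 4×4 block averaging is the example given above, and matching the mean signal level is one simple way to realize the gain adjustment (the patent does not specify the method).

```python
import numpy as np

def gain_match(high, low_level):
    """Step S8 (sketch): scale the under-exposed high-resolution image so
    that its mean matches the signal level of the low-resolution images."""
    return high * (low_level / high.mean())

def reduce_4x4(img):
    """Step S9: each output pixel is the average of a 4x4 block, so the
    reduced image has 1/4 the pixels in each direction."""
    h, w = img.shape
    return img.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))
```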
[0036] Step S10: Fig. 6 illustrates image-shift detection by comparison of projected edges. The image-shift detection process is described below with reference to Fig. 6.
[0037] First, the feature extraction unit 35 extracts the vertical edge component gy from the reduced image f(x, y) using the vertical-edge extraction filter shown below (see Fig. 6 [A]).
gy(x, y) = -f(x, y-1) + f(x, y+1)
Further, the feature extraction unit 35 extracts the horizontal edge component gx from the reduced image f(x, y) using the horizontal-edge extraction filter shown below (see Fig. 6 [B]).
gx(x, y) = -f(x-1, y) + f(x+1, y)
To reduce the influence of noise, the feature extraction unit 35 preferably replaces vertical edge components gy and horizontal edge components gx that fall within a predetermined small amplitude with zero.
[0038] Next, as shown in Fig. 6 [A], the feature extraction unit 35 calculates the vertical projection waveform by cumulatively adding the vertical edge components gy along each horizontal row.
[0039] Further, as shown in Fig. 6 [B], the feature extraction unit 35 calculates the horizontal projection waveform by cumulatively adding the horizontal edge components gx along each vertical column.
[0040] Meanwhile, the feature extraction unit 39 takes in the plurality of low-resolution images from the memory 17, performs the same processing as the feature extraction unit 35 on each low-resolution image, and obtains the vertical and horizontal projection waveforms of each.
[0042] また、 粗検出部 36は、 図 6 [B] に示すように、 縮小画像の中央域の横
射影波形と、 低解像度画像の中央域の横射影波形とをずらしながら差分をと り、 差分の絶対値和が最小となる波形ズレを検出する。 この波形ズレは、 縮 小画像と低解像度画像との横方向の位置ズレに相当する。 [0042] Further, as shown in Fig. 6 [B], the coarse detection unit 36 is arranged in the horizontal direction of the central area of the reduced image. The difference between the projected waveform and the horizontal projected waveform in the center area of the low-resolution image is taken to detect the waveform shift that minimizes the sum of the absolute values of the differences. This waveform shift corresponds to a positional shift in the horizontal direction between the reduced image and the low resolution image.
[0043] このようにして、 粗検出部 3 6は、 縮小画像を位置基準として複数の低解 像度画像の位置ズレ (粗検出結果) をそれぞれ求め、 精密検出部 3 8に出力 する。 In this way, the coarse detection unit 36 obtains the positional deviations (coarse detection results) of the plurality of low resolution images using the reduced image as a position reference, and outputs it to the fine detection unit 38.
[0044] ステップ S 1 1 :特徴量抽出部 3 3は、 ゲイン補正された高解像画像を取 り込み、 エッジ抽出フィルタを用いて、 縦エッジ成分 g yと横エッジ成分 g xを 抽出する。 [0044] Step S 1 1: The feature amount extraction unit 3 3 takes in the high-resolution image that has been gain-corrected, and extracts the vertical edge component g y and the horizontal edge component g x using an edge extraction filter. .
[0045] なお、 ここでのエッジ抽出フィルタは、 低解像度画像の読み出し方式に応 じて、 下記のように切り替えることが好ましい。 Note that the edge extraction filter here is preferably switched as follows according to the low-resolution image readout method.
■低解像度画像が画素加算または画素平均によって作成される場合 ■ When a low-resolution image is created by pixel addition or pixel averaging
gy (X, y) = [- f (x, y-4) - f (x, y-3) - f (x, y-2) - f (x, y-1 ) +f (x, y+4) +f (x, y+5) +f (x, y+ 6) +f (x, y+7) ] /4 g y (X, y) = (-f (x, y-4)-f (x, y-3)-f (x, y-2)-f (x, y-1) + f (x, y + 4) + f (x, y + 5) + f (x, y + 6) + f (x, y + 7)] / 4
gx (x, y) = [- f (x- 4, y) - f (x-3, y) - f (x-2, y) - f (x-1 , y) +f (x+4, y) +f (x+5, y) +f (x+6, y) +f (x+7, y) ] /4 g x (x, y) = (-f (x- 4, y)-f (x-3, y)-f (x-2, y)-f (x-1, y) + f (x + 4, y) + f (x + 5, y) + f (x + 6, y) + f (x + 7, y)] / 4
•低解像度画像が画素間引きによって作成される場合 • When a low-resolution image is created by pixel decimation
gy (X, y) =-f (χ, y-4) +f (x, y+4) g y (X, y) = -f (χ, y-4) + f (x, y + 4)
gx (x, y) =-f (x— 4, y) +f (x+4, y) g x (x, y) = -f (x— 4, y) + f (x + 4, y)
なお、 ノイズの影響を軽減するため、 特徴量抽出部 3 3は、 所定の微小振 幅に収まる縦エッジ成分 g yと横エッジ成分 g xについては、 ゼロに置き換える ことが好ましい。 In order to reduce the influence of noise, it is preferable that the feature quantity extraction unit 33 replaces the vertical edge component g y and the horizontal edge component g x that fall within a predetermined minute amplitude with zero.
[0046] Next, the feature extraction unit 33 calculates the vertical projection waveform by cumulatively adding the vertical edge components gy along each horizontal row, and the horizontal projection waveform by cumulatively adding the horizontal edge components gx along each vertical column.
[0047] The phase division unit 37 sub-samples the vertical projection waveform of the high-resolution image every four pixels. By shifting the sub-sampling phase, it generates four sets of sampling information whose phases are mutually shifted, as shown in Fig. 7.
[0048] Similarly, the phase division unit 37 sub-samples the horizontal projection waveform of the high-resolution image every four pixels, again generating four sets of mutually phase-shifted sampling information by shifting the sub-sampling phase.
[0049] Step S12: Starting from the coarse positional-shift result of the coarse detection unit 36, the precision detection unit 38 takes differences while sliding the sampling information of the vertical projection waveform obtained from the high-resolution image against the vertical projection waveform of a low-resolution image, and detects the waveform shift that minimizes the sum of the absolute differences.
[0050] The precision detection unit 38 performs this waveform-shift detection for each of the four sets of sampling information, thereby finding the waveform shift at which the pattern features (here, the waveforms) best match. This waveform shift corresponds to a horizontal positional shift in units smaller than the pixel interval of the low-resolution image.
[0051] Furthermore, the precision detection unit 38 similarly detects the vertical positional shift in units smaller than the pixel interval of the low-resolution image (for example, in units of the pixel interval of the high-resolution image).
[0052] In this way, the precision detection unit 38 obtains the positional shift (precision detection result) of each of the plurality of low-resolution images, with the high-resolution image as the position reference, and outputs them to the image compositing unit 34.
[0053] Step S13: The image composition unit 34 applies high-pass filtering to each low-resolution image and sums the absolute values of the filter output to obtain the amount of high-frequency components. The low-resolution images are then classified according to this amount.

[0054] As shown in Fig. 8, the images may be ranked in descending order of high-frequency content, and the images from the top down to a predetermined rank selected for use in composition. Alternatively, the images whose high-frequency amount exceeds a predetermined threshold may be selected for composition.
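The measurement and selection of [0053]-[0054] can be sketched as below. The difference-based measure stands in for the actual high-pass filter, and all names and sample images are illustrative.

```python
def high_freq_amount(image):
    """Crude high-pass measure: sum of absolute horizontal and
    vertical pixel differences over a 2-D list of rows."""
    total = 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if x + 1 < len(row):
                total += abs(row[x + 1] - v)
            if y + 1 < len(image):
                total += abs(image[y + 1][x] - v)
    return total

def select_sharpest(images, keep=2):
    """Rank images by high-frequency content and keep the top `keep`."""
    return sorted(images, key=high_freq_amount, reverse=True)[:keep]

sharp  = [[0, 9, 0], [9, 0, 9], [0, 9, 0]]   # lots of detail
blurry = [[4, 5, 4], [5, 4, 5], [4, 5, 4]]   # weak detail
flat   = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]   # no detail at all
chosen = select_sharpest([blurry, sharp, flat], keep=2)
```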
[0055] Such selection makes it possible to appropriately exclude from the composition process images whose high-frequency content falls short of the criterion, for example because of large blur. As a result, the image quality after composition can be reliably improved.
[0056] Step S14: The image composition unit 34 applies high-pass filtering to the high-resolution image and, from the filter output, segments the high-resolution image into edge regions and flat regions.

[0057] Step S15: In the edge regions found in step S14, the fewer high-frequency components an image has, the smaller its composition ratio is made. In the flat regions found in step S14, by contrast, the composition ratio of images with few high-frequency components is raised above that used at the edges. Regions that are neither edge nor flat are given a composition ratio intermediate between the two.
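One way to realize the region-dependent rule of step S15 is a weighting function like the sketch below. The numeric coefficients are invented for illustration; the embodiment only requires that low-sharpness frames are penalized most strongly in edge regions and least in flat regions.

```python
def blend_weight(region, sharpness):
    """Illustrative composition weight for one frame in one region.

    `sharpness` in [0, 1] is the frame's relative high-frequency
    content; `region` is "edge", "flat", or anything else
    (intermediate)."""
    if region == "edge":
        return sharpness                # blurry frames barely contribute
    if region == "flat":
        return 0.5 + 0.5 * sharpness    # blurry frames still average noise down
    return 0.25 + 0.75 * sharpness      # in between

w_edge = blend_weight("edge", 0.2)
w_mid = blend_weight("other", 0.2)
w_flat = blend_weight("flat", 0.2)
```

For a blurry frame (sharpness 0.2), the weight rises monotonically from edge to intermediate to flat regions, matching the ordering the text prescribes.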
[0058] Step S16: The image composition unit 34 enlarges each of the low-resolution images by a factor of 4 × 4. No pixel interpolation is performed at this point, so each enlarged image has gaps between its pixels.
[0059] Next, based on the precision detection results obtained by the precision detection unit 38, the image composition unit 34 displaces the pixel positions of each enlarged low-resolution image and performs mapping (rearrangement) as shown in Fig. 9.

[0060] In this way, a rearranged image having roughly the same numbers of vertical and horizontal pixels as the high-resolution image is obtained.
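The interpolation-free 4× enlargement of step S16 amounts to scattering each low-resolution pixel onto a four-times-finer grid and leaving the gaps empty until mapping fills them. A one-dimensional sketch (function name hypothetical):

```python
def scatter_expand(row, factor=4):
    """Place each low-resolution pixel on a grid `factor` times finer
    without interpolating; gaps stay None until mapping fills them."""
    out = [None] * (len(row) * factor)
    for i, value in enumerate(row):
        out[i * factor] = value
    return out

expanded = scatter_expand([7, 8])
```

The sub-pixel shift detected in step S12 then determines where, on this finer grid, each frame's samples are actually placed.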
[0061] Step S17: The rearranged image after mapping contains unfilled gaps, pixels displaced from the regular pixel positions, and overlapping pixels.

[0062] For each regular pixel position of the rearranged image, the image composition unit 34 picks up the neighboring pixels and takes a weighted average of their signal components according to the composition ratios set in step S15. This weighted average becomes the signal component (luminance, color difference, and so on) at that regular pixel position. Through this composition, a rearranged image with the same numbers of vertical and horizontal pixels as the high-resolution image is obtained.
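The per-grid-position weighted average of [0062] might look like the following sketch, where each mapped sample carries the composition weight assigned in step S15. The neighborhood radius and the (x, y, value, weight) sample layout are assumptions for illustration.

```python
def fuse_at(position, samples, radius=1.0):
    """Weighted average of scattered mapped samples near a regular grid
    position. Each sample is a tuple (x, y, value, weight)."""
    px, py = position
    num = den = 0.0
    for x, y, value, weight in samples:
        if abs(x - px) <= radius and abs(y - py) <= radius:
            num += weight * value
            den += weight
    return num / den if den else None   # None: gap left unfilled here

samples = [(0.1, 0.0, 10.0, 1.0), (-0.2, 0.1, 14.0, 1.0), (3.0, 3.0, 99.0, 1.0)]
fused = fuse_at((0, 0), samples)
```

A `None` result corresponds to a gap that no mapped sample fell near, which is one of the irregularities [0061] mentions.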
[0063] Instead of the weighted average above, the rearranged image may be composed by a median operation over the signal components of the neighboring pixels.
[0064] Step S18: The image composition unit 34 filters the luminance component of the high-resolution image as follows.

[0065] First, the image composition unit 34 extracts the luminance component from the gain-corrected high-resolution image and applies a filter that combines median processing with a Gaussian filter. For example, with the filter size set to 3 × 3 pixels, the three middle values are extracted from the nine pixels in the window and the Gaussian filter is applied to them. This processing reduces in advance the noise contained in the underexposed luminance component.
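The combined median/Gaussian filter of [0065] can be sketched for a single 3 × 3 window as below; the weights are an illustrative stand-in for actual Gaussian coefficients.

```python
def med_gauss_3x3(window):
    """window: the nine pixel values of a 3x3 neighborhood.

    Keep only the three middle-ranked values (median-style outlier
    rejection), then take a Gaussian-like weighted mean of them."""
    mid = sorted(window)[3:6]          # drop the 3 lowest and 3 highest
    weights = (1.0, 2.0, 1.0)          # illustrative Gaussian-like weights
    return sum(w * v for w, v in zip(weights, mid)) / sum(weights)

# Six impulse-noise outliers cannot reach the output.
result = med_gauss_3x3([0, 0, 100, 10, 11, 12, 100, 100, 0])
```

Because the six extreme values are discarded before averaging, impulse noise in the underexposed luminance is suppressed while the remaining values are still smoothed.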
[0066] Step S19: Next, a composition ratio is set between the luminance component A of the filtered high-resolution image and the luminance component B of the rearranged image.

[0067] Here it is preferable to set the composition ratio according to the following rules:

(1) The larger the signal difference between luminance component A of the high-resolution image and luminance component B of the rearranged image at a given location, the lower the composition ratio of the rearranged image is made locally.

(2) The larger the local signal change of luminance component A of the high-resolution image, the lower the composition ratio of the rearranged image is made locally.

(3) The smaller the time interval between the high-resolution image and the low-resolution images, the higher the composition ratio of luminance component B of the rearranged image is made overall.

(4) The smaller the positional shift between the high-resolution image and the low-resolution images, the higher the composition ratio of luminance component B of the rearranged image is made overall.
[0068] For the actual calculation of the luminance component g(i, j) of the composite image, it is preferable to use a Gaussian filter of the following form.

[0069] [Equation 1]

g(i,j) = \frac{\sum_{k=1}^{m}\sum_{l=1}^{m} G\bigl(k,l,A(i,j)\bigr)\, B\bigl(i-\tfrac{m-1}{2}+k-1,\; j-\tfrac{m-1}{2}+l-1\bigr) + A(i,j)}{\sum_{k=1}^{m}\sum_{l=1}^{m} G\bigl(k,l,A(i,j)\bigr) + 1}

G(k,l,p) = \exp\!\left(-\frac{\bigl(B\bigl(i-\tfrac{m-1}{2}+k-1,\; j-\tfrac{m-1}{2}+l-1\bigr) - p\bigr)^{2}}{2\sigma^{2}}\right)

This equation performs a two-stage process. First, a Gaussian filter (smoothing) of size about m = 5 is applied to luminance component B of the rearranged image. The smoothed result of luminance component B is then weight-combined, pixel by pixel, with luminance component A of the high-resolution image.
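A scalar sketch of Equation 1 for one output pixel: the high-resolution value A(i, j) always contributes with weight 1, while each rearranged-image sample in the m × m window is weighted by a Gaussian of its difference from A(i, j). Variable names follow the equation; passing the window as a flat list is a simplification for illustration.

```python
import math

def fuse_pixel(a, b_patch, sigma):
    """One output pixel g(i, j) of Equation 1.

    a       : A(i, j), the high-resolution luminance value (weight 1).
    b_patch : the m*m rearranged-image samples around (i, j), flattened.
    sigma   : the Gaussian width controlling how quickly the weight of
              dissimilar B samples falls off."""
    num = a      # A(i, j) enters the numerator with weight 1
    den = 1.0    # ... and contributes 1 to the denominator
    for b in b_patch:
        w = math.exp(-((b - a) ** 2) / (2.0 * sigma ** 2))
        num += w * b
        den += w
    return num / den
```

When the rearranged samples agree with A the result is a smoothed blend; when they differ strongly (likely misalignment) their weights collapse and the output falls back to A, exactly the behavior described in [0070].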
[0070] With this filter, the larger the signal difference between luminance components A and B, the lower the composition ratio G(i, j, p) of luminance component B becomes. As a result, where the patterns of the high-resolution image and the rearranged image differ greatly, luminance component A of the high-resolution image is locally given priority.

[0071] It is preferable not to vary the composition ratio G(i, j, p) with the reference distance of a pixel. That way, any luminance component B within the filter size m can be reflected in luminance component A even when its reference distance is large. As a result, alignment errors in the rearranged image can be tolerated to some extent.

[0072] The value σ in the equation adjusts the magnitude of the composition ratio and how far it is lowered. It is preferable to set σ locally smaller where the variance of the 3 × 3 pixels around luminance component A is larger. Making σ locally small in this way lowers the composition ratio of the rearranged image near edges of the high-resolution image, which suppresses effects on image structure (smoothing or disturbance of edges).

[0073] It is also preferable to set σ larger as the time interval and/or the positional shift between the high-resolution image and a low-resolution image becomes smaller. Varying σ in this way raises the overall composition ratio of the rearranged image the more similar the patterns of the two images are estimated to be. As a result, the information of rearranged (low-resolution) images with similar patterns is preferentially reflected in the composite image.
[0074] Step S20: The color difference components of the rearranged image generated in step S17 are combined with the luminance component of the composite image generated in step S19 to produce a high-resolution color image that reflects the information of the low-resolution images.

[0075] This color image is stored on the recording medium 22a via the image compression unit 24, the recording unit 22, and so on.
[0076] [Effects of This Embodiment, etc.]
In this embodiment, the through images (low-resolution images), which would otherwise be discarded once monitor display is finished, are put to effective use in improving the quality of the still image (high-resolution image). This effective use of through images raises the imaging performance of the electronic camera 10.
[0077] Furthermore, this embodiment combines low-resolution images captured at short time intervals. The differences in pattern between the images are therefore small to begin with, and a good composition result can be obtained.

[0078] In addition, the low-resolution images are aligned using the high-pixel-density high-resolution image as the position reference. The patterns of the low-resolution images are therefore aligned with high accuracy, yielding an even better composition result.

[0079] This embodiment also generates, from the high-resolution image, multiple sets of sampling information whose sampling phases are mutually offset. By detecting the positional shift between each set of sampling information and a low-resolution image, shifts can be detected in units smaller than the pixel interval of the low-resolution image. The alignment accuracy of the low-resolution images is thus improved further, and a still better composition result is obtained.
[0080] Furthermore, by aligning and mapping the luminance components (the signal components with high visual sensitivity) of the low-resolution images, a luminance component of higher resolution than the low-resolution images is obtained.

[0081] Likewise, by aligning and mapping the color difference components (the signal components with low visual sensitivity) of the low-resolution images, color difference components of higher resolution than the low-resolution images can also be obtained.

[0082] In addition, the high-resolution luminance component created by this alignment (the rearranged image) is weight-combined with the luminance component of the original high-resolution image. This weighted combination suppresses the uncorrelated luminance noise between the two images and consequently improves the S/N of the underexposed high-resolution image (see step S7).
[0083] When the signal difference between the high-resolution image and the rearranged image is significantly large, the composition ratio of the rearranged image is lowered locally. Consequently, if an alignment error occurs in the rearranged image, the signal difference between the two images opens up significantly and the composition ratio of the rearranged image drops, so the alignment error is hardly reflected in the composition result.

[0084] Where the local signal change of the high-resolution image is large, the composition ratio of the rearranged image is likewise lowered locally. The image structure of the high-resolution image is therefore given priority at edge portions and the like, avoiding adverse effects such as edges turning into multiple lines.

[0085] Moreover, the smaller the time interval and/or positional shift between the high-resolution image and the low-resolution images, the more the composition ratio of the rearranged image is raised overall. When these conditions hold, the patterns of the high-resolution and low-resolution images can be estimated to be very similar, so raising the composition ratio of the rearranged image further improves the image S/N without risk of breaking down the pattern.
[0086] In this embodiment, as shown in Fig. 8, the low-resolution images are ranked according to the amount of high-frequency spatial-frequency components. Images with few high-frequency components, for example those with conspicuous blur, are excluded from the images used for composition.

[0087] Even among the images selected for composition, those judged to have relatively few high-frequency components have their composition ratio at pattern edges reduced. Alignment and composition at edge portions therefore favors the images with little camera shake or subject blur, which suppresses adverse effects such as dulled edges in the rearranged image.

[0088] For images judged to have few high-frequency components, the composition ratio in flat portions is made larger than at the edges. In the flat portions, the multiple images are thus aligned and combined more evenly, which raises the image S/N after alignment and composition.
[0089] <Description of the Second Embodiment>
Fig. 10 is a flowchart explaining the operation of the electronic camera 10 in the second embodiment, and Fig. 11 is a schematic diagram showing its image composition processing.

[0090] The second embodiment is a modification of the first embodiment in which the electronic camera combines multiple images of the same resolution. The configuration of the electronic camera is the same as that of the first embodiment (Fig. 1), so a duplicate description is omitted.

[0091] Steps S101 to S103 in Fig. 10 correspond to steps S1 to S3 in Fig. 3, respectively, so their description is also omitted. The operation of the electronic camera is described below, following the step numbers shown in Fig. 10.
[0092] Step S104: The microprocessor 19 determines whether the exposure time set for the high-resolution image in step S103 is at or below the allowable upper limit at which blur remains unnoticeable. The allowable upper limit is set in the same way as in step S5 above.

[0093] If the exposure time setting is at or below the allowable upper limit, the microprocessor 19 proceeds to step S105. If the exposure time setting exceeds the allowable upper limit, the microprocessor 19 proceeds to step S106.

[0094] Step S105: The imaging control unit 14 shutter-controls the image sensor 11 according to the set exposure time and captures a high-resolution image (still image). This step corresponds to step S6 above, so a duplicate description is omitted.

[0095] Step S106: If, on the other hand, the exposure time setting is judged to exceed the allowable blur limit, the microprocessor 19 switches the operating state of the electronic camera to a continuous shooting mode and limits the exposure time of each frame in that mode to the blur-free allowable upper limit or below.
[0096] In this continuous shooting mode, the imaging control unit 14 drives the image sensor 11 in the high-resolution readout mode with the shortened exposure time, so that multiple high-resolution images are captured in succession by the image sensor 11. Each of these images has a low signal level because of underexposure, but is unlikely to be blurred. The high-resolution images are each temporarily recorded in the memory 17.

[0097] In the continuous shooting mode, the shooting operation proceeds, for example, along the lines of (1) or (2) below.
[0098] (1) The imaging control unit 14 starts continuous shooting of high-resolution images at the moment the camera switches to the continuous shooting mode, and accumulates a certain period's worth of high-resolution images in the memory 17. During this, the microprocessor 19 deletes high-resolution images exceeding a predetermined frame count from the memory 17, oldest first.

[0099] When the microprocessor 19 receives a user instruction during continuous shooting (such as a press of the release button), it designates the high-resolution image in the memory 17 corresponding to the timing of that instruction as the reference image. The microprocessor 19 also designates a predetermined number of high-resolution images before and after the reference image on the time axis as the targets of the composition processing described later. In the second embodiment, a high-resolution image designated as a target of the composition processing is called an image to be combined. The reference image and the images to be combined are held in the memory 17 at least until the composition processing is completed.
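The rolling frame buffer of [0098]-[0099] behaves like a fixed-length deque: new frames push out the oldest once the predetermined count is exceeded, and the user's release timing then selects the reference frame plus its temporal neighbors. The frame count, the release timing, and the neighbor selection below are illustrative values.

```python
from collections import deque

frame_buffer = deque(maxlen=4)     # predetermined frame count
for frame_id in range(7):          # frames arriving during continuous shooting
    frame_buffer.append(frame_id)  # oldest frames drop out automatically

frames = list(frame_buffer)        # frames 0..2 have been deleted
reference = frames[2]              # frame matching the release timing
targets = [f for f in frames if f != reference]   # images to be combined
```

`deque(maxlen=...)` gives exactly the "delete the oldest beyond a predetermined number" behavior with no explicit bookkeeping.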
[0100] (2) The microprocessor 19 causes the imaging control unit 14 to start continuous shooting in response to the user's start instruction (such as pressing the release button). The microprocessor 19 then ends the continuous shooting when it receives the user's end instruction (such as releasing the release button) or when a predetermined number of frames has been shot.

[0101] After that, the microprocessor 19 displays the captured images on the monitor display unit 30 and has the user select the reference image. Among the continuously shot high-resolution images, the image designated by the user becomes the recorded image, and the images other than the reference image become the images to be combined.

[0102] In case (2), the microprocessor 19 may instead designate the image corresponding to the timing of the start instruction (the first high-resolution image shot) as the reference image. The step of having the user select the reference image can then be omitted.
[0103] Step S107: By the same processing as step S10 of the first embodiment, the image processing device 25 detects the image shift of each image to be combined (S106) with respect to the reference image (S106). This shift detection is performed for all of the images to be combined.

[0104] Step S108: The image composition unit 34 of the image processing device 25 applies high-pass filtering to the images to be combined (S106) and sums the absolute values of the filter output to obtain the amount of high-frequency components. It then classifies the images to be combined according to that amount.

[0105] At this point, as in the first embodiment, the image composition unit 34 may rank the images in descending order of high-frequency content and keep only those from the top down to a predetermined rank, thereby narrowing down the images actually used for composition. Alternatively, it may keep only the images whose high-frequency amount exceeds a predetermined threshold.

[0106] Step S109: The image composition unit 34 applies high-pass filtering to the reference image and, from the filter output, segments the reference image into edge regions and flat regions. In the edge regions of the reference image, the fewer high-frequency components an image to be combined has, the smaller its composition ratio is made. In the flat regions of the reference image, the composition ratio of images with few high-frequency components is raised above that used at the edges. Regions that are neither edge nor flat are given a composition ratio intermediate between the two.
[0107] Step S110: Based on the shift detection results obtained in S107, the image composition unit 34 displaces the pixel positions of each image to be combined, using the reference image as the position reference, and performs mapping (rearrangement) (see Fig. 11). In the second embodiment, an image to be combined after mapping is called a rearranged image.

[0108] In the second embodiment, the reference image and each image to be combined have the same resolution (pixel count). As shown in Fig. 11, the group of rearranged images is therefore treated as a three-dimensional image having a time axis t in addition to the x and y axes that indicate position within an image.
[0109] Step S111: The image composition unit 34 combines the reference image (S106) with the rearranged images (S110) to generate the final composite image.

[0110] Here, the image composition unit 34 combines each rearranged image with the reference image in turn, repeating the composition process as many times as there are rearranged images. Alternatively, the image composition unit 34 may first combine the rearranged images with one another, for example by a median operation, and then combine the result with the reference image.
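The alternative mentioned in [0110], pre-combining the rearranged frames with a median before blending with the reference, can be sketched per pixel as follows. The 50% blend ratio is only an example; in the embodiment the ratio comes from steps S109 and S111.

```python
def median(values):
    """Median of a list of numbers (mean of the two middle values
    when the count is even)."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def combine(reference_pixel, rearranged_pixels, ratio=0.5):
    """Pre-combine the rearranged frames with a median, then blend the
    result with the reference frame at the given ratio."""
    m = median(rearranged_pixels)
    return (1 - ratio) * reference_pixel + ratio * m

fused = combine(10, [8, 9, 50], ratio=0.5)
```

The median step is what makes the pre-combination robust: a single outlier frame (here the value 50, e.g. a misaligned sample) does not pull the result.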
[0111] The composition process of S111 proceeds as follows. First, the image composition unit 34 reads the initial composition ratios obtained in S109. It then pays attention to the gradation values of the reference image and the rearranged image and further adjusts their composition ratio.

[0112] For example, when the difference in gradation value between the pixel of interest in the reference image and the pixel of interest in the rearranged image is at or above a threshold (a large gradation difference), the image composition unit 34 lowers the composition ratio locally at that pixel. Conversely, when the difference is below the threshold (a small gradation difference), it raises the composition ratio locally at that pixel. This greatly suppresses the effects of alignment errors and the like between the images.
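The gradation-difference adjustment of [0112] can be sketched as below. The threshold and the scaling factors are invented for illustration; the embodiment only specifies that the local ratio drops when the difference is at or above the threshold and rises when it is below.

```python
def adjust_ratio(base_ratio, ref_value, rearranged_value, threshold=20):
    """Adjust the blend ratio of the rearranged image at one pixel.

    Large disagreement with the reference image (likely misalignment)
    halves the ratio; close agreement boosts it (capped at 1.0).
    The factors 0.5 and 1.2 are illustrative."""
    if abs(ref_value - rearranged_value) >= threshold:
        return base_ratio * 0.5
    return min(1.0, base_ratio * 1.2)
```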
[01 13] また、 画像合成部 3 4は、 基準画像上での局所的な信号変化に応じて合成 比率を調節する。 例えば、 画像合成部 3 4は、 基準画像上での局所的な信号 変化が大きい箇所ほど、 再配置画像との合成比率を局所的に下げる。 また、 画像合成部 3 4は、 基準画像上での局所的な信号変化が小さい箇所について は再配置画像との合成比率を局所的に上げる。 これにより、 合成した画像上 でエッジが多重線化するなどの弊害を抑制できる。
[01 14] 第 2に、 画像合成部 3 4は、 上記の合成比率に従って、 基準画像と再配置 画像との対応する画素をそれぞれ加算合成していく。 この S 1 1 1では、 画 像合成部 3 4は、 各画素の輝度成分および色差成分をそれぞれ合成処理で求 めるものとする。 これにより、 複数の被合成画像の情報を反映した高解像度 の力ラー画像が生成されることとなる。 [0113] Further, the image composition unit 34 adjusts the composition ratio in accordance with local signal changes on the reference image. For example, the image synthesizing unit 34 locally lowers the synthesis ratio with the rearranged image at a location where the local signal change on the reference image is larger. In addition, the image composition unit 34 increases locally the composition ratio with the rearranged image at a location where the local signal change is small on the reference image. As a result, it is possible to suppress adverse effects such as multi-line edges on the synthesized image. [0114] Second, the image composition unit 34 adds and composes the corresponding pixels of the reference image and the rearranged image according to the composition ratio. In S 1 1 1, it is assumed that the image composition unit 34 obtains the luminance component and the color difference component of each pixel by the composition processing. As a result, a high-resolution power error image reflecting the information of a plurality of synthesized images is generated.
[0115] Step S112: The microprocessor 19 records the final composite image (the one generated in S111) on the recording medium 22a via the image compression unit 24 and the recording unit 22. This concludes the description of FIG. 10.
[0116] The configuration of the second embodiment also provides the same kinds of effects as the first embodiment. In particular, in the second embodiment the rearranged image has high resolution and carries a large amount of information, so an even higher-definition color image can be obtained.
[0117] 《Supplementary Items of the Embodiments》
(1) In Japanese Patent Application No. 2005-345715, the present inventor has disclosed a procedure for further speeding up positional-deviation detection. The positional-deviation detection of each of the embodiments above may be accelerated according to that procedure.
[0118] (2) In step S10, the absolute positional deviation between a reduced version of the high-resolution image and each low-resolution image is coarsely detected. However, the present invention is not limited to this. The relative positional deviations among the plurality of low-resolution images may instead be coarsely detected. From this relative coarse-detection result and the precise detection result of at least one positional deviation, the precision detection unit 38 can coarsely determine the remaining absolute positional deviations. By starting its search from these absolute coarse results, the precision detection unit 38 can quickly detect the precise positional deviations.
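The bookkeeping described in note (2) can be sketched as follows, with hypothetical function and variable names: one precisely detected absolute positional deviation, combined with the relative coarse deviations among the low-resolution frames, yields coarse absolute deviations for the remaining frames, which then serve as starting points for the precise search.

```python
def infer_absolute_offsets(rel_to_first, precise_abs_first):
    """Coarsely infer absolute offsets of all frames from one precise one.

    rel_to_first:      {frame_id: (dx, dy)} coarse offsets relative to frame 0
    precise_abs_first: (dx, dy) precisely detected absolute offset of frame 0
    """
    px, py = precise_abs_first
    # absolute(frame i) ~ precise absolute(frame 0) + relative(frame i -> 0)
    return {i: (px + dx, py + dy) for i, (dx, dy) in rel_to_first.items()}
```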
[0119] (3) In the first and second embodiments described above, the positional deviation between images is detected by comparing projection waveforms. However, the present invention is not limited to this. For example, the positional deviation may be detected by a spatial comparison of the pixel arrays of the two images.
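As a generic sketch of the projection-waveform comparison in note (3) (not the embodiments' exact detector), each image is collapsed into row-sum and column-sum profiles, and each 1-D profile pair is compared over a small range of candidate shifts; `max_shift` is a hypothetical search radius.

```python
import numpy as np

def shift_from_projections(a, b, max_shift=8):
    """Estimate the (dy, dx) shift aligning image b's projection
    waveforms (row/column sums) to those of image a."""
    def best_shift(pa, pb):
        best, best_cost = 0, np.inf
        for s in range(-max_shift, max_shift + 1):
            # compare the overlapping parts of the two 1-D profiles
            if s >= 0:
                diff = pa[s:] - pb[:len(pb) - s] if s else pa - pb
            else:
                diff = pa[:s] - pb[-s:]
            cost = np.mean(np.abs(diff))
            if cost < best_cost:
                best, best_cost = s, cost
        return best

    rows_a, rows_b = a.sum(axis=1), b.sum(axis=1)   # vertical projection
    cols_a, cols_b = a.sum(axis=0), b.sum(axis=0)   # horizontal projection
    return best_shift(rows_a, rows_b), best_shift(cols_a, cols_b)
```

The 1-D searches cost far less than a full 2-D block match, which is why projection waveforms are attractive for coarse detection.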
[0120] (4) The first and second embodiments describe the case where the image processing apparatus is built into an electronic camera. However, the present invention is not limited to this. An image processing program implementing the image processing described above in program code may be created. By executing this program on a computer, the information of the low-resolution images can be exploited to produce a high-quality, high-resolution composite result on the computer. Likewise, when the image processing of the second embodiment is executed on a computer, the other continuously shot images can be exploited to produce a high-quality, high-resolution composite result on the computer.
[0121] When the image processing of the second embodiment is executed on a computer, the user can designate any of the continuously shot images as the reference image.
[0122] (5) In the first embodiment described above, the plurality of low-resolution images are acquired before the high-resolution image is captured. However, the present invention is not limited to this. The low-resolution images may be acquired after the high-resolution image is captured, or partly before and partly after it.
[0123] (6) In the first embodiment described above, the color-difference components are obtained from the rearranged image and the luminance component from the composite image. However, the present invention is not limited to this. Both the luminance and color-difference components may be obtained from the rearranged image, or both may be obtained from the composite image. In the second embodiment as well, as in the first embodiment, the luminance component may be obtained from the reference image and the rearranged image while the color-difference components are obtained from the rearranged image.
[0124] (7) The first and second embodiments describe the handling of luminance/color-difference image signals. However, the present invention is not limited to this. In general, the present invention may be applied to RGB, Lab, or other image signals. For an RGB image signal, the signal component with high visual sensitivity is the G component, and the remaining components are the R and B components. For a Lab image signal, the component with high visual sensitivity is the L component, and the remaining components are the a and b components.
[0125] (8) In the first and second embodiments described above, the positional deviation of the pattern is detected by image processing. However, the present invention is not limited to this. For example, an acceleration sensor or the like may be mounted in the camera to determine the movement (vibration) of the camera's imaging field, and the positional deviation of the pattern across the plurality of images may be detected from that movement.
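A common sensor-based sketch for note (8), with hypothetical names and the small-angle approximation: integrating an angular-rate signal gives the camera's rotation angle, which maps to an image-plane displacement through the focal length expressed in pixels. The patent does not specify this conversion; it is one standard way such sensor data could be used.

```python
import math

def pixel_shift_from_gyro(rates_rad_s, dt, focal_length_px):
    """Approximate image shift (in pixels) from angular-rate samples.

    rates_rad_s:     angular velocity samples [rad/s] about one axis
    dt:              sampling interval [s]
    focal_length_px: focal length expressed in pixels (hypothetical input)
    """
    angle = sum(rates_rad_s) * dt              # integrate rate -> angle
    return focal_length_px * math.tan(angle)   # ~ f * angle for small angles
```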
[0126] (9) The first and second embodiments describe examples in which the image processing apparatus is implemented in a general electronic camera. However, the image processing apparatus of the present invention can of course also be applied in combination with, for example, a surveillance camera or a drive recorder system equipped with an in-vehicle camera for recording accident images.
[0127] The present invention may be embodied in various other forms without departing from its spirit or essential characteristics. The embodiments described above are therefore merely illustrative in every respect and must not be interpreted restrictively. The scope of the present invention is defined by the claims and is in no way bound by the text of the specification. Furthermore, all modifications and changes within the scope of equivalents of the claims fall within the scope of the present invention.
Industrial Applicability
[0128] As described above, the present invention is a technique applicable to image processing apparatuses and the like.
Claims
[1] An image processing apparatus comprising:

an image input unit that captures a plurality of images of the same subject;

a positional deviation detection unit that detects a positional deviation of a pattern between the plurality of images; and

an image composition unit that performs pattern alignment on the plurality of images based on the positional deviation and combines them,

wherein the image composition unit determines high-frequency components of the spatial frequency for the plurality of images, and reduces the composition ratio of the alignment composition at edge portions of the pattern for an image determined to have fewer high-frequency components.
[2] The image processing apparatus according to claim 1, wherein, for an image determined to have few high-frequency components, the image composition unit makes the composition ratio of the alignment composition at flat portions of the pattern higher than the composition ratio at the edge portions.
[3] The image processing apparatus according to claim 1 or claim 2, wherein the image composition unit selects the images to be used for composition according to the amount of the high-frequency components.
[4] An electronic camera comprising: the image processing apparatus according to any one of claims 1 to 3; and an imaging unit that continuously shoots a subject to generate a plurality of images, the electronic camera having a function of aligning and combining the plurality of images with the image processing apparatus.
[5] An image processing program for causing a computer to function as the image processing apparatus according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008534239A JP5012805B2 (en) | 2006-09-14 | 2007-09-07 | Image processing apparatus, electronic camera, and image processing program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006-249019 | 2006-09-14 | ||
JP2006249019 | 2006-09-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2008032441A1 (en) | 2008-03-20 |
Family
ID=39183510
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/000978 WO2008032441A1 (en) | 2006-09-14 | 2007-09-07 | Image processing device, electronic camera and image processing program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP5012805B2 (en) |
WO (1) | WO2008032441A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011530208A (en) * | 2008-08-01 | 2011-12-15 | OmniVision Technologies, Inc. | Improved image formation using different resolution images |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005130443A (en) * | 2003-09-30 | 2005-05-19 | Seiko Epson Corp | Generation of high resolution image from plurality of low resolution images |
JP2005352720A (en) * | 2004-06-10 | 2005-12-22 | Olympus Corp | Imaging apparatus and method for attaining high resolution of image |
JP2006203717A (en) * | 2005-01-24 | 2006-08-03 | Seiko Epson Corp | Formation of high resolution image using a plurality of low resolution images |
JP2006222493A (en) * | 2005-02-08 | 2006-08-24 | Seiko Epson Corp | Creation of high resolution image employing a plurality of low resolution images |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4371457B2 (en) * | 1999-02-18 | 2009-11-25 | キヤノン株式会社 | Image processing apparatus, method, and computer-readable storage medium |
JP4104937B2 (en) * | 2002-08-28 | 2008-06-18 | 富士フイルム株式会社 | Moving picture composition method, apparatus, and program |
Also Published As
Publication number | Publication date |
---|---|
JPWO2008032441A1 (en) | 2010-01-21 |
JP5012805B2 (en) | 2012-08-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2008032442A1 (en) | Image processing device, electronic camera and image processing program | |
JP5574423B2 (en) | Imaging apparatus, display control method, and program | |
US8749646B2 (en) | Image processing apparatus, imaging apparatus, solid-state imaging device, image processing method and program | |
JP3770271B2 (en) | Image processing device | |
JP4760973B2 (en) | Imaging apparatus and image processing method | |
US9307212B2 (en) | Tone mapping for low-light video frame enhancement | |
US8760526B2 (en) | Information processing apparatus and method for correcting vibration | |
US20100295953A1 (en) | Image processing apparatus and method thereof | |
JP4748230B2 (en) | Imaging apparatus, imaging method, and imaging program | |
JP4821626B2 (en) | Image processing apparatus, electronic camera, and image processing program | |
JP5569357B2 (en) | Image processing apparatus, image processing method, and image processing program | |
US8830359B2 (en) | Image processing apparatus, imaging apparatus, and computer readable medium | |
JP5211589B2 (en) | Image processing apparatus, electronic camera, and image processing program | |
JP2010187250A (en) | Image correction device, image correction program, and image capturing apparatus | |
JP2010263520A (en) | Image capturing apparatus, data generating apparatus, and data structure | |
JP4586707B2 (en) | Image processing apparatus, electronic camera, and image processing program | |
WO2012070440A1 (en) | Image processing device, image processing method, and image processing program | |
JP5055571B2 (en) | Image processing apparatus, electronic camera, and image processing program | |
JP5012805B2 (en) | Image processing apparatus, electronic camera, and image processing program | |
JP5195973B2 (en) | Image processing apparatus, electronic camera, and image processing program | |
JP5831492B2 (en) | Imaging apparatus, display control method, and program | |
JP2010154390A (en) | Imaging device, imaging method, and program | |
JP5301690B2 (en) | Imaging apparatus, method, and program | |
JP6107882B2 (en) | Imaging apparatus, display control method, and program | |
JP5539098B2 (en) | Image processing apparatus, control method therefor, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07805833; Country of ref document: EP; Kind code of ref document: A1 |
 | WWE | Wipo information: entry into national phase | Ref document number: 2008534239; Country of ref document: JP |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 07805833; Country of ref document: EP; Kind code of ref document: A1 |