JP2004219765A - Photographing device and program - Google Patents


Publication number
JP2004219765A
Authority
JP
Japan
Prior art keywords: image, step, images, unit, specific point
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2003007609A
Other languages
Japanese (ja)
Other versions
JP2004219765A5 (en)
JP4418632B2 (en)
Inventor
Koichi Washisu (鷲巣 晃一)
Original Assignee
Canon Inc (キヤノン株式会社)
Application filed by Canon Inc
Priority to JP2003007609A
Priority claimed from US10/756,034 (US7295232B2)
Publication of JP2004219765A
Publication of JP2004219765A5
Application granted; publication of JP4418632B2
Legal status: Active


Abstract

PROBLEM TO BE SOLVED: To provide a photographing device that produces an image free from blur.

SOLUTION: The photographing device obtains an exposure-corrected composite image by combining a plurality of sequentially photographed images. It is provided with: a detecting means 113 that extracts a specific point in each of the plurality of images and detects the displacement of the specific point on another image relative to the specific point on a reference image; a coordinate transformation means 114 that transforms the coordinates of the other image based on the detection result of the detecting means 113; and a combining means 118 that combines the reference image with the other image whose coordinates were transformed by the coordinate transformation means 114. The detecting means 113 divides the area of the reference image into at least two areas and detects the displacement based on a specific point included in one of the areas.

COPYRIGHT: (C)2004, JPO&NCIPI

Description

[0001]
TECHNICAL FIELD OF THE INVENTION
The present invention relates to a photographing apparatus and a program that improve photographing accuracy by correcting blur in a photographed image caused by camera shake.
[0002]
[Prior art]
In current cameras, all operations important to photographing, such as exposure determination and focusing, are automated, so even a person unskilled in camera operation is very unlikely to fail at taking a photograph.
[0003]
In recent years, systems for preventing camera shake have also been added to cameras, so almost no factors remain that cause a photographer to fail at photographing.
[0004]
Here, an image stabilizing system for preventing camera shake will be briefly described.
[0005]
Camera shake at the time of shooting is typically a vibration with a frequency of 1 Hz to 10 Hz. The basic idea for taking a photograph free of image blur even when such shake occurs during exposure is to detect the camera's vibration and, according to the detection result, displace a correction lens in a plane orthogonal to the optical axis (an optical image stabilization system).
[0006]
That is, to take a picture in which image blur does not occur even when the camera shakes, it is necessary first to accurately detect the camera's vibration and second to correct the resulting change in the optical axis.
[0007]
In principle, camera vibration can be detected by mounting in the camera a vibration detection unit that senses acceleration, angular acceleration, angular velocity, angular displacement, or the like with a device such as a laser gyro and performs appropriate arithmetic processing on the result. Image blur is then corrected by driving a shake correction optical device (including a correction lens) that decenters the photographing optical axis based on the shake information from the vibration detection unit.
[0008]
On the other hand, there is also a method in which photographing is repeated a plurality of times with an exposure time short enough not to cause camera shake, and the resulting images are combined while their mutual displacement is corrected, yielding a photographed image (composite image) equivalent to one taken with a long exposure time (see, for example, Patent Document 1).
[0009]
[Patent Document 1]
Patent No. 3110797
[0010]
[Problems to be solved by the invention]
Recent digital cameras have become smaller than compact silver-halide cameras; in particular, cameras with VGA-class image sensors have become small enough to be built into portable electronic devices (for example, cellular phones).
[0011]
Under these circumstances, mounting the above-described optical image stabilization system on such a camera would require further miniaturizing the shake correction optical device and the vibration detection unit.
[0012]
However, since the shake correction optical device must support the correction lens and drive it with high precision, there is a limit to its miniaturization. In addition, most vibration detection units currently in use rely on inertial force, so miniaturizing them lowers the detection sensitivity, and accurate shake correction becomes impossible.
[0013]
In addition, two types of shake act on a camera: angular shake about a predetermined axis, and shift shake that translates the camera parallel to itself. Angular shake can be corrected by the optical image stabilization system, but shift shake cannot be handled by an optical stabilization system that relies on inertial force. Moreover, as cameras become smaller, shift shake tends to increase.
[0014]
On the other hand, another image stabilization system exists in video cameras used for shooting moving images: a motion vector of the screen is detected from the image sensor, and the readout position of the image is changed in accordance with that motion vector, yielding a video without shake.
[0015]
Such a method requires neither the dedicated vibration detection unit nor the correction lens of the optical image stabilization system described above, so the entire product can be made smaller.
[0016]
However, this video camera image stabilization system cannot be easily applied to a digital camera. The reason will be described below.
[0017]
Motion vectors in a video camera are extracted every time an image is read out: for example, if 15 frames are read per second, successive frames are compared to detect the motion vector.
[0018]
However, when photographing a still image with a digital camera, only one exposure of the subject is made, so a motion vector cannot be detected by comparing successive images as in a video camera.
[0019]
For this reason, the image stabilizing system of the video camera cannot be simply adapted to the digital camera.
[0020]
Accordingly, an object of the present invention is to provide a compact image stabilization system for still-image photographing with a digital camera, distinct from both the optical image stabilization system of silver-halide cameras and the image stabilization system of video cameras.
[0021]
[Means for Solving the Problems]
The first invention of the present application is a photographing apparatus that obtains an exposure-corrected composite image by combining a plurality of sequentially photographed images, comprising: detecting means for extracting a specific point in each of the plurality of images and detecting the displacement of the specific point on another image relative to the specific point on a reference image; coordinate conversion means for performing coordinate conversion of the other image based on the detection result of the detecting means; and combining means for combining the reference image with the other image coordinate-converted by the coordinate conversion means, wherein the detecting means divides the reference image into at least two regions and detects the displacement based on a specific point included in one of the regions.
[0022]
The second invention of the present application is a program for obtaining an exposure-corrected composite image by combining a plurality of sequentially photographed images, comprising: a detection step of extracting a specific point in each of the plurality of images and detecting the displacement of the specific point on another image relative to the specific point on a reference image; a coordinate conversion step of performing coordinate conversion of the other image based on the detection result of the detection step; and a combining step of combining the reference image with the other image coordinate-converted in the coordinate conversion step, wherein the detection step divides the reference image into at least two regions and detects the displacement based on a specific point included in one of the regions.
[0023]
BEST MODE FOR CARRYING OUT THE INVENTION
(1st Embodiment)
FIG. 1 shows the configuration of a camera (photographing apparatus) according to a first embodiment of the present invention. A light beam (photographing light) entering through the photographing lens 11 passes through the shutter 12, has its quantity limited by the aperture 13, and forms an image on the imaging unit 15. The imaging unit 15 is a semiconductor image sensor such as a MOS or CCD device.
[0024]
The photographing lens 11 receives driving force from the AF drive motor 14, moves along the optical axis 10, and stops at a predetermined focusing position to adjust focus. The AF drive motor 14 is driven by a drive signal from the focus drive unit 19.
[0025]
The diaphragm 13 has a plurality of diaphragm blades, and these diaphragm blades operate by receiving a driving force from a diaphragm driving unit 17 to change an opening area (aperture diameter) serving as a light passage port. The shutter 12 has a plurality of shutter blades, and these shutter blades open and close an opening serving as a light passage port by receiving a driving force from a shutter driving unit 18. Thereby, the light beam incident on the imaging unit 15 is controlled.
[0026]
The driving of the focus drive unit 19, the aperture drive unit 17, and the shutter drive unit 18 is controlled by the imaging control unit 111.
[0027]
The imaging control unit 111 detects subject brightness (photometry) from the image signal processed by a signal processing unit 112 described later, and determines the aperture value of the aperture 13 and the opening time of the shutter 12 from the photometric result. It also finds the in-focus position from the output of the signal processing unit 112 while driving the focus drive unit 19.
[0028]
The image signal output from the imaging unit 15 is converted into a digital signal by the A/D conversion unit 110 and input to the signal processing unit 112, which forms a color video signal by generating a luminance signal and color signals from the input.
[0029]
The image signal processed by the signal processing unit 112 is output to the display unit 116 to be displayed as a captured image, and is also output to the recording unit 117 to be recorded.
[0030]
The operation described above applies when the subject is bright enough that shake correction is unnecessary. When the subject is dark and the long exposure time creates a risk of camera shake, the photographer turns on the image stabilization system with an operation member (anti-vibration switch, not shown) provided on the camera, and operation switches to the following.
[0031]
First, when the photographer half-presses the release button provided on the camera, the photographing preparation operations (focus adjustment, photometry, and the like) start. The opening time (exposure time) of the shutter 12 and the aperture value of the aperture 13 are determined from the photometric value obtained by photometry; here, suppose the subject is dark, so the aperture 13 is fully open and the exposure time is long.
[0032]
Therefore, this exposure time is divided into a plurality of short exposure times, and photographing is repeated that number of times. Each image obtained with such a short exposure is underexposed, but it is also little affected by camera shake.
[0033]
Then, after photographing, the plurality of images is combined into one image, which improves the exposure.
[0034]
However, when a plurality of images is photographed, even if no individual image is affected by camera shake, the composition (subject position) may shift slightly between images because of camera shake during the continuous shooting. If these images were combined as they are, the composite would be blurred by the amount the composition of each image has shifted.
[0035]
In the present embodiment, the image signals output from the imaging unit 15 for each shot of the continuous sequence are converted into digital signals by the A/D conversion unit 110 as described above and then processed by the signal processing unit 112. The output of the signal processing unit 112 is input both to the imaging control unit 111 and to the shift detection unit 113.
[0036]
The shift detection unit (detecting means) 113 extracts a feature point (specific point) in each captured image and determines its position coordinates within the image.
[0037]
For example, as shown in FIG. 2, consider a photograph in which a person 120a stands in front of a building 121a in frame 119a. If a plurality of images is taken, an image whose composition deviates from frame 119a because of camera shake, as in frame 119b, may result.
[0038]
The shift detection unit 113 uses edge detection to extract, as a feature point, the high-luminance edge 123a of a window 122a in the building 121a near the periphery of frame 119a, and corrects (by coordinate conversion) the difference between feature point 123a and the corresponding feature point extracted in frame 119b.
[0039]
In FIG. 2, the coordinates of frame 119b are transformed so that its feature point 123b is superimposed on feature point 123a of frame 119a, as indicated by arrow 124.
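The coordinate conversion just described can be sketched as a pure translation that moves one frame's feature point onto the reference frame's. This is an illustrative NumPy sketch, not the patent's implementation; the function name and the wrap-around `np.roll` shift are assumptions for demonstration.

```python
import numpy as np

def align_by_feature_point(ref_pt, other_pt, other_img):
    """Translate `other_img` so that its feature point lands on the
    reference image's feature point. A pure-translation sketch of the
    coordinate conversion; np.roll wraps at the edges, which a real
    implementation would trim away instead."""
    dy = ref_pt[0] - other_pt[0]
    dx = ref_pt[1] - other_pt[1]
    return np.roll(other_img, shift=(dy, dx), axis=(0, 1)), (dy, dx)
```

Real hand shake may also contain a rotational component, which a fuller implementation would model with a richer transform.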
[0040]
Here, the reason why the periphery of the shooting screen (frame) is selected as a feature point will be described below.
[0041]
In many shots, the main subject is located near the center of the screen, and it is often a person. If the main subject were selected as the feature point, subject movement would cause problems.
[0042]
In other words, when a plurality of images is taken, not only the photographer's camera shake but also the subject's own movement is superimposed, so the image coordinates would be converted based on the subject's movement.
[0043]
At first glance this seems to give a preferable image, since the coordinates are transformed so that the main subject's composition is correct. In general, however, a person's movement is complicated, and the accuracy of shift detection depends heavily on where the feature point is selected.
[0044]
For example, if the main subject's eye is selected as the feature point, blinking affects the result; if the tip of a hand is selected, the result differs because hands move easily.
[0045]
Even if the image coordinates are converted using one point on the person as the feature point, not all of the person is transformed properly; the coordinate position varies with each shot, and a preferable image cannot be obtained.
[0046]
Therefore, as in the present embodiment, it is preferable to select as the feature point a still subject, such as the background likely to lie in the periphery of the shooting screen, and to transform the image coordinates accordingly, so that an image with less blur is obtained. In this case, some influence of subject movement remains, but since the movement of the subject as a whole is slight, the image does not deteriorate much.
[0047]
FIG. 3 illustrates the region of one frame from which the shift detection unit 113 extracts a feature point.
[0048]
A peripheral area 125 (the hatched area) that avoids the region 126 where the main subject is likely to be located in frame 119a serves as the feature point extraction area. This extraction area 125 can be set as appropriate. The shift detection unit 113 selects a high-luminance, high-contrast point within the extraction area 125 and sets it as the feature point (edge detection).
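A minimal sketch of selecting a high-contrast feature point from the peripheral band while masking out the central main-subject region might look as follows. The function name, the gradient-magnitude contrast score, and the 20% margin are illustrative assumptions, not values from the patent.

```python
import numpy as np

def peripheral_feature_point(frame, margin=0.2):
    """Pick the highest-contrast point in the peripheral band of a frame,
    masking out the central region where the main subject is assumed to
    sit (area 126 in FIG. 3). Contrast is scored with a simple gradient
    magnitude; the 20% margin is an illustrative choice."""
    h, w = frame.shape
    gy, gx = np.gradient(frame.astype(float))
    contrast = np.hypot(gy, gx)
    y0, y1 = int(h * margin), int(h * (1 - margin))
    x0, x1 = int(w * margin), int(w * (1 - margin))
    contrast[y0:y1, x0:x1] = -np.inf   # exclude the main-subject region
    return np.unravel_index(np.argmax(contrast), contrast.shape)
```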
[0049]
For the second and subsequent images, the same point is searched for in the neighborhood of the feature point's coordinates in the immediately preceding image (within the range of composition shift due to camera shake, which is determined by the camera's focal length), and the matching point becomes the feature point.
[0050]
Here, for the sake of explanation, feature point coordinates are obtained for each image. In practice, however, a correlation operation is performed between the first and second images, and the change of the corresponding pixels is treated as a motion vector, i.e., the change of the feature point. For the third image, the change of the feature point is obtained by a correlation operation with the second image, and thereafter the change for each image is obtained in the same way.
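The correlation operation between successive images can be sketched as brute-force block matching over a small search window. Minimizing the sum of squared differences stands in for the patent's correlation operation, and the search radius and patch half-size are illustrative values.

```python
import numpy as np

def motion_vector(prev_pt, prev_img, cur_img, search=5, r=3):
    """Track how the patch around the previous feature point moved.
    Block matching by minimising the sum of squared differences is a
    sketch equivalent to the correlation operation in the text; `search`
    and the patch half-size `r` are assumed values."""
    y, x = prev_pt
    patch = prev_img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    best, best_dv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur_img[y + dy - r:y + dy + r + 1,
                           x + dx - r:x + dx + r + 1].astype(float)
            if cand.shape != patch.shape:
                continue  # window ran off the frame edge
            score = float(((patch - cand) ** 2).sum())
            if score < best:
                best, best_dv = score, (dy, dx)
    return best_dv
```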
[0051]
Note that instead of selecting only one feature point in the extraction area 125, a plurality of points may be selected, with either the average of their motion vectors or the vector with the minimum magnitude taken as the change of the feature point.
[0052]
The minimum value is used because a feature point selected in the screen periphery may itself be moving; the point that moves least is therefore chosen.
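The two combination rules just mentioned (average of the motion vectors, or the vector of minimum magnitude) can be sketched as follows; `combine_vectors` is a hypothetical helper, not code from the patent.

```python
import numpy as np

def combine_vectors(vectors, mode="min"):
    """Combine motion vectors from several peripheral feature points,
    either by averaging them or by keeping the one of smallest magnitude
    (the least-moving point is least likely to be a subject that moved
    on its own)."""
    v = np.asarray(vectors, dtype=float)
    if mode == "mean":
        return tuple(v.mean(axis=0))
    norms = np.hypot(v[:, 0], v[:, 1])
    return tuple(v[np.argmin(norms)])
```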
[0053]
The coordinate conversion unit (coordinate conversion means) 114 transforms the coordinates of each image according to the change of the feature point obtained by the shift detection unit 113. The image storage unit 115 stores each image after coordinate conversion.
[0054]
Each image data stored in the image storage unit 115 is output to an image combining unit (synthesizing unit) 118, and each image is combined into one image.
[0055]
In the case of a digital image, exposure can also be corrected by increasing the gain of a single underexposed photograph, but raising the gain amplifies noise and produces an unsightly image.
[0056]
However, when the gain of the image is raised by combining many images, as in the present embodiment, the noise of the individual images is averaged, so an image with a high S/N ratio is obtained. As a result, noise is suppressed and the exposure is optimized.
[0057]
Viewed another way, one can say that the imaging unit 15 is operated at high sensitivity while tolerating noise, a plurality of images is taken, and the random noise contained in the images is reduced by averaging them.
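This noise-averaging claim can be checked numerically: stacking N frames of the same signal plus independent random noise leaves the signal intact while the noise standard deviation shrinks roughly as 1/sqrt(N). The frame count and noise level below are arbitrary test values, not from the patent.

```python
import numpy as np

# Simulate 16 under-exposed frames: identical signal plus independent noise.
rng = np.random.default_rng(1)
signal = np.full((64, 64), 50.0)
frames = [signal + rng.normal(0.0, 8.0, signal.shape) for _ in range(16)]

single_noise = np.std(frames[0] - signal)                 # about 8
stacked_noise = np.std(np.mean(frames, axis=0) - signal)  # about 8 / 4
```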
[0058]
Here, when two photographs whose composition has shifted as in FIG. 2 are combined, a region 127 where the two images do not overlap arises, as shown in FIG. 4. The image combining unit 118 therefore trims the region 127, keeps only the region where the two images overlap, and enlarges the result back to the size of the original frame by interpolation.
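The trimming and enlarging step might be sketched as follows. Nearest-neighbour resampling stands in for whatever interpolation the actual device uses, and the function name is an assumption.

```python
import numpy as np

def crop_and_resize(composite, dy, dx):
    """Trim the border where frames shifted by (dy, dx) did not overlap
    (region 127 in FIG. 4) and stretch the remainder back to the original
    frame size with nearest-neighbour indexing."""
    h, w = composite.shape
    y0, y1 = max(dy, 0), h + min(dy, 0)
    x0, x1 = max(dx, 0), w + min(dx, 0)
    core = composite[y0:y1, x0:x1]          # overlapping region only
    yi = (np.arange(h) * core.shape[0] // h).astype(int)
    xi = (np.arange(w) * core.shape[1] // w).astype(int)
    return core[np.ix_(yi, xi)]
```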
[0059]
The synthesized image data is displayed as a photographed image (still image) on the display unit 116 and is recorded in the recording unit 117.
[0060]
FIG. 5 is a flowchart summarizing the above-described operation. This flow starts when the image stabilizing switch is operated (turned on).
[0061]
In step S1001, the process waits until sw1 is turned on by half-pressing the release button; when sw1 turns on, the process proceeds to step S1002.
[0062]
In step S1002, the imaging unit 15 captures an image of the subject. The imaging control unit 111 drives the AF drive motor 14 to move the photographing lens 11 in the optical axis direction while detecting the contrast of the subject image (captured image) from the output of the signal processing unit 112.
[0063]
Then, when the contrast reaches its maximum, driving of the photographing lens 11 is stopped, bringing the photographing optical system into focus (so-called hill-climbing AF). Focus adjustment can also be performed by phase-difference detection.
[0064]
Further, the imaging control unit 111 obtains the brightness of the subject based on the output of the imaging unit 15.
[0065]
In step S1003, the number of images to photograph is determined from the subject brightness obtained in step S1002. For example, suppose that to expose properly for the metered brightness (photometry), the aperture 13 must be fully open (for example, f/2.8) and the opening time of the shutter 12, i.e., the exposure time, must be 1/8 second.
[0066]
If the focal length of the photographing optical system is 30 mm in 35 mm film terms, image blur from camera shake is likely at an exposure time of 1/8 second, so the exposure time is set to 1/32 second and four shots are taken.
[0067]
On the other hand, when the focal length of the photographing optical system is 300 mm, the exposure time is set to 1/320 second and 40 shots are taken so that image blur does not occur.
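The exposure-splitting logic behind these examples can be sketched under the classic 1/focal-length hand-holding rule; the function is an illustration, not code from the patent. Note that the 300 mm example above rounds up to 40 shots of 1/320 s, while this minimal loop returns the smallest shake-safe count.

```python
def plan_burst(metered_exposure_s, focal_length_35mm):
    """Split a metered exposure into shake-safe shots, assuming each shot
    must stay at or under 1/focal-length seconds (the classic hand-holding
    rule; an assumption, since the patent does not state its rule)."""
    safe = 1.0 / focal_length_35mm        # longest shake-free exposure (s)
    shots = 1
    while metered_exposure_s / shots > safe:
        shots += 1
    return shots, metered_exposure_s / shots
```

For the 30 mm case this reproduces the text exactly: a metered 1/8 s splits into 4 shots of 1/32 s.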
[0068]
In step S1004, the number of images is displayed on a display unit provided in the viewfinder of the camera or on a liquid crystal display unit provided on the exterior of the camera to notify the photographer.
[0069]
In step S1005, the process waits until sw2 is turned on by fully pressing the release button. If the half-press of the release button is released during this standby, that is, if sw1 turns off, the process returns to the start.
[0070]
In step S1006, shooting of the first image is started.
[0071]
In step S1007, the process loops through steps S1006 and S1007 until capture of the first image is complete, then proceeds to step S1008.
[0072]
In step S1008, the shift detection unit 113 extracts a characteristic image (feature point) from the extraction area 125 in the frame and obtains its coordinates.
[0073]
In step S1009, coordinate conversion is performed by the coordinate conversion unit 114. For the first image only, the process proceeds to step S1010 without coordinate conversion.
[0074]
In step S1010, the image data is stored in the image storage unit 115: the first image as it is, and the second and subsequent images after coordinate conversion.
[0075]
In step S1011, it is determined whether the number of images obtained in step S1003 has all been photographed. Until all the shots are complete, the process returns to step S1006 and repeats photographing, feature point extraction, coordinate conversion, and image storage (steps S1006 to S1010).
[0076]
In this flow, it appears that the second image is photographed only after the first image's feature extraction (coordinate conversion) and storage are complete. In practice, however, photographing of the second image and readout from the imaging unit 15 proceed while the feature extraction (coordinate conversion) and storage of the first image are still in progress.
[0077]
By repeating steps S1006 to S1011, the feature point coordinates of the second and subsequent images are obtained in step S1008, and those images are coordinate-transformed so that their feature points coincide with the feature point of the first image; the compositions of the second and subsequent images are thus all aligned with that of the first image.
[0078]
If it is determined in step S1011 that all shooting has been completed, the process advances to step S1012.
[0079]
In step S1012, the plurality of images stored in the image storage unit 115 is combined. The combination averages the signals at the corresponding coordinates of each image, which reduces the random noise in the images, and the gain of the noise-reduced image is then raised to optimize the exposure.
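As a quick numerical check of this step: averaging N aligned frames and then applying a gain of N is the same as summing them, but the averaging step is where random noise cancels, and the gain then restores the intended exposure. The values below are arbitrary demo values.

```python
import numpy as np

# Four aligned, under-exposed frames: identical signal plus random noise.
rng = np.random.default_rng(2)
frames = [np.full((8, 8), 10.0) + rng.normal(0.0, 2.0, (8, 8))
          for _ in range(4)]

avg = np.mean(frames, axis=0)   # noise-reduced, still under-exposed
exposed = avg * len(frames)     # gain-up to the equivalent long exposure
```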
[0080]
In step S1013, regions where the images do not overlap because of composition shift, such as the region 127 in FIG. 4, are trimmed from the composite image, which is then interpolated back to the size of the original frame.
[0081]
In step S1014, the image data obtained in step S1013 is displayed as a still image on a liquid crystal display unit disposed on the back of the camera or the like. Thereby, the photographer can observe the photographed image.
[0082]
In step S1015, the image data obtained in step S1013 is recorded on a recording medium that includes, for example, a semiconductor memory or the like and is detachable from the camera. In step S1016, the process returns to the start.
[0083]
If the release button is still half-pressed (sw1 on) at step S1016, the flow proceeds through steps S1001, S1002, S1003, and S1004 again. If the release button is still fully pressed (sw2 on) at step S1016, the process waits at step S1016 without returning to the start.
[0084]
In the flow described above, the detection of the feature point shift and the coordinate conversion of the images are performed in parallel with each shot, so high-speed image processing is required.
[0085]
As a less expensive alternative, shift detection, coordinate conversion, and combination can be performed at leisure after all of the images have been captured.
[0086]
With such a method, the image processing load described above is markedly reduced, and when small VGA-class images are handled, the capacity of the image memory can also be reduced.
[0087]
FIG. 6 shows the flow in that case; it replaces steps S1006 to S1012 of FIG. 5.
[0088]
Each shot is taken in step S1006, and each time a shot is completed, the image is stored in the image storage unit 115 in step S1010. Steps S1006 to S1010 are repeated until all the images have been captured and stored; when all the images have been stored, the process proceeds to step S1008.
[0089]
In step S1008, as described above, the shift detection unit 113 extracts a characteristic image (feature point) from the extraction area 125 in each frame stored in the image storage unit 115 and obtains its coordinates.
[0090]
In step S1009, coordinate conversion is performed by the coordinate conversion unit 114, and the converted image is stored again in the image storage unit 115. Coordinate conversion is not performed for the first image.
[0091]
In step S1012, all the images stored in the image storage unit 115 (the first without coordinate conversion, the rest after it) are combined.
[0092]
As described above, the image processing can be performed slowly in the interval (for example, one second) between the end of all photographing and the display of the image, so the image stabilization system can be built without an expensive image processing chip.
[0093]
When the image size is small (for example, VGA), as in a camera mounted on a portable electronic device, the capacity of the image storage unit 115 that temporarily stores captured image data can also be small.
[0094]
By performing shift detection, coordinate conversion, and image combination after all images have been captured in this manner, no large-scale hardware is required, and the image stabilization system can be realized easily.
[0095]
(2nd Embodiment)
The camera according to the present embodiment is a modification of the camera according to the above-described first embodiment. Here, the configuration of the camera of the present embodiment is substantially the same as the configuration described in the first embodiment (FIG. 1).
[0096]
In the first embodiment, the feature point extraction region used by the shift detection unit 113 was set to the peripheral area 125 of the frame.
[0097]
However, the feature point extraction area is not limited to the peripheral area 125 of the frame; an area excluding the focus areas provided in the shooting screen, or an area excluding only the currently focused area, may also serve as the feature point extraction area.
[0098]
This is because the main subject (a person) is superimposed on the focus area when photographing; to set feature points away from the main subject, an area other than the focus area can be used as the feature point extraction area.
[0099]
FIG. 7 shows the feature point extraction region within the shooting screen. When focus is achieved on the focus area 128c, which captures the main subject among the focus areas (focus detection areas) 128a, 128b, 128c, 128d, and 128e provided in the shooting screen (frame 119a), the screen peripheral area 125, excluding the main subject area 126 centered on the focus area 128c, is set as the feature point extraction area.
[0100]
That is, the main subject area 126 and the feature point extraction area change depending on which of the focus areas 128a, 128b, 128c, 128d, and 128e the main subject occupies.
[0101]
If a suitable image within the feature point extraction region is set as the feature point and the images are combined after correcting the shift of each image based on its coordinates, a preferable composite can be obtained.
[0102]
Also, unlike the flow of FIG. 5, it is not necessary to wait until the coordinate conversion and storage of all the images are complete before combining them; combination may instead proceed image by image.
[0103]
FIG. 8 is a timing chart explaining this operation. For the exposure f1, the signal photoelectrically converted and accumulated in the imaging unit 15 is read out as the imaging signal F1. Then, simultaneously with the readout of the imaging signal F2, a correlation operation between the previous imaging signal F1 and the current imaging signal F2 is performed to obtain the change of the feature point between the two images, and the two imaging signals F1 and F2 are combined into the composite signal C2.
[0104]
Next, simultaneously with the reading of the imaging signal F3, a correlation operation between the previous synthesized signal C2 and the current imaging signal F3 is performed to determine the change in a feature point, and the synthesized signal C2 and the imaging signal F3 are combined to obtain a synthesized signal C3.
[0105]
Next, simultaneously with the reading of the imaging signal F4, a correlation operation between the previous synthesized signal C3 and the current imaging signal F4 is performed to determine the change in a feature point, and the synthesized signal C3 and the imaging signal F4 are combined to obtain a synthesized signal C4.
[0106]
Then, the obtained synthesized signal C4 (synthesized image) is displayed on a liquid crystal display unit provided on the back of the camera or the like, and is also recorded on a recording medium.
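The incremental synthesis described above can be sketched as follows, assuming a pure-translation shift already known from the correlation step; the running composite plays the role of the synthesized signals C2 to C4. All names and the wrap-around edge handling are illustrative, not the patent's implementation.

```python
# Sketch (hypothetical) of the incremental synthesis of Fig. 8: each new
# frame F_k is aligned to the running composite C_{k-1} and folded into a
# running average, so only one composite image ever needs to be stored.
import numpy as np

def align(frame, shift):
    """Undo a pure-translation composition shift (dy, dx) found by the
    correlation step; edge pixels wrap here purely for brevity."""
    return np.roll(frame, (-shift[0], -shift[1]), axis=(0, 1))

def update_composite(composite, n_merged, frame, shift):
    """Fold frame number n_merged+1 into the running average."""
    aligned = align(frame, shift)
    return (composite * n_merged + aligned) / (n_merged + 1)

# Four underexposed frames of a constant scene, each shifted by camera shake.
rng = np.random.default_rng(0)
scene = rng.random((32, 32))
shifts = [(0, 0), (1, 0), (0, 2), (1, 1)]
frames = [np.roll(scene, s, axis=(0, 1)) for s in shifts]

composite = frames[0].astype(float)
for k in range(1, 4):
    composite = update_composite(composite, k, frames[k], shifts[k])

# After alignment every frame matches the scene, so the average does too.
assert np.allclose(composite, scene)
```

Because only the current composite is carried forward, no per-frame storage is needed, which is exactly why step S1010 can be dropped in the flowchart of FIG. 9.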
[0107]
FIG. 9 is a flowchart illustrating the above operation. Compared to the flowchart of FIG. 5, the image saving of step S1010 is omitted: after coordinate conversion (step S1009), image synthesis (step S1012) is performed, and then the completion determination is made (step S1011).
[0108]
In this embodiment, the saving of the image in step S1010 is eliminated because each photographed image is combined with the image obtained in the previous photographing at the same time as shooting; only one synthesized image exists at any time, so there is no need to store a plurality of captured images.
[0109]
That is, since the composite image is updated for each photographing, there is no need to store each photographed image. Therefore, the camera of the present embodiment does not include the image storage unit 115 shown in FIG.
[0110]
In the flow of FIG. 9, it appears that the next image capture does not begin until the processing of step S1011 is completed; in actuality, however, image capture, imaging signal output, correlation calculation, and image synthesis are performed simultaneously, as shown in the timing chart of FIG. 8.
[0111]
As described above, in the first and second embodiments of the present invention, shooting is repeated a plurality of times with an exposure time short enough not to cause camera shake, and the plurality of captured images are synthesized to compensate for the resulting underexposure; furthermore, by performing coordinate conversion on each image before synthesis to correct the composition deviation caused by camera shake, image blur in the synthesized image is eliminated.
[0112]
As a result, a digital camera can perform electronic image stabilization in the same way as a video camera, making it possible to build a vibration reduction system far smaller than that of a silver halide camera; and since the deviation of the image itself is corrected, not only angular shake but also shift shake can be corrected.
[0113]
Moreover, the first and second embodiments described above address the question of which region of the shooting screen feature points should be extracted from when detecting the shift used in the coordinate conversion that corrects the composition deviation of each captured image.
[0114]
For example, as shown in FIGS. 3 and 7, the shooting screen is divided into two regions 125 and 126. If a feature point were extracted from the region 126, where the main subject (a person) is likely to be located, to detect composition deviation, the deviation could not be determined accurately because of the movement of the person.
[0115]
For this reason, as in the first and second embodiments described above, the position coordinates of feature points such as the bright spot 122a are detected for each image within the region 125 outside the region 126 of the shooting screen; after coordinate transformation of each image so that the feature points detected in each image are aligned at the same position, image synthesis yields a single image without image blur.
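A minimal sketch of this align-then-average idea, assuming a single bright spot as the feature point and a pure translation between frames (function names are hypothetical):

```python
# Illustrative sketch (not the patent's actual implementation): the bright
# spot 122a plays the role of the feature point; each image is translated
# so that its feature point lands on the reference image's coordinates,
# and the aligned images are then averaged.
import numpy as np

def locate_feature(img):
    """Return (row, col) of the brightest pixel, standing in for the
    feature-point detection of the shift detecting unit 113."""
    return np.unravel_index(np.argmax(img), img.shape)

def align_to_reference(img, ref_pt):
    dy = ref_pt[0] - locate_feature(img)[0]
    dx = ref_pt[1] - locate_feature(img)[1]
    return np.roll(img, (dy, dx), axis=(0, 1))

base = np.zeros((16, 16))
base[4, 4] = 1.0                                     # bright spot
shaken = np.roll(base, (2, 1), axis=(0, 1))          # composition shift
aligned = align_to_reference(shaken, locate_feature(base))
assert locate_feature(aligned) == (4, 4)
composite = (base + aligned) / 2
```

A real implementation would use the correlation between the two regions rather than a single brightest pixel, but the translation-then-synthesize structure is the same.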
[0116]
(Third embodiment)
FIG. 10 is a block diagram of a camera according to the third embodiment of the present invention. The present embodiment differs from the first (and second) embodiment in that each image is not subjected to coordinate transformation at the time of synthesis; instead, only images with small composition deviation are first selected, and a synthesized image without blur is obtained by synthesizing the selected images.
[0117]
For this reason, in the configuration of the camera according to the present embodiment, the coordinate conversion unit 114 provided in the first embodiment (FIG. 1) is omitted; instead, an image selection unit (image selection means) 21 and a selection control unit (selection control means) 22 are provided. The other components are the same as in the first embodiment.
[0118]
The image selection unit 21 selects only those images whose feature point coordinates, obtained by the shift detection unit 113 as in the first embodiment, are not substantially displaced, and stores the selected images in the image storage unit 115.
[0119]
For this reason, the images used in the synthesis by the image synthesis unit 118 all have small deviations in their feature point coordinates, so an image without blur can be obtained simply by synthesizing them, without performing coordinate conversion.
[0120]
However, unlike the first embodiment, the predetermined number of shots alone may not yield enough images for synthesis: in the present embodiment, images with large composition deviation are rejected by the image selection unit 21 and are not available as images for synthesis.
[0121]
To prevent this, the number of images to be photographed must be set in advance to be larger than the number actually required (the shot count derived from the photometry result).
[0122]
On the other hand, as the subject becomes darker, the number of images required for shooting increases accordingly, and the time required to complete all shooting increases.
[0123]
When the shooting time lengthens in this way, the composition shift due to camera shake increases, fewer images pass the selection by the image selection unit 21, and a properly exposed image may never be obtained.
[0124]
Therefore, in the present embodiment, as the subject becomes darker, the selection criterion of the image selection unit 21 is relaxed, and image synthesis is performed while allowing a certain degree of composition deviation.
[0125]
In FIG. 10, the selection control unit 22 determines the selection criterion used by the image selection unit 21. When the shooting information from the shooting control unit 111 indicates that the subject is dark and the number of shots needed for proper exposure increases, the selection criterion of the image selection unit 21 is loosened, that is, the allowable change in the feature point coordinates is increased.
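A hedged sketch of such a brightness-dependent criterion, using the 5-pixel and 10-pixel values given later in this embodiment's example; the function names are illustrative:

```python
# Sketch of the selection control unit 22: the allowable feature-point
# change grows with the total exposure time, matching the text's example
# of 5 px at a 1/15 s total exposure and 10 px (double) at 1/8 s.
def selection_threshold_px(total_exposure_s):
    """Return the allowed feature-point displacement in pixels."""
    return 10.0 if total_exposure_s >= 1 / 8 else 5.0

def passes_selection(shift_px, total_exposure_s):
    """Would the image selection unit 21 keep this image?"""
    return shift_px <= selection_threshold_px(total_exposure_s)

assert passes_selection(7.0, 1 / 8)        # 7 px allowed at 1/8 s total
assert not passes_selection(7.0, 1 / 15)   # but rejected at 1/15 s total
```

A smooth scaling with exposure time would work just as well; the two-level rule simply mirrors the concrete numbers in the text.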
[0126]
FIG. 11 is a flowchart showing a shooting operation in the camera of this embodiment. This flow starts when an image stabilization switch provided on the camera is turned on.
[0127]
In step S1001, the photographer half-presses the release button to wait until sw1 is turned on. When sw1 is turned on, the process proceeds to step S1002.
[0128]
In step S1002, exposure is performed by the imaging unit 15. The imaging control unit 111 drives the AF drive motor 14 to move the imaging lens 11 in the optical axis direction while detecting the contrast of the captured image based on the output from the signal processing unit 112. Then, when the contrast of the subject image reaches the peak, the movement of the photographing lens 11 is stopped to bring the photographing optical system into a focused state. Further, the imaging control unit 111 obtains the brightness of the subject based on the output of the imaging unit 15.
[0129]
In step S1003, the number of images to be photographed is obtained from the brightness of the subject obtained in step S1002.
[0130]
For example, suppose that photometry of the subject indicates that, for proper exposure, the aperture 13 must be fully opened (for example, f2.8) and the opening time of the shutter 12, that is, the exposure time, must be 1/8 second.
[0131]
Here, when the focal length of the photographing optical system is 30 mm in 35 mm film terms, image blur due to camera shake is likely with a 1/8 second exposure; the exposure time is therefore set to 1/32 second, and shooting is set to be performed eight times.
[0132]
In the first embodiment, the number of continuous shots was set to four under the same conditions; in the present embodiment, however, the number of shots is doubled in anticipation of discarded images, since images with large composition deviation are not used for synthesis.
[0133]
On the other hand, when the focal length of the photographing optical system is 300 mm, the exposure time is set to 1/320 seconds and the photographing is performed 80 times so that image blur does not occur. This is also set to twice the number of times of photographing as compared to the case of the first embodiment.
[0134]
In step S1004, the number of images is displayed on a display unit provided in the viewfinder of the camera or on a liquid crystal display unit provided on the exterior of the camera to notify the photographer.
[0135]
In step S1005, the system waits until sw2 is turned on by fully pressing the release button. If, while waiting, the half-press of the release button is released and sw1 turns off, the process returns to the start.
[0136]
In step S1006, shooting of the first image is started.
[0137]
In step S1007, the process loops through steps S1006 and S1007 until the capture of the first image is completed; the process then proceeds to step S1008.
[0138]
In step S1008, a characteristic image (characteristic point) is extracted from the region 125 (FIG. 3 or FIG. 7) of the frame by the deviation detecting unit 113, and the coordinates of the image are obtained.
[0139]
In practice, a correlation operation is performed here to obtain the amount of change of the feature points: when the second image reaches this step, a correlation operation with the first image is performed to obtain the change in the feature points.
[0140]
In this step, correlation calculation with the first image, stored in advance, is also performed for the third and fourth images to obtain the amount of change of the feature points.
[0141]
In step S2001, the selection control unit 22 changes the selection criterion of the image selection unit 21 (the amount of change in a feature point serving as a reference at the time of image selection) in accordance with the brightness of the image obtained in step S1002.
[0142]
Specifically, for a subject whose brightness requires a total exposure time of 1/8 second for proper exposure, the allowable amount of composition deviation (the selection criterion for the change in feature points) is set to twice that used for a subject requiring a total exposure time of 1/15 second.
[0143]
For example, when the total exposure time is 1/15 second, a change of up to five pixels of the imaging unit 15 is allowed at the feature points (the selection criterion) when selecting images for synthesis; when the total exposure time is 1/8 second, a change of up to ten pixels is allowed.
[0144]
In step S2002, the captured image is judged against the selection criterion obtained in step S2001. If the amount of change in the feature point coordinates of the image under judgment exceeds the allowable amount (the selection criterion), the process proceeds to step S2003, the image is discarded, and the process returns to step S1006.
[0145]
When the first image is captured, the process proceeds to step S1010 without selecting an image.
[0146]
In step S1010, the selected image data is stored in the image storage unit 115.
[0147]
In step S1011, it is determined whether the number of shots determined in step S1003 has been completed. Until all shots are completed, the process returns to step S1006, and shooting, feature point extraction, image selection, and image storage (steps S1006 to S1010) are repeated.
[0148]
As a result, the composition of the second and subsequent images stored in the image storage unit 115 is almost the same as the composition of the first image. When all shooting is completed, the process proceeds to step S1012.
[0149]
In step S1012, a plurality of images stored in the image storage unit 115 are combined.
[0150]
Here, synthesis is performed by averaging the signals at corresponding coordinates of each image; averaging reduces the random noise in the images. The gain of the noise-reduced image is then increased to optimize the exposure.
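This average-then-gain synthesis can be sketched as follows, assuming equally underexposed frames; `target_frames` (the shot count a proper exposure would need) is an assumed parameter name:

```python
# Minimal sketch of the synthesis in step S1012: pixels at corresponding
# coordinates are averaged (suppressing random noise) and the result is
# gained up to the proper exposure level.
import numpy as np

def synthesize(images, target_frames):
    """target_frames is the count a proper exposure would have needed;
    with fewer usable images the same gain amplifies more residual noise."""
    averaged = np.stack(images).astype(float).mean(axis=0)  # noise reduction
    return averaged * target_frames                          # exposure gain

# Only 2 of the 4 needed 1/4-exposure frames survived selection.
frames = [np.full((8, 8), 0.25) for _ in range(2)]
out = synthesize(frames, target_frames=4)
assert np.allclose(out, 1.0)   # proper exposure restored
```

Averaging N frames reduces random noise roughly by a factor of the square root of N, which is why synthesizing only two frames instead of four leaves "some noise" after the gain is applied, as the text notes.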
[0151]
The difference between the present embodiment and the first embodiment is that, while in the first embodiment the number of images to be synthesized is fixed (for example, four when the total exposure time is 1/8 second), in the present embodiment the number of images that will pass selection cannot be known in advance.
[0152]
Here, when four or more images suitable for synthesis are obtained, the four with the smallest composition deviation (amount of change in feature points) are selected in order and used for synthesis. On the other hand, when only two suitable images are obtained, synthesis is performed using just those two, and the gain of the image is increased, while tolerating some noise, to optimize the exposure.
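A small sketch of this selection rule (the function name and the 5-pixel limit are illustrative; the limit corresponds to the selection criterion of step S2001):

```python
# Sketch of the selection in [0152]: among images whose feature-point
# change is within the criterion, the `needed` images with the smallest
# change are kept for synthesis; fewer may be returned if few qualify.
def pick_for_synthesis(shifts_px, needed=4, limit_px=5.0):
    """shifts_px: per-image feature-point change. Returns indices used."""
    ok = [i for i, s in enumerate(shifts_px) if s <= limit_px]
    ok.sort(key=lambda i: shifts_px[i])   # smallest deviation first
    return ok[:needed]

# Six acceptable candidates -> the four smallest shifts win.
assert pick_for_synthesis([0.0, 4.2, 1.1, 7.9, 2.3, 0.5, 3.8]) == [0, 5, 2, 4]
```

When the returned list is shorter than `needed`, synthesis proceeds with the images at hand and the gain is raised accordingly, as described above.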
[0153]
In the first embodiment, regions of the synthesized images that do not overlap because of composition shift are cut in step S1013, and the remaining image is enlarged and interpolated to restore the original frame size.
[0154]
However, in the present embodiment, since images with small composition deviation are selected for synthesis, the non-overlapping area is small, and the step corresponding to step S1013 of the first embodiment is eliminated.
[0155]
That is, in the present embodiment, it is possible to perform the shake correction that reproduces the entire shooting screen without waste.
[0156]
In step S1014, the image data obtained in step S1012 is displayed as a photographed image on a liquid crystal display unit disposed on the back of the camera or the like.
[0157]
In step S1015, the image data obtained in step S1012 is recorded on a recording medium.
[0158]
In step S1016, the process returns to the start.
[0159]
If the release button is still half-pressed at the stage of step S1016 and sw1 is on, the flow proceeds again through steps S1001, S1002, S1003, and S1004. If the release button remains fully pressed at the stage of step S1016 and sw2 is on, the process waits at step S1016 without returning to the start.
[0160]
As described above, in the present embodiment, image synthesis is performed by selecting only an image with a small composition deviation, instead of performing coordinate conversion of the image as in the first embodiment.
[0161]
For this reason, the disadvantage is that the number of shots increases in anticipation of discarding images with large composition deviation; the advantages are that the heavy computational load of coordinate transformation is eliminated and that, since no part of the screen is lost to post-transformation synthesis, the entire screen can be reproduced.
[0162]
(Fourth embodiment)
The camera according to the fourth embodiment of the present invention is a modification of the camera according to the third embodiment, and the configuration of the camera according to the present embodiment is the same as the configuration described in the third embodiment (FIG. 10).
[0163]
In the third embodiment, as can be seen from the flowchart of FIG. 11, the initial number of shots is set with a margin. If, instead, one additional shot is taken each time an image is discarded in step S2003, no margin in the initial shot count is needed and extra shooting can be avoided. In this case, however, the photographer does not know in advance how many shots will be taken and cannot predict when shooting will end.
[0164]
Even so, in the above method, a guide number of shots can be displayed to inform the photographer, and shooting can be forcibly terminated once it exceeds the displayed number, so that useless shooting is avoided.
[0165]
FIG. 12 is a flowchart illustrating the above-described operation. This flow starts when an image stabilization switch provided in the camera is turned on.
[0166]
In step S1001, the photographer half-presses the release button to wait until sw1 is turned on. When sw1 is turned on, the process proceeds to step S1002.
[0167]
In step S1002, exposure in the imaging unit 15 is performed. The imaging control unit 111 drives the AF drive motor 14 to move the imaging lens 11 in the optical axis direction while detecting the contrast of the image based on the output from the signal processing unit 112. When the contrast of the subject image reaches the peak value, the movement of the photographing lens 11 is stopped to bring the photographing optical system into a focused state. At the same time, the brightness of the subject is obtained based on the output of the imaging unit 15.
[0168]
In step S1003, the number of images to be photographed is obtained from the brightness of the subject obtained in step S1002.
[0169]
For example, suppose that photometry of the subject indicates that, for proper exposure, the aperture 13 must be fully opened (for example, f2.8) and the opening time (exposure time) of the shutter 12 must be 1/8 second.
[0170]
Here, when the focal length of the photographing optical system is 30 mm in 35 mm film terms, image blur due to camera shake may occur with a 1/8 second exposure; to avoid any possibility of blur, the exposure time is set to 1/32 second, and shooting is set to be performed eight times.
[0171]
In the first embodiment, the number of shots was set to four under the same conditions; in this embodiment, however, since images with large composition deviation are not used for synthesis, the number of shots is doubled in anticipation of discarded images.
[0172]
On the other hand, under the same conditions, when the focal length of the photographing optical system is 300 mm, the exposure time is set to 1/320 second and shooting is performed 80 times so that image blur does not occur. This, too, is twice the number of shots of the first embodiment.
[0173]
In step S2006, the estimated number of shots is displayed on the display unit provided in the viewfinder of the camera or the liquid crystal display unit provided on the exterior of the camera to notify the photographer.
[0174]
Here, the number is an estimate because the number of images actually used for synthesis changes with the composition shift (feature point change) of each captured image. For example, if the camera is held firmly and there is no composition deviation, the process ends after four shots; conversely, when many images have large composition deviation, even eight shots may not yield enough images for synthesis.
[0175]
In step S1005, the system waits until sw2 is turned on by fully pressing the release button. If, while waiting, the half-press of the release button is released and sw1 turns off, the process returns to the start.
[0176]
In step S1006, shooting of the first image is started.
[0177]
In step S1007, the process loops through steps S1006 and S1007 until the capture of the first image is completed; the process then proceeds to step S1008.
[0178]
In step S1008, the shift detecting unit 113 extracts a characteristic image (feature point) from the peripheral region 125 (FIG. 3 or FIG. 6) of the frame, and obtains the coordinates of the image.
[0179]
In practice, a correlation operation is performed here to obtain the amount of change of the feature points: when the second image reaches this step, a correlation operation with the first image is performed to obtain the change in the feature points.
[0180]
In this step, correlation calculation with the first image, stored in advance, is also performed for the third and fourth images to obtain the amount of change of the feature points.
[0181]
In step S2001, the selection control unit 22 changes the selection criterion of the image selection unit 21 (the amount of change in a feature point serving as a reference at the time of image selection) in accordance with the brightness of the image obtained in step S1002.
[0182]
Specifically, when the brightness of the subject requires a total exposure time of 1/8 second for proper exposure, the allowable amount of composition deviation (the selection criterion) is set to twice that used when the subject requires a total exposure time of 1/15 second.
[0183]
For example, when the total exposure time is 1/15 second, a change in the feature points of up to 5 pixels of the imaging unit 15 is allowed (the selection criterion); when the total exposure time is 1/8 second, a change of up to 10 pixels is allowed.
[0184]
In step S2002, the captured image is judged against the selection criterion obtained in step S2001. If the change in feature point coordinates of an image under judgment exceeds the allowable amount (the selection criterion value), the process proceeds to step S2003, the image is discarded, and the process returns to step S1006.
[0185]
When the first image is photographed, the process proceeds to step S1012 without selecting an image.
[0186]
In step S1012, the selected images are combined.
[0187]
Here, synthesis is performed by averaging the signals at corresponding coordinates of each image; averaging reduces the random noise in the images. The gain of the noise-reduced image is then increased to optimize the exposure.
[0188]
In step S2004, it is determined whether or not the images have been combined as many as the number of images (estimated number of images) necessary for proper exposure.
[0189]
For example, when a 1/8 second total exposure is divided into 1/32 second shots and a plurality of images is taken, the process proceeds to step S2005 once the four images with small composition shift selected in step S2002 have been synthesized.
[0190]
On the other hand, if the required number of images have not been combined, the process returns to step S1006 until the required number of images has been combined, and the photographing, feature point extraction, image selection, and image combination (steps S1006 to S1012) are repeated.
[0191]
Step S2007 is provided between steps S2004 and S1006: if the current number of shots is less than the estimated number (for example, eight), the process proceeds to step S1006; otherwise, the process proceeds to step S2005.
[0192]
This prevents the number of shots from increasing without limit in pursuit of images with less composition deviation.
[0193]
If the number of images required for proper exposure cannot be obtained for synthesis (for example, if four images are needed but only two with small composition deviation are obtained), only the obtained images are synthesized, and the gain of the synthesized image signal is increased to correct the exposure.
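The loop with its two stop conditions (enough merged images, step S2004, or the estimated shot ceiling, step S2007) might be sketched as follows; all names, and every number other than the 4-of-8 example from the text, are illustrative:

```python
# Rough sketch of the fourth embodiment's loop: keep shooting until either
# enough low-shift images have been merged (step S2004) or the estimated
# shot count is exhausted (step S2007).
def shoot_until_done(shift_sequence, needed=4, max_shots=8, limit_px=5.0):
    """shift_sequence simulates the per-shot composition shift in pixels.
    Returns (images merged, shots taken)."""
    merged, shots = 0, 0
    for shift in shift_sequence:
        shots += 1
        if shift <= limit_px:      # passes selection -> merged (S1012)
            merged += 1
        if merged >= needed or shots >= max_shots:
            break
    return merged, shots

# Steady hands: done after four shots.
assert shoot_until_done([1, 2, 1, 3, 9, 9, 9, 9]) == (4, 4)
# Shaky hands: forced stop at the 8-shot ceiling with only 2 usable images.
assert shoot_until_done([9, 1, 9, 9, 9, 2, 9, 9]) == (2, 8)
```

In the second case the camera would synthesize the two usable images and raise the gain, exactly as the paragraph above describes.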
[0194]
In the first embodiment, regions of the synthesized images that do not overlap because of composition shift are cut in step S1013, and the remaining image is enlarged and interpolated to restore the original frame size.
[0195]
However, in the present embodiment, since images with small composition deviation are selected for synthesis, the non-overlapping area is small, and the step corresponding to step S1013 of the first embodiment is eliminated.
[0196]
That is, in the present embodiment, it is possible to perform the shake correction that reproduces the entire shooting screen without waste.
[0197]
In step S2005, the number of pictures actually taken is displayed on a display unit provided in the viewfinder of the camera or a liquid crystal display unit provided on the back of the camera.
[0198]
In step S1014, the composite image data obtained in step S1012 is displayed as a captured image on a liquid crystal display unit disposed on the back of the camera or the like.
[0199]
In step S1015, the image data obtained in step S1012 is recorded on a recording medium.
[0200]
In step S1016, the process returns to the start.
[0201]
If the release button is still half-pressed at step S1016 and sw1 is on, the flow proceeds again through steps S1001, S1002, S1003, and S1004. When the release button remains fully pressed at step S1016 and sw2 is on, the process waits at step S1016 without returning to the start.
[0202]
Each time a plurality of images is synthesized, the underexposure of each image is improved. The number of shots required for this exposure improvement can therefore be known in advance by photometry before shooting.
[0203]
However, some of these captured images have a large compositional deviation and are not suitable for composition.
[0204]
Therefore, in the third and fourth embodiments described above, when performing multiple shootings, more images than the required number are shot in advance; images with relatively small composition deviation are selected from among them, and the selected images are synthesized to obtain a synthesized image with little blur.
[0205]
For example, when long-time exposure is desired, one exposure shooting is divided into many short-time exposure shootings, and as the number of shots increases, the composition deviation of each image increases.
[0206]
For this reason, the image selection unit 21 may not be able to select the required number of images suitable for combination.
[0207]
Therefore, in the fourth embodiment, under shooting conditions such as long exposure, the image selection criterion is loosened and images with slight composition deviation are accepted, so that shooting can be reliably continued.
[0208]
(Fifth embodiment)
FIG. 13 is a configuration diagram of a camera according to a fifth embodiment of the present invention. In this figure, the same members as those described in FIGS. 1 and 10 are denoted by the same reference numerals and description thereof is omitted.
[0209]
This embodiment differs from the third (and fourth) embodiment in that images with a certain amount of composition deviation are accepted by the image selection unit for use in synthesis, and, as in the first embodiment, such images undergo coordinate conversion to correct the composition deviation.
[0210]
In addition, when selecting images, not only the change in feature points on stationary objects around the frame (the area 125 in FIG. 3 or FIG. 7), as in the third embodiment, but also the change in feature points on the main subject (the person) is taken into account.
[0211]
Here, the reason why the images are coordinate-converted as in the first embodiment and also selected as in the second embodiment will be described.
[0212]
When a plurality of images is actually shot, images with large composition deviation appear as the total shooting time becomes longer.
[0213]
In the case of such a large composition shift, even if the images are coordinate-converted before synthesis, the missing portion of the screen (the area 127 in FIG. 4) becomes large, and the viewable image area becomes extremely narrow.
[0214]
Moreover, when the composition deviation is large, the image is not merely shifted in coordinates: image distortion caused by the characteristics of the camera lens and by the tilt of the camera also grows, so even if the change at some feature points is compensated by coordinate transformation, the image as a whole does not overlap well with the previous image.
[0215]
For this reason, only images whose composition deviation is small to some extent are selected, the selected images are coordinate-converted, and image synthesis is then performed.
[0216]
Next, the point that not only the change of feature points around the screen but also the magnitude of the movement of the main subject is taken into consideration in image selection, as in the second embodiment, will be described.
[0217]
If the exposure time becomes longer, not only does the composition change because of camera shake, but the movement of the main subject (subject shake) also increases.
[0218]
Then, if images in which the main subject has moved greatly are included in the synthesis, the main subject in the synthesized image becomes blurred.
[0219]
For example, when a plurality of shots is taken, as the last shot approaches, the main subject (a person) may mistakenly believe that shooting has finished and move from the shooting position.
[0220]
Therefore, in the present embodiment, an image in which the movement of the main subject is large is not used for the composition.
[0221]
In FIG. 13, when the change of feature points in the area where the main subject is located (such as the area 126 in FIGS. 3 and 7) is large, the second image selection unit 32 does not use that image for synthesis.
[0222]
The signal output from the signal processing unit 112 is input to a shift detecting unit 113, and the shift detecting unit 113 calculates a change in a feature point.
[0223]
The change in the feature points is obtained by dividing the shooting screen into the area 125 (the area other than the main subject area) and the area 126 (the main subject area) of FIGS. 3 and 7 and calculating the change in each area. The changes of a plurality of feature points are obtained in each area, and an average value, or the minimum value described above, is determined for each area.
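A sketch of this per-region averaging, with assumed names and a simple rectangle standing in for the area 126:

```python
# Sketch of how the shift detecting unit 113 could summarise feature-point
# changes per region: the frame is split into the peripheral area 125 and
# the main-subject area 126 and each region's displacements are averaged.
import numpy as np

def region_mean_shift(points, displacements, subject_box):
    """points: (y, x) feature coordinates; displacements: (dy, dx) changes;
    subject_box: (top, left, bottom, right) of area 126.
    Returns (mean shift of area 125, mean shift of area 126)."""
    t, l, b, r = subject_box
    inside = [(t <= y < b) and (l <= x < r) for y, x in points]
    area126 = np.array([d for d, m in zip(displacements, inside) if m])
    area125 = np.array([d for d, m in zip(displacements, inside) if not m])
    return area125.mean(axis=0), area126.mean(axis=0)

pts = [(2, 2), (30, 30), (15, 16)]           # last point lies in area 126
disp = [(1.0, 0.0), (1.0, 0.0), (4.0, 0.0)]  # subject moved more
shake, subject = region_mean_shift(pts, disp, (10, 10, 22, 22))
assert tuple(shake) == (1.0, 0.0)
assert tuple(subject) == (4.0, 0.0)
```

The peripheral mean approximates the camera shake, while the subject-area mean mixes camera shake with subject motion, which is what the second image selection unit separates out below.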
[0224]
The first image selection unit 31 selects images by comparing the change of feature points in the captured image's area 125 (FIGS. 3 and 7), obtained by the shift detection unit 113, with the selection criterion (the reference amount of change in feature points) determined by the selection control unit 22, and outputs only the selected images to the second image selection unit 32.
[0225]
Here, the image selection criterion is not as strict as that in the third embodiment, because a certain degree of composition deviation (change in the feature points) can be corrected by coordinate conversion in the coordinate conversion unit 114.
[0226]
For this reason, in the third embodiment, for example, when the total exposure time is 1/15 second, the amount of change in the feature points is allowed up to 5 pixels of the imaging unit 15 (selection criterion), whereas in the present embodiment the amount of change in the feature points is allowed up to 10 pixels (selection criterion).
[0227]
The selection criterion is also changed by the selection control unit 22 in the same manner as in the third embodiment: when the brightness of the subject requires a total exposure time of 1/8 second, the value of the selection criterion is set to twice that used when the brightness of the subject requires a total exposure time of 1/15 second.
[0228]
For example, when the total exposure time is 1/15 second, the amount of change in the feature points is allowed up to 10 pixels of the imaging unit 15 (selection criterion), and when the total exposure time is 1/8 second, images are selected while allowing up to 20 pixels (selection criterion).
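The scaling of the selection criterion with total exposure time can be modeled as one doubling of the allowance per (rounded) stop of exposure, which reproduces the 10-pixel/20-pixel values given above. This sketch is an illustration, not the patent's formula; the base values are taken from the text.

```python
import math

def selection_criterion(total_exposure_s, base_exposure_s=1/15, base_pixels=10):
    """Allowed feature-point change in pixels for a given total exposure.

    The text allows 10 pixels at a total exposure of 1/15 s and 20 pixels
    at 1/8 s; here that is modeled as doubling the allowance for each
    rounded stop of exposure relative to the base exposure.
    """
    stops = round(math.log2(total_exposure_s / base_exposure_s))
    return base_pixels * 2 ** stops
```

Under this model a shorter total exposure (e.g. 1/30 s) would correspondingly halve the allowance.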
[0229]
The second image selection unit 32 selects images based on the change in the feature points in the area 126 (FIGS. 3 and 7) obtained by the shift detection unit 113, and outputs only the selected images to the coordinate conversion unit 114.
[0230]
Note that the change in the feature points here is the value obtained by subtracting the change in the feature points in the area 125, so that it represents only the pure movement of the main subject (person) with the composition deviation removed; it is not the value obtained simply by calculating the change in the feature points within the area 126.
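The subtraction just described can be sketched as follows. The code is illustrative only; both inputs are assumed to be (dx, dy) displacements in pixels.

```python
def subject_shake(shift_region126, shift_region125):
    """Pure movement of the main subject, as described in the text.

    shift_region126: observed (dx, dy) shift of the main-subject area.
    shift_region125: (dx, dy) camera-shake shift measured in the
    background area. Subtracting the background shift removes the
    composition deviation, leaving only the subject's own motion.
    Returns the residual vector and its magnitude in pixels.
    """
    dx = shift_region126[0] - shift_region125[0]
    dy = shift_region126[1] - shift_region125[1]
    return (dx, dy), (dx * dx + dy * dy) ** 0.5
```

The magnitude is what the second image selection unit 32 would compare against its criterion (e.g. 10 pixels).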
[0231]
The role of the second image selection unit 32 is, as described above, to prevent images with large shake of the main subject from being used for image synthesis; images whose feature points have changed by 10 pixels or more of the imaging unit 15 are not used for synthesis.
[0232]
The coordinate conversion unit 114 performs coordinate conversion of the image output from the second image selection unit 32 based on the change in the feature point in the area 125, and outputs the result to the image storage unit 115, as in the first embodiment.
[0233]
The image storage unit 115 outputs the image data to the image synthesizing unit 118 when the number of images necessary for proper exposure has been accumulated.
[0234]
The image synthesizing unit 118 averages the input images to combine them into one image, changes the gain appropriately, cuts off the missing screen area 127 in FIG. 4, and interpolates the image so as to match the size of the original frame.
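A minimal sketch of the averaging-plus-gain step, assuming each short exposure carries 1/N of the proper exposure (N being the number of images required for proper exposure). Pure-Python lists stand in for image buffers; this is not the unit 118's actual implementation.

```python
def synthesize(images, required_count):
    """Average per-pixel signals, then raise the gain to correct exposure.

    images: equal-sized 2-D lists of pixel values, each frame underexposed
    by a factor of required_count. Averaging keeps the single-frame
    exposure level while reducing random noise, so a gain equal to
    required_count restores proper exposure, even when fewer frames than
    required were usable (at the cost of more noise).
    """
    h, w = len(images[0]), len(images[0][0])
    n = len(images)
    gain = required_count  # each frame carries 1/required_count of proper exposure
    return [[gain * sum(img[y][x] for img in images) / n
             for x in range(w)] for y in range(h)]
```

With four frames required and only two available, the same gain of 4 is applied, matching the text's description of tolerating some noise to correct exposure.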
[0235]
The finished image data is output to the display unit 116 and the recording unit 117, displayed as a photographed image on a liquid crystal display unit provided on the back of the camera, and recorded on a recording medium such as a semiconductor memory.
[0236]
FIG. 14 is a flowchart illustrating a shooting operation according to the present embodiment. This flow starts when the image stabilization switch is turned on.
[0237]
In step S1001, the photographer performs a half-press operation of the release button, and waits until sw1 is turned on. When sw1 is turned on, the process proceeds to step S1002.
[0238]
In step S1002, imaging is performed by the imaging unit 15. The photographing control unit 111 drives the AF drive motor 14 to move the photographing lens 11 in the optical axis direction while detecting the contrast of the image based on the output of the signal processing unit 112. Then, when the contrast reaches the peak value, the movement of the photographing lens 11 is stopped. Thereby, the photographing optical system is brought into a focused state.
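The contrast-peak focusing described above is a hill-climb search. The following sketch is illustrative only; the callback and the discrete lens positions are assumptions, not the photographing control unit 111's interface.

```python
def contrast_af(contrast_at, positions):
    """Hill-climb focus search: stop the lens at the contrast peak.

    contrast_at: maps a lens position to an image-contrast score
    (derived from the signal processing unit's output in the text).
    positions: ordered lens positions the AF drive motor can step
    through. The lens advances while contrast rises and stops as soon
    as contrast falls past its peak.
    """
    best_pos, best_c = positions[0], contrast_at(positions[0])
    for pos in positions[1:]:
        c = contrast_at(pos)
        if c < best_c:          # contrast has passed its peak: stop moving
            break
        best_pos, best_c = pos, c
    return best_pos
```

A real implementation would also step back to the peak position and handle noise in the contrast measure; those details are omitted here.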
[0239]
Further, the imaging control unit 111 obtains the brightness of the subject based on the output of the imaging unit 15.
[0240]
In step S1003, the number of images to be photographed is obtained from the brightness of the subject obtained in step S1002.
[0241]
For example, suppose that photometry of the subject shows that, for proper exposure, the aperture 13 must be fully opened (for example, f2.8) and the opening time of the shutter 12 (the exposure time) must be 1/8 second.
[0242]
Here, when the focal length of the photographing optical system is 30 mm in terms of 35 mm film, shooting with an exposure time of 1/8 second risks image blur due to camera shake, so the exposure time is set to 1/32 second and the shooting is set to be performed six times.
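The shot planning above can be reproduced with a small calculation: a blur-free per-shot exposure chosen from the focal length (the text's examples of 1/32 s at 30 mm and 1/320 s at 300 mm correspond to roughly one stop beyond the 1/focal-length rule), and a margin factor to anticipate discarded frames. The constants and the margin of 1.5 are assumptions made to match the text's examples, not values stated by the patent as a formula.

```python
import math

def shooting_plan(total_exposure_s, focal_length_mm, margin=1.5):
    """Split a long exposure into short blur-free exposures.

    Returns (per-shot exposure in seconds, number of shots including a
    margin for discarded frames). The per-shot limit is scaled so that
    30 mm gives 1/32 s, matching the text's examples.
    """
    per_shot_s = 1.0 / (focal_length_mm * 32 / 30)   # 30 mm -> 1/32 s, 300 mm -> 1/320 s
    base_count = total_exposure_s / per_shot_s       # shots needed for proper exposure
    return per_shot_s, math.ceil(base_count * margin)
```

With a 1/8 s total exposure this yields six shots at 30 mm and sixty shots at 300 mm, as in the text.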
[0243]
In the first embodiment, the number of shots under the same conditions is four. In the third embodiment, since images with a large composition deviation (amount of change in the feature points) are not used for composition, the number of shots is doubled in anticipation of images to be discarded.
[0244]
In the present embodiment, on the other hand, the criterion for discarding images due to composition deviation is loosened (the value of the selection criterion is increased), and on the assumption that the composition deviation is corrected by coordinate transformation in the subsequent signal processing step, the number of shots is reduced compared with the third embodiment.
[0245]
Further, when the focal length of the photographing optical system is 300 mm, the exposure time is set to 1/320 second and the photographing is performed 60 times so that there is no risk of image blur.
[0246]
This number is also set larger than in the first embodiment and smaller than in the third embodiment.
[0247]
In step S1004, the number of images is displayed on a display unit provided in the viewfinder of the camera or on a liquid crystal display unit provided on the exterior of the camera to notify the photographer.
[0248]
In step S1005, the flow waits until sw2 is turned on by fully pressing the release button.
[0249]
If, during this standby, the half-press of the release button is released and sw1 turns off, the process returns to the start.
[0250]
In step S1006, shooting of the first image is started.
[0251]
In step S1007, the flow loops through steps S1006 and S1007 until capture of the first image is completed, and then proceeds to step S1008.
[0252]
In step S1008, the shift detection unit 113 extracts a characteristic image (feature point) from the region 125 (FIGS. 3 and 7) in the shooting screen, and obtains the coordinates of the image.
[0253]
Actually, a correlation operation is performed here to obtain the amount of change of a feature point; when the second image reaches this step, a correlation operation is performed with the first image to obtain the amount of change of the feature point.
[0254]
As described above, in this step the changes in the feature points in the two regions 125 and 126 (FIGS. 3 and 7) are obtained; changes in a plurality of feature points are obtained in each of the region 125 (the periphery of the screen, etc.) and the region 126 (the main subject region, etc.). The average value or the minimum value described above is then obtained for each region, and that value is used as the amount of change of the feature points in that region.
[0255]
For the feature points in the region 126, the difference between the movement amount of the region 125 and the actual movement amount of the image coordinates in the region 126 is calculated, so that only the pure subject shake is obtained.
[0256]
Similarly, in step S1008, the correlation calculation is performed on the third and fourth images with the first image stored in advance to obtain the amount of change in the feature point.
[0257]
In step S2001, the selection control unit 22 changes the selection criterion (first selection criterion) of the first image selection unit 31 according to the brightness of the image obtained in step S1002.
[0258]
The change of the first selection criterion is as described above: for a subject brightness that requires a total exposure time of 1/8 second for proper exposure, the value of the first selection criterion is set to twice that used for a subject brightness that requires a total exposure time of 1/15 second.
[0259]
For example, when the total exposure time is 1/15 second, the amount of change in the feature points is allowed up to 10 pixels of the imaging unit 15, and when the total exposure time is 1/8 second, up to 20 pixels are allowed.
[0260]
In step S3001, the composition deviation of the captured image is determined based on the change in the feature point in the region 125 obtained in step S1008 and the first selection criterion obtained in step S2001. If the change in the coordinates of the feature point exceeds the allowable amount (the value of the first selection criterion), the process proceeds to step S2003, where the image is discarded and the process returns to step S1006.
[0261]
If there is only the first image, the process proceeds to step S1010 without selecting an image.
[0262]
In step S3002, the composition shift of the captured image is determined based on the change in the feature point in the region 126 obtained in step S1008. If the coordinate change amount of the feature point exceeds a predetermined allowable amount (the value of the second selection criterion), the process proceeds to step S2003, discards the image, and returns to step S1006.
[0263]
Note that the determination criterion (the value of the second selection criterion) in this step may be changed according to the shooting conditions such as the exposure time, as in step S3001.
[0264]
Also in step S3002, if there is only the first image, the process proceeds to step S1009 without selecting an image.
[0265]
In step S1009, coordinate conversion of the image is performed based on the change in the feature points of the area 125 obtained in step S1008, and the second and subsequent images are processed so as to overlap the first image.
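For a pure translational composition shift, the coordinate conversion of step S1009 amounts to shifting each later image by the opposite of its measured feature-point displacement. The sketch below handles integer-pixel translation only; the patent's coordinate conversion is not necessarily limited to this, and the function name and list-of-lists image format are illustrative.

```python
def align_to_reference(image, shift, fill=0):
    """Translate an image so that it overlaps the reference (first) image.

    shift: (dx, dy) composition shift of this image relative to the first
    image, as measured from the area-125 feature points. Each output pixel
    is taken from the shifted source position; pixels with no source data
    are filled with `fill` and end up in the non-overlapping area (the
    area 127 that is later cut off).
    """
    dx, dy = shift
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x + dx, y + dy          # source position in the shifted image
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = image[sy][sx]
    return out
```

Sub-pixel shifts and rotation would require interpolation and a full affine warp, which are beyond this sketch.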
[0266]
In step S1010, the first image and the second and subsequent selected images are stored in the image storage unit 115.
[0267]
In step S1011, it is determined whether shooting of all the images determined in step S1003 has been completed. Until it has, the process returns to step S1006, and shooting, feature point extraction, image selection, and image storage (steps S1006 to S1010) are repeated.
[0268]
In this flow it appears that the second image is captured only after processing of the first image has finished; in actuality, however, the second image is captured and read out from the imaging unit 15 while the feature points of the first image are being extracted, the coordinates calculated, and the image saved.
[0269]
As a result, the compositions of the second and subsequent images stored in the image storage unit 115 are all aligned with the composition of the first image.
[0270]
Upon completion of all shooting, the process advances to step S1012.
[0271]
In step S1012, images are combined.
[0272]
Here, the images are synthesized by averaging the signals at the corresponding coordinates of each image; the averaging reduces the random noise in the images. The gain of the noise-reduced image is then increased to optimize the exposure.
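The noise benefit of averaging can be demonstrated empirically: independent per-frame noise shrinks by roughly 1/sqrt(n) when n frames are averaged, which is why the gain can then be raised with less visible noise. This simulation is purely illustrative and unrelated to the patent's sensor characteristics.

```python
import random

def averaged_noise_std(n_frames, n_pixels=20000, sigma=1.0, seed=0):
    """Measured standard deviation of per-pixel noise after frame averaging.

    Each simulated pixel receives independent Gaussian noise of standard
    deviation `sigma` in every frame; averaging n_frames frames should
    reduce the per-pixel standard deviation by about 1/sqrt(n_frames).
    """
    rng = random.Random(seed)
    vals = [sum(rng.gauss(0.0, sigma) for _ in range(n_frames)) / n_frames
            for _ in range(n_pixels)]
    mean = sum(vals) / n_pixels
    return (sum((v - mean) ** 2 for v in vals) / n_pixels) ** 0.5
```

Averaging four frames roughly halves the noise standard deviation, so doubling the gain afterwards still yields less noise than a single frame amplified fourfold.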
[0273]
At this step, as in the third embodiment, it is not known in advance whether the number of images required for composition has been obtained.
[0274]
That is, even when six images are taken, all six may be usable for image composition because there is little deviation of the composition and the main subject, or only three may be usable.
[0275]
Therefore, in this step, when for example four images are required for composition and four or more suitable images have been obtained, the four images with the smallest composition deviation (amount of change in the feature points) are used for synthesis, in ascending order of deviation.
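Picking the best images in ascending order of deviation is a simple sort-and-truncate, as this hedged sketch shows; the (image_id, deviation) pairing is an assumption for illustration.

```python
def pick_for_synthesis(candidates, required):
    """Choose frames for synthesis in ascending order of composition deviation.

    candidates: list of (image_id, feature_point_change_px) pairs that
    already passed both selection criteria. If more than `required`
    remain, the images with the smallest deviation are used; if fewer,
    all of them are returned and the gain must compensate.
    """
    ranked = sorted(candidates, key=lambda c: c[1])
    return [img for img, _ in ranked[:required]]
```

With six candidates and four required, the two with the largest deviation are simply never used.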
[0276]
On the other hand, if only three images can be obtained, the image is synthesized using only those three images, and the gain is increased to optimize the exposure while tolerating some noise.
[0277]
In step S1013, the regions in which the images do not overlap because of composition deviation (the region 127 in FIG. 4) are cut from the image synthesized in step S1012, and the image is interpolated so as to have the size of the original frame.
[0278]
In step S1014, the image obtained in step S1012 is displayed on a liquid crystal display unit disposed on the back of the camera or the like.
[0279]
In step S1015, the image obtained in step S1012 is recorded on a recording medium.
[0280]
In step S1016, the process returns to the start.
[0281]
If the release button is still half-pressed and sw1 is on at step S1016, the flow proceeds to steps S1001, S1002, S1003, and S1004 again.
[0282]
If the release button is still fully pressed and sw2 remains on at step S1016, the process returns to the start and waits at step S1016.
[0283]
(Sixth embodiment)
The sixth embodiment of the present invention is a modification of the above-described fifth embodiment, and the configuration of the camera in this embodiment is the same as the configuration described in the fifth embodiment (FIG. 13).
[0284]
As can be seen from the flowchart of FIG. 14, when an image is discarded in step S2003 an additional shot is taken to replace it, so if the initial number of shots is set without a margin, unnecessary shooting can be avoided. In this case, however, there are problems in that the photographer does not know in advance how many pictures will be taken and cannot predict when shooting will end.
[0285]
Therefore, even with the above-described method, a guide number of shots is displayed to the photographer, and when the actual number of shots exceeds that number, shooting is forcibly terminated.
[0286]
FIG. 15 is a flowchart for explaining such a configuration. This flow starts when the image stabilizing switch is turned on.
[0287]
In step S1001, the photographer half-presses the release button and waits until sw1 is turned on. When sw1 is turned on, the process proceeds to step S1002.
[0288]
In step S1002, imaging is performed by the imaging unit 15. The imaging control unit 111 drives the AF drive motor 14 to move the imaging lens 11 in the optical axis direction while detecting the contrast of the image based on the output from the signal processing unit 112. Then, when the contrast becomes the highest, the movement of the taking lens 11 is stopped. Thereby, the photographing optical system is brought into a focused state.
[0289]
In addition, the imaging control unit 111 obtains the brightness of the subject based on the output of the imaging unit 15 at the same time.
[0290]
In step S1003, the number of images to be photographed is obtained from the brightness of the subject obtained in step S1002.
[0291]
For example, suppose that photometry of the subject shows that, for proper exposure, the aperture 13 must be fully opened (for example, f2.8) and the opening time of the shutter 12 (the exposure time) must be 1/8 second.
[0292]
Here, when the focal length of the photographing optical system is 30 mm in terms of 35 mm film, shooting with an exposure time of 1/8 second risks image blur due to camera shake, so the exposure time is set to 1/32 second and the shooting is set to be performed six times.
[0293]
In the first embodiment, the number of shots under the same conditions is four. In the third embodiment, since images with a large composition deviation are not used for composition, the number of shots is doubled in anticipation of images to be discarded.
[0294]
In the present embodiment, the criterion for discarding images due to composition deviation is loosened, and on the assumption that the composition deviation is corrected by coordinate transformation in the subsequent signal processing step, the number of shots is reduced compared with the third embodiment.
[0295]
On the other hand, when the focal length of the photographing optical system is 300 mm, the exposure time is set to 1/320 second and the photographing is performed 60 times so that there is no risk of image shake.
[0296]
This is also set to be larger than that of the first embodiment and smaller than that of the third embodiment.
[0297]
In step S2006, the estimated number of shots is displayed on the display unit provided in the viewfinder of the camera or the liquid crystal display unit provided on the exterior of the camera to notify the photographer.
[0298]
Here, the reason the number is only an estimate is that the number of images actually used for composition changes according to the composition shift (the amount of change of the feature points) in each captured image. For example, if the camera is held firmly and there is no composition deviation, the process ends with four shots; conversely, when the composition shift is large in each shot, the images necessary for composition may be insufficient even after eight shots.
[0299]
In step S1005, the flow waits until sw2 is turned on by fully pressing the release button.
[0300]
Note that if the half-press of the release button is released during this standby and sw1 turns off, the process returns to the start.
[0301]
In step S1006, shooting of the first image is started.
[0302]
In step S1007, the flow loops through steps S1006 and S1007 until capture of the first image is completed, and then proceeds to step S1008.
[0303]
In step S1008, the shift detection unit 113 extracts a characteristic image (feature point) from the region 125 (FIGS. 3 and 7) in the shooting screen, and obtains the coordinates of the image.
[0304]
Actually, a correlation operation is performed here to obtain the amount of change of a feature point; when the second image reaches this step, a correlation operation is performed with the first image to obtain the amount of change of the feature point.
[0305]
As described above, in this step the changes in the feature points in the two regions 125 and 126 are obtained; changes in a plurality of feature points are obtained in each of the region 125 (the periphery of the screen, etc.) and the region 126 (the main subject region, etc.). The average value or the minimum value described above is then obtained for each region, and that value is used as the change of the feature points in that region.
[0306]
For the feature points in the region 126, the difference between the movement amount of the region 125 and the actual movement amount of the image coordinates in the region 126 is calculated, so that only the pure subject shake is obtained.
[0307]
Similarly, in step S1008, the third and fourth images are correlated with the previously stored first image, and the amount of change in the feature point is obtained.
[0308]
In step S2001, the selection control unit 22 changes the first selection criterion in the first image selection unit 31 (the amount of change in the feature points that serves as the reference at the time of image selection) in accordance with the brightness obtained in step S1002.
[0309]
The change of the first selection criterion is as described above: for a subject brightness that requires a total exposure time of 1/8 second for proper exposure, the allowable amount of composition deviation (the value of the first selection criterion) is set to twice that for a subject brightness that requires a total exposure time of 1/15 second.
[0310]
For example, when the total exposure time is 1/15 second, the variation amount of the feature point is allowed up to 10 pixels (first selection criterion) of the imaging unit 15, and the total exposure time is 1/8 second. In this case, up to 20 pixels (first selection criterion) are allowed. Then, as will be described later, an image to be used for synthesis is selected by comparing the amount of change of the feature point with the first selection criterion.
[0311]
In step S3001, the composition deviation of the captured image is determined based on the change of the feature point in the region 125 obtained in step S1008 and the image selection criterion obtained in step S2001. If the coordinate change amount of the feature point exceeds the allowable amount (the value of the first selection criterion), the process proceeds to step S2003, discards this image, and returns to step S1006.
[0312]
If there is only one first image, the process proceeds to step S1010 without selecting an image.
[0313]
In step S3002, the composition shift of the captured image is determined based on the change in the feature point in the region 126 obtained in step S1008. If the coordinate change amount of the feature point exceeds a predetermined allowable amount (the value of the second selection criterion), the process proceeds to step S2003, discards this image, and returns to step S1006.
[0314]
Note that the determination criterion (second selection criterion) in this step may be changed according to the shooting conditions such as the exposure time, as in step S3001. Also in step S3002, if there is only the first one image, the process proceeds to step S1009 without selecting an image.
[0315]
In step S1009, coordinate transformation of the image is performed based on the change of the feature point in the area 125 obtained in step S1008, and processing is performed so that the second and subsequent images overlap the first image.
[0316]
In step S1012, images are combined.
[0317]
Here, the images are synthesized by averaging the signals at the corresponding coordinates of each image; the averaging reduces the random noise in the images. The gain of the noise-reduced image is then increased to optimize the exposure.
[0318]
In step S2004, it is determined whether or not the required number of images have been combined for exposure adjustment.
[0319]
For example, when a total exposure of 1/8 second is achieved by multiple exposures of 1/32 second, once four images (the required number) with small composition deviation, obtained through steps S3001 and S3002, have been combined, the process proceeds to step S1013. Otherwise, the process returns to step S1006 and repeats shooting, feature point extraction, image selection, and image synthesis (steps S1006 to S1012) until shooting of all images has been completed.
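The loop of steps S1006 to S2007 can be sketched as follows. All callbacks and names are illustrative assumptions: `shoot()` stands for one capture, `passes_selection(frame)` for the two criteria of steps S3001/S3002, and `max_shots` for the estimated number that triggers forced termination in step S2007.

```python
def capture_loop(shoot, passes_selection, required, max_shots):
    """Sketch of the shooting loop with forced termination.

    Shooting continues until `required` usable frames have been collected
    or `max_shots` frames have been taken, whichever comes first. If the
    loop ends short of `required`, fewer frames are returned and the
    synthesis gain must make up the exposure, as the text describes.
    """
    selected, shots = [], 0
    while len(selected) < required and shots < max_shots:
        frame = shoot()
        shots += 1
        if passes_selection(frame):
            selected.append(frame)
    return selected, shots
```

With a steady camera the loop ends after `required` shots; with large per-shot composition shifts it stops at `max_shots` regardless.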
[0320]
Step S2007 is provided between steps S2004 and S1006. If the current number of shots is less than the estimated number of shots (for example, eight), the process proceeds to step S1006; otherwise, it proceeds to step S2005.
[0321]
This is to prevent the number of shots from increasing without limit in pursuit of images with less composition deviation.
[0322]
When the number of images required for proper exposure cannot be obtained for composition, for example when only two images with small composition deviation (meeting the above selection criteria) are obtained although four are required, only the obtained images are combined, the signal gain is increased, and the exposure is corrected.
[0323]
In step S1013, the regions in which the images do not overlap because of composition deviation (the region 127 in FIG. 4) are cut from the image synthesized in step S1012, and the image is interpolated so as to have the size of the original frame.
[0324]
In step S2005, the number of shots is displayed on a liquid crystal display or the like of the camera.
[0325]
In step S1014, the image obtained in step S1012 is displayed on a liquid crystal display unit disposed on the back of the camera or the like.
[0326]
In step S1015, the image obtained in step S1012 is recorded on a recording medium.
[0327]
In step S1016, the process returns to the start.
[0328]
If the release button is still half-pressed and sw1 is on at step S1016, the flow proceeds to steps S1001, S1002, S1003, and S1004 again.
[0329]
If the release button is still fully pressed and sw2 remains on at step S1016, the process returns to the start and waits at step S1016.
[0330]
Each additional image that is synthesized improves the underexposure of the combined image. For this reason, the number of shots required for the exposure improvement can be known in advance by photometry before shooting.
[0331]
However, some of the captured images have a large composition deviation and are not suitable for composition.
[0332]
Therefore, in the above-described embodiments, more than the required number of images are shot in advance, images with relatively small composition deviation (satisfying the selection criterion) are selected from among them, and those images are synthesized, making it possible to obtain a composite image with little shake.
[0333]
Moreover, the slight composition deviation remaining in each selected image is corrected by coordinate transformation so that the compositions are aligned before synthesis, yielding an even better image.
[0334]
On the other hand, among the plurality of captured images there may be images in which the main subject (a person) has moved significantly. If such images are combined with only the per-image composition shift (camera shake during shooting of the plurality of images) corrected, the main subject will not be sharp.
[0335]
Therefore, the position of the main subject is specified by the focus areas (128a to 128e in FIG. 7), and images in which the motion in this area (the area 126 in FIG. 7) differs significantly from the other captured images are excluded from the combination. In this way, a composite image in which the main subject is sharp is obtained.
[0336]
Each embodiment described above is an example of implementing the following inventions, and the following inventions can also be implemented by adding various changes and improvements to each of the above embodiments.
[0337]
[Invention 1] An image capturing apparatus that obtains an exposure-corrected composite image by combining a plurality of images obtained by sequentially capturing images,
A detection unit that extracts a specific point in each image among the plurality of images, and detects a displacement amount of the specific point on another image with respect to the specific point on the reference image,
Coordinate conversion means for performing coordinate conversion of the other image based on a detection result of the detection means,
A synthesizing unit for synthesizing the reference image and another image subjected to coordinate conversion by the coordinate conversion unit,
A photographing apparatus, wherein the detection unit divides the reference image into at least two regions, and detects the displacement amount based on a specific point included in one of the regions.
[0338]
In the present invention, the detection unit divides the reference image into at least two regions, extracts a specific point included in one of the regions, and detects the displacement amount based on the extracted specific point.
[0339]
Here, if the area from which the detection means extracts the specific point is an area where a moving subject such as a person is located, the coordinate transformation of the image is performed according to the shake of the subject rather than the shake of the photographing device due to camera shake, and accurate shake correction cannot be performed.
[0340]
Therefore, if an area other than that where the moving subject is located, that is, an area where a non-moving subject such as the background is located, is defined as the specific point extraction area (the one area), an image in which the camera shake is accurately corrected can be obtained.
[0341]
[Invention 2] The photographing apparatus according to Invention 1, wherein the one area is an area located in the periphery of the reference image.
[0342]
[Invention 3] The imaging apparatus according to Invention 1, wherein the one area is an area other than the in-focus area in the reference image.
[0343]
[Invention 4] The imaging apparatus according to Invention 1, wherein the one area is an area other than an area corresponding to a focus detection area provided in an imaging screen in the reference image.
[0344]
[Invention 5] An image capturing apparatus that obtains an exposure-corrected composite image by combining a plurality of images obtained by sequentially capturing images,
A detection unit that extracts a specific point in each image among the plurality of images, and detects a displacement amount of the specific point on another image with respect to the specific point on the reference image,
An image selection unit that selects an image suitable for combination among the other images based on a detection result of the detection unit;
A photographing apparatus comprising: a synthesizing unit that synthesizes the reference image and the image selected by the image selecting unit.
[0345]
According to the present invention, images suitable for image synthesis, that is, images with a small displacement of the specific point, are selected by the image selection unit, so that a composite image with little image blur can be obtained even when a plurality of images are synthesized.
[0346]
[Invention 6] The photographing apparatus according to Invention 5, further comprising coordinate conversion means for performing coordinate conversion on the image selected by the image selection means based on the detection result of the detection means,
wherein the synthesizing means synthesizes the reference image and the image subjected to the coordinate conversion by the coordinate conversion means.
[0347]
When images suitable for synthesis are selected from a plurality of images by the image selection means as in Invention 5 above, if many images go unselected, images whose specific-point displacement is somewhat large may also have to be chosen. In such a case, by performing coordinate transformation of the image with the coordinate transformation means as in the present invention, a composite image with little image blur can be obtained.
[0348]
[Invention 7] The photographing apparatus according to Invention 5 or 6, wherein the image selection means determines whether the displacement amount of the specific point is equal to or less than a reference amount, and selects images for which it is equal to or less than the reference amount.
[0349]
[Invention 8] The imaging apparatus according to Invention 7, further comprising selection control means for controlling a selection operation of the image selection means and changing the reference amount.
[0350]
[Invention 9] The imaging apparatus according to Invention 8, wherein the selection control unit changes the reference amount according to photographing conditions.
[0351]
If the reference amount is fixed, then when the number of shots increases, as in long exposures, the deviation of the images becomes large, and a sufficient number of images satisfying the selection criterion cannot be obtained.
[0352]
Therefore, by changing the reference amount for selecting images according to shooting conditions (for example, long exposure) as in the present invention, the number of images required for image composition can still be selected even if the number of shots increases as described above.
[0353]
[Invention 10] The imaging apparatus according to Invention 5 or 6, wherein the detection unit divides the reference image into at least two regions and detects the displacement amount based on a specific point included in one of the regions.
[0354]
[Invention 11] The imaging apparatus according to Invention 10, wherein the detection unit detects the displacement amount based on a specific point included in the other of the at least two regions.
[0355]
Some of the captured images may contain a moving main subject, such as a person, that has moved significantly. If such images are combined, a clear composite image of the main subject cannot be obtained.
[0356]
Therefore, the displacement amount of the specific point is also detected in the area where the main subject is located (the other area), and by selecting images based on that detection result, images in which a main subject such as a person has moved greatly can be omitted, making it possible to obtain a clear composite image of the main subject.
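A minimal sketch of this idea (hypothetical names and tolerance, not from the patent): compare the displacement measured in the background region with that in the main-subject region, and omit frames where the two disagree by more than a tolerance.

```python
def subject_moved(bg_disp, subject_disp, tol_px=1.0):
    """Return True when the displacement measured in the main-subject
    region differs from the background-region displacement by more than
    tol_px, i.e. the subject itself moved and the frame should be omitted."""
    ddx = subject_disp[0] - bg_disp[0]
    ddy = subject_disp[1] - bg_disp[1]
    return (ddx * ddx + ddy * ddy) ** 0.5 > tol_px
```

Camera shake moves both regions by roughly the same amount, so only a disagreement between the two measurements indicates genuine subject motion.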
[0357]
[Invention 12] The imaging apparatus according to Invention 11, wherein the other area is an area corresponding to a focus detection area provided in an imaging screen in the reference image.
[0358]
[Invention 13] A program for obtaining an exposure-corrected composite image by combining a plurality of images obtained by sequentially photographing,
A detection step of extracting a specific point in each image from among the plurality of images, and detecting a displacement amount of a specific point on another image with respect to the specific point on the reference image,
A coordinate conversion step of performing a coordinate conversion of the other image based on a detection result of the detection step;
A synthesizing step of synthesizing the reference image and another image subjected to the coordinate conversion in the coordinate conversion step,
A program wherein, in the detection step, the reference image is divided into at least two regions, and the displacement amount is detected based on a specific point included in one of the regions.
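As an illustrative sketch of the detection step (the function name and the exhaustive-search strategy are assumptions, not the patent's actual method), the displacement of a specific point can be estimated by block matching a patch of the reference image within another image:

```python
import numpy as np

def detect_displacement(ref, other, y0, x0, h, w, search=4):
    """Estimate the (dy, dx) shift of an h-by-w patch of the reference
    image, anchored at (y0, x0), within another image by exhaustive
    block matching using the sum of absolute differences (SAD)."""
    patch = ref[y0:y0 + h, x0:x0 + w].astype(np.int64)
    best_sad, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = other[y0 + dy:y0 + dy + h,
                         x0 + dx:x0 + dx + w].astype(np.int64)
            sad = np.abs(cand - patch).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_shift = sad, (dy, dx)
    return best_shift
```

Restricting the patch to one region of the reference image, as the claim describes, simply means choosing (y0, x0, h, w) inside that region.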
[0359]
[Invention 14] A program for obtaining an exposure-corrected synthesized image by synthesizing a plurality of images obtained by sequentially photographing,
A detection step of extracting a specific point in each image from among the plurality of images, and detecting a displacement amount of a specific point on another image with respect to the specific point on the reference image,
An image selection step of selecting an image suitable for synthesis among the other images based on a detection result of the detection step;
A combining step of combining the reference image and the image selected in the image selecting step.
[0360]
[Invention 15] The program according to Invention 14, further comprising a coordinate conversion step of performing coordinate conversion, based on the detection result of the detection step, on the image selected in the image selection step,
wherein, in the combining step, the reference image and the image subjected to the coordinate conversion in the coordinate conversion step are combined.
[0361]
[Invention 16] The program according to Invention 14 or 15, wherein, in the image selection step, it is determined whether or not the displacement amount of the specific point is equal to or less than a reference amount, and images whose displacement amount is equal to or less than the reference amount are selected.
[0362]
[Invention 17] The program according to Invention 16, further comprising a selection control step of controlling the selection operation in the image selection step and changing the reference amount.
[0363]
[Invention 18] The program according to Invention 17, wherein, in the selection control step, the reference amount is changed in accordance with shooting conditions.
[0364]
[Invention 19] The program according to Invention 14 or 15, wherein, in the detection step, the reference image is divided into at least two regions, and the displacement amount is detected based on a specific point included in one of the regions.
[0365]
[Invention 20] The program according to Invention 19, wherein, in the detection step, the displacement amount is also detected based on a specific point included in the other of the at least two regions.
[0366]
[Invention 21] A photographing apparatus that obtains an exposure-corrected composite image by combining a plurality of images obtained by sequentially photographing,
A detection unit that extracts a specific point in each image among the plurality of images, and detects a displacement amount of the specific point on another image with respect to the specific point on the reference image,
Coordinate conversion means for performing coordinate conversion of the other image after all of the plurality of images have been shot, based on a detection result of the detection means,
A photographing apparatus comprising: a synthesizing unit that synthesizes the reference image and another image whose coordinates have been converted by the coordinate converting unit.
[0367]
[Invention 22] An imaging device for obtaining an exposure-corrected composite image by combining a plurality of images obtained by sequentially capturing images,
Detection means for extracting a specific point in each image from among the plurality of images, and detecting, after all of the plurality of images have been shot, a displacement amount of the specific point on another image with respect to the specific point on the reference image;
Coordinate conversion means for performing coordinate conversion of the other image based on a detection result of the detection means,
A photographing apparatus comprising: a synthesizing unit that synthesizes the reference image and another image whose coordinates have been converted by the coordinate converting unit.
[0368]
[Invention 23] An image capturing apparatus that obtains an exposure-corrected composite image by combining a plurality of images obtained by sequentially capturing,
A detection unit that extracts a specific point in each image of the plurality of images and detects a displacement amount of a specific point on another image with respect to the specific point on the reference image,
Coordinate conversion means for performing coordinate conversion of the other image based on a detection result of the detection means,
A photographing apparatus comprising: a synthesizing unit that synthesizes the reference image and another image whose coordinates have been transformed by the coordinate transforming unit after all of the plurality of images have been photographed.
[0369]
[Invention 24] A program for obtaining an exposure-corrected synthesized image by synthesizing a plurality of images obtained by sequentially photographing,
A detection step of extracting a specific point in each image from among the plurality of images, and detecting a displacement amount of a specific point on another image with respect to the specific point on the reference image,
Based on the detection result of the detection step, a coordinate conversion step of performing the coordinate conversion of the other image after all of the plurality of images have been captured,
A synthesizing step of synthesizing the reference image and another image subjected to the coordinate transformation in the coordinate transformation step.
[0370]
[Invention 25] A program for obtaining an exposure-corrected synthesized image by synthesizing a plurality of images obtained by sequentially photographing,
A detection step of extracting a specific point in each image from among the plurality of images, and detecting, after all of the plurality of images have been captured, a displacement amount of the specific point on another image with respect to the specific point on the reference image;
A coordinate conversion step of performing coordinate conversion of the other image based on a detection result of the detection step;
A synthesizing step of synthesizing the reference image and another image subjected to the coordinate transformation in the coordinate transformation step.
[0371]
[Invention 26] A program for obtaining an exposure-corrected synthesized image by synthesizing a plurality of images obtained by sequentially photographing,
A detection step of extracting a specific point in each image from among the plurality of images, and detecting a displacement amount of a specific point on another image with respect to the specific point on the reference image,
A coordinate conversion step of performing coordinate conversion of the other image based on a detection result of the detection step;
A combining step of combining the reference image and another image subjected to the coordinate conversion in the coordinate conversion step after all of the plurality of images have been captured.
[0372]
[Effect of the Invention]
According to the photographing apparatus or the program of the present invention, it is possible to obtain a high-accuracy composite image (one photographed image) without image blur.
[Brief description of the drawings]
FIG. 1 is a block diagram of a camera according to a first embodiment of the present invention.
FIG. 2 is an explanatory diagram of coordinate conversion according to the first embodiment of the present invention.
FIG. 3 is an explanatory diagram of a feature point extraction area according to the first embodiment of the present invention.
FIG. 4 is an explanatory diagram of image synthesis according to the first embodiment of the present invention.
FIG. 5 is a flowchart illustrating a shooting operation according to the first embodiment of the present invention.
FIG. 6 is a flowchart (modification) illustrating a shooting operation according to the first embodiment of the present invention.
FIG. 7 is an explanatory diagram of a feature point extraction region according to a second embodiment of the present invention.
FIG. 8 is a timing chart illustrating a shooting processing operation according to the second embodiment of the present invention.
FIG. 9 is a flowchart illustrating a shooting operation according to the second embodiment of the present invention.
FIG. 10 is a block diagram of a camera according to a third embodiment of the present invention.
FIG. 11 is a flowchart illustrating a shooting operation according to a third embodiment of the present invention.
FIG. 12 is a flowchart illustrating a photographing operation according to a fourth embodiment of the present invention.
FIG. 13 is a block diagram of a camera according to a fifth embodiment of the present invention.
FIG. 14 is a flowchart illustrating a photographing operation according to a fifth embodiment of the present invention.
FIG. 15 is a flowchart illustrating a photographing operation according to a sixth embodiment of the present invention.
[Explanation of symbols]
10: Optical axis
11: Shooting lens
12: Shutter
13: Aperture
14: AF drive motor
15: Imaging unit
16: drive unit
17: Aperture drive unit
18: Shutter drive unit
19: Focus drive unit
110: A / D converter
111: shooting control unit
112: signal processing unit
113: shift detection unit
114: coordinate conversion unit
115: Image recording unit
116: Display section
117: Recording unit
118: Image synthesis unit
119: Frame
120: Main subject
123: Feature point
124: Change of feature point
125: feature point extraction area
126: main subject area
127: Screen missing area
128: Focus area
21: Image selection section
22: Selection control unit
31: first image selection unit
32: second image selection unit

Claims (2)

  1. An image capturing apparatus that obtains an exposure-corrected composite image by combining a plurality of images obtained by sequentially capturing,
    A detection unit that extracts a specific point in each image among the plurality of images, and detects a displacement amount of the specific point on another image with respect to the specific point on the reference image,
    Coordinate conversion means for performing coordinate conversion of the other image based on a detection result of the detection means,
    A synthesizing unit for synthesizing the reference image and another image subjected to coordinate conversion by the coordinate conversion unit,
    An imaging apparatus wherein the detection unit divides the reference image into at least two regions, and detects the displacement amount based on a specific point included in one of the regions.
  2. A program for obtaining an exposure-corrected synthesized image by synthesizing a plurality of images obtained by sequentially shooting,
    A detection step of extracting a specific point in each image from among the plurality of images, and detecting a displacement amount of a specific point on another image with respect to the specific point on the reference image,
    A coordinate conversion step of performing a coordinate conversion of the other image based on a detection result of the detection step;
    A synthesizing step of synthesizing the reference image and another image subjected to the coordinate conversion in the coordinate conversion step,
    A program wherein, in the detection step, the reference image is divided into at least two regions, and the displacement amount is detected based on a specific point included in one of the regions.
JP2003007609A 2003-01-15 2003-01-15 Imaging apparatus, composite image generation method, and program Active JP4418632B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003007609A JP4418632B2 (en) 2003-01-15 2003-01-15 Imaging apparatus, composite image generation method, and program

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2003007609A JP4418632B2 (en) 2003-01-15 2003-01-15 Imaging apparatus, composite image generation method, and program
US10/756,034 US7295232B2 (en) 2003-01-15 2004-01-13 Camera and program
US11/876,408 US8558897B2 (en) 2003-01-15 2007-10-22 Image-pickup apparatus and method for obtaining a synthesized image

Publications (3)

Publication Number Publication Date
JP2004219765A true JP2004219765A (en) 2004-08-05
JP2004219765A5 JP2004219765A5 (en) 2006-03-02
JP4418632B2 JP4418632B2 (en) 2010-02-17

Family

ID=32897658

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003007609A Active JP4418632B2 (en) 2003-01-15 2003-01-15 Imaging apparatus, composite image generation method, and program

Country Status (1)

Country Link
JP (1) JP4418632B2 (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007129476A (en) * 2005-11-02 2007-05-24 Nec Electronics Corp Device, method, and program for correcting image blur
US7773825B2 (en) 2005-11-02 2010-08-10 Nec Electronics Corporation Image stabilization apparatus, method thereof, and program product thereof
JP4536641B2 (en) * 2005-11-02 2010-09-01 ルネサスエレクトロニクス株式会社 Image blur correction apparatus, image blur correction method, and image blur correction program
JP2008022319A (en) * 2006-07-13 2008-01-31 Fujifilm Corp Image shake correcting device and correcting method thereof
JP4621991B2 (en) * 2006-07-13 2011-02-02 富士フイルム株式会社 Image blur correction apparatus and correction method thereof
JP2008060892A (en) * 2006-08-31 2008-03-13 Sanyo Electric Co Ltd Motion detection apparatus and method and imaging apparatus
JP2008060927A (en) * 2006-08-31 2008-03-13 Sanyo Electric Co Ltd Image composition apparatus and method and imaging apparatus
US7956897B2 (en) 2006-08-31 2011-06-07 Sanyo Electric Co., Ltd. Image combining device and imaging apparatus
US8195000B2 (en) 2006-09-06 2012-06-05 Casio Computer Co., Ltd. Image pickup apparatus
US8036486B2 (en) 2006-09-06 2011-10-11 Casio Computer Co., Ltd. Image pickup apparatus
US8026962B2 (en) 2008-03-05 2011-09-27 Casio Computer Co., Ltd. Image synthesizing apparatus and image pickup apparatus with a brightness adjusting processing
EP2106130A2 (en) 2008-03-25 2009-09-30 Sony Corporation Image processing apparatus, image processing method, and program
US8275212B2 (en) 2008-03-25 2012-09-25 Sony Corporation Image processing apparatus, image processing method, and program
US8285075B2 (en) 2008-03-25 2012-10-09 Sony Corporation Image processing apparatus, image processing method, and program
US8553138B2 (en) 2008-03-25 2013-10-08 Sony Corporation Image capture apparatus and method for generating combined-image data
US8558944B2 (en) 2008-03-25 2013-10-15 Sony Corporation Image capture apparatus and method for generating combined-image data
US8848097B2 (en) 2008-04-07 2014-09-30 Sony Corporation Image processing apparatus, and method, for providing special effect
US8605159B2 (en) 2008-08-21 2013-12-10 Canon Kabushiki Kaisha Image processing apparatus and image processing method with blur correction based on exposure information and angle of rotation
JP2015188234A (en) * 2010-11-23 2015-10-29 クゥアルコム・インコーポレイテッドQualcomm Incorporated Depth estimation based on global motion
JP2012014196A (en) * 2011-10-04 2012-01-19 Casio Comput Co Ltd Camera and camera control program

Also Published As

Publication number Publication date
JP4418632B2 (en) 2010-02-17

Similar Documents

Publication Publication Date Title
JP5919543B2 (en) Digital camera
USRE46239E1 (en) Method and system for image construction using multiple exposures
JP3541820B2 (en) Imaging device and imaging method
US7756411B2 (en) Photographing apparatus and method
US7791668B2 (en) Digital camera
US7986853B2 (en) Image processing apparatus, image taking apparatus and program
JP3697129B2 (en) Imaging device
JP4761146B2 (en) Imaging apparatus and program thereof
US7180043B2 (en) Image taking apparatus
JP4900401B2 (en) Imaging apparatus and program
TWI337286B (en) Camera apparatus and program therefor
US8797423B2 (en) System for and method of controlling a parameter used for detecting an objective body in an image and computer program
JP4708479B2 (en) System and method for realizing motion-driven multi-shot image stabilization
TWI422219B (en) Imaging device, imaging method and computer-readable recording medium
TWI360348B (en) Imaging device and image blurring reduction method
US7688353B2 (en) Image-taking apparatus and image-taking method
US7379094B2 (en) Electronic still imaging apparatus and method having function for acquiring synthesis image having wide-dynamic range
JP5395678B2 (en) Distance map generation type multi-lens camera
US7856176B2 (en) Method and apparatus for compensating hand-trembling of camera
JP4214926B2 (en) Electronic still camera
JP4528235B2 (en) Digital camera
US7463284B2 (en) Image processing apparatus and image processing method for producing an image from a plurality of images
JP4794963B2 (en) Imaging apparatus and imaging program
KR101616422B1 (en) Method and apparatus for processing the digital image by using fast AF
US8279291B2 (en) Image transforming apparatus using plural feature points and method of controlling operation of same

Legal Events

Date Code Title Description
A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20060106

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20060106

RD03 Notification of appointment of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7423

Effective date: 20081023

RD05 Notification of revocation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7425

Effective date: 20081201

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20090317

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20090514

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20090630

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20090827

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20090929

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20091022

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20091124

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20091130

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20121204

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20131204

Year of fee payment: 4