WO2006064751A1 - Multi-eye imaging apparatus - Google Patents

Info

Publication number
WO2006064751A1
WO2006064751A1 (PCT/JP2005/022751)
Authority
WO
WIPO (PCT)
Prior art keywords
image
imaging
amount
blur
pixel
Application number
PCT/JP2005/022751
Other languages
French (fr)
Japanese (ja)
Inventor
Hironori Kumagai
Taku Hirasawa
Original Assignee
Matsushita Electric Industrial Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Matsushita Electric Industrial Co., Ltd. filed Critical Matsushita Electric Industrial Co., Ltd.
Priority to US10/597,794 (US7986343B2)
Priority to JP2006519018A (JP4699995B2)
Publication of WO2006064751A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/58 - Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/13 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
    • H04N23/16 - Optical arrangements associated therewith, e.g. for beam-splitting or for colour correction
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H04N23/84 - Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843 - Demosaicing, e.g. interpolating colour pixel values

Definitions

  • the present invention relates to a compound eye imaging apparatus having a pixel shifting function.
  • An imaging device used for a portable device is required to achieve both high resolution and small size.
  • the downsizing of the imaging device is limited by the size and focal length of the imaging optical lens and the size of the imaging device.
  • the optical system of a normal imaging device has a configuration in which a plurality of lenses are stacked in order to form red, green, and blue wavelengths of light on the same imaging surface.
  • the optical system of the imaging device is inevitably long and the imaging device is thick. Therefore, a compound eye type imaging apparatus using a single lens with a short focal length has been proposed as an effective technique for downsizing, in particular, thinning of the imaging apparatus (for example, Patent Document 1).
  • an imaging optical system is arranged in a plane with a lens that handles light of a blue wavelength, a lens that handles light of a green wavelength, and a lens that handles light of a red wavelength.
  • the imaging region is provided for each lens.
  • the imaging regions may be formed either by dividing a single imaging element into a plurality of areas or simply by arranging a plurality of imaging elements side by side.
  • a subject image can be formed on the imaging surface by a single lens, and the thickness of the imaging device can be significantly reduced.
  • FIG. 19 is a schematic perspective view of the main part of an example of a conventional compound eye type imaging apparatus.
  • Reference numeral 1900 denotes a lens array, which is formed by three lenses 1901a, 1901b, and 1901c.
  • Reference numeral 1901a denotes a lens that handles light of a red wavelength; the subject image it forms is converted into image information in an imaging region 1902a whose light receiving unit carries a red wavelength separation filter (color filter).
  • 1901b is a lens that handles light of a green wavelength; its image is converted into green image information in the imaging region 1902b.
  • 1901c is a lens that handles light of a blue wavelength; its image is converted into blue image information in the imaging region 1902c.
  • the compound-eye configuration can reduce the thickness of the imaging device, but when the images of each color are simply superimposed and synthesized, the resolution of the synthesized image is determined by the number of pixels of each single-color image. For this reason, there is a problem that the resolution is inferior to that of an ordinary Bayer-type image pickup device in which green, red, and blue filters are arranged in a staggered manner.
  • FIG. 20 is a conceptual explanatory diagram of high resolution using pixel shifting. This figure shows a part of the enlarged portion of the image sensor.
  • the image sensor has photoelectric conversion units 2101 (hereinafter referred to as "photoelectric conversion unit") that convert received light into an electric signal, and invalid portions 2102 (hereinafter referred to as "invalid portion"), such as transfer electrodes, where received light cannot be converted into an electric signal.
  • the photoelectric conversion unit 2101 and the invalid portion 2102 are combined to form one pixel. These pixels are usually formed regularly at a certain interval (pitch).
  • the part surrounded by the thick line in Fig. 20A is one pixel, and P indicates one pitch.
  • An outline of pixel shifting performed using such an image sensor is as follows.
  • photographing is performed at the position of the image sensor shown in FIG. 20A.
  • Next, as shown in FIG. 20B, the photoelectric conversion unit 2101 of each pixel is moved diagonally (by 1/2 of the pixel pitch in both the horizontal and vertical directions) onto the invalid portion 2102, and a second image is captured.
  • these two captured images are combined as shown in FIG. 20C.
  • the imaging state in FIG. 20C has the same resolution as an image captured by an image sensor with twice as many photoelectric conversion units, compared to the single exposure captured in FIG. 20A. Therefore, pixel shifting as described above yields an image equivalent to one shot with an image sensor having twice the number of pixels, without actually increasing the pixel count of the image sensor.
  • the shift is not limited to the oblique direction illustrated; shifting in the horizontal or vertical direction improves resolution in the shifted direction. For example, when vertical and horizontal shifts are combined, four times the resolution can be obtained. The pixel shift amount also need not be limited to 0.5 pixels; the resolution can be improved further by finely shifting pixels so as to interpolate the invalid portions.
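As an illustration of how two half-pixel-shifted exposures double the sampling density, the diagonal case of FIG. 20 can be sketched as follows (a sketch only; the interpolation rule used for the cells that neither exposure samples is our own simple choice, not specified by the patent):

```python
import numpy as np

def combine_diagonal_shift(img_a, img_b):
    """Interleave two exposures taken with a half-pixel diagonal shift.

    img_a: exposure at the original sensor position (H x W)
    img_b: exposure shifted diagonally by 1/2 pixel pitch
    Returns a (2H x 2W) grid where each sample sits at its sub-pixel
    position; remaining cells are filled by averaging known 4-neighbours.
    """
    h, w = img_a.shape
    out = np.zeros((2 * h, 2 * w), dtype=float)
    known = np.zeros((2 * h, 2 * w), dtype=bool)
    out[0::2, 0::2] = img_a            # original grid samples
    out[1::2, 1::2] = img_b            # diagonally shifted samples
    known[0::2, 0::2] = True
    known[1::2, 1::2] = True
    vals = np.pad(out * known, 1)      # unknown cells contribute zero
    mask = np.pad(known, 1)
    # sum of known 4-neighbours and how many of them are known
    nsum = vals[:-2, 1:-1] + vals[2:, 1:-1] + vals[1:-1, :-2] + vals[1:-1, 2:]
    ncnt = (mask[:-2, 1:-1].astype(int) + mask[2:, 1:-1]
            + mask[1:-1, :-2] + mask[1:-1, 2:])
    out[~known] = (nsum / np.maximum(ncnt, 1))[~known]
    return out
```

The two captures contribute interleaved sample sites, which is exactly the doubling of effective pixel count described above.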
  • pixels can be shifted by moving the image sensor to change the relative positional relationship between the image sensor and the incident light beam, but the technique is not limited to this method.
  • the optical lens may be moved instead of the image sensor.
  • a method using a parallel plate has been proposed (for example, Patent Document 1).
  • the image formed on the image sensor is shifted by inclining parallel plates.
  • Another approach is to detect and correct camera shake using shake detection means such as an angular velocity sensor.
  • a method of correcting by using both a camera shake correction mechanism and a pixel shifting mechanism has been proposed (for example, Patent Document 2 and Patent Document 3).
  • the amount of shake is detected using the shake detection means, the pixel shift direction and pixel shift amount are corrected based on that amount, and the image sensor is then moved to shift the pixels. By doing so, the influence of camera shake can be reduced.
  • the method need not be limited to moving the image sensor; the same effect is obtained by moving a part of the optical lens in accordance with the detected amount of shake to perform camera shake correction and pixel shifting.
  • Various methods have been proposed for blur detection, such as a method using an angular velocity sensor such as a vibration gyro, and a method of comparing images taken in time series to obtain a motion vector.
  • Patent Document 3 proposes a method that compares a plurality of images taken in time series and selects and combines only those images whose positional relationship has been shifted by camera shake or the like in a way that can be expected to improve resolution. This method is performed entirely electronically; since no mechanical camera shake correction mechanism is required, the imaging apparatus can be reduced in size.
  • However, the method of Patent Documents 2 and 3, which detects camera shake and performs camera shake correction and pixel shifting, requires a new sensor and a complicated optical system, and is therefore disadvantageous for miniaturization and thinning.
  • Patent Document 1 JP-A-6-261236
  • Patent Document 2 Japanese Patent Laid-Open No. 11-225284
  • Patent Document 3 Japanese Patent Laid-Open No. 10-191135
  • the present invention solves the above-described conventional problems, and an object thereof is to provide a compound eye imaging device that performs pixel shifting and can prevent a decrease in the effect of pixel shifting even when there is camera shake or subject blurring.
  • the compound eye imaging apparatus of the present invention is a compound eye imaging apparatus including a plurality of imaging systems, each including an optical system and an imaging element and having different optical axes. The plurality of imaging systems include a first imaging system having a pixel shifting unit that changes the relative positional relationship between the image formed on the image sensor and the image sensor, and a second imaging system in which the relative positional relationship between the image formed on the image sensor and the image sensor is fixed in time-series imaging.
  • FIG. 1 is a block diagram showing a configuration of an imaging apparatus according to Embodiment 1 of the present invention.
  • FIG. 2 is a flowchart showing the overall operation of the imaging apparatus according to Embodiment 1 of the present invention.
  • FIG. 3 is a diagram showing a positional relationship between a comparison source region and an evaluation region according to an embodiment of the present invention.
  • FIG. 4 is a diagram showing image movement due to camera shake in the embodiment of the present invention.
  • FIG. 5 is a diagram for explaining adjustment of a pixel shift amount in one embodiment of the present invention.
  • FIG. 6 is a configuration diagram of an image pickup optical system, a pixel shifting unit, and an image pickup element according to Embodiment 1 of the present invention.
  • FIG. 7 is a block diagram showing a configuration of an imaging apparatus according to Embodiment 2 of the present invention.
  • FIG. 8 is a flowchart of the overall operation of the imaging apparatus according to Embodiment 2 of the present invention.
  • FIG. 9 is a diagram for explaining parallax in an embodiment of the present invention.
  • FIG. 10 is a diagram for explaining a method for selecting an optimum image in one embodiment of the present invention.
  • FIG. 11 is another diagram for explaining a method for selecting an optimum image in the embodiment of the present invention.
  • FIG. 12 is a diagram showing an image stored in the image memory after pixel shifting once in Embodiment 2 of the present invention.
  • FIG. 13 is a diagram showing images taken in time series with the second imaging system without pixel shift stored in the image memory in Example 3 of the present invention.
  • FIG. 14 is a diagram showing an image photographed in Example 3 of the present invention and a subject group determined by subject determination means.
  • FIG. 15 is a configuration diagram of an image pickup optical system, a pixel shifting unit, and an image pickup device according to Example 5 of the present invention.
  • FIG. 16 is a plan view of a piezoelectric fine movement mechanism according to an embodiment of the present invention.
  • FIG. 17 is a diagram showing an example of an arrangement of an optical system according to an embodiment of the present invention.
  • FIG. 18 is a flowchart of the entire operation in the imaging apparatus according to Embodiment 3 of the present invention.
  • FIG. 19 is a schematic perspective view of a main part of an example of a conventional compound-eye imaging device.
  • FIG. 20 is a conceptual explanatory diagram of high resolution using conventional pixel shifting.
  • the image pickup apparatus using the compound eye type optical system is reduced in size and thickness, and by comparing images taken in time series by the second image pickup system that does not shift pixels, the amount of camera shake can be detected. Using this amount, camera shake can be corrected in images shot with the first imaging system that shifts pixels. That is, it is possible to achieve both downsizing and thinning of the imaging device and high resolution.
  • it is preferable that the apparatus further includes an image memory that stores image information of a plurality of frames taken in time series, a blur amount deriving unit that derives the blur amount by comparing the image information of the plurality of frames stored in the image memory, and an image synthesizing unit that synthesizes the images of the plurality of frames stored in the image memory.
  • the amount of change in the positional relationship by the pixel shifting unit is determined based on the amount of blur obtained by the blur amount deriving unit. According to this configuration, the pixel shift amount can be adjusted according to the amount of camera shake, which is advantageous for improving the resolution.
  • the amount of change in the positional relationship by the pixel shifting unit may be fixed. According to this configuration, it is not necessary to derive the blur amount during shooting and adjust the pixel shift amount, and the time-series shooting time interval can be shortened. This reduces camera shake and enables shooting even when the subject moves quickly.
  • it is preferable that the present invention further includes a parallax amount deriving unit that obtains the magnitude of parallax for the images captured by the plurality of imaging systems having different optical axes, and that the image synthesizing unit corrects and synthesizes the images based on the parallax amount obtained by the parallax amount deriving unit and the blur amount obtained by the blur amount deriving unit. According to this configuration, when correcting an image, not only the blur but also the parallax, which depends on the distance of the subject, is corrected, so the resolution of the synthesized image can be further increased. That is, a decrease in resolution depending on the subject distance can be prevented.
  • it is preferable that the apparatus further includes an optimum image selection unit that selects the image information used for composition by the image composition unit from the image information captured by the first imaging system and the image information captured by the second imaging system.
  • the first and second imaging systems can obtain images before and after blurring, images with parallax, and images with pixel shifts, so images suitable for improving resolution can be chosen without depending on chance.
  • it is preferable that the blur amount deriving unit derives a blur amount for each of the different subjects and that the image composition unit composes an image for each of the different subjects. According to this configuration, by deriving the blur amount for each subject, the resolution can be improved even when the entire image does not move uniformly because a subject moves.
  • it is preferable that the apparatus further includes means for dividing the image information into a plurality of blocks, that the blur amount deriving unit derives a blur amount for each of the plurality of blocks, and that the image synthesizing unit synthesizes an image for each of the plurality of blocks. This configuration also improves the resolution when the subject moves. Furthermore, it is not necessary to detect the subject, so the processing time can be shortened.
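The block-wise variant can be sketched as follows: the frame is tiled, and a blur vector is derived for each tile by comparing it against the later frame (block size, search range, and all names here are illustrative assumptions, not values from the patent):

```python
import numpy as np

def per_block_blur(src, dst, block=8, search=2):
    """Divide `src` into block x block tiles and find, for each tile, the
    integer-pixel offset (m, n) into `dst` that minimizes the sum of
    absolute differences, i.e. a per-block blur vector."""
    h, w = src.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = src[by:by + block, bx:bx + block].astype(float)
            best, best_mn = None, (0, 0)
            for n in range(-search, search + 1):
                for m in range(-search, search + 1):
                    y, x = by + n, bx + m
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate tile leaves the image
                    r = float(np.abs(dst[y:y + block, x:x + block] - ref).sum())
                    if best is None or r < best:
                        best, best_mn = r, (m, n)
            vectors[(bx, by)] = best_mn
    return vectors
```

Each tile then gets its own correction, which is what allows resolution improvement even when different parts of the scene move differently.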
  • it is preferable that the plurality of imaging systems having different optical axes include an imaging system that handles red, an imaging system that handles green, and an imaging system that handles blue; that the number of imaging systems corresponding to at least one color is two or more; and that the two or more imaging systems handling the same color include the first imaging system and the second imaging system. According to this configuration, a color image with improved resolution can be obtained.
  • FIG. 1 is a block diagram illustrating a configuration of the imaging apparatus according to the first embodiment.
  • The system control unit 100 is a central processing unit (CPU) that controls the entire imaging apparatus.
  • the system control means 100 controls the pixel shifting means 101, the transfer means 102, the image memory 103, the blur amount deriving means 104, and the image composition means 105.
  • a subject to be photographed (not shown) is photographed by the first imaging system 106b having the pixel shifting means 101 and the second imaging system 106a having no pixel shifting function.
  • Through the imaging optical system 107a and the imaging optical system 107b, the subject forms an image on the imaging elements 108a and 108b and is converted into image information as a light intensity distribution.
  • the pixel shifting means 101 shifts the relative positional relationship between the subject image formed on the image sensor 108b by the imaging optical system 107b and the image sensor 108b in the in-plane direction of the image sensor 108b. That is, the pixel shifting means 101 can change the relative positional relationship between the image sensor 108b and the incident light beam in time-series imaging.
  • the positional relationship between the imaging optical system 107a and the imaging element 108a is set so as not to deviate in the in-plane direction of the imaging element 108a. Therefore, the relative positional relationship between the subject image formed on the image sensor 108a by the imaging optical system 107a and the image sensor 108a is fixed in time-series imaging. That is, in the second imaging system 106a, the relative positional relationship between the imaging device 108a and the incident light incident on the imaging device 108a is fixed in time-series imaging.
  • the transfer means 102 transmits the image information photoelectrically converted by the image sensors 108a and 108b to an image memory 103 that stores an image.
  • the first imaging system 106b and the second imaging system 106a are individually driven, and each image is sequentially transferred to the image memory 103 and stored. As will be described later, the pixel shift amount is adjusted while detecting the blur amount using the image captured by the second imaging system 106a. For this reason, the second imaging system 106a can be driven at high speed. That is, the second imaging system 106a can increase the number of times of capturing images per unit time.
  • the blur amount deriving unit 104 derives a blur amount by comparing image information captured at different times (in time series) by the second imaging system 106a, whose optical system does not shift pixels. The details will be described later.
  • the pixel shift amount of the first imaging system 106b is set so as to correct this blur amount, and the pixel-shifted image is stored in the image memory 103.
  • the image synthesizing unit 105 synthesizes images captured by the first imaging system 106b and the second imaging system 106a and stored in the image memory 103, and generates a high-resolution image.
  • FIG. 2 is a flowchart showing the overall operation of the imaging apparatus according to the present embodiment.
  • Shooting starts in response to the shooting start command in step 200.
  • shooting pre-processing in step 201 is performed. This calculates the optimal exposure time and performs the focusing process.
  • focusing may be performed by measuring the distance of the subject using a laser or radio wave.
  • There are several methods for setting the optimal exposure time in consideration of ambient light and the like.
  • These include a method of detecting the brightness with an illuminance sensor and setting the exposure time, and a method of providing a preview function for capturing an image before starting shooting.
  • In the preview method, the image captured before the start of shooting is converted to gray-scale brightness information and its histogram is examined. If the histogram is biased toward white (bright), the image is judged overexposed (exposure time too long); if it is biased toward black (dark), the image is judged underexposed (exposure time too short), and the exposure time is adjusted accordingly.
  • the time from the shooting start command to the start of exposure can be shortened.
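The preview-based exposure adjustment can be sketched as follows (the brightness thresholds and step factor are illustrative assumptions, not values from the patent):

```python
import numpy as np

def adjust_exposure(preview_gray, exposure_ms, lo=0.25, hi=0.75, step=1.5):
    """Histogram-based exposure adjustment sketch.

    preview_gray: grayscale preview frame with values in [0, 1]
    exposure_ms:  current exposure time
    If the brightness distribution is biased toward white, the image is
    treated as overexposed and the exposure time is shortened; if biased
    toward black, it is lengthened.
    """
    mean = float(np.mean(preview_gray))
    if mean > hi:           # biased toward white -> overexposed
        return exposure_ms / step
    if mean < lo:           # biased toward black -> underexposed
        return exposure_ms * step
    return exposure_ms
```

A real implementation would iterate this on successive preview frames until the histogram settles in the middle of the range.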
  • In step 202, imaging with pixel shifting is performed. This photographing is performed by repeating the processes from step 203 to step 208.
  • Step 203 is an exposure process of the second imaging system 106a
  • step 204 is a process of transferring an image captured by the second imaging system 106a to the image memory 103. Images taken at different times by the second imaging system 106a are transferred to the image memory 103.
  • In step 205, the images stored in the image memory 103 are compared to determine the blur amount (the camera shake amount of the imaging apparatus).
  • In step 206, based on the pixel shift amount adjusted to reflect the blur amount obtained in step 205, the first imaging system 106b shifts the pixels and takes an image.
  • Step 207 is an exposure process of the first imaging system 106b
  • step 208 is a process of transferring an image photographed by the first imaging system 106b to the image memory 103.
  • the blur amount derivation will be specifically described first. As described above, if images are taken at different times, the image may be blurred due to camera shake or subject shake during that time. In order to utilize the invalid portion of the pixel by pixel shifting, it is necessary to determine the amount of pixel shifting considering this blur.
  • In step 202, immediately before pixel shifting, images taken at different times by the second imaging system 106a (which does not perform pixel shifting) are compared, the blur amount is calculated, and it is reflected in the pixel shift amount.
  • the camera shake amount of the imaging apparatus is obtained as described above.
  • the specific method will be described.
  • the subject appears and moves in the time-series images.
  • If the time interval is short, the shape of the subject does not change and only its position can be regarded as moving. For this reason, of two images with different shooting times, one is taken as the comparison source image and the other as the comparison destination image, and it is examined to which part of the comparison destination image a predetermined area of the comparison source image has moved. In this way, it is possible to determine how the image has moved.
  • A specific region in the comparison source image (hereinafter referred to as the "comparison source region") is compared with an evaluation region of the same size set in the comparison destination image, and it is evaluated how similar the comparison source region and the evaluation region are.
  • evaluation regions are sequentially set at different positions, and the movement destination of the comparison source region is searched for while performing the above evaluation in each evaluation region. The evaluation region most similar to the comparison source region is the movement destination of the comparison source region.
  • since the image captured by the image sensor can be regarded as a set of light intensities corresponding to each pixel, if the light intensity of the pixel x-th to the right and y-th down from the top-left origin is I(x, y), the image can be treated as the distribution of this light intensity I(x, y).
  • FIG. 3 shows the positional relationship between the comparison source region 301 and the evaluation region 302.
  • the comparison source region is set as a rectangle whose upper left pixel position is (x1, y1) and whose lower right pixel position is (x2, y2).
  • the evaluation region (m, n), moved m pixels to the right and n pixels downward from the comparison source region, can be represented by the region whose upper left pixel is (x1 + m, y1 + n) and whose lower right pixel is (x2 + m, y2 + n).
  • the evaluation value R(m, n) takes a smaller value as the correlation between the light intensity distributions (images) of the comparison source region and the evaluation region becomes larger (more similar).
  • m and n need not be limited to integers; the light intensity I'(x, y) between pixels can be interpolated from the original light intensity I(x, y), and the evaluation value R(m, n) can then be calculated from (Equation 1) based on I'(x, y).
  • As the data interpolation method for (Equation 1), linear interpolation, nonlinear interpolation, or any other method may be used.
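(Equation 1) itself is not reproduced in this text. From the surrounding description, and the later remark that the evaluation uses the sum of absolute values of light intensity differences, it presumably has the form below, where I1 is the comparison source image and I2 the comparison destination image:

```latex
R(m, n) = \sum_{x = x_1}^{x_2} \, \sum_{y = y_1}^{y_2}
  \left| \, I_2(x + m,\; y + n) - I_1(x,\; y) \, \right|
```

For non-integer m and n, I2 is replaced by its interpolated version I'2 as described above.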
  • the values of m and n are changed, and the evaluation region whose evaluation value indicates the greatest similarity to the comparison source region is searched for with subpixel accuracy.
  • the blur direction of camera shake and subject blur is not limited to a specific direction, so the values of m and n need to be considered including negative values (evaluation of areas moved leftward or upward).
  • Although m and n may be varied so that the entire range of the comparison destination image is evaluated, when the subject image moves greatly due to camera shake or the like and falls outside the light receiving range of the image sensor, it cannot be synthesized as an image. It is therefore generally preferable to limit m and n to a predetermined range and thereby shorten the calculation time.
  • the combination of m and n at which the evaluation value R(m, n) found in this way takes its minimum value is the blur amount, indicating the position of the comparison destination image region corresponding to the comparison source region.
  • comparison source region need not be limited to a rectangle, and an arbitrary shape can be set.
  • the calculation of the evaluation value need not be limited to the sum of absolute values of the light intensity differences; the evaluation value may be calculated by other methods.
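The search described above can be sketched at integer-pixel precision as a sum-of-absolute-differences (SAD) block search (a simplified sketch; the subpixel search via interpolation mentioned above is omitted, and the function name is ours):

```python
import numpy as np

def find_blur(src, dst, x1, y1, x2, y2, search=4):
    """Find the blur vector (m, n) of the comparison source region.

    src, dst: comparison source / destination images (2-D arrays)
    (x1, y1)-(x2, y2): comparison source region (inclusive corners)
    search: m and n are limited to [-search, search] to bound the
            calculation time, as the text recommends
    Returns the (m, n) minimizing the SAD evaluation value R(m, n).
    """
    block = src[y1:y2 + 1, x1:x2 + 1].astype(float)
    best, best_mn = None, (0, 0)
    for n in range(-search, search + 1):
        for m in range(-search, search + 1):
            yy, xx = y1 + n, x1 + m
            if yy < 0 or xx < 0:
                continue
            cand = dst[yy:yy + block.shape[0], xx:xx + block.shape[1]]
            if cand.shape != block.shape:   # evaluation region leaves the image
                continue
            r = float(np.abs(cand - block).sum())
            if best is None or r < best:
                best, best_mn = r, (m, n)
    return best_mn
```

The same routine, applied to several comparison source regions, gives the per-region blur amounts used later to detect rotation and distortion.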
  • the comparison method using image correlation can also be used for obtaining the parallax amount described later, and for calibration of the pixel shifting means. For example, by taking images before and after pixel shifting and evaluating the shift amount of the image, it can be confirmed whether the actuator used for pixel shifting is moving accurately despite the surrounding environment (temperature, deterioration over time). Such processing ensures reliable pixel shifting by the actuator.
  • FIG. 4 is a diagram showing image movement due to camera shake in the present embodiment.
  • This figure shows an example of shooting a landscape with little subject movement.
  • FIG. 4A is a diagram when the subject and the camera are moved in parallel, and
  • FIG. 4C shows the change in the image between shooting times 1 and 2 in this case.
  • FIG. 4B is a view when the camera is rotated in the horizontal direction, and
  • FIG. 4C shows a change in the image between shooting times 1 and 2 in this case.
  • the resolution is further improved by detecting and correcting image distortion due to this rotation.
  • since calculating the image blur amount for only one evaluation region can determine only the parallel movement of the image, a plurality of evaluation regions are set and the blur amount at each location is calculated, whereby the camera shake amount and image distortion in each evaluation region can be obtained.
  • FIG. 5 is a diagram for explaining adjustment of the pixel shift amount. This figure shows an enlarged part of the image sensor, together with the assumed pixel shift vector 400, the blur vector 401 detected by the blur deriving means, and the actual pixel shift vector 402.
  • an image in which the shift amounts of the blur vector 401 in the X and Y directions are each an integer pitch (an integer multiple of one pixel pitch) can be regarded as the same as an image whose pixel coordinates are shifted by an integer number of pixels.
  • shooting at shooting time 2 by the second imaging system 106a without pixel shifting is the same as shooting an image that was already shot at shooting time 1 with different pixels.
  • In the first imaging system 106b, as in the case where there is no camera shake at all, shifting by 0.5 pixels in the X direction as indicated by the vector 400 allows the invalid portion 405 on the right side of the photoelectric conversion unit 404 to be photographed, and the pixel shifting effect can be obtained.
  • the pixel shifting effect can still be obtained if a new pixel shift vector is set so that the portion of the total shift under camera shake that is below the integer pitch equals the shift amount of the vector 400.
  • for example, suppose the portion of the blur vector 401 below the integer pitch is 0.25 pixels in the X direction and 0.5 pixels in the Y direction.
  • the new pixel shift vector should then be set so that the portion of the combined shift below the integer pitch is 0.5 pixels in the X direction and 0 pixels in the Y direction.
  • therefore, the pixel shift vector is set to 0.25 pixels in the X direction and 0.5 pixels in the Y direction, as indicated by the vector 402 in FIG. 5.
  • as a result, the positional relationship is the same as when pixel shifting is performed with the pixel shift vector 400 in the absence of camera shake. That is, according to the present embodiment, since the pixel shift vector is adjusted in accordance with the blur vector, the pixel shifting effect can always be obtained.
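The adjustment above amounts to subtracting the blur from the intended shift modulo one pixel pitch. A minimal sketch (the function name and tuple convention are ours, not the patent's):

```python
def adjust_shift(target, blur):
    """Pixel-shift adjustment as in FIG. 5.

    target: intended sub-pixel shift (tx, ty) in pixel pitches, e.g. (0.5, 0)
    blur:   measured blur vector (bx, by) in pixel pitches
    Returns the shift to actually apply so that the fractional part of
    (blur + shift) equals the target, i.e. the invalid portion is still
    sampled as intended despite camera shake.
    """
    return tuple((t - b) % 1.0 for t, b in zip(target, blur))

# example from the text: target (0.5, 0), blur fractional part (0.25, 0.5)
# gives an actual shift of (0.25, 0.5), matching vector 402
```

Integer-pitch components of the blur drop out of the modulo, which is why only the sub-integer-pitch portion matters.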
  • After the series of steps in step 202 is repeated until the set number of pixel shifts is completed, the images stored in the image memory are combined in step 209, the image is output in step 210, and shooting is complete.
  • a concrete example is shown below.
  • FIG. 6 shows the configuration of the imaging optical system, the pixel shifting means, and the imaging device according to the first embodiment.
  • as the imaging optical system, two aspherical lenses 601a and 601b having a diameter of 2.2 mm were used.
  • the optical axes of the lenses are almost parallel to the Z axis in Fig. 6, and the distance between them is 3 mm.
  • a glass plate 602 is provided on the optical axis of the lens 601b.
  • the glass plate 602 can be tilted with respect to the X-axis and Y-axis by a piezoelectric actuator and a tilt mechanism (not shown).
  • the pixels are shifted by 1/2 of the pixel pitch (1.2 µm) in the horizontal direction (X-axis direction) to double the number of pixels.
  • the glass plate 602 is made of optical glass BK7 with a width (X-axis direction) of 2 mm, a height (Y-axis direction) of 2 mm, and a thickness (Z-axis direction) of 500 µm.
  • a monochrome CCD with a pitch of 2.4 µm between adjacent pixels was used as the image sensor 603.
  • the glass plate 602 and the light receiving surface of the image sensor 603 are substantially parallel to the XY plane in FIG. 6. Further, the image sensor 603 is divided into two regions 603a and 603b so as to correspond to each optical system on a one-to-one basis. By providing a readout circuit and a drive circuit for each of the regions 603a and 603b, the images of the regions 603a and 603b can be read out individually.
  • a method of inclining the glass plate as the pixel shifting means is used.
  • the method is not limited to this method.
  • an image sensor or lens may instead be physically moved by a predetermined amount using an actuator such as a piezoelectric actuator or an electromagnetic actuator. Even when other means are used as the pixel shifting means in this way, the configuration is the same as in FIG. 6 except for the glass plate 602.
  • in this example, one image sensor is divided into two different regions, but two separate image sensors may be used so as to correspond one-to-one with the optical systems.
  • the form of the image sensor may be any form as long as the plurality of imaging areas correspond one-to-one with the optical systems.
  • FIG. 7 shows a configuration of the imaging apparatus according to the second embodiment.
  • the main differences from Embodiment 1 are that a parallax amount deriving means 700 is added; that the imaging element 701 is a single body, so the first imaging system and the second imaging system photograph at substantially the same time; and that an optimum image selection means 702, which selects the images to be combined based on the parallax amount and the blur amount, is added. Explanation of the parts overlapping Embodiment 1 is omitted.
  • FIG. 8 shows a flowchart of the overall operation of the imaging apparatus according to the present embodiment.
  • the imaging start command in step 200 and the imaging pre-processing in step 201 are the same as in the first embodiment.
  • in step 800, photographing with pixel shifting is performed.
  • in step 800, the exposure process of the image sensor in step 801, the transfer of the image from the image sensor to the image memory 103 in step 802, and the pixel shift process in step 803 are repeated.
  • since the image sensor 701 is shared by the first imaging system 106b and the second imaging system 106a, the two images are taken at almost the same timing.
  • the pixel shift amount is a fixed value regardless of the amount of camera shake, set so that the invalid portions can be used effectively when there is no camera shake (for example, 0.5 pixels).
  • unlike Embodiment 1, step 800 omits the step of capturing an image with the second imaging system 106a and deriving the blur amount in order to adjust the pixel shift amount (step 205 in FIG. 2). For this reason, the interval between shooting time 1 (shooting without pixel shifting) and shooting time 2 (shooting with pixel shifting) can be shortened. As a result, camera shake is reduced, and shooting is possible even when the subject moves faster than in Embodiment 1.
  • after the pixel-shift imaging in step 803 is completed, in step 804 the images stored in time series in the image memory 103 are compared, as in step 205 of Embodiment 1, to find the amount of blur. If the subject moves, the amount of blur is not uniform within the image; if a single blur amount is determined and the images are superimposed as a whole, they will not overlap exactly and the resolution will not improve in some locations.
  • in such a case, by dividing the image into regions and deriving the blur amount for each region, the resolution of the entire image can be improved. The division need not be limited to rectangles; the subjects may be detected separately and the image divided per subject to detect the amount of blur.
  • in step 805, images taken at the same time by the imaging systems with different optical axes are compared to determine the amount of parallax.
  • because the optical axes differ, the relative position of the subject image formed on the image sensor changes according to the distance of the subject; the image forming position is not determined by the distance between the lens centers alone.
  • FIG. 9 is a diagram for explaining parallax.
  • two imaging optical systems 1301a and 1301b having the same characteristics are installed at a distance D, and the imaging surfaces of the imaging optical systems are denoted by 1302a and 1302b, respectively.
  • the imaging optical systems 1301a and 1301b observe the same subject from different positions. For this reason, parallax occurs between images formed on the imaging surfaces 1302a and 1302b.
  • the parallax amount Δ is given by (Equation 2) below.
  • D is the distance between the optical axis of the imaging optical system 1301a and the optical axis of the imaging optical system 1301b
  • f is the focal length of the imaging optical systems 1301a and 1301b
  • A is the distance between the subject and the imaging planes 1302a and 1302b.
  • when the subject distance A is sufficiently large compared with the focal length f, the parallax amount Δ can be expressed as D·f/A, and as A approaches infinity Δ can be considered to be 0.
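  As a quick numerical check of the approximation Δ ≈ D·f/A, here is a minimal sketch. The focal length value below is an assumption for illustration only; the excerpt does not state it.

```python
def parallax(D, f, A):
    # Approximate parallax for A >> f; all arguments in the same length unit.
    return D * f / A

# Lens spacing D = 3 mm (Example 1), assumed focal length f = 2.5 mm,
# subject at A = 2 m = 2000 mm:
delta = parallax(3.0, 2.5, 2000.0)
print(delta)            # -> 0.00375 (mm)
print(delta / 0.0024)   # roughly 1.56 pixels at the 2.4 µm pixel pitch
```

  The same formula makes the limiting behavior explicit: as A grows, Δ shrinks toward 0, which is why images of a subject at infinity can be regarded as identical.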
  • for a subject at infinity, the images taken by the imaging optical systems 1301a and 1301b can therefore be regarded as the same; if only the lens center distance D is corrected, the composition process can be performed as is.
  • for a subject at a finite distance, however, the images captured by the imaging optical systems 1301a and 1301b are shifted by a parallax that depends on the subject distance and cannot be regarded as the same, so they cannot be combined as is.
  • the lens center distance D may be calculated from the designed lens spacing, or a marker subject may be placed at infinity and the position where its image is formed may be regarded as the lens center.
  • the method of dividing the block is not limited to this method, and the block may be divided by changing the number of pixels or the shape.
  • since the direction in which the parallax occurs is determined by the positions of the origins of the imaging regions (the intersections of each imaging region with the optical axis of the corresponding optical system), the combinations of m and n in (Equation 1) may simply be limited according to that direction when detecting the parallax.
  • in step 806, images are selected whose combination improves the resolution when combined, based on the blur amount and the parallax amount. As described above, resolution is improved by pixel shifting as long as the superimposed pixels are displaced so as to use the invalid portions; a displacement caused by parallax can be used in the same way.
  • FIG. 10 is an explanatory diagram of a method for selecting an optimum image.
  • the shaded area in this figure is a subject image formed on the image sensor.
  • subject images 1001a and 1001b are formed in the imaging region 1000a of the second imaging system and the imaging region 1000b of the first imaging system. It is assumed that the subject is on the center line of the second imaging system.
  • the subject image 1001b is formed at a position shifted by Δ on the imaging region 1000b.
  • when the image of each imaging region is transferred to the image memory 103, it is stored as two-dimensional data.
  • the upper left coordinate of the subject image 1001a is (ax, ay)
  • since the upper left coordinate of the subject image 1001b is shifted by the parallax amount Δ, it is (ax + Δ, ay).
  • at shooting time 2, the imaging region of the second imaging system is 1002a, the imaging region of the first imaging system is 1002b, and the subject images at that time are 1003a and 1003b.
  • the first imaging system was moved 0.5 pixels to the right by the pixel shifting means.
  • due to camera shake, the subject image 1003a on the imaging region 1002a is formed at a position shifted by (bx, by) from the original position.
  • FIG. 11 is another explanatory diagram of the optimal image selection method.
  • the amount of deviation bx and the amount of parallax Δ can each be divided into the case where it is close to an integer pitch and the case where it is close to an integer pitch plus a 0.5 pixel pitch.
  • in FIG. 11, bx and Δ denote the sub-integer-pitch portions of these values.
  • each value in FIG. 11 is obtained by calculating the X coordinate of the subject image in each case, taking the X coordinate ax of the reference imaging region 1000a as 0.
  • 0 indicates that the positional relationship between the pixels of the imaging element and the subject is shifted by an integer pitch compared with the case of the reference imaging region 1000a, and 0.5 indicates that it is shifted by a further 0.5 pixel pitch.
  • the image corresponding to the portion indicated by 0.5 is an image that can effectively use the invalid portion.
  • regardless of the combination of the parallax amount Δ and the camera shake amount bx, among the four images there is always one whose calculated X coordinate value is 0.5. Therefore, in any combination, an image that effectively uses the invalid portion can be obtained; that is, the resolution can be improved regardless of camera shake or the distance of the subject.
  • the parts where the values of bx and Δ in FIG. 11 are 0.5 may be values close to 0.5 (for example, values from 0.3 to 0.7).
  • similarly, the parts set to 0 may be values close to 0 (for example, values less than 0.3 or greater than 0.7).
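  The selection logic of FIG. 10 and FIG. 11 can be sketched as below. This is an illustrative reconstruction: the names and the exact offset bookkeeping are assumptions. The four images are the two imaging systems at the two shooting times, where the first imaging system at time 2 additionally carries the fixed 0.5-pixel shift.

```python
def half_pitch_images(delta, bx, shift=0.5, lo=0.3, hi=0.7):
    # Sub-integer-pitch X offsets of the four images relative to the
    # reference image (second imaging system, shooting time 1).
    offsets = {
        "system2_time1": 0.0,
        "system1_time1": delta % 1.0,
        "system2_time2": bx % 1.0,
        "system1_time2": (bx + delta + shift) % 1.0,
    }
    # Images whose offset is near half a pitch can fill the invalid portions.
    return [name for name, v in offsets.items() if lo <= v <= hi]

print(half_pitch_images(delta=0.5, bx=0.0))  # -> ['system1_time1']
print(half_pitch_images(delta=0.0, bx=0.0))  # -> ['system1_time2']
```

  As the text states, some image always lands near the 0.5 offset, whichever combination of Δ and bx occurs.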
  • in that case, because image data must be arranged on a grid, linear interpolation processing may be performed when the images are superimposed and combined.
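  A one-dimensional sketch of that interpolation step (the function name is illustrative): samples from the superimposed images sit at off-grid positions, and values on the regular grid are obtained by linear interpolation between the bracketing samples.

```python
def linear_resample(xs, ys, grid):
    # xs: ascending sample positions; ys: sample values; grid: target positions.
    out = []
    for g in grid:
        if g <= xs[0]:
            out.append(ys[0])
        elif g >= xs[-1]:
            out.append(ys[-1])
        else:
            # Find the bracketing sample pair and interpolate linearly.
            i = next(j for j in range(len(xs) - 1) if xs[j] <= g <= xs[j + 1])
            t = (g - xs[i]) / (xs[i + 1] - xs[i])
            out.append((1 - t) * ys[i] + t * ys[i + 1])
    return out

# Two exposures offset by 0.4 px, merged and sorted by position:
print(linear_resample([0.0, 0.4, 1.0, 1.4], [10, 12, 20, 22], [0.0, 0.5, 1.0]))
```

  The same idea extends to two dimensions as bilinear interpolation when both X and Y offsets are fractional.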
  • although the optimum image was selected here with reference to the horizontal pixel pitch of the image sensor, the pixel pitch in the oblique direction may also be used as the reference, and pixel pitch references may be mixed depending on the situation.
  • Example 2 according to Embodiment 2 will be described.
  • the external configuration of Example 2 is the same as that of FIG. 6 of Example 1, and the optical system and pixel shifting mechanism of Example 2 are also the same as those of Example 1.
  • Example 2 differs in that the two regions of the image sensor 603 are exposed and their images transferred at approximately the same time, and in that the driving amount of the pixel shifting mechanism is fixed.
  • an optical glass plate of BK7 (602 in the figure) with a thickness of 500 µm is provided on the optical axis of the lens 601b and tilted by about 0.4 degrees using a piezoelectric actuator and tilting mechanism.
  • the subject image is thereby shifted in the horizontal direction (X-axis direction) by 1/2 of the pixel pitch (1.2 µm) to double the number of pixels.
  • the time at which the first image is captured is called shooting time 1, and the time at which the second image is captured after the pixel shift (after tilting the glass plate) is called shooting time 2.
  • the size of the area to be compared need not be limited to a square, and may be set arbitrarily.
  • the parallax amount was obtained by the parallax amount deriving means 700 from the two images captured at shooting time 1.
  • the optimum image selection means selects an image to be synthesized.
  • in this example, the method of tilting the glass plate is used as the pixel shifting means, but the means is not limited to this method; an image sensor or lens may be physically moved by a predetermined amount using an actuator such as a piezoelectric actuator or an electromagnetic actuator.
  • one image sensor is divided into two different areas.
  • two different image sensors may be used so as to correspond to each optical system in a one-to-one relationship.
  • the form of the image sensor may be any form as long as the plurality of image areas correspond to each optical system on a one-to-one basis.
  • the present embodiment differs from Embodiment 2 in that it handles movement of the subject being imaged (for example, a person or an animal).
  • that is, a scene is photographed in which, after the first image is shot and its data stored in the memory, part of the subject moves to another location before the second shot is taken.
  • block dividing means for dividing an image into a plurality of blocks is provided, and the amount of blur is derived for each block.
  • the block dividing means is controlled by the system control means 100 and divides the entire first image taken by the second imaging system 106a without pixel shifting into 10 ⁇ 10 pixel blocks.
  • the blur amount deriving unit 104 checks, for each block, which position in the second image each of the divided block images corresponds to, and (Equation 1) was used to derive the amount of image movement.
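  (Equation 1) itself is not reproduced in this excerpt. The sketch below therefore assumes a sum-of-absolute-differences block-matching criterion, a common choice for this kind of per-block displacement search; all names are illustrative.

```python
def block_blur(img1, img2, bx, by, size=10, search=12):
    # Find the integer-pixel displacement of the block at (bx, by) in img1
    # that best matches img2, minimizing the sum of absolute differences.
    # img1/img2: 2-D lists (rows of pixel values) of equal size.
    h, w = len(img2), len(img2[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= by + dy and by + dy + size <= h
                    and 0 <= bx + dx and bx + dx + size <= w):
                continue  # candidate block would fall outside img2
            sad = sum(abs(img1[by + y][bx + x] - img2[by + dy + y][bx + dx + x])
                      for y in range(size) for x in range(size))
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best

img1 = [[0] * 8 for _ in range(8)]; img1[3][3] = 9
img2 = [[0] * 8 for _ in range(8)]; img2[3][4] = 9   # scene moved 1 px right
print(block_blur(img1, img2, bx=2, by=2, size=3, search=2))  # -> (1, 0)
```

  Running this per block yields a displacement map like the one described for FIG. 13C, where camera shake and subject motion produce different vectors in different blocks.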
  • FIG. 13 shows images taken in time series by the second imaging system 106a without pixel shifting and stored in the image memory according to the present embodiment.
  • FIG. 13A shows an image taken at shooting time 1
  • FIG. 13B shows an image taken at shooting time 2.
  • Fig. 13C shows the amount of image movement derived for each block.
  • the block indicated by A is one for which a blur of 10.1 pixels to the right relative to FIG. 13A was derived, and the block indicated by B is one for which a blur of 8.8 pixels to the left was derived.
  • these blur amounts combine the camera shake and the movement of the subject.
  • the parallax can likewise be divided into blocks and obtained for each block. From the sum of the blur amount and the parallax amount, as in Embodiment 2, images whose layouts are at an integer pitch (or close to it) and at a 0.5 pixel pitch (or close to it) can be selected, making it possible to choose images that improve the resolution when combined.
  • the resolution of the entire image can be improved even when the movement of the subject is large.
  • since pixel shifting is a technique for improving resolution, it has no effect on smooth surfaces of the subject or on fine patterns below the resolving power of the lens.
  • when shifting pixels, reducing the time between shots reduces camera shake and subject blur and improves the resolution.
  • for image portions where pixel shifting brings no benefit, the shooting interval can be shortened by omitting the processing for that block.
  • when a Fourier transform is performed, high-frequency components appear in high-resolution parts of the image. Therefore, after the image is captured and divided into blocks, the frequency components of each block may be analyzed, and if they fall below a predetermined condition, the derivation of the blur amount and the parallax calculation for that portion may be skipped.
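  That screening step could look like the sketch below: a naive one-dimensional DFT on one row of a block. A real implementation would use a 2-D FFT; the function name, cutoff, and threshold are illustrative assumptions.

```python
import cmath

def high_freq_ratio(row, cutoff_ratio=0.25):
    # Fraction of non-DC spectral energy in the high-frequency band of `row`.
    n = len(row)
    spec = [abs(sum(row[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]
    cut = max(1, int(n * cutoff_ratio))
    total = sum(s * s for s in spec[1:])       # skip the DC term
    high = sum(s * s for s in spec[cut:n - cut + 1])
    return high / total if total > 1e-9 else 0.0

print(high_freq_ratio([5, 5, 5, 5, 5, 5, 5, 5]))  # flat block -> 0.0, skip it
print(high_freq_ratio([0, 9, 0, 9, 0, 9, 0, 9]))  # fine pattern -> near 1.0
```

  Blocks whose ratio falls below a chosen threshold would then skip the blur and parallax derivation, shortening the shooting interval as described.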
  • the present embodiment differs from Embodiment 3 in that it uses subject discrimination means for discriminating different subjects in the image.
  • by using the subject discrimination means, the blur amount can easily be derived for each subject. For this reason, the blur amount can be derived accurately even in an image where, in addition to camera shake, subject blur makes the blur amount differ from place to place.
  • the blocks can be divided for each subject or the block size can be changed for each subject. It is also possible to selectively synthesize only a specific subject when synthesizing images.
  • as the subject discrimination means, there are a means for measuring the distance to the subject using radio waves or the like to identify different image areas, a means for discriminating different subjects by edge detection or other image processing, and a method of extracting subjects from the image using the parallax amount. The specific means is not limited as long as different subjects in the image can be distinguished.
  • the basic configuration of the present embodiment is the same as that of Embodiment 2, and therefore the overlapping description is omitted here.
  • Fig. 14 is a diagram showing an image photographed in the present example and a subject group discriminated by the subject discriminating means.
  • the captured image was divided into 10 ⁇ 10 pixel blocks (horizontal 11 ⁇ vertical 9), and the distance to the subject was measured by radio waves for each block, and different subjects were identified.
  • in the subject discrimination, blocks within a certain error range in the distance measurement were judged to be the same subject; in this example, the error range was 5%.
  • Fig. 14A is an image taken without pixel shifting with the second imaging system 106a at shooting time 1
  • Fig. 14B is an image shot with no pixel shifting with the second imaging system 106a at shooting time 2.
  • the distance (unit: meters) measured by radio waves for each block is shown.
  • the distance A may also be calculated for each block from (Equation 2) above, using the parallax Δ obtained for each block.
  • in this example, the distance of the subject was measured by radio waves. As shown in Fig. 14A, two subject groups could be identified: subject group 1 at a distance of approximately 5 meters and subject group 2 at a distance of approximately 2 meters, each identified as the set of blocks whose distances fall within the 5% error range.
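  The grouping described (blocks belong to the same subject when their measured distances agree within the 5% error range) could be sketched as follows; the names and the greedy strategy are illustrative assumptions.

```python
def group_blocks(block_dist, tol=0.05):
    # block_dist: {(row, col): distance_m}. Group blocks whose distance
    # agrees with a group's reference distance within tol (here 5%).
    groups = []
    for pos, d in sorted(block_dist.items(), key=lambda kv: kv[1]):
        if groups and abs(d - groups[-1]["ref"]) <= tol * groups[-1]["ref"]:
            groups[-1]["blocks"].append(pos)
        else:
            groups.append({"ref": d, "blocks": [pos]})
    return groups

blocks = {(0, 0): 5.0, (0, 1): 5.1, (5, 5): 2.0, (5, 6): 2.05}
groups = group_blocks(blocks)
print(len(groups))                 # -> 2 subject groups
print([g["ref"] for g in groups])  # -> [2.0, 5.0]
```

  Each resulting group then gets its own blur correction, matching the per-subject treatment in the text.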
  • blur correction for subject group 1 is performed on the image taken at shooting time 2, and then the image composition is performed.
  • the same method as in Example 2 was used as the method for selecting an image by the optimum image selection means.
  • the sub-integer-pitch portion of the 10.3 pixel blur of subject group 1 is 0.3 pixels, which can be taken as bx in FIG. 11.
  • the value of bx can also be negative; in that case, 0.5 in FIG. 11 becomes −0.5.
  • the sign of bx only determines whether the invalid portion used effectively is to the right or to the left of the photoelectric conversion unit; the contribution to the resolution is the same.
  • in this way, the blur amount can be derived for each subject, so the blur amount of the image can be corrected accurately.
  • if part of the image protrudes from the shooting range due to camera shake or subject blur and cannot be recognized, the resolution of that image area cannot be increased by pixel shifting; instead, one of the captured images need only be selected for that area.
  • FIG. 15 shows configurations of the imaging system, the pixel shifting means, and the imaging device according to the present embodiment.
  • four aspherical lenses 1101a to 1101d with a diameter of 2 mm were used as the imaging optical system.
  • the optical axes of the lenses are almost parallel to the Z axis in Fig. 15, and the distance between adjacent lenses is 2.5 mm.
  • Color filters 1102a to 1102d are provided in front of each lens (subject side) as wavelength separation means that transmits only a specific wavelength.
  • 1102a and 1102d are color filters that transmit green
  • 1102b is a color filter that transmits red
  • 1102c is a color filter that transmits blue.
  • 1103a to 1103d are four image sensors corresponding to each lens on a one-to-one basis, using a common drive circuit and operating in synchronization.
  • a color image can be obtained by combining images taken by each optical system (color component).
  • the pixel pitch of the image sensors is 3 µm in this example.
  • the lenses and image sensors are installed parallel to the X axis in FIG. 15 at equal intervals, and the light receiving surface of each image sensor is substantially parallel to the XY plane in FIG. 15.
  • Reference numeral 1104 denotes a piezoelectric fine movement mechanism serving as a pixel shifting means.
  • the image sensors 1103a to 1103c are attached to the piezoelectric fine movement mechanism 1104 and can be driven in the X and Y directions in the figure.
  • the image sensor 1103d is independent of the piezoelectric fine movement mechanism and does not perform pixel shifting; it constitutes the second imaging system.
  • FIG. 16 is a plan view of the piezoelectric fine movement mechanism 1104.
  • Image sensors 1103a to 1103c are installed on the stage 1201 at the center.
  • the stage 1201 is finely moved in the X-axis direction in the figure by the laminated piezoelectric elements 1202a and 1202b, and the stage fixing frame 1202 is finely moved in the Y-axis direction in the figure by the laminated piezoelectric elements 1203a to 1203d.
  • in this way, the image sensors can be finely moved independently in two mutually perpendicular axial directions in the plane of the image sensor.
  • by capturing the first image with each image sensor, four images corresponding to the four image sensors 1103a to 1103d are obtained.
  • each of the three image sensors 1103a to 1103c was configured to shoot while being moved by a 0.5 pixel pitch (1.5 µm) in the X and Y directions. Specifically, the first picture was taken without pixel shifting; the sensors were then moved 0.5 pixels in the X direction and the second picture was taken; next, while maintaining the X position, the third picture was taken with a 0.5 pixel pitch shift in the Y direction; and finally the fourth picture was taken after moving back 0.5 pixel pitch in the X direction while maintaining the Y position. By combining these four images, a high-resolution image was obtained.
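  The four-shot interleave can be sketched as below. The placement convention, in which a +0.5-pitch sensor shift fills the adjacent half-pitch site of the doubled grid, is an illustrative assumption, as are the names.

```python
def combine_four(s1, s2, s3, s4):
    # Interleave four half-pitch-shifted shots into a doubled grid:
    # s1 at (0, 0), s2 at (+0.5, 0), s3 at (+0.5, +0.5), s4 at (0, +0.5).
    h, w = len(s1), len(s1[0])
    out = [[0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            out[2 * y][2 * x] = s1[y][x]
            out[2 * y][2 * x + 1] = s2[y][x]
            out[2 * y + 1][2 * x + 1] = s3[y][x]
            out[2 * y + 1][2 * x] = s4[y][x]
    return out

print(combine_four([[1]], [[2]], [[3]], [[4]]))  # -> [[1, 2], [4, 3]]
```

  Each W×H shot thus contributes one quarter of a 2W×2H output, which is the fourfold resolution gain the shift sequence is designed to produce.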
  • the amount of blur at each shooting time was derived from the plurality of images taken in time series, without pixel shifting, using the lens 1101d of the second imaging system.
  • the parallax amount was derived from the first images captured by the first imaging system with the green color filter 1102a and by the second imaging system with the green color filter 1102d. This is because images taken through the same color filter are easy to compare, so the parallax amount can be obtained more precisely.
  • FIG. 17 shows another example of the arrangement of the four optical systems.
  • FIG. 17A is an example in which four optical systems are arranged at the vertices of a rectangle. G0 and G1 are green, R is red, and B is blue, indicating the wavelength separation means (color filters).
  • FIG. 17B is a diagram for explaining the derivation of the amount of parallax in the arrangement of FIG. 17A.
  • a green imaging system arranged on a diagonal line is used for deriving the amount of parallax.
  • because the four optical systems are arranged in a rectangle, the parallax of the other red and blue imaging systems can be obtained as the orthogonal components of the parallax amount derived for the green imaging systems.
  • in this example, a color filter is provided in front of each lens for wavelength separation, but the color filter may instead be provided between the lens and the image sensor, or formed directly on the lens.
  • the color filters are not necessarily limited to the three primary colors R, G, and B; complementary color filters may be used to separate the wavelengths, with the color information then converted by image processing.
  • the wavelength separation means is not limited to a color filter.
  • if a mechanism that tilts a glass plate is used as the pixel shifting means, colored glass can serve as the glass plate.
  • any specific means may be used as the wavelength separation means as long as it is a means for separating only a predetermined wavelength component.
  • FIG. 18 is a flowchart illustrating the overall operation of the imaging apparatus according to the third embodiment.
  • in the embodiments described so far, the pixel shift operation is determined first and a predetermined number of shots is performed, whereas Embodiment 3 has a configuration in which the number of shots varies depending on the captured images.
  • steps 1500, 1501, 1503, and 1504 in FIG. 18 are the same as steps 200, 201, 801, and 802 in FIG. 8; the subsequent steps differ from those of FIG. 8.
  • the amount of blur is obtained in step 1505, and the image to be synthesized is selected in step 1506.
  • after selecting images in step 1506, in step 1507 an image still lacking for the composition is identified, and the shift amount required to obtain it is determined.
  • in step 1508, the pixel shift is executed.
  • the series of steps in step 1502 is repeated until the images necessary for composition are obtained. Thereafter, the parallax amount is derived in step 1509, the images stored in the image memory are combined in step 1510, the image is output in step 1511, and the shooting is completed.
  • the present invention is useful for an imaging device in, for example, a digital still camera or a mobile phone.

Abstract

A multi-eye imaging apparatus including a plurality of imaging systems (106a,106b) that have their respective optical systems (107a,107b) and imaging elements (108a,108b) and also have their respective different optical axes. The plurality of imaging systems (106a,106b) include a first imaging system (106b), which has a pixel shifting means (101) for changing the positional relationship between the imaging element (108b) and an image focused thereon, and a second imaging system (106a) in which the positional relationship between the imaging element (108a) and an image focused thereon is fixed in a time sequence of image pickup.

Description

明 細 書  Specification
複眼撮像装置  Compound eye imaging device
技術分野  Technical field
[0001] 本発明は、画素ずらし機能を有する複眼撮像装置に関する。  The present invention relates to a compound eye imaging apparatus having a pixel shifting function.
背景技術  Background art
[0002] 携帯機器に用いる撮像装置には、高解像度化と小型化の両立が必要とされている 。撮像装置の小型化は、撮像光学レンズの大きさや焦点距離、撮像素子の大きさ〖こ より制限される。  [0002] An imaging device used for a portable device is required to achieve both high resolution and small size. The downsizing of the imaging device is limited by the size and focal length of the imaging optical lens and the size of the imaging device.
[0003] 一般的に、光は波長により屈折率が異なるため、全波長の情報が含まれる景色を 単レンズで撮影面に結像することはできない。このため、通常の撮像装置の光学系 は、赤、緑、青の波長の光を同一の撮像面に結像するため、複数のレンズを重ねた 構成となっている。この構成では、必然的に撮像装置の光学系が長くなり、撮像装置 が厚くなる。そこで、撮像装置の小型化、特に薄型化に有効な技術として、焦点距離 が短い単レンズを用いる複眼方式の撮像装置が提案されている(例えば特許文献 1)  [0003] Generally, since the refractive index of light differs depending on the wavelength, it is not possible to form a scene including information on all wavelengths on a photographing surface with a single lens. For this reason, the optical system of a normal imaging device has a configuration in which a plurality of lenses are stacked in order to form red, green, and blue wavelengths of light on the same imaging surface. In this configuration, the optical system of the imaging device is inevitably long and the imaging device is thick. Therefore, a compound eye type imaging apparatus using a single lens with a short focal length has been proposed as an effective technique for downsizing, in particular, thinning of the imaging apparatus (for example, Patent Document 1).
[0004] 複眼方式のカラー画像撮像装置は、撮像光学系を青色の波長の光を受け持つレ ンズと緑色の波長の光を受け持つレンズと赤色の波長の光を受け持つレンズとを平 面内に並べた構成にし、それぞれのレンズに対して、撮像領域を設けるものである。 [0004] In a compound eye type color image capturing apparatus, an imaging optical system is arranged in a plane with a lens that handles light of a blue wavelength, a lens that handles light of a green wavelength, and a lens that handles light of a red wavelength. The imaging region is provided for each lens.
[0005] この撮像領域は、複数の撮像素子を並べて配置するだけでなぐ一つの撮像素子 を複数の領域に分けてもよい。この構成では、各レンズが受け持つ光の波長が限定 されるため、単レンズにより被写体像を撮像面に結像することが可能となり、撮像装置 の厚さを大幅に小さくできる。  [0005] In this imaging area, a single imaging element may be divided into a plurality of areas simply by arranging a plurality of imaging elements side by side. In this configuration, since the wavelength of light that each lens is responsible for is limited, a subject image can be formed on the imaging surface by a single lens, and the thickness of the imaging device can be significantly reduced.
[0006] 図 19に従来の複眼方式の撮像装置の一例について要部の概略斜視図を示す。 1 900はレンズアレイであり、 3つのレンズ 1901a、 1901b, 1901cカ 体で成型され ている。 1901aは、赤色の波長の光を受け持つレンズであり、結像した被写体像を赤 色の波長分離フィルター (カラーフィルター)を受光部に貼り付けた撮像領域 1902a で画像情報に変換する。同様に 1901bは、緑色の波長の光を受け持つレンズであり 、撮像領域 1902bで緑色の画像情報に変換され、 1901cは青色の波長の光に対応 するレンズであり、撮像領域 1902cで青色の画像情報に変換する。 FIG. 19 is a schematic perspective view of the main part of an example of a conventional compound eye type imaging apparatus. Reference numeral 1900 denotes a lens array, which is formed by three lenses 1901a, 1901b, and 1901c. Reference numeral 1901a denotes a lens that handles light of a red wavelength, and converts the formed subject image into image information in an imaging region 1902a in which a red wavelength separation filter (color filter) is attached to the light receiving unit. Similarly, 1901b is a lens that handles light of the green wavelength. The image is converted into green image information in the imaging area 1902b, and 1901c is a lens corresponding to light of a blue wavelength, and is converted into blue image information in the imaging area 1902c.
[0007] これらの画像を重ね合わせて合成することにより、カラー画像を取得することができ る。なお、レンズは 3個に限定する必要はなぐ複数の同色の画像を取得し合成する ことちでさる。 [0007] By superimposing and synthesizing these images, a color image can be obtained. In addition, it is not necessary to limit the number of lenses to three.
[0008] このように複眼方式の撮像装置は、撮像装置の厚さを薄くすることができるが、単純 に各色の画像を重ね合わせて合成する場合、各色に分離した画像の画素数により 合成した画像の解像度が決まることになる。このため、緑、赤、青のフィルターを千鳥 にならベた通常のべィヤー配列の撮像装置に比べて、解像度が劣る問題がある。  [0008] As described above, the compound-eye imaging device can reduce the thickness of the imaging device, but when the images of each color are simply superimposed and synthesized, the images are synthesized according to the number of pixels of the image separated into each color. The resolution of the image will be determined. For this reason, there is a problem that the resolution is inferior to that of an ordinary Bayer image pickup device in which green, red and blue filters are arranged in a staggered manner.
[0009] 一方、撮像装置の解像度を向上させるには、「画素ずらし」と呼ばれる技術がある。  On the other hand, in order to improve the resolution of the imaging apparatus, there is a technique called “pixel shift”.
図 20は、画素ずらしを用いた高解像度化の概念説明図である。本図は、撮像素子の 一部の拡大部分を示している。図 20Aに示すように、撮像素子には受光した光を電 気信号に変換する光電変換部 2101 (以下、「光電変換部」という)と、転送電極など の光を電気信号に変換することができな 、無効部分 2102 (以下、「無効部分」 、う) が存在する。撮像素子においては、この光電変換部 2101と無効部分 2102とを合わ せて、 1画素となる。この画素は、ある一定の間隔 (ピッチ)で規則正しく形成されてい るのが普通である。図 20Aの太線で囲んだ部分が 1画素分であり、 Pは 1ピッチ分を 指している。  FIG. 20 is a conceptual explanatory diagram of high resolution using pixel shifting. This figure shows a part of the enlarged portion of the image sensor. As shown in FIG. 20A, the image sensor has a photoelectric conversion unit 2101 (hereinafter referred to as “photoelectric conversion unit”) that converts received light into an electric signal, and light from a transfer electrode or the like can be converted into an electric signal. There is an invalid part 2102 (hereinafter referred to as “invalid part”). In the image sensor, the photoelectric conversion unit 2101 and the invalid portion 2102 are combined to form one pixel. These pixels are usually formed regularly at a certain interval (pitch). The part surrounded by the thick line in Fig. 20A is one pixel, and P indicates one pitch.
[0010] Pixel shifting with such an image sensor proceeds roughly as follows. First, an image is captured at the sensor position shown in FIG. 20A. Next, as shown in FIG. 20B, the sensor is moved diagonally (by 1/2 of the pixel pitch both horizontally and vertically) so that the photoelectric conversion portion 2101 of each pixel moves onto the ineffective portion 2102, and a second image is captured. The two captured images are then combined, as shown in FIG. 20C, taking the movement of the image sensor into account.
[0011] In this way, a signal can also be obtained from the ineffective portions, which could not originally contribute to the signal. That is, the imaging state of FIG. 20C has the same resolution as an image captured with an image sensor having twice as many photoelectric conversion portions, compared with a single capture by the image sensor of FIG. 20A. Pixel shifting as described above therefore yields an image equivalent to one captured with an image sensor having twice the number of pixels, without increasing the pixel count of the image sensor.
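As a rough illustration of the composition of FIG. 20C (not the patent's actual implementation), two frames captured with a half-pitch diagonal offset can be interleaved onto a grid of twice the sampling density; the averaging used for the two unsampled sub-positions is an illustrative assumption:

```python
import numpy as np

def compose_half_pitch_shift(frame_a, frame_b):
    """Interleave two frames taken with a half-pitch diagonal pixel shift.

    frame_a: frame captured at the reference sensor position (FIG. 20A).
    frame_b: frame captured after shifting 1/2 pitch horizontally and
             vertically (FIG. 20B).
    Returns a grid with 2x sampling in each direction (FIG. 20C); the two
    sub-positions that neither exposure sampled are filled by simple
    averaging, an illustrative choice only.
    """
    h, w = frame_a.shape
    out = np.zeros((2 * h, 2 * w), dtype=np.float64)
    out[0::2, 0::2] = frame_a                   # original sample positions
    out[1::2, 1::2] = frame_b                   # diagonally shifted positions
    out[0::2, 1::2] = (frame_a + frame_b) / 2.0  # unsampled sub-positions
    out[1::2, 0::2] = (frame_a + frame_b) / 2.0
    return out

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[1.5, 2.5], [3.5, 4.5]])
print(compose_half_pitch_shift(a, b).shape)  # (4, 4)
```

The interleaving doubles the sampling density in each direction, matching the "twice the number of pixels" equivalence described above.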
[0012] Pixel shifting is not limited to the diagonal direction of this example; shifting horizontally or vertically improves the resolution in the shifted direction. For example, combining vertical and horizontal shifts can quadruple the resolution. The shift amount also need not be limited to 0.5 pixel; the resolution can be raised further by using finer shifts that interpolate the ineffective portions.
[0013] In the above example, the relative positional relationship between the image sensor and the incident light is changed by moving the image sensor, but pixel shifting is not limited to this method. For example, an optical lens may be moved instead of the image sensor. As another approach, a method using a parallel plate has been proposed (for example, Patent Document 1). In the invention described in Patent Document 1, the image formed on the image sensor is shifted by tilting a parallel plate.
[0014] Pixel shifting can thus improve resolution, but it does so by capturing a plurality of images in time series and then combining them into a high-resolution image. Consequently, if the images that should complement one another become displaced, the resolution may deteriorate. In other words, to synthesize a high-resolution image from a plurality of images captured in time series, it is necessary to remove the blur caused by the imaging apparatus moving during shooting, for example because of hand movement (hereinafter "camera shake"), and the blur on the subject side caused by the subject moving (hereinafter "subject blur").
[0015] Therefore, in order to compensate, by means of the pixel shifting technique, for the reduced resolution that is the drawback of the compound-eye system adopted to achieve a small, thin apparatus, it is essential to remove or correct blur during pixel shifting.
[0016] Several prior-art techniques have been proposed for removing blur as far as possible or for correcting it. One method is to shoot with the camera fixed on a tripod or the like, which can reduce the influence of camera shake.
[0017] Another method detects and corrects camera shake using shake detection means such as an angular velocity sensor. Methods have been proposed in which the camera-shake correction mechanism also serves as the pixel shifting mechanism (for example, Patent Documents 2 and 3).
[0018] In the invention described in Patent Document 2, the shake amount is detected by shake detection means, the pixel shift direction and amount are corrected on the basis of that shake amount, and the image sensor is then moved to perform the pixel shift. This reduces the influence of camera shake.
[0019] As noted above, the method need not be limited to moving the image sensor: in Patent Document 3, camera-shake correction and pixel shifting are performed by moving part of the optical lens in accordance with the detected shake amount, achieving a similar effect. Various shake detection methods have been proposed, such as using an angular velocity sensor (for example, a vibration gyroscope) or comparing images captured in time series to obtain a motion vector.
[0020] As a further method of reducing blur, Patent Document 3 proposes comparing a plurality of images captured in time series and selecting and combining only those images whose positional relationship happens, through camera shake or the like, to be displaced appropriately so that an improvement in resolution can be expected. This method is performed entirely electronically, so no mechanical camera-shake correction mechanism is needed and the imaging apparatus can be made smaller.
[0021] However, the method of fixing the camera on a tripod or the like greatly impairs user convenience, since the tripod must be carried at all times, and is not practical.
[0022] The methods of Patent Documents 2 and 3, which detect camera shake with a sensor and perform shake correction and pixel shifting, require an additional sensor and a complicated optical system, and are therefore disadvantageous for a small, thin apparatus.
[0023] On the other hand, the method of Patent Document 3, which compares a plurality of images captured in time series and selects only those suitable for synthesis, requires no additional sensor, but it relies on the images happening to land at appropriate positions through camera shake or the like, so an improvement in resolution is not guaranteed.
Patent Document 1: Japanese Patent Laid-Open No. 6-261236
Patent Document 2: Japanese Patent Laid-Open No. 11-225284
Patent Document 3: Japanese Patent Laid-Open No. 10-191135
Disclosure of the Invention
[0024] The present invention solves the conventional problems described above, and its object is to provide a compound-eye imaging apparatus that performs pixel shifting and can prevent the effect of pixel shifting from being reduced even when camera shake or subject blur occurs.

[0025] To achieve this object, the compound-eye imaging apparatus of the present invention comprises a plurality of imaging systems, each including an optical system and an image sensor and each having a different optical axis, wherein the plurality of imaging systems include a first imaging system having pixel shifting means for changing the relative positional relationship between the image formed on its image sensor and that image sensor, and a second imaging system in which the relative positional relationship between the image formed on its image sensor and that image sensor is fixed during time-series imaging.
Brief Description of Drawings
[0026] FIG. 1 is a block diagram showing the configuration of an imaging apparatus according to Embodiment 1 of the present invention.
FIG. 2 is a flowchart showing the overall operation of the imaging apparatus according to Embodiment 1 of the present invention.
FIG. 3 is a diagram showing the positional relationship between a comparison source region and an evaluation region according to an embodiment of the present invention.
FIG. 4 is a diagram showing image movement due to camera shake in an embodiment of the present invention.
FIG. 5 is a diagram explaining adjustment of the pixel shift amount in an embodiment of the present invention.
FIG. 6 is a configuration diagram of the imaging optical system, pixel shifting means, and image sensor according to Example 1 of the present invention.
FIG. 7 is a block diagram showing the configuration of an imaging apparatus according to Embodiment 2 of the present invention.
FIG. 8 is a flowchart of the overall operation of the imaging apparatus according to Embodiment 2 of the present invention.
FIG. 9 is a diagram explaining parallax in an embodiment of the present invention.
FIG. 10 is a diagram explaining a method of selecting optimum images in an embodiment of the present invention.
FIG. 11 is another diagram explaining a method of selecting optimum images in an embodiment of the present invention.
FIG. 12 is a diagram showing images stored in the image memory after one pixel shift in Example 2 of the present invention.
FIG. 13 is a diagram showing images captured in time series by the second imaging system, without pixel shifting, and stored in the image memory in Example 3 of the present invention.
FIG. 14 is a diagram showing an image captured in Example 3 of the present invention and subject groups discriminated by the subject discrimination means.
FIG. 15 is a configuration diagram of the imaging optical system, pixel shifting means, and image sensor according to Example 5 of the present invention.
FIG. 16 is a plan view of a piezoelectric fine-movement mechanism according to an embodiment of the present invention.
FIG. 17 is a diagram showing an example of the arrangement of optical systems according to an embodiment of the present invention.
FIG. 18 is a flowchart of the overall operation of the imaging apparatus according to Embodiment 3 of the present invention.
FIG. 19 is a schematic perspective view of the main part of an example of a conventional compound-eye imaging apparatus.
FIG. 20 is a conceptual diagram of resolution enhancement using conventional pixel shifting.
BEST MODE FOR CARRYING OUT THE INVENTION
[0027] According to the present invention, the compound-eye optical system allows the imaging apparatus to be made small and thin, and in addition the blur amount of the imaging apparatus (the amount of camera shake) can be detected by comparing images captured in time series by the second imaging system, which does not perform pixel shifting. Using this blur amount, camera shake can be corrected in the images captured by the first imaging system, which performs pixel shifting. That is, a small, thin imaging apparatus and high resolution can both be achieved.
[0028] The compound-eye imaging apparatus of the present invention preferably further comprises an image memory that stores image information of a plurality of frames captured in time series, blur amount deriving means that derives a blur amount by comparing the image information of the plurality of frames stored in the image memory, and image synthesizing means that synthesizes the images of the plurality of frames stored in the image memory.
[0029] Preferably, the amount of change in the positional relationship produced by the pixel shifting means is determined on the basis of the blur amount obtained by the blur amount deriving means. With this configuration, the pixel shift amount can be adjusted according to the amount of camera shake, which is advantageous for improving resolution.
[0030] Alternatively, the amount of change in the positional relationship produced by the pixel shifting means may be fixed. With this configuration, there is no need to derive the blur amount and adjust the pixel shift amount during shooting, so the time-series shooting interval can be shortened. This reduces camera shake and also makes it possible to shoot subjects that move quickly.
[0031] Preferably, the apparatus further comprises parallax amount deriving means that obtains the magnitude of parallax from the images captured by the plurality of imaging systems with different optical axes, and the image synthesizing means corrects and synthesizes the images on the basis of the parallax amount obtained by the parallax amount deriving means and the blur amount obtained by the blur amount deriving means. With this configuration, when the images are corrected, the parallax, which depends on the distance to the subject, is corrected in addition to the blur, so the resolution of the synthesized image can be raised further. That is, a reduction in resolution dependent on the subject distance can be prevented.
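For orientation, the distance dependence of the parallax between two parallel-axis imaging systems follows the standard pinhole-stereo relation d = f·B/Z; the relation and the sample values below are a general illustration, not numbers taken from this specification:

```python
def parallax_pixels(focal_length_mm, baseline_mm, subject_distance_mm, pixel_pitch_mm):
    """Disparity (in pixels) between two parallel-axis imaging systems.

    Standard pinhole-stereo relation d = f * B / Z, converted to pixels.
    All parameter values used below are illustrative assumptions.
    """
    return (focal_length_mm * baseline_mm / subject_distance_mm) / pixel_pitch_mm

# e.g. f = 2.5 mm, baseline B = 5 mm, subject at Z = 1 m, 3 um pixel pitch
print(parallax_pixels(2.5, 5.0, 1000.0, 0.003))  # about 4.17 pixels
```

Because the disparity varies with Z, a fixed correction cannot remove it for all subjects, which is why a parallax amount derived per scene is needed.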
[0032] Preferably, the apparatus further comprises optimum image selecting means that selects, on the basis of the blur amount obtained by the blur amount deriving means and the parallax amount obtained by the parallax amount deriving means, the image information to be used for synthesis by the image synthesizing means from the image information captured by the first imaging system and the image information captured by the second imaging system, both stored in the image memory. With this configuration, the first and second imaging systems provide images before and after blur, images with parallax, and pixel-shifted images, so images suited to improving resolution can be selected without relying on chance.
[0033] Preferably, the apparatus further comprises means for discriminating different subjects, the blur amount deriving means derives a blur amount for each of the different subjects, and the image synthesizing means synthesizes an image for each of the different subjects. With this configuration, by deriving a blur amount for each subject, the resolution can be improved even when the entire image does not move uniformly because a subject is moving.
[0034] Preferably, the apparatus further comprises means for dividing the image information into a plurality of blocks, the blur amount deriving means derives a blur amount for each of the blocks, and the image synthesizing means synthesizes an image for each of the blocks. This configuration also improves the resolution when the subject moves. Furthermore, subject detection becomes unnecessary, so the processing time can be shortened.
[0035] Preferably, the plurality of imaging systems with different optical axes consist of an imaging system handling red, an imaging system handling green, and an imaging system handling blue; among these, at least one color is handled by two or more imaging systems, and the two or more imaging systems handling the same color include the first imaging system and the second imaging system. With this configuration, a color image with improved resolution can be obtained.
[0036] Embodiments of the present invention are described below with reference to the drawings.
[0037] (Embodiment 1)
FIG. 1 is a block diagram showing the configuration of the imaging apparatus according to Embodiment 1. The system control means 100 is a central processing unit (CPU) that controls the entire imaging apparatus; it controls the pixel shifting means 101, the transfer means 102, the image memory 103, the blur amount deriving means 104, and the image synthesizing means 105.

[0038] A subject to be photographed (not shown) is captured by the first imaging system 106b, which has the pixel shifting means 101, and by the second imaging system 106a, which has no pixel shifting function. Through the imaging optical systems 107a and 107b, the subject is imaged on the image sensors 108a and 108b and converted into image information in the form of a light intensity distribution.
[0039] The pixel shifting means 101 shifts, in the in-plane direction of the image sensor 108b, the relative positional relationship between the image sensor 108b and the subject image formed on it by the imaging optical system 107b. That is, the pixel shifting means 101 can change the relative positional relationship between the image sensor 108b and the light rays incident on it during time-series imaging.
[0040] In contrast, the positional relationship between the imaging optical system 107a and the image sensor 108a does not shift in the in-plane direction of the image sensor 108a. Therefore, the relative positional relationship between the image sensor 108a and the subject image formed on it by the imaging optical system 107a is fixed during time-series imaging. In other words, in the second imaging system 106a, the relative positional relationship between the image sensor 108a and the light rays incident on it is fixed during time-series imaging.
[0041] The transfer means 102 transfers the image information photoelectrically converted by the image sensors 108a and 108b to the image memory 103, which stores the images.
[0042] The first imaging system 106b and the second imaging system 106a are driven individually, and their images are sequentially transferred to and stored in the image memory 103. As described later, images captured by the second imaging system 106a are used to detect the blur amount and adjust the pixel shift amount. For this reason, the second imaging system 106a is made capable of higher-speed driving; that is, it can capture a larger number of images per unit time.
[0043] The blur amount deriving means 104 derives the blur amount by comparing image information captured at different times (in time series) by the second imaging system 106a, i.e., the optical system without pixel shifting. As described in detail later, the pixel shift amount of the first imaging system 106b is set so as to correct this blur amount, and the pixel-shifted image is stored in the image memory 103.
[0044] The image synthesizing means 105 synthesizes the images captured by the first imaging system 106b and the second imaging system 106a and stored in the image memory 103, generating a high-resolution image.
[0045] FIG. 2 is a flowchart showing the overall operation of the imaging apparatus of this embodiment.
Shooting starts in response to the shooting start command of step 200. When shooting starts, the pre-shooting processing of step 201 is performed first: calculation of the optimum exposure time and focusing.
[0046] For example, when the distance between the subject and the imaging apparatus changes, the imaging distance changes and the image becomes blurred. To correct this phenomenon, the distance between the imaging optical system and the image sensor is adjusted (focusing). Focusing can be achieved by exploiting the property that the contrast of the captured image is maximized when the image is in focus, and varying the distance between the imaging optical system and the image sensor (the imaging distance) with a focusing actuator (not shown).
[0047] Contrast need not necessarily be used for focusing; the distance to the subject may instead be measured with a laser, radio waves, or the like.
[0048] The optimum exposure time must also be adjusted in consideration of the ambient light and other conditions. This can be done by detecting the brightness with an illuminance sensor and setting the exposure time accordingly, or by providing a preview function that captures an image before shooting starts. With the preview method, the image captured before shooting starts is converted to grayscale brightness information. If its histogram is biased toward white (bright), the image is judged over-exposed (the exposure time is too long); if the histogram is biased toward black (dark), it is judged under-exposed (the exposure time is too short); the exposure time is adjusted accordingly.
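The histogram-based over/under-exposure judgment described above might be sketched as follows; the thresholds and the pixel fraction are illustrative assumptions, not values from this specification:

```python
import numpy as np

def judge_exposure(gray, low=16, high=240, frac=0.25):
    """Classify a grayscale preview frame (values 0-255).

    If a large fraction of pixels lies near white, the frame is judged
    over-exposed; if near black, under-exposed.  `low`, `high`, and
    `frac` are illustrative thresholds only.
    """
    gray = np.asarray(gray)
    n = gray.size
    if np.count_nonzero(gray >= high) / n > frac:
        return "over-exposed: shorten exposure time"
    if np.count_nonzero(gray <= low) / n > frac:
        return "under-exposed: lengthen exposure time"
    return "exposure OK"

print(judge_exposure(np.full((4, 4), 250)))  # over-exposed case
print(judge_exposure(np.full((4, 4), 128)))  # well-exposed case
```

In a real controller the returned judgment would drive an exposure-time update loop until the mid-tone case is reached.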
[0049] When a preview function is provided, performing this pre-processing before the shooting start command shortens the time from the shooting start command to the start of exposure.
[0050] Next, in step 202, shooting with pixel shifting is performed. This shooting is carried out by repeating the processes of steps 203 through 208.
[0051] Step 203 is the exposure process of the second imaging system 106a, and step 204 is the process of transferring the image captured by the second imaging system 106a to the image memory 103. Images captured at different times by the second imaging system 106a are thus transferred to the image memory 103.

[0052] In step 205, the images stored in the image memory 103 are compared to obtain the blur amount (the blur amount of the imaging apparatus). In step 206, the first imaging system 106b performs a pixel shift based on the pixel shift amount adjusted to reflect the blur amount obtained in step 205, and an image is captured. Step 207 is the exposure process of the first imaging system 106b, and step 208 is the process of transferring the image captured by the first imaging system 106b to the image memory 103.
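The loop of steps 203 to 208 can be sketched as control logic; every function name here is a hypothetical stand-in for the corresponding hardware operation, and the blur correction (subtracting the measured blur from the intended shift) is an illustrative reading of step 206:

```python
def pixel_shift_capture(n_frames, expose_fixed, expose_shifted,
                        derive_blur, nominal_shifts, store):
    """Illustrative control loop for steps 203-208 (not the patent's code).

    expose_fixed()         -> frame from the second (non-shifting) system
    expose_shifted(dx, dy) -> frame from the first system after shifting
    derive_blur(a, b)      -> (bx, by) blur between two fixed-system frames
    nominal_shifts         -> list of intended (dx, dy) pixel shifts
    """
    reference = expose_fixed()                      # steps 203-204
    store(reference)
    for dx, dy in nominal_shifts[:n_frames]:
        current = expose_fixed()                    # steps 203-204 again
        store(current)
        bx, by = derive_blur(reference, current)    # step 205
        # Step 206: correct the intended shift by the measured blur.
        frame = expose_shifted(dx - bx, dy - by)    # step 207
        store(frame)                                # step 208
        reference = current

frames = []
pixel_shift_capture(
    1,
    expose_fixed=lambda: "fixed",
    expose_shifted=lambda dx, dy: ("shifted", dx, dy),
    derive_blur=lambda a, b: (0.1, -0.2),
    nominal_shifts=[(0.5, 0.5)],
    store=frames.append,
)
print(frames[-1])  # shifted frame with blur-corrected offsets
```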
[0053] Among these processes, the derivation of the blur amount is described concretely first. As noted above, when images are captured at different times, camera shake or subject blur in the interval displaces the images. To make use of the ineffective portions of the pixels by pixel shifting, the pixel shift amount must be determined with this blur taken into account.
[0054] Therefore, in step 202, immediately before the pixel shift, images captured at different times by the second imaging system 106a, without pixel shifting, are taken in, the blur amount is calculated, and the result is reflected in the pixel shift amount.
[0055] In the blur amount derivation of step 205, the camera-shake amount of the imaging apparatus is obtained as described above. The concrete method is as follows. When there is camera shake or subject blur, the position at which the subject appears moves between the images captured in time series.
[0056] Over a short time interval, the shape of the subject can be regarded as unchanged, with only its position moving. Therefore, taking one of two images captured at different times as the comparison source image and the other as the comparison destination image, and examining to which part of the comparison destination image a given region of the comparison source image has moved, reveals how the image has moved.
[0057] More specifically, to find which region of the comparison destination image corresponds to a specific region in the comparison source image (hereinafter the "comparison source region"), an evaluation region of the same size as the comparison source region is set in the comparison destination image, and the similarity between the comparison source region and the evaluation region is evaluated. Evaluation regions are then set at successive positions, and the destination of the comparison source region is searched for while performing this evaluation in each evaluation region. The evaluation region most similar to the comparison source region is the destination of the comparison source region.
[0058] An image captured by the image sensor can be regarded as a set of light intensities corresponding to the individual pixels. Taking the upper-left corner of the image as the origin, and letting I(x, y) be the light intensity of the x-th pixel rightward in the horizontal direction and the y-th pixel downward in the vertical direction, the image can be considered the distribution of this light intensity I(x, y).
[0059] FIG. 3 shows the positional relationship between the comparison-source region 301 and the evaluation region 302. In the example of FIG. 3, the comparison-source region is set as a rectangle whose upper-left pixel is at (x1, y1) and whose lower-right pixel is at (x2, y2). In this case, the evaluation region (m, n), displaced m pixels to the right and n pixels downward from the comparison-source region, is the region whose upper-left pixel is at (x1 + m, y1 + n) and whose lower-right pixel is at (x2 + m, y2 + n).
[0060] The evaluation value R(m, n), which expresses the correlation (degree of similarity) between the evaluation region and the comparison-source region, is given by the sum of the absolute values of the light-intensity differences at each pixel, as shown in Equation 1.
[0061] [Equation 1]

R(m, n) = Σ_{y=y1}^{y2} Σ_{x=x1}^{x2} | I1(x, y) − I2(x + m, y + n) |

where I1(x, y) is the light intensity of the comparison-source image and I2(x, y) that of the comparison-destination image.
[0062] The more similar the comparison-source region and the evaluation region are, the smaller the difference in light intensity between corresponding pixels of the two regions. The evaluation value R(m, n) therefore takes a smaller value the greater (more similar) the correlation between the light-intensity distributions (images) of the comparison-source region and the evaluation region.
[0063] Since it is the correlation of regions that is compared, m and n need not be limited to integers. By newly creating data I'(x, y) in which the original light intensities I(x, y) are interpolated between pixels, and calculating the evaluation value R(m, n) by Equation 1 based on I'(x, y), the amount of blur can be derived at sub-integer (sub-pixel) precision. Any interpolation method, such as linear or nonlinear interpolation, may be used.
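As one hedged illustration of the interpolation mentioned in paragraph [0063], the sketch below computes I'(x, y) at a non-integer position by bilinear (linear) interpolation; the function name and the row-major list layout are illustrative assumptions, not taken from the patent.

```python
import math

def bilinear(img, x, y):
    """Interpolated light intensity I'(x, y) at a non-integer position.

    img is a row-major 2-D list of intensities: img[y][x].
    Valid for 0 <= x < width - 1 and 0 <= y < height - 1.
    """
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - x0, y - y0
    # Weighted average of the four pixels surrounding (x, y).
    return ((1 - dx) * (1 - dy) * img[y0][x0]
            + dx * (1 - dy) * img[y0][x0 + 1]
            + (1 - dx) * dy * img[y0 + 1][x0]
            + dx * dy * img[y0 + 1][x0 + 1])
```

Evaluating R(m, n) on intensities interpolated this way is what allows the search over (m, n) to be carried out at sub-pixel steps.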
[0064] From the above, deriving the amount of blur means varying the values of m and n and searching, with sub-pixel precision, for the evaluation region most similar to the comparison-source region. Since the direction of camera shake and subject blur is not limited to any particular direction, negative values of m and n (evaluating regions displaced leftward or upward) must also be examined.

[0065] Although m and n may be varied so that the entire range of the comparison-destination image is evaluated, if the subject image moves greatly due to camera shake or the like and leaves the light-receiving range of the image sensor, the images cannot be combined; it is therefore generally preferable to limit m and n to a predetermined range and shorten the computation time. The combination of m and n that minimizes the evaluation value R(m, n) found in this way is the amount of blur indicating the position, in the comparison-destination image, of the region corresponding to the comparison-source region.
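The search described in paragraphs [0064] and [0065] can be sketched as follows, at integer pitch for brevity (sub-pixel refinement would interpolate intensities first). The function names and the ±search_range limit are illustrative assumptions; negative m, n cover leftward and upward motion as the text requires.

```python
def evaluation_value(i1, i2, x1, y1, x2, y2, m, n):
    """R(m, n) of Equation 1: sum of absolute light-intensity
    differences between the comparison-source region of image i1
    and the evaluation region of image i2 offset by (m, n)."""
    return sum(abs(i1[y][x] - i2[y + n][x + m])
               for y in range(y1, y2 + 1)
               for x in range(x1, x2 + 1))

def find_blur(i1, i2, x1, y1, x2, y2, search_range):
    """Integer-pitch search for the (m, n) minimizing R(m, n),
    limited to +/-search_range to keep computation time short."""
    h, w = len(i2), len(i2[0])
    best = None
    for n in range(-search_range, search_range + 1):
        for m in range(-search_range, search_range + 1):
            # Skip offsets whose evaluation region leaves image i2.
            if y1 + n < 0 or x1 + m < 0 or y2 + n >= h or x2 + m >= w:
                continue
            r = evaluation_value(i1, i2, x1, y1, x2, y2, m, n)
            if best is None or r < best[0]:
                best = (r, m, n)
    return best[1], best[2]
```

Applied to two frames in which the subject has translated, the returned (m, n) is the blur amount of paragraph [0065].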
[0066] The comparison-source region need not be rectangular; an arbitrary shape may be set. Likewise, the evaluation value need not be the sum of absolute intensity differences; any function expressing correlation may be used, for example normalizing each region and then computing the correlation.
[0067] This image-correlation comparison can also be used when obtaining the amount of parallax described later, and further for calibrating the pixel-shifting means. For example, by capturing images before and after a pixel shift performed by the pixel-shifting means and evaluating the displacement between them, it can be confirmed whether the actuator used for pixel shifting is moving accurately under the ambient conditions (temperature, deterioration over time). Such processing ensures reliable pixel shifting by the actuator.
[0068] Camera shake will now be described more concretely with reference to FIG. 4, which shows image movement due to camera shake in the present embodiment. The figure shows an example of photographing a landscape with little subject movement. FIG. 4A shows the case where the camera translates relative to the subject, and diagram A of FIG. 4C shows the resulting change in the image between shooting times 1 and 2. FIG. 4B shows the case where the camera rotates in the horizontal direction, and diagram B of FIG. 4C shows the resulting change in the image between shooting times 1 and 2.
[0069] In both cases, where the imaging apparatus translates as in FIG. 4A and where it rotates as in FIG. 4B, the image can be regarded as translating within the image plane; as FIG. 4C shows, however, rotation that deflects the optical axis affects the image more strongly than translation does. FIG. 4B shows the camera rotating horizontally, but the same applies to rotation in the vertical direction. Camera shake can be corrected by correcting this translation of the image caused by translation or rotation of the imaging apparatus.
[0070] When the imaging apparatus rotates, the image can be regarded as translating; strictly speaking, however, the distance between the subject and the lens partly changes, so the image is slightly distorted. If these slightly distorted images are simply superimposed, portions that should overlap fail to do so, and the resolution-enhancing effect of pixel shifting is reduced.
[0071] Accordingly, resolution is further improved by detecting and correcting the image distortion caused by this rotation. Obtaining the blur amount for only a single evaluation region yields only the translation of the image; by setting a plurality of evaluation regions and obtaining the blur amount at each location, both the camera-shake amount in each evaluation region and the distortion of the image can be obtained. By deforming the images to be superimposed in accordance with this distortion, image degradation is prevented and a high-resolution image can be obtained.
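Paragraph [0071] leaves open how the distortion is recovered from the per-region blur amounts. One common way to do this, shown here purely as an illustrative assumption and not as the patent's stated method, is to fit an affine motion model to the blur vectors measured at several evaluation-region centers:

```python
import numpy as np

def fit_affine_motion(centers, displacements):
    """Least-squares fit of an affine motion model
        u = a*x + b*y + c,   v = d*x + e*y + f
    to blur vectors (u, v) measured at evaluation-region centers
    (x, y). Pure translation (simple camera shake) gives
    a = b = d = e = 0; nonzero linear terms indicate
    rotation-induced distortion to correct before superposition."""
    A = np.array([[x, y, 1.0] for x, y in centers])
    u = np.array([d[0] for d in displacements], dtype=float)
    v = np.array([d[1] for d in displacements], dtype=float)
    pu, *_ = np.linalg.lstsq(A, u, rcond=None)
    pv, *_ = np.linalg.lstsq(A, v, rcond=None)
    return pu, pv  # (a, b, c) and (d, e, f)
```

The fitted coefficients can then be used to warp one image onto the other before the pixel-shifted frames are combined.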
[0072] Next, adjustment of the pixel shift amount will be described concretely. FIG. 5 illustrates this adjustment, showing an enlarged portion of the image sensor together with the intended pixel-shift vector 400, the blur vector 401 detected by the blur-derivation means, and the pixel-shift vector 402 actually applied.
[0073] When there is no camera shake at all, effective use of the invalid portion 405 to the right of the photoelectric converter 404 requires a shift of 0.5 pixel in the X direction and 0 pixel in the Y direction, as in vector 400. Vector 401, on the other hand, shows an example in which camera shake has displaced the image by 1.25 pixels in the X direction and 1.5 pixels in the Y direction. If the pixel shift is performed without adjusting its amount, that is, shifted 0.5 pixel in the X direction as in vector 400, the next image is captured at position 403, the sum of vectors 400 and 401. In that case, a portion different from the right-hand portion of the photoelectric converter 404 that was originally to be exploited is photographed.
[0074] Here, the displacement of the optical axis caused by camera-shake movement is extremely small. For this reason, when the X- and Y-direction displacements of vector 401 are each an integer pitch (an integer multiple of one pixel pitch), the captured image can be regarded as identical to an image whose pixel coordinates have been shifted by that integer number of pixels. In other words, the image captured at shooting time 2 by the second imaging system 106a, which performs no pixel shift, is the same as the image already captured at shooting time 1, merely re-captured on different pixels. Accordingly, in this case the first imaging system 106b, which performs the pixel shift, can photograph the invalid portion 405 to the right of the photoelectric converter 404 by shifting 0.5 pixel in the X direction as in vector 400, just as when there is no camera shake, and the pixel-shift effect is obtained.
[0075] In other words, it is the sub-integer-pitch (fractional) part of the camera shake that affects the pixel-shift effect.
[0076] Therefore, the pixel-shift effect can be obtained by setting a new pixel-shift vector such that, combined with the fractional part of the camera shake, the resulting displacement equals the shift amount of vector 400. In the above example, the fractional part of blur vector 401 is 0.25 pixel in the X direction and 0.5 pixel in the Y direction. A new pixel-shift vector should therefore be set such that the fractional part of the total displacement becomes 0.5 pixel in the X direction and 0 pixel in the Y direction.
[0077] Accordingly, by making the pixel-shift vector 0.25 pixel in the X direction and 0.5 pixel in the Y direction, as in vector 402 of FIG. 5, the positional relationship after combination with camera-shake vector 401 is the same as if the original pixel-shift vector 400 had been applied. That is, according to the present embodiment, the pixel-shift vector is adjusted to match the blur vector, so the pixel-shift effect can always be obtained.
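The adjustment of paragraphs [0075]-[0077] reduces to simple fractional-pitch arithmetic per axis. A minimal sketch, with the function name as an illustrative assumption and all quantities in pixel pitches:

```python
def adjusted_shift(target_frac, blur):
    """New pixel-shift amount (one axis, in pixel pitches) such that
    the fractional part of (blur + shift) equals target_frac.
    Only the fractional part of the blur matters, because an
    integer-pitch displacement reproduces the same pixel/subject
    geometry on different pixels."""
    return (target_frac - blur) % 1.0

# The example of FIG. 5: blur vector 401 = (1.25, 1.5) pixels; the
# intended net offsets are 0.5 px in X and 0 px in Y (vector 400).
shift_x = adjusted_shift(0.5, 1.25)  # 0.25 px
shift_y = adjusted_shift(0.0, 1.5)   # 0.5 px, i.e. vector 402
```

Checking: 1.25 + 0.25 = 1.5 has fractional part 0.5, and 1.5 + 0.5 = 2.0 has fractional part 0, matching the intended vector 400 geometry.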
[0078] The series of steps in step 202 is repeated until the set number of image shifts is completed; the images accumulated in the image memory are then combined in step 209, the image is output in step 210, and shooting ends. A concrete example is shown below.
[0079] (Example 1)
FIG. 6 shows the configuration of the imaging optical system, the pixel-shifting means, and the image sensor according to Example 1. Two aspherical lenses 601a and 601b, each 2.2 mm in diameter, were used as the imaging optical system. The optical axes of the lenses are approximately parallel to the Z axis in FIG. 6 and are spaced 3 mm apart.
[0080] In the first imaging system, which performs the pixel shift, a glass plate 602 is provided on the optical axis of lens 601b. The glass plate 602 can be tilted about the X and Y axes by a piezoelectric actuator and a tilt mechanism (not shown). In this example, the image is shifted in the horizontal direction (X-axis direction) by half the pixel pitch (1.2 µm), doubling the effective number of pixels. The glass plate 602 is BK7 optical glass, 2 mm wide (X-axis direction), 2 mm high (Y-axis direction), and 500 µm thick (Z-axis direction).
[0081] As the image sensor 603, a monochrome CCD with an adjacent-pixel pitch of 2.4 µm was used.
The light-receiving surfaces of the glass plate 602 and the image sensor 603 are approximately parallel to the XY plane in FIG. 6. The image sensor 603 is divided into two regions 603a and 603b so as to correspond one-to-one with the optical systems. By providing a readout circuit and a drive circuit for each of regions 603a and 603b, the images of the two regions can be read out individually.
[0082] When the apparatus according to this example was held by hand and used for shooting, resolution improved in environments with short exposure times and little subject movement, such as outdoor scenery in fine weather.
[0083] Although this example uses tilting of a glass plate as the pixel-shifting means, the method is not limited to this. For example, the image sensor or the lens may be physically moved by a predetermined amount using an actuator employing a piezoelectric element, an electromagnetic actuator, or the like. Even with such other pixel-shifting means, the configuration is the same as in FIG. 6 except for the glass plate 602.
[0084] Also, although this example divides a single image sensor into two different regions, two separate image sensors corresponding one-to-one with the respective optical systems may be used; the image sensor may take any form as long as the plurality of imaging regions correspond one-to-one with the optical systems.
[0085] (Embodiment 2)
FIG. 7 shows the configuration of the imaging apparatus according to Embodiment 2. The main differences from Embodiment 1 are that a parallax-amount derivation means 700 is added, that the image sensor 701 is a single unit so that the first and second imaging systems capture images at substantially the same time, and that an optimum-image selection means 702 is added, which selects the images to be combined on the basis of the parallax amount and the blur amount. Description of parts common to Embodiment 1 is omitted.
[0086] FIG. 8 is a flowchart of the overall operation of the imaging apparatus according to this embodiment. The imaging start command of step 200 and the pre-shooting processing of step 201 are the same as in Embodiment 1.
[0087] In step 800, shooting with pixel shifting is performed. Step 800 repeats the exposure of the image sensor in step 801, the transfer of the image from the image sensor to the image memory 103 in step 802, and the pixel-shift processing in step 803.
[0088] Since the image sensor 701 is shared by the first imaging system 106b and the second imaging system 106a, the two images are captured at almost the same timing. The pixel shift amount is fixed regardless of the amount of camera shake, and is set to the value (for example, 0.5 pixel) at which the invalid pixels are used effectively when there is no camera shake.
[0089] That is, compared with step 202 of FIG. 2 in Embodiment 1, step 800 omits the step of capturing an image with the second imaging system 106a and deriving the blur amount in order to adjust the pixel shift amount (step 205 of FIG. 2). The interval between shooting time 1 (shooting without pixel shift) and shooting time 2 (shooting with pixel shift) can therefore be shortened. This reduces camera shake and also allows shooting when the subject moves faster than in Embodiment 1.
[0090] After the pixel-shifted shooting of step 803 is finished, in step 804 the images accumulated in the image memory 103 that were captured in time series are compared by the same method as step 205 of Embodiment 1 to obtain the blur amount. When the subject moves, the blur amount is not uniform across the image; if a single blur amount is obtained for the whole image and the images are superimposed accordingly, they will not overlap exactly and the resolution will not improve in some places.
[0091] Therefore, by dividing the image into blocks and obtaining the blur amount for each block, the resolution can be improved over the entire image. The division need not be rectangular; the subjects may be detected separately and the image divided per subject, with the blur amount detected for each.
[0092] Next, in step 805, images captured at the same time by imaging systems with different optical axes are compared to obtain the parallax amount. When shooting with imaging systems having different optical axes, the relative position at which the subject image is formed on the image sensor changes according to the distance of the subject, not merely by the distance between the lens centers.
[0093] This difference is called parallax. FIG. 9 illustrates parallax. In FIG. 9, for simplicity, two imaging optical systems 1301a and 1301b with the same characteristics are placed a distance D apart, and their image planes are 1302a and 1302b respectively.

[0094] The imaging optical systems 1301a and 1301b then observe the same subject from different positions, so parallax arises between the images formed on image planes 1302a and 1302b. The parallax amount Δ is given by Equation 2 below, where D is the spacing between the optical axes of imaging optical systems 1301a and 1301b, f is the focal length of imaging optical systems 1301a and 1301b, and A is the distance between the subject and image planes 1302a and 1302b.
[0095] [Equation 2]

Δ = D · f / (A − f)
[0096] When A is sufficiently large and the subject can be regarded as being at infinity, the parallax amount Δ can be expressed as D·f/A and can be regarded as 0. In this case, the images captured by imaging optical systems 1301a and 1301b can be treated as identical, so after correcting for the lens-center spacing D they can be combined as they are.
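Equation 2 can be checked with representative numbers. In the sketch below, the 3 mm lens spacing and 2.4 µm pixel pitch are taken from Example 1, but the f = 2 mm focal length and the subject distances are illustrative assumptions, not values from the text.

```python
def parallax(D, f, A):
    """Parallax of Equation 2: D * f / (A - f), with D the
    optical-axis spacing, f the focal length, and A the distance
    from the subject to the image plane (same unit throughout)."""
    return D * f / (A - f)

# D = 3 mm lens spacing; f = 2 mm assumed; distances in mm.
near = parallax(3.0, 2.0, 102.0)    # subject ~10 cm away: 0.06 mm
far = parallax(3.0, 2.0, 10002.0)   # subject ~10 m away: 0.0006 mm
# With a 2.4-micron pixel pitch, 0.06 mm is 25 pixel pitches
# (clearly non-negligible), while 0.0006 mm is only 0.25 pitch.
```

This illustrates paragraphs [0096] and [0097]: the parallax is negligible for distant subjects but must be corrected for near ones.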
[0097] However, when A is small, the parallax amount Δ takes a finite value and cannot be ignored. That is, the images captured by imaging optical system 1301a and imaging optical system 1301b are displaced by a parallax that depends on the subject distance and cannot be regarded as identical; they therefore cannot simply be superimposed and combined as they are.
[0098] To correct this parallax, the parallax must be obtained for each subject. This can be done by dividing the images captured at the same time with different optical axes into blocks and examining to what position each corresponding block has moved. As when comparing images to obtain the blur amount, this processing is realized by comparing the images using Equation 1 and searching for the location of highest correlation.
[0099] The lens-center spacing D may be calculated from the distance between the lenses, or alternatively a marker subject may be placed at infinity and the position where its image is formed taken as the lens center for the calculation.
[0100] The block-division method is not limited to the above; the number of pixels or the shape may be varied. Unlike blur derivation, the direction in which parallax occurs is restricted to the straight line connecting the origins of the imaging regions (the intersections of the image sensor with the optical axes of the corresponding optical systems), so for parallax detection the combinations of m and n in Equation 1 may be limited to that direction.
[0101] Next, in step 806, a combination of images is selected that will improve resolution when combined, based on the blur amount and the parallax amount. As described above, pixel shifting improves resolution as long as the superimposed pixels are displaced so as to exploit the invalid portions; this applies not only to images shifted in time series by the pixel-shifting means, but equally to images displaced by parallax or camera shake.
[0102] FIG. 10 illustrates the optimum-image selection method. The hatched portions in the figure are subject images formed on the image sensor. At time 1, subject images 1001a and 1001b are formed on the imaging region 1000a of the second imaging system and the imaging region 1000b of the first imaging system. The subject is assumed to lie on the center line of the second imaging system. Because of parallax, the subject image 1001b is formed on imaging region 1000b at a position displaced by Δ.
[0103] When the image of each imaging region is transferred to the image memory 103, it is stored as two-dimensional data. Taking the upper-left point of each image region as the origin and expressing the position of the subject image in coordinates, the upper-left coordinate of subject image 1001a is (ax, ay), and that of subject image 1001b, displaced by the parallax Δ, is (ax + Δ, ay).
[0104] Next, at time 2, the second imaging region is 1002a and the first imaging region is 1002b, with subject images 1003a and 1003b respectively. The first imaging system has been moved 0.5 pixel to the right by the pixel-shifting means. The subject image 1003a on imaging region 1002a is formed at a position displaced (bx, by) from the origin.
[0105] Assuming no subject movement, this displacement is due to camera shake. Transferring each region's image to the image memory and expressing positions in coordinates, the upper-left coordinate of subject image 1003a is (ax + bx, ay + by). Because imaging region 1002b has been pixel-shifted, its coordinate origin is displaced 0.5 pixel to the right; compared with imaging region 1002a of the second imaging system, the coordinate origin in imaging region 1002b of the first imaging system is therefore 0.5 pixel closer to subject image 1003b. As at time 1, subject image 1003b is also displaced rightward by the parallax Δ. The upper-left coordinate of subject image 1003b is therefore (ax + bx + Δ − 0.5, ay + by).
[0106] FIG. 11 is another explanatory diagram of the optimum-image selection method. The displacement amount bx and the parallax amount Δ can each be classified as close to an integer pitch or close to an integer pitch plus 0.5 pixel pitch. Expressed as sub-integer-pitch values, the integer-pitch case gives bx = 0 and Δ = 0, and the integer-pitch-plus-0.5 case gives bx = 0.5 and Δ = 0.5.
[0107] The values of bx and Δ in FIG. 11 are these sub-integer-pitch values. Each value in FIG. 11 is the X coordinate of the subject calculated with the X coordinate ax of the reference imaging region 1000a set to 0. A value of 0 in FIG. 11 indicates that the positional relationship between the subject and the pixels of the image sensor converting it into an image is displaced by an integer pitch relative to the reference imaging region 1000a; a value of 0.5 indicates a displacement of 0.5 pixel pitch. The images corresponding to the 0.5 entries are those that can effectively exploit the invalid portions.
[0108] As FIG. 11 shows, for every combination of parallax amount Δ and camera-shake amount bx, at least one of the four images has a calculated X-coordinate value of 0.5. In every combination, therefore, an image that effectively uses the invalid portions can be obtained; that is, resolution can be improved regardless of camera shake or subject distance.
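The selection of FIG. 11 reduces to examining the fractional part of each image's X offset and picking an image whose fraction is near 0.5. The sketch below is a hedged illustration; the function names, the candidate ordering, and the 0.2 tolerance are assumptions, not from the patent.

```python
def candidate_offsets(bx, delta):
    """X offsets (in pixel pitches) of the four images of FIG. 10
    relative to the reference image, in the order:
    [time 1 / 2nd system, time 1 / 1st system,
     time 2 / 2nd system, time 2 / 1st system]."""
    return [0.0, delta, bx, bx + delta - 0.5]

def select_half_pixel_image(offsets, tol=0.2):
    """Index of the first image whose offset has fractional part
    within tol of 0.5, i.e. whose pixels fall on the invalid
    portions of the reference image; None if no image qualifies."""
    for i, off in enumerate(offsets):
        if abs(off % 1.0 - 0.5) <= tol:
            return i
    return None
```

Running this over all four (bx, Δ) combinations of FIG. 11 confirms that a half-pixel-offset image always exists, which is why resolution improves regardless of camera shake or subject distance.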
[0109] In practice, neither the camera-shake amount nor the parallax amount changes digitally in 0.5-pixel steps; both vary gradually and continuously. The entries given as 0.5 in FIG. 11 may therefore actually be values close to 0.5 (for example, 0.3 to 0.7), and the entries given as 0 may be values close to 0 (for example, below 0.3 or above 0.7). On the other hand, image data must lie on a grid, so linear interpolation or similar processing may be performed when superimposing and combining the images.
[0110] In the present embodiment, the optimum image was selected with reference to the horizontal pixel pitch of the image sensor, but the diagonal pixel pitch may also serve as the reference. Different pixel-pitch references may also be mixed depending on the situation.
[0111] (Example 2)
Example 2 according to Embodiment 2 is described below. Its external configuration is the same as that of FIG. 6 in Example 1, and its optical system and pixel-shifting mechanism are also the same as in Example 1, so duplicate description is omitted.
[0112] Example 2 differs in that the image sensors 603 are exposed at approximately the same time and transfer their images together, and in that the drive amount of the pixel-shifting mechanism is fixed.
[0113] In the first imaging system, which performs the pixel shifting, a 500 µm thick plate of optical glass BK7 (602 in the figure) is placed on the optical axis of the lens 601b. Tilting it by about 0.4 degrees with a piezoelectric actuator and a tilting mechanism shifts the subject image in the horizontal direction (X-axis direction) by 1/2 of the pixel pitch (1.2 µm), doubling the effective number of pixels.
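The 1.2 µm figure is consistent with the standard small-angle expression for the lateral ray displacement of a tilted plane-parallel plate, d ≈ t·θ·(n − 1)/n. The refractive index n ≈ 1.517 for BK7 is an assumed typical visible-band value, not stated in the text:

```python
import math

def plate_lateral_shift(thickness_um, tilt_deg, n):
    """Small-angle lateral displacement of a ray passing through a tilted
    plane-parallel plate: d ~= t * theta * (n - 1) / n."""
    theta = math.radians(tilt_deg)
    return thickness_um * theta * (n - 1.0) / n

# t = 500 um, tilt = 0.4 deg, n ~= 1.517 (assumed index for BK7)
shift_um = plate_lateral_shift(500.0, 0.4, 1.517)
```

The result is about 1.19 µm, matching the half-pitch shift of 1.2 µm quoted above.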
[0114] With this configuration, the images stored in the image memory after a single pixel shift are shown in FIG. 12. The time at which the first image was captured is called shooting time 1, and the time at which the second image was captured after the pixel shift (after tilting the glass plate) is called shooting time 2.
[0115] In this example, a scene in which the movement of the subject is sufficiently small (for example, a landscape) was photographed. Accordingly, there is no subject blur between the image 701 captured at shooting time 1 and the image 703 captured at shooting time 2. If blur exists, it is the case in which camera shake has moved the entire image between times 1 and 2.
[0116] The entire image is therefore assumed to move uniformly, and the camera-shake amount was derived by comparing image 701, captured at time 1 by the second imaging system (which performs no pixel shifting), with image 703, captured at time 2 by the same imaging system. More specifically, the region of image 703 to which the central portion of image 701 (for example, a 100 × 100 pixel region) had moved was found with the image comparison method using (Equation 1), yielding the camera-shake amount. The result was a blur of 2.2 pixels upward and 2.5 pixels horizontally.
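(Equation 1) itself is not reproduced in this section, so the sketch below substitutes a plain sum-of-absolute-differences (SAD) search for the comparison it describes: the central patch of the first image is matched against displaced patches of the second. The patch size, search range, and synthetic data are illustrative assumptions, not values from the text.

```python
import random

def estimate_shift(ref, cur, patch=20, search=4):
    """Match the central patch of `ref` against displaced patches of `cur`
    and return the integer displacement (dy, dx) with the smallest SAD."""
    h, w = len(ref), len(ref[0])
    y0, x0 = (h - patch) // 2, (w - patch) // 2
    best_sad, best = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = sum(abs(cur[y0 + dy + i][x0 + dx + j] - ref[y0 + i][x0 + j])
                      for i in range(patch) for j in range(patch))
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

# synthetic check: a random scene shifted by (2, 3) pixels (wrap-around shift)
random.seed(0)
scene = [[random.random() for _ in range(40)] for _ in range(40)]
moved = [[scene[(i - 2) % 40][(j - 3) % 40] for j in range(40)] for i in range(40)]
```

A real measurement such as the 2.2 and 2.5 pixel values above additionally needs sub-pixel refinement, which is omitted here.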
[0117] In this case, the fractional part by of the 2.2-pixel upward blur (the part below one integer pitch) is 0.2 pixels, which can be regarded as by = 0. The fractional part bx of the 2.5-pixel horizontal blur is 0.5 pixels, so bx = 0.5.
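This reduction to fractional pitch values can be written directly. The 0.3–0.7 band for "near 0.5" follows paragraph [0109]; using Python's modulo also makes negative shifts map to the same fraction, matching paragraph [0148]'s remark that the sign of bx does not change the contribution to resolution.

```python
def fractional_pitch(shift_px):
    """Remainder of a shift below one integer pixel pitch; Python's modulo
    keeps the result in [0, 1), so -2.5 and 2.5 both map to 0.5."""
    return shift_px % 1.0

def classify(frac):
    """Treat fractions near 0.5 (0.3-0.7, per paragraph [0109]) as the
    useful half-pitch case, everything else as an integer-pitch position."""
    return 0.5 if 0.3 <= frac <= 0.7 else 0.0
```

For the values above, `classify(fractional_pitch(2.2))` gives 0.0 and `classify(fractional_pitch(2.5))` gives 0.5.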
[0118] The size of the compared region need not be square and may be set arbitrarily.
[0119] The parallax amount was also obtained by the parallax deriving means from the images 701 and 702 captured at shooting time 1. Because the subject distance was large, the parallax amount was 0.1 pixel or less in every region of the image and can be regarded as Δ = 0. In other words, the distribution of the parallax can be ignored and the parallax treated as uniform across the whole image.
[0120] Based on these blur and parallax amounts, the optimum image selection means selects the images to be combined. The result above corresponds to the Δ = 0, bx = 0.5 entry of FIG. 11, so the optimum image selection means chooses, from the Δ = 0, bx = 0.5 row of FIG. 11, a combination of images corresponding to the 0 entries and the 0.5 entries.
[0121] In this case, the corresponding value is obtained for three of the images, so more than one combination can be selected. When there are multiple combinations, choosing images captured at the same time keeps subject blur small and yields a higher-resolution image.
[0122] In the FIG. 11 example above, the case in which the fractional part by of the Y-direction blur is regarded as by = 0 was described, but by = 0.5 is equally possible. In that case, the images that contribute to the resolution improvement are those captured at a position corresponding to the invalid portion below the photoelectric conversion section, or at a position corresponding to the invalid portion at its lower right.
[0123] Although this example uses tilting of a glass plate as the pixel-shifting means, the method is not limited to this. For example, an actuator using a piezoelectric element, an electromagnetic actuator, or the like may physically move the image sensor or the lens by a predetermined amount.
[0124] Also, although in this example a single image sensor is divided into two different regions, two separate image sensors may instead be used, one for each optical system. The image sensor may take any form as long as the plurality of imaging regions corresponds one-to-one with the optical systems.
[0125] (Example 3)
This example differs from Example 2 in that the subject itself moves (for example, a person or an animal). In this example, the scene is such that the subject moves to a different position between capturing the first image, storing its data in memory, and capturing the second image, so that part of the subject appears in a different place in the first and second images.
[0126] The basic configuration is the same as in Example 2, so duplicate description is omitted. When the subject moves, the entire image does not move uniformly, and the overall movement cannot be estimated from the movement of a partial region of the image as in Example 2.
[0127] Example 3 therefore includes block dividing means that divides the image into a plurality of blocks, and the blur amount is derived for each block. The block dividing means, controlled by the system control means 100, divides the entire first image captured by the second imaging system 106a (without pixel shifting) into 10 × 10 pixel blocks. For each block, the blur deriving means 104 determines to which position in the second image the divided image corresponds; (Equation 1) was used to derive the amount of image movement.
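A minimal sketch of this per-block derivation follows. The 10-pixel block size matches the text; the search range, the plain SAD metric standing in for (Equation 1), and the synthetic data are assumptions. Blocks whose search window would leave the image are reported as undetermined.

```python
import random

def block_motion(img1, img2, block=10, search=3):
    """For each block-sized tile of img1, find the integer displacement into
    img2 that minimizes the sum of absolute differences (SAD)."""
    h, w = len(img1), len(img1[0])
    field = {}
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            if (y < search or x < search or
                    y + block + search > h or x + block + search > w):
                field[(by, bx)] = None  # cannot search without leaving the image
                continue
            best_sad, best = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    sad = sum(abs(img2[y + dy + i][x + dx + j] - img1[y + i][x + j])
                              for i in range(block) for j in range(block))
                    if best_sad is None or sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            field[(by, bx)] = best
    return field

# synthetic check: the whole frame shifted by (1, 2) pixels (wrap-around shift)
random.seed(1)
frame1 = [[random.random() for _ in range(30)] for _ in range(30)]
frame2 = [[frame1[(i - 1) % 30][(j - 2) % 30] for j in range(30)] for i in range(30)]
field = block_motion(frame1, frame2)
```

When subject motion is present, different blocks of the returned field simply carry different displacements, which is the situation FIG. 13C depicts.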
[0128] FIG. 13 shows images captured in time series by the second imaging system 106a (without pixel shifting) and stored in the image memory in this example. FIG. 13A is the image captured at shooting time 1, FIG. 13B is the image captured at shooting time 2, and FIG. 13C shows the image movement amount derived for each block.
[0129] In FIG. 13C, the blocks marked A are blocks for which a blur of 10.1 pixels to the right relative to FIG. 13A was derived, and the blocks marked B are blocks for which a blur of 8.8 pixels to the left was derived. These blur amounts combine camera shake and subject movement.
[0130] The parallax can likewise be divided into blocks and obtained for each block. From the combined blur and parallax amounts, selecting images whose arrangement is at (or near) an integer pitch together with images at (or near) a 0.5-pixel pitch, exactly as in Example 2, picks out images whose combination improves the resolution.
[0131] By compositing the optimum images selected in this way for each block, the resolution can be improved over the entire image even when the subject movement is large.
[0132] Depending on the user's selection, a correction mode may also be provided that corrects only camera shake and deliberately leaves subject blur uncorrected, emphasizing the sense of motion in a dynamic scene.
[0133] When the subject moves, there are portions of the time-series images in which part of the subject is hidden (the block marked X in FIG. 13C). In such a case, a natural-looking result can be obtained by selecting, for that portion only, the image captured at one particular time rather than compositing multiple images.
[0134] Because pixel shifting is a resolution-enhancement technique, it has no effect on smooth subject surfaces or on fine patterns below the resolving power of the lens. On the other hand, when performing pixel shifting, shortening the time between exposures reduces camera shake and subject blur and improves the resolution.
[0135] Therefore, the images divided into blocks are analyzed, and if pixel shifting would have no effect for a given block, processing for that block is skipped, which shortens the shooting interval. In general, regions of high resolution show many high-frequency components under a Fourier transform. Hence, after the image is captured and divided into blocks, the frequency components of each block are analyzed, and if they fall below a predetermined condition, the blur derivation and parallax calculation for that block may be skipped.
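This screening step can be sketched as follows. The cutoff frequency and the use of a row-wise discrete Fourier transform (rather than a full 2-D transform) are simplifying assumptions; the patent only specifies that the frequency content is compared with a predetermined condition.

```python
import cmath

def high_freq_ratio(block, cutoff=0.25):
    """Fraction of non-DC spectral energy above `cutoff` (cycles/pixel),
    computed from a row-wise DFT of the block; 0.0 for a flat block."""
    n = len(block[0])
    high = total = 0.0
    for row in block:
        mean = sum(row) / n
        for k in range(1, n):
            coeff = sum((v - mean) * cmath.exp(-2j * cmath.pi * k * j / n)
                        for j, v in enumerate(row))
            energy = abs(coeff) ** 2
            total += energy
            freq = k / n if k <= n // 2 else (n - k) / n  # fold negative freqs
            if freq > cutoff:
                high += energy
    return high / total if total else 0.0

# a checkerboard block (fine detail) versus a flat block (no detail)
checker = [[(r + c) % 2 for c in range(8)] for r in range(8)]
flat = [[3.0] * 8 for _ in range(8)]
```

A block whose ratio falls below some threshold would be skipped, shortening the interval between exposures as described above.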
[0136] Between exposures there is both the exposure time itself and the time needed to transfer the image from the image sensor to the image memory. The exposure is performed all at once and cannot be omitted, but the processing time can be shortened by transferring only the necessary blocks to the image memory.
[0137] (Example 4)
This example differs from Example 3 in that it uses subject discrimination means for distinguishing different subjects within the image. Using subject discrimination means makes it easy to derive a blur amount for each subject, so the blur amount can be derived accurately even when it varies across the image, as when subject blur is present in addition to camera shake.
[0138] Also, when deriving blur amounts from an image divided into blocks as in Example 3, the blocks can be assigned per subject, or the block size can be changed per subject. It is likewise possible, when compositing, to selectively composite only a particular subject.
[0139] Possible subject discrimination means include: measuring the distance to the subject with radio waves or the like and identifying distinct image regions; distinguishing different subjects by image processing such as edge detection; and extracting subjects from the image using the parallax amount. The means are not limited to these; any concrete method that can distinguish different subjects within the image may be used. The basic configuration of this example is the same as in Example 2, so duplicate description is omitted.
[0140] FIG. 14 shows the images captured in this example and the subject groups identified by the subject discrimination means. In this example, the captured image was divided into 10 × 10 pixel blocks (11 horizontally × 9 vertically), the distance to the subject was measured by radio waves for each block, and different subjects were distinguished. In the discrimination, blocks whose measured distances fell within a certain error range were treated as the same subject; in this example the error range was 5%.
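A simplified sketch of this grouping rule is shown below. The greedy strategy (compare each block against the first distance seen in each group) and the sample distances are assumptions; only the 5% tolerance comes from the text.

```python
def group_by_distance(distances, tol=0.05):
    """Greedy grouping of per-block distance measurements: a block whose
    distance is within `tol` (5%) of a group's reference distance joins
    that group; otherwise it starts a new group."""
    groups = []  # list of (reference_distance, [block indices])
    for i, d in enumerate(distances):
        for ref, members in groups:
            if abs(d - ref) / ref <= tol:
                members.append(i)
                break
        else:
            groups.append((d, [i]))
    return groups

# per-block distances in meters (illustrative values, not those of FIG. 14)
groups = group_by_distance([5.0, 5.1, 2.0, 2.05, 4.9])
```

With these inputs, the blocks near 5 m and the blocks near 2 m form two groups, analogous to subject groups 1 and 2 described below.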
[0141] FIG. 14A is the image captured at shooting time 1 by the second imaging system 106a without pixel shifting, and FIG. 14B is the image captured at shooting time 2 by the second imaging system 106a without pixel shifting. The distance (in meters) measured by radio waves is shown for each block. This distance may instead be calculated per block as the distance A from (Equation 1), using the parallax Δ obtained for each block.
[0142] When the subject distances were measured by radio waves before shooting at time 1, two broad subject groups could be distinguished, as shown in FIG. 14A: subject group 1 at a distance of about 5 meters, and subject group 2 at a distance of about 2 meters. Each group was identified using distances falling within the 5% error range mentioned above.
[0143] When the subject distances were measured by radio waves before shooting at time 2, the subject groups were identified as shown in FIG. 14B. In this example, the blur amount before and after the pixel shift was derived for each of these subject groups.
[0144] When the blur deriving means derived the blur amount of each subject group, a blur of 10.3 pixel pitches toward the left of the figure was obtained for subject group 1 (drawn in the figure as a one-block shift). For subject group 2, the subject blur was so large that part of the group moved out of the image, so the blur amount of the group as a whole could not be derived accurately.
[0145] In this example, therefore, blur correction and image compositing were performed only for subject group 1 in the image captured at shooting time 2. The images were selected by the optimum image selection means in the same way as in Example 2.
[0146] More specifically, the fractional part of subject group 1's 10.3-pixel-pitch blur is 0.3 pixels, and bx in FIG. 11 can be regarded as bx = 0.5.

[0147] Since subject group 1 is moving to the left, the value of bx can also be taken as the negative value −0.5, in which case the 0.5 entries in FIG. 11 become −0.5. With Δ = 0 and bx = −0.5, the value of ax + bx + Δ − 0.5 is −1; since this is an integer pitch it becomes 0, the same as for bx = 0.5.
[0148] That is, the sign of bx only determines whether the invalid pixel region being exploited lies to the right or to the left of the photoelectric conversion section; the contribution to the resolution is the same.
[0149] By distinguishing different subjects with the subject discrimination means as in this example, a blur amount can be derived for each subject, so the blur of the image can be corrected accurately.
[0150] If camera shake and subject blur push part of the image out of the shooting range so that the image cannot be recognized there, the resolution enhancement by pixel shifting is simply not performed for that image region, and only one of the captured images is selected for it.
[0151] (Example 5)
FIG. 15 shows the configuration of the imaging systems, pixel-shifting means, and image sensors of this example. Aspherical lenses 1101a to 1101d with a diameter of 2 mm were used as the imaging optics. The optical axes of the lenses are almost parallel to the Z axis in FIG. 15, with a spacing of 2.5 mm. In front of each lens (on the subject side), color filters 1102a to 1102d are provided as wavelength separation means that transmit only specific wavelengths: 1102a and 1102d transmit green, 1102b transmits red, and 1102c transmits blue.
[0152] 1103a to 1103d are four image sensors corresponding one-to-one with the lenses; they share a common drive circuit and operate in synchronization. A color image is obtained by combining the images captured through the individual optical systems (color components). The pixel pitch of the image sensors is 3 µm in this example.
[0153] The lenses and image sensors are arranged parallel to the X axis of FIG. 15 at equal intervals, and the light-receiving surface of each image sensor is almost parallel to the XY plane of FIG. 15.
[0154] 1104 is a piezoelectric fine-movement mechanism serving as the pixel-shifting means. As the first imaging system, which performs pixel shifting, the image sensors 1103a to 1103c are mounted on the piezoelectric fine-movement mechanism 1104 so that they can be driven in the X and Y directions of the figure. 1103d is independent of the piezoelectric fine-movement mechanism and constitutes the second imaging system, which performs no pixel shifting.
[0155] FIG. 16 is a plan view of the piezoelectric fine-movement mechanism 1104. The image sensors 1103a to 1103c are mounted on the central stage 1201. Stacked piezoelectric elements 1202a and 1202b finely move the stage 1201 in the X-axis direction of the figure, and stacked piezoelectric elements 1203a to 1203d finely move the stage fixing frame 1202 in the Y-axis direction. The image sensors can thus be moved independently along two orthogonal axes within the sensor plane.
[0156] In this example, four exposures were made with each image sensor per shooting command, shifting the pixels between exposures. The first exposure yields four images, one for each of the image sensors 1103a to 1103d. The three image sensors 1103a to 1103c were moved in steps of 0.5 pixel pitch (1.5 µm) in the X and Y directions between exposures. Specifically, the first exposure was made without any pixel shift; the second after moving 0.5 pixel pitch in the X direction; the third after moving 0.5 pixel pitch in the Y direction while holding the X position; and the fourth after moving −0.5 pixel pitch in the X direction while holding the Y position. A high-resolution image is obtained by combining these four images.
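In the ideal case (no camera shake and zero parallax), the four exposures sample the scene at the offsets (0, 0), (0.5, 0), (0, 0.5), and (0.5, 0.5) pixel pitches, and combining them reduces to interleaving. The sketch below illustrates only that ideal interleaving; the parameter order and the round-trip data are illustrative assumptions.

```python
def compose_quad(i00, i10, i01, i11):
    """Interleave four half-pitch-shifted captures into one image with twice
    the sampling density in each direction (ideal, shake-free case)."""
    h, w = len(i00), len(i00[0])
    out = [[0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            out[2 * y][2 * x] = i00[y][x]          # reference position
            out[2 * y][2 * x + 1] = i10[y][x]      # +0.5 pitch in X
            out[2 * y + 1][2 * x] = i01[y][x]      # +0.5 pitch in Y
            out[2 * y + 1][2 * x + 1] = i11[y][x]  # +0.5 pitch in X and Y
    return out

# round trip: subsample a fine 4x4 scene into the four captures
fine = [[r * 4 + c for c in range(4)] for r in range(4)]
caps = [[[fine[y][x] for x in range(dx, 4, 2)] for y in range(dy, 4, 2)]
        for dy, dx in ((0, 0), (0, 1), (1, 0), (1, 1))]
```

Reassembling the four subsampled captures reproduces the fine scene exactly; with real shake and parallax, the selection and interpolation described earlier replace this direct interleaving.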
[0157] First, the blur amount at each shooting time was derived from the series of images captured in time sequence with the lens 1101d of the second imaging system, which performs no pixel shifting. The parallax amount was then obtained by the parallax deriving means from the first images captured by the first imaging system fitted with the green color filter 1102a and the second imaging system fitted with the green color filter 1102d. Images captured through same-color filters are easier to compare, so the parallax amount can be determined more precisely.
[0158] Next, based on the derived blur and parallax amounts, the optimum image selection means selected the images to composite, and the images of each color were combined. To generate a color image, luminance data for the three primary colors is required at every pixel. Since green image data is available from both the first and the second imaging systems, the green resolution can be improved.
[0159] For the red and blue images, on the other hand, there is no image captured without pixel shifting, so depending on the blur and parallax amounts a 0.5-pixel-shifted image (one exploiting the invalid portion) may not be obtained and the resolution may not improve.
[0160] However, since the human eye generally receives most of its information from green, natural scenery and portraits are little affected even if the blue and red resolution is inferior to the green. It is also known that green, blue, and red are strongly correlated in local regions of an image, and this property can be used to estimate the interpolated portions of the blue and red images from the green image.
[0161] Furthermore, if imaging optics without pixel shifting are provided for every color, green, red, and blue, the images selected by the optimum image selection means are guaranteed to include a 0.5-pixel-shifted image that can exploit the invalid portion, so a high-resolution image is reliably obtained.
[0162] In this example the four optical systems were arranged on a single straight line, but the arrangement need not be limited to this. FIG. 17 shows another example of the arrangement of the four optical systems. FIG. 17A is an example in which the four optical systems are placed at the vertices of a rectangle; G0 and G1 denote green, R red, and B blue wavelength separation means (color filters).
[0163] FIG. 17B illustrates the derivation of the parallax amount for the arrangement of FIG. 17A. The green imaging systems placed on the diagonal are used to derive the parallax. Because the four optical systems are arranged in a rectangle, the parallaxes of the red and blue imaging systems are the orthogonal components of the parallax amount of the green imaging systems.
[0164] Although in this example wavelength separation is achieved with color filters placed in front of the lenses, a color filter may instead be placed between the lens and the image sensor, or formed directly on the lens.
[0165] The color filters need not be limited to the three primary colors R, G, and B; complementary-color filters may be used to separate the wavelengths, with the color information inverted and combined by image processing.
[0166] Furthermore, the wavelength separation means is not limited to color filters. For example, if a mechanism that tilts a glass plate is used as the pixel-shifting means, colored glass can serve as that plate. Any concrete means that separates out only a predetermined wavelength component may be used as the wavelength separation means.
[0167] The example above compared images captured through the optical systems handling green to derive the parallax and blur amounts, but green is not essential; the same result can be obtained by placing wavelength separation means of the same color in the first imaging system and the second imaging system.
[0168] (Embodiment 3)
FIG. 18 is a flowchart showing the overall operation of the imaging apparatus of Embodiment 3. Embodiment 2 first decides the pixel-shifting procedure and then shoots a fixed number of times; in Embodiment 3, the number of shots varies according to the captured images.
[0169] In FIG. 18, steps 1500, 1501, 1503, and 1504 are the same as steps 200, 201, 801, and 802 of FIG. 8. The configuration of FIG. 18 differs from that of FIG. 8 in the subsequent steps. In the flowchart of FIG. 18, within the processing step 1502 that repeats pixel shifting and shooting, the blur amount is obtained in step 1505 and the images to composite are selected in step 1506.
[0170] Depending on the blur and parallax amounts, a single shot may yield several of the 0.5-pixel-pitch-shifted images needed for compositing. If the initially planned pixel-shift sequence were simply executed, images with the same positional relationship would be captured again, acquiring images that contribute nothing to the resolution enhancement.
[0171] Therefore, after the images are selected in step 1506, step 1507 identifies the images still missing for synthesis and determines the shift amount needed to obtain them, and step 1508 executes the pixel shift.
[0172] The series of steps in step 1502 is repeated until the images necessary for synthesis have been obtained, at which point step 1502 ends. Thereafter, the parallax amount is derived in step 1509, the images accumulated in the image memory are combined in step 1510, the image is output in step 1511, and shooting ends.
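The adaptive loop of steps 1502-1508 can be sketched as follows. This is an illustrative reading, not the patented implementation: the helper names (`capture`, `estimate_blur`, `estimate_parallax`) and the half-pixel target grid are assumptions, and blur and parallax are reduced to pure translations measured in pixel pitches.

```python
TARGET_OFFSETS = {(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)}  # half-pixel grid

def effective_offset(commanded, blur, parallax):
    """Net sub-pixel offset of a frame, folded into one pixel pitch and
    quantized to the nearest half pixel for coverage bookkeeping."""
    dx = (commanded[0] + blur[0] + parallax[0]) % 1.0
    dy = (commanded[1] + blur[1] + parallax[1]) % 1.0
    return (round(dx * 2) / 2 % 1.0, round(dy * 2) / 2 % 1.0)

def adaptive_capture(capture, estimate_blur, estimate_parallax, max_shots=8):
    """Repeat pixel shifting and shooting (step 1502) until the target
    half-pixel offsets are covered, then return the kept frames."""
    covered = {}
    shift = (0.0, 0.0)
    for _ in range(max_shots):
        frame = capture(shift)                      # shoot (steps 1503-1504)
        blur = estimate_blur(frame)                 # step 1505: blur amount
        par = estimate_parallax(frame)
        off = effective_offset(shift, blur, par)
        covered.setdefault(off, frame)              # step 1506: keep one frame per offset
        missing = TARGET_OFFSETS - covered.keys()
        if not missing:                             # enough images for synthesis
            break
        target = next(iter(missing))                # step 1507: aim for a missing offset
        shift = ((target[0] - blur[0] - par[0]) % 1.0,
                 (target[1] - blur[1] - par[1]) % 1.0)  # step 1508: next pixel shift
    return [covered[k] for k in sorted(covered)]
```

Because a frame is kept only if its effective offset fills a gap, the loop stops as soon as the four half-pixel positions are covered, which is how the number of shots comes to depend on the captured images.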
[0173] This processing reduces the number of pixel shifts, minimizes the influence of camera shake and subject movement, and makes it possible to obtain a higher-resolution image.

Industrial Applicability
[0174] As described above, according to the present invention, even if camera shake or subject blur occurs during pixel shifting, degradation of the pixel-shift effect can be prevented and a high-resolution image can be obtained. The present invention is therefore useful for imaging devices in, for example, digital still cameras and mobile phones.

Claims
[1] A compound-eye imaging apparatus comprising a plurality of imaging systems, each including an optical system and an imaging element and each having a different optical axis, wherein
the plurality of imaging systems include:
a first imaging system having pixel shifting means for changing the relative positional relationship between the image formed on the imaging element and the imaging element; and
a second imaging system in which the relative positional relationship between the image formed on the imaging element and the imaging element is fixed during time-series shooting.
[2] The compound-eye imaging apparatus according to claim 1, further comprising:
an image memory that accumulates image information of a plurality of frames captured in time series;
blur amount deriving means for deriving a blur amount by comparing the image information of the plurality of frames accumulated in the image memory; and
image synthesizing means for synthesizing the images of the plurality of frames accumulated in the image memory.
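The "blur amount deriving means" of claim 2 compares frames held in the image memory. A minimal sketch of one such comparison, using exhaustive block matching with a mean sum-of-absolute-differences criterion; the search range and the choice of SAD are illustrative assumptions, not taken from the specification:

```python
def derive_blur(prev, curr, search=2):
    """Return the integer (dy, dx) translation of `curr` relative to
    `prev` that minimizes the mean absolute difference over the overlap."""
    h, w = len(prev), len(prev[0])
    best, best_shift = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad, n = 0, 0
            # only compare pixels where both frames are defined
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    sad += abs(prev[y][x] - curr[y + dy][x + dx])
                    n += 1
            if n and sad / n < best:       # normalize by overlap size
                best, best_shift = sad / n, (dy, dx)
    return best_shift
```

The returned shift is the per-frame blur amount that the later claims feed into the pixel-shift decision and the image synthesis.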
[3] The compound-eye imaging apparatus according to claim 2, wherein the amount of change in the positional relationship produced by the pixel shifting means is determined based on the blur amount obtained by the blur amount deriving means.
[4] The compound-eye imaging apparatus according to claim 1, wherein the amount of change in the positional relationship produced by the pixel shifting means is fixed.
[5] The compound-eye imaging apparatus according to claim 2, further comprising parallax amount deriving means for obtaining the magnitude of parallax from images captured by the plurality of imaging systems having different optical axes,
wherein the image synthesizing means corrects and synthesizes the images based on the parallax amount obtained by the parallax amount deriving means and the blur amount obtained by the blur amount deriving means.
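Claim 5's correct-then-synthesize step can be illustrated as below. The sketch assumes integer per-frame shifts and uses plain averaging as a stand-in for the pixel-shift merging described in the specification; `correct_frame` and `synthesize` are hypothetical names:

```python
def correct_frame(frame, blur, parallax):
    """Translate `frame` back by (blur + parallax); uncovered pixels -> 0."""
    dy, dx = blur[0] + parallax[0], blur[1] + parallax[1]
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

def synthesize(frames, blurs, parallaxes):
    """Average the corrected frames (a stand-in for pixel-shift merging)."""
    corrected = [correct_frame(f, b, p)
                 for f, b, p in zip(frames, blurs, parallaxes)]
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(c[y][x] for c in corrected) / len(corrected)
             for x in range(w)] for y in range(h)]
```

Aligning each frame by its own blur-plus-parallax displacement before merging is what lets frames from optically separated cameras land on a common sub-pixel grid.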
[6] The compound-eye imaging apparatus according to claim 5, further comprising optimum image selecting means for selecting, based on the blur amount obtained by the blur amount deriving means and the parallax amount obtained by the parallax amount deriving means, the image information to be used for synthesis by the image synthesizing means from among the image information captured by the first imaging system and the image information captured by the second imaging system, both accumulated in the image memory.
[7] The compound-eye imaging apparatus according to claim 2, further comprising means for discriminating different subjects, wherein
the blur amount deriving means derives a blur amount for each of the different subjects, and
the image synthesizing means synthesizes an image for each of the different subjects.
[8] The compound-eye imaging apparatus according to claim 2, further comprising means for dividing the image information into a plurality of blocks, wherein
the blur amount deriving means derives a blur amount for each of the plurality of blocks, and
the image synthesizing means synthesizes an image for each of the plurality of blocks.
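Claim 8's per-block blur derivation can be sketched as below: the frame is tiled into blocks and a shift is estimated independently for each, so a moving subject and a static background receive different blur amounts. Block size, search range, and the smallest-shift tie-breaking rule are illustrative assumptions:

```python
def block_blur_map(prev, curr, block=4, search=2):
    """Return a dict {(by, bx): (dy, dx)} of per-block shifts."""
    h, w = len(prev), len(prev[0])
    # try small shifts first so ties resolve toward no motion
    candidates = sorted(
        ((dy, dx) for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)),
        key=lambda s: abs(s[0]) + abs(s[1]))
    shifts = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            best, best_shift = float("inf"), (0, 0)
            for dy, dx in candidates:
                sad = 0
                for y in range(by, min(by + block, h)):
                    for x in range(bx, min(bx + block, w)):
                        sy, sx = y + dy, x + dx
                        if 0 <= sy < h and 0 <= sx < w:
                            sad += abs(prev[y][x] - curr[sy][sx])
                        else:
                            sad += prev[y][x]  # count uncovered pixels as full error
                if sad < best:
                    best, best_shift = sad, (dy, dx)
            shifts[(by // block, bx // block)] = best_shift
    return shifts
```

The resulting map feeds the per-block synthesis: each block is corrected by its own shift before the frames are merged.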
[9] The compound-eye imaging apparatus according to claim 1, wherein the plurality of imaging systems having different optical axes consist of:
an imaging system that handles red;
an imaging system that handles green; and
an imaging system that handles blue,
wherein, among the imaging systems corresponding to the respective colors, the number of imaging systems corresponding to at least one color is two or more, and
the two or more imaging systems handling the same color include the first imaging system and the second imaging system.
PCT/JP2005/022751 2004-12-16 2005-12-12 Multi-eye imaging apparatus WO2006064751A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/597,794 US7986343B2 (en) 2004-12-16 2005-12-12 Multi-eye imaging apparatus
JP2006519018A JP4699995B2 (en) 2004-12-16 2005-12-12 Compound eye imaging apparatus and imaging method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2004363868 2004-12-16
JP2004-363868 2004-12-16
JP2005154447 2005-05-26
JP2005-154447 2005-05-26

Publications (1)

Publication Number Publication Date
WO2006064751A1 true WO2006064751A1 (en) 2006-06-22

Family

ID=36587806

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/022751 WO2006064751A1 (en) 2004-12-16 2005-12-12 Multi-eye imaging apparatus

Country Status (3)

Country Link
US (1) US7986343B2 (en)
JP (1) JP4699995B2 (en)
WO (1) WO2006064751A1 (en)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7702673B2 (en) 2004-10-01 2010-04-20 Ricoh Co., Ltd. System and methods for creation and use of a mixed media environment
US10192279B1 (en) 2007-07-11 2019-01-29 Ricoh Co., Ltd. Indexed document modification sharing with mixed media reality
US8156116B2 (en) 2006-07-31 2012-04-10 Ricoh Co., Ltd Dynamic presentation of targeted information in a mixed media reality recognition system
JP5186364B2 (en) * 2005-05-12 2013-04-17 テネブラックス コーポレイション Improved virtual window creation method
US9063952B2 (en) * 2006-07-31 2015-06-23 Ricoh Co., Ltd. Mixed media reality recognition with image tracking
US8446509B2 (en) * 2006-08-09 2013-05-21 Tenebraex Corporation Methods of creating a virtual window
CN101682698A (en) * 2007-06-28 2010-03-24 富士通株式会社 Electronic device for improving brightness of recorded image in low luminance environment
US20090051790A1 (en) * 2007-08-21 2009-02-26 Micron Technology, Inc. De-parallax methods and apparatuses for lateral sensor arrays
US20090079842A1 (en) * 2007-09-26 2009-03-26 Honeywell International, Inc. System and method for image processing
US8791984B2 (en) * 2007-11-16 2014-07-29 Scallop Imaging, Llc Digital security camera
US20090290033A1 (en) * 2007-11-16 2009-11-26 Tenebraex Corporation Systems and methods of creating a virtual window
JP5551075B2 (en) * 2007-11-16 2014-07-16 テネブラックス コーポレイション System and method for generating a virtual window
EP2175632A1 (en) * 2008-10-10 2010-04-14 Samsung Electronics Co., Ltd. Image processing apparatus and method
WO2010044383A1 (en) * 2008-10-17 2010-04-22 Hoya株式会社 Visual field image display device for eyeglasses and method for displaying visual field image for eyeglasses
CN102422200B (en) 2009-03-13 2015-07-08 特拉维夫大学拉玛特有限公司 Imaging system and method for imaging objects with reduced image blur
US20100328456A1 (en) * 2009-06-30 2010-12-30 Nokia Corporation Lenslet camera parallax correction using distance information
WO2011019358A1 (en) * 2009-08-14 2011-02-17 Hewlett-Packard Development Company, L.P. Reducing temporal aliasing
US20110069148A1 (en) * 2009-09-22 2011-03-24 Tenebraex Corporation Systems and methods for correcting images in a multi-sensor system
US8390724B2 (en) * 2009-11-05 2013-03-05 Panasonic Corporation Image capturing device and network camera system
CN102118556B (en) * 2009-12-31 2012-12-19 敦南科技股份有限公司 Method for adjusting image capturing frequency in real time by using image sensing device
JP2011257541A (en) * 2010-06-08 2011-12-22 Fujifilm Corp Lens system for stereo camera
US8717467B2 (en) * 2011-01-25 2014-05-06 Aptina Imaging Corporation Imaging systems with array cameras for depth sensing
JP5956808B2 (en) 2011-05-09 2016-07-27 キヤノン株式会社 Image processing apparatus and method
US9058331B2 (en) 2011-07-27 2015-06-16 Ricoh Co., Ltd. Generating a conversation in a social network based on visual search results
JP5762211B2 (en) * 2011-08-11 2015-08-12 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP2013055381A (en) * 2011-08-31 2013-03-21 Ricoh Co Ltd Imaging apparatus, imaging method and portable information terminal device
JP5917054B2 (en) * 2011-09-12 2016-05-11 キヤノン株式会社 Imaging apparatus, image data processing method, and program
JP2013183353A (en) * 2012-03-02 2013-09-12 Toshiba Corp Image processor
US9253433B2 (en) 2012-11-27 2016-02-02 International Business Machines Corporation Method and apparatus for tagging media with identity of creator or scene
US9565416B1 (en) 2013-09-30 2017-02-07 Google Inc. Depth-assisted focus in multi-camera systems
US9426365B2 (en) * 2013-11-01 2016-08-23 The Lightco Inc. Image stabilization related methods and apparatus
US9154697B2 (en) * 2013-12-06 2015-10-06 Google Inc. Camera selection based on occlusion of field of view
KR20160068407A (en) * 2014-12-05 2016-06-15 삼성전기주식회사 Photographing apparatus and control method thereof
DE102015215840B4 (en) * 2015-08-19 2017-03-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. A multi-aperture imaging apparatus, imaging system, and method of providing a multi-aperture imaging apparatus
JP2020096301A (en) * 2018-12-13 2020-06-18 オリンパス株式会社 Imaging apparatus

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001012927A (en) * 1999-06-29 2001-01-19 Fuji Photo Film Co Ltd Parallax image-inputting device and image pickup device
JP2002204462A (en) * 2000-10-25 2002-07-19 Canon Inc Image pickup device and its control method and control program and storage medium
JP2005176040A (en) * 2003-12-12 2005-06-30 Canon Inc Imaging device

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06261236A (en) 1993-03-05 1994-09-16 Sony Corp Image pickup device
US6429895B1 (en) * 1996-12-27 2002-08-06 Canon Kabushiki Kaisha Image sensing apparatus and method capable of merging function for obtaining high-precision image by synthesizing images and image stabilization function
JP3530696B2 (en) * 1996-12-27 2004-05-24 キヤノン株式会社 Imaging device
JPH10341367A (en) * 1997-06-06 1998-12-22 Toshiba Corp Still image generating method and still image fetch system
JPH11225284A (en) 1998-02-04 1999-08-17 Ricoh Co Ltd Image input device
US6611289B1 (en) * 1999-01-15 2003-08-26 Yanbin Yu Digital cameras using multiple sensors with multiple lenses
JP2000350123A (en) * 1999-06-04 2000-12-15 Fuji Photo Film Co Ltd Picture selection device, camera, picture selection method and recording medium
US7262799B2 (en) * 2000-10-25 2007-08-28 Canon Kabushiki Kaisha Image sensing apparatus and its control method, control program, and storage medium
US7286168B2 (en) * 2001-10-12 2007-10-23 Canon Kabushiki Kaisha Image processing apparatus and method for adding blur to an image
JP3866957B2 (en) 2001-10-23 2007-01-10 オリンパス株式会社 Image synthesizer
JP2004048644A (en) * 2002-05-21 2004-02-12 Sony Corp Information processor, information processing system and interlocutor display method
JP4191639B2 (en) 2003-03-28 2008-12-03 川崎 光洋 Three-dimensional image information related thing of five sense information of plane image
US7162151B2 (en) * 2003-08-08 2007-01-09 Olympus Corporation Camera
JP4164424B2 (en) * 2003-08-29 2008-10-15 キヤノン株式会社 Imaging apparatus and method
US7123298B2 (en) * 2003-12-18 2006-10-17 Avago Technologies Sensor Ip Pte. Ltd. Color image sensor with imaging elements imaging on respective regions of sensor elements
US7420592B2 (en) * 2004-06-17 2008-09-02 The Boeing Company Image shifting apparatus for enhanced image resolution
JP2006140971A (en) * 2004-11-15 2006-06-01 Canon Inc Image processing apparatus and image processing method
JP4401949B2 (en) * 2004-11-26 2010-01-20 キヤノン株式会社 Moving picture imaging apparatus and moving picture imaging method
JP4378272B2 (en) * 2004-12-15 2009-12-02 キヤノン株式会社 Imaging device

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008135847A (en) * 2006-11-27 2008-06-12 Funai Electric Co Ltd Motion detecting and imaging apparatus
JP2009049900A (en) * 2007-08-22 2009-03-05 Hoya Corp Imaging device
WO2010013733A1 (en) * 2008-07-31 2010-02-04 富士フイルム株式会社 Compound-eye imaging device
JP2010032969A (en) * 2008-07-31 2010-02-12 Fujifilm Corp Compound-eye imaging apparatus
US8169489B2 (en) 2008-07-31 2012-05-01 Fujifilm Corporation Multi view imaging device and method for correcting image blur
WO2010055643A1 (en) * 2008-11-12 2010-05-20 シャープ株式会社 Imaging device
JP2010118818A (en) * 2008-11-12 2010-05-27 Sharp Corp Image capturing apparatus
JP2012070389A (en) * 2010-03-19 2012-04-05 Fujifilm Corp Imaging device, method, and program
WO2015087599A1 (en) * 2013-12-09 2015-06-18 ソニー株式会社 Image pickup unit, lens barrel and portable device
US10050071B2 (en) 2013-12-09 2018-08-14 Sony Semiconductor Solutions Corporation Imaging unit, lens barrel, and portable terminal
WO2015182447A1 (en) * 2014-05-28 2015-12-03 コニカミノルタ株式会社 Imaging device and color measurement method
JP5896090B1 (en) * 2014-05-28 2016-03-30 コニカミノルタ株式会社 Imaging apparatus and colorimetric method
WO2017094535A1 (en) 2015-12-01 2017-06-08 Sony Corporation Surgery control apparatus, surgery control method, program, and surgery system
US11127116B2 (en) 2015-12-01 2021-09-21 Sony Corporation Surgery control apparatus, surgery control method, program, and surgery system
JP2017220745A (en) * 2016-06-06 2017-12-14 オリンパス株式会社 Imaging apparatus
WO2021182066A1 (en) * 2020-03-11 2021-09-16 ソニー・オリンパスメディカルソリューションズ株式会社 Medical image processing device and medical observation system

Also Published As

Publication number Publication date
JPWO2006064751A1 (en) 2008-06-12
US7986343B2 (en) 2011-07-26
US20070159535A1 (en) 2007-07-12
JP4699995B2 (en) 2011-06-15

Similar Documents

Publication Publication Date Title
JP4699995B2 (en) Compound eye imaging apparatus and imaging method
JP5066851B2 (en) Imaging device
EP2518995B1 (en) Multocular image pickup apparatus and multocular image pickup method
US9025060B2 (en) Solid-state image sensor having a shielding unit for shielding some of photo-electric converters and image capturing apparatus including the solid-state image sensor
JP5012495B2 (en) IMAGING ELEMENT, FOCUS DETECTION DEVICE, FOCUS ADJUSTMENT DEVICE, AND IMAGING DEVICE
EP1841207B1 (en) Imaging device, imaging method, and imaging device design method
US8885026B2 (en) Imaging device and imaging method
JP5901246B2 (en) Imaging device
KR101156991B1 (en) Image capturing apparatus, image processing method, and recording medium
EP2160018B1 (en) Image pickup apparatus
US9282312B2 (en) Single-eye stereoscopic imaging device, correction method thereof, and recording medium thereof
CN103688536B (en) Image processing apparatus, image processing method
JP6906947B2 (en) Image processing equipment, imaging equipment, image processing methods and computer programs
JPH08116490A (en) Image processing unit
JP2009141390A (en) Image sensor and imaging apparatus
JP2012049773A (en) Imaging apparatus and method, and program
JP2010213083A (en) Imaging device and method
JP5348258B2 (en) Imaging device
CN100477739C (en) Multi-eye imaging apparatus
JP2010130628A (en) Imaging apparatus, image compositing device and image compositing method
JP2006135823A (en) Image processor, imaging apparatus and image processing program
JP2008053787A (en) Multiple-lens electronic camera and parallax correcting method of multi-lens electronic camera
JP2006135501A (en) Imaging apparatus
JP2004007213A (en) Digital three dimensional model image pickup instrument
JP2005064749A (en) Camera

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2006519018

Country of ref document: JP

AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2007159535

Country of ref document: US

Ref document number: 10597794

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 200580005071.X

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

WWP Wipo information: published in national office

Ref document number: 10597794

Country of ref document: US

122 Ep: pct application non-entry in european phase

Ref document number: 05814288

Country of ref document: EP

Kind code of ref document: A1

WWW Wipo information: withdrawn in national office

Ref document number: 5814288

Country of ref document: EP