WO2006064770A1 - Imaging Device - Google Patents
- Publication number
- WO2006064770A1 (PCT/JP2005/022792)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- images
- imaging
- imaging optical
- optical systems
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
- H04N25/611—Correction of chromatic aberration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H04N23/13—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
- H04N23/15—Image signal generation with circuitry for avoiding or correcting image misregistration
Definitions
- the present invention relates to an imaging apparatus that synthesizes one high-definition image by performing parallax correction from a plurality of images obtained using a plurality of imaging optical systems.
- a general compound eye imaging apparatus will be described with reference to FIG.
- A subject image is formed by each of the plurality of imaging optical systems 101, 102, and 103 on the corresponding one of the plurality of image sensors 104, 105, and 106.
- the imaging light receiving characteristics of the plurality of image sensors 104, 105, and 106 are different from each other.
- the image sensor 104 images the red (R) wavelength region
- the image sensor 105 images the green (G) wavelength region
- the image sensor 106 images the blue (B) wavelength region.
- The plurality of images captured by the plurality of image sensors 104, 105, and 106 are subjected to image processing by the R signal processing circuit 107, the G signal processing circuit 108, and the B signal processing circuit 109, respectively, and are then combined by the image composition processing circuit 110 and output as a color image.
- The optical axes of the plurality of imaging optical systems 101, 102, and 103 differ from each other, and are arranged inclined symmetrically by an angle θ (radiation angle) with respect to the normal of a subject at a certain position.
- The radiation angle θ is set to be optimal with respect to the subject position b in FIG. 15.
- For the subject positions a and c, however, the radiation angle θ differs from the optimum radiation angle, so a deviation occurs between the images captured by the image sensors 104, 105, and 106.
- FIGS. 16A, 16B, and 16C are diagrams each showing a composite image obtained by the compound-eye imaging device shown in FIG. 15, and FIG. 16A shows a case where the subject at position a in FIG. 15 is imaged.
- FIG. 16B shows a case where the subject at position b is taken, and
- FIG. 16C shows a case where the subject at position c is taken.
- In FIGS. 16A and 16C, because the radiation angles of the plurality of imaging optical systems 101, 102, and 103 are not appropriate for the subject position, the red image (R) and the blue image (B) are shifted left and right with respect to the green image, and the composite image is output with color misregistration. That is, the part where the blue image is missing appears yellow (Ye) due to the green and red images, the part where the red image is missing appears cyan (Cy) due to the green and blue images, and the part where both the blue and red images are missing appears as the green image (G) alone.
- In view of the above-described conventional problems, an object of the present invention is to provide an imaging apparatus having a compound-eye imaging system that can synthesize a high-quality image from a plurality of images containing deviations.
- The imaging apparatus of the present invention includes: a plurality of imaging optical systems; a plurality of imaging regions that correspond to the plurality of imaging optical systems on a one-to-one basis and that respectively capture a plurality of images via the plurality of imaging optical systems; and an image composition unit having a function of aligning deviations between the plurality of images and a function of synthesizing one image from the plurality of images.
- According to the imaging apparatus of the present invention, deviations other than parallax between the plurality of images are aligned before the plurality of images are combined, so the plurality of images can be combined with high accuracy and a high-definition composite image can be obtained.
- FIG. 1 is a block diagram showing a schematic configuration of an imaging apparatus according to an embodiment of the present invention.
- FIG. 2 is a flowchart showing a flow of imaging in the imaging apparatus according to one embodiment of the present invention.
- FIG. 3 shows the relationship between a subject and an image in an imaging apparatus according to an embodiment of the present invention.
- FIG. 4 is a diagram schematically showing an apparatus for measuring a focal length of an imaging optical system in an imaging apparatus according to an embodiment of the present invention.
- FIG. 5A is a diagram showing a focal length measurement method in the case where the principal point of the lens to be tested coincides with the rotation center axis in the apparatus of FIG. 4.
- FIG. 5B is a diagram showing a focal length measurement method in the case where the principal point of the lens to be tested and the rotation center axis do not match in the apparatus of FIG.
- FIG. 6 is a flowchart showing an algorithm for magnification correction processing of the imaging apparatus according to the embodiment of the present invention.
- FIG. 7 is a diagram for explaining interpolation processing of the imaging apparatus according to one embodiment of the present invention.
- FIG. 8 is a diagram showing an example of a positional relationship between each image sensor and a subject image in the imaging apparatus according to one embodiment of the present invention.
- FIG. 9 is a diagram showing an example of an image obtained via each image sensor in the imaging apparatus according to one embodiment of the present invention.
- FIG. 10A is a diagram for explaining a method of detecting the rotation angle of the image sensor in the imaging apparatus according to one embodiment of the present invention.
- FIG. 10B is a diagram for explaining a method of detecting the rotation angle of the image sensor in the imaging apparatus according to one embodiment of the present invention.
- FIG. 11 is a flowchart showing an algorithm for rotation correction processing of the imaging apparatus according to the embodiment of the present invention.
- FIG. 12A is a diagram showing a grid-like subject.
- FIG. 12B is a diagram showing an example of a grid-like subject image captured through the first imaging optical system in the imaging apparatus according to the embodiment of the present invention.
- FIG. 12C is a diagram showing an example of a grid-like subject image imaged through the second imaging optical system in the imaging device according to one embodiment of the present invention.
- FIG. 12D is a diagram showing an example of a grid-like subject image imaged through the third imaging optical system in the imaging device according to one embodiment of the present invention.
- FIG. 13 is a diagram for explaining a method for detecting the distortion amount of the imaging optical system in the imaging apparatus according to the embodiment of the present invention.
- FIG. 14 is a flowchart showing an algorithm for distortion correction processing of the imaging apparatus according to the embodiment of the present invention.
- FIG. 15 is a block diagram showing a schematic configuration of a conventional compound eye imaging apparatus.
- FIG. 16A is a diagram showing a composite image when a white circle subject at position a is imaged by the compound-eye imaging apparatus of FIG.
- FIG. 16B is a diagram showing a composite image when a white circle subject at position b is imaged by the compound-eye imaging apparatus of FIG.
- FIG. 16C is a diagram showing a composite image when the white circle subject at position c is imaged by the compound-eye imaging device of FIG.
- Preferably, the function of aligning the shift between the plurality of images includes a function of aligning the magnifications of the plurality of images.
- Preferably, the imaging apparatus further includes a recording unit that records information on the magnifications of the plurality of imaging optical systems, and the image composition unit aligns the magnifications of the plurality of images using the information on the magnifications of the plurality of imaging optical systems.
- Preferably, the imaging apparatus further includes a recording unit that records information on the focal lengths of the plurality of imaging optical systems, and the image composition unit aligns the magnifications of the plurality of images using the information on the focal lengths of the plurality of imaging optical systems.
- Preferably, the function of aligning the shift between the plurality of images includes a function of aligning the inclinations of the plurality of images.
- Preferably, the imaging apparatus further includes a recording unit that records information on the inclinations of the plurality of imaging regions, and the image composition unit aligns the inclinations of the plurality of images using the information on the inclinations of the plurality of imaging regions.
- Preferably, the function of aligning the shift between the plurality of images includes a function of aligning the distortions of the plurality of images.
- Preferably, the imaging apparatus further includes a recording unit that records information on the distortion amounts of the plurality of imaging optical systems, and the image composition unit aligns the distortions of the plurality of images using the information on the distortion amounts of the plurality of imaging optical systems.
- FIG. 1 is a block diagram showing a schematic configuration of an imaging apparatus according to an embodiment of the present invention.
- In FIG. 1, the same parts as those in FIG. 15, which shows a conventional imaging apparatus, are denoted by the same reference numerals.
- a plurality of imaging optical systems 101, 102, and 103 respectively form subject images on a plurality of image sensors 104, 105, and 106 corresponding one-to-one.
- the imaging light receiving characteristics of the plurality of image sensors 104, 105, and 106 are different from each other.
- the image sensor 104 images the red (R) wavelength region
- the image sensor 105 images the green (G) wavelength region.
- the image sensor 106 images the blue (B) wavelength region.
- the image sensor may be provided with wavelength dependency, or wavelength selectivity may be realized by inserting a filter or the like.
- The plurality of images captured by the plurality of image sensors 104, 105, and 106 are subjected to image processing by the R signal processing circuit 107, the G signal processing circuit 108, and the B signal processing circuit 109, respectively; deviations other than parallax between the plurality of images are then aligned by the image preprocessing circuit 111, after which the images are synthesized by the image synthesis processing circuit 110 and output as a color image.
- The image synthesizing unit 115, which receives a plurality of image signals from the plurality of image sensors 104, 105, and 106 and outputs a synthesized image signal, is described here as divided into an image preprocessing circuit 111 having the function of aligning deviations between the plurality of images, and an image composition processing circuit 110 having the function of creating a single image (composite image) by combining the plurality of images processed by the image preprocessing circuit 111. However, the image preprocessing circuit 111 and the image composition processing circuit 110 need not be clearly distinguished in an actual imaging apparatus.
- FIG. 2 is a flowchart showing the imaging flow of the imaging apparatus of the present embodiment.
- In step S10, imaging is started by pressing an imaging button or the like.
- In step S20, an image acquisition process is performed in which a plurality of images are captured from the plurality of image sensors 104, 105, and 106. This processing is performed by the signal processing circuits 107, 108, and 109.
- In step S30, an intensity correction process is performed on the plurality of images to adjust variations in sensitivity among the plurality of image sensors 104, 105, and 106. This processing is performed by the signal processing circuits 107, 108, and 109.
- In step S40, an origin correction process is performed on the plurality of images, whose main purpose is to correct mounting misalignment of the plurality of imaging optical systems 101, 102, and 103 and the plurality of image sensors 104, 105, and 106. This processing is performed by the signal processing circuits 107, 108, and 109.
- In step S50, a magnification correction process for aligning the magnifications of the plurality of images is performed on the plurality of images. This processing is performed by the image preprocessing circuit 111.
- In step S60, a rotation correction process for aligning the inclinations of the plurality of images is performed on the plurality of images. This processing is performed by the image preprocessing circuit 111.
- In step S70, a distortion correction process for aligning the distortions of the plurality of images is performed on the plurality of images. This processing is performed by the image preprocessing circuit 111.
- In step S80, a parallax correction combining process is performed, in which the amount of parallax between the plurality of images is calculated and corrected so that the plurality of images are combined into one image (composite image). This processing is performed by the image composition processing circuit 110.
- In step S90, the composite image is output to a display device such as a liquid crystal display integrated with the imaging apparatus, or, via a wiring cable, to a device capable of outputting an image, such as a CRT, TV, PC (personal computer), or printer.
- As described above, in the present embodiment, the magnification, inclination, and distortion of the plurality of images are aligned (steps S50, S60, and S70) before the parallax correction combining process (step S80), so the combining process is simplified and a high-definition composite image can be obtained.
- FIG. 2 is a flowchart for the purpose of finally outputting a composite image.
- However, the distance to the subject may also be calculated by the principle of triangulation from the parallax amount obtained in step S80, the focal lengths of the plurality of imaging optical systems 101, 102, and 103, and the distances between their optical axes. Therefore, the imaging apparatus of the present embodiment can acquire the composite image and the distance information to the subject simultaneously. Alternatively, only the distance information to the subject can be acquired without combining the plurality of images; in this case as well, performing the deviation-aligning processes of steps S50, S60, and S70 beforehand improves the accuracy of the distance information.
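The triangulation described above can be sketched as follows; the parallel-axis stereo model (subject distance = focal length × optical-axis spacing ÷ parallax measured on the sensor) and all names are illustrative assumptions, not notation from the patent.

```python
def distance_from_parallax(focal_length_mm, baseline_mm, parallax_mm):
    """Estimate subject distance by triangulation.

    Assumed model: two parallel optical axes separated by `baseline_mm`,
    each with focal length `focal_length_mm`; `parallax_mm` is the shift
    of the subject image between the two sensors, measured on the sensor.
    """
    if parallax_mm <= 0:
        raise ValueError("parallax must be positive for a finite distance")
    return focal_length_mm * baseline_mm / parallax_mm

# e.g. f = 5 mm, optical-axis spacing 10 mm, measured parallax 0.05 mm
# -> subject distance 1000 mm
```

Note how the distance grows as the measured parallax shrinks, which is why aligning the non-parallax deviations first (steps S50 to S70) matters: any residual shift is read as parallax and corrupts the distance estimate.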
- magnification correction process (step S50)
- rotation correction process (step S60)
- distortion correction process (step S70)
- First, the magnification correction process for aligning the magnifications of a plurality of images (step S50 in FIG. 2) will be described.
- FIG. 3 is a diagram simply showing the relationship between the lens 1, the subject 2, and the subject image 3.
- 4 and 5 are representative first and second rays from subject 2
- z is the optical axis
- F is the front focal point
- F ' is the rear focal point
- H is the front principal point
- H' is the rear principal point
- y is the height of subject 2
- y ′ is the height of subject image 3
- f ′ is the rear focal length.
- the magnification β is defined by the following equation (1).
- Equation (1) is transformed as Equation (2) below.
- Therefore, a magnification correction process for aligning the magnifications of the plurality of images (step S50 in FIG. 2) is preferably performed before parallax correction and image synthesis.
- Hereinafter, an example in which the image preprocessing circuit 111 aligns the magnification deviations between the plurality of images will be described.
- First, at the time of assembling the imaging apparatus, the focal length of each imaging optical system is measured, the magnification is obtained by substituting that information into the above equation (2), and the magnification information is recorded as an initial value in the recording unit 112 (see FIG. 1) in the image preprocessing circuit 111. At the time of imaging, the magnification correction process (step S50 in FIG. 2) is performed on the plurality of images using this magnification information.
- Alternatively, the focal length information of each imaging optical system may be recorded in the recording unit 112, and the magnification correction process (step S50 in FIG. 2) may be performed on the plurality of images using the focal length information at the time of imaging.
- FIG. 4 is a diagram showing an apparatus for measuring the focal length of the lens 11 as an imaging optical system.
- 6 is a collimator
- 7 is a moving stage
- 8 is a rotary stage
- 9 is a lens holder
- 10 is parallel light
- 11 is a lens to be examined
- 12 is an imaging camera.
- the lens 11 to be examined is held by the moving stage 7 and the rotating stage 8 through the lens holder 9.
- the moving stage 7 can move the test lens 11 in the optical axis direction
- The rotary stage 8 can rotate the test lens 11 about the rotation center axis 8a within a plane that includes the optical axis and is perpendicular to the paper surface.
- a collimator 6 is disposed on one side of the lens 11 to be examined, and an imaging camera 12 is disposed on the other side.
- the parallel light 10 formed by the collimator 6 enters the imaging camera 12 through the lens 11 to be examined.
- First, the test lens 11 is moved by the moving stage 7 in a direction parallel to the optical axis so that the principal point 11a of the test lens 11 coincides with the rotation center axis 8a of the rotary stage 8. By tilting the test lens 11 while the principal point 11a and the rotation center axis 8a coincide, the focal length f' of the test lens 11 can be measured.
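For the apparatus of FIG. 4, a common way to obtain the focal length is the nodal-rotation relation: when the lens pivots about its principal point, the image of the collimated beam shifts by f·tan(tilt angle) in the focal plane. The patent does not reproduce the formula here, so the relation and all names in the sketch below are assumptions.

```python
import math

def focal_length_from_rotation(image_shift_mm, tilt_angle_deg):
    """Estimate focal length from the collimator test of FIG. 4.

    Assumed relation (standard nodal-slide measurement): tilting the lens
    by `tilt_angle_deg` about its principal point shifts the image of the
    collimated beam by `image_shift_mm`, so f = shift / tan(tilt).
    """
    return image_shift_mm / math.tan(math.radians(tilt_angle_deg))
```

A small tilt (a few degrees) keeps the tangent nearly linear, which is why the principal point must sit on the rotation center axis 8a before the tilt is applied.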
- The step of measuring the magnification or focal length of each imaging optical system need only be performed once, in the initialization step at the time of assembling the imaging apparatus or a product on which the imaging apparatus is mounted.
- In the magnification correction process using the magnification or focal length information of the plurality of imaging optical systems, a new image may be reconstructed by the image preprocessing circuit 111 and its image signal substituted for the original image signal; alternatively, the image preprocessing circuit 111 may manage only the magnification or focal length information, and the next-stage image composition processing circuit 110 may manage both the image signal and the magnification or focal length information.
- Next, an example of the magnification correction process (step S50 in FIG. 2) performed for each shot will be described with reference to FIG. 6.
- In step S51, a plurality of images (photodiode (hereinafter, PD) outputs) captured through the plurality of imaging optical systems are temporarily stored in a two-dimensional spatial information recording unit (not shown) in the image preprocessing circuit 111, and coordinates (x, y), whose origin is the origin extracted in the origin correction process of step S40 (see FIG. 2), are assigned to the pixels constituting each image.
- In step S52, the (x, y) coordinate system is converted into an (r, θ) polar coordinate system centered on the origin. This conversion can be easily performed by a coordinate conversion method using the following equations (3) and (4).
- In step S53, magnification correction is performed.
- If β2/β1 > 1 or β3/β1 > 1, then R < r from equation (5), and the image after magnification correction becomes smaller than the image extracted from the image sensor. The reference image is therefore chosen so that the magnification β1 of the imaging optical system corresponding to it satisfies β2/β1 ≤ 1 and β3/β1 ≤ 1.
- In step S54, the polar coordinate system (R, θ′) is converted to the (X, Y) coordinate system.
- This conversion can be performed by using the following equations (7) and (8).
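Steps S52 to S54 above can be sketched as follows. The patent's equations (3) to (8) are not reproduced in this text, so this sketch assumes the standard Cartesian/polar conversions r = √(x² + y²) and θ = atan2(y, x), a radial scaling R = (β1/βk)·r that maps image k onto the magnification β1 of the reference image, and the inverse conversions X = R·cos θ, Y = R·sin θ; the function and variable names are illustrative.

```python
import math

def correct_magnification(points, beta_ref, beta_k):
    """Scale pixel coordinates of image k to the reference magnification.

    points: iterable of (x, y) pixel coordinates relative to the image origin.
    Assumed pipeline: (x, y) -> (r, theta) -> (R, theta) with
    R = (beta_ref / beta_k) * r -> (X, Y).
    """
    corrected = []
    for x, y in points:
        r = math.hypot(x, y)          # assumed form of eq. (3)
        theta = math.atan2(y, x)      # assumed form of eq. (4)
        R = (beta_ref / beta_k) * r   # assumed radial scaling, eqs. (5)/(6)
        X = R * math.cos(theta)       # assumed form of eq. (7)
        Y = R * math.sin(theta)       # assumed form of eq. (8)
        corrected.append((X, Y))
    return corrected
```

Under this scaling, βk/β1 > 1 gives R < r and shrinks image k toward the origin, which matches the text's reason for choosing the reference image so that β2/β1 ≤ 1 and β3/β1 ≤ 1.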
- In step S55, data interpolation processing is performed.
- Most of the image data after the coordinate conversion in step S54 is data at a position (between pixels) different from the grid-point pixel position. Therefore, it is necessary to create image data at the pixel position by performing interpolation processing using image data around the pixel.
- Image data at pixel P can be obtained by the following equation (9).
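The contents of equation (9) are not reproduced in this text; a common choice for creating image data between grid pixels is bilinear interpolation over the four surrounding samples, sketched here under that assumption (function and parameter names are illustrative).

```python
def bilinear(p00, p10, p01, p11, fx, fy):
    """Interpolate image data at a point lying between four grid pixels.

    p00..p11 are the values at the four surrounding grid points, and
    (fx, fy) in [0, 1) is the fractional position within that cell.
    Bilinear weighting is an assumption; the patent's equation (9) may differ.
    """
    return (p00 * (1 - fx) * (1 - fy) + p10 * fx * (1 - fy)
            + p01 * (1 - fx) * fy + p11 * fx * fy)
```

The same interpolation serves steps S55, S65, and S75 alike, since each correction leaves resampled data between the grid-point pixel positions.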
- With the above, the magnification correction process (step S50 in FIG. 2) ends.
- In the above description, the magnification or focal length of each of the plurality of imaging optical systems is measured directly on the actually assembled imaging apparatus, and the magnifications of the plurality of images are aligned using that information; however, the present invention is not limited to this. For example, if the magnification or focal length of each imaging optical system can be regarded as substantially equal to its design value, the magnification correction process may be performed using the design value.
- Next, the rotation correction process for aligning the inclinations of a plurality of images (step S60 in FIG. 2) will be described.
- FIG. 8 is a view showing the positional relationship between the plurality of image sensors 104, 105, and 106 of the imaging apparatus shown in FIG. 1 and the subject image.
- the same parts as those in Fig. 1 are given the same numbers.
- a cross-shaped subject image 120 is formed on each of the image sensors 104, 105, and 106.
- The image sensor 105 is installed at an angle θ with respect to the other image sensors 104 and 106. This assumes a variation that can occur when the image sensors 104, 105, and 106 are mounted on the same substrate.
- FIG. 9 is a diagram showing, on a display, the images of the subject captured by the image sensors 104, 105, and 106 and processed by the R signal processing circuit 107, the G signal processing circuit 108, and the B signal processing circuit 109. The screen displays 104′, 105′, and 106′ correspond to the image sensors 104, 105, and 106, in order. According to FIG. 9, the subject image 120 captured by the image sensor 105 is rotated by θ compared with the subject images 120′ captured by the other image sensors 104 and 106.
- In this state, the image 120 obtained by the image sensor 105 cannot be superimposed on the images 120′ obtained by the other image sensors 104 and 106. Therefore, a rotation correction process for aligning the inclinations of the plurality of images (step S60 in FIG. 2) is preferably performed before parallax correction and image synthesis.
- First, at the time of assembling the imaging apparatus, an adjustment image having a remarkably large contrast difference is captured as a subject; for example, a black crosshair on a white background. The width of the crosshair lines is preferably at least twice the pixel pitch of the image sensors 104, 105, and 106, in consideration of the magnification of the optical system.
- An example of a method for detecting the rotation angle of an output image when a crosshair is used as the subject will be described with reference to FIGS. 10A and 10B.
- As shown in FIG. 10A, the cross line 131 is displayed on the display device 130, which has H pixels in the horizontal direction and V pixels in the vertical direction, rotated by the rotation angle θ about the origin Pc.
- Next, the output value of each pixel is detected while moving the target pixel in the vertical direction, with reference to pixel A, which is located H/2 pixels away from the origin Pc in the horizontal direction.
- An outline of the detected pixel output values is shown in FIG. 10B, in which the horizontal axis is the number of pixels in the vertical direction with reference to pixel A, and the vertical axis is the output value (pixel output) of each pixel.
- The rotation angle θ of the horizontal line of the crosshair with respect to the horizontal direction of the image sensor is given by equation (10).
- Equation (10) applies when the pixel pitches in the horizontal and vertical directions are the same; if they differ, the result must be multiplied by the pitch ratio.
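Assuming equation (10) has the standard form θ = arctan(offset / (H/2)) — the vertical offset at which the horizontal crosshair line crosses the scan through pixel A, divided by A's distance of H/2 pixels from the origin — the detection could be sketched as follows; the names and the pitch-ratio handling are illustrative.

```python
import math

def rotation_angle_deg(vertical_offset_px, horizontal_pixels, pitch_ratio=1.0):
    """Estimate the sensor rotation angle from the crosshair scan of FIG. 10.

    vertical_offset_px: pixels from pixel A to where the horizontal
    crosshair line crosses the vertical scan.
    horizontal_pixels: H, so pixel A sits H/2 pixels from the origin Pc.
    pitch_ratio: vertical/horizontal pixel pitch, for unequal pitches.
    """
    return math.degrees(
        math.atan(vertical_offset_px * pitch_ratio / (horizontal_pixels / 2)))
```

Because the lever arm is H/2 pixels long, a one-pixel crossing offset on a 2000-pixel-wide sensor already resolves a tilt of well under a tenth of a degree.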
- This measurement is performed for all the image sensors 104, 105, and 106, and the inclination information of each image sensor is recorded as initial values (θ1, θ2, and θ3) in the recording unit 112 (see FIG. 1) in the image preprocessing circuit 111.
- At the time of imaging, the rotation correction process (step S60 in FIG. 2) is performed on the plurality of images using this inclination information.
- the step of measuring the inclination of each image sensor need only be performed once in the initialization step when assembling the imaging device or the product on which it is mounted.
- Next, an example of the rotation correction process (step S60 in FIG. 2) performed for each shot will be described with reference to FIG. 11.
- In step S61, a plurality of images (PD outputs) captured through the plurality of imaging optical systems are temporarily stored in a two-dimensional spatial information recording unit (not shown) in the image preprocessing circuit 111, and coordinates (x, y), whose origin is the origin extracted in the origin correction process of step S40 (see FIG. 2), are assigned to the pixels constituting each image.
- In step S62, the (x, y) coordinate system is converted into an (r, θ) polar coordinate system centered on the origin. This conversion can be easily performed by a coordinate conversion method using the following equations (3) and (4).
- In step S63, rotation correction is performed. The coordinates (r, θ) are converted into coordinates (R, θ′) using the following equations (11) and (12).
- In step S64, the polar coordinate system (R, θ′) is converted to the (X, Y) coordinate system.
- This conversion can be performed by using the following equations (7) and (8).
- In step S65, data interpolation processing is performed.
- Most of the image data after the coordinate conversion in step S64 is data at a position (between the pixels) different from the grid-point pixel position. Therefore, it is necessary to create image data at the pixel position by performing interpolation processing using image data around the pixel.
- the image data at pixel P can be obtained by the following equation (9).
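Steps S62 to S64 above can be sketched as follows, assuming equations (11) and (12) take the standard form R = r and θ′ = θ − (measured tilt), i.e. the radius is unchanged and the polar angle is shifted to cancel the sensor's inclination; names are illustrative.

```python
import math

def correct_rotation(points, tilt_rad):
    """Rotate pixel coordinates of one image to cancel its sensor tilt.

    points: iterable of (x, y) coordinates relative to the image origin.
    Assumed forms of eqs. (11)/(12): R = r, theta' = theta - tilt_rad.
    """
    corrected = []
    for x, y in points:
        r = math.hypot(x, y)            # eq. (3), assumed form
        theta = math.atan2(y, x)        # eq. (4), assumed form
        theta2 = theta - tilt_rad       # eqs. (11)/(12), assumed form
        corrected.append((r * math.cos(theta2), r * math.sin(theta2)))
    return corrected
```

Working in polar coordinates makes the rotation a pure angle shift, which is presumably why the flow shares the same (x, y) ↔ (r, θ) conversions with the magnification correction of FIG. 6.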
- In the above description, the rotation angles of the plurality of images output from the plurality of image sensors are detected using the actually assembled imaging apparatus, and the inclinations of the plurality of images are aligned using that information; however, the present invention is not limited to this.
- Next, the distortion correction process for aligning the distortions of a plurality of images (step S70 in FIG. 2) will be described.
- A normal optical lens exhibits a phenomenon in which a straight object is imaged as a curved line at the periphery of the imaging area (this is called distortion). An example of the distortion caused by the imaging optical systems of the imaging apparatus of the present invention will be described with reference to FIGS. 12A to 12D.
- FIG. 12A shows a grid-like subject.
- FIG. 12B is an image of the grid-like subject of FIG. 12A captured by the image sensor 104 via the imaging optical system 101, FIG. 12C is an image of the grid-like subject of FIG. 12A captured by the image sensor 105 via the imaging optical system 102, and FIG. 12D is an image of the grid-like subject of FIG. 12A captured by the image sensor 106 via the imaging optical system 103.
- all the images in FIGS. 12B to 12D are deformed into a barrel shape.
- The image in FIG. 12B and the image in FIG. 12D are almost the same, but the image in FIG. 12C has a slightly different shape: its distortion is larger than that of the images in FIGS. 12B and 12D.
- Therefore, a distortion correction process for aligning the distortions of the plurality of images (step S70 in FIG. 2) is preferably performed before parallax correction and image synthesis.
- the distortion amount of each imaging optical system is measured at the time of assembling the imaging apparatus, and the information is recorded as an initial value in the recording unit 112 (see FIG. 1) in the image preprocessing circuit 111. If the actual distortion amount of each imaging optical system is considered to be almost the same as the design value, the design value may be recorded in the recording unit 112 (see FIG. 1).
- For example, the grid-like subject of FIG. 12A, which consists of straight lines orthogonal to each other, is photographed, and the distortion amount of each imaging optical system can be measured from the image obtained from each image sensor.
- the output image of the image sensor is as shown by the solid line 142 in FIG. 13.
- a part of the grid-like subject, enlarged (or reduced) according to the magnification of the imaging optical system, is shown by the broken line 141.
- a point P on the subject 141 corresponds to a point P' on the output image 142; a point P0 indicates the position of the optical axis.
- let y be the distance from the optical axis P0 to the point P on the subject 141; owing to the distortion of the imaging optical system, the distance from the optical axis P0 to the corresponding point P' on the output image 142 becomes y'.
- the distortion amount D of the imaging optical system is obtained by the following equation (13): D(y) = {(y' - y) / y} x 100 (%)
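As one illustration, equation (13) can be evaluated as in the following sketch (the function name and the sample values are assumptions for illustration only, not part of the disclosure):

```python
def distortion_amount(y_subject: float, y_image: float) -> float:
    """Distortion amount D in percent, per equation (13):
    D = (y' - y) / y * 100, where y is the distance from the optical
    axis to a point on the scaled subject 141 and y' is the distance
    to the corresponding point on the output image 142."""
    return (y_image - y_subject) / y_subject * 100.0

# A point ideally at radius 1.00 mm that images at radius 0.98 mm
# corresponds to about -2% (barrel) distortion.
print(distortion_amount(1.00, 0.98))
```

A negative D corresponds to barrel distortion, a positive D to pincushion distortion.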
- This measurement is performed for all the imaging optical systems 101, 102, 103, and the distortion amounts of the imaging optical systems 101, 102, 103 are recorded as initial values (D1(y), D2(y), D3(y)) in the recording unit 112 (see FIG. 1) in the image preprocessing circuit 111.
- distortion correction processing (step S70 in FIG. 2) is performed on a plurality of images using this distortion amount information.
- the step of measuring the distortion amount of each imaging optical system needs to be performed only once, in the initialization step when assembling the imaging device or a product on which the imaging device is mounted.
- an example of the distortion correction process (step S70 in FIG. 2) performed at each shooting will be described with reference to FIG. 14.
- in step S71, a plurality of images (PD outputs) captured using the plurality of imaging optical systems are temporarily stored in a two-dimensional spatial information recording unit (not shown) in the image preprocessing circuit 111.
- at this time, coordinates (x, y), whose origin is the origin extracted in the origin correction process in step S40 (see FIG. 2), are assigned to the pixels that constitute each image.
- in step S72, the (x, y) coordinate system is converted into an (r, θ) polar coordinate system centered on the origin. This conversion can easily be performed by the coordinate conversion method using the following equations (3) and (4).
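The conversion of step S72 can be sketched with the standard polar-coordinate formulas (assumed here, since the bodies of equations (3) and (4) are not reproduced in this excerpt):

```python
import math

def to_polar(x: float, y: float) -> tuple:
    """Convert pixel coordinates (x, y), with the corrected origin at
    (0, 0), into polar coordinates (r, theta) centered on the origin."""
    r = math.hypot(x, y)        # r = sqrt(x^2 + y^2)
    theta = math.atan2(y, x)    # theta = atan2(y, x)
    return r, theta

r, theta = to_polar(3.0, 4.0)
print(r, theta)  # r = 5.0
```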
- in step S74, the polar coordinate system (R, θ') is converted into the (X, Y) coordinate system.
- This conversion can be performed by using the following equations (7) and (8).
- step S75 data interpolation processing is performed.
- Most of the image data after the coordinate conversion in step S74 is data at a position (between the pixels) different from the pixel position of the lattice point. Therefore, it is necessary to create image data at the pixel position by performing interpolation processing using image data around the pixel.
- the image data at pixel P can be obtained by the following equation (9).
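Since the body of equation (9) is not reproduced in this excerpt, the following sketch shows bilinear interpolation, one common way to realize the interpolation of step S75 (the function name and image layout are assumptions):

```python
def bilinear(img, X: float, Y: float) -> float:
    """Interpolate image data at a non-grid position (X, Y) from the
    four surrounding grid pixels (step S75).  img is indexed as
    img[row][col]; X is the column coordinate, Y the row coordinate."""
    x0, y0 = int(X), int(Y)
    dx, dy = X - x0, Y - y0
    return ((1 - dx) * (1 - dy) * img[y0][x0]
            + dx * (1 - dy) * img[y0][x0 + 1]
            + (1 - dx) * dy * img[y0 + 1][x0]
            + dx * dy * img[y0 + 1][x0 + 1])

img = [[0.0, 10.0],
       [20.0, 30.0]]
print(bilinear(img, 0.5, 0.5))  # midpoint of the four pixels: 15.0
```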
- with the above, the distortion correction process (step S70 in FIG. 2) is completed.
- the distortion amount of each of the plurality of imaging optical systems is directly measured using the actually assembled imaging device, and the distortion of the plurality of images is aligned using the information.
- the present invention is not limited to this.
- the distortion correction processing is performed using the design value. Even in this case, the same effect as described above can be obtained.
- the reference distortion amount may be the distortion amount of the imaging optical system having the smallest distortion among the plurality of imaging optical systems, or may be a distortion amount smaller than any of their distortion amounts.
- a composite image in which distortion is sufficiently corrected can be obtained as an additional effect.
- the imaging optical system can be easily designed, and a thin imaging optical system that has not been conventionally available can be realized.
- when obtaining distance information to the subject, it is preferable to obtain the amount of parallax using images on which the distortion correction processing has not been performed, correct the obtained amount of parallax using the distortion amounts, and calculate the distance information to the subject from the corrected amount of parallax. In this case, only the amount of distortion at the specific pixels used when obtaining the amount of parallax needs to be considered, so the calculation time can be shortened. In contrast, when a plurality of images are first subjected to distortion correction processing and the amount of parallax is then obtained from the corrected images, the distortion correction calculation must be performed for all pixel data, which lengthens the processing.
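A hypothetical sketch of this per-pixel approach (the inversion of equation (13) and all names and values here are assumptions, since the patent's exact correction formula is not reproduced in this excerpt):

```python
def undistort_radius(y_image: float, d_percent: float) -> float:
    """Recover the ideal radial position from the measured one by
    inverting D = (y' - y) / y * 100 (equation (13)):
    y = y' / (1 + D / 100)."""
    return y_image / (1.0 + d_percent / 100.0)

# Correct only the matched points used for the parallax measurement,
# instead of undistorting every pixel of every image.
y1 = undistort_radius(0.98, -2.0)   # point radius in image 1 (assumed values)
y2 = undistort_radius(1.01, -1.0)   # same point's radius in image 2
parallax = y2 - y1
print(parallax)
```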
- the variation in the distortion amount among the plurality of imaging optical systems is about ⁇ 5% or less.
- if the variation in the distortion amount is larger than ±5%, the calculation for obtaining the amount of parallax from the plurality of images does not function normally, and the possibility that an accurate amount of parallax cannot be obtained increases.
- it is preferable to perform a step of measuring the magnification or focal length of each of the plurality of imaging optical systems and recording it as an initial value in the recording unit 112 (magnification initial value setting step).
- it is also preferable to perform a step of measuring the distortion of each of the plurality of imaging optical systems and recording it as an initial value in the recording unit 112 (distortion initial value setting step).
- a preferable execution order of these three initial value setting steps will be described below.
- in the tilt initial value setting step, if an appropriate subject is used, such as the cross-shaped subject described in the second embodiment, the tilt can be measured accurately even if the magnification and distortion vary among the plurality of captured images. Therefore, it is preferable to perform the tilt initial value setting step first.
- in the distortion initial value setting step, the captured image 142 is compared with the subject 141 (see FIG. 13). For this comparison, information on the magnification or focal length of the imaging optical system is required. Therefore, it is preferable that the magnification initial value setting step be performed prior to the distortion initial value setting step.
- that is, it is preferable to perform the tilt initial value setting step, the magnification initial value setting step, and the distortion initial value setting step in this order.
- This subject consists of a first straight line and a second straight line that are orthogonal to each other.
- the first straight line is preferably parallel to the horizontal direction and long enough to extend out of the field of view of the imaging optical system.
- This first straight line is used to measure the tilt of the image sensor (see FIG. 10A).
- the second straight line is preferably parallel to the vertical direction, and its length preferably falls within the field of view of the imaging optical system. The length of the second straight line is preferably set so that the length of its image formed on the image sensor is approximately 80% of the length of the vertical (short) side of the effective imaging region of the image sensor.
- the line widths of the first and second straight lines are preferably set so that the line width of the image formed on the image sensor is at least twice the pixel pitch of the image sensor.
- the colors of the first and second straight lines are preferably black in order to significantly increase the contrast ratio, and the background is preferably white.
- in this case, it is necessary to interpolate and extract the information of the other color images (for example, the green image) from the information of any one of the obtained red, blue, and green images (for example, the red or blue image).
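For a single shared sensor, such per-pixel color interpolation can be sketched as follows (assuming, purely for illustration, an RGGB Bayer-type arrangement, which this excerpt does not specify):

```python
def green_at_red_site(mosaic, r, c):
    """Estimate the missing green value at a red pixel site (r, c)
    of a Bayer-type mosaic by averaging its four green neighbours."""
    return (mosaic[r - 1][c] + mosaic[r + 1][c]
            + mosaic[r][c - 1] + mosaic[r][c + 1]) / 4.0

mosaic = [[0, 4, 0],
          [2, 9, 6],   # centre (1, 1) is a red site; 9 is its red value
          [0, 8, 0]]
print(green_at_red_site(mosaic, 1, 1))  # (4 + 8 + 2 + 6) / 4 = 5.0
```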
- the imaging apparatus of the present invention is not limited to this.
- one image sensor common to a plurality of imaging optical systems may be used, and this image sensor may be divided into a plurality of imaging regions so as to correspond one-to-one to the plurality of imaging optical systems.
- in the above embodiments, an image pickup apparatus that performs the magnification correction process (step S50), the rotation correction process (step S60), and the distortion correction process (step S70) shown in FIG. 2 in this order has been described.
- the order of these three correction processes is not limited to this.
- One or two of these three correction processes may be omitted.
- the rotation correction process (step S60) can be omitted.
- the field of application of the imaging apparatus according to the present invention is not particularly limited; however, because it can capture high-quality images despite its small size in the optical axis direction, it can be used, for example, as a camera module for portable devices.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006548830A JPWO2006064770A1 (ja) | 2004-12-16 | 2005-12-12 | 撮像装置 |
US11/720,684 US7742087B2 (en) | 2004-12-16 | 2005-12-12 | Image pickup device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-363867 | 2004-12-16 | ||
JP2004363867 | 2004-12-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006064770A1 true WO2006064770A1 (ja) | 2006-06-22 |
Family
ID=36587825
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/022792 WO2006064770A1 (ja) | 2004-12-16 | 2005-12-12 | 撮像装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US7742087B2 (ja) |
JP (1) | JPWO2006064770A1 (ja) |
CN (1) | CN101080921A (ja) |
WO (1) | WO2006064770A1 (ja) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7742624B2 (en) * | 2006-04-25 | 2010-06-22 | Motorola, Inc. | Perspective improvement for image and video applications |
US8300108B2 (en) | 2009-02-02 | 2012-10-30 | L-3 Communications Cincinnati Electronics Corporation | Multi-channel imaging devices comprising unit cells |
GB0906461D0 (en) * | 2009-04-15 | 2009-05-20 | Siemens Medical Solutions | Partial volume correction via smoothing at viewer |
CN102118551A (zh) * | 2009-12-31 | 2011-07-06 | 鸿富锦精密工业(深圳)有限公司 | 成像装置 |
JP5499778B2 (ja) * | 2010-03-03 | 2014-05-21 | 株式会社ニコン | 撮像装置 |
JP2011209269A (ja) * | 2010-03-08 | 2011-10-20 | Ricoh Co Ltd | 撮像装置及び距離取得システム |
JP5639024B2 (ja) * | 2011-09-27 | 2014-12-10 | 富士重工業株式会社 | 画像処理装置 |
US9600863B2 (en) * | 2012-02-13 | 2017-03-21 | Omnivision Technologies, Inc. | Method for combining images |
JP5950678B2 (ja) * | 2012-04-27 | 2016-07-13 | キヤノン株式会社 | 撮像装置、制御方法、及びプログラム |
CN106803872A (zh) * | 2017-03-02 | 2017-06-06 | 苏州苏大维格光电科技股份有限公司 | 相机镜头及手机 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002158929A (ja) * | 2000-11-16 | 2002-05-31 | Canon Inc | 固体撮像素子、固体撮像装置及び撮像システム |
JP2002204462A (ja) * | 2000-10-25 | 2002-07-19 | Canon Inc | 撮像装置及びその制御方法及び制御プログラム及び記憶媒体 |
JP2002330332A (ja) * | 2001-04-27 | 2002-11-15 | Canon Inc | 画像処理装置、撮像装置、その制御方法、及び記録媒体 |
JP2003283907A (ja) * | 2002-03-20 | 2003-10-03 | Japan Science & Technology Corp | 撮像装置 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07303207A (ja) * | 1994-05-09 | 1995-11-14 | Canon Inc | 撮像装置 |
JPH08116490A (ja) * | 1994-10-14 | 1996-05-07 | Olympus Optical Co Ltd | 画像処理装置 |
JP3792832B2 (ja) * | 1997-05-07 | 2006-07-05 | 富士重工業株式会社 | ステレオカメラの調整装置 |
JP3284190B2 (ja) * | 1998-05-14 | 2002-05-20 | 富士重工業株式会社 | ステレオカメラの画像補正装置 |
JP3397758B2 (ja) | 1999-06-30 | 2003-04-21 | キヤノン株式会社 | 撮像装置 |
US6882368B1 (en) * | 1999-06-30 | 2005-04-19 | Canon Kabushiki Kaisha | Image pickup apparatus |
JP2001223931A (ja) * | 2000-02-10 | 2001-08-17 | Olympus Optical Co Ltd | 撮像装置及び画像システム |
US6952228B2 (en) * | 2000-10-13 | 2005-10-04 | Canon Kabushiki Kaisha | Image pickup apparatus |
US7262799B2 (en) * | 2000-10-25 | 2007-08-28 | Canon Kabushiki Kaisha | Image sensing apparatus and its control method, control program, and storage medium |
JP2002171537A (ja) * | 2000-11-30 | 2002-06-14 | Canon Inc | 複眼撮像系、撮像装置および電子機器 |
-
2005
- 2005-12-12 CN CNA2005800432305A patent/CN101080921A/zh active Pending
- 2005-12-12 JP JP2006548830A patent/JPWO2006064770A1/ja active Pending
- 2005-12-12 US US11/720,684 patent/US7742087B2/en active Active
- 2005-12-12 WO PCT/JP2005/022792 patent/WO2006064770A1/ja not_active Application Discontinuation
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011121841A1 (ja) * | 2010-03-31 | 2011-10-06 | 富士フイルム株式会社 | 立体撮像装置 |
WO2011121840A1 (ja) * | 2010-03-31 | 2011-10-06 | 富士フイルム株式会社 | 立体撮像装置 |
JP4875225B2 (ja) * | 2010-03-31 | 2012-02-15 | 富士フイルム株式会社 | 立体撮像装置 |
CN102362487A (zh) * | 2010-03-31 | 2012-02-22 | 富士胶片株式会社 | 立体成像设备 |
JP4897940B2 (ja) * | 2010-03-31 | 2012-03-14 | 富士フイルム株式会社 | 立体撮像装置 |
US8363091B2 (en) | 2010-03-31 | 2013-01-29 | Fujifilm Corporation | Stereoscopic image pick-up apparatus |
US8502863B2 (en) | 2010-03-31 | 2013-08-06 | Fujifilm Corporation | Stereoscopic imaging apparatus |
Also Published As
Publication number | Publication date |
---|---|
US7742087B2 (en) | 2010-06-22 |
JPWO2006064770A1 (ja) | 2008-06-12 |
CN101080921A (zh) | 2007-11-28 |
US20080211956A1 (en) | 2008-09-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006548830 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11720684 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200580043230.5 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 05814513 Country of ref document: EP Kind code of ref document: A1 |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 5814513 Country of ref document: EP |