WO2009101798A1 - Compound-eye imaging device, distance measuring device, parallax calculation method, and distance measuring method - Google Patents
Compound-eye imaging device, distance measuring device, parallax calculation method, and distance measuring method
- Publication number
- WO2009101798A1 (PCT/JP2009/000534; JP 2009000534 W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- imaging optical
- image
- optical system
- parallax
- correlation value
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
- G01C3/02—Details
- G01C3/06—Use of electric means to obtain final indication
- G01C3/08—Use of electric radiation detectors
- G01C3/085—Use of electric radiation detectors with electronic parallax measurement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
- G06T7/596—Depth or shape recovery from multiple images from stereo images from three or more stereo images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Definitions
- the present invention relates to a compound-eye imaging apparatus that has a plurality of imaging optical systems and calculates parallax generated between the imaging optical systems.
- As a method for measuring the distance to an object or the three-dimensional position of an object, a stereo distance measuring method using the principle of triangulation has conventionally been used.
- In the stereo ranging method, the distance to an object is calculated based on the parallax generated between a plurality of cameras.
- FIG. 30 is a diagram for explaining an example of calculating the distance to the object by the stereo distance measuring method when two cameras, camera a and camera b, are used.
- the light rays 101a and 101b of the object 100 are focused on the imaging regions 104a and 104b via the optical centers 105a and 105b of the lens 102a of the camera a and the lens 102b of the camera b.
- the optical axis 103a and the optical axis 103b represent the optical axis of each camera.
- For the camera a, the object 100 forms an image at a position 107a away from the intersection 106a between the imaging area 104a and the optical axis 103a; for the camera b, the object 100 forms an image at a position 107b away from the intersection 106b between the imaging area 104b and the optical axis 103b.
- The parallax P varies depending on the distance D between the distance measuring device and the object.
- The distance D to the object is expressed by (Equation 1). Therefore, if the baseline length B and the focal length f are known from a prior calibration process or the like, the distance D to the object 100 can be calculated by obtaining the parallax P.
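As a numerical sketch of the triangulation relationship above, the standard form of (Equation 1) for parallel optical axes, D = f × B / P, is assumed here; the focal length, baseline length, and pixel pitch are illustrative values, not values from the embodiments:

```python
# Sketch of stereo triangulation (Equation 1), assuming parallel optical
# axes: D = f * B / P, where f is the focal length, B the baseline
# length, and P the parallax expressed on the sensor plane.

def distance_from_parallax(f_mm: float, baseline_mm: float,
                           parallax_px: float, pitch_mm: float) -> float:
    """Convert a pixel-unit parallax to a metric distance."""
    parallax_mm = parallax_px * pitch_mm      # parallax on the sensor plane
    return f_mm * baseline_mm / parallax_mm   # Equation 1

# Example: f = 5 mm, B = 30 mm, pixel pitch = 0.003 mm, parallax = 5 px
d = distance_from_parallax(5.0, 30.0, 5.0, 0.003)
print(d)  # 10000.0 -> the object is 10 m away
```

Note that halving the parallax doubles the computed distance, which is why the parallax detection resolution directly limits the ranging accuracy discussed below.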
- When the optical axes of the cameras are not parallel, parallelization (rectification) processing as shown in Non-Patent Document 1 is performed. An image in which the optical axes are parallel is thereby created, and the distance D can then be calculated using (Equation 1) above.
- The imaging areas 104a and 104b are usually implemented with an imaging element such as a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor. The parallax P is therefore calculated using the luminance signal of the object image discretized on a two-dimensional plane, so the parallax detection resolution is usually one pixel.
- the distance measurement resolution (hereinafter referred to as distance measurement accuracy) is determined from the parallax detection resolution based on the relationship of (Equation 1).
- the three-dimensional position of the object can be calculated by the method described below with reference to FIGS. 31 to 33, for example.
- FIG. 31 is a diagram showing the positional relationship between the distance measuring device and the object.
- the origin Mw (0, 0, 0) of the world coordinates is the optical center 105a of the camera a.
- Zw1 is obtained as the distance D calculated by (Equation 1) described with reference to FIG. 30.
- FIG. 32 is a diagram of the camera a and one point 111 of the object 110 in FIG. 31 as viewed from the minus side of the Yw axis.
- Xw1 is expressed by (Expression 2) using the coordinates ms (xs1, ys1) of the imaging position 107a.
- FIG. 33 is a view of the camera a of FIG. 31 and one point 111 of the object 110 as viewed from the plus side of the Xw axis.
- Yw1 is expressed by (Expression 3) using the coordinates ms (xs1, ys1) of the imaging position 107a.
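A minimal sketch of recovering the three-dimensional position Mw1 = (Xw1, Yw1, Zw1): the pinhole relations Xw1 = xs1 × Zw1 / f and Yw1 = ys1 × Zw1 / f are assumed here as the content of (Expression 2) and (Expression 3); the signs depend on the actual axis conventions of the device:

```python
# Sketch of recovering a 3-D point from its image coordinates, assuming
# the pinhole relations Xw1 = xs1 * Zw1 / f and Yw1 = ys1 * Zw1 / f
# (taken as the content of Expressions 2 and 3; sign conventions are an
# assumption). Zw1 comes from the triangulated distance of Equation 1.

def world_point(xs1_mm: float, ys1_mm: float,
                zw1_mm: float, f_mm: float) -> tuple:
    """Image position (xs1, ys1) plus depth Zw1 -> world point Mw1."""
    xw1 = xs1_mm * zw1_mm / f_mm   # Expression 2
    yw1 = ys1_mm * zw1_mm / f_mm   # Expression 3
    return (xw1, yw1, zw1_mm)

# Example: image point 0.1 mm off-axis in x, 0.05 mm in y, depth 1000 mm
print(world_point(0.1, 0.05, 1000.0, 5.0))  # (20.0, 10.0, 1000.0)
```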
- the measurement accuracy of the three-dimensional position is also determined by the parallax detection resolution in the same manner as the distance measurement accuracy described above.
- The parallax P is calculated as follows: for the image a captured in the imaging region 104a of the camera a and the image b captured in the imaging region 104b of the camera b, the SAD (Sum of Absolute Differences), a correlation value, is calculated for each small region of each image, and the image shift for each small region between the image a and the image b, that is, the parallax P, is calculated using the calculated correlation values.
- SAD is only one example of a correlation value; SSD (Sum of Squared Differences) and NCC (Normalized Cross-Correlation) are other representative examples.
- FIG. 34 is a diagram illustrating how the luminance of each pixel in an image is expressed. As shown in the figure, 0 is black and 15 is white, and the luminance gradation of each pixel is represented by the line density. The luminance may take fractional values.
- FIG. 35A is a diagram showing a part of the image a when the texture of the mapped object is viewed from the object side.
- FIG. 35B is a diagram illustrating a part of the image b when the texture of the mapped object is viewed from the object side.
- In FIG. 35B, the thick line indicates the position where the same image as the image block a surrounded by the thick line in FIG. 35A would be mapped if the object were at infinity.
- Since the object is at a finite distance, parallax occurs as described with reference to FIG. 30, and therefore the image of FIG. 35B is mapped to the right of the image of FIG. 35A.
- Here, the case where FIGS. 35A and 35B have an actual parallax of 3.6 pixels will be described.
- The image block b is shifted to the right pixel by pixel from the position of the thick line in FIG. 35B, and the SAD is calculated based on (Equation 4) for each shift amount.
- Ia and Ib represent luminance values in each image block
- i and j represent local addresses in each image block.
- the image block a and the image block b have the same image size, and the sum of the absolute values of the luminance difference at the same address of both image blocks is calculated for each shift amount.
- the shape of the image block may be a rectangle or a shape corresponding to the texture characteristics, but here, a square image block is described.
- FIG. 36 is a diagram showing the transition of SAD as the image block b is moved pixel by pixel. Since the SAD is minimum at a shift amount of 4 pixels, the correlation between the image block a and the image block b is considered highest there. Therefore, the parallax between the camera a and the camera b in the image block a is calculated as 4 pixels; the parallax P of (Equation 1) is obtained by multiplying this parallax by the pixel pitch, and the distance D to the object can then be calculated.
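The integer-pixel block matching of (Equation 4) can be sketched as follows; the images, block position, and search range are synthetic illustrations constructed so that the true shift is 4 pixels:

```python
import numpy as np

# Sketch of integer-pixel block matching with SAD (Equation 4): slide
# image block b one pixel at a time and sum the absolute luminance
# differences against image block a. All inputs are synthetic.

def sad_curve(img_a, img_b, top, left, block, max_shift):
    a = img_a[top:top + block, left:left + block].astype(np.int64)
    sads = []
    for k in range(max_shift + 1):  # shift amount k, in pixels
        b = img_b[top:top + block,
                  left + k:left + k + block].astype(np.int64)
        sads.append(int(np.abs(a - b).sum()))  # Equation 4
    return sads

# Image b is image a displaced 4 pixels to the right (parallax = 4 px).
img_a = np.tile(np.arange(32), (16, 1))
img_b = np.zeros_like(img_a)
img_b[:, 4:] = img_a[:, :-4]

curve = sad_curve(img_a, img_b, top=4, left=8, block=4, max_shift=8)
print(int(np.argmin(curve)))  # 4 -> integer parallax of 4 pixels
```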
- As a method for obtaining finer ranging accuracy, that is, a parallax detection resolution finer than one pixel, methods for estimating the parallax at the sub-pixel level have been proposed (for example, Patent Document 1).
- In the subpixel parallax estimation method called equiangular straight line fitting, as shown in FIG. 37, it is assumed that the SAD transition has the same slope on both sides of the actual parallax, and the actual parallax is estimated at the sub-pixel level by linear interpolation.
- The subpixel parallax calculation formula of equiangular straight line fitting, that is, the interpolation formula, is shown in (Formula 5).
- P is the sub-pixel parallax
- Pmin is the shift amount that minimizes SAD (integer parallax)
- R (0) is the correlation value (SAD) at the shift amount that minimizes SAD
- R(−1) and R(1) are the SADs at the shift amounts adjacent to Pmin on either side.
- Calculation methods have also been proposed in which, assuming that the transition of the correlation value (such as SAD) with respect to the shift amount is symmetric about the actual parallax, the actual parallax is interpolated using a higher-order function such as a quadratic function, or a nonlinear function. For an object whose luminance changes linearly, as in FIGS. 35A and 35B, the transition of SAD is symmetric with respect to the actual parallax and changes linearly, as shown in FIG. 36. Therefore, when subpixel parallax estimation is performed using equiangular straight line fitting, the parallax of 3.6 pixels can be obtained accurately.
- Patent Document 1: JP 2000-283755 A. Non-Patent Document 1: Gang Xu and Saburo Tsuji, "3D Vision", Kyoritsu Shuppan, pp. 96-99, September 25, 2002.
- FIGS. 35A, 35B, and 36 illustrate an example of an object whose luminance distribution changes uniformly (linearly) in the parallax search direction. In such a case, the transition of the correlation value is symmetric with respect to the actual parallax, so subpixel parallax estimation can be performed accurately by equiangular straight line fitting.
- In an actual object, however, the surface pattern rarely changes uniformly; for example, as shown in FIGS. 38A and 38B, the luminance distribution often does not change uniformly.
- FIG. 38A is a diagram showing a part of the image a viewed from the object side.
- FIG. 38B is a diagram illustrating a part of the image b viewed from the object side.
- In FIG. 38B, the thick line indicates the position where the same image as the image block a surrounded by the thick line in FIG. 38A would be mapped if the object were at infinity.
- parallax occurs as shown in FIG. 30, so that the image in FIG. 38B moves to the right with respect to the image in FIG. 38A.
- As in the case of FIGS. 35A and 35B, the case where FIGS. 38A and 38B have an actual parallax of 3.6 pixels will be described.
- FIG. 39 is a diagram explaining the transition of SAD for the images of FIGS. 38A and 38B, and the subpixel parallax estimation by the above-mentioned equiangular straight line fitting in that case.
- the subpixel parallax estimation result deviates from the actual parallax.
- the sub-pixel parallax is estimated to be about 3.2 pixels.
- the estimated parallax has an error of about 0.4 pixels from the actual parallax.
- the interpolation formula for performing sub-pixel parallax estimation used in the conventional stereo ranging method assumes that the transition of the correlation value is symmetric with respect to the actual parallax. For this reason, there is a problem in that an estimation error of parallax occurs in an object whose transition of correlation values is not symmetric with respect to actual parallax.
- As shown in FIGS. 40A and 40B, there was thus a problem that an estimation error occurred for an object in which the transition of the correlation value is not symmetric with respect to the actual parallax.
- The present invention has been made to solve the above problems, and it is an object of the present invention to provide a compound-eye imaging device or a distance measuring device in which the transition of the correlation value is symmetric with respect to the actual parallax regardless of the luminance distribution of the object, so that the sub-pixel parallax can be estimated with high accuracy even when the conventional sub-pixel parallax estimation method by interpolation described above is used.
- A compound-eye imaging apparatus according to the present invention is a compound-eye imaging apparatus that calculates the parallax generated among a plurality of imaging optical systems capturing the same object, and includes: a standard imaging optical system that images the object to generate an image including a standard image; two or more (an even number of) reference imaging optical systems whose optical centers are arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system, each of which images the object to generate an image including a reference image; correlation value calculation means for calculating, for each of the two or more reference imaging optical systems, a correlation value representing the degree of similarity between the standard image and the reference image for each shift amount when the search position of the reference image, used to search the image generated by the reference imaging optical system for the image position similar to the standard image, is shifted in a direction parallel to the baseline, that is, the straight line connecting the optical center of the standard imaging optical system and the optical center of that reference imaging optical system; correlation value adding means for calculating a combined correlation value by adding the correlation values calculated for the two or more reference imaging optical systems for each corresponding shift amount; and a parallax calculation unit that calculates, at the sub-pixel level and based on the combined correlation value, the parallax, that is, the shift amount at which the similarity between the standard image and the reference image is maximized.
- According to this configuration, the combined correlation value obtained by adding, for each corresponding shift amount, the correlation values calculated for the two or more reference imaging optical systems arranged substantially point-symmetrically with respect to the standard imaging optical system is symmetric with respect to the actual parallax, so the sub-pixel parallax can be estimated with high accuracy.
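The effect of the correlation value adding means can be sketched with synthetic curves: a pair of reference cameras placed point-symmetrically about the standard camera sees mirror-image SAD transitions, so their sum is symmetric about the actual parallax. The asymmetric V curves and slopes below are illustrative assumptions, not values from the embodiments:

```python
# Sketch of correlation value addition: each camera of a point-symmetric
# pair produces a mirror-image SAD curve, so their sum is a symmetric V
# and equiangular fitting recovers the true parallax. Curves are
# synthetic asymmetric Vs (slopes 10 and 4, true parallax 3.6 px).

def v_curve(shifts, true_p, slope_left, slope_right):
    return [slope_left * max(true_p - k, 0) + slope_right * max(k - true_p, 0)
            for k in shifts]

def equiangular_fit(p_min, r_m1, r_0, r_1):
    if r_m1 >= r_1:
        return p_min + (r_m1 - r_1) / (2.0 * (r_m1 - r_0))
    return p_min + (r_m1 - r_1) / (2.0 * (r_1 - r_0))

shifts = range(8)
sad_right = v_curve(shifts, 3.6, 10, 4)  # one reference camera
sad_left = v_curve(shifts, 3.6, 4, 10)   # its point-symmetric partner
combined = [a + b for a, b in zip(sad_right, sad_left)]

k = combined.index(min(combined))
# The combined curve is a symmetric V, so the fit lands on 3.6 px.
print(equiangular_fit(k, combined[k - 1], combined[k], combined[k + 1]))  # ~3.6
```

Fitting either camera's curve alone gives a biased estimate (the asymmetric slopes pull the result away from 3.6), which is the error the addition removes.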
- Here, substantially point-symmetric means that the optical centers of each added pair of reference imaging optical systems and the optical center of the standard imaging optical system are arranged on substantially one straight line, and that the baseline lengths between each of the two reference imaging optical systems in the pair and the standard imaging optical system are substantially the same. When two or more such pairs are added, the optical centers of each pair of reference imaging optical systems and the optical center of the standard imaging optical system are arranged on a substantially straight line, and the two reference imaging optical systems of each pair have substantially the same baseline length with respect to the standard imaging optical system.
- Preferably, the parallax calculation unit calculates the sub-pixel level parallax by interpolating the combined correlation value for each shift amount, obtained by the correlation value adding means, using an interpolation formula that exploits symmetry.
- Preferably, the compound-eye imaging device includes four or more reference imaging optical systems whose optical centers are arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system, and a pair of second reference imaging optical systems, different from the pair of first reference imaging optical systems, is arranged so that the direction of its baseline is inclined by a predetermined angle from the direction of the baseline of the pair of first reference imaging optical systems.
- According to this configuration, the number of reference imaging optical systems, and hence of reference images, increases, so the amount of information increases and the linearity of the correlation value transition improves.
- Since the direction of the baseline, that is, the straight line connecting the optical centers of one pair of reference imaging optical systems arranged substantially point-symmetrically with respect to the standard imaging optical system and the optical center of the standard imaging optical system, is inclined by a predetermined angle from the direction of the baseline between another pair of reference imaging optical systems and the standard imaging optical system, the amount of information about the imaged object further increases and the linearity of the correlation value transition improves. As a result, the subpixel parallax estimation accuracy is further improved.
- Preferably, the four or more reference imaging optical systems are arranged so that a first baseline length, the length of the baseline between the first reference imaging optical system and the standard imaging optical system, differs from a second baseline length, the length of the baseline between the second reference imaging optical system and the standard imaging optical system; and the correlation value calculation means calculates the correlation value of the reference image generated by the second reference imaging optical system for each second shift amount, which is the value obtained by multiplying the first shift amount, used to calculate the correlation value of the reference image generated by the first reference imaging optical system, by the value obtained by dividing the second baseline length by the first baseline length.
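The shift-amount scaling described above can be sketched as follows; because the parallax for a given object distance scales linearly with the baseline length, the second pair's correlation values are sampled at the first pair's shifts multiplied by B2 / B1. The baseline lengths here are illustrative assumptions:

```python
# Sketch of aligning shift grids across different baseline lengths:
# the second shift amounts are the first shift amounts multiplied by
# (second baseline length) / (first baseline length), so both pairs'
# correlation values refer to the same object distance at each index.

def second_shift_amounts(first_shifts, b1_mm: float, b2_mm: float):
    ratio = b2_mm / b1_mm  # second baseline / first baseline
    return [k * ratio for k in first_shifts]

# First pair: baseline 30 mm, shifts 0..4 px; second pair: baseline 60 mm.
print(second_shift_amounts(range(5), 30.0, 60.0))  # [0.0, 2.0, 4.0, 6.0, 8.0]
```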
- Preferably, the standard imaging optical system and the four or more reference imaging optical systems are arranged in the same positional relationship as the pixels constituting the imaging element included in the standard imaging optical system.
- Preferably, the optical center position error, which is the distance between the optical center of one reference imaging optical system in a pair and the straight line connecting the optical center of the standard imaging optical system and the optical center of the other reference imaging optical system in the pair, satisfies Optical center position error ≤ D × pitch × 0.2 / f, where D is the distance to the object, pitch is the pixel pitch, and f is the focal length.
- Preferably, the baseline length error, which is the difference between the first baseline length, that is, the interval between the optical center of one reference imaging optical system in a pair and the optical center of the standard imaging optical system, and the second baseline length, that is, the interval between the optical center of the other reference imaging optical system in the pair and the optical center of the standard imaging optical system, satisfies Baseline length error ≤ D × pitch × 0.2 / f, where D is the distance to the object, pitch is the pixel pitch, and f is the focal length.
- Preferably, the compound-eye imaging apparatus further includes preprocessing means for applying a smoothing filter to the standard image and the reference images, and the correlation value calculation means calculates the correlation value based on the standard image and the reference images to which the smoothing filter has been applied.
- According to this configuration, when the above-described SAD and equiangular straight line fitting are used, the linearity of the SAD transition is improved while the symmetry of the correlation value transition is maintained, and the subpixel parallax estimation accuracy is further improved.
- A distance measuring device according to the present invention calculates the distance to an object or the three-dimensional position of the object by calculating the parallax generated among a plurality of imaging optical systems that image the same object, and includes: a standard imaging optical system that images the object to generate an image including a standard image; two or more (an even number of) reference imaging optical systems whose optical centers are arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system, each of which images the object to generate an image including a reference image; correlation value calculation means for calculating, for each of the two or more reference imaging optical systems, a correlation value representing the degree of similarity between the standard image and the reference image for each shift amount when the search position of the reference image in the image generated by the reference imaging optical system is shifted along a direction parallel to the baseline, that is, the straight line connecting the optical center of the standard imaging optical system and the optical center of that reference imaging optical system; correlation value adding means for calculating a combined correlation value by adding the correlation values calculated for the two or more reference imaging optical systems for each corresponding shift amount; a parallax calculation unit that calculates, at the sub-pixel level and based on the combined correlation value, the parallax, that is, the shift amount at which the similarity between the standard image and the reference image is maximized; and distance calculation means for calculating the distance from the distance measuring device to the object or the three-dimensional position of the object based on the calculated parallax, the focal length of the standard imaging optical system, and the length of the baseline.
- According to this configuration, the transition of the correlation value is symmetric with respect to the actual parallax regardless of the subject, so the subpixel parallax can be estimated with high accuracy. As a result, the distance to the target object can be estimated with high accuracy.
- A parallax calculation method according to the present invention calculates the parallax generated among a plurality of imaging optical systems that capture the same object, the plurality of imaging optical systems including a standard imaging optical system that images the object to generate an image including a standard image, and two or more (an even number of) reference imaging optical systems whose optical centers are arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system and which image the object to generate images including reference images. The parallax calculation method includes: a correlation value calculating step of calculating, for each of the two or more reference imaging optical systems, a correlation value representing the similarity between the standard image and the reference image for each shift amount when the search position of the reference image in the image generated by the reference imaging optical system is shifted in a direction parallel to the baseline, that is, the straight line connecting the optical center of the standard imaging optical system and the optical center of that reference imaging optical system; a correlation value adding step of calculating a combined correlation value by adding the correlation values calculated for each of the reference imaging optical systems for each corresponding shift amount; and a parallax calculating step of calculating, at the sub-pixel level and based on the combined correlation value, the parallax, that is, the shift amount at which the similarity between the standard image and the reference image is maximized.
- A distance measuring method according to the present invention calculates the distance to an object or the three-dimensional position of the object by calculating the parallax generated by a plurality of imaging optical systems that image the same object, the plurality of imaging optical systems including a standard imaging optical system that images the object to generate an image including a standard image, and two or more (an even number of) reference imaging optical systems whose optical centers are arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system and which image the object to generate images including reference images. The distance measuring method includes: a correlation value calculating step of calculating, for each of the two or more reference imaging optical systems, a correlation value representing the similarity between the standard image and the reference image for each shift amount when the search position of the reference image in the image generated by the reference imaging optical system is shifted in a direction parallel to the baseline; a correlation value adding step of calculating a combined correlation value by adding the correlation values calculated for each of the reference imaging optical systems for each corresponding shift amount; a parallax calculating step of calculating, based on the combined correlation value, the parallax, that is, the shift amount at which the similarity between the standard image and the reference image is maximized; and a distance calculating step of calculating the distance to the object or the three-dimensional position of the object based on the calculated parallax.
- the present invention can also be realized as a program that causes a computer to execute the steps included in such a parallax calculation method or distance measurement method.
- a program can be distributed via a recording medium such as a CD-ROM (Compact Disc-Read Only Memory) or a communication network such as the Internet.
- According to the present invention, the transition of the correlation value is symmetric with respect to the actual parallax regardless of the luminance distribution of the object, making it possible to provide a compound-eye imaging device or a distance measuring device capable of estimating the sub-pixel parallax accurately even when the conventional sub-pixel parallax estimation method using interpolation described above is used.
- FIG. 1 is a diagram showing a configuration of a distance measuring apparatus according to Embodiment 1 of the present invention.
- FIG. 2 is a diagram showing a positional relationship between the distance measuring apparatus and the object according to Embodiment 1 of the present invention.
- FIG. 3 is a flowchart showing a flow of processing related to calculation of a three-dimensional position or distance of an object by the distance measuring apparatus according to Embodiment 1 of the present invention.
- FIG. 4 is a diagram showing a part of an image, viewed from the object side, when the texture around the point 13 on the surface of the object according to Embodiment 1 of the present invention is imaged by the reference imaging optical system.
- FIG. 5A is a diagram showing a part of an image captured by the imaging optical system according to Embodiment 1 of the present invention.
- FIG. 5B is a diagram showing a part of an image captured by the imaging optical system according to Embodiment 1 of the present invention.
- FIG. 5C is a diagram illustrating a part of an image captured by the imaging optical system according to Embodiment 1 of the present invention.
- FIG. 6A is a diagram showing transition of SAD according to Embodiment 1 of the present invention.
- FIG. 6B is a diagram showing transition of SAD according to Embodiment 1 of the present invention.
- FIG. 7 is a diagram showing a transition of the synthesized SAD according to Embodiment 1 of the present invention.
- FIG. 8 is a diagram showing a texture around one point of the object mapped to the image according to Embodiment 1 of the present invention.
- FIG. 9A is a diagram showing transition of SAD according to Embodiment 1 of the present invention.
- FIG. 9B is a diagram showing transition of SAD according to Embodiment 1 of the present invention.
- FIG. 10 is a diagram showing the configuration of the distance measuring apparatus according to Embodiment 2 of the present invention.
- FIG. 11A is a diagram showing transition of SAD according to Embodiment 2 of the present invention.
- FIG. 11B is a diagram showing a transition of SAD according to Embodiment 2 of the present invention.
- FIG. 12 is a diagram showing a transition of SAD according to the second embodiment of the present invention.
- FIG. 13 is a diagram showing the configuration of the distance measuring apparatus according to Embodiment 3 of the present invention.
- FIG. 14 is a diagram for explaining the operation of the distance measuring apparatus according to the third embodiment of the present invention.
- FIG. 15 is a diagram for explaining the operation of the distance measuring apparatus according to the third embodiment of the present invention.
- FIG. 16 is a diagram for explaining the performance of the distance measuring apparatus according to the third embodiment of the present invention.
- FIG. 17 is a diagram for explaining the operation of the distance measuring apparatus according to the third embodiment of the present invention.
- FIG. 18A is a diagram showing a transition of SAD according to Embodiment 3 of the present invention.
- FIG. 18B is a diagram showing a transition of SAD according to Embodiment 3 of the present invention.
- FIG. 19 is a diagram showing a transition of the synthesized SAD according to the third embodiment of the present invention.
- FIG. 20 is a diagram for explaining the performance of the distance measuring apparatus according to the third embodiment of the present invention.
- FIG. 21 is a diagram showing the configuration of the distance measuring apparatus according to the fourth embodiment of the present invention.
- FIG. 22 is a diagram showing the positional relationship between the distance measuring apparatus and the object according to Embodiment 4 of the present invention.
- FIG. 23 is a flowchart showing a flow of processing relating to calculation of a three-dimensional position or distance of an object by the distance measuring apparatus according to Embodiment 4 of the present invention.
- FIG. 24A is a diagram illustrating the configuration of the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 24B is a diagram showing the configuration of the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 24C is a diagram illustrating the configuration of the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 24D is a diagram illustrating a configuration of the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 24E is a diagram illustrating the configuration of the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 24F is a diagram illustrating the configuration of the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 25A is a diagram showing an image for performance evaluation by the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 25B is a diagram showing an image for performance evaluation by the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 25C is a diagram showing an image for performance evaluation by the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 25D is a diagram illustrating an image for performance evaluation by the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 26A is a diagram illustrating performance evaluation with the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 26B is a diagram illustrating performance evaluation with the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 26C is a diagram illustrating performance evaluation with the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 26D is a diagram illustrating performance evaluation with the distance measuring apparatus according to the embodiment of the present invention.
- FIG. 27A is a diagram showing an arrangement of an imaging optical system according to a modified example of the present invention.
- FIG. 27B is a diagram showing an arrangement of an imaging optical system according to a modified example of the present invention.
- FIG. 28A is a diagram for explaining the arrangement of the imaging optical system according to the present invention.
- FIG. 28B is a diagram for explaining the arrangement of the imaging optical system according to the present invention.
- FIG. 28C is a diagram for explaining the arrangement of the imaging optical system according to the present invention.
- FIG. 29A is a diagram for explaining the arrangement of the imaging optical system according to the present invention.
- FIG. 29B is a diagram for explaining the arrangement of the imaging optical system according to the present invention.
- FIG. 30 is a diagram for explaining an example of calculating the distance to the object by the stereo distance measuring method.
- FIG. 31 is a diagram illustrating the positional relationship between the distance measuring apparatus and the target object in the description of the related art.
- FIG. 32 is a view of the camera and one point on the object in the description of the prior art, viewed from the minus side of the Yw axis.
- FIG. 33 is a view of the camera and one point on the object in the description of the related art, viewed from the plus side of the Xw axis.
- FIG. 34 is a diagram illustrating a method for expressing the luminance of each pixel in an image.
- FIG. 35A is a diagram illustrating a part of an image obtained by viewing the texture of the mapped object in the description of the related art from the object side.
- FIG. 35B is a diagram illustrating a part of an image of the texture of the mapped object in the description of the related art as viewed from the object side.
- FIG. 36 is a diagram showing the transition of SAD in the description of the prior art.
- FIG. 37 is a diagram showing the transition of SAD in the description of the prior art.
- FIG. 38A is a diagram illustrating a part of an image viewed from the object side in the description of the related art.
- FIG. 38B is a diagram illustrating a part of an image viewed from the object side in the description of the related art.
- FIG. 39 is a diagram for explaining sub-pixel parallax estimation by equiangular straight line fitting in the description of the prior art.
- FIG. 40A is a diagram showing a distance measuring apparatus having another configuration in the description of the conventional technology.
- FIG. 40B is a diagram showing a distance measuring device having another configuration in the description of the conventional technology.
- FIG. 1 is a diagram showing a configuration of a distance measuring device 50 according to the present embodiment.
- the distance measuring device 50 includes three cameras 1s, 1a, and 1b, an A / D conversion unit 4, a pre-processing unit 5, a correlation value calculation unit 6, a correlation value addition unit 7, a parallax calculation unit 8, and a post-processing unit 9.
- the three cameras 1s, 1a, and 1b have the same configuration. That is, the camera 1s includes a lens 2s and an imaging region 3s, and the cameras 1a and 1b include a lens 2a, an imaging region 3a, a lens 2b, and an imaging region 3b, respectively.
- the camera 1s is referred to as a standard imaging optical system s, and the cameras 1a and 1b are referred to as a reference imaging optical system a and a reference imaging optical system b, respectively.
- the imaging regions 3s, 3a, and 3b are configured on a solid-state imaging device such as a CCD or a CMOS, for example, and generate an image from the light of the object that has passed through the lenses 2s, 2a, and 2b.
- the standard imaging optical system s and the reference imaging optical systems a and b according to the present embodiment have the following characteristics.
- the optical axes of the imaging optical systems are parallel.
- the optical centers of the respective imaging optical systems are arranged on a straight line, and the straight line is perpendicular to the optical axis.
- the imaging area (two-dimensional plane) and the optical axis of each imaging optical system are arranged vertically, and the focal length (distance from the imaging area to the optical center) is the same in all imaging optical systems.
- the line connecting the optical centers of the imaging optical systems, that is, the epipolar line, is parallel to the horizontal pixel array of each imaging area, and the parallax between the imaging optical systems occurs in the horizontal direction of the pixel array of each imaging area.
- the optical axes do not have to be strictly parallel to each other as long as they can be corrected by a calibration process or the like.
- the standard imaging optical system s is arranged in the middle of the three imaging optical systems, and the distance (hereinafter referred to as the baseline length) Ba between the optical center of the standard imaging optical system s and the optical center of the reference imaging optical system a is equal to the distance (baseline length) Bb between the optical center of the standard imaging optical system s and the optical center of the reference imaging optical system b. That is, the reference imaging optical systems a and b are arranged point-symmetrically with respect to the standard imaging optical system s.
- the A / D conversion unit 4 converts the luminance information transmitted from the imaging elements constituting the imaging regions 3s, 3a, and 3b from an analog value to a digital value (quantization).
- the image obtained by quantizing the image of the imaging region 3s with the A / D conversion unit 4 is referred to as the image s, and the images obtained by quantizing the images of the imaging region 3a and the imaging region 3b with the A / D conversion unit 4 are referred to as the image a and the image b, respectively.
- the A / D conversion unit 4 may be provided separately for each of the cameras 1s, 1a, and 1b, may be shared by the cameras 1s, 1a, and 1b, or only some of the cameras may share one.
- the pre-processing unit 5 corrects the images so that the image correlation operations can be performed with high precision, applying processing such as calibration, luminance shading correction, and correction that reduces the brightness difference between the optical systems to the luminance information of each imaging region converted into digital values.
- the calibration processing includes generally known processing such as lens distortion correction and stereo image rectification; in the present embodiment, image correction including such calibration processing will be described.
- however, the distance measuring device to which the present invention is applied is not limited to such a device. For example, a distance measuring device without calibration processing may be used.
- the correlation value calculation unit 6 calculates, for each of the reference imaging optical systems a and b, a correlation value for each shift amount while shifting the position of the reference image relative to the standard image along the baseline, which is the straight line connecting the optical center of the standard imaging optical system s and the optical center of the reference imaging optical system a or b.
- here, shifting the position of the reference image with respect to the standard image means selecting partial regions of the images generated by the standard imaging optical system and the reference imaging optical system as the standard image and the reference image, respectively, and shifting the selection position (search position) of the reference image relative to that of the standard image.
- the correlation value adding unit 7 calculates a composite correlation value that has a distribution symmetric about the actual parallax by adding, for each corresponding shift amount, the correlation values calculated by the correlation value calculation unit 6 for each combination of imaging optical systems.
- the parallax calculation unit 8 estimates the sub-pixel level parallax between the standard image and the reference images by interpolating the composite correlation value, which the correlation value addition unit 7 has made symmetric about the actual parallax, using an interpolation formula that exploits this symmetry.
- here, the sub-pixel level refers to an accuracy finer than one pixel, that is, to the fractional part of the pixel position.
- the post-processing unit 9 calculates the three-dimensional position of the object (or the distance from the distance measuring device to the object) based on the sub-pixel level parallax calculated by the parallax calculation unit 8, and converts the result into data suited to the output of each application, for example by filtering the estimated three-dimensional shape or by creating the texture of the estimated object.
- in the present embodiment, the post-processing unit 9 calculates the three-dimensional position and distance of the object.
- however, an apparatus to which the present invention is applied is not limited to such a distance measuring apparatus.
- for example, the post-processing unit 9 may output the parallax calculated by the parallax calculation unit 8 to another device. In this case, since the distance to the object is not measured, an apparatus that calculates the parallax in this way is referred to as a compound eye imaging apparatus.
- FIG. 2 is a diagram showing a positional relationship between the distance measuring device 50 and the object 12 shown in FIG.
- the optical center 10s is the optical center of the standard imaging optical system s
- the optical center 10a is the optical center of the reference imaging optical system a
- the optical center 10b is the optical center of the reference imaging optical system b.
- the optical center 10s of the standard imaging optical system s is set as the origin Mw (0, 0, 0) of the three-dimensional world coordinate system.
- the optical axis 11s is the optical axis of the standard imaging optical system s, and the optical axes 11a and 11b are the optical axes of the reference imaging optical system a and the reference imaging optical system b, respectively.
- the object 12 is an object for measuring a three-dimensional position or distance.
- the point 13 on the surface of the object is a point on the surface of the object 12, and here, the surface around the point 13 is assumed to be parallel to the imaging area.
- the world coordinates of the point 13 are Mw (Xw1, Yw1, Zw1).
- FIG. 3 is a flowchart showing a flow of processing relating to the three-dimensional position or distance calculation of the object 12 by the distance measuring device 50.
- the A / D conversion unit 4 converts the luminance information transmitted from the imaging elements constituting the imaging regions 3s, 3a, and 3b from an analog value to a digital value (S101).
- the pre-processing unit 5 performs high-precision image correlation operations such as calibration, luminance shading correction, and brightness difference reduction correction between optical systems for the luminance information of each imaging region converted into a digital value. Then, an image correction process is performed (S102).
- the correlation value calculation unit 6 divides the image corrected in step S102 into predetermined small regions (hereinafter referred to as blocks) (S103). Then, the correlation value calculation unit 6 selects a block of the image s corresponding to the point 13 on the surface of the object 12 to be calculated for the three-dimensional position or distance as a reference image (S104). Then, if the correlation value calculation unit 6 can acquire the image a or the image b generated by the reference imaging optical system that has not yet been processed in steps S106 to S109 described below, the correlation value calculation unit 6 starts the loop 1 (S105). Furthermore, if the correlation value calculation unit 6 can acquire the shift amount, the loop 2 is started (S106).
- the correlation value calculation unit 6 selects, as the reference image, the block corresponding to the shift amount acquired in step S106 from the image a or the image b acquired in step S105 (S107). Subsequently, a correlation value representing the degree of similarity between the standard image, that is, the block of the image s selected in step S104, and the reference image, that is, the block of the image a or b selected in step S107, for example the SAD, is calculated (S108).
- the correlation value calculation unit 6 increases the shift amount in order from a predetermined minimum shift amount, and calculates a correlation value for each shift amount (S106 to S109).
- the correlation value calculation unit 6 ends the loop 2 (S106 to S109).
- upon completion of loop 2 (S106 to S109), that is, the calculation of the correlation value for each shift amount, the correlation value calculation unit 6 acquires the image of a reference imaging optical system that has not yet undergone the processing related to the calculation of the correlation value, and repeats the processing of loop 2 (S105 to S110).
- when the correlation values have been calculated for the images of all the reference imaging optical systems, the correlation value calculation unit 6 ends loop 1 (S105 to S110).
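The per-shift SAD search of loop 2 described above can be sketched as follows. This is an illustrative sketch, not part of the patent: the function names, the array layout, and the fixed search range are assumptions.

```python
import numpy as np

def sad(block_s, block_r):
    """Sum of absolute differences between a standard block and a reference block."""
    return int(np.abs(block_s.astype(np.int64) - block_r.astype(np.int64)).sum())

def sad_per_shift(img_s, img_r, y, x, bsize, max_shift, direction):
    """SAD for every shift amount 0..max_shift (loop 2).

    `direction` is +1 or -1 depending on which side of the standard
    imaging optical system the reference imaging optical system lies,
    i.e. the parallax generation direction along the baseline.
    """
    block_s = img_s[y:y + bsize, x:x + bsize]  # standard image (block of image s)
    sads = []
    for shift in range(max_shift + 1):
        xr = x + direction * shift                    # search position along the baseline
        block_r = img_r[y:y + bsize, xr:xr + bsize]   # reference image (block of image a or b)
        sads.append(sad(block_s, block_r))
    return sads
```

Running this once per reference image (loop 1) yields one SAD transition per reference imaging optical system.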
- next, the correlation value adding unit 7 calculates the composite correlation value by adding the correlation values between the standard image and each of the reference images, calculated in the above-described processing, for each corresponding shift amount (S111).
- the composite correlation value obtained by this process forms a symmetric distribution with reference to the actual parallax.
- the parallax calculation unit 8 interpolates the correlation value for each shift amount after the addition in step S111 using an interpolation formula using symmetry (S112).
- the interpolation formula used here is one used in sub-pixel parallax estimation methods, such as equiangular straight line fitting or parabola fitting (fitting by a quadratic function), that assume the correlation values form a distribution symmetric about the actual parallax.
- the parallax calculation unit 8 uses the correlation value after interpolation to calculate a sub-pixel parallax that is a shift amount that maximizes or minimizes the correlation value (highest similarity) (S113). Specifically, when SAD is used as a correlation value, the parallax calculation unit 8 calculates a shift amount that minimizes SAD as sub-pixel parallax.
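As an illustrative sketch of the interpolation step (the function name is an assumption, not the patent's notation), equiangular straight line fitting locates the minimum of the SAD transition with two straight lines of equal and opposite slope through the three samples around the integer minimum:

```python
def equiangular_fit(s, i):
    """Sub-pixel position of the minimum of the correlation values `s`,
    given the integer index `i` of the smallest sample (equiangular
    straight line fitting: two lines with equal and opposite slopes)."""
    sm, s0, sp = s[i - 1], s[i], s[i + 1]
    if sm >= sp:  # true minimum lies at i or to its right
        return i + 0.5 * (sm - sp) / (sm - s0)
    return i + 0.5 * (sm - sp) / (sp - s0)  # true minimum lies to the left of i
```

When the SAD transition is an exact symmetric V, this fit recovers the sub-pixel minimum without error, which is why the symmetry of the composite correlation value matters.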
- the post-processing unit 9 calculates the three-dimensional position or distance of the object (S114).
- in the above description, the distance measuring device 50 obtains the sub-pixel parallax for one selected block of the image s and calculates the three-dimensional position and distance of the object; by repeating the block selection, the sub-pixel parallax may be obtained for all the blocks of the image s and the three-dimensional position and distance of the object calculated accordingly.
- FIG. 4 is a diagram showing a part of the image s, viewed from the object side, when the texture around the point 13 on the surface of the object 12 in FIG. 2 is imaged by the standard imaging optical system s.
- each quadrangle represents a pixel
- the shading of each pixel represents its luminance value.
- the point 13 corresponds to a position indicated by a black circle on the image s.
- the correlation value calculation unit 6 divides the image obtained from each imaging optical system into predetermined small regions (blocks) and calculates the three-dimensional position for each block.
- the image s is a measurement target.
- here, the block at the position of the block 14s including the point 13 on the surface of the object 12 is selected as the standard image.
- the standard image in the image s may be selected in any way as long as the point 13 on the surface of the object 12 to be measured is included.
- for example, a block at a position shifted one pixel to the left of the position shown in FIG. 4 may be selected as the standard image.
- since the reference imaging optical systems a and b are arranged point-symmetrically with respect to the standard imaging optical system s, the parallax generated in the image a and the parallax generated in the image b are equal in magnitude and opposite in direction.
- here, the actual parallax generated in each of the image a and the image b is 3.6 pixels.
- the image at the position of the block 14a shown in FIG. 4 appears in the image a at the same image coordinates as the block 14s of the image s.
- likewise, the image at the position of the block 14b shown in FIG. 4 appears in the image b at the same image coordinates as the block 14s of the image s. That is, the images of the object 12 at the same image coordinates of the image s, the image a, and the image b are as shown in FIGS. 5A to 5C.
- the image coordinates are coordinates representing the position of a pixel, with the origin at the intersection of the optical axis of each imaging optical system and the imaging surface.
- the image of the object is formed at the same image coordinates in the images s, a, and b.
- FIG. 5A is a diagram showing a part of an image s taken by the reference imaging optical system s.
- FIG. 5B is a diagram showing a part of an image a photographed by the reference imaging optical system a.
- FIG. 5C is a diagram illustrating a part of an image b captured by the reference imaging optical system b.
- blocks of 4 ⁇ 4 pixels surrounded by bold lines in each image correspond to the blocks 14s, 14a, and 14b shown in FIG. 4 and represent blocks having the same image coordinates.
- a correlation value calculation method in the case where the SAD obtained by (Expression 4) is used as the correlation value between the image s and the image a or the image b will be described.
- first, the correlation value calculation unit 6 acquires the image a, that is, the image of FIG. 5B, from among the images captured by the reference imaging optical systems. Then, the correlation value calculation unit 6 calculates the SAD between the standard image, that is, the already selected block of the image s surrounded by the thick line in FIG. 5A, and the reference image, which is a block of the image a. At this time, the correlation value calculation unit 6 selects the block to be the reference image while shifting it one pixel at a time in the parallax generation direction, from the block surrounded by the thick line in FIG. 5B corresponding to the minimum shift amount of 0 pixels to the block corresponding to the maximum shift amount of 7 pixels. The transition of the SAD calculated in this way is shown in FIG. 6A.
- next, the correlation value calculation unit 6 selects the image b, for which the correlation value has not yet been calculated, that is, the image of FIG. 5C. Then, the correlation value calculation unit 6 calculates the SAD between the standard image and the reference image as in the case of the image a. At this time, the correlation value calculation unit 6 selects the block to be the reference image while shifting it one pixel at a time to the left in the image horizontal direction (the direction of the arrow shown in FIG. 5C), which is the parallax generation direction, from the block surrounded by the thick line in FIG. 5C corresponding to the minimum shift amount of 0 pixels to the block corresponding to the maximum shift amount of 7 pixels. As a result, the SAD for each shift amount is calculated. The transition of the SAD calculated in this way is shown in FIG. 6B.
- FIGS. 6A and 6B are diagrams showing the transitions of the SAD between the image s and the image a, and between the image s and the image b, respectively. Since the SAD transitions shown in these figures are asymmetric with respect to the actual parallax, when the sub-pixel parallax is estimated by the above-mentioned equiangular straight line fitting, an error of about 0.4 pixels on the minus side occurs for the image a (FIG. 6A) and an error of about 0.5 pixels on the plus side occurs for the image b (FIG. 6B), relative to the actual parallax of 3.6 pixels.
- therefore, the correlation value addition unit 7 calculates the composite correlation value by adding the SADs for each corresponding shift amount so that the transition of the SAD becomes symmetric with respect to the actual parallax.
- FIG. 7 is a diagram illustrating a transition of the combined SAD that is a combined correlation value when the SADs illustrated in FIGS. 6A and 6B are added for each shift amount.
- the transition of the SAD after the addition is symmetric with respect to the actual parallax.
- the transition S_sum(i) of the composite SAD calculated by the correlation value adding unit 7 is expressed as S_sum(i) = Sa(i) + Sb(i), where Sa(i) is the transition of the SAD in FIG. 6A and Sb(i) is the transition of the SAD in FIG. 6B.
- the sub-pixel parallax is then obtained from this composite SAD by the above-mentioned equiangular straight line fitting, an interpolation formula that exploits the symmetry.
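A minimal sketch of the addition S_sum(i) = Sa(i) + Sb(i) performed by the correlation value adding unit 7. This is illustrative only; the numeric values in the usage note are invented to show how two mirrored asymmetric transitions add up to a symmetric one.

```python
def composite_sad(sa, sb):
    """Add the two SAD transitions for each corresponding shift amount."""
    return [a + b for a, b in zip(sa, sb)]
```

For example, if the actual parallax is 3 pixels and the texture makes Sa fall with slope 2 and rise with slope 3 while Sb does the opposite, then sa = [6, 4, 2, 0, 3, 6, 9] and sb = [9, 6, 3, 0, 2, 4, 6] are each asymmetric, but their sum [15, 10, 5, 0, 5, 10, 15] is symmetric about the actual parallax.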
- as a result, the parallax calculation unit 8 can calculate the parallax with an error of less than 0.1 pixels, as shown in FIG. 7. Therefore, the sub-pixel parallax estimation accuracy can be significantly improved compared with the case where the SAD transition is asymmetric. That is, with equiangular straight line fitting, the sub-pixel parallax can be estimated without error when the SAD transition is symmetric with respect to the actual parallax and the SAD transition is linear.
- thereafter, as shown in step S114 of FIG. 3, the post-processing unit 9 converts the parallax into a form suited to the output and outputs the data. For example, when outputting the three-dimensional position of the object, the coordinates m(us1, vs1) in the two-dimensional image coordinate system of the image s are converted into the coordinates ms(xs1, ys1) in the two-dimensional coordinate system whose origin is the intersection of the optical axis 11s and the imaging region 3s in FIG. 2, and the three-dimensional position for each block of the image s can then be obtained using (Equation 1), (Equation 2), and (Equation 3), as in the description of the prior art.
- the parallax P in (Equation 1) can be obtained by multiplying the calculated sub-pixel parallax by the pixel pitch. Further, when only the distance from the distance measuring device to the object is calculated, the distance can be calculated using (Equation 1), as in the description of the prior art.
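The distance computation can be sketched as follows. This is a hedged example: the parameter names and numeric values are assumptions, and (Equation 1) is taken to be the usual stereo relation Z = f * B / P.

```python
def distance_from_parallax(parallax_px, pixel_pitch_mm, focal_mm, baseline_mm):
    """Distance Z to the object from the stereo relation Z = f * B / P,
    where P is the sub-pixel parallax converted to a physical length."""
    p_mm = parallax_px * pixel_pitch_mm  # parallax in mm on the imaging surface
    return focal_mm * baseline_mm / p_mm
```

With, say, a 3.6-pixel parallax, a 0.002 mm pixel pitch, a 5 mm focal length, and a 20 mm baseline, Z = 5 * 20 / 0.0072, about 13889 mm.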
- FIG. 8 is a diagram showing the texture around the point 13 on the surface of the object 12 mapped to the image s, as viewed from the object 12 side. Further, the block 15s surrounded by the thick line is the same as the block surrounded by the thick line shown in FIG. 5A.
- the block 15a surrounded by a dotted line is an image area of the image a mapped to the same image coordinates as the block 15s of the image s.
- a block 15b surrounded by a dotted line is an image area of the image b mapped to the same image coordinates as the block 15s of the image s.
- the transition of the SAD shown in FIG. 6A corresponds to SAD values calculated while shifting the block of the image a serving as the reference image across the areas of the black arrow 16a and the white arrow 17a in FIG. 8.
- the transition of the SAD shown in FIG. 6B corresponds to SAD values calculated while shifting the block of the image b serving as the reference image across the areas of the white arrow 16b and the black arrow 17b in FIG. 8.
- FIG. 9A is a diagram showing the transition of SAD corresponding to FIGS. 6A and 6B when the sample interval is infinitely small.
- FIG. 9B is a diagram showing the transition of the synthesized SAD when the sample interval is infinitely small.
- the transition of SAD indicated by a solid line indicates the transition of SAD of image a corresponding to FIG. 6A.
- the SAD transition from a shift amount of 0 to the shift amount of the actual parallax (solid line 18a) corresponds to the SAD in the area of the black arrow 16a shown in FIG. 8.
- the SAD transition for shift amounts larger than that (solid line 19a) corresponds to the SAD in the area of the white arrow 17a shown in FIG. 8.
- the black dots in FIG. 9A are actual sample points.
- the transition of the SAD indicated by the dotted line indicates the transition of the SAD of the image b corresponding to FIG. 6B.
- the SAD transition from a shift amount of 0 to the shift amount of the actual parallax (dotted line 18b) corresponds to the SAD in the area of the white arrow 16b shown in FIG. 8.
- the SAD transition for shift amounts larger than that (dotted line 19b) corresponds to the SAD in the area of the black arrow 17b shown in FIG. 8.
- white dots in FIG. 9A are actual sample points.
- in the transition of the SAD at the black arrow 16a, that is, the transition from a shift amount of 0 to the actual parallax in the image a, and in the transition of the SAD at the black arrow 17b, that is, the transition from the actual parallax to larger shift amounts in the image b, the combinations of the standard image and the reference image used to calculate the SAD are the same relative to the shift amount of the actual parallax.
- therefore, the transition of the SAD indicated by the solid line 18a in FIG. 9A and the transition of the SAD indicated by the dotted line 19b are symmetric with respect to the shift amount of the actual parallax.
- similarly, in the transition of the SAD at the white arrow 17a, that is, the transition from the shift amount of the actual parallax to larger shift amounts in the image a, and in the transition of the SAD at the white arrow 16b, that is, the transition from a shift amount of 0 to the shift amount of the actual parallax in the image b, the combinations of the standard image and the reference image used to calculate the SAD are the same relative to the shift amount of the actual parallax.
- therefore, the transition of the SAD indicated by the solid line 19a in FIG. 9A and the transition of the SAD indicated by the dotted line 18b are symmetric with respect to the shift amount of the actual parallax.
- in the above, one block in the image s has been described; by performing the same calculation on all the blocks in the image s, the three-dimensional positions of all the objects shown in the image s can be obtained.
- in the present embodiment, the post-processing unit 9 calculates the three-dimensional position and distance of the object 12, but the parallax calculated by the parallax calculation unit 8 may instead be used to synthesize a plurality of images.
- as described above, according to the present embodiment, the transition of the correlation value is symmetric with respect to the actual parallax regardless of the luminance distribution of the target, so that a compound eye imaging device and a distance measuring device capable of estimating the sub-pixel parallax with high accuracy regardless of the target can be provided.
- (Embodiment 2) Next, a distance measuring apparatus according to Embodiment 2 of the present invention will be described.
- the distance measuring device 60 according to the present embodiment is different from the distance measuring device 50 of the first embodiment in that the preprocessing unit 5 includes a smoothing filter unit that reduces high-frequency components of an image.
- the other components, functions, and the like are the same as those of the distance measuring device 50 of the first embodiment. Therefore, the description below focuses on the characteristic parts of the distance measuring device of the present embodiment.
- FIG. 10 is a diagram showing a configuration of the distance measuring device 60 according to the present embodiment. Note that the same components as those in Embodiment 1 are denoted by the same reference numerals, and description thereof is omitted.
- the pre-processing unit 5 included in the distance measuring device 60 includes, in addition to the same image correction processing as in the first embodiment for performing the image correlation calculation with high accuracy, a smoothing filter unit 23 that performs processing to reduce the high-frequency components of the image, such as a Gaussian filter, an averaging filter, or a weighted averaging filter.
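As an illustrative sketch of one of the listed filters (an averaging filter; the function name and the zero-padded borders are assumptions, not the patent's specification):

```python
import numpy as np

def averaging_filter(img, k=3):
    """k x k averaging (box) filter: attenuates the high-frequency
    components of the image; borders are zero-padded."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad)
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            # accumulate each of the k*k shifted copies of the image
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```

Applying such a filter to the images s, a, and b before the correlation value calculation reduces the high-frequency luminance components, which is what slightly improves the linearity of the SAD transition described below.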
- when the correlation value calculation unit 6 calculates the transition of the correlation value by the same processing as described in the first embodiment using the image in which the high-frequency components have been reduced in this way, the linearity of the correlation value transition is slightly improved.
- that is, compared with the SAD transitions shown in FIG. 6A and FIG. 6B of the first embodiment, the linearity of the SAD transitions is slightly improved because the high-frequency components of the luminance distribution of the image are reduced.
- the symmetry of the SAD transition based on the actual parallax is hardly improved. Therefore, even if the sub-pixel parallax estimation by the above-mentioned equiangular straight line fitting is performed on the respective SAD transitions shown in FIG. 11A and FIG. 11B, a great improvement in the accuracy of the parallax estimation cannot be expected. Therefore, as in the first embodiment, the correlation value adding unit 7 needs to improve the symmetry of SAD transition by adding SAD for each shift amount.
- FIG. 12 is a diagram showing the transition of the combined SAD when the SADs shown in FIGS. 11A and 11B are added for each shift amount.
- the parallax calculation unit 8 performs subpixel parallax estimation by equiangular straight line fitting on the transition of the synthesized SAD, the error is further reduced as compared with the case of the first embodiment.
- this is because, in addition to the transition of the composite SAD being symmetric with respect to the actual parallax as described above, the linearity of the SAD transition is improved by the smoothing filter unit 23 reducing the high-frequency components of the luminance distribution. That is, for a subject whose SAD transition is not symmetric with respect to the actual parallax, merely removing the high-frequency components of the image with the smoothing filter unit 23 does not greatly improve the accuracy of sub-pixel parallax estimation by equiangular straight line fitting.
- in this way, the smoothing filter unit 23 improves the linearity of the transition of the correlation value (here, SAD) by removing the high-frequency components of the image, so the distance measuring device of the present embodiment can estimate the sub-pixel parallax with even higher accuracy.
- FIG. 13 is a diagram showing a configuration of the distance measuring device 70 according to the present embodiment.
- in the distance measuring device 70, the optical center 19b of the reference imaging optical system b is separated by a distance Error_v from the straight line (dotted line 18 in FIG. 13) connecting the optical center 19s of the standard imaging optical system s and the optical center 19a of the reference imaging optical system a. That is, the distance measuring device 70 has an optical center position error (hereinafter referred to as a baseline vertical direction error) Error_v.
- furthermore, the baseline length Ba, which is the distance between the optical center 19s and the optical center 19a, and the baseline length Bb, which is the distance between the optical center 19s and the optical center 19b in the direction parallel to the dotted line 18, differ by Error_h. That is, the distance measuring device 70 has a baseline length error (hereinafter referred to as a baseline direction error) Error_h.
- Other components and functions are the same as those of the distance measuring device 60 of the second embodiment shown in FIG. Therefore, the description will focus on the characteristic part of the distance measuring device of the present embodiment.
- it is assumed that the optical axes of the standard imaging optical system s and the reference imaging optical systems a and b are parallel, that the optical axis of the standard imaging optical system is located at the center of the imaging area 3s, and that the optical axes of the reference imaging optical systems a and b are located at the centers of the imaging areas 3a and 3b. Furthermore, it is assumed that the vertical and horizontal pixel arrays are parallel among the imaging region 3s, the imaging region 3a, and the imaging region 3b. Even when the above assumptions are not satisfied, they may be realized by correction by calibration in the preprocessing unit 5.
- a case will now be described in which the direction in which the parallax occurs differs greatly from the search direction, so that the symmetry of the transition of the correlation values with respect to the actual parallax no longer holds.
- in the direction parallel to the parallax search direction, the parallax is about 24.99 pixels, a difference of 0.01 pixels or less from 25 pixels, which is negligible. Therefore, because of the shift, in this case of about 0.62 pixels, in the direction perpendicular to the parallax search direction, the transition of the SAD between the image s and the image a and the transition of the SAD between the image s and the image b are not strictly symmetric with respect to the actual parallax. When the subject distance is closer, the shift of the mapping of the image b in the direction perpendicular to the parallax search direction becomes larger, and the left-right symmetry of the SAD transition is further reduced.
- FIG. 16 is a graph showing the result of verifying by simulation the decrease in ranging accuracy (increase in parallax detection error) when a subject is imaged using a ranging device having a baseline vertical direction error Error_v as shown in FIG. 13. Since the decrease in distance measurement accuracy is caused by the shift amount of the mapping in the direction perpendicular to the parallax search direction, the horizontal axis in FIG. 16 represents that shift amount. Error_v can be obtained by (Equation 7), where P_v is the amount of image shift on the horizontal axis in FIG. 16.
- D is a subject distance
- pitch is a pixel pitch
- f is a focal length.
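As an illustrative sketch of the conversion in (Equation 7) (a hypothetical helper: the exact form of (Equation 7) is given in the original disclosure, and the pinhole relation used here is an assumption):

```python
# Hypothetical sketch of (Equation 7): converting an image-side shift P_v
# (in pixels, perpendicular to the parallax search direction) into the
# optical-centre offset Error_v, assuming the pinhole relation
# P_v = Error_v * f / (D * pitch).
def error_v_from_shift(p_v, D, pitch, f):
    """D, pitch, f in consistent length units (e.g. all millimetres)."""
    return p_v * D * pitch / f

# Invented example values: f = 5 mm, pitch = 0.002 mm, subject at 1000 mm.
e = error_v_from_shift(0.1, D=1000.0, pitch=0.002, f=5.0)
print(round(e, 3))  # Error_v in mm for a 0.1-pixel mapping shift
```

This shows why the tolerance on Error_v tightens as the focal length grows or the subject distance shrinks.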
- the reference distance measurement accuracy indicated by the dotted line in FIG. 16 is the result of obtaining by simulation the distance measurement accuracy (parallax detection error) of the conventional stereo camera (three eyes) of FIG. 40A for the same subject.
- the focal length, pixel pitch, and subject distance of the conventional stereo camera are set to be the same, and the baseline length between the standard imaging optical system s and the reference imaging optical system a in FIG. 40A and the baseline length between the standard imaging optical system s and the reference imaging optical system b are the same as the baseline length Ba of the present embodiment in FIG. 13.
- therefore, the reference ranging accuracy is a constant value.
- when the baseline vertical direction error Error_v is kept within 0.03 mm, distance measurement (parallax detection) can be performed with higher accuracy than with the conventional stereo camera. Therefore, the relative positions of the optical centers of the cameras in FIG. 13 may be mounted with high accuracy so as to satisfy (Equation 8) at the time of mounting.
- this can be realized relatively easily by forming the lenses 2s, 2a, and 2b by integral molding.
- the actual parallax here refers to the actual parallax between the image s and the image a.
- the greater the baseline direction error Error_h, the greater the amount of translation and the worse the symmetry.
- FIG. 19 shows the transition of the combined SAD obtained by adding the SADs of FIG. 18A and FIG. 18B for each shift amount. As is clear from FIG. 19, the estimated parallax deviates from the actual parallax because the symmetry of the SAD transition with respect to the actual parallax is degraded.
- FIG. 20 is a graph showing the result of verifying by simulation the decrease in ranging accuracy (increase in parallax detection error) when a certain subject is imaged by a ranging device having a baseline direction error Error_h as shown in FIG. 13. Since the parallax detection error is caused by the shift amount of the mapping in the parallax search direction, the horizontal axis in FIG. 20 represents that shift amount. Error_h can be obtained by (Equation 9), where P_h is the amount of image shift on the horizontal axis in FIG. 20.
- D is a subject distance
- pitch is a pixel pitch
- f is a focal length.
- the reference ranging accuracy indicated by the solid line in FIG. 20 is the result of obtaining by simulation the ranging accuracy (parallax detection error) of the conventional stereo camera (three eyes) of FIG. 40A for the same subject.
- the focal length, pixel pitch, and subject distance of the conventional stereo camera are set to be the same, and the baseline length between the standard imaging optical system s and the reference imaging optical system a in FIG. 40A and the baseline length between the standard imaging optical system s and the reference imaging optical system b are the same as the baseline length Ba of the present embodiment in FIG. 13.
- the reference ranging accuracy is a constant value.
- the distance measuring device 70 can perform distance measurement (parallax detection) with higher accuracy than a conventional stereo camera.
- when the baseline direction error Error_h is kept within 0.04 mm, distance measurement (parallax detection) can be performed with higher accuracy than with the conventional stereo camera. Therefore, it is preferable to mount the relative positions of the optical centers of the cameras in FIG. 13 with high accuracy so as to satisfy (Equation 10) at the time of mounting.
- the above can be realized relatively easily by forming the lenses 2s, 2a, and 2b by integral molding.
- the distance measuring device 70 can perform distance measurement (parallax detection) with higher accuracy than a conventional stereo camera as long as it has a configuration satisfying (Expression 8) and (Expression 10).
- the distance measuring device 80 differs from the distance measuring device 60 of the second embodiment in that it includes eight reference imaging optical systems and in that the correlation value calculation unit 6 includes a parallax conversion unit that compensates for the difference in parallax between the reference imaging optical systems. The other components and functions are the same as those of the distance measuring device 60 of the second embodiment. Therefore, the description will focus on the characteristic parts of the distance measuring device of the present embodiment.
- FIG. 21 is a diagram showing a configuration of the distance measuring device 80 according to the present embodiment. Note that the same components as those in Embodiment 2 are denoted by the same reference numerals, and description thereof is omitted.
- the distance measuring device 80 includes a compound eye camera 20.
- the compound-eye camera 20 is composed of an integrally molded array of nine lenses and a single solid-state imaging device, such as a single CCD or CMOS, having nine different imaging regions.
- the optical band separation filter and the diaphragm are not shown because they are not the main point of the present invention. Since the compound eye camera 20 has a smaller lens diameter than a normal camera, the focal length of the lens can be designed to be short, and the entire optical system can be made very thin. Further, by integrally molding with the lens array, the relative positional relationship between the optical axes of the optical systems included in the array can be created with high accuracy (for example, an error of less than 5 ⁇ m).
- although each imaging optical system is formed as part of a lens array here, the distance measuring device to which the present invention is applied is not limited to such a configuration. For example, the distance measuring device may be configured with separate imaging optical systems, and a plurality of image sensors may be used.
- the standard imaging optical system s includes a lens 21s and an imaging region 22s, and is arranged near the center of the solid-state imaging device.
- the reference imaging optical systems a to h are configured to include lenses 21a to 21h and imaging regions 22a to 22h, respectively.
- the standard imaging optical system s and the reference imaging optical systems a to h have the following characteristics.
- the optical axes of the imaging optical systems are parallel.
- the optical centers of the imaging optical systems are arranged on the same plane, and the plane is perpendicular to the optical axis.
- the imaging area (two-dimensional plane) and the optical axis of each imaging optical system are arranged vertically, and the focal length (distance from the imaging area to the optical center) is the same in all imaging optical systems.
- the optical centers of the standard imaging optical system s, the reference imaging optical system a, and the reference imaging optical system b are arranged on the same straight line.
- the optical center of the reference imaging optical system a and the optical center of the reference imaging optical system b are arranged at point-symmetric positions with respect to the standard imaging optical system s.
- the optical centers of the standard imaging optical system s, the reference imaging optical system c, and the reference imaging optical system d are arranged on the same straight line.
- the optical center of the reference imaging optical system c and the optical center of the reference imaging optical system d are arranged at point-symmetric positions with respect to the standard imaging optical system s.
- the optical centers of the standard imaging optical system s, the reference imaging optical system e, and the reference imaging optical system f are arranged on the same straight line.
- the optical center of the reference imaging optical system e and the optical center of the reference imaging optical system f are arranged at point-symmetric positions with respect to the standard imaging optical system s.
- the optical centers of the standard imaging optical system s, the reference imaging optical system g, and the reference imaging optical system h are arranged on the same straight line.
- the optical center of the reference imaging optical system g and the optical center of the reference imaging optical system h are arranged at point-symmetric positions with respect to the standard imaging optical system s.
- the straight line connecting the optical centers of the standard imaging optical system s, the reference imaging optical system a, and the reference imaging optical system b is parallel to the horizontal pixel array of the imaging region 22s. Therefore, the parallax generated between the standard imaging optical system s and the reference imaging optical system a and the parallax generated between the standard imaging optical system s and the reference imaging optical system b arise in the horizontal direction of the pixel array of each imaging region.
- the baseline length Ba of the standard imaging optical system s and the reference imaging optical system a is equal to the baseline length Bb of the standard imaging optical system s and the reference imaging optical system b.
- the baseline length Bc of the standard imaging optical system s and the reference imaging optical system c is equal to the baseline length Bd of the standard imaging optical system s and the reference imaging optical system d.
- the baseline length Be of the standard imaging optical system s and the reference imaging optical system e is equal to the baseline length Bf of the standard imaging optical system s and the reference imaging optical system f.
- the baseline length Bg of the standard imaging optical system s and the reference imaging optical system g is equal to the baseline length Bh of the standard imaging optical system s and the reference imaging optical system h.
- the correlation value calculation unit 6 includes a parallax conversion unit 24 that performs a parallax conversion process in addition to the process of calculating the correlation value described in the first embodiment. Although the details will be described later, when calculating the correlation value of a reference imaging optical system arranged with a different baseline length, the parallax conversion unit 24 converts the block shift amount into a shift amount that the correlation value adding unit 7 can add.
- for example, when calculating the correlation value of the reference imaging optical system e, which is arranged so that its baseline length differs from that of the reference imaging optical system a, the parallax conversion unit 24 multiplies the reference shift amount by the value Le obtained by dividing the baseline length Be by the baseline length Ba, thereby converting it into a shift amount that the correlation value adding unit 7 can add.
- here, the pixel pitch in the direction parallel to the baseline means the shortest period at which points corresponding to pixel centers appear on a straight line parallel to the baseline in the image captured by the imaging optical system.
- the unit of the shift amount is the “pixel”, representing the pixel pitch in the baseline direction. Therefore, when adding correlation values between imaging optical systems whose pixel pitches differ depending on the baseline direction, the parallax conversion unit 24 needs to perform unit conversion. That is, it is necessary to multiply the reference shift amount not only by the baseline length ratio but also by the pixel pitch ratio Me to perform the parallax conversion.
- when the unit of the shift amount does not depend on the baseline direction, for example when it is expressed in millimeters, unit conversion is unnecessary. That is, the parallax conversion unit 24 can convert the shift amount by multiplying the reference shift amount by Le, the ratio of the baseline lengths, without using Me, the pixel pitch ratio.
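A hypothetical helper illustrating the two conversion factors Le (baseline length ratio) and Me (pixel pitch ratio) described above; the function name and interface are assumptions for illustration, not the disclosed implementation of the parallax conversion unit 24:

```python
# Hypothetical helper mirroring the text's Le and Me factors.
def convert_shift(shift_ref, B_ref, B_tgt, pitch_ref=None, pitch_tgt=None):
    """Convert a shift defined for the reference pair (baseline B_ref) into
    the equivalent shift for a pair with baseline B_tgt.  If the shift is
    given in pixels, the pixel-pitch ratio Me must also be applied; if it is
    in a baseline-independent unit such as millimetres, Le alone suffices."""
    Le = B_tgt / B_ref                 # baseline length ratio
    if pitch_ref is None:              # shift expressed in mm (or similar)
        return shift_ref * Le
    Me = pitch_ref / pitch_tgt         # pixel pitch ratio
    return shift_ref * Le * Me

# A 3 mm shift at baseline 2 mm corresponds to 6 mm at baseline 4 mm:
print(convert_shift(3.0, 2.0, 4.0))  # 6.0
```

With equal pixel pitches, Me is 1 and the pixel-unit and millimetre-unit conversions coincide, as the text indicates.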
- as in the second embodiment, the correlation value adding unit 7 adds, for each corresponding shift amount, all the correlation values calculated for each combination of optical systems by the correlation value calculation unit 6, based on the shift amounts converted by the parallax conversion unit 24. As a result, the correlation value adding unit 7 calculates a combined correlation value whose transition is symmetric with respect to the actual parallax.
- FIG. 22 is a diagram showing a positional relationship between the distance measuring device 80 and the object 12 shown in FIG.
- the optical center 25s of the standard imaging optical system s is set to the origin Mw(0, 0, 0) of the world coordinates, and the point 13, which is one point on the surface of the object 12, is set to Mw(Xw1, Yw1, Zw1).
- like the reference imaging optical system a and the reference imaging optical system b, the optical centers of the reference imaging optical system c and the reference imaging optical system d are arranged point-symmetrically with respect to the optical center of the standard imaging optical system s, and the optical centers are arranged on the same straight line. Therefore, when the SADs for each shift amount obtained by the block matching calculations between the standard imaging optical system s and the reference imaging optical system c and between the standard imaging optical system s and the reference imaging optical system d are added, a transition of the SAD that is symmetric with respect to the actual parallax is obtained.
- FIG. 23 is a flowchart showing the flow of processing relating to the three-dimensional position or distance calculation of the object 12 by the distance measuring device 80.
- the processing of steps S201 to S204 is the same as the processing of steps S101 to S104 shown in FIG.
- after step S204, if the correlation value calculation unit 6 can acquire any of the images a to h generated by a reference imaging optical system that has not yet been subjected to the processing of steps S206 to S212 described below, Loop 1 is started (S205).
- next, with the standard imaging optical system s, the reference imaging optical system a, and the reference imaging optical system b as the reference, for example, the parallax conversion unit 24 included in the correlation value calculation unit 6 acquires the baseline length between the reference imaging optical system a and the standard imaging optical system s (reference baseline length) and the pixel pitch (S206).
- the parallax conversion unit 24 acquires the baseline length and the pixel pitch between the reference imaging optical system and the standard imaging optical system s that generated the image selected in step S205 (S207).
- the parallax conversion unit 24 calculates a new shift amount based on the reference baseline length acquired in step S206 and the baseline length and pixel pitch acquired in step S207 (S208).
- the correlation value calculation unit 6 starts the loop 2 if the new shift amount calculated as described above can be acquired (S209).
- the correlation value calculation unit 6 selects, as a reference image, a block corresponding to the shift amount acquired in step S209 from whichever of the images a to h was acquired in step S205 (S210). Subsequently, it calculates a correlation value, for example the SAD, representing the similarity between the standard image, which is a block of the image s selected in step S204, and the reference image, which is a block of one of the images a to h selected in step S209 (S211).
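Steps S210 and S211 can be sketched as follows (illustrative only; the image data, block coordinates, and helper names are assumptions, not the disclosed implementation):

```python
# Illustrative sketch of steps S210-S211 (invented data and helper names).
def sad(block_std, block_ref):
    """Sum of absolute differences between two equal-size 2-D blocks."""
    return sum(abs(a - b)
               for row_s, row_r in zip(block_std, block_ref)
               for a, b in zip(row_s, row_r))

def sad_transition(img_std, img_ref, top, left, size, shifts):
    """SAD for each horizontal shift amount, using the size x size block of
    img_std at (top, left) as the standard image block."""
    base = [r[left:left + size] for r in img_std[top:top + size]]
    out = {}
    for s in shifts:
        ref = [r[left + s:left + s + size] for r in img_ref[top:top + size]]
        out[s] = sad(base, ref)
    return out

img_s = [[10, 20, 30, 40, 50, 60]] * 3   # standard image (3 rows)
img_r = [[0, 10, 20, 30, 40, 50]] * 3    # same scene shifted right by 1 pixel
t = sad_transition(img_s, img_r, 0, 1, 3, [0, 1, 2])
print(min(t, key=t.get))  # the SAD minimum falls at shift amount 1
```

The shift amount at which the SAD is smallest is the integer-pixel parallax estimate; the sub-pixel refinement then operates on the values around that minimum.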
- the correlation value calculation unit 6 calculates a correlation value for each new shift amount calculated in step S208 (S212, S209).
- the correlation value calculation unit 6 ends the loop 2 (S209, S212).
- when the calculation of the correlation value for each shift amount is completed in Loop 2 (S209, S212), the correlation value calculation unit 6 acquires an image generated by a reference imaging optical system that has not yet been subjected to the processing related to the calculation of the correlation value, and repeats the processing of Loop 2 (S213, S205).
- when the correlation values have been calculated for all the acquired images, the correlation value calculation unit 6 terminates Loop 1 (S205, S213).
- the correlation value adding unit 7 adds the correlation values between the reference image and each reference image calculated in the above process for each corresponding shift amount (S214).
- here, not only the correlation values of the reference imaging optical systems arranged point-symmetrically but the correlation values of all the reference imaging optical systems are added.
- the composite correlation value obtained by this process forms a symmetric distribution with reference to the actual parallax.
- the actual parallax here is the amount of parallax at the reference baseline length and pixel pitch.
- the parallax conversion unit 24 calculates a new shift amount based on the ratio of the baseline lengths and the pixel pitches. Specifically, when the correlation value calculation unit 6 calculates the correlation value of the image c generated by the reference imaging optical system c, with the reference baseline length being the baseline length Ba of the reference imaging optical system a and the reference pixel pitch, that is, the pixel pitch in the direction parallel to the baseline length Ba, being pitch_a, the parallax conversion unit 24 can calculate the increment Kc (unit: pixel) of the shift amount used when calculating the SAD transition from the baseline length ratio according to (Equation 11). Note that the increment of the shift amount in calculating the correlation value of the image a is one pixel.
- Bc is a baseline length between the standard imaging optical system s and the reference imaging optical system c
- pitch_c is a pixel pitch in a direction parallel to the baseline length Bc.
- here, the baseline length Ba and the baseline length Bb are the same, and the baseline length Bc and the baseline length Bd are also the same. Accordingly, the parallax conversion unit 24 can calculate, as described above, the shift amounts used when calculating the transitions of the SAD between the standard imaging optical system s and the reference imaging optical system c and between the standard imaging optical system s and the reference imaging optical system d.
- that is, when the minimum shift amount is set to 0 pixels, the new shift amounts are 0 pixels, Kc pixels, 2 × Kc pixels, 3 × Kc pixels, and so on. If the minimum shift amount is −2 pixels, the shift amounts are −2 × Kc pixels, −Kc pixels, 0 pixels, Kc pixels, 2 × Kc pixels, 3 × Kc pixels, and so on. Note that, depending on the value of the increment Kc described above, the shift amount may be in sub-pixel units. In this case, when the correlation value calculation unit 6 selects a reference image, the correlation value can be calculated by extracting the reference image by a process such as bilinear interpolation.
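When the shift amount k × Kc is fractional, the reference block must be sampled between pixel centers. A minimal 1-D sketch of such interpolation follows (for purely horizontal shifts the bilinear interpolation mentioned above reduces to linear interpolation along a row; the names and data are assumptions):

```python
# Minimal 1-D interpolation sketch (invented names, not the disclosed code).
def sample_row(row, x):
    """Luminance at fractional horizontal position x, by linear interpolation."""
    i = int(x)
    frac = x - i
    if frac == 0.0:
        return float(row[i])
    return row[i] * (1.0 - frac) + row[i + 1] * frac

def extract_block_row(row, start, size, shift):
    """Cut a size-sample block starting at start + shift, where shift may be
    a non-integer number of pixels (e.g. k * Kc)."""
    return [sample_row(row, start + shift + j) for j in range(size)]

print(extract_block_row([0, 10, 20, 30], 0, 2, 0.5))  # [5.0, 15.0]
```

For 2-D blocks with a vertical shift component, the same weighting would be applied along both axes, which is the bilinear case.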
- similarly, for the standard imaging optical system s and the reference imaging optical systems e and f, the parallax conversion unit 24 calculates a new shift amount when calculating the SAD. The increment used when the parallax conversion unit 24 calculates the new shift amount is obtained by (Equation 12).
- Be is a baseline length between the standard imaging optical system s and the reference imaging optical system e
- pitch_e is a pixel pitch in a direction parallel to the baseline length Be.
- here, the baseline length Be and the baseline length Bf are the same. Therefore, the parallax conversion unit 24 can calculate, as described above, the shift amounts used when calculating the transitions of the SAD between the standard imaging optical system s and the reference imaging optical system e and between the standard imaging optical system s and the reference imaging optical system f. That is, when the minimum shift amount is set to 0 pixels, the new shift amounts are 0 pixels, Ke pixels, 2 × Ke pixels, 3 × Ke pixels, and so on.
- likewise, for the standard imaging optical system s and the reference imaging optical systems g and h, the parallax conversion unit 24 calculates a new shift amount when calculating the SAD. The increment used when the parallax conversion unit 24 calculates the new shift amount is obtained by (Equation 13).
- Bg is a baseline length between the standard imaging optical system s and the reference imaging optical system g
- pitch_g is a pixel pitch in a direction parallel to the baseline length Bg.
- the baseline length Bg and the baseline length Bh are the same.
- therefore, the parallax conversion unit 24 can calculate, as described above, the shift amounts used when calculating the transitions of the SAD between the standard imaging optical system s and the reference imaging optical system g and between the standard imaging optical system s and the reference imaging optical system h. In other words, the shift amounts are 0 pixels, Kg pixels, 2 × Kg pixels, 3 × Kg pixels, and so on.
- here, the variables storing the transitions of the SAD between the standard imaging optical system s and the reference imaging optical systems a to h are denoted Sa(i), Sb(i), Sc(i), Sd(i), Se(i), Sf(i), Sg(i), and Sh(i), respectively.
- the correlation value adding unit 7 synthesizes (adds) the transition of SAD by (Equation 14).
- the shift amount used in the composition of the transition of the SAD is a new shift amount calculated by the parallax conversion unit 24.
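The addition of (Equation 14) can be sketched as follows (the SAD values are invented for illustration, chosen to show how mirror-image skews from a point-symmetric pair cancel in the sum):

```python
# Sketch of (Equation 14): sum the SAD transitions, already aligned to a
# common shift index i by the parallax conversion, into S_sum(i).
def combine_sads(transitions):
    """transitions: one SAD transition (list) per reference optical system,
    all sampled at the same shift indices."""
    return [sum(vals) for vals in zip(*transitions)]

# Invented SAD values for one point-symmetric pair of reference optical
# systems: each transition is skewed, but in opposite directions.
Sa = [8, 3, 1, 5, 10]
Sb = [10, 5, 1, 3, 8]
print(combine_sads([Sa, Sb]))  # [18, 8, 2, 8, 18], symmetric about the minimum
```

In the embodiment all eight transitions Sa(i) through Sh(i) are summed the same way, which further smooths the combined transition.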
- the transition of the SAD synthesized by the correlation value adding unit 7 is symmetric with respect to the actual parallax, as in the first embodiment. Further, since the number of combinations of two reference imaging optical systems that are point-symmetric with respect to the standard imaging optical system s is increased compared with the first embodiment, the variation in the SAD transition is reduced by the smoothing effect, and the linearity of the SAD transition is further improved.
- the parallax calculating unit 8 calculates the sub-pixel level parallax from the correlation value S_sum synthesized by the correlation value adding unit 7 as shown in steps S215 and S216 of FIG. At this time, since the linearity of the transition of the SAD is improved, it is possible to estimate the sub-pixel parallax with high accuracy regardless of the luminance distribution of the object when a low-order interpolation formula is used.
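A sketch of sub-pixel estimation by equiangular straight-line (V-shape) fitting on a combined SAD transition; this is the standard formulation of such a fit, assumed here for illustration rather than quoted from the disclosure:

```python
# Standard equiangular straight-line (V-shape) fit around the SAD minimum.
def equiangular_subpixel(sads):
    """Sub-pixel position of the minimum of a SAD transition."""
    i = min(range(len(sads)), key=sads.__getitem__)
    if i == 0 or i == len(sads) - 1:
        return float(i)                      # no neighbours to fit against
    rm, r0, rp = sads[i - 1], sads[i], sads[i + 1]
    # Fit two lines of equal and opposite slope through the three points;
    # the steeper side determines the slope magnitude.
    denom = (rm - r0) if rm >= rp else (rp - r0)
    if denom == 0:
        return float(i)
    return i + 0.5 * (rm - rp) / denom

print(equiangular_subpixel([18, 8, 2, 8, 18]))  # 2.0 (symmetric transition)
print(equiangular_subpixel([9, 5, 1, 3, 7]))    # 2.25 (minimum skewed right)
```

Because this is a low-order (piecewise-linear) interpolation, its accuracy rests on the linearity and symmetry of the combined transition, which is exactly what the smoothing and addition steps provide.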
- the post-processing unit 9 converts the parallax into a form according to the output and outputs the data, as shown in step S217 of FIG. 23. For example, when outputting the three-dimensional position of the object, it is calculated from m(us1, vs1), the two-dimensional image coordinates of the image s, whose origin is the intersection of the optical axis of the standard imaging optical system s in FIG. 22 and the imaging region 22s.
- the baseline length parameter used at this time is the baseline length Ba (baseline length between the standard imaging optical system s and the reference imaging optical system a). Note that the parallax P in (Expression 1) can be obtained by multiplying the sub-pixel parallax calculated by the above processing by the pixel pitch.
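The final conversion from sub-pixel parallax to subject distance can be sketched as follows, assuming the standard pinhole-stereo form of (Equation 1), D = f × B / P; the numeric values are invented for illustration:

```python
# Sketch assuming (Equation 1) has the standard pinhole-stereo form
# D = f * B / P, with P = sub-pixel parallax * pixel pitch.
def distance_from_parallax(subpix_parallax, pitch, f, baseline):
    """All length arguments in the same unit (e.g. millimetres)."""
    P = subpix_parallax * pitch      # parallax in length units
    return f * baseline / P

# Invented values: f = 5 mm, pitch = 0.002 mm, Ba = 20 mm, parallax = 25 px.
D = distance_from_parallax(25.0, pitch=0.002, f=5.0, baseline=20.0)
print(round(D, 1))  # subject distance in mm
```

Here the baseline parameter corresponds to Ba, the baseline between the standard imaging optical system s and the reference imaging optical system a, as the text specifies.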
- as described above, the transition of the correlation value is symmetric with respect to the actual parallax regardless of the luminance distribution of the object, so that it is possible to provide a distance measuring device that can estimate the subpixel parallax with high accuracy regardless of the object. Further, by increasing the number of pairs of two reference imaging optical systems that are point-symmetric with respect to the standard imaging optical system s, fluctuations in the transition of the correlation value are reduced by the smoothing effect, and by using a low-order interpolation formula it is possible to provide a distance measuring device that can estimate the subpixel parallax with still higher accuracy.
- FIG. 24A to 24C show a configuration of a conventional stereo camera.
- FIGS. 24D to 24F show configurations of the distance measuring apparatus according to the present invention, that is, a distance measuring device provided with at least one pair of two reference optical systems whose optical centers are arranged point-symmetrically with respect to the optical center of the standard imaging optical system, so that the combined correlation value is symmetric with respect to the actual parallax regardless of the luminance distribution of the object.
- FIG. 25A to FIG. 25D are diagrams showing subjects used for comparison of ranging accuracy. For each subject shown in FIGS. 25A to 25D, a shifted image (parallax image) of 10.0 to 11.0 pixels is appropriately and ideally created in units of 0.1 pixel according to each optical system, and used for the comparison simulation.
- 26A to 26D are graphs showing comparison simulation results corresponding to the subjects in FIGS. 25A to 25D.
- the horizontal axis indicates the calculation block size (the number of pixels on one side of the square calculation block) when performing the parallax calculation
- the vertical axis indicates the corresponding parallax detection error.
- the parallax detection error for each calculation block size is the value obtained by dividing each subject in FIG. 25A to FIG. 25D by the corresponding calculation block size and averaging the parallax detection errors of the calculation blocks over the entire subject area.
- note that the parallax detection error for each calculation block is the average of the parallax detection errors for all the shifted images (parallax images) of 10.0 to 11.0 pixels created in increments of 0.1 pixels (that is, all sub-pixel parallax amounts from 0.0 to 0.9 after the decimal point are verified in 0.1-pixel steps).
- the distance measuring device according to the present invention significantly reduces the parallax detection error for every subject compared with the conventional stereo camera (that is, the parallax detection accuracy is improved).
- the distance measuring device according to each of the above-described embodiments is an example for explaining the present invention.
- for example, a distance measuring device as shown in FIG. 27A may be used.
- FIG. 27A is a diagram showing a configuration of an imaging optical system of the distance measuring apparatus according to the present modification.
- the distance measuring device according to the present modification includes the standard imaging optical system s and the reference imaging optical systems a to f, as well as two texture imaging optical systems for mapping a high-resolution color texture onto the three-dimensional position (shape) of the object calculated from the standard imaging optical system s and the reference imaging optical systems a to f.
- the texture imaging optical systems may be arranged in any number and at any positions.
- a new imaging optical system may be added to the distance measuring apparatus according to this modification.
- FIG. 27B is a diagram illustrating a configuration of an imaging optical system of the distance measuring apparatus according to the present modification.
- the distance measuring apparatus according to this modification includes seven imaging optical systems.
- the optical center of the reference imaging optical system a and the optical center of the reference imaging optical system b are arranged on a substantially straight line, substantially point-symmetrically with respect to the optical center of the standard imaging optical system s.
- the optical center of the reference imaging optical system c and the optical center of the reference imaging optical system d are arranged on a substantially straight line with point symmetry with respect to the optical center of the standard imaging optical system s.
- the optical center of the reference imaging optical system e and the optical center of the reference imaging optical system f are arranged on a substantially straight line, point-symmetrically with respect to the optical center of the standard imaging optical system s.
- that is, the distance measuring apparatus may have any structure that includes a plurality of pairs of two reference imaging optical systems whose optical centers are arranged substantially point-symmetrically with respect to the standard imaging optical system on substantially one straight line. Further, the effects of the present invention can be obtained even with a distance measuring device including a larger number of reference imaging optical systems, for example, 10 or 12.
- in the arrangements described above, the block matching calculation may involve shift amounts in sub-pixel units, and the time required for the block matching calculation sometimes increases. Therefore, an arrangement of the imaging optical systems in which the shift amount at the time of the block matching calculation is not in sub-pixel units, that is, in which the calculation time of the block matching calculation is shortened, will be described with reference to FIGS. 28A to 28C.
- the imaging optical system group 1 consists of two reference imaging optical systems whose optical centers are arranged substantially point-symmetrically with respect to the standard imaging optical system on substantially one straight line parallel to the pixel arrangement direction (horizontal or vertical). The other two reference imaging optical systems, whose optical centers are arranged substantially point-symmetrically with respect to the standard imaging optical system on substantially one straight line parallel to the pixel arrangement direction (horizontal or vertical), constitute the imaging optical system group 2.
- the baseline lengths between the standard imaging optical system s and the two groups of reference imaging optical systems are B1 and B2, respectively.
- the pixel shift directions in the block matching calculations for the imaging optical system groups 1 and 2, that is, the pixel pitches in the directions parallel to the respective baselines, are p1 and p2, respectively.
- the imaging optical system group 1 consists of the reference imaging optical system a and the reference imaging optical system b
- the imaging optical system group 2 consists of the reference imaging optical system c and the reference imaging optical system d.
- Be is the baseline length between the standard imaging optical system s and the reference imaging optical system e.
- the block boundary coincides with the pixel boundary (the cut-out coordinates of the reference image are always integers). As a result, it is possible to greatly reduce the calculation time for the block matching calculation.
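As a rough numerical sketch of the point above (all values and names are hypothetical, not from the patent): the shift amount for the second group is the first group's shift scaled by the baseline ratio B2/B1 and converted between the two pixel pitches. Whenever that combined factor is an integer, every cut-out coordinate of the reference image stays integral and no sub-pixel resampling is needed.

```python
def group2_shift_px(shift1_px, b1, b2, p1, p2):
    """Shift amount for imaging optical system group 2, in group-2 pixels,
    corresponding to a group-1 shift of shift1_px pixels: the physical
    shift shift1_px * p1 is scaled by the baseline ratio B2/B1 and then
    expressed in group-2 pixel units (p2)."""
    return shift1_px * p1 * (b2 / b1) / p2

# hypothetical arrangement: B2 = 2 * B1 (mm), identical 5 um pixel pitches
shifts = [group2_shift_px(s, b1=10, b2=20, p1=5, p2=5) for s in range(6)]

# every group-2 shift is a whole number of pixels, so block boundaries
# coincide with pixel boundaries and cut-out coordinates are integers
print(shifts)  # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
```

With a non-integral ratio such as B2 = 1.5·B1, odd shift amounts land between pixels, which is exactly the case the arrangement above avoids.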
- by arranging the imaging regions, that is, the imaging devices of the imaging optical systems, in accordance with the positional relationship of their constituent pixels, the calculation time of the block matching calculation can be reduced significantly. That is, when each imaging optical system is arranged so that the arrangement direction and spacing of the pixels constituting its imaging region are similar to the direction and length of the baseline of each imaging optical system, the calculation time can be greatly reduced.
- the distance measuring apparatus may be configured so that the optical center of each reference imaging optical system is substantially point-symmetric with respect to the optical center of the standard imaging optical system s.
- FIGS. 29A and 29B are views of other examples of the distance measuring apparatus according to the present invention, seen from above the apparatus; components other than the lenses and image sensors are omitted.
- the baseline connecting the optical centers of the optical systems and the optical axes of the optical systems need not be perpendicular.
- alternatively, the calculation may be performed with the optical axis and the baseline direction made perpendicular by calibration (viewpoint conversion) using an affine transformation.
- the optical axes of the optical systems do not necessarily have to be parallel, because the direction of the optical axis can be corrected by calibration (viewpoint conversion) using an affine transformation.
- the imaging optical system according to each of the above embodiments and modifications may include a color imaging device such as a Bayer array.
- the parallax can be calculated with high accuracy by using a color image whose resolution has been increased by a generally known demosaic process or the like, as in the above-described embodiments and modifications.
- the optical center of the standard imaging optical system and the optical centers of the other two reference imaging optical systems are on one straight line, with the optical center of the standard imaging optical system as the reference.
- the optical center of the standard imaging optical system and the optical centers of the other two reference imaging optical systems may be arranged on a substantially straight line and substantially point-symmetrically with respect to the optical center of the standard imaging optical system.
- substantially on a straight line and substantially point-symmetric refers to a range that satisfies the conditions of (Expression 8) and (Expression 10) described in the third embodiment.
- SAD is used as the function for calculating the correlation value.
- ZNCC (the zero-mean normalized cross-correlation coefficient) may also be used.
- in that case, the correlation value is 1 when the correlation is highest and less than 1 when the correlation is lower.
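As an illustrative sketch (my own minimal version, not the patent's implementation), ZNCC for two equal-length blocks can be computed as follows; its value is 1 when the blocks match up to a gain and offset, and lower otherwise:

```python
import math

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length blocks.
    Returns 1.0 for blocks identical up to gain/offset, lower otherwise."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]          # zero-mean versions of each block
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

block = [10, 20, 30, 40, 30, 20]
# gain of 2 and offset of 5 do not change the correlation value
print(zncc(block, [2 * v + 5 for v in block]))  # 1.0
```

This invariance to gain and offset is why ZNCC tolerates brightness differences between the imaging optical systems better than SAD.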
- the present invention can also be applied to the case where the shift amount at which the transition of the correlation value reaches its maximum is obtained as the parallax at the sub-pixel level.
- the present invention can be applied to the aforementioned SSD, NCC, and the like as a function for calculating a correlation value.
- since the transition of the correlation value is symmetric with respect to the actual parallax, the sub-pixel parallax can be obtained with high accuracy regardless of whether the extreme value of the correlation transition is a maximum or a minimum.
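A one-dimensional sketch of the overall scheme, on synthetic data: SAD transitions are computed for a point-symmetric pair of reference images (each searched along its own baseline direction), summed per shift amount, and the minimum is refined to sub-pixel precision. The parabola fit through the minimum and its two neighbours is an assumption standing in for the patent's own symmetry-based interpolation formula.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def transition(base, ref, start, size, shifts):
    """SAD for each shift of the search window along the baseline direction."""
    return [sad(base[start:start + size], ref[start + s:start + s + size])
            for s in shifts]

def subpixel_parallax(combined):
    """Integer minimum of the combined correlation values, refined with a
    parabola through the minimum and its two neighbours."""
    i = min(range(1, len(combined) - 1), key=lambda k: combined[k])
    cm, c0, cp = combined[i - 1], combined[i], combined[i + 1]
    den = 2 * (cm - 2 * c0 + cp)
    return i + ((cm - cp) / den if den else 0.0)

# synthetic 1-D scene and a point-symmetric pair of reference images
scene = [((x * 7) % 13) + 0.5 * ((x * 3) % 5) for x in range(64)]
d = 3                                  # true parallax in pixels
ref_a = [0.0] * d + scene[:-d]         # reference a: scene shifted by +d
ref_b = scene[d:] + [0.0] * d          # reference b: scene shifted by -d

start, size = 20, 8
mags = range(7)                        # shift magnitudes to search
c_a = transition(scene, ref_a, start, size, [+m for m in mags])
c_b = transition(scene, ref_b, start, size, [-m for m in mags])
combined = [a + b for a, b in zip(c_a, c_b)]   # sum per shift amount
print(subpixel_parallax(combined))             # close to 3.0
```

Because the two references sit on opposite sides of the base camera, asymmetric error components in the two transitions tend to cancel in the sum, which is the effect the patent relies on.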
- such an apparatus is referred to as a compound-eye imaging apparatus. The compound-eye imaging apparatus has a configuration similar to that of the distance measuring apparatus. However, unlike the distance measuring apparatus, the compound-eye imaging apparatus does not include the post-processing unit 9 that calculates the three-dimensional position and distance of the object.
- the present invention can be realized not only as the distance measuring apparatus described above, but also as a distance measuring method or a parallax calculation method including, as steps, the characteristic components of the distance measuring apparatus, or as a program that causes a computer to execute those steps. Needless to say, such a program can be distributed via a recording medium such as a CD-ROM or a transmission medium such as the Internet.
- the present invention can also be realized as a semiconductor integrated circuit (LSI) that realizes a part of the functions of the components of the distance measuring apparatus as described above.
- LSI (semiconductor integrated circuit)
- the present invention relates to a compound-eye imaging apparatus capable of calculating the parallax generated by a plurality of imaging optical systems that capture the same object, and to a distance measuring device that can determine the distance from the apparatus to the object or the three-dimensional position or shape of the object. It is useful for in-vehicle, surveillance, medical, robot, game, CG image creation, and stereoscopic image input applications, and for the autofocus of digital cameras and digital video cameras.
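The distance computation behind these applications reduces to pinhole triangulation from the calculated parallax, the focal length, and the baseline length. A minimal sketch (the function name and the numerical values are this sketch's own, not from the patent):

```python
def distance_mm(parallax_px, focal_len_mm, baseline_mm, pixel_pitch_mm):
    """Distance D from camera to object via triangulation:
    D = f * B / (parallax_in_pixels * pixel_pitch)."""
    return focal_len_mm * baseline_mm / (parallax_px * pixel_pitch_mm)

# f = 5 mm, baseline B = 50 mm, 5 um pixels, measured parallax of 10 px
print(distance_mm(10, 5.0, 50.0, 0.005))  # about 5000 mm, i.e. 5 m
```

The same relation, rearranged, shows why sub-pixel parallax accuracy matters: at long range the parallax is small, so a fixed pixel-level error translates into a large relative distance error.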
Abstract
Description
Optical center position error ≤ D · pitch · 0.15 / f
is preferably satisfied.
Baseline length error ≤ D · pitch · 0.2 / f
is preferably satisfied.
2s, 2a, 2b, 21a, 21b, 21c, 21d, 21e, 21f, 21g, 21h Lens
3s, 3a, 3b, 22a, 22b, 22c, 22d, 22e, 22f, 22g, 22h Imaging region
4 A/D conversion unit
5 Preprocessing unit
6 Correlation value calculation unit
7 Correlation value addition unit
8 Parallax calculation unit
9 Post-processing unit
10s, 10a, 10b Optical center
11s, 11a, 11b Optical axis
12 Object
13 Point on the surface of the object
14s, 14a, 14b, 15s, 15a, 15b Block
16a, 16b, 17a, 17b Arrow
20 Compound-eye camera
23 Smoothing filter unit
24 Parallax conversion unit
25s Optical center
50, 60, 70, 80 Distance measuring apparatus
FIG. 1 is a diagram showing the configuration of a distance measuring apparatus 50 according to the present embodiment. The distance measuring apparatus 50 includes three cameras 1s, 1a, and 1b, as well as an A/D conversion unit 4, a preprocessing unit 5, a correlation value calculation unit 6, a correlation value addition unit 7, a parallax calculation unit 8, and a post-processing unit 9.
Next, a distance measuring apparatus according to Embodiment 2 of the present invention will be described.
Next, a distance measuring apparatus according to Embodiment 3 of the present invention will be described.
Next, a distance measuring apparatus according to Embodiment 4 of the present invention will be described.
This example shows comparison simulation results of the distance measurement accuracy (parallax detection accuracy) of a conventional stereo camera and of the distance measuring apparatus according to the present invention. FIGS. 24A to 24C show configurations of conventional stereo cameras. FIGS. 24D to 24F show configurations of the distance measuring apparatus according to the present invention, that is, apparatuses in which the optical centers of two reference optical systems are arranged point-symmetrically with respect to the optical center of the standard imaging optical system, with one or more such pairs provided, so that the combined correlation value becomes symmetric about the actual parallax regardless of the luminance distribution of the object. In FIGS. 24A to 24F, the focal length, the horizontal and vertical pixel pitches, and the subject distance are the same for all optical systems. The baseline lengths of the reference images with respect to the standard imaging optical system are all the same where the baselines run horizontally or vertically, and sqrt(2) times the horizontal baseline length where the baselines run diagonally. The baseline-perpendicular-direction error and baseline-direction error described in Embodiment 3 are not included here. FIGS. 25A to 25D show the subjects used for the comparison of distance measurement accuracy. For each subject in FIGS. 25A to 25D, shifted images (parallax images) of 10.0 to 11.0 pixels in ideal 0.1-pixel increments were created appropriately for each optical system and used in the comparison simulation. White noise of the level observed with an actual image sensor is added to the images. FIGS. 26A to 26D are graphs showing the comparison simulation results corresponding to the subjects in FIGS. 25A to 25D. In each graph of FIGS. 26A to 26D, the horizontal axis indicates the calculation block size used in the parallax calculation (the number of pixels on one side of a square calculation block), and the vertical axis indicates the corresponding parallax detection error. The parallax detection error for each calculation block size is obtained by dividing each subject in FIGS. 25A to 25D into regions of that block size and averaging the parallax detection error of each calculation block over the entire subject area. Furthermore, the parallax detection error for each calculation block is the average of the parallax detection errors over all shift amounts of the shifted images (parallax images) of 10.0 to 11.0 pixels created in 0.1-pixel increments (that is, all parallax amounts whose fractional parts range from 0.0 to 0.9 in 0.1-pixel increments are verified).
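The shifted (parallax) test images described above can be generated, in one dimension, by linear interpolation. This sketch is my own minimal version of that idea, without the white-noise addition:

```python
import math

def shift_linear(sig, d):
    """Shift a 1-D signal by d pixels (d may be fractional) using linear
    interpolation; samples that fall outside the signal become 0.0."""
    out = []
    for x in range(len(sig)):
        src = x - d                  # source position for this output sample
        i = math.floor(src)
        frac = src - i
        if 0 <= i and i + 1 < len(sig):
            out.append((1 - frac) * sig[i] + frac * sig[i + 1])
        else:
            out.append(0.0)          # outside the signal
    return out

sig = [0.0, 1.0, 2.0, 3.0, 4.0]
# a half-pixel shift produces the midpoints of neighbouring samples
print(shift_linear(sig, 0.5))  # [0.0, 0.5, 1.5, 2.5, 3.5]
```

Stepping d from 10.0 to 11.0 in 0.1-pixel increments, as in the simulation, exercises every fractional parallax from 0.0 to 0.9.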
The distance measuring apparatus according to each of the embodiments described above is an example for explaining the present invention; for example, a distance measuring apparatus configured as shown in FIG. 27A may also be used.
The distance measuring apparatus according to each of the embodiments described above includes four or eight reference imaging optical systems, but it goes without saying that six reference imaging optical systems may be provided.
As shown in FIGS. 29A and 29B, the distance measuring apparatus according to each of the embodiments described above only needs to have the optical center of each reference imaging optical system arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system s. FIGS. 29A and 29B are views of other examples of the distance measuring apparatus according to the present invention, seen from above the apparatus; components other than the lenses and image sensors are omitted. As shown in FIG. 29A, the baseline connecting the optical centers of the optical systems and the optical axes of the optical systems need not be perpendicular. In this case, the correlation value for each reference imaging optical system may be derived using a conventional parallax search method in which the block size is varied for each shift amount during the parallax search; since the symmetry of the transition of the combined correlation value with respect to the actual parallax is not impaired, the effects of the present invention can still be obtained. Alternatively, the calculation may be performed with the optical axis and the baseline direction made perpendicular by calibration (viewpoint conversion) using an affine transformation. In addition, as shown in FIG. 29B, the optical axes of the optical systems do not necessarily have to be parallel, because the direction of each optical axis can be corrected by calibration (viewpoint conversion) using an affine transformation.
Claims (11)
- A compound-eye imaging apparatus that calculates parallax generated by a plurality of imaging optical systems that capture the same object, the apparatus comprising at least:
a standard imaging optical system that generates an image including a standard image by capturing the object;
two or more (an even number of) reference imaging optical systems whose optical centers are arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system, and which generate images including reference images by capturing the object;
correlation value calculation means for calculating, for each of the two or more reference imaging optical systems, a correlation value representing the degree of similarity between the standard image and the reference image for each shift amount obtained when, in order to search for the image position of the reference image similar to the standard image, the search position of the reference image within the image generated by the reference imaging optical system is shifted along a direction parallel to the baseline, the baseline being the straight line connecting the optical center of the standard imaging optical system and the optical center of the reference imaging optical system;
correlation value addition means for calculating a combined correlation value by adding the correlation values calculated for each of the two or more reference imaging optical systems for each corresponding shift amount; and
parallax calculation means for calculating, at the sub-pixel level and based on the combined correlation value, the parallax, which is the shift amount at which the degree of similarity between the standard image and the reference image is maximized.
- The compound-eye imaging apparatus according to claim 1, wherein the parallax calculation means calculates the sub-pixel-level parallax by interpolating the correlation values for each shift amount added by the correlation value addition means, using an interpolation formula that exploits symmetry.
- The compound-eye imaging apparatus according to claim 1, wherein the compound-eye imaging apparatus comprises four or more reference imaging optical systems, and
the optical centers of the four or more reference imaging optical systems are arranged such that the direction of the baseline of a pair of first reference imaging optical systems, arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system, is inclined by a predetermined angle from the direction of the baseline of a pair of second reference imaging optical systems, whose optical centers differ from those of the first reference imaging optical systems and which are arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system.
- The compound-eye imaging apparatus according to claim 3, wherein the four or more reference imaging optical systems are arranged such that a first baseline length, which is the length of the baseline between a first reference imaging optical system and the standard imaging optical system, differs from a second baseline length, which is the length of the baseline between a second reference imaging optical system and the standard imaging optical system, and
when calculating the correlation value of a reference image generated by a second reference imaging optical system, the correlation value calculation means calculates the correlation value for each second shift amount, the second shift amount being the value obtained by multiplying the first shift amount, used when calculating the correlation value of a reference image generated by a first reference imaging optical system, by the value obtained by dividing the second baseline length by the first baseline length.
- The compound-eye imaging apparatus according to claim 3, wherein the standard imaging optical system and the four or more reference imaging optical systems are arranged so as to match the positional relationship of the pixels constituting the imaging device of the standard imaging optical system.
- The compound-eye imaging apparatus according to claim 1, wherein, for each pair of reference imaging optical systems arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system, the optical center position error, which is the distance between the straight line connecting the optical center of one reference imaging optical system of the pair with the optical center of the standard imaging optical system and the optical center of the other reference imaging optical system of the pair, satisfies
optical center position error ≤ D · pitch · 0.15 / f
where D is the distance to the object, pitch is the pixel pitch, and f is the focal length.
- The compound-eye imaging apparatus according to claim 1, wherein, for each pair of reference imaging optical systems arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system, the baseline length error, which is the difference between a first baseline length, the distance between the optical center of one reference imaging optical system of the pair and the optical center of the standard imaging optical system, and a second baseline length, the distance between the optical center of the other reference imaging optical system of the pair and the optical center of the standard imaging optical system, satisfies
baseline length error ≤ D · pitch · 0.2 / f
where D is the distance to the object, pitch is the pixel pitch, and f is the focal length.
- The compound-eye imaging apparatus according to claim 1, further comprising preprocessing means for applying a smoothing filter to the standard image and the reference images,
wherein the correlation value calculation means calculates the correlation values based on the standard image and the reference images to which the smoothing filter has been applied.
- A distance measuring apparatus that calculates the distance to an object or the three-dimensional position of the object by calculating parallax generated by a plurality of imaging optical systems that capture the same object, the apparatus comprising:
a standard imaging optical system that generates an image including a standard image by capturing the object;
two or more (an even number of) reference imaging optical systems whose optical centers are arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system, and which generate images including reference images by capturing the object;
correlation value calculation means for calculating, for each of the two or more reference imaging optical systems, a correlation value representing the degree of similarity between the standard image and the reference image for each shift amount obtained when, in order to search for the image position of the reference image similar to the standard image, the search position of the reference image within the image generated by the reference imaging optical system is shifted along a direction parallel to the baseline, the baseline being the straight line connecting the optical center of the standard imaging optical system and the optical center of the reference imaging optical system;
correlation value addition means for calculating a combined correlation value by adding the correlation values calculated for each of the two or more reference imaging optical systems for each corresponding shift amount;
parallax calculation means for calculating, at the sub-pixel level and based on the combined correlation value, the parallax, which is the shift amount at which the degree of similarity between the standard image and the reference image is maximized; and
distance calculation means for calculating the distance from the distance measuring apparatus to the object or the three-dimensional position of the object, based on the calculated parallax, the focal length of the standard imaging optical system, and the length of the baseline.
- A parallax calculation method for calculating parallax generated by a plurality of imaging optical systems that capture the same object, wherein
the plurality of imaging optical systems include:
a standard imaging optical system that generates an image including a standard image by capturing the object; and
two or more (an even number of) reference imaging optical systems whose optical centers are arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system, and which generate images including reference images by capturing the object,
and the parallax calculation method includes:
a correlation value calculation step of calculating, for each of the two or more reference imaging optical systems, a correlation value representing the degree of similarity between the standard image and the reference image for each shift amount obtained when, in order to search for the image position of the reference image similar to the standard image, the search position of the reference image within the image generated by the reference imaging optical system is shifted along a direction parallel to the baseline, the baseline being the straight line connecting the optical center of the standard imaging optical system and the optical center of the reference imaging optical system;
a correlation value addition step of calculating a combined correlation value by adding the correlation values calculated for each of the two or more reference imaging optical systems for each corresponding shift amount; and
a parallax calculation step of calculating, at the sub-pixel level and based on the combined correlation value, the parallax, which is the shift amount at which the degree of similarity between the standard image and the reference image is maximized.
- A distance measuring method for calculating the distance to an object or the three-dimensional position of the object by calculating parallax generated by a plurality of imaging optical systems that capture the same object, wherein
the plurality of imaging optical systems include:
a standard imaging optical system that generates an image including a standard image by capturing the object; and
two or more (an even number of) reference imaging optical systems whose optical centers are arranged substantially point-symmetrically with respect to the optical center of the standard imaging optical system, and which generate images including reference images by capturing the object,
and the distance measuring method includes:
a correlation value calculation step of calculating, for each of the two or more reference imaging optical systems, a correlation value representing the degree of similarity between the standard image and the reference image for each shift amount obtained when, in order to search for the image position of the reference image similar to the standard image, the search position of the reference image within the image generated by the reference imaging optical system is shifted along a direction parallel to the baseline, the baseline being the straight line connecting the optical center of the standard imaging optical system and the optical center of the reference imaging optical system;
a correlation value addition step of calculating a combined correlation value by adding the correlation values calculated for each of the two or more reference imaging optical systems for each corresponding shift amount;
a parallax calculation step of calculating, at the sub-pixel level and based on the combined correlation value, the parallax, which is the shift amount at which the degree of similarity between the standard image and the reference image is maximized; and
a distance calculation step of calculating the distance from the distance measuring apparatus to the object or the three-dimensional position of the object, based on the calculated parallax, the focal length of the standard imaging optical system, and the length of the baseline.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009526429A JP4382156B2 (ja) | 2008-02-12 | 2009-02-10 | 複眼撮像装置、測距装置、視差算出方法及び測距方法 |
US12/594,975 US8090195B2 (en) | 2008-02-12 | 2009-02-10 | Compound eye imaging apparatus, distance measuring apparatus, disparity calculation method, and distance measuring method |
CN2009800002254A CN101680756B (zh) | 2008-02-12 | 2009-02-10 | 复眼摄像装置、测距装置、视差算出方法以及测距方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008-030598 | 2008-02-12 | ||
JP2008030598 | 2008-02-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009101798A1 true WO2009101798A1 (ja) | 2009-08-20 |
Family
ID=40956829
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/000534 WO2009101798A1 (ja) | 2008-02-12 | 2009-02-10 | 複眼撮像装置、測距装置、視差算出方法及び測距方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US8090195B2 (ja) |
JP (1) | JP4382156B2 (ja) |
CN (1) | CN101680756B (ja) |
WO (1) | WO2009101798A1 (ja) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012114593A (ja) * | 2010-11-22 | 2012-06-14 | Nippon Hoso Kyokai <Nhk> | 多視点ロボットカメラシステム、多視点ロボットカメラ制御装置及びプログラム |
JP2013145605A (ja) * | 2013-04-30 | 2013-07-25 | Toshiba Corp | 画像処理装置 |
WO2014069169A1 (ja) * | 2012-10-29 | 2014-05-08 | 日立オートモティブシステムズ株式会社 | 画像処理装置 |
CN103839259A (zh) * | 2014-02-13 | 2014-06-04 | 西安交通大学 | 一种图像搜寻最优匹配块方法及装置 |
JP2016118830A (ja) * | 2014-12-18 | 2016-06-30 | 株式会社リコー | 視差値導出装置、機器制御システム、移動体、ロボット、視差値導出方法、およびプログラム |
CN111402315A (zh) * | 2020-03-03 | 2020-07-10 | 四川大学 | 一种自适应调整双目摄像机基线的三维距离测量方法 |
US10791314B2 (en) | 2010-03-31 | 2020-09-29 | Interdigital Ce Patent Holdings, Sas | 3D disparity maps |
US10810762B2 (en) | 2010-09-24 | 2020-10-20 | Kabushiki Kaisha Toshiba | Image processing apparatus |
US10872432B2 (en) | 2018-01-05 | 2020-12-22 | Panasonic Intellectual Property Management Co., Ltd. | Disparity estimation device, disparity estimation method, and program |
JP2021162305A (ja) * | 2020-03-30 | 2021-10-11 | ミネベアミツミ株式会社 | 測距システム、測距方法及び測距プログラム |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8885067B2 (en) * | 2009-12-24 | 2014-11-11 | Sharp Kabushiki Kaisha | Multocular image pickup apparatus and multocular image pickup method |
US20110222757A1 (en) * | 2010-03-10 | 2011-09-15 | Gbo 3D Technology Pte. Ltd. | Systems and methods for 2D image and spatial data capture for 3D stereo imaging |
JP5682291B2 (ja) * | 2010-12-20 | 2015-03-11 | ソニー株式会社 | 補正値演算装置、複眼撮像装置、および、補正値演算装置の制御方法 |
JP5782766B2 (ja) * | 2011-03-18 | 2015-09-24 | 株式会社リコー | 画像処理装置、画像処理方法、及び画像処理プログラム |
JP5792662B2 (ja) * | 2011-03-23 | 2015-10-14 | シャープ株式会社 | 視差算出装置、距離算出装置及び視差算出方法 |
BR112013030289A2 (pt) | 2011-05-26 | 2016-11-29 | Thomson Licensing | mapas independentes de escala |
JP2012247364A (ja) * | 2011-05-30 | 2012-12-13 | Panasonic Corp | ステレオカメラ装置、ステレオカメラシステム、プログラム |
JP5942203B2 (ja) * | 2011-06-17 | 2016-06-29 | パナソニックIpマネジメント株式会社 | ステレオ画像処理装置およびステレオ画像処理方法 |
JP5318168B2 (ja) * | 2011-09-07 | 2013-10-16 | シャープ株式会社 | 立体画像処理装置、立体画像処理方法、及びプログラム |
JP6008298B2 (ja) | 2012-05-28 | 2016-10-19 | パナソニックIpマネジメント株式会社 | 画像処理装置、撮像装置、画像処理方法、およびプログラム |
JP5837463B2 (ja) * | 2012-07-06 | 2015-12-24 | 株式会社東芝 | 画像処理装置および画像処理システム |
US20140063199A1 (en) * | 2012-09-05 | 2014-03-06 | Samsung Electro-Mechanics Co., Ltd. | Electronic device and depth calculating method of stereo camera image using the same |
EP2902743B1 (en) * | 2012-09-24 | 2017-08-23 | FUJIFILM Corporation | Device and method for measuring distances to two subjects |
CN102980556B (zh) * | 2012-11-29 | 2015-08-12 | 小米科技有限责任公司 | 一种测距方法及装置 |
US9442482B2 (en) * | 2013-04-29 | 2016-09-13 | GlobalFoundries, Inc. | System and method for monitoring wafer handling and a wafer handling machine |
JP6232497B2 (ja) * | 2014-06-25 | 2017-11-15 | 株式会社日立製作所 | 外界認識装置 |
US9519289B2 (en) | 2014-11-26 | 2016-12-13 | Irobot Corporation | Systems and methods for performing simultaneous localization and mapping using machine vision systems |
CN106537186B (zh) * | 2014-11-26 | 2021-10-08 | 艾罗伯特公司 | 用于使用机器视觉系统执行同时定位和映射的系统和方法 |
US10063840B2 (en) * | 2014-12-31 | 2018-08-28 | Intel Corporation | Method and system of sub pixel accuracy 3D measurement using multiple images |
KR102298652B1 (ko) | 2015-01-27 | 2021-09-06 | 삼성전자주식회사 | 시차 결정 방법 및 장치 |
KR101729164B1 (ko) * | 2015-09-03 | 2017-04-24 | 주식회사 쓰리디지뷰아시아 | 멀티 구 교정장치를 이용한 멀티 카메라 시스템의 이미지 보정 방법 |
KR101729165B1 (ko) | 2015-09-03 | 2017-04-21 | 주식회사 쓰리디지뷰아시아 | 타임 슬라이스 영상용 오차교정 유닛 |
CN105627926B (zh) * | 2016-01-22 | 2017-02-08 | 尹兴 | 四像机组平面阵列特征点三维测量系统及测量方法 |
US10582179B2 (en) * | 2016-02-01 | 2020-03-03 | Samsung Electronics Co., Ltd. | Method and apparatus for processing binocular disparity image |
JP6585006B2 (ja) * | 2016-06-07 | 2019-10-02 | 株式会社東芝 | 撮影装置および車両 |
CN107809610B (zh) * | 2016-09-08 | 2021-06-11 | 松下知识产权经营株式会社 | 摄像头参数集算出装置、摄像头参数集算出方法以及记录介质 |
CN107167092B (zh) * | 2017-05-18 | 2019-12-13 | 上海晶电新能源有限公司 | 一种基于多目图像识别的定日镜面形检测系统及方法 |
CN108965651A (zh) * | 2017-05-19 | 2018-12-07 | 深圳市道通智能航空技术有限公司 | 一种无人机高度测量方法以及无人机 |
US10762658B2 (en) * | 2017-10-24 | 2020-09-01 | Altek Corporation | Method and image pick-up apparatus for calculating coordinates of object being captured using fisheye images |
CN109813251B (zh) * | 2017-11-21 | 2021-10-01 | 蒋晶 | 用于三维测量的方法、装置以及系统 |
DE102017130897A1 (de) | 2017-12-21 | 2019-06-27 | Pilz Gmbh & Co. Kg | Verfahren zum Bestimmen von Entfernungsinformation aus einer Abbildung eines Raumbereichs |
CN108181666B (zh) * | 2017-12-26 | 2020-02-18 | 中国科学院上海技术物理研究所 | 一种广域覆盖窄域多点重点侦察检测方法 |
CN109191527B (zh) * | 2018-11-15 | 2021-06-11 | 凌云光技术股份有限公司 | 一种基于最小化距离偏差的对位方法及装置 |
US20220415054A1 (en) * | 2019-06-24 | 2022-12-29 | Nec Corporation | Learning device, traffic event prediction system, and learning method |
CN112129262B (zh) * | 2020-09-01 | 2023-01-06 | 珠海一微半导体股份有限公司 | 一种多摄像头组的视觉测距方法及视觉导航芯片 |
CN113676659B (zh) * | 2021-08-11 | 2023-05-26 | Oppo广东移动通信有限公司 | 图像处理方法及装置、终端及计算机可读存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6266113A (ja) * | 1985-09-19 | 1987-03-25 | Tokyo Optical Co Ltd | 座標測定方法及びその装置 |
JPS6473468A (en) * | 1987-09-14 | 1989-03-17 | Sony Corp | Image processor |
JPH07234111A (ja) * | 1994-02-23 | 1995-09-05 | Matsushita Electric Works Ltd | 三次元物体の計測方法 |
JPH0861932A (ja) * | 1994-08-23 | 1996-03-08 | Sumitomo Electric Ind Ltd | 多眼視覚装置の信号処理方法 |
JPH0949728A (ja) * | 1995-08-04 | 1997-02-18 | Omron Corp | 距離測定装置 |
Family Cites Families (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4745562A (en) * | 1985-08-16 | 1988-05-17 | Schlumberger, Limited | Signal processing disparity resolution |
US5109425A (en) * | 1988-09-30 | 1992-04-28 | The United States Of America As Represented By The United States National Aeronautics And Space Administration | Method and apparatus for predicting the direction of movement in machine vision |
US5179441A (en) * | 1991-12-18 | 1993-01-12 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Near real-time stereo vision system |
US5768404A (en) * | 1994-04-13 | 1998-06-16 | Matsushita Electric Industrial Co., Ltd. | Motion and disparity estimation method, image synthesis method, and apparatus for implementing same methods |
JP3242529B2 (ja) * | 1994-06-07 | 2001-12-25 | 松下通信工業株式会社 | ステレオ画像対応付け方法およびステレオ画像視差計測方法 |
JP3539788B2 (ja) * | 1995-04-21 | 2004-07-07 | パナソニック モバイルコミュニケーションズ株式会社 | 画像間対応付け方法 |
US5612735A (en) * | 1995-05-26 | 1997-03-18 | Luncent Technologies Inc. | Digital 3D/stereoscopic video compression technique utilizing two disparity estimates |
US5652616A (en) * | 1996-08-06 | 1997-07-29 | General Instrument Corporation Of Delaware | Optimal disparity estimation for stereoscopic video coding |
US6215898B1 (en) * | 1997-04-15 | 2001-04-10 | Interval Research Corporation | Data processing system and method |
US5917937A (en) * | 1997-04-15 | 1999-06-29 | Microsoft Corporation | Method for performing stereo matching to recover depths, colors and opacities of surface elements |
US6222938B1 (en) * | 1998-05-20 | 2001-04-24 | Canon Kabushiki Kaisha | Compensating pixel records of related images for detecting images disparity, apparatus and method |
US6141440A (en) * | 1998-06-04 | 2000-10-31 | Canon Kabushiki Kaisha | Disparity measurement with variably sized interrogation regions |
JP2000283753A (ja) | 1999-03-31 | 2000-10-13 | Fuji Heavy Ind Ltd | ステレオ画像による測距装置 |
US7015954B1 (en) * | 1999-08-09 | 2006-03-21 | Fuji Xerox Co., Ltd. | Automatic video system using multiple cameras |
US20020012459A1 (en) * | 2000-06-22 | 2002-01-31 | Chips Brain Co. Ltd. | Method and apparatus for detecting stereo disparity in sequential parallel processing mode |
US7085431B2 (en) * | 2001-11-13 | 2006-08-01 | Mitutoyo Corporation | Systems and methods for reducing position errors in image correlation systems during intra-reference-image displacements |
EP1466137B1 (en) * | 2001-12-28 | 2010-04-14 | Rudolph Technologies, Inc. | Stereoscopic three-dimensional metrology system and method |
US6961481B2 (en) * | 2002-07-05 | 2005-11-01 | Lockheed Martin Corporation | Method and apparatus for image processing using sub-pixel differencing |
US7164784B2 (en) * | 2002-07-30 | 2007-01-16 | Mitsubishi Electric Research Laboratories, Inc. | Edge chaining using smoothly-varying stereo disparity |
US6847728B2 (en) * | 2002-12-09 | 2005-01-25 | Sarnoff Corporation | Dynamic depth recovery from multiple synchronized video streams |
FR2880718A1 (fr) * | 2005-01-10 | 2006-07-14 | St Microelectronics Sa | Procede et dispositif de reduction des artefacts d'une image numerique |
US7606424B2 (en) * | 2006-02-13 | 2009-10-20 | 3M Innovative Properties Company | Combined forward and reverse correlation |
- 2009
- 2009-02-10 CN CN2009800002254A patent/CN101680756B/zh active Active
- 2009-02-10 JP JP2009526429A patent/JP4382156B2/ja active Active
- 2009-02-10 US US12/594,975 patent/US8090195B2/en active Active
- 2009-02-10 WO PCT/JP2009/000534 patent/WO2009101798A1/ja active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6266113A (ja) * | 1985-09-19 | 1987-03-25 | Tokyo Optical Co Ltd | 座標測定方法及びその装置 |
JPS6473468A (en) * | 1987-09-14 | 1989-03-17 | Sony Corp | Image processor |
JPH07234111A (ja) * | 1994-02-23 | 1995-09-05 | Matsushita Electric Works Ltd | 三次元物体の計測方法 |
JPH0861932A (ja) * | 1994-08-23 | 1996-03-08 | Sumitomo Electric Ind Ltd | 多眼視覚装置の信号処理方法 |
JPH0949728A (ja) * | 1995-08-04 | 1997-02-18 | Omron Corp | 距離測定装置 |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10791314B2 (en) | 2010-03-31 | 2020-09-29 | Interdigital Ce Patent Holdings, Sas | 3D disparity maps |
US10810762B2 (en) | 2010-09-24 | 2020-10-20 | Kabushiki Kaisha Toshiba | Image processing apparatus |
JP2012114593A (ja) * | 2010-11-22 | 2012-06-14 | Nippon Hoso Kyokai <Nhk> | 多視点ロボットカメラシステム、多視点ロボットカメラ制御装置及びプログラム |
US9652853B2 (en) | 2012-10-29 | 2017-05-16 | Hitachi Automotive Systems, Ltd. | Image processing device |
WO2014069169A1 (ja) * | 2012-10-29 | 2014-05-08 | 日立オートモティブシステムズ株式会社 | 画像処理装置 |
JP2014090233A (ja) * | 2012-10-29 | 2014-05-15 | Hitachi Automotive Systems Ltd | 画像処理装置 |
CN104769942A (zh) * | 2012-10-29 | 2015-07-08 | 日立汽车系统株式会社 | 图像处理装置 |
JP2013145605A (ja) * | 2013-04-30 | 2013-07-25 | Toshiba Corp | 画像処理装置 |
CN103839259A (zh) * | 2014-02-13 | 2014-06-04 | 西安交通大学 | 一种图像搜寻最优匹配块方法及装置 |
CN103839259B (zh) * | 2014-02-13 | 2016-11-23 | 西安交通大学 | 一种图像搜寻最优匹配块方法及装置 |
JP2016118830A (ja) * | 2014-12-18 | 2016-06-30 | 株式会社リコー | 視差値導出装置、機器制御システム、移動体、ロボット、視差値導出方法、およびプログラム |
US10872432B2 (en) | 2018-01-05 | 2020-12-22 | Panasonic Intellectual Property Management Co., Ltd. | Disparity estimation device, disparity estimation method, and program |
CN111402315A (zh) * | 2020-03-03 | 2020-07-10 | 四川大学 | 一种自适应调整双目摄像机基线的三维距离测量方法 |
CN111402315B (zh) * | 2020-03-03 | 2023-07-25 | 四川大学 | 一种自适应调整双目摄像机基线的三维距离测量方法 |
JP2021162305A (ja) * | 2020-03-30 | 2021-10-11 | ミネベアミツミ株式会社 | 測距システム、測距方法及び測距プログラム |
JP7397734B2 (ja) | 2020-03-30 | 2023-12-13 | ミネベアミツミ株式会社 | 測距システム、測距方法及び測距プログラム |
Also Published As
Publication number | Publication date |
---|---|
JP4382156B2 (ja) | 2009-12-09 |
US20100150455A1 (en) | 2010-06-17 |
CN101680756A (zh) | 2010-03-24 |
CN101680756B (zh) | 2012-09-05 |
US8090195B2 (en) | 2012-01-03 |
JPWO2009101798A1 (ja) | 2011-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4382156B2 (ja) | 複眼撮像装置、測距装置、視差算出方法及び測距方法 | |
JP4782899B2 (ja) | 視差検出装置、測距装置及び視差検出方法 | |
JP6168794B2 (ja) | 情報処理方法および装置、プログラム。 | |
JP6112824B2 (ja) | 画像処理方法および装置、プログラム。 | |
JP6021541B2 (ja) | 画像処理装置及び方法 | |
Greisen et al. | An FPGA-based processing pipeline for high-definition stereo video | |
EP3048787B1 (en) | Image processing apparatus, image pickup apparatus, image processing method, program, and storage medium | |
JP5666069B1 (ja) | 座標算出装置及び方法、並びに画像処理装置及び方法 | |
JP6189061B2 (ja) | 固体撮像装置 | |
KR101140953B1 (ko) | 영상 왜곡 보정 장치 및 방법 | |
JP2019114842A (ja) | 画像処理装置、コンテンツ処理装置、コンテンツ処理システム、および画像処理方法 | |
CN102158731B (zh) | 影像处理系统及方法 | |
JP6732440B2 (ja) | 画像処理装置、画像処理方法、及びそのプログラム | |
JP5925109B2 (ja) | 画像処理装置、その制御方法、および制御プログラム | |
JP6755737B2 (ja) | 距離測定装置、撮像装置、および距離測定方法 | |
JP2016119542A (ja) | 画像処理方法、画像処理プログラム、画像処理装置および撮像装置 | |
JP6648916B2 (ja) | 撮像装置 | |
JP6105960B2 (ja) | 画像処理装置、画像処理方法、および撮像装置 | |
JP2013110711A (ja) | 画像処理装置および画像処理方法、並びにプログラム | |
JP2011182325A (ja) | 撮像装置 | |
US20230060314A1 (en) | Method and apparatus with image processing | |
US20240020865A1 (en) | Calibration method for distance measurement device, distance measurement device, and storage medium | |
JP2017083817A (ja) | ズレ量取得装置、撮像装置、およびズレ量取得方法 | |
JP6566765B2 (ja) | 撮像装置 | |
JP2014142781A (ja) | 画像処理装置、画像処理方法およびプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200980000225.4 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2009526429 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12594975 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09711372 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 09711372 Country of ref document: EP Kind code of ref document: A1 |