WO2020017209A1 - Distance measurement camera

Distance measurement camera

Info

Publication number
WO2020017209A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
subject
optical system
distance
subject image
Prior art date
Application number
PCT/JP2019/023661
Other languages
French (fr)
Japanese (ja)
Inventor
須藤 覚 (Satoru Sudo)
Original Assignee
Mitsumi Electric Co., Ltd. (ミツミ電機株式会社)
Priority date
Filing date
Publication date
Application filed by Mitsumi Electric Co., Ltd. (ミツミ電機株式会社)
Publication of WO2020017209A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G01C3/02 Details
    • G01C3/06 Use of electric means to obtain final indication
    • G01C3/08 Use of electric radiation detectors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals

Definitions

  • The present invention generally relates to a distance measuring camera for measuring the distance to a subject, and more specifically to a distance measuring camera that measures the distance to a subject based on the image magnification ratio between at least two subject images formed by at least two optical systems whose changes in subject-image magnification according to the distance to the subject differ from each other.
  • Conventionally, distance measuring cameras have been proposed that measure the distance to a subject by imaging the subject.
  • Such a distance measuring camera includes at least an optical system for collecting light from the subject and forming a subject image, and an image sensor for converting the subject image formed by the optical system into an image.
  • A stereo camera type distance measuring camera, as disclosed in Patent Document 1, calculates the translational parallax between two subject images formed by two optical systems arranged to be shifted from each other in a direction perpendicular to the optical axis direction (parallax in the direction perpendicular to the optical axis direction), and can calculate the distance to the subject based on the value of the translational parallax.
  • In such a stereo camera type distance measuring camera, owing to the relationship between the fields of view of the acquired images, a situation occurs in which a feature point of the subject image used for calculating the translational parallax appears in one image but is not captured in the other image. To avoid this situation, it is necessary to arrange the two optical systems in close proximity. However, when the two optical systems are arranged close to each other, the translational parallax between the subject images becomes small, and the accuracy of the distance measurement decreases. For this reason, it is difficult to accurately calculate the distance to a subject located at a short distance using distance measurement based on the translational parallax between subject images.
  • There has also been proposed an image magnification ratio type distance measuring camera that calculates the distance to a subject based on the image magnification ratio (ratio of magnifications) between two subject images.
  • In an image magnification ratio type distance measuring camera, two optical systems whose changes in subject-image magnification according to the distance to the subject differ from each other are used, and the distance to the subject is calculated based on the image magnification ratio between the two subject images formed by the two optical systems (see Patent Document 2).
  • In the image magnification ratio method, the translational parallax between the subject images is not used to calculate the distance to the subject; therefore, even if the two optical systems are arranged close to each other, the distance to the subject can be calculated accurately, and the size of the distance measuring camera can accordingly be reduced. Further, since the image magnification ratio between the subject images can be obtained accurately even when the subject is located at a short distance, the image magnification ratio type distance measuring camera can accurately calculate the distance to a subject located at a short distance.
  • In the image magnification ratio type distance measuring camera, the image magnification ratio between the subject images is calculated from the ratio of the sizes of the two subject images.
  • The size of each subject image is obtained by detecting a plurality of feature points of the subject image (for example, both ends in the height direction or the width direction of the distance measurement target) in the image and measuring the distance between the feature points in the image.
  • To calculate the image magnification ratio accurately, it is necessary to acquire the size of the same part of the subject in the two subject images. Therefore, after detecting a plurality of feature points of one subject image, corresponding feature point detection processing for detecting the plurality of feature points of the other subject image corresponding respectively to the detected feature points of the one subject image needs to be performed.
  • Such corresponding feature point detection processing is generally executed by searching the entire region of the image acquired by capturing the other subject image.
  • However, a search of the entire region of an image is an operation requiring much processing time, so the processing time required for the corresponding feature point detection processing becomes long, and as a result the processing time for calculating the distance to the subject based on the image magnification ratio between the subject images also becomes long.
  • Patent Document 1: JP 2012-26841 A; Patent Document 2: Japanese Patent Application No. 2017-241896
  • The present invention has been made in view of the above conventional problems, and an object of the present invention is to provide a distance measuring camera capable of reducing the processing time for calculating the distance to a subject based on the image magnification ratio between subject images, by executing the corresponding feature point detection processing, which detects the plurality of feature points of the other subject image corresponding respectively to the plurality of feature points of one subject image, as a search for feature points using epipolar lines based on epipolar geometry.
  • (1) This object is achieved by a distance measuring camera comprising: a first imaging system for imaging the subject to acquire a first image including a first subject image; a second imaging system for imaging the subject to acquire a second image including a second subject image; a size acquisition unit that acquires the size of the first subject image by detecting a plurality of feature points of the first subject image in the first image and measuring the distance between the plurality of feature points of the first subject image, and that acquires the size of the second subject image by detecting a plurality of feature points of the second subject image in the second image corresponding respectively to the plurality of feature points of the first subject image and measuring the distance between the plurality of feature points of the second subject image; and a distance calculation unit that calculates the distance to the subject based on the image magnification ratio between the magnification of the first subject image and the magnification of the second subject image, obtained as the ratio of the size of the first subject image acquired by the size acquisition unit to the size of the second subject image, wherein the size acquisition unit detects the plurality of feature points of the second subject image in the second image by searching the plurality of epipolar lines in the second image corresponding respectively to the plurality of feature points of the first subject image.
  • (2) The distance measuring camera according to the above (1), wherein the size acquisition unit derives the plurality of epipolar lines in the second image corresponding respectively to the plurality of feature points of the first subject image based on a model in which the characteristics and arrangement of the first imaging system and the second imaging system are taken into consideration.
  • (3) The distance measuring camera according to the above (2), wherein the plurality of epipolar lines in the second image corresponding respectively to the plurality of feature points of the first subject image are represented by the following equation (1), where x_1 and y_1 are the x and y coordinates, in the first image, of an arbitrary one of the plurality of feature points of the first subject image; x_2 and y_2 are the x and y coordinates, in the second image, of the feature point of the second subject image corresponding to that arbitrary feature point; P_x and P_y are the values in the x-axis direction and the y-axis direction, respectively, of the translational parallax between the front principal point of the first optical system and the front principal point of the second optical system; D is the depth parallax between the first optical system and the second optical system in the optical axis direction of the first optical system or the second optical system; PS_1 is the pixel size of the first image sensor; PS_2 is the pixel size of the second image sensor; f_1 is the focal length of the first optical system; f_2 is the focal length of the second optical system; EP_1 is the distance from the exit pupil of the first optical system to the imaging position of the first subject image when the subject is at infinity; EP_2 is the distance from the exit pupil of the second optical system to the imaging position of the second subject image when the subject is at infinity; a_FD1 is the distance from the front principal point of the first optical system to the subject when the first subject image is in best focus on the imaging surface of the first image sensor; and a_FD2 is the distance from the front principal point of the second optical system to the subject when the second subject image is in best focus on the imaging surface of the second image sensor.
  • (4) The distance measuring camera according to the above (1), wherein the first optical system and the second optical system are configured such that the change in the magnification of the first subject image according to the distance to the subject differs from the change in the magnification of the second subject image according to the distance to the subject.
  • (5) The distance measuring camera according to the above (4), wherein the first optical system and the second optical system are configured such that the focal length of the first optical system and the focal length of the second optical system differ from each other, whereby the change in the magnification of the first subject image according to the distance to the subject differs from the change in the magnification of the second subject image according to the distance to the subject.
  • (6) The distance measuring camera according to the above (4) or (5), wherein the first optical system and the second optical system are configured such that the distance from the exit pupil of the first optical system to the imaging position of the first subject image formed by the first optical system when the subject is at infinity differs from the distance from the exit pupil of the second optical system to the imaging position of the second subject image formed by the second optical system when the subject is at infinity, whereby the change in the magnification of the first subject image according to the distance to the subject differs from the change in the magnification of the second subject image according to the distance to the subject.
  • (7) The distance measuring camera as described above, wherein a depth parallax in the optical axis direction of the first optical system or the second optical system exists between the front principal point of the first optical system and the front principal point of the second optical system, whereby the change in the magnification of the first subject image according to the distance to the subject differs from the change in the magnification of the second subject image according to the distance to the subject.
  • In the distance measuring camera of the present invention, the corresponding feature point detection processing for detecting the plurality of feature points of the other subject image corresponding respectively to the plurality of feature points of one subject image is executed as a search for feature points using epipolar lines based on epipolar geometry. Therefore, the processing time for calculating the distance to the subject based on the image magnification ratio between the subject images can be reduced.
  • FIG. 1 is a diagram for explaining the principle of distance measurement of the distance measurement camera of the present invention.
  • FIG. 2 is a diagram for explaining the principle of distance measurement of the distance measurement camera of the present invention.
  • FIG. 3 is a graph for explaining that the image magnification ratio between the magnification of the first subject image formed by the first optical system shown in FIG. 2 and the magnification of the second subject image formed by the second optical system shown in FIG. 2 changes according to the distance to the subject.
  • FIG. 4 is an XZ plan view showing a model for deriving an epipolar line used in the distance measuring camera of the present invention.
  • FIG. 5 is a YZ plan view showing a model for deriving an epipolar line used in the distance measuring camera of the present invention.
  • FIG. 6 is a diagram illustrating an example of an epipolar line derived using the models illustrated in FIGS. 4 and 5.
  • FIG. 7 is a block diagram schematically showing the distance measuring camera according to the first embodiment of the present invention.
  • FIG. 8 is a block diagram schematically showing a distance measuring camera according to the second embodiment of the present invention.
  • FIG. 9 is a block diagram schematically showing a distance measuring camera according to the third embodiment of the present invention.
  • FIG. 10 is a flowchart for explaining a distance measuring method executed by the distance measuring camera of the present invention.
  • FIG. 11 is a flowchart showing details of the corresponding feature point detection process executed in the distance measuring method shown in FIG.
  • The magnification m_OD of a subject image formed by an optical system can be expressed, from the lens formula, by the following equation (1) using the distance (subject distance) a from the front principal point (front principal plane) of the optical system to the subject, the distance b_OD from the rear principal point (rear principal plane) of the optical system to the imaging position of the subject image, and the focal length f of the optical system:
    m_OD = b_OD / a = f / (a − f)   (1)
  • Further, the size Y_OD of the subject image can be expressed by the following equation (2) from the magnification m_OD of the subject image and the actual size sz of the subject:
    Y_OD = m_OD · sz = sz · f / (a − f)   (2)
  • If the imaging surface of the image sensor is located at the imaging position of the subject image, the size of the subject image on the imaging surface is equal to Y_OD and can be obtained using equation (2).
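  • As a concrete illustration of equations (1) and (2), the following minimal sketch evaluates the imaging distance, the magnification, and the subject-image size from the lens formula. The function names and all numeric values are hypothetical examples, not parameters taken from the patent.

```python
# Minimal sketch of equations (1) and (2): magnification and subject-image
# size from the thin-lens formula 1/f = 1/a + 1/b_OD. All values hypothetical.

def imaging_distance(a: float, f: float) -> float:
    """b_OD: distance from the rear principal point to the imaging position."""
    return a * f / (a - f)

def magnification(a: float, f: float) -> float:
    """m_OD = b_OD / a = f / (a - f), equation (1)."""
    return f / (a - f)

def image_size(sz: float, a: float, f: float) -> float:
    """Y_OD = m_OD * sz, equation (2)."""
    return magnification(a, f) * sz

# Example: a 50 mm lens imaging a 200 mm tall subject at 1 m (units: mm).
a, f, sz = 1000.0, 50.0, 200.0
print(imaging_distance(a, f))   # b_OD ~ 52.63 mm
print(image_size(sz, a, f))     # Y_OD ~ 10.53 mm
```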
  • However, when the optical system is a fixed focus system having no autofocus function and the imaging surface of the image sensor is not at the position where the subject image is formed, that is, when defocus exists, the size of the subject image on the imaging surface of the image sensor depends on the defocus amount, that is, on the difference (shift) in the depth direction (optical axis direction) between the imaging position of the subject image and the position of the imaging surface of the image sensor.
  • Here, the distance from the exit pupil of the optical system to the imaging position of the subject image when the subject is at infinity is defined as EP; the distance from the exit pupil of the optical system to the imaging position of the subject image when the subject exists at an arbitrary distance a is defined as EP_OD; the distance from the exit pupil of the optical system to the imaging surface of the image sensor is defined as EP_FD; the distance from the rear principal point of the optical system to the imaging position of the subject image when the subject exists at the arbitrary distance a is defined as b_OD; and the distance from the rear principal point of the optical system to the imaging surface of the image sensor is defined as b_FD.
  • The optical system is schematically illustrated such that the rear principal point of the optical system is located at the center position of the optical system.
  • From the lens formula, the distance b_OD from the rear principal point of the optical system to the imaging position of the subject image when the subject exists at the arbitrary distance a can be expressed by the following equation (3):
    b_OD = a · f / (a − f)   (3)
  • Similarly, the distance b_FD from the rear principal point of the optical system to the imaging surface of the image sensor is determined by the distance (focus distance) a_FD from the front principal point of the optical system to the subject when the subject image is in best focus on the imaging surface of the image sensor:
    b_FD = a_FD · f / (a_FD − f)
  • Considering the similar triangles that have the intersection of the optical axis and the exit pupil of the optical system as one vertex and that relate the size Y_OD of the subject image at its imaging position when the subject exists at the arbitrary distance a to the size Y_FD of the subject image on the imaging surface, the relationship EP_OD : EP_FD = Y_OD : Y_FD is established, and the size Y_FD of the subject image on the imaging surface of the image sensor can be obtained from the following equation (7):
    Y_FD = Y_OD · EP_FD / EP_OD   (7)
  • As is clear from the above, the size Y_FD of the subject image on the imaging surface of the image sensor can be expressed as a function of the actual size sz of the subject, the focal length f of the optical system, the distance EP from the exit pupil of the optical system to the imaging position of the subject image when the subject is at infinity, the distance (subject distance) a to the subject, and the distance (focus distance) a_FD from the front principal point of the optical system to the subject when the subject image is in best focus on the imaging surface of the image sensor; a sketch of this computation follows.
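  • The following sketch combines the relations above: it computes b_OD and b_FD from the lens formula (equation (3)) and applies the similar-triangle relation of equation (7). Modelling the exit-pupil distances as EP_OD = EP + (b_OD − f) and EP_FD = EP + (b_FD − f) is an assumption consistent with the Δb definitions given later; all numeric values are hypothetical.

```python
# Sketch of the fixed-focus case: size Y_FD of the subject image on the
# imaging surface as a function of sz, f, EP, a and the focus distance a_FD.
# Assumption: EP_OD = EP + (b_OD - f) and EP_FD = EP + (b_FD - f), i.e. the
# image-side shifts are measured from the infinity imaging position.

def b(a: float, f: float) -> float:
    return a * f / (a - f)        # lens formula, equation (3)

def image_size_on_sensor(sz, f, EP, a, a_FD):
    Y_OD = sz * f / (a - f)       # size at the imaging position, equation (2)
    EP_OD = EP + (b(a, f) - f)    # exit pupil -> imaging position
    EP_FD = EP + (b(a_FD, f) - f) # exit pupil -> imaging surface
    return Y_OD * EP_FD / EP_OD   # similar triangles, equation (7)

# Hypothetical example: best focus set at 1.2 m, subject actually at 1.0 m.
print(image_size_on_sensor(sz=200.0, f=50.0, EP=60.0, a=1000.0, a_FD=1200.0))
```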
  • As shown in FIG. 2, the first imaging system IS1 includes a first optical system OS1 that collects light from the subject 100 and forms a first subject image, and a first image sensor S1 for capturing the first subject image formed by the first optical system OS1.
  • Similarly, the second imaging system IS2 includes a second optical system OS2 that collects light from the subject 100 and forms a second subject image, and a second image sensor S2 for capturing the second subject image formed by the second optical system OS2.
  • The pixel size (size per pixel) of the first image sensor S1 is PS_1, and the pixel size of the second image sensor S2 is PS_2.
  • The optical axis of the first optical system OS1 of the first imaging system IS1 is parallel to, but does not coincide with, the optical axis of the second optical system OS2 of the second imaging system IS2; the second optical system OS2 is arranged at a distance P from the first optical system OS1 in a direction perpendicular to the optical axis direction of the first optical system OS1.
  • Although the optical axis of the first optical system OS1 and the optical axis of the second optical system OS2 are parallel here, the present invention is not limited to this.
  • For example, the first optical system OS1 and the second optical system OS2 may be arranged such that the angle of the optical axis of the first optical system OS1 (the angle parameters θ and φ in three-dimensional polar coordinates) and the angle of the optical axis of the second optical system OS2 differ from each other.
  • In the present embodiment, however, the first optical system OS1 and the second optical system OS2 are configured and arranged, as shown in FIG. 2, such that the optical axis of the first optical system OS1 and the optical axis of the second optical system OS2 are parallel but not coincident, being spaced apart from each other by the distance P.
  • The first optical system OS1 and the second optical system OS2 are fixed focus optical systems having focal lengths f_1 and f_2, respectively.
  • The position (lens position) of the first optical system OS1, that is, the separation between the first optical system OS1 and the first image sensor S1, is adjusted such that the first subject image of the subject 100 at an arbitrary distance (focus distance) a_FD1 is formed on the imaging surface of the first image sensor S1, that is, such that the subject 100 at the arbitrary distance a_FD1 is in best focus.
  • Similarly, the position (lens position) of the second optical system OS2, that is, the separation between the second optical system OS2 and the second image sensor S2, is adjusted such that the second subject image of the subject 100 at an arbitrary distance (focus distance) a_FD2 is formed on the imaging surface of the second image sensor S2, that is, such that the subject 100 at the arbitrary distance a_FD2 is in best focus.
  • The distance from the exit pupil of the first optical system OS1 to the imaging position of the first subject image when the subject 100 is at infinity is EP_1, and the distance from the exit pupil of the second optical system OS2 to the imaging position of the second subject image when the subject 100 is at infinity is EP_2.
  • Further, the first optical system OS1 and the second optical system OS2 are configured and arranged such that there is a difference (depth parallax) D in the depth direction (optical axis direction) between the front principal point (front principal plane) of the first optical system OS1 and the front principal point (front principal plane) of the second optical system OS2. That is, when the distance (subject distance) from the front principal point of the first optical system OS1 to the subject 100 is a, the distance from the front principal point of the second optical system OS2 to the subject 100 is a + D.
  • In this configuration, the magnification m_1 of the first subject image formed by the first optical system OS1 on the imaging surface of the first image sensor S1 can be expressed by the following equation (8).
  • Here, EP_OD1 is the distance from the exit pupil of the first optical system OS1 to the imaging position of the first subject image when the subject 100 exists at the distance a, and EP_FD1 is the distance from the exit pupil of the first optical system OS1 to the imaging surface of the first image sensor S1.
  • The positional relationship between the distance EP_OD1 and the distance EP_FD1 is determined by the position (lens position) of the first optical system OS1, which is set when the first imaging system IS1 is assembled so that the subject 100 at the arbitrary distance a_FD1 is in best focus.
  • Further, Δb_OD1 is the difference between the focal length f_1 and the distance b_OD1 from the rear principal point of the first optical system OS1 to the imaging position of the first subject image when the subject 100 exists at the distance a; Δb_FD1 is the difference between the focal length f_1 and the distance b_FD1 from the rear principal point of the first optical system OS1 to the imaging surface of the first image sensor S1; m_OD1 is the magnification of the first subject image at the imaging position of the first subject image when the subject 100 exists at the distance a; and a_FD1 is the distance from the front principal point of the first optical system OS1 to the subject 100 when the first subject image is in best focus on the imaging surface of the first image sensor S1.
  • Similarly, the magnification m_2 of the second subject image formed by the second optical system OS2 on the imaging surface of the second image sensor S2 can be expressed by the following equation (10).
  • Here, EP_OD2 is the distance from the exit pupil of the second optical system OS2 to the imaging position of the second subject image when the subject 100 exists at the distance a + D, and EP_FD2 is the distance from the exit pupil of the second optical system OS2 to the imaging surface of the second image sensor S2.
  • The positional relationship between the distance EP_OD2 and the distance EP_FD2 is determined by the position (lens position) of the second optical system OS2, which is set when the second imaging system IS2 is assembled so that the subject 100 at the arbitrary distance a_FD2 is in best focus.
  • Further, Δb_OD2 is the difference between the focal length f_2 and the distance b_OD2 from the rear principal point of the second optical system OS2 to the imaging position of the second subject image when the subject 100 exists at the distance a + D; Δb_FD2 is the difference between the focal length f_2 and the distance b_FD2 from the rear principal point of the second optical system OS2 to the imaging surface of the second image sensor S2; m_OD2 is the magnification of the second subject image at the imaging position of the second subject image when the subject 100 exists at the distance a + D; and a_FD2 is the distance from the front principal point of the second optical system OS2 to the subject 100 when the second subject image is in best focus on the imaging surface of the second image sensor S2.
  • Accordingly, the image magnification ratio MR between the magnification m_1 of the first subject image and the magnification m_2 of the second subject image can be expressed by the following equation (11).
  • In equation (11), K is a coefficient determined by the fixed values f_1, f_2, EP_1, EP_2, a_FD1, and a_FD2, which are in turn determined by the configurations of the first imaging system IS1 and the second imaging system IS2, and K is expressed by the following equation (12).
  • As is clear from equation (11), the image magnification ratio MR changes according to the distance a from the front principal point of the first optical system OS1 to the subject 100.
  • Since f_1, f_2, EP_1, EP_2, D, and K are fixed values determined by the configurations and arrangements of the first imaging system IS1 and the second imaging system IS2, equation (11) can be solved for a to give equation (13); thus, if the image magnification ratio MR can be obtained, the distance a from the front principal point of the first optical system OS1 to the subject 100 can be calculated.
  • FIG. 3 shows an example, calculated based on equation (13), of the relationship between the distance a to the subject 100 and the image magnification ratio MR between the magnification m_1 of the first subject image formed by the first optical system OS1 on the imaging surface of the first image sensor S1 and the magnification m_2 of the second subject image formed by the second optical system OS2 on the imaging surface of the second image sensor S2. As is apparent from FIG. 3, a one-to-one relationship holds between the value of the image magnification ratio MR and the distance a to the subject 100; a sketch of the corresponding numeric inversion follows.
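  • Because this relationship is one-to-one, the distance a can be recovered even without the closed form of equation (13) by numerically inverting MR(a). The sketch below builds MR(a) from the magnification model of equations (8) and (10) and inverts it by bisection; the ratio direction MR = m_2 / m_1 and all parameter values are assumptions for illustration.

```python
# Sketch: recover the subject distance a from a measured image magnification
# ratio MR by numerically inverting MR(a). All parameter values hypothetical.

def m(a, f, EP, a_FD):
    """Magnification of a subject image on the imaging surface."""
    b = lambda x: x * f / (x - f)                  # lens formula
    return (f / (a - f)) * (EP + b(a_FD) - f) / (EP + b(a) - f)

def MR(a, D=0.0):
    m1 = m(a,     f=50.0, EP=60.0, a_FD=1200.0)    # first imaging system
    m2 = m(a + D, f=70.0, EP=80.0, a_FD=1200.0)    # second imaging system
    return m2 / m1                                 # assumed ratio direction

def distance_from_MR(mr, lo=200.0, hi=10000.0, iters=60):
    """Bisection; MR(a) is monotonic over the working range (cf. FIG. 3)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (MR(mid) - mr) * (MR(lo) - mr) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a_true = 1000.0
print(distance_from_MR(MR(a_true)))   # ~ 1000.0
```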
  • On the other hand, the image magnification ratio MR can be calculated by the following equation (14) from the sizes of the subject images actually formed on the image sensors. Here, sz is the actual size (height or width) of the subject 100, Y_FD1 is the size (image height or image width) of the first subject image formed by the first optical system OS1 on the imaging surface of the first image sensor S1, and Y_FD2 is the size (image height or image width) of the second subject image formed by the second optical system OS2 on the imaging surface of the second image sensor S2.
  • The size Y_FD1 of the first subject image can be actually measured from the first image obtained by the first image sensor S1 capturing the first subject image; likewise, the size Y_FD2 of the second subject image can be actually measured from the second image obtained by the second image sensor S2 capturing the second subject image.
  • Specifically, the size Y_FD1 of the first subject image is obtained by detecting a plurality of feature points (for example, both ends in the height direction or the width direction) of the first subject image in the first image and measuring the distance between the detected feature points.
  • Likewise, the size Y_FD2 of the second subject image is obtained by detecting the plurality of feature points of the second subject image in the second image corresponding respectively to the detected feature points of the first subject image and measuring the distance between the detected feature points; a minimal sketch of this size measurement follows.
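  • As an illustration of this size measurement, the sketch below takes two detected feature points (for example, both ends of the subject image in the height direction) in pixel coordinates and converts their separation into a physical size on the imaging surface using the pixel size; the feature-point coordinates and pixel size are hypothetical.

```python
import math

# Sketch: subject-image size as the distance between two detected feature
# points, converted from pixels to mm via the sensor pixel size PS.

def subject_image_size(p1, p2, PS):
    """p1, p2: (x, y) feature points in pixels; PS: pixel size in mm/pixel."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) * PS

# Hypothetical feature points: both ends of the subject in the height direction.
Y_FD1 = subject_image_size((512, 100), (512, 900), PS=0.0014)  # first image
print(Y_FD1)   # 800 px * 0.0014 mm/px = 1.12 mm
```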
  • In the present specification, the processing for detecting the plurality of feature points of the second subject image in the second image corresponding respectively to the detected feature points of the first subject image is referred to as corresponding feature point detection processing.
  • In the distance measuring camera of the present invention, the processing time required for the corresponding feature point detection processing is greatly reduced by using epipolar lines based on epipolar geometry in the corresponding feature point detection processing.
  • FIGS. 4 and 5 show models for deriving epipolar lines used in the distance measuring camera of the present invention.
  • FIG. 4 is an XZ plan view showing an arrangement of the first imaging system IS1 and the second imaging system IS2 in a model for deriving an epipolar line.
  • FIG. 5 is a YZ plan view showing an arrangement of the first imaging system IS1 and the second imaging system IS2 in a model for deriving an epipolar line.
  • As described above, the first imaging system IS1 and the second imaging system IS2 are arranged such that the optical axis of the first optical system OS1 of the first imaging system IS1 and the optical axis of the second optical system OS2 of the second imaging system IS2 do not coincide. Therefore, a translational parallax occurs between the first subject image formed by the first optical system OS1 and the second subject image formed by the second optical system OS2.
  • The distance measuring camera of the present invention calculates the distance a to the subject 100 using the image magnification ratio MR, which is the ratio between the magnification m_1 of the first subject image and the magnification m_2 of the second subject image, and does not use the translational parallax between the first subject image and the second subject image for calculating the distance a to the subject 100.
  • Nevertheless, the principle of epipolar lines based on epipolar geometry, as used in stereo camera type distance measuring cameras, can also be applied to the first subject image and the second subject image obtained by the distance measuring camera of the present invention.
  • In a typical derivation of epipolar lines, a pinhole model is used that considers only the arrangement of the first imaging system IS1 and the second imaging system IS2 (the parallax parameters P_x, P_y, and D) and does not consider the characteristics of the first imaging system IS1 and the second imaging system IS2 (the parameters f_1, f_2, EP_1, EP_2, a_FD1, a_FD2, PS_1, and PS_2 described above).
  • However, the actual imaging systems IS1 and IS2 have many factors, related to the optical systems OS1 and OS2 and the image sensors S1 and S2, that affect imaging. For this reason, a divergence arises between reality and a pinhole model that ignores such factors, and the epipolar lines cannot be derived accurately.
  • In the distance measuring camera of the present invention, the epipolar lines are therefore derived using a model in which the characteristics and arrangement of the first imaging system IS1 and the second imaging system IS2, shown in FIGS. 4 and 5, are taken into consideration, which makes it possible to derive the epipolar lines more accurately.
  • The characteristics and arrangement of the first imaging system IS1 and the second imaging system IS2 in the models shown in FIGS. 4 and 5 are as described above with reference to FIG. 2 and as shown in the following table.
  • In the models shown in FIGS. 4 and 5, the coordinates of the front principal point of the first optical system OS1 of the first imaging system IS1 are the origin (0, 0, 0), and the coordinates of the front principal point of the second optical system OS2 of the second imaging system IS2 are (P_x, P_y, −D); P_x and P_y therefore determine the separation between the optical axis of the first optical system OS1 and the optical axis of the second optical system OS2 in the direction perpendicular to the optical axis direction of the first optical system OS1 or the second optical system OS2.
  • The distance P_x in the x-axis direction between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 is referred to as the translational parallax in the x-axis direction; the distance P_y in the y-axis direction between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 is referred to as the translational parallax in the y-axis direction; and the distance D in the z-axis direction between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 is referred to as the depth parallax.
  • Consider the case where a feature point S of the subject 100 located at the coordinates (X, Y, a) is imaged using the first imaging system IS1 and the second imaging system IS2.
  • In the following description, coordinates having an arbitrary reference point as the origin are referred to as world coordinates; coordinates having the front principal point of the first optical system OS1 of the first imaging system IS1 as the origin are referred to as the camera coordinates of the first imaging system IS1; and coordinates having the front principal point of the second optical system OS2 of the second imaging system IS2 as the origin are referred to as the camera coordinates of the second imaging system IS2.
  • Coordinates in the first image (for example, (x_1, y_1)) are referred to as the image coordinates of the first image, and coordinates in the second image (for example, (x_2, y_2)) are referred to as the image coordinates of the second image.
  • In the models shown in FIGS. 4 and 5, the origin of the world coordinates is the front principal point of the first optical system OS1 of the first imaging system IS1; accordingly, the origin of the world coordinates coincides with the origin of the camera coordinates of the first imaging system IS1.
  • In general, world coordinates are converted into camera coordinates by the external matrix of an imaging system, and camera coordinates are converted into image coordinates by the internal matrix of the imaging system. Therefore, the world coordinates (X, Y, a) of the feature point S are converted into the image coordinates (x_1, y_1) of the first image by the external matrix and the internal matrix of the first imaging system IS1; similarly, the world coordinates (X, Y, a) of the feature point S are converted into the image coordinates (x_2, y_2) of the second image by the external matrix and the internal matrix of the second imaging system IS2.
  • First, consider the image coordinates (x_1, y_1) of the first image acquired by the first imaging system IS1. The world coordinates (X, Y, a) of the feature point S are converted into the camera coordinates (x'_1, y'_1, a') of the first imaging system IS1 by the external matrix of the first imaging system IS1.
  • As described above, the world coordinates in the models shown in FIGS. 4 and 5 have their origin (reference point) at the front principal point of the first optical system OS1 of the first imaging system IS1, so there is no rotation or positional shift between the world coordinates in the models shown in FIGS. 4 and 5 and the camera coordinates of the first imaging system IS1.
  • The 4-row, 4-column matrix in the following equation (15) is the external matrix of the first imaging system IS1; since there is no rotation or positional shift between the world coordinates in the models shown in FIGS. 4 and 5 and the camera coordinates of the first imaging system IS1, the external matrix of the first imaging system IS1 is the identity matrix.
  • Next, the camera coordinates (x'_1, y'_1, a') of the feature point S in the first imaging system IS1 are converted into the image coordinates (x_1, y_1) by the internal matrix of the first imaging system IS1.
  • The internal matrix of the first imaging system IS1 can be derived in the same manner as the relationship between the size sz of the subject 100 and the size Y_FD1 of the first subject image, described above with reference to FIG. 2 and expressed by equation (7). Thereby, the following equation (16) can be obtained.
  • Note that while the size sz of the subject 100 and the size Y_FD1 of the first subject image are expressed in units of mm, the image coordinate x_1 of the first image in equation (16) is in pixel units.
  • K_1 and L_1 in equations (16) and (17) are determined by the fixed values f_1, EP_1, a_FD1, and PS_1, which are in turn determined by the configuration of the first imaging system IS1; therefore, K_1 and L_1 are fixed values uniquely determined by the configuration of the first imaging system IS1.
  • By combining the above, equation (18), which expresses the image coordinates (x_1, y_1) of the feature point S in the first image, can be obtained; the 3-row, 4-column matrix in equation (18) is the internal matrix of the first imaging system IS1.
  • Using equation (18), the coordinates (x_1, y_1) of the feature point S of the subject 100 in the first image acquired by the first imaging system IS1 can be specified.
  • The feature point S of the subject 100 observed at the image coordinates (x_1, y_1) of the first image is referred to as a feature point of the first subject image.
  • The 4-row, 4-column external matrix of the first imaging system IS1 in equation (18) reflects the arrangement of the first imaging system IS1 (its arrangement with respect to the reference point of the world coordinates), and the 3-row, 4-column internal matrix of the first imaging system IS1 reflects the characteristics of the first imaging system IS1 (the fixed values f_1, EP_1, a_FD1, and PS_1); a sketch of this projection step follows.
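  • Concretely, the internal-matrix step can be sketched as scaling the lateral camera coordinates by the distance-dependent magnification of equation (8) and converting millimeters to pixels with the pixel size PS_1. This reading of equations (16) through (18) is an assumption, since the closed forms of K_1 and L_1 are not reproduced here, and all numeric values are hypothetical.

```python
# Sketch of the internal-matrix step (equations (16)-(18)) under the
# assumption that it applies the distance-dependent magnification m_1(a')
# of equation (8) and the pixel size PS_1. Values are hypothetical.

def project_to_image(xc, yc, ac, f, EP, a_FD, PS):
    """Camera coordinates (xc, yc, ac) in mm -> image coordinates in pixels."""
    b = lambda x: x * f / (x - f)                  # lens formula
    m = (f / (ac - f)) * (EP + b(a_FD) - f) / (EP + b(ac) - f)
    return xc * m / PS, yc * m / PS

# Feature point S at camera coordinates (100, 50, 1000) mm.
print(project_to_image(100.0, 50.0, 1000.0, f=50.0, EP=60.0, a_FD=1200.0, PS=0.0014))
```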
  • Next, the world coordinates (X, Y, a) of the feature point S are converted into the camera coordinates (x'_2, y'_2, a') of the second imaging system IS2 by the external matrix of the second imaging system IS2.
  • The rotation matrix R_x for rotation about the x-axis, the rotation matrix R_y for rotation about the y-axis, and the rotation matrix R_z for rotation about the z-axis are expressed by the following equation (19).
  • Using these, the rotation matrix R of the second imaging system IS2 is expressed by the following equation (20).
  • In equation (20), the rotation matrix R is expressed using R_x, R_y, and R_z, but the order of multiplying the rotation matrices R_x, R_y, and R_z to obtain the rotation matrix R is not limited to this; for example, the rotation matrix R may also be expressed as R_z · R_y · R_x, R_y · R_x · R_z, and the like.
  • The second imaging system IS2 has, relative to the first imaging system IS1, the translational parallaxes P_x and P_y in the translation directions and the depth parallax D in the depth direction.
  • These parallaxes can be expressed by the translation column t in the following equation (21).
  • The external matrix of the second imaging system IS2 is expressed by the combination of the rotation matrix R of equation (20) and the translation column t of equation (21), and the camera coordinates (x'_2, y'_2, a') of the second imaging system IS2 can be expressed by the following equation (22); the 4-row, 4-column matrix in equation (22) is the external matrix of the second imaging system IS2, sketched in code below.
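  • The sketch below composes such an external matrix from the rotation matrices of equation (19) and the translation column of equation (21) in homogeneous coordinates. The multiplication order R_x · R_y · R_z is one of the admissible orders noted above, and the sign convention (the front principal point of OS2 at (P_x, P_y, −D) in world coordinates, so that a subject at world depth a lies at depth a + D as seen from IS2) is an assumption consistent with FIGS. 4 and 5; the angle and parallax values are hypothetical.

```python
import numpy as np

# Sketch of equations (19)-(22): world coordinates of the feature point S ->
# camera coordinates of the second imaging system IS2 via rotation R and the
# translation column built from (Px, Py, D).

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def external_matrix(ax, ay, az, Px, Py, D):
    E = np.eye(4)
    E[:3, :3] = Rx(ax) @ Ry(ay) @ Rz(az)  # one admissible multiplication order
    E[:3, 3] = [-Px, -Py, D]              # assumed sign convention (see text)
    return E

# Parallel optical axes (no rotation), hypothetical parallax values in mm.
E2 = external_matrix(0.0, 0.0, 0.0, Px=20.0, Py=0.0, D=5.0)
S = np.array([100.0, 50.0, 1000.0, 1.0])  # world coordinates (X, Y, a, 1)
print(E2 @ S)                             # -> [80., 50., 1005., 1.]
```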
  • Next, the camera coordinates (x'_2, y'_2, a') of the feature point S in the second imaging system IS2 are converted into the image coordinates (x_2, y_2) by the internal matrix of the second imaging system IS2. As with the first imaging system, the image coordinates (x_2, y_2) of the feature point S in the second image are expressed by the following equations (24) and (25).
  • K_2 and L_2 in equations (24) and (25) are determined by the fixed values f_2, EP_2, a_FD2, and PS_2, which are in turn determined by the configuration of the second imaging system IS2; therefore, K_2 and L_2 are fixed values uniquely determined by the configuration of the second imaging system IS2.
  • By combining the above, the image coordinates (x_2, y_2) of the feature point S in the second image can be expressed by the following equation (26); the 3-row, 4-column matrix in equation (26) is the internal matrix of the second imaging system IS2.
  • Using equation (26), the coordinates (x_2, y_2) of the feature point S of the subject 100 in the second image acquired by the second imaging system IS2 can be specified.
  • The 4-row, 4-column external matrix of the second imaging system IS2 in equation (26) reflects the arrangement of the second imaging system IS2 (its arrangement with respect to the reference point of the world coordinates), and the 3-row, 4-column internal matrix of the second imaging system IS2 in equation (26) reflects the characteristics of the second imaging system IS2 (the fixed values f_2, EP_2, a_FD2, and PS_2).
  • By combining equations (18) and (26) so as to eliminate the unknown world coordinates of the feature point S, the epipolar line in the second image corresponding to a feature point at the image coordinates (x_1, y_1) of the first image is obtained in the general form of equation (29). α, β, and γ in the general equation (29) are determined by the fixed values f_1, f_2, EP_1, EP_2, PS_1, PS_2, a_FD1, a_FD2, P_x, P_y, and D, which are determined by the configurations and arrangements of the first imaging system IS1 and the second imaging system IS2; therefore, α, β, and γ in equation (29) are fixed values uniquely determined by the configurations and arrangements of the first imaging system IS1 and the second imaging system IS2.
  • FIG. 6 shows an example of an epipolar line calculated as described above. When the subject 100 is imaged by the first imaging system IS1 and the second imaging system IS2, the first image and the second image shown in FIG. 6 are obtained.
  • In the illustrated example, the upper vertex of the triangle included in the first image and the second image is taken as an arbitrary feature point S of the subject 100; in each image, the coordinates having the center point of the image as the origin (0, 0) are the image coordinates of that image.
  • The feature point of the second subject image corresponding to a feature point of the first subject image always exists on the epipolar line in the second image expressed by equation (29).
  • Therefore, a feature point of the second subject image corresponding to an arbitrary feature point of the first subject image can be detected by searching on the epipolar line, without searching the entire region of the second image.
  • By performing the search for feature points using the epipolar lines based on epipolar geometry in this way, the processing time required for the corresponding feature point detection processing can be significantly reduced, and the distance measuring camera of the present invention thereby achieves a significant reduction in the processing time for calculating the distance a to the subject 100 based on the image magnification ratio MR between the subject images; a minimal search sketch follows.
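  • In the sketch below, a small template around a feature point of the first subject image is compared, by sum of squared differences, only against patches centered on points of the corresponding epipolar line in the second image. Representing the line as y_2 = p · x_2 + q is an illustrative assumption standing in for the coefficients α, β, and γ of equation (29).

```python
import numpy as np

# Sketch: corresponding feature point detection restricted to the epipolar
# line in the second image. Matching is plain sum-of-squared-differences.

def match_along_epipolar_line(img1, img2, feat1, p, q, half=8):
    """feat1: (x1, y1) integer pixel coordinates in the first image."""
    x1, y1 = feat1
    tpl = img1[y1 - half:y1 + half + 1, x1 - half:x1 + half + 1].astype(float)
    best, best_pt = np.inf, None
    for x2 in range(half, img2.shape[1] - half):      # 1-D search, not 2-D
        y2 = int(round(p * x2 + q))                   # point on the epipolar line
        if y2 < half or y2 >= img2.shape[0] - half:
            continue
        patch = img2[y2 - half:y2 + half + 1, x2 - half:x2 + half + 1].astype(float)
        cost = np.sum((patch - tpl) ** 2)
        if cost < best:
            best, best_pt = cost, (x2, y2)
    return best_pt
```

  • Because only one line's worth of candidate positions is evaluated instead of every pixel of the second image, the number of patch comparisons drops from the order of W · H to the order of W, which is the source of the reduction in processing time described above.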
  • As described above, the models shown in FIGS. 4 and 5 are characterized in that both the characteristics and the arrangement of the first imaging system IS1 and the second imaging system IS2 are taken into consideration.
  • Specifically, the characteristics of the first imaging system IS1 (the fixed values f_1, EP_1, a_FD1, and PS_1) are reflected in the 3-row, 4-column internal matrix of the first imaging system IS1 in equation (18), and the characteristics of the second imaging system IS2 (the fixed values f_2, EP_2, a_FD2, and PS_2) are reflected in the 3-row, 4-column internal matrix of the second imaging system IS2 in equation (26). Therefore, the plurality of feature points of the second subject image in the second image can be detected more accurately than when the conventional pinhole model is used.
  • In the corresponding feature point detection processing, the distance measuring camera of the present invention uses the epipolar lines based on epipolar geometry described above to detect the plurality of feature points of the second subject image in the second image corresponding respectively to the plurality of feature points of the first subject image detected for measuring the size Y_FD1 of the first subject image.
  • The distances between the detected feature points of the second subject image are then measured to obtain the size Y_FD2 of the second subject image.
  • The obtained size Y_FD1 of the first subject image and size Y_FD2 of the second subject image are used to calculate the image magnification ratio MR between the magnification m_1 of the first subject image and the magnification m_2 of the second subject image, and the distance a to the subject 100 is calculated based on the image magnification ratio MR; the overall flow is sketched below.
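  • Putting the pieces together, the overall measurement flow described above can be sketched as follows. The helper functions are the ones illustrated in the earlier sketches, detect_feature_points is a hypothetical stand-in for any feature detector, and the ratio MR = Y_FD2 / Y_FD1 is the same illustrative assumption as before; the whole is a schematic of the flow, not the patent's implementation.

```python
def measure_distance(img1, img2, epipolar_line_for, PS1, PS2):
    # 1. Feature points of the first subject image (e.g. both ends in height).
    pts1 = detect_feature_points(img1)            # hypothetical detector
    # 2. Corresponding feature points, searched only along epipolar lines.
    pts2 = [match_along_epipolar_line(img1, img2, pt, *epipolar_line_for(pt))
            for pt in pts1]
    # 3. Sizes of the two subject images from feature-point separations.
    Y_FD1 = subject_image_size(pts1[0], pts1[1], PS1)
    Y_FD2 = subject_image_size(pts2[0], pts2[1], PS2)
    # 4. Image magnification ratio, then distance via the MR-a relationship.
    return distance_from_MR(Y_FD2 / Y_FD1)
```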
  • In order to prevent the change in the magnification m_1 of the first subject image according to the distance a to the subject 100 from becoming identical to the change in the magnification m_2 of the second subject image according to the distance a to the subject 100, the first optical system OS1 and the second optical system OS2 are configured and arranged so that at least one of the following three conditions is satisfied; thereby the change in the magnification m_1 of the first subject image according to the distance a differs from the change in the magnification m_2 of the second subject image according to the distance a.
  • (First condition) The focal length f_1 of the first optical system OS1 and the focal length f_2 of the second optical system OS2 differ from each other (f_1 ≠ f_2).
  • (Second condition) The distance EP_1 from the exit pupil of the first optical system OS1 to the imaging position of the first subject image when the subject 100 is at infinity and the distance EP_2 from the exit pupil of the second optical system OS2 to the imaging position of the second subject image when the subject 100 is at infinity differ from each other (EP_1 ≠ EP_2).
  • (Third condition) A difference (depth parallax) D in the depth direction exists between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 (D ≠ 0).
  • However, there are special combinations of configuration and arrangement for which the image magnification ratio MR does not hold as a function of the distance a, and it then becomes impossible to calculate the distance a from the first optical system OS1 to the subject 100 based on the image magnification ratio MR. Therefore, in order to calculate the distance a from the first optical system OS1 to the subject 100 based on the image magnification ratio MR, the distance measuring camera of the present invention further satisfies a fourth condition: the image magnification ratio MR is established as a function of the distance a.
  • Thereby, the distance a from the front principal point of the first optical system OS1 to the subject 100 can be calculated from the image magnification ratio obtained from the size Y_FD1 of the first subject image and the size Y_FD2 of the second subject image actually measured from the first image and the second image acquired using the distance measuring camera of the present invention.
  • Hereinafter, the distance measuring camera of the present invention, which calculates the distance a to the subject 100 based on the image magnification ratio MR between the magnification m_1 of the first subject image and the magnification m_2 of the second subject image, will be described in detail based on the preferred embodiments shown in the accompanying drawings.
  • FIG. 7 is a block diagram schematically showing the distance measuring camera according to the first embodiment of the present invention.
  • The distance measuring camera 1 shown in FIG. 7 includes: a control unit 2 for controlling the distance measuring camera 1; a first imaging system IS1 having a first optical system OS1 for collecting light from the subject 100 and forming a first subject image, and a first image sensor S1 for capturing the first subject image and acquiring a first image including the first subject image; a second imaging system IS2 having a second optical system OS2, arranged shifted by the distance P in a direction perpendicular to the optical axis direction of the first optical system OS1, for collecting light from the subject 100 and forming a second subject image, and a second image sensor S2 for capturing the second subject image and acquiring a second image including the second subject image; a size acquisition unit 3 for acquiring the size Y_FD1 of the first subject image and the size Y_FD2 of the second subject image; an association information storage unit 4 storing association information that associates the image magnification ratio MR between the magnification m_1 of the first subject image and the magnification m_2 of the second subject image with the distance a to the subject 100; a distance calculation unit 5 for calculating the distance a to the subject 100 based on the image magnification ratio MR between the magnification m_1 of the first subject image and the magnification m_2 of the second subject image, obtained as the ratio of the size Y_FD1 of the first subject image to the size Y_FD2 of the second subject image acquired by the size acquisition unit 3; and a three-dimensional image generation unit 6 for generating a three-dimensional image of the subject 100 based on the first image acquired by the first image sensor S1 or the second image acquired by the second image sensor S2 and the calculated distance a. These components exchange data with one another via the data bus 10.
  • The distance measuring camera 1 of the present embodiment is characterized in that, among the above three conditions required for calculating the distance a to the subject 100 based on the image magnification ratio MR, the first optical system OS1 and the second optical system OS2 are configured so as to satisfy the first condition, that the focal length f_1 of the first optical system OS1 and the focal length f_2 of the second optical system OS2 differ from each other (f_1 ≠ f_2).
  • On the other hand, the first optical system OS1 and the second optical system OS2 are not configured and arranged so as to satisfy the other two of the above three conditions (EP_1 ≠ EP_2 and D ≠ 0).
  • Further, the distance measuring camera 1 of the present embodiment is configured so as to satisfy the fourth condition, that the image magnification ratio MR is established as a function of the distance a.
  • In the distance measuring camera 1, the subject 100 is imaged by the first imaging system IS1 and the second imaging system IS2, the image magnification ratio MR between the magnification m_1 of the first subject image and the magnification m_2 of the second subject image is calculated, and the distance a to the subject 100 is calculated using the above equation (30).
  • Specifically, the size acquisition unit 3 acquires the size Y_FD1 of the first subject image by detecting a plurality of feature points (for example, both ends in the height direction or the width direction) of the first subject image in the first image acquired by the first image sensor S1 and measuring the distance between the plurality of feature points.
  • Further, the size acquisition unit 3 acquires the size Y_FD2 of the second subject image by detecting the plurality of feature points of the second subject image in the second image corresponding respectively to the detected feature points of the first subject image and measuring the distance between them.
  • In the corresponding feature point detection processing for detecting the plurality of feature points of the second subject image in the second image corresponding respectively to the plurality of feature points of the first subject image, the epipolar lines based on epipolar geometry described above are used.
  • That is, by searching along the epipolar lines in the second image corresponding respectively to the plurality of feature points of the first subject image, the plurality of feature points of the second subject image in the second image can be detected.
  • Therefore, the plurality of feature points of the second subject image can be detected without searching the entire region of the second image, and the processing time required for the corresponding feature point detection processing can be significantly reduced. As a result, the processing time for calculating the distance a to the subject 100 based on the image magnification ratio MR between the subject images can also be significantly reduced.
  • The control unit 2 transmits and receives various data and various instructions to and from each component via the data bus 10, and controls the distance measuring camera 1.
  • The control unit 2 includes a processor for executing arithmetic processing and a memory storing the data, programs, modules, and the like necessary for controlling the distance measuring camera 1; the processor of the control unit 2 executes the control of the distance measuring camera 1 using the data, programs, modules, and the like stored in the memory.
  • The processor of the control unit 2 can provide desired functions by using each component of the distance measuring camera 1.
  • For example, the processor of the control unit 2 can use the distance calculation unit 5 to execute processing for calculating the distance a to the subject 100 based on the image magnification ratio MR between the magnification m_1 of the first subject image and the magnification m_2 of the second subject image.
  • The processor of the control unit 2 is, for example, one or more microprocessors, microcomputers, microcontrollers, digital signal processors (DSPs), central processing units (CPUs), memory control units (MCUs), graphics processing units (GPUs), state machines, logic circuits, application specific integrated circuits (ASICs), or a combination thereof, that is, an arithmetic unit that performs arithmetic operations such as signal operations based on computer readable instructions.
  • Among other functions, the processor of the control unit 2 is configured to fetch computer readable instructions (for example, data, programs, and modules) stored in the memory of the control unit 2 and to perform operations, signal operations, and control.
  • The memory of the control unit 2 is a removable or non-removable computer readable medium including a volatile storage medium (for example, RAM, SRAM, or DRAM), a non-volatile storage medium (for example, ROM, EPROM, EEPROM, flash memory, a hard disk, an optical disk, a CD-ROM, a digital versatile disk (DVD), a magnetic cassette, magnetic tape, or a magnetic disk), or a combination thereof.
  • The memory of the control unit 2 stores in advance the fixed values f_1, f_2, EP_1, EP_2, a_FD1, a_FD2, PS_1, PS_2, P_x, P_y, and D, which are determined by the configuration and arrangement of the first imaging system IS1 and the second imaging system IS2, as well as the values derived from these fixed values, such as K_1, K_2, α, β, and γ, and the equation (13) (or its simplified form) for calculating the distance a to the subject 100.
  • The first imaging system IS1 has the first optical system OS1 and the first image sensor S1.
  • The first optical system OS1 has a function of collecting light from the subject 100 and forming the first subject image on the imaging surface of the first image sensor S1.
  • The first image sensor S1 has a function of capturing the first subject image formed on its imaging surface and acquiring the first image including the first subject image.
  • Similarly, the second imaging system IS2 has the second optical system OS2 and the second image sensor S2.
  • The second optical system OS2 has a function of collecting light from the subject 100 and forming the second subject image on the imaging surface of the second image sensor S2.
  • The second image sensor S2 has a function of capturing the second subject image formed on its imaging surface and acquiring the second image including the second subject image.
  • In the illustrated configuration, the first image sensor S1 and the first optical system OS1 that constitute the first imaging system IS1 are provided in one housing, and the second image sensor S2 and the second optical system OS2 that constitute the second imaging system IS2 are provided in another housing; however, the present invention is not limited to this. An embodiment in which the first optical system OS1, the second optical system OS2, the first image sensor S1, and the second image sensor S2 are all provided in the same housing is also within the scope of the present invention.
  • the first optical system OS1 and the second optical system OS2 include one or more lenses and optical elements such as a diaphragm.
  • The first optical system OS1 and the second optical system OS2 are configured such that the focal length f1 of the first optical system OS1 and the focal length f2 of the second optical system OS2 differ from each other (f1 ≠ f2). As a result, the change in the magnification m1 of the first subject image with respect to the distance to the subject 100 differs from the change in the magnification m2 of the second subject image with respect to that distance. The image magnification ratio MR, that is, the ratio between the magnification m1 of the first subject image and the magnification m2 of the second subject image obtained by such a configuration of the first optical system OS1 and the second optical system OS2, is used to calculate the distance a to the subject 100 (an illustrative thin-lens sketch follows this item).
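The patent's actual distance formula (equation (13), with simplified forms per embodiment) accounts for exit pupil positions and defocus and is not reproduced in this text. As a rough illustration of why f1 ≠ f2 makes MR informative about distance, the following sketch inverts MR under an idealized thin-lens model in which each magnification is m = f/(a − f); it is a toy under stated assumptions, not the patent's formula.

```python
def thin_lens_mr(a: float, f1: float, f2: float) -> float:
    """Image magnification ratio MR = m2/m1 for an idealized thin-lens pair.

    m = f / (a - f) for each system; this ignores the exit pupil and
    defocus terms that the patent's full model includes.
    """
    m1 = f1 / (a - f1)
    m2 = f2 / (a - f2)
    return m2 / m1

def distance_from_mr(mr: float, f1: float, f2: float) -> float:
    """Invert MR -> a under the same idealized model.

    From MR = f2*(a - f1) / (f1*(a - f2)):
        a = f1*f2*(MR - 1) / (MR*f1 - f2)
    Only solvable when f1 != f2, mirroring the first condition the
    patent imposes on the two optical systems.
    """
    return f1 * f2 * (mr - 1.0) / (mr * f1 - f2)

# Round trip at a = 500 mm with assumed focal lengths f1 = 20 mm, f2 = 40 mm:
mr = thin_lens_mr(500.0, 20.0, 40.0)
assert abs(distance_from_mr(mr, 20.0, 40.0) - 500.0) < 1e-9
```

Note that with f1 = f2 this toy ratio collapses to MR = 1 at every distance, which is why each embodiment must satisfy at least one of the three conditions that make MR vary with a.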
  • the optical axis of the first optical system OS1 and the optical axis of the second optical system OS2 are parallel but not coincident. Further, the second optical system OS2 is arranged shifted by a distance P in a direction perpendicular to the optical axis direction of the first optical system OS1.
  • Each of the first image sensor S1 and the second image sensor S2 may be a color image sensor, such as a CMOS image sensor or a CCD image sensor, having a color filter such as an RGB primary color filter or a CMY complementary color filter arranged in an arbitrary pattern such as a Bayer array, or may be a monochrome image sensor without such a color filter.
  • In this case, the first image obtained by the first image sensor S1 and the second image obtained by the second image sensor S2 contain color or monochrome luminance information of the subject 100.
  • each of the first image sensor S1 and the second image sensor S2 may be a phase sensor that acquires phase information of the subject 100.
  • In that case, the first image obtained by the first image sensor S1 and the second image obtained by the second image sensor S2 contain phase information of the subject 100.
  • The first optical system OS1 forms the first subject image on the imaging surface of the first image sensor S1, and the first image sensor S1 acquires the first image including the first subject image.
  • the acquired first image is sent to the control unit 2 and the size acquisition unit 3 via the data bus 10.
  • Similarly, the second subject image is formed on the imaging surface of the second image sensor S2 by the second optical system OS2, and the second image including the second subject image is acquired by the second image sensor S2.
  • the acquired second image is sent to the control unit 2 and the size acquisition unit 3 via the data bus 10.
  • The first image and the second image sent to the size acquisition unit 3 are used for obtaining the size YFD1 of the first subject image and the size YFD2 of the second subject image.
  • the first image and the second image sent to the control unit 2 are used for image display by the display unit 7 and communication of image signals by the communication unit 9.
  • The size acquisition unit 3 has a function of acquiring the size YFD1 of the first subject image and the size YFD2 of the second subject image from the first image including the first subject image and the second image including the second subject image. Specifically, the size acquisition unit 3 obtains the size YFD1 of the first subject image by detecting a plurality of feature points of the first subject image in the first image and measuring the distance between the detected feature points of the first subject image. Further, the size acquisition unit 3 obtains the size YFD2 of the second subject image by detecting the plurality of feature points of the second subject image in the second image that respectively correspond to the detected feature points of the first subject image, and measuring the distance between the detected feature points of the second subject image.
  • In operation, the size acquisition unit 3 receives the first image from the first image sensor S1 and the second image from the second image sensor S2. After that, the size acquisition unit 3 detects an arbitrary plurality of feature points of the first subject image in the first image. The method by which these feature points are detected is not particularly limited; the size acquisition unit 3 can detect an arbitrary plurality of feature points of the first subject image in the first image using various methods known in the art.
  • The coordinates (x1, y1) of each of the plurality of feature points detected by the size acquisition unit 3 are temporarily stored in the memory of the control unit 2.
  • In one configuration, the size acquisition unit 3 applies a filtering process such as a Canny filter to the first image and extracts the edge portions of the first subject image in the first image. Thereafter, the size acquisition unit 3 detects some of the extracted edge portions of the first subject image as the plurality of feature points of the first subject image and obtains the size YFD1 of the first subject image by measuring the separation distance between those feature points. In this case, the size acquisition unit 3 may detect the edge portions corresponding to both ends in the height direction of the first subject image as the feature points and take the separation distance between them as the size (image height) YFD1 of the first subject image, or it may detect the edge portions corresponding to both ends in the width direction of the first subject image as the feature points and take the separation distance between them as the size (image width) YFD1 of the first subject image (a sketch of this edge-based size acquisition follows this item).
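A minimal sketch of this edge-based size acquisition is given below, using OpenCV's Canny filter on an 8-bit grayscale image; the thresholds and the choice of the topmost and bottommost edge pixels as the two feature points are assumptions for illustration, not values from the patent.

```python
import cv2
import numpy as np

def subject_image_height(first_image: np.ndarray) -> float:
    """Measure the size (image height) YFD1 of a subject image in pixels.

    A minimal sketch: extract edges with a Canny filter, take the topmost
    and bottommost edge pixels as the two feature points (both ends in the
    height direction), and return the separation distance between them.
    `first_image` is assumed to be an 8-bit grayscale image.
    """
    edges = cv2.Canny(first_image, 100, 200)  # thresholds are illustrative
    ys, xs = np.nonzero(edges)
    if ys.size == 0:
        raise ValueError("no edges found in the first image")
    top = np.array([xs[np.argmin(ys)], ys.min()], dtype=float)
    bottom = np.array([xs[np.argmax(ys)], ys.max()], dtype=float)
    return float(np.linalg.norm(bottom - top))  # separation distance YFD1
```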
  • After obtaining the size YFD1 of the first subject image, the size acquisition unit 3 performs the corresponding feature point detection process for detecting the plurality of feature points of the second subject image in the second image that respectively correspond to the detected feature points of the first subject image.
  • In the corresponding feature point detection process, the size acquisition unit 3 first refers to the coordinates (x1, y1) of the plurality of feature points of the first subject image stored in the memory of the control unit 2 and selects one of the detected feature points of the first subject image. Thereafter, the size acquisition unit 3 cuts out an area of predetermined size centered on the selected feature point in the first image (for example, an area of 5×5 pixels or 7×7 pixels centered on the selected feature point) to obtain a search block for the selected feature point. The search block is used to search for the feature point of the second subject image in the second image that corresponds to the selected feature point of the first subject image. The obtained search block is temporarily stored in the memory of the control unit 2.
  • Next, the size acquisition unit 3 uses the fixed values stored in the memory of the control unit 2 to derive the epipolar line corresponding to the selected feature point of the first subject image based on the above equation (32) (or the general equation (29)). Thereafter, the size acquisition unit 3 detects the feature point of the second subject image in the second image corresponding to the selected feature point of the first subject image by searching along the derived epipolar line.
  • Specifically, the size acquisition unit 3 executes a convolution operation (convolution integration) between the search block for the selected feature point of the first subject image stored in the memory of the control unit 2 and each epipolar line peripheral region, that is, each region of the same size as the search block centered on a pixel on the epipolar line in the second image, thereby calculating a correlation value between the search block and each epipolar line peripheral region. This calculation of correlation values is performed along the derived epipolar line in the second image. The size acquisition unit 3 then detects the center pixel (that is, the pixel on the epipolar line) of the epipolar line peripheral region having the highest correlation value as the feature point of the second subject image in the second image corresponding to the selected feature point of the first subject image.
  • The coordinates (x2, y2) of the detected feature point of the second subject image are temporarily stored in the memory of the control unit 2.
  • In order to obtain the correlation value between the two regions more accurately, interpolation of pixels in the search block or the second image may be performed; any method known in the art for accurately obtaining such a correlation value between two regions may be used in the corresponding feature point detection process.
  • In this way, the size acquisition unit 3 derives the plurality of epipolar lines respectively corresponding to the detected feature points of the first subject image based on the above equation (32) (or the general equation (29)) and, by searching along each of these epipolar lines as described above, detects the plurality of feature points of the second subject image in the second image that respectively correspond to the detected feature points of the first subject image. When the feature points of the second subject image corresponding to all of the detected feature points of the first subject image have been detected, the corresponding feature point detection process by the size acquisition unit 3 ends (a sketch of this epipolar search follows this item).
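The following sketch illustrates the search just described. The epipolar line is assumed to be given as a precomputed list of candidate pixel coordinates in the second image (in the patent it would be derived from equation (32)); zero-normalized cross-correlation stands in for the convolution-based correlation value, and the function and parameter names are assumptions.

```python
import numpy as np

def zncc(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """Zero-normalized cross-correlation between two equally sized blocks."""
    a = block_a.astype(float) - block_a.mean()
    b = block_b.astype(float) - block_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else -1.0

def search_along_epipolar_line(first_image, second_image, feature_xy, epipolar_xy, half=2):
    """Find the pixel on an epipolar line whose neighborhood best matches
    the search block around a feature point of the first subject image.

    feature_xy: (x1, y1) of the selected feature point in the first image
    (assumed not to lie on the image border).
    epipolar_xy: iterable of candidate (x2, y2) pixels on the derived
    epipolar line in the second image (assumed precomputed).
    half=2 gives a 5x5 search block, one of the sizes the patent mentions.
    """
    x1, y1 = feature_xy
    block = first_image[y1 - half:y1 + half + 1, x1 - half:x1 + half + 1]
    best, best_xy = -np.inf, None
    for x2, y2 in epipolar_xy:
        region = second_image[y2 - half:y2 + half + 1, x2 - half:x2 + half + 1]
        if region.shape != block.shape:
            continue  # skip candidates too close to the image border
        score = zncc(block, region)
        if score > best:
            best, best_xy = score, (x2, y2)
    return best_xy  # detected feature point (x2, y2) of the second subject image
```

Restricting the candidates to the pixels on the epipolar line is exactly what replaces the full-image search and yields the reduction in processing time described above.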
  • After executing the corresponding feature point detection process, the size acquisition unit 3 obtains the size YFD2 of the second subject image by measuring the separation distance between the plurality of feature points of the second subject image from their coordinates (x2, y2) temporarily stored in the memory of the control unit 2.
  • As described above, the epipolar lines represented by the above equation (32) are derived not from the pinhole model generally used in the related art, which does not take the characteristics of the first imaging system IS1 and the second imaging system IS2 into account, but from a model that takes both the characteristics and the arrangement of the first imaging system IS1 and the second imaging system IS2 shown in FIGS. 4 and 5 into account. Therefore, the size acquisition unit 3 can detect the plurality of feature points of the second subject image in the second image more accurately than in the case where the epipolar lines in the second image are derived using the conventional pinhole model. As a result, the distance a to the subject 100 can be measured more accurately.
  • The association information storage unit 4 stores association information that associates the image magnification ratio MR (m2/m1) between the magnification m1 of the first subject image and the magnification m2 of the second subject image with the distance a from the front principal point of the first optical system OS1 to the subject 100. The association information storage unit 4 is an arbitrary non-volatile recording medium (for example, a hard disk or a flash memory).
  • The association information stored in the association information storage unit 4 is, for example, the above equation (30) (or the general equation (13)) for calculating the distance a to the subject 100 based on the image magnification ratio MR.
  • Alternatively, the association information stored in the association information storage unit 4 may be a look-up table in which the image magnification ratio MR and the distance a to the subject 100 are uniquely associated. By referring to such association information stored in the association information storage unit 4, the distance a to the subject 100 can be calculated based on the image magnification ratio MR (a sketch of such a table follows this item).
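A minimal sketch of such a look-up table, using linear interpolation between tabulated entries, is shown below; the tabulated values are placeholders, and the monotonic increase of MR over the working range is an assumption for this illustration.

```python
import numpy as np

# A minimal sketch of association information stored as a look-up table:
# tabulated image magnification ratios MR and the corresponding distances a.
# The values below are placeholders, not calibration data from the patent.
mr_table = np.array([1.10, 1.25, 1.50, 1.80, 2.10])        # image magnification ratio MR
a_table = np.array([2000.0, 1000.0, 500.0, 300.0, 200.0])  # distance a in mm

def distance_from_table(mr: float) -> float:
    """Look up the distance a for a measured MR, interpolating linearly.

    np.interp requires its x-coordinates (mr_table) to be increasing,
    which holds here; the associated distances may then decrease.
    """
    return float(np.interp(mr, mr_table, a_table))
```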
  • When the association information is the above-described expression for calculating the distance a to the subject 100, the fixed values stored in the memory of the control unit 2 are referred to in addition to the association information in order to calculate the distance a to the subject 100.
  • The distance calculation unit 5 calculates (specifies) the distance a to the subject 100 based on the image magnification ratio MR by referring to the association information stored in the association information storage unit 4 (and, when the association information is the above-described expression for calculating the distance a to the subject 100, also to the fixed values stored in the memory of the control unit 2).
  • The three-dimensional image generation unit 6 generates a three-dimensional image of the subject 100 based on the distance a to the subject 100 calculated by the distance calculation unit 5 and the color or monochrome luminance information of the subject 100 acquired by the first imaging system IS1 or the second imaging system IS2.
  • Here, the “three-dimensional image of the subject 100” means data in which the calculated distance a to the subject 100 is associated with each pixel of an ordinary two-dimensional image representing the color or monochrome luminance information of the subject 100.
  • When the first imaging element S1 of the first imaging system IS1 and the second imaging element S2 of the second imaging system IS2 are phase sensors that acquire phase information of the subject 100, the three-dimensional image generation unit 6 is omitted.
  • The display unit 7 is a panel-type display such as a liquid crystal display and, according to signals from the processor of the control unit 2, displays in the form of characters or images the color or monochrome luminance information or the phase information of the subject 100 (the first image or the second image) acquired by the first imaging system IS1 or the second imaging system IS2, the distance a to the subject 100 calculated by the distance calculation unit 5, the three-dimensional image of the subject 100 generated by the three-dimensional image generation unit 6, information for operating the distance measuring camera 1, and the like.
  • the operation unit 8 is used by a user of the distance measuring camera 1 to execute an operation.
  • the operation unit 8 is not particularly limited as long as the user of the distance measuring camera 1 can perform an operation.
  • As the operation unit 8, for example, a mouse, a keyboard, a numeric keypad, a button, a dial, a lever, a touch panel, or the like can be used.
  • the operation unit 8 transmits a signal corresponding to an operation by the user of the distance measuring camera 1 to the processor of the control unit 2.
  • the communication unit 9 has a function of inputting data to the ranging camera 1 or outputting data from the ranging camera 1 to an external device.
  • the communication unit 9 may be configured to be connectable to a network such as the Internet. In this case, by using the communication unit 9, the distance measuring camera 1 can communicate with an external device such as a web server or a data server provided outside.
  • As described above, in the distance measuring camera 1 of the present embodiment, the first optical system OS1 and the second optical system OS2 are configured such that the focal length f1 of the first optical system OS1 and the focal length f2 of the second optical system OS2 differ from each other (f1 ≠ f2), whereby the change in the magnification m1 of the first subject image with respect to the distance a to the subject 100 differs from the change in the magnification m2 of the second subject image with respect to the distance a to the subject 100. Therefore, the distance measuring camera 1 of the present invention can uniquely calculate the distance a to the subject 100 based on the image magnification ratio MR (m2/m1) between the magnification m1 of the first subject image and the magnification m2 of the second subject image.
  • an epipolar line based on epipolar geometry is used in the corresponding feature point detection processing executed by the size acquisition unit 3. Therefore, the processing time required for the corresponding feature point detection processing can be significantly reduced, and the processing time required for calculating the distance a to the subject 100 can be significantly reduced.
  • Further, the epipolar lines represented by the above equation (32) are derived not from the pinhole model generally used in the related art, which does not take the characteristics of the first imaging system IS1 and the second imaging system IS2 into account, but from a model that takes both the characteristics and the arrangement of the first imaging system IS1 and the second imaging system IS2 shown in FIGS. 4 and 5 into account. Therefore, the plurality of feature points of the second subject image in the second image can be detected more accurately than when the epipolar lines in the second image are derived using the conventional pinhole model. Thereby, the accuracy of the measurement of the distance a to the subject 100 by the distance measuring camera 1 can be improved (a comparison with the textbook pinhole constraint follows this item).
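For reference, in the conventional pinhole model the corresponding point search is constrained by the standard textbook epipolar relation through a fundamental matrix; the note below states that relation for comparison (this is general epipolar geometry, not the patent's equation (32)).

```latex
% Textbook pinhole epipolar constraint: homogeneous pixel coordinates
% \tilde{x}_1 in the first image and \tilde{x}_2 in the second image of
% the same scene point satisfy, for the fundamental matrix F,
\[
  \tilde{x}_2^{\top} \, F \, \tilde{x}_1 = 0,
  \qquad
  \ell_2 = F \, \tilde{x}_1,
\]
% so the candidate correspondences lie on the line \ell_2. Equation (32)
% plays the same role here, but its line coefficients additionally depend
% on the fixed values f1, f2, EP1, EP2, aFD1, aFD2, PS1, PS2, Px, Py, D,
% i.e., on the defocus and pupil geometry of the two imaging systems.
```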
  • FIG. 8 is a block diagram schematically showing a distance measuring camera according to the second embodiment of the present invention.
  • the ranging camera 1 of the present embodiment is the same as the ranging camera 1 of the first embodiment, except that the configurations of the first optical system OS1 and the second optical system OS2 are changed.
  • In the distance measuring camera 1 of the present embodiment, the first optical system OS1 and the second optical system OS2 are configured and arranged so that, among the above three conditions required to calculate the distance a to the subject 100 based on the image magnification ratio MR, the condition concerning the exit pupils is satisfied, namely that the distance EP1 from the exit pupil of the first optical system OS1 to the image formation position of the first subject image when the subject 100 is at infinity differs from the distance EP2 from the exit pupil of the second optical system OS2 to the image formation position of the second subject image when the subject 100 is at infinity (EP1 ≠ EP2). On the other hand, the first optical system OS1 and the second optical system OS2 are not configured and arranged so as to satisfy the other two conditions (f1 ≠ f2 and D ≠ 0) among the above three conditions.
  • Nevertheless, the distance measuring camera 1 of the present embodiment is configured to satisfy the condition that the image magnification ratio MR is established as a function of the distance a.
  • Since the first optical system OS1 and the second optical system OS2 are configured in this manner, the change in the magnification m1 of the first subject image with respect to the distance a to the subject 100 differs from the change in the magnification m2 of the second subject image with respect to the distance a to the subject 100. Therefore, the distance measuring camera 1 of this embodiment can uniquely calculate the distance a to the subject 100 based on the image magnification ratio MR (m2/m1) between the magnification m1 of the first subject image and the magnification m2 of the second subject image.
  • Also in the present embodiment, the plurality of feature points of the second subject image in the second image that respectively correspond to the plurality of feature points of the first subject image can be detected by searching along the epipolar lines in the second image. Accordingly, the feature points of the second subject image can be detected without searching the entire area of the second image, and the processing time required for the corresponding feature point detection process can be significantly reduced. As a result, the processing time for calculating the distance a to the subject 100 based on the image magnification ratio MR between the subject images can be significantly reduced.
  • the same effects as those of the first embodiment can be exerted.
  • FIG. 9 is a block diagram schematically showing a distance measuring camera according to the third embodiment of the present invention.
  • the ranging camera 1 of the present embodiment is the same as the ranging camera 1 of the first embodiment, except that the configurations of the first optical system OS1 and the second optical system OS2 are changed.
  • In the distance measuring camera 1 of the present embodiment, the first optical system OS1 and the second optical system OS2 are configured and arranged so that, among the above three conditions required to calculate the distance a to the subject 100 based on the image magnification ratio MR, the third condition that a difference D in the depth direction (optical axis direction) exists between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 (D ≠ 0) is satisfied. On the other hand, the first optical system OS1 and the second optical system OS2 are not configured so as to satisfy the other two conditions (f1 ≠ f2 and EP1 ≠ EP2) among the above three conditions.
  • Nevertheless, the distance measuring camera 1 of the present embodiment is configured to satisfy the condition that the image magnification ratio MR is established as a function of the distance a.
  • In the distance measuring camera 1 of the present embodiment, the difference D in the depth direction (optical axis direction) exists between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 (D ≠ 0), whereby the change in the magnification m1 of the first subject image with respect to the distance a to the subject 100 differs from the change in the magnification m2 of the second subject image with respect to the distance a to the subject 100. Therefore, the distance measuring camera 1 of this embodiment can uniquely calculate the distance a to the subject 100 based on the image magnification ratio MR (m2/m1) between the magnification m1 of the first subject image and the magnification m2 of the second subject image.
  • Also in the present embodiment, the plurality of feature points of the second subject image in the second image that respectively correspond to the plurality of feature points of the first subject image can be detected by searching along the epipolar lines in the second image. Accordingly, the feature points of the second subject image can be detected without searching the entire area of the second image, and the processing time required for the corresponding feature point detection process can be significantly reduced. As a result, the processing time for calculating the distance a to the subject 100 based on the image magnification ratio MR between the subject images can be significantly reduced.
  • the same effects as those of the first embodiment can be exerted.
  • As described above, the distance measuring camera 1 of the present invention can calculate the distance a from the front principal point of the first optical system OS1 to the subject 100 based on the first image acquired using the first imaging system IS1 and the second image acquired using the second imaging system IS2. In addition, the processing time for calculating the distance a to the subject 100 based on the image magnification ratio MR between the subject images can be significantly reduced.
  • Such an additional optical system is configured and arranged so that the change in the magnification of the subject image formed by the additional optical system with respect to the distance a to the subject 100 differs from the change in the magnification m1 of the first subject image with respect to the distance a to the subject and from the change in the magnification m2 of the second subject image with respect to the distance a to the subject.
  • In the first to third embodiments, the first optical system OS1 and the second optical system OS2 are configured and arranged so as to satisfy one of the above three conditions required to calculate the distance a to the subject 100 based on the image magnification ratio MR; however, the present invention is not limited to this, as long as the first optical system OS1 and the second optical system OS2 are configured and arranged so that at least one of the above three conditions is satisfied. An aspect in which the first optical system OS1 and the second optical system OS2 are configured and arranged such that all or any combination of the above three conditions is satisfied is also within the scope of the present invention.
  • FIG. 10 is a flowchart for explaining a distance measuring method executed by the distance measuring camera of the present invention.
  • FIG. 11 is a flowchart showing details of the corresponding feature point detection process executed in the distance measuring method shown in FIG.
  • The distance measuring method described in detail below can be executed using the distance measuring camera 1 according to the above-described first to third embodiments of the present invention or an arbitrary apparatus having the same functions as the distance measuring camera 1; for convenience of explanation, the description assumes that the processing is executed using the distance measuring camera 1 according to the first embodiment.
  • the distance measuring method S100 shown in FIG. 10 is started when the user of the distance measuring camera 1 executes an operation for measuring the distance a to the subject 100 using the operation unit 8.
  • In step S110, the first subject image formed by the first optical system OS1 is captured by the first image sensor S1 of the first imaging system IS1, and the first image including the first subject image is acquired.
  • the first image is sent to the control unit 2 and the size acquisition unit 3 via the data bus 10.
  • In step S120, the second image sensor S2 of the second imaging system IS2 captures the second subject image formed by the second optical system OS2, and the second image including the second subject image is acquired.
  • the second image is sent to the control unit 2 and the size acquisition unit 3 via the data bus 10.
  • the acquisition of the first image in S110 and the acquisition of the second image in S120 may be performed simultaneously or separately.
  • In step S130, the size acquisition unit 3 detects an arbitrary plurality of feature points of the first subject image in the first image.
  • The arbitrary plurality of feature points of the first subject image detected by the size acquisition unit 3 in step S130 are, for example, the portions at both ends in the height direction or at both ends in the width direction of the first subject image.
  • The coordinates (x1, y1) of each of the plurality of feature points of the first subject image detected by the size acquisition unit 3 are temporarily stored in the memory of the control unit 2.
  • In step S140, the size acquisition unit 3 refers to the coordinates (x1, y1) of the plurality of feature points of the first subject image temporarily stored in the memory of the control unit 2 and obtains the size YFD1 of the first subject image by measuring the separation distance between the detected feature points of the first subject image.
  • The size YFD1 of the first subject image acquired in step S140 is temporarily stored in the memory of the control unit 2.
  • In step S150, the size acquisition unit 3 performs the corresponding feature point detection process for detecting the plurality of feature points of the second subject image in the second image that respectively correspond to the plurality of feature points of the first subject image detected in step S130. FIG. 11 shows a flowchart illustrating the details of the corresponding feature point detection process executed in step S150.
  • In step S151, the size acquisition unit 3 refers to the coordinates (x1, y1) of the plurality of feature points of the first subject image stored in the memory of the control unit 2 and selects one of the detected feature points of the first subject image.
  • In step S152, the size acquisition unit 3 cuts out an area of predetermined size centered on the selected feature point of the first subject image in the first image (for example, an area of 5×5 pixels or 7×7 pixels centered on the feature point) to obtain a search block for the selected feature point. The obtained search block is temporarily stored in the memory of the control unit 2.
  • In step S153, the size acquisition unit 3 uses the fixed values stored in the memory of the control unit 2 to derive the epipolar line in the second image corresponding to the feature point of the first subject image selected in step S151, based on the general equation (29) described above (or the expression representing the simplified epipolar line in each embodiment).
  • In step S154, the size acquisition unit 3 executes a convolution operation (convolution integration) between the search block for the selected feature point of the first subject image stored in the memory of the control unit 2 and each epipolar line peripheral region of the same size as the search block centered on a pixel on the derived epipolar line in the second image, thereby calculating the correlation values between the search block and the epipolar line peripheral regions. The calculated correlation values are temporarily stored in the memory of the control unit 2.
  • the calculation of the correlation value is also referred to as block matching, and is performed along the derived epipolar line in the second image.
  • Thereafter, the corresponding feature point detection process of step S150 proceeds to step S155.
  • In step S155, the size acquisition unit 3 detects the center pixel (that is, the pixel on the epipolar line) of the epipolar line peripheral region having the highest correlation value as the feature point of the second subject image in the second image corresponding to the selected feature point of the first subject image.
  • The coordinates (x2, y2) of the detected feature point of the second subject image are temporarily stored in the memory of the control unit 2.
  • Thereafter, the process returns to step S151, where an unselected one of the plurality of feature points of the first subject image is newly selected, and the processing of steps S151 to S155 is repeatedly executed, changing the selected feature point of the first subject image, until the feature points of the second subject image in the second image corresponding to all of the detected feature points of the first subject image have been detected.
  • In step S160, the size acquisition unit 3 obtains the size YFD2 of the second subject image by measuring the separation distance between the detected feature points of the second subject image.
  • The size YFD2 of the second subject image acquired in step S160 is temporarily stored in the memory of the control unit 2.
  • In step S170, the distance calculation unit 5 calculates the image magnification ratio MR between the magnification m1 of the first subject image and the magnification m2 of the second subject image based on the above equation (14) (MR = YFD2/YFD1), from the size YFD1 of the first subject image and the size YFD2 of the second subject image temporarily stored in the memory of the control unit 2.
  • In step S180, the distance calculation unit 5 refers to the association information stored in the association information storage unit 4 and calculates the distance a to the subject 100 based on the calculated image magnification ratio MR. If the association information is the above-described expression for calculating the distance a to the subject 100, the distance calculation unit 5 refers to the fixed values stored in the memory of the control unit 2 in addition to the association information in order to calculate the distance a to the subject 100.
  • When the distance calculation unit 5 has calculated the distance a to the subject 100 in step S180, the distance measuring method S100 proceeds to step S190.
  • In step S190, the three-dimensional image generation unit 6 generates a three-dimensional image of the subject 100 based on the distance a to the subject 100 calculated by the distance calculation unit 5 and the color or monochrome luminance information of the subject 100 (the first image or the second image) acquired by the first imaging system IS1 or the second imaging system IS2. If the first imaging element S1 of the first imaging system IS1 and the second imaging element S2 of the second imaging system IS2 are phase sensors that acquire phase information of the subject 100, step S190 is omitted.
  • Thereafter, the generated three-dimensional image of the subject 100 is displayed on the display unit 7 or transmitted to an external device by the communication unit 9, and the distance measuring method S100 ends (a summary sketch of the method follows this item).
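Putting steps S110 through S190 together, the flow of the distance measuring method S100 can be summarized in the sketch below. The `camera` object and its methods are hypothetical stand-ins for the imaging systems, the size acquisition unit, and the distance calculation unit; `search_along_epipolar_line` is the illustrative helper from the earlier block-matching sketch.

```python
import numpy as np

def separation_distance(points) -> float:
    """Distance in pixels between the first and last detected feature points."""
    p0, p1 = np.asarray(points[0], float), np.asarray(points[-1], float)
    return float(np.linalg.norm(p1 - p0))

def ranging_method_s100(camera):
    """Sketch of the distance measuring method S100 (hypothetical interface)."""
    img1 = camera.capture_first_image()        # S110: first image via IS1
    img2 = camera.capture_second_image()       # S120: second image via IS2

    pts1 = camera.detect_feature_points(img1)  # S130: feature points in image 1
    y_fd1 = separation_distance(pts1)          # S140: size YFD1

    pts2 = []                                  # S150: corresponding feature points
    for p in pts1:
        line = camera.derive_epipolar_line(p)  # S153: from equation (32)
        pts2.append(search_along_epipolar_line(img1, img2, p, line))  # S154-S155
    y_fd2 = separation_distance(pts2)          # S160: size YFD2

    mr = y_fd2 / y_fd1                         # S170: MR = YFD2 / YFD1 (equation (14))
    a = camera.distance_from_association_info(mr)  # S180: distance a from MR
    return a, camera.generate_3d_image(a, img1)    # S190: 3D image (unless phase sensors)
```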
  • each component of the present invention can be replaced with any component that can exhibit the same function, or any component can be added to each component of the present invention.
  • a ranging camera having a modified configuration is also within the scope of the present invention.
  • a mode in which the distance measuring cameras of the first to fourth embodiments are arbitrarily combined is also within the scope of the present invention.
  • each component of the distance measuring camera may be realized by hardware, may be realized by software, or may be realized by a combination thereof.
  • In the distance measuring camera of the present invention, in the corresponding feature point detection process for detecting the plurality of feature points of one subject image that respectively correspond to the plurality of feature points of the other subject image, a search for the feature points using epipolar lines based on epipolar geometry is executed. Therefore, the processing time for calculating the distance to the subject based on the image magnification ratio between the subject images can be reduced. Therefore, the present invention has industrial applicability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Electromagnetism (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of Optical Distance (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

This distance measurement camera 1 comprises: a first imaging system IS1 for acquiring a first image including a first subject image; a second imaging system IS2 for acquiring a second image including a second subject image; a size acquisition unit 3 for detecting a plurality of feature points on the first subject image in the first image, acquiring the size of the first subject image by measuring the distance between the feature points, using an epipolar line to detect a plurality of feature points on the second subject image in the second image that correspond to the plurality of feature points on the first subject image, and acquiring the size of the second subject image by measuring the distance between the feature points; and a distance calculation unit 5 for calculating the distance to the subject 100 on the basis of the image magnification ratio between the magnification of the first subject image and the magnification of the second subject image.

Description

Ranging camera
 The present invention generally relates to a distance measuring camera for measuring the distance to a subject, and more specifically to a distance measuring camera that measures the distance to a subject based on the image magnification ratio between at least two subject images formed by at least two optical systems in which the change in magnification of the subject image according to the distance to the subject differs from each other.
 Conventionally, distance measuring cameras that measure the distance to a subject by imaging the subject have been proposed. As such a distance measuring camera, a stereo-camera-type distance measuring camera is known that includes at least two pairs of an optical system for condensing light from the subject to form a subject image and an image sensor for converting the subject image formed by the optical system into an image (see, for example, Patent Document 1).
 A stereo-camera-type distance measuring camera such as that disclosed in Patent Document 1 calculates the translational parallax (parallax in a direction perpendicular to the optical axis direction) between two subject images formed by two optical systems arranged shifted from each other in a direction perpendicular to the optical axis direction, and can calculate the distance to the subject based on the value of this translational parallax.
 In such a stereo-camera-type distance measuring camera, if the translational parallax between the subject images is small, the distance to the subject cannot be calculated accurately. Therefore, in order to make the translational parallax between the subject images sufficiently large, the two optical systems must be arranged far apart from each other in a direction perpendicular to the optical axis direction. This makes it difficult to reduce the size of the distance measuring camera.
 Further, when the subject is located at a short distance, a situation occurs in which, because of the fields of view of the obtained images, a feature point of the subject image used for calculating the translational parallax appears in one image but not in the other. To avoid this situation, the two optical systems must be arranged close to each other. However, when the two optical systems are arranged close to each other, the translational parallax between the subject images becomes small, and the accuracy of distance measurement decreases. It is therefore difficult to accurately calculate the distance to a subject located at a short distance using distance measurement based on the translational parallax between subject images.
 To address this problem, the present inventors have proposed an image-magnification-ratio-type distance measuring camera that calculates the distance to a subject based on the image magnification ratio (ratio of magnifications) between two subject images. In an image-magnification-ratio-type distance measuring camera, two optical systems in which the change in magnification of the subject image according to the distance to the subject differs from each other are used, and the distance to the subject is calculated based on the image magnification ratio between the two subject images formed by the two optical systems (see Patent Document 2).
 In such an image-magnification-ratio-type distance measuring camera, the translational parallax between the subject images is not used to calculate the distance to the subject; therefore, the distance to the subject can be calculated accurately even if the two optical systems are arranged close to each other, and the size of the distance measuring camera can be reduced. Further, since the image magnification ratio between the subject images can be obtained accurately even when the subject is located at a short distance, an image-magnification-ratio-type distance measuring camera can accurately calculate the distance to a subject located at a short distance.
 The image magnification ratio between the subject images is calculated from the ratio of the sizes of the two subject images. The size of a subject image is obtained by detecting a plurality of feature points of the subject image (for example, both ends in the height direction or the width direction of the measurement target) in the image obtained by capturing the subject image, and measuring the distance between those feature points in the image. Further, in order to obtain the image magnification ratio between the subject images, it is necessary to obtain the sizes of the same portion of the two subject images. Therefore, after detecting a plurality of feature points of one subject image, it is necessary to execute a corresponding feature point detection process for detecting the plurality of feature points of the other subject image that respectively correspond to the detected feature points of the one subject image.
 Such a corresponding feature point detection process is generally executed by searching the entire region of the image obtained by capturing the other subject image. However, searching the entire region of an image is an operation requiring much processing time, so the processing time required for the corresponding feature point detection process becomes long. As a result, there has been a problem that the processing time for calculating the distance to the subject based on the image magnification ratio between the subject images becomes long.
Patent Document 1: JP 2012-26841 A
Patent Document 2: Japanese Patent Application No. 2017-241896
 The present invention has been made in view of the above conventional problems, and an object thereof is to provide a distance measuring camera capable of reducing the processing time for calculating the distance to a subject based on the image magnification ratio between subject images, by executing a search for feature points using epipolar lines based on epipolar geometry in the corresponding feature point detection process for detecting the plurality of feature points of one subject image that respectively correspond to the plurality of feature points of the other subject image.
 Such an object is achieved by the following aspects (1) to (7) of the present invention.

 (1) A distance measuring camera comprising:
 a first imaging system having a first optical system for condensing light from a subject to form a first subject image, and a first image sensor for capturing the first subject image to acquire a first image including the first subject image;
 a second imaging system having a second optical system, arranged shifted with respect to the first optical system in a direction perpendicular to the optical axis direction of the first optical system, for condensing the light from the subject to form a second subject image, and a second image sensor for capturing the second subject image to acquire a second image including the second subject image;
 a size acquisition unit for acquiring the size of the first subject image by detecting a plurality of feature points of the first subject image in the first image and measuring the distance between the plurality of feature points of the first subject image, and further acquiring the size of the second subject image by detecting a plurality of feature points of the second subject image in the second image respectively corresponding to the plurality of feature points of the first subject image and measuring the distance between the plurality of feature points of the second subject image; and
 a distance calculation unit for calculating the distance to the subject based on the image magnification ratio between the magnification of the first subject image and the magnification of the second subject image, obtained as the ratio of the size of the first subject image acquired by the size acquisition unit to the size of the second subject image,
 wherein the size acquisition unit detects the plurality of feature points of the second subject image by searching on a plurality of epipolar lines in the second image respectively corresponding to the plurality of feature points of the first subject image.
 (2) The distance measuring camera according to (1) above, wherein the size acquisition unit derives the plurality of epipolar lines in the second image respectively corresponding to the plurality of feature points of the first subject image based on a model that takes the characteristics and arrangement of the first imaging system and the second imaging system into account.
 (3) The distance measuring camera according to (2) above, wherein the plurality of epipolar lines in the second image respectively corresponding to the plurality of feature points of the first subject image are represented by the following equation (1):

 (Equation (1): the epipolar line expression; the formula image is not reproduced in this text.)

 Here, x1 and y1 are the x and y coordinates in the first image of an arbitrary one of the plurality of feature points of the first subject image; x2 and y2 are the x and y coordinates of the feature point of the second subject image in the second image corresponding to that arbitrary one of the feature points of the first subject image; Px and Py are the x-axis and y-axis values of the translational parallax between the front principal point of the first optical system and the front principal point of the second optical system; D is the depth parallax in the optical axis direction of the first optical system or the second optical system between the first optical system and the second optical system; PS1 is the pixel size of the first image sensor; PS2 is the pixel size of the second image sensor; f1 is the focal length of the first optical system; f2 is the focal length of the second optical system; EP1 is the distance from the exit pupil of the first optical system to the image formation position of the first subject image when the subject is at infinity; EP2 is the distance from the exit pupil of the second optical system to the image formation position of the second subject image when the subject is at infinity; aFD1 is the distance from the front principal point of the first optical system to the subject when the first subject image is in best focus on the imaging surface of the first image sensor; and aFD2 is the distance from the front principal point of the second optical system to the subject when the second subject image is in best focus on the imaging surface of the second image sensor.
 (4) The distance measuring camera according to (1) above, wherein the first optical system and the second optical system are configured such that the change in the magnification of the first subject image according to the distance to the subject differs from the change in the magnification of the second subject image according to the distance from the subject.
 (5) The distance measuring camera according to (4) above, wherein the first optical system and the second optical system are configured such that the focal length of the first optical system and the focal length of the second optical system differ from each other, whereby the change in the magnification of the first subject image according to the distance to the subject differs from the change in the magnification of the second subject image according to the distance to the subject.
 (6) The distance measuring camera according to (4) or (5) above, wherein the first optical system and the second optical system are configured such that the distance from the exit pupil of the first optical system to the image formation position of the first subject image formed by the first optical system when the subject is at infinity differs from the distance from the exit pupil of the second optical system to the image formation position of the second subject image formed by the second optical system when the subject is at infinity, whereby the change in the magnification of the first subject image according to the distance to the subject differs from the change in the magnification of the second subject image according to the distance to the subject.
 (7) The distance measuring camera according to any one of (4) to (6) above, wherein a depth parallax in the optical axis direction of the first optical system or the second optical system exists between the front principal point of the first optical system and the front principal point of the second optical system, whereby the change in the magnification of the first subject image according to the distance to the subject differs from the change in the magnification of the second subject image according to the distance to the subject.
 In the distance measuring camera of the present invention, in the corresponding feature point detection process for detecting the plurality of feature points of one subject image that respectively correspond to the plurality of feature points of the other subject image, a search for the feature points using epipolar lines based on epipolar geometry is executed. Therefore, the processing time for calculating the distance to the subject based on the image magnification ratio between the subject images can be reduced.
FIG. 1 is a diagram for explaining the distance measurement principle of the distance measuring camera of the present invention.
FIG. 2 is a diagram for explaining the distance measurement principle of the distance measuring camera of the present invention.
FIG. 3 is a graph for explaining that the image magnification ratio between the magnification of the first subject image formed by the first optical system shown in FIG. 2 and the magnification of the second subject image formed by the second optical system shown in FIG. 2 changes according to the distance to the subject.
FIG. 4 is an X-Z plan view showing a model for deriving the epipolar lines used in the distance measuring camera of the present invention.
FIG. 5 is a Y-Z plan view showing a model for deriving the epipolar lines used in the distance measuring camera of the present invention.
FIG. 6 is a diagram showing an example of an epipolar line derived using the models shown in FIGS. 4 and 5.
FIG. 7 is a block diagram schematically showing a distance measuring camera according to the first embodiment of the present invention.
FIG. 8 is a block diagram schematically showing a distance measuring camera according to the second embodiment of the present invention.
FIG. 9 is a block diagram schematically showing a distance measuring camera according to the third embodiment of the present invention.
FIG. 10 is a flowchart for explaining the distance measuring method executed by the distance measuring camera of the present invention.
FIG. 11 is a flowchart showing the details of the corresponding feature point detection process executed in the distance measuring method shown in FIG. 10.
 First, the principle used in the distance measuring camera of the present invention for calculating the distance to the subject based on the image magnification ratio between the subject images will be described. In each drawing, components having the same or similar functions are denoted by the same reference numerals.
 The magnification m_OD of the subject image formed by an optical system can be expressed, from the lens formula, by the following equation (1) using the distance (subject distance) a from the front principal point (front principal plane) of the optical system to the subject, the distance b_OD from the rear principal point (rear principal plane) of the optical system to the image formation position of the subject image, and the focal length f of the optical system.
$$m_{OD} = \frac{b_{OD}}{a} = \frac{f}{a - f} \qquad (1)$$
 Further, the size Y_OD of the subject image can be expressed by the following equation (2) from the magnification m_OD of the subject image and the actual size sz of the subject.
$$Y_{OD} = m_{OD} \times sz = \frac{f}{a - f}\, sz \qquad (2)$$
 When the imaging surface of an image sensor is located at the image formation position of the subject image, that is, when the subject image is in best focus, the size Y_OD of the subject image can be expressed by the above equation (2). When the optical system has an autofocus function and imaging is always performed at best focus, the size Y_OD of the subject image can be obtained using the above equation (2).
However, when the optical system is a fixed-focus system without an autofocus function and the imaging surface of the image sensor is not at the imaging position of the subject image, that is, when defocus exists, the defocus amount must be taken into account in order to obtain the size Y_FD of the subject image formed on the imaging surface of the image sensor, namely the difference (shift amount) in the depth direction (optical axis direction) between the imaging position of the subject image and the position of the imaging surface of the image sensor.
As shown in FIG. 1, let EP be the distance from the exit pupil of the optical system to the imaging position of the subject image when the subject is at infinity, EP_OD be the distance from the exit pupil of the optical system to the imaging position of the subject image when the subject is at an arbitrary distance a, and EP_FD be the distance (focus distance) from the exit pupil of the optical system to the imaging surface of the image sensor. Further, let b_OD be the distance from the rear principal point of the optical system to the imaging position of the subject image when the subject is at the arbitrary distance a, and b_FD be the distance from the rear principal point of the optical system to the imaging surface of the image sensor. In the illustrated form, for simplicity of description, the optical system is schematically drawn as if its rear principal point were at the center position of the optical system.
The distance b_OD from the rear principal point of the optical system to the imaging position of the subject image when the subject is at the arbitrary distance a can be obtained from the lens formula as equation (3) below.
$$b_{OD} = \frac{a f}{a - f} \tag{3}$$
Therefore, the difference Δb_OD between the distance b_OD and the focal length f can be obtained from equation (4) below.
$$\Delta b_{OD} = b_{OD} - f = \frac{f^2}{a - f} \tag{4}$$
Further, the distance b_FD from the rear principal point of the optical system to the imaging surface of the image sensor can be obtained from the lens formula as equation (5) below, using the distance a_FD from the front principal point of the optical system to the subject when the subject image is in best focus on the imaging surface of the image sensor.
$$b_{FD} = \frac{a_{FD}\, f}{a_{FD} - f} \tag{5}$$
Therefore, the difference Δb_FD between the distance b_FD and the focal length f can be obtained from equation (6) below.
$$\Delta b_{FD} = b_{FD} - f = \frac{f^2}{a_{FD} - f} \tag{6}$$
As is clear from FIG. 1, the right triangle whose apex is the intersection of the optical axis with the exit pupil of the optical system and whose one side is the size Y_OD of the subject image at its imaging position when the subject is at the arbitrary distance a is similar to the right triangle with the same apex whose one side is the size Y_FD of the subject image on the imaging surface of the image sensor. From this similarity, EP_OD : EP_FD = Y_OD : Y_FD holds, and the size Y_FD of the subject image on the imaging surface of the image sensor can be obtained from equation (7) below.
$$Y_{FD} = \frac{EP_{FD}}{EP_{OD}}\,Y_{OD} = \frac{EP + \Delta b_{FD}}{EP + \Delta b_{OD}} \cdot \frac{f \cdot sz}{a - f} = \frac{EP + \dfrac{f^2}{a_{FD} - f}}{EP + \dfrac{f^2}{a - f}} \cdot \frac{f \cdot sz}{a - f} \tag{7}$$
As is clear from equation (7), the size Y_FD of the subject image on the imaging surface of the image sensor can be expressed as a function of the actual size sz of the subject, the focal length f of the optical system, the distance EP from the exit pupil of the optical system to the imaging position of the subject image when the subject is at infinity, the distance (subject distance) a from the front principal point of the optical system to the subject, and the distance (focus distance) a_FD from the front principal point of the optical system to the subject when the subject image is in best focus on the imaging surface of the image sensor.
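A short sketch of equation (7) under our naming (the patent defines only the symbols, not any code):

```python
def image_size_fixed_focus(sz, f, ep, a, a_fd):
    """Size Y_FD of the subject image on the sensor of a fixed-focus
    system, following equation (7).

    sz:   actual subject size [mm]
    f:    focal length [mm]
    ep:   exit pupil to infinity imaging position, EP [mm]
    a:    subject distance [mm]
    a_fd: best-focus subject distance (focus distance) [mm]
    """
    ep_od = ep + f**2 / (a - f)      # EP_OD = EP + Δb_OD, equation (4)
    ep_fd = ep + f**2 / (a_fd - f)   # EP_FD = EP + Δb_FD, equation (6)
    y_od = sz * f / (a - f)          # best-focus size, equation (2)
    return y_od * ep_fd / ep_od      # similar-triangle scaling, equation (7)
```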
Next, as shown in FIG. 2, assume that the same subject 100 is imaged using two imaging systems IS1 and IS2. The first imaging system IS1 has a first optical system OS1 that condenses light from the subject 100 to form a first subject image, and a first image sensor S1 for capturing the first subject image formed by the first optical system OS1. The second imaging system IS2 has a second optical system OS2 that condenses light from the subject 100 to form a second subject image, and a second image sensor S2 for capturing the second subject image formed by the second optical system OS2. The pixel size (size per pixel) of the first image sensor S1 is PS_1, and the pixel size of the second image sensor S2 is PS_2.
As is clear from FIG. 2, the optical axis of the first optical system OS1 of the first imaging system IS1 and the optical axis of the second optical system OS2 of the second imaging system IS2 are parallel but do not coincide. The second optical system OS2 is arranged apart from the first optical system OS1 by a distance P in a direction perpendicular to the optical axis direction of the first optical system OS1.
In the illustrated configuration the optical axes of the first optical system OS1 and the second optical system OS2 are parallel, but the present invention is not limited to this. For example, the first optical system OS1 and the second optical system OS2 may be arranged so that the angle of the optical axis of the first optical system OS1 (the angle parameters θ and φ in three-dimensional polar coordinates) and the angle of the optical axis of the second optical system OS2 differ from each other. For simplicity of description, however, the first optical system OS1 and the second optical system OS2 are assumed to be arranged, as shown in FIG. 2, such that their optical axes are parallel but do not coincide, separated from each other by the distance P.
The first optical system OS1 and the second optical system OS2 are fixed-focus optical systems having focal lengths f_1 and f_2, respectively. When the first imaging system IS1 is assembled, the position (lens position) of the first optical system OS1, that is, the separation between the first optical system OS1 and the first image sensor S1, is adjusted so that the first subject image of a subject 100 at an arbitrary distance (focus distance) a_FD1 is formed on the imaging surface of the first image sensor S1, that is, so that a subject 100 at the arbitrary distance a_FD1 is in best focus. Similarly, when the second imaging system IS2 is assembled, the position (lens position) of the second optical system OS2, that is, the separation between the second optical system OS2 and the second image sensor S2, is adjusted so that the second subject image of a subject 100 at an arbitrary distance (focus distance) a_FD2 is formed on the imaging surface of the second image sensor S2, that is, so that a subject 100 at the arbitrary distance a_FD2 is in best focus.
Further, the distance from the exit pupil of the first optical system OS1 to the imaging position of the first subject image when the subject 100 is at infinity is EP_1, and the distance from the exit pupil of the second optical system OS2 to the imaging position of the second subject image when the subject 100 is at infinity is EP_2.
The first optical system OS1 and the second optical system OS2 are configured and arranged so that a difference (depth parallax) D in the depth direction (optical axis direction) exists between the front principal point (front principal plane) of the first optical system OS1 and the front principal point (front principal plane) of the second optical system OS2. That is, when the distance (subject distance) from the front principal point of the first optical system OS1 to the subject 100 is a, the distance from the front principal point of the second optical system OS2 to the subject 100 is a + D.
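For the sketches that follow, the fixed values of each imaging system can be grouped into one structure. The numerical values below are illustrative assumptions only, not values taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ImagingSystem:
    """Fixed values of one imaging system (field names are ours)."""
    f: float     # focal length f [mm]
    ep: float    # exit pupil to infinity imaging position, EP [mm]
    a_fd: float  # best-focus subject distance (focus distance) [mm]
    ps: float    # pixel size PS [mm/pixel]

IS1 = ImagingSystem(f=12.0, ep=15.0, a_fd=1000.0, ps=0.003)
IS2 = ImagingSystem(f=18.0, ep=20.0, a_fd=1500.0, ps=0.003)
P_X, P_Y, D = 10.0, 0.0, 5.0  # translational parallaxes and depth parallax [mm]
```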
By using the similarity relation described with reference to FIG. 1, the magnification m_1 of the first subject image formed on the imaging surface of the first image sensor S1 by the first optical system OS1 can be expressed by equation (8) below.
$$m_1 = \frac{Y_{FD1}}{sz} = m_{OD1} \cdot \frac{EP_{FD1}}{EP_{OD1}} = m_{OD1} \cdot \frac{EP_1 + \Delta b_{FD1}}{EP_1 + \Delta b_{OD1}} \tag{8}$$
Here, EP_OD1 is the distance from the exit pupil of the first optical system OS1 to the imaging position of the first subject image when the subject 100 is at the distance a, and EP_FD1 is the distance from the exit pupil of the first optical system OS1 to the imaging surface of the first image sensor S1. The positional relationship between the distance EP_OD1 and the distance EP_FD1 is determined when the first imaging system IS1 is assembled, by adjusting the position (lens position) of the first optical system OS1 so that a subject 100 at the arbitrary distance a_FD1 is in best focus. Further, Δb_OD1 is the difference between the focal length f_1 and the distance b_OD1 from the rear principal point of the first optical system OS1 to the imaging position of the first subject image when the subject 100 is at the distance a; Δb_FD1 is the difference between the focal length f_1 and the distance b_FD1 from the rear principal point of the first optical system OS1 to the imaging surface of the first image sensor S1; and m_OD1 is the magnification of the first subject image at its imaging position when the subject 100 is at the distance a.
Since equations (1), (4) and (6) also apply to image formation by the first optical system OS1, equation (8) above can be rewritten as equation (9) below.
$$m_1 = \frac{f_1}{a - f_1} \cdot \frac{EP_1 + \dfrac{f_1^2}{a_{FD1} - f_1}}{EP_1 + \dfrac{f_1^2}{a - f_1}} = \frac{f_1\left(EP_1 + \dfrac{f_1^2}{a_{FD1} - f_1}\right)}{EP_1\,(a - f_1) + f_1^2} \tag{9}$$
Here, a_FD1 is the distance from the front principal point of the first optical system OS1 to the subject 100 when the first subject image is in best focus on the imaging surface of the first image sensor S1.
Similarly, the magnification m_2 of the second subject image formed on the imaging surface of the second image sensor S2 by the second optical system OS2 can be expressed by equation (10) below.
$$m_2 = \frac{f_2}{a + D - f_2} \cdot \frac{EP_2 + \dfrac{f_2^2}{a_{FD2} - f_2}}{EP_2 + \dfrac{f_2^2}{a + D - f_2}} = \frac{f_2\left(EP_2 + \dfrac{f_2^2}{a_{FD2} - f_2}\right)}{EP_2\,(a + D - f_2) + f_2^2} \tag{10}$$
Here, EP_OD2 is the distance from the exit pupil of the second optical system OS2 to the imaging position of the second subject image when the subject 100 is at the distance a + D, and EP_FD2 is the distance from the exit pupil of the second optical system OS2 to the imaging surface of the second image sensor S2. The positional relationship between the distance EP_OD2 and the distance EP_FD2 is determined when the second imaging system IS2 is assembled, by adjusting the position (lens position) of the second optical system OS2 so that a subject 100 at the arbitrary distance a_FD2 is in best focus. Further, Δb_OD2 is the difference between the focal length f_2 and the distance b_OD2 from the rear principal point of the second optical system OS2 to the imaging position of the second subject image when the subject 100 is at the distance a + D; Δb_FD2 is the difference between the focal length f_2 and the distance b_FD2 from the rear principal point of the second optical system OS2 to the imaging surface of the second image sensor S2; m_OD2 is the magnification of the second subject image at its imaging position when the subject 100 is at the distance a + D; and a_FD2 is the distance from the front principal point of the second optical system OS2 to the subject 100 when the second subject image is in best focus on the imaging surface of the second image sensor S2.
Therefore, the image magnification ratio MR between the magnification m_1 of the first subject image formed on the imaging surface of the first image sensor S1 by the first optical system OS1 and the magnification m_2 of the second subject image formed on the imaging surface of the second image sensor S2 by the second optical system OS2 can be expressed by equation (11) below.
$$MR = \frac{m_2}{m_1} = K \cdot \frac{EP_1\,(a - f_1) + f_1^2}{EP_2\,(a + D - f_2) + f_2^2} \tag{11}$$
Here, K is a coefficient expressed by equation (12) below, composed of the fixed values f_1, f_2, EP_1, EP_2, a_FD1 and a_FD2 determined by the configurations of the first imaging system IS1 and the second imaging system IS2.
$$K = \frac{f_2\left(EP_2 + \dfrac{f_2^2}{a_{FD2} - f_2}\right)}{f_1\left(EP_1 + \dfrac{f_1^2}{a_{FD1} - f_1}\right)} \tag{12}$$
As is clear from equation (11), the image magnification ratio MR between the magnification m_1 of the first subject image formed on the imaging surface of the first image sensor S1 by the first optical system OS1 and the magnification m_2 of the second subject image formed on the imaging surface of the second image sensor S2 by the second optical system OS2 changes according to the distance a from the subject 100 to the front principal point of the first optical system OS1.
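A sketch of the forward model of equations (11) and (12) under our naming, mapping a candidate distance a to the image magnification ratio MR:

```python
def coefficient_k(f1, f2, ep1, ep2, a_fd1, a_fd2):
    """Fixed coefficient K of equation (12)."""
    num = f2 * (ep2 + f2**2 / (a_fd2 - f2))
    den = f1 * (ep1 + f1**2 / (a_fd1 - f1))
    return num / den

def magnification_ratio(a, f1, f2, ep1, ep2, a_fd1, a_fd2, d):
    """Image magnification ratio MR = m2 / m1 as a function of the
    distance a, equation (11)."""
    k = coefficient_k(f1, f2, ep1, ep2, a_fd1, a_fd2)
    return k * (ep1 * (a - f1) + f1**2) / (ep2 * (a + d - f2) + f2**2)
```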
Solving equation (11) above for the distance a yields the general equation (13) below for the distance a to the subject 100.
$$a = \frac{MR\left(EP_2\,(D - f_2) + f_2^2\right) - K\,f_1\,(f_1 - EP_1)}{K\,EP_1 - MR\,EP_2} \tag{13}$$
In equation (13), f_1, f_2, EP_1, EP_2, D and K are fixed values determined by the configurations and arrangement of the first imaging system IS1 and the second imaging system IS2, so if the image magnification ratio MR can be obtained, the distance a from the subject 100 to the front principal point of the first optical system OS1 can be calculated.
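Equation (13) inverts the forward model in closed form. A sketch under our naming, which round-trips with the magnification_ratio function of the previous sketch up to floating-point error:

```python
def distance_from_mr(mr, f1, f2, ep1, ep2, a_fd1, a_fd2, d):
    """Distance a to the subject from the image magnification ratio MR,
    equation (13)."""
    k = (f2 * (ep2 + f2**2 / (a_fd2 - f2))) / (f1 * (ep1 + f1**2 / (a_fd1 - f1)))
    num = mr * (ep2 * (d - f2) + f2**2) - k * f1 * (f1 - ep1)
    den = k * ep1 - mr * ep2
    return num / den

# Round trip with illustrative values (f1 != f2, EP1 != EP2, D != 0):
mr = magnification_ratio(1000.0, 12.0, 18.0, 15.0, 20.0, 1000.0, 1500.0, 5.0)
print(distance_from_mr(mr, 12.0, 18.0, 15.0, 20.0, 1000.0, 1500.0, 5.0))  # ~1000.0
```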
FIG. 3 shows an example, calculated from equation (13), of the relationship between the distance a to the subject 100 and the image magnification ratio MR between the magnification m_1 of the first subject image formed on the imaging surface of the first image sensor S1 by the first optical system OS1 and the magnification m_2 of the second subject image formed on the imaging surface of the second image sensor S2 by the second optical system OS2. As is clear from FIG. 3, a one-to-one relationship holds between the value of the image magnification ratio MR and the distance a to the subject 100.
On the other hand, the image magnification ratio MR can be calculated from equation (14) below.
$$MR = \frac{m_2}{m_1} = \frac{Y_{FD2}/sz}{Y_{FD1}/sz} = \frac{Y_{FD2}}{Y_{FD1}} \tag{14}$$
Here, sz is the actual size (height or width) of the subject 100, Y_FD1 is the size (image height or image width) of the first subject image formed on the imaging surface of the first image sensor S1 by the first optical system OS1, and Y_FD2 is the size (image height or image width) of the second subject image formed on the imaging surface of the second image sensor S2 by the second optical system OS2.
The size Y_FD1 of the first subject image can be measured from the first image obtained when the first image sensor S1 captures the first subject image. Similarly, the size Y_FD2 of the second subject image can be measured from the second image obtained when the second image sensor S2 captures the second subject image.
Specifically, the size Y_FD1 of the first subject image is obtained by detecting a plurality of feature points of the first subject image contained in the first image (for example, both ends of the subject in the height direction or the width direction) and measuring the distance between the detected feature points. The size Y_FD2 of the second subject image is obtained by detecting the feature points of the second subject image in the second image that correspond to the detected feature points of the first subject image, and measuring the distance between those feature points. In the following description, the process of detecting the feature points of the second subject image in the second image that correspond to the detected feature points of the first subject image is referred to as the corresponding feature point detection process. In the distance measuring camera of the present invention, the processing time required for the corresponding feature point detection process is greatly reduced by using an epipolar line based on epipolar geometry in this process.
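A sketch of the size measurement and of equation (14) under our naming; the second point of each pair is a made-up illustration (only the first point of each pair appears in the FIG. 6 example discussed later), and the pixel size is an assumed value:

```python
import math

def image_size_from_points(p1, p2, pixel_size):
    """Subject-image size from two detected feature points, e.g. both
    ends of the subject in the height direction, converted to mm."""
    return math.dist(p1, p2) * pixel_size

# Hypothetical feature-point pairs in the first and second image:
y_fd1 = image_size_from_points((972.0, -549.0), (972.0, 549.0), 0.003)
y_fd2 = image_size_from_points((568.7, -229.5), (568.7, 229.5), 0.003)
mr = y_fd2 / y_fd1   # image magnification ratio, equation (14)
```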
FIGS. 4 and 5 show a model for deriving the epipolar line used in the distance measuring camera of the present invention. FIG. 4 is an X-Z plan view showing the arrangement of the first imaging system IS1 and the second imaging system IS2 in this model, and FIG. 5 is a Y-Z plan view showing the same arrangement.
As shown in FIGS. 4 and 5, the first imaging system IS1 and the second imaging system IS2 are arranged so that the optical axis of the first optical system OS1 of the first imaging system IS1 and the optical axis of the second optical system OS2 of the second imaging system IS2 do not coincide. A translational parallax therefore arises between the first subject image formed by the first optical system OS1 and the second subject image formed by the second optical system OS2. In the distance measuring camera of the present invention, the image magnification ratio MR, that is, the ratio between the magnification m_1 of the first subject image and the magnification m_2 of the second subject image, is used to calculate the distance a to the subject 100; the translational parallax between the first subject image and the second subject image is not used for calculating the distance a. Nevertheless, since a translational parallax does exist between the first subject image and the second subject image, the principle of the epipolar line based on epipolar geometry, as used in stereo-camera type distance measuring cameras, can also be applied to the first subject image and the second subject image obtained by the distance measuring camera of the present invention.
In general, a pinhole model is often used as the model for deriving an epipolar line; it considers only the arrangement of the first imaging system IS1 and the second imaging system IS2 (the parallax parameters P_x, P_y and D) and ignores their characteristics (the parameters f_1, f_2, EP_1, EP_2, a_FD1, a_FD2, PS_1 and PS_2 described above). However, real imaging systems IS1 and IS2 involve many imaging-related factors of the optical systems OS1 and OS2 and the image sensors S1 and S2. A discrepancy therefore arises between reality and a pinhole model that ignores these factors, and the epipolar line cannot be derived accurately. In the distance measuring camera of the present invention, on the other hand, the epipolar line is derived using a model that takes into account both the characteristics and the arrangement of the first imaging system IS1 and the second imaging system IS2 shown in FIGS. 4 and 5, which makes a more accurate derivation possible. The characteristics and arrangement of the first imaging system IS1 and the second imaging system IS2 in the model of FIGS. 4 and 5 are, as described with reference to FIG. 2, as shown in the following table.
| Parameter | First imaging system IS1 | Second imaging system IS2 |
| Focal length | f_1 | f_2 |
| Distance from exit pupil to imaging position for a subject at infinity | EP_1 | EP_2 |
| Best-focus subject distance (focus distance) | a_FD1 | a_FD2 |
| Pixel size | PS_1 | PS_2 |
| Position of front principal point (world coordinates) | (0, 0, 0) | (P_x, P_y, -D) |
In the model shown in FIGS. 4 and 5, the coordinates of the front principal point of the first optical system OS1 of the first imaging system IS1 are the origin (0, 0, 0), and the coordinates of the front principal point of the second optical system OS2 of the second imaging system IS2 are (P_x, P_y, -D). Therefore, the separation P between the optical axis of the first optical system OS1 and the optical axis of the second optical system OS2 in the direction perpendicular to the optical axis direction is given by P = (P_x^2 + P_y^2)^{1/2}. The separation P_x in the x-axis direction between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 is called the translational parallax in the x-axis direction, and the separation P_y in the y-axis direction is called the translational parallax in the y-axis direction. As described above, the separation D in the z-axis direction between the front principal points of the two optical systems is called the depth parallax.
In this model, a feature point S of the subject 100 located at coordinates (X, Y, a) is imaged using the first imaging system IS1 and the second imaging system IS2. Let (x_1, y_1) be the coordinates of the feature point S in the first image acquired by the first imaging system IS1, and (x_2, y_2) be the coordinates of the feature point S in the second image acquired by the second imaging system IS2.
In the following description, coordinates whose origin is an arbitrary reference point are called world coordinates; coordinates whose origin is the front principal point of the first optical system OS1 of the first imaging system IS1 are called the camera coordinates of the first imaging system IS1; coordinates whose origin is the front principal point of the second optical system OS2 of the second imaging system IS2 are called the camera coordinates of the second imaging system IS2; coordinates in the first image (for example, x_1, y_1) are called the image coordinates of the first image; and coordinates in the second image (for example, x_2, y_2) are called the image coordinates of the second image. In the model shown in FIGS. 4 and 5, the origin of the world coordinates is the front principal point of the first optical system OS1 of the first imaging system IS1, and therefore coincides with the origin of the camera coordinates of the first imaging system IS1.
World coordinates are converted into camera coordinates by the external matrix of an imaging system, and camera coordinates are converted into image coordinates by its internal matrix. Therefore, the world coordinates (X, Y, a) of the feature point S are converted into the image coordinates (x_1, y_1) of the first image by the external and internal matrices of the first imaging system IS1, and into the image coordinates (x_2, y_2) of the second image by the external and internal matrices of the second imaging system IS2.
First, consider the image coordinates (x_1, y_1) of the first image acquired by the first imaging system IS1. When the feature point S is imaged by the first imaging system IS1, its world coordinates (X, Y, a) are converted by the external matrix of the first imaging system IS1 into the camera coordinates (x'_1, y'_1, a') of the first imaging system IS1. However, as described above, the world coordinates of the model shown in FIGS. 4 and 5 take the front principal point of the first optical system OS1 of the first imaging system IS1 as their origin (reference point), so no rotation or positional shift exists between the world coordinates and the camera coordinates of the first imaging system IS1. This state can be expressed by equation (15) below, in which the 4x4 matrix is the external matrix of the first imaging system IS1. Since there is no rotation or positional shift, this external matrix is the identity matrix.
$$\begin{pmatrix} x'_1 \\ y'_1 \\ a' \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} X \\ Y \\ a \\ 1 \end{pmatrix} \tag{15}$$
Next, the camera coordinates (x'_1, y'_1, a') of the feature point S are converted by the internal matrix of the first imaging system IS1 into the image coordinates (x_1, y_1) of the first image. This internal matrix can be derived in the same way as the relationship, described above with reference to FIG. 2, between the size sz of the subject 100 and the size Y_FD1 of the first subject image expressed by equation (7), which yields equation (16) below. Note that while the size sz of the subject 100 and the size Y_FD1 of the first subject image in equation (7) were expressed in millimeters, equation (16) expresses the image coordinate x_1 of the first image and is therefore in pixel units.
$$x_1 = \frac{K_1\,X}{EP_1\,(a - f_1) + f_1^2}, \qquad K_1 = \frac{f_1}{PS_1}\left(EP_1 + \frac{f_1^2}{a_{FD1} - f_1}\right) \tag{16}$$
Similarly, the image coordinate y_1 of the first image is given by equation (17) below.
$$y_1 = \frac{L_1\,Y}{EP_1\,(a - f_1) + f_1^2}, \qquad L_1 = \frac{f_1}{PS_1}\left(EP_1 + \frac{f_1^2}{a_{FD1} - f_1}\right) \tag{17}$$
Here, K_1 and L_1 in equations (16) and (17) are determined by the fixed values f_1, EP_1, a_FD1 and PS_1 determined by the configuration of the first imaging system IS1. K_1 and L_1 are therefore fixed values uniquely determined by the configuration of the first imaging system IS1.
From equations (16) and (17), equation (18) below expressing the image coordinates (x_1, y_1) of the feature point S in the first image can be obtained. The 3x4 matrix in equation (18) is the internal matrix of the first imaging system IS1.
$$\begin{pmatrix} w\,x_1 \\ w\,y_1 \\ w \end{pmatrix} = \begin{pmatrix} K_1 & 0 & 0 & 0 \\ 0 & L_1 & 0 & 0 \\ 0 & 0 & EP_1 & f_1^2 - EP_1 f_1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} X \\ Y \\ a \\ 1 \end{pmatrix}, \qquad w = EP_1\,(a - f_1) + f_1^2 \tag{18}$$
From equation (18), the coordinates (x_1, y_1) of the feature point S of the subject 100 in the first image acquired by the first imaging system IS1 can be specified. Hereinafter, the feature point S of the subject 100 observed at the image coordinates (x_1, y_1) of the first image is referred to as a feature point of the first subject image.
In equation (18), the 4x4 external matrix of the first imaging system IS1 reflects the arrangement of the first imaging system IS1 (its arrangement with respect to the reference point of the world coordinates), and the 3x4 internal matrix reflects the characteristics of the first imaging system IS1 (the fixed values f_1, EP_1, a_FD1 and PS_1).
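A sketch of the projection of equations (15) through (18) for the first imaging system; the explicit matrix entries follow our reconstruction of these equations above, not a verbatim listing from the patent:

```python
import numpy as np

def project_is1(X, Y, a, f1, ep1, a_fd1, ps1):
    """World point (X, Y, a) -> image coordinates (x1, y1) of the first
    image, via the external matrix of (15) and the internal matrix of (18)."""
    k1 = f1 * (ep1 + f1**2 / (a_fd1 - f1)) / ps1   # K1 (= L1 here), eq. (16)
    external = np.eye(4)                            # identity, equation (15)
    internal = np.array([[k1, 0.0, 0.0, 0.0],
                         [0.0, k1, 0.0, 0.0],
                         [0.0, 0.0, ep1, f1**2 - ep1 * f1]])
    u, v, w = internal @ external @ np.array([X, Y, a, 1.0])
    return u / w, v / w   # x1 and y1, equations (16) and (17)
```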
Next, consider the image coordinates (x_2, y_2) of the second image acquired by the second imaging system IS2. The world coordinates (X, Y, a) of the feature point S are converted by the external matrix of the second imaging system IS2 into the camera coordinates (x'_2, y'_2, a') of the second imaging system IS2. In this case, rotation and positional shift of the second imaging system IS2 can exist with respect to the front principal point of the first optical system OS1 of the first imaging system IS1, which is the origin of the world coordinates.
The rotation matrix R_x for rotation about the x axis, the rotation matrix R_y for rotation about the y axis, and the rotation matrix R_z for rotation about the z axis are expressed by equation (19) below.
$$R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_x & -\sin\theta_x \\ 0 & \sin\theta_x & \cos\theta_x \end{pmatrix},\quad R_y = \begin{pmatrix} \cos\theta_y & 0 & \sin\theta_y \\ 0 & 1 & 0 \\ -\sin\theta_y & 0 & \cos\theta_y \end{pmatrix},\quad R_z = \begin{pmatrix} \cos\theta_z & -\sin\theta_z & 0 \\ \sin\theta_z & \cos\theta_z & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{19}$$
Since the x, y and z axes of the second imaging system IS2 may all rotate with respect to the first imaging system IS1, the rotation matrix R of the second imaging system IS2 is the product of the rotation matrices R_x, R_y and R_z, and is expressed by equation (20) below. In equation (20) the rotation matrix R is expressed as R_x · R_y · R_z, but the order in which R_x, R_y and R_z are multiplied to obtain R is not limited to this; for example, R may be expressed as R_z · R_y · R_x or R_y · R_x · R_z.
$$R = R_x \cdot R_y \cdot R_z \tag{20}$$
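A sketch of equations (19) and (20) under our naming; as the text notes, changing the multiplication order gives a different but equally valid composition:

```python
import numpy as np

def rotation_matrix(theta_x, theta_y, theta_z):
    """R = Rx . Ry . Rz, equations (19) and (20)."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    r_x = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    r_y = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    r_z = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return r_x @ r_y @ r_z
```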
Further, as described above, the second imaging system IS2 has the translational parallaxes P_x and P_y in the translational directions and the depth parallax D in the depth direction with respect to the first imaging system IS1. These parallaxes can be expressed by the translation vector t of equation (21) below.
$$t = \begin{pmatrix} -P_x \\ -P_y \\ D \end{pmatrix} \tag{21}$$
The external matrix of the second imaging system IS2 is expressed by the combination of the rotation matrix R of equation (20) and the translation vector t of equation (21), and the camera coordinates (x'_2, y'_2, a') of the feature point S can be expressed by equation (22) below. The 4x4 matrix in equation (22) is the external matrix of the second imaging system IS2.
$$\begin{pmatrix} x'_2 \\ y'_2 \\ a' \\ 1 \end{pmatrix} = \begin{pmatrix} R & t \\ \mathbf{0}^{\mathrm{T}} & 1 \end{pmatrix} \begin{pmatrix} X \\ Y \\ a \\ 1 \end{pmatrix} \tag{22}$$
For simplicity, the following description assumes, as shown in FIGS. 4 and 5, that there is no rotational element of the second imaging system IS2 with respect to the first imaging system IS1, that is, θ_x = 0, θ_y = 0 and θ_z = 0 in equation (22). This assumption is made only to simplify the description; an embodiment in which the second imaging system IS2 is rotated with respect to the first imaging system IS1 (that is, θ_x, θ_y and θ_z are not zero) is also within the scope of the present invention.
With θ_x = 0, θ_y = 0 and θ_z = 0 as described above, equation (22) simplifies to equation (23) below.
$$\begin{pmatrix} x'_2 \\ y'_2 \\ a' \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & -P_x \\ 0 & 1 & 0 & -P_y \\ 0 & 0 & 1 & D \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} X \\ Y \\ a \\ 1 \end{pmatrix} \tag{23}$$
Next, the camera coordinates (x'_2, y'_2, a') of the feature point S are converted by the internal matrix of the second imaging system IS2 into the image coordinates (x_2, y_2) of the second image. For the same reason as equations (16) and (17), the image coordinates (x_2, y_2) of the feature point S in the second image are expressed by equations (24) and (25) below.
$$x_2 = \frac{K_2\,(X - P_x)}{EP_2\,(a + D - f_2) + f_2^2}, \qquad K_2 = \frac{f_2}{PS_2}\left(EP_2 + \frac{f_2^2}{a_{FD2} - f_2}\right) \tag{24}$$
$$y_2 = \frac{L_2\,(Y - P_y)}{EP_2\,(a + D - f_2) + f_2^2}, \qquad L_2 = \frac{f_2}{PS_2}\left(EP_2 + \frac{f_2^2}{a_{FD2} - f_2}\right) \tag{25}$$
Here, K_2 and L_2 in equations (24) and (25) are determined by the fixed values f_2, EP_2, a_FD2 and PS_2 determined by the configuration of the second imaging system IS2. K_2 and L_2 are therefore fixed values uniquely determined by the configuration of the second imaging system IS2.
From equations (24) and (25), the image coordinates (x_2, y_2) of the feature point S in the second image can be expressed by equation (26) below. The 3x4 matrix in equation (26) is the internal matrix of the second imaging system IS2.
$$\begin{pmatrix} w'\,x_2 \\ w'\,y_2 \\ w' \end{pmatrix} = \begin{pmatrix} K_2 & 0 & 0 & 0 \\ 0 & L_2 & 0 & 0 \\ 0 & 0 & EP_2 & f_2^2 - EP_2 f_2 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & -P_x \\ 0 & 1 & 0 & -P_y \\ 0 & 0 & 1 & D \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} X \\ Y \\ a \\ 1 \end{pmatrix}, \qquad w' = EP_2\,(a + D - f_2) + f_2^2 \tag{26}$$
From equation (26), the coordinates (x_2, y_2) of the feature point S of the subject 100 in the second image acquired by the second imaging system IS2 can be specified. Hereinafter, the feature point S of the subject 100 observed at the image coordinates (x_2, y_2) of the second image is referred to as a feature point of the second subject image.
In equation (26), the 4x4 external matrix of the second imaging system IS2 reflects the arrangement of the second imaging system IS2 (its arrangement with respect to the reference point of the world coordinates), and the 3x4 internal matrix reflects the characteristics of the second imaging system IS2 (the fixed values f_2, EP_2, a_FD2 and PS_2).
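The corresponding sketch for the second imaging system, combining the external matrix of equation (23) with the internal matrix of equation (26); the matrix entries again follow our reconstruction:

```python
import numpy as np

def project_is2(X, Y, a, f2, ep2, a_fd2, ps2, p_x, p_y, d):
    """World point (X, Y, a) -> image coordinates (x2, y2) of the second
    image, assuming no rotation (theta_x = theta_y = theta_z = 0)."""
    k2 = f2 * (ep2 + f2**2 / (a_fd2 - f2)) / ps2   # K2 (= L2 here), eq. (24)
    external = np.array([[1.0, 0.0, 0.0, -p_x],    # equation (23)
                         [0.0, 1.0, 0.0, -p_y],
                         [0.0, 0.0, 1.0, d],
                         [0.0, 0.0, 0.0, 1.0]])
    internal = np.array([[k2, 0.0, 0.0, 0.0],
                         [0.0, k2, 0.0, 0.0],
                         [0.0, 0.0, ep2, f2**2 - ep2 * f2]])
    u, v, w = internal @ external @ np.array([X, Y, a, 1.0])
    return u / w, v / w   # x2 and y2, equations (24) and (25)
```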
Further, since X in equation (18) and X in equation (26) are identical, equation (27) below for the distance a is obtained from equations (18) and (26).
$$a = \frac{K_1\,x_2\left(EP_2\,(D - f_2) + f_2^2\right) + K_1 K_2\,P_x + K_2\,x_1\,f_1\,(EP_1 - f_1)}{K_2\,x_1\,EP_1 - K_1\,x_2\,EP_2} \tag{27}$$
Similarly, since Y in equation (18) and Y in equation (26) are identical, equation (28) below for the distance a is obtained from equations (18) and (26).
$$a = \frac{L_1\,y_2\left(EP_2\,(D - f_2) + f_2^2\right) + L_1 L_2\,P_y + L_2\,y_1\,f_1\,(EP_1 - f_1)}{L_2\,y_1\,EP_1 - L_1\,y_2\,EP_2} \tag{28}$$
Since equations (27) and (28) are equivalent, collecting the terms in the coordinates x_2 and y_2 of the feature point of the second subject image in the second image yields the general equation (29) below for the epipolar line.
$$(\alpha\,y_1 + \beta\,P_y)\,x_2 - (\alpha\,x_1 + \beta\,P_x)\,y_2 + \gamma\,(P_x\,y_1 - P_y\,x_1) = 0 \tag{29}$$

where, using K_1 = L_1 and K_2 = L_2 (the pixel size of each image sensor is the same in the x and y directions), α = EP_1(EP_2(D - f_2) + f_2^2) + EP_2 f_1 (EP_1 - f_1), β = K_1 EP_2, and γ = K_2 EP_1.
α, β and γ in the general equation (29) are determined by the fixed values f_1, f_2, EP_1, EP_2, PS_1, PS_2, a_FD1, a_FD2, P_x, P_y and D determined by the configurations and arrangement of the first imaging system IS1 and the second imaging system IS2. α, β and γ are therefore fixed values uniquely determined by the configurations and arrangement of the first imaging system IS1 and the second imaging system IS2.
The linear equation (29) in the coordinates x_2 and y_2 of the feature point of the second subject image in the second image represents the epipolar line in the second image corresponding to the feature point of the first subject image located at the coordinates (x_1, y_1) in the first image. That is, when an arbitrary feature point of the first subject image is detected at the coordinates (x_1, y_1) in the first image, the feature point of the second subject image corresponding to it always lies on the epipolar line in the second image expressed by equation (29).
FIG. 6 shows an example of an epipolar line calculated as described above. When the subject 100 is imaged with the characteristics and arrangement of the first imaging system IS1 and the second imaging system IS2 shown in FIG. 6, the first image and the second image shown in FIG. 6 are acquired. In the example of FIG. 6, the upper vertex of the triangle contained in the first image and the second image is taken as an arbitrary feature point S of the subject 100. In each image, the coordinates whose origin (coordinates (0, 0)) is the center point of the image are the image coordinates of that image.
When the feature point of the first subject image (the upper vertex of the triangle in the first image) is detected at the position (x_1, y_1) = (972.0, -549.0) in the first image, the feature point of the second subject image corresponding to it necessarily exists on the epipolar line in the second image expressed by equation (29). In the illustrated example, the corresponding feature point of the second subject image exists at the coordinates (x_2, y_2) = (568.7, -229.5) in the second image.
In this way, by deriving the epipolar line in the second image using equation (29), the feature point of the second subject image corresponding to an arbitrary feature point of the first subject image can be detected by searching along the epipolar line instead of the entire area of the second image. By performing the search for feature points using the epipolar line based on epipolar geometry in the corresponding feature point detection process, which detects the feature points of the second subject image in the second image corresponding to the feature points of the first subject image, the processing time required for the corresponding feature point detection process can be greatly reduced. For this reason, the distance measuring camera of the present invention achieves a significant reduction of the processing time for calculating the distance a to the subject 100 based on the image magnification ratio MR between the subject images.
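A sketch of this search, based on our reconstructed grouping of equation (29) above (the patent's own α, β and γ may be grouped differently); candidates are generated only along the line:

```python
def epipolar_line(x1, y1, f1, f2, ep1, ep2, k1, k2, p_x, p_y, d):
    """Slope and intercept of the epipolar line y2 = s * x2 + c in the
    second image for a feature point (x1, y1) of the first image.
    k1 and k2 are the fixed values K1 and K2 of equations (16) and (24)."""
    alpha = ep1 * (ep2 * (d - f2) + f2**2) + ep2 * f1 * (ep1 - f1)
    den = alpha * x1 + k1 * ep2 * p_x
    slope = (alpha * y1 + k1 * ep2 * p_y) / den
    intercept = k2 * ep1 * (p_x * y1 - p_y * x1) / den
    return slope, intercept

def candidates_on_line(x1, y1, params, x2_range):
    """Candidate pixel positions to compare against the template patch."""
    slope, intercept = epipolar_line(x1, y1, *params)
    return [(x2, slope * x2 + intercept) for x2 in x2_range]
```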
Furthermore, compared with the pinhole model often used to derive epipolar lines as described above, the model shown in FIGS. 4 and 5 is characterized in that both the characteristics and the arrangement of the first imaging system IS1 and the second imaging system IS2 are taken into account. In particular, the characteristics of the first imaging system IS1 (the fixed values f_1, EP_1, a_FD1 and PS_1) are reflected in the 3x4 internal matrix of the first imaging system IS1 in equation (18), and the characteristics of the second imaging system IS2 (the fixed values f_2, EP_2, a_FD2 and PS_2) are reflected in the 3x4 internal matrix of the second imaging system IS2 in equation (26). Therefore, the feature points of the second subject image in the second image can be detected more accurately than when the conventional pinhole model is used.
In the corresponding feature point detection process, the distance measuring camera of the present invention uses the epipolar line based on epipolar geometry described above to detect the feature points of the second subject image in the second image corresponding to the feature points of the first subject image detected for measuring the size Y_FD1 of the first subject image. The distance between the detected feature points of the second subject image is measured to obtain the size Y_FD2 of the second subject image. The obtained size Y_FD1 of the first subject image and size Y_FD2 of the second subject image are used to obtain the image magnification ratio MR between the magnification m_1 of the first subject image and the magnification m_2 of the second subject image, and the distance a to the subject 100 is calculated based on the image magnification ratio MR.
In this way, the distance measuring camera of the present invention actually measures the size Y_FD1 of the first subject image and the size Y_FD2 of the second subject image from the first image containing the first subject image and the second image containing the second subject image, obtained by imaging the subject 100 with the first imaging system IS1 and the second imaging system IS2, and can obtain the image magnification ratio MR between the magnification m_1 of the first subject image and the magnification m_2 of the second subject image from equation (14), MR = Y_FD2 / Y_FD1.
As is clear from equation (11), when the focal length f_1 of the first optical system OS1 is equal to the focal length f_2 of the second optical system OS2 (f_1 = f_2), when the distance EP_1 from the exit pupil of the first optical system OS1 to the imaging position of the first subject image for a subject 100 at infinity is equal to the distance EP_2 from the exit pupil of the second optical system OS2 to the imaging position of the second subject image for a subject 100 at infinity (EP_1 = EP_2), and when no depth parallax D in the depth direction (optical axis direction) exists between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 (D = 0), the image magnification ratio MR does not hold as a function of the distance a and becomes a constant. In this case, the change of the magnification m_1 of the first subject image according to the distance a to the subject 100 becomes identical to the change of the magnification m_2 of the second subject image according to the distance a to the subject 100, and it becomes impossible to calculate the distance a from the first optical system OS1 to the subject based on the image magnification ratio MR.
As a special case, even when f_1 ≠ f_2, EP_1 ≠ EP_2 and D = 0, the image magnification ratio MR does not hold as a function of the distance a and becomes a constant if f_1 = EP_1 and f_2 = EP_2. In such a special case as well, it is impossible to calculate the distance a from the first optical system OS1 to the subject based on the image magnification ratio MR.
Therefore, in the distance measuring camera of the present invention, the first optical system OS1 and the second optical system OS2 are configured and arranged so that at least one of the following three conditions is satisfied, whereby the change of the magnification m_1 of the first subject image according to the distance a to the subject 100 differs from the change of the magnification m_2 of the second subject image according to the distance a to the subject 100.
(First condition) The focal length f_1 of the first optical system OS1 and the focal length f_2 of the second optical system OS2 differ from each other (f_1 ≠ f_2).
(Second condition) The distance EP_1 from the exit pupil of the first optical system OS1 to the imaging position of the first subject image when the subject 100 is at infinity and the distance EP_2 from the exit pupil of the second optical system OS2 to the imaging position of the second subject image when the subject 100 is at infinity differ from each other (EP_1 ≠ EP_2).
(Third condition) A difference D in the depth direction (optical axis direction) exists between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 (D ≠ 0).
In addition, even when at least one of the first to third conditions is satisfied, in the special case described above (f_1 ≠ f_2, EP_1 ≠ EP_2, D = 0, f_1 = EP_1 and f_2 = EP_2) the image magnification ratio MR does not hold as a function of the distance a, and the distance a from the first optical system OS1 to the subject 100 cannot be calculated based on the image magnification ratio MR. Therefore, in order to calculate the distance a from the first optical system OS1 to the subject 100 based on the image magnification ratio MR, the distance measuring camera of the present invention is further configured to satisfy a fourth condition: the image magnification ratio MR holds as a function of the distance a.
Consequently, by calculating the image magnification ratio MR from the size Y_FD1 of the first subject image and the size Y_FD2 of the second subject image measured from the first image and the second image acquired with the distance measuring camera of the present invention, the distance a from the front principal point of the first optical system OS1 to the subject 100 can be calculated.
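A sketch of the first to fourth conditions as a configuration check (function name is ours):

```python
def mr_is_function_of_distance(f1, f2, ep1, ep2, d):
    """True when the image magnification ratio MR actually varies with the
    distance a: at least one of the three conditions holds, and the
    degenerate case f1 = EP1 and f2 = EP2 with D = 0 is excluded."""
    any_condition = (f1 != f2) or (ep1 != ep2) or (d != 0.0)
    degenerate = (d == 0.0) and (f1 == ep1) and (f2 == ep2)
    return any_condition and not degenerate
```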
Hereinafter, the distance measuring camera of the present invention, which calculates the distance a to the subject 100 based on the image magnification ratio MR between the magnification m1 of the first subject image and the magnification m2 of the second subject image, will be described in detail based on the preferred embodiments shown in the accompanying drawings.
<First embodiment>
First, a first embodiment of the distance measuring camera of the present invention will be described with reference to FIG. 7. FIG. 7 is a block diagram schematically showing the distance measuring camera according to the first embodiment of the present invention.
The distance measuring camera 1 shown in FIG. 7 includes: a control unit 2 that controls the distance measuring camera 1; a first imaging system IS1 having a first optical system OS1 for condensing light from the subject 100 to form a first subject image and a first image sensor S1 for capturing the first subject image and acquiring a first image containing it; a second imaging system IS2 having a second optical system OS2, arranged shifted with respect to the first optical system OS1 by a distance P in the direction perpendicular to the optical axis of the first optical system OS1, for condensing light from the subject 100 to form a second subject image, and a second image sensor S2 for capturing the second subject image and acquiring a second image containing it; a size acquisition unit 3 for acquiring the size YFD1 of the first subject image and the size YFD2 of the second subject image; an association information storage unit 4 storing association information that associates the image magnification ratio MR between the magnification m1 of the first subject image and the magnification m2 of the second subject image with the distance a to the subject 100; a distance calculation unit 5 for calculating the distance a to the subject 100 based on the image magnification ratio MR between the magnification m1 of the first subject image and the magnification m2 of the second subject image, obtained as the ratio of the sizes YFD1 and YFD2 acquired by the size acquisition unit 3; a three-dimensional image generation unit 6 for generating a three-dimensional image of the subject 100 based on the first image acquired by the first image sensor S1 or the second image acquired by the second image sensor S2 and the distance a to the subject 100 calculated by the distance calculation unit 5; a display unit 7, such as a liquid crystal panel, for displaying arbitrary information; an operation unit 8 for inputting operations by the user; a communication unit 9 for communicating with external devices; and a data bus 10 for exchanging data between the components of the distance measuring camera 1.
The distance measuring camera 1 of the present embodiment is characterized in that, of the three conditions described above for calculating the distance a to the subject 100 based on the image magnification ratio MR, the first optical system OS1 and the second optical system OS2 are configured so as to satisfy the first condition, namely that the focal length f1 of the first optical system OS1 and the focal length f2 of the second optical system OS2 differ from each other (f1 ≠ f2). On the other hand, in the present embodiment, the first optical system OS1 and the second optical system OS2 are not configured or arranged so as to satisfy the other two of the three conditions (EP1 ≠ EP2 and D ≠ 0). Furthermore, the distance measuring camera 1 of the present embodiment is configured so that the fourth condition, that the image magnification ratio MR holds as a function of the distance a, is satisfied.
Therefore, the above general formula (13) for calculating the distance a to the subject 100 using the image magnification ratio MR is simplified by the conditions EP1 = EP2 = EP and D = 0, and can be expressed by the following formula (30).
[Equation (30): Figure JPOXMLDOC01-appb-M000033]
Here, the coefficient K is expressed by the following formula (31).
[Equation (31): Figure JPOXMLDOC01-appb-M000034]
The distance measuring camera 1 of the present embodiment captures the subject 100 with the first imaging system IS1 and the second imaging system IS2, calculates the image magnification ratio MR between the magnification m1 of the first subject image and the magnification m2 of the second subject image, and then calculates the distance a to the subject 100 using the above formula (30).
In the distance measuring camera 1 of the present embodiment, the size acquisition unit 3 detects a plurality of feature points of the first subject image in the first image acquired by the first image sensor S1 (for example, both ends in the height direction or in the width direction) and acquires the size YFD1 of the first subject image by measuring the distances between those feature points. Furthermore, the size acquisition unit 3 detects the feature points of the second subject image in the second image that respectively correspond to the detected feature points of the first subject image, and acquires the size YFD2 of the second subject image by measuring the distances between them.
In the distance measuring camera 1 of the present embodiment, epipolar lines based on epipolar geometry are used in the corresponding feature point detection process for detecting the feature points of the second subject image in the second image that respectively correspond to the feature points of the first subject image. The above general formula (29) representing the epipolar line is simplified by the conditions EP1 = EP2 = EP and D = 0, and can be expressed by the following formula (32).
[Equation (32): Figure JPOXMLDOC01-appb-M000035]
In the distance measuring camera 1 of the present embodiment, the feature points of the second subject image in the second image that respectively correspond to the feature points of the first subject image can be detected by searching along the epipolar line in the second image expressed by the above formula (32). The feature points of the second subject image can thus be detected without searching the entire area of the second image, and the processing time required for the corresponding feature point detection process can be significantly reduced. As a result, the processing time for calculating the distance a to the subject 100 based on the image magnification ratio MR between the subject images can be significantly reduced.
Each component of the distance measuring camera 1 will now be described in detail. The control unit 2 exchanges various data and instructions with each component via the data bus 10 and controls the distance measuring camera 1. The control unit 2 includes a processor for executing arithmetic processing and a memory storing the data, programs, modules, and the like necessary for controlling the distance measuring camera 1; the processor of the control unit 2 controls the distance measuring camera 1 by using the data, programs, modules, and the like stored in the memory. The processor of the control unit 2 can also provide desired functions by using the components of the distance measuring camera 1. For example, by using the distance calculation unit 5, the processor of the control unit 2 can execute the process of calculating the distance a to the subject 100 based on the image magnification ratio MR between the magnification m1 of the first subject image and the magnification m2 of the second subject image.
The processor of the control unit 2 is, for example, an arithmetic unit that executes arithmetic processing such as signal manipulation based on computer-readable instructions, such as one or more microprocessors, microcomputers, microcontrollers, digital signal processors (DSPs), central processing units (CPUs), memory control units (MCUs), graphics processing units (GPUs), state machines, logic circuits, application-specific integrated circuits (ASICs), or combinations thereof. In particular, the processor of the control unit 2 is configured to fetch the computer-readable instructions (for example, data, programs, and modules) stored in the memory of the control unit 2 and to execute arithmetic operations, signal manipulation, and control.
The memory of the control unit 2 is a removable or non-removable computer-readable medium including a volatile storage medium (for example, RAM, SRAM, or DRAM), a nonvolatile storage medium (for example, ROM, EPROM, EEPROM, flash memory, a hard disk, an optical disc, a CD-ROM, a digital versatile disc (DVD), a magnetic cassette, magnetic tape, or a magnetic disk), or a combination thereof.
The memory of the control unit 2 also stores, in advance, the fixed values f1, f2, EP1, EP2, aFD1, aFD2, PS1, PS2, Px, Py, and D, which are determined by the configuration and arrangement of the first imaging system IS1 and the second imaging system IS2, as well as the fixed values L1, L2, K, K1, K2, α, β, and γ, which are derived from these fixed values and are used in the above general formula (13) (or the simplified formula (30)) for calculating the distance a to the subject 100 and in the above general formula (29) (or the simplified formula (32)) for the epipolar line in the second image.
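As an illustration of how these calibration constants might be held in software, the following minimal Python sketch groups them in one structure; the class name, field names, and layout are assumptions made for the example, and the meanings of the fields follow the definitions given earlier in this disclosure.

from dataclasses import dataclass

@dataclass(frozen=True)
class CameraConstants:
    """Fixed values determined by the configuration and arrangement of
    the imaging systems IS1 and IS2 (meanings as in the earlier derivation)."""
    f1: float     # focal length of OS1
    f2: float     # focal length of OS2
    ep1: float    # distance EP1 (exit pupil of OS1 to infinity image position)
    ep2: float    # distance EP2 (exit pupil of OS2 to infinity image position)
    a_fd1: float  # fixed value aFD1 of imaging system IS1
    a_fd2: float  # fixed value aFD2 of imaging system IS2
    ps1: float    # fixed value PS1 of image sensor S1
    ps2: float    # fixed value PS2 of image sensor S2
    p_x: float    # shift component Px between the optical axes
    p_y: float    # shift component Py between the optical axes
    d: float      # depth-direction difference D between the front principal points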
The first imaging system IS1 has the first optical system OS1 and the first image sensor S1. The first optical system OS1 has the function of condensing light from the subject 100 and forming the first subject image on the imaging surface of the first image sensor S1. The first image sensor S1 has the function of capturing the first subject image formed on its imaging surface and acquiring a first image containing the first subject image. The second imaging system IS2 has the second optical system OS2 and the second image sensor S2. The second optical system OS2 has the function of condensing light from the subject 100 and forming the second subject image on the imaging surface of the second image sensor S2. The second image sensor S2 has the function of capturing the second subject image formed on its imaging surface and acquiring a second image containing the second subject image.
In the illustrated embodiment, the first image sensor S1 and the first optical system OS1 constituting the first imaging system IS1 are provided in one housing, and the second image sensor S2 and the second optical system OS2 constituting the second imaging system IS2 are provided in a separate housing, but the present invention is not limited to this. An embodiment in which the first optical system OS1, the second optical system OS2, the first image sensor S1, and the second image sensor S2 are all provided in the same housing is also within the scope of the present invention.
The first optical system OS1 and the second optical system OS2 are each composed of one or more lenses and optical elements such as an aperture stop. As described above, the first optical system OS1 and the second optical system OS2 are configured so that the focal length f1 of the first optical system OS1 and the focal length f2 of the second optical system OS2 differ from each other (f1 ≠ f2). As a result, the change in the magnification m1 of the first subject image formed by the first optical system OS1 according to the distance a to the subject 100 differs from the change in the magnification m2 of the second subject image formed by the second optical system OS2 according to the distance to the subject 100. The image magnification ratio MR, the ratio between the magnification m1 of the first subject image and the magnification m2 of the second subject image obtained by this configuration of the first optical system OS1 and the second optical system OS2, is used to calculate the distance a to the subject 100.
As illustrated, the optical axis of the first optical system OS1 and the optical axis of the second optical system OS2 are parallel but do not coincide. Furthermore, the second optical system OS2 is arranged shifted by the distance P in the direction perpendicular to the optical axis of the first optical system OS1.
Each of the first image sensor S1 and the second image sensor S2 may be a color image sensor, such as a CMOS image sensor or a CCD image sensor, having color filters such as RGB primary color filters or CMY complementary color filters arranged in an arbitrary pattern such as a Bayer array, or may be a monochrome image sensor without such color filters. In this case, the first image obtained by the first image sensor S1 and the second image obtained by the second image sensor S2 are color or monochrome luminance information of the subject 100.
Alternatively, each of the first image sensor S1 and the second image sensor S2 may be a phase sensor that acquires phase information of the subject 100. In that case, the first image obtained by the first image sensor S1 and the second image obtained by the second image sensor S2 are phase information of the subject 100.
The first optical system OS1 forms the first subject image on the imaging surface of the first image sensor S1, and the first image sensor S1 acquires the first image containing the first subject image. The acquired first image is sent to the control unit 2 and the size acquisition unit 3 via the data bus 10. Similarly, the second optical system OS2 forms the second subject image on the imaging surface of the second image sensor S2, and the second image sensor S2 acquires the second image containing the second subject image. The acquired second image is sent to the control unit 2 and the size acquisition unit 3 via the data bus 10.
The first image and the second image sent to the size acquisition unit 3 are used to acquire the size YFD1 of the first subject image and the size YFD2 of the second subject image. The first image and the second image sent to the control unit 2 are used for image display by the display unit 7 and for communication of image signals by the communication unit 9.
The size acquisition unit 3 has the function of acquiring the size YFD1 of the first subject image and the size YFD2 of the second subject image from the first image containing the first subject image and the second image containing the second subject image. Specifically, the size acquisition unit 3 detects a plurality of feature points of the first subject image in the first image and acquires the size YFD1 of the first subject image by measuring the distances between the detected feature points. Furthermore, the size acquisition unit 3 detects the feature points of the second subject image in the second image that respectively correspond to the detected feature points of the first subject image, and acquires the size YFD2 of the second subject image by measuring the distances between the detected feature points of the second subject image.
More specifically, the size acquisition unit 3 receives the first image from the first image sensor S1 and the second image from the second image sensor S2. The size acquisition unit 3 then detects an arbitrary plurality of feature points of the first subject image in the first image. The method by which the size acquisition unit 3 detects these feature points is not particularly limited, and the size acquisition unit 3 can detect the feature points of the first subject image in the first image using various methods known in this field. The coordinates (x1, y1) of each feature point detected by the size acquisition unit 3 are temporarily stored in the memory of the control unit 2.
In one example, the size acquisition unit 3 applies filter processing such as Canny edge detection to the first image and extracts the edge portions of the first subject image in the first image. The size acquisition unit 3 then detects an arbitrary subset of the extracted edge portions of the first subject image as the feature points of the first subject image, and acquires the size YFD1 of the first subject image by measuring the separation distances between those feature points. In this case, the size acquisition unit 3 may detect the edge portions corresponding to both ends of the first subject image in the height direction as the feature points and take the separation distance between them as the size (image height) YFD1 of the first subject image, or it may detect the edge portions corresponding to both ends of the first subject image in the width direction as the feature points and take the separation distance between them as the size (image width) YFD1 of the first subject image.
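Purely as an illustration of this example, the edge-based size measurement might be sketched as follows in Python with OpenCV; the Canny thresholds and the choice of the topmost and bottommost edge pixels as the two feature points are assumptions made for the sketch, not values specified in the disclosure.

import cv2
import numpy as np

def subject_image_height(image_gray, canny_lo=100, canny_hi=200):
    """Measure the subject image size (image height) in pixels.

    Extracts edges with a Canny filter, takes the topmost and bottommost
    edge pixels as the two feature points (both ends in the height
    direction), and returns the separation distance between them.
    image_gray is an 8-bit grayscale image.
    """
    edges = cv2.Canny(image_gray, canny_lo, canny_hi)
    ys, xs = np.nonzero(edges)                      # coordinates of all edge pixels
    if ys.size == 0:
        raise ValueError("no edges found in the image")
    top = np.array([xs[ys.argmin()], ys.min()])     # feature point at the top end
    bottom = np.array([xs[ys.argmax()], ys.max()])  # feature point at the bottom end
    y_fd = float(np.linalg.norm(bottom - top))      # separation distance = size YFD
    return y_fd, top, bottom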
After acquiring the size YFD1 of the first subject image, the size acquisition unit 3 executes the corresponding feature point detection process for detecting the feature points of the second subject image in the second image that respectively correspond to the detected feature points of the first subject image.
Specifically, the size acquisition unit 3 first refers to the coordinates (x1, y1) of the feature points of the first subject image stored in the memory of the control unit 2 and selects one of the detected feature points of the first subject image. The size acquisition unit 3 then cuts out an area of predetermined size centered on the selected feature point in the first image (for example, a 5 × 5 pixel area or a 7 × 7 pixel area centered on the selected feature point) to obtain a search block for the selected feature point. This search block is used to search for the feature point of the second subject image in the second image that corresponds to the selected feature point of the first subject image. The obtained search block is temporarily stored in the memory of the control unit 2.
The size acquisition unit 3 then uses the fixed values stored in the memory of the control unit 2 to derive the epipolar line corresponding to the selected feature point of the first subject image, based on the above formula (32) (or the general formula (29)). By searching along the derived epipolar line, the size acquisition unit 3 detects the feature point of the second subject image in the second image that corresponds to the selected feature point of the first subject image.
Specifically, the size acquisition unit 3 executes a convolution operation between the search block for the selected feature point of the first subject image stored in the memory of the control unit 2 and an epipolar line peripheral area, centered on a pixel on the epipolar line in the second image and having the same size as the search block, thereby calculating a correlation value between the search block and the epipolar line peripheral area. This correlation value calculation is executed along the derived epipolar line in the second image. The size acquisition unit 3 detects the center pixel of the epipolar line peripheral area with the highest correlation value (that is, a pixel on the epipolar line) as the feature point of the second subject image in the second image corresponding to the selected feature point of the first subject image. The calculated coordinates (x2, y2) of the feature point of the second subject image are temporarily stored in the memory of the control unit 2.
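For illustration only, the correlation search along the epipolar line might be sketched as follows; the disclosure specifies only a convolution-based correlation, so representing the epipolar line by a slope and intercept and using normalized cross-correlation as the correlation value are assumptions made for the example.

import numpy as np

def match_along_epipolar_line(block, image2, slope, intercept):
    """Find the pixel on the epipolar line y = slope * x + intercept in
    image2 whose surrounding area correlates best with the search block.

    block:  (k, k) search block cut out around the selected feature point
    image2: second image as a 2-D float array
    Returns the (x2, y2) coordinates of the best-matching center pixel.
    """
    k = block.shape[0]
    half = k // 2
    b = (block - block.mean()) / (block.std() + 1e-12)  # normalized search block
    best, best_xy = -np.inf, None
    for x in range(half, image2.shape[1] - half):
        y = int(round(slope * x + intercept))           # pixel on the epipolar line
        if y < half or y >= image2.shape[0] - half:
            continue
        region = image2[y - half:y + half + 1, x - half:x + half + 1]
        r = (region - region.mean()) / (region.std() + 1e-12)
        corr = float((b * r).mean())                    # normalized cross-correlation
        if corr > best:
            best, best_xy = corr, (x, y)
    return best_xy, best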
When executing the convolution operation between the search block and the epipolar line peripheral area, pixel interpolation may be applied to the search block or to the second image. Any method known in this field for accurately obtaining the correlation value between two such areas may be used in the corresponding feature point detection process.
This process is repeated, changing the selected feature point of the first subject image, until the feature points of the second subject image in the second image corresponding to all of the detected feature points of the first subject image have been detected. That is, the size acquisition unit 3 derives the epipolar lines respectively corresponding to the detected feature points of the first subject image based on the above formula (32) (or the general formula (29)), and detects the corresponding feature points of the second subject image in the second image by searching along each of those epipolar lines as described above. When the feature points of the second subject image corresponding to all of the detected feature points of the first subject image have been detected, the corresponding feature point detection process by the size acquisition unit 3 ends.
After executing the corresponding feature point detection process, the size acquisition unit 3 acquires the size YFD2 of the second subject image by measuring the separation distances between the detected feature points of the second subject image from their coordinates (x2, y2) temporarily stored in the memory of the control unit 2.
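Tying the two sketches above together, a hedged driver loop for this process might look as follows; epipolar_line_for is a hypothetical helper standing in for the evaluation of formula (32) from the stored fixed values (its exact form depends on the derivation earlier in the disclosure), and match_along_epipolar_line is the sketch given above. Taking the first and last matched points as the two ends of the subject image is likewise an assumption for the example.

def acquire_second_image_size(image1, image2, feature_points1, constants, k=7):
    """Detect the corresponding feature points along per-point epipolar
    lines and measure the second subject image size YFD2 (sketch)."""
    half = k // 2
    matched = []
    for (x1, y1) in feature_points1:
        # Search block of k x k pixels centered on the selected feature point.
        block = image1[y1 - half:y1 + half + 1, x1 - half:x1 + half + 1]
        slope, intercept = epipolar_line_for((x1, y1), constants)  # hypothetical helper
        (x2, y2), _ = match_along_epipolar_line(block, image2, slope, intercept)
        matched.append((x2, y2))
    (xa, ya), (xb, yb) = matched[0], matched[-1]
    return ((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5  # separation distance YFD2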
As described above, the epipolar line expressed by the above formula (32) (or the general formula (29)) is derived not from the pinhole model generally used in the prior art, which does not take the characteristics of the first imaging system IS1 and the second imaging system IS2 into account, but from a model that takes the characteristics and arrangement of the first imaging system IS1 and the second imaging system IS2 into account, as shown in FIGS. 4 and 5.
Therefore, the size acquisition unit 3 can detect the feature points of the second subject image in the second image more accurately than when the epipolar lines in the second image are derived using the conventional pinhole model. As a result, the distance a to the subject 100 can be measured more accurately.
The association information storage unit 4 is an arbitrary nonvolatile recording medium (for example, a hard disk or flash memory) for storing association information that associates the image magnification ratio MR (m2/m1) between the magnification m1 of the first subject image and the magnification m2 of the second subject image with the distance (subject distance) a from the front principal point of the first optical system OS1 to the subject 100. The association information stored in the association information storage unit 4 is information for calculating the distance a to the subject 100 from the image magnification ratio MR (m2/m1).
Typically, the association information stored in the association information storage unit 4 is the above formula (30) (or the general formula (13)) for calculating the distance a to the subject 100 based on the image magnification ratio MR. Alternatively, the association information may be a lookup table that uniquely associates the image magnification ratio MR with the distance a to the subject 100. By referring to such association information stored in the association information storage unit 4, the distance a to the subject 100 can be calculated based on the image magnification ratio MR. When the association information is the above-mentioned formula for calculating the distance a to the subject 100, the fixed values stored in the memory of the control unit 2 are also referred to, in addition to the association information, in calculating the distance a.
The distance calculation unit 5 has the function of calculating the distance a to the subject 100 based on the image magnification ratio MR between the magnification m1 of the first subject image and the magnification m2 of the second subject image, obtained as the ratio between the size YFD1 of the first subject image and the size YFD2 of the second subject image acquired by the size acquisition unit 3. Specifically, the distance calculation unit 5 calculates the image magnification ratio MR from the sizes YFD1 and YFD2 acquired by the size acquisition unit 3, using the above formula (14), MR = YFD2/YFD1. The distance calculation unit 5 then refers to the association information stored in the association information storage unit 4 (and, when the association information is the above-mentioned formula for calculating the distance a to the subject 100, also to the fixed values stored in the memory of the control unit 2) and calculates (identifies) the distance a to the subject 100 based on the image magnification ratio MR.
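As a minimal illustration of the lookup-table variant of this association information, the following sketch computes MR = YFD2/YFD1 according to formula (14) and interpolates the subject distance from a precomputed (MR, a) table; the table values and the use of linear interpolation are assumptions made for the example.

import numpy as np

def distance_from_magnification_ratio(y_fd1, y_fd2, mr_table, a_table):
    """Calculate the subject distance a from the measured image sizes.

    y_fd1, y_fd2: measured sizes of the first and second subject images
    mr_table:     image magnification ratios MR, sorted in ascending order
    a_table:      subject distances a associated with each MR entry
    """
    mr = y_fd2 / y_fd1  # formula (14): MR = YFD2 / YFD1
    # Linear interpolation in the lookup table that uniquely associates MR with a.
    return float(np.interp(mr, mr_table, a_table))

# Hypothetical usage with an assumed calibration table (distances, e.g., in mm):
mr_table = np.array([0.80, 0.85, 0.90, 0.95, 1.00])
a_table = np.array([5000.0, 3000.0, 2000.0, 1500.0, 1200.0])
a = distance_from_magnification_ratio(120.0, 102.0, mr_table, a_table)  # MR = 0.85 -> a = 3000.0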
The three-dimensional image generation unit 6 has the function of generating a three-dimensional image of the subject 100 based on the distance a to the subject 100 calculated by the distance calculation unit 5 and the color or monochrome luminance information of the subject 100 (the first image or the second image) acquired by the first imaging system IS1 or the second imaging system IS2. The "three-dimensional image of the subject 100" referred to here means data in which the calculated distance a to the subject 100 is associated with each pixel of an ordinary two-dimensional image representing the color or monochrome luminance information of the subject 100. When the first image sensor S1 of the first imaging system IS1 and the second image sensor S2 of the second imaging system IS2 are phase sensors that acquire phase information of the subject 100, the three-dimensional image generation unit 6 is omitted.
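For illustration, such a three-dimensional image could be represented simply as a two-dimensional luminance image paired with a per-pixel distance map, as in the hedged sketch below; the dictionary layout is an assumption made for the example, not a data format specified in the disclosure.

import numpy as np

def make_three_dimensional_image(luminance, distance_a):
    """Associate a distance value with each pixel of a 2-D luminance image.

    luminance:  (H, W) or (H, W, 3) luminance information of the subject
    distance_a: scalar distance a, or an (H, W) map of per-pixel distances
    Returns a dict pairing the 2-D image with its per-pixel distance map.
    """
    h, w = luminance.shape[:2]
    depth = np.broadcast_to(np.asarray(distance_a, dtype=float), (h, w)).copy()
    return {"image": luminance, "distance": depth}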
The display unit 7 is a panel-type display unit such as a liquid crystal display. In response to signals from the processor of the control unit 2, the display unit 7 displays, in the form of text or images, the color or monochrome luminance information or the phase information of the subject 100 (the first image or the second image) acquired by the first imaging system IS1 or the second imaging system IS2, the distance a to the subject 100 calculated by the distance calculation unit 5, the three-dimensional image of the subject 100 generated by the three-dimensional image generation unit 6, information for operating the distance measuring camera 1, and the like.
The operation unit 8 is used by the user of the distance measuring camera 1 to execute operations. The operation unit 8 is not particularly limited as long as the user of the distance measuring camera 1 can execute operations with it; for example, a mouse, a keyboard, a numeric keypad, buttons, dials, levers, or a touch panel can be used as the operation unit 8. The operation unit 8 transmits signals corresponding to the operations by the user of the distance measuring camera 1 to the processor of the control unit 2.
The communication unit 9 has the function of inputting data to the distance measuring camera 1 and outputting data from the distance measuring camera 1 to external devices. The communication unit 9 may be configured to be connectable to a network such as the Internet. In that case, the distance measuring camera 1 can use the communication unit 9 to communicate with external devices such as externally provided web servers or data servers.
As described above, in the distance measuring camera 1 of the present embodiment, the first optical system OS1 and the second optical system OS2 are configured so that the focal length f1 of the first optical system OS1 and the focal length f2 of the second optical system OS2 differ from each other (f1 ≠ f2), whereby the change in the magnification m1 of the first subject image with respect to the distance a to the subject 100 and the change in the magnification m2 of the second subject image with respect to the distance a to the subject 100 differ from each other. Therefore, the distance measuring camera 1 of the present invention can uniquely calculate the distance a to the subject 100 based on the image magnification ratio MR (m2/m1) between the magnification m1 of the first subject image and the magnification m2 of the second subject image.
Furthermore, in the distance measuring camera 1 of the present embodiment, epipolar lines based on epipolar geometry are used in the corresponding feature point detection process executed by the size acquisition unit 3. The processing time required for the corresponding feature point detection process, and hence the processing time required to calculate the distance a to the subject 100, can therefore be significantly reduced.
Moreover, the epipolar line expressed by the above formula (32) (or the general formula (29)) is derived using a model that takes into account both the characteristics and the arrangement of the first imaging system IS1 and the second imaging system IS2 shown in FIGS. 4 and 5, rather than the pinhole model generally used in the prior art, which does not take those characteristics into account. Therefore, the feature points of the second subject image in the second image can be detected more accurately than when the epipolar lines in the second image are derived using the conventional pinhole model, and the accuracy of the measurement of the distance a to the subject 100 by the distance measuring camera 1 can be improved.
<Second embodiment>
Next, the distance measuring camera 1 according to a second embodiment of the present invention will be described in detail with reference to FIG. 8. FIG. 8 is a block diagram schematically showing the distance measuring camera according to the second embodiment of the present invention.
Hereinafter, the distance measuring camera 1 of the second embodiment will be described focusing on the differences from the distance measuring camera 1 of the first embodiment, and description of the same matters will be omitted. The distance measuring camera 1 of the present embodiment is the same as the distance measuring camera 1 of the first embodiment, except that the configurations of the first optical system OS1 and the second optical system OS2 are changed.
The distance measuring camera 1 of the present embodiment is characterized in that, of the three conditions described above for calculating the distance a to the subject 100 based on the image magnification ratio MR, the first optical system OS1 and the second optical system OS2 are configured so as to satisfy the second condition, namely that the distance EP1 from the exit pupil of the first optical system OS1 to the image formation position of the first subject image when the subject 100 is at infinity and the distance EP2 from the exit pupil of the second optical system OS2 to the image formation position of the second subject image when the subject 100 is at infinity differ from each other (EP1 ≠ EP2). On the other hand, in the present embodiment, the first optical system OS1 and the second optical system OS2 are not configured or arranged so as to satisfy the other two of the three conditions (f1 ≠ f2 and D ≠ 0). Furthermore, the distance measuring camera 1 of the present embodiment is configured so that the fourth condition, that the image magnification ratio MR holds as a function of the distance a, is satisfied.
The above general formula (13) for calculating the distance a to the subject 100 based on the image magnification ratio MR is simplified by the conditions f1 = f2 = f and D = 0, and can be expressed by the following formula (33).
[Equation (33): Figure JPOXMLDOC01-appb-M000036]
Here, the coefficient K is expressed by the following formula (34).
[Equation (34): Figure JPOXMLDOC01-appb-M000037]
Likewise, the above general formula (29) representing the epipolar line is simplified by the conditions f1 = f2 = f and D = 0, and can be expressed by the following formula (35).
[Equation (35): Figure JPOXMLDOC01-appb-M000038]
As described above, in the distance measuring camera 1 of the present embodiment, the first optical system OS1 and the second optical system OS2 are configured so that the distance EP1 from the exit pupil of the first optical system OS1 to the image formation position of the first subject image when the subject 100 is at infinity and the distance EP2 from the exit pupil of the second optical system OS2 to the image formation position of the second subject image when the subject 100 is at infinity differ from each other (EP1 ≠ EP2), whereby the change in the magnification m1 of the first subject image with respect to the distance a to the subject 100 and the change in the magnification m2 of the second subject image with respect to the distance a to the subject 100 differ from each other. Therefore, the distance measuring camera 1 of the present embodiment can uniquely calculate the distance a to the subject 100 based on the image magnification ratio MR (m2/m1) between the magnification m1 of the first subject image and the magnification m2 of the second subject image.
Furthermore, in the distance measuring camera 1 of the present embodiment, the feature points of the second subject image in the second image that respectively correspond to the feature points of the first subject image can be detected by searching along the epipolar line in the second image expressed by the above formula (35). The feature points of the second subject image can thus be detected without searching the entire area of the second image, and the processing time required for the corresponding feature point detection process, and hence the processing time for calculating the distance a to the subject 100 based on the image magnification ratio MR between the subject images, can be significantly reduced. In this way, the present embodiment provides the same effects as the first embodiment described above.
<Third embodiment>
Next, the distance measuring camera 1 according to a third embodiment of the present invention will be described in detail with reference to FIG. 9. FIG. 9 is a block diagram schematically showing the distance measuring camera according to the third embodiment of the present invention.
Hereinafter, the distance measuring camera 1 of the third embodiment will be described focusing on the differences from the distance measuring camera 1 of the first embodiment, and description of the same matters will be omitted. The distance measuring camera 1 of the present embodiment is the same as the distance measuring camera 1 of the first embodiment, except that the configurations of the first optical system OS1 and the second optical system OS2 are changed.
The distance measuring camera 1 of the present embodiment is characterized in that, of the three conditions described above for calculating the distance a to the subject 100 based on the image magnification ratio MR, the first optical system OS1 and the second optical system OS2 are configured and arranged so as to satisfy the third condition, namely that a difference D in the depth direction (optical axis direction) exists between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 (D ≠ 0). On the other hand, in the present embodiment, the first optical system OS1 and the second optical system OS2 are not configured so as to satisfy the other two of the three conditions (f1 ≠ f2 and EP1 ≠ EP2). Furthermore, the distance measuring camera 1 of the present embodiment is configured so that the fourth condition, that the image magnification ratio MR holds as a function of the distance a, is satisfied.
The above general formula (13) for calculating the distance a to the subject 100 based on the image magnification ratio MR is simplified by the conditions f1 = f2 = f and EP1 = EP2 = EP, and can be expressed by the following formula (36).
[Equation (36): Figure JPOXMLDOC01-appb-M000039]
Here, the coefficient K is expressed by the following formula (37).
[Equation (37): Figure JPOXMLDOC01-appb-M000040]
Likewise, the above general formula (29) representing the epipolar line is simplified by the conditions f1 = f2 = f and EP1 = EP2 = EP, and can be expressed by the following formula (38).
[Equation (38): Figure JPOXMLDOC01-appb-M000041]
As described above, in the distance measuring camera 1 of the present embodiment, the first optical system OS1 and the second optical system OS2 are configured and arranged so that a difference D in the depth direction (optical axis direction) exists between the front principal point of the first optical system OS1 and the front principal point of the second optical system OS2 (D ≠ 0), whereby the change in the magnification m1 of the first subject image with respect to the distance a to the subject 100 and the change in the magnification m2 of the second subject image with respect to the distance a to the subject 100 differ from each other. Therefore, the distance measuring camera 1 of the present embodiment can uniquely calculate the distance a to the subject 100 based on the image magnification ratio MR (m2/m1) between the magnification m1 of the first subject image and the magnification m2 of the second subject image.
Furthermore, in the distance measuring camera 1 of the present embodiment, the feature points of the second subject image in the second image that respectively correspond to the feature points of the first subject image can be detected by searching along the epipolar line in the second image expressed by the above formula (38). The feature points of the second subject image can thus be detected without searching the entire area of the second image, and the processing time required for the corresponding feature point detection process, and hence the processing time for calculating the distance a to the subject 100 based on the image magnification ratio MR between the subject images, can be significantly reduced. In this way, the present embodiment provides the same effects as the first embodiment described above.
As described in detail with reference to the embodiments above, the distance measuring camera 1 of the present invention can calculate the distance a from the front principal point of the first optical system OS1 to the subject 100 by calculating the image magnification ratio MR from the size YFD1 of the first subject image and the size YFD2 of the second subject image actually measured from the first image acquired using the first imaging system IS1 and the second image acquired using the second imaging system IS2.
In addition, epipolar lines based on epipolar geometry are used in the corresponding feature point detection process for measuring the size YFD2 of the second subject image. Therefore, the feature points of the second subject image can be detected without searching the entire area of the second image, and the processing time required for the corresponding feature point detection process, and hence the processing time for calculating the distance a to the subject 100 based on the image magnification ratio MR between the subject images, can be significantly reduced.
In each of the above embodiments, two optical systems, the first optical system OS1 and the second optical system OS2, are used, but the number of optical systems used is not limited to this. For example, an embodiment further including an additional optical system in addition to the first optical system OS1 and the second optical system OS2 is also within the scope of the present invention. In that case, the additional optical system is configured and arranged so that the change in the magnification of the subject image formed by the additional optical system with respect to the distance a to the subject 100 differs from both the change in the magnification m1 of the first subject image with respect to the distance to the subject and the change in the magnification m2 of the second subject image with respect to the distance to the subject.
In the first to third embodiments described above, the first optical system OS1 and the second optical system OS2 are configured and arranged so as to satisfy one of the three conditions required for calculating the distance a to the subject 100 based on the image magnification ratio MR; however, the present invention is not limited to this, as long as the first optical system OS1 and the second optical system OS2 are configured and arranged so that at least one of the three conditions is satisfied. For example, an aspect in which the first optical system OS1 and the second optical system OS2 are configured and arranged so that all of the three conditions, or any combination of them, are satisfied is also within the scope of the present invention.
<Distance measuring method>
Next, the distance measuring method executed by the distance measuring camera 1 of the present invention will be described with reference to FIGS. 10 and 11. FIG. 10 is a flowchart for explaining the distance measuring method executed by the distance measuring camera of the present invention. FIG. 11 is a flowchart showing the details of the corresponding feature point detection process executed in the distance measuring method shown in FIG. 10.
The distance measuring method described in detail below can be executed using the distance measuring cameras 1 according to the first to third embodiments of the present invention described above, or any device having functions equivalent to those of the distance measuring camera 1; here it is described as being executed using the distance measuring camera 1 according to the first embodiment.
The distance measuring method S100 shown in FIG. 10 starts when the user of the distance measuring camera 1 executes an operation for measuring the distance a to the subject 100 using the operation unit 8. In step S110, the first image sensor S1 of the first imaging system IS1 captures the first subject image formed by the first optical system OS1, and a first image containing the first subject image is acquired. The first image is sent to the control unit 2 and the size acquisition unit 3 via the data bus 10. Similarly, in step S120, the second image sensor S2 of the second imaging system IS2 captures the second subject image formed by the second optical system OS2, and a second image containing the second subject image is acquired. The second image is sent to the control unit 2 and the size acquisition unit 3 via the data bus 10. The acquisition of the first image in step S110 and the acquisition of the second image in step S120 may be executed simultaneously or separately.
After the acquisition of the first image in step S110 and of the second image in step S120, the distance measuring method S100 proceeds to step S130. In step S130, the size acquisition unit 3 detects an arbitrary plurality of feature points of the first subject image in the first image. The feature points detected by the size acquisition unit 3 in step S130 are, for example, both ends of the first subject image in the height direction or both ends of the first subject image in the width direction. The coordinates (x1, y1) of each of the detected feature points of the first subject image are temporarily stored in the memory of the control unit 2.
In step S140, the size acquisition unit 3 refers to the coordinates (x1, y1) of the feature points of the first subject image temporarily stored in the memory of the control unit 2 and acquires the size YFD1 of the first subject image by measuring the distance between the detected feature points. The size YFD1 of the first subject image acquired in step S140 is temporarily stored in the memory of the control unit 2.
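Purely as an illustration (the function name and coordinates below are hypothetical, not part of the patent text), the size acquisition of steps S130 to S140 amounts to a pixel-distance computation between detected feature points, sketched here in Python:

import numpy as np

def subject_image_size(p_a, p_b) -> float:
    """Sketch of steps S130-S140: the size of a subject image taken as
    the Euclidean pixel distance between two detected feature points,
    e.g. both ends of the subject image in the height direction."""
    return float(np.linalg.norm(np.asarray(p_a, float) - np.asarray(p_b, float)))

# Illustrative only: feature points (x1, y1) detected in the first image.
y_fd1 = subject_image_size((320.0, 80.0), (325.0, 410.0))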
Thereafter, in step S150, the size acquisition unit 3 executes the corresponding feature point detection process for detecting the feature points of the second subject image in the second image that respectively correspond to the feature points of the first subject image detected in step S130. FIG. 11 is a flowchart showing the details of the corresponding feature point detection process executed in step S150.
In step S151, the size acquisition unit 3 refers to the coordinates (x1, y1) of the feature points of the first subject image stored in the memory of the control unit 2 and selects one of the detected feature points of the first subject image. Next, in step S152, the size acquisition unit 3 cuts out a region of predetermined size centered on the selected feature point of the first subject image in the first image (for example, a 5 × 5 pixel region or a 7 × 7 pixel region centered on the feature point) to obtain a search block for the selected feature point. The obtained search block is temporarily stored in the memory of the control unit 2.
Next, in step S153, the size acquisition unit 3 uses the fixed values stored in the memory of the control unit 2 to derive the epipolar line in the second image corresponding to the feature point of the first subject image selected in step S151, based on the general formula (29) described above (or the simplified epipolar line expression of each embodiment). Thereafter, in step S154, the size acquisition unit 3 performs a convolution operation between the search block for the selected feature point, stored in the memory of the control unit 2, and an epipolar line peripheral region, that is, a region of the same size as the search block centered on a pixel on the derived epipolar line in the second image, and calculates a correlation value between the search block and the epipolar line peripheral region. The calculated correlation values are temporarily stored in the memory of the control unit 2. This calculation of correlation values, also referred to as block matching, is performed along the derived epipolar line in the second image.
When the calculation of the correlation values along the epipolar line in the second image is finished, the process of step S150 proceeds to step S155. In step S155, the size acquisition unit 3 detects the center pixel of the epipolar line peripheral region having the highest correlation value (that is, a pixel on the epipolar line) as the feature point of the second subject image in the second image corresponding to the selected feature point of the first subject image. The coordinates (x2, y2) of the detected feature point of the second subject image are temporarily stored in the memory of the control unit 2.
Thereafter, in step S156, it is determined whether all of the feature points of the first subject image detected in step S130 have been selected in step S151. If not all of them have been selected (step S156 = No), the process of step S150 returns to step S151, where one of the not-yet-selected feature points of the first subject image is newly selected, changing the selected feature point. The processing of steps S151 to S155 is repeated, changing the selected feature point of the first subject image each time, until the feature points of the second subject image in the second image corresponding to all of the detected feature points of the first subject image have been detected.
If all of the feature points of the first subject image detected in step S130 have been selected in step S151 (step S156 = Yes), the process of step S150 ends, and the distance measuring method S100 proceeds to step S160.
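The loop of steps S151 to S156 can be pictured with the following Python sketch, written under stated assumptions: grayscale images held as NumPy arrays, normalized cross-correlation as one possible correlation measure, feature points lying far enough from the image border for a full block, and a hypothetical line_of callable standing in for formula (29), whose actual coefficients would be computed from the fixed values stored in the control unit 2.

import numpy as np

def detect_corresponding_points(img1, img2, feature_points, line_of, half=2):
    """Sketch of steps S151-S156: for each feature point (x1, y1) of the
    first subject image, cut out a search block (S152), walk along the
    epipolar line in the second image returned by line_of (S153), score
    each same-size region by normalized cross-correlation (S154), and
    keep the best-scoring center pixel as the corresponding feature
    point of the second subject image (S155)."""
    h2, w2 = img2.shape
    matches = []
    for (x1, y1) in feature_points:                          # S151 (S156 loops)
        block = img1[y1 - half:y1 + half + 1,
                     x1 - half:x1 + half + 1].astype(float)  # S152: e.g. 5x5 block
        block -= block.mean()
        best_corr, best_xy = -np.inf, None
        slope, intercept = line_of(x1, y1)                   # S153: epipolar line
        for x2 in range(half, w2 - half):
            y2 = int(round(slope * x2 + intercept))
            if not (half <= y2 < h2 - half):
                continue                                     # line leaves the image
            region = img2[y2 - half:y2 + half + 1,
                          x2 - half:x2 + half + 1].astype(float)
            region -= region.mean()
            denom = np.linalg.norm(block) * np.linalg.norm(region)
            corr = (block * region).sum() / denom if denom else -np.inf  # S154
            if corr > best_corr:                             # S155: best match wins
                best_corr, best_xy = corr, (x2, y2)
        matches.append(best_xy)                              # None if no valid candidate
    return matches

Because the search runs along a one-dimensional line instead of over the whole second image, the number of correlation evaluations per feature point is proportional to the image width rather than to the image area, which is the processing-time advantage noted at the end of this description. For two identical, purely horizontally displaced imaging systems, line_of would degenerate to lambda x1, y1: (0.0, float(y1)), the horizontal search line of classical rectified stereo; the present camera instead derives the line from formula (29).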
Returning to FIG. 10, in step S160 the size acquisition unit 3 acquires the size YFD2 of the second subject image by measuring the distance between the detected feature points of the second subject image. The size YFD2 of the second subject image acquired in step S160 is temporarily stored in the memory of the control unit 2.
When the size YFD1 of the first subject image and the size YFD2 of the second subject image have been acquired by the size acquisition unit 3, the distance measuring method S100 proceeds to step S170. In step S170, the distance calculation unit 5 calculates the image magnification ratio MR between the magnification m1 of the first subject image and the magnification m2 of the second subject image from the sizes YFD1 and YFD2 temporarily stored in the memory of the control unit 2, based on the above equation (14), MR = YFD2 / YFD1. Next, in step S180, the distance calculation unit 5 refers to the association information stored in the association information storage unit 4 and calculates the distance a to the subject 100 based on the calculated image magnification ratio MR. If the association information is the above-described formula for calculating the distance a to the subject 100, the distance calculation unit 5 also refers to the fixed values stored in the memory of the control unit 2, in addition to the association information, to calculate the distance a to the subject 100.
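Steps S170 and S180 reduce to a ratio followed by a lookup. The sketch below assumes, purely for illustration, that the association information is stored as a monotonically sampled (MR, a) table; if the association information is instead the closed-form expression mentioned above, the interpolation would be replaced by evaluating that expression with the fixed values.

import numpy as np

def distance_from_sizes(y_fd1, y_fd2, mr_samples, a_samples):
    """S170: image magnification ratio MR = YFD2 / YFD1 (equation (14)).
    S180: map MR to the distance a to the subject using association
    information, modeled here as a sampled table with linear
    interpolation (mr_samples must be increasing for np.interp)."""
    mr = y_fd2 / y_fd1
    return float(np.interp(mr, mr_samples, a_samples))

# Illustrative calibration table only; real values depend on the optics.
mr_samples = np.array([0.80, 0.90, 1.00, 1.10])
a_samples = np.array([2000.0, 1500.0, 1000.0, 500.0])   # distances in mm
a = distance_from_sizes(331.0, 305.0, mr_samples, a_samples)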
When the distance calculation unit 5 has calculated the distance a to the subject 100 in step S180, the distance measuring method S100 proceeds to step S190. In step S190, the three-dimensional image generation unit 6 generates a three-dimensional image of the subject 100 based on the distance a to the subject 100 calculated by the distance calculation unit 5 and the color or monochrome luminance information of the subject 100 (the first image or the second image) acquired by the first imaging system IS1 or the second imaging system IS2. If the first image sensor S1 of the first imaging system IS1 and the second image sensor S2 of the second imaging system IS2 are phase sensors that acquire phase information of the subject 100, step S190 is omitted.
Thereafter, the color or monochrome luminance information or phase information of the subject 100 acquired in the steps up to this point (the first image and/or the second image), the distance a to the subject 100, and/or the three-dimensional image of the subject 100 are displayed on the display unit 7 or transmitted to an external device by the communication unit 9, and the distance measuring method S100 ends.
Although the distance measuring camera of the present invention has been described above based on the illustrated embodiments, the present invention is not limited thereto. Each component of the present invention can be replaced with any component capable of performing the same function, and any component can be added to each component of the present invention.
Those skilled in the field and art to which the present invention belongs will be able to modify the described configuration of the distance measuring camera of the present invention without significantly departing from the principles, ideas, and scope of the present invention, and distance measuring cameras having modified configurations are also within the scope of the present invention. For example, an aspect arbitrarily combining the distance measuring cameras of the first to fourth embodiments is also within the scope of the present invention.
The numbers and types of the components of the distance measuring camera shown in FIGS. 7 to 9 are merely illustrative examples, and the present invention is not necessarily limited thereto. Aspects in which any component is added or combined, or any component is deleted, without departing from the principles and intent of the present invention are also within the scope of the present invention. Each component of the distance measuring camera may be implemented in hardware, in software, or in a combination thereof.
Likewise, the numbers and types of the steps of the distance measuring method S100 shown in FIGS. 10 and 11 are merely illustrative examples, and the present invention is not necessarily limited thereto. Aspects in which any step is added or combined for any purpose, or any step is deleted, without departing from the principles and intent of the present invention are also within the scope of the present invention.
In the distance measuring camera of the present invention, the corresponding feature point detection process, which detects the feature points of one subject image that respectively correspond to the feature points of the other subject image, searches for feature points along epipolar lines based on epipolar geometry. The processing time for calculating the distance to the subject based on the image magnification ratio between the subject images can therefore be shortened. Accordingly, the present invention has industrial applicability.

Claims (7)

  1.  A distance measuring camera comprising:
      a first imaging system having a first optical system for condensing light from a subject to form a first subject image, and a first image sensor for acquiring a first image containing the first subject image by capturing the first subject image;
      a second imaging system having a second optical system arranged shifted with respect to the first optical system in a direction perpendicular to the optical axis direction of the first optical system, for condensing the light from the subject to form a second subject image, and a second image sensor for acquiring a second image containing the second subject image by capturing the second subject image;
      a size acquisition unit for acquiring a size of the first subject image by detecting a plurality of feature points of the first subject image in the first image and measuring distances between the plurality of feature points of the first subject image, and for further acquiring a size of the second subject image by detecting a plurality of feature points of the second subject image in the second image respectively corresponding to the plurality of feature points of the first subject image and measuring distances between the plurality of feature points of the second subject image; and
      a distance calculation unit for calculating a distance to the subject based on an image magnification ratio between a magnification of the first subject image and a magnification of the second subject image, obtained as a ratio between the size of the first subject image and the size of the second subject image acquired by the size acquisition unit,
      wherein the size acquisition unit detects the plurality of feature points of the second subject image in the second image by searching along a plurality of epipolar lines in the second image respectively corresponding to the plurality of feature points of the first subject image.
  2.  The distance measuring camera according to claim 1, wherein the size acquisition unit derives the plurality of epipolar lines in the second image respectively corresponding to the plurality of feature points of the first subject image based on a model that takes into account the characteristics and arrangement of the first imaging system and the second imaging system.
  3.  The distance measuring camera according to claim 2, wherein the plurality of epipolar lines in the second image respectively corresponding to the plurality of feature points of the first subject image are represented by the following equation (1):
      [Equation (1): reproduced in the original publication only as image JPOXMLDOC01-appb-I000001]
      where x1 and y1 are, respectively, the x and y coordinates in the first image of any one of the plurality of feature points of the first subject image; x2 and y2 are, respectively, the x and y coordinates of the feature point of the second subject image in the second image corresponding to said any one of the plurality of feature points of the first subject image; Px and Py are, respectively, the x-axis and y-axis values of the translational parallax between the front principal point of the first optical system and the front principal point of the second optical system; D is the depth parallax between the first optical system and the second optical system in the optical axis direction of the first optical system or the second optical system; PS1 is the pixel size of the first image sensor; PS2 is the pixel size of the second image sensor; f1 is the focal length of the first optical system; f2 is the focal length of the second optical system; EP1 is the distance from the exit pupil of the first optical system to the image formation position of the first subject image when the subject is at infinity; EP2 is the distance from the exit pupil of the second optical system to the image formation position of the second subject image when the subject is at infinity; aFD1 is the distance from the front principal point of the first optical system to the subject when the first subject image is in best focus on the imaging surface of the first image sensor; and aFD2 is the distance from the front principal point of the second optical system to the subject when the second subject image is in best focus on the imaging surface of the second image sensor.
  4.  The distance measuring camera according to claim 1, wherein the first optical system and the second optical system are configured such that the change in the magnification of the first subject image according to the distance to the subject differs from the change in the magnification of the second subject image according to the distance to the subject.
  5.  The distance measuring camera according to claim 4, wherein the first optical system and the second optical system are configured such that the focal length of the first optical system and the focal length of the second optical system differ from each other, whereby the change in the magnification of the first subject image according to the distance to the subject differs from the change in the magnification of the second subject image according to the distance to the subject.
  6.  The distance measuring camera according to claim 4 or 5, wherein the first optical system and the second optical system are configured such that the distance from the exit pupil of the first optical system to the image formation position of the first subject image formed by the first optical system when the subject is at infinity differs from the distance from the exit pupil of the second optical system to the image formation position of the second subject image formed by the second optical system when the subject is at infinity, whereby the change in the magnification of the first subject image according to the distance to the subject differs from the change in the magnification of the second subject image according to the distance to the subject.
  7.  The distance measuring camera according to any one of claims 4 to 6, wherein a depth parallax in the optical axis direction of the first optical system or the second optical system exists between the front principal point of the first optical system and the front principal point of the second optical system, whereby the change in the magnification of the first subject image according to the distance to the subject differs from the change in the magnification of the second subject image according to the distance to the subject.
PCT/JP2019/023661 2018-07-18 2019-06-14 Distance measurement camera WO2020017209A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018135167 2018-07-18
JP2018-135167 2018-07-18

Publications (1)

Publication Number Publication Date
WO2020017209A1 true WO2020017209A1 (en) 2020-01-23

Family

ID=69164332

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/023661 WO2020017209A1 (en) 2018-07-18 2019-06-14 Distance measurement camera

Country Status (3)

Country Link
JP (1) JP7227454B2 (en)
CN (1) CN112424566B (en)
WO (1) WO2020017209A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03200007A (en) * 1989-12-28 1991-09-02 Nippon Telegr & Teleph Corp <Ntt> Stereoscopic measuring instrument
JP2001124519A (en) * 1999-10-29 2001-05-11 Meidensha Corp Recurring correspondent point survey method, three- dimensional position measuring method thereof, these devices and recording medium
JP2001141422A (en) * 1999-11-10 2001-05-25 Fuji Photo Film Co Ltd Image pickup device and image processor
JP2012002683A (en) * 2010-06-17 2012-01-05 Fuji Electric Co Ltd Stereo image processing method and stereo image processing device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3200007B2 (en) 1996-03-26 2001-08-20 シャープ株式会社 Optical coupler and method of manufacturing the same
JP2009258846A (en) * 2008-04-14 2009-11-05 Nikon Systems Inc Image processing method, image processing system, image processor, and image processing program
JP4440341B2 (en) * 2008-05-19 2010-03-24 パナソニック株式会社 Calibration method, calibration apparatus, and calibration system including the apparatus
KR101214536B1 (en) 2010-01-12 2013-01-10 삼성전자주식회사 Method for performing out-focus using depth information and camera using the same
CN103764448B (en) * 2011-09-05 2016-03-02 三菱电机株式会社 Image processing apparatus and image processing method
JP2013156109A (en) * 2012-01-30 2013-08-15 Hitachi Ltd Distance measurement device
US8860930B2 (en) 2012-06-02 2014-10-14 Richard Kirby Three dimensional surface mapping system using optical flow
JP2015036632A (en) * 2013-08-12 2015-02-23 キヤノン株式会社 Distance measuring device, imaging apparatus, and distance measuring method
JP2015045587A (en) * 2013-08-28 2015-03-12 株式会社キーエンス Three-dimensional image processor, method of determining change in state of three-dimensional image processor, program for determining change in state of three-dimensional image processor, computer readable recording medium, and apparatus having the program recorded therein
JP6694234B2 (en) * 2015-01-23 2020-05-13 シャープ株式会社 Distance measuring device
CN105627926B (en) * 2016-01-22 2017-02-08 尹兴 Four-camera group planar array feature point three-dimensional measurement system and measurement method

Also Published As

Publication number Publication date
JP2020020775A (en) 2020-02-06
CN112424566B (en) 2023-05-16
JP7227454B2 (en) 2023-02-22
CN112424566A (en) 2021-02-26

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19838542; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19838542; Country of ref document: EP; Kind code of ref document: A1)