JP6957162B2 - Distance measuring device and moving object - Google Patents

Distance measuring device and moving object

Info

Publication number
JP6957162B2
JP6957162B2 (application JP2017032417A)
Authority
JP
Japan
Prior art keywords
photoelectric conversion
parallax
distance measuring
conversion unit
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2017032417A
Other languages
Japanese (ja)
Other versions
JP2017161512A (en)
Inventor
章成 高木
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to PCT/JP2017/008640 priority Critical patent/WO2017150736A1/en
Priority to US16/074,278 priority patent/US11280606B2/en
Priority to CN201780015269.9A priority patent/CN108885099B/en
Publication of JP2017161512A publication Critical patent/JP2017161512A/en
Application granted granted Critical
Publication of JP6957162B2 publication Critical patent/JP6957162B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00: Measuring distances in line of sight; Optical rangefinders
    • G01C3/02: Details
    • G01C3/06: Use of electric means to obtain final indication
    • G01C3/08: Use of electric radiation detectors
    • G01C3/085: Use of electric radiation detectors with electronic parallax measurement
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00: Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28: Systems for automatic generation of focusing signals
    • G02B7/30: Systems for automatic generation of focusing signals using parallactic triangle with a base line
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00: Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28: Systems for automatic generation of focusing signals
    • G02B7/34: Systems for automatic generation of focusing signals using different areas in a pupil plane

Description

The present invention relates to a distance measuring device and a moving body capable of high-precision distance measurement together with image acquisition, and more particularly to a distance measuring device and a moving body that measure distance by the imaging surface phase difference method.

In moving bodies such as vehicles, drones, and robots, measuring the distance between the moving body and objects around it, for example obstacles, is required in order to support the movement of the moving body. In particular, movement support such as collision avoidance and following another object also requires recognition processing that uses images of the obstacle, so cameras are widely used as the imaging devices with which moving bodies measure the distance to obstacles. The imaging surface phase difference method is known as a way for a camera to acquire not only the image of an obstacle (subject) used for recognition processing but also its distance (see, for example, Patent Documents 1 and 2). In the imaging surface phase difference method, the parallax between a pair of images formed by light fluxes that have passed through two different regions (partial pupils) of the exit pupil of the camera's optical system is obtained, and the distance to the subject is measured from that parallax based on the principle of triangulation. In a camera using the imaging surface phase difference method, for example, each pixel of the image sensor has two photoelectric conversion units (photodiodes). The parallax of the pair of images is obtained from the electrical signals (hereinafter, "image signals") into which the optical images formed by the light fluxes incident on the two photoelectric conversion units are converted, while the image signals obtained from the two photoelectric conversion units are summed to acquire the image of the subject.
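The triangulation step described above can be sketched as follows. This is a minimal illustration, not the patent's method: the function name and the numeric values (baseline, focal length, pixel pitch) are assumptions chosen only to show the relationship distance = baseline × focal length / parallax, where the "baseline" is the centroid spacing of the two partial pupils.

```python
def distance_from_parallax(baseline_m, focal_length_m, parallax_px, pixel_pitch_m):
    """Triangulation sketch: distance = baseline * focal_length / parallax.

    In the imaging surface phase difference method, the baseline is the
    distance between the centroids of the two partial pupils, and the
    parallax is the image shift between the pair of images on the sensor.
    """
    parallax_m = parallax_px * pixel_pitch_m
    if parallax_m == 0:
        return float("inf")  # no measurable shift: subject effectively at infinity
    return baseline_m * focal_length_m / parallax_m

# Illustrative numbers: 10 mm baseline, 25 mm focal length, 4 um pixels,
# 5 px measured parallax -> 12.5 m subject distance.
d = distance_from_parallax(0.010, 0.025, 5, 4e-6)
```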

So-called still cameras and video cameras are expected to produce high-quality images. The two photoelectric conversion units in a pixel each receive the light flux passing through the corresponding one of two partial pupils. To obtain a high-quality image, therefore, the two partial pupils in the exit pupil must be made large enough that the image obtained by adding the image signals from the two photoelectric conversion units is the same as the image that would be formed by the light flux passing through the entire exit pupil.

Patent Document 1: Japanese Unexamined Patent Publication No. 2013-190622 (特開2013−190622号公報)
Patent Document 2: Japanese Unexamined Patent Publication No. 2001-21792 (特開2001−21792号公報)

However, as shown in FIG. 16, when the two partial pupils 160 and 161 are enlarged, the baseline length, which is the distance between the centroids of the two partial pupils 160 and 161, becomes shorter, and the parallax of the pair of images obtained from the image signals 162 and 163 of the two photoelectric conversion units becomes smaller. As a result, in the imaging surface phase difference method, which measures the distance to the subject from the parallax based on the principle of triangulation, the accuracy of the measured distance may decrease.
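The trade-off above can be made concrete with a rough numerical sketch (all values and names are illustrative assumptions, not taken from the patent): for a fixed parallax-measurement noise floor, halving the baseline roughly doubles the resulting distance error, because the observed parallax scales linearly with the baseline.

```python
def parallax_px(distance_m, baseline_m, focal_length_m, pixel_pitch_m):
    # Parallax observed for a subject at a given distance (triangulation).
    return baseline_m * focal_length_m / (distance_m * pixel_pitch_m)

def distance_error(distance_m, baseline_m, focal_length_m, pixel_pitch_m,
                   parallax_noise_px=0.1):
    # Distance recovered when the measured parallax is off by the noise
    # floor, minus the true distance.
    p = parallax_px(distance_m, baseline_m, focal_length_m, pixel_pitch_m)
    biased = baseline_m * focal_length_m / ((p - parallax_noise_px) * pixel_pitch_m)
    return biased - distance_m

# Halving the baseline (the effect of enlarging the partial pupils)
# roughly doubles the distance error at 10 m:
e_long = distance_error(10.0, 0.010, 0.025, 4e-6)   # 10 mm baseline
e_short = distance_error(10.0, 0.005, 0.025, 4e-6)  # 5 mm baseline
```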

An object of the present invention is to provide a distance measuring device capable of high-precision distance measurement together with image acquisition.

In order to achieve the above object, the distance measuring device of the present invention comprises an imaging device having an optical system with a diaphragm and an image sensor in which a plurality of pixels are arranged, and a distance information acquisition unit that acquires distance information of a subject based on the image signals, output from each of the plurality of pixels, of a pair of optical images having parallax. The diaphragm is provided with a first diaphragm hole and a second diaphragm hole that define first and second ranging light fluxes, and a third diaphragm hole that defines an imaging light flux. Each of the plurality of pixels has a first photoelectric conversion unit that receives the first ranging light flux and outputs the image signal of one of the pair of optical images, a second photoelectric conversion unit that receives the second ranging light flux and outputs the image signal of the other of the pair of optical images, and a third photoelectric conversion unit that receives the imaging light flux and outputs an image signal of an optical image of the subject. The third photoelectric conversion unit is sandwiched between the first photoelectric conversion unit and the second photoelectric conversion unit with respect to the direction of the parallax.

Further, in order to achieve the above object, the moving body of the present invention comprises a distance measuring device and an alarm device that issues an alarm based on the distance measurement result of the distance measuring device. The distance measuring device has an imaging device having an optical system with a diaphragm and an image sensor in which a plurality of pixels are arranged, and a distance information acquisition unit that acquires distance information of a subject based on the image signals, output from each of the plurality of pixels, of a pair of optical images having parallax. The diaphragm is provided with a first diaphragm hole and a second diaphragm hole that define first and second ranging light fluxes, and a third diaphragm hole that defines an imaging light flux. Each of the plurality of pixels has a first photoelectric conversion unit that receives the first ranging light flux and outputs the image signal of one of the pair of optical images, a second photoelectric conversion unit that receives the second ranging light flux and outputs the image signal of the other of the pair of optical images, and a third photoelectric conversion unit that receives the imaging light flux and outputs an image signal of an optical image of the subject. The third photoelectric conversion unit is sandwiched between the first photoelectric conversion unit and the second photoelectric conversion unit with respect to the direction of the parallax.

Other aspects of the present invention will be made clear in the embodiments described below.

According to the present invention, it is possible to provide a distance measuring device capable of high-precision distance measurement together with image acquisition.

FIG. 1 is a block diagram schematically showing the configuration of the distance measuring device according to the first embodiment of the present invention.
FIG. 2 is a front view schematically showing the configuration of the image sensor in FIG. 1.
FIG. 3 is a diagram for explaining the principle of distance measurement by the imaging surface phase difference method.
FIG. 4 is a diagram showing the exit pupil of the optical system in the first embodiment of the present invention.
FIG. 5 is a diagram for explaining the arrangement of each PD in the imaging and ranging pixel.
FIG. 6 is a diagram for explaining a modification of the exit pupil of the optical system and a modification of the arrangement of each PD in the imaging and ranging pixel.
FIG. 7 is a diagram for explaining how the ranging light flux is stopped down in the second embodiment of the present invention.
FIG. 8 is a diagram for explaining how the ranging light flux is stopped down in the second embodiment of the present invention.
FIG. 9 is a block diagram schematically showing the configuration of the distance measuring device according to the second embodiment of the present invention.
FIG. 10 is a diagram for explaining how a diaphragm hole limits the incident angle range of the ranging light flux.
FIG. 11 is a diagram for explaining how a diaphragm hole limits the incident angle range of the ranging light flux.
FIG. 12 is a view of the diaphragm in FIG. 9 as seen from the optical axis direction.
FIG. 13 is a diagram schematically explaining the configuration of a driving support system in an automobile as a moving body according to the present embodiment.
FIG. 14 is a diagram schematically explaining the configuration of a driving support system in an automobile as a moving body according to the present embodiment.
FIG. 15 is a flowchart showing collision avoidance processing as an operation example of the driving support system in the present embodiment.
FIG. 16 is a diagram for explaining the relationship between each partial pupil and the baseline length in the exit pupil of the optical system.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. The components described in these embodiments are merely examples, and the scope of the present invention is not limited by them.

First, the first embodiment of the present invention will be described. FIG. 1 is a block diagram schematically showing the configuration of the distance measuring device according to the first embodiment of the present invention.

In FIG. 1, the distance measuring device includes a camera 10 having an optical system 11 and an image sensor 12 in which a large number of pixels are arranged, an image analysis unit 13, and a distance information acquisition unit 14. The optical system 11 has, for example, two lenses 11a and 11b arranged along the optical axis, and forms an optical image of the subject on the image sensor 12. As shown in FIG. 2, the many pixels of the image sensor 12 are divided into a plurality of imaging pixels 12a and a plurality of ranging pixels 12b (first pixels, second pixels). To avoid clutter, FIG. 2 shows only part of the pixel arrangement at its upper left, and the arrangement over the entire surface of the image sensor 12 is not illustrated. Each imaging pixel 12a and each ranging pixel 12b has one photoelectric conversion element serving as a photoelectric conversion unit, for example, a photodiode (hereinafter, "PD"). In the present embodiment, the PD is formed of silicon, a photoelectric conversion film made of a thin film with light-absorbing characteristics, or the like. Each of the plurality of imaging pixels 12a receives a light flux passing through a partial region of the exit pupil of the optical system 11 (a distinct region within the exit pupil; hereinafter, "partial pupil") to form an optical image of the subject. Each of the plurality of ranging pixels 12b receives a light flux passing through one of two different partial pupils in the exit pupil of the optical system 11. In the image sensor 12, for example, following the Bayer arrangement, imaging pixels 12a having G (green) spectral sensitivity are arranged as the two diagonal pixels of each 2-row by 2-column group of four pixels, and one imaging pixel 12a each having R (red) and B (blue) spectral sensitivity is arranged as the other two pixels. The specific-color spectral sensitivity of each imaging pixel 12a is imparted by its primary-color color filter. In some of the 2-row by 2-column groups of the image sensor 12, the two diagonal imaging pixels 12a with G spectral sensitivity are left as they are, while the imaging pixels 12a with R and B spectral sensitivity are replaced by ranging pixels 12b. In those groups, the two diagonal ranging pixels 12b receive the light fluxes that have passed through the two partial pupils, thereby forming a pair of optical images, and photoelectrically convert each optical image to output image signals (output signals). The image analysis unit 13 performs image processing on the output image signals. The distance information acquisition unit 14 calculates the parallax between the pair of optical images from the processed image signals, and then calculates the distance to the subject based on the calculated parallax. That is, the distance measuring device measures the distance to the subject by the imaging surface phase difference method. When calculating the parallax from the image signals, it is not always necessary to apply image processing to the image signals output from the image sensor 12. For example, when the parallax can be calculated directly from the image signals, the image sensor 12 outputs them directly to the distance information acquisition unit 14 without going through the image analysis unit 13.
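The parallax calculation mentioned above is typically an image-shift search between the two ranging-pixel signals. The following is a minimal sketch of such a search, not the patent's implementation: the function name, the mean-absolute-difference cost, and the test signal are all assumptions standing in for the correlation processing named in the text.

```python
import math

def image_shift(a, b, max_shift=8):
    """Estimate the lateral shift between a pair of 1-D image signals by
    scanning candidate offsets and keeping the one with the smallest mean
    absolute difference (a simple stand-in for correlation processing)."""
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(len(a), len(b) + s)
        if hi - lo < len(a) // 2:
            continue  # require enough overlap between the compared windows
        cost = sum(abs(a[i] - b[i - s]) for i in range(lo, hi)) / (hi - lo)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

# A sinusoidal test pattern shifted by 3 pixels between the two signals,
# mimicking the pair of optical images from the paired ranging pixels:
base = [math.sin(2 * math.pi * i / 21.3) for i in range(64)]
shifted = [base[i - 3] for i in range(64)]  # wraps at the left edge
print(image_shift(shifted, base))  # -> 3
```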

In the image sensor 12 described above, two ranging pixels 12b each receive the light flux passing through one partial pupil to form a pair of optical images, but a single imaging and ranging pixel 12c, described later, may instead receive the light fluxes passing through the two partial pupils and form the pair of optical images. In this case, the imaging and ranging pixel 12c has at least two PDs, and the two PDs receive the light fluxes that have passed through the two partial pupils, respectively. The imaging and ranging pixel 12c also combines the light fluxes passing through the two partial pupils received by the two PDs to form an optical image of the subject. Imaging and ranging pixels 12c may therefore be arranged over almost the entire area of the image sensor 12. In the present embodiment, the lenses 11a and 11b of the optical system 11 are fixed in the camera 10, and the so-called autofocus function is omitted; that is, the focus of the camera 10 is fixed. However, the camera 10 may have an autofocus function, in which case autofocus is executed based on the distance to the subject calculated by the distance information acquisition unit 14.

FIG. 3 is a diagram for explaining the principle of distance measurement by the imaging surface phase difference method. Specifically, FIG. 3A shows the case where distance measurement is performed using the optical images formed by a pair of ranging pixels 12b. FIG. 3B shows the pixels of FIG. 3A viewed from the optical axis direction. FIG. 3C shows the case where distance measurement is performed using the pair of optical images formed by a single imaging and ranging pixel 12c. In FIGS. 3A and 3C, the imaging pixel 12a, the ranging pixels 12b, and the imaging and ranging pixel 12c are drawn as viewed from the side.

First, in FIG. 3A, the exit pupil 30 of the optical system 11 has two partial pupils (hereinafter, "ranging pupils") 31 and 32 located near the two ends of the exit pupil 30 in the horizontal direction (the lateral direction in the figure, the first direction; hereinafter, the "parallax direction"). The exit pupil 30 also has a partial pupil (hereinafter, the "imaging pupil") 33 sandwiched between the ranging pupils 31 and 32 in the parallax direction and located at approximately the center of the exit pupil 30. Ranging light fluxes 34 and 35 (first light flux, second light flux) are emitted from the ranging pupils 31 and 32, respectively, and are incident on the respective pixels of the pair of ranging pixels 12b. An imaging light flux 36 is emitted from the imaging pupil 33 and is incident on the imaging pixel 12a. Each of the two ranging pixels 12b has a microlens 37 (first microlens, second microlens) and a PD 38 (first photoelectric conversion unit, second photoelectric conversion unit) facing the exit pupil 30 via the microlens 37. Each of the two ranging pixels 12b further has a light-shielding film 40 (first light-shielding film, second light-shielding film) arranged between the microlens 37 and the PD 38 and having an opening 39 (first opening, second opening) that partially exposes the PD 38 to the microlens 37. The opening 39 (first opening) of the left ranging pixel 12b in FIG. 3B is decentered in the parallax direction (first direction) from the center of the pixel and the optical axis of the microlens 37. The opening 39 (second opening) of the right ranging pixel 12b in FIG. 3B is decentered from the center of the pixel and the optical axis of the microlens 37 in the direction opposite to the first direction (second direction). The PD 38 of the left ranging pixel 12b and the PD 38 of the right ranging pixel 12b in FIG. 3B thus have mutually different incident-angle sensitivity characteristics. In the claims and in this specification, the center of a pixel means the centroid of the pixel; for example, when the pixel is rectangular, the intersection of its two diagonals corresponds to the center of the pixel. The imaging pixel 12a has a microlens 41 and a PD 42 facing the exit pupil 30 via the microlens 41. The imaging pixel 12a further has a light-shielding film 66 (third light-shielding film) arranged between the microlens 41 and the PD 42 and having an opening 65 (third opening) that partially exposes the PD 42 to the microlens 41. The opening 65 (third opening) of the imaging pixel 12a is arranged on the center of the pixel and on the optical axis of the microlens 41. The PD 42 of the imaging pixel 12a has incident-angle sensitivity characteristics different from those of the PDs 38 of the two ranging pixels 12b.

In the pair of ranging pixels 12b, the two microlenses 37 are arranged near the image plane of the optical system 11, and each microlens 37 focuses the ranging light flux 34 or 35 onto its light-shielding film 40 (opening 39). The optical system 11 and the two microlenses 37 are configured so that the exit pupil 30 and the two light-shielding films 40 (their openings 39) are optically conjugate. The shapes of the openings 39 of the two light-shielding films 40 are therefore projected by the two microlenses 37 onto the two ranging pupils 31 and 32 in the exit pupil 30. That is, the arrangement (position and size) of the two ranging pupils 31 and 32 is realized by the position and size of the respective openings 39 in the light-shielding films 40. In the imaging pixel 12a, the microlens 41 is arranged near the image plane of the optical system 11, and the optical system 11 and the microlens 41 are configured so that the exit pupil 30 and the light-shielding film 66 (opening 65) are optically conjugate. The shape of the opening 65 is therefore projected by the microlens 41 onto the imaging pupil 33 in the exit pupil 30. That is, the arrangement (position and size) of the imaging pupil 33 is realized by the position and size of the opening 65 in the light-shielding film 66. The two PDs 38 of the pair of ranging pixels 12b output image signals into which the optical images formed through each microlens 37 by the ranging light fluxes 34 and 35 passing through the ranging pupils 31 and 32 are converted. The parallax between the pair of optical images is calculated by applying image-shift detection processing (correlation processing, phase-difference detection processing) or the like to the output image signals. Further, the defocus amount and the distance to the subject are calculated from the parallax based on the principle of triangulation (see, for example, US Patent Application Publication No. 2015/0092988). The PD 42 of the imaging pixel 12a outputs an image signal into which the optical image formed through the microlens 41 by the imaging light flux 36 passing through the imaging pupil 33 is converted, and an image of the subject is formed from this image signal. Although the light-shielding films 40 and 66 are provided in FIGS. 3A and 3B, they may be omitted. In that case, by making the positions and sizes of the PDs 38 and 42 the same as those of the openings 39 and 65, the above-described arrangement of the ranging pupils 31 and 32 and of the imaging pupil 33 can be realized.
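The conjugate relationship above implies that an opening's decentering and width map onto the exit pupil scaled by the conjugate magnification of the microlens. The following is a simplified paraxial sketch under assumptions of my own (function name, units, sample distances are all illustrative, and image inversion is ignored, so only magnitudes are shown); it is not the patent's design procedure.

```python
def projected_pupil(aperture_center_um, aperture_width_um,
                    pupil_distance_mm, aperture_distance_um):
    """Paraxial sketch: the microlens images the light-shielding-film
    opening onto the exit pupil plane, so the opening's offset and width
    scale by the conjugate magnification (magnitudes only)."""
    m = (pupil_distance_mm * 1000.0) / aperture_distance_um  # magnification
    return aperture_center_um * m, aperture_width_um * m  # offset, width in um

# A 1 um opening decentered by 0.5 um, with the exit pupil 20 mm away and
# the opening 4 um behind the microlens, maps to a 5 mm-wide ranging
# pupil offset 2.5 mm from the optical axis.
center_um, width_um = projected_pupil(0.5, 1.0, 20.0, 4.0)
```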

In FIG. 3C, the ranging light fluxes 34 and 35 are emitted from the ranging pupils 31 and 32, respectively, and are incident on the imaging and ranging pixel 12c. The imaging light flux 36 is emitted from the imaging pupil 33 and is incident on the same imaging and ranging pixel 12c. The imaging and ranging pixel 12c has a microlens 43 and PDs 44 to 46 facing the exit pupil 30 via the microlens 43. In the imaging and ranging pixel 12c, the microlens 43 is arranged near the image plane of the optical system 11 and focuses the ranging light fluxes 34 and 35 onto the two PDs 44 and 45. The optical system 11 and the microlens 43 are configured so that the exit pupil 30 and the PDs 44 to 46 are optically conjugate. The microlens 43 therefore projects the shape of the PD 44 onto the ranging pupil 31 in the exit pupil 30, the shape of the PD 45 onto the ranging pupil 32, and the shape of the PD 46 onto the imaging pupil 33. That is, the arrangement (position and size) of the ranging pupils 31 and 32 and the imaging pupil 33 is realized by the positions and sizes of the PDs 44 to 46. The PDs 44 and 45 of the imaging and ranging pixel 12c output image signals into which the optical images formed through the microlens 43 by the ranging light fluxes 34 and 35 passing through the ranging pupils 31 and 32 are converted. Here too, the parallax between the pair of optical images is calculated by applying image-shift detection processing (correlation processing, phase-difference detection processing) or the like to the output image signals, and the defocus amount and the distance to the subject are calculated from the parallax based on the principle of triangulation. The PD 46 of the imaging and ranging pixel 12c outputs an image signal into which the optical image formed through the microlens 43 by the imaging light flux 36 passing through the imaging pupil 33 is converted. The PDs 44 to 46 have mutually different incident-angle sensitivity characteristics.

FIG. 4 is a diagram showing the exit pupil of the optical system according to the first embodiment of the present invention. The horizontal direction in the figure corresponds to the parallax direction.

In FIG. 4(A), the exit pupil 30 has two elliptical distance measuring pupils 31 and 32 that are symmetric in the parallax direction about the center of the exit pupil 30 (the optical axis of the optical system 11) and are located near the two ends of the exit pupil 30. The exit pupil 30 further has a circular imaging pupil 33 that lies between the distance measuring pupils 31 and 32 in the parallax direction, approximately at the center of the exit pupil 30. The ratio of the distance L between the centroids of the distance measuring pupils 31 and 32 in the parallax direction to the exit pupil width W, which is the length of the exit pupil 30 in the parallax direction, is 0.6 or more and 0.9 or less. The ratio of each of the distance measuring pupil widths Wa and Wb (partial pupil widths), which are the lengths of the distance measuring pupils 31 and 32 in the parallax direction, to the exit pupil width W is 0.1 or more and 0.4 or less. Each of the elliptical distance measuring pupils 31 and 32 has its long axis in the vertical direction in the figure (the direction perpendicular to the parallax direction). The ratio of the distance measuring pupil height H (partial pupil height), which is the length of each distance measuring pupil 31, 32 in the vertical direction in the figure, to each of the distance measuring pupil widths Wa and Wb (hereinafter, the "aspect ratio") is 1 or more, and preferably 2 or more. The sizes and shapes of the distance measuring pupils 31 and 32 are arbitrary as long as they satisfy the above constraints on the distance measuring pupil widths Wa, Wb and the distance measuring pupil height H; for example, the distance measuring pupils 31 and 32 may be somewhat smaller, as shown in FIG. 4(B), or the distance measuring pupils 31 and 32 and the imaging pupil 33 may have the same (circular) shape, as shown in FIG. 4(C). However, the distance measuring pupils 31 and 32 must be large enough that the intensities of the image signals based on the distance measuring light fluxes 34 and 35 passing through them are high enough to obtain accurate distance information about the subject.
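The numerical ranges above (centroid separation, partial pupil widths, and aspect ratio) can be expressed as a small validity check. The function name and the example values are illustrative assumptions, not part of the embodiment.

```python
def pupil_layout_ok(L, W, Wa, Wb, H, min_aspect=1.0):
    """Check the pupil-geometry ranges stated in the text:
    0.6 <= L/W <= 0.9, 0.1 <= Wa/W <= 0.4, 0.1 <= Wb/W <= 0.4,
    and aspect ratio H/Wa, H/Wb >= min_aspect (preferably 2)."""
    return (0.6 <= L / W <= 0.9
            and all(0.1 <= w / W <= 0.4 for w in (Wa, Wb))
            and all(H / w >= min_aspect for w in (Wa, Wb)))

# Example (arbitrary units): exit pupil width 10, centroid distance 7,
# partial pupil widths 2, pupil height 4 -> all ranges satisfied.
print(pupil_layout_ok(L=7, W=10, Wa=2, Wb=2, H=4))  # -> True
```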

According to the present embodiment, since the ratio of the distance L (baseline length) between the centroids of the distance measuring pupils 31 and 32 in the parallax direction to the exit pupil width W of the exit pupil 30 is 0.6 or more and 0.9 or less, the baseline length can be made long. As a result, the accuracy of the distance to the subject measured using the imaging-plane phase difference method can be increased. Since the ratio of each distance measuring pupil width Wa, Wb to the exit pupil width W is 0.1 or more and 0.4 or less, the freedom in arranging the distance measuring pupils 31 and 32 along the parallax direction of the exit pupil 30 is increased. As a result, the distance measuring pupils 31 and 32 can be positioned near the two ends of the exit pupil 30 in the parallax direction, so the baseline length can be reliably increased. If the distance measuring pupil widths Wa and Wb were too small, the light quantities of the distance measuring light fluxes 34 and 35 would decrease sharply, lowering the S/N ratio of the obtained distance measuring image signals and hence the accuracy of the measured distance; because the ratio of each width Wa, Wb to the exit pupil width W is 0.1 or more, such a large decrease in light quantity is prevented. Conversely, if the distance measuring pupil widths Wa and Wb become large, that is, if the distance measuring pupils 31 and 32 become large, the baseline length becomes short and the accuracy of the measured distance decreases; because the ratio of each width Wa, Wb to the exit pupil width W is 0.4 or less, the baseline length is prevented from becoming short. Furthermore, since the aspect ratio of each distance measuring pupil 31, 32 is 1 or more, the amount of the distance measuring light fluxes 34 and 35 passing through the distance measuring pupils 31 and 32 can be increased. This raises the S/N ratio of the image signals obtained from the optical images formed by the distance measuring light fluxes 34 and 35, so that the distance information of the subject can be obtained with high accuracy. In addition, the exit pupil 30 has the imaging pupil 33 sandwiched between the distance measuring pupils 31 and 32 in the parallax direction, and an image of the subject is formed from the image signal obtained by converting the optical image formed by the imaging light flux 36 passing through the imaging pupil 33. Compared with using the light flux passing through the entire exit pupil 30, this corresponds to a smaller aperture, so the depth of focus for the subject can be deepened and an image of the subject suitable for recognition processing can be obtained.
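Why a longer baseline improves accuracy can be illustrated with the standard triangulation relation for a two-viewpoint system, Z = f·B/d, together with first-order error propagation. This is a generic far-field sketch under a thin-lens assumption, not a formula taken from the patent; all numeric values are invented for the example.

```python
def distance_from_parallax(f_mm, baseline_mm, parallax_mm):
    """Triangulation in the far-field, thin-lens approximation: Z = f * B / d,
    where d is the parallax measured on the sensor."""
    return f_mm * baseline_mm / parallax_mm

def distance_error(f_mm, baseline_mm, parallax_mm, parallax_err_mm):
    """First-order error propagation: |dZ| = Z * (parallax_err / parallax).
    For a fixed distance, doubling the baseline doubles the parallax and
    therefore halves the relative distance error."""
    z = distance_from_parallax(f_mm, baseline_mm, parallax_mm)
    return z * parallax_err_mm / parallax_mm

# Example: f = 4 mm, baseline 2 mm, measured parallax 4 um.
z = distance_from_parallax(f_mm=4.0, baseline_mm=2.0, parallax_mm=0.004)
print(round(z))  # -> 2000 (mm)
```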

As described above, the arrangement of the two distance measuring pupils 31 and 32 is realized by the positions and sizes of the openings 39 of the two light-shielding films 40 in the two distance measuring pixels 12b, and the arrangement of the imaging pupil 33 is realized by the position and size of the opening 65 of the light-shielding film 66 in the imaging pixel 12a. Alternatively, the arrangement of the two distance measuring pupils 31 and 32 and the imaging pupil 33 is realized by the positions and sizes of the PDs 44 to 46 in the imaging/distance measuring pixel 12c. In the imaging/distance measuring pixel 12c, therefore, as shown in FIG. 5(A), the two PDs 44 and 45 are rectangles whose long sides run in the vertical direction in the figure (perpendicular to the parallax direction), corresponding to the vertically elongated distance measuring pupils 31 and 32, and the two PDs 44 and 45 are spaced apart from each other in the parallax direction, corresponding to the two distance measuring pupils 31 and 32 located near the two ends of the exit pupil 30. The PD 46 is square, corresponding to the circular imaging pupil 33, and is arranged approximately at the center of the imaging/distance measuring pixel 12c, corresponding to the imaging pupil 33 located approximately at the center of the exit pupil 30. As shown in FIG. 5(A), when one imaging/distance measuring pixel 12c has both PDs 44 and 45, the image signals used to calculate the parallax of a pair of optical images are obtained from a single imaging/distance measuring pixel 12c, so many such image signals can be obtained. That is, the resolution of the image signals can be increased, so the formed image can be made high quality and the resolution of the distance information can be increased.

Note that an imaging/distance measuring pixel 12c may have only one of the PDs 44 and 45. For example, as shown in FIG. 5(B), one imaging/distance measuring pixel 12c (lower in the figure) has the PD 44, and another imaging/distance measuring pixel 12c (upper in the figure) has the PD 45. In this case, the distance measuring light flux 34 passing through the distance measuring pupil 31 is received by the PD 44 of the one imaging/distance measuring pixel 12c, and that PD 44 outputs an image signal obtained by converting the optical image formed by the distance measuring light flux 34. Likewise, the distance measuring light flux 35 passing through the distance measuring pupil 32 is received by the PD 45 of the other imaging/distance measuring pixel 12c, and that PD 45 outputs an image signal obtained by converting the optical image formed by the distance measuring light flux 35. The parallax of the pair of optical images is then calculated from the image signals output from the PD 44 of the one imaging/distance measuring pixel 12c and the PD 45 of the other imaging/distance measuring pixel 12c. As shown in FIG. 5(B), when each imaging/distance measuring pixel 12c has only one of the PDs 44 and 45, the number of PDs in one imaging/distance measuring pixel 12c can be reduced to two. This leaves more room for arranging the PDs, so each PD can be made larger. As a result, the amount of light received by each PD increases and its sensitivity improves, so that a high-quality image can be formed and the accuracy of the distance calculation can be improved even in environments where the light quantity is insufficient.

In the present embodiment, the exit pupil 30 has the distance measuring pupils 31 and 32 arranged in the horizontal direction in the figure, but it may further have two distance measuring pupils arranged in the vertical direction in the figure. For example, as shown in FIG. 6(A), the exit pupil 30 has, in addition to the distance measuring pupils 31 and 32, two elliptical distance measuring pupils 47 and 48 arranged in the vertical direction in the figure. The two distance measuring pupils 47 and 48 are located near the two ends of the exit pupil 30 in the vertical direction in the figure. This makes it possible to calculate the parallax of a pair of optical images not only in the horizontal direction but also in the vertical direction of the figure, improving the accuracy of measuring the distance to horizontal and diagonal lines in the subject. In this case, as shown in FIG. 6(B), the imaging/distance measuring pixel 12c has PDs 49 and 50, rectangles whose long sides run in the horizontal direction of the figure, corresponding to the two elliptical distance measuring pupils 47 and 48. The two PDs 49 and 50 are spaced apart from each other in the vertical direction in the figure, corresponding to the two distance measuring pupils 47 and 48 located near the two vertical ends of the exit pupil 30.

In the present embodiment, the imaging pixel 12a has a primary-color filter, but the color filter may be a complementary-color filter instead of a primary-color one. A complementary-color filter transmits a larger light quantity than a primary-color filter, so the sensitivity of the PD 42 can be improved. Meanwhile, in the distance measuring pixel 12b the light flux received by the PD 38 is limited to the light flux passing through the opening 39, and in the imaging/distance measuring pixel 12c the sizes of the PDs 44 and 45 are limited. However, the distance measuring pixel 12b and the imaging/distance measuring pixel 12c either have no color filter or have a complementary-color filter, so the light quantity received by the PD 38 and the PDs 44 and 45 is not greatly restricted and their sensitivity does not decrease significantly.

Next, a second embodiment of the present invention will be described. Since the configuration and operation of the second embodiment are basically the same as those of the first embodiment described above, descriptions of the overlapping configuration and operation are omitted, and only the differing configuration and operation are described below.

When the distance measuring device is applied to a moving object, for example an automobile or a drone, the device is preferably smaller and lighter, and accordingly the camera 10 must also be further miniaturized. The image sensor 12 then becomes smaller, and so do its imaging pixels 12a and distance measuring pixels 12b. When the distance measuring pixel 12b is miniaturized, for example when its size (pixel size) becomes on the order of a few times the wavelength of visible light, the spreading of the distance measuring light fluxes 34 and 35 entering the microlens 37 due to diffraction becomes large. Specifically, as shown in FIG. 7(A), the distance measuring light flux 35 entering the microlens 37 spreads, due to diffraction, more widely (see the light flux shown by the broken line in the figure) than it does when the pixel size is large and almost no diffraction occurs (see the light flux shown by the solid line). The light quantity passing through the opening 39 therefore decreases, the amount of the distance measuring light flux 35 received by the PD 38 decreases, and the sensitivity of the PD 38 to the distance measuring light flux 35 decreases. As a result, compared with the ideal incident angle sensitivity characteristic obtained when the pixel size is large and almost no diffraction occurs (solid-line characteristic in the figure), the absolute sensitivity of the incident angle sensitivity characteristic when the distance measuring light flux 35 is spread by diffraction (broken-line characteristic) becomes small. The SNR (S/N ratio) of the obtained image signal then decreases, and the distance measurement accuracy decreases. Moreover, when the distance measuring light flux 35 received by the PD 38 is used to form an image of the subject, the quality of the formed image decreases and the accuracy of the recognition processing decreases. In the graphs showing the incident angle sensitivity characteristics, the vertical axis indicates sensitivity and the horizontal axis indicates incident angle; the incident angle at the intersection of the horizontal axis with the vertical axis is the chief ray incident angle of each light flux.
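The condition "pixel size on the order of a few times the visible wavelength" can be made concrete with the usual single-aperture diffraction estimate, sin θ ≈ λ/D. This is a generic order-of-magnitude sketch, not a formula from the patent; the 2 µm aperture and green-light wavelength are assumed values.

```python
import math

def diffraction_half_angle_deg(wavelength_um, aperture_um):
    """Approximate diffraction half-angle for a slit-like aperture of width D,
    using sin(theta) ~ lambda / D, returned in degrees."""
    return math.degrees(math.asin(min(1.0, wavelength_um / aperture_um)))

# A 2 um microlens aperture at 0.55 um (green light) already spreads the
# beam over roughly +/- 16 degrees, broadening the incident angle
# sensitivity characteristic as described above.
print(round(diffraction_half_angle_deg(0.55, 2.0), 1))  # -> 16.0
```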

To improve the sensitivity of the PD 38 to the distance measuring light flux 35, it is preferable to enlarge the width Wc of the opening 39 so as to increase the light quantity passing through the opening 39, as shown in FIG. 7(B). However, enlarging the width Wc of the opening 39 also widens the range of incident angles of light fluxes that can pass through the opening 39 (see the broken-line light flux in the figure). As a result, the incident angle range of the incident angle sensitivity characteristic with the enlarged width Wc (broken-line characteristic in the figure) is wider than that of the ideal characteristic (solid-line characteristic), so the baseline length becomes short and the distance measurement accuracy decreases.

In the present embodiment, to deal with this, as shown in FIG. 7(C), the width Wc of the opening 39 is enlarged to improve the sensitivity of the PD 38 while the incident angle range of the incident angle sensitivity characteristic is limited. That is, to limit the incident angles of the light flux passing through the opening 39, the distance measuring light flux 35 is stopped down so that (the width of) its incident angle distribution is limited (see the solid-line light flux in the figure). Specifically, the aperture hole 52 of the diaphragm 51 is used to narrow the incident angle range of the distance measuring light flux 35 (its incident angle distribution at the image plane of the optical system 11). Then, even if the distance measuring light flux 35 that has passed through the aperture hole 52 is diffracted by the microlens 37, it does not spread beyond the opening 39 with the enlarged width Wc on the light-shielding film 40 (the first light-shielding film). As a result, the PD 38 can receive the distance measuring light flux 35 sufficiently, and by the same measure it can also sufficiently receive the distance measuring light flux 34. In this way, the incident angle range of the incident angle sensitivity characteristic can be narrowed while its absolute sensitivity is kept high, and a decrease in distance measurement accuracy can be prevented.

Likewise, when the imaging/distance measuring pixel 12c is miniaturized, for example when the pixel size becomes on the order of a few times the wavelength of visible light, the spreading of the distance measuring light fluxes 34 and 35 entering the microlens 43 due to diffraction becomes large. Specifically, as shown in FIG. 8(A), the distance measuring light flux 35 entering the microlens 43 spreads, due to diffraction, more widely (broken-line light flux in the figure) than it does when the pixel size is large and almost no diffraction occurs (solid-line light flux). The light quantity of the distance measuring light flux 35 received by the PD 45 therefore decreases, and the sensitivity of the PD 45 to the distance measuring light flux 35 decreases. As a result, compared with the ideal incident angle sensitivity characteristic obtained when the pixel size is large and almost no diffraction occurs (solid-line characteristic), the absolute sensitivity of the incident angle sensitivity characteristic when the distance measuring light flux 35 is spread by diffraction (broken-line characteristic) becomes small. The SNR of the obtained image signal then decreases, and the distance measurement accuracy decreases. Moreover, when the distance measuring light flux 35 received by the PD 45 is used to form an image of the subject, the quality of the formed image decreases and the accuracy of the recognition processing decreases.

To improve the sensitivity of the PD 45 to the distance measuring light flux 35, it is preferable to enlarge the width Wd of the PD 45 so as to increase the light quantity received by the PD 45, as shown in FIG. 8(B). However, enlarging the width Wd of the PD 45 also widens the range of incident angles of light fluxes that can enter the PD 45 (broken-line light flux in the figure). As a result, the incident angle range of the incident angle sensitivity characteristic with the enlarged width Wd (broken-line characteristic) is wider than that of the ideal characteristic (solid-line characteristic), so the baseline length becomes short and the distance measurement accuracy decreases.

In the present embodiment, to deal with this, as shown in FIG. 8(C), the width Wd of the PD 45 is enlarged to improve its sensitivity while the incident angle range of the incident angle sensitivity characteristic is limited. That is, to limit the incident angles of the light flux entering the PD 45, the distance measuring light flux 35 is stopped down so that (the width of) its incident angle distribution is limited (solid-line light flux in the figure). Specifically, the aperture hole 52 of the diaphragm 51 is used to narrow the incident angle range of the distance measuring light flux 35 (its incident angle distribution at the image plane of the optical system 11). Then, even if the distance measuring light flux 35 that has passed through the aperture hole 52 is diffracted by the microlens 43, it does not spread beyond the PD 45 with the enlarged width Wd in the plane containing the PD 45. As a result, the PD 45 can receive the distance measuring light flux 35 sufficiently, and by the same measure the PD 44 can also sufficiently receive the distance measuring light flux 34. In this way, the incident angle range of the incident angle sensitivity characteristic can be narrowed while its absolute sensitivity is kept high, and the distance measurement accuracy can be increased.

FIG. 9 is a block diagram schematically showing the configuration of a distance measuring device according to the second embodiment of the present invention. In FIG. 9, the distance measuring device includes a camera 10 having an optical system 53 and an image sensor 12 in which many pixels are arrayed, an image analysis unit 13, and a distance information acquisition unit 14. The optical system 53 has, for example, two lenses 11a and 11b arranged along the optical axis, and forms an optical image of the subject on the image sensor 12. The optical system 53 also has a diaphragm 51 arranged between the two lenses 11a and 11b. The diaphragm 51 has two aperture holes 52 corresponding to the two distance measuring pupils 31 and 32. When distance measuring pixels 12b are arrayed in the image sensor 12, the optical system 53 and the microlens 37 are configured so that the two aperture holes 52 of the diaphragm 51 and the light-shielding film 40 (opening 39) are optically conjugate. When imaging/distance measuring pixels 12c are arrayed in the image sensor 12, the optical system 53 and the microlens 43 are configured so that the two aperture holes 52 of the diaphragm 51 and the PDs 44 and 45 are optically conjugate.

As described above, the two aperture holes 52 define the distance measuring light fluxes 34 and 35 passing through the distance measuring pupils 31 and 32. For example, as shown in FIG. 10, each aperture hole 52 defines the incident angle range of each distance measuring light flux 34, 35, thereby realizing an incident angle sensitivity characteristic whose incident angle range is limited (hereinafter, the "limited incident angle sensitivity characteristic"; solid-line characteristic in the figure). Let We be the half-value width (full width at half maximum) of the incident angle sensitivity characteristic of the opening 39 with the enlarged width Wc or of the PDs 44 and 45 with the enlarged width Wd (hereinafter, the "width-enlarged incident angle sensitivity characteristic"; broken-line characteristic in the figure): the incident angle range over which the sensitivity of that characteristic is at least the sensitivity Hb, half the maximum sensitivity Ha. Similarly, let Wf be the half-value width of the limited incident angle sensitivity characteristic, that is, the incident angle range over which its sensitivity is at least half the maximum sensitivity Ha. In the present embodiment, the size (width in the parallax direction) of each aperture hole 52 is set so that the half-value width Wf of the limited incident angle sensitivity characteristic is smaller than the half-value width We of the width-enlarged incident angle sensitivity characteristic. Specifically, when imaging/distance measuring pixels 12c are arrayed in the image sensor 12, as shown in FIG. 11, the width Wg of each aperture hole 52 in the parallax direction (horizontal direction in the figure) is set smaller than the width Wh, in the parallax direction at the diaphragm 51, of the light flux at the incident angles corresponding to the half-value width We of the width-enlarged incident angle sensitivity characteristic (broken-line light flux in the figure). The sizes, shapes, and positions of the two aperture holes 52 are further defined so that the distance measuring light fluxes 34 and 35 have the same polarization state and the same wavelength.
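The half-value widths We and Wf can be computed from sampled sensitivity curves. A sketch over discretely sampled characteristics; the two triangular sample curves are invented for illustration and merely stand in for the broken-line (width-enlarged) and solid-line (aperture-limited) characteristics of FIG. 10.

```python
def half_value_width(angles, sensitivities):
    """Full width at half maximum of a sampled incident angle sensitivity
    characteristic: the span of angles whose sensitivity is at least half
    the peak sensitivity."""
    h_max = max(sensitivities)
    above = [a for a, s in zip(angles, sensitivities) if s >= h_max / 2]
    return max(above) - min(above)

angles = list(range(-10, 11))  # incident angles in degrees
wide = [max(0.0, 1 - abs(a) / 10) for a in angles]   # width-enlarged: We
narrow = [max(0.0, 1 - abs(a) / 4) for a in angles]  # aperture-limited: Wf
we, wf = half_value_width(angles, wide), half_value_width(angles, narrow)
print(we, wf, wf < we)  # -> 10 4 True
```

The condition checked at the end (Wf < We) is exactly the sizing rule the embodiment imposes on the aperture holes 52.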

FIG. 12 is a view of the diaphragm of FIG. 9 seen from the optical-axis direction. In FIG. 12A, the disc-shaped diaphragm 51 has two diaphragm holes 52 (a first diaphragm hole and a second diaphragm hole) spaced apart from each other along a diameter in the parallax direction (horizontal direction in the drawing). The positions, sizes, and shapes of the two diaphragm holes 52 are set so as to realize the positions, sizes, and shapes of the distance-measuring pupils 31 and 32. As shown in FIG. 12A, when the diaphragm 51 has only the two diaphragm holes 52, no imaging light flux 36 is produced. In that case, the distance-measuring light fluxes 34 and 35 are used not only for measuring the distance to the subject but also for forming the image of the subject. As shown in FIG. 12B, the diaphragm 51 may have, in addition to the two diaphragm holes 52, another diaphragm hole 54 (a third diaphragm hole) substantially at its center.
In this case, the position, size, and shape of the other diaphragm hole 54 are set so as to realize the position, size, and shape of the imaging pupil 33. The shape of each diaphragm hole 52 is not particularly limited; as shown in FIG. 12C, each hole may be vertically elongated, that is, the ratio of its length in the direction perpendicular to the parallax direction (vertical direction in the drawing) to its length in the parallax direction may be 1 or more. In this case, the aspect ratio of each distance-measuring pupil 31, 32 can be made 1 or more, which increases the amount of the distance-measuring light fluxes 34 and 35 passing through the distance-measuring pupils 31 and 32 and thus raises the intensity of the image signals obtained from the distance-measuring light fluxes 34 and 35. Further, as shown in FIG. 12D, the diaphragm 51 may have a diaphragm mechanism 55 for adjusting the opening of the other diaphragm hole 54. The diaphragm mechanism 55 consists of two shielding blades 56, each of which is slidable in the vertical direction in the drawing. This makes it possible to stop down the imaging light flux 36 passing through the imaging pupil 33, thereby deepening the depth of focus and making it easy to obtain a high-quality image of the subject. Further, as shown in FIG. 12E, the diaphragm 51 may have diaphragm mechanisms 57 for adjusting the openings of the diaphragm holes 52. Each diaphragm mechanism 57 consists of two shielding blades 58, each of which is slidable in the vertical direction in the drawing. This allows the amount of light of the distance-measuring light fluxes 34 and 35 passing through the distance-measuring pupils 31 and 32 to be adjusted freely.
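The flux argument above is simple geometry: with the parallax-direction width of a rectangular hole held fixed (so the ranging geometry is unchanged), the admitted flux grows in proportion to the hole's aspect ratio. A worked sketch; the dimensions are illustrative assumptions, not from the patent:

```python
def pupil_flux_gain(width_parallax, height):
    """Relative light-gathering area of a rectangular diaphragm hole
    versus a square hole of the same parallax-direction width.
    Because the parallax-direction width is fixed, the area gain
    equals the aspect ratio (height / width)."""
    aspect_ratio = height / width_parallax
    gain = (width_parallax * height) / (width_parallax * width_parallax)
    return aspect_ratio, gain

# A vertically elongated 2.5:1 hole admits 2.5x the flux of a square
# hole with the same width in the parallax direction
aspect, gain = pupil_flux_gain(1.0, 2.5)
```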

In the present embodiment, each opening of the diaphragm (aperture stop) 51 is referred to as a "diaphragm hole 52," but the openings of the diaphragm 51 need not be through holes. That is, the diaphragm may be a transparent glass plate on which a light-shielding film is formed, a liquid crystal shutter, or an electrochromic device. Further, although a transmissive aperture stop is illustrated as the diaphragm 51 in the present embodiment, a reflective aperture stop may be used instead. In this case, a member in which the portions of a mirror other than the diaphragm holes are covered with a light-absorbing material may be used, or a member in which only the diaphragm-hole portions of a non-reflective plate (a metal plate or the like) are covered with mirrors may be used. In the claims and the present specification, the portions (transparent glass, mirror) corresponding to the diaphragm holes of a reflective aperture stop are also referred to as "diaphragm holes."

Next, a third embodiment of the present invention will be described. In the third embodiment, the distance measuring device according to the first or second embodiment described above is applied to an automobile as a moving body.

FIGS. 13 and 14 are diagrams schematically illustrating the configuration of a driving support system in an automobile as a moving body according to the present embodiment.

In FIGS. 13 and 14, a vehicle 59 includes a distance measuring device 60, which has the camera 10, the image analysis unit 13, and the distance information acquisition unit 14, and a vehicle position determination unit 61. The vehicle position determination unit 61 determines the position of the vehicle 59 relative to a preceding vehicle based on the distance measurement result calculated by the distance measuring device 60, for example, the distance to the preceding vehicle. The image analysis unit 13, the distance information acquisition unit 14, and the vehicle position determination unit 61 may each be implemented in software (a program), in hardware, or in a combination of the two. For example, the processing of each unit may be realized by storing a program in the memory of a computer (a microcomputer, an FPGA, or the like) built into the camera 10 and causing the computer to execute the program. Alternatively, a dedicated processor such as an ASIC that realizes all or part of the processing of each unit with logic circuits may be provided.

The vehicle 59 further includes a vehicle information acquisition device 62 (moving-body information acquisition device), a control device 63, and an alarm device 64, and the vehicle position determination unit 61 is connected to the vehicle information acquisition device 62, the control device 63, and the alarm device 64. The vehicle position determination unit 61 acquires at least one of the vehicle speed, yaw rate, steering angle, and the like of the vehicle 59 from the vehicle information acquisition device 62 as vehicle information (moving-body information). The control device 63 controls the vehicle 59 based on the determination result of the vehicle position determination unit 61, and the alarm device 64 issues a warning based on that result. The control device 63 is, for example, an ECU (engine control unit). For example, when the determination result of the vehicle position determination unit 61 indicates a high possibility of collision with the preceding vehicle, the control device 63 avoids the collision or reduces the damage by applying the brakes of the vehicle 59, releasing the accelerator, suppressing the engine output, and so on.
Likewise, when the possibility of collision with the preceding vehicle is high, the alarm device 64 warns the user, for example by sounding an audible alarm, displaying warning information on the screen of a car navigation system or the like, or vibrating the seat belt or steering wheel. In the present embodiment, the camera 10 of the distance measuring device 60 images the surroundings of the vehicle 59, for example, the area ahead of or behind it. The control device 63 may also be configured to control the vehicle 59 based not only on the distance measurement result of the distance measuring device 60 but also on the vehicle information acquired by the vehicle information acquisition device 62.

FIG. 15 is a flowchart showing collision avoidance processing as an operation example of the driving support system according to the present embodiment. The operation of each part of the driving support system is described below along this flowchart.

First, in step S1, the camera 10 acquires image signals of a plurality of images (for example, two images for distance measurement and one image for imaging). Next, in step S2, vehicle information is acquired from the vehicle information acquisition device 62; the vehicle information here includes at least one of the vehicle speed, yaw rate, steering angle, and the like of the vehicle 59. Then, in step S3, feature analysis (recognition processing) is performed on at least one of the acquired image signals. Specifically, the image analysis unit 13 recognizes (detects) objects (automobiles, bicycles, pedestrians, lanes, guardrails, brake lamps, and the like) by analyzing feature quantities such as the amount and direction of edges, density values, colors, and luminance values in the image signal. The feature-quantity analysis may be performed on each of the plurality of image signals, or on only some of them (for example, only the one image signal for imaging).
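The edge "amount and direction" used in step S3 can be sketched with per-pixel finite differences. This is only an illustration of the feature extraction stage; the patent does not prescribe a particular operator, and a real system would feed such features to an object classifier:

```python
import numpy as np

def edge_features(image):
    """Per-pixel gradient magnitude (edge amount) and direction,
    computed with central finite differences."""
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)          # vertical / horizontal gradients
    magnitude = np.hypot(gx, gy)       # edge amount
    direction = np.arctan2(gy, gx)     # edge direction in radians
    return magnitude, direction

# A synthetic 8x8 image with a vertical step edge between columns 3 and 4
img = np.zeros((8, 8))
img[:, 4:] = 255.0
mag, ang = edge_features(img)
# The strongest response lies on the two columns flanking the step
```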

In the following step S4, the distance information acquisition unit 14 obtains the parallax between the plurality of images captured by the camera 10, thereby acquiring distance information to the objects present in the captured images. As for the parallax calculation method, techniques such as the SSDA method and the area correlation method are already well known, so their details are omitted in the present embodiment. Steps S2, S3, and S4 may be performed in the above order, or the steps may be performed in parallel. The distance or defocus amount of an object present in the captured images can be calculated from the parallax obtained in step S4 and the internal and external parameters of the camera 10. In the present embodiment, distance information is defined as information on the position of an object, such as the distance to the object, the defocus amount, and the parallax (image shift amount). Distance information is sometimes also called depth information.
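The step S4 pipeline can be sketched in two parts: block matching to find the parallax, and triangulation to convert it to a distance. The patent only names SSDA and area correlation generically, so the following SAD-based matcher and all camera parameters (focal length, baseline, pixel pitch) are illustrative assumptions:

```python
import numpy as np

def ssda_parallax(left, right, x, y, block=5, max_shift=20):
    """SSDA-style block matching: find the horizontal shift that
    minimizes the sum of absolute differences between a block around
    (x, y) in the left image and shifted blocks in the right image."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1]
    best_shift, best_cost = 0, np.inf
    for d in range(max_shift + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
        if cand.shape != ref.shape:
            break
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_shift = cost, d
    return best_shift  # parallax in pixels

def distance_from_parallax(parallax_px, focal_mm, baseline_mm, pixel_mm):
    """Triangulation: Z = f * B / d, with the parallax converted
    from pixels to millimetres (result in mm)."""
    return focal_mm * baseline_mm / (parallax_px * pixel_mm)

# Synthetic stereo pair: the right view is the left view shifted by 4 px
rng = np.random.default_rng(0)
left = rng.random((40, 60))
right = np.roll(left, -4, axis=1)
d_px = ssda_parallax(left, right, x=30, y=20)
z_mm = distance_from_parallax(d_px, focal_mm=8.0, baseline_mm=120.0,
                              pixel_mm=0.006)
```

With the assumed 8 mm focal length, 120 mm baseline, and 6 µm pixel pitch, a 4-pixel parallax corresponds to a range of 40 m.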

Thereafter, in step S5, it is determined whether the obtained distance information falls within a predetermined setting, that is, whether an obstacle exists within the set distance, thereby judging the possibility of a forward or rearward collision. If an obstacle exists within the set distance, it is determined that a collision is possible, and the control device 63 performs an avoidance operation of the vehicle 59 (step S6). Specifically, the control device 63 and the alarm device 64 are notified that a collision is possible. The control device 63 then controls at least one of the moving direction and the moving speed of the vehicle 59; for example, it generates and outputs a control signal that applies the brakes, that is, produces a braking force at each wheel of the vehicle 59, and suppresses the engine output, thereby avoiding a collision with the preceding vehicle or reducing its likelihood. The alarm device 64 also notifies the user of the danger with sound, images, vibration, or the like. The processing then ends. If, on the other hand, no obstacle exists within the set distance, it is determined that no collision is possible, and the processing ends.
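The decision flow of steps S5 and S6 reduces to a threshold check followed by a set of actuator commands. A minimal sketch; the 30 m set distance and the `AvoidanceAction` interface are assumptions for illustration, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class AvoidanceAction:
    brake: bool
    reduce_throttle: bool
    alarm: bool

def collision_possible(obstacle_distances_m, set_distance_m=30.0):
    """Step S5: a collision is judged possible if any detected
    obstacle lies within the set distance."""
    return any(d <= set_distance_m for d in obstacle_distances_m)

def avoidance(obstacle_distances_m, set_distance_m=30.0):
    """Step S6: on a possible collision, command braking, throttle
    reduction, and a user alarm; otherwise do nothing."""
    if collision_possible(obstacle_distances_m, set_distance_m):
        return AvoidanceAction(brake=True, reduce_throttle=True, alarm=True)
    return AvoidanceAction(brake=False, reduce_throttle=False, alarm=False)

act = avoidance([55.2, 28.4])   # one obstacle inside the set distance
idle = avoidance([55.2, 48.4])  # all obstacles beyond the set distance
```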

According to the processing of FIG. 15, obstacles can be detected effectively; that is, obstacles can be detected accurately, collisions can be avoided, and damage can be reduced.

Although collision avoidance based on distance information has been described in the present embodiment, the present invention can also be applied to a vehicle that, based on distance information, follows a preceding vehicle, keeps to the center of its lane, or suppresses lane departure. The present invention is further applicable not only to driving support for the vehicle 59 but also to autonomous driving of the vehicle 59. Moreover, the distance measuring device 60 of the present invention is not limited to vehicles such as automobiles and can be applied to moving bodies such as ships, aircraft, drones, and industrial robots. It can also be applied beyond moving bodies, to devices that make broad use of object recognition, such as equipment used in intersection monitoring systems and intelligent transportation systems (ITS). For example, the present invention may be applied to an intersection surveillance camera, which is a non-moving body, in an intersection monitoring system.

Although the present invention has been described above using the respective embodiments, the present invention is not limited to the embodiments described above.

The present invention can also be realized by processing in which a program that implements one or more functions of each of the above embodiments is supplied to a system or apparatus via a network or a storage medium, and one or more processors of a computer of the system or apparatus read out and execute the program. The present invention can also be realized by a circuit (for example, an ASIC) that implements one or more functions.

10 Camera
11, 53 Optical system
12 Image sensor
12a Imaging pixel
12b Distance-measuring pixel
12c Imaging and distance-measuring pixel
13 Image analysis unit
14 Distance information acquisition unit
30 Exit pupil
31, 32, 47, 48 Distance-measuring pupil
33 Imaging pupil
34, 35 Distance-measuring light flux
36 Imaging light flux
37, 41, 43 Microlens
38, 42, 44-46, 49, 50 PD
39, 65 Opening
40, 66 Light-shielding film
51 Diaphragm
52 Diaphragm hole
54 Other diaphragm hole
59 Vehicle
60 Distance measuring device

Claims (6)

A distance measuring device comprising: an image pickup apparatus having an optical system with a diaphragm and an image sensor in which a plurality of pixels are arrayed; and a distance information acquisition unit that acquires distance information of a subject based on image signals of a pair of optical images having parallax, output from each of the plurality of pixels,
wherein the diaphragm is provided with a first diaphragm hole and a second diaphragm hole that define first and second distance-measuring light fluxes, and a third diaphragm hole that defines an imaging light flux,
each of the plurality of pixels has a first photoelectric conversion unit that receives the first distance-measuring light flux and outputs the image signal of one of the pair of optical images, a second photoelectric conversion unit that receives the second distance-measuring light flux and outputs the image signal of the other of the pair of optical images, and a third photoelectric conversion unit that receives the imaging light flux and outputs an image signal of an optical image of the subject, and
the third photoelectric conversion unit is sandwiched between the first photoelectric conversion unit and the second photoelectric conversion unit with respect to the direction of the parallax.

The distance measuring device according to claim 1, wherein the first diaphragm hole and the second diaphragm hole are arranged along the direction of the parallax,
the width of the incident-angle distribution, in the direction of the parallax, of the light flux that has passed through the first diaphragm hole is smaller than the half-value width of the incident-angle sensitivity characteristic of the first photoelectric conversion unit in the direction of the parallax, and
the width of the incident-angle distribution, in the direction of the parallax, of the light flux that has passed through the second diaphragm hole is smaller than the half-value width of the incident-angle sensitivity characteristic of the second photoelectric conversion unit in the direction of the parallax.

The distance measuring device according to claim 1, wherein the first diaphragm hole and the second diaphragm hole are arranged along the direction of the parallax,
the width, in the direction of the parallax, of the optical image of the first diaphragm hole on a plane including the first photoelectric conversion unit is smaller than the width of the first photoelectric conversion unit in the direction of the parallax, and
the width, in the direction of the parallax, of the optical image of the second diaphragm hole on a plane including the second photoelectric conversion unit is smaller than the width of the second photoelectric conversion unit in the direction of the parallax.

The distance measuring device according to any one of claims 1 to 3, wherein the first photoelectric conversion unit and the second photoelectric conversion unit have a first shape, and the third photoelectric conversion unit has a second shape different from the first shape.

The distance measuring device according to any one of claims 1 to 4, wherein the first diaphragm hole and the second diaphragm hole are arranged apart from each other with respect to the direction of the parallax and have a third shape, and the third diaphragm hole is arranged substantially at the center of the diaphragm and has a fourth shape different from the third shape.

A moving body comprising: a distance measuring device; and an alarm device that issues a warning based on a distance measurement result of the distance measuring device,
wherein the distance measuring device includes an image pickup apparatus having an optical system with a diaphragm and an image sensor in which a plurality of pixels are arrayed, and a distance information acquisition unit that acquires distance information of a subject based on image signals of a pair of optical images having parallax, output from each of the plurality of pixels,
the diaphragm is provided with a first diaphragm hole and a second diaphragm hole that define first and second distance-measuring light fluxes, and a third diaphragm hole that defines an imaging light flux,
each of the plurality of pixels has a first photoelectric conversion unit that receives the first distance-measuring light flux and outputs the image signal of one of the pair of optical images, a second photoelectric conversion unit that receives the second distance-measuring light flux and outputs the image signal of the other of the pair of optical images, and a third photoelectric conversion unit that receives the imaging light flux and outputs an image signal of an optical image of the subject, and
the third photoelectric conversion unit is sandwiched between the first photoelectric conversion unit and the second photoelectric conversion unit with respect to the direction of the parallax.
JP2017032417A 2016-03-04 2017-02-23 Distance measuring device and moving object Active JP6957162B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/JP2017/008640 WO2017150736A1 (en) 2016-03-04 2017-02-28 Moving object and ranging apparatus capable of acquiring image and high-accurately measuring distance
US16/074,278 US11280606B2 (en) 2016-03-04 2017-02-28 Moving object and ranging apparatus capable of acquiring image and high-accurately measuring distance
CN201780015269.9A CN108885099B (en) 2016-03-04 2017-02-28 Distance measuring device and moving object capable of obtaining image and performing high-precision distance measurement

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016042699 2016-03-04
JP2016042699 2016-03-04

Publications (2)

Publication Number Publication Date
JP2017161512A JP2017161512A (en) 2017-09-14
JP6957162B2 true JP6957162B2 (en) 2021-11-02

Family

ID=59857526

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2017032417A Active JP6957162B2 (en) 2016-03-04 2017-02-23 Distance measuring device and moving object

Country Status (2)

Country Link
JP (1) JP6957162B2 (en)
CN (1) CN108885099B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7039236B2 (en) * 2017-09-29 2022-03-22 キヤノン株式会社 Sequential comparison type AD converter, image pickup device, image pickup system, mobile body
JP7023685B2 (en) 2017-11-30 2022-02-22 キヤノン株式会社 Imaging device, imaging system, mobile body
JP7023684B2 (en) * 2017-11-30 2022-02-22 キヤノン株式会社 Imaging device, imaging system, mobile body
JP7075208B2 (en) * 2017-12-22 2022-05-25 キヤノン株式会社 Imaging equipment and imaging system
JP7157529B2 (en) * 2017-12-25 2022-10-20 キヤノン株式会社 Imaging device, imaging system, and imaging device driving method
JP7102161B2 (en) * 2018-02-15 2022-07-19 キヤノン株式会社 Imaging equipment, imaging system, and moving objects
KR102191743B1 (en) * 2019-03-27 2020-12-16 서울대학교산학협력단 Distance measurement device
KR102191747B1 (en) * 2019-03-27 2020-12-16 서울대학교산학협력단 Distance measurement device and method
WO2021210060A1 (en) * 2020-04-14 2021-10-21 オリンパス株式会社 Solid-state imaging element, imaging device, endoscope device, and operation microscope system
US11543654B2 (en) * 2020-09-16 2023-01-03 Aac Optics Solutions Pte. Ltd. Lens module and system for producing image having lens module
CN114071025A (en) * 2021-12-10 2022-02-18 维沃移动通信有限公司 Camera shooting assembly, electronic equipment and control method and device of electronic equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1984785B1 (en) * 2006-02-13 2014-05-07 3M Innovative Properties Company Monocular three-dimensional imaging
JP5040700B2 (en) * 2008-02-12 2012-10-03 ソニー株式会社 Imaging device and imaging apparatus
JP2012008370A (en) * 2010-06-25 2012-01-12 Nikon Corp Imaging device and interchangeable lens
JP2013097280A (en) * 2011-11-04 2013-05-20 Nikon Corp Imaging apparatus
DE102012107329B4 (en) * 2012-08-09 2020-04-23 Trimble Jena Gmbh Distance measuring system
EP2698602A1 (en) * 2012-08-16 2014-02-19 Leica Geosystems AG Hand-held distance measuring device with angle calculation unit
US9348019B2 (en) * 2012-11-20 2016-05-24 Visera Technologies Company Limited Hybrid image-sensing apparatus having filters permitting incident light in infrared region to be passed to time-of-flight pixel
JP6405243B2 (en) * 2014-03-26 2018-10-17 キヤノン株式会社 Focus detection apparatus and control method thereof

Also Published As

Publication number Publication date
CN108885099B (en) 2021-11-16
JP2017161512A (en) 2017-09-14
CN108885099A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
JP6957162B2 (en) Distance measuring device and moving object
JP7266652B2 (en) Range finder and moving body
KR101458287B1 (en) Ranging camera apparatus
JP5610254B2 (en) Imaging apparatus and road surface state determination method
US9414045B2 (en) Stereo camera
WO2017150553A1 (en) Image pickup device
JP7098790B2 (en) Imaging control device and moving object
JP2018077190A (en) Imaging apparatus and automatic control system
US10893197B2 (en) Passive and active stereo vision 3D sensors with variable focal length lenses
US20130188026A1 (en) Depth estimating image capture device and image sensor
JP2010204059A (en) Raindrop detection apparatus and on-vehicle monitoring apparatus
JP6789643B2 (en) Imaging device
JP2017167126A (en) Range-finding device and moving body
US20180160047A1 (en) Us
US11280606B2 (en) Moving object and ranging apparatus capable of acquiring image and high-accurately measuring distance
KR20140028539A (en) Apparatus and method of generating 3-dimensional image
JP2015195489A (en) Collision preventing system, collision preventing method and computer program
US10900770B2 (en) Distance measuring device, imaging apparatus, moving device, robot device, and recording medium
JP2016127512A (en) Imaging apparatus
US20180160101A1 (en) Variable focal length lenses and illuminators on time of flight 3d sensing systems
JP2008070629A (en) Light detection device, camera, focus detection device, and optical characteristic measuring device
JP2015194388A (en) Imaging device and imaging system
JP2014010284A (en) Focus detector, imaging apparatus and camera system
JP7484904B2 (en) Image pickup device, signal processing device, signal processing method, program, and image pickup device
JP2001208962A (en) Focus detector

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20200217

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20210330

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20210511

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20210907

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20211006

R151 Written notification of patent or utility model registration

Ref document number: 6957162

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151