WO2019156072A1 - Attitude estimating device - Google Patents
Attitude estimating device
- Publication number
- WO2019156072A1 (PCT/JP2019/004065)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- road surface
- unit
- points
- posture
- captured image
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
- B60R1/27—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/02—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/26—Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Definitions
- This disclosure relates to a posture estimation device that estimates the posture of an imaging unit.
- Patent Document 1 discloses, as a technique for estimating the posture of an imaging unit mounted on a vehicle while the vehicle is running, a technique in which an optical flow of feature points on the road surface is obtained, a homography matrix is estimated from the optical flow, and the posture of the imaging unit is thereby estimated.
- One aspect of the present disclosure is to provide a technique for improving accuracy in a posture estimation device that estimates the posture of an imaging unit.
- the posture estimation apparatus includes an image acquisition unit, a feature point extraction unit, a basic estimation unit, and a posture estimation unit.
- the image acquisition unit is configured to repeatedly acquire each captured image obtained by one or a plurality of imaging units mounted on the moving object.
- the feature point extraction unit is configured to extract a plurality of feature points as a plurality of pre-movement points from at least one of the repeatedly acquired captured images, and to extract, from a captured image acquired later in time series than the captured image from which the pre-movement points were extracted, a plurality of feature points corresponding to the pre-movement points as a plurality of post-movement points.
- the basic estimation unit is configured to estimate a basic matrix, which is a matrix for transitioning the plurality of pre-movement points to the plurality of post-movement points and which represents the transformation when it is assumed that the moving object moves along a preset trajectory and does not rotate.
- the posture estimation unit is configured to estimate the posture of one or a plurality of imaging units according to a basic matrix.
- with such a configuration, the moving direction of the imaging unit can be estimated by obtaining a simple basic matrix, so the rough posture of the imaging unit can be estimated with simple processing.
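As a hedged illustration of why the no-rotation assumption simplifies the estimation: under pure translation the basic (essential) matrix reduces to the skew-symmetric form E = [t]x built only from the translation direction t, so the epipolar constraint u2ᵀEu1 = 0 involves far fewer parameters than a full homography. The coordinates below are made-up normalized image points, not values from the disclosure.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Under the no-rotation assumption the basic matrix is E = [t]_x, so the
# epipolar constraint u2^T E u1 = 0 depends only on the translation
# direction t (an assumed forward motion here).
t = np.array([0.0, 1.0, 0.0])
E = skew(t)

u1 = np.array([0.2, 0.5, 1.0])   # pre-movement point (homogeneous coords)
u2 = u1 + 0.3 * t                # point displaced along the motion direction
residual = u2 @ E @ u1           # near zero for a consistent correspondence
```

Because u2 lies in the plane spanned by u1 and t, the residual vanishes; for real feature tracks it is merely small, which is what the later least-squares step exploits.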
- the display system 1 is a system that is mounted on a vehicle such as a passenger car and that combines images captured by a plurality of cameras and displays the images on a display unit.
- the system has a function of estimating the camera postures and correcting the displayed image according to the estimated postures.
- the display system 1 includes a control unit 10.
- the display system 1 may include a front camera 21F, a rear camera 21B, a right camera 21R, a left camera 21L, various sensors 22, a display unit 26, and the like.
- a vehicle equipped with the display system 1 is also referred to as a host vehicle.
- the front camera 21F, the rear camera 21B, the right camera 21R, and the left camera 21L are collectively referred to as a plurality of cameras 21.
- the front camera 21F and the rear camera 21B are attached to the front part and the rear part of the own vehicle, respectively, in order to image the road ahead and behind the own vehicle.
- the right camera 21R and the left camera 21L are attached to the right side surface and the left side surface of the host vehicle, respectively, in order to image the roads to the right and left of the host vehicle. That is, the plurality of cameras 21 are arranged at different positions on the host vehicle.
- the plurality of cameras 21 are set so that a part of the imaging regions overlap each other, and a portion where the imaging regions overlap in the captured image is defined as an overlapping region.
- the various sensors 22 include, for example, a vehicle speed sensor, a yaw rate sensor, a steering angle sensor, and the like.
- the various sensors 22 are used to detect whether or not the host vehicle is traveling in a steady manner such as a straight movement or a constant turning movement.
- the control unit 10 includes a microcomputer having a CPU 11 and a semiconductor memory (hereinafter, memory 12) such as a RAM or a ROM. Each function of the control unit 10 is realized by the CPU 11 executing a program stored in a non-transitional physical recording medium.
- the memory 12 corresponds to a non-transitional tangible recording medium that stores the program. By executing this program, a method corresponding to the program is executed. Note that a non-transitional tangible recording medium means a recording medium excluding electromagnetic waves.
- the control unit 10 may include one microcomputer or a plurality of microcomputers.
- the control unit 10 includes a camera posture estimation unit 16 and an image drawing unit 17.
- the method of realizing the functions of the respective units included in the control unit 10 is not limited to software, and some or all of the functions may be realized using one or a plurality of hardware.
- when a function is realized by an electronic circuit, that is, by hardware, the electronic circuit may be realized by a digital circuit, an analog circuit, or a combination thereof.
- the camera posture estimation unit 16 estimates the postures of the plurality of cameras 21.
- the function as the image drawing unit 17 generates, from the images captured by the plurality of cameras 21, a bird's-eye view image of the road around the vehicle as seen from vertically above. The generated bird's-eye view image is then displayed on the display unit 26, which is configured with a liquid crystal display or the like and is disposed in the vehicle interior.
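The core of such bird's-eye view generation can be sketched as intersecting each pixel's viewing ray with the road plane using the camera's estimated posture. All parameter values below (intrinsics, mounting height, a straight-down rotation, a z-up world frame) are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def pixel_to_ground(uv, K, R, cam_pos):
    """Return the road-plane point (z = 0) seen at pixel uv.

    K: 3x3 intrinsic matrix, R: camera-to-world rotation,
    cam_pos: camera position in world coordinates (z = height above road).
    """
    ray_cam = np.linalg.solve(K, np.array([uv[0], uv[1], 1.0]))
    ray_world = R @ ray_cam
    s = -cam_pos[2] / ray_world[2]   # scale factor that reaches z = 0
    return cam_pos + s * ray_world   # ground point (x, y, 0)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])       # toy posture: camera looking straight down
cam = np.array([0.0, 0.0, 1.5])      # assumed 1.5 m above the road
ground = pixel_to_ground((320.0, 240.0), K, R, cam)
```

Because the mapping depends on R and the camera height, an error in the estimated posture directly shifts where each pixel lands on the ground plane, which is why the posture estimation described here matters for stitching the bird's-eye view.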
- when the captured images are combined, the coordinates serving as the boundaries of the captured images are set appropriately according to the postures of the plurality of cameras 21 so that the boundaries between the combined images are at appropriate positions.
- the control unit 10 temporarily stores captured images obtained from the plurality of cameras 21 in the memory 12.
- posture estimation processing executed by the control unit 10 will be described with reference to the flowcharts of FIGS. 2A and 2B.
- the posture estimation process is started, for example, when the host vehicle is performing steady running, and is repeatedly performed while performing steady running.
- the posture estimation processing in the present embodiment is performed when the host vehicle is moving straight as steady running.
- here, straight-ahead movement indicates a state in which the traveling speed of the host vehicle is equal to or higher than a preset threshold and the steering angle or the yaw rate is less than a preset threshold.
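The straight-running check above can be sketched as a simple threshold test. The threshold values are assumptions chosen for illustration, and the combination of steering angle and yaw rate (the text says "or", so either signal could serve) is also an interpretation.

```python
# Assumed thresholds, not taken from the patent.
SPEED_MIN_KMH = 20.0
STEER_MAX_DEG = 1.0
YAW_RATE_MAX_DPS = 0.5

def is_moving_straight(speed_kmh, steer_deg, yaw_rate_dps):
    """True when the vehicle can be treated as moving straight ahead.

    Requires speed at or above its threshold; here both steering angle
    and yaw rate are checked (the disclosure allows either signal).
    """
    return (speed_kmh >= SPEED_MIN_KMH
            and abs(steer_deg) < STEER_MAX_DEG
            and abs(yaw_rate_dps) < YAW_RATE_MAX_DPS)
```

The posture estimation process would only start, and keep running, while this predicate holds.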
- the control unit 10 acquires images captured by the plurality of cameras 21, respectively. At this time, a captured image captured in the past, for example, a captured image captured 10 frames before is also acquired from the memory 12.
- control unit 10 extracts a plurality of feature points from each captured image, and detects an optical flow for the plurality of feature points.
- the processes from S120 onward are performed for each of the plurality of cameras 21.
- a feature point represents an edge in the captured image at which the luminance or chromaticity changes.
- among these edges, points indicating parts that are relatively easy to track in image processing, such as corners of structures, are adopted as feature points. It is sufficient that at least two feature points can be extracted.
- in this process, the control unit 10 may extract many feature points such as corners of structures such as buildings and corners of window glass.
- that is, the control unit 10 extracts a plurality of feature points as a plurality of pre-movement points from each of the repeatedly acquired captured images, and extracts, from a captured image acquired later in time series than the captured image from which the pre-movement points were extracted, a plurality of feature points corresponding to the pre-movement points as a plurality of post-movement points.
- the control unit 10 then detects, as an optical flow, the line segment connecting each pre-movement point to its post-movement point.
- the control unit 10 estimates a basic matrix E on the premise of linear motion.
- the basic matrix E is a matrix for transitioning the plurality of pre-movement points U1 to the plurality of post-movement points U2, and represents the transformation when it is assumed that the moving object moves along a preset trajectory, in this embodiment a linear motion, and does not rotate.
- the vehicle coordinate system is defined as shown in FIG. That is, the road surface at the center of the vehicle is the origin, the right direction of the vehicle is positive on the X axis, the rear direction of the vehicle is positive on the Y axis, and the downward direction of the vehicle is positive on the Z axis.
- the control unit 10 obtains the basic matrix E for each of the plurality of pre-movement points U1, and performs an optimization operation such as the least squares method on the plurality of basic matrices E to obtain the most probable basic matrix E.
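A hedged sketch of this least-squares step (an illustration under the stated assumptions, not the disclosure's implementation): with no rotation, each pair of pre- and post-movement points (U1, U2) yields the linear constraint (U1 × U2) · t = 0 on the translation direction t, so the most probable t, up to scale, is the unit null vector of the stacked constraints, obtained via SVD. All point values are synthetic.

```python
import numpy as np

def estimate_translation(points1, points2):
    """Least-squares translation direction from point correspondences.

    Each row of the constraint matrix is U1_i x U2_i; the smallest right
    singular vector minimizes the sum of squared epipolar residuals.
    """
    C = np.cross(points1, points2)   # one constraint row per correspondence
    _, _, vt = np.linalg.svd(C)
    return vt[-1]                    # unit vector t minimizing ||C t||

rng = np.random.default_rng(0)
t_true = np.array([0.0, 1.0, 0.0])                       # assumed forward motion
u1 = np.column_stack([rng.uniform(-1, 1, 8),
                      rng.uniform(-1, 1, 8),
                      np.ones(8)])                       # pre-movement points
u2 = u1 + rng.uniform(0.1, 0.5, (8, 1)) * t_true         # pure-translation flow
t_est = estimate_translation(u1, u2)                     # ≈ ±t_true
```

The recovered direction is defined only up to sign and scale, which is consistent with estimating the moving direction rather than the full motion.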
- the control unit 10 estimates the camera posture. This process is configured to estimate the postures of the plurality of cameras 21 according to the basic matrix E. For example, the control unit 10 obtains the rotation component of the camera as follows.
- the control unit 10 first converts the translation vector V included in the basic matrix E into the world coordinate system using the rotation matrix R.
- next, a deviation angle dx about the X axis between the translation vector VW in the world coordinate system and the translation vector V in the vehicle coordinate system is calculated, and a rotation matrix Rx for correcting the deviation angle dx is obtained.
- the rotation matrix R is then corrected using Rx and Rz to obtain a rotation matrix R2 corresponding to the corrected camera angle.
- at this point, the rotational component around the Y axis is still the value before correction. If the basic matrix E is multiplied by such a rotation matrix R2 and the operation of transitioning the plurality of pre-movement points U1 to the plurality of post-movement points U2 is then performed, the camera posture can be estimated with the rotation component around the Y axis taken into consideration.
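The deviation-angle correction above can be sketched as follows. The axis conventions, the reference direction, and the vector values are assumptions for illustration: the angle between the estimated translation direction and the expected straight-ahead direction is measured in the Y-Z plane (i.e., about the X axis), and a rotation matrix Rx canceling it is built.

```python
import numpy as np

def rotation_about_x(angle):
    """Standard right-handed rotation matrix about the X axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])

def correct_about_x(v_est, v_ref):
    """Rx that rotates v_est's Y-Z direction onto v_ref's."""
    # angles of both vectors in the Y-Z plane, measured around the X axis
    dx = np.arctan2(v_est[2], v_est[1]) - np.arctan2(v_ref[2], v_ref[1])
    return rotation_about_x(-dx)   # undo the deviation angle dx

v_ref = np.array([0.0, 1.0, 0.0])                    # expected straight-ahead motion
v_est = np.array([0.0, np.cos(0.1), np.sin(0.1)])    # tilted by 0.1 rad about X
Rx = correct_about_x(v_est, v_ref)
```

The same pattern, applied about the Z axis, would yield the Rz used together with Rx to correct the rotation matrix R.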
- as for the rotation component around the Y axis, it may be ignored, a preset rotation matrix Ry may be used, or it may be calculated as follows.
- when obtaining the rotation matrix Ry, the control unit 10 first extracts the road surface flow in S150. That is, the control unit 10 is configured to extract feature points on the road surface on which the moving object travels as road surface front points from at least one of the repeatedly acquired captured images. The control unit 10 recognizes road surface pixels based on their luminance.
- the control unit 10 is also configured to extract, from a captured image acquired later in time series than the captured image from which the road surface front points were extracted, feature points corresponding to the road surface front points as a plurality of road surface rear points. The control unit 10 then detects, as a road surface flow (an optical flow), the line segment connecting each road surface front point to its road surface rear point.
- the control unit 10 estimates the camera angle and the camera height from the vehicle movement amount.
- the basic matrix E, the road surface front point, and the road surface rear point are used to estimate the height of one or a plurality of cameras 21 with respect to the road surface and the rotation angle with respect to the road surface.
- specifically, the control unit 10 performs the following process. That is, using the rotation matrix R2, the homography matrix H is obtained.
- a rotation matrix Ry about the Y axis that optimally satisfies the homography relation is then obtained using a known nonlinear optimization method such as Newton's method.
- the road surface normal vector Nw is obtained by specifying the position of the plane representing the road surface from the plurality of road surface flows and taking the perpendicular to that plane.
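Obtaining a plane's perpendicular from scattered road-surface points can be sketched as a least-squares plane fit: the normal is the direction of least variance of the centered points, i.e. the smallest right singular vector. The point values below are synthetic, and treating the road as the plane z = 0 is an assumption for the example.

```python
import numpy as np

def plane_normal(points):
    """Unit normal of the least-squares plane through an (N, 3) point set."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]   # direction of least variance = plane normal (up to sign)

rng = np.random.default_rng(1)
xy = rng.uniform(-5.0, 5.0, (30, 2))
pts = np.column_stack([xy, np.zeros(30)])   # synthetic flat road at z = 0
n = plane_normal(pts)                       # ≈ ±(0, 0, 1)
```

In practice the points would come from triangulated road-surface flows, and the fitted normal plays the role of Nw in the homography step.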
- in this way, the rotation matrix Ry for the one remaining axis is obtained separately by this processing.
- the process of obtaining the other two axes does not require the road surface flow: Rx and Rz can be obtained using flows from structures. For this reason, this process can improve robustness compared with relying only on road surface feature points, which are less likely to produce contrast differences.
- the control unit 10 associates feature points in the overlapping regions of the plurality of cameras 21. That is, the control unit 10 determines whether or not feature points located in an overlapping region represent the same object, and when they do, associates those feature points across the captured images. The camera postures obtained in this way are accumulated and recorded in the memory 12.
- the control unit 10 estimates the relative position between the cameras. That is, the control unit 10 estimates the inter-camera relative position by recognizing a shift between the coordinates of the feature points located in the overlapping area and the coordinates corresponding to the corrected camera posture.
- the control unit 10 acquires the camera postures accumulated in the memory 12 as the cumulative camera postures.
- the control unit 10 integrates the cumulative camera postures.
- the control unit 10 is configured to perform filtering on a plurality of posture estimation results and output the filtered values as the postures of the plurality of cameras 21.
- any filtering method such as simple average, weighted average, and least square method can be adopted.
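The filtering of repeated posture estimates can be sketched as below. The estimate values and the weights (e.g. a per-estimate confidence) are assumptions for illustration; the disclosure permits a simple average, a weighted average, the least squares method, or any other filtering.

```python
import numpy as np

# Hypothetical repeated estimates of one posture component (e.g. pitch, deg)
estimates = np.array([1.02, 0.98, 1.05, 0.97, 1.00])
weights = np.array([1.0, 2.0, 1.0, 2.0, 1.0])   # assumed confidences

simple_avg = estimates.mean()                      # simple average
weighted_avg = np.average(estimates, weights=weights)  # weighted average
```

Filtering three or more repeated estimates in this way damps per-frame noise before the value is output as the camera posture.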
- image conversion is processing for generating a bird's-eye view image in consideration of the camera postures. Since the boundaries for combining the multiple captured images are set in consideration of the camera postures, it is possible to suppress cases in which one object in the imaging region is displayed twice in the bird's-eye view image or an object is not displayed at all.
- control unit 10 displays the bird's-eye view image as a video on the display unit 26, and then ends the posture estimation process of FIGS. 2A and 2B.
- the control unit 10 is configured to repeatedly acquire each captured image obtained by the one or more cameras 21 mounted on the moving object in S110.
- in S120, the control unit 10 extracts a plurality of feature points as a plurality of pre-movement points from at least one of the repeatedly acquired captured images, and extracts, from a captured image acquired later in time series than the captured image from which the pre-movement points were extracted, a plurality of feature points corresponding to the pre-movement points as a plurality of post-movement points.
- in S130, the control unit 10 is configured to estimate a basic matrix, which is a matrix for transitioning the plurality of pre-movement points to the plurality of post-movement points and which represents the transformation when it is assumed that the moving object moves along a preset trajectory and does not rotate.
- control unit 10 is configured to estimate the posture of one or a plurality of cameras 21 according to the basic matrix in S140.
- according to such a configuration, the rough posture of the camera 21 can be estimated by simple processing. Further, even when feature points on the road surface cannot be accurately extracted, as in the case where the contrast of the road surface is low, feature points can still be extracted from structures and the like, so the accuracy of estimating the camera posture can be improved.
- in S150, the control unit 10 is configured to extract, from at least one of the repeatedly acquired captured images, feature points on the road surface on which the moving object travels as road surface front points, and to extract, from a captured image acquired later in time series than the captured image from which the road surface front points were extracted, feature points corresponding to the road surface front points as a plurality of road surface rear points.
- the control unit 10 is also configured to estimate the height of the one or more cameras 21 relative to the road surface and the rotation angle relative to the road surface using the basic matrix, the road surface front points, and the road surface rear points.
- in this way, the height with respect to the road surface and the rotation angle with respect to the road surface are estimated using the feature points on the road surface, so the estimation accuracy can be improved.
- the control unit 10 is configured to repeat at least the processes of S120 to S140 three or more times while changing the captured images from which the feature points are extracted, thereby repeatedly estimating the postures of the one or more cameras 21.
- control unit 10 is configured to perform filtering on a plurality of posture estimation results and output the filtered values as the postures of one or a plurality of cameras 21.
- in the above embodiment, the configuration includes the plurality of cameras 21, but the configuration may include only one camera.
- in the above embodiment, the postures of the plurality of cameras 21 are individually estimated using the captured images obtained from the respective cameras 21, but the present disclosure is not limited to this.
- for example, the height of the camera whose posture is to be estimated may be estimated using the height of another camera or the like. This is possible because the positional relationship between the camera whose posture is to be estimated and the other cameras is often set in advance.
- that is, in S160, the control unit 10 may be configured to estimate the height of a camera 21 with respect to the road surface using the height from the road surface obtained for a camera 21 other than the one whose height is being estimated. In particular, the control unit 10 may be configured to use, for this purpose, the height from the road surface of the camera 21 whose captured image contains the largest proportion of road surface among the other cameras 21.
- according to such a configuration, the height from the road surface of another camera 21 can be reused, so the processing can be simplified.
- moreover, since the height is taken from the camera 21 whose captured image contains the largest proportion of road surface, a height that is highly likely to have been obtained accurately is used, and the height from the road surface can therefore be estimated accurately.
- a plurality of functions of one constituent element in the above embodiment may be realized by a plurality of constituent elements, or a single function of one constituent element may be realized by a plurality of constituent elements. Further, a plurality of functions possessed by a plurality of constituent elements may be realized by one constituent element, or one function realized by a plurality of constituent elements may be realized by one constituent element. Moreover, a part of the configuration of the above embodiment may be omitted.
- the present disclosure can also be realized in various forms, such as a device that is a constituent element of the display system 1, a program for causing a computer to function as the display system 1, a non-transitional tangible recording medium such as a semiconductor memory that records the program, and a posture estimation method.
- control unit 10 corresponds to the posture estimation device according to the present disclosure.
- process of S110 among the processes executed by the control unit 10 corresponds to an image acquisition unit referred to in the present disclosure
- process of S120 corresponds to a feature point extraction unit referred to in the present disclosure.
- the process of S130 among the processes executed by the control unit 10 corresponds to a basic estimation unit in the present disclosure
- the process of S140 corresponds to an attitude estimation unit and a first estimation unit in the present disclosure
- the process of S150 corresponds to a travel point extraction unit as referred to in the present disclosure
- the process of S160 corresponds to a second estimation unit referred to in the present disclosure
- the process of S230 corresponds to the filtering unit referred to in the present disclosure.
Abstract
An attitude estimating device according to one aspect of the present disclosure is provided with an image acquiring unit (S110), a feature point extracting unit (S120), a base estimating unit (S130), and an attitude estimating unit (S140). The feature point extracting unit is configured to extract a plurality of feature points as a plurality of pre-movement points, from within at least one captured image that is acquired repeatedly, and to extract a plurality of feature points corresponding to the plurality of pre-movement points, as a plurality of post-movement points, from within a captured image acquired chronologically later than the captured image from which the plurality of pre-movement points were extracted. The base estimating unit is configured to estimate a base matrix, which is a matrix for causing the plurality of pre-movement points to migrate to the plurality of post-movement points, and which represents a matrix when a moving object is assumed to move along a preset trajectory without rotating. The attitude estimating unit is configured to estimate the attitude of one or a plurality of image capturing units in accordance with the base matrix.
Description
This international application claims priority based on Japanese Patent Application No. 2018-019011 filed with the Japan Patent Office on February 6, 2018, the entire contents of which are incorporated into this international application by reference.
This disclosure relates to a posture estimation device that estimates the posture of an imaging unit.
For example, Patent Document 1 below discloses, as a technique for estimating the posture of an imaging unit mounted on a vehicle while the vehicle is running, a technique in which an optical flow of feature points on the road surface is obtained, a homography matrix is estimated from the optical flow, and the posture of the imaging unit is thereby estimated.
However, as a result of the inventor's detailed study, it was found that obtaining a homography matrix requires estimating many parameters, and that the technique of estimating the homography matrix in a single process as described above cannot provide stable accuracy.
One aspect of the present disclosure is to provide a technique for improving accuracy in a posture estimation device that estimates the posture of an imaging unit.
The posture estimation apparatus according to an aspect of the present disclosure includes an image acquisition unit, a feature point extraction unit, a basic estimation unit, and a posture estimation unit. The image acquisition unit is configured to repeatedly acquire each captured image obtained by one or a plurality of imaging units mounted on the moving object.
The feature point extraction unit is configured to extract a plurality of feature points as a plurality of pre-movement points from at least one of the repeatedly acquired captured images, and to extract, from a captured image acquired later in time series than the captured image from which the pre-movement points were extracted, a plurality of feature points corresponding to the pre-movement points as a plurality of post-movement points.
The basic estimation unit is configured to estimate a basic matrix, which is a matrix for transitioning the plurality of pre-movement points to the plurality of post-movement points and which represents the transformation when it is assumed that the moving object moves along a preset trajectory and does not rotate. The posture estimation unit is configured to estimate the posture of the one or more imaging units according to the basic matrix.
According to such a configuration, the moving direction of the imaging unit can be estimated by obtaining a simple basic matrix, so the rough posture of the imaging unit can be estimated with simple processing.
Hereinafter, embodiments of one aspect of the present disclosure will be described with reference to the drawings.
[1. Embodiment]
[1-1. Configuration]
The display system 1 according to the present embodiment is mounted on a vehicle such as a passenger car, combines images captured by a plurality of cameras, and displays the result on a display unit; it has a function of estimating the camera postures and correcting the displayed image according to those postures. As shown in FIG. 1, the display system 1 includes a control unit 10. The display system 1 may also include a front camera 21F, a rear camera 21B, a right camera 21R, a left camera 21L, various sensors 22, a display unit 26, and the like.
Note that a vehicle equipped with the display system 1 is also referred to as a host vehicle. Further, the front camera 21F, the rear camera 21B, the right camera 21R, and the left camera 21L are collectively referred to as a plurality of cameras 21.
The front camera 21F and the rear camera 21B are attached to the front part and the rear part of the own vehicle, respectively, in order to image the road ahead and behind the own vehicle. The right camera 21R and the left camera 21L are attached to the right side surface and the left side surface of the host vehicle, respectively, in order to image the right and left roads of the host vehicle. That is, the plurality of cameras 21 are respectively arranged at different positions on the host vehicle.
The cameras 21 are arranged so that their imaging regions partially overlap one another; a portion of a captured image where imaging regions overlap is defined as an overlapping region.
The various sensors 22 include, for example, a vehicle speed sensor, a yaw rate sensor, and a steering angle sensor. They are used to detect whether the host vehicle is in steady travel, such as straight-ahead motion or constant turning.
The control unit 10 includes a microcomputer having a CPU 11 and a semiconductor memory such as a RAM or a ROM (hereinafter, memory 12). Each function of the control unit 10 is realized by the CPU 11 executing a program stored in a non-transitory tangible recording medium. In this example, the memory 12 corresponds to the non-transitory tangible recording medium storing the program, and executing the program performs the method corresponding to it. Here, "non-transitory tangible recording medium" excludes electromagnetic waves from the category of recording media. The control unit 10 may include one microcomputer or a plurality of microcomputers.
As shown in FIG. 1, the control unit 10 includes a camera posture estimation unit 16 and an image drawing unit 17. The functions of these units are not necessarily realized by software; some or all of them may be realized by one or more pieces of hardware. For example, when a function is realized by an electronic circuit, the circuit may be a digital circuit, an analog circuit, or a combination thereof.
The camera posture estimation unit 16 and the image drawing unit 17 perform the posture estimation processing described later according to the program. In particular, the camera posture estimation unit 16 estimates the posture of each of the cameras 21.
The image drawing unit 17 generates, from the images captured by the cameras 21, a bird's-eye image of the road around the vehicle as viewed from directly above. The generated bird's-eye image is displayed on the display unit 26, which is configured with a liquid crystal display or the like and arranged in the vehicle interior. When generating the bird's-eye image, the image drawing unit 17 sets the coordinates of the boundaries between the captured images according to the postures of the cameras 21 so that the boundaries of the combined images fall at appropriate positions. The control unit 10 temporarily stores the captured images obtained from the cameras 21 in the memory 12.
[1-2. Processing]
Next, the posture estimation processing executed by the control unit 10 will be described with reference to the flowcharts of FIGS. 2A and 2B. The posture estimation processing is started, for example, when the host vehicle is in steady travel, and is repeated while the steady travel continues.
In particular, the posture estimation processing in the present embodiment is performed when the host vehicle is moving straight ahead as its steady travel. Straight-ahead motion refers to a state in which the traveling speed of the host vehicle is equal to or higher than a preset threshold and the steering angle or the yaw rate is below a preset threshold.
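The straight-ahead condition above can be sketched as a simple threshold check. The threshold values and the sensor interface below are illustrative assumptions, not taken from this publication:

```python
# Hedged sketch of the "straight-ahead motion" gate described above.
# The numeric thresholds are hypothetical.
SPEED_MIN_KMH = 20.0      # assumed minimum-speed threshold
YAW_RATE_MAX_DEG_S = 1.0  # assumed maximum-yaw-rate threshold

def is_straight_ahead(speed_kmh: float, yaw_rate_deg_s: float) -> bool:
    """True if the vehicle state qualifies as straight-ahead motion."""
    return speed_kmh >= SPEED_MIN_KMH and abs(yaw_rate_deg_s) < YAW_RATE_MAX_DEG_S

print(is_straight_ahead(40.0, 0.2))  # True: fast and barely turning
print(is_straight_ahead(40.0, 5.0))  # False: turning
```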
In the posture estimation processing, first, in S110, the control unit 10 acquires the images captured by the cameras 21. At this time, a captured image from the past, for example, the image captured ten frames earlier, is also retrieved from the memory 12.
Subsequently, in S120, the control unit 10 extracts a plurality of feature points from each captured image and detects an optical flow for them. The processing from S120 onward is performed for each of the cameras 21.
Here, a feature point is an edge that forms a boundary in luminance or chromaticity in the captured image. In the present embodiment, among such edges, points that are relatively easy to track in image processing, such as corners of structures, are adopted as feature points. At least two feature points need to be extracted; when the host vehicle is traveling among urban buildings, the control unit 10 may extract many feature points, such as corners of buildings and corners of window glass.
The control unit 10 extracts a plurality of feature points from one of the repeatedly acquired captured images as a plurality of pre-movement points, and extracts the feature points corresponding to those pre-movement points from a captured image acquired later in the time series as a plurality of post-movement points. The control unit 10 then detects the line segment connecting each pre-movement point to its post-movement point as an optical flow.
Subsequently, in S130, the control unit 10 estimates a basic matrix E (in computer-vision terms, an essential matrix) on the assumption of straight-ahead motion. The basic matrix E is a matrix for transitioning the plurality of pre-movement points U1 to the plurality of post-movement points U2, and represents the matrix obtained when the moving object is assumed to move along a preset trajectory (straight-ahead motion in this embodiment) without rotating.
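As background for this step: for rotation-free motion, the essential matrix reduces (up to scale) to the skew-symmetric matrix of the translation direction, and each pre/post point pair satisfies the epipolar constraint U2ᵀ E U1 = 0. A minimal numpy sketch with illustrative numbers (the translation and scene point are assumptions, not values from the patent):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Pure translation, no rotation: E = [t]_x (up to scale).
t = np.array([0.1, 0.0, 1.0])   # hypothetical camera translation
E = skew(t)

# A 3-D point seen before and after the motion, in normalized coordinates.
X = np.array([2.0, 1.0, 10.0])
u1 = X / X[2]                   # pre-movement point
X2 = X - t                      # same point in the moved camera frame
u2 = X2 / X2[2]                 # post-movement point

# The epipolar constraint holds exactly for noise-free correspondences.
print(abs(u2 @ E @ u1) < 1e-12)  # True
```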
More specifically, for example, the following formula is used.
Using the above formula, the control unit 10 obtains a basic matrix E for each of the plurality of pre-movement points U1, and determines the most probable basic matrix E by performing an optimization operation, such as the least squares method, on the plurality of basic matrices E.
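One conventional way to realize such a least-squares fit (a sketch under the rotation-free assumption, not necessarily the exact formulation used here): since E = [t]_x, each correspondence yields one linear equation (u1 × u2) · t = 0 in the three components of t, and the stacked system is solved with an SVD:

```python
import numpy as np

def estimate_translation(u1s, u2s):
    """Least-squares translation direction from N point pairs (Nx3 arrays of
    normalized homogeneous coordinates), assuming no camera rotation."""
    A = np.cross(u1s, u2s)          # each row encodes (u1 x u2) . t = 0
    _, _, vt = np.linalg.svd(A)
    t = vt[-1]                      # null-space direction of A
    return t / np.linalg.norm(t)

# Synthetic forward-dominant motion over several scene points (assumed data).
rng = np.random.default_rng(0)
t_true = np.array([0.1, 0.0, 1.0])
t_true /= np.linalg.norm(t_true)
X = rng.uniform([-5.0, -2.0, 5.0], [5.0, 2.0, 30.0], size=(20, 3))
u1s = X / X[:, 2:3]
X2 = X - 0.5 * t_true               # camera moves 0.5 m along t_true
u2s = X2 / X2[:, 2:3]

t_est = estimate_translation(u1s, u2s)
print(np.allclose(abs(t_est @ t_true), 1.0, atol=1e-6))  # True (sign-ambiguous)
```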
Subsequently, in S140, the control unit 10 estimates the camera posture. In this step, the postures of the cameras 21 are estimated according to the basic matrix E. For example, the control unit 10 obtains the rotation component of a camera as follows.
The control unit 10 first converts the translation vector V contained in the basic matrix E into the world coordinate system using a rotation matrix R.
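The idea behind this step can be illustrated as follows: if the vehicle is known to move straight ahead, the direction of the translation vector measured in camera coordinates directly encodes part of the camera's rotation relative to the vehicle. A hedged numpy sketch; the axis convention (x right, y down, z forward) and angles are assumptions:

```python
import numpy as np

def pitch_yaw_from_translation(v):
    """Recover camera pitch and yaw (radians) from the translation direction v
    expressed in camera coordinates, assuming the vehicle truly moves along
    the world Z axis. Axis convention (assumed): x right, y down, z forward."""
    v = v / np.linalg.norm(v)
    yaw = np.arctan2(v[0], v[2])    # component about the Y axis
    pitch = -np.arcsin(v[1])        # component about the X axis
    return pitch, yaw

# A camera rotated about its X axis by 5 degrees sees the forward motion
# with a vertical component in its own coordinates.
a = np.deg2rad(5.0)
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(a), -np.sin(a)],
               [0.0, np.sin(a), np.cos(a)]])
v_cam = Rx @ np.array([0.0, 0.0, 1.0])
pitch, yaw = pitch_yaw_from_translation(v_cam)
print(np.isclose(np.rad2deg(pitch), 5.0))  # True
```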
The rotation component about the Y axis may be ignored, a preset rotation matrix Ry may be used, or it may be obtained by calculation as follows.
To obtain the rotation matrix Ry, first, in S150, the control unit 10 extracts a road surface flow. That is, the control unit 10 extracts, from at least one of the repeatedly acquired captured images, feature points on the road surface on which the moving object travels as road-surface pre-movement points. The control unit 10 recognizes, within a preset region of the captured image, pixels whose luminance or chromaticity falls within a preset range as pixels representing the road surface.
The control unit 10 then extracts, from a captured image acquired later in the time series than the image from which the road-surface pre-movement points were extracted, the feature points corresponding to them as road-surface post-movement points. It detects the line segment connecting each road-surface pre-movement point to its post-movement point as a road surface flow, which is an optical flow.
Subsequently, in S160, the control unit 10 estimates the camera angle and the camera height from the vehicle movement amount. In this step, the basic matrix E, the road-surface pre-movement points, and the road-surface post-movement points are used to estimate the heights of the cameras 21 above the road surface and their rotation angles with respect to the road surface.
In this step, the control unit 10 performs, for example, the following processing: using a rotation matrix R2, it obtains a homography matrix H.
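As background, the planar homography induced by the road surface is conventionally written H = R + t nᵀ / d, where n is the road-plane normal and d the camera height, so once the rotation and translation are fixed, the observed road flow constrains the height. A numpy sketch under assumed conventions (x right, y down, z forward, no camera rotation); the numbers are illustrative:

```python
import numpy as np

h_true, s = 1.2, 0.5          # assumed camera height [m] and forward motion [m]
n = np.array([0.0, 1.0, 0.0])  # road-plane normal in camera coords (y down)
t = np.array([0.0, 0.0, -s])   # ground points shift backward in the camera frame

# Planar homography induced by the road: H = R + t n^T / d (here R = I, d = h).
H = np.eye(3) + np.outer(t, n) / h_true

X1 = np.array([0.8, h_true, 6.0])   # a point on the road surface
u1 = X1 / X1[2]                     # road-surface pre-movement point
u2h = H @ u1
u2 = u2h / u2h[2]                   # road-surface post-movement point

# Recovering the camera height from one road flow and the known motion s:
h_est = s / (1.0 / u1[1] - 1.0 / u2[1])
print(np.isclose(h_est, h_true))    # True
```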
Subsequently, in S170, the control unit 10 associates feature points within the overlapping regions of the cameras 21. That is, the control unit 10 determines whether feature points located in an overlapping region represent the same object; when they do, it associates them between the respective captured images, taking the camera postures into account. The camera postures obtained in this way are accumulated in the memory 12.
Subsequently, in S210, the control unit 10 estimates the relative positions between the cameras. That is, the control unit 10 estimates the inter-camera relative positions by recognizing the deviation between the coordinates of the feature points located in the overlapping regions and the coordinates corresponding to the corrected camera postures.
Subsequently, in S220, the control unit 10 acquires the accumulated camera postures. Then, in S230, the control unit 10 integrates them: it filters the plurality of posture estimation results and outputs the filtered values as the postures of the cameras 21. Any filtering method, such as a simple average, a weighted average, or the least squares method, can be adopted.
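The weighted-average option mentioned above can be sketched as follows; the posture representation and weighting scheme are assumptions for illustration:

```python
import numpy as np

def integrate_postures(estimates, weights=None):
    """Fuse repeated posture estimates (e.g. rows of [pitch_deg, height_m])
    into one value per component by weighted averaging. A sketch of the
    "weighted average" filtering option; the weights are assumed inputs."""
    estimates = np.asarray(estimates, dtype=float)
    if weights is None:
        weights = np.ones(len(estimates))
    weights = np.asarray(weights, dtype=float)
    return weights @ estimates / weights.sum()

# Three noisy estimates of [pitch_deg, height_m]; the outlier in the last
# row is down-weighted rather than trusted outright.
est = [[5.1, 1.19], [4.9, 1.21], [8.0, 1.40]]
fused = integrate_postures(est, weights=[1.0, 1.0, 0.2])
print(fused)
```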
Subsequently, in S240, the control unit 10 performs image conversion, that is, the processing of generating the bird's-eye image with the camera postures taken into account. Since the boundaries used when combining the captured images are set in consideration of the camera postures, the bird's-eye image is kept from showing a single object in the imaging regions twice or from failing to show it at all.
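A bird's-eye (inverse perspective) mapping can be sketched as projecting each road-plane coordinate through the estimated camera geometry; the pinhole model, axis convention, and numbers below are assumptions, not details from this publication:

```python
import numpy as np

def ground_to_image(xz, h, pitch, f=1.0):
    """Project a road-plane point (x, z) in vehicle coordinates into a camera
    at height h with pitch (radians) about its X axis; pinhole, focal length f.
    Axis convention (assumed): x right, y down, z forward."""
    x, z = xz
    c, s = np.cos(pitch), np.sin(pitch)
    # Rotate the ground point (x, h, z) about the camera X axis by the pitch.
    Xc = np.array([x, c * h - s * z, s * h + c * z])
    return f * Xc[0] / Xc[2], f * Xc[1] / Xc[2]

# With zero pitch, a ground point 10 m ahead at 1 m lateral offset, seen from
# a camera 1.2 m above the road:
u, v = ground_to_image((1.0, 10.0), h=1.2, pitch=0.0)
print(round(u, 3), round(v, 3))  # 0.1 0.12
```

Inverting this mapping per output pixel, with the estimated height and pitch substituted in, yields the bird's-eye image; the estimated postures also decide where neighboring cameras' images are stitched.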
Subsequently, in S250, the control unit 10 displays the bird's-eye image as video on the display unit 26, and then ends the posture estimation processing of FIGS. 2A and 2B.
[1-3. Effects]
According to the first embodiment described in detail above, the following effects are obtained.
(1a) In the display system 1 described above, in S110 the control unit 10 repeatedly acquires the captured images obtained by the one or more cameras 21 mounted on the moving object. In S120, the control unit 10 extracts a plurality of feature points from at least one of the repeatedly acquired captured images as a plurality of pre-movement points, and extracts the feature points corresponding to them from a captured image acquired later in the time series as a plurality of post-movement points.
Further, in S130, the control unit 10 estimates the basic matrix: the matrix for transitioning the plurality of pre-movement points to the plurality of post-movement points, obtained under the assumption that the moving object moves along a preset trajectory without rotating.
In S140, the control unit 10 estimates the postures of the cameras 21 according to the basic matrix.
With this configuration, the moving direction of a camera 21 can be estimated by computing a simple basic matrix, so a rough posture of the camera 21 can be estimated with simple processing. Moreover, even when feature points on the road surface cannot be extracted accurately, as when the road surface has low contrast, feature points can still be extracted from structures and the like, which improves the accuracy of the camera posture estimation.
(1b) In the display system 1 described above, in S150 the control unit 10 extracts, from at least one of the repeatedly acquired captured images, feature points on the road surface on which the moving object travels as road-surface pre-movement points, and extracts the corresponding feature points from a captured image acquired later in the time series as road-surface post-movement points.
In S160, the control unit 10 estimates the heights of the cameras 21 above the road surface and their rotation angles with respect to the road surface, using the basic matrix, the road-surface pre-movement points, and the road-surface post-movement points.
With this configuration, the basic matrix is obtained first, and then the height above the road surface and the rotation angle with respect to the road surface are estimated from feature points on the road surface; compared with a configuration that estimates all of these at once, the estimation accuracy can be improved.
(1c) In the display system 1 described above, the control unit 10 repeats at least the processing of S120 to S140 three or more times, repeatedly estimating the postures of the cameras 21 while changing the captured images from which feature points are extracted.
In S230, the control unit 10 filters the plurality of posture estimation results and outputs the filtered values as the postures of the cameras 21.
With this configuration, the postures of the cameras 21 are estimated using a plurality of posture estimation results, so even if a temporary misestimation occurs, its influence can be reduced.
[2. Other Embodiments]
Although an embodiment of the present disclosure has been described above, the present disclosure is not limited to the above-described embodiment and can be implemented with various modifications.
(2a) The above embodiment includes a plurality of cameras 21, but a configuration with only one camera is also possible.
(2b) In the above embodiment, the postures of the cameras 21 are estimated individually using their captured images, but the present disclosure is not limited to this. For example, the height of the camera whose posture is to be estimated may be estimated using the height of another camera. This is possible because the positional relationship between the camera whose posture is to be estimated and the other cameras is often set in advance.
In such a configuration, in S160 the control unit 10 may estimate the height of a camera 21 above the road surface using the height above the road surface of a camera 21 other than the one whose height is being estimated. In particular, the control unit 10 may use, among those other cameras 21, the height above the road surface of the camera 21 whose captured image contains the largest proportion of road surface.
With this configuration, if the positional relationship between the cameras 21 is known, the height of another camera 21 above the road surface can be used, which simplifies the computation for obtaining the height above the road surface.
Further, since the height is taken from the camera 21 whose captured image contains the largest proportion of road surface among the other cameras 21, the height above the road surface can be estimated accurately using the camera 21 whose own height is most likely to have been obtained correctly.
(2c) A plurality of functions of one constituent element in the above embodiment may be realized by a plurality of constituent elements, or a single function of one constituent element may be realized by a plurality of constituent elements. Conversely, a plurality of functions of a plurality of constituent elements may be realized by one constituent element, or one function realized by a plurality of constituent elements may be realized by one constituent element. Part of the configuration of the above embodiment may be omitted, and at least part of it may be added to or substituted for the configuration of another embodiment.
(2d) Besides the display system 1 described above, the present disclosure can also be realized in various forms: a device that is a constituent element of the display system 1, a program for causing a computer to function as the display system 1, a non-transitory tangible recording medium such as a semiconductor memory storing the program, a posture estimation method, and the like.
[3. Correspondence between Configuration of Embodiment and Present Disclosure]
In the above embodiment, the control unit 10 corresponds to the posture estimation device of the present disclosure. Among the processes executed by the control unit 10, the process of S110 corresponds to the image acquisition unit of the present disclosure, and the process of S120 corresponds to the feature point extraction unit.
The process of S130 corresponds to the basic estimation unit of the present disclosure, and the process of S140 corresponds to the posture estimation unit and the first estimation unit. The process of S150 corresponds to the travel point extraction unit, and the process of S160 corresponds to the second estimation unit. The process of S230 corresponds to the filtering unit.
Claims (5)

- An attitude estimating device comprising:
an image acquisition unit (S110) configured to repeatedly acquire captured images obtained by one or more imaging units mounted on a moving object;
a feature point extraction unit (S120) configured to extract a plurality of feature points as a plurality of pre-movement points from at least one of the repeatedly acquired captured images, and to extract, from a captured image acquired later in the time series than the image from which the plurality of pre-movement points were extracted, the plurality of feature points corresponding to the plurality of pre-movement points as a plurality of post-movement points;
a basic estimation unit (S130) configured to estimate a basic matrix, which is a matrix for transitioning the plurality of pre-movement points to the plurality of post-movement points and represents the matrix obtained when the moving object is assumed to move along a preset trajectory without rotating; and
a posture estimation unit (S140) configured to estimate a posture of the one or more imaging units according to the basic matrix.
- The attitude estimating device according to claim 1, further comprising:
a travel point extraction unit (S150) configured to extract, from at least one of the repeatedly acquired captured images, feature points on the road surface on which the moving object travels as road-surface pre-movement points, and to extract, from a captured image acquired later in the time series than the image from which the road-surface pre-movement points were extracted, the feature points corresponding to them as a plurality of road-surface post-movement points; and
a second estimation unit (S160) configured, with the posture estimation unit serving as a first estimation unit, to estimate a height of the one or more imaging units above the road surface and a rotation angle with respect to the road surface, using the basic matrix, the road-surface pre-movement points, and the road-surface post-movement points.
- The attitude estimating device according to claim 1 or 2, wherein
the feature point extraction unit, the basic estimation unit, and the posture estimation unit are configured to repeatedly estimate the posture of the one or more imaging units while changing the captured image from which feature points are extracted,
the attitude estimating device further comprising a filtering unit (S230) configured to filter the plurality of posture estimation results obtained by the posture estimation unit and to output the filtered values as the posture of the one or more imaging units.
- The attitude estimating device according to claim 3 as dependent on claim 2, wherein
the second estimation unit is configured to estimate the height of the one or more imaging units above the road surface using the height above the road surface of an imaging unit other than the imaging unit whose height above the road surface is to be estimated.
- The attitude estimating device according to claim 4, wherein
the second estimation unit is configured to estimate the height of the one or more imaging units above the road surface using the height above the road surface of the imaging unit, among the imaging units other than the one whose height is to be estimated, whose captured image contains the largest proportion of road surface.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-019011 | 2018-02-06 | ||
JP2018019011A JP6947066B2 (en) | 2018-02-06 | 2018-02-06 | Posture estimator |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019156072A1 true WO2019156072A1 (en) | 2019-08-15 |
Family
ID=67549662
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/004065 WO2019156072A1 (en) | 2018-02-06 | 2019-02-05 | Attitude estimating device |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP6947066B2 (en) |
WO (1) | WO2019156072A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114782447A (en) * | 2022-06-22 | 2022-07-22 | 小米汽车科技有限公司 | Road surface detection method, device, vehicle, storage medium and chip |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7297595B2 (en) * | 2019-08-23 | 2023-06-26 | 株式会社デンソーテン | Posture estimation device, anomaly detection device, correction device, and posture estimation method |
JP7237773B2 (en) * | 2019-08-23 | 2023-03-13 | 株式会社デンソーテン | Posture estimation device, anomaly detection device, correction device, and posture estimation method |
JP7256734B2 (en) * | 2019-12-12 | 2023-04-12 | 株式会社デンソーテン | Posture estimation device, anomaly detection device, correction device, and posture estimation method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS623663A (en) * | 1985-06-28 | 1987-01-09 | Agency Of Ind Science & Technol | Self-moving state extracting method |
JP2014101075A (en) * | 2012-11-21 | 2014-06-05 | Fujitsu Ltd | Image processing apparatus, image processing method and program |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4868964B2 (en) * | 2006-07-13 | 2012-02-01 | 三菱ふそうトラック・バス株式会社 | Running state determination device |
2018
- 2018-02-06 JP JP2018019011A patent/JP6947066B2/en active Active
2019
- 2019-02-05 WO PCT/JP2019/004065 patent/WO2019156072A1/en active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114782447A (en) * | 2022-06-22 | 2022-07-22 | 小米汽车科技有限公司 | Road surface detection method, device, vehicle, storage medium and chip |
CN114782447B (en) * | 2022-06-22 | 2022-09-09 | 小米汽车科技有限公司 | Road surface detection method, device, vehicle, storage medium and chip |
Also Published As
Publication number | Publication date |
---|---|
JP2019139290A (en) | 2019-08-22 |
JP6947066B2 (en) | 2021-10-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210176432A1 (en) | Road vertical contour detection | |
WO2019156072A1 (en) | Attitude estimating device | |
US20120327189A1 (en) | Stereo Camera Apparatus | |
JP2008027138A (en) | Vehicle monitoring device | |
JP6316976B2 (en) | In-vehicle image recognition device | |
EP3418122B1 (en) | Position change determination device, overhead view image generation device, overhead view image generation system, position change determination method, and program | |
EP2770478B1 (en) | Image processing unit, imaging device, and vehicle control system and program | |
US9902341B2 (en) | Image processing apparatus and image processing method including area setting and perspective conversion | |
JP2020060550A (en) | Abnormality detector, method for detecting abnormality, posture estimating device, and mobile control system | |
JP2014106739A (en) | In-vehicle image processing device | |
JP2012252501A (en) | Traveling path recognition device and traveling path recognition program | |
JP2010226652A (en) | Image processing apparatus, image processing method, and computer program | |
JP7247772B2 (en) | Information processing device and driving support system | |
WO2017010268A1 (en) | Estimation device and estimation program | |
CN111260538B (en) | Positioning and vehicle-mounted terminal based on long-baseline binocular fisheye camera | |
JP2018136739A (en) | Calibration device | |
JP6032141B2 (en) | Travel road marking detection device and travel road marking detection method | |
KR20170114523A (en) | Apparatus and method for AVM automatic Tolerance compensation | |
JP7311407B2 (en) | Posture estimation device and posture estimation method | |
JP2009059132A (en) | Object acknowledging device | |
CN113170057A (en) | Image pickup unit control device | |
JP2020087210A (en) | Calibration device and calibration method | |
KR101949349B1 (en) | Apparatus and method for around view monitoring | |
JP7169227B2 (en) | Anomaly detection device and anomaly detection method | |
JPH09179989A (en) | Road recognition device for vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 19750762 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | EP: PCT application non-entry into the European phase |
Ref document number: 19750762 Country of ref document: EP Kind code of ref document: A1 |