WO2021215199A1 - Information processing device, image capturing system, information processing method, and computer program - Google Patents

Information processing device, image capturing system, information processing method, and computer program Download PDF

Info

Publication number
WO2021215199A1
Authority
WO
WIPO (PCT)
Prior art keywords
line
roadway
slope
information processing
image
Prior art date
Application number
PCT/JP2021/013324
Other languages
French (fr)
Japanese (ja)
Inventor
檜垣 欣成
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2021215199A1

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/26 Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
    • G01B 11/27 Measuring arrangements characterised by the use of optical techniques for testing the alignment of axes
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/16 Anti-collision systems

Definitions

  • This disclosure relates to an information processing device, an imaging system, an information processing method, and a computer program.
  • Patent Document 1 discloses an image processing device that estimates the gradient of the distant road on which an automobile is traveling by using two cameras (such as a stereo camera) mounted on the automobile.
  • The present disclosure provides an information processing device, an imaging system, an information processing method, and a computer program that estimate the slope of the road surface of a roadway through simple processing.
  • The information processing device of the present disclosure includes: a detection unit that detects, in an image of a space including a roadway, a first area line that identifies the area of the roadway; and an estimation unit that estimates the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first area line and the slope of the first area line at the first position.
  • The imaging system of the present disclosure includes: an imaging device that captures a space including a roadway; a detection unit that detects, in the image acquired by the imaging device, a first area line that identifies the area of the roadway; and an estimation unit that estimates the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first area line and the slope of the first area line at the first position.
  • The information processing method of the present disclosure detects, in an image of a space including a roadway, a first area line that identifies the area of the roadway, and estimates the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first area line and the slope of the first area line at the first position.
  • The computer program of the present disclosure causes a computer to execute: a step of detecting, in an image of a space including a roadway, a first area line that identifies the area of the roadway; and a step of estimating the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first area line and the slope of the first area line at the first position.
  • A block diagram of the imaging system and the vehicle control system according to the first embodiment of the present disclosure.
  • A plan view showing an example of a road in real space.
  • A diagram showing an example of a captured image acquired by the imaging device.
  • A side view showing the relationship between a vehicle and a roadway in real space.
  • A diagram showing an example in which a region of interest (target region) ROI is set.
  • A schematic cross-sectional view showing the geometric relationship between the image pickup device and the road surface.
  • A diagram showing an example of calculating the gradient angle of the road surface included in the region of interest ROI.
  • A flowchart of an example of the operation of the information processing device according to the present embodiment.
  • FIG. 1 is a block diagram of an imaging system 100 and a vehicle control system 301 according to the first embodiment of the present disclosure.
  • The imaging system 100 of FIG. 1 is mounted on a vehicle such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, or a bicycle.
  • The system of FIG. 1 can also be mounted on any type of mobile body, such as an AGV (Automated Guided Vehicle), a personal mobility device, a mobile robot, a construction machine, or an agricultural machine (tractor).
  • Since these mobile bodies can travel on a road surface, they also correspond to one form of the vehicle according to the present embodiment.
  • the imaging system 100 is connected to the vehicle control system 301 by wire or wirelessly.
  • the vehicle control system 301 controls a vehicle traveling on a roadway.
  • For example, radar, LiDAR, a ToF sensor, a camera, a weather sensor, and the like are used to detect obstacles such as preceding vehicles and pedestrians, and control is performed to avoid collisions with the detected obstacles and pedestrians. For example, the system controls acceleration and deceleration of the vehicle and outputs a warning to a pedestrian.
  • the vehicle control system 301 may feed back the detection result or the control result to the imaging system 100.
  • the image pickup system 100 includes an image pickup device 201 and an information processing device 101.
  • The information processing device 101 includes an image acquisition unit 110, a detection unit 120, an area setting unit 130, an estimation unit 140, and a storage unit 150.
  • The image acquisition unit 110, the detection unit 120, the area setting unit 130, and the estimation unit 140 are composed of hardware, software (a program), or both. Examples of hardware include a processor such as a CPU (Central Processing Unit), a dedicated circuit, a programmable circuit, and a memory.
  • the storage unit 150 includes any recording medium capable of storing information or data, such as a memory, a hard disk, and an optical disk.
  • the image pickup device 201 is a camera or sensor device that captures a space including a roadway in front of the vehicle.
  • the image pickup device 201 is, for example, a color camera such as an RGB camera, a monochrome camera, an infrared camera, a brightness sensor, or the like.
  • the image pickup apparatus 201 is an RGB monocular camera.
  • the image pickup device 201 is attached at a position where the front of the vehicle can be imaged, such as the upper part of the windshield in the vehicle interior or the front nose.
  • The posture of the image pickup apparatus 201 is set so that, for example, the optical axis (imaging direction) points slightly downward toward the road surface (ground) of the roadway, or substantially toward the road surface.
  • the image pickup device 201 images the space including the roadway in front of the vehicle at regular time intervals while the vehicle is traveling, and provides the captured image to the information processing device 101.
  • the timing of sensing by the image pickup apparatus 201 may be the timing of receiving an instruction from the vehicle control system 301.
  • the vehicle control system 301 may output an imaging instruction to the imaging device 201 when receiving an instruction for estimating the slope of the road surface from a user (passenger) in the vehicle via an input device (not shown).
  • the image pickup device 201 can also take an image while the vehicle is stopped.
  • The storage unit 150 stores data and information necessary for the processing of the present embodiment, such as parameter information of the image pickup apparatus 201, a gradient calculation function described later, and a trained neural network.
  • the image acquisition unit 110 is connected to the image pickup device 201 by wire or wirelessly, and acquires the captured image from the image pickup device 201.
  • the image acquisition unit 110 provides the acquired captured image to the detection unit 120.
  • the detection unit 120 detects a region line that specifies the region of the roadway included in the captured image.
  • The area line may be a traffic line drawn on the road surface (a roadway outside line, a center line, etc.), the boundary portion between the traffic line and the roadway on the roadway side, or the edge portion of the traffic line on the side opposite to the roadway (assuming the width of the traffic line is constant or substantially constant).
  • The area line may also be a side groove on the road surface, the boundary portion between the side groove and the roadway on the roadway side, or the edge portion of the side groove on the side opposite to the roadway (assuming the width of the side groove is constant or substantially constant).
  • The traffic line is generally drawn in white or orange, but a line drawn in another color may also serve as an area line.
  • the boundary portion between the fence and the roadway on the roadway side or the boundary portion on the wall side may be used as the area line.
  • the shape of the area line can be straight, curved, or a mixture of straight and curved.
  • In the present embodiment, the traffic line drawn on the road (white line, etc.) is treated as not being part of the roadway, but the traffic line can also be defined as part of the roadway.
  • FIG. 2 is a plan view showing an example of a road in real space.
  • FIG. 2A shows an example of a road in which outside road lines (white lines) 10A and 10B are provided on both sides of the roadway 10C.
  • the outside lines 10A and 10B are straight lines.
  • The side of the roadway outside lines 10A and 10B opposite the roadway 10C is, for example, a road other than the roadway (a sidewalk, etc.).
  • FIG. 2B shows an example in which the outside road lines (white lines) 11A and 11B provided on both sides of the roadway 11C are curved lines.
  • FIG. 2C shows an example of a road having grooves (side grooves) 12A and 12B on both sides of the roadway 12C.
  • white lines or gutters are present on both sides of the roadway, but they may be present on only one side.
  • FIG. 3 shows a specific example of the roadway and the area line in the captured image acquired by the imaging device 201.
  • FIG. 3A shows an example of a straight road where there is no gradient difference between the road surface on which the own vehicle is present and the road surface ahead.
  • FIG. 3B shows an example of a straight road where there is a gradient difference between the road surface on which the own vehicle is present and the road surface ahead (the front is an uphill).
  • In FIG. 3A, for example, the boundary portions of the roadway 22 in contact with the white lines (roadway outside lines) are schematically shown as area lines 21A and 21B.
  • Similarly, in FIG. 3B, the boundary portions of the roadway 25 in contact with the white lines are schematically shown as area lines 24A and 24B.
  • In FIGS. 3(A) and 3(B), objects other than the roadway, such as the white lines, are not illustrated.
  • the area lines 21A and 21B are linear (the white lines adjacent to the area lines 21A and 21B are also linear).
  • The area lines 24A and 24B bend slightly inward partway along the traveling direction.
  • the position of each pixel in the captured images of FIGS. 3 (A) and 3 (B) can be represented by an uv coordinate system with the horizontal axis as the u axis and the vertical axis as the v axis.
  • The uv coordinate system corresponds to the so-called image coordinate system.
  • FIGS. 4(A) and 4(B) are side views showing the relationship between the vehicle and the roadway in real space when the captured images of FIGS. 3(A) and 3(B) are acquired.
  • the traveling direction of the vehicle is the x-axis
  • the height direction is the z-axis
  • the direction perpendicular to the paper surface is the y-axis.
  • the xyz coordinate system corresponds to the so-called world coordinate system.
  • the position in real space can be represented by xyz coordinates.
  • In FIG. 4A, there is no gradient difference between the road surface on which the vehicle is present and the road surface ahead on the roadway 26 on which the vehicle travels.
  • In FIG. 4B, there is a gradient difference between the road surface on which the vehicle is present and the road surface ahead on the roadway 27 on which the vehicle travels.
  • When the front is imaged on a straight road with no gradient difference, as in FIG. 4(A), an image in which the area lines of the road narrow at a constant inclination, as in FIG. 3(A), is obtained. When the front is imaged on a straight road with a gradient difference, as in FIG. 4(B), an image in which the area lines of the road bend inward along the traveling direction, as in FIG. 3(B), is obtained.
  • The relationship between the real space of FIG. 4(A) and the image of FIG. 3(A), and between the real space of FIG. 4(B) and the image of FIG. 3(B), is represented by the well-known perspective projection transformation between the world coordinate system to which the real space belongs and the image coordinate system to which the image belongs.
  • semantic segmentation can be used as a method for the detection unit 120 to detect the area line of the roadway from the captured image.
  • In semantic segmentation, the captured image is divided into a plurality of segments (objects), each belonging to one of a plurality of predetermined classes. Examples of classes include roadway, traffic line (white line, etc.), and gutter.
  • the boundary portion (boundary line) of the object having the roadway class with the adjacent object can be detected as the area line of the roadway.
  • the boundary portion of the adjacent object with the object having the roadway class can be detected as the area line of the roadway.
  • the edge portion of the adjacent object on the opposite side of the object having the roadway class can be detected as a roadway area line (for example, when the adjacent object is a traffic line object).
  • the image segmentation method may be a method other than semantic segmentation, such as a clustering method based on color information.
  • The area line of the roadway may also be detected using a contour (line) detection method on the image instead of image segmentation. The following describes an example of the operation when segmentation is performed using semantic segmentation.
  • semantic segmentation of the captured image is performed using a trained neural network.
  • the trained neural network is stored in the storage unit 150.
  • Semantic segmentation is a method of classifying each pixel of an image into a class (type) on a pixel-by-pixel basis.
  • the class is determined for each pixel, and a label indicating the determined class is output for each pixel.
  • a plurality of classes such as a roadway, a traffic line (white line, etc.), and a gutter are defined in advance.
  • the class is determined for each pixel.
  • a class value corresponding to each pixel is obtained.
  • a group of consecutive pixels with the same class value corresponds to one segment (object).
  • the captured image is used as the input of the neural network, and the class value corresponding to each pixel is obtained as the output.
  • A semantic segmentation image in which pixels having the same class value are given the same color may be generated. Since objects of the same type are shown in the same color, the segmentation result can be displayed in a visually easy-to-understand manner.
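As a rough illustration of this step (not the patent's implementation), a per-pixel class map can be turned into candidate area-line pixels by scanning each image row for the roadway-class pixels; the class value and the function name below are assumptions.

```python
import numpy as np

ROADWAY = 1  # assumed class value for the roadway segment


def detect_area_lines(label_map):
    """For each image row, find the leftmost and rightmost roadway pixels.

    The roadway-side boundary columns approximate the left and right
    area lines (u = column, v = row, in image coordinates).
    Returns two dicts mapping row v to boundary column u.
    """
    left, right = {}, {}
    for v, row in enumerate(label_map):
        cols = np.flatnonzero(row == ROADWAY)
        if cols.size:
            left[v] = cols[0]    # boundary with the left white line
            right[v] = cols[-1]  # boundary with the right white line
    return left, right
```

On a synthetic label map where the roadway widens toward the bottom of the image, the extracted columns trace the two converging area lines.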
  • the area setting unit 130 sets an area of interest (target area) that includes at least a part of the road surface that is the target of slope estimation on the roadway with respect to the captured image.
  • The shape of the region of interest may be rectangular, trapezoidal, circular, or any other shape, and is not limited to a specific shape. Further, the region of interest may be a straight line (for example, a line one pixel wide). Note that setting the region of interest for the captured image includes both setting it directly on the captured image and setting it on the segmentation image.
  • The estimation unit 140 estimates the road surface gradient corresponding to a selected position based on the position (selected position) selected on the area line of the roadway detected by the detection unit 120 and the inclination of the area line at the selected position. When two area lines are detected, the selected position on one area line corresponds to the first position, and the selected position on the other area line corresponds to the second position. Only one area line may be detected; in this case, the selected position on that area line corresponds to the first position.
  • The estimation unit 140 calculates the inclination at an arbitrary position (selected position) on the detected partial line (the portion of the area line included in the region of interest).
  • the slope of the partial line is the slope in the uv coordinate system, and the selected position in the partial line is represented by the uv coordinate.
  • the slope of the partial line may be the slope of the tangent line at the selected position on the partial line, or the slope of a straight line that approximates the partial line.
  • The selected position may be the point of contact between the partial line and its tangent line, the center of gravity of the partial line (the average of its coordinates), or an arbitrarily chosen coordinate from those included in the partial line. The partial line may have a width of one pixel or of two or more pixels.
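The slope and selected position of a partial line can be computed, for example, by a least-squares straight-line fit; the following sketch uses the straight-line-approximation and center-of-gravity options mentioned above. The function name is illustrative, not from the disclosure.

```python
import numpy as np


def partial_line_slope(points_uv):
    """Fit a straight line v = a*u + b to the partial-line pixels and
    return its slope a (dv/du) and a selected position.

    points_uv: (N, 2) array of (u, v) pixel coordinates of the partial
    line inside the region of interest. The selected position here is
    the center of gravity, one of the options the text allows.
    """
    u, v = points_uv[:, 0], points_uv[:, 1]
    a, b = np.polyfit(u, v, 1)       # least-squares straight-line fit
    selected = (u.mean(), v.mean())  # center of gravity of the line
    return a, selected
```

For a curved area line, the same fit over a short partial line approximates the tangent slope at the selected position.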
  • FIG. 5 shows an example in which the region of interest (target region) ROI is set for the captured image 23 of FIG. 3 (B).
  • Here, a rectangular region of interest ROI crossing the roadway 25 is set near the center of the captured image 23 in the v-coordinate direction.
  • the region of interest ROI includes at least part of the road surface of the roadway 25.
  • the portion of the area line included in the region of interest ROI is specified as the partial line 31A.
  • The portion of the other area line included in the region of interest ROI is specified as the partial line 31B. Only one of the two partial lines may be specified.
  • the method of setting the attention area ROI may be arbitrary.
  • For example, a range including at least a part of a detected object (such as a preceding vehicle) may be specified as the region of interest ROI.
  • a predetermined coordinate range in the captured image may be used as the region of interest ROI.
  • Alternatively, a range including a portion that has changed from the previously captured image may be set as the region of interest.
  • the captured image may be displayed on a display device in the vehicle, and the user may specify the area of interest using an input device (touch panel or the like).
  • the slope ⁇ L may be the slope of the tangent line at the selected position on the partial line 31A, or the slope of a straight line that approximates the partial line 31A.
  • the position L (uL, vL) (see FIG. 8 described later), which is the selected position on the partial line 31A, may be, for example, a contact point between the partial line 31A and its tangent line, or the center of gravity of the partial line 31A (for example, the center of the u coordinate). It may be the center of the v-coordinate) or the position determined by another method.
  • the slope ⁇ R may be the slope of the tangent line at the selected position on the partial line 31B, or the slope of a straight line that approximates the partial line 31B.
  • the position R (uR, vR), which is the selected position on the partial line 31B, may be, for example, a contact point between the partial line 31B and its tangent line, or the center of gravity of the partial line 31B (for example, the center of the u coordinate and the center of the v coordinate). However, the position may be determined by another method.
  • The position L and the position R may be determined under the condition that the v coordinates of the position L (uL, vL) and the position R (uR, vR) are the same. Details of the process of calculating the position and inclination of the partial line will be described later.
  • the estimation unit 140 estimates the slope of the road surface portion included in the region of interest based on the slope of the partial line and the selected position.
  • The reason why the slope of the road surface portion included in the region of interest ROI can be estimated from the inclination of the partial line included in the region of interest ROI and the selected position will now be described in detail.
  • FIG. 6 is a schematic cross-sectional view showing the geometrical relationship between the image pickup device 201 and the road surface.
  • FIG. 6 as in the case shown in FIG. 4B, a case where there is a gradient difference in the front on a straight roadway is taken as an example.
  • the traveling direction of the vehicle is the x-axis, and the height direction is the z-axis.
  • the y-axis is the width direction of the vehicle.
  • the xyz coordinate system corresponds to the world coordinate system.
  • The image pickup device 201 of the vehicle is located at a height H above the origin of the xyz coordinate system. That is, the image pickup apparatus 201 is at a height H above the plane 38 on which the vehicle exists.
  • The coordinates of the camera's optical center are (0, 0, H).
  • the direction (angle) of the optical axis 36 of the camera 35 is ⁇ .
  • The observation target point P is a position on the road surface of the roadway and corresponds to, for example, the selected position on a partial line.
  • the road surface gradient angle ⁇ 0 is the inclination of the tangent line 37 at the observation target point P on the road surface of the roadway.
  • the intersection of the tangent line 37 and the x-axis is x0.
  • s is a scaling factor and corresponds to the reciprocal of the depth from the camera to the observation target point.
  • fy is the horizontal focal length of the camera (in pixels), and fz is the vertical focal length of the camera (in pixels).
  • (cy, cz) is the camera center (principal point), that is, the center of the uv coordinate system.
  • φ is the angle of the optical axis direction.
  • H is the height of the camera.
  • (x, y, z) are the coordinates of the observation target point P, which are unknown values.
  • the observation target point P corresponds to the position of the partial line in the captured image.
  • the first matrix corresponds to the internal parameters of the camera and the second matrix corresponds to the external parameters of the camera.
  • the internal parameters and external parameters are measured in advance by calibration and stored in the storage unit 150. Internal parameters and external parameters are collectively called camera parameter information.
  • dv/du (the derivative of v with respect to u) is analytically derived below.
  • equation (5) is obtained from the formula for differentiation.
  • x and z are the x and z coordinates of the observation target point P.
  • ⁇ 0 is the angle (road surface gradient angle) at which the tangent line of the observation target point P and the x-axis intersect.
  • equation (7) is specifically derived.
  • First, the first differential coefficient on the right-hand side is derived. From equations (3) and (6), the following equation (8) is obtained.
  • the direction ⁇ of the optical axis 36 of the camera, the focal length fz in the vertical direction, and the camera center (cy, cz) are known from the parameter information of the camera. Therefore, from the angle ⁇ of the partial line and the position (u, v) which is the selected position on the partial line, the target position (observation target point) on the road surface corresponding to the position (u, v) is determined by the equation (17).
  • the road surface gradient angle ⁇ 0 is calculated. That is, the slope of the road surface at the target position can be estimated based on the angle ⁇ of the partial line and the coordinates (u, v) of the selected position on the partial line.
  • the function of equation (17) is stored in advance in the storage unit 150 as a gradient calculation function.
  • The gradient calculation function has an input variable that receives the slope (angle) of the partial line, an input variable that receives the coordinates of the selected position on the partial line, and an output variable that outputs the road surface gradient angle.
  • the gradient calculation function can be derived in the same way by expressing the relationship between x, y, and z with a polynomial.
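The text above does not reproduce equation (17) itself, so the following is only a hedged reconstruction, not the patent's formula: under a pinhole model with the conventions of this section (camera at height H, optical axis pitched down by angle φ, straight roadway with an area line at constant lateral offset), the camera height and the horizontal focal length cancel out, and a gradient-calculation function of the claimed form (inputs: the slope dv/du of the partial line and the coordinates (u, v) of the selected position; output: the road surface gradient angle) can be derived. All sign conventions here are assumptions.

```python
import math


def gradient_angle(dv_du, u, v, fz, cy, cz, phi):
    """Hedged sketch of a gradient-calculation function for a straight
    roadway under a pinhole camera model (not the patent's equation (17)).

    dv_du   -- slope of the partial line at the selected position
    (u, v)  -- image coordinates of the selected position
    fz      -- vertical focal length in pixels
    (cy, cz)-- principal point (center of the uv coordinate system)
    phi     -- downward tilt of the optical axis, in radians

    For an area line at constant lateral offset, H and fy cancel, and
    the road surface gradient angle theta0 satisfies
        theta0 = atan(A / fz) - phi,  A = dv_du * (u - cy) - (v - cz).
    """
    a = dv_du * (u - cy) - (v - cz)
    return math.atan2(a, fz) - phi
```

The formula can be sanity-checked by projecting a synthetic straight road of known gradient with the same conventions and recovering the angle from the projected line's slope.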
  • FIG. 7 schematically shows an example of a curved roadway in the xyz coordinate system.
  • White lines 51L and 52R are provided on both sides of the roadway 50.
  • the relationship between x, y, and z can be expressed by the following equation (18) and the above equation (6).
  • Using the gradient calculation function, the estimation unit 140 estimates the gradient angle (road surface gradient angle) of the road surface included in the region of interest ROI based on the angle of the partial line and the coordinates of the selected position.
  • FIG. 8 shows an example of calculating the gradient angle of the road surface included in the region of interest ROI for the same captured image 23 as in FIG. 5.
  • In this example, the region of interest ROI is a straight line.
  • One partial line corresponds to the intersection L (point L) of the region of interest ROI (a straight line) and the area line 24A.
  • the coordinates of this point L also correspond to the coordinates of the selected position selected on the partial line.
  • the angle ⁇ L of the partial line is, for example, the angle of the tangent line 39L at the point L.
  • the road surface gradient angle ⁇ 0L at the position (target position) in the real space corresponding to the position L (uL, vL) is calculated by the above-mentioned gradient calculation function. ..
  • The other partial line corresponds to the intersection R (point R) of the region of interest ROI and the area line 24B.
  • the coordinates of this point R also correspond to the coordinates of the selected position selected on the partial line.
  • the angle ⁇ R of the partial line is, for example, the angle of the tangent line 39R at the point R. From the angle ⁇ R of the partial line and the position R (uR, vR), the estimation unit 140 uses the above-mentioned gradient calculation function to determine the road surface gradient at the position (target position) in the real space corresponding to the position L (uR, vR). The angle ⁇ 0R is calculated.
  • the estimation unit 140 estimates the average of the two calculated road surface gradient angles ⁇ 0L and ⁇ 0R as the road surface gradient angle of the road surface included in the region of interest.
  • The average may be a weighted average. For example, when the road is straight, the weights may be set to 1:1; when the road curves to the left or right, different weights may be set for the left and right, adjusted according to the curvature of the curve. Alternatively, one of the two road surface gradient angles θ0L and θ0R (the maximum value or the minimum value) may be estimated as the road surface gradient angle of the road surface included in the region of interest.
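A minimal sketch of this combination step, covering the weighted-average, maximum, and minimum options described above; the weighting rule for curves is not specified in the text, so the weights here are free parameters, and the function name is an assumption.

```python
def combine_gradients(theta_l, theta_r, w_l=0.5, w_r=0.5, mode="mean"):
    """Combine the left and right road surface gradient angles.

    mode "mean" takes the (optionally weighted) average; "max"/"min"
    pick one of the two angles, as the text allows. For a curved road,
    different left/right weights might be chosen based on curvature.
    """
    if mode == "max":
        return max(theta_l, theta_r)
    if mode == "min":
        return min(theta_l, theta_r)
    return (w_l * theta_l + w_r * theta_r) / (w_l + w_r)
```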
  • The estimation unit 140 outputs information representing the road surface gradient angle estimated for the region of interest ROI to the vehicle control system 301.
  • The vehicle control system 301 controls the vehicle using the road surface gradient angle information. For example, acceleration/deceleration control is performed as driving support for the driver to avoid collisions with obstacles, pedestrians, and the like. Specifically, the vehicle is accelerated when the road surface gradient angle indicates an uphill and decelerated when it indicates a downhill.
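The accelerate-uphill, decelerate-downhill behavior could be sketched as a toy proportional rule; the gain and the signed-output convention are assumptions for illustration, not part of the disclosure.

```python
def throttle_adjustment(theta0, gain=1.0):
    """Toy acceleration/deceleration rule from the road gradient angle.

    Positive theta0 (uphill) yields a positive adjustment (accelerate);
    negative theta0 (downhill) yields a negative one (decelerate).
    The proportional gain is an assumption; the text only states the
    accelerate/decelerate behavior.
    """
    return gain * theta0  # > 0 accelerate, < 0 decelerate, 0 hold
```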
  • FIG. 9 is a flowchart of an example of the operation of the information processing device according to the present embodiment.
  • The image acquisition unit 110 acquires a captured image from the imaging device 201 (S101).
  • the detection unit 120 detects a region line that specifies the region of the roadway in the captured image (S102).
  • the region setting unit 130 sets a region of interest (target region) including at least a part of the road surface for which the gradient angle is to be estimated in the captured image (S103).
  • the estimation unit 140 calculates the inclination at the selected position (first position or second position) of the partial line which is a part of the area line included in the region of interest (S104).
  • The estimation unit 140 estimates the gradient angle of the road surface included in the region of interest by evaluating the gradient calculation function with the slope of the partial line and the coordinates of the selected position as input variables (S105).
  • Step S103 may be executed before step S102.
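Tying steps S102 to S105 together, a self-contained sketch on a synthetic, already-segmented label map might look as follows; the ROI is taken as a band of image rows, and all helper logic and parameter names are assumptions rather than the patent's API.

```python
import math

import numpy as np


def run_pipeline(label_map, roi_rows, fz, cy, cz, phi, roadway=1):
    """Sketch of steps S102-S105 on an already-segmented image
    (S101, image acquisition, is assumed to have produced `label_map`
    via semantic segmentation).

    roi_rows: (first_row, last_row_exclusive) band of image rows used
    as the region of interest (S103).
    """
    left, right = [], []
    for v in range(*roi_rows):
        cols = np.flatnonzero(label_map[v] == roadway)
        if cols.size:
            left.append((cols[0], v))    # left area-line pixel (S102)
            right.append((cols[-1], v))  # right area-line pixel
    angles = []
    for pts in (np.array(left, float), np.array(right, float)):
        a, _ = np.polyfit(pts[:, 0], pts[:, 1], 1)  # slope dv/du (S104)
        u, v = pts.mean(axis=0)                     # selected position
        big_a = a * (u - cy) - (v - cz)
        angles.append(math.atan2(big_a, fz) - phi)  # gradient sketch (S105)
    return 0.5 * (angles[0] + angles[1])            # average left/right
```

On a label map generated by projecting a flat straight road under the same pinhole conventions, the pipeline recovers a gradient angle near zero.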
  • According to the first embodiment described above, the inclination angle of the road surface included in the region of interest is estimated based on the inclination at the selected position of the partial line, which is part of the area line of the roadway included in the region of interest ROI set in the captured image, and the coordinates of the selected position. As a result, the road surface inclination angle can be estimated at high speed with a small amount of processing. Further, according to the first embodiment, estimation can be performed using an image captured by a monocular camera; since two or more cameras such as a stereo camera are unnecessary, the estimation can be realized at low cost. Further, by performing the estimation using a plurality of partial lines, the accuracy and robustness of the estimation can be improved.
  • (Modification example 1) In the above-described embodiment, the boundary portion on the roadway side between the roadway and the traffic line (white line, etc.) was used as the area line for specifying the area of the roadway. In this modification, an example is shown in which the boundary portion on the white line side, etc., is used as the area line.
  • FIG. 10 shows an example of a captured image according to this modified example.
  • White lines 24A and 24B are provided on both sides of the roadway 40.
  • a rectangular attention area ROI that crosses the white lines 24A and 24B in the u coordinate direction is set.
  • the white lines 24A and 24B have a certain width (two or more pixels).
  • the boundary portion of the white line 24A in contact with the roadway 40 can be used as the area line 31.
  • the portion of the region line 31 included in the region of interest ROI corresponds to the partial line 31A.
  • the point 45 can be selected as an example of the selection position.
  • as the slope θL at the point 45, the slope of the tangent line at the point 45 can be used.
  • the slope of a straight line (not shown) connecting the intersections 41 and 42, where the inner side of the white line 24A crosses the boundary of the region of interest ROI, may be calculated as the slope of the partial line 31A.
  • the selected position may be a position selected from the partial line 31A or a position arbitrarily selected from the straight line such as the midpoint on the straight line.
  • the slope of the straight line connecting the outer intersections 46 and 47 of the white line 24A and the region of interest ROI may be calculated as the slope of the partial line 31A.
  • the selected position may be a position selected from the partial line 31A, or a position arbitrarily selected from the straight line such as a midpoint on the straight line.
  • the slope of the straight line connecting the midpoint 43 of the intersections 41 and 46 and the midpoint 44 of the intersections 42 and 47 may be calculated as the slope of the partial line 31A.
  • the selected position may be a position selected from the partial line 31A, or a position arbitrarily selected from the straight line such as a midpoint on the straight line.
  • the edge portion of the white line 24A on the side opposite to the roadway 40 can be used as the area line 32.
  • the portion of the area line 32 included in the region of interest ROI corresponds to the partial line 32A.
  • the slope of the partial line 32A and the selected position on the partial line 32A can be calculated in the same manner as in the case of the area line 31.
  • the area line, the inclination, and the selected position can be calculated for the white line 24B in the same manner as for the white line 24A.
  • the slope angle of the road surface included in the region of interest ROI can be estimated from the above slope calculation function.
  • the method of calculating the slope of the partial line (the slope of the portion of the area line included in the region of interest) by the linear approximation described with reference to FIG. 10 can also be applied to the above-described embodiment.
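The midpoint-based linear approximation of FIG. 10 can be written out as follows. The point numbers follow the figure; the arithmetic is the plain midpoint construction described above, not code from the patent, and the coordinate values in the example are illustrative.

```python
def midline_slope(p41, p46, p42, p47):
    """Slope of the straight line through the midpoint 43 of the
    intersections 41 and 46 and the midpoint 44 of the intersections
    42 and 47, used as the slope of the partial line.
    All points are (u, v) image coordinates."""
    m43 = ((p41[0] + p46[0]) / 2.0, (p41[1] + p46[1]) / 2.0)
    m44 = ((p42[0] + p47[0]) / 2.0, (p42[1] + p47[1]) / 2.0)
    return (m44[1] - m43[1]) / (m44[0] - m43[0]), m43, m44

# inner intersections 41, 42 and outer intersections 46, 47 of a white
# line of width 4 pixels crossing the ROI (illustrative values)
s, m43, m44 = midline_slope((10, 0), (14, 0), (20, 30), (24, 30))
```

Using the midline of the white line rather than either edge averages out quantization noise in the detected boundaries.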
  • the slope of the road surface may be estimated without setting a region of interest. For example, an arbitrary position on the area line is selected as the selection position, and the slope of the road surface corresponding to that position is obtained from the above-mentioned gradient calculation function based on the coordinates of the selected position and the inclination of the area line at that position. In this case, the same estimation result can be obtained as when, in the above-described embodiment, a region of interest containing the straight line passing through the selected position is set and the slope of the road surface is estimated using the inclination of one side (left side or right side) and the coordinates of the selected position.
  • in the second embodiment, the road surface gradient angle is estimated as in the first embodiment, but the configuration used for the estimation differs.
  • FIG. 11 is a block diagram of the imaging system 500 and the vehicle control system 301 according to the second embodiment.
  • the same elements as those in the block diagram of FIG. 1 of the first embodiment are designated by the same reference numerals, and the description thereof will be omitted as appropriate.
  • the image pickup system 500 includes an image pickup device 201, a depth sensor 401, and an information processing device 501.
  • the image pickup apparatus 201 is the same as that of the first embodiment.
  • the depth sensor 401 is a sensor device that detects the depth information of the space including the front of the vehicle. As the depth information, for example, a depth image in which the depth value is stored in each pixel is acquired.
  • the depth sensor 401 is, for example, a LiDAR, a laser, a stereo camera, a ToF sensor, or the like.
  • the posture of the depth sensor 401 is set so that, for example, the sensing direction is inclined toward the road surface (ground) (slightly downward), or is substantially directed toward the road surface.
  • the parameter information of the depth sensor 401 is stored in the storage unit 150. Information on the positional relationship between the depth sensor 401 and the imaging device 201 is also stored in the storage unit 150.
  • the depth sensor 401 acquires the depth information of the space including the front at regular time intervals while the vehicle is running.
  • the depth sensor 401 provides the acquired depth information to the information processing device 101.
  • the depth sensor 401 operates in synchronization with, for example, the image pickup apparatus 201.
  • the depth sensor 401 and the image pickup apparatus 201 operate according to the same synchronization signal.
  • the timing of sensing by the depth sensor 401 may be the timing of receiving an instruction from the vehicle control system 301.
  • when the vehicle control system 301 receives an instruction for estimating the slope of the road surface from a user (passenger) in the vehicle via an input device (not shown), it may output a sensing instruction to the image pickup device 201.
  • the depth sensor 401 may also perform sensing while the vehicle is stopped.
  • the depth information acquisition unit 210 of the information processing device 101 is connected to the depth sensor 401 by wire or wirelessly, and acquires the depth information from the depth sensor 401.
  • the depth information acquisition unit 210 provides the acquired depth information to the estimation unit 240.
  • the image acquisition unit 110 acquires an image captured from the image pickup device 201 and provides the acquired image to the estimation unit 240.
  • the estimation unit 240 specifies the depth value of the observation target point (third position) on the roadway based on the depth information acquired from the depth information acquisition unit 210. Further, in the captured image, the position (fourth position) corresponding to the observation target point is specified. The slope of the observation target point on the road surface is estimated based on the depth value of the observation target point and the coordinates of the position corresponding to the observation target point. Below, two examples of estimating the gradient are shown.
  • FIG. 12 is a side view for explaining the operation of the estimation unit 240.
  • the traveling direction of the vehicle is the x-axis
  • the height direction is the z-axis
  • the direction perpendicular to the paper surface (the width direction of the road) is the y-axis.
  • the observation target point P1 is a position in real space and is a point on the road surface from which the gradient is to be estimated.
  • the observation target point P1 is, for example, a position where the preceding vehicle exists when the preceding vehicle exists. As the position where the preceding vehicle exists, the position at the lower end of the preceding vehicle or the position where the preceding vehicle and the road surface are in contact with each other may be used.
  • the preceding vehicle is detected by the estimation unit 240 based on the depth information acquired by the depth sensor 401.
  • FIG. 13 shows an example in which the position of the lower end of the rear part of the preceding vehicle is set as the observation target point P1.
  • the distance from the vehicle (more specifically, the depth sensor) to the observation target point P1 is the distance X.
  • the point at which the observation target point P1 is projected onto the plane on which the own vehicle exists is defined as the projection point P2 (second point).
  • the distance from the vehicle (more specifically, the depth sensor) to the projection point P2 is defined as the distance X'.
  • let Z be the distance between the observation target point P1 and the projection point P2.
  • let θ be the road surface gradient angle at the observation target point P1.
  • the road surface gradient angle θ is the difference in the slope angle of the road surface relative to the road surface on which the own vehicle exists.
  • the observation target point P1 may not be exactly a point on the road surface, but even in this case, since the observation target point P1 and the road surface are considered to be extremely close to each other, there is no problem in treating the difference as within the error range.
  • the estimation unit 240 calculates the distance (pixel distance) Δv on the captured image corresponding to the distance Z in real space.
  • the distance (pixel distance) Δv is uniquely determined.
  • FIG. 14 shows an example of calculating the distance Δv in the captured image captured by the imaging device.
  • the principal point (origin) in the image coordinate system (uv coordinate system) obtained in advance from the parameter information of the imaging device 201 is shown as a point O.
  • the position on the captured image corresponding to the observation target point P1 is a point S1, and the distance of the point S1 from the principal point O in the v coordinate direction is v1.
  • the position on the captured image corresponding to the projection point P2 is a point S2, and the distance of the point S2 from the principal point O in the v coordinate direction is v2. Therefore, the distance Δv corresponding to the distance Z is the value obtained by subtracting v1 from v2 (Δv = v2 − v1).
  • the road surface gradient angle θ is calculated by equation (20).
  • fz is the focal length of the image pickup apparatus 201 in the vertical direction.
  • the estimation unit 240 specifies the position (fourth position) on the captured image corresponding to the observation target point (third position) on the roadway.
  • the position (sixth position) on the captured image corresponding to the position (fifth position) in which the observation target point is projected on the plane on which the own vehicle exists is specified.
  • the road surface gradient angle at the observation target point is calculated from equation (20) based on the difference Δv of the v-coordinates of the two specified positions (fourth position and sixth position).
  • FIG. 15 is a flowchart of an example of the operation of the information processing device when the estimation unit 240 uses the first estimation example.
  • the image acquisition unit 110 acquires the captured image from the image pickup apparatus 201, and the depth information acquisition unit 210 acquires the depth information from the depth sensor 401 (S201).
  • the estimation unit 240 detects the preceding vehicle based on the depth information (S202), and identifies the lower end position of the rear portion of the preceding vehicle as an observation target point (S203).
  • the estimation unit 240 identifies the position on the captured image corresponding to the observation target point and the position on the captured image corresponding to the projection point obtained by projecting the observation target point onto the plane on which the own vehicle exists, and then calculates the difference (distance Δv) of the v-coordinates of these two positions (S204).
  • the estimation unit 240 estimates the road surface gradient angle at the observation target point based on Δv and equation (20) (S205).
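Equation (20) itself appears only as an image in the source and is not reproduced in this text. Under the usual pinhole model, the image-row gap between the projection point P2 and the observation target point P1 at depth X is Δv ≈ fz·Z/X, so tan θ = Z/X ≈ Δv/fz. The sketch below uses this reconstructed form, which is an assumption rather than the patent's verbatim formula.

```python
import math

def gradient_from_dv(v1, v2, fz):
    """Reconstructed form of equation (20): v1 and v2 are the
    v-direction distances from the principal point O of the image
    positions S1 and S2 corresponding to P1 and its projection P2;
    fz is the vertical focal length in pixels.
    Returns the gradient angle theta in degrees."""
    dv = v2 - v1  # step S204: pixel distance corresponding to Z
    return math.degrees(math.atan(dv / fz))  # step S205

flat = gradient_from_dv(100, 100, 1000)     # no row gap: flat road
uphill = gradient_from_dv(950, 1000, 1000)  # P1 imaged 50 rows above P2
```

Note that X cancels out of the ratio Z/X, which is why the first estimation example needs only the pixel gap Δv and the focal length.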
  • the road surface gradient angle (gradient difference) θ is estimated by applying the principle of monocular distance measurement.
  • the principle of monocular ranging is disclosed, for example, in the following paper. Gideon P. Stein et al., “Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy” (2003)
  • fz is the focal length of the image pickup apparatus 201 in the vertical direction.
  • v2 corresponds to the distance in the v coordinate direction from the principal point O to the position S2, where S2 is the position on the captured image corresponding to the projection point P2.
  • v1 corresponds to the distance in the v coordinate direction from the principal point O to the position S1, where S1 is the position on the captured image corresponding to the observation target point P1.
  • fz is the focal length of the image pickup apparatus 201 in the vertical direction.
  • H is the height of the image pickup apparatus 201.
  • X is the distance (depth value) to the observation target point P1.
  • the estimation unit 240 calculates the gradient angle θ from equation (22) based on the distance X to the observation target point P1 and the distance v1 in the v coordinate direction from the principal point O to the position S1 on the captured image corresponding to the observation target point P1.
  • the estimation unit 240 provides the vehicle control system 301 with information on the estimated road surface gradient angle θ.
  • the operation of the vehicle control system 301 is the same as that of the first embodiment.
  • FIG. 16 is a flowchart of an example of the operation of the information processing apparatus when the estimation unit 240 uses the second estimation example.
  • the image acquisition unit 110 acquires the captured image from the image pickup apparatus 201, and the depth information acquisition unit 210 acquires the depth information from the depth sensor 401 (S301).
  • the estimation unit 240 detects the preceding vehicle based on the depth information (S302), and identifies the lower end position of the rear portion of the preceding vehicle as an observation target point (S303).
  • the estimation unit 240 specifies a position on the captured image corresponding to the observation target point (S304).
  • the v-coordinate of the specified position (distance v1 in the v-coordinate direction from the principal point O in the captured image) is specified.
  • the estimation unit 240 estimates the road surface gradient angle at the observation target point based on v1 and equation (22) (S305).
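Equation (22) is likewise only an image in the source. A plausible reconstruction from the monocular-ranging relation v1 = fz·(H − Z)/X is Z = H − v1·X/fz, giving tan θ = Z/X = H/X − v1/fz. The sketch below uses this assumed form; it is not the patent's verbatim formula.

```python
import math

def gradient_from_depth(v1, X, fz, H):
    """Reconstructed form of equation (22) (assumed, not verbatim):
    v1 is the v-direction distance of S1 from the principal point O,
    X the depth to the observation target point P1 (from the depth
    sensor), fz the vertical focal length in pixels, H the mounting
    height of the camera. From v1 = fz * (H - Z) / X it follows that
    tan(theta) = H / X - v1 / fz."""
    return math.degrees(math.atan(H / X - v1 / fz))

# On a flat road a point at depth X images at v1 = fz * H / X, so the
# estimated gradient is zero (fz = 1000 px, H = 1.5 m, X = 30 m).
flat = gradient_from_depth(1000 * 1.5 / 30, 30, 1000, 1.5)
uphill = gradient_from_depth(40, 30, 1000, 1.5)  # P1 imaged higher up
```

Unlike the first estimation example, this form needs the depth value X, which is why the second example relies on the depth sensor 401.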
  • the road surface gradient angle can be estimated at high speed with a low calculation amount. Further, according to the present embodiment, the road surface gradient angle can be estimated with high accuracy.
  • the present disclosure may also have the following structure.
  • [Item 1] An information processing device comprising: a detection unit that detects, in an image of a space including a roadway, a first area line that identifies the region of the roadway; and an estimation unit that estimates the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first area line and the inclination of the first area line at the first position.
  • [Item 2] The information processing device according to item 1, further comprising an area setting unit that sets, for the image, a target area including at least a part of the road surface of the roadway,
  • wherein the estimation unit estimates the slope of the road surface included in the target area based on the coordinates of the first position on the first area line in the target area and the inclination of the first area line at the first position.
  • [Item 3] The information processing device according to item 2, wherein the detection unit detects a second area line facing the first area line, and the estimation unit estimates the gradient further based on the coordinates of a second position on the second area line in the target area and the inclination of the second area line at the second position.
  • [Item 4] The information processing device according to any one of items 1 to 3, further comprising a segmentation unit that segments the image,
  • wherein the detection unit uses, as the first area line, a boundary portion of the roadway segment adjacent to another segment, a boundary portion of the other segment adjacent to the roadway segment, or an edge portion of the other segment on the side opposite to the roadway.
  • [Item 5] The information processing device according to item 4, wherein the other segment is a traffic line segment.
  • [Item 6] The inclination at the first position is the inclination of a tangent line to the first area line at the first position.
  • [Item 7] The estimation unit further estimates the gradient based on the angle of the optical axis direction of the imaging device that captured the image.
  • [Item 8] The estimation unit estimates the gradient based on a function in which a first input variable taking the inclination as an input, a second input variable taking the first position as an input, and an output variable outputting the gradient are associated with one another.
  • [Item 9] The information processing device according to item 2, wherein the image includes a preceding vehicle, and the target area is an area including at least a part of the preceding vehicle.
  • [Item 10] The information processing device according to any one of items 1 to 9, further comprising an acquisition unit that acquires the image from an imaging device mounted on a moving body.
  • [Item 11] The imaging device is a monocular camera.
  • [Item 12] The information processing device according to any one of items 1 to 11, wherein the estimation unit identifies a fourth position on the image corresponding to a third position on the roadway based on depth information of the roadway, and estimates the slope of the road surface at the third position based on the coordinates of the fourth position.
  • [Item 13] The information processing device according to any one of items 1 to 12, wherein the image is an image acquired by an imaging device arranged at a predetermined height from a first plane, and the estimation unit estimates the slope of the road surface at the third position based on the coordinates of the fourth position in the image corresponding to the third position on the roadway and the coordinates of a sixth position in the image corresponding to a fifth position at which the third position is projected onto the first plane.
  • [Item 14] The information processing device according to item 12, wherein the third position is a position on the roadway where a preceding vehicle exists.
  • [Item 15] An imaging system comprising: an imaging device that images a space including a roadway; a detection unit that detects, in the image acquired by the imaging device, a first area line that identifies the region of the roadway; and an estimation unit that estimates the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first area line and the inclination of the first area line at the first position.
  • [Item 16] An information processing method comprising: detecting, in an image of a space including a roadway, a first area line that identifies the region of the roadway; and estimating the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first area line and the inclination of the first area line at the first position.
  • 100, 500: Imaging system 101, 501: Information processing device 110: Image acquisition unit 120: Detection unit 130: Area setting unit 140, 240: Estimation unit 150: Storage unit 201: Imaging device 210: Depth information acquisition unit 301: Vehicle control system 401: Depth sensor

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Traffic Control Systems (AREA)

Abstract

[Problem] To estimate the slope of the road surface of a roadway through simple processing. [Solution] An information processing device of the present disclosure comprises a detection unit for detecting a first region line which specifies a region of a roadway in a captured image of a space containing the roadway, and an estimation unit for estimating the slope of the road surface corresponding to a first position on the basis of the coordinate of the first position on the first region line and the slope of the first region line at the first position.

Description

Information processing device, imaging system, information processing method, and computer program
 The present disclosure relates to an information processing device, an imaging system, an information processing method, and a computer program.
 In driving a vehicle such as an automobile, there is a demand to know the slope of the road surface ahead in order to support safe driving (for example, automated driving). If the slope of the road surface ahead is known, more appropriate driving becomes possible. For example, if the road ahead is uphill, the vehicle can be controlled to accelerate so that it does not decelerate, or the driver can be advised to press the accelerator more deeply.
 Patent Document 1 below discloses an image processing device that uses two cameras (a stereo camera or the like) mounted on an automobile to estimate the gradient of a distant road on which the automobile is traveling. However, since this image processing device requires two cameras, it has the problems of a large processing load and high power consumption. It also has the problem of high cost.
Japanese Unexamined Patent Application Publication No. 2009-41972
 The present disclosure provides an information processing device, an imaging system, an information processing method, and a computer program that estimate the slope of the road surface of a roadway with simple processing.
 The information processing device of the present disclosure comprises:
 a detection unit that detects, in an image of a space including a roadway, a first area line that identifies the region of the roadway; and
 an estimation unit that estimates the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first area line and the inclination of the first area line at the first position.
 The imaging system of the present disclosure comprises:
 an imaging device that images a space including a roadway;
 a detection unit that detects, in the image acquired by the imaging device, a first area line that identifies the region of the roadway; and
 an estimation unit that estimates the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first area line and the inclination of the first area line at the first position.
 The information processing method of the present disclosure detects, in an image of a space including a roadway, a first area line that identifies the region of the roadway, and estimates the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first area line and the inclination of the first area line at the first position.
 The computer program of the present disclosure causes a computer to execute:
 a step of detecting, in an image of a space including a roadway, a first area line that identifies the region of the roadway; and
 a step of estimating the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first area line and the inclination of the first area line at the first position.
  • FIG. 1: Block diagram of the imaging system and the vehicle control system according to the first embodiment of the present disclosure.
  • FIG. 2: A diagram showing, in plan view, an example of a road in real space.
  • FIG. 3: A diagram showing an example of a captured image acquired by the imaging device.
  • FIG. 4: A side view showing the relationship between a vehicle and a roadway in real space.
  • FIG. 5: A diagram showing an example in which a region of interest (target region) ROI is set.
  • FIG. 6: A schematic cross-sectional view showing the geometric relationship between the imaging device and the road surface.
  • FIG. 7: A diagram schematically showing an example of a curved roadway in the xyz coordinate system.
  • FIG. 8: A diagram showing an example of calculating the gradient angle of the road surface included in the region of interest ROI.
  • FIG. 9: A flowchart of an example of the operation of the information processing device according to the present embodiment.
  • FIG. 10: A diagram showing another example in which the region of interest ROI is set.
  • FIG. 11: Block diagram of the imaging system and the vehicle control system according to the second embodiment.
  • FIG. 12: A side view for explaining the operation of the estimation unit.
  • FIG. 13: A diagram showing an example in which the point of contact between the preceding vehicle and the road surface is set as the observation target point.
  • FIG. 14: A diagram showing an example of calculating the distance Δv in the captured image.
  • FIG. 15: A flowchart of an example of the operation of the information processing device when the estimation unit uses the first estimation example.
  • FIG. 16: A flowchart of an example of the operation of the information processing device when the estimation unit uses the second estimation example.
 Hereinafter, embodiments of the present disclosure will be described with reference to the drawings. In one or more embodiments shown in the present disclosure, the elements included in each embodiment can be combined with each other, and the combined results also form part of the embodiments shown in the present disclosure.
(First Embodiment)
 FIG. 1 is a block diagram of an imaging system 100 and a vehicle control system 301 according to the first embodiment of the present disclosure. The imaging system 100 of FIG. 1 is mounted on a vehicle such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, or a bicycle. The system of FIG. 1 can also be mounted on any other type of moving body, such as an AGV (Automated Guided Vehicle), personal mobility, a mobile robot, a construction machine, or an agricultural machine (tractor). As long as these moving bodies can travel on a road surface, they also correspond to a form of the vehicle according to the present embodiment.
 The imaging system 100 is connected to the vehicle control system 301 by wire or wirelessly.
 The vehicle control system 301 controls a vehicle traveling on a roadway. For example, it detects obstacles such as a preceding vehicle and pedestrians using a radar, LiDAR, a ToF sensor, a camera, a weather sensor, or the like, and performs control for avoiding collision with the detected obstacles and pedestrians. For example, it controls acceleration and deceleration of the vehicle, or controls the output of a warning to a pedestrian. The vehicle control system 301 may feed back the detection result or the control result to the imaging system 100.
 The imaging system 100 includes an imaging device 201 and an information processing device 101. The information processing device includes an image acquisition unit 110, a detection unit 120, an area setting unit 130, an estimation unit 140, and a storage unit 150. The image acquisition unit 110, the detection unit 120, the area setting unit 130, and the estimation unit 140 are configured by hardware, software (a program), or both. Examples of the hardware include a processor such as a CPU (Central Processing Unit), a dedicated circuit, a programmable circuit, and a memory. The storage unit 150 includes any recording medium capable of storing information or data, such as a memory, a hard disk, or an optical disk.
 The imaging device 201 is a camera or sensor device that images a space including the roadway in front of the vehicle. The imaging device 201 is, for example, a color camera such as an RGB camera, a monochrome camera, an infrared camera, or a brightness sensor. In the present embodiment, the imaging device 201 is an RGB monocular camera. The imaging device 201 is attached at a position where it can image the area in front of the vehicle, such as the upper part of the windshield in the vehicle interior or the front nose. The posture of the imaging device 201 is set so that, for example, the imaging direction is inclined toward the road surface (ground) of the road (slightly downward), or is substantially directed toward the road surface.
 The imaging device 201 images the space including the roadway in front of the vehicle at regular time intervals while the vehicle is traveling, and provides the captured images to the information processing device 101. The timing at which the imaging device 201 performs sensing may be the timing at which it receives an instruction from the vehicle control system 301. When the vehicle control system 301 receives an instruction for estimating the slope of the road surface from a user (passenger) in the vehicle via an input device (not shown), it may output an imaging instruction to the imaging device 201. The imaging device 201 can also capture images while the vehicle is stopped.
 The storage unit 150 stores the data and information necessary for the processing of this embodiment, such as the parameter information of the image capturing device 201, a slope calculation function described later, and a trained neural network.
 The image acquisition unit 110 is connected to the image capturing device 201 by wire or wirelessly and acquires captured images from it. The image acquisition unit 110 provides the acquired captured images to the detection unit 120.
 The detection unit 120 detects region lines that delimit the area of the roadway included in the captured image. A region line may be, for example: a traffic line drawn on the road surface (an outside roadway line, a center line, etc.); the boundary between a traffic line and the roadway on the roadway side; or the edge of a traffic line on the side opposite the roadway (assuming the traffic line has a constant or substantially constant width). Alternatively, a region line may be a gutter along the road surface, the boundary between the gutter and the roadway on the roadway side, or the edge of the gutter on the side opposite the roadway (assuming the gutter has a constant or substantially constant width). Traffic lines are generally drawn in white or orange. Alternatively, when the roadway is made of asphalt or gravel and one or both sides of the roadway border a lawn, the boundary between the road and the lawn on the roadway side, or the boundary of the lawn with the roadway, may be used as a region line. Alternatively, when a fence runs along the roadway, the boundary between the fence and the roadway on the roadway side, or the boundary on the fence side, may be used as a region line. The shape of a region line may be straight, curved, or a mixture of straight and curved segments. In this embodiment, traffic lines drawn on the road (white lines, etc.) are treated as not being part of the roadway, but traffic lines can also be defined as part of the roadway.
 FIG. 2 is a plan view showing an example of a road in real space. FIG. 2(A) shows an example of a road in which outside roadway lines (white lines) 10A and 10B are provided on both sides of a roadway 10C. The outside roadway lines 10A and 10B are straight. On the side of the outside roadway lines 10A and 10B opposite the roadway 10C is, for example, an area other than the roadway (a footpath, etc.).
 FIG. 2(B) shows an example in which the outside roadway lines (white lines) 11A and 11B provided on both sides of a roadway 11C are curved.
 FIG. 2(C) shows an example of a road with gutters (side ditches) 12A and 12B on both sides of a roadway 12C.
 In FIGS. 2(A) to 2(C), white lines or gutters are present on both sides of the roadway, but they may be present on only one side.
 FIG. 3 shows specific examples of the roadway and the region lines in a captured image acquired by the image capturing device 201. FIG. 3(A) shows an example of a straight roadway with no slope difference between the road surface on which the host vehicle is located and the road surface ahead. FIG. 3(B) shows an example of a straight roadway with a slope difference between the road surface on which the host vehicle is located and the road surface ahead (the road ahead is uphill).
 In FIG. 3(A), for example, the boundaries of the roadway 22 in contact with the white lines (outside roadway lines) are schematically shown as region lines 21A and 21B. Similarly, in FIG. 3(B), the boundaries of the roadway 25 in contact with the white lines (outside roadway lines) are schematically shown as region lines 24A and 24B. In FIGS. 3(A) and 3(B), objects other than the roadway, such as the white lines, are not illustrated.
 In the captured image 20 of FIG. 3(A), with no slope difference, the region lines 21A and 21B are straight (the white lines adjacent to the region lines 21A and 21B are likewise straight). In the captured image 23 of FIG. 3(B), with a slope difference, the region lines 24A and 24B bend slightly inward partway along the traveling direction.
 The position of each pixel in the captured images of FIGS. 3(A) and 3(B) can be represented in a uv coordinate system with the horizontal axis as the u axis and the vertical axis as the v axis. The uv coordinate system corresponds to the so-called image coordinate system.
 FIGS. 4(A) and 4(B) are side views showing the relationship between the vehicle and the roadway in real space when the captured images of FIGS. 3(A) and 3(B) were acquired. The traveling direction of the vehicle is the x axis, the height direction is the z axis, and the direction perpendicular to the page (the width direction of the road) is the y axis. The xyz coordinate system corresponds to the so-called world coordinate system. A position in real space can be represented by xyz coordinates.
 As shown in FIG. 4(A), on the roadway 26 there is no slope difference between the road surface on which the vehicle is located and the road surface ahead of the vehicle. As shown in FIG. 4(B), on the roadway 27 there is a slope difference between the road surface on which the vehicle is located and the road surface ahead of the vehicle.
 Imaging the road ahead on a straight roadway with no slope difference, as in FIG. 4(A), yields an image in which the region lines of the roadway narrow at a constant rate, as shown in FIG. 3(A). Imaging the road ahead of the vehicle on a straight roadway with a slope difference, as in FIG. 4(B), yields an image in which the region lines of the roadway bend inward along the traveling direction, as shown in FIG. 3(B). The relationship between the real space of FIG. 4(A) and the image of FIG. 3(A), and between the real space of FIG. 4(B) and the image of FIG. 3(B), is expressed by the well-known perspective projection transformation, using the world coordinate system to which the real space belongs and the image coordinate system to which the image belongs.
 As a method for the detection unit 120 to detect the region lines of the roadway from the captured image, semantic segmentation can be used, for example. Semantic segmentation divides the captured image into a plurality of segments (objects), each belonging to one of a plurality of predefined classes. Examples of classes include roadway, traffic line (white line, etc.), and gutter. In this case, for example, the boundary (boundary line) of an object of the roadway class with an adjacent object can be detected as a region line of the roadway. Alternatively, the boundary of the adjacent object with the roadway-class object can be detected as a region line, or the edge of the adjacent object on the side opposite the roadway-class object can be detected as a region line (for example, when the adjacent object is a traffic-line object). The image may also be segmented by methods other than semantic segmentation, such as a clustering method based on color information. Furthermore, instead of segmenting the image, the region lines of the roadway may be detected using a contour-detection technique applied to the image. An example of the operation when semantic segmentation is used is described below.
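A minimal sketch of extracting boundary pixels from a per-pixel class-label mask, the step described above: each image row is scanned for the leftmost and rightmost roadway-class pixels, which together form the left and right region lines. The class numbering (0 = roadway) is an assumption for the sketch; the patent does not fix specific class values.

```python
import numpy as np

def roadway_boundaries(labels: np.ndarray, roadway_class: int = 0):
    """For each image row, return the leftmost and rightmost columns of the
    roadway class, i.e. the pixels forming the left/right region lines.
    Rows with no roadway pixels are skipped."""
    left, right = [], []
    for v, row in enumerate(labels):
        cols = np.flatnonzero(row == roadway_class)
        if cols.size:
            left.append((cols[0], v))    # (u, v) of the left boundary pixel
            right.append((cols[-1], v))  # (u, v) of the right boundary pixel
    return np.array(left), np.array(right)

# Toy 6x8 label mask: 0 = roadway, 2 = other; the roadway widens toward
# the bottom rows, as in a perspective view of a road.
mask = np.full((6, 8), 2)
for v in range(6):
    mask[v, 3 - v // 2 : 5 + v // 2] = 0
left, right = roadway_boundaries(mask)
```

A real mask would come from the segmentation network; the per-row scan is one simple way to realize the boundary detection the paragraph describes.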
 For example, semantic segmentation of the captured image is performed using a trained neural network stored in the storage unit 150. Semantic segmentation classifies each pixel of an image into a class (type) on a per-pixel basis: a class is determined for each pixel, and a label indicating the determined class is output for each pixel. For example, a plurality of classes such as roadway, traffic line (white line, etc.), and gutter are defined in advance, and a class determination is made for each pixel. As a result of the semantic segmentation, a class value corresponding to each pixel is obtained; a run of contiguous pixels with the same class value corresponds to one segment (object). When a trained neural network is used, the captured image is fed to the network as input, and the class value for each pixel is obtained as output. An image in which pixels with the same class value are given the same color (a semantic segmentation image) may also be generated; since objects of the same type are then shown in the same color, the segmentation result can be displayed in a visually intuitive way.
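The colorization step mentioned at the end can be sketched in a few lines; the palette below (class value → RGB color) is a hypothetical example, since the embodiment does not prescribe specific colors:

```python
import numpy as np

# Hypothetical class palette: class value -> RGB color.
PALETTE = np.array([[128, 64, 128],   # 0: roadway
                    [255, 255, 255],  # 1: traffic line (white line)
                    [70, 70, 70]])    # 2: other

def colorize(labels: np.ndarray) -> np.ndarray:
    """Map a (H, W) per-pixel class-value array to a (H, W, 3) color image,
    so that all pixels of one class share one color."""
    return PALETTE[labels]

labels = np.array([[0, 0, 1],
                   [2, 0, 1]])
img = colorize(labels)
```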
 The region setting unit 130 sets, in the captured image, a region of interest (target region) that at least partially includes the part of the road surface whose slope is to be estimated. The shape of the region of interest may be arbitrary, such as a rectangle, trapezoid, or circle, and is not limited to any particular shape; the region of interest may also be a straight line (for example, a line one pixel wide). Note that setting a region of interest in the captured image includes both setting it directly in the captured image and setting it in the segmentation image.
 The estimation unit 140 estimates the slope of the road surface at the position corresponding to a position (selected position) chosen on a region line of the roadway detected by the detection unit 120, based on that selected position and the inclination of the region line at the selected position. When two region lines are detected, the selected position on one region line corresponds to a first position, and the selected position on the other region line corresponds to a second position. However, only one region line may be detected, in which case the selected position chosen on that region line corresponds to the first position.
 Specifically, the estimation unit 140 first identifies, among the region lines of the roadway detected by the detection unit 120, the portion of a region line included in the region of interest set by the region setting unit 130 (hereinafter referred to as a partial line). The estimation unit 140 then calculates the inclination at an arbitrary position (selected position) on the identified partial line.
 The inclination of the partial line is an inclination in the uv coordinate system, and the selected position on the partial line is represented by uv coordinates.
 The inclination of the partial line may be the inclination of the tangent at the selected position on the partial line, or the inclination of a straight line approximating the partial line.
 The selected position may be the position of the point (point of tangency) where the partial line touches the tangent, the centroid of the partial line (the mean of its coordinates), or the position of an arbitrarily chosen coordinate among those included in the partial line.
 Note that the partial line may be one pixel wide or two or more pixels wide.
 FIG. 5 shows an example in which a region of interest (target region) ROI is set on the captured image 23 of FIG. 3(B). A rectangular region of interest ROI crossing the roadway 25 is set slightly toward the center of the captured image 23 in the v-coordinate direction. The region of interest ROI at least partially includes the road surface of the roadway 25. The portion of the region line 24A of the roadway 25 included in the region of interest ROI is identified as a partial line 31A. Similarly, the portion of the region line 24B, which faces the region line 24A across the roadway 25, included in the region of interest ROI is identified as a partial line 31B. Only one of the two partial lines may be identified.
 The region of interest ROI may be set by any method. For example, when the vehicle control system 301 detects a vehicle ahead, a range including at least part of the detected vehicle may be set as the region of interest ROI. Alternatively, a predetermined coordinate range in the captured image may be used as the region of interest ROI. Alternatively, a time series of images may be analyzed, and when the inclination of a region line changes by more than a certain amount, a range including the changed portion may be set as the region of interest. Alternatively, the captured image may be displayed on a display device in the vehicle, and the user may specify the region of interest using an input device (a touch panel, etc.).
 The slope αL of the partial line 31A and the slope αR of the partial line 31B in the image coordinate system (uv coordinate system) are shown. The slope αL may be the slope of the tangent at the selected position on the partial line 31A, or the slope of a straight line approximating the partial line 31A. The position L (uL, vL) that is the selected position on the partial line 31A (see FIG. 8, described later) may be, for example, the point of tangency between the partial line 31A and its tangent, the centroid of the partial line 31A (for example, the center of its u coordinates and the center of its v coordinates), or a position determined by another method.
 Similarly, the slope αR may be the slope of the tangent at the selected position on the partial line 31B, or the slope of a straight line approximating the partial line 31B. The position R (uR, vR) that is the selected position on the partial line 31B may be, for example, the point of tangency between the partial line 31B and its tangent, the centroid of the partial line 31B (for example, the center of its u coordinates and the center of its v coordinates), or a position determined by another method.
 The position L (uL, vL) and the position R (uR, vR) may be determined under the constraint that they share the same v coordinate. The details of the process of calculating the position and slope of a partial line are described later.
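As one concrete way to obtain the slope and selected position of a partial line, the boundary pixels inside the region of interest can be fitted with a straight line by least squares: the fitted slope gives the angle α and the centroid gives the selected position. This is a minimal sketch of that approach; the least-squares fit and the names below are assumptions, since the embodiment does not prescribe a specific fitting method.

```python
import math
import numpy as np

def partial_line_slope(points_uv: np.ndarray):
    """Fit v = a*u + b to the partial-line pixels (inside the ROI) by least
    squares and return (alpha, (uc, vc)): the line angle alpha = atan(a) in
    the uv image coordinate system, and the centroid used as the selected
    position."""
    u, v = points_uv[:, 0], points_uv[:, 1]
    a, b = np.polyfit(u, v, 1)   # straight line approximating the partial line
    alpha = math.atan(a)         # tan(alpha) = dv/du
    return alpha, (u.mean(), v.mean())

# Synthetic partial line: pixels along v = 0.5*u + 100.
pts = np.array([(u, 0.5 * u + 100.0) for u in range(200, 240)])
alpha, (uc, vc) = partial_line_slope(pts)
```

The tangent-at-a-point variant mentioned in the text could be realized the same way by fitting only a small neighborhood of the selected position.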
 The estimation unit 140 estimates the slope of the road surface portion included in the region of interest based on the slope of the partial line and the selected position. The reason why the slope of the road surface portion included in the region of interest ROI can be estimated from the slope of a partial line included in the region of interest ROI and the selected position is explained in detail below.
 FIG. 6 is a schematic cross-sectional view showing the geometric relationship between the image capturing device 201 and the road surface. As in FIG. 4(B), FIG. 6 takes as an example a straight roadway with a slope difference ahead. It shows the image capturing device 201 (camera 35) in real space, the road surface of the roadway 27, and the road surface angle (road surface slope angle) θ0 at an observation target point P (target position). The traveling direction of the vehicle is the x axis and the height direction is the z axis; the y axis is the width direction of the vehicle. The xyz coordinate system corresponds to the world coordinate system.
 The vehicle's image capturing device 201 is located at height H above the origin of the xyz coordinate system; that is, the image capturing device 201 is at height H above the plane 38 on which the vehicle rests. The coordinates of the camera's projection center are (0, 0, H). The direction (angle) of the optical axis 36 of the camera 35 is θ. The observation target point P is a position parallel to the optical axis of the camera and corresponds, for example, to the selected position on a partial line. The road surface slope angle θ0 is the slope of the tangent 37 at the observation target point P on the road surface of the roadway. The intersection of the tangent 37 with the x axis is x0.
 As described above, the relationship between world coordinates and image coordinates is expressed by a perspective projection transformation, given by the following equation.
Figure JPOXMLDOC01-appb-M000001
 Here, s is a scaling factor corresponding to the reciprocal of the depth from the camera to the observation target point; fy is the camera's horizontal focal length (in pixels) and fz its vertical focal length (in pixels); (cy, cz) is the camera center (principal point), that is, the center of the uv coordinate system; θ is the angle of the optical-axis direction; and H is the camera height. (x, y, z) are the coordinates of the observation target point P, which are unknown values; the observation target point P corresponds to the position of the partial line in the captured image. On the right-hand side of equation (1), the first matrix corresponds to the camera's intrinsic parameters and the second matrix to its extrinsic parameters. The intrinsic and extrinsic parameters are measured in advance by calibration and stored in the storage unit 150; together they are referred to as the camera's parameter information.
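The equation images themselves are not reproduced in this text, so the exact matrices of equation (1) are not available here. As an illustration only, the following sketch implements a generic pinhole projection consistent with the setup of FIG. 6 (camera at height H, optical axis pitched down by θ from the x axis; x forward, y lateral, z up); the sign conventions and the function name are assumptions, not the patent's definitions:

```python
import math

def project(x, y, z, fy, fz, cy, cz, theta, H):
    """Project a world point (x, y, z) to image coordinates (u, v) for a
    pinhole camera at (0, 0, H) whose optical axis is pitched down by theta
    from the x axis. Returns None for points behind the camera."""
    zp = z - H                                    # height relative to the camera
    depth = x * math.cos(theta) - zp * math.sin(theta)
    if depth <= 0:
        return None
    u = cy + fy * y / depth                       # lateral offset
    v = cz + fz * (-zp * math.cos(theta) - x * math.sin(theta)) / depth
    return u, v

# Two points on a flat road (z = 0): under this model the farther point
# appears higher in the image (smaller v), approaching the horizon, which
# matches the narrowing region lines of FIG. 3(A).
cam = dict(fy=1000.0, fz=1000.0, cy=640.0, cz=360.0, theta=0.05, H=1.5)
u_near, v_near = project(5.0, 0.0, 0.0, **cam)
u_far, v_far = project(20.0, 0.0, 0.0, **cam)
```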
 The following equations (2), (3), and (4) are derived from equation (1), where z' = z − H.
Figure JPOXMLDOC01-appb-M000002
Figure JPOXMLDOC01-appb-M000003
Figure JPOXMLDOC01-appb-M000004
 Based on the uv coordinate system, the slope α of the partial line (denoting αL or αR) can be expressed as tan α = dv/du. The derivative dv/du of v with respect to u is derived analytically below. First, the chain rule of differentiation gives equation (5).
Figure JPOXMLDOC01-appb-M000005
 From the relationship shown in FIG. 6, the following equation (6) is obtained, where x and z are the x and z coordinates of the observation target point P, and θ0 is the angle (road surface slope angle) at which the tangent at the observation target point P intersects the x axis.
Figure JPOXMLDOC01-appb-M000006
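The image of equation (6) is not reproduced in this text. From the stated geometry, a tangent with slope angle θ0 passing through P and crossing the x axis at x0, it presumably takes the form:

```latex
z = (x - x_0)\tan\theta_0
```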
 From equations (3) and (6), v is expressed as a function of z alone among x, y, and z. The first and second terms of equation (5) therefore become 0, and the following equation (7) is obtained.
Figure JPOXMLDOC01-appb-M000007
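The images of equations (5) and (7) are likewise not reproduced here. The chain-rule expansion the text describes, and its reduction when v depends only on z, presumably read:

```latex
\frac{dv}{du}
= \frac{\partial v}{\partial x}\frac{\partial x}{\partial u}
+ \frac{\partial v}{\partial y}\frac{\partial y}{\partial u}
+ \frac{\partial v}{\partial z}\frac{\partial z}{\partial u}
\quad\Longrightarrow\quad
\frac{dv}{du} = \frac{dv}{dz}\,\frac{\partial z}{\partial u}
```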
 Equation (7) is now derived concretely. First, the first derivative on its right-hand side is derived. From equations (3) and (6), the following equation (8) is obtained.
Figure JPOXMLDOC01-appb-M000008
 From equation (3), the following equation (9) is obtained.
Figure JPOXMLDOC01-appb-M000009
 From equation (4), the following equation (10) is obtained.
Figure JPOXMLDOC01-appb-M000010
 From equations (8) to (10), the following equation (11) is obtained.
Figure JPOXMLDOC01-appb-M000011
 Next, the second derivative (∂z/∂u) on the right-hand side of equation (7) is derived. From equations (4) and (6), the following equation (12) is obtained.
Figure JPOXMLDOC01-appb-M000012
 From equation (2), the following equation (13) is obtained.
Figure JPOXMLDOC01-appb-M000013
 From equations (12) and (13), the following equation (14) is obtained.
Figure JPOXMLDOC01-appb-M000014
 From equation (14), the following equation (15) is obtained.
Figure JPOXMLDOC01-appb-M000015
 From equations (7), (11), and (15), the following equation (16) is obtained.
Figure JPOXMLDOC01-appb-M000016
 From equations (5) and (16), the following equation (17) is obtained.
Figure JPOXMLDOC01-appb-M000017
 In equation (17), the direction θ of the camera's optical axis 36, the vertical focal length fz, and the camera center (cy, cz) are known from the camera's parameter information. Therefore, from the angle α of the partial line and the position (u, v) that is the selected position on the partial line, equation (17) gives the road surface slope angle θ0 at the target position (observation target point) on the road surface corresponding to the position (u, v). That is, the slope of the road surface at the target position can be estimated based on the angle α of the partial line and the coordinates (u, v) of the selected position on the partial line.
 The function of equation (17) is stored in advance in the storage unit 150 as the slope calculation function. The slope calculation function associates an input variable that receives the slope (angle) of the partial line, an input variable that receives the coordinates of the selected position on the partial line, and an output variable that outputs the road surface slope angle.
 The derivation of the slope calculation function above assumed a straight roadway (that is, a constant value of y in the xyz coordinate system), but even for a curved roadway as shown in FIG. 2(B), a slope calculation function can be derived in the same way by expressing the relationship between x, y, and z as a polynomial.
 FIG. 7 schematically shows an example of a curved roadway in the xyz coordinate system. White lines 51L and 52R are provided on both sides of a roadway 50. For example, the relationship between x, y, and z can be expressed by the following equation (18) together with equation (6) above.
Figure JPOXMLDOC01-appb-M000018
 Since y now depends on x and z, the second term on the right-hand side of equation (7) becomes nonzero, giving the following equation (19).
Figure JPOXMLDOC01-appb-M000019
 Based on equation (19), the subsequent equations can be derived in the same manner as above.
 The estimation unit 140 uses the slope calculation function to estimate the slope angle of the road surface included in the region of interest ROI (the road surface slope angle) from the angle of a partial line and the coordinates of the selected position.
 FIG. 8 shows an example of calculating the slope angle of the road surface included in the region of interest ROI in the same captured image 23 as in FIG. 5. Here, the region of interest ROI is a straight line. When the region of interest ROI is a straight line, the partial line (see the partial line 31A in FIG. 5) corresponds to the intersection L (point L) of the region of interest ROI (the straight line) and the region line 24A, and the coordinates of the point L also correspond to the coordinates of the selected position on the partial line. The angle αL of the partial line is, for example, the angle of the tangent 39L at the point L. From the angle αL of the partial line and the position L (uL, vL), the slope calculation function described above gives the road surface slope angle θ0L at the position (target position) in real space corresponding to the position L (uL, vL).
 Similarly, the other partial line (see the partial line 31B in FIG. 5) corresponds to the intersection R (point R) of the region of interest ROI and the region line, and the coordinates of the point R also correspond to the coordinates of the selected position on the partial line. The angle αR of the partial line is, for example, the angle of the tangent 39R at the point R. From the angle αR of the partial line and the position R (uR, vR), the estimation unit 140 uses the slope calculation function described above to calculate the road surface slope angle θ0R at the position (target position) in real space corresponding to the position R (uR, vR).
 The estimation unit 140 estimates the average of the two calculated road surface slope angles θ0L and θ0R as the slope angle of the road surface included in the region of interest. The average may be a weighted average. For example, the weights may be set to 1:1 when the roadway is straight, and different weights may be set for the left and right sides on a left or right curve. The weights may also be adjusted according to the curvature of the curve. Alternatively, one of the two road surface slope angles θ0L and θ0R (the maximum or the minimum) may be estimated as the slope angle of the road surface included in the region of interest.
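The combination step above can be sketched as follows; the mapping from curve direction and curvature to the left/right weights is an illustrative assumption, since the patent leaves the concrete weighting scheme open.

```python
def combine_slope_angles(theta_l, theta_r, curvature=0.0):
    """Combine the road surface slope angles estimated from the left (theta_l)
    and right (theta_r) partial lines into one angle for the region of interest.

    curvature: signed curvature of the roadway; 0 for a straight road,
    negative for a left curve, positive for a right curve (assumed convention).
    """
    if curvature == 0.0:
        w_l, w_r = 0.5, 0.5  # straight roadway: 1:1 weights
    else:
        # Curved roadway: shift weight between the two sides in proportion
        # to the curvature (the 0.25 cap is an assumed tuning choice).
        bias = min(abs(curvature), 1.0) * 0.25
        w_l, w_r = (0.5 + bias, 0.5 - bias) if curvature < 0 else (0.5 - bias, 0.5 + bias)
    return w_l * theta_l + w_r * theta_r
```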
 The estimation unit 140 outputs information representing the estimated slope angle of the road surface in the region of interest ROI to the vehicle control system 301. The vehicle control system 301 controls the vehicle using the road surface slope angle information. For example, it performs acceleration/deceleration control as driver assistance to avoid collisions with obstacles, pedestrians, and the like. Specifically, the vehicle is accelerated when the road surface slope angle indicates an uphill, and decelerated when it indicates a downhill.
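The acceleration/deceleration decision described above reduces to a sign check on the estimated angle; a minimal sketch, in which the flat-road dead band is an assumed tuning parameter:

```python
def control_action(slope_angle_rad, deadband=0.01):
    """Driving-assistance decision from the estimated road surface slope angle:
    accelerate on an uphill, decelerate on a downhill, and take no action when
    the road is effectively flat (the deadband width is an assumption)."""
    if slope_angle_rad > deadband:
        return "accelerate"   # uphill
    if slope_angle_rad < -deadband:
        return "decelerate"   # downhill
    return "hold"
```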
 FIG. 9 is a flowchart of an example of the operation of the information processing device according to the present embodiment. The image acquisition unit 110 acquires a captured image from the imaging device 201 (S101). The detection unit 120 detects a region line that specifies the region of the roadway in the captured image (S102). The region setting unit 130 sets, in the captured image, a region of interest (target region) at least partially including the road surface whose slope angle is to be estimated (S103). The estimation unit 140 calculates the slope at the selected position (the first position or the second position) of the partial line, which is the portion of the region line included in the region of interest (S104). The estimation unit 140 estimates the slope angle of the road surface included in the region of interest by evaluating the slope calculation function that takes the slope of the partial line and the coordinates of the selected position as input variables (S105).
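The S101–S105 flow can be condensed into code as below. The polyline representation of region lines, the straight-line ROI, and the helper functions are illustrative stand-ins for the patent's units, not an actual API; the slope calculation function is passed in as `gradient_fn`.

```python
import math

def tangent_angle(polyline, idx):
    """Angle of the polyline segment starting at index idx (image u-v plane)."""
    (u0, v0), (u1, v1) = polyline[idx], polyline[idx + 1]
    return math.atan2(v1 - v0, u1 - u0)

def estimate_roi_slope(region_lines, roi_v, gradient_fn):
    """Sketch of S102-S105 of FIG. 9 for an already-acquired image (S101).

    region_lines: detected region lines as polylines of (u, v) points (S102).
    roi_v: v-coordinate of a straight-line region of interest (S103).
    gradient_fn: slope calculation function taking (alpha, u, v) (S105).
    """
    angles = []
    for line in region_lines:
        # S104: take the vertex closest to the ROI line as the selected
        # position, and the local tangent angle as the partial-line slope.
        idx = min(range(len(line) - 1), key=lambda i: abs(line[i][1] - roi_v))
        u, v = line[idx]
        alpha = tangent_angle(line, idx)
        angles.append(gradient_fn(alpha, u, v))
    return sum(angles) / len(angles)  # S105: combine (plain average here)
```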
 The order of the steps in the flowchart of FIG. 9 is merely an example, and some steps may be reordered. For example, step S103 may be executed before step S102.
 As described above, according to the first embodiment, the inclination angle of the road surface included in the region of interest is estimated based on the slope, at a selected position, of the partial line that is the region line of the roadway included in the region of interest ROI set in the captured image, and on the coordinates of that selected position. This allows the road surface inclination angle to be estimated at high speed with a small amount of processing. Further, according to the first embodiment, the estimation can be performed using an image captured by a monocular camera; since two or more cameras such as a stereo camera are unnecessary, the estimation can be realized at low cost. In addition, performing the estimation using a plurality of partial lines improves the accuracy and robustness of the estimation.
 (Modification 1)
 In the embodiment described above, the boundary on the roadway side between the roadway and a traffic line (white line or the like) is used as the region line that specifies the region of the roadway. In this modification, the boundary on the white-line side or the like is used as the region line.
 FIG. 10 shows an example of a captured image according to this modification. White lines 24A and 24B are provided on both sides of the roadway 40. A rectangular region of interest ROI crossing the white lines 24A and 24B in the u-coordinate direction is set. The white lines 24A and 24B each have a certain width (two or more pixels).
 As an example, the boundary portion of the white line 24A in contact with the roadway 40 can be used as the region line 31. The portion of the region line 31 included in the region of interest ROI corresponds to the partial line 31A. On the partial line 31A, the point 45 can be selected as an example of the selected position. In this case, the slope of the tangent at the point 45 can be used as the slope αL at the point 45.
 Alternatively, the slope of the straight line (not shown) connecting the inner intersections 41 and 42 of the white line 24A with the region of interest ROI may be calculated as the slope of the partial line 31A. In this case, the selected position may be a position selected on the partial line 31A, or a position selected arbitrarily on the straight line, such as its midpoint. As another example, the slope of the straight line connecting the outer intersections 46 and 47 of the white line 24A with the region of interest ROI may be calculated as the slope of the partial line 31A; the selected position may likewise be a position on the partial line 31A or an arbitrary position on the straight line, such as its midpoint. Alternatively, the slope of the straight line connecting the midpoint 43 between the intersections 41 and 46 and the midpoint 44 between the intersections 42 and 47 may be calculated as the slope of the partial line 31A; again, the selected position may be a position on the partial line 31A or an arbitrary position on the straight line, such as its midpoint.
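The midpoint-based straight-line approximation above can be sketched as follows; the point numbering follows FIG. 10 (inner intersections 41 and 42, outer intersections 46 and 47), and the helper itself is hypothetical.

```python
def midpoint_line_slope(p41, p46, p42, p47):
    """Slope of the straight line through the midpoint 43 between the inner and
    outer intersections (41, 46) on one ROI edge and the midpoint 44 between
    the intersections (42, 47) on the other edge. Points are (u, v) tuples.
    Returns the slope dv/du and the line's midpoint as a selectable position."""
    m43 = ((p41[0] + p46[0]) / 2, (p41[1] + p46[1]) / 2)
    m44 = ((p42[0] + p47[0]) / 2, (p42[1] + p47[1]) / 2)
    slope = (m44[1] - m43[1]) / (m44[0] - m43[0])
    selected = ((m43[0] + m44[0]) / 2, (m43[1] + m44[1]) / 2)
    return slope, selected
```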
 As another example, the edge portion of the white line 24A on the side opposite to the roadway 40 can be used as the region line 32. The portion of the region line 32 included in the region of interest ROI corresponds to the partial line 32A. The slope of the partial line 32A and the selected position on the partial line 32A can be calculated in the same manner as for the region line 31.
 Although the region line, slope, and selected position have been described for the white line 24A, they can be calculated in the same manner for the white line 24B.
 Using the slopes and selected positions calculated for the white lines 24A and 24B in this way, the inclination angle of the road surface included in the region of interest ROI can be estimated from the slope calculation function described above.
 The method of calculating the slope of a partial line (the slope of the region line included in the region of interest) by the straight-line approximation described with reference to FIG. 10 can also be applied to the embodiment described above.
 (Modification 2)
 In the embodiment described above, a region of interest is set; however, the slope of the road surface may be estimated without setting one. For example, an arbitrary position on the region line may be selected, and the slope of the road surface corresponding to that position may be estimated from the slope calculation function described above, based on the coordinates of the selected position and the slope of the region line at that position. In this case, the same estimation result can be obtained as when, in the embodiment described above, a straight-line region of interest passing through the selected position is set and the slope of the road surface is estimated using only the slope on one side (left or right) and the coordinates of the selected position.
 (Second Embodiment)
 In the second embodiment, the road surface slope angle is estimated as in the first embodiment, but the configuration used for the estimation differs.
 FIG. 11 is a block diagram of an image capturing system 500 and the vehicle control system 301 according to the second embodiment. Elements identical to those in the block diagram of FIG. 1 of the first embodiment are denoted by the same reference numerals, and their description is omitted as appropriate.
 The image capturing system 500 includes the imaging device 201, a depth sensor 401, and an information processing device 501. The imaging device 201 is the same as in the first embodiment.
 The depth sensor 401 is a sensor device that detects depth information of the space including the area in front of the vehicle. As the depth information, for example, a depth image in which a depth value is stored in each pixel is acquired. The depth sensor 401 is, for example, a LiDAR, a laser, a stereo camera, or a ToF sensor. The posture of the depth sensor 401 is set so that, for example, its emission direction (sensing direction) points slightly downward toward the road surface (the ground), or is substantially parallel to the road surface. The parameter information of the depth sensor 401 is stored in the storage unit 150. Information on the positional relationship between the depth sensor 401 and the imaging device 201 is also stored in the storage unit 150.
 While the vehicle is traveling, the depth sensor 401 acquires depth information of the space including the area ahead at regular time intervals. The depth sensor 401 provides the acquired depth information to the information processing device 501. The depth sensor 401 operates, for example, in synchronization with the imaging device 201; for example, the depth sensor 401 and the imaging device 201 operate according to the same synchronization signal. Alternatively, the depth sensor 401 may sense at a timing at which it receives an instruction from the vehicle control system 301. When the vehicle control system 301 receives an instruction to estimate the slope of the road surface from a user (occupant) in the vehicle via an input device (not shown), it may output a sensing instruction to the depth sensor 401. The depth sensor 401 may also sense while the vehicle is stopped.
 The depth information acquisition unit 210 of the information processing device 501 is connected to the depth sensor 401 by wire or wirelessly and acquires the depth information from the depth sensor 401. The depth information acquisition unit 210 provides the acquired depth information to the estimation unit 240.
 The image acquisition unit 110 acquires a captured image from the imaging device 201 and provides it to the estimation unit 240.
 Based on the depth information acquired from the depth information acquisition unit 210, the estimation unit 240 specifies the depth value of an observation target point (third position) on the roadway. It also specifies the position (fourth position) in the captured image corresponding to the observation target point. The slope of the road surface at the observation target point is estimated based on the depth value of the observation target point and the coordinates of the corresponding position. Two examples of estimating the slope are given below.
 (First Estimation Example)
 FIG. 12 is a side view for explaining the operation of the estimation unit 240. The traveling direction of the vehicle is the x-axis, the height direction is the z-axis, and the direction perpendicular to the page (the width direction of the road) is the y-axis. The observation target point P1 is a position in real space on the road surface whose slope is to be estimated. The observation target point P1 is, for example, the position at which a preceding vehicle exists, when one is present. As the position at which the preceding vehicle exists, the position of the lower end of the preceding vehicle, or the position at which the preceding vehicle contacts the road surface, may be used. The preceding vehicle is detected by the estimation unit 240 based on the depth information acquired by the depth sensor 401.
 FIG. 13 shows an example in which the position of the lower end of the rear of the preceding vehicle is used as the observation target point P1.
 The distance from the vehicle (more specifically, from the depth sensor) to the observation target point P1 is the distance X. The point obtained by projecting the observation target point P1 onto the plane on which the own vehicle lies is the projection point P2. The distance from the vehicle (more specifically, from the depth sensor) to the projection point P2 is the distance X'. The distance between the observation target point P1 and the projection point P2 is Z. The road surface slope angle at the observation target point P1 is θ; it is the difference in slope angle of the road surface relative to the road surface on which the own vehicle lies. Strictly speaking, the observation target point P1 may not be a point exactly on the road surface; even in that case, however, P1 is considered to be extremely close to the road surface, so the deviation can be treated as within the margin of error.
 The estimation unit 240 calculates the distance (pixel distance) Δv in the captured image corresponding to the distance Z in real space. As a premise, based on the principle of triangulation, the epipolar constraint, the parameter information of the imaging device 201, the parameter information of the depth sensor 401, and the positional relationship between the imaging device 201 and the depth sensor 401, the correspondence between a pixel position in the depth information and a pixel position in the captured image is uniquely determined.
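Under the standard pinhole model, the pixel correspondence described above is obtained by back-projecting the depth pixel to a 3D point and reprojecting it into the camera image. The sketch below assumes simple (fu, fv, cu, cv) intrinsics for both sensors and a translation-only extrinsic offset; both are illustrative simplifications of the calibration data held in the storage unit 150.

```python
def depth_pixel_to_image(u_d, v_d, depth, depth_intr, cam_intr, t_cam_from_depth):
    """Map a depth-image pixel (u_d, v_d) holding depth value `depth` to the
    corresponding pixel in the captured image, assuming pinhole models for
    both sensors and a pure-translation extrinsic (no rotation, for brevity)."""
    fu_d, fv_d, cu_d, cv_d = depth_intr
    # Back-project the depth pixel to a 3D point in the depth sensor frame.
    x = (u_d - cu_d) / fu_d * depth
    y = (v_d - cv_d) / fv_d * depth
    z = depth
    # Move the point into the camera frame (translation-only extrinsics).
    tx, ty, tz = t_cam_from_depth
    x, y, z = x + tx, y + ty, z + tz
    # Project the point into the captured image.
    fu_c, fv_c, cu_c, cv_c = cam_intr
    return (fu_c * x / z + cu_c, fv_c * y / z + cv_c)
```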
 FIG. 14 shows an example of calculating the distance Δv in the image captured by the imaging device. In the captured image, the principal point (origin) in the image coordinate system (uv coordinate system), obtained in advance from the parameter information of the imaging device 201, is shown as the point O. The position in the captured image corresponding to the observation target point P1 is the point S1, and the distance of S1 from the principal point O in the v-coordinate direction is v1. The position in the captured image corresponding to the projection point P2 is the point S2, and the distance of S2 from the principal point O in the v-coordinate direction is v2. The distance Δv corresponding to the distance Z is therefore the value obtained by subtracting v1 from v2.
 At this time, the road surface slope angle θ is calculated by the following equation, where fz is the focal length of the imaging device 201 in the vertical direction.
 [Equation (20), rendered as an image in the original]
 In this way, the estimation unit 240 specifies the position (fourth position) in the captured image corresponding to the observation target point (third position) on the roadway, and the position (sixth position) in the captured image corresponding to the position (fifth position) obtained by projecting the observation target point onto the plane on which the own vehicle lies. Based on the difference Δv between the v-coordinates of the two specified positions (the fourth and sixth positions), the road surface slope angle at the observation target point is calculated from Equation (20).
 FIG. 15 is a flowchart of an example of the operation of the information processing device when the estimation unit 240 uses the first estimation example. The image acquisition unit 110 acquires a captured image from the imaging device 201, and the depth information acquisition unit 210 acquires depth information from the depth sensor 401 (S201). The estimation unit 240 detects the preceding vehicle based on the depth information (S202) and specifies the position of the lower end of the rear of the preceding vehicle as the observation target point (S203). The estimation unit 240 specifies the position in the captured image corresponding to the observation target point and the position in the captured image corresponding to the projection point obtained by projecting the observation target point onto the plane on which the own vehicle lies (S204), and calculates the difference between the v-coordinates of these two positions (the distance Δv) (also S204). The estimation unit 240 estimates the road surface slope angle at the observation target point based on Δv and Equation (20) (S205).
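Equation (20) itself appears only as an image in the source. Under the pinhole geometry of FIGS. 12 and 14, the slope angle is well approximated from Δv and the vertical focal length fz alone; the sketch below uses θ = arctan(Δv / fz), which is a standard small-angle reconstruction and may differ from the patent's exact formula.

```python
import math

def slope_angle_from_rows(v1, v2, fz):
    """Road surface slope angle at the observation target point, from the
    image rows v1 (of point S1) and v2 (of point S2) and the vertical focal
    length fz, as in FIG. 14. Assumes theta = arctan((v2 - v1) / fz)."""
    dv = v2 - v1           # Delta-v: image-space counterpart of the height Z
    return math.atan2(dv, fz)
```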
 (Second Estimation Example)
 The second estimation example is described with reference to FIGS. 12 and 14, which were used in the description of the first estimation example.
 In the second estimation example, the road surface slope angle (slope difference) θ is estimated by applying the principle of monocular distance measurement, which is disclosed, for example, in the following paper:
 Gideon P. Stein et al., “Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy” (2003)
 From the principle of monocular distance measurement, the following Equation (21) holds, where fz is the focal length of the imaging device 201 in the vertical direction and, as described with reference to FIG. 14, v2 is the distance in the v-coordinate direction from the principal point O to the position S2 in the captured image corresponding to the projection point P2.
 [Equation (21), rendered as an image in the original]
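Equation (21) is rendered only as an image in the source. Under the flat-ground monocular-ranging model of the Stein et al. paper cited above, the relation between the image row v2 of a point on the ego plane and its distance X' presumably takes the form below; this is a reconstruction from the surrounding definitions, not the patent's verbatim equation.

```latex
% Flat-ground pinhole relation (reconstruction): H is the camera height,
% f_z the vertical focal length, X' the distance to the projected point P2.
v_2 = \frac{f_z H}{X'} \quad\Longleftrightarrow\quad X' = \frac{f_z H}{v_2}
```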
 From Equation (21) and the relationships shown in FIG. 12, the following Equations (22) and (23) hold, where, as described with reference to FIG. 14, v1 is the distance in the v-coordinate direction from the principal point O to the position S1 in the captured image corresponding to the observation target point P1; fz is the focal length of the imaging device 201 in the vertical direction; H is the height of the imaging device 201; and X is the distance (depth value) to the observation target point P1.
 [Equations (22) and (23), rendered as images in the original]
 The estimation unit 240 calculates the road surface slope angle θ from Equation (22), based on the distance X to the observation target point P1 and the distance v1 in the v-coordinate direction from the principal point O to the position S1 in the captured image corresponding to the observation target point P1.
 The estimation unit 240 provides information on the estimated road surface slope angle θ to the vehicle control system 301. The operation of the vehicle control system 301 is the same as in the first embodiment.
 FIG. 16 is a flowchart of an example of the operation of the information processing device when the estimation unit 240 uses the second estimation example. The image acquisition unit 110 acquires a captured image from the imaging device 201, and the depth information acquisition unit 210 acquires depth information from the depth sensor 401 (S301). The estimation unit 240 detects the preceding vehicle based on the depth information (S302) and specifies the position of the lower end of the rear of the preceding vehicle as the observation target point (S303). The estimation unit 240 specifies the position in the captured image corresponding to the observation target point (S304) and determines its v-coordinate (the distance v1 from the principal point O in the v-coordinate direction in the captured image). The estimation unit 240 estimates the road surface slope angle at the observation target point based on v1 and Equation (22) (S305).
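Since Equations (22) and (23) are likewise images in the source, the sketch below uses one plausible reconstruction from the geometry of FIG. 12: the height of P1 above the ego plane is Z ≈ H − v1·X/fz, and θ ≈ arcsin(Z/X). Treat both relations as assumptions rather than the patent's verbatim formulas.

```python
import math

def slope_angle_monocular(v1, depth_x, fz, cam_height):
    """Slope angle at the observation target point P1 from its image row v1
    (FIG. 14), its depth value X, the vertical focal length fz, and the
    camera height H. Uses Z = H - v1 * X / fz and theta = arcsin(Z / X),
    an assumed reconstruction of Equation (22)."""
    z = cam_height - v1 * depth_x / fz   # height of P1 above the ego plane
    return math.asin(max(-1.0, min(1.0, z / depth_x)))
```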
 As described above, according to the second embodiment, the road surface slope angle can be estimated at high speed with a low computational load. Further, according to the present embodiment, the road surface slope angle can be estimated with high accuracy.
 The embodiments described above are examples for embodying the present disclosure, and the present disclosure can be implemented in various other forms. For example, various modifications, substitutions, omissions, or combinations thereof are possible without departing from the gist of the present disclosure. Forms in which such modifications, substitutions, omissions, and the like are made are also included in the scope of the present disclosure, and likewise in the scope of the invention described in the claims and its equivalents.
 The effects of the present disclosure described in this specification are merely examples, and other effects may be obtained.
 なお、本開示は以下のような構成を取ることもできる。
 [項目1]
 車道を含む空間を撮像した画像において、前記車道の領域を特定する第1領域線を検出する検出部と、
 前記第1領域線における第1位置の座標と、前記第1位置における前記第1領域線の傾きとに基づいて、前記第1位置に対応する路面の勾配を推定する推定部と、
 を備えた情報処理装置。
 [項目2]
 前記画像に対して、前記車道の路面の一部を少なくとも部分的に含む対象領域を設定する領域設定部を備え、
 前記検出部は、前記対象領域内の前記第1領域線における前記第1位置の座標と、前記第1位置における前記第1領域線の傾きとに基づいて、前記対象領域に含まれる路面の勾配を推定する
 を備えた項目1に記載の情報処理装置。
 [項目3]
 前記検出部は、前記第1領域線に対向する第2領域線を検出し、
 前記推定部は、前記対象領域内の前記第2領域線における第2位置の座標と、前記第2位置における前記第2領域線の傾きとにさらに基づいて、前記勾配を推定する
 項目2に記載の情報処理装置。
 [項目4]
 前記画像をセグメント化するセグメント部を含み、
 前記検出部は、
 前記車道のセグメントが他のセグメントと隣接する前記車道のセグメントの境界部分、 他のセグメントが前記車道のセグメントと隣接する前記他のセグメントの境界部分、又は、
 他のオブジェクトが隣接する前記車道のオブジェクトと反対側における前記他のセグメントのエッジ部分
 を前記第1領域線とする
 項目1~3のいずれか一項に記載の情報処理装置。
 [項目5]
 前記他のセグメントは、交通線のセグメントである
 項目4に記載の情報処理装置。
 [項目6]
 前記第1位置における前記傾きは、前記第1位置における前記第1領域線との接線の傾きである
 項目1~5のいずれか一項に記載の情報処理装置。
 [項目7]
 前記推定部は、前記画像を撮像した撮像装置の光軸方向の角度にさらに基づき、前記勾配を推定する
 項目1~6のいずれか一項に記載の情報処理装置。
 [項目8]
 前記推定部は、前記傾きを入力とする第1入力変数と、前記第1位置を入力とする第2入力変数と、前記勾配を出力とする出力変数とが対応づいた関数に基づき、前記勾配を推定する
 項目1~7のいずれか一項に記載の情報処理装置。
 [項目9]
 前記画像には先行車両が含まれ、
 前記対象領域は、前記先行車両の少なくとも一部を含む領域である
 項目2に記載の情報処理装置。
 [項目10]
 移動体に搭載された撮像装置から前記画像を取得する取得部
 を備えた項目1~9のいずれか一項に記載の情報処理装置。
 [項目11]
 前記撮像装置は、単眼カメラである
 項目10に記載の情報処理装置。
 [項目12]
 前記推定部は、前記車道のデプス情報に基づき、前記車道における第3位置に対応する前記画像上の第4位置を特定し、前記第4位置の座標に基づき、前記第3位置における路面の勾配を推定する
 項目1~11のいずれか一項に記載の情報処理装置。
 [項目13]
 前記画像は、第1平面から所定の高さに配置された撮像装置によって取得された画像であり、
 前記推定部は、前記車道における第3位置に対応する前記画像における第4位置の座標と、前記第3位置を前記第1平面に投影した第5位置に対応する前記画像における第6位置の座標とに基づいて、前記第3位置における路面の勾配を推定する
 項目1~12のいずれか一項に記載の情報処理装置。
 [項目14]
 前記第3位置は、前記車道上の先行車両が存在する位置である
 項目12に記載の情報処理装置。
 [項目15]
 車道を含む空間を撮像する撮像装置と、
 前記撮像装置により取得された画像において、前記車道の領域を特定する第1領域線を検出する検出部と、
 前記第1領域線における第1位置の座標と、前記第1位置における前記第1領域線の傾きとに基づいて、前記第1位置に対応する路面の勾配を推定する推定部と、
 を備えた撮像システム。
 [項目16]
 車道を含む空間を撮像した画像において、前記車道の領域を特定する第1領域線を検出し、
 前記第1領域線における第1位置の座標と、前記第1位置における前記第1領域線の傾きとに基づいて、前記第1位置に対応する路面の勾配を推定する
 情報処理方法。
 [項目17]
 車道を含む空間を撮像した画像において、前記車道の領域を特定する第1領域線を検出するステップと、
 前記第1領域線における第1位置の座標と、前記第1位置における前記第1領域線の傾きとに基づいて、前記第1位置に対応する路面の勾配を推定するステップと
 をコンピュータに実行させるためのコンピュータプログラム。
The present disclosure may also have the following structure.
[Item 1]
In the image of the space including the roadway, the detection unit that detects the first area line that identifies the area of the roadway, and the detection unit.
An estimation unit that estimates the slope of the road surface corresponding to the first position based on the coordinates of the first position on the first area line and the slope of the first area line at the first position.
Information processing device equipped with.
[Item 2]
An area setting unit for setting a target area including at least a part of the road surface of the roadway is provided for the image.
The detection unit has a slope of the road surface included in the target region based on the coordinates of the first position on the first region line in the target region and the inclination of the first region line in the first position. The information processing apparatus according to item 1, further comprising estimating.
[Item 3]
The detection unit detects the second region line facing the first region line, and
The estimation unit is described in item 2 for estimating the gradient based on the coordinates of the second position on the second region line in the target region and the slope of the second region line at the second position. Information processing equipment.
[Item 4]
A segment portion for segmenting the image is included.
The detection unit
The boundary portion of the segment of the roadway where the segment of the roadway is adjacent to the other segment, the boundary portion of the other segment where the other segment is adjacent to the segment of the roadway, or
The information processing apparatus according to any one of items 1 to 3, wherein the edge portion of the other segment on the side opposite to the object on the roadway adjacent to the other object is the first area line.
[Item 5]
The information processing device according to item 4, wherein the other segment is a traffic line segment.
[Item 6]
The information processing apparatus according to any one of items 1 to 5, wherein the inclination at the first position is an inclination of a tangent line with the first area line at the first position.
[Item 7]
The information processing device according to any one of items 1 to 6, wherein the estimation unit further estimates the gradient based on an angle in the optical axis direction of the image pickup device that has captured the image.
[Item 8]
The estimation unit is based on a function in which a first input variable having the slope as an input, a second input variable having the first position as an input, and an output variable having the slope as an output correspond to each other. The information processing apparatus according to any one of items 1 to 7.
[Item 9]
The information processing apparatus according to item 2, wherein the image includes a preceding vehicle, and
the target region is a region including at least a part of the preceding vehicle.
[Item 10]
The information processing apparatus according to any one of items 1 to 9, further comprising an acquisition unit that acquires the image from an imaging device mounted on a moving body.
[Item 11]
The information processing apparatus according to item 10, wherein the imaging device is a monocular camera.
[Item 12]
The information processing apparatus according to any one of items 1 to 11, wherein the estimation unit identifies a fourth position on the image corresponding to a third position on the roadway based on depth information of the roadway, and estimates the slope of the road surface at the third position based on the coordinates of the fourth position.
[Item 13]
The information processing apparatus according to any one of items 1 to 12, wherein the image is an image acquired by an imaging device arranged at a predetermined height above a first plane, and
the estimation unit estimates the slope of the road surface at a third position on the roadway based on the coordinates of a fourth position in the image corresponding to the third position and the coordinates of a sixth position in the image corresponding to a fifth position obtained by projecting the third position onto the first plane.
[Item 14]
The information processing apparatus according to item 12, wherein the third position is a position on the roadway where a preceding vehicle exists.
[Item 15]
An imaging system comprising:
an imaging device that captures an image of a space including a roadway;
a detection unit that detects, in the image acquired by the imaging device, a first region line that specifies the region of the roadway; and
an estimation unit that estimates the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first region line and the inclination of the first region line at the first position.
[Item 16]
An information processing method comprising:
detecting, in an image of a space including a roadway, a first region line that specifies the region of the roadway; and
estimating the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first region line and the inclination of the first region line at the first position.
[Item 17]
A computer program for causing a computer to execute:
a step of detecting, in an image of a space including a roadway, a first region line that specifies the region of the roadway; and
a step of estimating the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first region line and the inclination of the first region line at the first position.
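Item 6 above reads the inclination at the first position as the slope of a tangent to the region line. One well-known way such an image-space tangent can be related to a road grade — sketched here purely as an illustration under a pinhole-camera assumption, not as the method disclosed in this publication — is to extend the tangent to the column of the flat-road vanishing point and compare the row where it crosses with the flat-road horizon row; the function name, parameters, and geometry below are all hypothetical:

```python
import math

def estimate_grade(x, y, dy_dx, x_vp_flat, y_horizon, f):
    """Illustrative grade estimate from one lane-boundary point.

    (x, y): image coordinates of a point on the boundary line
            (y grows downward)
    dy_dx:  inclination of the line's tangent at that point
    x_vp_flat, y_horizon: flat-road vanishing point / horizon row
    f:      focal length in pixels

    The tangent, extended to the vanishing-point column, crosses it
    at row y_vp. On a flat road y_vp coincides with the horizon row;
    an offset indicates a local pitch of about atan((y_horizon - y_vp) / f).
    """
    # Row where the tangent crosses the vanishing-point column.
    y_vp = y + dy_dx * (x_vp_flat - x)
    return math.atan2(y_horizon - y_vp, f)  # radians; > 0 means uphill
```

For example, a tangent that passes exactly through the flat-road vanishing point yields a grade of zero, while a tangent whose extension crosses the vanishing-point column above the horizon yields a positive (uphill) grade.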
100, 500: Imaging system
101, 501: Information processing device
110: Image acquisition unit
120: Detection unit
130: Region setting unit
140, 240: Estimation unit
150: Storage unit
201: Imaging device
210: Depth information acquisition unit
301: Vehicle control system
401: Depth sensor

Claims (17)

  1.  An information processing apparatus comprising:
     a detection unit that detects, in an image of a space including a roadway, a first region line that specifies the region of the roadway; and
     an estimation unit that estimates the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first region line and the inclination of the first region line at the first position.
  2.  The information processing apparatus according to claim 1, comprising a region setting unit that sets, for the image, a target region at least partially including a part of the road surface of the roadway,
     wherein the estimation unit estimates the slope of the road surface included in the target region based on the coordinates of the first position on the first region line within the target region and the inclination of the first region line at the first position.
  3.  The information processing apparatus according to claim 2, wherein the detection unit detects a second region line facing the first region line, and
     the estimation unit estimates the slope further based on the coordinates of a second position on the second region line within the target region and the inclination of the second region line at the second position.
  4.  The information processing apparatus according to claim 1, comprising a segmentation unit that segments the image,
     wherein the detection unit sets, as the first region line:
     a boundary portion of the roadway segment where the roadway segment is adjacent to another segment,
     a boundary portion of the other segment where the other segment is adjacent to the roadway segment, or
     an edge portion of the other segment on the side opposite to the roadway object to which the other segment is adjacent.
  5.  The information processing apparatus according to claim 4, wherein the other segment is a segment of a traffic line.
  6.  The information processing apparatus according to claim 1, wherein the inclination at the first position is the inclination of a tangent to the first region line at the first position.
  7.  The information processing apparatus according to claim 1, wherein the estimation unit estimates the slope further based on an angle of the optical axis direction of the imaging device that captured the image.
  8.  The information processing apparatus according to claim 1, wherein the estimation unit estimates the slope based on a function that associates a first input variable taking the inclination as input, a second input variable taking the first position as input, and an output variable outputting the slope.
  9.  The information processing apparatus according to claim 2, wherein the image includes a preceding vehicle, and
     the target region is a region including at least a part of the preceding vehicle.
  10.  The information processing apparatus according to claim 1, further comprising an acquisition unit that acquires the image from an imaging device mounted on a moving body.
  11.  The information processing apparatus according to claim 10, wherein the imaging device is a monocular camera.
  12.  The information processing apparatus according to claim 1, wherein the estimation unit identifies a fourth position on the image corresponding to a third position on the roadway based on depth information of the roadway, and estimates the slope of the road surface at the third position based on the coordinates of the fourth position.
  13.  The information processing apparatus according to claim 1, wherein the image is an image acquired by an imaging device arranged at a predetermined height above a first plane, and
     the estimation unit estimates the slope of the road surface at a third position on the roadway based on the coordinates of a fourth position in the image corresponding to the third position and the coordinates of a sixth position in the image corresponding to a fifth position obtained by projecting the third position onto the first plane.
  14.  The information processing apparatus according to claim 12, wherein the third position is a position on the roadway where a preceding vehicle exists.
  15.  An imaging system comprising:
     an imaging device that captures an image of a space including a roadway;
     a detection unit that detects, in the image acquired by the imaging device, a first region line that specifies the region of the roadway; and
     an estimation unit that estimates the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first region line and the inclination of the first region line at the first position.
  16.  An information processing method comprising:
     detecting, in an image of a space including a roadway, a first region line that specifies the region of the roadway; and
     estimating the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first region line and the inclination of the first region line at the first position.
  17.  A computer program for causing a computer to execute:
     a step of detecting, in an image of a space including a roadway, a first region line that specifies the region of the roadway; and
     a step of estimating the slope of the road surface corresponding to a first position based on the coordinates of the first position on the first region line and the inclination of the first region line at the first position.
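Claim 13 describes comparing a road point's image position with the image position of its projection onto a reference plane at a known camera height. Under the common pinhole assumption with the optical axis parallel to that plane and image rows measured downward from the principal point, a point on the flat plane at ground distance Z images at row f·h/Z, while a road point raised by ΔH images at row f·(h−ΔH)/Z, so the two rows determine both the height and the average grade. The sketch below is an illustration of that geometry only; the function name, parameterization, and axis conventions are assumptions, not the publication's disclosed implementation:

```python
def estimate_height_and_grade(y4, y6, h, f):
    """Illustrative claim-13-style geometry.

    y4: image row of the road point (the "fourth position")
    y6: image row of its projection onto the first plane
        (the "sixth position"); both rows measured downward
        from the principal point
    h:  camera height above the first plane
    f:  focal length in pixels

    Returns (height above plane, average grade tan(theta)).
    """
    z = f * h / y6               # ground distance to the projected point
    dh = h * (1.0 - y4 / y6)     # height of the road point above the plane
    grade = dh / z               # average rise over run
    return dh, grade
```

For instance, with a camera 1.5 m above the plane, f = 800 px, and a road point 30 m ahead raised 0.6 m, the plane projection images at row 40 and the road point at row 24, recovering the 0.6 m rise and a 2 % average grade.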
PCT/JP2021/013324 2020-04-22 2021-03-29 Information processing device, image capturing system, information processing method, and computer program WO2021215199A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020075990A JP2023089311A (en) 2020-04-22 2020-04-22 Information processing device, image capturing system, information processing method, and computer program
JP2020-075990 2020-04-22

Publications (1)

Publication Number Publication Date
WO2021215199A1 2021-10-28

Family

ID=78270653

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/013324 WO2021215199A1 (en) 2020-04-22 2021-03-29 Information processing device, image capturing system, information processing method, and computer program

Country Status (2)

Country Link
JP (1) JP2023089311A (en)
WO (1) WO2021215199A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1137730A (en) * 1997-07-18 1999-02-12 Nissan Motor Co Ltd Road shape estimating apparatus
JP2003083742A (en) * 2001-09-13 2003-03-19 Fuji Heavy Ind Ltd Distance correction apparatus and method of monitoring system
JP2010040015A (en) * 2008-08-08 2010-02-18 Honda Motor Co Ltd Road shape detector for vehicle
JP2012212282A (en) * 2011-03-31 2012-11-01 Honda Elesys Co Ltd Road surface state detection device, road surface state detection method, and road surface state detection program
JP2015143979A (en) * 2013-12-27 2015-08-06 株式会社リコー Image processor, image processing method, program, and image processing system


Also Published As

Publication number Publication date
JP2023089311A (en) 2023-06-28

Similar Documents

Publication Publication Date Title
CN108960183B (en) Curve target identification system and method based on multi-sensor fusion
US20210118161A1 (en) Vehicle environment modeling with a camera
CN110765922B (en) Binocular vision object detection obstacle system for AGV
JP6519262B2 (en) Three-dimensional object detection device, three-dimensional object detection method, three-dimensional object detection program, and mobile device control system
JP6550881B2 (en) Three-dimensional object detection device, three-dimensional object detection method, three-dimensional object detection program, and mobile device control system
JP4676373B2 (en) Peripheral recognition device, peripheral recognition method, and program
JP3895238B2 (en) Obstacle detection apparatus and method
EP2933790B1 (en) Moving object location/attitude angle estimation device and moving object location/attitude angle estimation method
JP5588812B2 (en) Image processing apparatus and imaging apparatus using the same
US8564657B2 (en) Object motion detection system based on combining 3D warping techniques and a proper object motion detection
JP5714940B2 (en) Moving body position measuring device
JP3729095B2 (en) Traveling path detection device
JP5023186B2 (en) Object motion detection system based on combination of 3D warping technique and proper object motion (POM) detection
JP6150164B2 (en) Information detection apparatus, mobile device control system, mobile object, and information detection program
CN104520894A (en) Roadside object detection device
JP6743171B2 (en) METHOD, COMPUTER DEVICE, DRIVER ASSISTING SYSTEM, AND MOTOR VEHICLE FOR DETECTION OF OBJECTS AROUND A ROAD OF A MOTOR VEHICLE
JP2008033750A (en) Object inclination detector
JP2018048949A (en) Object recognition device
JP2007264712A (en) Lane detector
JP4956099B2 (en) Wall detector
JP6340849B2 (en) Image processing apparatus, image processing method, image processing program, and mobile device control system
JP2007264717A (en) Lane deviation decision device, lane deviation prevention device and lane follow-up supporting device
JP4967758B2 (en) Object movement detection method and detection apparatus
CN112861599A (en) Method and device for classifying objects on a road, computer program and storage medium
JP3612821B2 (en) In-vehicle distance measuring device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21791605

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21791605

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP