JP2012225806A - Road gradient estimation device and program - Google Patents

Road gradient estimation device and program

Info

Publication number
JP2012225806A
JP2012225806A (application JP2011094256A)
Authority
JP
Japan
Prior art keywords: point, road surface, three-dimensional, three-dimensional position, object candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP2011094256A
Other languages
Japanese (ja)
Inventor
Kiyosumi Shirodono
Akihiro Watanabe
Takashi Naito
Tatsuya Shiraishi
貴志 内藤
清澄 城殿
章弘 渡邉
達也 白石
Original Assignee
Toyota Central R&D Labs Inc
Toyota Motor Corp
トヨタ自動車株式会社
株式会社豊田中央研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toyota Central R&D Labs Inc and Toyota Motor Corp
Priority to JP2011094256A
Publication of JP2012225806A
Application status: Withdrawn


Abstract

Even when a specific object such as a preceding vehicle or a lane boundary line is not present, the gradient of the road ahead is estimated accurately.
A road surface reflection point extraction unit 22 extracts road surface reflection points from the observation data of a laser radar, and a first three-dimensional object candidate extraction unit 24 extracts first three-dimensional object candidates from the remaining point group. A second three-dimensional object candidate extraction unit 26 extracts second three-dimensional object candidates from the captured image by vertical edge detection or pattern recognition, and a road surface contact point calculation unit 28 detects the road surface contact point of each second three-dimensional object candidate on the captured image and calculates the three-dimensional position of the contact point using the distance information of the first three-dimensional object candidate corresponding to that second candidate. A road gradient estimation unit 30 plots the three-dimensional positions of the road surface contact points calculated by the road surface contact point calculation unit 28 and of the road surface reflection points extracted by the road surface reflection point extraction unit 22 in a three-dimensional coordinate space centered on the host vehicle, and estimates the road shape by fitting a road surface model.
[Selected drawing] FIG. 1

Description

  The present invention relates to a road gradient estimation apparatus and program.

  Conventionally, a road surface gradient estimation device has been proposed in which a scanning control means varies the height-direction scanning position of the host vehicle's ranging laser radar so as to follow the light receiving position of the reflected light from a preceding vehicle; an inter-vehicle distance calculation means measures the distance to the preceding vehicle from the time difference between laser irradiation and reception of the reflected light; a starting point detection means detects a gradient starting point, located ahead at the measured inter-vehicle distance, from a point where the height-direction scanning position changes by more than a set value; and, when the host vehicle reaches the gradient starting point, a gradient calculation means calculates and estimates the relative gradient of the road surface ahead from the distance traveled by the vehicle and the change in the height-direction scanning position detected by a scanning change amount detection means (see, for example, Patent Document 1).

  There has also been proposed an inter-vehicle distance measuring device that receives a road surface image from a video camera, extracts a lane boundary line from the image, adjusts the irradiation angle of a radar based on the extracted lane boundary line, and calculates the distance to a preceding vehicle using the reflected wave of the radar beam irradiated at the adjusted angle, so that the distance to the preceding vehicle can be detected even on a curved road (see, for example, Patent Document 2).

  In addition, a road gradient measuring device has been proposed that detects the pitch angle of the traveling vehicle with a gyro sensor, detects the vehicle inclination angle from the distances between the vehicle body and the road surface measured at a plurality of positions spaced in the front-rear direction of the vehicle, and detects the road gradient based on the deviation between the pitch angle and the vehicle inclination angle (see, for example, Patent Document 3). In the road gradient measuring device of Patent Document 3, since the pitch angle is determined only by the vehicle inclination angle and the road gradient, the pitch angle is detected accurately with the gyro sensor, the vehicle inclination angle is calculated from the distances to the road surface at the front and rear of the vehicle, and the road gradient is obtained from the deviation between the two.

Patent Document 1: JP 2004-317323 A
Patent Document 2: JP 2004-151080 A
Patent Document 3: JP 2009-276109 A

  However, since the technique of Patent Document 1 uses a change in the scan line in which the reflector of the preceding vehicle is detected, the road gradient cannot be estimated in a situation where there is no preceding vehicle. In addition, it can be difficult to detect the gradient starting point when the pitch of the host vehicle changes due to unevenness of the road surface.

  Further, with the technique of Patent Document 2, it is difficult to estimate the road gradient on roads where the lane boundary line is not maintained, or when the lane boundary line cannot be detected stably due to dirt or wear.

  Further, in the technique of Patent Document 3, the road surface directly under the vehicle is the measurement target; the gradient of the road the vehicle has already passed over can be estimated with high accuracy, but the gradient of the road ahead cannot be estimated.

  The present invention has been made to solve the above problems, and an object thereof is to provide a road gradient estimation device and program capable of accurately estimating the gradient of the road ahead even when a specific object such as a preceding vehicle or a lane boundary line is not present.

  In order to achieve the above object, a road gradient estimation device according to the present invention includes: an acquisition means for acquiring the three-dimensional position, relative to a moving body, of each of a plurality of points around the moving body, and an image of the surroundings of the moving body captured by an imaging means mounted on the moving body; a first extraction means for extracting points indicating a three-dimensional object as a first three-dimensional object candidate based on the three-dimensional positions of the plurality of points acquired by the acquisition means; a second extraction means for extracting a second three-dimensional object candidate based on at least one of vertical edge information of the image acquired by the acquisition means and matching information with a predetermined shape; a calculation means for calculating the three-dimensional position of the contact point between the second three-dimensional object candidate and the road surface based on the three-dimensional position of the first three-dimensional object candidate corresponding to the second three-dimensional object candidate; and a gradient estimation means for estimating the road gradient around the moving body based on the three-dimensional position of the contact point calculated by the calculation means.

  According to the road gradient estimation device of the present invention, the acquisition means acquires the three-dimensional position, relative to the moving body, of each of a plurality of points around the moving body, and an image of the surroundings captured by the imaging means mounted on the moving body. The first extraction means then extracts points indicating a three-dimensional object as a first three-dimensional object candidate based on the three-dimensional positions of the acquired points. Since the first extraction means extracts the candidate from the three-dimensional position of each point, highly accurate distance information between the moving body and the candidate is available.

  Further, the second extraction means extracts the second three-dimensional object candidate based on at least one of vertical edge information of the image acquired by the acquisition means and matching information with a predetermined shape. Since the second extraction means extracts the candidate from the image captured by the imaging means, the contact point between the candidate and the road surface can be detected accurately on the image.

  Then, the calculation means calculates the three-dimensional position of the contact point between the second three-dimensional object candidate and the road surface based on the three-dimensional position of the corresponding first three-dimensional object candidate, and the gradient estimation means estimates the road gradient around the moving body based on the calculated three-dimensional position of the contact point.

  As described above, the three-dimensional position of the contact point is calculated from the contact point between a three-dimensional object candidate and the road surface detected in the captured image, combined with highly accurate distance information to that candidate. By using this three-dimensional position, the gradient of the road ahead can be estimated accurately even when no specific object such as a preceding vehicle or a lane boundary line is present.

  Further, the calculation means can calculate the three-dimensional position of the contact point based on the direction from the origin of the three-dimensional coordinate space to the contact point, obtained from the second three-dimensional object candidate, and the three-dimensional position of the corresponding first three-dimensional object candidate.

  In addition, the road gradient estimation device of the present invention can include a conversion means for converting the three-dimensional positions of contact points calculated in the past into the current coordinate system based on the amount of movement of the moving body during a predetermined period, and the gradient estimation means can estimate the road gradient based on a distribution that includes the converted three-dimensional positions of the contact points. Since this increases the number of observation points used for estimating the road gradient, the estimation accuracy can be improved.

  In addition, the calculation means can assign a reliability based on the result of comparing the three-dimensional position of the currently calculated contact point with the three-dimensional positions of contact points calculated in the past, and store the currently calculated three-dimensional position in a predetermined storage area. The gradient estimation means can then add the three-dimensional positions, converted by the conversion means, of the stored contact points whose reliability is equal to or higher than a predetermined value, or convert the stored three-dimensional positions by the conversion means and add them with weights according to their reliability. Because past contact points are used only when highly reliable, or are weighted according to reliability, the estimation accuracy can be further improved.

  Further, the first extraction means can extract road surface points from the plurality of points and extract the points other than the road surface points as the first three-dimensional object candidate, and the gradient estimation means can estimate the road gradient based on a distribution that includes the three-dimensional positions of the road surface points. A road surface point already has a three-dimensional position, so once extracted it can be used directly for estimating the road gradient; this increases the number of observation points used for the estimation and thus improves the estimation accuracy.

  Further, the conversion means can convert the three-dimensional positions of road surface points extracted in the past into the current coordinate system based on the amount of movement of the moving body during the predetermined period, and the gradient estimation means can estimate the road gradient based on a distribution that includes the converted three-dimensional positions of the road surface points. As with the contact points, using past road surface point information increases the number of observation points, so the estimation accuracy can be improved.

  The second extraction means can extract vertical edges from the image and extract a vertical edge indicating at least one of a utility pole, a pole, a tree, and a building as the second three-dimensional object candidate, and the calculation means can detect the lowest end of the vertical edge as the contact point.

  Further, the second extraction means can extract a cylindrical object from the image as the second three-dimensional object candidate by pattern matching, and the calculation means can detect the point where the cylindrical object intersects the road surface as the contact point.

  Further, the three-dimensional positions of the plurality of points can be measured by a laser radar that has a plurality of scanning lines and is configured so that at least one of the scanning lines irradiates the road surface. Since road surface points are then measured reliably, observation points for estimating the road gradient can be secured.

  In addition, the road gradient estimation program of the present invention is a program for causing a computer to function as: an acquisition means for acquiring the three-dimensional position, relative to a moving body, of each of a plurality of points around the moving body, and an image of the surroundings of the moving body captured by an imaging means mounted on the moving body; a first extraction means for extracting points indicating a three-dimensional object as a first three-dimensional object candidate based on the three-dimensional positions of the plurality of points acquired by the acquisition means; a second extraction means for extracting a second three-dimensional object candidate based on at least one of vertical edge information of the image acquired by the acquisition means and matching information with a predetermined shape; a calculation means for calculating the three-dimensional position of the contact point between the second three-dimensional object candidate and the road surface based on the three-dimensional position of the first three-dimensional object candidate corresponding to the second three-dimensional object candidate; and a gradient estimation means for estimating the road gradient around the moving body based on the three-dimensional position of the contact point calculated by the calculation means.

  The storage medium for storing the program of the present invention is not particularly limited and may be a hard disk or a ROM. It may also be a CD-ROM, a DVD disk, a magneto-optical disk, or an IC card. Furthermore, the program may be downloaded from a server or the like connected to a network.

  As described above, according to the road gradient estimation device and program of the present invention, the three-dimensional position of a contact point can be calculated from the contact point between a three-dimensional object candidate and the road surface detected in the captured image and from highly accurate distance information to that candidate, and since this three-dimensional position is used, the gradient of the road ahead can be estimated accurately even when no specific object such as a preceding vehicle or a lane boundary line is present.

FIG. 1 is a block diagram showing the road gradient estimation device according to the first embodiment.
FIG. 2 is a diagram showing an example of the plurality of scanning lines of a laser radar.
FIG. 3 is a diagram showing an example of observation by the laser radar.
FIG. 4 is an image diagram showing an example of imaging the observation data observed by the laser radar.
FIG. 5 is a diagram showing the concave pattern used for extracting the second three-dimensional object candidate.
FIG. 6 is a diagram showing (a) an extraction example of the second three-dimensional object candidate and (b) an extraction example of the corresponding first three-dimensional object candidate.
FIG. 7 is a diagram for explaining the calculation of the three-dimensional position of a road surface contact point.
FIG. 8 is a diagram for explaining the estimation of the road surface gradient.
FIG. 9 is a flowchart showing the content of the road gradient estimation processing routine in the road gradient estimation device according to the first embodiment.
FIG. 10 is a block diagram showing the road gradient estimation device according to the second embodiment.
FIG. 11 is a flowchart showing the content of the road gradient estimation processing routine in the road gradient estimation device according to the second embodiment.

  Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the present embodiment, a case will be described in which the present invention is applied to a road gradient estimation device that is mounted on a vehicle and estimates the gradient of the road ahead on which the vehicle will travel.

  As shown in FIG. 1, the road gradient estimation device 10 according to the first embodiment includes a laser radar 12 that irradiates the area ahead of the host vehicle with the lasers of a plurality of scanning lines while scanning in the horizontal direction and detects, from the reflected light, the three-dimensional positions of the points on the objects irradiated by the laser, an imaging device 14 such as a CCD camera that images the surroundings of the vehicle and outputs image data, and a computer 20 that executes the processing for estimating the gradient of the road ahead.

  The laser radar 12 is a device installed at the front of the vehicle that observes the surrounding environment within a viewing angle centered on the longitudinal direction of the vehicle body and detects the azimuth and distance, relative to the device, of objects existing ahead of the vehicle. For example, as shown in FIG. 2, the laser radar 12 scans the lasers of a plurality of scanning lines in the horizontal direction and receives the light reflected at a plurality of points on the surfaces of the objects existing ahead of the host vehicle. The distance to each reflection point is calculated from the time between irradiating the laser beam and receiving the reflected light, and since the direction in which the laser beam is irradiated is known, the three-dimensional position of the reflection point can be observed. As shown in FIG. 3, the depression angle is designed so that at least one of the plurality of scanning lines observes the road surface.

  The observation data observed by the laser radar 12 are a set of three-dimensional coordinates representing the positions of the laser light reflection points ahead of the host vehicle. The observation processing by the laser radar 12 is executed at a fixed cycle, and the laser radar 12 outputs, to the computer 20, observation data indicating the three-dimensional positions of the plurality of reflection points ahead of the host vehicle at each time point. The three-dimensional position of each point may be given in polar coordinates, as the distance and azimuth from the laser radar 12 to the reflection point, or as three-dimensional position coordinates in an orthogonal coordinate system centered on the laser radar 12.
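For illustration, the following minimal sketch converts one echo's time of flight and beam direction into an orthogonal reflection-point coordinate; the frame convention (x lateral, y height, z depth) and all names are assumptions of this example, not details taken from the patent.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def reflection_point(time_of_flight_s, azimuth_rad, elevation_rad):
    """Turn one laser echo into a 3D point in a sensor-centered frame
    (x: lateral, y: height, z: depth ahead of the vehicle). Illustrative."""
    r = C * time_of_flight_s / 2.0  # round trip -> one-way distance
    x = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    y = r * math.sin(elevation_rad)
    z = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    return (x, y, z)
```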

  The imaging device 14 is installed in front of the vehicle, images an observation region common to the laser radar 12 at a viewing angle centered on the longitudinal direction of the vehicle body, and outputs image data to the computer 20. The image data output from the imaging device 14 may be a color image or a monochrome image.

  Note that the laser radar 12 and the imaging device 14 only need viewing angles covering the range in which the road gradient is to be estimated; the number of sensors and the viewing angles are not particularly limited. Furthermore, since the installation positions and angles of the laser radar 12 and the imaging device 14 are known, the position of an object observed by the laser radar 12 can be associated with the position of the same object in the image captured by the imaging device 14.

  The computer 20 includes a CPU, a ROM that stores various programs including the program for executing the road gradient estimation processing routine described later, a RAM that temporarily stores data, a memory that stores various information, and a bus that connects these components.

  When the computer 20 is described in terms of functional blocks divided by the function realizing means determined by hardware and software, as shown in FIG. 1, it can be represented by a configuration including: a road surface reflection point extraction unit 22 that extracts, from the observation data of the laser radar 12, road surface reflection points at which the laser light was reflected by the road surface; a first three-dimensional object candidate extraction unit 24 that extracts first three-dimensional object candidates from the point group not extracted as road surface reflection points; a second three-dimensional object candidate extraction unit 26 that extracts second three-dimensional object candidates from the image captured by the imaging device 14; a road surface contact point calculation unit 28 that collates the first and second three-dimensional object candidates and calculates the three-dimensional positions of the contact points between three-dimensional objects and the road surface; and a road gradient estimation unit 30 that estimates the road gradient based on the distribution of the calculated three-dimensional positions of the contact points.

  The road surface reflection point extraction unit 22 and the first three-dimensional object candidate extraction unit 24 are an example of the first extraction means of the present invention.

  Based on the observation results of the laser radar 12, the road surface reflection point extraction unit 22 extracts, from the reflection points indicated by the observation data, the road surface reflection points at which the laser light was reflected by the road surface. Specifically, whether each reflection point is a road surface reflection point is determined from the pulse shape and signal intensity of the received laser light. Whether a point is a road surface reflection point may also be determined using the height information of its three-dimensional position, or from the arrangement (continuity) of the reflection points. For example, FIG. 4(b) shows an example of imaging the observation data observed by the laser radar 12 in the surrounding environment shown in FIG. 4(a) (an image captured by the imaging device 14). Since the road surface is generally flat, the road surface reflection points observed on the same scanning line form an arc, as shown in FIG. 4(b). Therefore, reflection points that, when connected, draw an arc traversing the traveling direction ahead of the vehicle, and whose three-dimensional positions change little in the height direction between adjacent points, can be extracted as road surface reflection points.
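A minimal sketch of such a continuity test on one scan line follows, assuming the points are ordered along the scan; the height thresholds are illustrative values, not taken from the patent.

```python
def extract_road_points(scan_line, road_height=0.0, height_tol=0.05, band=0.3):
    """Split one scan line of (x, y, z) reflection points into road-surface
    points and remaining points by height continuity. Thresholds assumed."""
    road, rest = [], []
    prev_y = road_height
    for x, y, z in scan_line:
        # near the nominal road height, with little change from the neighbor
        if abs(y - road_height) < band and abs(y - prev_y) < height_tol:
            road.append((x, y, z))
            prev_y = y
        else:
            rest.append((x, y, z))
    return road, rest
```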

  Among the reflection points indicated by the observation data of the laser radar 12, the first three-dimensional object candidate extraction unit 24 groups those not extracted as road surface reflection points by the road surface reflection point extraction unit 22 whose mutual distances are smaller than a predetermined distance, regards each group of adjacent reflection points as one three-dimensional object, and extracts it as a first three-dimensional object candidate. The position of the first three-dimensional object candidate is detected from the three-dimensional positions of the reflection points constituting it; for example, the three-dimensional position of the reflection point with the lowest height in the group can be detected as the three-dimensional position of the candidate.
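The grouping step can be sketched as a greedy distance-threshold clustering; the gap threshold and the helper names are assumptions of this example.

```python
def dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def extract_candidates(points, max_gap=0.5):
    """Group leftover (x, y, z) reflection points whose mutual distance is
    below max_gap and report each group's lowest point as the candidate
    position, as in the text."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(dist(p, q) < max_gap for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    # candidate position: the reflection point with the smallest height y
    return [min(c, key=lambda q: q[1]) for c in clusters]
```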

  The second three-dimensional object candidate extraction unit 26 extracts second three-dimensional object candidates corresponding to utility poles, poles, and the like from the image captured by the imaging device 14 by vertical edge detection or pattern recognition.

  First, the extraction of second three-dimensional object candidates using vertical edges will be described. A second three-dimensional object candidate is a pole-like object such as a utility pole, guardrail post, or sign post, or a cylindrical object such as a tree, and may also include the ridge line of a structure such as a building. Since such three-dimensional objects usually contain vertical edges, vertical edges are extracted from the entire captured image. The extracted vertical edges are grouped based on edge length, inclination, edge gradient, and so on, and each group of vertical edges is regarded as one three-dimensional object and extracted as a second three-dimensional object candidate.

  Next, the extraction of second three-dimensional object candidates by pattern recognition will be described. As shown in FIG. 5, the portion where a cylindrical three-dimensional object such as a utility pole or pole meets the road surface forms a concave pattern. Therefore, portions corresponding to this concave pattern are detected in the captured image by pattern recognition, and objects containing the concave pattern are extracted as second three-dimensional object candidates.

  The second three-dimensional object candidate extraction unit 26 executes at least one of the above-described vertical edge detection and pattern recognition.

  The road surface contact point calculation unit 28 establishes the correspondence between the first and second three-dimensional object candidates based on the poses of the laser radar 12 and the imaging device 14. Specifically, the extracted first and second three-dimensional object candidates are projected onto a three-dimensional coordinate space centered on the vehicle, and, using the horizontal position of each second three-dimensional object candidate as a clue, the space is searched in the depth direction to find the corresponding first three-dimensional object candidate. FIG. 6(a) shows an extraction example of second three-dimensional object candidates, and FIG. 6(b) an extraction example of first three-dimensional object candidates; for ease of explanation, FIG. 6(b) is drawn as a bird's-eye view with the height information removed. The candidates numbered 1 to 5 in the two figures correspond to each other, and in FIG. 6(b) each reflection point group surrounded by a rectangular frame represents a grouped first three-dimensional object candidate.
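The correspondence search can be sketched as follows, assuming each second candidate has been back-projected to a horizontal position in the vehicle-centered space; the tolerance and data layout are assumptions of this example.

```python
def match_candidates(second_candidates, first_candidates, x_tol=0.5):
    """For each image-derived candidate (id -> horizontal position x), search
    in the depth direction among radar-derived candidates (id -> (x, y, z))
    and keep the nearest one at a similar horizontal position."""
    matches = {}
    for cam_id, cam_x in second_candidates.items():
        close = [(z, lid) for lid, (x, y, z) in first_candidates.items()
                 if abs(x - cam_x) < x_tol]
        if close:
            matches[cam_id] = min(close)[1]  # nearest in depth
    return matches
```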

  The road surface contact point calculation unit 28 detects, in the captured image, the contact point between a three-dimensional object and the road surface (hereinafter, "road surface contact point"). Specifically, when the second three-dimensional object candidate was extracted by vertical edge detection, the lowest end of the vertical edge is detected as the road surface contact point; when it was extracted by pattern recognition, the point where the candidate intersects the road surface, that is, part of the concave pattern, is detected as the road surface contact point.

  The road surface contact point calculation unit 28 also calculates the three-dimensional position of the road surface contact point based on the direction from the origin of the three-dimensional coordinate space to the road surface contact point, obtained from the second three-dimensional object candidate, and the distance between the host vehicle and the first three-dimensional object candidate corresponding to the second candidate for which the contact point was detected. The direction vector to the contact point can be calculated from the second three-dimensional object candidate. The laser radar 12 has high ranging accuracy, but its resolution at distant measurement points is low; by combining it with the information on the road surface contact point detected in the captured image, the three-dimensional position of the contact point can be calculated accurately.

As shown in FIG. 7, for example, when the depth-direction axis of the vehicle-centered three-dimensional coordinate space is taken as the optical axis of the imaging device 14 and the origin as its imaging surface, the direction to the road surface contact point can be expressed by the horizontal angle θH and the vertical angle θV formed between the depth axis (optical axis) and the line segment connecting the origin (imaging device 14) to the road surface contact point. The horizontal angle θH is the angle between the vertical plane containing the optical axis of the imaging device 14 and the vertical plane containing the line segment from the imaging device 14 to the contact point; the vertical angle θV is the angle between the plane that is perpendicular to the vertical plane and contains the optical axis and the plane that is perpendicular to the vertical plane and contains the line segment from the imaging device 14 to the contact point. The distance between the first three-dimensional object candidate and the host vehicle can be calculated from the three-dimensional position of the candidate detected by the first three-dimensional object candidate extraction unit 24.
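The geometry can be sketched as below: with tan θH = x/z and tan θV = y/z, the viewing ray is proportional to (tan θH, tan θV, 1) and is scaled so that the horizontal range matches the radar distance. Treating the radar distance as the range in the x-z plane is an assumption of this sketch.

```python
import math

def contact_point_3d(theta_h, theta_v, radar_range_xz):
    """3D position of a road surface contact point from the image direction
    angles of FIG. 7 and the distance to the matched first candidate."""
    ray = (math.tan(theta_h), math.tan(theta_v), 1.0)    # direction, z = 1
    scale = radar_range_xz / math.hypot(ray[0], ray[2])  # match x-z range
    return (scale * ray[0], scale * ray[1], scale * ray[2])
```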

  The road gradient estimation unit 30 plots the road surface contact points calculated by the road surface contact point calculation unit 28 and the road surface reflection points extracted by the road surface reflection point extraction unit 22, based on their three-dimensional positions, in a coordinate system centered on the vehicle, and estimates the road shape by fitting a road surface model using a known technique such as the least squares method. For simplicity of description, the case of estimating the gradient of the road plane will be described concretely. As shown in FIG. 8, the road surface contact points and road surface reflection points are projected onto the plane formed by the vehicle longitudinal axis (z axis) and the height axis (y axis), and a clothoid curve represented by the following equation (1) is fitted to this point group as the road surface model.

y = (C0/2)z² + (C1/6)z³ + θp·z + C2 … (1)

where C0 is the curvature, C1 is the rate of change of curvature, C2 is the offset in the y-axis direction, and θp is the road surface angle in the yz plane at z = 0. The four parameters are then estimated optimally by the least squares method using all of the projected points. The estimated road gradient is output to other driving support systems and vehicle control systems.
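A minimal sketch of the fitting step, writing equation (1) as a linear least squares problem in the four parameters (NumPy assumed):

```python
import numpy as np

def fit_road_model(points_zy):
    """Least-squares fit of equation (1), y = C0*z**2/2 + C1*z**3/6
    + theta_p*z + C2, to projected (z, y) points.
    Returns (C0, C1, theta_p, C2)."""
    z = np.array([p[0] for p in points_zy], dtype=float)
    y = np.array([p[1] for p in points_zy], dtype=float)
    # design matrix columns multiply C0, C1, theta_p, C2 respectively
    A = np.column_stack([z**2 / 2.0, z**3 / 6.0, z, np.ones_like(z)])
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    return tuple(params)
```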

  If there are many observation points, the selection of observation points by random sampling and parameter estimation from the selected points may be repeated several times; this yields a stable estimation result while reducing the amount of computation. Further, the road surface model is not limited to a clothoid curve; it need not even be a continuous function, and fitting may be performed with a discontinuous polyline model.

  Next, the operation of the road gradient estimation apparatus 10 according to the first embodiment will be described.

  First, the laser radar 12 scans the lasers of the plurality of scanning lines in the horizontal direction ahead of the host vehicle and acquires observation data specifying the three-dimensional position of each reflection point around the vehicle. Observation data are obtained from the laser radar 12 every time the laser is scanned. The surroundings of the vehicle are imaged by the imaging device 14 in synchronization with the observation timing of the laser radar. The computer 20 then executes the road gradient estimation processing routine shown in FIG. 9.

  In step 100, observation data observed by the laser radar 12 and a captured image captured by the imaging device 14 are acquired.

  Next, in step 102, road surface reflection points at which the laser light was reflected by the road surface are extracted from the reflection points indicated by the observation data acquired in step 100, by determining whether each reflection point is a road surface reflection point from the pulse shape and signal intensity of the received laser light, from the height information of its three-dimensional position, or from the arrangement (continuity) of the reflection points.

  Next, in step 104, among the reflection points indicated by the observation data acquired in step 100, those not extracted as road surface reflection points in step 102 are grouped when their mutual distances are smaller than the predetermined distance, and each group of adjacent reflection points is regarded as one three-dimensional object and extracted as a first three-dimensional object candidate. The position of each first three-dimensional object candidate is detected from the three-dimensional positions of the reflection points it contains.

  Next, in step 106, second three-dimensional object candidates corresponding to utility poles, poles, and the like are extracted from the captured image acquired in step 100 by vertical edge detection or pattern recognition.

  Next, in step 108, the first three-dimensional object candidates extracted in step 104 and the second three-dimensional object candidates extracted in step 106 are projected onto the three-dimensional coordinate space centered on the vehicle, and the correspondence between them is established by searching in the depth direction, using the horizontal position of each second candidate as a clue, for the corresponding first candidate.

  Next, in step 110, when a second three-dimensional object candidate was extracted by vertical edge detection, the lowest end of the vertical edge in the captured image is detected as the road surface contact point; when it was extracted by pattern recognition, the point where the candidate intersects the road surface is detected as the road surface contact point. Then, the three-dimensional position of the road surface contact point is calculated from the angles between the depth axis of the vehicle-centered three-dimensional coordinate space and the line segment connecting the origin to the road surface contact point, and from the distance between the host vehicle and the first three-dimensional object candidate corresponding to the second candidate for which the contact point was detected.

  Next, in step 112, based on the three-dimensional positions of the road surface contact points calculated in step 110 and of the road surface reflection points extracted in step 102, the road surface contact points and road surface reflection points are plotted in the coordinate system centered on the vehicle, and the road shape is estimated by fitting the road surface model with a known technique such as the least squares method. The estimation result is output to other driving support systems and vehicle control systems, and the processing ends.

  As described above, according to the road gradient estimation device of the first embodiment, the three-dimensional position of a road surface contact point detected in the captured image is calculated using the distance information of the laser radar, and the road surface is estimated using this three-dimensional position; therefore, the gradient of the road ahead can be estimated accurately even when no specific object such as a preceding vehicle or a lane boundary line is present.

  Further, since the three-dimensional positions of the road surface reflection points in the laser radar observation data are also used, the number of observation points can be increased and the estimation accuracy improved.

  Next, a second embodiment will be described. In the road gradient estimation device of the second embodiment, configurations similar to those of the road gradient estimation device 10 of the first embodiment are given the same reference numerals, and detailed description thereof is omitted.

  As shown in FIG. 10, the road gradient estimation device 210 according to the second embodiment includes the laser radar 12, the imaging device 14, a vehicle state sensor 16 including a vehicle speed sensor, a gyro sensor, and the like, and a computer 220. When the computer 220 is described in terms of functional blocks divided by the function realizing means determined by hardware and software, it can be represented by a configuration including: a road surface reflection point extraction unit 222; a first three-dimensional object candidate extraction unit 224; the second three-dimensional object candidate extraction unit 26; a road surface contact point calculation unit 228; a road gradient estimation unit 230; an information storage unit 32 that stores the three-dimensional position information of road surface contact points and road surface reflection points; an own vehicle motion estimation unit 34 that estimates the motion of the host vehicle based on the detection values of the vehicle state sensor 16 and the current and past observation values of the laser radar 12; and an accumulated data coordinate conversion unit 36 that converts the past three-dimensional positions of contact points and road surface reflection points stored in the information storage unit 32 (hereinafter also "accumulated data") into the current coordinate system based on the estimated motion of the host vehicle.

  The road surface reflection point extraction unit 222 and the first three-dimensional object candidate extraction unit 224 are an example of the first extraction means of the present invention, and the own vehicle motion estimation unit 34 and the accumulated data coordinate conversion unit 36 are an example of the conversion means of the present invention.

  The own vehicle motion estimation unit 34 collates the point cloud indicated by the past observation data accumulated in the information storage unit 32 with the point cloud indicated by the current observation data, and estimates the amount of movement of the host vehicle between the observations. The movement can be estimated by matching the point clouds with the ICP (Iterative Closest Point) algorithm, or by extracting multiple profile features from the point clouds and estimating the motion that best aligns their positions overall. For the collation between the past and current point clouds, the vehicle speed, attitude angle, and other detection values of the vehicle state sensor 16 can be used as initial values of the host vehicle motion, which reduces the amount of computation required for the collation.
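As an illustration of the point-cloud matching idea, here is a minimal 2D ICP sketch (Kabsch alignment of nearest-neighbor pairs); it is not the patent's implementation, and in practice a vehicle-speed/attitude seed from the vehicle state sensor would provide the initial alignment.

```python
import numpy as np

def icp_2d(prev_pts, curr_pts, iters=20):
    """Estimate the rigid motion (R, t) aligning the previous scan (N,2)
    with the current scan (M,2) by iterating nearest-neighbor matching
    and SVD-based (Kabsch) alignment."""
    R, t = np.eye(2), np.zeros(2)
    src = prev_pts.astype(float).copy()
    for _ in range(iters):
        # nearest current point for every transformed previous point
        d = np.linalg.norm(src[:, None, :] - curr_pts[None, :, :], axis=2)
        tgt = curr_pts[d.argmin(axis=1)]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (tgt - mu_t))
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:  # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_t - Ri @ mu_s
        src = src @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti  # compose incremental motion
    return R, t
```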

  In addition, when the own vehicle motion estimation unit 34 collates the past point cloud with the current point cloud, the matching points can be determined to belong to stationary objects and the non-matching points to moving objects or road surface points. The determination results are output to the road surface reflection point extraction unit 222 and the first three-dimensional object candidate extraction unit 224 for use in extracting the road surface reflection points and the first three-dimensional object candidates.

  Like the road surface reflection point extraction unit 22 of the first embodiment, the road surface reflection point extraction unit 222 extracts, based on the observation results of the laser radar 12, the road surface reflection points at which the laser light was reflected by the road surface from the reflection points indicated by the observation data. In doing so, it uses the determination results output from the own vehicle motion estimation unit 34 to extract road surface reflection points from the reflection point group determined to be moving objects or road surface points (the points for which the past and current point clouds did not match). The extracted three-dimensional positions of the road surface reflection points are stored in the information storage unit 32.

  Like the first three-dimensional object candidate extraction unit 24 of the first embodiment, the first three-dimensional object candidate extraction unit 224 groups, among the reflection points indicated by the observation data of the laser radar 12 and not extracted as road surface reflection points by the road surface reflection point extraction unit 222, the reflection points whose mutual distances are smaller than the predetermined distance, regards each group of adjacent points as one three-dimensional object, and extracts it as a first three-dimensional object candidate. In doing so, the determination results output from the own vehicle motion estimation unit 34 are also used: based on the information of the reflection point group determined to be stationary objects (the points for which the past and current point clouds matched), the unit re-examines which reflection points to use for the first three-dimensional object candidates and how to group them. The position of each candidate is detected from the three-dimensional positions of the reflection points constituting it.

  The road surface contact point calculation unit 228 detects road surface contact points by establishing the correspondence between the first and second three-dimensional object candidates in the same manner as the road surface contact point calculation unit 28 of the first embodiment, and calculates the three-dimensional positions of the road surface contact points. The calculated positions are stored in the information storage unit 32. At this time, the accumulated data already stored in the information storage unit 32 are collated with the current three-dimensional positions of the road surface contact points, and a reliability that grows with the number of successful collations is assigned to the collated contact point data. For example, the coordinate system of the accumulated data is converted into the current coordinate system by the processing of the accumulated data coordinate conversion unit 36 described later, the coordinate space is partitioned into fine grid cells, and when the current three-dimensional position of a road surface contact point falls in a cell where accumulated data have already been observed, the reliability associated with that cell is counted up. The cell size is made sufficiently fine, considering the resolution of the laser radar 12, that points indicating different road surface points are not plotted in the same cell. This makes it possible to distinguish highly reliable data from unreliable data. A reliability may similarly be assigned when the road surface reflection point extraction unit 222 stores the extracted road surface reflection points in the information storage unit 32.
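The cell-based bookkeeping can be sketched as follows; the cell size and the reliability threshold are assumed values for illustration.

```python
from collections import defaultdict

class ReliabilityGrid:
    """Count, per grid cell, how often a contact point has been observed;
    the count serves as the reliability of points falling in that cell."""
    def __init__(self, cell=0.2):
        self.cell = cell
        self.counts = defaultdict(int)

    def _key(self, point):
        return tuple(int(c // self.cell) for c in point)

    def add(self, point):
        """Store a newly calculated point; re-observed cells count up."""
        self.counts[self._key(point)] += 1
        return self.counts[self._key(point)]

    def reliable(self, points, min_count=2):
        return [p for p in points if self.counts[self._key(p)] >= min_count]
```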

  The accumulated data coordinate conversion unit 36 converts the accumulated data stored in the information storage unit 32 into the current coordinate system based on the estimated motion of the host vehicle. For example, among the accumulated data of time (t−1), one observation before the current time t, the data whose reliability is equal to or higher than a predetermined value are read from the information storage unit 32 and plotted in the three-dimensional coordinate space centered on the vehicle at time (t−1). Then, the amount of movement of the host vehicle between time (t−1) and time t estimated by the own vehicle motion estimation unit 34 is acquired, and the three-dimensional coordinate space at time (t−1) is converted into the three-dimensional coordinates of time t.
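Given the inter-observation motion (R, t) estimated by the matching above, the conversion itself is one rigid transform; a minimal sketch, assuming (R, t) maps the vehicle frame at time (t−1) onto the frame at time t:

```python
import numpy as np

def to_current_frame(stored_pts, R, t):
    """Convert accumulated points from the vehicle coordinate system at
    time t-1 into that at time t, using the same transform that aligns
    the previous scan with the current one. stored_pts: (N,2) or (N,3)."""
    return np.asarray(stored_pts, dtype=float) @ R.T + t
```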

  In the same manner as the road gradient estimation unit 30 of the first embodiment, the road gradient estimation unit 230 plots the current road surface contact points calculated by the road surface contact point calculation unit 228 and the current road surface reflection points extracted by the road surface reflection point extraction unit 222 (hereinafter also "current data") in the coordinate system centered on the vehicle, and also plots the accumulated data converted by the accumulated data coordinate conversion unit 36 in the same coordinate system. The current data and accumulated data are then collated, and points that can be regarded as matching are integrated: for example, the coordinate space is partitioned into fine grid cells as described above, and for current and accumulated data plotted in the same cell, either one of the two is deleted or the two are integrated into one datum, for instance by taking their average. When only one datum is plotted in a cell, it is used as it is.
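A sketch of the cell-wise integration, averaging the current and accumulated points that fall in the same cell; the cell size is an assumed value.

```python
from collections import defaultdict

def merge_points(current, accumulated, cell=0.2):
    """Merge current data and converted accumulated data: points plotted
    in the same grid cell are averaged into a single point."""
    bins = defaultdict(list)
    for p in list(current) + list(accumulated):
        bins[tuple(int(c // cell) for c in p)].append(p)
    return [tuple(sum(cs) / len(ps) for cs in zip(*ps))
            for ps in bins.values()]
```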

  Further, after plotting and integrating the current data and accumulated data in the vehicle-centered coordinate system as described above, the road gradient estimation unit 230 estimates the road shape by fitting the road surface model with a known technique such as the least squares method, as in the first embodiment.

  Next, the operation of the road gradient estimation apparatus 210 according to the second embodiment will be described.

  First, the laser radar 12 scans the lasers of the plurality of scanning lines in the horizontal direction ahead of the host vehicle and acquires observation data specifying the three-dimensional position of each reflection point around the vehicle. Observation data are obtained from the laser radar 12 every time the laser is scanned. The surroundings of the vehicle are imaged by the imaging device 14 in accordance with the observation timing of the laser radar. The computer 220 then executes the road gradient estimation processing routine shown in FIG. 11. The same step numbers are given to the same processing as in the road gradient estimation processing of the first embodiment, and detailed description thereof is omitted.

  In step 200, observation data observed by the laser radar 12, a captured image captured by the imaging device 14, and a detection value of the vehicle state sensor 16 are acquired.

  Next, in step 202, the point cloud indicated by the past observation data stored in the information storage unit 32 is collated with the point cloud indicated by the observation data acquired in step 200, and the amount of movement of the host vehicle between the observations is estimated; here, the movement between time (t−1) and time t is estimated. For the collation between the past and current point clouds, the vehicle speed, attitude angle, and other detection values of the vehicle state sensor 16 acquired in step 200 are used as initial values of the host vehicle motion. When the past and current point clouds are collated, the matching points are determined to belong to stationary objects and the non-matching points to moving objects or road surface points.

  Next, in step 204, the road surface reflection points at which the laser light was reflected by the road surface are extracted from the reflection points determined in step 202 to be moving objects or road surface points, based on the shape and signal intensity of the received light pulse, height information, the continuity of the reflection points, and so on.

  Next, in step 206, the three-dimensional positions of the road surface reflection points extracted in step 204 are collated with the accumulated data stored in the information storage unit 32, the reliability is updated according to the collation results, and the three-dimensional positions of the road surface reflection points are stored in the information storage unit 32.

  Next, in step 208, among the reflection points indicated by the observation data acquired in step 200, those not extracted as road surface reflection points in step 204 and determined in step 202 to be stationary objects are grouped when their mutual distances are smaller than the predetermined distance, and extracted as first three-dimensional object candidates. The position of each first three-dimensional object candidate is detected from the three-dimensional positions of the reflection points it contains.

  Next, in step 106, second three-dimensional object candidates are extracted from the captured image acquired in step 200. Next, in step 108, the correspondence between the first three-dimensional object candidates and the second three-dimensional object candidates is established. Next, in step 110, the three-dimensional positions of the road surface contact points are calculated.

  Next, in step 209, the three-dimensional positions of the road surface contact points calculated in step 110 are collated with the accumulated data stored in the information storage unit 32, the reliability is updated according to the collation results, and the three-dimensional positions of the road surface contact points are stored in the information storage unit 32.

  Next, in step 212, among the accumulated data of time (t−1), one observation before the current time t, the data whose reliability is equal to or higher than the predetermined value are read from the information storage unit 32 and plotted in the three-dimensional coordinate space centered on the vehicle at time (t−1). Then, based on the movement of the host vehicle between time (t−1) and time t estimated in step 202, the three-dimensional coordinate space at time (t−1) is converted into the three-dimensional coordinates of time t.

  Next, in step 214, based on the three-dimensional positions of the road surface contact points calculated in step 110, the three-dimensional positions of the road surface reflection points extracted in step 204, and the accumulated data coordinate-converted in step 212, the road surface contact points, road surface reflection points, and accumulated data are plotted in the coordinate system centered on the vehicle, and the road shape is estimated by fitting the road surface model with a known technique such as the least squares method. The estimation result is output to other driving support systems and vehicle control systems, and the processing ends.

  As described above, according to the road gradient estimation device of the second embodiment, the road surface is estimated using not only the current data but also the three-dimensional positions of road surface contact points calculated in the past and of road surface reflection points extracted in the past; the number of observation points can therefore be increased, and the estimation accuracy further improved.

  In addition, when past data are used, using only data whose reliability, which corresponds to the number of matches with the current data, is sufficiently high reduces the influence of observation noise that appears and disappears from one observation to the next, so the estimation accuracy can be further improved.

  In the second embodiment, the case has been described in which the movement of the host vehicle between observations is estimated by collating the current and past observation data of the laser radar; however, the movement of the host vehicle may instead be estimated based on the detection values of the vehicle state sensor, or from the captured images.

  In the second embodiment, the case has been described in which accumulated data whose reliability is equal to or higher than the predetermined value are used in addition to the current data; however, when the road surface model is fitted by the least squares method or the like, the reliability may instead be used as a weight on the errors.

  Further, in the above embodiments, the case has been described in which information specifying the positions of a plurality of reflection points around the vehicle is observed using a laser radar; however, the present invention is not limited to this, and electromagnetic waves such as millimeter waves may be used. Alternatively, information specifying the positions of a plurality of points around the vehicle may be calculated by post-processing images captured by a stereo camera.

DESCRIPTION OF SYMBOLS
10, 210 Road gradient estimation device
12 Laser radar
14 Imaging device
16 Vehicle state sensor
20, 220 Computer
22, 222 Road surface reflection point extraction unit
24, 224 First three-dimensional object candidate extraction unit
26 Second three-dimensional object candidate extraction unit
28, 228 Road surface contact point calculation unit
30, 230 Road gradient estimation unit
32 Information storage unit
34 Own vehicle motion estimation unit
36 Accumulated data coordinate conversion unit

Claims (12)

  1. A road gradient estimation device including:
    acquisition means for acquiring a three-dimensional position of each of a plurality of points around a moving body, expressed relative to the moving body, and an image obtained by imaging the periphery of the moving body with imaging means mounted on the moving body;
    first extraction means for extracting points indicating a three-dimensional object as a first three-dimensional object candidate, based on the three-dimensional positions of the plurality of points acquired by the acquisition means;
    second extraction means for extracting a second three-dimensional object candidate, based on at least one of vertical edge information of the image acquired by the acquisition means and matching information with a predetermined shape;
    calculation means for calculating a three-dimensional position of a contact point between the second three-dimensional object candidate and the road surface, based on the three-dimensional position of the first three-dimensional object candidate corresponding to the second three-dimensional object candidate; and
    gradient estimation means for estimating a road gradient around the moving body, based on the three-dimensional position of the contact point calculated by the calculation means.
  2. The road gradient estimation device according to claim 1, wherein the calculation means calculates the three-dimensional position of the contact point based on the direction from the origin of the three-dimensional coordinate space to the contact point, obtained from the second three-dimensional object candidate, and on the three-dimensional position of the corresponding first three-dimensional object candidate.
  3. The road gradient estimation device according to claim 1, further including conversion means for converting three-dimensional positions of contact points calculated in the past into the current coordinate system based on the movement of the moving body over a predetermined period,
    wherein the gradient estimation means estimates the road gradient based on a distribution that includes the three-dimensional positions of the contact points converted by the conversion means.
  4. The road gradient estimation device according to claim 3, wherein the calculation means assigns a reliability based on a result of comparing the three-dimensional position of the currently calculated contact point with the three-dimensional positions of contact points calculated in the past, and stores the three-dimensional position of the currently calculated contact point in a predetermined storage area,
    and wherein the gradient estimation means either uses, among the three-dimensional positions of contact points stored in the predetermined storage area, only those whose reliability is equal to or higher than a predetermined value, converted by the conversion means, or uses the stored three-dimensional positions converted by the conversion means and weighted according to their reliability.
  5. The road gradient estimation device according to claim 1, wherein the first extraction means extracts road surface points from the plurality of points and extracts points other than the road surface points as the first three-dimensional object candidate,
    and wherein the gradient estimation means estimates the road gradient based on a distribution of three-dimensional positions that includes the three-dimensional positions of the road surface points.
  6. The road gradient estimation device according to claim 3 or 4, wherein the first extraction means extracts road surface points from the plurality of points and extracts points other than the road surface points as the first three-dimensional object candidate,
    and wherein the gradient estimation means estimates the road gradient based on a distribution of three-dimensional positions that includes the three-dimensional positions of the road surface points.
  7. The road gradient estimation device according to claim 6, wherein the conversion means converts three-dimensional positions of road surface points extracted in the past into the current coordinate system based on the movement of the moving body over a predetermined period,
    and wherein the gradient estimation means estimates the road gradient based on a distribution that includes the three-dimensional positions of the road surface points converted by the conversion means.
  8. The road gradient estimation device according to any one of claims 1 to 7, wherein the second extraction means extracts vertical edges from the image and extracts a vertical edge indicating at least one of a utility pole, a pole, a tree, and a building as the second three-dimensional object candidate,
    and wherein the calculation means detects the lowermost end of the vertical edge as the contact point.
  9. The road gradient estimation device according to any one of claims 1 to 8, wherein the second extraction means extracts a cylindrical object from the image as the second three-dimensional object candidate by pattern matching,
    and wherein the calculation means detects a point where the cylindrical object and the road surface intersect as the contact point.
  10. The road gradient estimation device according to any one of claims 1 to 9, wherein the three-dimensional position of each of the plurality of points is measured by a laser radar that has a plurality of scanning lines and is configured so that at least one of the scanning lines is irradiated onto the road surface.
  11. A road gradient estimation program for causing a computer to function as:
    acquisition means for acquiring a three-dimensional position of each of a plurality of points around a moving body, expressed relative to the moving body, and an image obtained by imaging the periphery of the moving body with imaging means mounted on the moving body;
    first extraction means for extracting points indicating a three-dimensional object as a first three-dimensional object candidate, based on the three-dimensional positions of the plurality of points acquired by the acquisition means;
    second extraction means for extracting a second three-dimensional object candidate, based on at least one of vertical edge information of the image acquired by the acquisition means and matching information with a predetermined shape;
    calculation means for calculating a three-dimensional position of a contact point between the second three-dimensional object candidate and the road surface, based on the three-dimensional position of the first three-dimensional object candidate corresponding to the second three-dimensional object candidate; and
    gradient estimation means for estimating a road gradient around the moving body, based on the three-dimensional position of the contact point calculated by the calculation means.
  12. A road gradient estimation program for causing a computer to function as each of the means constituting the road gradient estimation device according to any one of claims 1 to 10.
JP2011094256A 2011-04-20 2011-04-20 Road gradient estimation device and program Withdrawn JP2012225806A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011094256A JP2012225806A (en) 2011-04-20 2011-04-20 Road gradient estimation device and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011094256A JP2012225806A (en) 2011-04-20 2011-04-20 Road gradient estimation device and program

Publications (1)

Publication Number Publication Date
JP2012225806A true JP2012225806A (en) 2012-11-15

Family

ID=47276132

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011094256A Withdrawn JP2012225806A (en) 2011-04-20 2011-04-20 Road gradient estimation device and program

Country Status (1)

Country Link
JP (1) JP2012225806A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015143979A (en) * 2013-12-27 2015-08-06 株式会社リコー Image processor, image processing method, program, and image processing system
CN104898128A (en) * 2014-03-07 2015-09-09 欧姆龙汽车电子株式会社 Laser radar device and object detecting method
JP2015205546A (en) * 2014-04-18 2015-11-19 アルパイン株式会社 Road gradient detection device, drive support device and computer program
JP2016206025A (en) * 2015-04-23 2016-12-08 株式会社デンソー Posture estimation device
KR101816034B1 (en) * 2015-12-11 2018-02-21 현대오트론 주식회사 Apparatus and method for detecting road boundary line using camera and radar
KR101886566B1 (en) * 2017-03-21 2018-08-07 연세대학교 산학협력단 Method and apparatus for providing image of road configuration using radar
CN108955584A (en) * 2017-05-23 2018-12-07 上海汽车集团股份有限公司 A kind of road surface detection method and device
WO2019188222A1 (en) * 2018-03-30 2019-10-03 株式会社小松製作所 Control device of working machine, control device of excavating machine, and control method of working machine
WO2019186641A1 (en) * 2018-03-26 2019-10-03 三菱電機株式会社 Object detection device, vehicle, object detection method, and object detection program
WO2019198789A1 (en) * 2018-04-12 2019-10-17 株式会社小糸製作所 Object identification system, automobile, vehicle lamp fitting, and object clustering method

Legal Events

Date Code Title Description
A300 Withdrawal of application because of no request for examination

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20140701