CN112286178B - Identification system, vehicle control system, identification method, and storage medium


Info

Publication number
CN112286178B
Authority
CN
China
Prior art keywords
vehicle
road surface
individual
unit
plane
Prior art date
Legal status
Active
Application number
CN202010707780.9A
Other languages
Chinese (zh)
Other versions
CN112286178A
Inventor
李亦杨
Current Assignee
Honda Motor Co Ltd
Original Assignee
Honda Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Honda Motor Co Ltd filed Critical Honda Motor Co Ltd
Publication of CN112286178A publication Critical patent/CN112286178A/en
Application granted granted Critical
Publication of CN112286178B publication Critical patent/CN112286178B/en


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D 1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D 1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D 1/0234 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons
    • G05D 1/0236 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons in combination with a laser
    • G05D 1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D 1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D 1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D 1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an identification system, a vehicle control system, an identification method, and a storage medium capable of determining a road surface more accurately. An identification system mounted on a vehicle includes: a detection unit that detects the positions of objects existing in the periphery of the vehicle; and a determination unit that determines the road surface around the vehicle based on the detection result of the detection unit. The determination unit determines, using a predetermined algorithm, whether or not each individual region obtained by dividing the detection result of the detection unit on a two-dimensional plane is a plane, and aggregates the determination results for the individual regions to determine the road surface around the vehicle.

Description

Identification system, vehicle control system, identification method, and storage medium
Technical Field
The invention relates to an identification system, a vehicle control system, an identification method and a storage medium.
Background
Conventionally, an automatic traveling vehicle has been disclosed that includes a road shape recognition unit that recognizes a road shape, a travel path creation unit that creates a travel path using the recognized road shape, and a vehicle travel control device that realizes automatic traveling along the travel path. The road shape recognition unit includes a coordinate information acquisition unit that acquires a plurality of pieces of coordinate information in which plane coordinates are associated with height information, a coordinate extraction unit that extracts, from the plurality of pieces of coordinate information, a plurality of coordinates of interest having a height difference equal to or greater than a predetermined value, and a shape determination unit that determines the road shape by statistically processing the extracted coordinates of interest (patent document 1: Japanese patent application laid-open No. 2010-250743).
Disclosure of Invention
[Problem to be solved by the invention]
In this conventional technique, a portion having a height difference equal to or greater than a predetermined value is identified as, for example, a road shoulder or a ditch, and the remaining portion is identified as the road surface. With this method, however, part of the road surface may fail to be recognized as road surface because of the curvature of the road surface itself, and a small obstacle may be overlooked if the predetermined value is set large.
An object of the present invention is to provide an identification system, a vehicle control system, an identification method, and a storage medium capable of more accurately determining a road surface.
[Means for solving the problems]
The following configuration is adopted for the identification system, the vehicle control system, the identification method, and the storage medium of the present invention.
(1): an identification system according to an aspect of the present invention is mounted on an identification system of a vehicle, the identification system including: a detection unit that detects a position of an object existing in the periphery of the vehicle; and a determination unit that determines a road surface around the vehicle based on a detection result of the detection unit, wherein the determination unit determines whether or not each individual region obtained by dividing the detection result of the detection unit on a two-dimensional plane is a plane using a predetermined algorithm, and gathers the determination results for each individual region to determine the road surface around the vehicle.
(2): in the aspect of (1) above, the detection unit is a lidar.
(3): in the aspect of (2) above, the detection unit irradiates the laser beam to the periphery of the vehicle while changing the elevation angle or the depression angle and the azimuth angle, and the determination unit determines whether or not the object is a plane using the predetermined algorithm for each individual area obtained by dividing point cloud data in which the position of the object represented by at least the elevation angle or the depression angle, the azimuth angle, and the distance is projected onto the two-dimensional plane.
(4): in any one of the above (1) to (3), the determination section makes the sizes of the individual areas different based on the distance from the vehicle in the two-dimensional plane.
(5): in any one of the above (1) to (4), the determining unit obtains information indicating a distribution of objects around the vehicle, and changes the size of the individual region based on the obtained information indicating the distribution of objects.
(6): in any one of the aspects (1) to (5) above, the determination unit obtains information on a type of road on which the vehicle is present, and increases the size of the individual area when the obtained information on the type of road indicates a specific type of road, compared with when the obtained information on the type of road does not indicate a specific type of road.
(7): a vehicle control system including the identification system according to any one of the above (1) to (6); and a travel control device that performs travel control of the vehicle based on information excluding a portion corresponding to the road surface specified by the specifying unit from a detection result of the detecting unit in the identifying system.
(8): in another aspect of the present invention, a computer mounted on a vehicle executes: a detection result of a detection unit that detects the position of an object existing in the vicinity of a vehicle is acquired, a road surface in the vicinity of the vehicle is specified based on the detection result, and when the determination is made, a predetermined algorithm is used for each individual area obtained by dividing the detection result in a two-dimensional plane to determine whether the area is a plane, and the determination results for each individual area are collected to specify the road surface in the vicinity of the vehicle.
(9): a storage medium according to another aspect of the present invention stores a program for causing a computer mounted on a vehicle to execute: a detection result of a detection unit that detects the position of an object existing in the vicinity of a vehicle is acquired, a road surface in the vicinity of the vehicle is specified based on the detection result, and at the time of the specification, whether or not the vehicle is planar is determined for each individual region obtained by dividing the detection result into two-dimensional planes by using a predetermined algorithm, and the determination results for each individual region are collected to specify the road surface in the vicinity of the vehicle.
[Effects of the invention]
According to the aspects (1) to (9), the road surface can be more accurately determined.
Drawings
Fig. 1 is a diagram showing a case where an identification system and a vehicle control system are mounted on a vehicle.
Fig. 2 is a configuration diagram of the object recognition apparatus.
Fig. 3 is a diagram illustrating an example of point cloud data.
Fig. 4 is a diagram showing a set mesh.
Fig. 5 is a diagram showing point cloud data obtained by removing coordinates of a portion determined as a road surface by the method of the comparative example.
Fig. 6 is a diagram showing point cloud data obtained by removing coordinates of a portion determined as a road surface by the method of the embodiment.
Fig. 7 is a diagram showing point cloud data obtained by removing coordinates of a portion determined as a road surface by the method of the embodiment.
Fig. 8 is a flowchart showing an example of the flow of processing performed by the recognition system.
[Description of reference numerals]
10 Laser radar
50 Object recognition device
60 Laser radar data processing unit
61 Point cloud data generation unit
62 Information acquisition unit
63 Road surface determination unit
63A Mesh setting unit
63B Plane extraction processing unit
64 Non-road-surface object extraction unit
65 Road dividing line recognition unit
100 Travel control device
Detailed Description
Embodiments of an identification system, a vehicle control system, an identification method, and a storage medium according to the present invention will be described below with reference to the accompanying drawings.
Fig. 1 is a diagram showing a case where an identification system and a vehicle control system are mounted on a vehicle M. The vehicle M is mounted with, for example, a laser radar (Light Detection and Ranging: LIDAR) 10 (an example of a "detection unit"), a camera 20, a radar device 30, an object recognition device 50, and a travel control device 100. The combination of the laser radar 10 and the object recognition device 50 is an example of a "recognition system", and the addition of the travel control device 100 thereto is an example of a "vehicle control system". As the detection unit, a detection device other than a laser radar may be used.
The lidar 10 irradiates light, detects the reflected light, and measures the distance to an object from the time between irradiation and detection. The lidar 10 can change the irradiation direction of the light both in the elevation or depression angle (hereinafter, the vertical irradiation direction Φ) and in the azimuth angle (the horizontal irradiation direction θ). The laser radar 10 repeats an operation of fixing the irradiation direction Φ and scanning while changing the irradiation direction θ, then changing the irradiation direction Φ in the up-down direction, fixing it at the changed angle, and again scanning while changing the irradiation direction θ. Hereinafter, an irradiation direction Φ is referred to as a "layer", one scan performed with the layer fixed while changing the irradiation direction θ is referred to as a "cycle", and scanning over all layers is referred to as "one scan". The layers are set to a finite number, for example from L1 to Ln (n is a natural number). The layer is changed discontinuously with respect to the angle, such as L0→L4→L2→L5→L1 …, so that the light irradiated in the previous cycle does not interfere with detection in the present cycle. The layer may instead be changed continuously with respect to the angle.
The laser radar 10 outputs data sets (laser radar data) to the object recognition device 50 with {Φ, θ, d, p} as one unit, where d is the distance and p is the intensity of the reflected light. The object recognition device 50 is provided at an arbitrary position in the vehicle M. In Fig. 1, the laser radar 10 is provided on the roof of the vehicle M so that the irradiation direction θ can sweep 360 degrees, but this arrangement is merely an example; for instance, a laser radar provided at the front of the vehicle M and sweeping 180 degrees around the front, and another provided at the rear and sweeping 180 degrees around the rear, may be mounted on the vehicle M instead.
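For illustration only (this sketch is not part of the patent; the names and the axis convention are assumptions), one such data unit and its conversion into Cartesian coordinates in the sensor frame might look as follows:

```python
import math
from typing import NamedTuple, Tuple

class LidarReturn(NamedTuple):
    """One laser radar data unit {phi, theta, d, p}: elevation/depression
    angle phi (rad, positive above horizontal), azimuth theta (rad),
    distance d (m), and reflected-light intensity p."""
    phi: float
    theta: float
    d: float
    p: float

def to_cartesian(r: LidarReturn) -> Tuple[float, float, float]:
    """Convert one return to (x, y, z) in the sensor frame."""
    horizontal = r.d * math.cos(r.phi)   # range projected onto the horizontal plane
    x = horizontal * math.cos(r.theta)   # forward component
    y = horizontal * math.sin(r.theta)   # lateral component
    z = r.d * math.sin(r.phi)            # height relative to the sensor
    return (x, y, z)

# A return 20 m away, 5 degrees below horizontal, 30 degrees to the side:
print(to_cartesian(LidarReturn(math.radians(-5.0), math.radians(30.0), 20.0, 0.8)))
```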
The camera 20 is provided at an arbitrary position capable of capturing images of the periphery (particularly, the front or rear) of the vehicle M. For example, the camera 20 is provided on the upper portion of the front windshield. The camera 20 is a digital camera including imaging elements such as CCD (Charge Coupled Device) and CMOS (Complementary Metal Oxide Semiconductor), and repeatedly captures the surroundings of the vehicle M at a predetermined cycle.
The radar device 30 emits radio waves such as millimeter waves to the periphery of the vehicle M, and detects at least the position (distance and azimuth) of the object by detecting the radio waves (reflected waves) reflected by the object. The radar device 30 is mounted on an arbitrary portion of the vehicle M. For example, the radar device 30 is mounted inside a front grille of the vehicle M.
Fig. 2 is a configuration diagram of the object recognition device 50. The object recognition device 50 includes, for example, a laser radar data processing unit 60, a camera image processing unit 70, a radar data processing unit 80, and a sensor fusion unit 90. The laser radar data processing unit 60 includes, for example, a point cloud data generation unit 61, an information acquisition unit 62, a road surface determination unit 63 (an example of a "determination unit"), a non-road-surface object extraction unit 64, and a road dividing line recognition unit 65. The road surface determination unit 63 includes, for example, a mesh setting unit 63A and a plane extraction processing unit 63B. These components are realized by a hardware processor such as a CPU (Central Processing Unit) executing a program (software). Some or all of these components may be realized by hardware (including circuitry) such as an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a GPU (Graphics Processing Unit), or by cooperation of software and hardware. The program may be stored in advance in a storage device such as an HDD (Hard Disk Drive) or flash memory (a storage device including a non-transitory storage medium), or may be stored in a removable storage medium (a non-transitory storage medium) such as a DVD or CD-ROM and installed by mounting the storage medium in a drive device.
The point cloud data generation unit 61 generates point cloud data based on the laser radar data. The point cloud data in the present embodiment is obtained by projecting the three-dimensional positions of objects identified from the laser radar data onto a two-dimensional plane viewed from above. Fig. 3 is a diagram illustrating an example of point cloud data. The two-dimensional plane on which the point cloud data is defined (the plane spanned by the X-axis and Y-axis in the figure) is, for example, a relative plane as observed from the lidar 10. Although not shown in the figure, height information (displacement in the direction orthogonal to the X-axis and the Y-axis) is attached to each coordinate of the point cloud data. The height information is calculated by the point cloud data generation unit 61 based on the continuity and dispersion between coordinates, the difference in irradiation angle between layers, and the like.
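As a minimal sketch of what this projection amounts to (illustrative only, reusing LidarReturn and to_cartesian from the previous sketch; the patent's height calculation from inter-coordinate continuity and layer angles is not reproduced here):

```python
def generate_point_cloud(returns):
    """Stand-in for the point cloud data generation unit 61: convert each
    lidar return to Cartesian coordinates and project it onto the top-view
    X-Y plane, keeping the height z and intensity p as attributes of the
    resulting 2D coordinate."""
    cloud = []
    for r in returns:
        x, y, z = to_cartesian(r)
        cloud.append({"x": x, "y": y, "height": z, "intensity": r.p})
    return cloud
```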
The information acquisition unit 62 acquires various information used when the mesh setting unit 63A sets the mesh.
For example, the information acquisition unit 62 acquires information indicating the distribution of objects around the vehicle M. The distribution of objects around the vehicle M is expressed, for example, as a value (congestion index) obtained by indexing some or all of the numbers of vehicles, pedestrians, bicycles, signals, crosswalks, intersections, and the like within the recognizable range of the recognition system. The higher the density of these elements, the higher the congestion index. The information acquisition unit 62 may calculate the congestion index itself, or may acquire it from the camera image processing unit 70, the radar data processing unit 80, or the like.
The information acquisition unit 62 may acquire information related to the type of road on which the vehicle M is present. The information acquisition unit 62 may acquire the information on the road type from a navigation device (not shown) mounted on the vehicle M, or may derive it from the result of the camera image processing unit 70 recognizing road signs in the camera image.
The mesh setting unit 63A of the road surface determination unit 63 virtually sets a plurality of meshes, which are individual areas obtained by dividing the two-dimensional plane on which the point cloud data is defined. Fig. 4 is a diagram showing the set meshes G. The mesh setting unit 63A sets the meshes G in a rectangular shape (square or oblong), for example. The mesh setting unit 63A may set meshes G of equal size, or may vary the size of the meshes G based on the distance from the vehicle M in the two-dimensional plane. For example, as shown in the figure, the mesh setting unit 63A may increase the size of a mesh G the farther it is from the vehicle M. The mesh setting unit 63A need not set meshes G in areas where recognition is unnecessary (for example, areas outside the road, such as the far side of a guardrail or a building), obtained by referring to past recognition results of the recognition system. The mesh setting unit 63A may also set the meshes G as arbitrary polygons (two or more kinds of polygons may be mixed), such as triangles or hexagons, or as amorphous shapes.
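One way to picture such distance-dependent meshing is the following sketch (the binning scheme and the size bands are assumptions for illustration, not values from the patent):

```python
import math

def mesh_size_for_distance(dist: float) -> float:
    """Illustrative sizing bands: fine cells near the vehicle, coarse far away."""
    if dist < 20.0:
        return 0.5    # meters per side within 20 m
    if dist < 50.0:
        return 1.0
    return 2.0

def mesh_index(x: float, y: float) -> tuple:
    """Map a point on the top-view plane to a mesh G identifier; the cell
    size is chosen from the point's distance to the vehicle origin.
    (Points near a band boundary can land in differently sized cells;
    a real implementation would reconcile this, but it suffices here.)"""
    size = mesh_size_for_distance(math.hypot(x, y))
    return (size, math.floor(x / size), math.floor(y / size))
```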
The mesh setting unit 63A may determine the size of the mesh G based on the congestion index. For example, the mesh setting unit 63A may decrease the size of the mesh G as the congestion index increases.
The mesh setting unit 63A may determine the size of the mesh G based on the road type. For example, when the road type is a specific type with few traffic participants other than vehicles, such as an expressway or a motor-vehicle-only road, the mesh setting unit 63A may increase the size of the mesh G compared with when the road type is not the specific type.
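These two adjustments could be combined in a size-selection helper like the one below (the thresholds and road-type labels are assumptions; the patent specifies the direction of the adjustment, not concrete values):

```python
def base_mesh_size(congestion_index: float, road_type: str) -> float:
    """Pick a nominal mesh G side length: smaller as congestion rises
    (accuracy first in busy areas), larger on road types with few
    traffic participants other than vehicles (load reduction first)."""
    size = 1.0                                # nominal side length in meters
    size /= (1.0 + congestion_index)          # higher congestion -> finer mesh
    if road_type in ("expressway", "motor_vehicle_only"):
        size *= 2.0                           # specific road type -> coarser mesh
    return size

print(base_mesh_size(0.0, "urban_street"))    # 1.0
print(base_mesh_size(3.0, "urban_street"))    # 0.25
print(base_mesh_size(0.0, "expressway"))      # 2.0
```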
The plane extraction processing unit 63B performs plane extraction processing based on a robust regression estimation method such as RANSAC (Random Sample Consensus) on the point cloud data included in each mesh G, determines whether or not the mesh G is a road surface (on which no object exists), and associates the determination result with each mesh G. The plane extraction processing unit 63B may perform other types of plane extraction processing instead of RANSAC.
RANSAC proceeds, for example, in the following order. First, a number of samples equal to or greater than the number required to determine the model (but not all of them) are randomly selected from the data set, and a temporary model is derived from the selected samples by a least squares method or the like. The remaining data are then fitted to the temporary model, and if the deviation is not too large, the temporary model is added to the model candidates. This process is repeated several times, and the model candidate that best matches the data set as a whole is taken as the correct model. In the present embodiment, the plane extraction processing unit 63B determines that a mesh G whose correct model is a plane is road surface.
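A minimal per-mesh RANSAC plane test in this spirit might look as follows (a sketch, not the patent's implementation: it fits z = ax + by + c to three random samples per iteration and judges the mesh to be road surface when the best plane explains nearly all points; the thresholds are assumptions):

```python
import numpy as np

def ransac_plane_is_road(points: np.ndarray,
                         iters: int = 50,
                         inlier_tol: float = 0.05,
                         min_inlier_ratio: float = 0.9) -> bool:
    """points: (N, 3) array of (x, y, height) for one mesh G. Returns True
    when the best plane z = a*x + b*y + c explains almost every point,
    i.e. when no object protrudes from the fitted surface."""
    n = len(points)
    if n < 3:
        return False                          # too sparse to judge as road
    rng = np.random.default_rng(0)
    best_inliers = 0
    for _ in range(iters):
        sample = points[rng.choice(n, size=3, replace=False)]
        A = np.c_[sample[:, 0], sample[:, 1], np.ones(3)]
        coef, *_ = np.linalg.lstsq(A, sample[:, 2], rcond=None)
        a, b, c = coef
        residuals = np.abs(points[:, 0] * a + points[:, 1] * b + c - points[:, 2])
        best_inliers = max(best_inliers, int((residuals < inlier_tol).sum()))
    return best_inliers / n >= min_inlier_ratio

# Example: a flat 10 x 10 patch with a single protruding point.
pts = np.array([[x * 0.1, y * 0.1, 0.0] for x in range(10) for y in range(10)])
pts[50, 2] = 0.5                              # a small obstacle
print(ransac_plane_is_road(pts))              # True: 99 of 100 points fit the plane
```

Because the test runs per mesh G rather than on the whole scan, a locally sloped mesh can still be judged planar, which is the point of the mesh division.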
The non-road-surface object extraction unit 64 analyzes the point cloud data of the meshes G other than those determined to be road surface by the plane extraction processing unit 63B, extracts the contours of objects existing on those meshes G, and identifies the positions of the corresponding objects based on the contours. Alternatively, the non-road-surface object extraction unit 64 may extract the contours of objects from the portion of the laser radar data corresponding to the meshes G other than those determined to be road surface, and identify the positions of the corresponding objects based on the contours.
The road dividing line recognition unit 65 focuses on the intensity p of the reflected light in the laser radar data and recognizes, as the outline of a road dividing line, a portion where the rate of change of the intensity p is high because of the color difference between the road surface and a road dividing line such as a white or yellow line. The road dividing line recognition unit 65 thereby recognizes the positions of road dividing lines such as white lines.
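The idea can be sketched in one dimension along a single scan cycle as follows (the threshold is an assumption; the real unit works on the full laser radar data):

```python
def marking_edges(intensities, threshold=0.3):
    """Return the indices along one scan cycle where the reflected-light
    intensity p jumps by more than `threshold`, i.e. candidate boundaries
    between dark asphalt and a brighter white or yellow line."""
    return [i for i in range(1, len(intensities))
            if abs(intensities[i] - intensities[i - 1]) > threshold]

# Dark road surface, a white line from index 4 to 6, dark road again:
scan = [0.10, 0.12, 0.11, 0.10, 0.85, 0.90, 0.88, 0.10, 0.12]
print(marking_edges(scan))                    # [4, 7]: entering and leaving the line
```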
The processing results of the non-road-surface object extraction unit 64 and the road dividing line recognition unit 65 are output to the sensor fusion unit 90, to which the processing results of the camera image processing unit 70 and the radar data processing unit 80 are also input.
The camera image processing unit 70 performs various image processing on the camera image acquired from the camera 20 and recognizes the position, size, type, and the like of objects existing in the periphery of the vehicle M. The image processing performed by the camera image processing unit 70 may include processing of inputting the camera image to a learned model obtained by machine learning, and processing of extracting edge points from the image and connecting them into contour lines to recognize objects.
The radar data processing unit 80 performs various object extraction processes on the radar data acquired from the radar device 30, and recognizes the position, size, type, and the like of the object existing in the periphery of the vehicle M. The radar data processing unit 80 estimates the type of the object by estimating the material of the object based on the intensity of the reflected wave from the object, for example.
The sensor fusion unit 90 combines the processing results input from the laser radar data processing unit 60, the camera image processing unit 70, and the radar data processing unit 80, determines the positions of objects and road dividing lines, and outputs them to the travel control device 100. The processing of the sensor fusion unit 90 may include, for example, taking a logical sum, a logical product, a weighted sum, or the like of the respective processing results.
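For illustration, the three combination operations mentioned might be applied to per-sensor detection maps as in the sketch below (the occupancy-grid representation and all names are assumptions):

```python
import numpy as np

def fuse(maps, mode="weighted", weights=None):
    """Combine per-sensor detection maps of identical shape: 'or' and 'and'
    correspond to the logical sum and logical product in the text, and
    'weighted' to a weighted sum of confidence values."""
    stack = np.stack(maps)
    if mode == "or":
        return stack.any(axis=0)
    if mode == "and":
        return stack.all(axis=0)
    w = np.asarray(weights if weights is not None else [1.0 / len(maps)] * len(maps))
    return np.tensordot(w, stack.astype(float), axes=1)

lidar_map = np.array([[0, 1], [0, 0]])        # detections from the lidar pipeline
camera_map = np.array([[0, 1], [1, 0]])       # detections from the camera pipeline
print(fuse([lidar_map, camera_map], mode="or"))
print(fuse([lidar_map, camera_map], mode="weighted", weights=[0.7, 0.3]))
```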
By performing the processing described above, the recognition system can determine the road surface more accurately. Fig. 5 shows point cloud data obtained by removing the coordinates of the portions determined to be road surface by the method of a comparative example. The method of the comparative example applies RANSAC to the entire laser radar data (without dividing it into meshes G) and removes the coordinates of the region determined to be road surface. As shown in the figure, with the method of the comparative example a large number of coordinates remain in the areas A1 and A2, which correspond to road surface. In particular, the area A1 is an upward slope as viewed from the vehicle M and is difficult to recognize as road surface when RANSAC is applied to the data as a whole. Furthermore, since a typical road is built higher at the center and lower toward the edges, the method of the comparative example may fail to recognize the road as road surface because of the height difference between the center and the edges. Likewise, small depressions and the like on the road may cause parts of the road surface to be recognized as not being road surface.
In contrast, Fig. 6 and Fig. 7 show point cloud data obtained by removing the coordinates of the portions determined to be road surface by the method of the embodiment. Fig. 6 shows the result when one side of the square mesh G is X1, and Fig. 7 shows the result when one side is X2 (X1 > X2). As these figures show, with the method of the embodiment most of the coordinates in the areas A1 and A2 corresponding to road surface are removed, reducing the possibility of recognizing an obstacle that is not actually present. Reducing the side length of the mesh G (that is, reducing the size of the mesh G) improves the accuracy of road surface determination but increases the processing load, so accuracy and load trade off against each other. In view of this, by increasing the size of the mesh G with distance from the vehicle M, distant areas, where occasional misrecognition has little effect, can be processed at low load, balancing recognition accuracy and processing load. Similarly, by reducing the size of the mesh G as the congestion index rises, recognition accuracy is prioritized in places with many traffic participants, such as urban areas, while load reduction is prioritized elsewhere. Likewise, by enlarging the mesh G when the road type is the specific type compared with when it is not, load reduction is prioritized in places with few traffic participants, and recognition accuracy is prioritized elsewhere, again balancing accuracy and processing load.
The travel control device 100 is, for example, an automated driving control device that controls both the acceleration/deceleration and the steering of the vehicle M. Based on the positions of objects, white lines, and the like output from the object recognition device 50, the travel control device 100 causes the vehicle M to travel automatically in the set lane without contacting objects, and automatically performs lane changes, overtaking, branching, merging, stopping, and the like as necessary. The travel control device 100 may instead be a driving support device or the like that automatically stops the vehicle when an object comes close. In this way, the travel control device 100 performs travel control of the vehicle M based on the information output via the sensor fusion unit 90 from the non-road-surface object extraction unit 64, namely the positions of the objects recognized in the meshes G other than those determined to be road surface by the plane extraction processing unit 63B (information excluding the portion corresponding to the determined road surface).
Fig. 8 is a flowchart showing an example of the flow of processing performed by the recognition system. The lidar 10 detects an object and repeatedly outputs lidar data to the lidar data processing section 60 (step S100).
The lidar data processing unit 60 waits until one scan's worth of lidar data has been acquired (step S102). When one scan's worth of laser radar data has been acquired, the process proceeds to step S106.
Meanwhile, the information acquisition unit 62 operates asynchronously with the other components of the laser radar data processing unit 60, acquiring the information used for setting the meshes G and supplying it to the road surface determination unit 63 (step S104).
The point cloud data generation unit 61 generates point cloud data from the laser radar data (step S106). The mesh setting unit 63A sets the mesh G based on the information supplied from the information acquisition unit 62 (step S108).
The plane extraction processing unit 63B determines, for each mesh G, whether or not it is road surface (step S110). The non-road-surface object extraction unit 64 performs object recognition on the meshes G not determined to be road surface (step S112). The laser radar data processing unit 60 then outputs the processing results of the non-road-surface object extraction unit 64 and the road dividing line recognition unit 65 to the sensor fusion unit 90 (step S114). The routine of the flowchart of Fig. 8 then ends.
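Tying the flowchart together, one pass from step S100 to S114 might be arranged as in the skeleton below (illustrative structure only, reusing generate_point_cloud, mesh_index, and ransac_plane_is_road from the earlier sketches):

```python
import numpy as np

def process_one_scan(lidar_returns):
    """One pass of the Fig. 8 flow: point cloud generation (S106),
    mesh setting (S108), per-mesh road surface determination (S110),
    and collection of the non-road-surface points that feed object
    extraction and sensor fusion (S112, S114)."""
    cloud = generate_point_cloud(lidar_returns)                   # S106
    meshes = {}                                                   # S108
    for pt in cloud:
        meshes.setdefault(mesh_index(pt["x"], pt["y"]), []).append(pt)
    non_road_points = []                                          # S110
    for cell in meshes.values():
        arr = np.array([[p["x"], p["y"], p["height"]] for p in cell])
        if not ransac_plane_is_road(arr):
            non_road_points.extend(cell)
    return non_road_points                                        # S112/S114 input
```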
The identification system according to the embodiment described above includes: a detection unit (laser radar 10) that detects the position of an object existing in the periphery of the vehicle M; and a determination unit (road surface determination unit 63) that determines the road surface around the vehicle based on the detection result of the detection unit. The determination unit determines, using a predetermined algorithm (RANSAC), whether or not each individual area (mesh G) obtained by dividing the detection result of the detection unit on a two-dimensional plane is a plane, and aggregates the determination results for the individual areas to determine the road surface around the vehicle, so the road surface can be determined more accurately.
The recognition system need not include some or all of the camera 20, the radar device 30, the camera image processing unit 70, the radar data processing unit 80, and the sensor fusion unit 90. For example, the recognition system may consist of the laser radar 10 and the laser radar data processing unit 60, and output the processing results of the non-road-surface object extraction unit 64 and the road dividing line recognition unit 65 directly to the travel control device 100.
While specific embodiments of the present invention have been described above, the present invention is not limited to these embodiments, and various modifications and substitutions can be made without departing from the scope of the present invention.

Claims (6)

1. An identification system mounted on a vehicle, wherein,
the identification system is provided with:
a detection unit that detects a position of an object existing in the periphery of the vehicle; and
a determination unit that determines a road surface around the vehicle based on a detection result of the detection unit,
the determination unit determines, using a predetermined algorithm, whether or not each individual region obtained by dividing the detection result of the detection unit on a two-dimensional plane is a plane, and determines the road surface around the vehicle by aggregating the determination results for the individual regions,
the predetermined algorithm is the following algorithm: a predetermined number of samples is randomly selected from a data set that is the detection result of the detection unit for each individual region, model candidates are derived based on the selected samples, the model candidate that best matches the data set as a whole, among the model candidates whose deviation when fitted to the data is equal to or smaller than a predetermined value, is set as the correct model, and the individual region is determined to be a road surface when the correct model is a plane,
the detection unit is a laser radar that irradiates laser light to the periphery of the vehicle while changing an elevation or depression angle and an azimuth angle,
the determination unit determines, using the predetermined algorithm, whether or not each individual region is a plane, the individual regions being obtained by dividing point cloud data in which the position of the object, represented by at least an elevation or depression angle, an azimuth angle, and a distance, is projected onto the two-dimensional plane,
the determination unit varies the sizes of the individual regions based on the distance from the vehicle in the two-dimensional plane, and
the determination unit does not set individual regions for areas where recognition is unnecessary, obtained by referring to past recognition results of the identification system.
2. The identification system of claim 1, wherein,
the determination unit obtains information indicating the distribution of objects around the vehicle, and changes the sizes of the individual regions based on the obtained information.
3. The identification system according to claim 1 or 2, wherein,
the determination unit obtains information on the type of road on which the vehicle is present, and increases the size of the individual regions when the obtained information indicates a specific type of road compared with when it does not.
4. A vehicle control system, wherein,
the vehicle control system includes:
the identification system of any one of claims 1 to 3; and
and a travel control device that performs travel control of the vehicle based on information obtained by excluding, from the detection result of the detection unit in the identification system, the portion corresponding to the road surface determined by the determination unit.
5. An identification method, wherein,
the computer mounted on the vehicle performs the following processing:
a detection result of a detection unit for detecting the position of an object existing in the periphery of the vehicle is obtained,
a road surface around the vehicle is determined based on the detection result,
in the course of the determination,
whether or not each individual region obtained by dividing the detection result on a two-dimensional plane is a plane is determined using a predetermined algorithm,
the determination results for the individual regions are aggregated to determine the road surface around the vehicle,
the predetermined algorithm is the following algorithm: a predetermined number of samples is randomly selected from a data set that is the detection result of the detection unit for each individual region, model candidates are derived based on the selected samples, the model candidate that best matches the data set as a whole, among the model candidates whose deviation when fitted to the data is equal to or smaller than a predetermined value, is set as the correct model, and the individual region is determined to be a road surface when the correct model is a plane,
the detection unit is a laser radar that irradiates laser light to the periphery of the vehicle while changing an elevation or depression angle and an azimuth angle,
in the course of the determination,
whether or not each individual region is a plane is determined using the predetermined algorithm, the individual regions being obtained by dividing point cloud data in which the position of an object, represented by at least an elevation or depression angle, an azimuth angle, and a distance, is projected onto the two-dimensional plane,
the sizes of the individual regions are varied based on the distance from the vehicle in the two-dimensional plane, and
individual regions are not set for areas where recognition is unnecessary, obtained by referring to past recognition results of the computer.
6. A storage medium storing a program, wherein,
the program causes a computer mounted on a vehicle to execute:
a detection result of a detection unit for detecting the position of an object existing in the periphery of the vehicle is obtained,
a road surface around the vehicle is determined based on the detection result,
in the course of the determination,
whether or not each individual region obtained by dividing the detection result on a two-dimensional plane is a plane is determined using a predetermined algorithm,
the determination results for the individual regions are aggregated to determine the road surface around the vehicle,
the predetermined algorithm is the following algorithm: a predetermined number of samples is randomly selected from a data set that is the detection result of the detection unit for each individual region, model candidates are derived based on the selected samples, the model candidate that best matches the data set as a whole, among the model candidates whose deviation when fitted to the data is equal to or smaller than a predetermined value, is set as the correct model, and the individual region is determined to be a road surface when the correct model is a plane,
the detection unit is a laser radar that irradiates laser light to the periphery of the vehicle while changing an elevation or depression angle and an azimuth angle,
in the course of the determination,
whether or not each individual region is a plane is determined using the predetermined algorithm, the individual regions being obtained by dividing point cloud data in which the position of an object, represented by at least an elevation or depression angle, an azimuth angle, and a distance, is projected onto the two-dimensional plane,
the sizes of the individual regions are varied based on the distance from the vehicle in the two-dimensional plane, and
individual regions are not set for areas where recognition is unnecessary, obtained by referring to past recognition results of the computer.
CN202010707780.9A 2019-07-24 2020-07-21 Identification system, vehicle control system, identification method, and storage medium Active CN112286178B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019136089A JP7165630B2 (en) 2019-07-24 2019-07-24 Recognition system, vehicle control system, recognition method, and program
JP2019-136089 2019-07-24

Publications (2)

Publication Number Publication Date
CN112286178A CN112286178A (en) 2021-01-29
CN112286178B true CN112286178B (en) 2023-12-01

Family

ID=74420120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010707780.9A Active CN112286178B (en) 2019-07-24 2020-07-21 Identification system, vehicle control system, identification method, and storage medium

Country Status (2)

Country Link
JP (1) JP7165630B2 (en)
CN (1) CN112286178B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022152402A (en) * 2021-03-29 2022-10-12 本田技研工業株式会社 Recognition device, vehicle system, recognition method and program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010071942A (en) * 2008-09-22 2010-04-02 Toyota Motor Corp Object detecting device
JP2013140515A (en) * 2012-01-05 2013-07-18 Toyota Central R&D Labs Inc Solid object detection device and program
JP2018112887A (en) * 2017-01-11 2018-07-19 株式会社東芝 Information processing device, information processing method, and information processing program
CN108828621A (en) * 2018-04-20 2018-11-16 武汉理工大学 Obstacle detection and road surface partitioning algorithm based on three-dimensional laser radar
CN109359614A (en) * 2018-10-30 2019-02-19 百度在线网络技术(北京)有限公司 A kind of plane recognition methods, device, equipment and the medium of laser point cloud

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011191239A (en) 2010-03-16 2011-09-29 Mazda Motor Corp Mobile object position detecting device
JP6385745B2 (en) * 2014-07-22 2018-09-05 日立建機株式会社 Mining work vehicle
JP6668740B2 (en) 2015-12-22 2020-03-18 いすゞ自動車株式会社 Road surface estimation device
US10444759B2 (en) 2017-06-14 2019-10-15 Zoox, Inc. Voxel based ground plane estimation and object segmentation


Also Published As

Publication number Publication date
CN112286178A (en) 2021-01-29
JP2021021967A (en) 2021-02-18
JP7165630B2 (en) 2022-11-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant