WO2021074660A1 - Object recognition method and object recognition device - Google Patents

Object recognition method and object recognition device Download PDF

Info

Publication number
WO2021074660A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud data
clusters
vehicle
reference axis
Prior art date
Application number
PCT/IB2019/001225
Other languages
French (fr)
Japanese (ja)
Inventor
池上堯史
野田邦昭
Original Assignee
日産自動車株式会社
ルノー エス. ア. エス.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日産自動車株式会社, ルノー エス. ア. エス. filed Critical 日産自動車株式会社
Priority to PCT/IB2019/001225 priority Critical patent/WO2021074660A1/en
Publication of WO2021074660A1 publication Critical patent/WO2021074660A1/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06: Systems determining position data of a target
    • G01S 17/42: Simultaneous measurement of distance and other co-ordinates
    • G01S 17/88: Lidar systems specially adapted for specific applications
    • G01S 17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 17/93: Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931: Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Definitions

  • the present invention relates to an object recognition method and an object recognition device.
  • In Patent Document 1, a technique is described in which a rectangle is fitted to point cloud data acquired from a laser sensor mounted on a vehicle, the shape of part or all of a detection target vehicle traveling around the own vehicle is approximated by a rectangular frame having at least one of the front surface, the rear surface, and one side surface of the detection target vehicle as a side, and the detection target vehicle is tracked using a corner of the rectangular frame as a reference point.
  • An object of the present invention is to improve the accuracy when fitting a rectangle or a rectangular parallelepiped to point cloud data obtained by detecting each of a plurality of positions on the surface of an object around a vehicle.
  • In the object recognition method according to one aspect of the present invention, the position of the object surface around the vehicle is acquired as point cloud data consisting of a plurality of detection points, the point cloud data is clustered into one or a plurality of clusters, and one of the one or plurality of clusters is extracted as an object candidate.
  • A reference axis passing through the center of the object candidate and extending in the vertical direction is estimated based on that cluster, composite point cloud data is formed by combining that cluster with point cloud data obtained by rotating that cluster 180 degrees around the reference axis, and an object is recognized by fitting a predetermined shape, which is a rectangle or a rectangular parallelepiped, to the composite point cloud data.
  • the present invention can improve the accuracy when fitting a rectangle or a rectangular parallelepiped to the point cloud data obtained by detecting each of a plurality of positions on the surface of an object around the vehicle.
  • the own vehicle 1 includes a running support device 10 that supports the running of the own vehicle 1.
  • the traveling support device 10 detects the self-position which is the current position of the own vehicle 1 and supports the traveling of the own vehicle 1 based on the detected self-position.
  • the driving support device 10 supports driving by performing autonomous driving control that automatically drives the own vehicle 1 without the driver's involvement, based on the detected self-position and the surrounding driving environment.
  • the driving operation related to the traveling of the own vehicle 1 may be partially supported, such as controlling only the steering angle or only the acceleration / deceleration based on the estimated self-position and the surrounding traveling environment.
  • the traveling support device 10 includes a positioning device 11, a map database 12, a distance measuring sensor 13, a vehicle sensor 14, a navigation system 15, a controller 16, and an actuator 17.
  • the map database is referred to as "map DB”.
  • the positioning device 11 measures the current position of the own vehicle 1.
  • the positioning device 11 may include, for example, a global navigation satellite system (GNSS) receiver.
  • The GNSS receiver is, for example, a Global Positioning System (GPS) receiver, and receives radio waves from a plurality of navigation satellites to measure the current position of the own vehicle 1.
  • the map database 12 is stored in a storage device such as a flash memory, and stores map information such as the position and type of road shapes, features, landmarks, and other targets necessary for estimating the self-position of the own vehicle 1.
  • As the map database 12, for example, high-precision map data suitable as a map for autonomous traveling (hereinafter simply referred to as a “high-precision map”) may be stored.
  • the high-precision map is map data with higher accuracy than the map data for navigation (hereinafter, simply referred to as "navigation map”), and includes information in units of traveling lanes (lanes) that is more detailed than information in units of roads.
  • the navigation map may be stored in the map database 12.
  • the navigation map contains information for each road.
  • For example, the navigation map includes, as road-level information, information on road nodes indicating reference points on a road reference line (for example, the center line of a road) and information on road links indicating the form of the road section between road nodes.
  • the map database 12 may acquire map information from the outside via a communication system such as wireless communication (road-to-vehicle communication or vehicle-to-vehicle communication is also possible). In this case, the map database 12 may periodically obtain the latest map information and update the map information it holds. Further, the map database 12 may store the roads on which the own vehicle 1 has actually traveled as map information.
  • the distance measuring sensor 13 is mounted on the own vehicle 1, transmits a probing wave to the surroundings of the own vehicle 1 to scan the surroundings, and receives the reflected wave produced when the probing wave is reflected by an object surface. Based on the received reflected wave, the distance measuring sensor 13 calculates the positions of the reflection points (detection points) at which the probing wave was reflected at a plurality of positions on the object surface as relative positions with respect to the own vehicle 1, and outputs point cloud data representing the relative position of each reflection point to the controller 16. That is, the point cloud data represents the positions of the reflection points in a vehicle coordinate system centered on the own vehicle 1.
  • the distance measuring sensor 13 may include a laser range finder (LRF), a radar, a LiDAR (Light Detection and Ranging) laser radar, or the like. The distance measuring sensor 13 may be any sensor capable of acquiring the positions of object surfaces around the vehicle as a point cloud, and is not limited to the above. For example, a sensor that calculates the position of the object surface for each pixel corresponding to an object in an image of the vehicle surroundings captured by a stereo camera, and outputs point cloud data whose detection points are the positions corresponding to those pixels, may be used. In the following description, the distance measuring sensor 13 is described as outputting the positions of reflection points on the object surface to the controller 16 as point cloud data based on the reflected waves of the transmitted probing wave.
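The stereo-camera alternative mentioned above amounts to turning a per-pixel depth estimate into detection points. A minimal Python sketch follows, assuming a pinhole camera model with hypothetical intrinsic parameters fx, fy, cx, cy (none of these values appear in this document); it is an illustration, not the implementation of the distance measuring sensor 13.

    import numpy as np

    def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
        # depth: (H, W) array of per-pixel depths in meters (0 where no depth is available).
        # fx, fy, cx, cy: assumed pinhole-camera intrinsics (hypothetical values).
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        valid = depth > 0                        # keep only pixels with a depth value
        z = depth[valid]
        x = (u[valid] - cx) * z / fx
        y = (v[valid] - cy) * z / fy
        # one detection point per valid pixel, returned as an (N, 3) point cloud
        return np.column_stack((x, y, z))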
  • the vehicle sensor 14 detects various information (vehicle information) obtained from the own vehicle 1.
  • the vehicle sensor 14 includes, for example, a vehicle speed sensor that detects the traveling speed (vehicle speed) of the own vehicle 1, a wheel speed sensor that detects the rotation speed of each tire of the own vehicle 1, a three-axis acceleration sensor (G sensor) that detects the acceleration (including deceleration) of the own vehicle 1 in three axial directions, a steering angle sensor that detects the steering angle (including the turning angle), a gyro sensor that detects the angular velocity generated in the own vehicle 1, a yaw rate sensor that detects the yaw rate, an accelerator sensor that detects the accelerator opening of the own vehicle 1, and a brake sensor that detects the amount of brake operation by the driver.
  • the navigation system 15 recognizes the current position of the own vehicle 1 by the positioning device 11, and acquires the map information at the current position from the map database 12.
  • the navigation system 15 sets a planned travel route to the destination input by the occupant, and provides route guidance to the occupant according to the planned travel route. Further, the navigation system 15 outputs the information of the set planned travel route to the controller 16.
  • During autonomous travel control, the controller 16 automatically drives the own vehicle 1 (controls its driving behavior) so that it travels autonomously along the planned travel route set by the navigation system 15.
  • the controller 16 is an electronic control unit (ECU: Electronic Control Unit) that controls the traveling support of the own vehicle 1.
  • the controller 16 includes a processor 20 and peripheral components such as a storage device 21.
  • the processor 20 may be, for example, a CPU (Central Processing Unit) or an MPU (Micro-Processing Unit).
  • the storage device 21 may include a semiconductor storage device, a magnetic storage device, an optical storage device, and the like.
  • the storage device 21 may include a memory such as a register, a cache memory, a ROM (Read Only Memory) and a RAM (Random Access Memory) used as a main storage device.
  • the function of the controller 16 described below is realized, for example, by the processor 20 executing a computer program stored in the storage device 21.
  • the controller 16 may be formed by dedicated hardware for executing each information processing described below.
  • the controller 16 may include a functional logic circuit set in a general-purpose semiconductor integrated circuit.
  • the controller 16 may have a programmable logic device (PLD: Programmable Logic Device) such as a field programmable gate array (FPGA: Field-Programmable Gate Array).
  • the controller 16 detects the self-position, which is the current position of the own vehicle 1, and recognizes the objects around the own vehicle 1.
  • Based on the self-position, the road map information in the map database 12, the route information output from the navigation system 15, the objects around the own vehicle 1, and the traveling state of the own vehicle 1, the controller 16 sets a target travel trajectory on which the own vehicle 1 is to travel.
  • When setting the target travel trajectory, the controller 16 recognizes the positions, shapes, and attitudes of objects around the own vehicle 1 based on the point cloud data output from the distance measuring sensor 13, and sets the target travel trajectory based on the recognition results of those objects.
  • the controller 16 performs autonomous travel control of the own vehicle 1 based on the set target travel trajectory, and drives the actuator 17 to control the travel of the own vehicle 1.
  • the controller 16 drives the actuator 17 based on the self-position, the road map information in the map database 12, the objects around the own vehicle 1, and the traveling state of the own vehicle 1, and controls the steering mechanism, the braking device, and the power unit of the own vehicle 1.
  • the actuator 17 operates the steering wheel, accelerator opening degree, and brake device of the own vehicle 1 in response to the control signal from the controller 16 to generate the vehicle behavior of the own vehicle 1.
  • the actuator 17 includes a steering actuator, an accelerator opening actuator, and a brake control actuator.
  • the steering actuator controls the steering direction and steering amount of the steering of the own vehicle 1.
  • the accelerator opening actuator controls the accelerator opening of the own vehicle 1.
  • the brake control actuator controls the braking operation of the brake device of the own vehicle 1.
  • FIG. 2A is a conceptual diagram of measurement of each position on the surface of the object 2 around the own vehicle 1 by the distance measuring sensor 13.
  • the broken lines in FIG. 2A show the transmission directions of the discrete probing waves when the distance measuring sensor 13 scans the surroundings of the own vehicle 1, and the round plots show the positions of the plurality of reflection points at which the probing wave is reflected on the surface of the object 2.
  • the distance measuring sensor 13 outputs point cloud data representing the relative position of the reflection point with respect to the own vehicle 1 to the controller 16.
  • the controller 16 fits a predetermined type of shape suitable for approximating the outer shape of the object 2 to the point cloud data (round plot) output from the distance measuring sensor 13, and recognizes the position, shape, and posture of the object 2.
  • For example, the controller 16 fits a rectangle to the point cloud data. If the rectangle is fitted directly to the point cloud data output from the distance measuring sensor 13, the fitting accuracy may not be ensured when only a small amount of point cloud data is obtained for the object 2 to be recognized. In that case, for example, as shown in FIG. 2B, the position, posture, and shape of the recognized object 2 may not be estimated stably.
  • Therefore, the controller 16 duplicates the point cloud data (round plot) obtained by measuring the object 2 with the distance measuring sensor 13 and rotates the duplicate 180 degrees around a vertical axis to generate additional point cloud data (triangular plot).
  • the controller 16 forms a composite point cloud data by synthesizing the original point cloud data (round plot) and the additional point cloud data (triangular plot).
  • the controller 16 applies a rectangle to the composite point cloud data (round plot + triangular plot) and recognizes the position, shape, and orientation of the object 2. This makes it easier to fit the rectangle to the original point cloud data (round plot), and improves the accuracy of fitting the rectangle to the point cloud data. As a result, the position, posture, and shape of the recognized object 2 can be stably estimated.
  • the controller 16 includes a point cloud acquisition unit 30, an object candidate extraction unit 31, an object detection unit 32, a self-position estimation unit 33, a map position calculation unit 34, a travel trajectory generation unit 35, and a travel control unit 36.
  • the point cloud acquisition unit 30 acquires the point cloud data output by the distance measuring sensor 13 as three-dimensional point cloud data.
  • the object candidate extraction unit 31 clusters the three-dimensional point cloud data acquired by the point cloud acquisition unit 30 into one or a plurality of clusters for each object candidate of a stationary object or a moving object, and the obtained cluster is used as an object candidate. Extract.
  • the object candidate extraction unit 31 includes an object point cloud extraction unit 40 and a clustering unit 41.
  • the object point cloud extraction unit 40 extracts the point cloud data of the reflection points of a stationary object or a moving object by removing the reflection points on the road surface from the three-dimensional point cloud data.
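The document does not state how the object point cloud extraction unit 40 distinguishes road-surface reflections from object reflections. As one plausible sketch, assuming the point cloud is expressed in a vehicle coordinate system whose z axis points up and whose origin is at road level, points near the road plane can simply be discarded by a height threshold (the threshold value below is an assumption):

    import numpy as np

    def remove_ground_points(points, ground_z=0.0, height_margin=0.2):
        # points: (N, 3) array in vehicle coordinates with z up (assumption).
        # Points within height_margin of the assumed road plane are treated as road surface.
        return points[points[:, 2] > ground_z + height_margin]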
  • the clustering unit 41 clusters the point cloud data extracted by the object point cloud extraction unit 40 into a cluster for each object candidate.
  • For example, the clustering unit 41 may group point cloud data that are close to each other into one cluster, which is a collection of point cloud data.
  • the cluster obtained by the clustering unit 41 will be referred to as a “point cloud cluster”.
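The document does not name a specific clustering algorithm, so the following sketch uses density-based clustering (scikit-learn's DBSCAN) as one common way of grouping detection points that are close to each other into point cloud clusters; the eps and min_samples values are arbitrary assumptions.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_point_cloud(points, eps=0.5, min_samples=5):
        # points: (N, 3) object point cloud after road-surface removal.
        # Returns a list of (M_i, 3) arrays, one per point cloud cluster; noise points are dropped.
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
        return [points[labels == k] for k in set(labels) if k != -1]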
  • the object candidate extraction unit 31 may be configured to extract only a point cloud cluster having a shape similar to the shape of the object to be recognized as an object candidate. By doing so, it is possible to omit the subsequent processing for the point cloud data of the object that is not the recognition target, and it is possible to reduce the processing load.
  • For example, a predetermined shape representing the approximate outer shape of the object to be recognized is set, and the object candidate extraction unit 31 determines whether or not the point cloud cluster is a set of points lying on part of that predetermined shape.
  • the object candidate extraction unit 31 is configured to extract only point cloud clusters having the shape of a part of the set predetermined shape as object candidates.
  • For example, when the recognition target is another vehicle, it may be configured to extract only point cloud clusters having the shape of a part of a rectangular parallelepiped, which is the approximate outer shape of a vehicle, as object candidates.
  • the object candidate extraction unit 31 may determine whether or not the point cloud cluster includes a corner portion that is a part of a rectangular parallelepiped, and extract only the point cloud cluster including the corner portion as an object candidate.
  • the object candidate extraction unit 31 projects the point cloud 51 included in the point cloud cluster, which is the three-dimensional point cloud data, onto the two-dimensional grid map 50.
  • the object candidate extraction unit 31 calculates an evaluation score for each cell 52 according to the number of points of the point cloud 51 projected onto that cell 52 of the grid map 50.
  • the object candidate extraction unit 31 generates an image in which a pixel is arranged at the position corresponding to each cell 52, and sets each pixel value according to the evaluation score of the corresponding cell 52.
  • the object candidate extraction unit 31 detects the corner portion 53 of the point cloud cluster by performing predetermined image processing on the generated image. For example, the object candidate extraction unit 31 may detect the corner portion 53 of the point cloud cluster using a Harris corner detector.
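A sketch of the corner detection just described: project the cluster onto a two-dimensional grid, convert the per-cell point counts (the evaluation scores) into an image, and run a Harris corner detector on it. OpenCV's cornerHarris is used as one possible detector; the cell size and thresholds are assumed values, not figures from the document.

    import numpy as np
    import cv2

    def detect_cluster_corners(cluster_xy, cell_size=0.1, harris_thresh=0.01):
        # cluster_xy: (N, 2) horizontal-plane coordinates of one point cloud cluster.
        mins = cluster_xy.min(axis=0)
        idx = np.floor((cluster_xy - mins) / cell_size).astype(int)
        h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
        img = np.zeros((h, w), dtype=np.float32)
        for ix, iy in idx:                       # evaluation score = number of points per cell
            img[iy, ix] += 1.0
        img /= img.max()
        response = cv2.cornerHarris(img, blockSize=2, ksize=3, k=0.04)
        corner_cells = np.argwhere(response > harris_thresh * response.max())
        # convert corner cells back to horizontal-plane coordinates (cell centers)
        return corner_cells[:, ::-1] * cell_size + mins + cell_size / 2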
  • the object detection unit 32 recognizes the point cloud cluster as an object by applying a predetermined shape such as a rectangular parallelepiped or a rectangle to each of the point cloud clusters, and detects the position, shape, and posture of the object.
  • the object detection unit 32 duplicates the point cloud cluster and rotates it 180 degrees around the vertical axis to generate additional point cloud data in order to improve the accuracy of fitting the predetermined shape to the point cloud data.
  • Then, the predetermined shape is fitted to the composite point cloud data obtained by synthesizing the original point cloud cluster and the additional point cloud data.
  • the object detection unit 32 includes a reference axis calculation unit 42, a composite point cloud forming unit 43, a fitting unit 44, and an object recognition unit 45.
  • the reference axis calculation unit 42 estimates the reference axis for rotating the point cloud cluster extracted by the object candidate extraction unit 31 by 180 degrees.
  • the reference axis, which is the rotation axis for generating the additional point cloud data, is preferably an axis that passes through the center of the object candidate and extends in the vertical direction. Therefore, the reference axis calculation unit 42 estimates a reference axis extending in the vertical direction through the center of the object candidate based on the point cloud cluster. For example, the reference axis calculation unit 42 may estimate, as the reference axis, the vertical line passing through the center of the minimum inclusion circle of the point cloud cluster projected onto the horizontal plane.
  • In FIG. 5A, it is assumed that a point cloud cluster has been extracted for the vehicle 2, which is an object candidate.
  • the round plots in FIG. 5A show each of the point cloud data included in the point cloud cluster extracted for the vehicle 2.
  • In this example, point cloud data of the right side surface and the rear surface of the vehicle 2 are acquired.
  • the reference axis calculation unit 42 projects the point cloud cluster onto the horizontal plane in the vehicle coordinate system as shown in FIG. 5B.
  • the reference axis calculation unit 42 calculates the smallest circle (that is, the minimum inclusion circle 60) containing the point cloud cluster projected on the horizontal plane, and determines the center 61 of the minimum inclusion circle 60.
  • the reference axis calculation unit 42 may calculate the minimum inclusion circle 60 using, for example, the algorithm of Emo Welzl, "Smallest enclosing disks (balls and ellipsoids)" (https://inf.ethz.ch/personal/emo/PDF/SmallEnclDisk_LNCS555_91.pdf). See FIG. 5C.
  • the reference axis calculation unit 42 determines the vertical axis 62 passing through the center 61 of the minimum inclusion circle 60 as the reference axis.
  • Alternatively, the reference axis calculation unit 42 may estimate, as the reference axis, the vertical line in the vehicle coordinate system passing through the center of gravity of the point cloud cluster, or the vertical line passing through the midpoint 64 of the line segment connecting the two farthest point cloud data 63a and 63f among the point cloud data 63a to 63f included in the point cloud cluster (see FIG.).
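The three reference-axis estimators described above can be sketched as follows; each returns the horizontal position through which the vertical reference axis passes. OpenCV's minEnclosingCircle is used for the minimum inclusion circle instead of re-implementing Welzl's algorithm; this is an assumption about tooling, not the document's own implementation.

    import numpy as np
    import cv2

    def axis_from_min_enclosing_circle(cluster_xy):
        # Vertical line through the center of the minimum inclusion circle of the projected cluster.
        pts = cluster_xy.astype(np.float32).reshape(-1, 1, 2)
        (cx, cy), _radius = cv2.minEnclosingCircle(pts)
        return np.array([cx, cy])

    def axis_from_centroid(cluster_xy):
        # Vertical line through the center of gravity of the cluster.
        return cluster_xy.mean(axis=0)

    def axis_from_farthest_pair(cluster_xy):
        # Vertical line through the midpoint of the two farthest detection points (O(N^2) distances).
        d = np.linalg.norm(cluster_xy[:, None, :] - cluster_xy[None, :, :], axis=-1)
        i, j = np.unravel_index(np.argmax(d), d.shape)
        return (cluster_xy[i] + cluster_xy[j]) / 2.0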
  • the above vertical line or horizontal direction may be determined with reference to the world coordinate system or the map coordinate system whose vertical direction is the direction of gravity.
  • If the size of the minimum inclusion circle 60 does not match the size of the object to be recognized, subsequent processing for that point cloud cluster may be omitted. For example, the object detection unit 32 may omit the subsequent processing for a point cloud cluster whose minimum inclusion circle 60 is not within a predetermined size range. As a result, unnecessary calculations can be avoided and the processing load can be reduced.
  • the composite point cloud forming unit 43 duplicates the point cloud cluster extracted by the object candidate extraction unit 31 and rotates the duplicate 180 degrees around the reference axis estimated by the reference axis calculation unit 42 to generate additional point cloud data. See FIG. 7.
  • In FIG. 7, the round plots show the point cloud data of the original point cloud cluster extracted by the object candidate extraction unit 31, and the triangular plots show the additional point cloud data.
  • the composite point cloud forming unit 43 forms the composite point cloud data by synthesizing the original point cloud data (round plot) and the additional point cloud data (triangular plot).
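A minimal sketch of the composite point cloud formation described above. A 180-degree rotation about a vertical axis maps the horizontal coordinates (x, y) to (2*ax - x, 2*ay - y), where (ax, ay) is the horizontal position of the reference axis, while the height is unchanged; the rotated copy is then concatenated with the original cluster.

    import numpy as np

    def form_composite_point_cloud(cluster, axis_xy):
        # cluster: (N, 3) original point cloud cluster; axis_xy: (2,) position of the reference axis.
        rotated = cluster.copy()
        rotated[:, 0] = 2.0 * axis_xy[0] - cluster[:, 0]
        rotated[:, 1] = 2.0 * axis_xy[1] - cluster[:, 1]
        # composite point cloud data = original cluster + additional (rotated) point cloud data
        return np.vstack((cluster, rotated))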
  • the fitting unit 44 applies a predetermined shape 65 representing the approximate outer shape of the object to be recognized to the composite point cloud data (round plot + triangular plot).
  • the fitting unit 44 fits the rectangular parallelepiped 65 to the composite point cloud data.
  • the object recognition unit 45 recognizes the rectangular parallelepiped 65 applied to the composite point group data as an object, and estimates the position, shape, and posture of the object to be recognized.
  • the object recognition unit 45 may consider the position and shape of the fitted rectangular parallelepiped itself as the position and shape of the object, and may estimate the posture of the object by using the length of the side of the rectangular parallelepiped or the like.
  • For example, if the width of one surface of the rectangular parallelepiped is close to a known vehicle width, that surface can be estimated to be the front or rear surface of the vehicle. Further, by using map information or the like, it is possible to estimate from the lane in which the vehicle exists whether the vehicle is facing forward or backward.
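The document does not specify the fitting algorithm itself. One simple stand-in is a minimum-area oriented bounding rectangle on the horizontal plane, extended to a rectangular parallelepiped by taking the height from the z range of the composite point cloud; the sketch below uses OpenCV's minAreaRect for this and is only an assumed illustration.

    import numpy as np
    import cv2

    def fit_cuboid(composite_points):
        # composite_points: (N, 3) composite point cloud data.
        xy = composite_points[:, :2].astype(np.float32).reshape(-1, 1, 2)
        (cx, cy), (w, l), angle = cv2.minAreaRect(xy)   # minimum-area oriented rectangle
        z = composite_points[:, 2]
        height = float(z.max() - z.min())
        # center, horizontal size, yaw angle (degrees), and height of an axis-up cuboid
        return (cx, cy), (w, l), angle, height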
  • the object recognition unit 45 may determine the attribute of the recognized object based on at least one of the size and the shape of the predetermined shape 65 fitted to the composite point cloud data. For example, the object recognition unit 45 may determine the vehicle type of the recognized vehicle (whether it is a truck or a passenger car) based on the size of the rectangular parallelepiped 65.
  • the object recognition unit 45 may also calculate the ratio of side lengths as the shape of the rectangular parallelepiped 65, and determine the vehicle type of the recognized vehicle (whether it is a four-wheeled vehicle or a two-wheeled vehicle) according to that ratio.
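A rough sketch of attribute determination from the size and side-length ratio of the fitted shape. The dimension thresholds used here to separate trucks, passenger cars, and two-wheeled vehicles are made-up illustrative values, not figures from the document.

    def classify_vehicle(width, length, height):
        # width, length, height: dimensions (meters) of the fitted rectangular parallelepiped.
        long_side, short_side = max(width, length), min(width, length)
        ratio = long_side / max(short_side, 1e-6)        # side-length ratio
        if short_side < 1.0 and ratio > 2.0:
            return "two-wheeled vehicle"                 # narrow and elongated
        if long_side > 6.0 or height > 2.5:
            return "truck"
        return "passenger car"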
  • the self-position estimation unit 33 measures the absolute position of the own vehicle 1, that is, its position, posture, and speed with respect to a predetermined reference point, based on the measurement result of the positioning device 11 and odometry using the detection results from the vehicle sensor 14.
  • the position calculation unit 34 in the map estimates the position and orientation of the own vehicle 1 in the map coordinate system from the absolute position of the own vehicle 1 obtained by the self-position estimation unit 33 and the map information stored in the map database 12. Further, based on the estimated position and orientation of the own vehicle 1 and the recognition results of the objects around the own vehicle 1 by the object recognition unit 45, the position calculation unit 34 in the map estimates the positions and orientations of the objects around the own vehicle 1 in the map coordinate system.
  • Based on the position and orientation of the own vehicle 1 estimated by the position calculation unit 34 in the map, the positions and orientations of the objects around the own vehicle 1, and the high-precision map, the traveling track generation unit 35 generates a route space map expressing the surrounding routes and the presence or absence of objects, and a risk map quantifying the degree of danger of the travel area.
  • the traveling track generation unit 35 generates a driving action plan for driving the own vehicle 1 automatically along the planned travel route, based on the planned travel route set by the navigation system 15, the route space map, and the risk map.
  • the driving action plan is a medium- to long-range, lane-level plan that defines the traveling lane in which the own vehicle is to travel and the driving actions required to travel in that lane.
  • the driving actions determined by the traveling track generation unit 35 include stopping at a stop line, turning right or left or going straight at an intersection, traveling on a curved road having a curvature equal to or greater than a predetermined value, passing a point where the lane width changes, and changing lanes when traveling in a merging section or on a road with multiple lanes.
  • the traveling track generation unit 35 may determine whether or not another vehicle is approaching the own vehicle based on the position of the other vehicle estimated by the position calculation unit 34 in the map. When it is determined that another vehicle is approaching the own vehicle, the traveling track generation unit 35 generates a driving action plan that involves stopping or decelerating the own vehicle, or avoidance steering.
  • the traveling track generation unit 35 generates candidates for a traveling track and a speed profile on which the own vehicle 1 is driven, based on the driving action plan, the motion characteristics of the own vehicle 1, and the route space map.
  • the traveling track generation unit 35 evaluates the future risk of each candidate based on the risk map, selects the optimum traveling track and speed profile, and sets the target traveling track and target speed profile to be traveled by the own vehicle 1.
  • the travel control unit 36 drives the actuator 17 so that the own vehicle 1 travels along the target travel trajectory at a speed according to the target speed profile generated by the traveling track generation unit 35, thereby controlling the driving behavior of the own vehicle 1 so that it automatically travels along the planned travel route.
  • the travel control unit 36 drives the actuator 17 based on the position and posture of the own vehicle 1 estimated by the position calculation unit 34 in the map, the positions and postures of the objects around the own vehicle 1, the map information, and the traveling state of the own vehicle 1, and controls at least one of the steering mechanism, the braking device, and the power unit of the own vehicle 1. For example, when it is determined that another vehicle is approaching the own vehicle, the travel control unit 36 controls at least one of the steering mechanism, the braking device, and the power unit of the own vehicle 1 so as to stop, decelerate, or perform avoidance steering.
  • In step S1, the point cloud acquisition unit 30 acquires the point cloud data output by the distance measuring sensor 13 as three-dimensional point cloud data.
  • In step S2, the object point cloud extraction unit 40 extracts the point cloud data of the reflection points of stationary or moving objects from the three-dimensional point cloud data.
  • In step S3, the clustering unit 41 clusters the point cloud data extracted by the object point cloud extraction unit 40 into a point cloud cluster for each object candidate.
  • In step S4, the reference axis calculation unit 42 estimates a reference axis extending in the vertical direction through the center of the object candidate based on the point cloud cluster.
  • In step S5, the composite point cloud forming unit 43 duplicates the point cloud cluster extracted by the object candidate extraction unit 31 and rotates it 180 degrees around the reference axis estimated by the reference axis calculation unit 42 to generate additional point cloud data.
  • Next, the composite point cloud forming unit 43 forms composite point cloud data obtained by synthesizing the original point cloud data extracted by the object candidate extraction unit 31 and the additional point cloud data.
  • Next, the fitting unit 44 fits the rectangular parallelepiped 65 to the composite point cloud data.
  • the object recognition unit 45 then recognizes the rectangular parallelepiped 65 fitted to the composite point cloud data as an object, and estimates the position, shape, and posture of the object to be recognized. After that, the process ends.
  • the distance measuring sensor 13 and the point cloud acquisition unit 30 acquire the position of the surface of the object 2 around the own vehicle 1 as point cloud data including a plurality of detection points.
  • the object candidate extraction unit 31 clusters the point cloud data into one or a plurality of point cloud clusters, and extracts any one of the one or a plurality of point cloud clusters as an object candidate.
  • the reference axis calculation unit 42 estimates a reference axis extending in the vertical direction through the center of the object candidate based on the point cloud cluster.
  • the composite point cloud forming unit 43 forms the composite point cloud data obtained by synthesizing the point cloud data obtained by rotating the point cloud cluster 180 degrees around the reference axis and the original point cloud cluster.
  • the fitting unit 44 and the object recognition unit 45 recognize the object by applying the rectangular parallelepiped to the composite point cloud data.
  • the rectangular parallelepiped can be accurately applied to the point cloud data of the reflection points obtained from the stationary object or the moving object to be recognized, so that the recognition accuracy of the object can be improved.
  • the object candidate extraction unit 31 may extract only a point cloud cluster determined to have a shape of a part of a predetermined shape as an object candidate from one or a plurality of point cloud clusters. For example, out of one or a plurality of point cloud clusters, only the point cloud cluster including the corners may be extracted as an object candidate. As a result, it is possible to omit the processing for the object other than the recognition target, so that the processing load can be reduced without lowering the recognition accuracy.
  • the reference axis calculation unit 42 may estimate the vertical line passing through the center of the minimum inclusion circle of the two-dimensional point cloud data obtained by projecting the point cloud cluster on the horizontal plane as the reference axis.
  • the object detection unit 32 may recognize an object only for the point cloud clusters in which the size of the minimum inclusion circle is within a predetermined range among one or a plurality of point cloud clusters. As a result, point cloud clusters that are too large or too small compared to the known size of the recognition target can be excluded from the subsequent processing. Therefore, since the processing for the object other than the recognition target can be omitted, the processing load can be reduced without lowering the recognition accuracy.
  • the reference axis calculation unit 42 may estimate the vertical line passing through the center of gravity of any cluster as the reference axis. Alternatively, the reference axis calculation unit 42 may estimate, as the reference axis, the vertical straight line passing through the midpoint of the line segment connecting the two farthest points among the position information of the point cloud data included in any cluster. As a result, the reference axis can be estimated with a relatively small amount of calculation.
  • the object detection unit 32 may determine the attribute of the object based on at least one of the size and the shape of the predetermined shape fitted to the composite point cloud data. Thereby, the attributes of the object (for example, a truck, a passenger car, or a two-wheeled vehicle) can be discriminated.
  • the controller 16 of the second embodiment projects the point cloud data acquired as three-dimensional point cloud data by the point cloud acquisition unit 30 onto a horizontal plane and converts it into two-dimensional point cloud data, so that the above-described composite point cloud data is formed as two-dimensional point cloud data.
  • the controller 16 recognizes an object by fitting a rectangle to the composite point cloud data. As a result, the number of dimensions of the point cloud data is reduced, so that the processing load for object recognition can be reduced.
  • the controller 16 of the second embodiment has the same configuration as the controller of the first embodiment, and the same components are designated by the same reference numerals.
  • the object candidate extraction unit of the second embodiment includes a conversion unit 46.
  • the conversion unit 46 projects the three-dimensional point cloud data extracted by the object point cloud extraction unit 40 onto the horizontal plane of the vehicle coordinate system and converts it into two-dimensional point cloud data. Instead of the vehicle coordinate system, it may be projected on the horizontal plane of the world coordinate system or the map coordinate system.
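Projecting the three-dimensional point cloud onto the horizontal plane of the chosen coordinate system simply discards the vertical component, as in the sketch below (assuming the z axis of that coordinate system is vertical):

    import numpy as np

    def project_to_horizontal_plane(points_3d):
        # points_3d: (N, 3) with z vertical (assumption); returns (N, 2) two-dimensional point cloud data.
        return points_3d[:, :2].copy()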
  • the clustering unit 41 clusters the point cloud data converted into the two-dimensional point cloud data by the conversion unit 46 into a point cloud cluster for each object candidate.
  • the object candidate extraction unit 31 may be configured to extract only point cloud clusters having a shape similar to the shape of the object to be recognized as object candidates. For example, when the recognition target is another vehicle, it may be configured to extract only point cloud clusters having the shape of a part of the rectangle that is the approximate outer shape of the vehicle as seen from above as object candidates.
  • the object candidate extraction unit 31 may determine that the point cloud cluster including the corner portion has a part of the rectangle.
  • the reference axis calculation unit 42 estimates the reference axis for rotating the two-dimensional point cloud cluster extracted by the object candidate extraction unit 31 by 180 degrees.
  • the reference axis calculation unit 42 may estimate, as the reference axis, for example, the vertical line passing through the center of the minimum inclusion circle of the two-dimensional point cloud cluster, the vertical line passing through the center of gravity of the two-dimensional point cloud cluster, or the vertical line passing through the midpoint of the line segment connecting the two farthest point cloud data included in the two-dimensional point cloud cluster.
  • the composite point cloud forming unit 43 duplicates the two-dimensional point cloud cluster extracted by the object candidate extraction unit 31 and rotates it 180 degrees around the reference axis estimated by the reference axis calculation unit 42 to generate additional point cloud data.
  • the composite point cloud forming unit 43 forms the composite point cloud data by synthesizing the original point cloud data and the additional point cloud data.
  • the fitting unit 44 fits a rectangle to the composite point cloud data.
  • the object recognition unit 45 recognizes the rectangle fitted to the composite point cloud data as an object, and estimates the position, shape, and posture of the object to be recognized.
  • the three-dimensional point cloud data acquired by the point cloud acquisition unit 30 is converted into two-dimensional point cloud data before clustering by the clustering unit 41, but the present invention is not limited to this.
  • the three-dimensional point cloud data may be converted into the two-dimensional point cloud data at any time after the extraction by the object point cloud extraction unit 40 and before the fitting of the rectangle by the fitting unit 44.
  • the three-dimensional point cloud data is converted into the two-dimensional point cloud data by projecting the three-dimensional point cloud data on the horizontal plane, but the present invention is not limited to this.
  • the conversion unit 46 may generate a binarized occupancy grid map from the three-dimensional point cloud data extracted by the object point cloud extraction unit 40.
  • the object candidate extraction unit 31 may also use the binarized occupancy grid map calculated for the driving action plan of the own vehicle 1.
  • the object candidate extraction unit 31 detects the occupied grid occupied by the point cloud data in the binarized occupied grid map.
  • the clustering unit 41, the reference axis calculation unit 42, the composite point cloud forming unit 43, the fitting unit 44, and the object recognition unit 45 treat the set of occupied grid cells in the same manner as two-dimensional point cloud data, and perform the same clustering, reference axis estimation, composite point cloud formation, rectangle fitting, and object recognition processing as described above for two-dimensional point cloud data.
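A minimal sketch of building a binarized occupancy grid map from the extracted three-dimensional point cloud and recovering the occupied cells that are then treated like two-dimensional point cloud data; the cell size is an assumed value.

    import numpy as np

    def binarized_occupancy_grid(points_3d, cell_size=0.2):
        # Project points onto the horizontal plane and mark each grid cell containing a point.
        xy = points_3d[:, :2]
        origin = xy.min(axis=0)
        idx = np.floor((xy - origin) / cell_size).astype(int)
        grid = np.zeros(tuple(idx.max(axis=0) + 1), dtype=bool)
        grid[idx[:, 0], idx[:, 1]] = True
        # occupied cell centers can be clustered and processed like 2D point cloud data
        occupied = np.argwhere(grid) * cell_size + origin + cell_size / 2
        return grid, origin, occupied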
  • The processing of steps S10 and S11 is the same as the processing of steps S1 and S2 described with reference to FIG. 8.
  • In step S12, the conversion unit 46 projects the three-dimensional point cloud data extracted by the object point cloud extraction unit 40 onto a horizontal plane and converts it into two-dimensional point cloud data.
  • The processing of steps S13 to S15 is the same as that of steps S3 to S5 described with reference to FIG. 8, except that two-dimensional point cloud data is handled instead of three-dimensional point cloud data.
  • the fitting unit 44 applies a rectangle to the composite point cloud data.
  • the object recognition unit 45 recognizes the rectangle fitted to the composite point cloud data as an object, and estimates the position, shape, and posture of the object to be recognized. After that, the process ends.
  • the conversion unit 46 projects the three-dimensional point cloud data extracted by the object point cloud extraction unit 40 onto the horizontal plane of the vehicle coordinate system and converts it into two-dimensional point cloud data.
  • the fitting unit 44 fits a rectangle to the two-dimensional composite point cloud data.
  • the object recognition unit 45 recognizes the rectangle fitted to the composite point cloud data as an object, and estimates the position, shape, and posture of the object to be recognized. As a result, the number of dimensions of the point cloud data is reduced, so that the processing load for object recognition can be reduced.
  • the conversion unit 46 generates an occupied grid map from the three-dimensional point cloud data extracted by the object point cloud extraction unit 40, and detects the occupied grid.
  • the clustering unit 41, the reference axis calculation unit 42, the composite point cloud forming unit 43, the fitting unit 44, and the object recognition unit 45 treat the set of occupied grid cells in the same manner as two-dimensional point cloud data, and perform the same clustering, reference axis estimation, composite point cloud formation, rectangle fitting, and object recognition processing on it. Therefore, when an occupancy grid map is calculated for the driving action plan of the own vehicle 1, that occupancy grid map can also be used here, so the amount of calculation in the entire system can be reduced.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

An object recognition method, wherein: positions of a surface of an object 2 in the vicinity of a vehicle 1 are acquired as point cloud data comprising a plurality of detection points (S1); the point cloud data is clustered into one cluster or a plurality of clusters, and the one cluster or one of the plurality of clusters is extracted as an object candidate (S3); on the basis of the cluster, a reference axis that passes through the center of the object candidate and extends in the vertical direction is estimated (S4); combined point cloud data is formed through the combination of the cluster and point cloud data obtained by rotating the cluster around the reference axis by 180 degrees (S5); and the object 2 is recognized through the fitting of a prescribed shape, which is a rectangle or cuboid, to the combined point cloud data (S6, S7).

Description

Object recognition method and object recognition device
 The present invention relates to an object recognition method and an object recognition device.
 In Patent Document 1 below, a technique is described in which a rectangle is fitted to point cloud data acquired from a laser sensor mounted on a vehicle, the shape of part or all of a detection target vehicle traveling around the own vehicle is approximated by a rectangular frame having at least one of the front surface, the rear surface, and one side surface of the detection target vehicle as a side, and the detection target vehicle is tracked using a corner of the rectangular frame as a reference point.
Japanese Unexamined Patent Publication No. 2016-148514
 However, when only a small amount of point cloud data is obtained for the object to be detected, there is a problem that the accuracy of fitting a predetermined shape such as a rectangle or a rectangular parallelepiped to the point cloud data decreases.
 An object of the present invention is to improve the accuracy of fitting a rectangle or a rectangular parallelepiped to point cloud data obtained by detecting each of a plurality of positions on the surface of an object around a vehicle.
 In an object recognition method according to one aspect of the present invention, the position of an object surface around a vehicle is acquired as point cloud data consisting of a plurality of detection points, the point cloud data is clustered into one or a plurality of clusters, one of the one or plurality of clusters is extracted as an object candidate, a reference axis passing through the center of the object candidate and extending in the vertical direction is estimated based on that cluster, composite point cloud data is formed by combining that cluster with point cloud data obtained by rotating that cluster 180 degrees around the reference axis, and an object is recognized by fitting a predetermined shape, which is a rectangle or a rectangular parallelepiped, to the composite point cloud data.
 According to one aspect of the present invention, the accuracy of fitting a rectangle or a rectangular parallelepiped to point cloud data obtained by detecting each of a plurality of positions on the surface of an object around a vehicle can be improved.
 The objects and advantages of the present invention are embodied and achieved by the elements and combinations thereof indicated in the claims. It should be understood that both the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the invention as claimed.
 A diagram showing an example of the travel support device of the embodiment.
 A schematic explanatory diagram of the object recognition method of the embodiment.
 A schematic explanatory diagram of the object recognition method of the embodiment.
 A schematic explanatory diagram of the object recognition method of the embodiment.
 A diagram showing an example of the functional configuration of the controller of the first embodiment.
 An explanatory diagram of an example of a method of detecting corner portions of a point cloud cluster.
 An explanatory diagram of an example of a method of detecting corner portions of a point cloud cluster.
 An explanatory diagram of an example of a method of detecting corner portions of a point cloud cluster.
 A schematic diagram of point cloud data.
 An explanatory diagram of an example of a method of estimating the reference axis according to the point cloud data.
 An explanatory diagram of an example of a method of estimating the reference axis according to the point cloud data.
 An explanatory diagram of another example of the method of estimating the reference axis according to the point cloud data.
 A schematic diagram of composite point cloud data and a rectangular parallelepiped fitted to the composite point cloud data.
 A flowchart of an example of the object recognition method of the first embodiment.
 A diagram showing an example of the functional configuration of the controller of the second embodiment.
 A flowchart of an example of the object recognition method of the second embodiment.
(First Embodiment)
 Hereinafter, embodiments of the present invention will be described with reference to the drawings.
(Configuration)
 The own vehicle 1 includes a travel support device 10 that supports the travel of the own vehicle 1. The travel support device 10 detects the self-position, which is the current position of the own vehicle 1, and supports the travel of the own vehicle 1 based on the detected self-position.
 For example, the travel support device 10 supports driving by performing autonomous travel control that automatically drives the own vehicle 1 without driver involvement, based on the detected self-position and the surrounding travel environment. It may instead partially support driving operations related to the travel of the own vehicle 1, such as controlling only the steering angle or only the acceleration and deceleration based on the estimated self-position and the surrounding travel environment.
 The travel support device 10 includes a positioning device 11, a map database 12, a distance measuring sensor 13, a vehicle sensor 14, a navigation system 15, a controller 16, and an actuator 17. In the drawings, the map database is written as "map DB".
 The positioning device 11 measures the current position of the own vehicle 1. The positioning device 11 may include, for example, a global navigation satellite system (GNSS) receiver. The GNSS receiver is, for example, a Global Positioning System (GPS) receiver, and receives radio waves from a plurality of navigation satellites to measure the current position of the own vehicle 1.
 The map database 12 is stored in a storage device such as a flash memory, and stores map information such as the positions and types of targets, such as road shapes, features, and landmarks, necessary for estimating the self-position of the own vehicle 1.
 As the map database 12, for example, high-precision map data suitable as a map for autonomous travel (hereinafter simply referred to as a "high-precision map") may be stored. The high-precision map is map data with higher precision than map data for navigation (hereinafter simply referred to as a "navigation map"), and includes lane-level information that is more detailed than road-level information.
 A navigation map may also be stored in the map database 12. The navigation map contains road-level information. For example, the navigation map includes, as road-level information, information on road nodes indicating reference points on a road reference line (for example, the center line of a road) and information on road links indicating the form of the road section between road nodes.
 The map database 12 may acquire map information from the outside via a communication system such as wireless communication (road-to-vehicle communication or vehicle-to-vehicle communication is also possible). In this case, the map database 12 may periodically obtain the latest map information and update the map information it holds. Further, the map database 12 may store the roads on which the own vehicle 1 has actually traveled as map information.
The distance measuring sensor 13 is mounted on the own vehicle 1, transmits probe waves to the surroundings of the own vehicle 1 to scan them, and receives the waves reflected from object surfaces. Based on the received reflected waves, the distance measuring sensor 13 calculates the positions of the reflection points (detection points) at which the probe waves were reflected at a plurality of positions on an object surface, as positions relative to the own vehicle 1, and outputs point cloud data representing the relative position of each reflection point to the controller 16. That is, the point cloud data represents the positions of the reflection points in a vehicle coordinate system centered on the own vehicle 1.
The distance measuring sensor 13 may include a laser range finder (LRF), a radar, a LiDAR (Light Detection and Ranging) laser radar, or the like. The distance measuring sensor 13 is not limited to these; any sensor may be used as long as it can acquire the positions of object surfaces around the vehicle as a point cloud. For example, the sensor may calculate the position of the object surface for each pixel corresponding to an object in images captured by a stereo camera around the vehicle and output, to the controller 16, point cloud data in which the position corresponding to each pixel is a detection point.
In the following description, the distance measuring sensor 13 is assumed, as described above, to output the positions of the reflection points on object surfaces to the controller 16 as point cloud data based on the reflected waves of the transmitted probe waves.
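For illustration only, the following Python sketch shows how range returns might be converted into reflection-point coordinates in a vehicle coordinate system. The axis convention (x forward, y left, z up) and the function name scan_to_points are assumptions made for this example; they are not specified in the disclosure.

```python
import numpy as np

def scan_to_points(ranges, azimuths, elevations):
    """Convert range/angle returns into detection-point coordinates in the
    vehicle coordinate system (x forward, y left, z up is assumed here)."""
    r = np.asarray(ranges, dtype=float)
    az = np.asarray(azimuths, dtype=float)    # horizontal scan angle [rad]
    el = np.asarray(elevations, dtype=float)  # vertical scan angle [rad]
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.stack([x, y, z], axis=1)        # one row per reflection point
```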
The vehicle sensor 14 detects various information (vehicle information) obtained from the own vehicle 1. The vehicle sensor 14 includes, for example, a vehicle speed sensor that detects the traveling speed (vehicle speed) of the own vehicle 1, wheel speed sensors that detect the rotation speed of each tire of the own vehicle 1, a three-axis acceleration sensor (G sensor) that detects the acceleration (including deceleration) of the own vehicle 1 in three axial directions, a steering angle sensor that detects the steering angle (including the turning angle), a gyro sensor that detects the angular velocity generated in the own vehicle 1, a yaw rate sensor that detects the yaw rate, an accelerator sensor that detects the accelerator opening degree of the own vehicle 1, and a brake sensor that detects the amount of brake operation by the driver.
The navigation system 15 recognizes the current position of the own vehicle 1 with the positioning device 11 and acquires the map information for that current position from the map database 12. The navigation system 15 sets a planned travel route to the destination entered by the occupant and provides route guidance to the occupant along this planned travel route.
The navigation system 15 also outputs information on the set planned travel route to the controller 16. During autonomous driving control, the controller 16 automatically drives the own vehicle 1 (controls its driving behavior) so that it travels autonomously along the planned travel route set by the navigation system 15.
The controller 16 is an electronic control unit (ECU) that performs driving support control of the own vehicle 1. The controller 16 includes a processor 20 and peripheral components such as a storage device 21. The processor 20 may be, for example, a CPU (Central Processing Unit) or an MPU (Micro-Processing Unit).
The storage device 21 may include a semiconductor storage device, a magnetic storage device, an optical storage device, or the like. The storage device 21 may include memories such as registers, a cache memory, and a ROM (Read Only Memory) and RAM (Random Access Memory) used as main storage.
The functions of the controller 16 described below are realized, for example, by the processor 20 executing computer programs stored in the storage device 21.
The controller 16 may instead be formed by dedicated hardware for executing each information processing operation described below.
For example, the controller 16 may include functional logic circuits implemented in a general-purpose semiconductor integrated circuit. For example, the controller 16 may have a programmable logic device (PLD) such as a field-programmable gate array (FPGA).
The controller 16 detects the self-position, which is the current position of the own vehicle 1, and recognizes objects around the own vehicle 1. The controller 16 sets a target traveling trajectory along which the own vehicle 1 is to travel, based on the self-position, the road map information in the map database 12, the route information output from the navigation system 15, the objects around the own vehicle 1, and the traveling state of the own vehicle 1.
When setting the target traveling trajectory, the controller 16 recognizes the positions, shapes, and attitudes of objects around the own vehicle 1 based on the point cloud data output from the distance measuring sensor 13, and sets the target traveling trajectory based on the recognition result of the objects around the own vehicle 1.
The controller 16 performs autonomous travel control of the own vehicle 1 based on the set target traveling trajectory and drives the actuators 17 to control the travel of the own vehicle 1.
On the other hand, when only partially supporting the driving operations related to the travel of the own vehicle 1, such as controlling only the steering angle or only the acceleration and deceleration, the controller 16 drives the actuators 17 based on the self-position, the road map information in the map database 12, the objects around the own vehicle 1, and the traveling state of the own vehicle 1, and controls the steering mechanism, brake device, and power unit of the own vehicle 1.
The actuators 17 operate the steering wheel, the accelerator opening degree, and the brake device of the own vehicle 1 in response to control signals from the controller 16 to produce the vehicle behavior of the own vehicle 1. The actuators 17 include a steering actuator, an accelerator opening actuator, and a brake control actuator. The steering actuator controls the steering direction and steering amount of the own vehicle 1. The accelerator opening actuator controls the accelerator opening degree of the own vehicle 1. The brake control actuator controls the braking operation of the brake device of the own vehicle 1.
Next, an outline of the operation by which the controller 16 recognizes objects around the own vehicle 1 will be described. The controller 16 and the distance measuring sensor 13 are an example of the object recognition device described in the claims.
FIG. 2A is a conceptual diagram of the measurement, by the distance measuring sensor 13, of positions on the surface of an object 2 around the own vehicle 1. The broken lines in FIG. 2A indicate the discrete transmission directions of the probe waves when the distance measuring sensor 13 scans the surroundings of the own vehicle 1, and the circle plots indicate the positions of the plurality of reflection points at which the probe waves were reflected on the surface of the object 2.
The distance measuring sensor 13 outputs point cloud data representing the positions of the reflection points relative to the own vehicle 1 to the controller 16.
The controller 16 recognizes the position, shape, and attitude of the object 2 by fitting a predetermined type of shape suitable for approximating the outer shape of the object 2 to the point cloud data (circle plots) output from the distance measuring sensor 13. In the example shown in FIG. 2B, the controller 16 fits a rectangle to the point cloud data.
If a rectangle is fitted directly to the point cloud data output from the distance measuring sensor 13, the accuracy of the fit may not be ensured when only a small amount of point cloud data is available for the object 2 to be recognized. As a result, as shown for example in FIG. 2B, problems such as an inability to stably estimate the position, attitude, and shape of the recognized object 2 may occur.
Therefore, as shown in FIG. 2C, the controller 16 generates additional point cloud data (triangle plots) by duplicating the point cloud data (circle plots) obtained by measuring the object 2 with the distance measuring sensor 13 and rotating the copy 180 degrees about a vertical axis. The controller 16 forms composite point cloud data by combining the original point cloud data (circle plots) and the additional point cloud data (triangle plots).
The controller 16 fits a rectangle to the composite point cloud data (circle plots + triangle plots) and recognizes the position, shape, and attitude of the object 2.
This makes it easier to fit a rectangle to the original point cloud data (circle plots) and improves the accuracy of the fit. As a result, the position, attitude, and shape of the recognized object 2 can be estimated stably.
Next, an example of the functional configuration of the controller 16 will be described in detail with reference to FIG. 3. The controller 16 includes a point cloud acquisition unit 30, an object candidate extraction unit 31, an object detection unit 32, a self-position estimation unit 33, an in-map position calculation unit 34, a traveling trajectory generation unit 35, and a travel control unit 36.
The point cloud acquisition unit 30 acquires the point cloud data output by the distance measuring sensor 13 as three-dimensional point cloud data.
The object candidate extraction unit 31 clusters the three-dimensional point cloud data acquired by the point cloud acquisition unit 30 into one or more clusters, one for each object candidate that is a stationary or moving object, and extracts the obtained clusters as object candidates.
The object candidate extraction unit 31 includes an object point cloud extraction unit 40 and a clustering unit 41.
The object point cloud extraction unit 40 extracts the point cloud data of the reflection points of stationary or moving objects by removing reflection points on the road surface and the like from the three-dimensional point cloud data.
The clustering unit 41 clusters the point cloud data extracted by the object point cloud extraction unit 40 into a cluster for each object candidate. For example, the clustering unit 41 may classify point cloud data that are close to one another into a single group, thereby clustering the data into clusters, each of which is a group of point cloud data. Hereinafter, a cluster obtained by the clustering unit 41 is referred to as a "point cloud cluster".
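As a minimal sketch of this clustering step, the following Python function groups detection points whose mutual distance is below a radius. The radius and minimum-cluster-size values and the function name euclidean_cluster are illustrative assumptions rather than values taken from the disclosure.

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, radius=0.5, min_points=5):
    """Group points (N x 3 array) into clusters of mutually nearby points.
    Naive O(N^2) breadth-first grouping; a k-d tree would be used in practice."""
    labels = np.full(len(points), -1, dtype=int)
    next_label = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            dist = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((dist < radius) & (labels == -1))[0]:
                labels[j] = next_label
                queue.append(j)
        next_label += 1
    clusters = [points[labels == k] for k in range(next_label)]
    return [c for c in clusters if len(c) >= min_points]   # point cloud clusters
```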
The object candidate extraction unit 31 may be configured to extract as object candidates only point cloud clusters whose shape resembles the shape of the objects to be recognized. In this way, subsequent processing can be omitted for the point cloud data of objects that are not recognition targets, and the processing load can be reduced.
In the present embodiment, a predetermined shape representing the approximate outer shape of the objects to be recognized is set, and the object candidate extraction unit 31 determines whether a point cloud cluster is a set of points lying on the predetermined shape.
However, point cloud data is not necessarily obtained for all surfaces forming the object to be recognized. In the example of FIG. 2A, no point cloud data is obtained for the surfaces that do not face the distance measuring sensor 13.
For this reason, the object candidate extraction unit 31 is configured to extract as object candidates only point cloud clusters that have a part of the set predetermined shape.
For example, when the recognition targets are other vehicles, only point cloud clusters having a part of a rectangular parallelepiped, which is the approximate outer shape of a vehicle, may be extracted as object candidates. In this case, for example, the object candidate extraction unit 31 may determine whether a point cloud cluster includes a corner, which is a part of a rectangular parallelepiped, and extract only point cloud clusters including a corner as object candidates.
An example of a method for detecting the corner of a point cloud cluster will be described with reference to FIGS. 4A, 4B, and 4C. First, as shown in FIG. 4A, the object candidate extraction unit 31 projects the points 51 included in a point cloud cluster, which is three-dimensional point cloud data, onto a two-dimensional grid map 50.
Next, the object candidate extraction unit 31 calculates an evaluation value for each cell 52 of the grid map 50 according to the number of points 51 projected onto that cell 52.
As shown in FIG. 4B, the object candidate extraction unit 31 generates an image in which pixels are arranged at positions corresponding to the cells 52 and sets each pixel value according to the evaluation value of the corresponding cell 52.
The object candidate extraction unit 31 detects the corner 53 of the point cloud cluster by applying predetermined image processing to the generated image. For example, the object candidate extraction unit 31 may detect the corner 53 of the point cloud cluster using a Harris corner detector.
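The corner test described above can be sketched as follows. The cell size, the relative threshold, and the Harris parameters are assumed values, and OpenCV's cornerHarris is used here merely as one readily available Harris implementation, not as the specific detector of the disclosure.

```python
import numpy as np
import cv2  # OpenCV is assumed to be available for the Harris detector

def detect_cluster_corners(cluster_xy, cell=0.2, rel_thresh=0.1):
    """Project a cluster onto a 2-D grid, score each cell by its point count,
    and return corner candidates found by the Harris detector (in metres)."""
    mins = cluster_xy.min(axis=0)
    idx = np.floor((cluster_xy - mins) / cell).astype(int)
    img = np.zeros((idx[:, 1].max() + 1, idx[:, 0].max() + 1), dtype=np.float32)
    np.add.at(img, (idx[:, 1], idx[:, 0]), 1.0)   # evaluation value = hit count
    img /= img.max()                               # normalise to pixel values
    response = cv2.cornerHarris(img, 2, 3, 0.04)   # blockSize=2, ksize=3, k=0.04
    rows, cols = np.where(response > rel_thresh * response.max())
    return np.stack([cols, rows], axis=1) * cell + mins
```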
Referring again to FIG. 3, the object detection unit 32 recognizes each point cloud cluster as an object by fitting a predetermined shape such as a rectangular parallelepiped or a rectangle to it, and detects the position, shape, and attitude of the object.
To improve the accuracy of fitting the predetermined shape to the point cloud data, the object detection unit 32 duplicates the point cloud cluster, rotates the copy 180 degrees about a vertical axis to generate additional point cloud data, and fits the predetermined shape to the composite point cloud data obtained by combining the original point cloud cluster and the additional point cloud data.
The object detection unit 32 includes a reference axis calculation unit 42, a composite point cloud forming unit 43, a fitting unit 44, and an object recognition unit 45.
The reference axis calculation unit 42 estimates the reference axis about which the point cloud cluster extracted by the object candidate extraction unit 31 is to be rotated 180 degrees.
When a rectangular parallelepiped or a rectangle is fitted to the composite point cloud data, the reference axis serving as the rotation axis of the additional point cloud data should preferably be an axis that passes through the center of the object candidate and extends in the vertical direction, so that the composite point cloud data takes a shape close to a rectangular parallelepiped or a rectangle.
For this reason, the reference axis calculation unit 42 estimates, based on the point cloud cluster, a reference axis that passes through the center of the object candidate and extends in the vertical direction.
For example, the reference axis calculation unit 42 may estimate, as the reference axis, the vertical line passing through the center of the minimum enclosing circle of the point cloud cluster projected onto a horizontal plane.
As shown in FIG. 5A, assume that a point cloud cluster has been extracted for a vehicle 2, which is an object candidate. The circle plots in FIG. 5A show the individual point cloud data included in the point cloud cluster extracted for the vehicle 2. In the example of FIG. 5A, point cloud data has been acquired for the right side surface and the rear surface of the vehicle 2.
As shown in FIG. 5B, the reference axis calculation unit 42 projects the point cloud cluster onto the horizontal plane of the vehicle coordinate system.
The reference axis calculation unit 42 calculates the smallest circle containing the point cloud cluster projected onto the horizontal plane (that is, the minimum enclosing circle 60) and determines the center 61 of the minimum enclosing circle 60.
The reference axis calculation unit 42 may calculate the minimum enclosing circle 60 using, for example, the "Smallest enclosing disks (balls and ellipsoids)" algorithm by Emo Welzl (https://inf.ethz.ch/personal/emo/PublFiles/SmallEnclDisk_LNCS555_91.pdf).
Referring to FIG. 5C, the reference axis calculation unit 42 calculates, as the reference axis, the vertical axis 62 passing through the center 61 of the minimum enclosing circle 60.
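A compact recursive form of Welzl's algorithm is sketched below for reference. The helper names are assumptions made for this example, and for very large clusters an iterative variant would be preferred because of the recursion depth.

```python
import numpy as np

def _circumcircle(a, b, c):
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    if abs(d) < 1e-12:                      # collinear points: no unique circle
        return None
    ux = (a @ a * (b[1] - c[1]) + b @ b * (c[1] - a[1]) + c @ c * (a[1] - b[1])) / d
    uy = (a @ a * (c[0] - b[0]) + b @ b * (a[0] - c[0]) + c @ c * (b[0] - a[0])) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(a - center)

def _trivial(R):
    if len(R) == 0:
        return np.zeros(2), 0.0
    if len(R) == 1:
        return R[0], 0.0
    if len(R) == 2:
        center = (R[0] + R[1]) / 2.0
        return center, np.linalg.norm(R[0] - center)
    return _circumcircle(*R) or _trivial(R[:2])   # fall back if degenerate

def min_enclosing_circle(P, R=()):
    """Recursive Welzl-style computation of the 2-D minimum enclosing circle.
    P is a list of 2-D numpy points (the cluster projected onto the horizontal plane)."""
    if len(P) == 0 or len(R) == 3:
        return _trivial(list(R))
    p, rest = P[0], P[1:]
    center, radius = min_enclosing_circle(rest, R)
    if np.linalg.norm(p - center) <= radius + 1e-9:
        return center, radius
    return min_enclosing_circle(rest, R + (p,))

# center, radius = min_enclosing_circle(list(cluster_xyz[:, :2]))
# the reference axis is then the vertical line through `center`
```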
Alternatively, the reference axis calculation unit 42 may estimate, as the reference axis, the vertical line in the vehicle coordinate system passing through the center of gravity of the point cloud cluster.
Referring to FIG. 6, the reference axis calculation unit 42 may also estimate, as the reference axis, the vertical line passing through the midpoint 64 of the line segment connecting the two most distant point cloud data 63a and 63f among the point cloud data 63a to 63f included in the point cloud cluster.
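For reference, the midpoint-of-the-farthest-pair variant can be written in a few lines; the naive O(N^2) pairwise search used here is an assumption made purely for brevity.

```python
import numpy as np

def axis_from_farthest_pair(cluster_xy):
    """Vertical reference axis through the midpoint of the two most distant
    points of the cluster (projected onto the horizontal plane)."""
    diffs = cluster_xy[:, None, :] - cluster_xy[None, :, :]
    dist = np.linalg.norm(diffs, axis=2)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    return (cluster_xy[i] + cluster_xy[j]) / 2.0   # (x, y) the axis passes through
```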
Instead of the vehicle coordinate system, the above vertical line and horizontal direction may be defined with reference to a world coordinate system or a map coordinate system whose vertical direction is the direction of gravity.
Further, when the diameter of the minimum enclosing circle 60 calculated for a point cloud cluster is excessively large or small compared with the known size of the objects to be recognized, the object detection unit 32 may omit subsequent processing for that point cloud cluster. For example, the object detection unit 32 may omit subsequent processing for point cloud clusters whose minimum enclosing circle 60 is not within a predetermined size range. This avoids unnecessary calculations and reduces the processing load.
Referring again to FIG. 3, the composite point cloud forming unit 43 duplicates the point cloud cluster extracted by the object candidate extraction unit 31 and rotates the copy 180 degrees about the reference axis estimated by the reference axis calculation unit 42 to generate additional point cloud data. Referring to FIG. 7, the circle plots show the point cloud data of the original point cloud cluster extracted by the object candidate extraction unit 31, and the triangle plots show the additional point cloud data.
The composite point cloud forming unit 43 forms composite point cloud data by combining the original point cloud data (circle plots) and the additional point cloud data (triangle plots).
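Because a 180-degree rotation about a vertical axis is a point reflection in the horizontal plane, the composite point cloud can be formed as in the following sketch; the function names are assumptions made for illustration.

```python
import numpy as np

def rotate_180_about_axis(cluster_xyz, axis_xy):
    """Rotate a cluster 180 degrees about the vertical axis through axis_xy:
    (x, y) -> (2*cx - x, 2*cy - y), z unchanged."""
    rotated = cluster_xyz.copy()
    rotated[:, :2] = 2.0 * np.asarray(axis_xy) - cluster_xyz[:, :2]
    return rotated

def composite_point_cloud(cluster_xyz, axis_xy):
    """Composite point cloud = original cluster plus its 180-degree rotated copy."""
    return np.vstack([cluster_xyz, rotate_180_about_axis(cluster_xyz, axis_xy)])
```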
Referring again to FIG. 3, the fitting unit 44 fits a predetermined shape 65 representing the approximate outer shape of the object to be recognized to the composite point cloud data (circle plots + triangle plots). In the example of FIG. 7, the fitting unit 44 fits a rectangular parallelepiped 65 to the composite point cloud data.
The object recognition unit 45 recognizes the rectangular parallelepiped 65 fitted to the composite point cloud data as an object and estimates the position, shape, and attitude of the object to be recognized. For example, the object recognition unit 45 may regard the position and shape of the fitted rectangular parallelepiped themselves as the position and shape of the object, and may estimate the attitude of the object using, for example, the lengths of the sides of the rectangular parallelepiped.
For example, if the width of one face of the rectangular parallelepiped is close to a known vehicle width, that face can be estimated to be the front or rear surface of the vehicle. Furthermore, by using map information or the like, whether the vehicle is facing forward or backward can be estimated from the lane in which the vehicle is located.
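One simple way to realize such a fit is a yaw search that minimizes the footprint area of an upright box around the composite cloud, as sketched below. This is only an illustrative substitute for the fitting procedure; the step size and the area criterion are assumed values.

```python
import numpy as np

def fit_upright_box(composite_xyz, yaw_step_deg=1.0):
    """Fit an upright box to the composite cloud by searching the yaw angle
    that minimises the area of the axis-aligned footprint in the rotated frame."""
    xy = composite_xyz[:, :2]
    best = None
    for yaw in np.deg2rad(np.arange(0.0, 90.0, yaw_step_deg)):
        c, s = np.cos(yaw), np.sin(yaw)
        local = xy @ np.array([[c, -s], [s, c]])        # rotate points by -yaw
        lo, hi = local.min(axis=0), local.max(axis=0)
        area = np.prod(hi - lo)
        if best is None or area < best[0]:
            best = (area, yaw, lo, hi)
    _, yaw, lo, hi = best
    c, s = np.cos(yaw), np.sin(yaw)
    center_xy = np.array([[c, -s], [s, c]]) @ ((lo + hi) / 2.0)
    length, width = hi - lo
    height = composite_xyz[:, 2].ptp()                  # vertical extent
    return center_xy, (length, width, height), yaw      # position, shape, heading
```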
The object recognition unit 45 may also determine the attribute of the recognized object based on at least one of the size and the shape of the predetermined shape 65 fitted to the composite point cloud data. For example, the object recognition unit 45 may determine the type of the recognized vehicle (whether it is a truck or a passenger car) based on the size of the rectangular parallelepiped 65.
For example, the object recognition unit 45 may calculate the ratio of the side lengths as the shape of the rectangular parallelepiped 65 and determine the type of the recognized vehicle (whether it is a four-wheeled vehicle or a two-wheeled vehicle) according to the ratio of the side lengths.
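A rough decision of this kind could look like the following; every threshold here is an illustrative assumption and not a value taken from the disclosure.

```python
def classify_vehicle(length, width, height):
    """Illustrative attribute decision from the fitted box dimensions [m]."""
    footprint = sorted((length, width))            # (shorter side, longer side)
    if footprint[1] > 6.0:                         # assumed threshold for trucks
        return "truck"
    if footprint[0] < 1.2 and footprint[1] / max(footprint[0], 0.1) > 2.0:
        return "two-wheeled vehicle"
    return "passenger car"
```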
The self-position estimation unit 33 measures the absolute position of the own vehicle 1, that is, the position, attitude, and speed of the own vehicle 1 with respect to a predetermined reference point, based on the measurement results of the positioning device 11 and odometry using the detection results from the vehicle sensor 14.
The in-map position calculation unit 34 estimates the position and attitude of the own vehicle 1 in the map coordinate system from the absolute position of the own vehicle 1 obtained by the self-position estimation unit 33 and the map information stored in the map database 12.
The in-map position calculation unit 34 also estimates the positions and attitudes of the objects around the own vehicle 1 in the map coordinate system, based on the estimated position and attitude of the own vehicle 1 and the recognition results of the objects around the own vehicle 1 by the object recognition unit 45.
The traveling trajectory generation unit 35 generates a route space map, which represents the routes around the own vehicle 1 and the presence or absence of objects, and a risk map, which quantifies the degree of danger of the driving field, based on the position and attitude of the own vehicle 1 estimated by the in-map position calculation unit 34, the positions and attitudes of the objects around the own vehicle 1, and the high-precision map.
The traveling trajectory generation unit 35 generates a driving action plan for making the own vehicle 1 travel automatically along the planned travel route, based on the planned travel route set by the navigation system 15, the route space map, and the risk map.
The driving action plan is a lane-level plan of driving actions over a medium- to long-distance range that defines the travel lane in which the own vehicle is to travel and the driving actions required to travel in that lane.
The driving actions determined by the traveling trajectory generation unit 35 include stopping at a stop line, turning right, turning left, or going straight at an intersection, traveling on a curved road with a curvature equal to or greater than a predetermined value, passing a point where the lane width changes, and changing lanes in a merging section or when traveling on a road with multiple lanes.
For example, the traveling trajectory generation unit 35 may determine whether another vehicle is approaching the own vehicle based on the position of the other vehicle estimated by the in-map position calculation unit 34. When it determines that another vehicle is approaching the own vehicle, it generates a driving action plan that stops or decelerates the own vehicle or involves avoidance steering.
The traveling trajectory generation unit 35 generates candidates for the traveling trajectory and speed profile with which the own vehicle 1 is to travel, based on the driving action plan, the motion characteristics of the own vehicle 1, and the route space map.
The traveling trajectory generation unit 35 evaluates the future risk of each candidate based on the risk map, selects the optimum traveling trajectory and speed profile, and sets them as the target traveling trajectory and target speed profile for the own vehicle 1.
The travel control unit 36 drives the actuators 17 so that the own vehicle 1 travels along the target traveling trajectory at a speed according to the target speed profile generated by the traveling trajectory generation unit 35, and controls the driving behavior of the own vehicle 1 so that it travels automatically along the planned travel route.
On the other hand, when only partially supporting the driving operations related to the travel of the own vehicle 1, such as controlling only the steering angle or only the acceleration and deceleration, the travel control unit 36 drives the actuators 17 based on the position and attitude of the own vehicle 1 estimated by the in-map position calculation unit 34, the positions and attitudes of the objects around the own vehicle 1, the map information, and the traveling state of the own vehicle 1, and controls at least one of the steering mechanism, the brake device, and the power unit of the own vehicle 1.
For example, when it is determined that another vehicle is approaching the own vehicle, at least one of the steering mechanism, the brake device, and the power unit of the own vehicle 1 is controlled so as to stop or decelerate the own vehicle 1 or to perform avoidance steering.
(Operation)
Next, the object recognition method of the first embodiment will be described with reference to FIG. 8.
In step S1, the point cloud acquisition unit 30 acquires the point cloud data output by the distance measuring sensor 13 as three-dimensional point cloud data.
In step S2, the object point cloud extraction unit 40 extracts the point cloud data of the reflection points of stationary or moving objects from the three-dimensional point cloud data.
In step S3, the clustering unit 41 clusters the point cloud data extracted by the object point cloud extraction unit 40 into a point cloud cluster for each object candidate.
In step S4, the reference axis calculation unit 42 estimates, based on the point cloud cluster, a reference axis that passes through the center of the object candidate and extends in the vertical direction.
In step S5, the composite point cloud forming unit 43 duplicates the point cloud cluster extracted by the object candidate extraction unit 31 and rotates the copy 180 degrees about the reference axis estimated by the reference axis calculation unit 42 to generate additional point cloud data.
The composite point cloud forming unit 43 forms composite point cloud data by combining the original point cloud data extracted by the object candidate extraction unit 31 and the additional point cloud data.
In step S6, the fitting unit 44 fits the rectangular parallelepiped 65 to the composite point cloud data.
In step S7, the object recognition unit 45 recognizes the rectangular parallelepiped 65 fitted to the composite point cloud data as an object and estimates the position, shape, and attitude of the object to be recognized. The processing then ends.
(Effects of the first embodiment)
(1) The distance measuring sensor 13 and the point cloud acquisition unit 30 acquire the positions of the surface of an object 2 around the own vehicle 1 as point cloud data consisting of a plurality of detection points. The object candidate extraction unit 31 clusters the point cloud data into one or more point cloud clusters and extracts one of the point cloud clusters as an object candidate. The reference axis calculation unit 42 estimates, based on the point cloud cluster, a reference axis that passes through the center of the object candidate and extends in the vertical direction.
The composite point cloud forming unit 43 forms composite point cloud data obtained by combining the point cloud data produced by rotating the point cloud cluster 180 degrees about the reference axis with the original point cloud cluster. The fitting unit 44 and the object recognition unit 45 recognize the object by fitting a rectangular parallelepiped to the composite point cloud data.
As a result, the rectangular parallelepiped can be fitted accurately to the point cloud data of the reflection points obtained from the stationary or moving object to be recognized, so that the recognition accuracy of the object can be improved.
(2) The object candidate extraction unit 31 may extract, as object candidates, only those point cloud clusters among the one or more point cloud clusters that are determined to have the shape of a part of the predetermined shape. For example, only point cloud clusters that include a corner may be extracted as object candidates.
As a result, processing for objects other than the recognition targets can be omitted, so that the processing load can be reduced without lowering the recognition accuracy.
(3) The reference axis calculation unit 42 may estimate, as the reference axis, the vertical line passing through the center of the minimum enclosing circle of the two-dimensional point cloud data obtained by projecting the point cloud cluster onto a horizontal plane.
As a result, when the point cloud cluster is duplicated, rotated 180 degrees, and combined with the original point cloud cluster, the resulting composite point cloud has a shape close to a rectangular parallelepiped. Therefore, the rectangular parallelepiped can be fitted accurately to the point cloud data, and the recognition accuracy of the object can be improved.
(4) The object detection unit 32 may recognize objects only for those point cloud clusters among the one or more point cloud clusters whose minimum enclosing circle is within a predetermined size range.
As a result, point cloud clusters that are excessively large or small compared with the known size of the recognition targets can be excluded from subsequent processing. Processing for objects other than the recognition targets can therefore be omitted, and the processing load can be reduced without lowering the recognition accuracy.
(5) The reference axis calculation unit 42 may estimate, as the reference axis, the vertical line passing through the center of gravity of the cluster. Alternatively, the reference axis calculation unit 42 may estimate, as the reference axis, the vertical line passing through the midpoint of the line segment connecting the two most distant points among the position information of the point cloud data included in the cluster.
As a result, the reference axis can be estimated with a relatively small amount of calculation.
(6) The object detection unit 32 may determine the attribute of the object based on at least one of the size and the shape of the predetermined shape fitted to the composite point cloud data.
As a result, the attribute of the object (for example, truck, passenger car, or two-wheeled vehicle) can be determined.
(Second embodiment)
Next, the second embodiment will be described. The controller 16 of the second embodiment forms the above-described composite point cloud data as two-dimensional point cloud data by projecting the point cloud data, acquired by the point cloud acquisition unit 30 as three-dimensional point cloud data, onto a horizontal plane and converting it into two-dimensional point cloud data. The controller 16 recognizes objects by fitting a rectangle to the composite point cloud data. Since this reduces the number of dimensions of the point cloud data, the processing load for object recognition can be reduced.
Referring to FIG. 9, the controller 16 of the second embodiment has the same configuration as the controller of the first embodiment, and the same components are denoted by the same reference numerals.
The object candidate extraction unit of the second embodiment includes a conversion unit 46. The conversion unit 46 projects the three-dimensional point cloud data extracted by the object point cloud extraction unit 40 onto the horizontal plane of the vehicle coordinate system and converts it into two-dimensional point cloud data. Instead of the vehicle coordinate system, the data may be projected onto the horizontal plane of a world coordinate system or a map coordinate system.
The clustering unit 41 clusters the point cloud data converted into two-dimensional point cloud data by the conversion unit 46 into a point cloud cluster for each object candidate.
As in the first embodiment, the object candidate extraction unit 31 may be configured to extract as object candidates only point cloud clusters whose shape resembles the shape of the objects to be recognized.
For example, when the recognition targets are other vehicles, only point cloud clusters having a part of a rectangle, which is the approximate outer shape of a vehicle viewed from above, may be extracted as object candidates. The object candidate extraction unit 31 may determine that a point cloud cluster including a corner has a part of a rectangle.
The reference axis calculation unit 42 estimates the reference axis about which the two-dimensional point cloud cluster extracted by the object candidate extraction unit 31 is to be rotated 180 degrees. The reference axis calculation unit 42 may estimate, as the reference axis, for example, the vertical line passing through the center of the minimum enclosing circle of the two-dimensional point cloud cluster, the vertical line passing through the center of gravity of the two-dimensional point cloud cluster, or the vertical line passing through the midpoint of the line segment connecting the two most distant point cloud data included in the two-dimensional point cloud cluster.
The composite point cloud forming unit 43 duplicates the two-dimensional point cloud cluster extracted by the object candidate extraction unit 31 and rotates the copy 180 degrees about the reference axis estimated by the reference axis calculation unit 42 to generate additional point cloud data. The composite point cloud forming unit 43 forms composite point cloud data by combining the original point cloud data and the additional point cloud data.
The fitting unit 44 fits a rectangle to the composite point cloud data. The object recognition unit 45 recognizes the rectangle fitted to the composite point cloud data as an object and estimates the position, shape, and attitude of the object to be recognized.
In the above description, the three-dimensional point cloud data acquired by the point cloud acquisition unit 30 is converted into two-dimensional point cloud data before the clustering by the clustering unit 41, but the present invention is not limited to this.
The three-dimensional point cloud data may be converted into two-dimensional point cloud data at any point after the extraction by the object point cloud extraction unit 40 and before the fitting of the rectangle by the fitting unit 44.
Also, in the above description, the three-dimensional point cloud data is converted into two-dimensional point cloud data by projecting it onto a horizontal plane, but the present invention is not limited to this.
For example, the conversion unit 46 may generate a binarized occupancy grid map from the three-dimensional point cloud data extracted by the object point cloud extraction unit 40. The object candidate extraction unit 31 may also reuse a binarized occupancy grid map calculated for other purposes such as the driving action planning of the own vehicle 1.
The object candidate extraction unit 31 detects the occupied grid cells occupied by the point cloud data in the binarized occupancy grid map.
The clustering unit 41, the reference axis calculation unit 42, the composite point cloud forming unit 43, the fitting unit 44, and the object recognition unit 45 treat the set of occupied grid cells in the same way as two-dimensional point cloud data, and perform processing equivalent to the clustering of two-dimensional point cloud data, the estimation of the reference axis, the formation of the composite point cloud data, the fitting of the rectangle to the composite point cloud data, and the object recognition described above.
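A minimal sketch of building such a binarized occupancy grid map from 2-D detection points is shown below; the cell size, origin, and extent are assumed values chosen only for illustration.

```python
import numpy as np

def binary_occupancy_grid(points_xy, cell=0.2, origin=(-50.0, -50.0), size=(500, 500)):
    """Mark every grid cell that contains at least one detection point."""
    grid = np.zeros(size, dtype=bool)
    idx = np.floor((points_xy - np.asarray(origin)) / cell).astype(int)
    ok = (idx[:, 0] >= 0) & (idx[:, 0] < size[0]) & (idx[:, 1] >= 0) & (idx[:, 1] < size[1])
    grid[idx[ok, 0], idx[ok, 1]] = True
    return grid

# the occupied cells can then be treated like 2-D point cloud data, e.g.
# occupied_xy = np.argwhere(grid) * cell + np.asarray(origin)
```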
(Operation)
Next, the object recognition method of the second embodiment will be described with reference to FIG. 10.
The processing of steps S10 and S11 is the same as the processing of steps S1 and S2 described with reference to FIG. 8.
In step S12, the three-dimensional point cloud data extracted by the object point cloud extraction unit 40 is projected onto a horizontal plane and converted into two-dimensional point cloud data.
The processing of steps S13 to S15 is the same as that of steps S3 to S5 described with reference to FIG. 8, except that two-dimensional point cloud data is handled instead of three-dimensional point cloud data.
In step S16, the fitting unit 44 fits a rectangle to the composite point cloud data.
In step S17, the object recognition unit 45 recognizes the rectangle fitted to the composite point cloud data as an object and estimates the position, shape, and attitude of the object to be recognized.
The processing then ends.
(Effects of the second embodiment)
(1) The conversion unit 46 projects the three-dimensional point cloud data extracted by the object point cloud extraction unit 40 onto the horizontal plane of the vehicle coordinate system and converts it into two-dimensional point cloud data. The fitting unit 44 fits a rectangle to the two-dimensional composite point cloud data. The object recognition unit 45 recognizes the rectangle fitted to the composite point cloud data as an object and estimates the position, shape, and attitude of the object to be recognized.
Since this reduces the number of dimensions of the point cloud data, the processing load for object recognition can be reduced.
(2) The conversion unit 46 generates an occupancy grid map from the three-dimensional point cloud data extracted by the object point cloud extraction unit 40 and detects the occupied grid cells.
The clustering unit 41, the reference axis calculation unit 42, the composite point cloud forming unit 43, the fitting unit 44, and the object recognition unit 45 treat the set of occupied grid cells in the same way as two-dimensional point cloud data, and perform processing equivalent to the clustering of two-dimensional point cloud data, the estimation of the reference axis, the formation of the composite point cloud data, the fitting of the rectangle to the composite point cloud data, and the object recognition.
Therefore, when an occupancy grid map is computed for the driving action planning of the own vehicle 1, that occupancy grid map can also be reused here, so that the amount of calculation in the entire system can be reduced.
All examples and conditional language recited herein are intended for educational purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to the specifically recited examples and conditions, or to the organization of such examples in this specification relating to a showing of the superiority and inferiority of the invention. Although embodiments of the present invention have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
1 ... own vehicle, 2 ... object around the own vehicle, 10 ... driving support device, 11 ... positioning device, 12 ... map database, 13 ... distance measuring sensor, 14 ... vehicle sensor, 15 ... navigation system, 16 ... controller, 17 ... actuator, 20 ... processor, 21 ... storage device, 30 ... point cloud acquisition unit, 31 ... object candidate extraction unit, 32 ... object detection unit, 33 ... self-position estimation unit, 34 ... in-map position calculation unit, 35 ... traveling trajectory generation unit, 36 ... travel control unit, 40 ... object point cloud extraction unit, 41 ... clustering unit, 42 ... reference axis calculation unit, 43 ... composite point cloud forming unit, 44 ... fitting unit, 45 ... object recognition unit, 46 ... conversion unit

Claims (12)

  1.  An object recognition method comprising:
     acquiring positions of object surfaces around a vehicle as point cloud data consisting of a plurality of detection points;
     clustering the point cloud data into one or more clusters and extracting one of the one or more clusters as an object candidate;
     estimating, based on the extracted cluster, a reference axis extending in a vertical direction through a center of the object candidate;
     forming composite point cloud data obtained by combining point cloud data produced by rotating the extracted cluster 180 degrees about the reference axis with the extracted cluster; and
     recognizing the object by fitting a predetermined shape, which is a rectangle or a rectangular parallelepiped, to the composite point cloud data.
  2.  The object recognition method according to claim 1, wherein the object is recognized by fitting a rectangular parallelepiped to the composite point cloud data as three-dimensional point cloud data.
  3.  The object recognition method according to claim 1, wherein
     the point cloud data is acquired as three-dimensional point cloud data,
     the composite point cloud data is formed as two-dimensional point cloud data by projecting the three-dimensional point cloud data onto a horizontal plane and converting it into two-dimensional point cloud data, and
     the object is recognized by fitting a rectangle to the composite point cloud data.
  4.  The object recognition method according to claim 1, wherein
     the point cloud data is acquired as three-dimensional point cloud data,
     an occupancy grid map is generated from the three-dimensional point cloud data,
     the cluster is extracted by clustering the occupied grid cells of the occupancy grid map, and
     the object is recognized by fitting a rectangle to the composite point cloud data.
  5.  The object recognition method according to any one of claims 1 to 4, wherein only clusters among the one or more clusters that are determined to have a shape of a part of the predetermined shape are extracted as the object candidates.
  6.  The object recognition method according to claim 5, wherein only clusters among the one or more clusters that include a corner are extracted as the object candidates.
  7.  The object recognition method according to any one of claims 1 to 6, wherein a vertical line passing through a center of a minimum enclosing circle of two-dimensional point cloud data obtained by projecting the extracted cluster onto a horizontal plane is estimated as the reference axis.
  8.  The object recognition method according to claim 7, wherein the object is recognized only for clusters among the one or more clusters whose minimum enclosing circle is within a predetermined size range.
  9.  The object recognition method according to any one of claims 1 to 6, wherein a vertical line passing through a center of gravity of the extracted cluster is estimated as the reference axis.
  10.  The object recognition method according to any one of claims 1 to 6, wherein a vertical line passing through a midpoint of a line segment connecting the two most distant points among position information of the point cloud data included in the extracted cluster is estimated as the reference axis.
  11.  The object recognition method according to any one of claims 1 to 10, wherein an attribute of the object is determined based on at least one of a size and a shape of the predetermined shape fitted to the composite point cloud data.
  12.  An object recognition device comprising:
     a sensor that acquires positions of object surfaces around a vehicle as point cloud data consisting of a plurality of detection points; and
     a controller that clusters the point cloud data into one or more clusters, extracts any one of the one or more clusters as an object candidate, estimates, based on said any one of the clusters, a reference axis passing through the center of the object candidate and extending in the vertical direction, forms composite point cloud data obtained by combining said any one of the clusters with point cloud data produced by rotating said any one of the clusters by 180 degrees about the reference axis, and recognizes the object by fitting a predetermined shape that is a rectangle or a rectangular parallelepiped to the composite point cloud data.
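For illustration only (not part of the claims): each of the reference-axis estimates recited in claims 7, 9, and 10 amounts to choosing a point in the horizontal projection of the cluster and drawing a vertical line through it. The sketch below assumes the cluster is given as an N×3 NumPy array of (x, y, z) points; the function names are hypothetical, and an approximate Ritter bounding circle stands in for an exact minimum enclosing circle.

```python
import numpy as np

def axis_from_min_enclosing_circle(cluster_xyz):
    """Claim 7 style: vertical axis through the center of an (approximate)
    minimum enclosing circle of the cluster projected onto the x-y plane.
    Ritter's bounding-circle heuristic is used instead of an exact solver."""
    xy = cluster_xyz[:, :2]
    p = xy[0]
    q = xy[np.argmax(np.linalg.norm(xy - p, axis=1))]   # farthest from p
    r = xy[np.argmax(np.linalg.norm(xy - q, axis=1))]   # farthest from q
    center = (q + r) / 2.0
    radius = np.linalg.norm(q - r) / 2.0
    for pt in xy:                       # grow the circle to cover outliers
        d = np.linalg.norm(pt - center)
        if d > radius:
            radius = (radius + d) / 2.0
            center = center + (d - radius) / d * (pt - center)
    return center, radius               # axis = vertical line through `center`

def axis_from_centroid(cluster_xyz):
    """Claim 9 style: vertical axis through the centroid of the projected points."""
    return cluster_xyz[:, :2].mean(axis=0)

def axis_from_farthest_pair(cluster_xyz):
    """Claim 10 style: vertical axis through the midpoint of the two most
    distant points (brute-force O(n^2), adequate for a small cluster)."""
    xy = cluster_xyz[:, :2]
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    return (xy[i] + xy[j]) / 2.0
```

Under this reading, the filtering of claim 8 would simply discard clusters whose bounding-circle radius falls outside an expected range before any recognition is attempted.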
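Likewise a hedged sketch of the mirror-and-fit step in claim 12 (and claims 1 to 4), assuming the reference axis has already been estimated as above. Rotating a cluster 180 degrees about a vertical axis through (cx, cy) maps (x, y, z) to (2·cx − x, 2·cy − y, z); the rectangle fit shown here is a simple brute-force search over heading angles, not whatever fitting procedure an actual embodiment would use.

```python
import numpy as np

def mirror_about_vertical_axis(cluster_xyz, cx, cy):
    """Rotate the cluster 180 degrees about the vertical line through (cx, cy)."""
    mirrored = cluster_xyz.copy()
    mirrored[:, 0] = 2.0 * cx - cluster_xyz[:, 0]
    mirrored[:, 1] = 2.0 * cy - cluster_xyz[:, 1]
    return mirrored

def composite_point_cloud(cluster_xyz, cx, cy):
    """Composite data = original cluster plus its 180-degree rotated copy."""
    return np.vstack([cluster_xyz, mirror_about_vertical_axis(cluster_xyz, cx, cy)])

def fit_rectangle(composite_xyz, angle_step_deg=1.0):
    """Fit a minimum-area bounding rectangle to the horizontal projection by
    searching candidate headings in [0, 90) degrees. Returns the rotation
    angle that axis-aligns the footprint and the two side lengths."""
    xy = composite_xyz[:, :2]
    best = None
    for deg in np.arange(0.0, 90.0, angle_step_deg):
        t = np.radians(deg)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        r = xy @ rot.T
        extent = r.max(axis=0) - r.min(axis=0)
        area = extent[0] * extent[1]
        if best is None or area < best[0]:
            best = (area, float(deg), extent)
    _, heading_deg, (length, width) = best
    return heading_deg, float(length), float(width)

# Hypothetical usage, chaining the helpers above:
# center, _ = axis_from_min_enclosing_circle(cluster)
# merged = composite_point_cloud(cluster, *center)
# heading, length, width = fit_rectangle(merged)
```

Because the composite cloud is symmetric about the reference axis by construction, the rectangle (or rectangular parallelepiped) fit sees a more complete outline of the object than the raw, partially observed cluster, which is the effect the claims rely on.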
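Finally, a purely illustrative sketch of the attribute determination of claim 11; the thresholds below are invented placeholders and are not taken from the specification.

```python
def classify_attribute(length, width):
    """Guess an object attribute from the fitted rectangle's footprint.
    All thresholds are illustrative assumptions, not values from the patent."""
    long_side, short_side = max(length, width), min(length, width)
    if long_side < 1.0 and short_side < 1.0:
        return "pedestrian"
    if long_side < 2.5 and short_side < 1.2:
        return "two-wheeled vehicle"
    if long_side < 6.0:
        return "passenger car"
    return "truck or bus"
```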
PCT/IB2019/001225 2019-10-18 2019-10-18 Object recognition method and object recognition device WO2021074660A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2019/001225 WO2021074660A1 (en) 2019-10-18 2019-10-18 Object recognition method and object recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2019/001225 WO2021074660A1 (en) 2019-10-18 2019-10-18 Object recognition method and object recognition device

Publications (1)

Publication Number Publication Date
WO2021074660A1 true WO2021074660A1 (en) 2021-04-22

Family

ID=75537819

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2019/001225 WO2021074660A1 (en) 2019-10-18 2019-10-18 Object recognition method and object recognition device

Country Status (1)

Country Link
WO (1) WO2021074660A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003189293A (en) * 2001-09-07 2003-07-04 Matsushita Electric Ind Co Ltd Device for displaying state of surroundings of vehicle and image-providing system
JP2016148514A (en) * 2015-02-10 2016-08-18 国立大学法人金沢大学 Mobile object tracking method and mobile object tracking device
JP2017138219A (en) * 2016-02-04 2017-08-10 株式会社デンソー Object recognition device
US20180341019A1 (en) * 2017-05-26 2018-11-29 Toyota Motor Engineering & Manufacturing North America, Inc. Publishing lidar cluster data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OHASHI, YUYA ET AL.: "Points interpolation technique for generating mesh", PROCEEDINGS OF THE 70TH NATIONAL CONVENTION OF IPSJ, vol. 4, 13 March 2008 (2008-03-13), pages 4 - 399 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113343840A (en) * 2021-06-02 2021-09-03 合肥泰瑞数创科技有限公司 Object identification method and device based on three-dimensional point cloud
CN113343840B (en) * 2021-06-02 2022-03-08 合肥泰瑞数创科技有限公司 Object identification method and device based on three-dimensional point cloud
CN113807442A (en) * 2021-09-18 2021-12-17 湖南大学无锡智能控制研究院 Target shape and course estimation method and system
CN113807442B (en) * 2021-09-18 2022-04-19 湖南大学无锡智能控制研究院 Target shape and course estimation method and system

Similar Documents

Publication Publication Date Title
JP6906011B2 (en) Method for converting the 2D boundary frame of an object to the 3D position of an autonomous vehicle [METHOD FOR TRANSFORMING 2D BOUNDING BOXES OF OBJECTS INTO 3D POSITIONS FOR AUTONOMOUS DRIVING VEHICLES
US10769793B2 (en) Method for pitch angle calibration based on 2D bounding box and its 3D distance for autonomous driving vehicles (ADVs)
US11651553B2 (en) Methods and systems for constructing map data using poisson surface reconstruction
US9495602B2 (en) Image and map-based detection of vehicles at intersections
CN110816548A (en) Sensor fusion
US11255681B2 (en) Assistance control system
US10955857B2 (en) Stationary camera localization
US20220146676A1 (en) Doppler-assisted object mapping for autonomous vehicle applications
US10777084B1 (en) Vehicle location identification
US11361484B1 (en) Methods and systems for ground segmentation using graph-cuts
US20220205804A1 (en) Vehicle localisation
US20230384441A1 (en) Estimating three-dimensional target heading using a single snapshot
KR20200084938A (en) Method and Apparatus for Planning Car Motion
JP7321035B2 (en) OBJECT POSITION DETECTION METHOD AND OBJECT POSITION DETECTION DEVICE
WO2021074660A1 (en) Object recognition method and object recognition device
JP7032062B2 (en) Point cloud data processing device, mobile robot, mobile robot system, and point cloud data processing method
EP4285083A1 (en) Methods and system for generating a lane-level map for an area of interest for navigation of an autonomous vehicle
US20230384442A1 (en) Estimating target heading using a single snapshot
EP4141482A1 (en) Systems and methods for validating camera calibration in real-time
WO2023173076A1 (en) End-to-end systems and methods for streaming 3d detection and forecasting from lidar point clouds
US20220212694A1 (en) Methods and systems for generating a longitudinal plan for an autonomous vehicle based on behavior of uncertain road users
US11358598B2 (en) Methods and systems for performing outlet inference by an autonomous vehicle to determine feasible paths through an intersection
US20220067399A1 (en) Autonomous vehicle system for performing object detections using a logistic cylinder pedestrian model
US20230150543A1 (en) Systems and methods for estimating cuboid headings based on heading estimations generated using different cuboid defining techniques
US20240192369A1 (en) Systems and methods for infant track association with radar detections for velocity transfer

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19949362

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19949362

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP