WO2022206977A1 - A vehicle-road collaboration-oriented perception information fusion representation and target detection method - Google Patents

A vehicle-road collaboration-oriented perception information fusion representation and target detection method

Info

Publication number
WO2022206977A1
WO2022206977A1 PCT/CN2022/084925 CN2022084925W WO2022206977A1 WO 2022206977 A1 WO2022206977 A1 WO 2022206977A1 CN 2022084925 W CN2022084925 W CN 2022084925W WO 2022206977 A1 WO2022206977 A1 WO 2022206977A1
Authority
WO
WIPO (PCT)
Prior art keywords
lidar
voxel
point cloud
roadside
vehicle
Prior art date
Application number
PCT/CN2022/084925
Other languages
English (en)
French (fr)
Inventor
许军
赵聪
朱逸凡
陆日琪
Original Assignee
许军
马儒争
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 许军, 马儒争
Priority to CN202280026658.2A (published as CN117441113A)
Publication of WO2022206977A1

Classifications

    • G01S7/40 Means for monitoring or calibrating
    • G01S17/87 Combinations of systems using electromagnetic waves other than radio waves
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • G01S17/93 Lidar systems specially adapted for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S7/003 Transmission of data between radar, sonar or lidar systems and remote stations
    • G01S7/4808 Evaluating distance, position or velocity data
    • G01S7/4972 Alignment of sensor
    • G06F18/00 Pattern recognition
    • G06F18/20 Pattern recognition; Analysing
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G08G1/0116 Measuring and analysing of parameters relative to traffic conditions based on data from roadside infrastructure, e.g. beacons
    • G08G1/0133 Traffic data processing for classifying traffic situation
    • G08G1/0141 Measuring and analysing of parameters relative to traffic conditions for traffic information dissemination
    • G08G1/0145 Measuring and analysing of parameters relative to traffic conditions for active traffic flow control
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G08G1/048 Detecting movement of traffic with provision for compensation of environmental or other conditions, e.g. snow, vehicle stopped at detector
    • G08G1/164 Anti-collision systems; centralised systems, e.g. external to vehicles
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F9/451 Execution arrangements for user interfaces
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation

Definitions

  • the invention belongs to the technical field of automatic driving vehicle-road collaboration, and relates to a vehicle-road collaboration target detection method using perception information fusion representation.
  • Autonomous driving is a complete software and hardware interaction system.
  • the core technologies of autonomous driving include hardware (automotive manufacturing technology, autonomous driving chips), autonomous driving software, high-precision maps, and sensor communication networks. From the software point of view, it can be divided into the following three modules as a whole, namely environmental perception, behavioral decision-making and motion control.
  • Perception is the first link of autonomous driving and the link between the vehicle and the environment.
  • the overall performance of an autonomous driving system depends first and foremost on the performance of the perception system.
  • the perception of autonomous vehicles is achieved through sensors, where lidar uses laser light to detect and measure. The principle is to emit a pulsed laser to the surroundings, reflect it back after encountering an object, and calculate the distance through the time difference between back and forth, so as to establish a three-dimensional model of the surrounding environment.
  • Lidar has high detection accuracy and a long detection range; because the laser wavelength is short, it can detect very small targets at long distances.
  • the point cloud data perceived by lidar has a large amount of information and higher accuracy, and is mostly used for target detection and classification in the perception loop of autonomous driving.
  • LiDAR subverts the traditional two-dimensional projection imaging mode. It can collect depth information of the target surface and obtain relatively complete spatial information of the target; after data processing, the three-dimensional surface of the target is reconstructed, yielding a three-dimensional figure that better reflects the geometric shape of the target. It can also obtain rich feature information such as target surface reflection characteristics and motion speed, providing sufficient information support for data processing such as target detection, recognition, and tracking, and reducing algorithm difficulty. On the other hand, the use of active laser technology gives it high measurement resolution, strong anti-interference ability, strong anti-stealth ability, strong penetrating ability, and all-weather operation.
  • lidar can be divided into mechanical lidar and solid-state lidar.
  • although solid-state lidar is widely considered the future trend, mechanical lidar still occupies the mainstream position in the current lidar market.
  • Mechanical lidars have rotating parts that control the angle of laser emission, while solid-state lidars do not require mechanical rotating parts and rely mainly on electronic components to control the angle of laser emission.
  • lidar is basically the most important sensor in its environment perception module, and undertakes most tasks such as real-time map establishment, positioning and target detection in environment perception.
  • Google Waymo's sensor configuration includes five lidars: four medium- and short-range multi-line side lidars distributed on the front, rear, left, and right of the vehicle to supplement blind-spot vision, and a high line-count lidar on top for large-scale perception, whose blind spots are in turn covered by the four side lidars.
  • Point cloud data refers to a set of vectors in a three-dimensional coordinate system. These vectors are usually represented in the form of X, Y, Z three-dimensional coordinates. In addition to the three-dimensional coordinates of each point, some may contain color information (RGB) or reflection intensity information (Intensity).
  • RGB color information
  • Intensity reflection intensity information
  • the X, Y, and Z columns of data represent the three-dimensional position of the point data in the sensor coordinate system or the world coordinate system, generally in meters.
  • the Intensity column represents the laser reflection intensity at each point. The value has no unit and is generally normalized to between 0 and 255.
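  • As a minimal illustration only (the array layout and sizes below are assumptions, not taken from the patent), such a point cloud can be held as an N x 4 array with one row per point:

    import numpy as np

    # Hypothetical example: one lidar frame held as an N x 4 array with columns
    # X, Y, Z (metres) and Intensity (normalised to the 0-255 range).
    num_points = 120_000
    xyz = np.random.uniform(-50.0, 50.0, size=(num_points, 3)).astype(np.float32)
    intensity = np.random.uniform(0.0, 255.0, size=(num_points, 1)).astype(np.float32)
    points = np.hstack([xyz, intensity])          # shape: (N, 4)

    print(points.shape, points[:, 3].min(), points[:, 3].max())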
  • because the installation height of the vehicle lidar is limited by the size of the vehicle, the information it can detect is easily affected by obstructions around the vehicle. For example, a truck driving in front of a small car can almost completely block the forward view of the lidar on the car.
  • the performance of the lidar itself is also limited by the overall cost of the vehicle, and the vehicle end is often not equipped with a relatively expensive high line-count lidar. Therefore, the point cloud data obtained by the vehicle lidar often has blind spots or is sparse, and it is difficult to complete the autonomous driving perception task relying only on the vehicle's own sensors.
  • lidars installed on roadside facilities have a clearer view and are not easily blocked, because they can be placed on higher gantries or lamp posts.
  • roadside lidar also has a higher cost tolerance: it can use lidar with a higher line count and can be paired with a roadside computing unit with more computing power, achieving higher detection performance and faster detection speed.
  • the vehicle-road coordination system is in the upsurge of research and testing.
  • the intelligent vehicle-road collaboration solution based on V2X technology can enhance the assisted driving functions realizable at this stage, improve vehicle driving safety and road operation efficiency, and remain useful in the long run.
  • the existing LiDAR vehicle-road coordination scheme is that vehicles and roadside facilities each perform target detection based on LiDAR point cloud data, and then the facility side sends the detection results to the vehicle.
  • Most existing research focuses on the transmission of data: on reliability analysis, on calculating the relative pose between the vehicle end and the road end, or on handling the data transmission delay between the two ends, while assuming by default that the vehicle-road coordination process transmits only the final target detection results. Although this solution requires little data transmission, it still cannot fully exploit the detection data from both ends. For example, when neither the vehicle end nor the road end detects a relatively complete target point cloud, missed detections and false detections easily occur, leading to errors in the cooperative target detection results. To address this, some scholars propose sending the raw point cloud data directly to prevent information loss. For example, the Cooper framework proposed in 2019 was the first cooperative perception scheme at the raw point cloud level, and it greatly improves perception performance by fusing point cloud data from different sources.
  • Patent document US20150187216A1
  • the present invention provides a vehicle-road collaboration-oriented perception information fusion representation and target detection method, i.e., a vehicle-road collaboration scheme based on lidar point cloud data that balances the amount of transmitted data against the degree of information loss.
  • the specific technical problems to be solved include determining the roadside lidar layout plan, selecting the roadside lidar external parameter calibration method, calculating the deflection parameters based on the relative pose of the autonomous vehicle and the roadside lidar, and determining an appropriate collaborative representation of the information.
  • the objective of the present invention is to reduce the amount of information transmission on the premise of ensuring the vehicle-road cooperative perception capability.
  • the roadside computing device calculates the relative pose of the autonomous vehicle relative to the roadside lidar based on the autonomous vehicle positioning data and the roadside lidar external parameters;
  • the roadside computing device deflects the roadside lidar point cloud detected by the roadside lidar into the coordinate system of the autonomous driving vehicle according to the relative pose to obtain the deflection point cloud.
  • the roadside computing device performs voxelization processing on the deflection point cloud to obtain the voxelized deflection point cloud.
  • the autonomous vehicle performs voxelization processing on the vehicle lidar point cloud detected by the vehicle lidar to obtain the voxelized vehicle lidar point cloud;
  • the roadside computing device calculates the voxel-level features of the voxelized deflection point cloud to obtain the voxel-level features of the deflection point cloud.
  • the autonomous vehicle calculates voxelized vehicle lidar point cloud voxel-level features, and obtains vehicle-mounted lidar point cloud voxel-level features;
  • Sub-scheme I completes steps G1, H1, and I1 on the roadside computing device;
  • Sub-scheme II completes steps G2, H2, and I2 on the autonomous vehicle;
  • Sub-scheme III completes steps G3, H3, and I3 in the cloud.
  • the autonomous vehicle compresses the vehicle-mounted lidar point cloud voxel-level features to obtain compressed vehicle-mounted lidar point cloud voxel-level features and transmits them to the roadside computing device; the roadside computing device receives the compressed vehicle-mounted lidar point cloud voxel-level features and restores them to vehicle-mounted lidar point cloud voxel-level features;
  • the roadside computing device performs data splicing and data aggregation on the vehicle-mounted lidar point cloud voxel-level features and the deflection point cloud voxel-level features to obtain aggregated voxel-level features;
  • a roadside computing device inputs aggregated voxel-level features into a three-dimensional target detection network model based on voxel-level features to obtain target detection results, and transmits the target detection results to an autonomous vehicle;
  • the roadside computing device compresses the deflection point cloud voxel-level features to obtain compressed deflection point cloud voxel-level features and transmits them to the autonomous driving vehicle; the autonomous driving vehicle receives the compressed deflection point cloud voxel-level features and restores them to deflection point cloud voxel-level features;
  • the autonomous vehicle performs data splicing and data aggregation on the vehicle-mounted lidar point cloud voxel-level features and the deflection point cloud voxel-level features to obtain aggregated voxel-level features;
  • the self-driving vehicle inputs the aggregated voxel-level features into a three-dimensional target detection network model based on the voxel-level features to obtain target detection results;
  • the autonomous vehicle compresses the voxel-level features of the vehicle lidar point cloud to obtain the voxel-level features of the compressed vehicle lidar point cloud, and transmits it to the cloud.
  • the roadside computing device compresses the voxel-level features of the deflection point cloud, obtains the voxel-level features of the compressed deflection point cloud, and transmits it to the cloud.
  • the cloud receives the compressed deflection point cloud voxel-level features and the compressed vehicle lidar point cloud voxel-level features, restores the compressed deflection point cloud voxel-level features to deflection point cloud voxel-level features, and restores the compressed vehicle lidar point cloud voxel-level features to vehicle lidar point cloud voxel-level features;
  • the cloud performs data splicing and data aggregation on the voxel-level features of the vehicle lidar point cloud and the voxel-level features of the deflection point cloud to obtain aggregated voxel-level features;
  • the cloud inputs the aggregated voxel-level features into a three-dimensional target detection network model based on voxel-level features to obtain target detection results, and transmits the target detection results to the autonomous vehicle.
  • the roadside laser radar layout is determined according to the existing roadside column facilities and the installed laser radar type in the vehicle-road coordination scene.
  • the existing roadside LiDAR installation is in the form of vertical poles or horizontal poles, and the specific installation location is on the infrastructure pillars with power support such as roadside gantry, street lamps, and signal lamp posts.
  • according to whether there are rotating parts inside, lidar can be divided into mechanical rotary lidar, hybrid solid-state lidar, and solid-state lidar.
  • mechanical rotary lidar and solid-state lidar are two types of lidar commonly used on the roadside.
  • a roadside lidar with a detection range greater than or equal to the scene range or including key areas in the scene can be deployed.
  • roadside lidar layout guidelines: for long-distance and large-scale complex scenes such as expressways, highways, and parks, it is recommended to follow the roadside lidar layout guidelines below to ensure that the roadside lidar coverage meets the full-coverage requirements of the scene, i.e., each single roadside lidar supplements the detection blind spots beneath the other roadside lidars, so as to achieve better vehicle-road coordinated target detection.
  • the roadside lidar layout guidelines are divided into roadside mechanical rotating lidar layout guidelines and roadside all-solid-state lidar layout guidelines according to the types of roadside lidars used.
  • the mechanical rotating lidar realizes laser scanning through mechanical rotation; the laser-emitting components are arranged as a vertical line array of laser sources, and a lens produces beams pointing at different angles in the vertical plane; driven by a motor, the assembly rotates continuously so that the beam pattern in the vertical plane sweeps from a "line" into a "plane", forming multiple laser "planes" by rotating and scanning, thereby realizing detection in the detection area.
  • Hybrid solid-state lidar refers to the use of semiconductor "micro-movement” devices (such as MEMS scanning mirrors) to replace macroscopic mechanical scanners, and to achieve laser scanning at the radar transmitter at the microscopic scale.
  • the roadside mechanical rotary lidar and roadside hybrid solid-state lidar layout guidelines require that these lidars be installed horizontally to ensure full utilization of beam information in all directions. As shown in Figure 2, the layout of roadside mechanical rotating lidar and roadside hybrid solid-state lidar should at least meet the following requirements:
  • H_a represents the installation height of the roadside mechanical rotating lidar or roadside hybrid solid-state lidar
  • L_a represents the distance between two adjacent roadside mechanical rotating lidar or roadside hybrid solid-state lidar installation poles
  • the all-solid-state lidar completely cancels the mechanical scanning structure, and the laser scanning in both horizontal and vertical directions is realized electronically.
  • the phased laser transmitter is a rectangular array composed of several transmitting and receiving units. By changing the phase difference of the light emitted by different units in the array, the purpose of adjusting the angle and direction of the emitted laser can be achieved.
  • the laser light source enters the optical waveguide array after passing through the optical beam splitter, and the phase of the light wave is changed by means of external control on the waveguide, and the beam scanning is realized by using the optical wave phase difference between the waveguides.
  • the roadside all-solid-state lidar layout guidelines require that the roadside all-solid-state lidar should at least meet the following requirements:
  • H_b represents the installation height of the roadside all-solid-state lidar
  • L_b represents the distance between two adjacent roadside all-solid-state lidar installation poles
  • H_c represents the installation height of the roadside all-solid-state lidar
  • L_c represents the distance between two adjacent roadside all-solid-state lidar installation poles
  • the roadside mechanical rotating lidar or all-solid-state lidar is laid out according to the above requirements, and the scanning area of each lidar is increased where conditions permit.
  • when the existing facilities cannot satisfy the roadside lidar layout guidelines, new poles can be erected and the number of roadside lidars increased so that the layout meets the guidelines.
  • the external parameters of the lidar can be represented by the following vector:
  • V_0 = [x_0  y_0  z_0  a_0  β_0  γ_0]    (4)
  • x_0 represents the X coordinate of the roadside lidar in the reference coordinate system
  • y_0 represents the Y coordinate of the roadside lidar in the reference coordinate system
  • z_0 represents the Z coordinate of the roadside lidar in the reference coordinate system
  • a_0 represents the rotation angle of the roadside lidar around the X axis in the reference coordinate system
  • β_0 represents the rotation angle of the roadside lidar around the Y axis in the reference coordinate system
  • γ_0 represents the rotation angle of the roadside lidar around the Z axis in the reference coordinate system
  • the above-mentioned reference coordinate system may be a latitude and longitude coordinate system represented by GCJ02 and WGS84, or a geodetic coordinate system based on a certain geographic point, such as Beijing 54 coordinate system and Xi'an 80 coordinate system.
  • the relationship between the actual coordinates of a point in the reference coordinate system and the coordinates in the roadside lidar coordinate system obtained after being detected by the above lidar is:
  • x_lidar is the X coordinate of the point in the roadside lidar coordinate system
  • y_lidar is the Y coordinate of the point in the roadside lidar coordinate system
  • z_lidar is the Z coordinate of the point in the roadside lidar coordinate system
  • x_real is the X coordinate of the point in the reference coordinate system
  • y_real is the Y coordinate of the point in the reference coordinate system
  • z_real is the Z coordinate of the point in the reference coordinate system
  • R_x(a_0), R_y(β_0), R_z(γ_0) are the sub-rotation matrices calculated from the three angle extrinsic parameters a_0, β_0, and γ_0;
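  • A minimal sketch of these sub-rotation matrices, assuming the common right-handed convention; the composition order and signs shown in the comment are only one plausible choice, since the patent's exact convention is not reproduced here:

    import numpy as np

    def rot_x(a):
        """Rotation about the X axis by angle a (radians)."""
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(b):
        """Rotation about the Y axis by angle b (radians)."""
        c, s = np.cos(b), np.sin(b)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(g):
        """Rotation about the Z axis by angle g (radians)."""
        c, s = np.cos(g), np.sin(g)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    # One plausible composition for mapping reference-frame coordinates into the
    # roadside lidar frame given the extrinsic vector V0 = [x0 y0 z0 a0 b0 g0]:
    # p_lidar = R.T @ (p_real - t), with R = rot_z(g0) @ rot_y(b0) @ rot_x(a0)
    # and t = [x0, y0, z0].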
  • the specific values of the external parameters of the roadside lidar are calculated by measuring the coordinates of the control point in the roadside lidar coordinate system and the reference coordinate system. The steps are as follows:
  • the reflectivity feature points refer to the points whose reflectivity is significantly different from the surrounding objects, such as traffic signs, license plates, etc.
  • the purpose of selecting reflectivity feature points as control points is that their position and reflection intensity differ clearly from the other points in the point cloud data, so the corresponding points can be found quickly, making it easy to establish the correspondence between points in multiple point clouds and coordinates in the reference coordinate system.
  • Control points should be distributed as discretely as possible. As long as the scene environment allows and the control point selection meets the following requirements, the more control points there are, the better the calibration effect.
  • the control point selection requirements include: it should be discretely distributed, and any three control points should not be collinear; within the detection range of the roadside lidar, the selected control point should be as far away as possible from the roadside lidar, usually this distance should be greater than 50% of the farthest detection distance of lidar. When it is difficult to select control points at 50% of the farthest detection distance of the lidar due to scene limitations, the control points can be selected at less than 50% of the farthest detection distance of the lidar, but the number of control points should be increased.
  • a three-dimensional registration algorithm is used to calculate the optimal value of the lidar external parameter vector V_0, and the result is used as the calibration result.
  • Commonly used three-dimensional registration algorithms include ICP algorithm, NDT algorithm, etc.
  • when applied to the lidar external parameter calibration problem, the ICP algorithm is mainly used.
  • the basic principle of the ICP algorithm is to find, between the matched target point set P (the coordinate set of the control points in the roadside lidar coordinate system) and the source point set Q (the coordinate set of the control points in the reference coordinate system), the optimal transformation parameters that minimize the error function.
  • the error function is: E(R, T) = (1/n) * Σ_{i=1}^{n} ||q_i - (R·p_i + T)||²
  • E(R,T) is the target error function
  • R is the rotation transformation matrix
  • T is the translation transformation matrix
  • n is the number of nearest point pairs in the point set
  • p_i is the coordinate of the i-th point in the target point set P;
  • q_i is the point in the source point set Q that forms the closest point pair with point p_i;
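  • Because the control points give known one-to-one correspondences, the (R, T) minimising the error function above can also be obtained in closed form via an SVD-based (Kabsch/Umeyama-style) solution instead of iterating; this sketch is a generic illustration of that step, not code from the patent:

    import numpy as np

    def fit_rigid_transform(P, Q):
        """Find R, T minimising sum ||q_i - (R p_i + T)||^2 for matched points.

        P: (n, 3) control points in the roadside lidar coordinate system.
        Q: (n, 3) the same control points in the reference coordinate system.
        """
        p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_mean).T @ (Q - q_mean)          # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        T = q_mean - R @ p_mean
        return R, T

    # Usage with hypothetical measured control points:
    # R, T = fit_rigid_transform(points_in_lidar_frame, points_in_reference_frame)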
  • the relative pose of the autonomous vehicle and the roadside lidar is determined according to the positioning data of the autonomous vehicle and the external parameter calibration results of the roadside lidar in the preparatory work. Its relative pose is calculated according to the following formula:
  • V′ = [V′_xyz  V′_aβγ]    (12)
  • V_1 = [x_1  y_1  z_1  a_1  β_1  γ_1]^T    (15)
  • V′ is the position and angle vector of the autonomous vehicle relative to the roadside lidar
  • V′_xyz is the position vector of the autonomous vehicle relative to the roadside lidar
  • V′_aβγ is the angle vector of the autonomous vehicle relative to the roadside lidar
  • V_1 is the position and angle vector of the autonomous vehicle in the reference coordinate system
  • the roadside lidar point cloud D_r is deflected into the autonomous vehicle coordinate system according to [x_ego  y_ego  z_ego  1]^T = H_rc · [x_lidar  y_lidar  z_lidar  1]^T, where:
  • H_rc is the transformation matrix that deflects the roadside lidar coordinate system into the autonomous vehicle coordinate system
  • x_ego, y_ego, and z_ego are the coordinates of a point of the roadside lidar point cloud after deflection into the autonomous driving vehicle coordinate system, and the corresponding point coordinates in the roadside lidar coordinate system are [x_lidar  y_lidar  z_lidar]^T;
  • O is the perspective transformation vector; since there is no perspective transformation in this scene, O is taken as [0 0 0];
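  • A minimal sketch of this deflection step, assuming H_rc takes the usual 4x4 homogeneous form [[R, T], [O, 1]] with O = [0 0 0]; the function and variable names are illustrative, not taken from the patent:

    import numpy as np

    def make_homogeneous(R, T):
        """Assemble H_rc = [[R, T], [O, 1]] with O = [0 0 0] (no perspective part)."""
        H = np.eye(4)
        H[:3, :3] = R
        H[:3, 3] = T
        return H

    def deflect_point_cloud(points_lidar, H_rc):
        """Map an (N, 4) roadside point cloud [x y z intensity] into the vehicle frame."""
        xyz = points_lidar[:, :3]
        ones = np.ones((xyz.shape[0], 1))
        xyz_h = np.hstack([xyz, ones])                     # homogeneous coordinates
        xyz_ego = (H_rc @ xyz_h.T).T[:, :3]                # [x_ego y_ego z_ego]
        return np.hstack([xyz_ego, points_lidar[:, 3:4]])  # keep intensity unchanged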
  • Voxel is the abbreviation of Volume Pixel, which is the smallest unit of digital data in three-dimensional space division. Conceptually similar to the smallest unit of two-dimensional space - pixel.
  • the data features can be calculated once for the point cloud data in each voxel, and the features of the set composed of the point cloud data in each voxel are called voxel-level features.
  • a large class of existing 3D target detection algorithms processes lidar point cloud data based on voxel-level features: the point cloud data is voxelized, voxel-level features are extracted, and these are fed into a subsequent voxel-feature-based 3D target detection network model to obtain the target detection result.
  • the size of the designed voxel is [D_V  W_V  H_V].
  • the vehicle lidar point cloud is divided into voxels according to the designed voxel size.
  • S_ego is the spatial range where the vehicle lidar point cloud D_c is located
  • S_lidar′ is the spatial range of the expanded deflection point cloud
  • K_lidar_start′ and K_lidar_end′ are the start and end values of the expanded deflection point cloud's range in the K dimension
  • K_lidar_start and K_lidar_end are the start and end values of the deflection point cloud's range in the K dimension
  • K_ego_start and K_ego_end are the start and end values of the range of the vehicle lidar point cloud D_c in the K dimension;
  • V_K is the size of the voxel in the K dimension
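  • A minimal voxelisation sketch (the function names and grid-origin handling are assumptions for illustration); using the same voxel size and the same grid origin for both point clouds keeps their voxel lattices coincident, as required above:

    import numpy as np
    from collections import defaultdict

    def voxelize(points, voxel_size, grid_origin):
        """Group points (N, 4) into voxels of size voxel_size anchored at grid_origin.

        Returns a dict mapping integer voxel indices (i, j, k) to the points inside.
        Using the same grid_origin for both point clouds keeps their lattices aligned.
        """
        voxel_size = np.asarray(voxel_size, dtype=np.float32)
        grid_origin = np.asarray(grid_origin, dtype=np.float32)
        indices = np.floor((points[:, :3] - grid_origin) / voxel_size).astype(np.int32)

        voxels = defaultdict(list)
        for idx, point in zip(map(tuple, indices), points):
            voxels[idx].append(point)
        return {idx: np.stack(pts) for idx, pts in voxels.items()}

    # Example with the 0.4 m x 0.4 m x 0.5 m voxels used in Embodiment 1:
    # voxels_ego = voxelize(vehicle_points, (0.4, 0.4, 0.5), grid_origin)
    # voxels_road = voxelize(deflected_points, (0.4, 0.4, 0.5), grid_origin)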
  • the method used to compute the voxel-level features of the point cloud varies.
  • the steps are as follows:
  • x_i, y_i, and z_i are the X, Y, and Z coordinates of the i-th point, respectively;
  • r_i is the reflection intensity of the i-th point
  • the processing logic of the VFE layer is to first pass each expanded scatter point through a fully connected layer to obtain point-level features for each point, then apply max pooling to the point-level features to obtain a voxel-level feature, and finally concatenate the voxel-level feature with the point-level features to obtain the point-level concatenated feature result.
  • each voxel-level feature is a 1 ⁇ C-dimensional vector.
  • using the above method on the voxelized vehicle lidar point cloud and the voxelized deflection point cloud, the vehicle lidar point cloud voxel-level features and the deflection point cloud voxel-level features can be obtained respectively.
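  • A minimal sketch of one such VFE layer in the VoxelNet style (point-wise fully connected layer, max pooling within the voxel, then concatenation); the layer sizes are illustrative and this is not the patented network itself:

    import torch
    import torch.nn as nn

    class VFELayer(nn.Module):
        """One Voxel Feature Encoding layer: point-wise FC -> voxel max-pool -> concat."""

        def __init__(self, in_channels, out_channels):
            super().__init__()
            self.fc = nn.Linear(in_channels, out_channels // 2)
            self.relu = nn.ReLU()

        def forward(self, voxel_points):
            # voxel_points: (num_points_in_voxel, in_channels)
            point_feats = self.relu(self.fc(voxel_points))            # point-level features
            voxel_feat = point_feats.max(dim=0, keepdim=True).values  # voxel-level feature
            repeated = voxel_feat.expand_as(point_feats)
            return torch.cat([point_feats, repeated], dim=1)          # concatenated result

    # After stacking several VFE layers, a final fully connected layer plus max pooling
    # yields one 1 x C voxel-level feature per non-empty voxel (C = 128 in the embodiments).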
  • because point clouds are sparse in space, many voxels contain no scattered data and therefore have no corresponding voxel-level feature.
  • by storing only the non-empty voxels in a suitable structure, the data size can be greatly compressed and the transmission difficulty when sending to the processing device is reduced; this is the compression of the point cloud voxel-level features.
  • One of the special structures available is a hash table, which is a data structure that is directly accessed based on key values. It accesses records by mapping the key value to a location in the table to speed up lookups.
  • the hash key of the hash table is the spatial coordinate of the voxel, and the corresponding value is the voxel-level feature.
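  • In Python a plain dictionary already provides this kind of hash table; the sketch below (with hypothetical helper names) keeps only the non-empty voxels, keyed by their voxel coordinates:

    def compress_voxel_features(voxel_features):
        """voxel_features: dict mapping voxel index (i, j, k) -> 1 x C feature vector.

        Empty voxels are simply absent, so the table only carries non-empty entries;
        this is what keeps the transmitted data small.
        """
        return {coord: feat for coord, feat in voxel_features.items() if feat is not None}

    def lookup(compressed, coord):
        """Restore access on the receiving side: a missing key means an empty voxel."""
        return compressed.get(coord)   # None when the voxel contained no points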
  • G_2: the roadside computing device compresses the deflection point cloud voxel-level features to obtain compressed deflection point cloud voxel-level features and transmits them to the self-driving vehicle; the self-driving vehicle receives the compressed deflection point cloud voxel-level features and restores them to deflection point cloud voxel-level features.
  • the autonomous vehicle compresses the vehicle-mounted lidar point cloud voxel-level features to obtain compressed vehicle-mounted lidar point cloud voxel-level features and transmits them to the cloud.
  • the roadside computing device compresses the deflection point cloud voxel-level features to obtain compressed deflection point cloud voxel-level features and transmits them to the cloud.
  • the cloud receives the compressed deflection point cloud voxel-level features and the compressed vehicle-mounted lidar point cloud voxel-level features, restores the compressed deflection point cloud voxel-level features to deflection point cloud voxel-level features, and restores the compressed vehicle-mounted lidar point cloud voxel-level features to vehicle-mounted lidar point cloud voxel-level features.
  • when a voxel at a given coordinate is non-empty in only one of the two feature sets, the voxel-level feature of the non-empty one is taken as the aggregated voxel-level feature.
  • when a voxel is non-empty in both feature sets, the final aggregated voxel-level feature is calculated as f_k = max(f_ego_k, f_lidar_k), where:
  • f_k is the value of the aggregated voxel-level feature at position k;
  • f_ego_k is the value of the vehicle lidar point cloud voxel-level feature at position k;
  • f_lidar_k is the value of the deflection point cloud voxel-level feature at position k;
  • the features of the same coordinate voxels are aggregated using the method of max pooling.
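  • A minimal sketch of the splicing-and-aggregation rule described above: voxels present on only one side are copied, and voxels present on both sides are merged by element-wise max pooling; the function name is illustrative:

    import numpy as np

    def aggregate_voxel_features(ego_feats, road_feats):
        """Merge two {voxel index -> feature vector} tables aligned in the vehicle frame."""
        merged = {}
        for coord in set(ego_feats) | set(road_feats):
            f_ego = ego_feats.get(coord)
            f_road = road_feats.get(coord)
            if f_ego is None:
                merged[coord] = f_road                      # only the roadside voxel is non-empty
            elif f_road is None:
                merged[coord] = f_ego                       # only the vehicle voxel is non-empty
            else:
                merged[coord] = np.maximum(f_ego, f_road)   # element-wise max pooling
        return merged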
  • the roadside computing device performs data splicing and data aggregation on the vehicle lidar point cloud voxel-level features and the deflection point cloud voxel-level features according to the above method to obtain aggregated voxel-level features.
  • the self-driving vehicle performs data splicing and data aggregation on the vehicle lidar point cloud voxel-level features and the deflection point cloud voxel-level features according to the above method to obtain aggregated voxel-level features.
  • the cloud performs data splicing and data aggregation on the vehicle lidar point cloud voxel-level features and the deflection point cloud voxel-level features according to the above method to obtain aggregated voxel-level features.
  • the aggregated voxel-level features are input into the subsequent 3D target detection network model to obtain the detection target. Still taking VoxelNet as an example, after the aggregated voxel-level features are obtained, they are input into the 3D target detection network model based on voxel-level features to obtain target detection results.
  • the target detection result can be expressed as U = {u_1, u_2, ..., u_n}, specifically:
  • u_i is the information of the i-th target in the target detection result
  • x_i is the x-axis coordinate of the i-th detection target in the autonomous vehicle coordinate system
  • y_i is the y-axis coordinate of the i-th detection target in the autonomous vehicle coordinate system
  • z_i is the z-axis coordinate of the i-th detection target in the autonomous vehicle coordinate system
  • C_i is the confidence of the i-th detection target
  • W_i is the width of the detection frame corresponding to the i-th detection target
  • D_i is the length of the detection frame corresponding to the i-th detection target
  • H_i is the height of the detection frame corresponding to the i-th detection target
  • v_xi is the projection of the motion speed of the i-th detection target on the x-axis of the autonomous vehicle coordinate system.
  • v_yi is the projection of the motion speed of the i-th detection target on the y-axis of the autonomous vehicle coordinate system.
  • v_zi is the projection of the motion speed of the i-th detection target on the z-axis of the autonomous vehicle coordinate system.
  • the target detection result should include at least the position of the target, i.e., x_i, y_i, and z_i.
  • the target detection result may also include some or all of the attributes C_i, W_i, D_i, H_i, v_xi, v_yi, and v_zi. Among them, the three attributes W_i, D_i, and H_i must either all be present or all be absent in the target detection result, and likewise the three attributes v_xi, v_yi, and v_zi must either all be present or all be absent.
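  • One way to hold a single detection u_i with the attributes listed above is sketched below (a hypothetical container for illustration, not the patented format); the box-size and velocity fields are grouped so that each triple is either fully present or fully absent, as required:

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Detection:
        x: float                      # position in the autonomous-vehicle coordinate system
        y: float
        z: float
        confidence: Optional[float] = None                      # C_i
        box_size: Optional[Tuple[float, float, float]] = None   # (W_i, D_i, H_i), all or none
        velocity: Optional[Tuple[float, float, float]] = None   # (v_xi, v_yi, v_zi), all or none

    # The full detection result U is then simply a list of Detection objects.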
  • object detection is performed on the roadside computing device.
  • the roadside computing device inputs the aggregated voxel-level features into the voxel-feature-based 3D target detection network model to obtain the target detection result U, and transmits the target detection result to the autonomous vehicle.
  • object detection is performed on the autonomous vehicle.
  • the autonomous vehicle inputs the aggregated voxel-level features into the voxel-feature-based 3D target detection network model to obtain the target detection result U.
  • object detection is performed in the cloud.
  • the cloud inputs the aggregated voxel-level features into the voxel-feature-based 3D target detection network model to obtain the target detection result, and transmits the target detection result U to the autonomous vehicle.
  • roadside lidar as a complement to autonomous vehicle perception improves the range and accuracy of autonomous vehicle recognition of surrounding objects.
  • the use of voxelized features as the data transmitted between the vehicle end and the road end not only ensures that the original data information is hardly lost, but also reduces the bandwidth requirements during data transmission.
  • Figure 1 shows a proposed vehicle-road collaboration-oriented perception information fusion representation and target detection method
  • Figure 2 is a schematic diagram of laying a roadside mechanical rotating lidar
  • Figure 3 is a schematic diagram of laying a roadside all-solid-state lidar
  • Figure 4 is a schematic diagram of the roadside all-solid-state lidar layout (two lidars facing opposite directions installed on the same pole)
  • FIG. 5 is a schematic diagram of the VFE layer processing point cloud data
  • Figure 6 is a schematic diagram of voxelized feature extraction and aggregation
  • Figure 7 is a schematic diagram of the target detection of the merged voxel point cloud
  • Figure 8 is a schematic diagram of the roadside lidar point cloud coordinate transformation
  • Figure 9 is a schematic comparison of target detection results (the left image shows the vehicle-road collaborative detection method proposed in this patent, and the right image shows the results of directly combining the high-confidence targets detected by each end)
  • the invention relates to a vehicle-road coordination-oriented perception information fusion representation and target detection method. It can be divided into three main steps:
  • the first step is the installation of roadside lidar sensors and the pre-calibration work.
  • the layout of roadside lidar is determined based on the existing roadside column facilities and the type of lidar installed in the vehicle-road collaboration scenario.
  • the existing roadside LiDAR installation is in the form of vertical poles or horizontal poles, and the specific installation location is on the infrastructure pillars with power support such as roadside gantry, street lamps, and signal lamp posts.
  • a roadside lidar with a detection range greater than or equal to the scene range or including key areas in the scene can be deployed.
  • for long-distance and large-scale complex scenes, it is recommended to follow the roadside lidar layout guidelines given in the Summary of the Invention so that the roadside lidar coverage meets the full-coverage requirements of the scene, i.e., each single roadside lidar supplements the detection blind spots beneath the other roadside lidars in the scene, achieving a better vehicle-road coordinated target detection effect.
  • the roadside lidar is used to improve the perception capability of the autonomous vehicle, that is, the ability to obtain information such as the position of targets around the vehicle relative to the vehicle, their category, size, and direction of travel. Therefore, the roadside lidar itself should also have as strong a perception capability as possible, and parameters such as the number of lidar lines and sampling frequency should, as far as possible, not be lower than the corresponding parameters of the vehicle lidar.
  • the sensing range of the roadside lidar should cover all areas where occlusions occur frequently, and the roadside lidar's own view of the detection area should be kept clear and unobstructed.
  • the installation position and angle of the roadside lidar need to be calibrated, that is, external parameter calibration: the coordinate position parameters and angle attitude parameters of the lidar relative to a reference coordinate system are obtained.
  • Reflectivity feature points are selected as control points in the roadside lidar detection area.
  • the reflectivity feature points refer to the points whose reflectivity is significantly different from the surrounding objects, such as traffic signs, license plates, etc.
  • the purpose of selecting reflectivity feature points as control points is that their position and reflection intensity differ clearly from the other points in the point cloud data, so the corresponding points can be found quickly.
  • Control points should be distributed as discretely as possible. As long as the scene environment allows and the control point selection meets the following requirements, the more control points there are, the better the calibration effect.
  • the control point selection requirements include: the control points should be discretely distributed, and any three control points should not be collinear; within the detection range of the roadside lidar, the selected control points should be as far away as possible from the roadside lidar, usually farther than 50% of the lidar's farthest detection distance.
  • when scene limitations make it difficult to select control points at 50% of the lidar's farthest detection distance, the control points can be selected at less than 50% of the farthest detection distance, but the number of control points should be increased. A handheld high-precision RTK or other high-precision measuring instrument is then used to measure the precise coordinates of the control points, and the corresponding point coordinates are found in the roadside lidar point cloud; when a high-precision map file of the roadside lidar layout scene is available, there is no need to use handheld high-precision RTK or other high-precision measuring instruments, and the coordinates of the corresponding feature points can be read directly from the high-precision map.
  • the three-dimensional registration algorithm is used to calculate the optimal value of the external parameter vector of the lidar, and the result is used as the calibration result.
  • Commonly used three-dimensional registration algorithms include ICP algorithm, NDT algorithm, etc.
  • when applied to the lidar external parameter calibration problem, the ICP algorithm is mainly used.
  • the basic principle of the ICP algorithm is to find, between the matched target point set P (the coordinate set of the control points in the roadside lidar coordinate system) and the source point set Q (the coordinate set of the control points in the reference coordinate system), the optimal transformation parameters that minimize the error function.
  • the method used to calibrate the roadside lidar external parameters is not limited here, but it should be ensured that the calibration result includes the sensor's 3D world coordinates, as well as the pitch, yaw, and roll angles for the point cloud deflection in the subsequent steps.
  • the second step is the processing and feature extraction of the LiDAR point cloud data at the vehicle end and the road end.
  • the real-time world coordinates and pitch, yaw, and roll angles of the vehicle are first obtained from the autonomous vehicle's own positioning module.
  • the relative pose of the autonomous vehicle relative to the roadside lidar is calculated, and the roadside lidar point cloud data is deflected into the vehicle coordinate system.
  • the size of the voxel is designed and the vehicle lidar point cloud is divided into voxels.
  • the same voxel division method as the vehicle lidar point cloud is used for division to ensure that the spatial grid of the deflection point cloud is completely coincident with the vehicle lidar point cloud.
  • the scatter data in the same voxel belong to the same group. Due to the inhomogeneity and sparsity of points, the number of scattered data in each voxel is not necessarily the same, and there may be no scattered data in some voxels.
  • the two sets of point cloud data are divided into several discrete voxels with a fixed-size lattice, expanded, and the feature vector of each voxel is calculated separately using the voxelization method described above. Taking the classic VoxelNet network model in the 3D target detection algorithm as an example, several consecutive VFE layers are used to extract feature vectors for each voxel.
  • a schematic diagram of the VFE layer processing point cloud data is shown in Figure 5.
  • the processing logic of the VFE layer is to first pass each expanded scatter data through a layer of fully connected network to obtain point-level features of each point, and then perform maximum pooling on the point-level features to obtain voxel-level features.
  • the voxel-level features are spliced with the point-level features obtained in the previous step to obtain the point-level splicing feature results.
  • the final voxel-level features are obtained through fully connected layer integration and max pooling.
  • because point clouds are sparse in space, many voxels contain no scattered data and therefore have no corresponding voxel-level feature.
  • the data size can be greatly compressed, and the transmission difficulty when sent to the processing device is reduced.
  • One of the special structures available is a hash table, which is a data structure that is directly accessed based on key values. It accesses records by mapping the key value to a location in the table to speed up lookups.
  • the hash key of the hash table is the spatial coordinate of the voxel, and the corresponding value is the voxel-level feature.
  • the third step is to perform data splicing and data aggregation on the voxel-level features of the vehicle lidar point cloud and the voxel-level features of the deflection point cloud to obtain aggregated voxel-level features, and perform target detection.
  • the computing device can be a roadside computing device, an autonomous vehicle, or the cloud.
  • data aggregation, data splicing and subsequent processing are performed on roadside computing equipment; when using sub-scheme II, data aggregation, data splicing and subsequent processing are performed on autonomous vehicles.
  • sub-scheme III data aggregation and data splicing and subsequent processing are performed in the cloud.
  • during data splicing, the voxel-level features of the vehicle lidar point cloud can still be supplemented with the deflection point cloud voxel-level features obtained in the previous step.
  • the operation is to align the voxel-level features of the vehicle lidar point cloud and the voxel-level features of the deflection point cloud according to the position of the voxels in the coordinate system of the autonomous driving vehicle.
  • the aggregated voxel-level features are input into the subsequent 3D target detection network model to obtain the detected targets. See Figure 7: still taking the VoxelNet network model as an example, the spliced data is input into the successive convolution layers in the VoxelNet network model to obtain the spatial feature map, which is finally input into the RPN (Region Proposal Network) in the VoxelNet network model to get the final target detection result.
  • RPN: Region Proposal Network
  • roadside lidar as a complement to autonomous vehicle perception improves the range and accuracy of vehicle recognition of surrounding objects.
  • the use of point cloud voxelization features as the data transmitted between the vehicle end and the road end not only ensures that the original data information is hardly lost, but also reduces the bandwidth requirements for data transmission.
  • Embodiment 1 is as follows:
  • any three control points satisfy the non-collinearity condition.
  • Use handheld RTK to measure the precise coordinates of the control points, match the coordinates of the corresponding control points in the lidar point cloud, and use the ICP algorithm to calibrate the lidar.
  • the position of the roadside lidar point cloud in the autonomous vehicle coordinate system can be obtained.
  • the roadside lidar point cloud is aligned to the autonomous vehicle coordinate system.
  • the deflection point cloud is divided into voxels according to the coordinate system of the autonomous vehicle and a lattice with a fixed size of [0.4m 0.4m 0.5m] and expanded to obtain a voxelized deflection point cloud.
  • the voxel-level features are calculated by the multi-layer VFE, and each is represented by a 128-dimensional feature vector; for voxels that do not contain scattered data, no calculation is required.
  • the roadside computing device stores the calculated voxel-level features in a hash table, with the spatial position of the voxel as the hash key and the voxel-level feature of the corresponding voxel as the value, obtaining the compressed deflection point cloud voxel-level features.
  • the autonomous vehicle performs the same processing on the vehicle-mounted lidar point cloud up to obtaining the vehicle-mounted lidar point cloud voxel-level features; since these features are used locally in this sub-scheme, there is no need to establish a hash table for the vehicle-side lidar point cloud data. At this point, the data size is reduced to about 1/10 of the original point cloud data.
  • the autonomous vehicle receives the voxel-level features of the compressed deflection point cloud sent by the roadside computing device, and decompresses it and restores it to the voxel-level features of the deflection point cloud. Since the coordinate system of the received deflected point cloud voxel-level features has been deflected to the autonomous vehicle coordinate system, the deflected point cloud voxel-level features can be directly spliced with the vehicle-mounted lidar point cloud voxel-level feature data in the same coordinate system.
  • Embodiment 2 is as follows:
  • the LiDAR installation height is 6.5m
  • the depression angle is 7°
  • one lidar is installed every 8 poles, which complies with the roadside all-solid-state lidar layout guidelines.
  • any three control points satisfy the non-collinearity condition.
  • Use handheld RTK to measure the precise coordinates of the control points, match the coordinates of the corresponding control points in the lidar point cloud, and use the ICP algorithm to calibrate the lidar.
  • step (2) in Embodiment 1 the voxel-level features of the deflection point cloud and the voxel-level features of the vehicle lidar point cloud are obtained.
  • the autonomous vehicle stores the computed voxel-level features of the vehicle-mounted LiDAR point cloud in a hash table, with the spatial position of each voxel as the hash key and that voxel's voxel-level feature as the corresponding value, obtaining the compressed vehicle-mounted LiDAR point cloud voxel-level features.
  • the roadside computing device receives the compressed vehicle-mounted LiDAR point cloud voxel-level features sent by the autonomous vehicle and decompresses them back into vehicle-mounted LiDAR point cloud voxel-level features. The subsequent data splicing, data aggregation and target detection steps are the same as step (3) of Embodiment 1; once the target detection result is obtained, the roadside computing device sends it to the autonomous vehicle.
  • Embodiment 3 is as follows:
  • the LiDAR installation height is 6.5m
  • the depression angle is 7°
  • two opposite-facing LiDARs are installed on one pole every 9 poles, which complies with the roadside all-solid-state LiDAR layout guidelines.
  • any three control points satisfy the non-collinearity condition.
  • Use handheld RTK to measure the precise coordinates of the control points, match the coordinates of the corresponding control points in the lidar point cloud, and use the ICP algorithm to calibrate the lidar.
  • the compressed deflection point cloud voxel-level features are obtained as in step (2) of Embodiment 1, and the compressed vehicle-mounted LiDAR point cloud voxel-level features are obtained as in step (2) of Embodiment 2.
  • the cloud receives the compressed vehicle-mounted LiDAR point cloud voxel-level features sent by the autonomous vehicle and decompresses them back into vehicle-mounted LiDAR point cloud voxel-level features; the cloud likewise receives the compressed deflection point cloud voxel-level features sent by the roadside computing device and decompresses them back into deflection point cloud voxel-level features. The subsequent data splicing, data aggregation and target detection steps are the same as step (3) of Embodiment 1; once the target detection result is obtained, the cloud sends it to the autonomous vehicle.

Abstract

A perception information fusion representation and target detection method for vehicle-road cooperation, comprising the following steps: deploy roadside LiDARs and configure a corresponding roadside computing device for them; calibrate the roadside LiDAR extrinsic parameters; the roadside computing device computes the relative pose of the autonomous vehicle with respect to the roadside LiDAR from the autonomous vehicle's positioning data and the roadside LiDAR extrinsic parameters; the roadside computing device deflects the roadside LiDAR point cloud detected by the roadside LiDAR into the autonomous vehicle coordinate system according to the relative pose, obtaining a deflected point cloud; the roadside computing device voxelizes the deflected point cloud to obtain a voxelized deflected point cloud, while the autonomous vehicle voxelizes the vehicle-mounted LiDAR point cloud detected by the vehicle-mounted LiDAR to obtain a voxelized vehicle-mounted LiDAR point cloud; the roadside computing device computes voxel-level features of the voxelized deflected point cloud, obtaining deflected point cloud voxel-level features, and the autonomous vehicle computes voxel-level features of the voxelized vehicle-mounted LiDAR point cloud, obtaining vehicle-mounted LiDAR point cloud voxel-level features; the point cloud voxel-level features are compressed and transmitted to a computing device, which may be the autonomous vehicle, the roadside computing device or the cloud; the computing device performs data splicing and data aggregation on the vehicle-mounted LiDAR point cloud voxel-level features and the deflected point cloud voxel-level features to obtain aggregated voxel-level features; the computing device inputs the aggregated voxel-level features into a 3D target detection network model based on voxel-level features to obtain the target detection result; when the computing device is the roadside computing device or the cloud, the target detection result is finally sent to the autonomous vehicle.

Description

一种面向车路协同的感知信息融合表征及目标检测方法 技术领域
本发明属于自动驾驶车路协同技术领域,涉及运用感知信息融合表征的车路协同目标检测方法。
背景技术
21世纪,随着城市道路和汽车产业的不断发展,汽车成为人们出行的必备交通工具之一,为人类的日常生产生活带来极大的方便。但是,汽车过度使用的同时也带来了环境污染、交通堵塞、交通事故等问题。为了缓解汽车过度使用问题,将人从交通系统中脱离出来,提高车辆驾驶能力的同时,解放了驾驶员的双手,自动驾驶汽车也渐渐成为未来汽车发展的重要方向。随着深度学习技术的崛起、人工智能的备受关注,自动驾驶,作为AI中备受关注的重要落脚点,也被炒的火热。
自动驾驶是一个完整的软硬件交互系统,自动驾驶核心技术包括硬件(汽车制造技术、自动驾驶芯片)、自动驾驶软件、高精度地图、传感器通信网络等。从软件方面来看,总体上可分为如下三个模块,即环境感知、行为决策和运动控制。
感知是自动驾驶的第一环,是车辆和环境交互的纽带。一个自动驾驶系统的整体上表现好坏,首先就取决于感知系统的性能。自动驾驶车辆的感知是通过传感器实现的,其中激光雷达利用激光来进行探测和测量。其原理是向周围发射脉冲激光,遇到物体后反射回来,通过来回的时间差,计算出距离,从而对周围环境建立起三维模型。激光雷达探测精度高、距离长;由于激光的波长短,所以可以探测到非常微小的目标,并且探测距离很长。激光雷达感知到的点云数据信息量大且精度更高,多用于自动驾驶感知环中的目标检测和分类。一方面,激光雷达颠覆传统了二维投影成像模式,可采集目标表面深度信息,得到目标相对完整的空间信息,经数据处理重构目标三维表面,获得更能反映目标几何外形的三维图形,同时还能获取目标表面反射特性、运动速度等丰富的特征信息,为目标探测、识别、跟踪等数据处理提供充分的信息支持、降低算法难度;另一方面,主动激光技术的应用,使得其具有测量分辨率高,抗干扰能力强、抗隐身能力强、穿透能力强和全天候工作的特点。
目前根据有无机械部件来分,激光雷达可分为机械激光雷达和固态激光雷达,虽然固态激光雷达被认为是未来的大势所趋,但在当前激光雷达战场,机械激光雷达仍占据主流地位。机械激光雷达带有控制激光发射角度的旋转部件,而固态激光雷达则无需机械旋转部件,主要依靠电子部件来控制激光发射角度。
在现有的自动驾驶方案中,激光雷达基本都是其环境感知模块中的最主要传感器,承担环境感知中的实时地图建立和定位与目标检测等大部分任务。例如,谷歌Waymo在其传感器配置方案中加入了五个激光雷达,四个侧方激光雷达分别分布在车辆的前后左右,为中短距离多线雷达,用于补充盲区视野;顶部则配置了高线数的激光雷达用于大范围的感知,其视野盲区由四个侧方激光雷达补充。
激光雷达传感器的扫描资料以点云的形式记录。点云数据是指在一个三维坐标系统中的一组向量的集合。这些向量通常以X,Y,Z三维坐标的形式表示。每一个点除了包含有三维坐标之外,有些可能含有颜色信息(RGB)或反射强度信息(Intensity)。
其中,X、Y、Z三列数据代表了点数据在传感器坐标系或是世界坐标系中的三维位置,一般以米为单位。Intensity列下则代表了每一个点处的激光反射强度,其值没有单位,一般被归一化到0~255之间。
由于车载激光雷达本身的安装高度受到车型大小的限制多仅为两米左右,其所能探测到的信息易受到车辆周围遮挡物的影响,例如在小型车前方行驶的载货卡车几乎可以完全遮挡住小型车上激光雷达的前方视野,使其环境感知能力严重减弱。另外雷达本身的性能也会受到车辆整体成本的限制,车辆端往往不会配置较为昂贵的高线数激光雷达。因此,车载激光雷达所能获得的点云数据常常存在出现盲区或是稀疏情况,仅依靠车辆的传感器来完成自动驾驶感知任务较为困难。相比车载激光雷达,安装在路侧设施端的激光雷达因为可以布设在较高的龙门架或是灯柱上,具有更加通透的视野不易被遮挡。另外,路侧激光雷达对于成本的容忍度更高,可以使用较高线数的激光雷达,同时可以配置算力较高的路侧计算单元,借以达到更高的探测性能和更快的检测速度。
目前,车路协同系统正处在研究和测试的热潮中,基于V2X技术实现的智能车路协同方案可以增强现阶段可实现的辅助驾驶功能,增强车辆驾驶安全及道路运行效率,在远期可以为自动驾驶提供数据服务与技术支持。
现有的激光雷达车路协同方案为车辆及路侧设施各自根据激光雷达点云数据进行目标检测,随后设施端将检测结果发送给车辆,大部分的学者的研究重点则放在了传输数据的可靠性分析或是车路两端之间的相对位姿计算上或是车路两端的数据传输时延处理上,均默认了车路协同过程直接发送目标检测的结果。此方案虽数据传输量较低,但仍然无法完全利用两端的检测数据。例如当车路两档均未探测到较完整的目标点云时,很容易发生漏检误检情况,导致协同之后的目标检测结果出现误差。对此,部分学者提议直接发送原点云数据来防止信息丢失,例如2019年提出的Cooper框架最早提出了原始点云数据级别的合作感知方案,通过融合不同来源的点云数据大幅提高感知性能。
但与此同时,单帧激光雷达点云数据大小常常在十余M甚至数十M,现有的车路协同通信条件难以支撑如此大量的实时点云数据传输。因此,自动驾驶技术迫切需要一种更好的利用两端激光雷达数据的协同检测方法,既满足目标检测精度的需求,又能尽可能减少数据传输量。
现有的基于激光雷达点云数据的目标识别和分类算法均基于深度神经网络技术。
现有技术
专利文件US9562971B2
专利文件US20150187216A1
专利文件CN110989620A
专利文件CN110781927A
专利文件CN111222441A
专利文件CN108010360A
发明内容
为了解决上述的问题,本发明提供一种面向车路协同的感知信息融合表征及目标检测方法,提供一种权衡传输数据大小和信息损失程度的基于激光雷达点云数据的车路协同方案,用于解决目前自动驾驶车辆单车感知能力不足同时车路协同通信带宽不足的问题。
具体要解决的技术问题包括确定路侧激光雷达布设方案、选择路侧激光雷达外参标定方法、依据自动驾驶车辆与路侧激光雷达相对位姿计算偏转参数的方法、确定合适的用于车路协同的信息表征形式。
本发明的目标是:保证车路协同感知能力的前提下减小信息传输量。
本发明专利解决其技术问题分为准备阶段和应用阶段,准备阶段的步骤如下:
A.布设路侧激光雷达,为路侧激光雷达配置相应的路侧计算设备;
B.标定路侧激光雷达外参。
应用阶段的步骤如下:
C.路侧计算设备根据自动驾驶车辆定位数据和路侧激光雷达外参计算自动驾驶车辆相对于路侧激光雷达的相对位姿;
D.路侧计算设备根据相对位姿将路侧激光雷达检测到的路侧激光雷达点云偏转至自动驾驶车辆坐标系中,得到偏转点云。
E.路侧计算设备对偏转点云进行体素化处理,得到体素化偏转点云。自动驾驶车辆对车载激光雷达检测到的车载激光雷达点云进行体素化处理得到体素化车载激光雷达点云;
F.路侧计算设备计算体素化偏转点云的体素级特征,得到偏转点云体素级特征。自动驾驶车辆计算体素化车载激光雷达点云体素级特征,得到车载激光雷达点云体素级特征;
后续步骤分为I、II、III三个子方案。子方案I在路侧计算设备完成步骤G 1、H 1、I 1;子方案II在自动驾驶车辆完成步骤G 2、H 2、I 2;子方案III在云端完成步骤G 3、H 3、I 3
子方案I中:
G 1.自动驾驶车辆对车载激光雷达点云体素级特征进行压缩处理,得到压缩车载激光雷达点云体素级特征,并传输至路侧计算设备,路侧计算设备接收压缩车载激光雷达点云体素级特征,将压缩车载激光雷达点云体素级特征还原为车载激光雷达点云体素级特征;
H 1.路侧计算设备对车载激光雷达点云体素级特征和偏转点云体素级特征进行数据拼接和数据聚合得到聚合体素级特征;
I 1.路侧计算设备将聚合体素级特征输入基于体素级特征的三维目标检测网络模型得到目标检测结果,并将目标检测结果传输至自动驾驶车辆;
子方案II中:
G 2.路侧计算设备对偏转点云体素级特征进行压缩处理,得到压缩偏转点云体素级特征,并传输至自动驾驶车辆;自动驾驶车辆接收压缩偏转点云体素级特征,将压缩偏转点云体素级特征还原为偏转点云体素级特征;
H 2.自动驾驶车辆对车载激光雷达点云体素级特征和偏转点云体素级特征进行数据拼接和数据聚合得到聚合体素级特征;
I 2.自动驾驶车辆将聚合体素级特征输入基于体素级特征的三维目标检测网络模型得到目标检测结果;
子方案III中:
G 3.自动驾驶车辆对车载激光雷达点云体素级特征进行压缩处理,得到压缩车载激光雷达点云体素级特征,并传输至云端。路侧计算设备对偏转点云体素级特征进行压缩处理,得到压缩偏转点云体素级特征,并传输至云端。云端接收压缩偏转点云体素级特征和压缩车载激光雷达点云体素级特征,将压缩偏转点云体素级特征还原为偏转点云体素级特征,将压缩车载激光雷达点云体素级特征还原为车载激光雷达点云体素级特征;
H 3.云端对车载激光雷达点云体素级特征和偏转点云体素级特征进行数据拼接和数据聚合得到聚合体素级特征;
I 3.云端将聚合体素级特征输入基于体素级特征的三维目标检测网络模型得到目标检测结果,并将目标检测结果传输至自动驾驶车辆。
本发明专利上述步骤中的具体技术方案如下:
A.布设激光雷达
所述的布设路侧激光雷达依据对车路协同场景中现有的路侧立柱设施及所安装的激光雷达类型决定。现有的路侧激光雷达安装形式用立杆或横杆方式,具体安装位置为路侧龙门架、路灯、信号灯柱等有电力支持的基础设施柱体上。
按内部有无旋转部件,激光雷达可分为机械旋转式激光雷达、混合式激光雷达和固态激光雷达,其中,机械旋转式激光雷达和固态激光雷达两种类型为路侧常用激光雷达类型。
对于交叉口等场景,布设一个检测范围大于等于场景范围或是包含场景内关键区域的路侧激光雷达即可。对于快速路,高速公路,园区等长距离大范围复杂场景,建议遵循以下路侧激光雷达布设导则,使路侧激光雷达覆盖范围满足场景全覆盖要求,即单一路侧激光雷达实现对场景内其他路侧激光雷达下方检测盲区的补充,以达到更好的车路协同目标检测效果。
路侧激光雷达布设导则按使用的路侧激光雷达种类不同分为路侧机械旋转式激光雷达布设导则和路侧全固态激光雷达布设导则。
A 1)路侧机械旋转式激光雷达、路侧混合固态激光雷达布设方案
机械旋转式激光雷达通过机械旋转实现激光扫描;激光发射部件在竖直方向上排布成激光光源线阵,并可通过透镜在竖直面内产生不同角度指向的光束;在电机的驱动下持续旋转即使竖直面内的光束由“线”变成“面”,经旋转扫描形成多个激光“面”,从而实现探测区域内的探测。混合固态激光雷达则指用半导体“微动”器件(如MEMS扫描镜)来代替宏观机械式扫描器,在微观尺度上实现雷达发射端的激光扫描方式。
路侧机械旋转式激光雷达、路侧混合固态激光雷达布设导则要求路侧安装机械旋转式激光雷达和路侧路侧混合固态激光雷达时使其水平安置,保证各个方向光束信息的充分利用。如图2,布设路侧机械旋转式激光雷达和路侧混合固态激光雷达应至少满足以下要求:
Figure PCTCN2022084925-appb-000001
其中:
H a表示路侧机械旋转式激光雷达或路侧混合固态激光雷达安装高度;
Figure PCTCN2022084925-appb-000002
表示路侧机械旋转式激光雷达或路侧混合固态激光雷达最高仰角光束与水平方向的夹 角;
L a表示相邻两个路侧机械旋转式激光雷达或路侧混合固态激光雷达安装杆位间的距离;
A 2)路侧全固态激光雷达布设方案
全固态激光雷达完全取消了机械扫描结构,其水平垂直两个方向的激光扫描均通过电子方式实现。相控激光发射器由若干发射接收单元组成的一个矩形阵列,通过改变阵列中不同单元发射光线的相位差,可以达到调节射出激光角度和方向的目的。激光光源经过光分束器后进入光波导阵列,在波导上通过外加控制的方式改变光波的相位,利用波导间的光波相位差来实现光束扫描。
如图3,路侧全固态激光雷达布设导则要求布设路侧全固态激光雷达应至少满足以下要求:
Figure PCTCN2022084925-appb-000003
其中:
H b表示路侧全固态激光雷达安装高度;
Figure PCTCN2022084925-appb-000004
表示路侧全固态激光雷达在垂直方向上的视场角度;
Figure PCTCN2022084925-appb-000005
表示路侧全固态激光雷达最高仰角光束与水平方向的夹角;
L b表示相邻两个路侧全固态激光雷达安装杆位间的距离;
对于安装全固态激光雷达的场景,还可通同一杆安装两个反向激光雷达的方法弥补路侧感知盲区,减小对于路侧杆位数量的需求,此时应满足如图4所示的要求,即:
Figure PCTCN2022084925-appb-000006
其中:
H c表示路侧全固态激光雷达安装高度;
Figure PCTCN2022084925-appb-000007
表示路侧全固态激光雷达最高仰角光束与水平方向的夹角;
L c表示相邻两个路侧全固态激光雷达安装杆位间的距离;
对于可以满足上述条件的激光雷达车路协同场景,路侧机械旋转式激光雷达或全固态激光雷达按如上要求布设,并在条件有余裕时增大各激光雷达扫描区域。对于无法满足上述条件的激光雷达车路协同场景则通过布设新杆及路侧激光雷达数量的方法实现路侧激光雷达布设条件满足路侧激光雷达布设导则。
B.外参标定
为了计算路侧激光雷达与车载激光雷达的相对位姿,需要对路侧激光雷达的安装位置和角度进行标定,即外参标定,即得到激光雷达相对某一基准坐标系的坐标位置参数和角度姿态参数。激光雷达的外参可以用以下向量表示:
V 0=[x 0 y 0 z 0 a 0 β 0 γ 0]            (4)
其中:
x 0表示路侧激光雷达在基准坐标系中的X坐标;
y 0表示路侧激光雷达在基准坐标系中的Y坐标;
z 0表示路侧激光雷达在基准坐标系中的Z坐标;
a 0表示路侧激光雷达在基准坐标系中绕X轴的旋转角度;
β 0表示路侧激光雷达在基准坐标系中绕Y轴的旋转角度;
γ 0表示路侧激光雷达在基准坐标系中绕Z轴的旋转角度;
上述的基准坐标系可以是以GCJ02、WGS84为代表的经纬度坐标系,也可以是以某一特定的地理点以基准的大地坐标系,例如北京54坐标系和西安80坐标系。对应的,在基准坐标系中一点的实际坐标,与被上述激光雷达检测到后得到的路侧激光雷达坐标系中的坐标关系为:
Figure PCTCN2022084925-appb-000008
Figure PCTCN2022084925-appb-000009
Figure PCTCN2022084925-appb-000010
Figure PCTCN2022084925-appb-000011
其中:
x lidar为该点在路侧激光雷达坐标系中的X坐标;
y lidar为该点在路侧激光雷达坐标系中的Y坐标;
z lidar为该点在路侧激光雷达坐标系中的Z坐标;
x real为该点在基准坐标系中的X坐标;
y real为该点在基准坐标系中的Y坐标;
z real为该点在基准坐标系中的Z坐标;
R x(a 0)、R y0)、R z0)为根据三个角度外参a 0、β 0和γ 0计算而得的子旋转矩阵;
路侧激光雷达的外参具体数值通过测量控制点在路侧激光雷达坐标系和基准坐标系中的坐标计算而得,其步骤如下:
①在路侧激光雷达检测范围内选取最少4个反射率特征点作为控制点。反射率特征点指反射率与周边物体有明显差别的点,例如交通标志牌、车牌等,选取反射率特征点作为控制点的目的在于便于在点云数据中依靠位置和反射强度与其他点的区别快速的找出相应的点,从而快速建立多对点云中的点与基准坐标系中的一个坐标之间的对应关系。 控制点应尽可能离散分布。在场景环境允许的条件下且控制点选择符合以下要求的条件下,控制点越多标定效果越好。控制点选择要求包括:应离散分布,且任意三个控制点间不可共线;在路侧激光雷达检测范围内,选取控制点应尽可能距离路侧激光雷达距离更远,通常这个距离应大于激光雷达最远检测距离的50%。对于因场景限制难以在激光雷达最远检测距离的50%选取控制点时,可在小于激光雷达最远检测距离的50%处选取控制点,但应增加控制点的数量。
②使用手持高精度RTK等高精度测量仪器测量控制点精确坐标,在路侧激光雷达点云中找到相应点坐标;当持有路侧激光雷达布设场景的高精度地图文件时,无需使用手持高精度RTK等高精度测量仪器测量,可直接与高精度地图中找到对应特征点的坐标。
③使用三维配准算法对计算激光雷达外参向量V 0的最优值,以其结果作为标定结果。常用的三维配准算法包括ICP算法、NDT算法等,其中应用于激光雷达外参标定问题时即主要使用ICP算法。ICP算法的基本原理是在匹配的目标点集P(控制点在路侧激光雷达坐标系中的坐标集合)和源点集Q(控制点在基准坐标系中的坐标集合)中,计算出最优匹配外参,使得误差函数最小。误差函数为:
E(R, T) = (1/n) Σ_{i=1}^{n} ‖p_i − (R·q_i + T)‖²            (9)
R=R x(a 0)R y0)R z0)         (10)
T=[x 0 y 0 z 0] T             (11)
其中:
E(R,T)为目标误差函数;
R为旋转变换矩阵;
T为平移变换矩阵;
n为点集中最临近点对的个数;
p i为目标点集P中第i个的点的坐标;
q i为源点集Q中与点p i组成最临近点对的点;
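The following is a minimal Python sketch (not part of the original disclosure) of the rigid alignment that the above ICP formulation reduces to once the control-point correspondences are fixed: because every control point measured in the roadside-LiDAR frame (target set P) is already paired with its surveyed coordinate in the reference frame (source set Q), the R and T that minimize the error function can be solved in closed form with the SVD-based Kabsch/Umeyama method. Function names and the sample coordinates are illustrative only.

    import numpy as np

    def fit_rigid_transform(P_lidar, Q_ref):
        # Least-squares R, T with P_lidar ≈ R @ Q_ref + T, i.e. the minimizer of
        # E(R, T) = (1/n) * sum_i || p_i - (R q_i + T) ||^2 (Kabsch / Umeyama).
        p_mean = P_lidar.mean(axis=0)
        q_mean = Q_ref.mean(axis=0)
        H = (Q_ref - q_mean).T @ (P_lidar - p_mean)        # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        T = p_mean - R @ q_mean
        return R, T

    # Illustrative check with four non-collinear control points (coordinates made up).
    Q_ref = np.array([[ 80.0,  3.5, 0.2], [100.0, -3.5, 0.1],
                      [120.0,  3.5, 0.0], [120.0, -3.5, 0.3]])
    g = np.deg2rad(30.0)
    R_true = np.array([[np.cos(g), -np.sin(g), 0.0],
                       [np.sin(g),  np.cos(g), 0.0],
                       [0.0,        0.0,       1.0]])
    T_true = np.array([1.5, -0.8, 6.5])
    P_lidar = Q_ref @ R_true.T + T_true
    R_est, T_est = fit_rigid_transform(P_lidar, Q_ref)     # recovers R_true, T_true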
C.计算相对位姿
根据自动驾驶车辆定位数据和前期准备工作中路侧激光雷达外参标定结果确定自动驾驶车辆与路侧激光雷达的相对位姿。其相对位姿按如下公式计算:
V′=[V′ xyz V′ aβγ]         (12)
Figure PCTCN2022084925-appb-000013
V′ aβγ=[a′ β′ γ′] T=[a′ 1 β′ 1 γ′ 1] T-[a′ 0 β′ 0 γ′ 0] T       (14)
V 1=[x 1 y 1 z 1 a 1 β 1 γ 1] T       (15)
其中:
V′为自动驾驶车辆相对于路侧激光雷达的位置与角度向量
V′ xyz为自动驾驶车辆相对于路侧激光雷达的位置向量
V′ aβγ为自动驾驶车辆相对于路侧激光雷达的角度向量
V 1为自动驾驶车辆在基准坐标系中的位置和角度向量
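A small illustrative snippet (assumed numbers, not from the disclosure) for step C: the relative pose V' of the autonomous vehicle with respect to the roadside LiDAR is the difference between the vehicle pose vector V_1 reported by its positioning module and the calibrated roadside-LiDAR extrinsic vector V_0, split into a position part and an angle part as in equations (12) to (15).

    import numpy as np

    # V = [x, y, z, alpha, beta, gamma]: position (m) and rotation angles (rad)
    # expressed in the common reference coordinate system.
    V0 = np.array([305.2, 118.7, 6.5, 0.0, np.deg2rad(-7.0), np.deg2rad(45.0)])  # roadside LiDAR extrinsics (example)
    V1 = np.array([298.4, 102.3, 2.0, 0.0, 0.0,              np.deg2rad(52.0)])  # vehicle pose from RTK (example)

    V_rel_xyz = V1[:3] - V0[:3]      # V'_xyz : relative position
    V_rel_abg = V1[3:] - V0[3:]      # V'_abg : relative rotation angles
    V_rel = np.concatenate([V_rel_xyz, V_rel_abg])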
D.偏转
按如下公式将路侧激光雷达点云D r偏转至自动驾驶车辆坐标系中:
Figure PCTCN2022084925-appb-000014
Figure PCTCN2022084925-appb-000015
R=R x(a′)R y(β′)R z(γ′)           (18)
T=[x′ y′ z′]           (19)
其中:
H rc为路侧激光雷达坐标系偏转至自动驾驶车辆坐标系的变换矩阵;
x ego、y ego、z ego为路侧激光雷达点云中的一点偏转到自动驾驶车辆坐标系后的坐标,对应路侧激光雷达坐标系中的点坐标为[x lidar y lidar z lidar] T
O为透视变换向量,由于此场景中无透视变换,O取[0 0 0];
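A minimal Python sketch of step D, following equations (16) to (19) as written: the relative pose is turned into a homogeneous transform H_rc (rotation R = R_x(a')R_y(b')R_z(g'), translation T = [x' y' z'], no perspective component), which maps every roadside point into the autonomous-vehicle coordinate system. Variable names and the random test cloud are illustrative.

    import numpy as np

    def rot_x(a):
        return np.array([[1, 0, 0],
                         [0, np.cos(a), -np.sin(a)],
                         [0, np.sin(a),  np.cos(a)]])

    def rot_y(b):
        return np.array([[ np.cos(b), 0, np.sin(b)],
                         [ 0,         1, 0        ],
                         [-np.sin(b), 0, np.cos(b)]])

    def rot_z(g):
        return np.array([[np.cos(g), -np.sin(g), 0],
                         [np.sin(g),  np.cos(g), 0],
                         [0,          0,         1]])

    def deflect_to_vehicle(cloud_lidar, rel_pose):
        # cloud_lidar: (N, 4) array [x, y, z, intensity] in the roadside-LiDAR frame.
        # rel_pose:    [x', y', z', a', b', g'] relative pose from step C.
        x, y, z, a, b, g = rel_pose
        H_rc = np.eye(4)                               # homogeneous transform
        H_rc[:3, :3] = rot_x(a) @ rot_y(b) @ rot_z(g)  # R = Rx(a') Ry(b') Rz(g')
        H_rc[:3, 3] = [x, y, z]                        # T = [x' y' z']
        xyz1 = np.hstack([cloud_lidar[:, :3], np.ones((len(cloud_lidar), 1))])
        xyz_ego = (H_rc @ xyz1.T).T[:, :3]
        return np.hstack([xyz_ego, cloud_lidar[:, 3:4]])

    roadside_cloud = np.random.rand(1000, 4) * [60.0, 60.0, 5.0, 255.0]   # fake cloud
    deflected_cloud = deflect_to_vehicle(roadside_cloud, [-6.8, -16.4, -4.5, 0.0, 0.12, 0.09])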
E.体素化
体素是体积元素(Volume Pixel)的简称,是数字数据于三维空间分割上的最小单位。概念上类似二维空间的最小单位——像素。使用体素将点云数据进行分割后,可以对每个体素之内的点云数据分别先计算一次数据特征,每个体素内的点云数据组成的集合的特征称为体素级特征。现有的三维目标检测算法中一大类算法基于体素级特征处理激光雷达点云数据,通过体素化点云数据后提取体素级特征,并输入后续的基于体素级特征的三维目标检测网络模型得到目标检测结果。
点云数据体素化的步骤如下:
E 1)根据车载激光雷达点云D c所在的空间维度大小[D W H],设计体素的大小为[D V W V H V]。根据设计的体素大小对车载激光雷达进行体素划分。
E 2)对于偏转点云
Figure PCTCN2022084925-appb-000016
使用与车载激光雷达点云D c相同体素划分方式进行划分,保证划分偏转点云
Figure PCTCN2022084925-appb-000017
的空间网格与车载激光雷达点云D c完全重合。例如,车载激光雷达点云D c的分布空 间在X轴方向为[-31m,33m],体素D V为4m,此时若偏转点云
Figure PCTCN2022084925-appb-000018
的分布空间在X轴方向为[-32m,34m],则应将其扩充至[-35m,37m],得到扩充后的偏转点云
Figure PCTCN2022084925-appb-000019
以保证车载激光雷达点云D c和扩充后的偏转点云
Figure PCTCN2022084925-appb-000020
的体素划分网格一致。具体计算公式如下:
Figure PCTCN2022084925-appb-000021
Figure PCTCN2022084925-appb-000022
Figure PCTCN2022084925-appb-000023
n 1,n 2∈N
其中:
S ego为车载激光雷达点云D c所在空间范围;
S lidar′为扩充后的偏转点云
Figure PCTCN2022084925-appb-000024
所在空间范围;
K lidar_start′、K lidar_end′为K维度上扩充后的偏转点云
Figure PCTCN2022084925-appb-000025
的范围起始值和终止值;
K lidar_start、K lidar_end为K维度上偏转点云
Figure PCTCN2022084925-appb-000026
的范围起始值和终止值;
K ego_start、K ego_end为K维度上车载激光雷达点云D c的范围起始值和终止值;
V K为体素在K维度上的大小;
E 3)根据车载激光雷达点云D c和扩充后的偏转点云
Figure PCTCN2022084925-appb-000027
中散点数据所在的体素进行分组,同一体素内的散点数据为同一组。由于点的不均匀性和稀疏性,每个体素中的散点数据的数量不一定相同,一部分体素中可能没有散点数据。
E 4)为了减小计算负担,同时消除因为密度不一致带来的判别问题,针对体素内散点数据量大于一定阈值的体素进行随机采样,阈值建议取值为35,当点云数据中散点数据较少时,可适当减小阈值。这种策略可以节省计算资源、降低体素之间的不平衡性。
通过步骤E 1~E 4,体素化车载激光雷达点云D c得到体素化车载激光雷达点云
Figure PCTCN2022084925-appb-000028
体素化扩充后的偏转点云
Figure PCTCN2022084925-appb-000029
得到体素化偏转点云
Figure PCTCN2022084925-appb-000030
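A Python sketch of the voxelization in step E, under the assumption that anchoring both clouds to one grid origin (the vehicle cloud's minimum corner, stepped down by whole voxels so that the grid also covers the deflected cloud) is equivalent to the range expansion described above; the [0.4 m, 0.4 m, 0.5 m] voxel size and the sampling threshold of 35 points follow the text, everything else is illustrative.

    import numpy as np

    VOXEL = np.array([0.4, 0.4, 0.5])      # [D_V, W_V, H_V]
    T_SAMPLE = 35                          # suggested per-voxel point threshold

    def common_grid_origin(ego_cloud, deflected_cloud):
        # Start from the vehicle cloud's minimum corner and shift it down by whole
        # voxels until the (expanded) grid also covers the deflected cloud.
        ego_min = ego_cloud[:, :3].min(axis=0)
        lidar_min = deflected_cloud[:, :3].min(axis=0)
        n = np.ceil(np.maximum(ego_min - lidar_min, 0.0) / VOXEL)
        return ego_min - n * VOXEL

    def voxelize(cloud, origin, max_pts=T_SAMPLE, seed=0):
        # Group the (N, 4) cloud by voxel index; voxels holding more than max_pts
        # points are randomly sampled to limit cost and density imbalance.
        rng = np.random.default_rng(seed)
        idx = np.floor((cloud[:, :3] - origin) / VOXEL).astype(int)
        voxels = {}
        for key, point in zip(map(tuple, idx), cloud):
            voxels.setdefault(key, []).append(point)
        for key, pts in voxels.items():
            pts = np.asarray(pts)
            if len(pts) > max_pts:
                pts = pts[rng.choice(len(pts), max_pts, replace=False)]
            voxels[key] = pts
        return voxels

    ego_cloud = np.random.uniform([-31, -31, -2, 0], [33, 33, 2, 255], (20000, 4))
    deflected_cloud = np.random.uniform([-32, -32, -2, 0], [34, 34, 2, 255], (20000, 4))
    origin = common_grid_origin(ego_cloud, deflected_cloud)
    ego_voxels = voxelize(ego_cloud, origin)              # voxelized vehicle cloud
    deflected_voxels = voxelize(deflected_cloud, origin)  # voxelized deflected cloud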
F.计算体素级特征
根据自动驾驶车辆所使用的目标检测模型不同,计算点云体素级特征使用的方法也有所不同。以自动驾驶车辆使用VoxelNet模型进行目标检测为例,步骤如下:
①首先整理体素化点云,对于体素A内第i个点,其被采集到的原始数据为:
a i=[x i y i z i r i]          (23)
其中:
x i、y i、z i分别为第i个点的X、Y、Z坐标;
r i为第i个点的反射强度;
②随后计算该体素内所有点坐标的均值,记为[[v x v y v z]]。
③之后对第i个点采用相对于中心的偏移量来补充其信息,即:
Figure PCTCN2022084925-appb-000031
其中:
Figure PCTCN2022084925-appb-000032
为补充后的第i个点的信息;
④将处理后的体素化点云输入级联的连续VFE层中,VFE层处理体素化点云的数据的示意图如图5。VFE层的处理逻辑为首先使每一个
Figure PCTCN2022084925-appb-000033
通过一层全连接网络后得到各点的点级特征,随后将点级特征进行最大值池化处理得到体素级特征,最后将体素级特征与上一步得到的点级特征拼接得到点级拼接特征结果。
⑤经过级联的连续VFE层处理后,再通过全连接层整合和最大值池化得到最终的体素级特征,每个体素级特征为一个1×C维的向量。
对体素化车载激光雷达点云
Figure PCTCN2022084925-appb-000034
和体素化偏转点云
Figure PCTCN2022084925-appb-000035
使用以上方法可分别得到车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000036
和偏转点云体素级特征
Figure PCTCN2022084925-appb-000037
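The sketch below (PyTorch, untrained; layer widths other than the final 128 chosen freely for illustration) shows the feature computation of step F for a single voxel: each point is augmented with its offset from the voxel centroid, passed through stacked VFE layers (per-point fully-connected layer, voxel-wise max pooling, concatenation of the pooled feature back onto every point), and finally reduced by a fully-connected layer plus max pooling to one voxel-level feature vector.

    import torch
    import torch.nn as nn

    class VFELayer(nn.Module):
        # One simplified VFE layer: point-wise FC + ReLU, voxel-wise max-pool,
        # then concatenation of the pooled feature back onto each point feature.
        def __init__(self, c_in, c_out):
            super().__init__()
            self.fc = nn.Linear(c_in, c_out // 2)
            self.relu = nn.ReLU()

        def forward(self, pts):                          # pts: (n_points, c_in)
            point_feat = self.relu(self.fc(pts))
            voxel_feat = point_feat.max(dim=0, keepdim=True).values
            return torch.cat([point_feat, voxel_feat.expand_as(point_feat)], dim=1)

    class VoxelFeatureEncoder(nn.Module):
        # Stacked VFE layers, then FC + max-pool -> one C-dim voxel-level feature.
        def __init__(self, c_feat=128):
            super().__init__()
            self.vfe1 = VFELayer(7, 32)                  # input: [x, y, z, r, dx, dy, dz]
            self.vfe2 = VFELayer(32, c_feat)
            self.fc = nn.Linear(c_feat, c_feat)

        def forward(self, pts):
            x = self.fc(self.vfe2(self.vfe1(pts)))
            return x.max(dim=0).values                   # (c_feat,)

    def augment(voxel_pts):
        # Append each point's offset from the voxel centroid: [x, y, z, r] -> 7 dims.
        centroid = voxel_pts[:, :3].mean(dim=0)
        return torch.cat([voxel_pts, voxel_pts[:, :3] - centroid], dim=1)

    encoder = VoxelFeatureEncoder()
    one_voxel = torch.rand(12, 4)                        # 12 points: x, y, z, reflectance
    voxel_feature = encoder(augment(one_voxel))          # 128-dim voxel-level feature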
G.点云体素级特征传输
由于点云在空间中稀疏存在,许多体素内无散点数据,因此也无对应的体素级特征。将点云体素级特征用特殊结构存储后可大幅压缩数据大小,减小发送至处理设备时的传输难度,即对点云体素级特征进行了压缩处理。可用特殊结构之一是哈希表,哈希表是根据关键码值而直接进行访问的数据结构。它通过把关键码值映射到表中一个位置来访问记录,以加快查找的速度。其中,哈希表的哈希键是体素的空间坐标,对应值则为体素级特征。
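A small sketch of the hash-table compression described above: only non-empty voxels are kept, keyed by their spatial (grid) position, so the transmitted payload scales with the number of occupied voxels rather than with the full grid. The grid dimensions, voxel count and assumed key size below are invented purely to illustrate the size difference.

    import numpy as np

    def compress(dense_features):
        # dense_features: (X, Y, Z, C) array of voxel-level features, mostly zero
        # because the point cloud is sparse. Keep only non-empty voxels in a hash
        # table keyed by the voxel's grid position.
        occupied = np.argwhere(np.abs(dense_features).sum(axis=-1) > 0)
        return {tuple(idx): dense_features[tuple(idx)] for idx in occupied}

    def decompress(table, grid_shape, c_feat=128):
        # Rebuild the dense layout expected by the downstream detection network.
        dense = np.zeros(grid_shape + (c_feat,), dtype=np.float32)
        for idx, feat in table.items():
            dense[idx] = feat
        return dense

    grid = np.zeros((160, 160, 8, 128), dtype=np.float32)
    filled = (np.random.randint(0, 160, 9000), np.random.randint(0, 160, 9000),
              np.random.randint(0, 8, 9000))
    grid[filled] = np.random.rand(9000, 128).astype(np.float32)   # ~9k non-empty voxels
    table = compress(grid)
    sparse_bytes = sum(12 + f.nbytes for f in table.values())     # 12 bytes assumed per key
    print(f"dense grid: {grid.nbytes / 1e6:.1f} MB, hash table: {sparse_bytes / 1e6:.1f} MB")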
使用子方案I时,后续处理在路侧计算设备进行。
G 1)自动驾驶车辆对车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000038
进行压缩处理,得到压缩车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000039
并传输至路侧计算设备,路侧计算设备接收压缩车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000040
将压缩车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000041
还原为车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000042
使用子方案II时,后续处理于自动驾驶车辆上进行。
G 2)路侧计算设备对偏转点云体素级特征
Figure PCTCN2022084925-appb-000043
进行压缩处理,得到压缩偏转点云体素级特征
Figure PCTCN2022084925-appb-000044
并传输至自动驾驶车辆;自动驾驶车辆接收压缩偏转点云体素级特征
Figure PCTCN2022084925-appb-000045
将压缩偏转点云体素级特征
Figure PCTCN2022084925-appb-000046
还原为偏转点云体素级特征
Figure PCTCN2022084925-appb-000047
使用子方案III时,后续处理于云端进行。
G 3)自动驾驶车辆对车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000048
进行压缩处理,得到压缩车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000049
并传输至云端。路侧计算设备对偏转点云体素级特征
Figure PCTCN2022084925-appb-000050
进行压缩处理,得到压缩偏转点云体素级特征
Figure PCTCN2022084925-appb-000051
并传输至云端。云端接收压缩偏转点云体素级特征
Figure PCTCN2022084925-appb-000052
和压缩车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000053
将压缩偏转点云体素级特征
Figure PCTCN2022084925-appb-000054
还原为偏转点云体素级特征
Figure PCTCN2022084925-appb-000055
将压缩车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000056
还原为车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000057
H.数据拼接和数据聚合
进行数据拼接操作,即将车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000058
和偏转点云体素级特征
Figure PCTCN2022084925-appb-000059
依据其中体素在自动驾驶车辆坐标系中的位置进行对齐。
进行数据聚合操作,即对于车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000060
和偏转点云体素级特征
Figure PCTCN2022084925-appb-000061
任意一方体素为空的位置,取不为空的一方的体素级特征作为聚合后的体素级特征。对与两方均不为空的体素,最后得到的聚合体素级特征按如下公式计算:
Figure PCTCN2022084925-appb-000062
Figure PCTCN2022084925-appb-000063
其中:
Figure PCTCN2022084925-appb-000064
为聚合体素级特征;
f k为聚合体素级特征
Figure PCTCN2022084925-appb-000065
在位置k的值;
f ego_k为车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000066
在位置k的值;
f lidar_k为偏转点云体素级特征
Figure PCTCN2022084925-appb-000067
在位置k的值;
即使用最大值池化的方法聚合相同坐标体素的特征。
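A brief Python sketch of the splicing and aggregation of step H, assuming both feature maps are hash tables keyed by voxel position in the shared vehicle-frame grid: positions present on only one side keep that side's feature, and positions present on both sides are merged with an element-wise maximum, i.e. the max-pooling rule stated above.

    import numpy as np

    def splice_and_aggregate(ego_feats, deflected_feats):
        # ego_feats / deflected_feats: {voxel position (i, j, k): C-dim feature}.
        # Positions held by only one source are copied; shared positions are
        # aggregated with an element-wise maximum (max pooling).
        fused = dict(ego_feats)
        for key, f_lidar in deflected_feats.items():
            fused[key] = np.maximum(fused[key], f_lidar) if key in fused else f_lidar
        return fused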
使用子方案I时,后续处理在路侧计算设备进行。
H 1)路侧计算设备按如上方法对车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000068
和偏转点云体素级特征
Figure PCTCN2022084925-appb-000069
进行数据拼接和数据聚合得到聚合体素级特征
Figure PCTCN2022084925-appb-000070
使用子方案II时,后续处理于自动驾驶车辆上进行。
H 2)自动驾驶车辆按如上方法对车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000071
和偏转点云体素级特征
Figure PCTCN2022084925-appb-000072
进行数据拼接和数据聚合得到聚合体素级特征
Figure PCTCN2022084925-appb-000073
使用子方案III时,后续处理于云端进行。
H 3)云端按如上方法对车载激光雷达点云体素级特征
Figure PCTCN2022084925-appb-000074
和偏转点云体素级特征
Figure PCTCN2022084925-appb-000075
进行数据拼接和数据聚合得到聚合体素级特征
Figure PCTCN2022084925-appb-000076
I.目标检测
将聚合体素级特征输入后续三维目标检测网络模型得到检测目标。仍以VoxelNet为例,在得到聚合体素级特征之后,将其输入到基于体素级特征的三维目标检测网络模型中得到目标检测结果。
目标检测结果可表示为U,具体为:
U=[u 1 ... u n]            (27)
Figure PCTCN2022084925-appb-000077
其中:
u i为目标检测结果中第i个目标的信息;
x i为第i个检测目标在自动驾驶车辆坐标系中的x轴坐标;
y i为第i个检测目标在自动驾驶车辆坐标系中的y轴坐标;
z i为第i个检测目标在自动驾驶车辆坐标系中的z轴坐标;
C i为第i个检测目标的置信度;
W i为第i个检测目标对应的检测框的宽度;
D i为第i个检测目标对应的检测框的长度;
H i为第i个检测目标对应的检测框的高度;
Figure PCTCN2022084925-appb-000078
为第i个检测目标对应的检测框的方向角;
v xi为第i个检测目标运动速度在自动驾驶车辆坐标系中的x轴方向上的投影。
v yi为第i个检测目标运动速度在自动驾驶车辆坐标系中的y轴方向上的投影。
v zi为第i个检测目标运动速度在自动驾驶车辆坐标系中的z轴方向上的投影。
对于任意一种基于体素级特征的三维目标检测网络模型,其目标检测结果至少应包括目标的位置,即x i、y i、z i。对于高性能的基于体素级特征的三维目标检测网络模型,其目标检测结果应包括检测目标的C i、W i、D i、H i
Figure PCTCN2022084925-appb-000079
v xi、v yi、v zi属性的一部分或所有。其中,W i、D i、H i三个属性只可同时存在于或同时不存在于目标检测结果中。v xi、v yi、v zi三个属性只可同时存在于或同时不存在于目标检测结果中。
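For illustration only, the detection result entry u_i described above can be carried in a simple record such as the following Python dataclass; the position fields are mandatory, the remaining attributes are optional and, as noted, the box dimensions W/D/H and the velocity components are each all-or-nothing groups.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DetectedTarget:
        # One entry u_i of the detection result U, in the vehicle coordinate system.
        x: float                                  # x_i (m)
        y: float                                  # y_i (m)
        z: float                                  # z_i (m)
        confidence: Optional[float] = None        # C_i
        width: Optional[float] = None             # W_i, present together with D_i, H_i
        length: Optional[float] = None            # D_i
        height: Optional[float] = None            # H_i
        heading: Optional[float] = None           # detection box orientation angle
        vx: Optional[float] = None                # velocity projections, likewise
        vy: Optional[float] = None                # all present or all absent
        vz: Optional[float] = None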
使用子方案I时,目标检测在路侧计算设备进行。
I 1)路侧计算设备将聚合体素级特征
Figure PCTCN2022084925-appb-000080
输入基于体素级特征的三维目标检测网络模型得到目标检测结果U,并将目标检测结果传输至自动驾驶车辆。
使用子方案II时,目标检测于自动驾驶车辆上进行。
I 2)自动驾驶车辆将聚合体素级特征
Figure PCTCN2022084925-appb-000081
输入基于体素级特征的三维目标检测网络模型得到目标检测结果U。
使用子方案III时,目标检测于云端进行。
I 3)云端将聚合体素级特征
Figure PCTCN2022084925-appb-000082
输入基于体素级特征的三维目标检测网络模型得到目标检测结果,并将目标检测结果U传输至自动驾驶车辆。
本发明具有技术关键点和优势包括:
使用路侧激光雷达作为自动驾驶车辆感知的补充,提升了自动驾驶车辆对于周边物体识别的范围和准确性。同时,使用体素化特征作为车路间传输的数据,既保证了几乎不丢失原始数据信息,同时降低数据传输时对带宽的需求。
以上符号及其所表示含义归纳如下表:
Figure PCTCN2022084925-appb-000083
Figure PCTCN2022084925-appb-000084
Figure PCTCN2022084925-appb-000085
以上名词及其所表示含义归纳如下表:
Figure PCTCN2022084925-appb-000086
Figure PCTCN2022084925-appb-000087
附图简要说明
图1为提出的一种面向车路协同的感知信息融合表征及目标检测方法;
图2为布设路侧机械旋转式激光雷达示意图
图3为布设路侧全固态激光雷达示意图
图4为布设路侧全固态激光雷达示意图(同杆安装两个反向激光雷达)
图5为VFE层处理点云数据示意图
图6为体素化特征提取及聚合示意图
图7为合并后体素点云目标检测示意图
图8为路侧激光雷达点云坐标转换示意图
图9为目标检测结果对比示意图(左图为本专利提出车路协同检测方法,右图为直接取各自高置信度目标检测结果)
具体实施方式
下面结合附图和具体实施方式对本发明作详细说明。
本发明涉及一种面向车路协同的感知信息融合表征及目标检测方法。可以分为三个主要步骤:
第一步,路侧激光雷达传感器的安装以及前期标定工作。
布设路侧激光雷达依据对车路协同场景中现有的路侧立柱设施及所安装的激光雷达类型决定。现有的路侧激光雷达安装形式用立杆或横杆方式,具体安装位置为路侧龙门架、路灯、信号灯柱等有电力支持的基础设施柱体上。
对于交叉口等场景,布设一个检测范围大于等于场景范围或是包含场景内关键区域的路侧激光雷达即可。对于快速路,高速公路,园区等长距离大范围复杂场景,建议遵循发明内容中的路侧激光雷达布设导则,使路侧激光雷达覆盖范围满足场景全覆盖要求,即单一路侧激光雷达实现对场景内其他路侧激光雷达下方检测盲区的补充,以达到更好的车路协同目标检测效果。在车路协同方案中,使用路侧激光雷达提升自动驾驶车辆的感知能力,即获得车辆周围目标相对于自车位置及其类别、大小尺寸、行进方向等信息的能力。因此路侧激光雷达本身也应有尽可能强的感知能力,包括雷达线数和采样频率等参数应尽可能不低于车载激光雷达的相关参数。另外,为了弥补车载激光雷达易被遮挡的缺陷同时实现感知数据冗余,路侧激光雷达的感知范围应确保覆盖所有遮挡现象的常发区域,并控制路侧激光雷达探测视线通透无障碍物遮挡。
在完成路侧激光雷达传感器的安装后,为了计算路侧激光雷达与车载激光雷达的相对位姿,需要对路侧激光雷达的安装位置和角度进行标定,即外参标定,即得到激光雷达相对某一基准坐标系的坐标位置参数和角度姿态参数。首先在路侧激光雷达检测区域内选取最少4个反射率特征点作为控制点。反射率特征点指反射率与周边物体有明显差别的点,例如交通标志牌、车牌等,选取反射率特征点作为控制点的目的在于便于在点云数据中依靠位置和反射强度与其他点的区别快速的找出相应的点,从而快速建立多对点云中的点与基准坐标系中的一个坐标之间的对应关系。控制点应尽可能离散分布。在场景环境允许的条件下且控制点选择符合以下要求的条件下,控制点越多标定效果越好。控制点选择要求包括:应离散分布,且任意三个控制点间不可共线;在路侧激光雷达检测的范围内,选取控制点应尽可能距离路侧激光雷达距离更远,通常这个距离应大于激光雷达最远检测距离的50%。对于因场景限制难以在激光雷达最远检测距离的50%选取控制点时,可在小于激光雷达最远检测距离的50%处选取控制点,但应增加控制点的数量。随后使用手持高精度RTK等高精度测量仪器测量控制点精确坐标,在路侧激光雷达点云中找到相应点坐标;当持有路侧激光雷达布设场景的高精度地图文件时,无需使用手持高精度RTK等高精度测量仪器测量,可直接与高精度地图中找到对应特征点的坐标。最后使用三维配准算法对计算激光雷达外参向量的最优值,以其结果作为标定结果。常用的三维配准算法包括ICP算法、NDT算法等,其中应用于激光雷达外参标定问题时即主要使用ICP算法。ICP算法的基本原理是在匹配的目标点集P(控制点在路侧激光雷达坐标系中的坐标集合)和源点集Q(控制点在基准坐标系中的坐标集合)中,计算出最优匹配外参,使得误差函数最小。
这里不限定标定路侧激光雷达外参所使用的方法,但应确保标定结果包含传感器的三维世界坐标以及俯仰角、偏航角和翻滚角,以用于后续步骤中的点云偏转。
第二步,车端路端激光雷达点云数据的处理及特征提取。
在实际车路协同自动驾驶过程中,首先基于自动驾驶自带的定位模块等获得车辆实时的世界坐标及俯仰角、偏航角和翻滚角。基于车辆RTK定位结果与路侧激光雷达的外参标定结果,计算自动驾驶车辆相对于路侧激光雷达的相对位姿,将路侧激光雷达点云数据偏转至车辆坐标系内。
根据车载激光雷达点云所在的空间维度大小,设计体素的大小并对车载激光雷达进行体素划分。对于偏转点云,使用与车载激光雷达点云相同体素划分方式进行划分,保证划分偏转点云的空间网格与车载激光雷达点云完全重合。根据车载激光雷达点云和扩充后的偏转点云中散点数据所在的体素进行分组,同一体素内的散点数据为同一组。由于点的不均匀性和稀疏性,每个体素中的散点数据的数量不一定相同,一部分体素中可能没有散点数据。为了减小计算负担,同时消除因为密度不一致带来的判别问题,针对体素内散点数据量大于一定阈值的体素进行随机采样,阈值建议取值为35,当点云数据中散点数据较少时,可适当减小阈值。这种策略可以节省计算资源、降低体素之间的不平衡性。见图6,用固定尺寸的晶格将两组点云数据分成若干个离散的体素,扩充,使用上述的体素化方法分别计算每个体素的特征向量。以三维目标检测算法中较为经典的VoxelNet网络模型为例,使用若干个连续的VFE层对每个体素进行特征向量的提取。即使用体素内的每个散点数据相对于中心的偏移量来补充其系信息,并将处理后的体素化点云输入级联的连续VFE层中,VFE层处理体素化点云的数据的示意图如图5。VFE层的处理逻辑为首先使每一个扩充后的散点数据通过一层全连接网络后得到各点的点级特征,随后将点级特征进行最大值池化处理得到体素级特征,最后将体素级特征与上一步得到的点级特征拼接得到点级拼接特征结果。经过级联的连续VFE层处理后,再通过全连接层整合和最大值池化得到最终的体素级特征。
由于点云在空间中稀疏存在,许多体素内无散点数据,因此也无对应的体素级特征。将点云体素级特征用特殊结构存储后可大幅压缩数据大小,减小发送至处理设备时的传输难度。可用特殊结构之一是哈希表,哈希表是根据关键码值而直接进行访问的数据结构。它通过把关键码值映射到表中一个位置来访问记录,以加快查找的速度。其中,哈希表的哈希键是体素的空间坐标,对应值则为体素级特征。
第三步,对对车载激光雷达点云体素级特征和偏转点云体素级特征进行数据拼接和数据聚合得到聚合体素级特征,并进行目标检测。
在进行数据聚合和数据拼接之前,首先需要将点云体素级特征压缩并传输至计算设备上。计算设备可以为路侧计算设备、自动驾驶车辆或是云端。使用子方案I时,数据聚合和数据拼接及后续处理在路侧计算设备进行;使用子方案II时,数据聚合和数据拼接及后续处理于自动驾驶车辆上进行。使用子方案III时,数据聚合和数据拼接及后续处理于云端进行。
数据拼接和数据聚合过程中,由于体素化不改变点云的空间相对位置,因此仍可依据上一步中偏转点云体素级特征对车载雷达点云体素级特征进行补充,进行数据拼接操作,即将车载激光雷达点云体素级特征和偏转点云体素级特征依据其中体素在自动驾驶车辆坐标系中的位置进行对齐。进行数据聚合操作,即对于车载激光雷达点云体素级特征和偏转点云体素级特征任意一方体素为空的位置,取不为空的一方的体素级特征作为聚合后的体素级特征。对于两组数据中相同空间坐标的体素级特征向量,使用最大值池化方法聚合特征向量,对于不重合的体素级特征向量则保持非空体素一方的特征向量值。
将聚合体素级特征输入后续三维目标检测网络模型得到检测目标。见图7,仍以VoxelNet网络模型为例,将拼接后的数据输入VoxelNet网络模型中的连续卷积层得到空间特征图,最后输入VoxelNet网络模型中的RPN(Region Proposal Network,区域生成网络)中得到最后的目标检测结果。
本发明具有以下技术关键点和优势:
使用路侧激光雷达作为自动驾驶车辆感知的补充,提升了车辆对于周边物体识别的范围和准确性。同时,使用点云体素化特征作为车路间传输的数据,既保证了几乎不丢失原始数据信息,同时降低了数据传输对带宽的要求。
在同济大学嘉定校区交通运输工程学院路口设置实验场景,场景中路段上每20m距离设有一个高为6.4m的立杆。以使用Innovusion捷豹阵列式300线激光雷达和Ouster 128线360°激光雷达作为路侧激光雷达为例。Innovusion捷豹阵列式300线激光雷达的垂直方向视场角为40°,最远检测距离为200m。Ouster 128线360°激光雷达的垂直视场角为45°,最远检测距离为140m。自动驾驶车辆使用Ouster 64线360°雷达作为车载激光雷达,安装高度为2m,水平安装。车载激光雷达与车体为刚性连接,两者间的相对姿态和位移保持不变,并已在出厂时完成标定,车辆运动时根据车载RTK测得车辆实时位移及偏转实时修正车载激光雷达位置与角度。
实施例1如下:
(1)路侧激光雷达传感器的布设以及标定
仅使用Ouster 128线360°激光雷达,考虑激光雷达本身的大小,Ouster 128线360°激光雷达的安装高度为6.5m,并每5根立杆间安装一个,此时符合路侧机械旋转式激光雷达、路侧混合固态激光雷达布设方案导则。
在激光雷达区域内选取六个反射率特征点作为控制点,六个控制点分别取距离激光雷达安装立柱80m、100m、120m处两侧立柱的柱脚。由于路段有一定曲率,任意三个控制点满足不共线条件。使用手持RTK测量控制点精确坐标,在激光雷达点云中匹配相应控制点坐标,使用ICP算法对激光雷达进行标定。
(2)点云数据的处理及特征提取。
经过(1)的标定工作可获取路侧激光雷达点云在自动驾驶车辆坐标系中的位置,如图8所示,将路侧激光雷达点云对齐到自动驾驶车辆坐标系中。按自动驾驶车辆坐标系和固定大小为[0.4m 0.4m 0.5m]的晶格将偏转点云划分为体素并扩充,得到体素化偏转点云。对体素化偏转点云内每个散点数据补充体素均值信息后,输入多层VFE中计算体素级特征,对于不包含散点数据的体素则不用计算,每一个体素最终用一个128维的特征向量表示。路侧计算设备将计算好的体素级特征存储于哈希表中,体素的空间位置作为哈希键,对应的内容则为相应体素的体素级特征,得到压缩偏转点云体素级特征。自动驾驶车辆对于车载激光雷达点云作相同处理,直到获取到车载激光雷达点云体素级特征,即不需要对车端激光雷达点云数据建立哈希表。此时,相比原始的点云数据,数据大小约减少到1/10。
(3)体素级特征的数据拼接、数据聚合及目标检测
自动驾驶车辆接收路侧计算设备发送的压缩偏转点云体素级特征,并将其解压恢复成偏转点云体素级特征。由于接收到的偏转点云体素级特征的坐标系已被偏转至自动驾驶车辆坐标系,偏转点云体素级特征可以直接与相同坐标系的车载激光雷达点云体素级特征数据拼接。使用最大值池化的方法对相同坐标体素级特征进行数据聚合操作,例如体素级特征[15,45,90,……,17]和体素级特征[8,17,110,……,43]的聚合结果为[15,45,110,……,43]。完成对所有体素级特征的数据拼接和数据聚合后将其输入后续的RPN中得到目标检测结果。分别将本专利提出的车路协同检测方法和直接融合基于车载激光雷达点云和路侧激光雷达点云的目标检测结果及置信度绘制 在点云俯视图上如图9。可见,通过共享神经网络特征的方法进行车路协同目标检测,可以大幅度提高目标检测精度,并降低数据传输带宽要求。
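As a quick numerical check of the aggregation example in the preceding paragraph (shown here only with the four listed entries), the element-wise maximum reproduces the stated result:

    import numpy as np

    f_ego = np.array([15, 45, 90, 17])
    f_lidar = np.array([8, 17, 110, 43])
    print(np.maximum(f_ego, f_lidar))      # -> [ 15  45 110  43]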
实施例2如下:
(1)路侧激光雷达传感器的布设以及标定
当仅使用Innovusion捷豹阵列式300线激光雷达且每杆只设置一个激光雷达时,激光雷达安装高度为6.5m,俯角7°,每8根立杆间安装一个,此时符合路侧全固态激光雷达布设方案。
在激光雷达区域内选取六个反射率特征点作为控制点,六个控制点分别取距离激光雷达安装立柱100m、120m、140m处两侧立柱的柱脚。由于路段有一定曲率,任意三个控制点满足不共线条件。使用手持RTK测量控制点精确坐标,在激光雷达点云中匹配相应控制点坐标,使用ICP算法对激光雷达进行标定。
(2)点云数据的处理及特征提取。
同实施例1中步骤(2)得到偏转点云体素级特征和车载激光雷达点云体素级特征。自动驾驶汽车将计算好的车载激光雷达点云体素级特征存储于哈希表中,体素的空间位置作为哈希键,对应的内容则为相应体素的体素级特征,得到压缩车载激光雷达点云体素级特征。
(3)体素级特征的数据拼接、数据聚合及目标检测
路侧计算设备接收自动驾驶车辆发送的压缩车载激光雷达点云体素级特征,并将其解压恢复成车载激光雷达点云体素级特征。后续数据拼接、数据聚合及目标检测步骤同实施例1中(3),直至得到目标检测结果后,路侧计算设备将目标检测结果发送至自动驾驶车辆。
实施例3如下:
(1)路侧激光雷达传感器的布设以及标定
当仅使用Innovusion捷豹阵列式300线激光雷达且每杆设置两个反向的激光雷达时,激光雷达安装高度为6.5m,俯角7°,每9根立杆间安装两个,符合路侧全固态激光雷达布设方案导则。
在激光雷达区域内选取六个反射率特征点作为控制点,六个控制点分别取距离激光雷达安装立柱100m、120m、140m处两侧立柱的柱脚。由于路段有一定曲率,任意三个控制点满足不共线条件。使用手持RTK测量控制点精确坐标,在激光雷达点云中匹配相应控制点坐标,使用ICP算法对激光雷达进行标定。
(2)点云数据的处理及特征提取。
同实施例1中步骤(2)得到压缩偏转点云体素级特征,同实施例2中步骤(2)得到压缩车载激光雷达点云体素级特征。
(3)体素级特征的数据拼接、数据聚合及目标检测
云端接收自动驾驶车辆发送的压缩车载激光雷达点云体素级特征,并将其解压恢复成车载激光雷达点云体素级特征;云端接收路侧计算设备发送的压缩偏转点云体素级特征,并将其解压恢复成偏转点云体素级特征。后续数据拼接、数据聚合及目标检测步骤同实施例1中(3),直至得到目标检测结果后,云端将目标检测结果发送至自动驾驶车辆。
以上所述,仅为本发明较佳的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到的变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应该以权力要求的保护范围为准。

Claims (9)

  1. 一种面向车路协同的感知信息融合表征及目标检测方法,所述方法包括以下步骤:
    准备阶段:
    A.布设路侧激光雷达,为路侧激光雷达配置相应的路侧计算设备;
    B.标定路侧激光雷达外参;
    应用阶段:
    C.路侧计算设备根据自动驾驶车辆定位数据和路侧激光雷达外参计算自动驾驶车辆相对于路侧激光雷达的相对位姿;
    D.路侧计算设备根据相对位姿将路侧激光雷达检测到的路侧激光雷达点云偏转至自动驾驶车辆坐标系中,得到偏转点云;
    E.路侧计算设备对偏转点云进行体素化处理,得到体素化偏转点云。自动驾驶车辆对车载激光雷达检测到的车载激光雷达点云进行体素化处理得到体素化车载激光雷达点云;
    F.路侧计算设备计算体素化偏转点云的体素级特征,得到偏转点云体素级特征;自动驾驶车辆计算体素化车载激光雷达点云体素级特征,得到车载激光雷达点云体素级特征;
    G.自动驾驶车辆对车载激光雷达点云体素级特征进行压缩处理,得到压缩车载激光雷达点云体素级特征,并传输至路侧计算设备,路侧计算设备接收压缩车载激光雷达点云体素级特征,将压缩车载激光雷达点云体素级特征还原为车载激光雷达点云体素级特征;
    H.路侧计算设备对车载激光雷达点云体素级特征和偏转点云体素级特征进行数据拼接和数据聚合得到聚合体素级特征;
    I.路侧计算设备将聚合体素级特征输入基于体素级特征的三维目标检测网络模型得到目标检测结果,并将目标检测结果传输至自动驾驶车辆。
  2. 一种面向车路协同的感知信息融合表征及目标检测方法,所述方法包括以下步骤:
    准备阶段:
    A.布设路侧激光雷达,为路侧激光雷达配置相应的路侧计算设备;
    B.标定路侧激光雷达外参;
    应用阶段:
    C.路侧计算设备根据自动驾驶车辆定位数据和路侧激光雷达外参计算自动驾驶车辆相对于路侧激光雷达的相对位姿;
    D.路侧计算设备根据相对位姿将路侧激光雷达检测到的路侧激光雷达点云偏转至自动驾驶车辆坐标系中,得到偏转点云;
    E.路侧计算设备对偏转点云进行体素化处理,得到体素化偏转点云。自动驾驶车辆对车载激光雷达检测到的车载激光雷达点云进行体素化处理得到体素化车载激光雷达点云;
    F.路侧计算设备计算体素化偏转点云的体素级特征,得到偏转点云体素级特征;自动驾驶车辆计算体素化车载激光雷达点云体素级特征,得到车载激光雷达点云体素级特征;
    G.路侧计算设备对偏转点云体素级特征进行压缩处理,得到压缩偏转点云体素级特征,并传输至自动驾驶车辆;自动驾驶车辆接收压缩偏转点云体素级特征,将压缩偏转点 云体素级特征还原为偏转点云体素级特征;
    H.自动驾驶车辆对车载激光雷达点云体素级特征和偏转点云体素级特征进行数据拼接和数据聚合得到聚合体素级特征;
    I.自动驾驶车辆将聚合体素级特征输入基于体素级特征的三维目标检测网络模型得到目标检测结果。
  3. 一种面向车路协同的感知信息融合表征及目标检测方法,所述方法包括以下步骤:
    准备阶段:
    A.布设路侧激光雷达,为路侧激光雷达配置相应的路侧计算设备;
    B.标定路侧激光雷达外参;
    应用阶段:
    C.路侧计算设备根据自动驾驶车辆定位数据和路侧激光雷达外参计算自动驾驶车辆相对于路侧激光雷达的相对位姿;
    D.路侧计算设备根据相对位姿将路侧激光雷达检测到的路侧激光雷达点云偏转至自动驾驶车辆坐标系中,得到偏转点云;
    E.路侧计算设备对偏转点云进行体素化处理,得到体素化偏转点云。自动驾驶车辆对车载激光雷达检测到的车载激光雷达点云进行体素化处理得到体素化车载激光雷达点云;
    F.路侧计算设备计算体素化偏转点云的体素级特征,得到偏转点云体素级特征。自动驾驶车辆计算体素化车载激光雷达点云体素级特征,得到车载激光雷达点云体素级特征;
    G.自动驾驶车辆对车载激光雷达点云体素级特征进行压缩处理,得到压缩车载激光雷达点云体素级特征,并传输至云端;路侧计算设备对偏转点云体素级特征进行压缩处理,得到压缩偏转点云体素级特征,并传输至云端;云端接收压缩偏转点云体素级特征和压缩车载激光雷达点云体素级特征,将压缩偏转点云体素级特征还原为偏转点云体素级特征,将压缩车载激光雷达点云体素级特征还原为车载激光雷达点云体素级特征;
    H.云端对车载激光雷达点云体素级特征和偏转点云体素级特征进行数据拼接和数据聚合得到聚合体素级特征;
    I.云端将聚合体素级特征输入基于体素级特征的三维目标检测网络模型得到目标检测结果,并将目标检测结果传输至自动驾驶车辆。
  4. 如权利要求1至3之一所述的方法,其特征在于,路侧激光雷达的配置准则为:
    ①对于路侧安装机械旋转式激光雷达和同杆安装两反向全固态激光雷达的情况,应至少满足:
    Figure PCTCN2022084925-appb-100001
    其中:
    H表示激光雷达安装高度;
    θ 2表示激光雷达最高仰角光束与水平方向的夹角;
    L表示相邻两个激光雷达安装杆位间的距离;
    ②对于路侧安装路侧全固态激光雷达应至少满足以下要求:
    Figure PCTCN2022084925-appb-100002
    其中:
    H b表示路侧全固态激光雷达安装高度;
    Figure PCTCN2022084925-appb-100003
    表示路侧全固态激光雷达在垂直方向上的视场角度;
    Figure PCTCN2022084925-appb-100004
    表示路侧全固态激光雷达最高仰角光束与水平方向的夹角;
    L b表示相邻两个路侧全固态激光雷达安装杆位间的距离。
  5. 如权利要求1至3之一所述的方法,其特征在于,对所述路侧激光雷达的外参标定时,在路侧激光雷达扫描区域内选择特征点作为控制点时考虑控制点的数量、位置离散型和共线性。
  6. 如权利要求1至3之一所述的方法,其特征在于,所述路侧激光雷达的外参采用如下方法标定:将控制点在路侧激光雷达坐标系中的坐标和RTK测得的基准坐标系中的坐标分别作为目标点集P和源点集Q,使用ICP算法计算激光雷达外参。
  7. 如权利要求1至3之一所述的方法,其特征在于,所述的点云体素化过程中对偏转点云作扩充处理,保证车载激光雷达点云D c和扩充后的偏转点云
    Figure PCTCN2022084925-appb-100005
    的体素划分网格一致,其计算公式为:
    Figure PCTCN2022084925-appb-100006
    其中:
    K lidar_start′、K lidar_end′为K维度上扩充后的偏转点云
    Figure PCTCN2022084925-appb-100007
    的范围起始值和终止值;
    K lidar_start、K lidar_end为K维度上偏转点云
    Figure PCTCN2022084925-appb-100008
    的范围起始值和终止值;
    V K为体素在K维度上的大小。
  8. 如权利要求1至3之一所述的方法,其特征在于,在提取点云体素级特征时对点采用对中心的偏移量来补充其信息,即:
    Figure PCTCN2022084925-appb-100009
    其中:
    Figure PCTCN2022084925-appb-100010
    为补充后的体素A中第i个点的信息;
    x i、y i、z i为体素A中第i个点的坐标;
    r i为体素A中第i个点的反射强度;
    v x、v y、v z为体素A内所有点坐标的均值。
  9. 如权利要求1至3之一所述的方法,其特征在于,所述的体素级特征数据聚合方法使用最大值池化方法聚合相同坐标的体素级特征,其公式如下:
    Figure PCTCN2022084925-appb-100011
    f k为聚合体素级特征
    Figure PCTCN2022084925-appb-100012
    在位置k的值;
    f ego_k为车载激光雷达点云体素级特征
    Figure PCTCN2022084925-appb-100013
    在位置k的值;
    f lidar_k为偏转点云体素级特征
    Figure PCTCN2022084925-appb-100014
    在位置k的值;
PCT/CN2022/084925 2021-01-01 2022-04-01 一种面向车路协同的感知信息融合表征及目标检测方法 WO2022206977A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280026658.2A CN117441113A (zh) 2021-01-01 2022-04-01 一种面向车路协同的感知信息融合表征及目标检测方法

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110000327 2021-01-01
CN202110228419 2021-03-01
CNPCT/CN2021/085148 2021-04-01
PCT/CN2021/085148 WO2022141912A1 (zh) 2021-01-01 2021-04-01 一种面向车路协同的感知信息融合表征及目标检测方法

Publications (1)

Publication Number Publication Date
WO2022206977A1 true WO2022206977A1 (zh) 2022-10-06

Family

ID=82260124

Family Applications (9)

Application Number Title Priority Date Filing Date
PCT/CN2021/085147 WO2022141911A1 (zh) 2021-01-01 2021-04-01 一种基于路侧感知单元的动态目标点云快速识别及点云分割方法
PCT/CN2021/085148 WO2022141912A1 (zh) 2021-01-01 2021-04-01 一种面向车路协同的感知信息融合表征及目标检测方法
PCT/CN2021/085150 WO2022141914A1 (zh) 2021-01-01 2021-04-01 一种基于雷视融合的多目标车辆检测及重识别方法
PCT/CN2021/085146 WO2022141910A1 (zh) 2021-01-01 2021-04-01 一种基于行车安全风险场的车路激光雷达点云动态分割及融合方法
PCT/CN2021/085149 WO2022141913A1 (zh) 2021-01-01 2021-04-01 一种基于车载定位装置的路侧毫米波雷达校准方法
PCT/CN2022/084912 WO2022206974A1 (zh) 2021-01-01 2022-04-01 一种基于路侧感知单元的静态和非静态物体点云识别方法
PCT/CN2022/084925 WO2022206977A1 (zh) 2021-01-01 2022-04-01 一种面向车路协同的感知信息融合表征及目标检测方法
PCT/CN2022/084929 WO2022206978A1 (zh) 2021-01-01 2022-04-01 一种基于车载定位装置的路侧毫米波雷达校准方法
PCT/CN2022/084738 WO2022206942A1 (zh) 2021-01-01 2022-04-01 一种基于行车安全风险场的激光雷达点云动态分割及融合方法

Family Applications Before (6)

Application Number Title Priority Date Filing Date
PCT/CN2021/085147 WO2022141911A1 (zh) 2021-01-01 2021-04-01 一种基于路侧感知单元的动态目标点云快速识别及点云分割方法
PCT/CN2021/085148 WO2022141912A1 (zh) 2021-01-01 2021-04-01 一种面向车路协同的感知信息融合表征及目标检测方法
PCT/CN2021/085150 WO2022141914A1 (zh) 2021-01-01 2021-04-01 一种基于雷视融合的多目标车辆检测及重识别方法
PCT/CN2021/085146 WO2022141910A1 (zh) 2021-01-01 2021-04-01 一种基于行车安全风险场的车路激光雷达点云动态分割及融合方法
PCT/CN2021/085149 WO2022141913A1 (zh) 2021-01-01 2021-04-01 一种基于车载定位装置的路侧毫米波雷达校准方法
PCT/CN2022/084912 WO2022206974A1 (zh) 2021-01-01 2022-04-01 一种基于路侧感知单元的静态和非静态物体点云识别方法

Family Applications After (2)

Application Number Title Priority Date Filing Date
PCT/CN2022/084929 WO2022206978A1 (zh) 2021-01-01 2022-04-01 一种基于车载定位装置的路侧毫米波雷达校准方法
PCT/CN2022/084738 WO2022206942A1 (zh) 2021-01-01 2022-04-01 一种基于行车安全风险场的激光雷达点云动态分割及融合方法

Country Status (3)

Country Link
CN (5) CN116685873A (zh)
GB (2) GB2618936A (zh)
WO (9) WO2022141911A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114724362B (zh) * 2022-03-23 2022-12-27 中交信息技术国家工程实验室有限公司 一种车辆轨迹数据处理方法
CN115358530A (zh) * 2022-07-26 2022-11-18 上海交通大学 一种车路协同感知路侧测试数据质量评价方法
CN115113157B (zh) * 2022-08-29 2022-11-22 成都瑞达物联科技有限公司 一种基于车路协同雷达的波束指向校准方法
CN115480243B (zh) * 2022-09-05 2024-02-09 江苏中科西北星信息科技有限公司 多毫米波雷达端边云融合计算集成及其使用方法
CN115166721B (zh) * 2022-09-05 2023-04-07 湖南众天云科技有限公司 路侧感知设备中雷达与gnss信息标定融合方法及装置
CN115272493B (zh) * 2022-09-20 2022-12-27 之江实验室 一种基于连续时序点云叠加的异常目标检测方法及装置
CN115235478B (zh) * 2022-09-23 2023-04-07 武汉理工大学 基于视觉标签和激光slam的智能汽车定位方法及系统
CN115830860B (zh) * 2022-11-17 2023-12-15 西部科学城智能网联汽车创新中心(重庆)有限公司 交通事故预测方法及装置
CN115966084B (zh) * 2023-03-17 2023-06-09 江西昂然信息技术有限公司 全息路口毫米波雷达数据处理方法、装置及计算机设备
CN116189116B (zh) * 2023-04-24 2024-02-23 江西方兴科技股份有限公司 一种交通状态感知方法及系统
CN117471461B (zh) * 2023-12-26 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) 一种用于车载辅助驾驶系统的路侧雷达服务装置和方法
CN117452392B (zh) * 2023-12-26 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) 一种用于车载辅助驾驶系统的雷达数据处理系统和方法

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160071162A (ko) * 2014-12-11 2016-06-21 현대자동차주식회사 라이다를 이용한 멀티 오브젝트 추적 장치 및 그 방법
CN108010360A (zh) * 2017-12-27 2018-05-08 中电海康集团有限公司 一种基于车路协同的自动驾驶环境感知系统
KR20180066618A (ko) * 2016-12-09 2018-06-19 (주)엠아이테크 자율주행차량을 위한 거리 데이터와 3차원 스캔 데이터의 정합 방법 및 그 장치
CN110220529A (zh) * 2019-06-17 2019-09-10 深圳数翔科技有限公司 一种路侧自动驾驶车辆的定位方法
CN110296713A (zh) * 2019-06-17 2019-10-01 深圳数翔科技有限公司 路侧自动驾驶车辆定位导航系统及单个、多个车辆定位导航方法
CN110906939A (zh) * 2019-11-28 2020-03-24 安徽江淮汽车集团股份有限公司 自动驾驶定位方法、装置、电子设备、存储介质及汽车
CN111766608A (zh) * 2020-06-12 2020-10-13 苏州泛像汽车技术有限公司 一种基于激光雷达的环境感知系统
CN111985322A (zh) * 2020-07-14 2020-11-24 西安理工大学 一种基于激光雷达的道路环境要素感知方法
CN111999741A (zh) * 2020-01-17 2020-11-27 青岛慧拓智能机器有限公司 路侧激光雷达目标检测方法及装置

Family Cites Families (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6661370B2 (en) * 2001-12-11 2003-12-09 Fujitsu Ten Limited Radar data processing apparatus and data processing method
US9562971B2 (en) * 2012-11-22 2017-02-07 Geosim Systems Ltd. Point-cloud fusion
TWI597513B (zh) * 2016-06-02 2017-09-01 財團法人工業技術研究院 定位系統、車載定位裝置及其定位方法
CN105892471B (zh) * 2016-07-01 2019-01-29 北京智行者科技有限公司 汽车自动驾驶方法和装置
WO2018126248A1 (en) * 2017-01-02 2018-07-05 Okeeffe James Micromirror array for feedback-based image resolution enhancement
CN106846494A (zh) * 2017-01-16 2017-06-13 青岛海大新星软件咨询有限公司 倾斜摄影三维建筑物模型自动单体化算法
US10281920B2 (en) * 2017-03-07 2019-05-07 nuTonomy Inc. Planning for unknown objects by an autonomous vehicle
CN108629231B (zh) * 2017-03-16 2021-01-22 百度在线网络技术(北京)有限公司 障碍物检测方法、装置、设备及存储介质
CN107133966B (zh) * 2017-03-30 2020-04-14 浙江大学 一种基于采样一致性算法的三维声纳图像背景分割方法
CN108932462B (zh) * 2017-05-27 2021-07-16 华为技术有限公司 驾驶意图确定方法及装置
FR3067495B1 (fr) * 2017-06-08 2019-07-05 Renault S.A.S Procede et systeme d'identification d'au moins un objet en deplacement
CN109509260B (zh) * 2017-09-14 2023-05-26 阿波罗智能技术(北京)有限公司 动态障碍物点云的标注方法、设备及可读介质
CN107609522B (zh) * 2017-09-19 2021-04-13 东华大学 一种基于激光雷达和机器视觉的信息融合车辆检测系统
CN108152831B (zh) * 2017-12-06 2020-02-07 中国农业大学 一种激光雷达障碍物识别方法及系统
CN108639059B (zh) * 2018-05-08 2019-02-19 清华大学 基于最小作用量原理的驾驶人操控行为量化方法及装置
CN109188379B (zh) * 2018-06-11 2023-10-13 深圳市保途者科技有限公司 驾驶辅助雷达工作角度的自动校准方法
KR20210025523A (ko) * 2018-07-02 2021-03-09 소니 세미컨덕터 솔루션즈 가부시키가이샤 정보 처리 장치 및 정보 처리 방법, 컴퓨터 프로그램, 그리고 이동체 장치
US10839530B1 (en) * 2018-09-04 2020-11-17 Apple Inc. Moving point detection
CN109297510B (zh) * 2018-09-27 2021-01-01 百度在线网络技术(北京)有限公司 相对位姿标定方法、装置、设备及介质
CN111429739A (zh) * 2018-12-20 2020-07-17 阿里巴巴集团控股有限公司 一种辅助驾驶方法和系统
JP7217577B2 (ja) * 2019-03-20 2023-02-03 フォルシアクラリオン・エレクトロニクス株式会社 キャリブレーション装置、キャリブレーション方法
CN110532896B (zh) * 2019-08-06 2022-04-08 北京航空航天大学 一种基于路侧毫米波雷达和机器视觉融合的道路车辆检测方法
CN110443978B (zh) * 2019-08-08 2021-06-18 南京联舜科技有限公司 一种摔倒报警设备及方法
CN110458112B (zh) * 2019-08-14 2020-11-20 上海眼控科技股份有限公司 车辆检测方法、装置、计算机设备和可读存储介质
CN110850378B (zh) * 2019-11-22 2021-11-19 深圳成谷科技有限公司 一种路侧雷达设备自动校准方法和装置
CN110850431A (zh) * 2019-11-25 2020-02-28 盟识(上海)科技有限公司 一种拖车偏转角的测量系统和方法
CN111121849B (zh) * 2020-01-02 2021-08-20 大陆投资(中国)有限公司 传感器的方位参数的自动校准方法、边缘计算单元和路侧传感系统
CN111157965B (zh) * 2020-02-18 2021-11-23 北京理工大学重庆创新中心 车载毫米波雷达安装角度自校准方法、装置及存储介质
CN111476822B (zh) * 2020-04-08 2023-04-18 浙江大学 一种基于场景流的激光雷达目标检测与运动跟踪方法
CN111554088B (zh) * 2020-04-13 2022-03-22 重庆邮电大学 一种多功能v2x智能路侧基站系统
CN111192295B (zh) * 2020-04-14 2020-07-03 中智行科技有限公司 目标检测与跟踪方法、设备和计算机可读存储介质
CN111537966B (zh) * 2020-04-28 2022-06-10 东南大学 一种适用于毫米波车载雷达领域的阵列天线误差校正方法
CN111880191B (zh) * 2020-06-16 2023-03-28 北京大学 基于多智能体激光雷达和视觉信息融合的地图生成方法
CN111880174A (zh) * 2020-07-03 2020-11-03 芜湖雄狮汽车科技有限公司 一种用于支持自动驾驶控制决策的路侧服务系统及其控制方法
CN111914664A (zh) * 2020-07-06 2020-11-10 同济大学 基于重识别的车辆多目标检测和轨迹跟踪方法
CN111862157B (zh) * 2020-07-20 2023-10-10 重庆大学 一种机器视觉与毫米波雷达融合的多车辆目标跟踪方法
CN112019997A (zh) * 2020-08-05 2020-12-01 锐捷网络股份有限公司 一种车辆定位方法及装置
CN112509333A (zh) * 2020-10-20 2021-03-16 智慧互通科技股份有限公司 一种基于多传感器感知的路侧停车车辆轨迹识别方法及系统


Also Published As

Publication number Publication date
GB202313215D0 (en) 2023-10-11
WO2022141911A1 (zh) 2022-07-07
CN117441113A (zh) 2024-01-23
WO2022206974A1 (zh) 2022-10-06
WO2022206942A1 (zh) 2022-10-06
WO2022141912A1 (zh) 2022-07-07
GB2620877A (en) 2024-01-24
WO2022206978A1 (zh) 2022-10-06
CN116685873A (zh) 2023-09-01
GB2618936A (en) 2023-11-22
CN117836653A (zh) 2024-04-05
WO2022141914A1 (zh) 2022-07-07
GB202316625D0 (en) 2023-12-13
CN117441197A (zh) 2024-01-23
CN117836667A (zh) 2024-04-05
WO2022141913A1 (zh) 2022-07-07
WO2022141910A1 (zh) 2022-07-07

Similar Documents

Publication Publication Date Title
WO2022206977A1 (zh) 一种面向车路协同的感知信息融合表征及目标检测方法
WO2021226776A1 (zh) 一种车辆可行驶区域检测方法、系统以及采用该系统的自动驾驶车辆
JP6441993B2 (ja) レーザー点クラウドを用いる物体検出のための方法及びシステム
US11768293B2 (en) Method and device for adjusting parameters of LiDAR, and LiDAR
US8989944B1 (en) Methods and devices for determining movements of an object in an environment
US9201424B1 (en) Camera calibration using structure from motion techniques
US11861784B2 (en) Determination of an optimal spatiotemporal sensor configuration for navigation of a vehicle using simulation of virtual sensors
WO2018066351A1 (ja) シミュレーションシステム、シミュレーションプログラム及びシミュレーション方法
US20230264715A1 (en) Puddle occupancy grid for autonomous vehicles
CN114283394A (zh) 一种车载传感器融合的交通目标检测系统
US11798289B2 (en) Streaming object detection and segmentation with polar pillars
CN109895697B (zh) 一种行车辅助提示系统及方法
CN117501311A (zh) 利用一个或多个摄像机生成和/或使用三维信息的系统和方法
Zhu et al. Design of laser scanning binocular stereo vision imaging system and target measurement
Jia et al. A new multi-sensor platform for adaptive driving assistance system (ADAS)
CN117237919A (zh) 跨模态监督学习下多传感器融合检测的卡车智驾感知方法
CN116403186A (zh) 基于FPN Swin Transformer与Pointnet++ 的自动驾驶三维目标检测方法
CN116106931A (zh) 一种涉水预警的方法及相关装置
CN215495425U (zh) 复眼摄像系统及使用复眼摄像系统的车辆
TWM618998U (zh) 複眼攝像系統及使用複眼攝像系統的車輛
WO2022246273A1 (en) Streaming object detection and segmentation with polar pillars
CN115440067A (zh) 复眼摄像系统、使用复眼摄像系统的车辆及其影像处理方法
Tigadi et al. SURVEY ON SENSOR FUSION FOR AUTONOMOUS DRIVING: TECHNIQUES AND CHALLENGES
TW202248963A (zh) 複眼攝像系統,使用複眼攝像系統的車輛及其影像處理方法
CN116699620A (zh) 一种基于激光雷达的车-路协同定位方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22779157

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22779157

Country of ref document: EP

Kind code of ref document: A1