GB2621048A - Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field - Google Patents

Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field

Info

Publication number
GB2621048A
Authority
GB
United Kingdom
Prior art keywords
point cloud
data
safety field
safety
static
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2316614.3A
Other versions
GB202316614D0 (en)
Inventor
Zhao Cong
Du Yuchuan
Ji Yuxiong
Ni Lantao
Shen Yu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority claimed from PCT/CN2021/085146 (WO2022141910A1)
Publication of GB202316614D0
Publication of GB2621048A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/003Transmission of data between radar, sonar or lidar systems and remote stations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808Evaluating distance, position or velocity data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking

Abstract

A vehicle-road laser radar point cloud dynamic segmentation and fusion method based on a driving safety risk field, comprising the following steps: (1) proposing a driving safety risk field computation mechanism, and quantitatively analysing the level of risk that static objects, such as vehicles parked at the roadside, road blocks and traffic signs, and moving objects, such as moving vehicles, non-motor vehicles and pedestrians, pose with regard to a certain position; (2) using the computation mechanism, with laser radar point cloud data of a roadside sensing unit as the data source, computing the level of risk that other objects pose to a target vehicle (a self-driving vehicle) within the scanning range, and constructing a unified driving safety risk field distribution centred on the target vehicle; (3) using a threshold to screen out areas that are highly risky for the target vehicle, and segmenting the corresponding point cloud data from the original data as supplementary sensing information provided to the self-driving vehicle; and (4) processing and fusing the point cloud-level information acquired by the laser radar of the roadside sensing unit with the point cloud-level information acquired by the vehicle-side laser radar, and providing a reference evaluation system for the fusion method.

Description

VEHICLE-ROAD LASER RADAR POINT CLOUD DYNAMIC SEGMENTATION AND FUSION METHOD BASED ON DRIVING SAFETY RISK FIELD
Technical Field
The present invention relates to a vehicle infrastructure cooperative perception technology, and particularly relates to a safety field-based dynamic point cloud segmentation and fusion method for roadside light detection and ranging (LiDAR) modules in cooperative vehicle infrastructure systems (CVIS). CVIS enhances the perception capabilities of autonomous vehicles (AVs) through data sharing from the infrastructure side. The present invention is designed to analyse and enhance the point cloud acquired by roadside LiDAR to reduce the communication overhead while improving the perception performance for AVs in CVIS.
Background Technology
Autonomous driving technology is an emerging field in the transportation sector into which intensified investment is pouring. Various technology roadmaps have been proposed for safety and efficiency considerations, among which vehicle infrastructure cooperation technology is receiving increasing attention. Rather than relying solely on individual vehicle intelligence, it proposes to expand the intelligent system to the infrastructure side, using multidimensional data channels, such as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, to compensate for the limitations of individual vehicle perception and data processing capabilities, so that the safety and efficiency of the entire transportation system are guaranteed.
Since the 1990s, automotive companies have proposed numerous safety evaluation algorithms for advanced driver assistance systems (ADAS). For longitudinal control, the primary approach is the safety distance model: when the following distance is less than the safety distance, the assistance system issues an alert and applies automatic braking. Many safety distance models determine the vehicle's safety status by analysing the real-time safe distance between the front and rear vehicles. For lateral control, the algorithms are primarily based on the car's current lane position (CLP), time to lane crossing (TLC), and variable roadside boundaries (VRBS). Existing safety evaluation models mostly rely on vehicle kinematics and dynamics, characterizing driving safety based on each individual vehicle's states, such as position, speed, acceleration, and yaw rate. Relative motion between neighbouring vehicles, such as relative speed and distance, has also been taken into consideration in some models. However, existing models cannot describe the comprehensive impact of various traffic factors on driving safety. The interactions between driver behaviours, vehicle states, and road environments have not been fully incorporated into the safety evaluation, yielding insufficient information for vehicle control.
To tackle this problem, field theory has become an emerging direction in autonomous driving safety evaluation. Originally used to address vehicle and robot navigation, field theory-based safety evaluation algorithms allow vehicles to navigate autonomously using only positioning and local sensor measurements. With obstacles modelled as repulsive potential fields (safety fields), vehicles can use the field strength gradients to generate control actions to navigate around obstacles. However, field theory has primarily been applied to motion planning for autonomous vehicles and to driver behaviour modelling under specific scenarios, such as car following. Existing field-based safety evaluation models do not adequately factor in driver psychology and physiological characteristics, complex road conditions, and human-vehicle-road interactions, limiting their practical applications.
Environment perception is a prerequisite for achieving decision-making and planning in autonomous driving. Various sensors have been applied in environment perception systems, including high-definition cameras, infrared sensors, LiDAR, millimetre-wave radar, etc. In the field of 3D perception, LiDAR technology features a broad scanning range and intuitive results, and is not affected by ambient natural light conditions, making it well suited for application in AVs. LiDAR outputs results in the form of a point cloud, presenting low-level data features. Scan data is recorded in the form of points, each containing three-dimensional coordinates. Additional information, such as colour or reflectance intensity, depends on the specific technology used. Methods for processing LiDAR point clouds are actively developing, including target detection, target tracking, semantic segmentation, and more.
Semantic segmentation is a fundamental task in computer vision. It provides a granular understanding of the sensory data, which is particularly useful in the field of autonomous driving. When dealing with raw point cloud, it aims to perceive features such as the types and quantities of objects in the scene, rendering points of like class with like tags. Three-dimensional point cloud segmentation requires an understanding of both the global geometric structure and the fine-grained details of the point cloud. Based on the level of segmentation, 3D point cloud segmentation methods can be categorized into three types: semantic segmentation, instance segmentation, and panoptic segmentation.
The effectiveness of point cloud segmentation is closely related to the quality of the point cloud. In the scenario of vehicle infrastructure cooperative perception, the perceptual performance of the vehicle is maximized when the infrastructure side sends the raw point cloud without compression, but this can lead to excessive data transmission. The latest V2V research indicates that sending information on all road objects detected by the onboard sensors of other vehicles may still result in a high transmission network load. Therefore, a dynamic segmentation mechanism for road objects is critical to discard redundant points from the raw point cloud so that the communication load is reduced, while the key points are kept for the perception task.
The specific implementation process is as follows. First, establish a point value evaluation model based on theoretical derivation from the safety field theory. Then, assign a value to each point in the point cloud based on the safety field evaluation. Finally, determine whether to include the point in the data sent to the V2X network. Simulation research based on road traffic data shows that safety field-based point cloud segmentation can significantly improve cooperative perception performance under a limited communication budget, which is of great significance for improving the safety of autonomous driving.
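As a hedged illustration of this workflow, the following minimal Python sketch assumes that a safety-field value has already been computed for every point; it keeps only points whose value exceeds a threshold, capped by a communication budget. The threshold, budget, and toy data are illustrative assumptions, not values prescribed by the invention.

    import numpy as np

    def select_points_for_v2x(points, values, threshold, budget):
        """Keep points whose safety-field value exceeds the threshold, capped at the budget."""
        points = np.asarray(points)
        values = np.asarray(values)
        keep = values > threshold                     # safety field-based screening
        order = np.argsort(-values[keep])[:budget]    # highest-risk points first
        return points[keep][order]

    pts = np.random.rand(10000, 3) * 50               # toy point cloud (x, y, z in metres)
    vals = np.random.rand(10000)                      # toy per-point safety-field values
    payload = select_points_for_v2x(pts, vals, threshold=0.7, budget=2000)
    print(payload.shape)                              # at most 2000 high-value points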
Existing Technology
CN111161267A
CN111192284A
CN110780314A
CN111337941A
CN107886772A
Invention Content
To address the above-mentioned issues, the present invention provides a method for dynamic segmentation and fusion of roadside LiDAR point clouds based on a driving safety field mechanism. Drawing inspiration from the safety field calculation theory derived in Jianqiang Wang et al.'s paper "The Driving Safety Field Based on Driver-Vehicle-Road Interactions" and applying it to specific point cloud data, the invention proposes a practical and comprehensive mechanism for calculating the driving safety risk of objects that significantly impact road safety. Based on the calculation results, it segments the point cloud of objects posing a high risk to autonomous vehicles as the final transmission result. Subsequently, this segmented point cloud is fused with the point cloud collected by the LiDAR of the receiving vehicle, and the method is evaluated. The steps include the following:
A. Data Acquisition
In the context of autonomous driving in traffic scenarios, a point cloud of the traffic scene is obtained through LiDAR scans. This data serves as the source for all subsequent processes, and the data acquisition module's workflow is depicted in Figure 2.
Within the data acquisition module, there are two alternative approaches:
A1: The first approach involves LiDAR scanning solely by the roadside perception unit's roadside LiDAR to construct the scene's point cloud. Subsequent processes, such as building the safety field and numerical calculations, exclusively use the point cloud from the roadside LiDAR.
A2: The second approach entails LiDAR scanning conducted by both the roadside LiDAR in the roadside perception unit and LiDAR mounted on pre-defined vehicles within the scene. This is done to construct the scene's point cloud. In this case, the subsequent processes for building the safety field and numerical calculations employ both the point cloud from the roadside LiDAR and the point cloud from the LiDAR on the designated vehicles. This approach facilitates mutual validation and cross-checking.
B. Data Calculation
The data calculation module comprises the target detection submodule and the safety field calculation submodule, as illustrated in Figure 3.
B1: Target Detection Submodule. In this submodule, the point cloud obtained in step (1) undergoes 3D object detection using deep learning, specifically the PV-RCNN algorithm. The input is the scenario point cloud, and the output is the result of object detection. Since the data source is LiDAR point cloud, the deployment position of the LiDAR determines the size and features of the scenario point cloud. The bounding boxes represent the boundaries of each target in the scene, with attributes such as position, length, height, width, and yaw angle, as shown in Figure 4.
B2: Scene Acquisition Submodule. This submodule is designed to obtain features and information from the scene in advance of the target detection submodule. This facilitates better object detection and subsequent safety field calculation. There are multiple alternative approaches for this submodule:
B21: By incorporating a camera sensor in the roadside perception unit, RGB information of the scene is captured, along with corresponding horizontal and vertical boundaries. This information helps in determining the type of objects and assists in identifying static objects.
B22: Prior to the automated processing of the target detection submodule, an artificial judgment process is introduced. Trained personnel manually calibrate static objects in the traffic scene to achieve the goal of identifying static objects.
B23: By utilizing existing high-precision maps and locating the scene based on the coordinate system, the submodule identifies static object types using lane-level information from the high-precision map.
B3: Safety Field Calculation Submodule. The inputs to this submodule are the types of static objects and the bounding boxes obtained from target detection. Drawing inspiration from field theory methods in physics, such as gravity fields and magnetic fields, all potential risk-inducing elements in the traffic environment are treated as sources of danger. The safety field strength can be understood as the risk coefficient at a certain distance from the danger source. The closer the distance to the danger centre, the higher the likelihood of an accident. When the distance approaches zero, it can be considered that a collision has occurred between the target object and the danger source, indicating that a traffic accident has taken place.
The safety field model consists of a static safety field and a dynamic safety field, i.e., Safety field = Static safety field + Dynamic safety field:

    Es = ER + Ev

Es represents the safety field strength vector. ER represents the static safety field strength vector. Ev represents the dynamic safety field strength vector. The safety field model can be expressed as the potential driving risk caused by traffic factors in actual scenarios. Risk is measured through the probability of accidents and the severity of accidents.
The safety field is categorized based on the source of generation, namely the static safety field source and the dynamic safety field source:
1) Static Safety field: The source is objects in the traffic environment that are relatively stationary. This includes road markings such as lane dividers and rigid separation facilities like central dividers. These objects have two characteristics: (1) Without considering road construction, they are relatively stationary compared to the target object. (2) Except for some rigid separation facilities, these objects, based on their legal effects, cause drivers to intentionally stay away from their positions. However, even if drivers actually cross lane lines, a traffic accident may not necessarily occur immediately.
For this type of object, based on the above analysis, it is assumed that the potential field formed by the static safety field source a at the position (xa, ya) has a field strength vector ER for the target object j at the position (xj, yj):

    ER = LTa * Ra / (|raj| - fd)^k1 * (raj / |raj|)

    raj = (xj - xa, yj - ya)

LTa is the risk coefficient for different lane markings of type a1. Ra is a positive constant representing the road condition influencing factor at the position (xa, ya). fd is the distance influencing factor for different types of lane markings a1. raj is the distance vector between the lane marking a1 and the target object j. In this case, (xj, yj) is the centroid of the target object j, and (xa, ya) represents the point where the perpendicular line from (xj, yj) intersects with the lane marking a1. k1 is a positive constant representing the distance amplification factor. raj/|raj| represents the direction of the field strength. A higher value of ER indicates a higher risk imposed by static safety field source a on the target object j. Static safety field sources include but are not limited to lane markings.
2) Dynamic Safety field: The source consists of objects in the traffic environment that are relatively in motion, primarily including vehicles, pedestrians, and obstacle facilities. These objects also have two characteristics: (1) They have relative velocities with respect to moving target objects. (2) Collisions among these objects are strictly prohibited, as they will inevitably lead to serious traffic accidents. For this type of object, based on the above analysis, it is assumed that the potential field formed by the dynamic safety field source b at the position (xb, yb) has a field strength vector Ev for the target object j at the position (xj, yj):

    Ev = G * Rb * Tbj / |rbj|^k2 * exp(k3 * vbj * cos θ) * (rbj / |rbj|)

    rbj = (xj - xb, yj - yb)

The x-axis is located along the road line, and the y-axis is perpendicular to the road line. rbj represents the distance vector between dynamic safety field source b and target object j. k2, k3 and G are constants greater than 0. Rb has the same meaning as Ra. Tbj is the type correction factor between dynamic safety field source b and target object j. vbj is the relative velocity between dynamic safety field source b and target object j. θ is the angle between vbj and the rbj direction, with the clockwise direction considered positive. A higher value of Ev indicates a higher risk imposed by dynamic safety field source b on target object j.
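The field strength computation can be illustrated with a short numerical sketch in Python. It follows the formulas as reconstructed above (ER for a static source, Ev for a dynamic source, Es as their vector sum); all parameter values, positions, and velocities below are illustrative assumptions rather than values fixed by the invention.

    import numpy as np

    def static_field(p_j, p_a, LT_a=15.0, R_a=1.0, f_d=0.0, k1=1.0):
        """ER at target centroid p_j caused by the static source point p_a (2-D)."""
        r_aj = np.asarray(p_j, float) - np.asarray(p_a, float)      # raj = (xj - xa, yj - ya)
        dist = np.linalg.norm(r_aj)
        eff = max(dist - f_d, 1e-6)                                  # distance reduced by fd
        return LT_a * R_a / eff**k1 * (r_aj / dist)

    def dynamic_field(p_j, p_b, v_b, v_j, R_b=1.0, T_bj=1.5, k2=1.0, k3=0.1, G=0.001):
        """Ev at target centroid p_j caused by the moving source at p_b (2-D)."""
        r_bj = np.asarray(p_j, float) - np.asarray(p_b, float)
        dist = np.linalg.norm(r_bj)
        v_bj = np.asarray(v_b, float) - np.asarray(v_j, float)       # relative velocity vbj
        speed = np.linalg.norm(v_bj)
        cos_t = float(v_bj @ r_bj) / (speed * dist) if speed > 0 else 0.0
        return G * R_b * T_bj / dist**k2 * np.exp(k3 * speed * cos_t) * (r_bj / dist)

    # Es = ER + Ev: target at the origin, a lane line 3.5 m to one side, and an
    # oncoming vehicle 20 m ahead approaching at 10 m/s while the target drives at 5 m/s.
    E_s = (static_field((0.0, 0.0), (0.0, 3.5)) +
           dynamic_field((0.0, 0.0), (20.0, 0.0), v_b=(-10.0, 0.0), v_j=(5.0, 0.0)))
    print(E_s, np.linalg.norm(E_s))                                  # vector and its norm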
Based on the above method for calculating driving safety risks, various objects on the road can be analysed for their risk levels concerning a specific object. Given the comprehensiveness of data collection and the advantages in target localization, the invention selects point cloud obtained from roadside LiDAR (Light Detection and Ranging) as the data source, with point cloud scan results in unobstructed roadside scans serving as the calculation carrier.
For a particular object in the scene, the risk calculation process for each object is as follows.
1) Through preliminary data collection, static scene data for point cloud scan results is constructed.
Manually separate static safety field sources in the static scene, including lane dividers, central dividers, roadside areas, etc. Linear equations for each static safety field source are fitted by random sampling.
2) Select a specific frame of data as the calculation moment and extract the previous frame's data as a reference for object movement speed. Utilizing a 3D object detection and tracking algorithm based on point cloud, identify all target objects (usually vehicles, pedestrians, etc.) in both the calculation frame and the previous frame, establishing correspondences between objects in the two frames. Calculate the object's movement speed using the annotation box of the target object and the LiDAR's scanning frame rate. For newly added objects without previous frame data for speed calculation, consider their speed as the standard speed.
3) Randomly select a target object for risk calculation. Incorporate relative positions, types, and other attributes of the target object and other target objects, as well as parameters such as the distance between static safety field sources and the target object from step 1. Include traffic conditions and other environmental factors in the safety field calculation mechanism. Set the speed of the receiving object (vehicle) in the relative velocity as an unknown parameter. Extend the calculation process backward, and the relative velocity becomes an expression with unknown parameters. Obtain the safety risk for each object in the scanning range concerning the calculated target, thus forming a driving safety risk distribution centred around the calculated target.
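A minimal sketch of the speed estimation in step 2) is given below, assuming that the target objects have already been matched between the previous frame and the calculation frame by the tracking algorithm; the frame rate, the fallback "standard speed", and the box centres are illustrative assumptions.

    import numpy as np

    FRAME_RATE_HZ = 10.0      # assumed LiDAR scanning frame rate
    STANDARD_SPEED = 8.3      # assumed fallback speed (m/s) for newly appeared objects

    def estimate_speed(prev_center, curr_center, frame_rate=FRAME_RATE_HZ):
        """Speed (m/s) from the displacement of a tracked bounding-box centre between frames."""
        if prev_center is None:                       # newly added object: no previous frame
            return STANDARD_SPEED
        disp = np.asarray(curr_center, float) - np.asarray(prev_center, float)
        return float(np.linalg.norm(disp)) * frame_rate

    print(estimate_speed((10.0, 2.0, 0.0), (11.2, 2.1, 0.0)))   # ~12.0 m/s
    print(estimate_speed(None, (30.0, 4.0, 0.0)))               # falls back to the standard speed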
C. Data Segmentation
The data segmentation module's workflow is depicted in Figure 8. First, it is necessary to divide the scenario point cloud into two categories: points within the bounding boxes and points outside the bounding boxes. Based on the input scenario point cloud and bounding box data, an algorithm is designed to detect whether a point is inside a bounding box, thus separating the point cloud into these two categories.
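The point-in-bounding-box check described above can be sketched as follows. The box is parameterised by its centre, length, width, height, and yaw angle, matching the attributes listed for the target detection output; the function and field names are assumptions made for illustration.

    import numpy as np

    def points_in_box(points, cx, cy, cz, length, width, height, yaw):
        """Boolean mask over an (N, 3) array: True where a point lies inside the rotated box."""
        p = np.asarray(points, float) - np.array([cx, cy, cz])
        c, s = np.cos(-yaw), np.sin(-yaw)             # rotate the points into the box frame
        x = c * p[:, 0] - s * p[:, 1]
        y = s * p[:, 0] + c * p[:, 1]
        return ((np.abs(x) <= length / 2) &
                (np.abs(y) <= width / 2) &
                (np.abs(p[:, 2]) <= height / 2))

    P1 = np.random.rand(10000, 3) * 20                # toy scenario point cloud
    mask = points_in_box(P1, 10, 10, 1.0, 4.5, 1.8, 1.6, np.deg2rad(30))
    P11, P12 = P1[mask], P1[~mask]                    # bounded / unbounded point clouds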
Before introducing the safety field, there are two methods for data segmentation: sampling and segmentation. After introducing the safety field and calculating its results, a safety field threshold is set, and the threshold is used to filter out objects with higher calculated risks. There are four alternative data segmentation approaches:
C1: Sampling Approach. In this approach, the data collected by the data collection sub-module, namely the scenario point cloud P1, the target detection bounding boxes X1, and the safety field data S1, are used as sub-module inputs. Firstly, through a conditional check, the points in the scenario point cloud P1 are evaluated to determine whether they are inside the bounding boxes X1, resulting in bounded point cloud P11 and unbounded point cloud P12. Then, hyperparameters f1 and f2 are set to randomly sample the data P11 and P12, resulting in segmented point cloud P2.
C2: Segmentation Approach. For this approach, the data collected by the data collection sub-module, namely the scenario point cloud P1, the target detection bounding boxes X1, and the safety field data S1, are used as sub-module inputs. Following a conditional check, the points in the scenario point cloud P1 are evaluated to determine whether they are inside the bounding boxes X1, resulting in bounded point cloud P11 and unbounded point cloud P12. The segmented point cloud P2 is obtained by selecting the bounded point cloud P11 and removing the unbounded point cloud P12.
C3: Sampling Approach Based on the Safety Field. In this approach, the data collected by the data collection sub-module, namely the scenario point cloud P1, the target detection bounding boxes X1, and the safety field data S1, are used as sub-module inputs. After a conditional check, the scenario point cloud P1 is divided into bounded point cloud P11 and unbounded point cloud P12. Then, a safety field threshold f3 is set, and the point clouds P11 and P12 are sampled based on this threshold, resulting in segmented point cloud P2.
C4: Segmentation Approach Based on the Safety Field. For this approach, the data collected by the data collection sub-module, namely the scenario point cloud P1, the target detection bounding boxes X1, and the safety field data S1, are used as sub-module inputs. After a conditional check, the scenario point cloud P1 is divided into bounded point cloud P11 and unbounded point cloud P12. Then, a safety field threshold f3 is set, and the point clouds P11 and P12 are segmented based on this threshold, resulting in segmented point cloud P2.
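A minimal sketch of alternatives C1 and C2 is shown below, assuming the bounded and unbounded point clouds P11 and P12 have already been obtained with a point-in-box check such as the one sketched earlier; the sampling weights f1 and f2 and the toy data are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample(points, keep_ratio):
        """Randomly keep a fraction of the points."""
        n = int(len(points) * keep_ratio)
        idx = rng.choice(len(points), size=n, replace=False)
        return points[idx]

    P11 = np.random.rand(800, 3) * 20                 # toy in-box points
    P12 = np.random.rand(9000, 3) * 20                # toy background points

    f1, f2 = 0.8, 0.1                                 # keep most in-box points, few background points
    P2_c1 = np.vstack([sample(P11, f1), sample(P12, f2)])   # C1: sampling approach
    P2_c2 = P11                                              # C2: keep P11, discard P12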
The extraction method for the danger zone is as follows: For a static safety field source, the danger zone is a region centred around the linear equation of the safety field source, with a width of d/2 on each side, where d is the width of the calculated object.
For a dynamic safety field source, the danger zone is a rectangular region centred around the centroid of the danger target, with a width of 1.5d and a length of (0.5l + 0.5lk), where d is the width, l is the length, and k is a speed correction factor greater than or equal to 1. Danger zones are extracted based on the risk coefficient of the danger source, and overlapping regions are extracted only once. The extracted total danger zone results can be provided as perception auxiliary data to the receiving object.
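The danger zone test can be sketched as follows, assuming each static safety field source has been fitted with a 2-D line a*x + b*y + c = 0 and each dynamic danger target is described by its centroid, heading, length l, and width d; the helper names, the correction factor k, and the toy data are illustrative assumptions.

    import numpy as np

    def in_static_zone(points_xy, a, b, c, d):
        """True where a point lies within d/2 of the line a*x + b*y + c = 0."""
        pts = np.asarray(points_xy, float)
        dist = np.abs(a * pts[:, 0] + b * pts[:, 1] + c) / np.hypot(a, b)
        return dist <= d / 2

    def in_dynamic_zone(points_xy, cx, cy, yaw, l, d, k=1.2):
        """True where a point lies in the rectangle of width 1.5*d and length 0.5*l + 0.5*l*k."""
        pts = np.asarray(points_xy, float) - np.array([cx, cy])
        co, si = np.cos(-yaw), np.sin(-yaw)
        x = co * pts[:, 0] - si * pts[:, 1]
        y = si * pts[:, 0] + co * pts[:, 1]
        return (np.abs(x) <= (0.5 * l + 0.5 * l * k) / 2) & (np.abs(y) <= 1.5 * d / 2)

    pts = np.random.rand(5000, 2) * 40
    danger = (in_static_zone(pts, a=0.0, b=1.0, c=-3.5, d=1.8) |
              in_dynamic_zone(pts, cx=20.0, cy=5.0, yaw=0.3, l=4.5, d=1.8))
    danger_zone_points = pts[danger]                  # overlapping regions counted once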
D. Data Distribution
The workflow of the data publishing module is depicted in Figure 9. Based on the results of data segmentation, the data is compressed by the roadside perception unit. Subsequently, a data transmission channel is established between the roadside perception unit and the receiving object's vehicle. The receiving object should meet the following criterion: at a specific timestamp, a certain numbered vehicle is at a particular position in the scene. The data receiving vehicle's movement is then assessed, leading to two alternative scenarios:
D1: If the receiving object's vehicle is stationary, the segmented point cloud, the static safety field and dynamic safety field vectors, and the resultant safety field vectors are directly published, with their magnitudes as numerical values.
D2: If the receiving object's vehicle is in motion, the segmented point cloud, the static safety field, and the (semi-finished) dynamic safety field data are published. The receiving object vehicle's speed is then incorporated to obtain its resultant safety field vector, with the magnitude represented as a numerical value.
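Alternative D2 can be illustrated with the following hypothetical sketch: the roadside unit publishes a "semi-finished" dynamic field as a function of the receiving vehicle's velocity (the safety field function S2), and the receiver completes the field strength with its onboard-measured speed. Function names and parameter values are assumptions made for illustration.

    import numpy as np

    def make_semi_finished_field(p_j, p_b, v_b, R_b=1.0, T_bj=1.5, k2=1.0, k3=0.1, G=0.001):
        """Return S2: a function of the receiver velocity v_j that evaluates Ev."""
        def field(v_j):
            r_bj = np.asarray(p_j, float) - np.asarray(p_b, float)
            dist = np.linalg.norm(r_bj)
            v_bj = np.asarray(v_b, float) - np.asarray(v_j, float)
            speed = np.linalg.norm(v_bj)
            cos_t = float(v_bj @ r_bj) / (speed * dist) if speed > 0 else 0.0
            return G * R_b * T_bj / dist**k2 * np.exp(k3 * speed * cos_t) * (r_bj / dist)
        return field

    # Roadside side: publish the function (in practice, its parameters).
    S2 = make_semi_finished_field(p_j=(0.0, 0.0), p_b=(20.0, 0.0), v_b=(-10.0, 0.0))
    # Vehicle side (D2): complete the field with the onboard-measured velocity.
    Ev = S2(v_j=(5.0, 0.0))
    print(np.linalg.norm(Ev))                         # published as a numerical field strength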
E. Data Fusion
The segmented data is fused with the point cloud scanned by the LiDAR of the receiving object's vehicle. This involves designing a point cloud coordinate transformation matrix to align the high-risk data points between the vehicle side and the roadside, followed by compressing the fused point cloud.
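A minimal sketch of the fusion step is given below, assuming the rotation matrix and translation vector from the roadside LiDAR frame to the vehicle LiDAR frame are known from point cloud alignment or prior calibration; the transform values, toy point clouds, and the naive voxel-grid compression are illustrative assumptions.

    import numpy as np

    def transform_points(points, R, t):
        """Apply a rigid transform to an (N, 3) point cloud: p' = R @ p + t."""
        return np.asarray(points, float) @ np.asarray(R, float).T + np.asarray(t, float)

    R = np.array([[0.0, -1.0, 0.0],                   # illustrative 90-degree rotation about z
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    t = np.array([35.0, -2.0, 4.5])                   # illustrative roadside-to-vehicle translation

    P2 = np.random.rand(2000, 3) * 30                 # segmented roadside point cloud (toy data)
    P3 = np.random.rand(8000, 3) * 30                 # onboard LiDAR point cloud (toy data)
    P4 = np.vstack([transform_points(P2, R, t), P3])  # fused point cloud
    P5 = np.unique(np.round(P4 / 0.1), axis=0) * 0.1  # naive 0.1 m voxel-grid compression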
F. Performance Evaluation
Experiments are conducted for the different data segmentation methods. "V" and "V+I" represent the raw point cloud from the vehicle alone and the fused raw point cloud from the vehicle and roadside, respectively. "V+I1" and "V+I2" represent the point cloud fusion from the vehicle and roadside with segmentation and with sampling, respectively. "V+I1+S" and "V+I2+S" represent the point cloud fusion from the vehicle and roadside with segmentation based on the safety field and with sampling based on the safety field, respectively. Finally, an evaluation system is presented for assessing the performance of the different methods.
Terminology
LiDAR: Light detection and ranging, referred to in this invention as an active remote sensing device that emits lasers.
Roadside perception unit: Sensors installed at the roadside, including but not limited to LiDAR and cameras.
Onboard perception unit: Sensors installed onboard, including but not limited to LiDAR and cameras.
Target vehicle: The vehicle requesting data support from roadside perception units.
Onboard LiDAR: The LiDAR installed onboard the target vehicle, part of the onboard perception unit.
Roadside LiDAR: The LiDAR installed at the roadside, part of the roadside perception unit.
RSU: Roadside unit, communicating with onboard units.
OBU: Onboard unit, communicating with roadside units.
V2V: Vehicle-to-vehicle communication, enabling wireless data exchange between vehicles, with or without support from base stations.
V2X: Vehicle-to-everything communication, enabling wireless data exchange between vehicles, infrastructure, pedestrians, etc.
Point cloud: A dataset under a 3-dimensional Cartesian coordinate system, including x, y, z coordinates, colour, intensity, timestamps, etc., formatted as a matrix.
Skeleton point: Critical points in a point cloud.
Point cloud alignment: The process of calculating the spatial transformation, including the rotation matrix and the translation vector between point clouds under different coordinate systems.
Onboard LiDAR point cloud: The point cloud collected by onboard LiDAR.
Roadside LiDAR point cloud: The point cloud collected by roadside LiDAR.
Scenario point cloud: The point cloud of the traffic scenario, specifically referred to as the point cloud collected by roadside LiDAR.
Safety field: A virtual representation of the space around a static or dynamic object that is used to evaluate the driving risk level.
Safety field source: Static or dynamic objects impacting driving risk evaluation.
Safety field strength vector: The vectorized driving risk quantification at a specific location in the safety field.
Safety field strength: The norm of the safety field strength vector.
Safety field strength threshold: Manually configured threshold as a tolerable driving risk indicator.
Danger zone: The space where the safety field strength exceeds the safety field strength threshold.
Hazardous object: The target object inside danger zones.
Voxel: Volume element, which can be represented by stereo rendering or by extracting polygonal isosurfaces with a given threshold contour. It is the smallest unit of digital data in three-dimensional spatial segmentation.
Segmentation: An alternative for data partitioning, separating the part of point cloud belonging to target and non-target objects.
Sampling: An alternative for data partitioning, randomly selecting points from the point cloud belonging to target and non-target objects.
Convolution: A mathematical operation that combines two matrices.
CNN: Convolutional neural network, a type of feedforward neural network with a deep structure that includes convolutional computation; a representative algorithm of deep learning.
MLP: Multilayer perceptron, also known as an artificial neural network, composed of multiple hidden layers in addition to the input and output layers. The simplest MLP contains only one hidden layer, i.e., a three-layer structure.
The mathematical symbols and their meanings are summarized in the table below.
j: Target object, at the centroid of which the safety field is evaluated
Es: Safety field strength vector at a specific position
a: Static safety field source
a1: Lane markings, one case of static safety field sources
ER: Field strength vector generated by a static safety field source
LTa: Lane marking type correction factor for static safety field source a
Ra: Road condition correction factor for static safety field source a
k1: Distance correction factor for static safety field sources
D: Lane width
d: Target vehicle width
raj: Displacement vector from static field source a to target object j
xj: X-coordinate of the centroid of target object j
yj: Y-coordinate of the centroid of target object j
xa: X-coordinate of the static field source a
ya: Y-coordinate of the static field source a
b: Dynamic field source
Ev: Dynamic safety field strength vector
k2: Distance correction factor for dynamic safety field sources
k3: Velocity correction factor for dynamic safety field sources
G: Normalization factor for dynamic safety field sources
Rb: Road condition correction factor for dynamic safety field source b
Tbj: Field source type correction factor for dynamic safety field source b on target object j
vbj: Relative velocity from dynamic safety field source b to target object j
rbj: Displacement vector from dynamic safety field source b to target object j
θ: Angle between vbj and rbj
k: Velocity correction factor for danger zone generation
The data flow symbols and their meanings are summarized in the table below.
P1: Scenario point cloud
P11: Point cloud inside object bounding boxes
P12: Point cloud outside object bounding boxes
P2: Segmented point cloud
P3: Point cloud collected by the target vehicle
P4: Fused point cloud
P5: Compressed fused point cloud
M1: Type of static objects
X1: Object bounding box
V1: Static safety field source
V2: Dynamic safety field source
S1: Safety field
S2: Safety field function with undetermined variables, such as the speed of the target vehicle
f1: Sampling weight inside bounding boxes
f2: Sampling weight outside bounding boxes
f3: Safety field threshold
R1: Object detection results
A: Data collection module, with the traffic scenario as input and the scenario point cloud as output
B: Data calculation module, including object detection submodule B1, scene acquisition submodule B2, and safety field calculation submodule B3, with scenario point cloud P1 as input and object bounding boxes X1 and safety field S1 or safety field function S2 as output
C: Data segmentation module, with scenario point cloud P1, object bounding boxes X1, and safety field S1 or safety field function S2 as input, and segmented point cloud P2 as output
D: Data distribution module; the roadside communication unit sends segmented point cloud P2 and safety field S1 or safety field function S2 to the target vehicle
E: Data fusion module, integrating segmented point cloud P2 and the onboard LiDAR point cloud, and obtaining fused point cloud P4 and compressed point cloud P5
F: Performance evaluation module. PV-RCNN is used on the compressed point cloud P5 to obtain the object detection result R1. Based on the object detection result R1, an evaluation system is established and the optimal data segmentation scheme is selected.
The invention may be best understood by reference to the following descriptions.
Figure 1: A Dynamic Segmentation and Fusion Method for Roadside LiDAR Point Cloud Based on the Safety Field Mechanism
Figure 2: Data Acquisition Module Flowchart
Figure 3: Data Calculation Module Flowchart
Figure 4: Illustration of Target Detection Results with Bounding Boxes
Figure 5: Illustration of the Distribution of Two Types of Safety Fields
(a) Distribution of the Stationary Safety Field
(b) Distribution of the Moving Safety Field
(c) Calculation Explanation of the Stationary Safety Field
Figure 6: Illustration of the Safety Field Distribution
Figure 7: XoY Plane Projection of the Safety Field
Figure 8: Data Segmentation Module Flowchart
Figure 9: Data Publication Module Flowchart
Figure 10: Data Fusion Module Flowchart
Figure 11: Scheme Evaluation Module Flowchart
Figure 12: Schematic Representation of the Scheme Evaluation Reference System
Figure 13: Scheme Variation Flowchart (A, B, C)
Figure 14: Scheme Variation Flowchart (A, B, C, D)
Figure 15: Scheme Variation Flowchart (A, B, C, D, E)
Implementations
Below is a detailed description of the invention, combined with the accompanying figures and specific implementation methods.
A LiDAR point cloud segmentation method based on the driving safety field mechanism is presented. The flowchart is illustrated in Figure 1, comprising six modules: data acquisition module A, data computation module B, data segmentation module C, data publishing module D, data fusion module E, and method evaluation module F. Two specific implementation examples are introduced: A1, B, C1; and A2, B, C3, D2, E, F.
1. A1, B, C1
The flowchart is illustrated in Figure 13.
A. Data Collection
A1: Construct the scene's point cloud solely with the roadside perception unit's roadside LiDAR. Subsequent processes, such as building the safety field and numerical calculations, exclusively use the point cloud from the roadside LiDAR.
B. Data Calculation
The data calculation module comprises the target detection submodule and the safety field calculation submodule, as illustrated in Figure 3.
B1: Target Detection Submodule. In this submodule, the point cloud obtained in step (1) undergoes 3D object detection using deep learning, specifically the PV-RCNN algorithm. The input is the scenario point cloud, and the output is the result of object detection. Since the data source is LiDAR point cloud, the deployment position of the LiDAR determines the size and features of the scenario point cloud. The bounding boxes represent the boundaries of each target in the scene, with attributes such as position, length, height, width, and yaw angle, as shown in Figure 4.
B2: Scene Acquisition Submodule. This submodule is designed to obtain features and information from the scene in advance of the target detection submodule. This facilitates better object detection and subsequent safety field calculation. There are multiple alternative approaches for this submodule:
B21: By incorporating a camera sensor in the roadside perception unit, RGB information of the scene is captured, along with corresponding horizontal and vertical boundaries. This information helps in determining the type of objects and assists in identifying static objects.
B22: Prior to the automated processing of the target detection submodule, an artificial judgment process is introduced. Trained personnel manually calibrate static objects in the traffic scene to achieve the goal of identifying static objects.
B23: By utilizing existing high-precision maps and locating the scene based on the coordinate system, the submodule identifies static object types using lane-level information from the high-precision map.
B3: Safety Field Calculation Submodule. The inputs to this submodule are the types of static objects and the bounding boxes obtained from target detection. Drawing inspiration from field theory methods in physics, such as gravity fields and magnetic fields, all potential risk-inducing elements in the traffic environment are treated as sources of danger. The safety field strength can be understood as the risk coefficient at a certain distance from the danger source. The closer the distance to the danger centre, the higher the likelihood of an accident. When the distance approaches zero, it can be considered that a collision has occurred between the target object and the danger source, indicating that a traffic accident has taken place.
The safety field model consists of a static safety field and a dynamic safety field, i.e., Safety field = Static safety field + Dynamic safety field.
    Es = ER + Ev

Es represents the safety field strength vector. ER represents the static safety field strength vector. Ev represents the dynamic safety field strength vector. The safety field model can be expressed as the potential driving risk caused by traffic factors in actual scenarios. Risk is measured through the probability of accidents and the severity of accidents.
The safety field is categorized based on the source of generation, namely the static safety field source and the dynamic safety field source:
1) Static Safety field: The source is objects in the traffic environment that are relatively stationary. This includes road markings such as lane dividers and rigid separation facilities like central dividers. These objects have two characteristics: (1) Without considering road construction, they are relatively stationary compared to the target object. (2) Except for some rigid separation facilities, these objects, based on their legal effects, cause drivers to intentionally stay away from their positions. However, even if drivers actually cross lane lines, a traffic accident may not necessarily occur immediately.
For this type of object, based on the above analysis, it is assumed that the potential field formed by the static safety field source a at the position (xa, ya) has a field strength vector ER for the target object j at the position (xj, yj):

    ER = LTa * Ra / (|raj| - fd)^k1 * (raj / |raj|)

    fd = (D + d)/2 if a is a lane marking instance; fd = 0 if a is a physical separator instance

    raj = (xj - xa, yj - ya)

LTa is the risk coefficient for different lane markings of type a1. According to traffic regulations, general rigid partition facilities > lane divider lines that cannot be crossed > lane divider lines that can be crossed. Common facility and lane line parameter values are as follows: Central median with guardrail or green belt type: 20-25; Sidewalk curbs: 18-20; Yellow solid line or dashed line: 15-18; White solid line: 10-15; White dashed line: 0-5.
Ra is a positive constant representing the road condition influencing factor at the position (xa, ya). The choice of a fixed value for a road segment is typically based on factors such as the road surface friction coefficient, road gradient, road curvature, and visibility in the vicinity of source a. In practical use, a common range of values is within the interval [0.5, 1.5].
fd is the distance influencing factor for different types of lane markings a1. D is the lane width. d is the width of the target object j, generally set as the width of the bounding box of the target object j.
raj is the distance vector between the lane marking a1 and the target object j. In this case, (xj, yj) is the centroid of the target object j, and (xa, ya) represents the point where the perpendicular line from (xj, yj) intersects with the lane marking a1. k1 is a positive constant representing the distance amplification factor, because collision risk typically does not vary linearly with the distance between two objects. The typical range for the value of k1 is 0.5 to 1.5. raj/|raj| represents the direction of the field strength. A higher value of ER indicates a higher risk imposed by static safety field source a on the target object j. Static safety field sources include but are not limited to lane markings. The field strength distribution results are as shown in Figure 5 (a).
2) Dynamic Safety field: The source consists of objects in the traffic environment that are relatively in motion, primarily including vehicles, pedestrians, and obstacle facilities. These objects also have two characteristics: (1) They have relative velocities with respect to moving target objects. (2) Collisions among these objects are strictly prohibited, as they will inevitably lead to serious traffic accidents. For this type of object, based on the above analysis, it is assumed that the potential field formed by the dynamic safety field source b at the position (xb, yb) has a field strength vector Ev for the target object j at the position (xj, yj):
    Ev = G * Rb * Tbj / |rbj|^k2 * exp(k3 * vbj * cos θ) * (rbj / |rbj|)

    rbj = (xj - xb, yj - yb)

The x-axis is located along the road line, and the y-axis is perpendicular to the road line.
rbj represents the distance vector between dynamic safety field source b and target object j. k2, k3 and G are constants greater than 0. Typically, the range for the value of k2 is 0.5 to 1.5, k3 falls within the range of 0.05 to 0.2, and G is usually set to 0.001.
Rb has the same meaning as Ra.
Tbj is the type correction factor between dynamic safety field source b and target object j. The danger coefficients for different types of collisions, such as car-to-car and car-to-human collisions, vary. Commonly used modification parameter values for different collision types are as follows: Car-to-Car Frontal Collision: 2.5 to 3; Car-to-Car Rear-end Collision: 1 to 1.5; Human-to-Car Collision: 2 to 2.5; Car-to-Barrier (Roadblock) Collision: 1.5 to 2.
These parameter ranges represent the typical values used to adjust the danger coefficients for different collision scenarios.
vbj is the relative velocity between dynamic safety field source b and target object j, and evaluates to the following equation. θ is the angle between vbj and the rbj direction, with the clockwise direction considered positive.
    vbj = vb - vj

If target object j is static, then vj = 0 and vbj = vb. If target object j moves, then vbj = vb - vj. A higher value of Ev indicates a higher risk imposed by dynamic safety field source b on target object j.
Based on the above method for calculating driving safety risks, various objects on the road can be analysed for their risk levels concerning a specific object. Given the comprehensiveness of data collection and the advantages in target localization, the invention selects point cloud obtained from roadside LiDAR as the data source, with point cloud scan results in unobstructed roadside scans serving as the calculation carrier.
For a particular object in the scene, the risk calculation process for each object is as follows.
1) Through preliminary data collection, static scene data for point cloud scan results is constructed.
a. Collect multiple frames of point cloud and divide each frame into n statistical spaces. The value of n can range from 50 to 100, depending on the scanning range of the LiDAR.
b. Starting from the initial frame, sequentially superimpose the next frame of point cloud. During the superimposition, manually remove dynamic objects as much as possible and ensure that the point cloud does not contain occluded regions.
c. During each superimposition, monitor the point cloud density in each statistical space. If the density exceeds a threshold a (which is related to point cloud density and is typically set to 1000), perform random sampling of the point cloud within that space to maintain its density. This process results in an ideal global static point cloud background.
Manually separate static safety field sources in the static scene, including lane markers, centre dividers, roadside areas, etc. This is achieved through random sampling and fitting of linear plane equations for each static safety field source. Generally, it is required to uniformly collect 100 or more points along the visual line direction, and the collected points should not deviate too far from the target.
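Step 1) can be sketched as follows: frames are superimposed while the point density of each statistical space is capped by random sampling, and a line is then fitted by least squares to points sampled along a static safety field source. The grid cell size, the density cap, and the sampled points are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def accumulate_background(frames, cell=5.0, density_cap=1000):
        """Superimpose frames; randomly thin any grid cell exceeding density_cap points."""
        background = np.empty((0, 3))
        for frame in frames:
            background = np.vstack([background, np.asarray(frame, float)])
            cells = np.floor(background[:, :2] / cell).astype(int)       # statistical spaces
            _, inv, counts = np.unique(cells, axis=0, return_inverse=True, return_counts=True)
            inv = inv.reshape(-1)                                        # robust across numpy versions
            keep = np.ones(len(background), dtype=bool)
            for c in np.flatnonzero(counts > density_cap):
                idx = np.flatnonzero(inv == c)
                drop = rng.choice(idx, size=len(idx) - density_cap, replace=False)
                keep[drop] = False
            background = background[keep]
        return background

    frames = [np.random.rand(3000, 3) * 40 for _ in range(5)]            # toy frames
    static_background = accumulate_background(frames, density_cap=150)

    # Fit y = m*x + b to points sampled along a lane marking (least-squares line fit).
    lane_points = np.column_stack([np.linspace(0, 40, 120),
                                   3.5 + 0.02 * rng.standard_normal(120)])
    m, b = np.polyfit(lane_points[:, 0], lane_points[:, 1], 1)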
2) Select a specific frame of data as the calculation moment and extract the previous frame's data as a reference for object movement speed. Utilizing a 3D object detection and tracking algorithm based on point cloud, identify all target objects (usually vehicles, pedestrians, etc.) in both the calculation frame and the previous frame, establishing correspondences between objects in the two frames. Calculate the object's movement speed using the annotation box of the target object and the LiDAR's scanning frame rate. For newly added objects without previous frame data for speed calculation, consider their speed as the standard speed.
3) Randomly select a target object for risk calculation. Incorporate the relative positions, types, and other attributes of the target object and the other target objects, as well as parameters such as the distance between static safety field sources and the target object from step 1. Include traffic conditions and other environmental factors in the safety field calculation mechanism. Set the speed of the receiving object (vehicle) in the relative velocity as an unknown parameter. Extend the calculation process backward, and the relative velocity becomes an expression with unknown parameters. Obtain the safety risk for each object in the scanning range concerning the calculated target, thus forming a driving safety risk distribution centred around the calculated target.
C. Data Segmentation
C1: Sampling Approach. In this approach, the data collected by the data collection sub-module, namely the scenario point cloud P1, the target detection bounding boxes X1, and the safety field data S1, are used as sub-module inputs. Firstly, through a conditional check, the points in the scenario point cloud P1 are evaluated to determine whether they are inside the bounding boxes X1, resulting in bounded point cloud P11 and unbounded point cloud P12. Then, hyperparameters f1 and f2 are set to randomly sample the data P11 and P12, resulting in segmented point cloud P2.
2. A2, B, C3, D2, E, F
The flowchart is illustrated in Figure 13.
A. Data Collection
A2: The second approach entails LiDAR scanning conducted by both the roadside LiDAR in the roadside perception unit and LiDAR mounted on pre-defined vehicles within the scene. This is done to construct the scene's point cloud. In this case, the subsequent processes for building the safety field and numerical calculations employ both the point cloud from the roadside LiDAR and the point cloud from the LiDAR on the designated vehicles. This approach facilitates mutual validation and cross-checking.
B. Data Calculation
The data calculation module comprises the target detection submodule and the safety field calculation submodule, as illustrated in Figure 3.
B1: Target Detection Submodule. In this submodule, the point cloud obtained in step (1) undergoes 3D object detection using deep learning, specifically the PV-RCNN algorithm. The input is the scenario point cloud, and the output is the result of object detection. Since the data source is LiDAR point cloud, the deployment position of the LiDAR determines the size and features of the scenario point cloud. The bounding boxes represent the boundaries of each target in the scene, with attributes such as position, length, height, width, and yaw angle, as shown in Figure 4.
B2: Scene Acquisition Submodule. This submodule is designed to obtain features and information from the scene in advance of the target detection submodule. This facilitates better object detection and subsequent safety field calculation. There are multiple alternative approaches for this submodule: B21: By incorporating a camera sensor in the roadside perception unit, RGB information of the scene is captured, along with corresponding horizontal and vertical boundaries. This information helps in determining the type of objects and assists in identifying static objects.
B22: Prior to the automated processing of the target detection submodule, an artificial judgment process is introduced. Trained personnel manually calibrate static objects in the traffic scene to achieve the goal of identifying static objects.
B23: By utilizing existing high-precision maps and locating the scene based on the coordinate system, the submodule identifies static object types using lane-level information from the high-precision map.
B3: Safety Field Calculation Submodule. The inputs to this submodule are the types of static objects and the bounding boxes obtained from target detection. Drawing inspiration from field theory methods in physics, such as gravity fields and magnetic fields, all potential risk-inducing elements in the traffic environment are treated as sources of danger. The safety field strength can be understood as the risk coefficient at a certain distance from the danger source. The closer the distance to the danger centre, the higher the likelihood of an accident. When the distance approaches zero, it can be considered that a collision has occurred between the target object and the danger source, indicating that a traffic accident has taken place.
The safety field model consists of a static safety field and a dynamic safety field, i.e., Safety field = Static Safety field + Dynamic Safety field.
    Es = ER + Ev

Es represents the safety field strength vector. ER represents the static safety field strength vector. Ev represents the dynamic safety field strength vector. The safety field model can be expressed as the potential driving risk caused by traffic factors in actual scenarios. Risk is measured through the probability of accidents and the severity of accidents.
The safety field is categorized based on the source of generation, namely the static safety field source and the dynamic safety field source:
1) Static Safety field: The source is objects in the traffic environment that are relatively stationary. This includes road markings such as lane dividers and rigid separation facilities like central dividers. These objects have two characteristics: (1) Without considering road construction, they are relatively stationary compared to the target object. (2) Except for some rigid separation facilities, these objects, based on their legal effects, cause drivers to intentionally stay away from their positions. However, even if drivers actually cross lane lines, a traffic accident may not necessarily occur immediately.
For this type of object, based on the above analysis, it is assumed that the potential field formed by the static safety field source a at the position (xa, ya) has a field strength vector ER for the target object j at the position (xj, yj):
    ER = LTa * Ra / (|raj| - fd)^k1 * (raj / |raj|)

    fd = (D + d)/2 if a is a lane marking instance; fd = 0 if a is a physical separator instance

    raj = (xj - xa, yj - ya)

LTa is the risk coefficient for different lane markings of type a1. According to traffic regulations, general rigid partition facilities > lane divider lines that cannot be crossed > lane divider lines that can be crossed. Common facility and lane line parameter values are as follows: Central median with guardrail or green belt type: 20-25; Sidewalk curbs: 18-20; Yellow solid line or dashed line: 15-18; White solid line: 10-15; White dashed line: 0-5.
Ra is a positive constant representing the road condition influencing factor at the position (xa, ya). The choice of a fixed value for a road segment is typically based on factors such as the road surface friction coefficient, road gradient, road curvature, and visibility in the vicinity of source a. In practical use, a common range of values is within the interval [0.5, 1.5].
fd is the distance influencing factor for different types of lane markings a1. D is the lane width. d is the width of the target object j, generally set as the width of the bounding box of the target object j.
raj is the distance vector between the lane marking a1 and the target object j. In this case, (xj, yj) is the centroid of the target object j, and (xa, ya) represents the point where the perpendicular line from (xj, yj) intersects with the lane marking a1. k1 is a positive constant representing the distance amplification factor, because collision risk typically does not vary linearly with the distance between two objects. The typical range for the value of k1 is 0.5 to 1.5. raj/|raj| represents the direction of the field strength.
A larger E_R value indicates a higher risk imposed by the static safety field source a on the target object j. Static safety field sources include but are not limited to lane markings. The field strength distribution results are shown in Figure 5 (a).
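To make the static-field computation concrete, the following Python sketch evaluates E_R for a single static source. The split of the distance-influencing factor f_d into a lane-marking form and a physical-separator form follows the description above, but the exact normalisation used here, as well as the function name and example parameter values, are illustrative assumptions rather than the patented formula.

```python
import numpy as np

def static_field_strength(source_xy, target_xy, lt_a, r_a, k1,
                          lane_width_D, target_width_d,
                          is_physical_separator=False):
    """Field strength vector E_R exerted by static source a on target j.

    source_xy: foot of the perpendicular from the target centroid onto the
               lane marking / separator, i.e. (x_a, y_a).
    target_xy: centroid of target object j, i.e. (x_j, y_j).
    lt_a:      risk coefficient of the marking/separator type
               (e.g. 10-15 for a white solid line).
    r_a:       road condition factor, typically in [0.5, 1.5].
    k1:        distance amplification factor, typically in [0.5, 1.5].
    """
    r_aj = np.asarray(target_xy, float) - np.asarray(source_xy, float)
    dist = np.linalg.norm(r_aj)
    if dist == 0.0:
        raise ValueError("target coincides with the danger source")

    if is_physical_separator:
        # Assumed separator form: magnitude grows sharply as the gap between
        # the target edge and the separator closes.
        f_d = (1.0 / max(dist - target_width_d / 2.0, 1e-6)) ** k1
    else:
        # Assumed lane-marking form: distance normalised by half the
        # lane-plus-vehicle clearance (D + d) / 2.
        f_d = ((lane_width_D + target_width_d) / 2.0 / dist) ** k1

    magnitude = lt_a * r_a * f_d
    return magnitude * r_aj / dist   # direction given by r_aj / |r_aj|

# Example: a white solid line 1.2 m to the side of a 1.8 m-wide car.
E_R = static_field_strength(source_xy=(0.0, 0.0), target_xy=(0.0, 1.2),
                            lt_a=12.0, r_a=1.0, k1=1.0,
                            lane_width_D=3.5, target_width_d=1.8)
print(E_R)
```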
2) Dynamic safety field: The source consists of objects in the traffic environment that are relatively in motion, primarily including vehicles, pedestrians, and obstacle facilities. These objects also have two characteristics: (1) They have relative velocities with respect to moving target objects. (2) Collisions among these objects are strictly prohibited, as they inevitably lead to serious traffic accidents. For this type of object, based on the above analysis, it is assumed that the potential field formed by the dynamic safety field source b at the position (x_b, y_b) has a field strength vector E_V for the target object j at the position (x_j, y_j).
E_V = (G * R_b * T_bj / |r_bj|^k2) * exp(k3 * v_bj * cos θ) * r_bj / |r_bj|
r_bj = (x_j - x_b, y_j - y_b)
The x-axis is located along the road line, and the y-axis is perpendicular to the road line.
r_bj represents the distance vector between dynamic safety field source b and target object j. k2, k3 and G are constants greater than 0. Typically, k2 ranges from 0.5 to 1.5, k3 falls within the range of 0.05 to 0.2, and G is usually set to 0.001.
R_b has the same meaning as R_a. T_bj is the type correction factor between dynamic safety field source b and target object j. The danger coefficients for different types of collisions, such as car-to-car and car-to-human collisions, vary. Commonly used correction parameter values for different collision types are as follows: car-to-car frontal collision: 2.5 to 3; car-to-car rear-end collision: 1 to 1.5; human-to-car collision: 2 to 2.5; car-to-barrier (roadblock) collision: 1.5 to 2.
These parameter ranges represent the typical values used to adjust the danger coefficients for different collision scenarios.
v_bj is the relative velocity between dynamic safety field source b and target object j, evaluated as shown below. θ is the angle between v_bj and the r_bj direction, with the clockwise direction considered positive.
v_bj = v_b - v_j. If target object j is static, then v_j = 0 and v_bj = v_b. If target object j moves, then v_bj = v_b - v_j. A higher value of E_V indicates a higher risk imposed by dynamic safety field source b on target object j.
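A corresponding minimal sketch for the dynamic field, assuming the form E_V = G * R_b * T_bj / |r_bj|^k2 * exp(k3 * v_bj * cos θ) * r_bj / |r_bj| given above; the default constants are taken from the quoted ranges, while the function name and example values are illustrative.

```python
import numpy as np

def dynamic_field_strength(source_xy, target_xy, v_source, v_target,
                           t_bj, r_b, k2=1.0, k3=0.1, G=0.001):
    """Field strength vector E_V exerted by dynamic source b on target j.

    v_source, v_target: 2-D velocity vectors of source b and target j.
    t_bj: type correction factor (e.g. ~2.5-3 for a car-to-car frontal case).
    r_b:  road condition factor, same meaning as R_a.
    """
    r_bj = np.asarray(target_xy, float) - np.asarray(source_xy, float)
    dist = np.linalg.norm(r_bj)
    if dist == 0.0:
        raise ValueError("target coincides with the danger source")

    v_bj = np.asarray(v_source, float) - np.asarray(v_target, float)  # relative velocity
    speed = np.linalg.norm(v_bj)
    # cos(theta): angle between the relative velocity and the source-to-target direction.
    cos_theta = 0.0 if speed == 0.0 else float(np.dot(v_bj, r_bj) / (speed * dist))

    magnitude = G * r_b * t_bj / dist ** k2 * np.exp(k3 * speed * cos_theta)
    return magnitude * r_bj / dist

# Example: a car at the origin doing 15 m/s toward a stationary target 20 m ahead.
E_V = dynamic_field_strength(source_xy=(0.0, 0.0), target_xy=(20.0, 0.0),
                             v_source=(15.0, 0.0), v_target=(0.0, 0.0),
                             t_bj=2.0, r_b=1.0)
print(E_V)
```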
Based on the above method for calculating driving safety risks, the risk that each object on the road poses to a specific object can be analysed. Given the comprehensiveness of data collection and the advantages in target localization, the invention selects the point cloud obtained from roadside LiDAR as the data source, with the unobstructed roadside scan results serving as the calculation carrier.
For a particular object in the scene, the risk calculation process is as follows.
1) Through preliminary data collection, static scene data for point cloud scan results is constructed.
a. Collect multiple frames of point cloud and divide each frame into n statistical spaces. The value of n can range from 50 to 100, depending on the scanning range of the LiDAR.
b. Starting from the initial frame, sequentially superimpose the next frame of point cloud. During the superimposition, manually remove dynamic objects as much as possible and ensure that the point cloud does not contain occluded regions.
c. During each superimposition, monitor the point cloud density in each statistical space. If the density exceeds a threshold α (which is related to point cloud density and is typically set to 1000), randomly sample the point cloud within that space to cap its density. This process results in an ideal global static point cloud background (a sketch of this accumulation follows step 3 below).
Manually separate static safety field sources in the static scene, including lane markings, central dividers, roadside areas, etc. This is achieved through random sampling and fitting of linear plane equations for each static safety field source. Generally, 100 or more points should be collected uniformly along the visual line direction, and the collected points should not deviate too far from the target.
2) Select a specific frame of data as the calculation moment and extract the previous frame's data as a reference for object movement speed. Utilizing a point-cloud-based 3D object detection and tracking algorithm, identify all target objects (usually vehicles, pedestrians, etc.) in both the calculation frame and the previous frame, establishing correspondences between objects in the two frames. Calculate each object's movement speed using the annotation box of the target object and the LiDAR's scanning frame rate. For newly added objects without previous-frame data for speed calculation, consider their speed as the standard speed.
3) Randomly select a target object for risk calculation. Incorporate the relative positions, types, and other attributes of the target object and the other target objects, as well as parameters such as the distance between the static safety field sources from step 1 and the target object. Include traffic conditions and other environmental factors in the safety field calculation mechanism. Set the speed of the receiving object (vehicle) in the relative velocity as an unknown parameter; carrying this unknown through the calculation, the relative velocity becomes an expression with unknown parameters. Obtain the safety risk for each object in the scanning range concerning the calculated target, thus forming a driving safety risk distribution centred around the calculated target.
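As an illustration of the background construction in step 1, the following sketch superimposes frames and caps the density of each statistical space by random sampling. It assumes the statistical spaces are equal slices along the x axis and that the threshold α applies per space; the function name, slicing choice, and example data are not taken from the patent.

```python
import numpy as np

def accumulate_static_background(frames, n_spaces=64, alpha=1000, seed=0):
    """Superimpose point-cloud frames into a static background, capping the
    point density of each statistical space at alpha points by random sampling.

    frames: iterable of (N_i, 3) arrays with dynamic objects already removed.
    """
    rng = np.random.default_rng(seed)
    background = np.empty((0, 3))
    for frame in frames:
        background = np.vstack([background, np.asarray(frame, float)])

        # Assign every point to one of n_spaces statistical spaces; here the
        # spaces are assumed to be equal slices along the x axis.
        x = background[:, 0]
        lo, hi = x.min(), x.max()
        span = (hi - lo) or 1.0
        space_ids = np.minimum((x - lo) / span * n_spaces, n_spaces - 1).astype(int)

        # Randomly downsample any space whose density exceeds the threshold alpha.
        keep = np.ones(len(background), dtype=bool)
        for sid in np.unique(space_ids):
            idx = np.flatnonzero(space_ids == sid)
            if len(idx) > alpha:
                keep[idx] = False
                keep[rng.choice(idx, size=alpha, replace=False)] = True
        background = background[keep]
    return background

# Example with two synthetic frames of "static" points.
rng = np.random.default_rng(1)
frames = [rng.uniform(-50, 50, size=(20000, 3)) for _ in range(2)]
print(accumulate_static_background(frames).shape)
```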
C. Data Segmentation. C3: Sampling Approach Based on Safety Risk Scenarios. In this approach, the data collected by the data collection sub-module, scenario point cloud P1, data from the target detection bounding box X1, and safety risk scenario data S1 are used as sub-module inputs.
After a conditional check, the scenario point cloud P1 is divided into bounded point cloud P11 and unbounded point cloud P12. Then, a safety risk scenario threshold value f3 is set, and point clouds P11 and P12 are sampled based on this threshold, resulting in segmented point cloud P2.
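A possible reading of this sampling approach, sketched in Python below: points are first split by bounding-box membership, then kept at a high rate when they belong to a box whose safety risk meets the threshold f3 and at a low rate otherwise. The per-box risk scores, keep rates, and axis-aligned box format are illustrative assumptions.

```python
import numpy as np

def segment_by_risk(points, boxes, risks, f3=0.5,
                    keep_high=1.0, keep_low=0.1, seed=0):
    """Split scenario point cloud P1 into bounded (P11) and unbounded (P12)
    parts using axis-aligned boxes, then sample by the safety-risk threshold f3:
    points in boxes whose risk >= f3 are kept at rate keep_high, all other
    points at rate keep_low.

    points: (N, 3) array.
    boxes:  (M, 6) array of (xmin, ymin, zmin, xmax, ymax, zmax).
    risks:  (M,) per-box safety risk scores.
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)

    in_risky_box = np.zeros(len(pts), dtype=bool)
    for (xmin, ymin, zmin, xmax, ymax, zmax), risk in zip(boxes, risks):
        inside = np.all((pts >= (xmin, ymin, zmin)) &
                        (pts <= (xmax, ymax, zmax)), axis=1)
        if risk >= f3:
            in_risky_box |= inside

    rate = np.where(in_risky_box, keep_high, keep_low)  # per-point keep probability
    keep = rng.random(len(pts)) < rate
    return pts[keep]

# Example: 10k random points, one high-risk and one low-risk box.
pts = np.random.default_rng(2).uniform(-20, 20, size=(10000, 3))
boxes = np.array([[0, 0, -2, 5, 5, 2], [-10, -10, -2, -5, -5, 2]])
print(segment_by_risk(pts, boxes, risks=np.array([0.9, 0.2])).shape)
```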
D. Data Distribution. D2: The workflow of the data publishing module is depicted in Figure 9. Based on the results of data segmentation, the data is compressed by the roadside perception unit. Subsequently, a data transmission channel is established between the roadside perception unit and the receiving object's vehicle. The receiving object is specified by the following criteria: at a given timestamp, a vehicle with a particular number is at a particular position in the scene.
If the receiving object's vehicle is in motion, the segmented point cloud, the static safety field, and the dynamic (semi-finished) safety field data are published. The receiving vehicle's speed is then substituted to obtain its safety field vector, with the magnitude represented as a numerical value.
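One way to picture the "semi-finished" dynamic safety field published to a moving receiver is as a function with the receiver's velocity left as the unknown parameter; the sketch below builds such a function and evaluates it once the speed is known. This factoring is an illustration, not the published message format.

```python
import numpy as np

def make_dynamic_field_function(source_xy, target_xy, v_source,
                                t_bj, r_b, k2=1.0, k3=0.1, G=0.001):
    """Return the dynamic field as a function of the receiver's velocity v_j.

    Everything that does not depend on the receiving vehicle's speed is fixed
    here (the "semi-finished" field); the receiver later substitutes its own
    velocity to obtain the final field strength vector.
    """
    r_bj = np.asarray(target_xy, float) - np.asarray(source_xy, float)
    dist = np.linalg.norm(r_bj)
    base = G * r_b * t_bj / dist ** k2          # speed-independent part

    def field(v_target):
        v_bj = np.asarray(v_source, float) - np.asarray(v_target, float)
        speed = np.linalg.norm(v_bj)
        cos_theta = 0.0 if speed == 0.0 else float(np.dot(v_bj, r_bj) / (speed * dist))
        return base * np.exp(k3 * speed * cos_theta) * r_bj / dist

    return field

# Roadside unit publishes the function; the moving receiver substitutes its speed.
field_fn = make_dynamic_field_function(source_xy=(0.0, 0.0), target_xy=(20.0, 0.0),
                                       v_source=(15.0, 0.0), t_bj=2.0, r_b=1.0)
print(field_fn(v_target=(5.0, 0.0)))   # receiver moving at 5 m/s along x
```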
E. Data Fusion. The segmented data is fused with the point cloud scanned by the receiving vehicle's onboard LiDAR. This involves designing a point cloud coordinate transformation matrix to register the high-risk data points between the vehicle and the roadside, followed by compressing the fused point cloud.
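A minimal fusion sketch, assuming the roadside-to-vehicle registration is already available as a 4x4 homogeneous transformation matrix; estimating that matrix (for example by calibration or ICP) and the subsequent compression step are outside the scope of this sketch.

```python
import numpy as np

def fuse_point_clouds(vehicle_pts, roadside_pts, T_roadside_to_vehicle):
    """Transform the (decompressed) roadside point cloud into the vehicle's
    LiDAR frame with a 4x4 homogeneous matrix, then concatenate the two clouds.
    """
    road = np.asarray(roadside_pts, float)
    homog = np.hstack([road, np.ones((len(road), 1))])           # (N, 4)
    road_in_vehicle = (T_roadside_to_vehicle @ homog.T).T[:, :3]  # back to (N, 3)
    return np.vstack([np.asarray(vehicle_pts, float), road_in_vehicle])

# Example: roadside frame rotated 90 degrees about z and offset 30 m along x.
theta = np.pi / 2
T = np.array([[np.cos(theta), -np.sin(theta), 0.0, 30.0],
              [np.sin(theta),  np.cos(theta), 0.0,  0.0],
              [0.0,            0.0,           1.0,  0.0],
              [0.0,            0.0,           0.0,  1.0]])
vehicle = np.random.default_rng(3).uniform(-10, 10, size=(1000, 3))
roadside = np.random.default_rng(4).uniform(-10, 10, size=(500, 3))
print(fuse_point_clouds(vehicle, roadside, T).shape)
```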
F. Performance Evaluation. Experiments are conducted for different data segmentation methods. "V" and "V+I" represent the raw point cloud from the vehicle alone and the combined vehicle and roadside point cloud, respectively. "V+I1" and "V+I2" represent the vehicle and roadside point cloud fusion with segmentation and with sampling, respectively. "V+I1+S" and "V+I2+S" represent the vehicle and roadside point cloud fusion with safety-field-based segmentation and safety-field-based sampling, respectively. Finally, an evaluation system is presented for assessing the performance of the different methods.

Claims (1)

  1. A vehicle-road laser radar point cloud dynamic segmentation and fusion method based on a driving safety risk field, comprising the following steps: 1) Data collection: Roadside LiDAR is deployed to scan and obtain a scenario point cloud, which serves as a data source for subsequent steps;
    2) Safety field calculation: A safety field model consists of a static safety field and a dynamic safety field, E_s = E_R + E_V, where E_s represents the safety field strength, E_R represents the static safety field strength, and E_V represents the dynamic safety field strength;
    3) Data segmentation: Based on said scenario point cloud and bounding boxes, design an algorithm to detect whether points in said scenario point cloud are within said bounding boxes, and divide said scenario point cloud into two parts: point cloud inside bounding boxes and point cloud outside bounding boxes; based on said safety field strength calculated in step 2), set a safety field strength threshold, and use a threshold screening method to select objects of higher threat to target vehicles; take said objects of higher threat as centres, and apply data sampling or data segmentation to extract points around said objects of higher threat as a danger zone;
    4) Data distribution: Compress the point cloud inside said danger zone obtained from data segmentation through roadside perception units to obtain a compressed point cloud; establish data transmission channels between said roadside perception units and said target vehicles; determine whether each of said target vehicles is moving; if a target vehicle is stationary, said compressed point cloud and the safety field strength acquired from said data segmentation are directly published; if a target vehicle moves, said compressed point cloud and a safety field function acquired from data segmentation will be published, and then the speed of said target vehicle will be substituted into said safety field function to obtain said safety field strength;
    5) Data fusion: Decompress said compressed point cloud, and fuse the decompressed point cloud with an onboard LiDAR point cloud; design a point cloud coordinate conversion matrix for registration between said point cloud inside danger zones and said onboard LiDAR point cloud;
    6) Performance evaluation: Conduct experiments on different data segmentation methods; V represents the original point cloud of the vehicle without processing; I represents the original point cloud obtained by the unprocessed roadside perception unit; I1 represents the point cloud obtained by segmenting the original point cloud using the data segmentation method used by the roadside perception unit; I2 represents the point cloud obtained by segmenting the original point cloud using sampling in the data segmentation method by the roadside perception unit; I1+S represents the point cloud obtained by segmenting the original point cloud using the safety-field-based segmentation method in the roadside perception unit; I2+S represents the point cloud obtained by segmenting the original point cloud using safety-field-based sampling in the data segmentation method by the roadside perception unit; the detection results of said data segmentation methods are obtained through said experiments and evaluated.
  2. A method according to claim 1,
characterized in that static safety field sources are objects that are stationary in the traffic environment, including lane and other road markings, as well as physical separation facilities such as a central median; it is assumed that the safety field formed by a static safety field source a at the position (x_a, y_a) has a field strength vector E_R for a target object j at the position (x_j, y_j); said strength vector of said static safety field is calculated as follows:
E_R = LT_a * R_a * f_d * r_aj / |r_aj|, with r_aj = (x_j - x_a, y_j - y_a);
LT_a is the risk coefficient for different lane markings of type a;
R_a is a positive constant representing the road condition influencing factor at the position (x_a, y_a);
f_d is the distance influencing factor for different types of lane markings a;
r_aj is the distance vector between the lane marking a and the target object j; in this case, (x_j, y_j) is the centroid of the target object j, and (x_a, y_a) represents the point where the perpendicular line from (x_j, y_j) intersects the lane marking a;
k1 is a positive constant representing the distance amplification factor;
r_aj / |r_aj| represents the direction of the field strength;
a larger E_R value indicates a higher risk imposed by static safety field source a on the target object j; static safety field sources include but are not limited to lane markings.
  3. A method according to claim 1, characterized in that dynamic safety field sources are objects that are moving in the traffic environment, including vehicles, pedestrians, and roadblock facilities; the strength vector of said dynamic safety field can be calculated as follows:
E_V = (G * R_b * T_bj / |r_bj|^k2) * exp(k3 * v_bj * cos θ) * r_bj / |r_bj|, with r_bj = (x_j - x_b, y_j - y_b);
the x-axis is located along the road line, and the y-axis is perpendicular to the road line;
r_bj represents the distance vector between dynamic risk source b and target object j;
k2, k3 and G are constants greater than 0;
R_b has the same meaning as R_a;
T_bj is the type correction factor between dynamic risk source b and target object j;
v_bj is the relative velocity between dynamic risk source b and target object j;
θ is the angle between v_bj and the r_bj direction, with the clockwise direction considered positive.
    4. A method according to claim 1, characterized in that for a static or dynamic object in said traffic scene, said safety field strength calculation process is as follows:
4.1) Through preliminary data collection, static scene data for point cloud scan results is constructed; manually separate static safety field sources in the static scene, including lane dividers, central dividers, and roadside areas; linear equations for each static safety field source are fitted by random sampling;
4.2) Select a specific frame of data as the calculation moment and extract the previous frame's data as a reference for object movement speed; utilizing a 3D object detection and tracking algorithm based on point cloud, identify all target objects in both the calculation frame and the previous frame, establishing correspondences between objects in the two frames; calculate the object's movement speed using the annotation box of the target object and the LiDAR's scanning frame rate; for newly added objects without previous frame data for speed calculation, consider their speed as the standard speed;
4.3) Randomly select a target object for risk calculation; incorporate relative positions, types, and other attributes of the target object and other target objects, as well as parameters such as the distance between static safety field sources and the target object; include traffic conditions and other environmental factors in the safety field calculation mechanism; set the speed of the receiving object in the relative velocity as an unknown parameter; carrying this unknown through the calculation, the relative velocity becomes an expression with unknown parameters; obtain the safety risk for each object in the scanning range concerning the calculated target, thus forming a driving safety risk distribution centred around the calculated target.
    5. A method according to claim 1, characterized in that the method of data segmentation to extract points as danger zones is as follows:
5.1) For a static safety field source, the danger zone is a region centred around the linear equation of the safety field source, with a width of d/2 on each side, where d is the width of the calculated object;
5.2) For a dynamic safety field source, the danger zone is a rectangular region centred around the centroid of the hazardous target, with a width of 1.5d and a length of (0.5l + 0.5lk), where d is the width, l is the length, and k is a speed correction factor greater than or equal to 1;
5.3) Danger zones are extracted based on the risk coefficient of the danger source, and overlapping regions are extracted only once;
5.4) The extracted total danger zone results can be provided as auxiliary perception data to a target vehicle.
    6. A method according to claim 1, characterized in that said safety field calculation involves the following sub-modules:
6.1) Object detection sub-module;
6.2) Scenario acquisition sub-module;
    6.3) Safety field calculation sub-module.
    7. A method according to claim 6, characterized in that said scenario acquisition sub-module uses one of the following steps.
    6.2.1) By adding a camera sensor to the roadside perception unit and utilizing the scene RGB information collected by the camera sensor, as well as the corresponding horizontal and vertical boundaries, determine the type of object and assist in identifying the type of static object; 6.2.2) Before the automated processing of the target detection submodule, a manual judgment process is added, and the static objects in the traffic scene are calibrated by professionals who have received relevant training, to achieve the purpose of identifying static objects; 6.2.3) Using existing high-precision maps, locate the scene based on the coordinate system, and use lane-level information in the high-precision maps to distinguish the type of static objects.
    8. A method according to claim 1, characterized in that said extraction of the danger range adopts the following data segmentation scheme: In this approach, the data collected by the data collection sub-module, scenario point cloud P1, data from the target detection bounding box X1, and safety risk scenario data S1 are used as sub-module inputs; firstly, through a conditional check, the points in the scenario point cloud P1 are evaluated to determine whether they are inside the bounding box X1, resulting in bounded point cloud P11 and unbounded point cloud P12; then, hyperparameters f1 and f2 are set to randomly sample the data P11 and P12, resulting in segmented point cloud P2.
    9. A method according to claim 1, characterized in that said extraction of the danger range adopts the following data segmentation scheme: For this approach, the data collected by the data collection sub-module, scenario point cloud P1, data from the target detection bounding box X1, and safety risk scenario data S1 are used as sub-module inputs; following a conditional check, the points in the scenario point cloud P1 are evaluated to determine whether they are inside the bounding box X1, resulting in bounded point cloud P11 and unbounded point cloud P12; the segmented point cloud P2 is obtained by selecting the bounded point cloud P11 and removing the unbounded point cloud P12.
    10. A method according to claim 1, characterized in that said extraction of the danger range adopts the following data segmentation scheme: In this approach, the data collected by the data collection sub-module, scenario point cloud P1, data from the target detection bounding box X1, and safety risk scenario data S1 are used as sub-module inputs; after a conditional check, the scenario point cloud P1 is divided into bounded point cloud P11 and unbounded point cloud P12; then, a safety risk scenario threshold value f3 is set, and point clouds P11 and P12 are sampled based on this threshold, resulting in segmented point cloud P2.
    11. A method according to claim 1, characterized in that said extraction of the danger range adopts the following data segmentation scheme: For this approach, the data collected by the data collection sub-module, scenario point cloud P1, data from the target detection bounding box X1, and safety risk scenario data S1 are used as sub-module inputs; after a conditional check, the scenario point cloud P1 is divided into bounded point cloud P11 and unbounded point cloud P12; then, a safety risk scenario threshold value f3 is set, and the point clouds P11 and P12 are segmented based on this threshold, resulting in segmented point cloud P2.
GB2316614.3A 2021-03-01 2021-04-01 Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field Pending GB2621048A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110228419 2021-03-01
PCT/CN2021/085146 WO2022141910A1 (en) 2021-01-01 2021-04-01 Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field

Publications (2)

Publication Number Publication Date
GB202316614D0 GB202316614D0 (en) 2023-12-13
GB2621048A true GB2621048A (en) 2024-01-31

Family

ID=84842153

Family Applications (2)

Application Number Title Priority Date Filing Date
GB2313217.8A Pending GB2619196A (en) 2021-03-01 2021-04-01 Multi-target vehicle detection and re-identification method based on radar and video fusion
GB2316614.3A Pending GB2621048A (en) 2021-03-01 2021-04-01 Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB2313217.8A Pending GB2619196A (en) 2021-03-01 2021-04-01 Multi-target vehicle detection and re-identification method based on radar and video fusion

Country Status (2)

Country Link
CN (2) CN115943439A (en)
GB (2) GB2619196A (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116193085B (en) * 2023-04-24 2023-07-18 中汽信息科技(天津)有限公司 Automobile tracking and positioning method and system based on edge computing technology
CN116894102B (en) * 2023-06-26 2024-02-20 珠海微度芯创科技有限责任公司 Millimeter wave imaging video stream filtering method, device, equipment and storage medium
CN116564098B (en) * 2023-07-10 2023-10-03 北京千方科技股份有限公司 Method, device, equipment and medium for identifying same vehicle in different data sources
CN117095314B (en) * 2023-08-22 2024-03-26 中国电子科技集团公司第五十四研究所 Target detection and re-identification method under cross-domain multi-dimensional air-space environment
CN117093872B (en) * 2023-10-19 2024-01-02 四川数字交通科技股份有限公司 Self-training method and system for radar target classification model
CN117470254B (en) * 2023-12-28 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Vehicle navigation system and method based on radar service
CN117672007B (en) * 2024-02-03 2024-04-26 福建省高速公路科技创新研究院有限公司 Road construction area safety precaution system based on thunder fuses

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105892471A (en) * 2016-07-01 2016-08-24 北京智行者科技有限公司 Automatic automobile driving method and device
CN108639059A (en) * 2018-05-08 2018-10-12 清华大学 Driver based on least action principle manipulates behavior quantization method and device
CN108932462A (en) * 2017-05-27 2018-12-04 华为技术有限公司 Driving intention determines method and device
US10281920B2 (en) * 2017-03-07 2019-05-07 nuTonomy Inc. Planning for unknown objects by an autonomous vehicle
CN110850431A (en) * 2019-11-25 2020-02-28 盟识(上海)科技有限公司 System and method for measuring trailer deflection angle
CN111985322A (en) * 2020-07-14 2020-11-24 西安理工大学 Road environment element sensing method based on laser radar

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609522B (en) * 2017-09-19 2021-04-13 东华大学 Information fusion vehicle detection system based on laser radar and machine vision
KR20210025523A (en) * 2018-07-02 2021-03-09 소니 세미컨덕터 솔루션즈 가부시키가이샤 Information processing device and information processing method, computer program, and mobile device
CN110532896B (en) * 2019-08-06 2022-04-08 北京航空航天大学 Road vehicle detection method based on fusion of road side millimeter wave radar and machine vision
CN111914664A (en) * 2020-07-06 2020-11-10 同济大学 Vehicle multi-target detection and track tracking method based on re-identification
CN111862157B (en) * 2020-07-20 2023-10-10 重庆大学 Multi-vehicle target tracking method integrating machine vision and millimeter wave radar


Also Published As

Publication number Publication date
CN115943439A (en) 2023-04-07
CN115605777A (en) 2023-01-13
GB202313217D0 (en) 2023-10-11
GB202316614D0 (en) 2023-12-13
GB2619196A (en) 2023-11-29

Similar Documents

Publication Publication Date Title
GB2621048A (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
WO2022206942A1 (en) Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN112236346B (en) Method and apparatus for simulating autonomous driving
US11217012B2 (en) System and method for identifying travel way features for autonomous vehicle motion control
CN112700470B (en) Target detection and track extraction method based on traffic video stream
Zhao et al. On-road vehicle trajectory collection and scene-based lane change analysis: Part i
DE112020000487T5 (en) AUTOMATIC SELECTION OF DATA SAMPLE FOR ANNOTATION
WO2018020954A1 (en) Database construction system for machine-learning
DE112019001657T5 (en) SIGNAL PROCESSING DEVICE AND SIGNAL PROCESSING METHOD, PROGRAM AND MOBILE BODY
CN112930554A (en) Electronic device, system and method for determining a semantic grid of a vehicle environment
EP4089659A1 (en) Map updating method, apparatus and device
CN113359709B (en) Unmanned motion planning method based on digital twins
KR102565573B1 (en) Metric back-propagation for subsystem performance evaluation
US11887324B2 (en) Cross-modality active learning for object detection
CN116685874A (en) Camera-laser radar fusion object detection system and method
US11620838B2 (en) Systems and methods for answering region specific questions
DE102021127118A1 (en) Identifying objects with LiDAR
CN116830164A (en) LiDAR decorrelated object detection system and method
CN113895464A (en) Intelligent vehicle driving map generation method and system fusing personalized driving style
DE112022003364T5 (en) COMPLEMENTARY CONTROL SYSTEM FOR AN AUTONOMOUS VEHICLE
CN114882182A (en) Semantic map construction method based on vehicle-road cooperative sensing system
WO2022098511A2 (en) Architecture for map change detection in autonomous vehicles
Jung et al. Intelligent Hybrid Fusion Algorithm with Vision Patterns for Generation of Precise Digital Road Maps in Self-driving Vehicles.
KR20220073472A (en) Cross section integrated information providing system and method based on V2X
CN115662166B (en) Automatic driving data processing method and automatic driving traffic system