WO2022141910A1 - Vehicle-road lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field - Google Patents


Info

Publication number
WO2022141910A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
data
field
risk
bounding box
Prior art date
Application number
PCT/CN2021/085146
Other languages
English (en)
Chinese (zh)
Inventor
杜豫川
许军
暨育雄
赵聪
倪澜涛
沈煜
曹静
王金栋
Original Assignee
杜豫川
许军
Priority date
Filing date
Publication date
Application filed by 杜豫川, 许军
Priority to GB2316614.3A (GB2621048A)
Priority to CN202280026657.8A (CN117441197A)
Priority to PCT/CN2022/084738 (WO2022206942A1)
Publication of WO2022141910A1


Classifications

    • G01S7/40 Means for monitoring or calibrating
    • G01S17/87 Combinations of lidar systems
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S7/003 Transmission of data between radar, sonar or lidar systems and remote stations
    • G01S7/4808 Evaluating distance, position or velocity data
    • G01S7/4972 Alignment of sensor
    • G06F18/00 Pattern recognition
    • G06F18/20 Pattern recognition - Analysing
    • G06V20/56 Context or environment of the image exterior to a vehicle, using sensors mounted on the vehicle
    • G08G1/0116 Measuring and analysing of parameters relative to traffic conditions based on data from roadside infrastructure, e.g. beacons
    • G08G1/0133 Traffic data processing for classifying traffic situation
    • G08G1/0141 Traffic information dissemination
    • G08G1/0145 Active traffic flow control
    • G08G1/04 Detecting movement of traffic using optical or ultrasonic detectors
    • G08G1/048 Detection with provision for compensation of environmental or other conditions, e.g. snow, vehicle stopped at detector
    • G08G1/164 Anti-collision systems - centralised, e.g. external to vehicles
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F9/451 Execution arrangements for user interfaces
    • G06T7/194 Segmentation involving foreground-background segmentation

Definitions

  • The invention relates to vehicle automatic driving perception assistance technology, in particular to a vehicle-road lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field. It is mainly oriented to infrastructure data collection in a vehicle-road collaborative environment, driving safety risk field calculation, data segmentation, data publishing, and data fusion, so as to enhance the hazard perception capabilities of autonomous vehicles.
  • Autonomous driving is an emerging technology in the field of transportation in which China and the rest of the world are investing heavily, with every country trying to develop and establish a safe and efficient autonomous driving technology route. Among these, the solution strongly supported in China is the vehicle-road coordination model.
  • The vehicle-road coordination model does not rely only on the intelligence of individual vehicles; it extends the intelligent system to the entire traffic environment, thereby improving the overall operational safety and efficiency of the whole transportation system.
  • Current driver assistance mainly uses the safety distance model: when the following distance falls below the safe distance, the assistance system issues an alarm and brakes automatically.
  • Many safety distance models determine the safety state of the vehicle by analyzing, in real time, the safe distance for the relative movement of the leading and following vehicles.
  • Driver safety assistance algorithms are mainly based on the current position of the car (CCP), time to lane crossing (TLC), and variable rumble bands (VRBS).
  • Field theory has become an emerging direction in autonomous driving safety; it was originally used for vehicle and robot navigation.
  • A distinct advantage is that it allows the vehicle to navigate autonomously using only its own position and local sensor measurements.
  • Obstacles around the vehicle are modeled as repulsive potential fields (risk fields).
  • The vehicle can use the field strength gradient at its location to generate control actions that steer it around obstacles.
  • At present, field theory is mainly applied to motion planning of autonomous vehicles and to modeling driver behavior in specific traffic scenarios, such as car following.
  • However, risk factors such as the driver's personality and psychological and physiological characteristics, complex road conditions, and the like are not fully considered, and the driver-vehicle-road interaction is insufficiently described; the practical application of these models is therefore limited.
  • Among perception technologies, lidar is a widely used and effective technical means: it offers a wide scanning range and intuitive results, and it is unaffected by ambient natural light, making it very suitable for autonomous driving perception.
  • the output result of lidar is in point cloud format, which has relatively low-level data characteristics.
  • The scanned data are recorded as points; each point contains three-dimensional coordinates, and some also carry color (RGB) or reflection intensity (Intensity) information. Much as image processing technology has flourished, processing methods for lidar point cloud data are steadily multiplying, covering target detection, target tracking, semantic segmentation, and other directions.
  • Semantic segmentation is a basic task in computer vision. Semantic segmentation can have a more fine-grained understanding of images, which is very important in the field of autonomous driving.
  • Point cloud semantic segmentation is the extension of image semantic segmentation in computer vision: facing raw point cloud data, it perceives the target types, quantities, and other characteristics in the scene and renders the points of the same target in the same color.
  • 3D point cloud segmentation requires knowledge of both the global geometry and fine-grained details of each point. According to the different segmentation granularity, 3D point cloud segmentation methods can be divided into three categories: semantic segmentation, instance segmentation and partial segmentation.
  • the effect of point cloud segmentation has a lot to do with the quality of point cloud data.
  • the roadside equipment can provide unprocessed point cloud data to maximize the detection effect, but this will lead to the problem of excessive data transmission.
  • The latest V2V research shows that sending information on all road objects detected by onboard sensors can still place a high load on the transmission network. A dynamic screening mechanism is therefore needed to pick out more representative "skeleton" point clouds that lose little or no feature information.
  • The specific implementation process is as follows: according to the theoretical derivation, formulate a point cloud value judgment standard; assign a value to each point in the point cloud data; and decide, according to that value, whether to feed the point into the transmission network.
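One way to picture this screening step: assign each point a value score and forward only points whose score clears a threshold. The sketch below is a minimal illustration of that mechanism; the scoring function (inverse distance to a hypothetical danger center) is an assumption for demonstration, not the patent's judgment standard.

```python
import numpy as np

def screen_point_cloud(points, values, threshold):
    """Keep only the points whose value score exceeds the threshold,
    i.e. the points worth feeding into the transmission network."""
    return points[values > threshold]

# Hypothetical usage: score points by inverse distance to a danger center.
points = np.random.rand(1000, 3) * 50.0            # synthetic scene
danger_center = np.array([25.0, 25.0, 0.0])
values = 1.0 / (np.linalg.norm(points - danger_center, axis=1) + 1e-6)
skeleton = screen_point_cloud(points, values, threshold=0.05)
print(f"kept {len(skeleton)} of {len(points)} points")
```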
  • The present invention provides a vehicle-road lidar point cloud dynamic segmentation and fusion method based on the driving safety risk field. Starting from the theoretical idea of risk field calculation and from concrete point cloud data, it proposes a driving safety risk calculation mechanism that is usable in actual computation and covers most objects affecting road safety. The point cloud of each risky object is used as the final transmission result; it is then fused with the point cloud collected by the target vehicle's lidar, and the method is evaluated. The method includes the following steps:
  • the point cloud of the traffic scene is obtained through lidar scanning. This data is the data source of all subsequent links.
  • the flow chart of the data acquisition module is shown in Figure 2.
  • A1: The first option is to use only the roadside lidar in the roadside perception unit to scan and construct the scene point cloud; the construction of the safety risk field and the numerical calculation in the subsequent links then use only the roadside lidar point cloud.
  • A2: The second option is to use both the roadside lidar in the roadside perception unit and the lidar mounted on a preset calibration vehicle in the scene to construct the scene point cloud; in the subsequent links, the construction of the safety risk field and the numerical calculation then use both the roadside lidar point cloud and the calibration-vehicle lidar point cloud, which verify and proofread each other.
  • the driving safety risk field calculation module includes a target detection sub-module and a safety field calculation sub-module.
  • the flow chart is shown in Figure 3.
  • B1: Target detection sub-module.
  • The scene point cloud obtained in (1) is fed to deep-learning 3D target detection using the PV-RCNN algorithm; that is, the scene point cloud data are input and the target detection results are obtained.
  • the data source is lidar point cloud data
  • the layout position of lidar determines the size and characteristics of the scene point cloud data, etc.
  • The bounding box is the bounding box of each target in the scene; its attributes are position, length, height, width, deflection angle, etc., as shown in Figure 4.
  • the scene acquisition sub-module is to obtain some features and information in the scene in advance before the target detection sub-module, so as to facilitate better target detection and subsequent safety field calculation.
  • There are various options for this sub-module, as follows:
  • the RGB information of the scene collected by the camera sensor and the corresponding horizontal and vertical boundaries are used to determine the type of the object, so as to assist in identifying the type of the static object.
  • B3: Safety field calculation sub-module.
  • The inputs to this sub-module are the static object types and the target detection bounding boxes. Drawing on field-theoretic methods in physics, such as the gravitational and magnetic fields, everything in the traffic environment that may cause risk is regarded as a source of danger whose influence spreads around it.
  • The field strength of the risk field can be understood as the magnitude of the risk coefficient at a given distance from the source of danger: the closer to the danger center, the greater the possibility of an accident; the farther away, the lower the accident probability. When the distance approaches 0, the target vehicle can be considered to be in contact collision with the source of danger, that is, a traffic accident has occurred.
  • E_S is the field strength vector of the driving safety risk field;
  • E_R is the field strength vector of the static risk field;
  • E_V is the field strength vector of the dynamic risk field.
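The formula tying these three vectors together did not survive extraction; from the definitions above, the natural reading is a vector superposition over all static field sources a and dynamic field sources b. The following is a reconstruction under that assumption, not the patent's verbatim expression:

```latex
\mathbf{E}_S = \mathbf{E}_R + \mathbf{E}_V
             = \sum_{a} \mathbf{E}_{R,a} + \sum_{b} \mathbf{E}_{V,b}
```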
  • The driving safety risk field model expresses the potential driving risk posed by traffic factors in the actual scene. Risk is measured by the likelihood of an accident and by its severity.
  • the driving safety risk field is divided into two categories according to the different sources, namely the static risk field source and the dynamic risk field source:
  • Static risk field: the field sources are relatively static objects in the traffic environment, mainly road markings such as lane dividing lines and rigid separation facilities such as the central divider. This type of object has two characteristics: (1) leaving road construction aside, it is stationary relative to the target vehicle; (2) except for some rigid separation facilities, it makes the driver intentionally keep away from its location through its legal effect, yet even if the driver actually crosses the lane line, a traffic accident does not necessarily follow immediately.
  • LT_a is the risk coefficient of different types of lane markings a1;
  • R_a is a constant greater than 0, representing the road condition influencing factor at (x_a, y_a);
  • f_d is the distance influencing factor of different types of lane markings a1;
  • r_aj is the distance vector between the lane marking a1 and the target vehicle j, where (x_j, y_j) is the centroid of the target vehicle j and (x_a, y_a) is the foot of the perpendicular from (x_j, y_j) to the lane marking a1;
  • k_1 is a constant greater than 0, representing the amplification factor of the distance; r_aj/||r_aj|| gives the direction of the field strength.
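The static field strength expression itself is missing from this extraction. A plausible reconstruction from the symbols just defined (an assumption consistent with the power-law distance behavior described later, not the patent's verbatim formula) is:

```latex
\mathbf{E}_R = \frac{LT_a \, R_a \, f_d}{\lVert \mathbf{r}_{aj} \rVert^{\,k_1}}
               \cdot \frac{\mathbf{r}_{aj}}{\lVert \mathbf{r}_{aj} \rVert}
```

so the field decays as a power of the distance to the marking and points from the marking toward the target vehicle.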
  • Dynamic risk field: the field sources are the relatively moving objects in the traffic environment, mainly vehicles, pedestrians, roadblocks, etc. This type of object also has two characteristics: (1) with the moving target vehicle as the reference frame, it has a relative speed; (2) collision with such objects is strictly forbidden, since it would inevitably cause a serious traffic accident.
  • The field strength vector E_V is defined over the distance vector r_bj = (x_j - x_b, y_j - y_b).
  • r_bj is the distance vector between the dynamic risk field source b and the target vehicle j; k_2, k_3 and G are all constants greater than 0; R_b has the same meaning as R_a; T_bj is the type correction coefficient between the dynamic risk field source b and the target vehicle j; v_bj is the relative velocity between the dynamic risk field source b and the target vehicle j; θ is the angle between the directions of v_bj and r_bj, positive in the clockwise direction. The larger the value of E_V, the higher the risk posed by the dynamic risk field source b to the target vehicle j.
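The dynamic field strength expression is likewise missing. Consistent with the power-function-of-distance assumption stated later in the text, with k_3 acting as a speed correction and θ coupling the relative velocity to the line of sight, one plausible reconstruction (an assumption, not the patent's verbatim formula) is:

```latex
\mathbf{E}_V = \frac{G \, R_b \, T_{bj}}{\lVert \mathbf{r}_{bj} \rVert^{\,k_2}}
               \, e^{\,k_3 \lVert \mathbf{v}_{bj} \rVert \cos\theta}
               \cdot \frac{\mathbf{r}_{bj}}{\lVert \mathbf{r}_{bj} \rVert}
```

where the exponential term raises the risk when the relative velocity points toward the target vehicle.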
  • the present invention selects the point cloud data obtained by the roadside laser radar as the data source, and uses the roadside unobstructed point cloud scanning result as the calculation carrier.
  • Step 3: Substitute into the driving safety risk field model the attributes, such as relative position and type, of the target vehicle and of the other objects around it, the parameters from step 1) such as the distance of each static risk field source relative to the target vehicle, and traffic environment factors such as road conditions.
  • The speed of the target vehicle within the relative speed is set as an unknown parameter and its evaluation is deferred; the relative speed is then an expression containing an unknown parameter.
  • The safety risk posed to the calculation target by every object in the scanning range is thus obtained, forming the driving safety risk distribution with the target vehicle at its core.
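The deferred calculation can be pictured as a closure that fixes every known quantity and leaves only the target vehicle's velocity open; the field expression inside it is the hedged reconstruction above, not the patent's verbatim formula.

```python
import numpy as np

def make_semi_finished_field(r_bj, v_b, k2=1.0, k3=0.1, G=0.001,
                             R_b=1.0, T_bj=2.5):
    """Build the 'semi-finished' dynamic risk field: everything is fixed
    except the target vehicle's velocity v_j, substituted later."""
    dist = np.linalg.norm(r_bj)

    def field_strength(v_j):
        v_rel = v_b - v_j                      # relative velocity v_bj
        speed = np.linalg.norm(v_rel)
        cos_theta = np.dot(v_rel, r_bj) / (speed * dist + 1e-9)
        return (G * R_b * T_bj / dist**k2) * np.exp(k3 * speed * cos_theta)

    return field_strength

# Hypothetical usage: build at the roadside, evaluate on the vehicle.
f = make_semi_finished_field(r_bj=np.array([10.0, 2.0]),
                             v_b=np.array([5.0, 0.0]))
risk = f(np.array([15.0, 0.0]))                # substitute v_j when known
```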
  • the flow chart of the data segmentation module is shown in Figure 8.
  • the scene point cloud is divided into the point cloud inside the bounding box and the point cloud outside the bounding box.
  • an algorithm is designed to detect whether the point cloud is within the bounding box, so that the point cloud can be divided into two types: point cloud inside the bounding box and point cloud outside the bounding box.
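A minimal sketch of such an in-box test for yaw-rotated boxes, based on the bounding box attributes listed earlier (position, length, width, height, deflection angle); the patent does not prescribe a particular implementation.

```python
import numpy as np

def split_by_box(points, center, size, yaw):
    """Split an (N, 3) point cloud into the points inside and outside a
    3D bounding box with centroid `center`, extents `size` = (l, w, h),
    and deflection angle `yaw` about the z axis."""
    c, s = np.cos(-yaw), np.sin(-yaw)          # rotate world -> box frame
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    local = (points - center) @ rot.T          # box-aligned coordinates
    mask = np.all(np.abs(local) <= np.asarray(size) / 2.0, axis=1)
    return points[mask], points[~mask]         # P11, P12
```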
  • C1 Sampling scheme: the scene point cloud data P1 obtained by the data acquisition sub-module, together with the target detection bounding box X1 and the safety risk field data S1 obtained by the driving safety risk field calculation module, are the inputs of this sub-module. A conditional judgment first determines whether each point in the scene point cloud data P1 lies within the bounding box X1, yielding the point cloud data P11 within the bounding box and the point cloud data P12 outside it. The hyperparameters f1 and f2 are then set, and P11 and P12 are randomly sampled according to f1 and f2 to obtain the point cloud data P2 after data segmentation.
  • C2 Segmentation scheme: with the same inputs and the same in-box judgment yielding P11 and P12, the point cloud data P11 within the bounding box is retained and the point cloud data P12 outside the bounding box is eliminated, giving the point cloud data P2 after data segmentation.
  • C3 Sampling scheme based on the safety risk field: with the same inputs and the same in-box judgment yielding P11 and P12, a numerical threshold f3 of the safety risk field is set, and P11 and P12 are sampled according to this threshold to obtain the point cloud data P2 after data segmentation.
  • C4 Segmentation scheme based on the safety risk field: with the same inputs and the same in-box judgment yielding P11 and P12, the numerical threshold f3 of the safety risk field is set and P11 and P12 are divided according to this threshold, giving the point cloud data P2 after data segmentation (see the sketch below for all four schemes).
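A compact sketch of the four schemes, using the 0.8 and 0.2 default sampling weights named in the glossary; how the safety risk field values are attached to individual points is an assumption here, since the text does not fix it.

```python
import numpy as np

def segment_point_cloud(p_in, p_out, scheme, risk_in=None, risk_out=None,
                        f1=0.8, f2=0.2, f3=None, seed=0):
    """Return P2 from P11 (in-box points) and P12 (out-of-box points)
    under one of the four schemes C1-C4. risk_in/risk_out are per-point
    safety risk field values aligned with p_in/p_out (assumed)."""
    rng = np.random.default_rng(seed)
    if scheme == "C1":        # random sampling inside and outside boxes
        return np.vstack([p_in[rng.random(len(p_in)) < f1],
                          p_out[rng.random(len(p_out)) < f2]])
    if scheme == "C2":        # keep in-box points, discard the rest
        return p_in
    pts = np.vstack([p_in, p_out])
    risk = np.concatenate([risk_in, risk_out])
    if scheme == "C3":        # keep high-risk points, thin the rest
        keep_low = (risk <= f3) & (rng.random(len(pts)) < f2)
        return pts[(risk > f3) | keep_low]
    if scheme == "C4":        # hard split by the risk threshold
        return pts[risk > f3]
    raise ValueError(f"unknown scheme: {scheme}")
```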
  • The specific method for extracting the dangerous range is as follows: if the dangerous target is a static risk field source, take the plane where the field source lies as the center; the areas with a width of d/2 on either side form the dangerous range, where d is the width of the dangerous target. If the dangerous target is a dynamic risk field source, take the centroid of the dangerous target as the center and intercept a rectangular area with a width of 1.5d and a length of (0.5l + 0.5l×k) as the dangerous range, where d is the width of the dangerous target, l is its length, and k is a speed correction coefficient not less than 1. Dangerous ranges are extracted in sequence according to the risk coefficients of the danger sources, and any overlapping area is extracted only once.
  • the final extracted total danger range result can be provided to the target vehicle as the perception assistance data of the target vehicle.
  • The flow chart of the data publishing module is shown in Figure 9. Based on the data segmentation results, the data are compressed by the roadside sensing unit, and a data transmission channel is then established between the roadside sensing unit and the numbered target vehicle. Depending on whether the target vehicle is moving, there are two alternatives:
  • D1: If the target vehicle is stationary, the point cloud after data segmentation and the vector sum of the static and dynamic risk fields, that is, the safety risk field vector sum, can be published directly; the modulus of the sum is a numerical value.
  • D2: If the target vehicle is moving, the point cloud after data segmentation, the static risk field, and the semi-finished risk field data are published; the target vehicle's speed is then substituted to obtain the vector sum of its safety risk field, whose modulus is a numerical value.
  • The point cloud of a certain area around each object with a high safety risk field value, obtained after data segmentation, is fused with the point cloud scanned by the target vehicle's lidar: a point cloud coordinate transformation matrix is designed to register the high-risk point clouds between the vehicle and the roadside, and the fused point cloud is compressed to obtain compressed point cloud data.
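A minimal fusion sketch, assuming the rotation matrix R and translation vector T are already known from registration, and using voxel-grid downsampling as a stand-in for the unspecified compression step.

```python
import numpy as np

def fuse_point_clouds(roadside_pts, vehicle_pts, R, T, voxel=0.1):
    """Transform the roadside high-risk point cloud into the vehicle
    frame, merge it with the onboard scan (P4), and compress the result
    by keeping one point per voxel (P5)."""
    aligned = roadside_pts @ R.T + T               # into vehicle frame
    fused = np.vstack([vehicle_pts, aligned])      # fused cloud P4
    keys = np.floor(fused / voxel).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return fused[np.sort(first)]                   # compressed cloud P5
```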
  • V represents the original point cloud of the vehicle without processing
  • I represents the original point cloud obtained by the unprocessed roadside perception unit
  • I 1 represents the point cloud obtained by segmenting the original point cloud by the roadside perception unit using the segmentation in the data segmentation method
  • I 2 represents the point cloud obtained by the roadside perception unit segmenting the original point cloud using the sampling in the data segmentation method
  • I 1S represents the point cloud obtained by dividing the original point cloud by the roadside sensing unit using the segmentation based on the safety field in the data segmentation method
  • I 2S represents the point cloud obtained by segmenting the original point cloud by the roadside sensing unit using the sampling based on the safety field in the data segmentation method
  • Driving safety risk field the distribution of the driving safety risk of the static and dynamic objects in the scene to the driving vehicle. In the present invention, unless otherwise specified, it is synonymous with the safety risk field.
  • Lidar an active remote sensing device that uses a laser as the emission light source and adopts photoelectric detection technology.
  • Point Cloud A point cloud is a dataset of points in a coordinate system.
  • Point cloud data including three-dimensional coordinates X, Y, Z, color, intensity value, time, etc., that is, a structured matrix.
  • Target vehicle-side lidar L 1 lidar mounted on the target vehicle.
  • Roadside Lidar L 2 Lidar installed on the roadside.
  • Vehicle-side lidar point cloud Represents the point cloud acquired by the vehicle-side lidar.
  • Roadside lidar point cloud represents the point cloud acquired by roadside lidar.
  • Scene Point Cloud The point cloud of the traffic scene.
  • V2V: end-to-end wireless communication between vehicles in motion; that is, through V2V communication technology, vehicle terminals exchange information with one another wirelessly.
  • Convolution The result of summing two variables after multiplying them in a certain range.
  • CNN Convolutional Neural Network
  • Voxel: short for volume element, the smallest unit into which digital data are divided in three-dimensional space. A volume containing voxels can be displayed by volume rendering or by extracting polygonal isosurfaces at a given threshold contour.
  • MLP: multi-layer perceptron, also known as an artificial neural network; besides the input and output layers, there can be several hidden layers in between. The simplest MLP contains only one hidden layer, that is, a three-layer structure.
  • V2X vehicle to everything, that is, the exchange of information between vehicles and the outside world.
  • RSU Road Side Unit, the roadside unit, is installed on the roadside and communicates with the vehicle-mounted unit.
  • OBU On board Unit, the on-board unit.
  • Skeleton point The key node of the 3D point cloud model.
  • Safety risk threshold: a value set manually according to the actual application scenario.
  • High-risk object: an object whose safety risk value for the target vehicle is greater than the set threshold.
  • Field source all kinds of objects involved in the calculation process in the calculation of driving safety risk.
  • Point cloud registration For point cloud data in different coordinate systems, the transformation matrix, that is, the rotation matrix R and the translation matrix T, is obtained through registration, and the error is calculated to compare the matching results.
  • Data acquisition module A The function is data acquisition, the input is the traffic scene, and the output is the scene point cloud data P 1 .
  • Driving safety risk field calculation module B the function is driving safety risk field calculation, including target detection sub-module B 1 , scene acquisition sub-module B 2 , and safety field calculation sub-module B 3 , the input is the scene point cloud data P 1 , and the output is Target detection bounding box X 1 , safety risk field value S 1 /semi-finished product S 2 .
  • Bounding box The point cloud target detection result, the attributes are position, length, height, width, deflection angle, etc., such as the target detection bounding box X 1 .
  • Safety risk field value S 1 the modulus of the safety risk field vector sum of all risk sources in the scene for a certain object.
  • Semi-finished risk field S 2: when calculating the dynamic risk field, the speed of the target vehicle is set as an unknown parameter; the resulting expression, which contains the unknown parameter and is passed downstream, is called the semi-finished product.
  • Data segmentation module C the function is data segmentation, the input is the scene point cloud data P 1 , the target detection bounding box X 1 , the safety risk field value S 1 /semi-finished product risk field S 2 , and the output is the point cloud data P 2 after data segmentation .
  • Data release module D The function is data release, that is, the roadside perception unit releases data to the target vehicle in the scene, and the released data is the point cloud data P 2 after data segmentation, and the safety risk field/semi-finished product risk field data S 1 /S 2 .
  • Data fusion module E The function is to fuse the point cloud data P 2 after data segmentation with the point cloud data P 3 of the target vehicle to obtain the fused point cloud data P 4 , and obtain compressed point cloud data P 5 through data compression.
  • Method evaluation module F For the compressed point cloud data P 5 , the target detection result R 1 is obtained through the PV-RCNN deep learning target detection algorithm, and the evaluation is given for the target detection result R 1 , and the best data segmentation scheme is selected.
  • Roadside perception unit including but not limited to roadside lidar, cameras and other sensors.
  • Target vehicle: the object on which each risk source in the driving safety risk field acts, and also the object to which the roadside perception unit transmits data, that is, the ego car.
  • Target vehicle the target vehicle in the process of calculating the safety risk field.
  • Target vehicle lidar L 3 lidar mounted on the target vehicle.
  • Segmentation: a data segmentation method that separates the point clouds of detection targets and non-detection targets; its meaning is narrower than that of data segmentation as a whole.
  • Sampling: a data segmentation method that randomly samples the point clouds of detection targets and non-detection targets according to weights; the default parameters are 0.8 and 0.2.
  • Figure 1: Flow chart of the vehicle-road lidar point cloud dynamic segmentation and fusion method based on the driving safety risk field
  • Figure 4: Schematic diagram of the bounding box of the target detection result
  • Figure 6: Schematic diagram of the safety risk field distribution
  • Figure 7: Schematic diagram of the xoy-plane projection of the safety risk field
  • Figure 12: Schematic diagram of the method evaluation reference system
  • A roadside lidar point cloud segmentation method based on the safety risk field mechanism, whose flowchart is shown in Figure 1, comprises six modules: data acquisition module A, driving safety risk field calculation module B, data segmentation module C, data publishing module D, data fusion module E, and method evaluation module F.
  • A1: In an autonomous driving traffic scene, only the roadside lidar in the roadside perception unit performs the scan, obtaining the point cloud data P1 of the traffic scene, which is the data source for all subsequent links.
  • the driving safety risk field calculation module includes a target detection sub-module and a safety field calculation sub-module.
  • the flow chart is shown in Figure 3.
  • B1: Target detection sub-module.
  • The scene point cloud obtained in A is fed to deep-learning 3D target detection using the PV-RCNN algorithm; that is, the scene point cloud data are input and the target detection results are obtained.
  • the data source is lidar point cloud data
  • the layout position of lidar determines the size and characteristics of the scene point cloud data, etc.
  • The bounding box is the bounding box X1 of each target in the scene; its attributes are position, length, height, width, deflection angle, etc., as shown in Figure 4.
  • the scene acquisition sub-module is to obtain some features and information in the scene in advance before the target detection sub-module, so as to facilitate better target detection and subsequent safety field calculation.
  • There are various options for this sub-module, as follows:
  • the RGB information of the scene collected by the camera and the corresponding horizontal and vertical boundaries are used to determine the type of the object, so as to assist in identifying the type of the static object.
  • B3: Safety field calculation sub-module.
  • The inputs to this sub-module are the static object types and the target detection bounding boxes. Drawing on field-theoretic methods in physics, such as the gravitational and magnetic fields, everything in the traffic environment that may cause risk is regarded as a source of danger whose influence spreads around it.
  • The field strength of the risk field can be understood as the magnitude of the risk coefficient at a given distance from the source of danger: the closer to the danger center, the greater the possibility of an accident; the farther away, the lower the accident probability. When the distance approaches 0, the target vehicle can be considered to be in contact collision with the source of danger, that is, a traffic accident has occurred.
  • E_S is the field strength vector of the driving safety risk field;
  • E_R is the field strength vector of the static risk field;
  • E_V is the field strength vector of the dynamic risk field.
  • The driving safety risk field model expresses the potential driving risk posed by traffic factors in the actual scene. Risk is measured by the likelihood of an accident and by its severity.
  • the driving safety risk field is divided into two categories according to the different sources, namely the static risk field source and the dynamic risk field source:
  • Static risk field: the field sources are relatively static objects in the traffic environment, mainly road markings such as lane dividing lines and rigid separation facilities such as the central divider.
  • Traffic regulations stipulate that vehicles may not drive on or cross solid lane lines. If, however, the driver unintentionally leaves the current lane and perceives the risk of violating the constraint of the lane markings, the driver will steer the vehicle back to the center of the lane. At the same time, the closer the vehicle is to the lane markings, the greater the risk.
  • Driving risk is also related to road conditions, and poor road conditions can lead to high risks.
  • the driving risk of relatively stationary objects is mainly affected by visibility, and the lower the visibility, the higher the driving risk.
  • This type of object has two characteristics: (1) leaving road construction aside, it is stationary relative to the target vehicle, because its actual meaning is a dangerous boundary without speed attributes; (2) except for some rigid separation facilities, it makes the driver intentionally keep away from its location through its legal effect, yet even if the driver actually crosses the lane line, a traffic accident does not necessarily follow immediately.
  • LT_a is the risk coefficient of the different types of lane marking a1, determined by traffic laws; in general, rigid separation facilities > lane separation lines that may not be crossed > lane separation lines that may be crossed.
  • The parameters of common facilities and lane lines are as follows: guardrail-type or green-belt-type median: 20 to 25; sidewalk stones: 18 to 20; yellow solid or dotted line: 15 to 18; white solid line: 10 to 15; white dotted line: 0 to 5.
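For configuration these ranges can sit in a small lookup table; the midpoints below are one arbitrary choice within each quoted range, not values fixed by the patent.

```python
# Midpoints of the LT_a ranges quoted above (one choice per range).
LANE_MARKING_RISK = {
    "guardrail_or_green_belt_median": 22.5,   # 20 - 25
    "sidewalk_stones": 19.0,                  # 18 - 20
    "yellow_solid_or_dotted_line": 16.5,      # 15 - 18
    "white_solid_line": 12.5,                 # 10 - 15
    "white_dotted_line": 2.5,                 # 0 - 5
}
```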
  • R_a is a constant greater than 0 indicating the road condition influencing factor at (x_a, y_a), determined by traffic environment factors such as the road adhesion coefficient, road slope, road curvature, and visibility in the vicinity of object a; a fixed value is picked for a given road section.
  • the data interval generally used is [0.5, 1.5].
  • f d is the influence factor of the distance of different types of lane markings a 1 , which is determined by the object type, object width, lane width and so on.
  • D is the lane width
  • d is the width of the target vehicle j
  • d generally takes the width of the bounding box of the target vehicle j.
  • r_aj is the distance vector between the lane marking a1 and the target vehicle j, where (x_j, y_j) is the centroid of the target vehicle j and (x_a, y_a) is the foot of the perpendicular from (x_j, y_j) to the lane marking a1.
  • k 1 is a constant greater than 0, representing the amplification factor of the distance, because the collision risk and the distance between the two objects do not change linearly in general. Generally, the value of k 1 ranges from 0.5 to 1.5.
  • r_aj/||r_aj|| represents the direction of the field strength; in general practical applications, however, even if the field strength directions of two safety risk field sources at a certain point are opposite, the risk at that point cannot be considered correspondingly reduced, so the field strengths are usually still superimposed as scalars.
  • the field strength distribution results are shown in Fig. 5(a).
  • Dynamic risk field The field source is a relatively dynamic object in the traffic environment, and the magnitude and direction of its field strength vector are determined by the properties and states of the moving object and road conditions.
  • the dynamic objects here refer to dynamic objects that can actually collide with vehicles and cause heavy losses, mainly vehicles, pedestrians, and roadblocks.
  • This type of object also has two characteristics: (1) although such objects may be stationary relative to the road, such as roadside parking or roadblock facilities, they still have a relative speed with the moving target vehicle as the reference frame; (2) it is strictly forbidden to collide with such objects, since doing so would inevitably cause a serious traffic accident.
  • The present invention assumes that driving risk takes a power-function form in the vehicle-target distance.
  • r_bj = (x_j - x_b, y_j - y_b)
  • r bj is the distance vector between the dynamic risk field source b and the target vehicle j.
  • k 2 , k 3 and G are all constants greater than 0.
  • the meaning of k 2 is the same as that of k 1 above, and k 3 is the hazard correction for different speeds.
  • G is analogous to the electrostatic force constant and describes the magnitude of the risk coefficient between two objects of unit mass at unit distance. Generally, k_2 ranges from 0.5 to 1.5, k_3 from 0.05 to 0.2, and G usually takes the value 0.001.
  • R b is the same as that of R a , and the data interval used is also [0.5, 1.5].
  • T bj is the type correction coefficient between the dynamic risk field source b and the target vehicle j.
  • the risk coefficients of vehicle-vehicle collision and vehicle-person collision are different.
  • Common types of correction parameters are as follows: vehicle-vehicle frontal collision: 2.5 to 3; vehicle-vehicle rear-end collision: 1 to 1.5; person-vehicle collision: 2 to 2.5; vehicle-to-barrier collision: 1.5 to 2.
  • v bj is the relative velocity between the dynamic risk field source b and the target vehicle j, that is, the vector difference between the velocity v b of the field source b and the velocity v j of the target vehicle j.
  • θ is the angle between the directions of v_bj and r_bj, positive in the clockwise direction.
  • The semi-finished form of the relative velocity is obtained by leaving the target vehicle's velocity v_j as an unknown parameter in v_bj = v_b - v_j.
  • the present invention selects the lidar point cloud data as the data source, and uses the roadside unobstructed point cloud scanning result as the calculation carrier.
  • n can range from 50 to 100;
  • The point cloud density of each statistical space is checked; if it exceeds the threshold ε (related to the point cloud density, with a typical value of 1000), the point cloud in that space is randomly sampled to cap its density, finally yielding a more ideal global static point cloud background.
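A sketch of this density cap over a 2D grid of "statistical spaces"; the 1000-point threshold follows the text, while the cell size is an assumption.

```python
import numpy as np

def thin_static_background(points, cell=1.0, eps=1000, seed=0):
    """Randomly downsample every grid cell holding more than `eps`
    points, so the accumulated static background keeps a roughly
    uniform density."""
    rng = np.random.default_rng(seed)
    keys = np.floor(points[:, :2] / cell).astype(np.int64)
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True,
                                   return_counts=True)
    keep = np.ones(len(points), dtype=bool)
    for cell_id in np.flatnonzero(counts > eps):
        idx = np.flatnonzero(inverse == cell_id)
        drop = rng.choice(idx, size=len(idx) - eps, replace=False)
        keep[drop] = False
    return points[keep]
```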
  • the static risk field sources in the static scene are separated, including the lane dividing line, the median strip, the roadside area, etc., and the plane linear equation of each static risk field source is fitted by random sampling. Generally, it is required to collect more than 100 points evenly along the visual linear direction, and the collected points should not deviate too far from the target.
  • the speed is regarded as the standard speed.
  • the standard speed is the average vehicle speed in the point cloud scanning section under historical statistical conditions, and the direction is consistent with the lane where the target is located.
  • the flow chart of the data segmentation module is shown in Figure 8.
  • the scene point cloud is divided into the point cloud inside the bounding box and the point cloud outside the bounding box.
  • an algorithm is designed to detect whether the point cloud is within the bounding box, so that the point cloud can be divided into two types: point cloud inside the bounding box and point cloud outside the bounding box.
  • C1 Sampling scheme: the scene point cloud data P1 obtained by the data acquisition sub-module, together with the target detection bounding box X1 and the safety risk field data S1 obtained by the driving safety risk field calculation module, are the inputs of this sub-module. A conditional judgment first determines whether each point in the scene point cloud data P1 lies within the bounding box X1, yielding the point cloud data P11 within the bounding box and the point cloud data P12 outside it. The hyperparameters f1 and f2 are then set, and P11 and P12 are randomly sampled according to f1 and f2 to obtain the point cloud data P2 after data segmentation. Taking each screened object as the center, sampling or segmentation is combined to extract the point cloud of a certain area around it as the dangerous area.
  • the specific method of extracting the dangerous area is as follows:
  • If the dangerous target is a static risk field source, take the plane where the risk field source lies as the center; the areas intercepted on either side with a width of d/2 form the dangerous range, where d is the width of the target vehicle.
  • If the dangerous target is a dynamic risk field source, take the centroid of the dangerous target as the center and intercept a rectangular area with a width of 1.5d and a length of (0.5l + 0.5l×k) as the dangerous range, where the 0.5l part is the half-length on the side away from the target vehicle and the 0.5l×k part is the half-length on the side close to the target vehicle.
  • d is the width of the dangerous target
  • l is the length of the dangerous target.
  • k is a speed correction coefficient not less than 1, which depends on the speed of the dangerous target.
  • Dangerous ranges are extracted in sequence according to the hazard coefficient of the hazard source, and the overlapping areas of the hazard ranges are only extracted once.
  • the final extracted total danger range result can be provided to the target vehicle as the perception assistance data of the target vehicle.
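The window dimensions follow directly from these rules; in this sketch the placement and orientation of the window in the scene, and which side faces the target vehicle, are left to the caller.

```python
def danger_window(is_static, d, l=None, k=1.0):
    """Dimensions of the danger range around one risk source.
    Static source : a strip of total width d (d/2 on each side of the
                    plane containing the field source).
    Dynamic source: a rectangle of width 1.5*d and length
                    0.5*l + 0.5*l*k centred on the target's centroid,
                    the 0.5*l*k half (k >= 1) on the side nearer the
                    target vehicle."""
    if is_static:
        return {"width": d}
    if l is None or k < 1.0:
        raise ValueError("dynamic source needs l and k >= 1")
    return {"width": 1.5 * d, "near_half": 0.5 * l * k,
            "far_half": 0.5 * l, "length": 0.5 * l + 0.5 * l * k}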
  • A2: In the autonomous driving traffic scene, the roadside lidar in the roadside perception unit and the lidar mounted on the preset calibration vehicle in the scene both perform the scan, obtaining the point cloud data P1 of the traffic scene, which is the data source for all subsequent links.
  • the driving safety risk field calculation module includes a target detection sub-module and a safety field calculation sub-module.
  • the flow chart is shown in Figure 3.
  • B 1 target detection sub-module.
  • the scene point cloud obtained in A is used for deep learning 3D target detection, and the algorithm is PV-RCNN. That is, input the scene point cloud data to obtain the target detection result.
  • the data source is lidar point cloud data
  • the layout position of lidar determines the size and characteristics of the scene point cloud data, etc.
  • the bounding box is the bounding box X 1 of each target in the scene, and the attributes are position, length, height, width, deflection angle, etc., as shown in Figure 4.
  • the scene acquisition sub-module is to obtain some features and information in the scene in advance before the target detection sub-module, so as to facilitate better target detection and subsequent safety field calculation.
  • This sub-module There are various options for this sub-module as follows:
  • the RGB information of the scene collected by the camera and the corresponding horizontal and vertical boundaries are used to determine the type of the object, so as to assist in identifying the type of the static object.
  • B 3 Security field calculation sub-module.
  • the input to this submodule is the type of static object and the object detection bounding box. Drawing on the field theory methods such as gravity field and magnetic field in physics, everything that may cause risks in the traffic environment is regarded as the source of danger, and it spreads around it.
  • the field strength of the risk field can be understood as the distance from the source of danger.
  • the magnitude of the risk factor at a certain distance The closer the distance to the danger center, the greater the possibility of an accident, and the farther the distance, the lower the accident probability. When the distance approaches 0, it can be considered that there is a contact collision between the target vehicle and the source of danger, that is, a traffic accident has occurred. .
  • ES is the field strength vector of the driving safety risk field
  • ER is the field strength vector of the static risk field
  • EV is the field strength vector of the dynamic risk field.
  • the driving safety risk field model can be expressed as the traffic factor in the actual scene. potential driving risk. Risk is measured by the likelihood of an accident and the severity of the accident.
  • the driving safety risk field is divided into two categories according to the different sources, namely the static risk field source and the dynamic risk field source:
  • Static risk field The field source is a relatively static object in the traffic environment, mainly road markings such as lane dividing lines, and rigid separation facilities such as the central divider.
  • road markings such as lane dividing lines
  • rigid separation facilities such as the central divider.
  • traffic regulations stipulate that vehicles are not allowed to drive or cross in solid lanes. However, if the driver unintentionally leaves the current lane, perceiving the risk of violating the constraints of the lane markings, the driver will steer the vehicle back into the center of the lane. At the same time, the closer the vehicle is to the lane markings, the greater the risk.
  • Driving risk is also related to road conditions, and poor road conditions can lead to high risks.
  • the driving risk of relatively stationary objects is mainly affected by visibility, and the lower the visibility, the higher the driving risk.
  • This type of object has two characteristics: (1) Without considering road construction, this type of object is in a stationary state relative to the target vehicle, because its actual meaning is a dangerous boundary and does not have speed attributes; (2) Except for some rigid separation facilities, This type of object makes the driver intentionally stay away from its location based on legal effects, but even if the driver actually crosses the lane line, it may not necessarily cause a traffic accident immediately.
  • LT a is the risk factor of different lane marking a 1 types, which is determined by traffic laws, generally rigid separation facilities > can not cross the lane separation line > can cross the lane separation line.
  • the parameters of common facilities and lane lines are as follows: guardrail type or green belt type median: 20 to 25; sidewalk stones: 18 to 20; yellow solid line or dotted line: 15 to 18; white solid line: 10 ⁇ 15; white dotted line: 0-5.
  • R a is a constant greater than 0, which indicates the influencing factors of road conditions at (x a , y a ), which are determined by traffic environment factors such as road adhesion coefficient, road slope, road curvature and visibility in the vicinity of object a. Pick a fixed value for a section of road.
  • the data interval generally used is [0.5, 1.5].
  • f d is the influence factor of the distance of different types of lane markings a 1 , which is determined by the object type, object width, lane width and so on.
  • v is the lane width
  • d is the width of the target vehicle j
  • d generally takes the width of the bounding box of the target vehicle j.
  • r aj is the distance vector between the lane mark a 1 and the target vehicle j, in this case (x j , y j ) is the centroid of the target vehicle j, (x a , y a ) means (x j , y j ) ) to be the point where the vertical line intersects the lane marking a1.
  • k 1 is a constant greater than 0, representing the amplification factor of the distance, because the collision risk and the distance between the two objects do not change linearly in general. Generally, the value of k 1 ranges from 0.5 to 1.5.
  • represents the direction of the field strength, but in general practical applications, even if the field strength directions of the two safety risk field sources are opposite to a certain point, the risk size of the point cannot be considered to be reduced accordingly, so it is usually still The scalars are superimposed.
  • the field strength distribution results are shown in Fig. 5(a).
  • Dynamic risk field The field source is a relatively dynamic object in the traffic environment, and the magnitude and direction of its field strength vector are determined by the properties and states of the moving object and road conditions.
  • the dynamic objects here refer to dynamic objects that can actually collide with vehicles and cause heavy losses, mainly vehicles, pedestrians, and roadblocks.
  • This type of object also has two characteristics: (1) Although the above-mentioned objects may be stationary relative to the road, such as roadside parking, roadblock facilities, etc., they still have relative speed with the dynamic target vehicle as the reference frame. 2It is strictly forbidden to collide with such objects, otherwise it will inevitably cause a serious traffic accident.
  • the present invention assumes that the power function form of driving risk is a function of vehicle-target distance.
  • r bj (x j -x b ,y j -y b )
  • r bj is the distance vector between the dynamic risk field source b and the target vehicle j.
  • k 2 , k 3 and G are all constants greater than 0.
  • the meaning of k 2 is the same as that of k 1 above, and k 3 is the hazard correction for different speeds.
  • G is analogous to the electrostatic force constant and is used to describe an object with two units of mass at a unit distance. The size of the risk factor between. Generally, the value range of k 2 is 0.5 to 1.5, the value range of k 3 is 0.05 to 0.2, and the value of G is usually 0.001.
• R_b has the same meaning as R_a above, and its value is likewise taken from the interval [0.5, 1.5].
• T_bj is the type-correction coefficient between the dynamic risk field source b and the target vehicle j.
• The risk coefficients of vehicle-vehicle and vehicle-person collisions differ.
• Common type-correction parameters are as follows (a concrete mapping is sketched below): vehicle-vehicle frontal collision: 2.5 to 3; vehicle-vehicle rear-end collision: 1 to 1.5; person-vehicle collision: 2 to 2.5; vehicle-barrier collision: 1.5 to 2.
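As with the lane-marking table, these ranges can be captured as a lookup; the midpoint values and key names are illustrative assumptions:

    # Midpoints of the T_bj ranges listed above (illustrative choices only)
    TYPE_CORRECTION = {
        "vehicle_vehicle_frontal": 2.75,   # 2.5-3
        "vehicle_vehicle_rear_end": 1.25,  # 1-1.5
        "person_vehicle": 2.25,            # 2-2.5
        "vehicle_barrier": 1.75,           # 1.5-2
    }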
• v_bj is the relative velocity between the dynamic risk field source b and the target vehicle j, i.e., the vector difference between the velocity v_b of field source b and the velocity v_j of target j.
• The angle between the directions of v_bj and r_bj is taken positive in the clockwise direction.
• Because v_j is not known until a target vehicle is specified, the relative velocity enters the field in a semi-finished form, with v_j left to be substituted later; a sketch of the field evaluation follows.
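A minimal Python sketch of the dynamic field evaluation, assuming the commonly used exponential-kernel form E_b = (G · T_bj · R_b / |r_bj|^{k_2}) · exp(k_3 · |v_bj| · cos θ); this form, like the function name, is an assumption standing in for the exact expression given earlier in the description:

    import numpy as np

    def dynamic_field_strength(pos_b, pos_j, v_b, v_j,
                               t_bj=2.0, r_b=1.0, g=0.001, k2=1.0, k3=0.1):
        """Illustrative risk-field strength of dynamic source b at target vehicle j."""
        r_bj = np.asarray(pos_j, float) - np.asarray(pos_b, float)  # distance vector
        dist = np.linalg.norm(r_bj)
        v_bj = np.asarray(v_b, float) - np.asarray(v_j, float)      # relative velocity
        speed = np.linalg.norm(v_bj)
        # angle between v_bj and r_bj; risk grows when b approaches j
        cos_theta = np.dot(v_bj, r_bj) / (speed * dist + 1e-9)
        return (g * t_bj * r_b / dist**k2) * np.exp(k3 * speed * cos_theta)

The semi-finished form corresponds to evaluating everything except v_j, which is substituted once the target vehicle is known (see the data publishing module below).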
• The present invention selects lidar point cloud data as the data source and uses the unobstructed roadside point-cloud scanning results as the calculation carrier.
• n can range from 50 to 100.
• The point-cloud density of each statistical space is checked; if it exceeds the threshold (related to the point-cloud density, with a typical value of 1000), the points in that space are randomly downsampled to cap the density, finally yielding a satisfactory global static point-cloud background, as sketched below.
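A sketch of the density-capping step, assuming cubic statistical spaces (voxels) and NumPy; the voxel size is an assumed free parameter:

    import numpy as np

    def cap_voxel_density(points, voxel_size=0.5, max_pts=1000, seed=0):
        """Randomly downsample every statistical space whose count exceeds max_pts."""
        rng = np.random.default_rng(seed)
        keys = np.floor(points[:, :3] / voxel_size).astype(np.int64)
        _, inverse = np.unique(keys, axis=0, return_inverse=True)
        keep = []
        for v in range(inverse.max() + 1):
            idx = np.flatnonzero(inverse == v)
            if len(idx) > max_pts:
                idx = rng.choice(idx, size=max_pts, replace=False)
            keep.append(idx)
        return points[np.concatenate(keep)]  # global static background candidate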
• The static risk field sources in the static scene, including lane dividing lines, median strips, and roadside areas, are separated, and the plane line equation of each static risk field source is fitted by random sampling (see the sketch below). Generally, more than 100 points should be collected evenly along the visually identified line direction, and the collected points should not deviate too far from the target.
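The random-sampling line fit could follow a RANSAC-style loop over the collected points; RANSAC is one common realization of the random sampling the description calls for, not necessarily the exact procedure used:

    import numpy as np

    def fit_line_random_sampling(pts, iters=200, tol=0.1, seed=0):
        """Fit a 2-D line a*x + b*y + c = 0 to the collected points."""
        rng = np.random.default_rng(seed)
        best, best_inliers = None, -1
        for _ in range(iters):
            p1, p2 = pts[rng.choice(len(pts), size=2, replace=False)]
            a, b = p2[1] - p1[1], p1[0] - p2[0]   # normal of the p1-p2 segment
            norm = np.hypot(a, b)
            if norm == 0.0:
                continue                           # degenerate sample
            c = -(a * p1[0] + b * p1[1])
            dist = np.abs(pts @ np.array([a, b]) + c) / norm
            inliers = int((dist < tol).sum())
            if inliers > best_inliers:
                best, best_inliers = (a / norm, b / norm, c / norm), inliers
        return best                                # normalized (a, b, c)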
• The speed is taken to be the standard speed.
• The standard speed is the historical average vehicle speed in the point-cloud scanning section, and its direction follows the lane in which the target is located.
• The flow chart of the data segmentation module is shown in Figure 8.
• The scene point cloud is divided into the point cloud inside the bounding boxes and the point cloud outside them; an algorithm is designed to test whether each point lies within a bounding box, so that the point cloud can be split into these two classes.
• C3: segmentation based on the safety-risk-field sampling scheme. The scene point cloud data P_1 from the data acquisition submodule, the target detection bounding boxes X_1, and the safety risk field data S_1 from the driving-safety-risk-field calculation module are the inputs of this submodule. First, a conditional test determines whether each point of the scene point cloud P_1 lies within a bounding box X_1 (a sketch of this test follows), yielding the point cloud data P_11 inside the bounding boxes and the point cloud data P_12 outside them. A numerical threshold f_3 on the safety risk field is then set, the point clouds P_11 and P_12 are sampled according to this threshold, and the segmented point cloud data P_2 is obtained.
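A minimal sketch of the inside/outside test, assuming each detection box X_1 is parameterized by a center, a size, and a yaw angle (a common but here assumed parameterization):

    import numpy as np

    def split_by_bounding_box(points, center, size, yaw):
        """Split a scene point cloud P_1 into P_11 (inside) and P_12 (outside)."""
        c, s = np.cos(yaw), np.sin(yaw)
        rot = np.array([[c, s, 0.0],       # rotation by -yaw: world -> box frame
                        [-s, c, 0.0],
                        [0.0, 0.0, 1.0]])
        local = (points[:, :3] - np.asarray(center)) @ rot.T
        inside = np.all(np.abs(local) <= np.asarray(size) / 2.0, axis=1)
        return points[inside], points[~inside]   # P_11, P_12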
• If the dangerous target is a static risk field source, the plane in which the field source lies is taken as the center, and the strips of width d/2 intercepted on its left and right form the dangerous range, where d is the width of the target vehicle.
• If the dangerous target is a dynamic risk field source, a rectangular area centered on the centroid of the dangerous target, of width 1.5d and length (0.5l + 0.5l·k), is intercepted as the dangerous range, where the 0.5l part is the half-length on the side away from the target vehicle and the 0.5l·k part is the half-length on the side close to the target vehicle.
• Here d is the width of the dangerous target.
• l is the length of the dangerous target.
• k is a speed-correction coefficient not less than 1, which depends on the speed of the dangerous target.
• Dangerous ranges are extracted in sequence according to the hazard coefficients of the hazard sources, and overlapping areas of the dangerous ranges are extracted only once; a sketch of the dynamic-range construction follows.
• The total extracted danger range can finally be provided to the target vehicle as its perception-assistance data.
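The dynamic danger range could be constructed as below; the assumption that the long axis is oriented along the line from the dangerous target toward the target vehicle follows from the near/far half-length description but is not stated explicitly:

    import numpy as np

    def dynamic_danger_range(centroid, d, l, k, toward_target):
        """Corner points of the rectangular danger range of a dynamic source."""
        u = np.asarray(toward_target, float)
        u = u / np.linalg.norm(u)          # unit vector toward the target vehicle
        n = np.array([-u[1], u[0]])        # width direction, perpendicular to u
        c = np.asarray(centroid, float)
        near = c + 0.5 * l * k * u         # half-length close to the target vehicle
        far = c - 0.5 * l * u              # half-length away from it
        half_w = 0.75 * d * n              # half of the 1.5*d width
        return np.array([far - half_w, far + half_w, near + half_w, near - half_w])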
• The flow chart of the data publishing module is shown in Figure 9. Based on the data segmentation result, the data is compressed by the roadside sensing unit, and a data transmission channel is then established between the roadside sensing unit and the target vehicle. It is then determined whether the target vehicle is moving:
• D2: the point cloud data P_2 after data segmentation and the semi-finished risk field data S_2 are published, and the target vehicle speed is substituted into the semi-finished risk field data S_2 to obtain the safety risk field data S_1, as sketched below.
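Reusing the illustrative dynamic_field_strength above, the publication step can be pictured as fixing every argument except the target vehicle's velocity; functools.partial here stands in for whatever encoding the transmission channel actually uses:

    from functools import partial

    # Roadside unit: freeze the field-source state -> semi-finished data S_2
    s2 = partial(dynamic_field_strength,
                 pos_b=(12.0, 3.5), pos_j=(0.0, 0.0), v_b=(8.0, 0.0), t_bj=2.5)

    # Target vehicle: substitute its own velocity -> safety risk field data S_1
    s1 = s2(v_j=(15.0, 0.0))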
• The flow chart of the data fusion module is shown in Figure 10.
• The point cloud data P_2 after data segmentation and the point cloud data P_3 scanned by the target vehicle's lidar are fused: a point-cloud coordinate transformation matrix is designed to register the high-risk point clouds of the vehicle end and the roadside, yielding the fused point cloud data P_4 (see the sketch below).
• Data compression is then applied to the fused point cloud data P_4 to obtain the compressed point cloud data P_5.
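A sketch of the registration-and-fusion step, assuming the roadside-to-vehicle transformation matrix (4x4, homogeneous) is known from extrinsic calibration; the function name and matrix source are assumptions:

    import numpy as np

    def fuse_point_clouds(p2_roadside, p3_vehicle, t_rs_to_veh):
        """Register roadside high-risk points into the vehicle frame and merge (P_4)."""
        homo = np.hstack([p2_roadside[:, :3], np.ones((len(p2_roadside), 1))])
        registered = (t_rs_to_veh @ homo.T).T[:, :3]   # apply the 4x4 transform
        return np.vstack([registered, p3_vehicle[:, :3]])  # fused point cloud P_4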
• The flowchart of the method evaluation module, an optional submodule, is shown in Figure 11.
• The method evaluation submodule is described below for reference.
• The target detection result R_1 is obtained with the PV-RCNN deep-learning target detection algorithm.
• The target detection result R_1 is then evaluated.
  • the reference system is shown in Figure 12.
• V denotes the unprocessed original point cloud of the vehicle.
• I denotes the unprocessed original point cloud of the roadside perception unit.
• I_1 denotes the point cloud obtained by the roadside perception unit segmenting the original point cloud with the segmentation variant of the data segmentation method.
• I_2 denotes the point cloud obtained with the sampling variant of the data segmentation method.
• I_1S denotes the point cloud obtained with the safety-field-based segmentation variant of the data segmentation method.
• I_2S denotes the point cloud obtained with the safety-field-based sampling variant of the data segmentation method.


Abstract

The present invention relates to a dynamic segmentation and fusion method for road-lane laser radar (lidar) point clouds based on a driving safety risk field. The method comprises the following steps: (1) propose a driving-safety-risk-field calculation mechanism and quantitatively analyze the risk level that static objects, such as vehicles parked at the roadside, roadblocks and traffic signs, and moving objects, such as moving vehicles, non-motorized vehicles and pedestrians, present with respect to a given position; (2) using this calculation method, with the lidar point cloud data of a roadside sensing unit as the data source, calculate the risk level that other objects within the scanning range pose to a target vehicle, namely an autonomous vehicle, and construct a unified driving-safety-risk-field distribution centered on the target vehicle; (3) use a threshold to filter out the areas presenting high risk to the target vehicle, and segment the corresponding point cloud data out of the original data as supplementary perception information provided to the autonomous vehicle; and (4) process and fuse the point-cloud-level information acquired by the lidar of the roadside sensing unit with the point-cloud-level information acquired by the vehicle-side lidar, and provide a reference evaluation system for the fusion method.
PCT/CN2021/085146 2021-01-01 2021-04-01 Dynamic segmentation and fusion method for road-lane laser radar point clouds based on a driving safety risk field WO2022141910A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB2316614.3A GB2621048A (en) 2021-03-01 2021-04-01 Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN202280026657.8A CN117441197A (zh) 2021-01-01 2022-04-01 Laser radar point cloud dynamic segmentation and fusion method based on a driving safety risk field
PCT/CN2022/084738 WO2022206942A1 (fr) 2021-01-01 2022-04-01 Laser radar point cloud dynamic segmentation and fusion method based on a driving safety risk field

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110000327 2021-01-01
CN202110000327.9 2021-01-01
CN202110228419 2021-03-01
CN202110228419.2 2021-03-01

Publications (1)

Publication Number Publication Date
WO2022141910A1 true WO2022141910A1 (fr) 2022-07-07

Family

ID=82260124

Family Applications (9)

Application Number Title Priority Date Filing Date
PCT/CN2021/085147 WO2022141911A1 (fr) 2021-01-01 2021-04-01 Roadside sensing unit-based method for rapid recognition of dynamic target point clouds and point cloud segmentation
PCT/CN2021/085148 WO2022141912A1 (fr) 2021-01-01 2021-04-01 Vehicle-road collaboration-oriented sensing information fusion representation and target detection method
PCT/CN2021/085150 WO2022141914A1 (fr) 2021-01-01 2021-04-01 Multi-target vehicle detection and re-identification method based on radar and video fusion
PCT/CN2021/085146 WO2022141910A1 (fr) 2021-01-01 2021-04-01 Dynamic segmentation and fusion method for road-lane laser radar point clouds based on a driving safety risk field
PCT/CN2021/085149 WO2022141913A1 (fr) 2021-01-01 2021-04-01 Method for calibrating a roadside millimeter-wave radar using a vehicle-mounted positioning device
PCT/CN2022/084912 WO2022206974A1 (fr) 2021-01-01 2022-04-01 Roadside sensing unit-based method for recognizing static and non-static object point clouds
PCT/CN2022/084925 WO2022206977A1 (fr) 2021-01-01 2022-04-01 Cooperative vehicle-infrastructure-oriented sensing information fusion representation and target detection method
PCT/CN2022/084929 WO2022206978A1 (fr) 2021-01-01 2022-04-01 Method for calibrating a roadside millimeter-wave radar using a vehicle-mounted positioning apparatus
PCT/CN2022/084738 WO2022206942A1 (fr) 2021-01-01 2022-04-01 Laser radar point cloud dynamic segmentation and fusion method based on a driving safety risk field

Family Applications Before (3)

Application Number Title Priority Date Filing Date
PCT/CN2021/085147 WO2022141911A1 (fr) 2021-01-01 2021-04-01 Roadside sensing unit-based method for rapid recognition of dynamic target point clouds and point cloud segmentation
PCT/CN2021/085148 WO2022141912A1 (fr) 2021-01-01 2021-04-01 Vehicle-road collaboration-oriented sensing information fusion representation and target detection method
PCT/CN2021/085150 WO2022141914A1 (fr) 2021-01-01 2021-04-01 Multi-target vehicle detection and re-identification method based on radar and video fusion

Family Applications After (5)

Application Number Title Priority Date Filing Date
PCT/CN2021/085149 WO2022141913A1 (fr) 2021-01-01 2021-04-01 Procédé d'étalonnage d'un radar routier à ondes millimétriques faisant appel à un dispositif de positionnement embarqué
PCT/CN2022/084912 WO2022206974A1 (fr) 2021-01-01 2022-04-01 Procédé de reconnaissance de nuage de points d'objet statique et non statique basé sur une unité de détection de bord de route
PCT/CN2022/084925 WO2022206977A1 (fr) 2021-01-01 2022-04-01 Représentation de fusion d'informations de détection orientée infrastructure de véhicule coopératif et procédé de détection de cible
PCT/CN2022/084929 WO2022206978A1 (fr) 2021-01-01 2022-04-01 Procédé d'étalonnage d'un radar routier à ondes millimétriques faisant appel à un appareil de positionnement monté sur véhicule
PCT/CN2022/084738 WO2022206942A1 (fr) 2021-01-01 2022-04-01 Procédé de segmentation et de fusion dynamiques de nuages de points de radar laser basé sur un champ de risque de sécurité au volant

Country Status (3)

Country Link
CN (5) CN116685873A (fr)
GB (2) GB2618936A (fr)
WO (9) WO2022141911A1 (fr)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114724362B (zh) * 2022-03-23 2022-12-27 中交信息技术国家工程实验室有限公司 Vehicle trajectory data processing method
CN115113157B (zh) * 2022-08-29 2022-11-22 成都瑞达物联科技有限公司 Beam pointing calibration method based on vehicle-road cooperative radar
CN115166721B (zh) * 2022-09-05 2023-04-07 湖南众天云科技有限公司 Method and apparatus for calibrating and fusing radar and GNSS information in roadside sensing devices
CN115272493B (zh) * 2022-09-20 2022-12-27 之江实验室 Abnormal target detection method and apparatus based on superposition of consecutive time-series point clouds
CN115830860B (zh) * 2022-11-17 2023-12-15 西部科学城智能网联汽车创新中心(重庆)有限公司 Traffic accident prediction method and apparatus
CN116189116B (zh) * 2023-04-24 2024-02-23 江西方兴科技股份有限公司 Traffic state perception method and system
CN117471461B (zh) * 2023-12-26 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Roadside radar service apparatus and method for vehicle-mounted driver-assistance systems
CN117452392B (zh) * 2023-12-26 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Radar data processing system and method for vehicle-mounted driver-assistance systems

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105892471A (zh) * 2016-07-01 2016-08-24 北京智行者科技有限公司 Automobile autonomous driving method and apparatus
CN108639059A (zh) * 2018-05-08 2018-10-12 清华大学 Method and apparatus for quantifying driver control behavior based on the principle of least action
CN108932462A (zh) * 2017-05-27 2018-12-04 华为技术有限公司 Driving intention determination method and apparatus
US10281920B2 (en) * 2017-03-07 2019-05-07 nuTonomy Inc. Planning for unknown objects by an autonomous vehicle
CN110850431A (zh) * 2019-11-25 2020-02-28 盟识(上海)科技有限公司 System and method for measuring the deflection angle of a trailer
CN111985322A (zh) * 2020-07-14 2020-11-24 西安理工大学 Lidar-based road environment element perception method

Family Cites Families (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6661370B2 (en) * 2001-12-11 2003-12-09 Fujitsu Ten Limited Radar data processing apparatus and data processing method
US9562971B2 (en) * 2012-11-22 2017-02-07 Geosim Systems Ltd. Point-cloud fusion
KR101655606B1 (ko) * 2014-12-11 2016-09-07 현대자동차주식회사 Multi-object tracking apparatus using lidar and method therefor
TWI597513B (zh) * 2016-06-02 2017-09-01 財團法人工業技術研究院 Positioning system, vehicle-mounted positioning device, and positioning method thereof
WO2018126248A1 (fr) * 2017-01-02 2018-07-05 Okeeffe James Micromirror array for improving image resolution based on feedback
KR102056147B1 (ko) * 2016-12-09 2019-12-17 (주)엠아이테크 Method and apparatus for registering distance data and 3-D scan data for autonomous vehicles
CN106846494A (zh) * 2017-01-16 2017-06-13 青岛海大新星软件咨询有限公司 Automatic singulation algorithm for three-dimensional building models from oblique photography
CN108629231B (zh) * 2017-03-16 2021-01-22 百度在线网络技术(北京)有限公司 Obstacle detection method, apparatus, device, and storage medium
CN107133966B (zh) * 2017-03-30 2020-04-14 浙江大学 Three-dimensional sonar image background segmentation method based on a sample-consensus algorithm
FR3067495B1 (fr) * 2017-06-08 2019-07-05 Renault S.A.S Method and system for identifying at least one moving object
CN109509260B (zh) * 2017-09-14 2023-05-26 阿波罗智能技术(北京)有限公司 Labeling method, device, and readable medium for dynamic obstacle point clouds
CN107609522B (zh) * 2017-09-19 2021-04-13 东华大学 Information-fusion vehicle detection system based on lidar and machine vision
CN108152831B (zh) * 2017-12-06 2020-02-07 中国农业大学 Lidar obstacle recognition method and system
CN108010360A (zh) * 2017-12-27 2018-05-08 中电海康集团有限公司 Autonomous driving environment perception system based on vehicle-road cooperation
CN109188379B (zh) * 2018-06-11 2023-10-13 深圳市保途者科技有限公司 Automatic calibration method for the working angle of driver-assistance radar
KR20210025523A (ko) 2018-07-02 2021-03-09 소니 세미컨덕터 솔루션즈 가부시키가이샤 Information processing device, information processing method, computer program, and mobile device
US10839530B1 (en) * 2018-09-04 2020-11-17 Apple Inc. Moving point detection
CN109297510B (zh) * 2018-09-27 2021-01-01 百度在线网络技术(北京)有限公司 Relative pose calibration method, apparatus, device, and medium
CN111429739A (zh) * 2018-12-20 2020-07-17 阿里巴巴集团控股有限公司 Driving assistance method and system
JP7217577B2 (ja) * 2019-03-20 2023-02-03 フォルシアクラリオン・エレクトロニクス株式会社 Calibration device and calibration method
CN110220529B (zh) * 2019-06-17 2023-05-23 深圳数翔科技有限公司 Positioning method for roadside autonomous vehicles
CN110296713A (zh) * 2019-06-17 2019-10-01 深圳数翔科技有限公司 Roadside autonomous vehicle positioning and navigation system and single- and multi-vehicle positioning and navigation methods
CN110532896B (zh) * 2019-08-06 2022-04-08 北京航空航天大学 Road vehicle detection method based on fusion of roadside millimeter-wave radar and machine vision
CN110443978B (zh) * 2019-08-08 2021-06-18 南京联舜科技有限公司 Fall alarm device and method
CN110458112B (zh) * 2019-08-14 2020-11-20 上海眼控科技股份有限公司 Vehicle detection method and apparatus, computer device, and readable storage medium
CN110850378B (zh) * 2019-11-22 2021-11-19 深圳成谷科技有限公司 Automatic calibration method and apparatus for roadside radar equipment
CN110906939A (zh) * 2019-11-28 2020-03-24 安徽江淮汽车集团股份有限公司 Autonomous driving positioning method and apparatus, electronic device, storage medium, and automobile
CN111121849B (zh) * 2020-01-02 2021-08-20 大陆投资(中国)有限公司 Automatic calibration method for sensor orientation parameters, edge computing unit, and roadside sensing system
CN111999741B (zh) * 2020-01-17 2023-03-14 青岛慧拓智能机器有限公司 Roadside lidar target detection method and apparatus
CN111157965B (zh) * 2020-02-18 2021-11-23 北京理工大学重庆创新中心 Self-calibration method and apparatus for the mounting angle of vehicle-mounted millimeter-wave radar, and storage medium
CN111476822B (zh) * 2020-04-08 2023-04-18 浙江大学 Lidar target detection and motion tracking method based on scene flow
CN111554088B (zh) * 2020-04-13 2022-03-22 重庆邮电大学 Multi-functional V2X intelligent roadside base station system
CN111192295B (zh) * 2020-04-14 2020-07-03 中智行科技有限公司 Target detection and tracking method, device, and computer-readable storage medium
CN111537966B (zh) * 2020-04-28 2022-06-10 东南大学 Array antenna error correction method suitable for the vehicle-mounted millimeter-wave radar field
CN111766608A (zh) * 2020-06-12 2020-10-13 苏州泛像汽车技术有限公司 Lidar-based environment perception system
CN111880191B (zh) * 2020-06-16 2023-03-28 北京大学 Map generation method based on fusion of multi-agent lidar and visual information
CN111880174A (zh) * 2020-07-03 2020-11-03 芜湖雄狮汽车科技有限公司 Roadside service system for supporting autonomous driving control decisions and control method therefor
CN111914664A (zh) * 2020-07-06 2020-11-10 同济大学 Re-identification-based vehicle multi-target detection and trajectory tracking method
CN111862157B (zh) * 2020-07-20 2023-10-10 重庆大学 Multi-vehicle target tracking method fusing machine vision and millimeter-wave radar
CN112019997A (zh) * 2020-08-05 2020-12-01 锐捷网络股份有限公司 Vehicle positioning method and apparatus
CN112509333A (zh) * 2020-10-20 2021-03-16 智慧互通科技股份有限公司 Roadside parking vehicle trajectory recognition method and system based on multi-sensor perception


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024021871A1 (fr) * 2022-07-26 2024-02-01 上海交通大学 Vehicle-road-collaboration-based quality evaluation method for roadside sensing test data
CN115480243A (zh) * 2022-09-05 2022-12-16 江苏中科西北星信息科技有限公司 Multi-millimeter-wave-radar device-edge-cloud fusion computing integration and method of use
CN115480243B (zh) 2022-09-05 2024-02-09 江苏中科西北星信息科技有限公司 Multi-millimeter-wave-radar device-edge-cloud fusion computing integration and method of use
CN115235478A (zh) * 2022-09-23 2022-10-25 武汉理工大学 Intelligent vehicle positioning method and system based on visual tags and laser SLAM
CN115235478B (zh) 2022-09-23 2023-04-07 武汉理工大学 Intelligent vehicle positioning method and system based on visual tags and laser SLAM
CN115966084A (zh) * 2023-03-17 2023-04-14 江西昂然信息技术有限公司 Holographic intersection millimeter-wave radar data processing method, apparatus, and computer device

Also Published As

Publication number Publication date
GB202313215D0 (en) 2023-10-11
WO2022141911A1 (fr) 2022-07-07
CN117441113A (zh) 2024-01-23
WO2022206974A1 (fr) 2022-10-06
WO2022206942A1 (fr) 2022-10-06
WO2022141912A1 (fr) 2022-07-07
GB2620877A (en) 2024-01-24
WO2022206978A1 (fr) 2022-10-06
CN116685873A (zh) 2023-09-01
WO2022206977A1 (fr) 2022-10-06
GB2618936A (en) 2023-11-22
CN117836653A (zh) 2024-04-05
WO2022141914A1 (fr) 2022-07-07
GB202316625D0 (en) 2023-12-13
CN117441197A (zh) 2024-01-23
CN117836667A (zh) 2024-04-05
WO2022141913A1 (fr) 2022-07-07

Similar Documents

Publication Publication Date Title
WO2022141910A1 Dynamic segmentation and fusion method for road-lane laser radar point clouds based on a driving safety risk field
Han et al. Research on road environmental sense method of intelligent vehicle based on tracking check
CN112700470 Target detection and trajectory extraction method based on traffic video streams
CN111874006 Route planning processing method and apparatus
EP4152204A1 Lane line detection method and related apparatus
Zhao et al. On-road vehicle trajectory collection and scene-based lane change analysis: Part i
CN113313154A Integrated multi-sensor fusion intelligent sensing device for autonomous driving
CN112633176 Rail transit obstacle detection method based on deep learning
GB2621048A (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN113705636 Trajectory prediction method and apparatus for autonomous vehicles, and electronic device
CN111880191 Map generation method based on fusion of multi-agent lidar and visual information
CN114821507A Multi-sensor fusion vehicle-road cooperative sensing method for autonomous driving
Yuan et al. Comap: A synthetic dataset for collective multi-agent perception of autonomous driving
CN106446785A Drivable road detection method based on binocular vision
CN113359709A Digital-twin-based unmanned driving motion planning method
Beck et al. Automated vehicle data pipeline for accident reconstruction: New insights from LiDAR, camera, and radar data
CN115019043A Image and point cloud fusion three-dimensional target detection method based on a cross-attention mechanism
CN114882182A Semantic map construction method based on a vehicle-road cooperative sensing system
Tarko et al. Tscan: Stationary lidar for traffic and safety studies—object detection and tracking
Cao et al. Data generation using simulation technology to improve perception mechanism of autonomous vehicles
CN117115690A UAV traffic target detection method and system based on deep learning and shallow feature enhancement
Jung et al. Intelligent Hybrid Fusion Algorithm with Vision Patterns for Generation of Precise Digital Road Maps in Self-driving Vehicles.
Liu et al. Research on security of key algorithms in intelligent driving system
CN117372991A Autonomous driving method and system based on multi-view multi-modal fusion
Shan et al. Vehicle collision risk estimation based on RGB-D camera for urban road

Legal Events

Date Code Title Description
WPC Withdrawal of priority claims after completion of the technical preparations for international publication

Ref document number: 202110000327.9

Country of ref document: CN

Date of ref document: 20230528

Free format text: WITHDRAWN AFTER TECHNICAL PREPARATION FINISHED

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 202316614

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20210401