CN116879918A - Cross-modal vehicle speed measurement method based on vehicle-mounted laser radar - Google Patents

Cross-modal vehicle speed measurement method based on vehicle-mounted laser radar

Info

Publication number
CN116879918A
CN116879918A
Authority
CN
China
Prior art keywords
vehicle
point cloud
cross
depth map
dimensional depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310990378.XA
Other languages
Chinese (zh)
Inventor
韩瑞智
廉扬
韩士元
何光明
季书成
赵晶航
高笑天
陈月辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN202310990378.XA priority Critical patent/CN116879918A/en
Publication of CN116879918A publication Critical patent/CN116879918A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

In the cross-modal vehicle speed measurement method based on a vehicle-mounted laser radar, data are acquired by the vehicle-mounted laser radar and converted into a two-dimensional depth map for vehicle detection and tracking, so that speed measurement between vehicles is achieved in real time or near real time. The method is suitable for vehicles of different types and sizes, is not influenced by factors such as vehicle shape and color, and measures speed without direct contact with the vehicle. Compared with traditional speed measurement methods, it offers non-contact measurement and convenient data acquisition, a wide application range, strong real-time performance and low cost, and has broad application prospects in fields such as traffic management and intelligent vehicles.

Description

Cross-modal vehicle speed measurement method based on vehicle-mounted laser radar
Technical Field
The application relates to the fields of traffic management and intelligent transportation, in particular to a cross-modal vehicle speed measurement method based on a vehicle-mounted laser radar.
Background
Current vehicle speed measurement mainly includes coil-based, radar-based, laser-based and camera-based methods. Coil-based measurement requires coils to be embedded in the lane, which damages the asphalt road surface; installation and maintenance interfere with traffic, the coils are easily affected by freezing and road subsidence, they are easily damaged, maintenance is difficult and costly, and detection accuracy drops sharply when traffic is congested. Radar speed measurement can measure vehicles travelling at long range and high speed with high accuracy, but it requires special equipment and sensors and is therefore expensive. Camera-based speed measurement can record the passing time and position of a vehicle in real time and compute its speed at lower cost, but its accuracy is limited by the field of view and the image resolution.
Compared with these methods, the cross-modal vehicle speed measurement method based on a vehicle-mounted laser radar provided by the application requires no damage to or modification of the road, avoiding the problems of asphalt pavement damage and high maintenance cost. It can provide reliable speed measurement accuracy without additional equipment or sensors. Finally, compared with camera-based speed measurement, the method uses the high-precision distance and angle information of the vehicle-mounted laser radar, overcomes the limitations of field of view and image resolution, and is applicable to vehicles of various shapes and colors, thereby realizing non-contact speed measurement. The method therefore has clear advantages, can be widely applied in fields such as traffic management and intelligent vehicles, and has broad application prospects.
Disclosure of Invention
In view of the problems of current speed measurement methods, the application provides a cross-modal vehicle speed measurement method based on a vehicle-mounted laser radar.
The specific steps of the application are as follows:
step 1: the vehicle-mounted lidar is a sensor mounted on a vehicle for acquiring three-dimensional point cloud data of the surrounding environment of the vehicle. Acquiring a three-dimensional point cloud data set of a multi-frame current traffic road scene through a vehicle-mounted laser radar and preprocessing;
step 2: and converting the point cloud data into a two-dimensional depth map to form a cross-modal data set, and performing network training through YOLOv 8.
Step 3: according to step 2, in the obtained multi-frame continuous two-dimensional depth map, a vehicle target is detected and tracked by utilizing a pre-trained model by utilizing YOLOv8 in combination with a target tracking algorithm.
Step 4: and 3, acquiring centroid coordinates of the target vehicle, and calculating the distance difference and the time difference of the two-dimensional depth map of the adjacent frames to realize the speed measurement of the vehicle.
Further, in step 1, a point cloud data set is obtained through the vehicle-mounted laser radar and processed. An adaptive filtering method is adopted that dynamically adjusts the filtering radius and direction according to the local density and geometric shape of the point cloud data, so as to remove point cloud noise and irrelevant points and thereby improve the accuracy and quality of the point cloud data.
The specific detailed description is as follows:
step 1.1: and voxelized processing is carried out on the point cloud, and the point cloud data is converted into a three-dimensional voxel grid structure, so that the space analysis and the processing are facilitated, and the efficiency and the precision of the point cloud processing are improved aiming at the next filtering operation.
Step 1.2: the direction of filtering is determined by calculating the point cloud normal within its neighborhood for each voxel center point.
Step 1.3: and calculating the filtered point cloud position and attribute value by adopting retrograde weighted average processing on the point cloud data in the neighborhood of each voxel.
Furthermore, according to the specific application scenario and requirements, forming the two-dimensional depth map of the cross-modal dataset in step 2 mainly involves the following operations, carried out by selecting a suitable projection range, projection parameters and the like:
On the one hand, when the resolution of the point cloud data set is low and the point density is insufficient to support fully dense projection, or when the data set is large (containing millions or tens of millions of points) so that the computational complexity of fully dense projection makes real-time processing difficult, sparse projection is needed to improve the generation efficiency and quality of the two-dimensional depth map.
The specific detailed description is as follows:
(1) And selecting point cloud data to be projected through the point cloud characteristics.
(2) And transforming the point cloud data from a laser radar coordinate system to a projection coordinate system through internal parameters and external parameters of the vehicle-mounted laser radar.
(3) And removing part of the point cloud data by adopting a distance-based sparsification processing method for the transformed point cloud data, thereby reducing the size and complexity of the two-dimensional depth map.
(4) And projecting the sparse point cloud data into a two-dimensional depth map to form a cross-modal data set.
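By way of illustration, a minimal Python sketch of the sparse projection in (1)–(4) follows. It assumes a pinhole-style projection model; the extrinsic parameters R and t, the intrinsics fx, fy, cx, cy, the image size and the range-proportional thinning rule are example assumptions, not values prescribed by the application.

```python
import numpy as np

def sparse_depth_projection(points, R, t, fx, fy, cx, cy,
                            img_size=(480, 640), ref_range=50.0):
    """Project lidar points into a sparse two-dimensional depth map.

    points : (N, 3) array in the lidar coordinate system.
    R, t   : assumed extrinsic rotation (3x3) and translation (3,) to the
             projection coordinate system.
    """
    # (2) transform from the lidar frame to the projection frame
    pts = points @ R.T + t

    # (3) distance-based thinning (illustrative rule): nearby returns are the
    # densest, so keep each point with a probability that grows with range
    rng = np.linalg.norm(pts, axis=1)
    keep = np.random.rand(len(pts)) < np.clip(rng / ref_range, 0.1, 1.0)
    pts = pts[keep]

    # (4) project the remaining points into a depth image (0 = no return)
    h, w = img_size
    depth = np.zeros((h, w), dtype=np.float32)
    z = pts[:, 2]
    valid = z > 0.1
    u = (fx * pts[valid, 0] / z[valid] + cx).astype(int)
    v = (fy * pts[valid, 1] / z[valid] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[inside], u[inside]] = z[valid][inside]
    return depth
```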
On the other hand, some scenes require fine detection, where a high-quality and comprehensive two-dimensional depth map is needed to achieve accurate vehicle target detection and tracking, or the two-dimensional depth map of the traffic scene needs to be post-processed, analyzed and applied. In such cases a dense projection method may be used in step 2 to provide a finer and more comprehensive two-dimensional depth map, forming a more accurate cross-modal dataset.
The specific detailed description is as follows:
(1) And (3) normalizing the three-dimensional coordinates of the point cloud preprocessed in the step (1), and only designating a minimum depth value for a corresponding voxel for a plurality of points projected to the same voxel.
(2) The grid is densified through the local minimum pooling operation to ensure visual continuity, and then original empty voxels between sparse points can be effectively filled by reasonable depth values on the premise of ensuring that background voxels are empty, so that denser and smoother spatial representation can be obtained.
(3) The partial pooling operation of the previous step may introduce artifacts on certain three-dimensional surfaces, so shape smoothing and noise filtering are performed using non-parametric gaussian kernels to obtain a more compact and smooth shape.
(4) The depth and the dimension of the network are compressed to perform cross-modal work so as to obtain a projected two-dimensional depth map, and a required cross-modal data set is obtained.
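A minimal sketch of the densification in (1)–(4), assuming the sparse depth map from the previous projection as input (0 marking empty pixels); the window size and Gaussian sigma are illustrative values, not parameters fixed by the application.

```python
import numpy as np
from scipy import ndimage

def densify_depth(sparse_depth, window=5, sigma=1.0):
    """Densify a sparse depth map by local minimum pooling, then smooth it."""
    d = sparse_depth.astype(np.float32)

    # (2) local minimum pooling: fill each empty pixel with the smallest valid
    # depth in its neighborhood; pixels whose whole window is empty stay empty
    with_inf = np.where(d > 0, d, np.inf)
    pooled = ndimage.minimum_filter(with_inf, size=window)
    filled = np.where((d == 0) & np.isfinite(pooled), pooled, d)

    # (3) Gaussian smoothing (normalized convolution) to suppress pooling
    # artifacts while leaving background pixels empty
    valid = (filled > 0).astype(np.float32)
    num = ndimage.gaussian_filter(filled * valid, sigma=sigma)
    den = ndimage.gaussian_filter(valid, sigma=sigma)
    smooth = np.where(den > 1e-6, num / den, 0.0)
    return np.where(valid > 0, smooth, 0.0).astype(np.float32)
```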
Further, in the step 2, after the projection is completed to obtain the required cross-modal data set, the vehicles in the cross-modal data set are marked through marking software, and then the pre-training model is obtained by performing network training by utilizing YOLOv 8.
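As a sketch of this training step using the Ultralytics YOLOv8 API: the dataset file name, model variant and hyper-parameters below are illustrative assumptions, not values specified by the application.

```python
from ultralytics import YOLO

# Start from a YOLOv8 detection checkpoint (nano variant chosen only for illustration)
model = YOLO("yolov8n.pt")

# Train on the annotated two-dimensional depth maps; "depth_vehicles.yaml" is a
# hypothetical dataset description listing image paths and a single "vehicle" class.
model.train(data="depth_vehicles.yaml", epochs=100, imgsz=640, batch=16)

# The resulting best weights serve as the pre-trained model used in step 3.
```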
Further, in step 3, a deep-learning-based vehicle target detection and tracking method is used, namely YOLOv8 combined with a target tracking algorithm.
The specific detailed description is as follows:
step 3.1: and taking the two-dimensional depth map generated by sparse projection or dense projection as input data.
Step 3.2: the pre-trained model is input into the YOLOv8, the input data is subjected to vehicle target detection, and the YOLOv8 algorithm directly predicts the boundary box and the class probability on the whole two-dimensional depth map by using a single neural network, so that the vehicle target detection can be rapidly and efficiently carried out.
Step 3.3: after the vehicle is detected, the vehicle is tracked and updated in real time using a target tracking algorithm.
Further, in step 4, the centroid coordinates of the vehicle are calculated as the vehicle speed measurement target point. The specific implementation method is as follows:
step 4.1: after the vehicle is tracked, the centroid coordinates of the vehicle are calculated as vehicle speed measurement target points through a target detection and tracking algorithm.
Step 4.2: and calculating the relative distance difference of the two-dimensional depth map speed measuring target points of the adjacent frames according to the coordinate positions of the vehicle speed measuring target points. Euclidean distance formulas may be used, namely:
wherein the method comprises the steps ofAnd->Respectively representing the coordinate positions of two vehicle speed measurement target points.
Step 4.3: calculating the time difference between two-dimensional depth maps of adjacent frames according to multi-frame continuous two-dimensional depth maps, namelyWherein->And->Respectively representing the time stamps of two-dimensional depth maps of adjacent frames.
Step 4.4: the speed of the vehicle is calculated from the distance difference and the time difference. The speed calculation formula is:
wherein the method comprises the steps ofIndicating the relative speed of the vehicle.
The absolute speed of the vehicle relative to the ground, $v_{\mathrm{abs}}$, can then be calculated as:
$v_{\mathrm{abs}} = v + v_{\mathrm{lidar}}$
where $v_{\mathrm{lidar}}$ denotes the speed of the vehicle-mounted lidar relative to the ground.
Considering the possible acceleration and deceleration of the vehicle, the method selects a plurality of frames of two-dimensional depth maps for smoothing processing so as to obtain more stable and accurate vehicle speed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application.
FIG. 1 is a flow chart of the cross-modal vehicle speed measurement method based on a vehicle-mounted laser radar of the application.
Fig. 2 is a network architecture diagram of the cross-modal vehicle speed measurement method based on a vehicle-mounted laser radar.
FIG. 3 is a flow chart of the present application for vehicle target detection and tracking in traffic scenarios.
Fig. 4 is a two-dimensional depth map of a road traffic scene according to an embodiment of the present application.
Fig. 5 is a diagram of a network architecture employing an object detection algorithm in accordance with the present application.
FIG. 6 is a block diagram of an algorithm employing target tracking in accordance with the present application.
Detailed Description
The application is described in further detail below with reference to the examples and with reference to the accompanying drawings.
The embodiment of the application discloses a cross-modal vehicle speed measurement method based on a vehicle-mounted laser radar. The general flow is shown in fig. 1, and comprises the following steps:
step 1: the vehicle-mounted lidar is a sensor mounted on a vehicle for acquiring three-dimensional point cloud data of the surrounding environment of the vehicle. And acquiring three-dimensional point cloud data of a plurality of frames of current traffic road scenes by using the vehicle-mounted laser radar, and further carrying out data preprocessing on the acquired data set.
Step 2: and (3) converting the point cloud data preprocessed in the step (1) into a two-dimensional depth map to form a cross-modal data set, and performing network training through YOLOv 8.
Step 3: and in the obtained multi-frame continuous two-dimensional depth map, utilizing YOLOv8 to combine with a target tracking algorithm, and using a pre-trained model to detect and track the vehicle target.
Step 4: and 3, acquiring centroid coordinates of the target vehicle, and calculating the distance difference and the time difference of the two-dimensional depth map of the adjacent frames to realize the speed measurement of the vehicle.
Further, the step of preprocessing in step 1 mainly includes:
step 1.1: and voxelized processing is carried out on the point cloud, and the point cloud data is converted into a three-dimensional voxel grid structure, so that the space analysis and the processing are facilitated, and the efficiency and the precision of the point cloud processing can be improved through the step aiming at the next filtering operation.
For each point in the point cloud, its corresponding voxel coordinates in the voxel grid are calculated. For a point $p = (x, y, z)$ in the point cloud, the corresponding voxel coordinates are:
$(i, j, k) = \left( \left\lfloor \tfrac{x - x_0}{s} \right\rfloor,\ \left\lfloor \tfrac{y - y_0}{s} \right\rfloor,\ \left\lfloor \tfrac{z - z_0}{s} \right\rfloor \right)$
where $s$ is the side length of a voxel and $x_0$, $y_0$ and $z_0$ are respectively the coordinates of the origin of the voxel coordinate system.
Assigning the point $p$ to voxel $(i, j, k)$ can be realized by adding 1 to the number of points in that voxel:
$N_{ijk} \leftarrow N_{ijk} + 1$
where $N_{ijk}$ is the number of points within voxel $(i, j, k)$.
Through the formula, the point cloud data can be converted into a voxel grid structure, and subsequent processing and analysis are convenient.
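A minimal numpy sketch of the voxelization formulas above; the voxel side length and grid origin are user-chosen example values.

```python
import numpy as np

def voxelize(points, s=0.2, origin=(0.0, 0.0, 0.0)):
    """Map each point to integer voxel coordinates (i, j, k) and count points.

    points : (N, 3) array; s : voxel side length; origin : (x0, y0, z0).
    Returns per-point voxel indices and a dict of point counts N_ijk.
    """
    idx = np.floor((points - np.asarray(origin)) / s).astype(np.int64)
    counts = {}
    for row in idx:
        key = (int(row[0]), int(row[1]), int(row[2]))
        counts[key] = counts.get(key, 0) + 1  # N_ijk += 1
    return idx, counts
```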
Step 1.2: the direction of filtering is determined by calculating the point cloud normal within its neighborhood for each voxel center point.
For each voxel, all points in the neighborhood of that voxel are found, their average normal vector is calculated and normalized to obtain a unit vector $\hat{n}$, and the filtering direction $d$ is then determined from $\hat{n}$.
The specific implementation formula is as follows:
for a certain voxelIt is assumed that there is one +.>Points, the normal vector of each point is +.>The average normal vector for that voxel is:
the average normal vector is normalized to be the same,obtaining a unit vector
Selection ofIs taken as the filtering direction +.>The following formula may be used:
wherein, the liquid crystal display device comprises a liquid crystal display device,for the filtering direction +.>And->Respectively unit vector->At->Shaft and->Components on the axis.
Through the formula, the filtering direction can be determined according to the normal direction of the point cloud in the neighborhood, so that the purpose of smoothing the point cloud data is achieved.
Step 1.3: for each voxel, a retrograde weighted average process is used to calculate a filtered point cloud location and attribute value from the point cloud data within its neighborhood.
The specific implementation formula is as follows:
Assume that the neighborhood of a voxel contains $m$ points, the $i$-th point having coordinates $p_i$, attribute value $a_i$ and weight $w_i$. The filtered position $\hat{p}$ and attribute value $\hat{a}$ of the voxel are calculated as:
$\hat{p} = \dfrac{\sum_{i=1}^{m} w_i\, p_i}{\sum_{i=1}^{m} w_i}, \qquad \hat{a} = \dfrac{\sum_{i=1}^{m} w_i\, a_i}{\sum_{i=1}^{m} w_i}$
where $\hat{p}$ is the filtered position, $\hat{a}$ is the filtered attribute value, $w_i$, $p_i$ and $a_i$ are respectively the weight, position and attribute value of the $i$-th point, and $\sum$ denotes summation.
This process may enable smoothing of the point cloud data to remove noise and reduce uncertainty of the data.
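A minimal sketch of the weighted averaging in step 1.3 for one voxel neighborhood; inverse-distance weights relative to the voxel centre are an illustrative choice, since the application does not fix a particular weighting scheme.

```python
import numpy as np

def filter_voxel(points, attrs, center, eps=1e-6):
    """Weighted average of the points (and attribute values) in one voxel neighborhood.

    points : (m, 3) positions p_i; attrs : (m,) attribute values a_i;
    center : voxel centre used for the assumed inverse-distance weights w_i.
    """
    d = np.linalg.norm(points - np.asarray(center), axis=1)
    w = 1.0 / (d + eps)                         # w_i (illustrative choice)
    w = w / w.sum()
    p_hat = (w[:, None] * points).sum(axis=0)   # filtered position
    a_hat = float((w * attrs).sum())            # filtered attribute value
    return p_hat, a_hat
```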
Further, the generation of the two-dimensional depth map for projection in step 2 can be described in detail as:
On the one hand, when the resolution of the point cloud data set is low, the point density is insufficient to support fully dense projection, or the data set is large, sparse projection is adopted to improve the generation efficiency and quality of the two-dimensional depth map, yielding the required cross-modal data set.
On the other hand, in scenes that require fine detection, a high-quality and comprehensive two-dimensional depth map is needed to achieve accurate vehicle target detection and tracking, or the two-dimensional depth map of the traffic scene needs to be post-processed, analyzed and applied; in such cases dense projection may be employed to provide a cross-modal dataset of finer and more comprehensive two-dimensional depth maps.
The point cloud data are transformed from the lidar coordinate system to the projection coordinate system according to:
$P_{\mathrm{proj}} = R \cdot P_{\mathrm{lidar}} + T$
where $P_{\mathrm{lidar}}$ denotes the point cloud coordinates in the lidar coordinate system, $P_{\mathrm{proj}}$ denotes the point cloud coordinates in the projection coordinate system, and $R$ and $T$ are respectively the rotation matrix and the translation vector from the lidar coordinate system to the projection coordinate system.
Specifically, the rotation matrix $R$ can be calculated as:
$R = R_z \cdot R_y \cdot R_x$
where $R_x$, $R_y$ and $R_z$ are respectively the rotation matrices about the $x$, $y$ and $z$ axes, which can be determined from the mounting attitude and alignment of the lidar.
The translation vector $T$ can be calculated from the translation relation between the lidar and the projection coordinate system; in actual measurement it needs to be determined by calibration or similar methods.
Through the formula, the point cloud data under the laser radar coordinate system can be transformed to the projection coordinate system so as to carry out subsequent processing and analysis.
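A sketch of the coordinate transform above, assuming the lidar mounting attitude is given as roll/pitch/yaw Euler angles; the Z·Y·X composition order is an assumption for illustration.

```python
import numpy as np

def euler_to_rotation(roll, pitch, yaw):
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in radians (order assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def lidar_to_projection(points, R, T):
    """Apply P_proj = R @ P_lidar + T to every point (points: (N, 3))."""
    return points @ R.T + np.asarray(T)
```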
Further, the YOLOv8 algorithm flow involved in steps 2 and 3 is shown in fig. 3, and includes the following steps:
step 3.1: and processing a cross-modal data set formed by the two-dimensional depth map, such as adjusting the size of the two-dimensional depth map, normalizing, cutting and the like, and taking the processed two-dimensional depth map as input data.
Firstly, the two-dimensional depth map is reduced to a specified size according to specific requirements so as to adapt to the input size of a vehicle target detection model.
And secondly, normalizing the two-dimensional depth map, so that the stability and convergence rate of the model can be improved.
And finally, cutting the two-dimensional depth map according to the requirements of the vehicle target detection task, and extracting the region of interest.
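A preprocessing sketch with OpenCV covering the resizing, normalization and cropping just described; the target size, maximum depth used for normalization and the crop window are example assumptions.

```python
import cv2
import numpy as np

def preprocess_depth_map(depth, target=(640, 640), crop=None, max_depth=80.0):
    """Crop a region of interest, resize to the detector input size, and normalize."""
    if crop is not None:                           # crop = (x, y, w, h), optional ROI
        x, y, w, h = crop
        depth = depth[y:y + h, x:x + w]
    depth = cv2.resize(depth, target, interpolation=cv2.INTER_NEAREST)
    depth = np.clip(depth / max_depth, 0.0, 1.0)   # normalize depths to [0, 1]
    return depth.astype(np.float32)
```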
Step 3.2: network training is performed through YOLOv8 with the pre-processed cross-modality dataset.
The vehicles are labeled on the two-dimensional depth maps (shown in fig. 4) generated from the point cloud, and the labeled vehicle data set is used as training data.
Step 3.3: vehicle target detection is performed by inputting a pre-trained model into YOLOv 8.
As shown in FIG. 4, the Yolov8 backbone network adopts CSPDarkNet, the bottleneck part adopts a C2f structure with richer gradient flow, different channel numbers are adjusted for different scale models, and the model performance is greatly improved.
On the one hand, the core of the algorithm is anchor-based: a convolutional neural network is used to predict the target box and the target class, where the anchors are a set of predefined target boxes, and several anchors are used in each cell to accommodate vehicle targets of different sizes and shapes.
On the other hand, a dynamic task-aligned assignment strategy is adopted:
$t = s^{\alpha} \cdot u^{\beta}$
where $s$ is the predicted score corresponding to the labeled category and $u$ is the IoU between the predicted box and the ground-truth box; multiplying the two values measures the degree of alignment, and $\alpha$ and $\beta$ are weighting hyper-parameters. The metric $t$ couples the classification score and the IoU, and optimizing it guides the network to dynamically focus on high-quality anchors.
Step 3.4: the vehicle is tracked and updated in real time using a target tracking algorithm after the vehicle is detected.
Because YOLOv8 alone does not perform target tracking, real-time tracking and state updating of targets are realized by combining it with a deep-learning-based target tracking algorithm.
The structure of the target tracking algorithm based on deep learning is shown in fig. 6, and the algorithm flow is specifically described in detail as follows:
(1) Prediction: the vehicle tracks generated in the previous iteration are predicted forward by Kalman filtering, and the mean and covariance for the current round are calculated; the track states (confirmed or unconfirmed) remain unchanged.
(2) First matching: the vehicle tracks from (1) and the detections of the current round's target detector are fed into cascade matching together, producing results in three states: unmatched vehicle tracks, unmatched detections, and matched vehicle tracks.
(3) Second matching: the unmatched detections from (2) and the unconfirmed vehicle tracks from (1) are matched again using IoU, yielding more reliable unmatched vehicle tracks, unmatched detections, and matched vehicle tracks.
(4) Handling failed objects: unmatched vehicle tracks that are still unconfirmed, and confirmed tracks whose age exceeds the threshold, are marked for deletion.
(5) Output and preparation of data for the next round, merging vehicle tracks from the following three sources: the matched vehicle tracks from (2) and (3), whose Kalman filters are updated and whose age is incremented by 1; new vehicle tracks created from the unmatched detections in (3); and the confirmed vehicle tracks from (4) that are not over age.
The vehicle tracks from these three sources are taken together as the output of the current round and also serve as the input for the next iteration, which continues from (1).
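A minimal sketch of the second, IoU-based matching stage only, using the Hungarian assignment from SciPy; the IoU threshold is an assumed value, and the cascade matching, Kalman prediction and track-management logic described above are not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def iou_match(track_boxes, det_boxes, iou_thresh=0.3):
    """Associate remaining tracks with remaining detections by maximum IoU."""
    if len(track_boxes) == 0 or len(det_boxes) == 0:
        return [], list(range(len(track_boxes))), list(range(len(det_boxes)))
    cost = np.array([[1.0 - iou(tb, db) for db in det_boxes] for tb in track_boxes])
    rows, cols = linear_sum_assignment(cost)
    matches = []
    um_tracks, um_dets = set(range(len(track_boxes))), set(range(len(det_boxes)))
    for r, c in zip(rows, cols):
        if 1.0 - cost[r, c] >= iou_thresh:          # accept only sufficiently overlapping pairs
            matches.append((r, c))
            um_tracks.discard(r)
            um_dets.discard(c)
    return matches, sorted(um_tracks), sorted(um_dets)
```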
Step 3.5: and finally outputting a detection tracking result of the vehicle target.
Further, the specific implementation method for selecting the vehicle speed measurement target point in the step 4 is as follows:
step 4.1: after the vehicle is tracked, the centroid coordinates of the vehicle are calculated as vehicle speed measurement target points through a target detection and tracking algorithm.
Step 4.2: and calculating the relative distance difference of the speed measurement target points of the two-dimensional depth maps of the adjacent frames according to the coordinate positions of the speed measurement target points of the vehicles. Euclidean distance formulas may be used, namely:
wherein the method comprises the steps ofAnd->Respectively representing the coordinate positions of the two speed measurement target points.
Step 4.3: calculating the time difference between two-dimensional depth maps of adjacent frames according to multi-frame continuous two-dimensional depth maps, namelyWherein->And->Respectively representing the time stamps of two-dimensional depth maps of adjacent frames.
Step 4.4: the speed of the vehicle is calculated from the distance difference and the time difference. The speed calculation formula is:
wherein the method comprises the steps ofIndicating the relative speed of the vehicle.
The absolute speed of the vehicle relative to the ground, $v_{\mathrm{abs}}$, can then be calculated as:
$v_{\mathrm{abs}} = v + v_{\mathrm{lidar}}$
where $v_{\mathrm{lidar}}$ denotes the speed of the vehicle-mounted lidar relative to the ground.
Considering the possible acceleration and deceleration of the vehicle, the method selects a plurality of frames of two-dimensional depth maps for smoothing processing so as to obtain more stable and accurate vehicle speed.
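A sketch of steps 4.2–4.4, assuming the tracked centroid positions have already been converted to metric coordinates and that the ego (lidar) vehicle speed is available per frame, e.g. from the vehicle's own odometry; the smoothing window and the simple additive ego-speed compensation follow the formulas above and are illustrative.

```python
import numpy as np

def vehicle_speed(centroids, timestamps, ego_speeds, window=5):
    """Estimate relative and absolute speed of a tracked vehicle.

    centroids  : list of (x, y) centroid positions in metres, one per frame.
    timestamps : list of frame timestamps in seconds.
    ego_speeds : ego (lidar) vehicle speed per frame in m/s.
    window     : number of recent frame pairs averaged to damp acceleration noise.
    """
    rel_speeds = []
    for i in range(1, len(centroids)):
        (x1, y1), (x2, y2) = centroids[i - 1], centroids[i]
        d = np.hypot(x2 - x1, y2 - y1)              # Euclidean distance difference
        dt = timestamps[i] - timestamps[i - 1]      # time difference of adjacent frames
        rel_speeds.append(d / dt)                   # relative speed v = d / dt
    v_rel = float(np.mean(rel_speeds[-window:]))    # multi-frame smoothing
    v_abs = v_rel + float(np.mean(ego_speeds[-window:]))  # absolute speed (assumed sign convention)
    return v_rel, v_abs
```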
The specific flow of the related algorithm of the method and system and the solutions considered with respect to the projection problem have been described above with respect to specific embodiments. Not all of the elements of the above general description may be required, certain data processing details or portions of the apparatus may not be required, and one or more further included elements may be performed in addition to those described. Furthermore, the above-disclosed embodiments are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter.

Claims (5)

1. The cross-modal vehicle speed measurement method based on the vehicle-mounted laser radar is characterized by comprising the following steps of:
step 1: acquiring a three-dimensional point cloud data set of a multi-frame current traffic road scene through a vehicle-mounted laser radar and preprocessing;
step 2: converting the point cloud data into a two-dimensional depth map to form a cross-modal data set, and performing network training through YOLOv 8;
step 3: according to the step 2, in the obtained multi-frame continuous two-dimensional depth map, utilizing YOLOv8 to combine with a target tracking algorithm, and using a pre-trained model to detect and track a vehicle target;
step 4: and 3, acquiring centroid coordinates of the target vehicle, and calculating the distance difference and the time difference of the two-dimensional depth map of the adjacent frames to realize the speed measurement of the vehicle.
2. The method for measuring speed of a cross-modal vehicle based on a vehicle-mounted laser radar according to claim 1, wherein the two-dimensional depth map of the cross-modal data set formed in the step 2 is mainly projected by selecting a proper projection range or a proper projection parameter according to specific application scenes and requirements.
3. The method for measuring speed of a cross-modal vehicle based on a vehicle-mounted laser radar as claimed in claim 1, wherein, when the resolution of the point cloud data set is low and the point density is insufficient to support fully dense projection, or when the data set is large (containing millions or tens of millions of points) so that the computational complexity of fully dense projection makes real-time processing difficult, sparse projection is needed to improve the generation efficiency and quality of the two-dimensional depth map, specifically as follows:
(1) Selecting point cloud data to be projected through the point cloud characteristics;
(2) Transforming the point cloud data from a laser radar coordinate system to a projection coordinate system through internal parameters and external parameters of the vehicle-mounted laser radar;
(3) Removing part of the point cloud data by adopting a distance-based sparsification processing method for the transformed point cloud data, thereby reducing the size and complexity of the two-dimensional depth map;
(4) And projecting the sparse point cloud data into a two-dimensional depth map to form a cross-modal data set.
4. The method for cross-modal vehicle speed measurement based on vehicle-mounted lidar as claimed in claim 1, wherein, in scenes requiring fine detection where a high-quality and comprehensive two-dimensional depth map must be provided to achieve accurate target detection and tracking, or when the two-dimensional depth map of the traffic scene needs to be post-processed, analyzed and applied, a dense projection method can be adopted in step 2 to provide a finer and more comprehensive two-dimensional depth map, forming a more accurate cross-modal data set, specifically as follows:
(1) Normalizing the three-dimensional coordinates of the point cloud preprocessed in the step 1, and only designating a minimum depth value for a corresponding voxel for a plurality of points projected to the same voxel;
(2) The grid is densified through the local minimum pooling operation to ensure visual continuity, so that original empty voxels between sparse points can be effectively filled with reasonable depth values on the premise of ensuring that background voxels are empty, and therefore denser and smoother spatial representation can be obtained;
(3) The partial pooling operation of the previous step may introduce artifacts on certain three-dimensional surfaces, so shape smoothing and noise filtering are performed with non-parametric gaussian kernels to obtain a more compact and smooth shape;
(4) The depth and the dimension of the network are compressed to perform cross-modal work so as to obtain a projected two-dimensional depth map, and a required cross-modal data set is obtained.
5. The method for measuring the speed of the cross-modal vehicle based on the vehicle-mounted laser radar, which is characterized in that the two-dimensional depth map forming the cross-modal data set in the step 2 is marked, and network training is carried out through YOLOv8 to obtain a pre-training model.
CN202310990378.XA 2023-08-08 2023-08-08 Cross-modal vehicle speed measurement method based on vehicle-mounted laser radar Pending CN116879918A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310990378.XA CN116879918A (en) 2023-08-08 2023-08-08 Cross-modal vehicle speed measurement method based on vehicle-mounted laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310990378.XA CN116879918A (en) 2023-08-08 2023-08-08 Cross-modal vehicle speed measurement method based on vehicle-mounted laser radar

Publications (1)

Publication Number Publication Date
CN116879918A true CN116879918A (en) 2023-10-13

Family

ID=88264444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310990378.XA Pending CN116879918A (en) 2023-08-08 2023-08-08 Cross-modal vehicle speed measurement method based on vehicle-mounted laser radar

Country Status (1)

Country Link
CN (1) CN116879918A (en)

Similar Documents

Publication Publication Date Title
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN110032949B (en) Target detection and positioning method based on lightweight convolutional neural network
CN103424112B (en) A kind of motion carrier vision navigation method auxiliary based on laser plane
CN110379168B (en) Traffic vehicle information acquisition method based on Mask R-CNN
CN111340855A (en) Road moving target detection method based on track prediction
CN114359181B (en) Intelligent traffic target fusion detection method and system based on image and point cloud
CN112001958A (en) Virtual point cloud three-dimensional target detection method based on supervised monocular depth estimation
Börcs et al. Fast 3-D urban object detection on streaming point clouds
CN112070756B (en) Three-dimensional road surface disease measuring method based on unmanned aerial vehicle oblique photography
CN111144213A (en) Object detection method and related equipment
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN114565616A (en) Unstructured road state parameter estimation method and system
CN115128628A (en) Road grid map construction method based on laser SLAM and monocular vision
CN112379393A (en) Train collision early warning method and device
CN106709432B (en) Human head detection counting method based on binocular stereo vision
CN115100741A (en) Point cloud pedestrian distance risk detection method, system, equipment and medium
CN113721254A (en) Vehicle positioning method based on road fingerprint space incidence matrix
CN111091077B (en) Vehicle speed detection method based on image correlation and template matching
CN111353481A (en) Road obstacle identification method based on laser point cloud and video image
CN116879918A (en) Cross-modal vehicle speed measurement method based on vehicle-mounted laser radar
CN115471526A (en) Automatic driving target detection and tracking method based on multi-source heterogeneous information fusion
Wang et al. A 64-line Lidar-based road obstacle sensing algorithm for intelligent vehicles
Qian et al. 3D Vehicle Detection Enhancement Using Tracking Feedback in Sparse Point Clouds Environments
Dong et al. Semantic Lidar Odometry and Mapping for Mobile Robots Using RangeNet++

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination