CN113345237A - Lane-changing identification and prediction method, system, equipment and storage medium for extracting vehicle track by using roadside laser radar data - Google Patents


Info

Publication number
CN113345237A
CN113345237A
Authority
CN
China
Prior art keywords
lane
vehicle
point cloud
lanes
point
Prior art date
Legal status
Pending
Application number
CN202110726268.3A
Other languages
Chinese (zh)
Inventor
吴建清
张营超
宋修广
张涵
侯福金
李利平
王凯
马兆有
杨梓梁
李辉
吕斌
冉斌
霍光
Current Assignee
Shandong High Speed Construction Management Group Co ltd
Shandong University
Original Assignee
Shandong High Speed Construction Management Group Co ltd
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong High Speed Construction Management Group Co ltd, Shandong University filed Critical Shandong High Speed Construction Management Group Co ltd
Priority to CN202110726268.3A
Publication of CN113345237A

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125 Traffic data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/40 Business processes related to the transportation industry
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors


Abstract

The invention relates to a lane-change identification and prediction method, system, equipment and storage medium that extract vehicle trajectories from roadside laser radar (lidar) data. The method comprises the following steps: (1) performing background filtering on the point cloud images scanned by the lidar; (2) clustering the point cloud data of in-transit targets; (3) classifying vehicles and pedestrians with an artificial neural network; (4) tracking the same target across successive frames; (5) predicting the lane-change behavior of the vehicle according to the thresholds of the selected evaluation indexes. The invention can predict the lane-change or turning behavior of a vehicle in real time with high accuracy, and the lane-change information can be used to develop a vehicle lane-change early-warning system, contributing to the development of intelligent transportation engineering.

Description

Lane-changing identification and prediction method, system, equipment and storage medium for extracting vehicle track by using roadside laser radar data
Technical Field
The invention belongs to the technical field of traffic engineering, and particularly relates to a lane-change identification and prediction method, system, equipment and storage medium based on vehicle trajectories extracted from roadside lidar data.
Background
Connected vehicle (CV) technology has become an important component of future intelligent transportation systems. In an ideal CV network, all road users can communicate with each other through various wireless communication techniques. Internet-of-vehicles technology has many advantages, including reduced congestion, improved traffic safety and reduced fuel consumption. Lane changing is considered one of the most challenging driving actions, and lane-change recognition and prediction are therefore important components of CV network functionality; identifying and predicting lane changes is of great significance for avoiding collisions. In order to provide effective warning information to the driver, lane-change behavior needs to be predicted in real time. Theoretically, any lane-change action in a CV environment can be detected, because all vehicles interact with the roadside infrastructure and network through onboard sensors, sharing their real-time position, speed and direction of travel. However, it takes time to install connected-vehicle devices on all vehicles, which means that connected and non-connected vehicles will coexist for decades or even longer.
How to identify and predict lane-change behavior during the transition from traditional traffic to fully connected traffic is a challenge. Roadside infrastructure builds a bridge between road users who are not connected to the network and those who are.
At present, a great number of traffic sensors are installed along roads, but they are traditional traffic sensors, not what a CV network requires. Conventional traffic sensors typically include video detectors, Bluetooth sensors, radar sensors, etc., and primarily provide macroscopic traffic data such as traffic flow, average speed and occupancy. A CV network, by contrast, typically requires high-resolution microscopic traffic data, i.e., per-second (or higher-frequency) data for all road users. However, since there are only a few connected vehicles, high-resolution microscopic sensors can usually only collect the sample data provided by these vehicles, while CV technology needs data for all road users. There is therefore a need for an improved roadside traffic sensor.
Some researchers have started to deploy lidar at the roadside. Lidar is a radar system that detects characteristic quantities such as the position and speed of a target with an emitted laser beam. Its working principle is to transmit a detection signal (a laser beam) toward a target, compare the received signal reflected from the target (the target echo) with the transmitted signal, and, after appropriate processing, obtain relevant information about the target, such as distance, direction, height, speed, attitude and even shape, so as to detect, track and identify the target. The laser converts an electric pulse into an optical pulse and emits it; the optical receiver restores the optical pulse reflected from the target into an electric pulse and sends it to the display. Lidar can detect targets through 360° with high accuracy. By deploying lidar along the road or at intersections, real-time information about all road users (both connected and non-connected vehicles) can be acquired and distributed through wireless technology. Roadside lidar therefore fills the gap that lane-change identification and prediction cannot otherwise be carried out during the transition period from non-connected to fully connected vehicles.
Existing lane-change identification and prediction methods based on vehicle trajectories mostly rely on camera sensors. However, camera-based vehicle lane-change prediction achieves good results only in the daytime or when light is sufficient.
Disclosure of Invention
Given that connected vehicles are not yet widespread, a CV network cannot obtain real-time traffic information about non-connected vehicles. The invention therefore provides a lane-change identification and prediction method that extracts vehicle trajectories from roadside lidar data.
The invention also provides a lane change identification and prediction system for extracting the vehicle track based on the roadside laser radar.
The invention additionally provides computer equipment and a storage medium.
Interpretation of terms:
1. Lidar: a radar system that detects characteristic quantities such as the position and speed of a target by emitting laser beams. The lidar adopted in the invention is the VLP-16 model, which rotates at 5-20 revolutions per second, generates 600,000 3D points per second, and covers a 360° horizontal field of view and a 30° vertical field of view (±15° above and below horizontal).
2. Point cloud image: the set of points obtained when the lidar samples its surroundings according to the laser measurement principle. The points in this set are called the point cloud and contain three-dimensional coordinates (XYZ coordinates) and laser reflection intensity.
3. Background point cloud: in a point cloud image, the points belonging to the surrounding environment are called the background point cloud, and the points belonging to vehicles or pedestrians are called the target point cloud.
4. Nearest lane boundary: since lane lines are not fitted with high-order curve equations, the nearest lane boundary is determined at the moment tracking of a vehicle starts and does not change during the tracking time. This means that the "nearest lane boundary" is not always the physically nearest lane, because the vehicle may change lanes or turn at an intersection.
5. Background filtering technology: a point cloud image scanned by the lidar contains both the road surroundings (such as trees, signal lamps and building facilities) and in-transit targets; in order to study the motion law of in-transit targets separately, an algorithm is required to filter out the point cloud data of the surroundings.
6. The DBSCAN algorithm is a spatial clustering algorithm based on density. The algorithm divides the area with sufficient density into clusters and finds arbitrarily shaped clusters in a spatial database with noise, which defines clusters as the largest set of density-connected points. It should be noted that the algorithm takes into account the spatial point cloud distribution, including the length, width and height of the clusters.
7. Point cloud squaring: and in a two-dimensional point cloud plane image scanned by the laser radar, dividing the two-dimensional point cloud plane image into square areas according to the point cloud density.
8. DNLB: the distance from a key point of the vehicle to the nearest lane boundary, as shown in FIG. 3.
9. CDNLB: the amount of change in DNLB.
10. CCDNLB: the cumulative amount of change in DNLB.
11. Aggregated frames: the fusion of background point cloud data from different frames.
12. Weight: the frequency count of each number in a weighted average, also called the weighting value or weighting factor.
13. Input layer: the layer responsible for receiving information from outside the network.
14. Output layer: the last layer of the network, responsible for outputting the network's computation result.
15. Hidden layer: the layers other than the input layer and the output layer are called hidden layers. A hidden layer neither directly receives external signals nor directly sends signals to the outside.
16. Front key point: the point in a vehicle target's point cloud image that is closest to the lidar.
17. Rear key point: the point in a vehicle target's point cloud image that is farthest from the lidar.
18. Core point: in the DBSCAN algorithm, a point is a core point if its ε-neighborhood (a neighborhood of radius ε) contains at least M points (the threshold).
19. Noise point: a point whose ε-neighborhood contains fewer than M points.
20. Directly density-reachable: a point is directly density-reachable from a core point if it lies within that core point's ε-neighborhood.
21. Density-reachable: a point is density-reachable from a core point if the two are connected by a chain of points in which each point is directly density-reachable from the previous one.
22. Density-connected: two points are density-connected if both are density-reachable from the same point.
The technical scheme of the invention is as follows:
a lane change identification and prediction method for extracting vehicle tracks by using roadside laser radar data comprises the following steps:
(1) background filtering and target clustering: background filtering is carried out on the point cloud picture obtained by the roadside laser radar by using a background filtering technology, and point clouds in the point cloud picture after the background filtering processing are clustered by using a DBSCAN algorithm to obtain point cloud clusters;
(2) object classification: distinguishing pedestrians from vehicles, removing the point cloud of the pedestrians, and only keeping the point cloud of the vehicles;
(3) target tracking: generating lane boundary lines and matching the in-transit vehicles to corresponding lanes;
(4) predicting lane-change behavior: predicting the lane-change behavior of the vehicle in real time using the preset CDNLB and CCDNLB thresholds.
Preferably, according to the present invention, in step (1), background filtering means: subtracting the background point cloud, collected when no in-transit target is present, from the point cloud image collected when an in-transit target passes, so as to obtain the target's point cloud;
preferably, in step (1), the clustering of the targets includes the following steps:
A. scanning a point cloud picture obtained after the whole background filtering processing to find any core point, and expanding the core point, namely searching all density-connected data points starting from the core point;
B. traversing all core points in the epsilon neighborhood of the core points, and searching data points connected with all densities of the core points; boundary nodes of the clustered clusters are all non-core points;
C. rescanning the data set, searching for core points which are not clustered, and repeating the steps A to B until no new core points exist in the point cloud picture;
D. and filtering noise points, wherein the noise points refer to data points which are not contained in the point cloud cluster.
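As a minimal illustration of steps A-D, the following sketch clusters background-filtered points, with scikit-learn's DBSCAN standing in for the patent's own implementation; the ε and M values here are illustrative assumptions, not from the patent:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_targets(points: np.ndarray, eps: float = 0.8, min_pts: int = 10) -> dict:
    """Cluster background-filtered lidar points (an N x 3 array of XYZ).

    Returns a mapping from cluster label to point cloud cluster; DBSCAN
    labels noise points -1, and they are discarded as in step D."""
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points)
    return {lab: points[labels == lab] for lab in set(labels) if lab != -1}
```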
Preferably, in step (2), the object classification includes the following steps:
2.1: constructing an artificial neural network for vehicle and pedestrian classification;
2.2: making a data set, wherein the data set is divided into a training set, a verification set and a test set;
2.3: training an artificial neural network for vehicle and pedestrian classification through a training set, and verifying the artificial neural network for vehicle and pedestrian classification through a verification set to obtain the trained artificial neural network for vehicle and pedestrian classification;
2.4: inputting the test set into a trained artificial neural network for vehicle and pedestrian classification to obtain a classification result, wherein the classification result comprises pedestrians and vehicles;
2.5: and filtering the point cloud cluster of which the classification result is the pedestrian.
Further preferably, the artificial neural network for vehicle and pedestrian classification comprises an input layer, a hidden layer and an output layer;
the input layer is used for inputting the point cloud picture into an artificial neural network for vehicle and pedestrian classification;
the hidden layer comprises multiple neural network layers that perform convolution operations on the input point cloud image and extract point cloud information features, including the number of points, length, width, height and density of each point cloud cluster; the final overall features are obtained through repeated convolutions in the hidden layer, where the shallow features are the polylines formed by the contour points of a point cloud cluster and the overall feature is the combined overall contour of these polylines, enabling the classification of pedestrians and vehicles;
the output layer comprises two neurons, corresponding to pedestrian and vehicle respectively; the output layer is connected to the last layer of the hidden layer by full connection, the probabilities that the target to be detected is a pedestrian or a vehicle are obtained through operations on the parameters between neurons, and the class with the higher probability is taken as the classification result.
It is further preferable that a random factor be added to the neurons in each neural network layer; within a given layer, the random factor randomly suppresses some of that layer's neurons. The specific implementation is as follows: in a given layer, several neurons are randomly selected and their connections to other neurons are cut off, i.e., the previous layer skips these neurons and connects directly to the next layer. A minimal sketch of such a network is given below.
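The following PyTorch sketch classifies a cluster from the point cloud information features named above (point count, length, width, height, density, distance to the lidar); the layer sizes, feature vector and dropout rate are illustrative assumptions, and a plain fully connected network stands in for the convolutional hidden layers described in the patent:

```python
import torch
import torch.nn as nn

class VehiclePedestrianNet(nn.Module):
    """Two-class network: input is a per-cluster feature vector, output
    is a score for each of the two classes (pedestrian, vehicle)."""
    def __init__(self, n_features: int = 6, hidden: int = 32, p_drop: float = 0.05):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Dropout(p_drop),           # the "random factor" suppressing some neurons
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, 2),         # output layer: pedestrian vs. vehicle
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # take the argmax of the scores as the classification result
```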
Further preferably, a labeled data set is produced, comprising: the type of the target point cloud, the distance between the point cloud cluster and the lidar, and the number of points in the target cluster. The target point cloud types comprise pedestrian and vehicle; the distance between a target cluster and the lidar is the distance from the target's front key point to the lidar;
further preferably, step 2.4 further includes comparing the actual output with the estimated output, and the specific implementation process is as follows: and feeding back the error between the actual output and the estimated output to the input layer again to serve as a guide for adjusting the weight in the next round of training, gradually learning the intrinsic relationship between the input and the output by adjusting the weight of each neuron in each layer of neural network through the iterative process to achieve the optimal precision, and terminating the training process when the error between the actual output and the estimated output is less than a preset value or the iteration times exceed the maximum times.
According to the invention, the step (3) is preferably implemented by the following steps:
3.1: the point cloud information features of vehicles are assumed to be concentrated within 1.5 m on each side of the lane center;
3.2: divide the lanes into small squares in the two-dimensional plane; according to the point cloud density of the squares, take the places where the density is sparsest to divide the different lanes, and display a continuous nonlinear lane line in the point cloud image through multi-segment linear fitting; in addition, estimate the number and direction of lanes from the moving direction of the vehicles;
estimating the number and direction of lanes from the moving direction of the vehicles means: when scanning in-transit targets, the lidar determines the moving direction of the vehicles and the number of traffic flows; the historical point clouds of vehicles on the same lane form one traffic flow, so the total number of traffic flows represents the number of lanes, and the moving direction of each lane is determined from the moving direction of its vehicles;
3.3: store the small squares representing a lane area in an N×4 matrix, where N denotes the N small squares and 4 denotes the coordinate information of a square's four corner points;
3.4: after lane division, calculate the total number of lanes and the number of lanes in a single direction as follows: the point clouds on a lane are distributed within 1.5 m on each side of the lane's center line, and the historical point cloud information of the vehicles on a lane forms a spatial band-shaped object called a traffic flow; count the number of traffic flows, i.e., the total number of lanes; the number of lanes in a single direction is the total number of lanes divided by 2;
3.5: when the lidar scans a vehicle target, find the coordinates of the vehicle's front key point, locate them in the corresponding small square, match the vehicle to the corresponding lane, and calculate the number L of lanes adjacent to the vehicle;
further preferably, the specific implementation method of step 3.2 includes:
a. determining the vehicle historical point cloud density distribution of the whole lane: analyzing historical point cloud pictures of vehicles on the lane by adopting a laser radar, and selecting historical point cloud data of the vehicles of the whole lane by adopting a small square frame from an area with the most dense point cloud density;
b. calculating the number of point clouds contained in each small square, setting a point cloud number threshold A, and selecting the small squares with the point cloud number smaller than A; repeatedly trying to change the point cloud number threshold value A until small squares on the actual lane line are selected; automatically filtering independent small squares in the selected small squares; and connecting the continuous small squares with the lowest point cloud density, and fitting the lane boundary line by multi-section linear fitting.
Further preferably, in step b, the connection mode between the small squares is as follows: and connecting the middle points of the overlapped parts of the small squares to obtain the multi-section linearly-fitted lane line.
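As a minimal sketch of steps a-b, the following groups historical vehicle points into a square grid and keeps the low-count occupied squares that trace the lane boundaries; the grid size and threshold A are illustrative assumptions, and the isolated-square filtering and midpoint connection described above are left out for brevity:

```python
import numpy as np

def boundary_square_centers(points_xy: np.ndarray, grid: float = 0.2,
                            threshold_a: int = 5) -> np.ndarray:
    """From historical vehicle points (N x 2, road top view), count points
    per grid square and select occupied squares with fewer than A points;
    their centers, sorted along x, feed the multi-segment linear fit."""
    mins = points_xy.min(axis=0)
    idx = np.floor((points_xy - mins) / grid).astype(int)
    squares, counts = np.unique(idx, axis=0, return_counts=True)
    selected = squares[counts < threshold_a]
    centers = mins + (selected + 0.5) * grid
    return centers[np.argsort(centers[:, 0])]
```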
Preferably, in step 3.5, if there are lanes on both sides of the lane the vehicle occupies, the possibility of a lane change increases, and L is multiplied by a lane factor l; the lane factor l is a parameter characterizing the vehicle's lane-change tendency, given by combining the lane the vehicle occupies with the number of lanes on both sides, and its value ranges from 0.9 to 1.1. A sketch of matching a vehicle to its lane follows.
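The following sketch matches a vehicle's front key point to its lane using the stored squares; representing each square by its four corner coordinates (an N×4×2 array per lane) is one way to realize the N×4 matrix described above, and all names here are illustrative:

```python
import numpy as np

def match_lane(front_key_xy, lane_squares: dict):
    """lane_squares maps lane id -> array of shape (N, 4, 2), the four
    corner points of each small square of that lane. Returns the id of
    the lane whose square contains the front key point, else None."""
    x, y = front_key_xy
    for lane_id, squares in lane_squares.items():
        for corners in squares:  # corners: 4 x 2 (axis-aligned square)
            xs, ys = corners[:, 0], corners[:, 1]
            if xs.min() <= x <= xs.max() and ys.min() <= y <= ys.max():
                return lane_id
    return None
```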
according to the invention, the specific implementation process of the step (4) is as follows:
4.1: tracking the same vehicle under different frames by adopting a global nearest neighbor algorithm GNN;
4.2: calculating the vertical distance from each tracking point to the nearest lane boundary DNLB;
4.3: calculating the variation CDNLB and the accumulated variation CCDNLB of the DNLB;
4.4: and predicting lane change of the vehicle in real time by considering the number of lanes around the lane where the vehicle is located.
Further preferably, in step 4.1, the same vehicle is tracked across different frames with the global nearest neighbor algorithm GNN as follows:
tracking the front and rear key points of the vehicle, namely: labeling the clustered targets, and if the distance between the front key points in two adjacent frames is less than a set threshold, labeling the targets in the adjacent frames as the same target, thereby realizing target tracking between different frames.
More preferably, the set threshold is 15 m.
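A simplified sketch of the frame-to-frame association follows; it greedily pairs each front key point in the previous frame with the nearest one in the current frame within the 15 m threshold, as a stand-in for full global nearest neighbor assignment:

```python
import numpy as np

def associate(prev_keys: np.ndarray, curr_keys: np.ndarray,
              threshold: float = 15.0) -> list[tuple[int, int]]:
    """prev_keys, curr_keys: (N x 2) front key points in two adjacent
    frames. Returns index pairs labelled as the same target."""
    if len(curr_keys) == 0:
        return []
    matches, used = [], set()
    for i, p in enumerate(prev_keys):
        d = np.linalg.norm(curr_keys - p, axis=1)  # distances to current key points
        j = int(np.argmin(d))
        if d[j] < threshold and j not in used:     # same target if within 15 m
            matches.append((i, j))
            used.add(j)
    return matches
```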
Further preferably, in step 4.2, the lane boundary is a piecewise linear function, as shown in formula (Ⅰ):

$y = k_n x + b_n, \quad x_{n-1} \le x < x_n, \; n = 1, 2, \ldots$ (Ⅰ)

In formula (Ⅰ), $k_n$ and $b_n$ are the coefficients of the piecewise linear function, and x and y are point cloud coordinates in the two-dimensional plane. The coordinate system is: the installation position of the lidar is the origin (0, 0), any direction is designated the x-axis, and the direction at a 90° angle to the x-axis is the y-axis.

The coordinates of the foot of the perpendicular from the vehicle's front key point to the nearest lane line are calculated by formulas (Ⅱ) and (Ⅲ), respectively:

$X_{ip} = \dfrac{x_i + k (y_i - b)}{1 + k^2}$ (Ⅱ)

$Y_{ip} = X_{ip} \cdot k + b$ (Ⅲ)

In formulas (Ⅱ) and (Ⅲ), ip denotes the foot of the perpendicular; $x_i$ and $y_i$ are the x and y coordinates of the ith tracking point; $X_{ip}$ and $Y_{ip}$ are the x and y coordinates of the foot of the perpendicular dropped from the ith tracking point onto the nearest lane line; k is the slope of the nearest lane line in the selected coordinate system; and b is its intercept.

The distance to the nearest lane boundary is calculated by formula (Ⅳ):

$\mathrm{DNLB}_i = \sqrt{(x_i - X_{ip})^2 + (y_i - Y_{ip})^2}$ (Ⅳ)

If the y coordinate $y_i$ of the key point is higher than the y coordinate of the closest point on the lane boundary, the sign of DNLB is "+"; otherwise it is "−" (indicating positive or negative). A minimal sketch of this computation follows.
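The sketch below computes the signed DNLB, assuming the nearest lane-line segment has already been selected and is given by its slope k and intercept b; the function name and signature are illustrative, not from the patent:

```python
import math

def dnlb(x_i: float, y_i: float, k: float, b: float) -> float:
    """Signed distance from tracking point (x_i, y_i) to the nearest lane
    line y = k*x + b, per formulas (II)-(IV): foot of the perpendicular,
    then the Euclidean distance; sign "+" when the point lies above the
    lane boundary, "-" otherwise."""
    x_ip = (x_i + k * (y_i - b)) / (1 + k ** 2)  # formula (II)
    y_ip = x_ip * k + b                          # formula (III)
    dist = math.hypot(x_i - x_ip, y_i - y_ip)    # formula (IV)
    return dist if y_i > y_ip else -dist
```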
More preferably, in step 4.3, the amount of change CDNLB of DNLB and the cumulative amount of change CCDNLB are calculated by formulas (Ⅴ) and (Ⅵ), respectively:

$\mathrm{CDNLB}_i = \mathrm{DNLB}_i - \mathrm{DNLB}_{i-1}$ (Ⅴ)

$\mathrm{CCDNLB}_{i:i+n} = \sum_{j=i}^{i+n} \mathrm{CDNLB}_j$ (Ⅵ)

In formulas (Ⅴ) and (Ⅵ), the subscript i denotes the ith frame, and i:i+n denotes the frames from the ith to the (i+n)th; $\mathrm{CDNLB}_i$ is the CDNLB in the ith frame, $\mathrm{DNLB}_i$ the DNLB in the ith frame, $\mathrm{DNLB}_{i-1}$ the DNLB in the (i−1)th frame, and $\mathrm{CCDNLB}_{i:i+n}$ the cumulative amount of change from the ith frame to the (i+n)th frame. A short sketch of both quantities follows.
More preferably, in step 4.4, the threshold of CCDNLB is set to 0.8 m. To eliminate abnormal values among the tracking points, the 3 frames nearest to the tracking frame on each side are examined; let the tracking frame be the ith frame. The prediction process is as follows:
c. Compute $\mathrm{CDNLB}_i$, $\mathrm{CDNLB}_{i-1}$ and $\mathrm{CDNLB}_{i-2}$. If the three have the same sign ("+" or "−") and CCDNLB ≥ 0.8 m, the vehicle tracked in this frame is judged to be about to make a lane-change or turning movement. If the signs are not the same, a lane-change movement is not yet predicted, because a high CCDNLB may be caused by an abnormal value; proceed to step d.
d. Compute $\mathrm{CDNLB}_{i+1}$ and $\mathrm{CDNLB}_{i+2}$ to determine whether they have the same sign ("+" or "−") as $\mathrm{CDNLB}_i$. If $\mathrm{CDNLB}_{i+1}$, $\mathrm{CDNLB}_{i+2}$ and $\mathrm{CDNLB}_i$ have the same sign, obtain the lane-change factor I; when I ≥ 1 the vehicle is judged to be about to change lanes, otherwise it is judged not to change lanes.
Further preferably, the determination of I is as shown in formula (Ⅶ): when CCDNLB is greater than or equal to 0.8 m, I is taken to be 1; when the actual CCDNLB is less than 0.8 m, I is taken according to formula (Ⅶ):

$I = \mathrm{CCDNLB} / 0.8$ (Ⅶ)
More preferably, when there are enough lanes around the vehicle, the vehicle is more willing to travel in a middle lane, so the value I is corrected according to the number of lanes. The correction coefficient applied to I is α, determined as follows:
For a bidirectional four-lane road (two lanes in each direction), when a same-direction lane exists on the other side of the nearest lane line, the vehicle has the conditions for a lane change, and α = 1.0.
For a bidirectional six-lane road (three lanes in each direction), let the three lanes in one direction be ①, ② and ③ in sequence, and their lane lines be a, b, c and d in sequence. When the vehicle is in lane ②, if the nearest lane line in the DNLB calculation is c, a vehicle travelling at uniform speed tends, by driving habit, to keep to this lane, so α = 1.0; if the nearest lane line is b, α = 0.9. When the vehicle is in lane ③, which is a passing lane, the vehicle usually changes back to lane ② after overtaking, so the possibility of a lane change is high and α = 1.1. When the vehicle is in lane ①, which is a slow lane, the vehicle is likely to change to lane ②, so α = 1.1.
For a bidirectional eight-lane road (four lanes in each direction), let the four lanes in one direction be ①′, ②′, ③′ and ④′ in sequence, and their lane lines be a′, b′, c′, d′ and e′ in sequence. When the vehicle is in lane ①′, it can only change to lane ②′, so α = 1.1. When the vehicle is in lane ②′, α = 0.9 for a change to lane ①′ and α = 1.0 for a change to lane ③′. When the vehicle is in lane ③′, α = 1.0 for a change to lane ②′ and α = 1.1 for a change to lane ④′. When the vehicle is in lane ④′ and changes to lane ③′, α = 1.1.
After the correction coefficient α is determined from the number of lanes, whether the vehicle will change lanes is judged comprehensively by formula (Ⅷ):

$I_{fin} = \alpha I$ (Ⅷ)

When $I_{fin}$ > 1.0, a lane-change movement of the vehicle is predicted; when $I_{fin}$ < 1.0, no lane-change movement is predicted. A minimal sketch of the complete decision follows.
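Putting steps c and d, formula (Ⅶ) and formula (Ⅷ) together, the following is a simplified decision sketch; it merges the two sign checks into one five-frame window and treats the 0.8 m threshold and α as inputs, so it illustrates the logic rather than reproducing the patent's exact flow:

```python
def predict_lane_change(cdnlb: list[float], i: int, alpha: float,
                        thresh: float = 0.8) -> bool:
    """Return True if a lane-change (or turning) movement is predicted
    for the vehicle tracked in frame i (requires 2 <= i <= len-3)."""
    window = cdnlb[i - 2 : i + 3]          # CDNLB_{i-2} .. CDNLB_{i+2}
    same_sign = all(v > 0 for v in window) or all(v < 0 for v in window)
    if not same_sign:
        return False                       # inconsistent signs: likely an outlier
    ccdnlb = abs(sum(window))              # cumulative change, formula (VI)
    I = min(ccdnlb / thresh, 1.0)          # formula (VII); I = 1 once CCDNLB >= 0.8 m
    return alpha * I > 1.0                 # formula (VIII): predict when I_fin > 1.0
```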
A lane-change identification and prediction system for extracting vehicle trajectories using roadside lidar data comprises: a target point cloud acquisition module, a data processing module and a prediction module. The target point cloud acquisition module is used for collecting point cloud data of in-transit targets and the background. The data processing module is used for filtering background point cloud data, filtering pedestrian point cloud data, clustering vehicle point clouds, tracking vehicle targets across different frames, dividing lanes and matching vehicle targets to the corresponding lanes. The prediction module is used for predicting whether an in-transit vehicle will make a lane-change movement.
Preferably, the data processing module comprises a background point cloud filtering and clustering module, an object classification module and a lane boundary identification module;
the background point cloud filtering and clustering module is used for: filtering background point cloud data, and filtering point cloud data outside an in-transit target (vehicle and pedestrian); adopting a DBSCAN algorithm to perform clustering operation on the in-transit targets;
the object classification module is to: constructing and training an artificial neural network for classifying vehicles and pedestrians, taking the total number of point clouds, the distance from a laser radar and the direction of a target cluster formed by the point clouds as input, excluding pedestrians from each picture through the processing of an input layer, a hidden layer and an output layer, and only keeping the target cluster formed by the point clouds of the vehicles;
the lane boundary identification module is used for: generating lane boundary lines and matching the in-transit vehicles to corresponding lanes;
Preferably, the target point cloud acquisition module is a multi-line lidar. The multi-line lidar is connected to a visualization program on a microcomputer that displays the point cloud image scanned by the lidar in real time; after background filtering and in-transit target classification (vehicles and pedestrians), only the vehicle point cloud data are displayed on the microcomputer;
The prediction module predicts vehicle behavior by integrating the DNLB, CDNLB and CCDNLB of the vehicle's key points, the preset thresholds, and the influence of the number of lanes on the value I.
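The following skeleton shows one way the three modules could be composed per lidar frame; all class and method names here are illustrative placeholders, not from the patent:

```python
class LaneChangeSystem:
    """Composition of the three modules described above."""
    def __init__(self, acquisition, processing, prediction):
        self.acquisition = acquisition  # lidar point cloud acquisition module
        self.processing = processing    # background filtering, clustering, classification, lane matching
        self.prediction = prediction    # DNLB/CDNLB/CCDNLB-based lane-change prediction

    def step(self) -> list:
        """Process one lidar frame and return a lane-change flag per vehicle."""
        frame = self.acquisition.read_frame()
        vehicles = self.processing.extract_vehicles(frame)
        return [self.prediction.predict(v) for v in vehicles]
```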
A computer apparatus comprising a memory storing a computer program and a processor implementing, when executing the computer program, the steps of a lane change identification and prediction method for extracting a vehicle trajectory using roadside lidar data.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of a lane change identification and prediction method for extracting vehicle trajectories using roadside lidar data.
The invention has the beneficial effects that:
1. By combining the lidar with point cloud processing algorithms, the invention can predict the lane-change or turning behavior of a vehicle in real time with high accuracy, and the lane-change information can be used to develop a vehicle lane-change early-warning system, contributing to the development of intelligent transportation engineering.
2. The vehicle lane change prediction device based on the laser radar is not influenced by light and weather conditions, and can give good prediction even at night.
3. The lane-change prediction algorithm provided by the invention incorporates drivers' driving habits: it considers large-scale data on how drivers change from a given lane to adjacent lanes together with the shortest distance between the vehicle and the nearest lane line, and combines these two aspects to predict lane-change behavior comprehensively. Road tests show that the device predicts vehicle lane-change behavior well.
Drawings
FIG. 1 is a schematic flow diagram of a lane-change identification and prediction method for extracting vehicle trajectories using roadside lidar data in accordance with the present invention;
FIG. 2(a) is a first schematic view of lane line fitting;
FIG. 2(b) is a second schematic diagram of lane line fitting;
FIG. 2(c) is a third schematic view of lane line fitting;
FIG. 3 is a schematic illustration of the vertical distance of each tracking point to the nearest lane boundary (DNLB);
FIG. 4 is a schematic diagram of point cloud data before background filtering;
FIG. 5 is a schematic diagram of background filtered point cloud data;
fig. 6 is a diagram illustrating a lane division result;
FIG. 7 is a schematic flow chart illustrating a process for predicting lane change behavior of a vehicle without considering the number of lanes;
FIG. 8 is a schematic flow chart of predicting lane change behavior in consideration of the number of lanes;
FIG. 9 is an exemplary diagram of a bidirectional six lane;
FIG. 10 is a two-way eight lane example diagram;
FIG. 11 is a schematic diagram of a lane-change recognition and prediction system for extracting vehicle trajectories using roadside lidar data according to the present invention.
Detailed Description
The invention is further described below with reference to the figures and examples of the description, but is not limited thereto.
Example 1
A lane change identification and prediction method for extracting vehicle track by using roadside lidar data is disclosed as shown in FIG. 1, and comprises the following steps:
(1) background filtering and target clustering: background filtering is carried out on the point cloud picture obtained by the roadside laser radar by using a background filtering technology, and point clouds in the point cloud picture after the background filtering processing are clustered by using a DBSCAN algorithm to obtain point cloud clusters;
(2) object classification: distinguishing pedestrians from vehicles, removing the point cloud of the pedestrians, and only keeping the point cloud of the vehicles;
(3) target tracking: generating lane boundary lines and matching the in-transit vehicles to corresponding lanes;
(4) predicting lane-change behavior: predicting the lane-change behavior of the vehicle in real time using the preset CDNLB and CCDNLB thresholds.
Example 2
The lane-change identification and prediction method for extracting vehicle trajectories using roadside lidar data according to Embodiment 1, with the difference that:
in step (1), background filtering means: filtering out background point clouds when no passing target passes from the point cloud picture when the passing target passes to obtain the background point clouds of the passing target; FIG. 4 is a schematic diagram of point cloud data before background filtering; fig. 5 is a schematic diagram of background filtered point cloud data.
In the step (1), the target clustering comprises the following steps:
A. scanning a point cloud picture obtained after the whole background filtering processing to find any core point, and expanding the core point, namely searching all density-connected data points starting from the core point;
B. traversing all core points in the ε-neighborhood of the core point (note: boundary points cannot be expanded), and finding the data points density-connected to these core points, until no expandable data point remains; the boundary nodes of the resulting cluster are all non-core points;
C. rescanning the data set (excluding any point already assigned to a previous cluster), searching for core points that have not been clustered, and repeating steps A to B until no new core point exists in the point cloud image;
D. and filtering noise points, wherein the noise points refer to data points which are not contained in the point cloud cluster.
In the step (2), the object classification comprises the following steps:
2.1: constructing an artificial neural network for vehicle and pedestrian classification;
2.2: making a data set, wherein the data set is divided into a training set, a verification set and a test set;
2.3: training the artificial neural network for vehicle and pedestrian classification on the training set, and verifying it on the verification set. The verification process is as follows: the images in the verification set are checked with the artificial neural network trained on the training set, and if the verification result is not satisfactory, training on the training set continues until the requirements are met. The trained artificial neural network for vehicle and pedestrian classification is obtained through these steps;
2.4: inputting the test set into a trained artificial neural network for vehicle and pedestrian classification to obtain a classification result, wherein the classification result comprises pedestrians and vehicles;
2.5: and filtering the point cloud cluster of which the classification result is the pedestrian.
The artificial neural network for classifying the vehicles and the pedestrians comprises an input layer, a hidden layer and an output layer;
the input layer is used for inputting the point cloud picture into an artificial neural network for vehicle and pedestrian classification;
the hidden layer comprises multiple neural network layers that perform convolution operations on the input point cloud image and extract point cloud information features, including the number of points, length, width, height and density of each point cloud cluster; the final overall features are obtained through repeated convolutions in the hidden layer, where the shallow features are the polylines formed by the contour points of a point cloud cluster and the overall feature is the combined overall contour of these polylines, enabling the classification of pedestrians and vehicles;
the output layer comprises two neurons, corresponding to pedestrian and vehicle respectively; the output layer is connected to the last layer of the hidden layer by full connection, the probabilities that the target to be detected is a pedestrian or a vehicle are obtained through operations on the parameters between neurons, and the class with the higher probability is taken as the classification result.
In order to avoid overfitting of the network, a random factor is added to the neurons in each neural network layer; within a given layer, the random factor randomly suppresses some of that layer's neurons. The specific implementation is as follows: in a given layer, several neurons are randomly selected and their connections to other neurons are cut off, i.e., the previous layer skips these neurons and connects directly to the next layer. For example, if a layer contains 100 neurons and the random factor is 0.05, then 100 × 0.05 = 5 neurons are selected in that layer; the selected 5 neurons are suppressed, and the adjacent layers skip them and connect directly. Random factors are introduced between each pair of neuron layers.
Suppressing a neuron does not affect the neurons of the layers above or below it that were connected to it: the layers of the hidden part are fully connected, so when one neuron is suppressed, the neurons in the adjacent layers are unaffected.
At the same distance from the lidar, the numbers of points in the clustered point cloud clusters of a pedestrian and a vehicle are different. For this purpose, a labeled data set is required, comprising: the type of the target point cloud, the distance between the point cloud cluster and the lidar, and the number of points in the target cluster. The target point cloud types comprise pedestrian and vehicle; the distance between a target cluster and the lidar is the distance from the target's front key point to the lidar.
The artificial neural network automatically learns the differences between vehicle and pedestrian target point clouds; for example, at a fixed distance, the point count of a vehicle is always larger than that of a pedestrian, and the volume index of a vehicle point cloud cluster is larger than that of a pedestrian. When the data set was produced, 10,000 target point cloud images were collected in total, of which 7,000 were used for training the artificial neural network, 2,000 for verification and 1,000 for testing.
Step 2.4 also includes comparing the actual output with the estimated output (the actual type of the target to be detected). The specific implementation is as follows: the error between the actual output and the estimated output is fed back to the input layer as guidance for adjusting the weights in the next round of training; through this iterative process, the weights of the neurons in each layer are adjusted so that the network gradually learns the intrinsic relationship between input and output and reaches optimal accuracy; the training process terminates when the error between the actual and estimated outputs is less than a preset value or the number of iterations exceeds the maximum.
The specific implementation process of the step (3) is as follows:
3.1: when travelling normally on a highway, a vehicle does not change lanes continuously but runs in the middle of the same lane, so the vehicle point cloud information features are set to be concentrated within 1.5 m on each side of the lane center, while the point cloud data above the lane lines are sparse. In the processing of historical vehicle point clouds, the points above the lane lines are filtered out as noise because of their low density.
Points on the same lane can therefore be grouped together: on a normally travelled lane there are generally few vehicles making lane changes, and vehicles generally travel at the center of the lane.
3.2: when generating lane lines, the clustered small point cloud squares are used to represent the lanes in two-dimensional space. Divide the lanes into small squares in the two-dimensional plane; according to the point cloud density of the squares, take the places where the density is sparsest to divide the different lanes, and display a continuous nonlinear lane line in the point cloud image through multi-segment linear fitting, as shown in FIGS. 2(a), 2(b) and 2(c). In addition, estimate the number and direction of lanes from the moving direction of the vehicles, e.g., bidirectional four lanes, bidirectional six lanes, etc. Estimating the number and direction of lanes from the moving direction of the vehicles means: when scanning in-transit targets, the lidar determines the moving direction of the vehicles and the number of traffic flows; the historical point clouds of vehicles on the same lane form one traffic flow, so the total number of traffic flows represents the number of lanes, and the moving direction of each lane is determined from the moving direction of its vehicles, e.g., a bidirectional four-lane, six-lane or eight-lane road.
3.3: store the small squares representing a lane area in an N×4 matrix, where N denotes the N small squares and 4 denotes the coordinate information of a square's four corner points. According to the user-defined coordinate axes, as shown in FIG. 6, the x and y coordinates of the 4 corner points of each square are stored in the computer in matrix form. FIG. 6 shows a top view of a road with a self-defined two-dimensional coordinate system, where the abscissa is the x position and the ordinate the y position of the top view. Unlike lane lines obtained by regressing a function, the lane lines divided by consecutive squares are not continuous curves, and the lane shapes are not regular.
3.4: after lane division, calculate the total number of lanes and the number of lanes in a single direction as follows: the point clouds on a lane are distributed within 1.5 m on each side of the lane's center line, and the historical point cloud information of the vehicles on a lane forms a spatial band-shaped object called a traffic flow; count the number of traffic flows, i.e., the total number of lanes, which is generally even; the number of lanes in a single direction is the total number of lanes divided by 2;
3.5: when the lidar scans a vehicle target, find the coordinates of the vehicle's front key point, locate them in the corresponding small square, match the vehicle to the corresponding lane, and calculate the number L of lanes adjacent to the vehicle. The number L of adjacent lanes is calculated as follows: according to the coordinates of the front key point of the vehicle's point cloud cluster in the two-dimensional top view, find in the lane-line storage matrix the small square containing the front key point; once the square's position is determined, determine the lane the vehicle occupies and its moving direction on that lane, and determine how many same-direction lanes lie on each side of that lane. L contributes to the prediction of the vehicle's lane-change behavior.
On the same lane, the density of the vehicle point cloud is highest in the middle of the lane and decreases gradually toward the two sides, so the lane can be divided using multiple squares. The specific implementation of step 3.2 comprises:
a. determining the historical vehicle point cloud density distribution of the whole lane: analyzing the historical point cloud images of vehicles on the lane collected by the lidar, and framing the historical vehicle point cloud data of the whole lane with small squares of 0.2 m side length, starting from the area where the point cloud is densest;
b. calculating the number of points contained in each small square, setting a point-count threshold A, and selecting the squares whose point count is smaller than A. The squares on the actual lane lines should have the lowest point cloud density, so the threshold A must be adjusted repeatedly until the squares on the actual lane lines are all selected. At this point, some irregular squares (squares located on the travel lane itself) may also be selected; such squares are usually isolated, so when the lane line is fitted the algorithm automatically filters them out because no other selected square connects to them. Isolated squares among those selected are thus filtered out automatically; the consecutive squares with the lowest point cloud density are connected in the manner shown in FIG. 2, and the lane boundary line is fitted by multi-segment linear fitting.
In step b, the squares are connected as follows: the midpoints of the overlapping parts of adjacent squares are connected to obtain the multi-segment linearly fitted lane line.
In step 3.5, if there are lanes on both sides of the lane the vehicle occupies, the possibility of a lane change increases, and L is multiplied by a lane factor l; the lane factor l is a parameter characterizing the vehicle's lane-change tendency, given by combining the lane the vehicle occupies with the number of lanes on both sides, and its value ranges from 0.9 to 1.1;
the specific implementation process of the step (4) is as follows:
4.1: tracking the same vehicle under different frames by adopting a global nearest neighbor algorithm GNN;
4.2: calculating the vertical distance from each tracking point to the nearest lane boundary DNLB; as shown in fig. 3.
4.3: calculating the variation CDNLB and the accumulated variation CCDNLB of the DNLB;
4.4: and predicting lane change of the vehicle in real time by considering the number of lanes around the lane where the vehicle is located.
In step 4.1, the specific implementation of tracking the same vehicle across different frames with the global nearest neighbor algorithm GNN is as follows:
tracking the front and rear key points of the vehicle, namely: labeling the clustered targets, and if the distance between the front key points in two adjacent frames is less than a set threshold, labeling the targets in the adjacent frames as the same target, thereby realizing target tracking between different frames.
The set threshold was 15 m.
In step 4.2, when DNLB is performed, since the lane fitting is performed by multi-segment linear fitting, the boundary of the lane adopts a piecewise linear function, as shown in formula (i):
Figure BDA0003137696410000131
in formula (I), knAnd bnThe coefficient of the piecewise linear function is represented, x and y represent point cloud coordinate points of a two-dimensional space, and the coordinate system is as follows: taking the installation position of the laser radar as a (0,0) point, designating any direction as an x-axis direction, and designating a direction forming an included angle of 90 degrees with the x-axis direction as a y-axis; the laser radar is generally arranged in the middle of a road, and the height of the laser radar is about 3-5 m;
the vertical foot coordinates of the key point in front of the vehicle and the nearest lane line are calculated by the formulas (II) and (III) respectively:
Figure BDA0003137696410000141
Yip=Xip×k+b (Ⅲ)
in the formulae (II) and (III), ip represents a drop foot point, and xiX-coordinate, y, representing the ith tracking pointiY-coordinate, x, representing the ith tracking pointipAn x coordinate representing the drop-foot point of the ith tracking point on the nearest lane line; y isipA y coordinate representing the drop-foot point of the ith tracking point on the nearest lane line; k represents the nearest laneThe slope of the line on the selected coordinate system; b represents the intercept of the nearest lane line on the selected coordinate system;
the distance to the nearest lane boundary is calculated by formula (IV):

$$DNLB_i = \sqrt{(x_i - x_{ip})^2 + (y_i - y_{ip})^2} \qquad (IV)$$
if the y-axis coordinate y_i of the key point is higher than the y-axis coordinate of the closest point on the lane boundary, the sign of DNLB is "+"; otherwise it is "−". It should be noted that the nearest lane boundary determined here does not change during the tracking time of the vehicle. Whether the nearest lane boundary has an adjacent lane is determined according to the determined number of lanes.
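A minimal sketch of formulas (II)-(IV) for one tracking point against the nearest lane-line segment (choosing which segment of the piecewise boundary applies is omitted; the function name is an illustrative assumption):

```python
import math

def signed_dnlb(xi, yi, k, b):
    """Signed distance from tracking point (xi, yi) to lane line
    y = k*x + b. Sign is "+" when the point lies above its foot point
    on the line, "-" otherwise."""
    x_ip = (k * (yi - b) + xi) / (k ** 2 + 1)   # formula (II)
    y_ip = k * x_ip + b                         # formula (III)
    d = math.hypot(xi - x_ip, yi - y_ip)        # formula (IV)
    return d if yi > y_ip else -d
```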
The lane-change behavior of the vehicle cannot be well predicted by relying on DNLB alone; the variation and the accumulated variation of DNLB are also required.
In step 4.3, the variation CDNLB of DNLB and the accumulated variation CCDNLB are calculated by formula (V) and formula (VI), respectively:

$$CDNLB_i = DNLB_i - DNLB_{i-1} \qquad (V)$$

$$CCDNLB_i = \sum_{j=i}^{i+n} CDNLB_j \qquad (VI)$$
in formulas (V) and (VI), the subscript i denotes the i-th frame, and i:i+n denotes the frames from the i-th frame to the (i+n)-th frame; CDNLB_i denotes the CDNLB in the i-th frame, DNLB_i denotes the DNLB in the i-th frame, DNLB_{i-1} denotes the DNLB in the (i−1)-th frame, and CCDNLB_i denotes the accumulated variation CCDNLB from the i-th frame to the (i+n)-th frame.
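A minimal sketch of formulas (V) and (VI) over a tracked DNLB sequence (a plain-Python illustration under the definitions above; window indices are assumed valid):

```python
def cdnlb_series(dnlb):
    """Formula (V): frame-to-frame change of DNLB.
    cd[j - 1] holds CDNLB_j for j = 1 .. len(dnlb) - 1."""
    return [dnlb[j] - dnlb[j - 1] for j in range(1, len(dnlb))]

def ccdnlb(dnlb, i, n):
    """Formula (VI): accumulated change of DNLB from frame i to i+n,
    i.e. the sum of CDNLB over that window."""
    cd = cdnlb_series(dnlb)
    return sum(cd[i - 1:i + n])
```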
In step 4.4, the threshold of CCDNLB is set to 0.8 m; to eliminate abnormal values among the tracking points, the 3 closest frames before and after the tracking frame are searched, and the tracking frame is taken as the i-th frame. The prediction process includes the following:
c. Computing CDNLB_i, CDNLB_{i-1} and CDNLB_{i-2}; if the three have the same sign ("+" or "−") and CCDNLB ≥ 0.8 m, the tracked vehicle in this frame is judged to be about to make a lane-change or turning movement; if the signs are not all the same, a lane-change movement is not yet predicted, because a high CCDNLB may be caused by an abnormal value, and step d is entered. Fig. 7 is a flowchart for predicting lane-change behavior without considering the number of lanes.
d. Computing CDNLB_{i+1} and CDNLB_{i+2} to determine whether they have the same sign ("+" or "−") as CDNLB_i; if CDNLB_{i+1}, CDNLB_{i+2} and CDNLB_i have the same sign, the lane-change factor I is obtained; when I ≥ 1 the vehicle is judged to be about to change lanes, otherwise the vehicle is judged not to change lanes. Fig. 8 is a schematic flowchart of predicting lane-change behavior in consideration of the number of lanes;
the determination method of I is shown in formula (VII):
it is first ensured that CDNLB_i, CDNLB_{i+1} and CDNLB_{i+2} have the same sign; then, when CCDNLB is greater than or equal to 0.8 m, I is taken as 1; if the actual CCDNLB is less than 0.8 m, I is taken according to formula (VII):
$$I = \frac{\lvert CCDNLB_i \rvert}{0.8} \qquad (VII)$$
the number of adjacent lanes also influences the lane-change behavior of the vehicle: when there are sufficient lanes around the vehicle, the vehicle is more willing to run in a middle lane. The value I is therefore corrected according to the number of lanes; the correction coefficient of the number of lanes on the value I is α, and the specific correction method is as follows:
for a bidirectional four-lane road (two lanes in each direction), when a lane in the same direction exists beside the nearest lane line, the vehicle has a lane-change condition, and α is 1.0;
for a bidirectional six-lane road (three lanes in each direction), as shown in fig. 9, the three lanes in one direction are denoted (i), (ii) and (iii), and the lane lines of the three lanes are denoted (a), (b), (c) and (d) in sequence. When a vehicle is located in lane (ii), if the nearest lane line when calculating DNLB is (c), a vehicle at constant speed tends by driving habit to keep running in this lane, and α is 1.0; if the nearest lane line is (b), α is 0.9. When the vehicle is located in lane (iii), this lane is a passing lane and the vehicle usually changes back to lane (ii) after overtaking, so its lane-change possibility is high and α is 1.1. When the vehicle is located in lane (i), this lane is a slow lane and the possibility of changing to lane (ii) is high, so α is 1.1;
for a bidirectional eight-lane road (four lanes in each direction), as shown in fig. 10, the four lanes in one direction are denoted (i), (ii), (iii) and (iv) in sequence, and the lane lines of the four lanes are denoted (a), (b), (c), (d) and (e) in sequence. When a vehicle is located in lane (i), it can only change to lane (ii), and α is 1.1; when the vehicle is located in lane (ii), α is 0.9 for a change to lane (i) and 1.0 for a change to lane (iii); when the vehicle is located in lane (iii), α is 1.0 for a change to lane (ii) and 1.1 for a change to lane (iv); when the vehicle is located in lane (iv) and changes to lane (iii), α is 1.1;
after the correction coefficient of the number of lanes on the value I is determined, whether the vehicle will change lanes is comprehensively judged by formula (VIII):
$$I_{fin} = \alpha I \qquad (VIII)$$
when I_fin > 1.0, the vehicle is predicted to make a lane-change movement; when I_fin < 1.0, the vehicle is predicted not to make a lane-change movement.
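A minimal sketch combining steps c and d with formulas (VII) and (VIII) (the form of formula (VII) as I = |CCDNLB|/0.8 follows the reconstruction above and is an assumption, as is the function naming):

```python
def lane_change_factor(ccdnlb_value, threshold=0.8):
    """Formula (VII) as reconstructed: I = 1 once |CCDNLB| reaches the
    0.8 m threshold, otherwise proportional to it (an assumption)."""
    return min(abs(ccdnlb_value) / threshold, 1.0)

def predict_lane_change(cdnlb, i, ccdnlb_value, alpha):
    """Steps c-d: CDNLB must keep one sign over frames i-2 .. i+2;
    then formula (VIII): I_fin = alpha * I, lane change when > 1.0."""
    window = [cdnlb[j] for j in range(i - 2, i + 3)]
    same_sign = all(v > 0 for v in window) or all(v < 0 for v in window)
    if not same_sign:
        return False         # a high CCDNLB was likely an outlier
    return alpha * lane_change_factor(ccdnlb_value) > 1.0
```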
In this embodiment, the 3 closest frames before and after the tracking frame are searched, and the tracking frame is taken as the i-th frame. CDNLB_i, CDNLB_{i-1} and CDNLB_{i-2} are computed; if the three have the same sign ("+" or "−") and CCDNLB ≥ 0.8 m, the lane-change factor I of the vehicle is taken as 1.0; if CCDNLB is less than 0.8 m, I is calculated according to formula (VII). The number of lanes of the road is determined according to the lane-line dividing method, and the lane change of the vehicle is predicted from the I value of the vehicle and the correction coefficient α. If CDNLB_i, CDNLB_{i-1} and CDNLB_{i-2} do not all have the same sign, the remaining frames are searched, and if frames with the same sign are found, step d is carried out; if three continuous frames without the same sign exist, the vehicle is directly judged as a non-lane-change vehicle.
In an actual field test, only 4 of 96 detected lane-change behaviors were misjudged as straight-line driving. Given this result, the present invention can satisfy the real-time lane-change prediction requirements of a connected-vehicle (CV) network.
Example 3
A lane-change identification and prediction system for extracting vehicle tracks by using roadside lidar data, shown in FIG. 11, is used for realizing the lane-change identification and prediction method for extracting vehicle tracks by using roadside lidar data, and comprises: a target point cloud acquisition module, a data processing module and a prediction module. The target point cloud acquisition module is used for collecting point cloud data of in-transit targets and the background. The data processing module is used for filtering background point cloud data, filtering pedestrian point cloud data, clustering vehicle point clouds, tracking vehicle targets in different frames, dividing lanes and matching vehicle targets to corresponding lanes. The prediction module is used for predicting whether an in-transit vehicle will make a lane-change movement. The data processing module integrates a plurality of point cloud processing algorithms: the DBSCAN algorithm is adopted for clustering in-transit target point clouds; a self-designed artificial neural network is adopted for vehicle and pedestrian classification; and GNN is used for object tracking between successive frames.
the data processing module comprises a background point cloud filtering and clustering module, an object classification module and a lane boundary identification module;
the background point cloud filtering and clustering module is used for: filtering background point cloud data by adopting a traditional laser radar background point cloud filtering technology, and filtering point cloud data outside on-road targets (vehicles and pedestrians); adopting a DBSCAN algorithm to perform clustering operation on the in-transit targets;
the object classification module is to: constructing and training an artificial neural network for classifying vehicles and pedestrians, taking the total number of point clouds, the distance from a laser radar and the direction of a target cluster formed by the point clouds as input, excluding pedestrians from each picture through the processing of an input layer, a hidden layer and an output layer, and only keeping the target cluster formed by the point clouds of the vehicles;
the lane boundary identification module is used for: generating lane boundary lines and matching the in-transit vehicles to corresponding lanes; generating a lane boundary by adopting a point density method, and determining the position of a lane according to the distribution condition of historical vehicle point clouds on the road; first, assuming that the density of vehicle points on the road is much higher than in other areas and the proportion of lane-changing vehicles is much lower than that of lane-unchanging vehicles, vehicle points on the same lane may come together under both assumptions. In a top view of a point cloud image scanned by a laser radar, a road two-dimensional plane is divided into small squares according to point cloud density, regions with the most sparse point cloud density are connected, namely general positions of actual lane lines, and continuous lane lines are obtained through multi-segment linear fitting. And storing the square area for recording the lane line information in a computer, and when a vehicle passes through the scanning area of the laser radar, judging the point cloud square where the point (front key point) closest to the laser radar is located by the computer, and positioning the vehicle in the lane where the point cloud square is located.
The target point cloud acquisition module is a multi-line laser radar; the multi-line laser radar is connected to a visualization program on a microcomputer, the point cloud picture scanned by the laser radar is displayed in real time, and after background filtering and on-road target classification (vehicles and pedestrians) only the point cloud data of vehicles is displayed on the microcomputer;
the prediction module is mainly used for predicting the behavior of the vehicle by integrating the factors according to the influence of DNLB, CDNLB and CCDNLB of the key points of the vehicle, a preset threshold and the number of lanes on the value I.
Example 4
A computer device comprising a memory storing a computer program and a processor implementing the steps of the lane change identification and prediction method for extracting a vehicle trajectory using roadside lidar data of any of embodiments 1 or 2 when the computer program is executed.
Example 5
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the lane-change identification and prediction method for extracting a vehicle trajectory using roadside lidar data of any of embodiments 1 or 2.

Claims (10)

1. A lane change identification and prediction method for extracting vehicle tracks by using roadside lidar data is characterized by comprising the following steps of:
(1) background filtering and target clustering: background filtering is carried out on the point cloud picture obtained by the roadside laser radar by using a background filtering technology, and point clouds in the point cloud picture after the background filtering processing are clustered by using a DBSCAN algorithm to obtain point cloud clusters;
(2) object classification: distinguishing pedestrians from vehicles, removing the point cloud of the pedestrians, and only keeping the point cloud of the vehicles;
(3) target tracking: generating lane boundary lines and matching the in-transit vehicles to corresponding lanes;
(4) predicting lane change behavior: predicting the lane-change behavior of the vehicle in real time by using CDNLB and CCDNLB with preset thresholds.
2. The lane change identification and prediction method for extracting vehicle tracks by using roadside lidar data according to claim 1, wherein in the step (1), the background filtering is: filtering out background point clouds when no passing target passes from the point cloud picture when the passing target passes to obtain the background point clouds of the passing target;
in the step (1), the target clustering comprises the following steps:
A. scanning a point cloud picture obtained after the whole background filtering processing to find any core point, and expanding the core point, namely searching all density-connected data points starting from the core point;
B. traversing all core points in the epsilon neighborhood of the core points, and searching data points connected with all densities of the core points; boundary nodes of the clustered clusters are all non-core points;
C. rescanning the data set, searching for core points which are not clustered, and repeating the steps A to B until no new core points exist in the point cloud picture;
D. filtering noise points, wherein noise points refer to data points which are not contained in any point cloud cluster.
3. The lane change identification and prediction method for extracting vehicle track by using roadside lidar data of claim 1, wherein the step (2) of classifying the object comprises the following steps:
2.1: constructing an artificial neural network for vehicle and pedestrian classification;
2.2: making a data set, wherein the data set is divided into a training set, a verification set and a test set;
2.3: training an artificial neural network for vehicle and pedestrian classification through a training set, and verifying the artificial neural network for vehicle and pedestrian classification through a verification set to obtain the trained artificial neural network for vehicle and pedestrian classification;
2.4: inputting the test set into a trained artificial neural network for vehicle and pedestrian classification to obtain a classification result, wherein the classification result comprises pedestrians and vehicles;
2.5: filtering out the point cloud cluster of which the classification result is the pedestrian;
further preferably, the artificial neural network for vehicle and pedestrian classification comprises an input layer, a hidden layer and an output layer;
the input layer is used for inputting the point cloud picture into an artificial neural network for vehicle and pedestrian classification;
the hidden layer comprises a plurality of layers of neural networks, convolution operation is carried out on an input point cloud image, point cloud information characteristics are extracted, the point cloud information characteristics comprise the number, the length, the width, the height and the density of point clouds of point cloud clusters, final overall characteristics are obtained through carrying out multiple times of convolution in the hidden layer, the shallow layer characteristics are multi-segment lines formed by appearance contour points of the point cloud clusters, the overall characteristics are combined overall contours of the multi-segment lines, and classification of pedestrians and vehicles is achieved;
the output layer comprises two neurons which are respectively a pedestrian and a vehicle; the output layer is connected with the last layer of neural network of the hidden layer by adopting a full connection method, the probability that the target to be detected is a pedestrian and a vehicle is obtained through parameter operation among the neurons, and the higher probability is used as a classification result;
it is further preferable that a random factor is added to the neurons in each layer of neural network, and in a certain layer of neural network, the random factor randomly inhibits some neurons in the layer of neural network; the specific implementation process is as follows: in a certain layer of neural network, randomly extracting a plurality of neurons, and cutting off the connection between the neurons and other neurons, namely, the neuron in the upper layer skips over the neuron and is directly connected with the neuron in the lower layer;
further preferably, a labeled data set is created, the data set including: the type of the target point cloud, the distance between the point cloud cluster and the laser radar, and the number of point clouds of the target cluster; the target point cloud types comprise pedestrians and vehicles; the distance between the target cluster and the laser radar is the distance between the front key point of the target and the laser radar;
further preferably, step 2.4 further includes comparing the actual output with the estimated output; the specific implementation process is as follows: the error between the actual output and the estimated output is fed back to the input layer as a guide for adjusting the weights in the next round of training; through this iterative process the weights of the neurons in each layer of the neural network are adjusted so that the intrinsic relationship between input and output is gradually learned to the optimal precision, and the training process is terminated when the error between the actual output and the estimated output is less than a preset value or the number of iterations exceeds the maximum.
4. The lane change identification and prediction method for extracting the vehicle track by using the roadside lidar data as claimed in claim 1, wherein the step (3) is implemented by:
3.1: setting point cloud information characteristics of vehicles to be concentrated in the range of 1.5m on the left side and the right side of the center of a lane;
3.2: dividing lanes into small squares in a two-dimensional plane, taking the place with the most sparse point cloud density according to the point cloud density of the small squares, dividing different lanes, and displaying a continuous nonlinear lane line in a point cloud image through multi-section linear fitting; in addition, the number and direction of lanes are estimated according to the moving direction of the vehicle;
estimating the number and direction of lanes according to the moving direction of the vehicle is as follows: when scanning in-transit targets, the laser radar judges the moving direction of the vehicles and the number of traffic flows; the historical point clouds of the vehicles on the same lane form one traffic flow, so the total number of traffic flows represents the number of lanes, and the direction of each lane is judged from the moving direction of its vehicles;
3.3: storing the small squares representing the lane area in an N×4 matrix, where N represents the N small squares and 4 represents the coordinate information of the four corner points of each small square;
3.4: after lane division, the total number of lanes and the number of lanes in a single direction are calculated, and the calculation method comprises the following steps: the point clouds on the lanes are distributed in the range of 1.5m respectively at the left and right of the center line of each lane, the historical point cloud information of the vehicles on the lanes forms a space band-shaped object called traffic flow, the number of the traffic flows, namely the total number of the lanes, is calculated, and the number of the lanes in a single direction is the total number of the lanes divided by 2;
3.5: when the laser radar scans a vehicle target, finding the coordinates of the front key points of the vehicle, positioning the coordinates in the corresponding small square, matching the vehicle in the corresponding lane, and calculating the number L of adjacent lanes of the vehicle;
the specific implementation method of the step 3.2 comprises the following steps:
a. determining the vehicle historical point cloud density distribution of the whole lane: analyzing historical point cloud pictures of vehicles on the lane by adopting a laser radar, and selecting historical point cloud data of the vehicles of the whole lane by adopting a small square frame from an area with the most dense point cloud density;
b. calculating the number of point clouds contained in each small square, setting a point-cloud-count threshold A, and selecting the small squares whose point cloud count is smaller than A; repeatedly adjusting the threshold A until the small squares on the actual lane lines are all selected; automatically filtering the independent small squares among the selected small squares; connecting the continuous small squares with the lowest point cloud density, and fitting the lane boundary lines through multi-segment linear fitting;
in step b, the connection mode between the small squares is as follows: connecting the middle points of the overlapped parts of the small squares to obtain a multi-section linearly-fitted lane line;
in step 3.5, if there are lanes on both sides of the lane where the vehicle is located, the possibility of a lane change increases, and the number L of adjacent lanes is multiplied by a lane factor; the lane factor is a parameter representing the lane-change tendency of the vehicle, given by combining the lane where the vehicle is located with the number of lanes on both sides, and its value ranges from 0.9 to 1.1.
5. The lane change identification and prediction method for extracting the vehicle track by using the roadside lidar data as claimed in claim 1, wherein the step (4) is implemented by:
4.1: tracking the same vehicle under different frames by adopting a global nearest neighbor algorithm GNN;
4.2: calculating the vertical distance from each tracking point to the nearest lane boundary DNLB;
4.3: calculating the variation CDNLB and the accumulated variation CCDNLB of the DNLB;
4.4: considering the number of lanes around the lane where the vehicle is located, and predicting lane change of the vehicle in real time;
further preferably, in step 4.1, a specific implementation method for tracking the same vehicle under different frames by using the global nearest neighbor algorithm GNN is as follows:
tracking front key points and rear key points of the vehicle, namely: labeling the clustered targets, and if the distance between the front key points in two adjacent frames is less than a set threshold value, labeling the targets in the adjacent frames as the same target, so as to realize target tracking between different frames;
more preferably, the set threshold is 15 m.
6. The lane change recognition and prediction method for extracting vehicle trajectory by using roadside lidar data of claim 5, wherein in step 4.2, the boundary of the lane adopts a piecewise linear function, as shown in formula (I):
$$y = \begin{cases} k_1 x + b_1, & x_0 \le x < x_1 \\ k_2 x + b_2, & x_1 \le x < x_2 \\ \;\;\vdots \\ k_n x + b_n, & x_{n-1} \le x \le x_n \end{cases} \qquad (I)$$
in formula (I), k_n and b_n represent the coefficients of the piecewise linear function, and x and y represent point cloud coordinates in two-dimensional space. The coordinate system is as follows: the installation position of the laser radar is taken as the (0,0) point, any direction is designated as the x-axis direction, and the direction at a 90° angle to the x-axis is designated as the y-axis;
the coordinates of the foot of the perpendicular from the key point in front of the vehicle to the nearest lane line are calculated by formulas (II) and (III), respectively:
$$x_{ip} = \frac{k\,(y_i - b) + x_i}{k^2 + 1} \qquad (II)$$
$$y_{ip} = k\, x_{ip} + b \qquad (III)$$
in formulas (II) and (III), ip denotes the foot point; x_i denotes the x-coordinate of the i-th tracking point; y_i denotes the y-coordinate of the i-th tracking point; x_ip denotes the x-coordinate of the foot point of the i-th tracking point on the nearest lane line; y_ip denotes the y-coordinate of the foot point of the i-th tracking point on the nearest lane line; k denotes the slope of the nearest lane line in the selected coordinate system; b denotes the intercept of the nearest lane line in the selected coordinate system;
the distance to the nearest lane boundary is calculated by formula (IV):

$$DNLB_i = \sqrt{(x_i - x_{ip})^2 + (y_i - y_{ip})^2} \qquad (IV)$$
if the y-axis coordinate y_i of the key point is higher than the y-axis coordinate of the closest point on the lane boundary, the sign of DNLB is "+", otherwise it is "−";
more preferably, in step 4.3, the variation CDNLB of DNLB and the accumulated variation CCDNLB are calculated by formula (V) and formula (VI), respectively:

$$CDNLB_i = DNLB_i - DNLB_{i-1} \qquad (V)$$

$$CCDNLB_i = \sum_{j=i}^{i+n} CDNLB_j \qquad (VI)$$
in formulas (V) and (VI), the subscript i denotes the i-th frame, and i:i+n denotes the frames from the i-th frame to the (i+n)-th frame; CDNLB_i denotes the CDNLB in the i-th frame, DNLB_i denotes the DNLB in the i-th frame, DNLB_{i-1} denotes the DNLB in the (i−1)-th frame, and CCDNLB_i denotes the accumulated variation CCDNLB from the i-th frame to the (i+n)-th frame.
7. The lane change recognition and prediction method for extracting vehicle track by using roadside lidar data according to claim 1, wherein in step 4.4, the adopted threshold value of CCDNLB is set to 0.8m, the closest 3 frames before and after the tracking frame are searched, and the tracking frame is set as the ith frame; the prediction process includes the following:
c. computing CDNLB_i, CDNLB_{i-1} and CDNLB_{i-2}; if the three have the same sign and CCDNLB ≥ 0.8 m, the tracked vehicle in this frame is judged to be about to make a lane-change or turning movement; if the signs are not all the same, a lane-change movement is not yet predicted, because a high CCDNLB may be caused by an abnormal value, and step d is entered;
d. computing CDNLB_{i+1} and CDNLB_{i+2} to determine whether they have the same sign as CDNLB_i; if CDNLB_{i+1}, CDNLB_{i+2} and CDNLB_i have the same sign, the lane-change factor I is obtained; when I ≥ 1 the vehicle is judged to be about to change lanes, otherwise the vehicle is judged not to change lanes;
further preferably, the determination of I is as shown in formula (VII):
$$I = \frac{\lvert CCDNLB_i \rvert}{0.8} \qquad (VII)$$
more preferably, when there are sufficient lanes around the vehicle, the vehicle is more willing to travel in a middle lane, so the value I is corrected according to the number of lanes; the correction coefficient of the number of lanes on the value I is α, and the specific correction method is as follows:
for a bidirectional four-lane road, when a lane in the same direction exists beside the nearest lane line, the vehicle has a lane-change condition, and α is 1.0;
for a bidirectional six-lane road, the three lanes in one direction are denoted (i), (ii) and (iii), and the lane lines of the three lanes are denoted (a), (b), (c) and (d) in sequence. When a vehicle is located in lane (ii), if the nearest lane line when calculating DNLB is (c), a vehicle at constant speed tends by driving habit to keep running in this lane, and α is 1.0; if the nearest lane line is (b), α is 0.9. When the vehicle is located in lane (iii), this lane is a passing lane and the vehicle usually changes back to lane (ii) after overtaking, so its lane-change possibility is high and α is 1.1. When the vehicle is located in lane (i), this lane is a slow lane and the possibility of changing to lane (ii) is high, so α is 1.1;
for a bidirectional eight-lane road, the four lanes in one direction are denoted (i), (ii), (iii) and (iv) in sequence, and the lane lines of the four lanes are denoted (a), (b), (c), (d) and (e) in sequence. When a vehicle is located in lane (i), it can only change to lane (ii), and α is 1.1; when the vehicle is located in lane (ii), α is 0.9 for a change to lane (i) and 1.0 for a change to lane (iii); when the vehicle is located in lane (iii), α is 1.0 for a change to lane (ii) and 1.1 for a change to lane (iv); when the vehicle is located in lane (iv) and changes to lane (iii), α is 1.1;
after the correction coefficient of the number of lanes on the value I is determined, whether the vehicle will change lanes is comprehensively judged by formula (VIII):
$$I_{fin} = \alpha I \qquad (VIII)$$
when I_fin > 1.0, the vehicle is predicted to make a lane-change movement; when I_fin < 1.0, the vehicle is predicted not to make a lane-change movement.
8. A lane-change identification and prediction system for extracting vehicle tracks by using roadside lidar data, used for realizing the lane-change identification and prediction method for extracting vehicle tracks by using roadside lidar data according to any one of claims 1 to 7, characterized by comprising: a target point cloud acquisition module, a data processing module and a prediction module; the target point cloud acquisition module is used for: collecting point cloud data of in-transit targets and the background; the data processing module is used for: filtering background point cloud data, filtering pedestrian point cloud data, clustering vehicle point clouds, tracking vehicle targets in different frames, dividing lanes and matching vehicle targets to corresponding lanes; the prediction module is used for: predicting whether an in-transit vehicle will make a lane-change movement;
preferably, the data processing module comprises a background point cloud filtering and clustering module, an object classification module and a lane boundary identification module;
the background point cloud filtering and clustering module is used for: filtering background point cloud data, and filtering point cloud data outside the in-transit target; adopting a DBSCAN algorithm to perform clustering operation on the in-transit targets;
the object classification module is to: constructing and training an artificial neural network for classifying vehicles and pedestrians, taking the total number of point clouds, the distance from a laser radar and the direction of a target cluster formed by the point clouds as input, excluding pedestrians from each picture through the processing of an input layer, a hidden layer and an output layer, and only keeping the target cluster formed by the point clouds of the vehicles;
the lane boundary identification module is used for: generating lane boundary lines and matching the in-transit vehicles to corresponding lanes;
preferably, the target point cloud acquisition module is a multi-line laser radar; the multi-line laser radar is connected to a visualization program on a microcomputer, the point cloud picture scanned by the laser radar is displayed in real time, and after background filtering and on-road target classification only the point cloud data of vehicles is displayed on the microcomputer;
the prediction module is mainly used for predicting the behavior of the vehicle according to the influence of DNLB, CDNLB and CCDNLB of key points of the vehicle, preset threshold values and lane number on the I value.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of a lane change identification and prediction method for extracting a vehicle trajectory using roadside lidar data according to any of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the lane-change identification and prediction method for extracting a vehicle trajectory using roadside lidar data according to any one of claims 1 to 7.
CN202110726268.3A 2021-06-29 2021-06-29 Lane-changing identification and prediction method, system, equipment and storage medium for extracting vehicle track by using roadside laser radar data Pending CN113345237A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110726268.3A CN113345237A (en) 2021-06-29 2021-06-29 Lane-changing identification and prediction method, system, equipment and storage medium for extracting vehicle track by using roadside laser radar data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110726268.3A CN113345237A (en) 2021-06-29 2021-06-29 Lane-changing identification and prediction method, system, equipment and storage medium for extracting vehicle track by using roadside laser radar data

Publications (1)

Publication Number Publication Date
CN113345237A true CN113345237A (en) 2021-09-03

Family

ID=77481385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110726268.3A Pending CN113345237A (en) 2021-06-29 2021-06-29 Lane-changing identification and prediction method, system, equipment and storage medium for extracting vehicle track by using roadside laser radar data

Country Status (1)

Country Link
CN (1) CN113345237A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210061294A1 (en) * 2017-12-27 2021-03-04 Bayerische Motoren Werke Aktiengesellschaft Vehicle Lane Change Prediction
CN109858460A (en) * 2019-02-20 2019-06-07 重庆邮电大学 A kind of method for detecting lane lines based on three-dimensional laser radar
CN110705543A (en) * 2019-08-23 2020-01-17 芜湖酷哇机器人产业技术研究院有限公司 Method and system for recognizing lane lines based on laser point cloud
CN111540201A (en) * 2020-04-23 2020-08-14 山东大学 Vehicle queuing length real-time estimation method and system based on roadside laser radar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YUEPENG CUI, JIANQING WU et al.: "Lane change identification and prediction with roadside LiDAR data", Optics and Laser Technology *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780214A (en) * 2021-09-16 2021-12-10 上海西井信息科技有限公司 Method, system, device and storage medium for image recognition based on crowd
CN113780214B (en) * 2021-09-16 2024-04-19 上海西井科技股份有限公司 Method, system, equipment and storage medium for image recognition based on crowd
CN113888621A (en) * 2021-09-29 2022-01-04 中科海微(北京)科技有限公司 Loading rate determining method, loading rate determining device, edge computing server and storage medium
CN114067552A (en) * 2021-11-08 2022-02-18 山东高速建设管理集团有限公司 Pedestrian crossing track tracking and predicting method based on roadside laser radar
CN114093165A (en) * 2021-11-17 2022-02-25 山东大学 Roadside laser radar-based vehicle-pedestrian conflict automatic identification method
CN113866743A (en) * 2021-12-06 2021-12-31 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Roadside laser point cloud simplification method and system for cooperative vehicle and road sensing
CN114373298A (en) * 2021-12-16 2022-04-19 苏州思卡信息系统有限公司 Method for calculating lane number by adopting roadside laser radar
CN114373298B (en) * 2021-12-16 2023-02-17 苏州思卡信息系统有限公司 Method for calculating lane number by adopting roadside laser radar
CN114299719A (en) * 2021-12-30 2022-04-08 安徽中科美络信息技术有限公司 Track-based lane temporary pointing identification method and system in end cloud system
CN115273549A (en) * 2022-07-29 2022-11-01 山东高速集团有限公司 Expressway shunting area anti-collision method based on roadside laser radar
CN115273549B (en) * 2022-07-29 2023-10-31 山东高速集团有限公司 Expressway diversion area anti-collision method based on roadside laser radar
CN116153057A (en) * 2022-09-12 2023-05-23 东北林业大学 Method for estimating lane width based on laser radar point cloud
CN115719364A (en) * 2022-11-14 2023-02-28 重庆数字城市科技有限公司 Method and system for tracking pedestrian based on mobile measurement point cloud data
CN115719364B (en) * 2022-11-14 2023-09-08 重庆数字城市科技有限公司 Pedestrian tracking method and system based on mobile measurement point cloud data

Similar Documents

Publication Publication Date Title
CN113345237A (en) Lane-changing identification and prediction method, system, equipment and storage medium for extracting vehicle track by using roadside laser radar data
CN111540201B (en) Vehicle queuing length real-time estimation method and system based on roadside laser radar
Wu et al. Automatic lane identification using the roadside LiDAR sensors
US11380105B2 (en) Identification and classification of traffic conflicts
JP6591842B2 (en) Method and system for performing adaptive ray-based scene analysis on semantic traffic space, and vehicle comprising such a system
CN101966846B (en) Travel's clear path detection method for motor vehicle involving object deteciting and enhancing
Chen et al. Deer crossing road detection with roadside LiDAR sensor
CN107423679A (en) A kind of pedestrian is intended to detection method and system
Chen et al. Architecture of vehicle trajectories extraction with roadside LiDAR serving connected vehicles
CN107730903A (en) Parking offense and the car vision detection system that casts anchor based on depth convolutional neural networks
CN110379168B (en) Traffic vehicle information acquisition method based on Mask R-CNN
CN110852177B (en) Obstacle detection method and system based on monocular camera
CN116685874A (en) Camera-laser radar fusion object detection system and method
CN106327880B (en) A kind of speed recognition methods and its system based on monitor video
CN112437501A (en) Multi-sensor beyond-the-horizon ad hoc network method based on traffic semantics and game theory
CN114093165A (en) Roadside laser radar-based vehicle-pedestrian conflict automatic identification method
CN107221175A (en) A kind of pedestrian is intended to detection method and system
Wu et al. A variable dimension-based method for roadside LiDAR background filtering
Lin et al. Vehicle detection and tracking using low-channel roadside LiDAR
Lin et al. Density variation-based background filtering algorithm for low-channel roadside lidar data
CN114694060A (en) Road shed object detection method, electronic equipment and storage medium
Zhao Exploring the fundamentals of using infrastructure-based LiDAR sensors to develop connected intersections
Wu et al. Grid-based lane identification with roadside LiDAR data
Chiang et al. Fast multi-resolution spatial clustering for 3D point cloud data
CN114067552A (en) Pedestrian crossing track tracking and predicting method based on roadside laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210903