CN116935630A - Vehicle-road co-location system and method - Google Patents
- Publication number: CN116935630A (application CN202310737285.6A)
- Authority: CN (China)
- Prior art keywords: vehicle, target, road, detection, road side
- Legal status: Pending
Classifications
- G08G1/0125 — Traffic control systems for road vehicles: measuring and analyzing of parameters relative to traffic conditions; traffic data processing
- G01S17/86 — Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S17/894 — 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
- G06T7/277 — Analysis of motion involving stochastic approaches, e.g. using Kalman filters
- G06T7/70 — Determining position or orientation of objects or cameras
- G08G1/0116 — Measuring and analyzing of parameters relative to traffic conditions based on data from roadside infrastructure, e.g. beacons
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/10032 — Satellite or aerial image; remote sensing
- G06T2207/10044 — Radar image
- G06T2207/30241 — Trajectory
- Y02D30/70 — Reducing energy consumption in wireless communication networks
Abstract
The invention discloses a vehicle-road co-location system and method. The vehicle-road co-location system comprises a road side sensor unit, an edge calculation unit, a road side communication unit, a vehicle-mounted communication unit and a vehicle-mounted calculation unit, connected in sequence for data transmission. The vehicle-mounted calculation unit comprises a preprocessing module, a multi-target matching tracking module, a target position prediction module and a self-vehicle probability prediction module, connected in sequence, and is used for positioning the vehicle in the road area according to the road side perception information and the vehicle's own state information. The invention removes the dependence on GNSS signals, realizes long-distance continuous positioning of connected vehicles, and obtains the relative position relation with surrounding vehicles, making it more practical.
Description
Technical Field
The invention belongs to the technical field of intelligent driving systems, and particularly relates to a vehicle-road co-location system and a method.
Background
In recent years, with the development of automatic driving, vehicle-networking communication and road side sensing technologies, intelligent connected vehicles and their derivative products (such as automatic driving buses, unmanned mine trucks and unmanned delivery vehicles) have been gradually popularized and deployed. Accurate vehicle positioning is critical to safe and efficient driving and to vehicle dispatch management. At present, the Global Navigation Satellite System (GNSS) is the mainstream vehicle positioning technology; however, in some complex traffic scenarios (such as overpasses, tunnels and areas surrounded by high-rise buildings), loss of GNSS signals easily causes positioning failure.
In the prior art, among single-vehicle positioning methods for GNSS-denied scenes, positioning based on visual or lidar odometry is easily affected by error accumulation, so its long-distance positioning performance is poor; positioning based on map matching incurs high map updating and maintenance costs.
The prior art also includes positioning technologies based on vehicle-vehicle cooperation and vehicle-road cooperation.
Technologies based on vehicle-vehicle cooperation depend on the vehicular mobile communication network and still rely on GNSS signals from the vehicle itself or from surrounding vehicles to assist positioning.
Positioning methods based on vehicle-road cooperation mostly calculate the distance and azimuth between the vehicle and roadside communication equipment from signal reflection intensity and reflection time, thereby realizing vehicle positioning. Existing vehicle-road cooperative positioning methods often require dense deployment of a large amount of roadside equipment at short intervals, and the own vehicle cannot obtain its relative position with respect to other vehicles. In addition, existing vehicle positioning methods based on road side perception information mostly remain at the simulation stage or rely only on cameras for rough positioning, without considering the accuracy of cooperative positioning based on multi-source heterogeneous perception information under mixed traffic. In an actual traffic scene, long-distance continuous positioning across road sections covered by multi-point roadside equipment with different communication and perception ranges is difficult, and the mixed running of connected and non-connected vehicles greatly increases the difficulty of accurately positioning the own vehicle; existing vehicle-road co-location methods therefore lack good positioning accuracy and practicability.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior art, a first object of the present invention is to provide a vehicle-road co-location system, and a second object is to provide a vehicle-road co-location method for improving the practicability and accuracy of vehicle location.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a vehicle-road co-location system comprises a road side sensor unit, an edge calculation unit, a road side communication unit, a vehicle-mounted communication unit and a vehicle-mounted calculation unit which are sequentially connected for data transmission;
the road side sensor unit is used for collecting information of vehicles in a road area;
the edge computing unit is used for processing and forming road side perception information of the vehicle in the road area according to the information acquired by the road side sensor unit; the road side communication unit is used for sending the road side perception information obtained by the edge calculation unit to the vehicle-mounted communication unit;
the vehicle-mounted communication unit is used for transmitting the received road side perception information to the vehicle-mounted calculation unit;
the vehicle-mounted computing unit is used for positioning the vehicle in the road area according to the road side perception information and the state information of the vehicle;
the vehicle-mounted computing unit comprises a preprocessing module, a multi-target matching tracking module, a target position prediction module and a self-vehicle probability prediction module which are connected in sequence;
The preprocessing module is connected with the vehicle-mounted communication unit and is used for preprocessing the road side perception information and filtering invalid non-vehicle targets and repeated vehicle targets;
the multi-target matching tracking module is used for acquiring and tracking the position and speed change of the vehicle target according to the road side perception information;
the target position prediction module is used for predicting road side perception information by adopting an extended Kalman filter according to the position and speed change of the vehicle target, correcting the position of the vehicle target and then updating the road side perception information;
the self-vehicle probability prediction module is used for calculating the probability that the currently tracked vehicle target is the vehicle according to the updated road side perception information, so that the vehicle is positioned.
Preferably, the road side sensor unit comprises a plurality of sensors respectively deployed on a plurality of points in the road area, and the types of the sensors comprise a laser radar and a camera;
the laser radar and the camera are used for acquiring point cloud and image data of the road area;
each sensor is electrically connected with one edge computing unit for data transmission, and each sensor is provided with global positioning coordinates and joint calibration parameters corresponding to the current coordinate system of the road area.
Further, the edge computing unit comprises a target detection module, a perception fusion module, a coordinate conversion module and a coding module;
the target detection module is connected with a plurality of sensors in the road side sensor unit and is used for respectively carrying out two-dimensional target detection and three-dimensional target detection on the images and the point cloud data acquired by the road side sensors and outputting target detection information of different sensors; the target detection information comprises the geometric size, the position and a detection frame of a vehicle target;
the perception fusion module is connected with the target detection module and is used for carrying out data matching, association and false detection elimination on the two-dimensional target detection information of the image and the three-dimensional target detection information of the point cloud, and outputting the processed three-dimensional target detection information as road side perception information;
the coordinate conversion module is connected with the perception fusion module and is used for converting coordinate points in the current coordinate system of the road area into longitude and latitude coordinates in the world coordinate system;
the coding module is connected with the coordinate conversion module and is used for carrying out data coding on the road side perception information according to a standard C-V2X data communication protocol.
Further, the preprocessing module is used for receiving the encoded road side perception information, decoding it, converting the longitude and latitude coordinates of the world coordinate system in the road side perception information into coordinates in the current road area coordinate system, and eliminating invalid data such as non-vehicle targets and repeated detection targets from the road side perception information.
Further, the process of acquiring and tracking the respective position and speed changes of a plurality of vehicle targets is as follows:
traversing the road side perception information of all detected vehicle targets and all tracked vehicle targets in the current frame, and setting them as detection targets and tracking targets respectively;
if the current frame is the first frame of tracking, each detection target in the current frame is directly used as a tracking target of the next frame;
if the current frame is not the first frame of tracking, calculating, for each detection target in the current frame against all tracking targets, the geometric intersection-over-union (IoU), the position distance, and the speed deduced from the position distance and the time difference between the previous and current frames;
the geometric intersection-over-union is calculated as follows: the geometric center points of the detection frames of the detection target and the tracking target are made to coincide, and the ratio of the intersection volume to the union volume of the two detection frames is calculated;
the position distance is calculated as follows: the Euclidean distance between the geometric center points of the detection frames of the detection target and the tracking target;
the speed is calculated as follows: the distance between the detection frames of the detection target and the tracking target divided by the time interval between the two frames;
setting a speed matching probability Prob_vel, where v_c, v_t and v_thresh denote the speed of the detection target in the current frame, the speed of the tracking target, and a set speed threshold, respectively;
setting a distance matching probability Prob_dis, where d_o and d_thresh denote the distance between the detection target and the tracking target in the current frame and a set distance threshold, respectively;
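The expressions for Prob_vel and Prob_dis appear only as images in the source text. A plausible reconstruction, assuming a linear score clipped to [0, 1] and consistent with the surrounding definitions (an assumption, not the patent's confirmed formulas; the embodiment's mention of the inter-frame interval Δt suggests it may also enter the distance normalization), is:

```latex
\mathrm{Prob}_{vel} = \max\!\left(0,\; 1 - \frac{\lvert v_c - v_t \rvert}{v_{thresh}}\right),
\qquad
\mathrm{Prob}_{dis} = \max\!\left(0,\; 1 - \frac{d_o}{d_{thresh}}\right)
```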
setting the value of the geometric dimension merging ratio as the IOU matching probability Prob iou ;
According to IOU matching probability Prob iou Probability of distance matching Prob dis Probability of speed match Prob vel Calculating the target matching probability Prob of the detection target and all tracking targets in the current frame m Probability of target match Prob m The following formula is shown:
Prob m =w iou Prob iou +w vel Prob vel +w dis Prob dis
wherein w is iou 、w vel 、w dis Respectively corresponding to IOU matching probability Prob iou Probability of speed match Prob vel Probability of distance matching Prob dis The respective weight values;
after all detection targets of the current frame have been traversed against all tracking targets, updating the road side perception information of all tracking targets according to the target matching probability Prob_m.
Further, the process of updating the road side perception information of all tracking targets according to the target matching probability Prob_m is as follows:
if, for a detection target, the target matching probability Prob_m with respect to some or all tracking targets is greater than a set matching threshold, the tracking target with the highest Prob_m and that detection target of the current frame are judged to be the same vehicle, and the road side perception information of the detection target of the current frame replaces the road side perception information of the corresponding tracking target;
if the target matching probability Prob_m of a detection target of the current frame with respect to all tracking targets is smaller than the set matching threshold, it is judged that no corresponding tracking target exists in the previous frame, and the road side perception information of the detection target is directly taken as the road side perception information of a corresponding tracking target in the next frame.
Further, the process of predicting the road side perception information and correcting the target position with the extended Kalman filter is as follows:
if the detection target in the current frame cannot be matched with the tracking target, constructing and initializing an extended Kalman filter by utilizing the position of the detection target in the current frame;
if the detection target in the current frame can be matched with a certain tracking target, predicting an extended Kalman filter corresponding to the tracking target by utilizing the position and the speed corresponding to the detection target in the current frame, and outputting the corrected position as the latest position of the detection target;
If one tracking target cannot be matched with the detection target of the current frame, predicting and updating the position of the tracking target corresponding to the current frame by utilizing the road side perception information of the tracking target and an extended Kalman filter of the tracking target and combining the time interval of the current frame and the previous frame.
Further, the process of calculating the probability that a vehicle target is the own vehicle is as follows:
traversing the updated road side perception information of all vehicle targets in the current frame, and statistically calculating the geometric size, position, speed and travelled position distance of each vehicle target;
according to the geometric size, speed and travelled position distance of the own vehicle, respectively calculating the latest intersection-over-union IoU_last between the own vehicle and the detection frame of each vehicle target, the average intersection-over-union IoU_avg, the self-speed probability Prob_vel^ego that the vehicle target is the own vehicle, and the travelled-distance probability Prob_dis^ego; then, based on the adaptive weighting method of coefficient-of-variation statistics, calculating the weighted own-vehicle matching probability Prob_ego that the vehicle target is the own vehicle, and judging from Prob_ego whether the vehicle target of the corresponding tracking sequence is the own vehicle;
in the calculation of the self-speed probability Prob_vel^ego, the travelled-distance probability Prob_dis^ego and the weighted own-vehicle matching probability Prob_ego, v_t and v_e denote the speed of the vehicle target and the speed of the own vehicle respectively, d_t, d_e and d_thresh denote the travelled distance of the vehicle target, the travelled distance of the own vehicle and a travelled-distance threshold respectively, w_i denotes a weight, and P_i denotes the weighting term of type i.
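These formulas are likewise rendered as images in the source. The weighted sum over the four terms follows directly from the text; the clipped-linear forms of the speed and distance probabilities below are assumptions in the same style as the matching probabilities above:

```latex
\mathrm{Prob}^{ego}_{vel} = \max\!\left(0,\; 1 - \frac{\lvert v_t - v_e \rvert}{v_{thresh}}\right),
\qquad
\mathrm{Prob}^{ego}_{dis} = \max\!\left(0,\; 1 - \frac{\lvert d_t - d_e \rvert}{d_{thresh}}\right),
\qquad
\mathrm{Prob}_{ego} = \sum_i w_i P_i,
\quad
P_i \in \left\{\mathrm{IoU}_{last},\; \mathrm{IoU}_{avg},\; \mathrm{Prob}^{ego}_{vel},\; \mathrm{Prob}^{ego}_{dis}\right\}
```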
Further, the adaptive weighting method based on coefficient-of-variation statistics is as follows:
after initializing the weights of the latest intersection-over-union IoU_last, the average intersection-over-union IoU_avg, the self-speed probability Prob_vel^ego and the travelled-distance probability Prob_dis^ego, an IoU variation coefficient and a speed variation coefficient are set to adaptively adjust the different weights;
in the adaptive adjustment, CV_j denotes the historical IoU variation coefficient or historical speed variation coefficient of a vehicle target, j indexes the IoU or the speed, CV_j^max denotes the maximum historical IoU or speed variation coefficient over all vehicle targets of the current frame, A_iou and A_vel denote the historical IoU adaptive coefficient and historical speed adaptive coefficient of a vehicle target respectively, and w_i^init denotes the initialization weights.
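The adjustment formula itself is an image in the source. The coefficient of variation has its standard definition; one plausible reading of the adjustment, in which the IoU and speed adaptive coefficients rescale the initialized weights before renormalization (an assumption, not the patent's confirmed rule), is:

```latex
CV_j = \frac{\sigma_j}{\mu_j},
\qquad
A_j = \frac{CV_j}{CV_j^{\max}},
\qquad
w_i = \frac{A_j\, w_i^{init}}{\sum_k A_k\, w_k^{init}}
```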
A vehicle-road co-location method comprises the following steps:
Collecting information of vehicles in a road area;
processing according to the acquired information to form road side perception information of the vehicle in the road area;
preprocessing road side perception information of each vehicle, and filtering invalid non-vehicle targets and repeated vehicle targets;
acquiring and tracking the position and speed change of a vehicle target according to road side perception information of a plurality of vehicles;
predicting the road side perception information with an extended Kalman filter according to the position and speed change of the vehicle target, correcting the position of the vehicle target, and then updating the road side perception information;
and calculating the probability that the currently tracked vehicle target is the vehicle according to the updated road side perception information, thereby realizing the positioning of the vehicle.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the vehicle-road co-location system and method perform co-location by means of road side perception information from multiple types of sensors together with the geometric size and speed information of the vehicle; compared with the existing vehicle-vehicle co-location mode, this removes the dependence on GNSS signals; compared with the existing vehicle-road co-location mode based on the received signal strength of roadside RSUs and the like, it realizes long-distance continuous positioning of connected vehicles and obtains the relative position relation between the connected vehicle and surrounding vehicles, and is therefore more practical;
further, through target position prediction, the position of the currently tracked target vehicle can be predicted from historical data when road side perception information is lost; compared with GPS-only or roadside-perception-only positioning methods, this effectively increases the output frequency of positioning signals and provides more real-time positioning information for connected vehicles;
furthermore, through own-vehicle probability prediction combined with the adaptive weighting method based on coefficient-of-variation statistics, the vehicle's position can be effectively determined under mixed running of connected and non-connected vehicles, with higher positioning accuracy and precision and practical application value.
Drawings
FIG. 1 is a schematic diagram of a structural framework of a vehicle-road co-location system of the present invention;
FIG. 2 is a schematic flow chart of the vehicle-road co-location method of the present invention;
FIG. 3 is a flow chart of the multi-objective matching tracking of FIG. 2;
FIG. 4 is a flow chart of the target position prediction of FIG. 2;
FIG. 5 is a flowchart of the own-vehicle probability prediction of FIG. 2.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that the positional or positional relationship indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", and the like are based on the positional or positional relationship shown in the drawings, are merely for convenience of describing the present disclosure and simplifying the description, and do not indicate or imply that the apparatus or element in question must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present disclosure.
Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Likewise, the terms "a," "an," or "the" and similar terms do not denote a limitation of quantity, but rather denote the presence of at least one. The word "comprising" or "comprises", and the like, means that elements or items appearing before the word are encompassed by the element or item recited after the word and equivalents thereof, and that other elements or items are not excluded. The terms "electrically connected" or "connected" and the like are not limited to physical or mechanical electrical connection, but may include electrical connection, whether direct or indirect.
Example 1
As shown in FIG. 1, this Embodiment 1 provides a vehicle-road co-location system, which is applicable to vehicle-road co-location in traffic scenes such as industrial parks, urban road networks and highways, and is especially suitable for roadside sensing scenes with multiple sensors, multiple points and multiple viewing angles.
The system of this embodiment comprises a road side sensor unit, an edge computing unit (MEC), a road side communication unit (RSU), an on-board communication unit (OBU) and an on-board computing unit, connected in sequence and capable of bidirectional data transmission. In this embodiment, the on-board communication unit (OBU) and the on-board computing unit are preferably integrated into the vehicle system of the intelligent connected vehicle.
The road side sensor units are arranged in the road area, and several of them can be deployed at multiple points of the same continuous road area. Each road side sensor unit contains one or more types of sensors, including lidar, cameras, millimeter-wave radar and the like; each type has a different perception range and viewing angle, and the combined, information-fused perception range of the various sensors covers the road area as fully as possible, achieving a multi-view effect. Each sensor is electrically connected to an edge computing unit for data transmission, and each sensor is provided with global positioning coordinates and joint calibration parameters corresponding to the current coordinate system of the road area. The road side sensor unit is used for collecting information of vehicles in the road area, in data forms including images and point cloud data. In this embodiment, the road side sensor units are preferably installed on smart road poles, and each sensor communicates with the edge computing unit directly over a wired network connection.
The edge computing units are arranged in the road area; they can be deployed at multiple points of the same continuous road area, or the edge computing unit of one point can be connected to the road side sensor units of several points. In this embodiment, the edge computing unit is preferably a dedicated edge-computing device, also installed on a smart road pole, and several types of sensors can be attached to each edge computing unit for receiving and processing road side sensing information. The edge computing unit performs joint coordinate calibration of the sensors in the road side sensor unit, performs two-dimensional and three-dimensional target detection on the images and point cloud data to obtain target detection information, converts road side perception information from the different sensor coordinate systems into a unified world coordinate system, and encodes the coordinate-converted three-dimensional target detection data.
The edge computing unit comprises a target detection module, a perception fusion module, a coordinate conversion module and a coding module.
The target detection module is connected with the sensors in the road side sensor unit and is used for performing two-dimensional target detection on the images and three-dimensional target detection on the point cloud data acquired by the road side sensors, and outputting target detection information. The target detection information includes the category, geometric size, center point coordinates and the like of the vehicle target.
The perception fusion module is connected with the target detection module and is used for performing data matching and association on the two-dimensional target detection information of the image and the three-dimensional target detection information of the point cloud, eliminating false detections, and outputting the fused three-dimensional target detection information as road side perception information. The three-dimensional target detection information comprises the three-dimensional target detection frame and detection frame information in the point cloud.
The coordinate conversion module is connected with the perception fusion module for data transmission. The coordinate conversion module is used for uniformly converting coordinate points of various data such as a sensor, a vehicle target, a three-dimensional target detection frame and the like under a current coordinate system of a road area into longitude and latitude coordinates under a world coordinate system, and then combining the longitude and latitude coordinates with information of the detection frame to form road side sensing information (RSM), wherein specific data forms of the road side sensing information comprise the detection frame, the target detection information and the longitude and latitude coordinates. In the present embodiment, the coordinate conversion of each sensor and each vehicle is preferably based on the center point coordinates. In this embodiment, it is further preferable that the road side perception information generated by the perception fusion module has time-stamped data. In this embodiment, it is further preferable that the road side sensing information of each vehicle target is recorded in the form of a data sequence, and the data sequence is named as a tracking sequence, and the tracking sequences of all vehicle targets form a road side sensing information set.
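As an illustration of this conversion step, the following sketch maps detection-frame center points from a planar road-area frame to WGS84 longitude/latitude with pyproj; the choice of UTM zone 50N for the road-area frame is a hypothetical assumption, since the patent does not specify the projection.

```python
from pyproj import Transformer

# Hypothetical assumption: the road-area frame is UTM zone 50N (EPSG:32650).
# The patent only states that a unified world frame (longitude/latitude) is used.
to_wgs84 = Transformer.from_crs("EPSG:32650", "EPSG:4326", always_xy=True)

def road_to_lonlat(x: float, y: float) -> tuple[float, float]:
    """Convert a detection-frame center point (meters) to (lon, lat) in degrees."""
    lon, lat = to_wgs84.transform(x, y)
    return lon, lat
```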
The coding module is connected with the coordinate conversion module for data transmission. The coding module is used for carrying out data coding on the road side perception information according to a set data communication protocol to form a data format suitable for wireless communication transmission. In this embodiment, the encoding module preferably performs data encoding on the road side sensing information according to the C-V2X protocol.
The road side communication unit is electrically connected with the coding module for data transmission. The road side communication unit is used for wirelessly transmitting the road side perception information after data encoding, so that vehicles in the road area can wirelessly receive the corresponding road side perception information, and wireless communication transmission of the road side perception information is completed.
The vehicle-mounted communication unit and the road side communication unit perform data transmission in a wireless communication mode, and in the embodiment, the wireless communication of the vehicle-mounted communication unit and the road side communication unit preferably adopts a C-V2X protocol and is transmitted in a broadcast mode. The vehicle-mounted communication unit is used for receiving corresponding road side perception information.
The vehicle-mounted computing unit is used for acquiring and storing speed data, geometric dimension data and vehicle type information of the vehicle.
The vehicle-mounted computing unit is electrically connected with the vehicle-mounted communication unit for data transmission and is used for positioning corresponding vehicle targets in the road area. The vehicle-mounted computing unit comprises a preprocessing module, a multi-target matching tracking module, a target position prediction module and a self-vehicle probability prediction module.
The preprocessing module is connected with the vehicle-mounted communication unit. The preprocessing module is used for processing data required for positioning corresponding vehicles, receiving coded road side perception information, decoding the road side perception information, converting longitude and latitude coordinates of a world coordinate system in the road side perception information into coordinates under a current road area coordinate system, filtering and removing invalid data such as non-vehicle targets and repeated detection targets according to information such as vehicle types and time stamps in the road side perception information, and synchronizing time stamps of the acquired road side perception information. The timestamp synchronization is used for determining information such as a detection frame, target detection information, coordinates and the like corresponding to each frame at a corresponding time point.
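The sketch below illustrates the filtering part of this preprocessing (decoding and timestamp synchronization are protocol-specific and omitted); the vehicle class labels and the duplicate-merge radius are assumed values, not taken from the patent.

```python
from dataclasses import dataclass

VEHICLE_CLASSES = {"car", "bus", "truck"}  # assumed class labels

@dataclass
class Detection:
    obj_id: int
    category: str
    x: float          # road-area coordinates after decoding, meters
    y: float
    timestamp: float  # seconds

def preprocess(frame: list[Detection], dup_dist: float = 0.5) -> list[Detection]:
    """Drop non-vehicle targets, then drop near-duplicate detections of the
    same vehicle reported by different roadside sensors (dup_dist is an
    assumed merge radius, not a value from the patent)."""
    vehicles = [d for d in frame if d.category in VEHICLE_CLASSES]
    kept: list[Detection] = []
    for d in vehicles:
        if all((d.x - k.x) ** 2 + (d.y - k.y) ** 2 > dup_dist ** 2 for k in kept):
            kept.append(d)
    return kept
```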
The multi-target matching tracking module is connected with the preprocessing module for data transmission. The multi-target matching tracking module is used for acquiring and tracking the position and speed changes of a plurality of vehicle targets in the road area.
The method for acquiring and tracking the vehicle target in the road area by the multi-target matching tracking module comprises the following steps:
traversing the tracking sequences of the corresponding road side perception information of all detected vehicle targets and all tracked vehicle targets in the current frame, and setting them as detection targets and tracking targets respectively;
If the current frame is the first frame for starting tracking, each detection target in the current frame is directly used as a tracking target of the next frame, and the related data in the tracking sequence of the detection target is used as the related data of the tracking sequence of the tracking target of the next frame;
if the current frame is not the first frame of tracking, calculating, from the tracking sequence of each detection target and the tracking sequences of all tracking targets, the geometric intersection-over-union (IoU), the position distance, and the speed deduced from the position distance and the time difference between the previous and current frames;
the geometric intersection-over-union is calculated as follows: the geometric center points of the three-dimensional detection frames of the detection target and the tracking target are made to coincide, and the ratio of the intersection volume to the union volume of the two three-dimensional detection frames is calculated;
the position distance is calculated by the following steps: calculating the Euclidean distance between the geometric center points of the three-dimensional detection frames of the detection target and the tracking target;
the speed is calculated by the following steps: calculating the distance between the detection target and the three-dimensional detection frame of the tracking target divided by the time interval between the front frame and the rear frame;
screening the corresponding tracking target of the previous frame for each detection target realizes the matching of detection targets and tracking targets in the current frame; the IoU matching probability Prob_iou, the distance matching probability Prob_dis and the speed matching probability Prob_vel are introduced to aid the screening;
setting a speed matching probability Prob_vel, where v_c, v_t and v_thresh denote the speed of the detection target in the current frame, the speed of the tracking target, and a set speed threshold respectively; this embodiment preferably sets v_thresh = 4 m/s;
setting a distance matching probability Prob_dis, where d_o and d_thresh denote the distance between the detection target and the tracking target in the current frame and a set distance threshold respectively, and Δt is the time interval between the current frame and the previous frame; this embodiment preferably sets d_thresh = 20 m;
setting the value of the geometric intersection-over-union as the IoU matching probability Prob_iou;
according to the IoU matching probability Prob_iou, the distance matching probability Prob_dis and the speed matching probability Prob_vel, calculating the target matching probability Prob_m of a detection target in the current frame against all tracking targets of the previous frame as:
Prob_m = w_iou · Prob_iou + w_vel · Prob_vel + w_dis · Prob_dis
where w_iou, w_vel and w_dis are the weight values of the IoU matching probability Prob_iou, the speed matching probability Prob_vel and the distance matching probability Prob_dis respectively; this embodiment preferably sets w_iou = 0.2, w_vel = 0.3 and w_dis = 0.5;
After all detection targets of the current frame have been traversed against all tracking targets, the road side perception information of all detection targets is updated according to the target matching probability Prob_m and whether a matched tracking target exists. If, for a detection target, the target matching probability Prob_m with respect to some or all tracking targets is greater than a set matching threshold, the tracking target with the highest Prob_m and that detection target of the current frame are judged to be the same vehicle; the tracking sequence of that tracking target is then replaced by the tracking sequence of the detection target of the current frame to form a new road side perception information set, so that the geometric size, position, speed and other information of the detection target are saved and its road side perception information becomes the road side perception information of the tracking target in the next frame. If the target matching probability Prob_m of a detection target of the current frame with respect to all tracking targets is smaller than the set matching threshold, it is judged that no corresponding tracking target exists in the previous frame; a new tracking sequence is then established for that detection target and added to the historical road side perception information set, so that the geometric size and position information of the detection target of the current frame are saved and its road side perception information becomes the road side perception information of a newly added tracking target in the next frame. This embodiment preferably sets the matching threshold for the target matching probability Prob_m to 0.5.
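A minimal sketch of this matching step follows; the weighted sum, the thresholds and the weights come from this embodiment, while the clipped-linear forms of Prob_vel and Prob_dis and the helper methods `iou_with`, `dist_to` and `.v` are assumptions.

```python
def match_probability(iou: float, v_det: float, v_trk: float, dist: float,
                      v_thresh: float = 4.0,   # m/s, from this embodiment
                      d_thresh: float = 20.0,  # m, from this embodiment
                      w_iou: float = 0.2, w_vel: float = 0.3,
                      w_dis: float = 0.5) -> float:
    """Target matching probability Prob_m. The weighted sum and all parameter
    values come from the text; the clipped-linear forms of Prob_vel and
    Prob_dis are assumptions (the formulas are images in the source)."""
    prob_vel = max(0.0, 1.0 - abs(v_det - v_trk) / v_thresh)
    prob_dis = max(0.0, 1.0 - dist / d_thresh)
    return w_iou * iou + w_vel * prob_vel + w_dis * prob_dis

MATCH_THRESHOLD = 0.5  # from this embodiment

def associate(detections, tracks):
    """Greedy association: each detection takes the track with the highest
    Prob_m above the threshold; unmatched detections start new tracking
    sequences. `iou_with`, `dist_to` and `.v` are hypothetical helpers."""
    matched, new_tracks = [], []
    for det in detections:
        scored = [(match_probability(det.iou_with(t), det.v, t.v,
                                     det.dist_to(t)), t) for t in tracks]
        best_p, best_t = max(scored, key=lambda s: s[0], default=(0.0, None))
        if best_p > MATCH_THRESHOLD:
            matched.append((det, best_t))
        else:
            new_tracks.append(det)
    return matched, new_tracks
```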
And the target position prediction module is connected with the multi-target matching tracking module for data transmission. And the target position prediction module predicts the road side perception information by adopting an extended Kalman filter according to the position and speed change of the vehicle target, corrects the position of the vehicle target and then updates the road side perception information.
According to the result of the multi-target matching tracking in the last step, the method for predicting and correcting the position of the vehicle target by using the extended Kalman filter is as follows:
if the detection target in the current frame cannot be matched with the tracking target, constructing and initializing an extended Kalman filter by utilizing the position of the detection target in the current frame;
if the detection target in the current frame can be matched with a certain tracking target, predicting an extended Kalman filter corresponding to the tracking target by utilizing the position and the speed corresponding to the detection target in the current frame, and outputting the corrected position as the latest position of the detection target;
if one tracking target cannot be matched with the detection target of the current frame, predicting and updating the position of the tracking target corresponding to the current frame by utilizing the road side perception information of the tracking target and an extended Kalman filter of the tracking target and combining the time interval of the current frame and the previous frame;
After the positions of all the vehicle targets are updated in the three situations, the new road side perception information set of the current frame is correspondingly updated.
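The following sketch illustrates these three update cases; for brevity it uses a linear constant-velocity Kalman filter as a stand-in for the extended Kalman filter specified by the patent, and all noise parameters are assumed.

```python
import numpy as np

class TrackFilter:
    """Constant-velocity Kalman filter used to illustrate the three update
    cases; the patent specifies an extended Kalman filter, so treat this
    linear version as a simplified stand-in."""
    def __init__(self, x: float, y: float):
        self.s = np.array([x, y, 0.0, 0.0])  # state [x, y, vx, vy]
        self.P = np.eye(4) * 10.0            # assumed initial covariance
        self.Q = np.eye(4) * 0.1             # assumed process noise
        self.R = np.eye(2) * 1.0             # assumed measurement noise
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)

    def predict(self, dt: float) -> np.ndarray:
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        self.s = F @ self.s
        self.P = F @ self.P @ F.T + self.Q
        return self.s[:2]                    # predicted position

    def update(self, zx: float, zy: float) -> np.ndarray:
        z = np.array([zx, zy])
        y = z - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]                    # corrected position

# The three cases from the text:
# 1) unmatched detection -> TrackFilter(det.x, det.y)        (new filter)
# 2) detection matched to a track -> f.predict(dt); f.update(det.x, det.y)
# 3) track with no matched detection -> f.predict(dt)        (coast on history)
```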
And the self-vehicle probability prediction module is connected with the target position prediction module for data transmission. The self-vehicle probability prediction module is used for calculating the probability that the vehicle target belongs to the vehicle according to the road side perception information set updated by the target position prediction module. The method for calculating the probability that the vehicle object belongs to the vehicle is as follows:
receiving a road side perception information set updated by a target position prediction module, traversing all tracking sequences in the updated road side perception information set, and counting the geometric dimension, the target position, the target speed and the moved position distance of a vehicle target in each tracking sequence;
according to the geometric dimension of the vehicle, the speed of the vehicle and the position distance information of the movement of the vehicle, respectively calculating the latest intersection ratio of the vehicle and all the three-dimensional detection frames of the vehicle targetsAverage cross-over ratioSelf speed probability of the vehicle object belonging to the vehicle itself +.>Probability of distance travelled by the vehicle>Then, weighting and calculating weighted self-vehicle matching probability Probego that the vehicle target belongs to the vehicle, and judging whether the vehicle target corresponding to the corresponding tracking sequence belongs to the vehicle according to the weighted self-vehicle matching probability Probego;
The related calculation formula is shown as follows:
wherein v is t 、v e Respectively representing the speed of the vehicle target and the speed of the vehicle itself, d t 、d e 、d thresh Respectively representing the moving distance of the vehicle target, the moving distance of the own vehicle and the moving distance threshold value, w i The weight is represented by a weight that,representing a weighting item corresponding to the type i;
in order to better distinguish the vehicle from the surrounding vehicles, the present embodiment preferably further adopts an adaptive weighting method based on coefficient of variation statistics, so as to compare the latest intersection ratioAverage cross ratio->Probability of speed of vehicleProbability of distance travelled by the vehicle>After the weight of the model is initialized, the cross-ratio variation coefficient and the speed variation coefficient are set to carry out self-adaptive adjustment on different weights, and the calculation of the self-adaptive adjustment is shown as the following formula:
wherein CV j A cross-over ratio variation coefficient representing a history of a certain vehicle target or a speed variation coefficient of the history, j representing a corresponding cross-over ratio or speed,represents the maximum value of the cross ratio variation coefficient of all vehicle target histories or the speed variation coefficient of the histories of the current frame,A iou 、A vel cross ratio adaptive coefficient, w, each representing history of a particular vehicle target i init Indicating the initialization weights, the embodiment preferably sets the initialization weights to be respectively
The weight w obtained by self-adaptive adjustment i Substitution weighted bicycle matching probability Prob ego Obtaining the weighted self-vehicle matching probability Prob of the corresponding tracking sequence belonging to the vehicle after self-adaptive weighted calculation ego Weighted vehicle matching probability Prob is taken ego The target position of the highest vehicle target is taken as the current position of the vehicle itself.
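A sketch of the weighted own-vehicle matching with coefficient-of-variation adaptation follows; the equal initialization weights and the exact adaptation rule are assumptions, since the patent's values and formula are not reproduced in the text.

```python
import numpy as np

def coefficient_of_variation(history: list[float]) -> float:
    """Standard CV = std / mean over a track's history."""
    v = np.asarray(history, dtype=float)
    m = v.mean()
    return float(v.std() / m) if m > 0 else 0.0

INIT_WEIGHTS = {"iou_last": 0.25, "iou_avg": 0.25, "vel": 0.25, "dis": 0.25}  # assumed

def ego_probability(terms: dict[str, float], cv: dict[str, float],
                    cv_max: dict[str, float],
                    init_w: dict[str, float] = INIT_WEIGHTS) -> float:
    """Weighted own-vehicle matching probability Prob_ego with CV-based weight
    adaptation. Scaling the IoU-related weights by CV_iou / CV_iou_max and the
    speed weight by CV_vel / CV_vel_max, then renormalizing, is an assumed
    reading of the patent text, not a confirmed formula."""
    w = dict(init_w)
    if cv_max["iou"] > 0:
        a_iou = cv["iou"] / cv_max["iou"]
        w["iou_last"] *= a_iou
        w["iou_avg"] *= a_iou
    if cv_max["vel"] > 0:
        w["vel"] *= cv["vel"] / cv_max["vel"]
    total = sum(w.values())
    return sum(w[k] / total * terms[k] for k in terms)

# The tracked target with the highest Prob_ego is taken to be the own vehicle,
# and its predicted position becomes the vehicle's current position.
```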
In the operation of this embodiment, the transmission chain of the data information runs, in order, from the road side sensor unit as the data source through the target detection module, the perception fusion module, the coordinate conversion module, the encoding module, the road side communication unit, the vehicle-mounted communication unit, the preprocessing module, the multi-target matching tracking module, the target position prediction module and the self-vehicle probability prediction module, finally yielding the positioning information of the vehicle.
Compared with the prior art, the embodiment 1 has the beneficial effects that:
this embodiment performs co-location by means of road side perception information from multiple types of sensors together with the geometric size and speed information of the vehicle; compared with the existing vehicle-vehicle co-location mode, it removes the dependence on GNSS signals; compared with the existing vehicle-road co-location mode based on the received signal strength of roadside RSUs and the like, it realizes long-distance continuous positioning of connected vehicles and obtains the relative position relation between the connected vehicle and surrounding vehicles, and is therefore more practical;
Further, the target position prediction module can predict the position information of the vehicle of the current tracked target according to the historical data under the condition that the road side perception information is lost, and compared with a single GPS positioning or a single road side perception positioning method, the method can effectively improve the output frequency of positioning signals and provide more real-time positioning information for the network-connected vehicle;
furthermore, the self-vehicle probability prediction module is combined with the self-adaptive weighting method based on variation coefficient statistics, so that the position information of the vehicle can be effectively positioned under the condition that the network-connected vehicle and the non-network-connected vehicle are mixed, and the self-vehicle probability prediction module has higher positioning accuracy and positioning precision and has practical application value.
Example 2
As shown in FIGS. 2 to 5, this Embodiment 2 provides a vehicle-road co-location method, preferably based on the vehicle-road co-location system of Embodiment 1. The steps of the vehicle-road co-location method in this Embodiment 2 are as follows:
S1, acquiring image and point cloud data in the road area with sensors such as roadside cameras and lidars, identifying vehicle targets, performing coordinate conversion and then data encoding, and wirelessly transmitting the road side sensing data, which carries the target detection information in the world coordinate system, to the vehicles in the road area through the road side communication equipment;
The specific process comprises the following steps:
S11, acquiring point cloud and image data of vehicles in the same continuous road area using the road side sensor unit;
In this embodiment, preferably, each sensor is provided with global positioning coordinates and joint calibration parameters corresponding to the current coordinate system of the road area;
S12, transmitting the image and point cloud data of the road environment from the road side sensor unit to the target detection module in the edge computing unit, performing two-dimensional target detection on the images and three-dimensional target detection on the point cloud data acquired by the road side multi-sensor unit, and outputting the perception information of the different sensors;
S13, performing data matching and association on the two-dimensional image target detection information and the three-dimensional point cloud target detection information with the perception fusion module, eliminating false detections, and outputting the fused three-dimensional target detection information;
preferably, the target detection information includes a category, a geometric dimension, a center point coordinate, and a detection frame of the vehicle, where the detection frame includes a two-dimensional detection frame and a three-dimensional detection frame;
S14, using the coordinate conversion module to uniformly convert, for the fused three-dimensional target detection information, the coordinate points of the sensors and vehicle targets in the current road area coordinate system into longitude and latitude coordinates in the world coordinate system, and then combining the longitude and latitude coordinates with the target detection information to form the road side perception information;
S15, using the encoding module to encode the road side perception information according to the C-V2X protocol into a data format suitable for wireless communication transmission;
S16, using the road side communication unit to wirelessly transmit the encoded road side sensing information according to the C-V2X protocol to the vehicle-mounted communication unit of each intelligent connected vehicle in the road area;
s2, acquiring and storing speed data, geometric dimension data and vehicle types of the vehicle by using a vehicle-mounted state unit when receiving road side perception information by a vehicle-mounted communication unit of the intelligent network vehicle, and then respectively transmitting the road side perception information, the data of the vehicle and the vehicle types to a vehicle-mounted computing unit by the vehicle-mounted communication unit and the vehicle-mounted state unit;
S3, the intelligent connected vehicle uses the preprocessing module of the vehicle-mounted computing unit to receive the encoded road side perception information, decodes it according to the C-V2X protocol, converts the longitude and latitude coordinates of the world coordinate system in the road side perception information into coordinates in the current road area coordinate system, filters out invalid data such as non-vehicle targets and repeated detection targets according to information such as the vehicle type and time stamp in the road side perception information, and then performs time stamp synchronization on the acquired road side perception information;
In this embodiment, the timestamp synchronization is preferably used to determine a detection frame, target detection information, coordinates, a vehicle category, a vehicle speed, and vehicle geometry information corresponding to each frame at a corresponding time point;
S4, acquiring and tracking the position and speed changes of multiple vehicle targets in the road area using the multi-target matching tracking module of the vehicle-mounted computing unit;
the specific process of the multi-target matching tracking comprises the following steps:
S41, traversing the road side perception information tracking sequences corresponding to all detected vehicle targets and all tracked vehicle targets in the current frame, and setting them as detection targets and tracking targets respectively;
S42, if the current frame is the first frame at which tracking starts, each detection target in the current frame is directly used as a tracking target for the next frame, and the data in the detection target's tracking sequence is used as the tracking sequence data of the next frame's tracking target;
S43, if the current frame is not the first frame at which tracking starts, calculating, from the tracking sequences of the detection targets and of the tracking targets, the geometric intersection-over-union (IOU), the position distance, and the speed deduced from the position distance and the time difference between the previous and current frames, for each detection target in the current frame against all tracking targets;
The geometric IOU is calculated as follows: the geometric center points of the three-dimensional detection frames of the detection target and the tracking target are made to coincide, and the ratio of the intersection volume to the union volume of the two three-dimensional detection frames is calculated;
The position distance is calculated as follows: the Euclidean distance between the geometric center points of the three-dimensional detection frames of the detection target and the tracking target;
The speed is calculated as follows: the distance between the three-dimensional detection frames of the detection target and the tracking target divided by the time interval between the previous and current frames;
The tracking targets of the previous frame are screened for each detection target to match detection targets with tracking targets in the current frame, and the IOU matching probability Prob_iou, the distance matching probability Prob_dis and the speed matching probability Prob_vel are introduced to aid the screening;
The speed matching probability Prob_vel is calculated from v_c, the speed of the detection target in the current frame, v_t, the speed of the tracking target, and v_thresh, a set speed threshold, which in this embodiment is preferably v_thresh = 4 m/s;
The distance matching probability Prob_dis is calculated from d_o, the distance between the detection target and the tracking target in the current frame, d_thresh, a set distance threshold (preferably d_thresh = 20 m in this embodiment), and Δt, the time interval between the current frame and the previous frame;
The value of the geometric IOU is taken as the IOU matching probability Prob_iou;
S44, according to the IOU matching probability Prob_iou, the distance matching probability Prob_dis and the speed matching probability Prob_vel, calculating the target matching probability Prob_m between each detection target in the current frame and all tracking targets in the previous frame; the target matching probability Prob_m is calculated as:
Prob_m = w_iou × Prob_iou + w_vel × Prob_vel + w_dis × Prob_dis
where w_iou, w_vel and w_dis are the weight values corresponding to the IOU matching probability Prob_iou, the speed matching probability Prob_vel and the distance matching probability Prob_dis respectively; in this embodiment they are preferably set to w_iou = 0.2, w_vel = 0.3 and w_dis = 0.5;
S45, after all detection targets of the current frame have been traversed against all tracking targets, updating the road side perception information of all detection targets according to the target matching probability Prob_m and according to whether a matching tracking target exists;
S451, if the target matching probability Prob_m between the detection target and some or all of the tracking targets is greater than the set matching threshold, the tracking target with the highest target matching probability Prob_m and that detection target of the current frame are judged to be the same vehicle; the tracking sequence of that tracking target is then substituted into the tracking sequence of the detection target of the current frame to form a new road side perception information set, so that the geometric size, position, speed and other information of the detection target are saved, and the road side perception information of the detection target is updated as the road side perception information of the tracking target in the next frame;
S452, if the target matching probabilities Prob_m between the detection target and all tracking targets in the current frame are smaller than the set matching threshold, it is judged that the detection target has no corresponding tracking target in the previous frame; a new tracking sequence is then established for the detection target of the current frame and added to the historical road side perception information set, so that the geometric size and position information of the detection target of the current frame is saved, and its road side perception information becomes the road side perception information of a newly added tracking target in the next frame; in this embodiment, the matching threshold for the target matching probability Prob_m is preferably set to 0.5;
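The exact Prob_vel and Prob_dis formulas appear only as images in the original filing, so the clipped-linear forms in the sketch below are assumptions layered on the stated thresholds; the center-aligned IOU and the weighted sum follow S43 and S44 directly:

```python
# Sketch of the S43/S44 matching score. The clipped-linear Prob_vel/Prob_dis
# forms are assumptions; the thresholds and weights are the embodiment's values.
V_THRESH, D_THRESH = 4.0, 20.0            # m/s and m, from the embodiment
W_IOU, W_VEL, W_DIS = 0.2, 0.3, 0.5       # weights from the embodiment

def centered_iou(size_a: tuple, size_b: tuple) -> float:
    """Geometric IOU with box centers made to coincide; sizes are (l, w, h)."""
    inter = 1.0
    for a, b in zip(size_a, size_b):
        inter *= min(a, b)                # overlap along each axis is min(a, b)
    vol_a = size_a[0] * size_a[1] * size_a[2]
    vol_b = size_b[0] * size_b[1] * size_b[2]
    return inter / (vol_a + vol_b - inter)

def match_probability(det: dict, trk: dict, dt: float) -> float:
    """Weighted matching probability Prob_m between a detection and a track."""
    prob_iou = centered_iou(det["size"], trk["size"])
    dist = sum((a - b) ** 2 for a, b in zip(det["pos"], trk["pos"])) ** 0.5
    speed = dist / dt                     # speed deduced from inter-frame motion
    prob_vel = max(0.0, 1.0 - abs(speed - trk["vel"]) / V_THRESH)  # assumed form
    prob_dis = max(0.0, 1.0 - dist / D_THRESH)                     # assumed form
    return W_IOU * prob_iou + W_VEL * prob_vel + W_DIS * prob_dis
```

Building on match_probability(), a sketch of the S45/S451/S452 update rule follows; the greedy best-first assignment is an assumption (the patent names no assignment algorithm), while the 0.5 threshold is the embodiment's value:

```python
MATCH_THRESH = 0.5

def update_tracks(detections: list, tracks: list, dt: float) -> list:
    """Return the tracking-target list carried into the next frame."""
    next_tracks, unmatched = [], list(tracks)
    for det in detections:
        scored = [(match_probability(det, trk, dt), trk) for trk in unmatched]
        best_p, best_trk = max(scored, default=(0.0, None), key=lambda s: s[0])
        if best_p > MATCH_THRESH and best_trk is not None:
            unmatched.remove(best_trk)                 # S451: same vehicle
            det["history"] = best_trk["history"] + [det["pos"]]
        else:
            det["history"] = [det["pos"]]              # S452: new tracking sequence
        if len(det["history"]) >= 2:                   # refresh the track speed
            p, q = det["history"][-2], det["history"][-1]
            det["vel"] = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 / dt
        else:
            det["vel"] = 0.0
        next_tracks.append(det)
    return next_tracks
```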
S5, using the target position prediction module to predict and correct the position of each vehicle target in the current frame for the road side perception information set by means of an extended Kalman filter, and then updating the road side perception information set;
The specific process comprises the following steps (a filter sketch follows step S53 below):
S51, if a detection target in the current frame cannot be matched with a tracking target, constructing and initializing an extended Kalman filter using the position of the detection target in the current frame;
S52, if a detection target in the current frame can be matched with a certain tracking target, running the prediction of the extended Kalman filter corresponding to that tracking target using the position and speed of the detection target in the current frame, and outputting the corrected position as the latest position of the detection target;
S53, if a tracking target cannot be matched with any detection target of the current frame, predicting and updating the position of the tracking target in the current frame using the road side perception information of the tracking target and its extended Kalman filter, combined with the time interval between the current frame and the previous frame;
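The patent specifies an extended Kalman filter but not its motion model, so the sketch below assumes a planar constant-velocity state [x, y, vx, vy] (for which the filter reduces to a linear Kalman filter) and illustrative noise magnitudes:

```python
import numpy as np

class CVKalman:
    """Constant-velocity filter standing in for the S5 extended Kalman filter."""

    def __init__(self, pos):
        # S51: a track with no prior history is initialized from its position
        self.x = np.array([pos[0], pos[1], 0.0, 0.0])
        self.P = np.diag([1.0, 1.0, 10.0, 10.0])   # assumed initial covariance
        self.Q = np.eye(4) * 0.1                   # assumed process noise
        self.R = np.eye(2) * 0.5                   # assumed measurement noise
        self.H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])

    def predict(self, dt: float):
        """S53: roll the state forward when no detection matched this frame."""
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        return self.x[:2]

    def correct(self, pos, dt: float):
        """S52: predict, then correct with the matched detection's position."""
        self.predict(dt)
        z = np.asarray(pos, dtype=float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]   # corrected position, taken as the latest position
```

A track that matched a detection calls correct(); a track with no match this frame calls predict() alone, which is what lets S53 keep emitting positions when road side perception information is momentarily lost.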
S6, using the self-vehicle probability prediction module to calculate the probability that each vehicle target belongs to the vehicle itself;
The specific process comprises the following steps (a code sketch follows step S64 below):
S61, receiving the road side perception information set updated by the target position prediction module, traversing all tracking sequences in the updated set, and collecting the geometric dimensions, target position, target speed and travelled position distance of the vehicle target in each tracking sequence;
S62, according to the vehicle's own geometric dimensions, speed and travelled position distance, respectively calculating the latest intersection-over-union IOU_t and average intersection-over-union IOU_avg between the vehicle itself and the three-dimensional detection frames of all vehicle targets, the self-speed probability Prob_v that a vehicle target belongs to the vehicle itself, and the self travelled-distance probability Prob_d; then calculating, by weighting, the weighted self-vehicle matching probability Prob_ego that the vehicle target belongs to the vehicle itself, and judging from Prob_ego whether the vehicle target corresponding to the tracking sequence belongs to the vehicle itself;
The self-speed probability Prob_v, the self travelled-distance probability Prob_d and the weighted self-vehicle matching probability Prob_ego are computed from v_t and v_e, the speed of the vehicle target and of the vehicle itself respectively; d_t, d_e and d_thresh, the travelled distance of the vehicle target, the travelled distance of the vehicle itself and the travelled-distance threshold respectively; and the weights w_i, where each weighting term corresponds to one of the four quantities i;
S63, adopting an adaptive weighting method based on coefficient-of-variation statistics: after the weights of the latest intersection-over-union IOU_t, the average intersection-over-union IOU_avg, the self-speed probability Prob_v and the self travelled-distance probability Prob_d are initialized, coefficient-of-variation terms for the intersection-over-union and the speed are set to adaptively adjust the different weights;
where CV_j denotes the historical intersection-over-union coefficient of variation or the historical speed coefficient of variation of a given vehicle target, j denotes the corresponding quantity (intersection-over-union or speed), the maximum of these coefficients over all vehicle targets in the current frame serves as a normalizer, A_iou and A_vel denote the historical intersection-over-union and speed adaptive coefficients of a vehicle target, and w_i^init denotes the initialization weight;
In this embodiment, a preferred initialization value is set for each of the four weights;
S64, substituting the adaptively adjusted weights w_i into the weighted self-vehicle matching probability Prob_ego to obtain, after adaptive weighted calculation, the weighted self-vehicle matching probability Prob_ego that each tracking sequence belongs to the vehicle itself; the target position of the vehicle target with the highest weighted self-vehicle matching probability Prob_ego is taken as the current position of the vehicle itself.
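The Prob_v, Prob_d and coefficient-of-variation formulas of S62/S63 are likewise given only as images in the original filing, so the forms below (clipped-linear probabilities, and CV = standard deviation over mean normalized by the frame-wide maximum) are assumptions layered on the stated structure; the equal initialization weights and the thresholds are also assumed:

```python
import statistics

D_THRESH = 20.0                          # travelled-distance threshold (assumed)
W_INIT = [0.25, 0.25, 0.25, 0.25]        # assumed equal initialization weights

def coeff_var(series: list) -> float:
    """Coefficient of variation of a history (stdev / mean)."""
    m = statistics.fmean(series)
    return statistics.pstdev(series) / m if m else 0.0

def ego_probability(track: dict, ego: dict,
                    cv_max_iou: float, cv_max_vel: float) -> float:
    """Weighted probability Prob_ego that this tracked target is the ego vehicle."""
    iou_latest = track["ious"][-1]                    # latest IOU with the ego box
    iou_avg = statistics.fmean(track["ious"])         # average IOU over the history
    prob_v = max(0.0, 1.0 - abs(track["vel"] - ego["vel"]) / max(ego["vel"], 1.0))
    prob_d = max(0.0, 1.0 - abs(track["dist"] - ego["dist"]) / D_THRESH)
    # adaptive weighting: down-weight IOU/speed terms whose histories are erratic
    scale_iou = 1.0 - coeff_var(track["ious"]) / cv_max_iou if cv_max_iou else 1.0
    scale_vel = 1.0 - coeff_var(track["vels"]) / cv_max_vel if cv_max_vel else 1.0
    w = [W_INIT[0] * scale_iou, W_INIT[1] * scale_iou,
         W_INIT[2] * scale_vel, W_INIT[3]]
    terms = [iou_latest, iou_avg, prob_v, prob_d]
    return sum(wi * t for wi, t in zip(w, terms)) / sum(w)
```

The ego position of S64 is then the position of the tracking sequence maximizing this score, e.g. max(tracks, key=lambda t: ego_probability(t, ego, cv_iou_max, cv_vel_max))["pos"].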
Compared with the prior art, Embodiment 2 has the following beneficial effects:
The method performs vehicle-road co-location by means of the road side perception information from multiple types of sensors together with the vehicle's own geometric dimensions and speed information; compared with existing GNSS-based vehicle positioning, the dependence on GNSS signals can be eliminated, and compared with existing vehicle-road cooperative positioning based on, for example, the received signal strength of road side RSUs, long-distance continuous positioning of networked vehicles can be achieved and the relative positional relationship between a networked vehicle and surrounding vehicles obtained, making the method more practical;
Further, through the target position prediction step, the method can predict the position information of the currently tracked vehicle target from historical data when road side perception information is lost; compared with positioning by GPS alone or by road side perception alone, this effectively increases the output frequency of the positioning signal and provides the networked vehicle with more real-time positioning information;
Further, by combining the self-vehicle probability prediction step with the adaptive weighting method based on coefficient-of-variation statistics, the method can effectively locate the vehicle's own position when networked and non-networked vehicles are mixed, offers high positioning accuracy and precision, and has practical application value.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and is included in the protection scope of the present invention.
Claims (10)
1. A vehicle-road co-location system, characterized by comprising a road side sensor unit, an edge computing unit, a road side communication unit, a vehicle-mounted communication unit and a vehicle-mounted computing unit which are sequentially connected for data transmission;
the road side sensor unit is used for collecting information of vehicles in a road area;
the edge computing unit is used for processing the information acquired by the road side sensor unit to form road side perception information of the vehicles in the road area; the road side communication unit is used for sending the road side perception information obtained by the edge computing unit to the vehicle-mounted communication unit;
the vehicle-mounted communication unit is used for transmitting the received road side perception information to the vehicle-mounted computing unit;
the vehicle-mounted computing unit is used for positioning the vehicle in the road area according to the road side perception information and the state information of the vehicle;
the vehicle-mounted computing unit comprises a preprocessing module, a multi-target matching tracking module, a target position prediction module and a self-vehicle probability prediction module which are connected in sequence;
the preprocessing module is connected with the vehicle-mounted communication unit and is used for preprocessing the road side perception information and filtering invalid non-vehicle targets and repeated vehicle targets;
the multi-target matching tracking module is used for acquiring and tracking the position and speed change of a vehicle target according to the road side perception information;
the target position prediction module is used for predicting road side perception information by adopting an extended Kalman filter according to the position and speed change of a vehicle target, correcting the position of the vehicle target and then updating the road side perception information;
the self-vehicle probability prediction module is used for calculating the probability that the currently tracked vehicle target is the vehicle according to the updated road side perception information, so that the vehicle is positioned.
2. The vehicle-road co-location system of claim 1, wherein the road side sensor unit comprises a plurality of sensors deployed at a plurality of points in the road area respectively, the sensor types comprising lidar and cameras;
The laser radar and the camera are used for acquiring point cloud and image data of a road area;
each sensor is electrically connected with one edge computing unit for data transmission, and each sensor is provided with global positioning coordinates and joint calibration parameters corresponding to the current coordinate system of the road area.
3. The vehicle-road co-location system of claim 2, wherein the edge computing unit comprises a target detection module, a perception fusion module, a coordinate conversion module and a coding module;
the target detection module is connected with a plurality of sensors in the road side sensor unit and is used for respectively carrying out two-dimensional target detection and three-dimensional target detection on the image and the point cloud data acquired by the road side multi-sensor unit and outputting target detection information of different sensors; the target detection information comprises the geometric size, the position and a detection frame of a vehicle target;
the perception fusion module is connected with the target detection module and is used for carrying out data matching, association and false detection elimination on the two-dimensional target detection information of the image and the three-dimensional target detection information of the point cloud, and outputting the processed three-dimensional target detection information as road side perception information;
The coordinate conversion module is connected with the perception fusion module and is used for converting coordinate points in the current coordinate system of the road area into longitude and latitude coordinate target detection information in the world coordinate system;
the coding module is connected with the coordinate conversion module and is used for carrying out data coding on the road side perception information according to a standard C-V2X data communication protocol.
4. The vehicle-road co-location system of claim 3, wherein the preprocessing module is configured to receive the encoded road-side sensing information, decode the road-side sensing information, convert longitude and latitude coordinates of a world coordinate system in the road-side sensing information to coordinates in a current road area coordinate system, and reject invalid data such as non-vehicle targets and duplicate detection targets according to the road-side sensing information.
5. The vehicle-road co-location system of claim 4, wherein the process of acquiring and tracking the respective position and speed changes of a plurality of vehicle targets is as follows:
traversing the road side perception information of all detected vehicle targets and all tracked vehicle targets in the current frame, and setting them as detection targets and tracking targets respectively;
if the current frame is the first frame for starting tracking, each detection target in the current frame is directly used as a tracking target of the next frame;
if the current frame is not the first frame at which tracking starts, calculating, for each detection target in the current frame against all tracking targets, the geometric intersection-over-union (IOU), the position distance, and the speed deduced from the position distance and the time difference between the previous and current frames;
the geometric IOU is calculated as follows: the geometric center points of the detection frames of the detection target and the tracking target are made to coincide, and the ratio of the intersection volume to the union volume of the two detection frames is calculated;
the position distance is calculated as follows: the Euclidean distance between the geometric center points of the detection frames of the detection target and the tracking target;
the speed is calculated as follows: the distance between the detection frames of the detection target and the tracking target divided by the time interval between the previous and current frames;
setting a speed matching probability Prob_vel calculated from v_c, the speed of the detection target in the current frame, v_t, the speed of the tracking target, and v_thresh, a set speed threshold;
setting a distance matching probability Prob_dis calculated from d_o, the distance between the detection target and the tracking target in the current frame, and d_thresh, a set distance threshold;
setting the value of the geometric IOU as the IOU matching probability Prob_iou;
calculating, according to the IOU matching probability Prob_iou, the distance matching probability Prob_dis and the speed matching probability Prob_vel, the target matching probability Prob_m between the detection target and all tracking targets in the current frame, as shown in the following formula:
Prob_m = w_iou × Prob_iou + w_vel × Prob_vel + w_dis × Prob_dis
where w_iou, w_vel and w_dis are the weight values corresponding to the IOU matching probability Prob_iou, the speed matching probability Prob_vel and the distance matching probability Prob_dis respectively;
after all detection targets of the current frame have been traversed against all tracking targets, updating the road side perception information of all tracking targets according to the target matching probability Prob_m.
6. The vehicle-road co-location system of claim 5, wherein the process of updating the road side perception information of all tracking targets according to the target matching probability Prob_m is as follows:
if the target matching probability Prob_m between the detection target and some or all of the tracking targets is greater than the set matching threshold, the tracking target with the highest target matching probability Prob_m and that detection target of the current frame are judged to be the same vehicle, and the road side perception information of the detection target of the current frame then replaces the road side perception information of the corresponding tracking target;
if the target matching probabilities Prob_m between the detection target and all tracking targets in the current frame are smaller than the set matching threshold, it is judged that the detection target has no corresponding tracking target in the previous frame, and the road side perception information of the detection target is then directly taken as the road side perception information of a corresponding tracking target in the next frame.
7. The vehicle-road co-location system of claim 6, wherein the process of predicting and correcting the target position by using the extended Kalman filter is as follows:
if the detection target in the current frame cannot be matched with the tracking target, constructing and initializing an extended Kalman filter by utilizing the position of the detection target in the current frame;
if the detection target in the current frame can be matched with a certain tracking target, predicting an extended Kalman filter corresponding to the tracking target by utilizing the position and the speed corresponding to the detection target in the current frame, and outputting the corrected position as the latest position of the detection target;
if one tracking target cannot be matched with the detection target of the current frame, predicting and updating the position of the tracking target corresponding to the current frame by utilizing the road side perception information of the tracking target and an extended Kalman filter of the tracking target and combining the time interval of the current frame and the previous frame.
8. The vehicle-road co-location system of claim 7, wherein the process of calculating the probability that a vehicle target belongs to the vehicle itself is as follows:
traversing the road side perception information of all vehicle targets in the updated current frame, and statistically calculating the geometric dimensions, position, speed and travelled position distance of each vehicle target;
according to the vehicle's own geometric dimensions, speed and travelled position distance, respectively calculating the latest intersection-over-union IOU_t and average intersection-over-union IOU_avg between the vehicle itself and the detection frames of all vehicle targets, the self-speed probability Prob_v that a vehicle target belongs to the vehicle itself, and the self travelled-distance probability Prob_d; then calculating the weighted self-vehicle matching probability Prob_ego that the vehicle target belongs to the vehicle itself according to the adaptive weighting method based on coefficient-of-variation statistics, and judging from Prob_ego whether the vehicle target corresponding to the tracking sequence belongs to the vehicle itself;
the self-speed probability Prob_v, the self travelled-distance probability Prob_d and the weighted self-vehicle matching probability Prob_ego are computed from v_t and v_e, the speed of the vehicle target and of the vehicle itself respectively; d_t, d_e and d_thresh, the travelled distance of the vehicle target, the travelled distance of the vehicle itself and the travelled-distance threshold respectively; and the weights w_i, where each weighting term corresponds to one of the four quantities i.
9. The vehicle-road co-location system of claim 8, wherein the adaptive weighting method based on coefficient-of-variation statistics comprises:
after the weights of the latest intersection-over-union IOU_t, the average intersection-over-union IOU_avg, the self-speed probability Prob_v and the self travelled-distance probability Prob_d are initialized, setting coefficient-of-variation terms for the intersection-over-union and the speed to adaptively adjust the different weights;
where CV_j denotes the historical intersection-over-union coefficient of variation or the historical speed coefficient of variation of a given vehicle target, j denotes the corresponding quantity (intersection-over-union or speed), the maximum of these coefficients over all vehicle targets in the current frame serves as a normalizer, A_iou and A_vel denote the historical intersection-over-union and speed adaptive coefficients of a vehicle target, and w_i^init denotes the initialization weight.
10. The vehicle-road cooperative positioning method is characterized by comprising the following steps of:
collecting information of vehicles in a road area;
processing according to the acquired information to form road side perception information of the vehicle in the road area;
preprocessing road side perception information of each vehicle, and filtering invalid non-vehicle targets and repeated vehicle targets;
Acquiring and tracking the position and speed change of a vehicle target according to road side perception information of a plurality of vehicles;
predicting and correcting the vehicle target positions in the road side perception information set by means of an extended Kalman filter according to the position and speed changes of the vehicle targets, and then updating the road side perception information;
and calculating the probability that the currently tracked vehicle target is the vehicle according to the updated road side perception information, thereby realizing the positioning of the vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310737285.6A CN116935630A (en) | 2023-06-20 | 2023-06-20 | Vehicle-road co-location system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116935630A true CN116935630A (en) | 2023-10-24 |
Family
ID=88393334
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310737285.6A Pending CN116935630A (en) | 2023-06-20 | 2023-06-20 | Vehicle-road co-location system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116935630A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||