CN117387647A - Road planning method integrating vehicle-mounted sensor data and road sensor data - Google Patents

Road planning method integrating vehicle-mounted sensor data and road sensor data

Info

Publication number
CN117387647A
Authority
CN
China
Prior art keywords
road
vehicle
data
planning
sensor data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311374379.8A
Other languages
Chinese (zh)
Inventor
蒙杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202311374379.8A priority Critical patent/CN117387647A/en
Publication of CN117387647A publication Critical patent/CN117387647A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 Navigation specially adapted for navigation in a road network
    • G01C 21/28 Navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/34 Route searching; Route guidance
    • G01C 21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C 21/3446 Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G01C 21/3453 Special cost functions, i.e. other than distance or default speed limit of road segments
    • G01C 21/3492 Special cost functions employing speed data or traffic data, e.g. real-time or historical
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a road planning method that fuses vehicle-mounted sensor data and road sensor data. Direct perception data from vehicle-mounted sensors and road perception data from public road sensors are collected and preprocessed; an environment model is constructed, and the preprocessed direct perception data and road perception data are fused using the environment model; the road scene is interactively processed according to the fused perception data; and road planning is carried out for the vehicle. This solves the problem of inaccurate road planning when the vehicle is in automatic driving mode or in a complex road environment, and improves the safety and comfort of the vehicle.

Description

Road planning method integrating vehicle-mounted sensor data and road sensor data
Technical Field
The invention relates to the technical field of automatic driving, in particular to a road planning method for fusing vehicle-mounted sensor data and road sensor data.
Background
Owing to the complexity of the driving environment, the surrounding environment must be perceived accurately and decisions must be made in a timely, accurate manner; with the rapid development of automatic driving technology, a vehicle that enters automatic driving mode needs to accurately perceive its surroundings and make intelligent decisions. Existing road planning methods mainly depend on vehicle-mounted sensors, such as lidar, millimeter-wave radar and ultrasonic sensors, and generate road planning decisions from the obstacle information acquired around the vehicle. However, vehicle-mounted sensors are easily affected by factors such as weather and illumination, and may produce errors in complex environments, thereby affecting the accuracy of road planning.
On the other hand, urban traffic management departments typically deploy large numbers of cameras and other sensors to monitor public roads and capture traffic flow, travel trajectories and other information. However, these data are often ignored in existing road planning and remain underutilized. Therefore, how to fuse the direct perception data and the road perception data with each other, so as to optimize road planning and improve vehicle safety and efficiency, has become a problem worth solving.
The prior art discloses a road information fusion system and method for a vehicle, which acquire various road information from the vehicle's road-sensing sensors, perform an initial fusion of this information, and then perform a deep fusion of the initially fused information to output a road model for the vehicle, where the deep fusion comprises information coordinate conversion, road feature point extraction, lane line fitting, lane line calculation and comprehensive information management. However, although this scheme can fuse the various road information acquired by different types of road-sensing sensors, it cannot achieve scene understanding and road planning based on the fused information.
Disclosure of Invention
One purpose of the invention is to provide a road planning method integrating vehicle-mounted sensor data and road sensor data, so as to solve the prior-art problem of inaccurate road planning when a vehicle is in automatic driving mode or in a complex road environment; a second purpose is to provide a road planning system integrating vehicle-mounted sensor data and road sensor data.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
collecting direct sensing data of a vehicle-mounted sensor and road sensing data of a public road sensor;
preprocessing direct perception data and road perception data;
constructing an environment model, and fusing the preprocessed direct perception data and road perception data by using the environment model;
according to the fused perception data, carrying out interactive processing on a road scene where the vehicle is located;
and planning the road for the vehicle based on the result of the road scene interaction processing.
According to the technical means, the direct sensing data of the vehicle-mounted sensors and the road sensing data of the public road sensors are fused, scene interaction processing is performed on the fused perception data, and reasonable road planning in complex road environments is realized.
Further, the vehicle-mounted sensor comprises a laser radar, a millimeter wave radar and an ultrasonic sensor of the vehicle and is used for acquiring direct perception data of the surrounding environment of the vehicle;
the public road sensor comprises a camera arranged on a public road and is used for acquiring road perception data, wherein the road perception data comprise image data of traffic flow, traffic signals, road vehicles, pedestrians and traffic sign information.
According to the technical means, various sensor data under the road environment are fully acquired with high precision and high frequency.
Further, preprocessing the direct perceived data, including:
and calibrating the direct sensing data, smoothing the calibrated direct sensing data by using a filtering algorithm, and removing noise and abnormal values.
Further, data calibration is carried out between the direct sensing data and the road sensing data so as to eliminate data deviation between the sensors;
and performing time synchronization between the direct sensing data and the road sensing data to eliminate time deviation between the sensors.
Further, a vehicle coordinate system is set, and the lidar acquires original vehicle coordinates based on the vehicle coordinate system; when the direct perception data are calibrated, the global coordinates of the vehicle are determined using the GPS receiver carried by the vehicle, a vehicle calibration model is constructed, and the original vehicle coordinates are calibrated based on the positional relationship between the original vehicle coordinates and the global coordinates, the vehicle calibration model being:
x_global = x_vehicle * cos(θ) - y_vehicle * sin(θ) + xOffset
y_global = x_vehicle * sin(θ) + y_vehicle * cos(θ) + yOffset
where (x_global, y_global) are the calibrated global coordinates of the vehicle, (x_vehicle, y_vehicle) are the original vehicle coordinates, θ is the rotation angle, and xOffset and yOffset are the translational offsets.
According to the technical means, accurate calibration of vehicle coordinates is achieved.
Further, preprocessing the road perception data, including: denoising the road perception data, detecting and tracking a target in the road perception data by using a deep learning algorithm, and extracting the position and movement track information of the target, wherein the target comprises road vehicles, pedestrians and traffic signs.
According to the technical means, the utilization rate and consistency of the sensing data are improved, so that the sensing data can be fused later.
Further, the input of the environment model is the preprocessed direct perception data and road perception data, which are fused using a multi-sensor data fusion algorithm. The output of the environment model comprises target state estimates, map information and feature localization: the target state estimates comprise the position, speed and acceleration of targets detected in the road perception data; the map information comprises the road topology, obstacle positions and traffic sign positions; and the feature localization comprises localization frames for the detected targets, obstacles and traffic signs.
According to the technical means, target state estimates for road planning decisions are obtained, and the map information and feature localization, which are available for visualization and the human-machine interface, help the driver or the driving system better understand the surrounding environment.
Further, according to the fused perception data, interactive processing is performed on the road scene, including:
carrying out semantic segmentation operation on the road scene image obtained by the public road sensor, and dividing the road scene image into different semantic areas;
constructing a road environment model, and perceiving the traffic environment in real time according to the semantic region and the output result of the environment model;
identifying the running state of the vehicle and simultaneously carrying out abnormality detection on the running state;
and detecting and identifying the real-time state of the traffic signal lamp, and controlling the running state of the vehicle according to the real-time state of the traffic signal lamp.
Further, based on the result of the interactive processing of the road scene, a road planning strategy is generated to realize road planning of the vehicle, including:
the result of the road scene interaction processing is connected with a vehicle control system, so that the vehicle makes a road planning strategy according to the road scene;
dividing a road planning strategy into a plurality of layers, wherein a high-level planning strategy comprises destination selection, a medium-level planning strategy comprises path selection and a low-level planning strategy comprises vehicle driving state control;
and optimizing the road planning strategy through a machine learning algorithm.
According to the technical means, reasonable automatic driving road planning is realized, and the vehicles can be ensured to run safely and efficiently.
Further, the direct sensing data of the vehicle-mounted sensor and the road sensing data of the public road sensor are continuously updated and fused, and the road planning strategy of the vehicle is updated in real time.
According to the technical means, the method is suitable for dynamic traffic conditions.
A road planning system that fuses vehicle-mounted sensor data with road sensor data, comprising:
the data acquisition unit is used for acquiring direct sensing data of the vehicle-mounted sensor and road sensing data of the public road sensor;
the data preprocessing unit is used for preprocessing the direct perception data and the road perception data;
the environment model construction unit is used for constructing an environment model and fusing the preprocessed direct perception data and road perception data by utilizing the environment model;
the interaction unit is used for carrying out interaction processing on the road scene where the vehicle is located through the fused data;
the road planning unit is used for planning the road of the vehicle according to the result of the road scene interaction processing;
and the real-time updating unit is used for continuously updating the direct sensing data of the vehicle-mounted sensor and the road sensing data of the public road sensor and updating the road planning strategy of the vehicle in real time.
The invention has the beneficial effects that:
the invention provides a road planning method and a road planning system for fusing vehicle-mounted sensor data and road sensor data, which are used for collecting direct sensing data of a vehicle-mounted sensor and road sensing data of a public road sensor, preprocessing the data, building an environment model, fusing the preprocessed direct sensing data and the road sensing data, carrying out interactive processing on a road scene according to the fused sensing data, planning a road of a vehicle, solving the problem of inaccurate road planning of the vehicle in an automatic driving mode or in a complex road environment, and improving the safety and comfort of the vehicle.
Drawings
Fig. 1 is a schematic flow chart of a road planning method for fusing vehicle-mounted sensor data and road sensor data according to the present invention;
FIG. 2 is a schematic diagram of the data flow in the present invention;
FIG. 3 is a schematic view of a reconstruction of a scene of fusion of camera sensor data and vehicle data at an intersection of the present invention;
fig. 4 is a schematic diagram of a road planning system for fusing vehicle-mounted sensor data and road sensor data according to the present invention;
wherein, 1-vehicle; 2-road vehicles; 3-camera sensor.
Detailed Description
Further advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure herein, with reference to the accompanying drawings and the preferred embodiments. The invention may also be practiced or applied through other, different embodiments, and the details in this specification may be modified or varied in various ways without departing from the spirit and scope of the invention. It should be understood that the preferred embodiments are presented by way of illustration only and not by way of limitation.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the invention; the drawings show only the components related to the invention and are not drawn according to the number, shape and size of the components in an actual implementation, where the form, number and proportion of the components may vary arbitrarily and the layout may be more complicated.
As shown in fig. 1, the present embodiment proposes a road planning method for fusing vehicle-mounted sensor data and road sensor data, including:
s1, referring to FIG. 2, collecting direct sensing data of a vehicle-mounted sensor and road sensing data of a public road sensor;
the vehicle-mounted sensor comprises vehicle-mounted sensors such as a laser radar, a millimeter wave radar, an ultrasonic sensor and the like of a vehicle and is used for acquiring direct sensing data of the surrounding environment of the vehicle;
the public road sensor comprises sensors such as cameras and the like which are deployed on a public road by an urban traffic management department and are used for acquiring road perception data, wherein the road perception data comprise image data of traffic flow, traffic signals, road vehicles, pedestrians and traffic sign information.
S2, preprocessing direct perception data and road perception data;
performing data calibration on different types of sensor data to eliminate data deviation between sensors; in the present embodiment, the calibration of the lidar sensor and the camera sensor is performed by using a calibration plate, aligning their coordinate systems;
In this embodiment, the vehicle coordinate system uses the vehicle's center of gravity or another specific point as the origin, the vehicle's forward direction as the x-axis, and the left side of the vehicle as the y-axis; a rotation and translation relationship exists between the vehicle coordinate system and the global coordinate system. To acquire the global coordinates of the vehicle, in this embodiment a Global Positioning System (GPS) receiver mounted on the vehicle receives satellite signals, and the longitude and latitude coordinates of the vehicle, i.e., its global coordinates, are determined from those signals; in addition, the vehicle may determine its position in the global coordinate system with the assistance of other sensors, such as an inertial navigation system. A vehicle calibration model is constructed, and the original vehicle coordinates are calibrated based on the positional relationship between the original vehicle coordinates and the global coordinates, the vehicle calibration model being:
x_global = x_vehicle * cos(θ) - y_vehicle * sin(θ) + xOffset
y_global = x_vehicle * sin(θ) + y_vehicle * cos(θ) + yOffset
where (x_global, y_global) are the calibrated global coordinates of the vehicle, (x_vehicle, y_vehicle) are the original vehicle coordinates, θ is the rotation angle, and xOffset and yOffset are the translational offsets in the x-axis and y-axis directions, respectively.
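As a concrete illustration of the above calibration model, the following Python sketch applies the rotation-translation transform; the function name and the example coordinates are illustrative assumptions, not part of the patent:

```python
import math

def vehicle_to_global(x_vehicle, y_vehicle, theta, x_offset, y_offset):
    """Transform a point from the vehicle frame to the global frame.

    theta is the rotation angle between the two frames (radians); x_offset
    and y_offset are the GPS-derived translations along the global axes.
    """
    x_global = x_vehicle * math.cos(theta) - y_vehicle * math.sin(theta) + x_offset
    y_global = x_vehicle * math.sin(theta) + y_vehicle * math.cos(theta) + y_offset
    return x_global, y_global

# Example: a lidar point 10 m ahead and 2 m left of a vehicle heading 30
# degrees from the global x-axis, with the vehicle origin at (500, 300).
print(vehicle_to_global(10.0, 2.0, math.radians(30.0), 500.0, 300.0))
```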
Time synchronization is carried out on the sensor data of different types so as to eliminate time deviation among the sensors and facilitate subsequent fusion;
smoothing the direct perceived data using a filtering algorithm, such as kalman filtering or mean filtering, to remove noise and outliers; in this embodiment, the vehicle-mounted sensor data is filtered using kalman filtering, and the motion state is updated as follows:
x_k = F * x_(k-1) + B * u_k + w_k
where x_k is the state vector of the vehicle, containing position and speed information, F is the state transition matrix, B is the control input matrix, u_k is the control input, and w_k is the process noise.
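As a minimal sketch of this motion update, the snippet below builds F and B for a planar constant-velocity model with acceleration as the control input; the time step, matrix layout and example values are assumptions for illustration, not values given in the patent:

```python
import numpy as np

dt = 0.1  # sampling interval in seconds (assumed)

# State x = [px, py, vx, vy]: planar position and velocity of the vehicle.
F = np.array([[1, 0, dt, 0],        # state transition matrix
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
B = np.array([[0.5 * dt**2, 0],     # control input matrix for acceleration
              [0, 0.5 * dt**2],
              [dt, 0],
              [0, dt]], dtype=float)

x = np.array([0.0, 0.0, 5.0, 0.0])  # previous state estimate x_(k-1)
u = np.array([0.2, 0.0])            # control input u_k (measured acceleration)

# Motion update x_k = F x_(k-1) + B u_k; the process noise w_k enters through
# the covariance propagation of the full Kalman filter (see step SA1 below).
x_k = F @ x + B @ u
```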
Preprocessing road perception data, including:
denoising image data in the road perception data by using an image denoising algorithm, such as Gaussian filtering or median filtering, so as to eliminate noise in the image and improve the accuracy of target identification; in this embodiment, the image denoising is performed by using a gaussian filtering algorithm, and the formula is as follows:
G(x, y) = (1 / (2π * σ^2)) * exp(-(x^2 + y^2) / (2 * σ^2))
where (x, y) represents the relative position inside the Gaussian filter on the image; in this embodiment (x, y) is the offset computed from the center pixel of the filter, where (0, 0) denotes the center pixel, and x and y may be integer or floating-point values; σ is the standard deviation of the Gaussian kernel function: the larger the value of σ, the wider the Gaussian distribution and the more pronounced the blurring effect; the smaller the value of σ, the sharper the Gaussian distribution and the weaker the blurring effect.
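For illustration, a minimal numpy sketch that builds a Gaussian kernel from this formula; the 5x5 size and σ = 1.0 are assumed values:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2D Gaussian kernel; (0, 0) is the center pixel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]  # offsets from center
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()  # normalize so overall image brightness is preserved

kernel = gaussian_kernel(5, sigma=1.0)
# The denoised image is then the convolution of the noisy image with this
# kernel, e.g. scipy.ndimage.convolve(image, kernel) or an equivalent call.
```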
Then, a deep learning algorithm, such as a convolutional neural network or a target detector (for example a YOLO network model), is used to detect and track targets in the image data, and the position and movement trajectory information of the targets is extracted, where the targets comprise road vehicles, pedestrians and traffic signs.
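A hedged sketch of this detection step, assuming the third-party ultralytics YOLO interface; the weight file, class list and association strategy are illustrative choices rather than the patent's specification:

```python
from ultralytics import YOLO  # assumed detector package

model = YOLO("yolov8n.pt")  # pretrained weights; the file name is an assumption

def detect_targets(frame):
    """Return (class_name, [x1, y1, x2, y2]) for road users in one frame."""
    wanted = {"car", "truck", "bus", "person", "stop sign", "traffic light"}
    result = model(frame)[0]
    targets = []
    for box, cls in zip(result.boxes.xyxy, result.boxes.cls):
        name = model.names[int(cls)]
        if name in wanted:
            targets.append((name, box.tolist()))
    return targets

# Movement trajectories can then be extracted by associating detections
# across consecutive frames, e.g. by nearest-centroid matching or a tracker.
```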
S3, constructing an environment model, and fusing the preprocessed direct perception data and road perception data by using the environment model;
the input of the environment model is preprocessed direct perception data and road perception data, a multi-sensor data fusion algorithm is adopted to fuse the direct perception data and the road perception data, the output of the environment model comprises target state estimation, map information and characteristic positioning, the target state estimation comprises the position, the speed and the acceleration of a target detected in the road perception data, the map information comprises road topology, the position of an obstacle and the position of a traffic sign, and the characteristic positioning comprises the detected target, the obstacle and a traffic sign positioning frame.
In this embodiment, the multi-sensor data fusion algorithm may be an extended kalman filter algorithm, a particle filter algorithm, or a deep learning algorithm, where:
SA, extended Kalman filtering algorithm: the direct perception data and the road perception data are fused by linearizing the state transition model and the measurement model, yielding the target state estimate. The specific steps are as follows (a code sketch is given after step SA3):
SA1, predicting the target state vector:
x̂_(k|k-1) = F * x̂_(k-1|k-1) + B * u_k
and the prediction error covariance:
P_(k|k-1) = F * P_(k-1|k-1) * F^T + Q
where x̂_(k|k-1) is the predicted state, P_(k|k-1) is the prediction error covariance, F is the state transition matrix, B is the control input matrix, u_k is the control input, and Q is the process noise covariance.
SA2, calculating the Kalman gain:
K_k = P_(k|k-1) * H^T * (H * P_(k|k-1) * H^T + R)^(-1)
updating the state vector:
x̂_(k|k) = x̂_(k|k-1) + K_k * (z_k - H * x̂_(k|k-1))
and updating the error covariance:
P_(k|k) = (I - K_k * H) * P_(k|k-1)
where K_k is the Kalman gain, H is the measurement matrix, R is the measurement noise covariance, and z_k is the measurement value; the specific parameters and matrices need to be appropriately adjusted and set depending on the particular application and sensor characteristics.
SA3, based on the influence factors, adjusting the process noise covariance Q and the measurement noise covariance R in the Kalman filtering according to actual conditions so as to balance the weights of the direct perception data and the road perception data; wherein, Q can be set according to the motion property of the vehicle and the sensor precision, and R can be set according to the performance of the camera and the environmental condition.
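In code form, the predict/update equations of SA1-SA2 look as follows; this is the linear Kalman case for brevity, whereas a full EKF would supply F and H as Jacobians of the nonlinear state-transition and measurement models:

```python
import numpy as np

def kf_predict(x, P, F, B, u, Q):
    """SA1: propagate the state vector and the error covariance."""
    x_pred = F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """SA2: fold a measurement z (e.g. a camera-detected position) into the state."""
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)         # updated state vector
    P = (np.eye(len(x)) - K @ H) @ P_pred     # updated error covariance
    return x, P

# SA3 corresponds to tuning Q and R: a larger Q shifts weight toward the road
# perception measurements, while a larger R shifts it toward the motion model.
```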
SB: particle filter algorithm: generating a group of particles by a random sampling method, and sampling and updating a state space to obtain a target state estimation, wherein the method comprises the following specific steps of:
SB1, particle generation, the process includes: initializing a set of particles based on a priori knowledge or existing data; the hypothetical estimate of the state of the particle is typically generated by random sampling, in this embodiment from a priori distribution, e.g., the estimated state is two-dimensional coordinates (x, y), then the particle is generated by random uniform sampling over the range of possible coordinates; then, a weight is allocated to each particle to reflect the importance of the particle to the description of the current state; the weight is generally determined based on how well the particles fit to the measured values, with particles closer to the actual measured values being given higher weight.
SB2, constructing a state space model, wherein the state space model comprises a state transition model and a measurement model.
The state transition model is used to describe how the state evolves over time, typically expressed as a state transition equation, in this embodiment, to update the dynamics of the particles. For example, when used to estimate a vehicle position, the state transition model is a motion model that predicts the vehicle position at the next time based on the speed and direction of the vehicle.
The measurement model is used to describe how states are mapped to measurement space, typically expressed as a measurement equation, which in this embodiment is used to calculate the weight of each particle, i.e. the probability that a measurement is observed in a given state. For example, when used to estimate the position of a target, the measurement model is a distance sensor model, mapping states to distance measurements.
SB3, filtering the particles (a code sketch follows these steps), the process comprising:
predicting the state of each particle based on a state transition model, namely pushing the particles forward in time;
updating the weight of each particle based on the measurement model and the actual observed value, wherein the weight represents the consistency of the estimated state of the particle and the observed value;
performing a resampling step, selecting new particles from the current particle set according to the weights of the particles, wherein in the process, particles with high weights are more likely to be selected, and particles with low weights are more likely to be discarded so as to maintain diversity;
all particles are weighted evenly to obtain a final state estimate, wherein the weight of each particle reflects the contribution of the particle.
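A compact numpy sketch of steps SB1-SB3 for a two-dimensional position estimate; the particle count, motion noise and sensor noise values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # number of particles (assumed)

# SB1: initialize particles by uniform sampling over the plausible (x, y) range.
particles = rng.uniform(low=[0.0, 0.0], high=[100.0, 100.0], size=(N, 2))
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, velocity, dt, z, sensor_std=1.0):
    # Predict: push particles forward with the motion model plus noise (SB2).
    particles = particles + velocity * dt + rng.normal(0.0, 0.5, particles.shape)
    # Update: weight each particle by its agreement with the measurement z.
    dist = np.linalg.norm(particles - z, axis=1)
    weights = np.exp(-0.5 * (dist / sensor_std) ** 2)
    weights /= weights.sum()
    # Resample: high-weight particles are duplicated, low-weight ones dropped.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    # Estimate: after resampling, the weighted mean reduces to the plain mean.
    return particles, weights, particles.mean(axis=0)

particles, weights, estimate = pf_step(
    particles, weights, velocity=np.array([1.0, 0.0]), dt=0.1,
    z=np.array([50.0, 50.0]))
```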
SC, deep learning algorithm, which performs data fusion by constructing a deep neural network model, wherein the input of the deep neural network is direct perception data and road perception data, and the output of the deep neural network is target state estimation;
in this embodiment, the deep neural network model includes:
an encoder network, used to encode the data of the different sensors into a shared feature representation, the data of each sensor having its features extracted by a separate encoder network; in this embodiment, a convolutional neural network or a recurrent neural network is selected as the encoder network, which extracts image features from the image data acquired by the camera sensor.
The feature fusion module is used to fuse the features extracted by the different encoder networks; in this embodiment, feature fusion is achieved through feature concatenation or a self-attention mechanism, for example concatenating image-data features with lidar-data features to form a richer feature representation.
A generative adversarial network (GAN), comprising a generator and a discriminator. The generator receives the feature-fused data and attempts to generate an environment model capable of predicting the state of the environment in a plausible manner; in this embodiment, the generator is a deep neural network comprising convolutional layers and fully-connected layers. The discriminator is used to evaluate the quality of the environment model output by the generator: after receiving the model generated by the generator and the real environment data, it attempts to distinguish which is the real environment model; in this embodiment, the discriminator is another deep neural network.
When training the generative adversarial network, the adversarial loss of the generator and the discriminator is optimized through a minimax objective, so that the generator learns to produce a realistic environment model while the discriminator learns to distinguish real data from generated data. To ensure that the generated environment model is consistent with the real environment, other loss functions, such as mean squared error loss or perceptual loss, may also be added so that the generated environment model stays as close as possible to the distribution of the real data.
In addition, data fusion can also be achieved by other means: for example, the preprocessed direct perception data and camera data are converted into shared information representations, such as feature vectors or image feature maps, and a weighted fusion method assigns weights to the information from different sensors, adjusting the weights according to the reliability and accuracy of that information.
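A minimal sketch of such weighted fusion over shared feature vectors; the reliability scores and feature dimension are placeholders:

```python
import numpy as np

def weighted_fuse(features, reliabilities):
    """Fuse equal-length feature vectors from different sensors.

    reliabilities are per-sensor trust scores (e.g. derived from sensor
    accuracy); they are normalized into fusion weights that sum to one.
    """
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, features))

lidar_feat = np.random.rand(128)    # placeholder lidar feature vector
camera_feat = np.random.rand(128)   # placeholder camera feature vector
fused = weighted_fuse([lidar_feat, camera_feat], [0.7, 0.3])
```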
In the implementation process, the generated environment model can also output uncertainty information associated with the estimates, including a covariance matrix or a confidence distribution, to help gauge the reliability of the target state estimates.
S4, carrying out interactive processing on a road scene where the vehicle is located according to the fused perception data, wherein the interactive processing comprises the following steps:
the semantic segmentation operation is carried out on the road scene image obtained by the public road sensor, and the road scene image is divided into different semantic areas, such as roads, sidewalks, buildings and the like, so that the structure of the road environment can be understood more accurately;
A road environment model is constructed to better predict the behavior of other traffic participants, and the traffic environment is perceived in real time according to the semantic regions and the output of the environment model, for example identifying traffic signs, traffic lights, pedestrians and vehicles on the road. For instance, when an intersection lies ahead, the vehicle can predict in advance the situation of vehicles in the crossing lanes and make corresponding decisions, as shown in fig. 3;
The driving states of vehicles are identified; by predicting the behavior of other traffic participants, the automatic driving system can anticipate the future actions of other vehicles, pedestrians, bicycles and the like, and plan its driving strategy accordingly. At the same time, anomaly detection is performed on the driving states to identify possible dangerous situations or unusual events, such as a sudden stop or sudden acceleration of a vehicle;
the real-time state of the traffic signal lamp is detected and identified, and the running state of the vehicle is controlled according to the real-time state of the traffic signal lamp, so that the automatic driving system can follow the traffic rules.
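As a toy illustration of coupling the recognized signal state to vehicle control, the rule below chooses a longitudinal action; the comfort-deceleration threshold is an assumed value, and a real system would feed this decision into the planner of step S5:

```python
def plan_for_signal(light_state, dist_to_stopline_m, speed_mps, comfort_decel=2.5):
    """Map a recognized traffic-light state to a longitudinal driving action."""
    if light_state == "red":
        return "stop"
    if light_state == "yellow":
        # Stop only if a comfortable deceleration halts the vehicle in time.
        stopping_dist = speed_mps ** 2 / (2.0 * comfort_decel)
        return "stop" if stopping_dist <= dist_to_stopline_m else "proceed"
    return "proceed"  # green

print(plan_for_signal("yellow", dist_to_stopline_m=30.0, speed_mps=12.0))
```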
S5, generating a road planning strategy based on the result of the road scene interaction processing to realize road planning of the vehicle: by considering factors such as the vehicle's current position, destination, traffic signs, speed limits and the motion states of other vehicles, the vehicle's path planning is optimized to ensure that the vehicle can drive safely and efficiently. This includes:
the result of the road scene interaction processing is connected with a vehicle control system, so that the vehicle makes a road planning strategy according to the road scene, and in the embodiment, the road planning strategy comprises behavior decisions such as speed control, overtaking, lane changing, obstacle avoidance and the like;
dividing a road planning strategy into a plurality of levels, and coordinating planning strategies of the levels, wherein a high-level planning strategy comprises destination selection, a medium-level planning strategy comprises path selection, and a low-level planning strategy comprises vehicle driving state control;
The road planning strategy is optimized through a machine learning algorithm, such as reinforcement learning, to improve the planning strategy so that the vehicle keeps improving its performance through continuous learning and optimization. In this embodiment, the road planning strategy optimization includes the following steps (a training sketch is given after the list):
constructing a historical dataset comprising: direct sensing data of the vehicle-mounted sensor, road sensing data of the public road sensor, vehicle behavior decision records, environmental conditions and the like;
constructing a machine learning model, and designing proper characteristic data as input of the machine learning model, wherein the machine learning model can select a reinforcement learning model, a decision tree, a neural network model and the like;
training a machine learning model by using the historical data set, and evaluating and verifying the machine learning model to obtain a trained machine learning model;
the trained machine learning model is deployed into an automatic driving system, and the road planning strategy is continuously optimized on line so as to adapt to the continuously changing traffic and road conditions.
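An offline sketch of the dataset/train/evaluate steps above, using a decision tree (one of the model choices named in the embodiment); the feature columns, labels and random data are stand-ins for the logged historical dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Stand-in historical dataset: features such as ego speed, gap to the lead
# vehicle, traffic density and signal state; labels are behavior decisions.
X = np.random.rand(5000, 4)
y = np.random.randint(0, 3, size=5000)  # 0 = keep lane, 1 = change lane, 2 = brake

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = DecisionTreeClassifier(max_depth=8).fit(X_train, y_train)
print("hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
# The validated model is then deployed into the automatic driving system and
# refined online as new sensor data and decisions accumulate.
```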
In the implementation process, a traffic flow simulation model can be constructed to simulate the behavior of the vehicle under different traffic conditions so as to better predict potential traffic jam or collision risks.
In this embodiment, the executing process of the above steps includes continuously updating and fusing the direct sensing data of the vehicle-mounted sensor and the road sensing data of the public road sensor, updating the road planning strategy of the vehicle in real time, and updating the dynamic map through communication between the vehicle-mounted sensor and the vehicle.
This embodiment also provides a road planning system for fusing vehicle-mounted sensor data and road sensor data, as shown in fig. 4, comprising:
a data acquisition unit 101 for acquiring direct sensing data of the vehicle-mounted sensor and road sensing data of the public road sensor;
a data preprocessing unit 102, configured to preprocess the direct perception data and the road perception data;
an environmental model construction unit 103, configured to construct an environmental model, and fuse the preprocessed direct perception data and road perception data by using the environmental model;
the interaction unit 104 is configured to perform interaction processing on a road scene where the vehicle is located through the fused data;
a road planning unit 105, configured to plan a road for a vehicle according to a result of the road scene interaction processing;
and the real-time updating unit 106 is used for continuously updating the direct sensing data of the vehicle-mounted sensor and the road sensing data of the public road sensor and updating the road planning strategy of the vehicle in real time.
It is to be understood that the above examples of the present invention are provided by way of illustration only and are not intended to limit the embodiments of the invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the invention shall fall within the protection scope of the claims.

Claims (10)

1. A road planning method for fusing vehicle-mounted sensor data and road sensor data, comprising:
collecting direct sensing data of a vehicle-mounted sensor and road sensing data of a public road sensor;
preprocessing direct perception data and road perception data;
constructing an environment model, and fusing the preprocessed direct perception data and road perception data by using the environment model;
according to the fused perception data, carrying out interactive processing on a road scene where the vehicle is located;
and planning the road for the vehicle based on the result of the road scene interaction processing.
2. The road planning method for fusing vehicle-mounted sensor data and road sensor data as claimed in claim 1, wherein preprocessing the direct perception data comprises:
and calibrating the direct sensing data, smoothing the calibrated direct sensing data by using a filtering algorithm, and removing noise and abnormal values.
3. The road planning method of fusing vehicle-mounted sensor data and road sensor data of claim 2, wherein calibrating the direct perception data comprises:
data calibration is carried out between the direct sensing data and the road sensing data so as to eliminate data deviation between sensors;
and performing time synchronization between the direct sensing data and the road sensing data to eliminate time deviation between the sensors.
4. A road planning method for fusing vehicle-mounted sensor data and road sensor data as claimed in claim 3, wherein a vehicle coordinate system is set, and the vehicle-mounted sensor acquires original vehicle coordinates based on the vehicle coordinate system; when the direct sensing data is subjected to data calibration, the method comprises the steps of determining global coordinates of a vehicle by utilizing a GPS receiver carried by the vehicle, constructing a vehicle calibration model, and calibrating the original vehicle coordinates based on the position relationship between the original vehicle coordinates and the global coordinates, wherein the vehicle calibration model is as follows:
x_global = x_vehicle * cos(θ) - y_vehicle * sin(θ) + xOffset
y_global = x_vehicle * sin(θ) + y_vehicle * cos(θ) + yOffset
where (x_global, y_global) are the calibrated global coordinates of the vehicle, (x_vehicle, y_vehicle) are the original vehicle coordinates, θ is the rotation angle, and xOffset and yOffset are the translational offsets.
5. A road planning method for fusing vehicle-mounted sensor data and road sensor data as defined in claim 3, wherein preprocessing the road perception data comprises: denoising the road perception data, detecting and tracking a target in the road perception data by using a deep learning algorithm, and extracting the position and movement track information of the target, wherein the target comprises road vehicles, pedestrians and traffic signs.
6. The road planning method for fusing vehicle-mounted sensor data and road sensor data according to claim 5, wherein the input of the environment model is preprocessed direct perception data and road perception data, and a multi-sensor data fusion algorithm is adopted to fuse the direct perception data and the road perception data; the output of the environment model comprises a target state estimate, map information and feature positioning, wherein the target state estimate comprises the position, the speed and the acceleration of a target detected in road perception data, the map information comprises road topology, obstacle positions and traffic sign positions, and the feature positioning comprises a detected target positioning frame.
7. The road planning method for fusing vehicle-mounted sensor data and road sensor data according to claim 1 or 6, wherein the interactive processing of the road scene where the vehicle is located according to the fused perception data comprises:
carrying out semantic segmentation operation on the road scene image obtained by the public road sensor, and dividing the road scene image into different semantic areas;
constructing a road environment model, and perceiving the traffic environment in real time according to the semantic region and the output result of the environment model;
identifying the running state of the vehicle and simultaneously carrying out abnormality detection on the running state;
and detecting and identifying the real-time state of the traffic signal lamp, and controlling the running state of the vehicle according to the real-time state of the traffic signal lamp.
8. The road planning method of integrating vehicle-mounted sensor data and road sensor data as claimed in claim 7, wherein generating a road planning strategy based on a result of interactive processing of a road scene, to implement road planning of a vehicle, comprises:
the result of the road scene interaction processing is connected with a vehicle control system, so that the vehicle makes a road planning strategy according to the road scene;
dividing a road planning strategy into a plurality of levels, and coordinating planning strategies of the levels, wherein a high-level planning strategy comprises destination selection, a medium-level planning strategy comprises path selection, and a low-level planning strategy comprises vehicle driving state control;
and optimizing the road planning strategy through a machine learning algorithm.
9. The road planning method for fusing vehicle-mounted sensor data and road sensor data as claimed in claim 8, wherein the direct sensing data of the vehicle-mounted sensor and the road sensing data of the public road sensor are continuously updated and fused, and the road planning strategy of the vehicle is updated in real time.
10. A road planning system for fusing vehicle-mounted sensor data and road sensor data, applied to the road planning method for fusing vehicle-mounted sensor data and road sensor data according to any one of claims 1 to 9, characterized by comprising:
the data acquisition unit is used for acquiring direct sensing data of the vehicle-mounted sensor and road sensing data of the public road sensor;
the data preprocessing unit is used for preprocessing the direct perception data and the road perception data;
the environment model construction unit is used for constructing an environment model and fusing the preprocessed direct perception data and road perception data by utilizing the environment model;
the interaction unit is used for carrying out interaction processing on the road scene where the vehicle is located through the fused data;
the road planning unit is used for planning the road of the vehicle according to the result of the road scene interaction processing;
and the real-time updating unit is used for continuously updating the direct sensing data of the vehicle-mounted sensor and the road sensing data of the public road sensor and updating the road planning strategy of the vehicle in real time.
CN202311374379.8A 2023-10-23 2023-10-23 Road planning method integrating vehicle-mounted sensor data and road sensor data Pending CN117387647A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311374379.8A CN117387647A (en) 2023-10-23 2023-10-23 Road planning method integrating vehicle-mounted sensor data and road sensor data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311374379.8A CN117387647A (en) 2023-10-23 2023-10-23 Road planning method integrating vehicle-mounted sensor data and road sensor data

Publications (1)

Publication Number Publication Date
CN117387647A true CN117387647A (en) 2024-01-12

Family

ID=89467960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311374379.8A Pending CN117387647A (en) 2023-10-23 2023-10-23 Road planning method integrating vehicle-mounted sensor data and road sensor data

Country Status (1)

Country Link
CN (1) CN117387647A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination