CN109920246B - Collaborative local path planning method based on V2X communication and binocular vision

Collaborative local path planning method based on V2X communication and binocular vision

Info

Publication number
CN109920246B
Authority
CN
China
Prior art keywords
vehicle
information
traffic
path planning
local path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910133334.9A
Other languages
Chinese (zh)
Other versions
CN109920246A (en)
Inventor
蒋建春
张卓鹏
曾素华
奚川龙
王肖
贾敬森
郭真妮
彭飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Mouyi Technology Co ltd
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Posts and Telecommunications
Priority to CN201910133334.9A priority Critical patent/CN109920246B/en
Publication of CN109920246A publication Critical patent/CN109920246A/en
Application granted granted Critical
Publication of CN109920246B publication Critical patent/CN109920246B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a collaborative local path planning method based on V2X communication and binocular vision. The on-board unit (OBU) obtains vehicle body information and environment information through the binocular camera, the OBD equipment and V2X communication and transmits them to the collaborative local path planning controller; the controller then predicts the state of each traffic object through a traffic object mixed situation estimation model, performs space-time fusion of the situation-estimated traffic object information and the traffic rules through a multi-source data fusion algorithm, constructs a highly reliable traffic road condition map from the fused traffic data through a coordinate mapping model, calculates the own-vehicle situation through an own-vehicle situation estimation model in combination with the own-vehicle information, and then calculates the optimal smooth driving path of the current vehicle through an improved local path planning algorithm; the path is finally transmitted to the OBU and broadcast to other vehicles via V2X. The method combines line-of-sight and non-line-of-sight perception of traffic object states and improves the accuracy of vehicle environment perception and local path planning.

Description

Collaborative local path planning method based on V2X communication and binocular vision
Technical Field
The invention belongs to the field of intelligent networked automobiles and intelligent traffic, and relates to a local path planning method based on V2X communication and binocular vision.
Background
With the rapid growth of vehicle ownership in China, the number of traffic accidents keeps increasing, and most of these accidents are caused by improper driver operation. In an emergency, a driver needs a certain reaction time and finds it difficult to make an optimal decision, both of which are potential causes of traffic accidents. Reducing the influence of human factors on traffic accidents has therefore attracted increasing attention. If a vehicle can automatically take corresponding measures before an accident occurs, the accident can be avoided, so vehicle active safety technology is of great significance for preventing traffic accidents. Local path planning is one of the key technologies of vehicle active safety. Traditional local path planning mainly acquires the traffic environment information of the surrounding road through on-board equipment such as vehicle-mounted sensors or cameras; obtains body data of the vehicle, such as turn signal, steering wheel, speed and brake information, through equipment such as the on-board OBD (on-board diagnostics) interface; and then applies a certain algorithm to plan a path that guides the vehicle around obstacles, thereby avoiding accidents and achieving safe driving. However, a single vehicle has a small sensing range and relatively limited environment perception capability, and under special conditions such as occlusion or bad weather it is prone to misjudgment, which may bring great safety hazards to the driver. Therefore, how to enhance the reliability of the vehicle's environment perception and expand its perception range is a very important part of the vehicle local path planning problem.
With the development of communication technology, intelligent vehicle development is advancing from single-vehicle intelligence towards vehicle networking. Short-range communication based on V2X, such as 802.11p, LTE-V and future 5G communication, is widely researched in the Internet of Vehicles, and communication between automobiles and other traffic objects has become a reality. V2X is a high-reliability, low-latency communication method between vehicles and other elements in the Internet-of-Vehicles field, and mainly includes V2V (Vehicle-to-Vehicle), V2I (Vehicle-to-Infrastructure), V2P (Vehicle-to-Pedestrian) and V2N (Vehicle-to-Network). Binocular stereo vision is a form of machine vision: based on the parallax principle, two cameras acquire two images of a measured object within the visual range from different positions, and the three-dimensional geometric information of the object is obtained by calculating the position deviation between corresponding image points, so that information such as the spatial position of the object can be accurately located. V2X communication can acquire the position, speed, direction and other information of traffic objects around the vehicle, from which situation estimation and decision analysis related to assisted safety can be performed. Therefore, combining binocular vision with V2X communication enables detection and identification of traffic objects both within the vehicle's line of sight and at short range beyond the line of sight, and enables the information processing and decision control related to vehicle assisted safety.
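For illustration only, the depth recovery relation underlying binocular stereo vision can be sketched as follows in Python; the focal length, baseline and disparity values used here are arbitrary examples, not parameters of the invention.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classical rectified-stereo relation: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# Illustrative values only: 1000 px focal length, 0.12 m baseline, 25 px disparity
print(depth_from_disparity(1000.0, 0.12, 25.0))  # -> 4.8 (metres)
```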
Disclosure of Invention
The invention addresses the problems that a single vehicle has a small environment perception range, low reliability of environment perception information, and cannot effectively detect obstacle information on the road. A collaborative local path planning method based on V2X communication and binocular vision is provided, which improves the accuracy of vehicle environment perception, thereby improving the accuracy of local path planning and the safety of vehicle driving, and effectively improves the reliability of the vehicle's perception of the road environment. The technical scheme of the invention is as follows:
a collaborative local path planning method based on V2X communication and binocular vision is cooperatively realized through an on-board terminal (OBU), a road side device (RSU), a cloud server, a mobile terminal, an OBD device, a vehicle central control device, a binocular camera and a collaborative local path planning controller, and specifically comprises the following steps:
step 1, in the driving process of a vehicle, triggering a collaborative local path planning algorithm to work by combining navigation path and vehicle steering lamp information, and turning to step 2;
step 2, the vehicle-mounted terminal OBU collects vehicle information through OBD equipment and vehicle central control equipment, and detects road boundary information and obstacle information through a binocular camera;
step 3, the vehicle-mounted terminal OBU obtains traffic object information in the driving process of the vehicle from road side equipment, other vehicles and pedestrians in a V2X communication mode;
step 4, the cooperative local path planning controller acquires traffic object information through an OBU (on-board unit), and then establishes a traffic object mixed situation estimation model according to the traffic object information, wherein the traffic object mixed situation estimation model is used for acquiring dynamic traffic object information after situation estimation;
step 5, performing data fusion on the dynamic traffic object information after the situation estimation of the traffic object mixed situation estimation model, the static traffic object information and the traffic rules by adopting a traffic object information fusion algorithm;
step 6, constructing a traffic road condition map by the fused real-time traffic data through a coordinate mapping model;
step 7, establishing a situation estimation model of the vehicle according to the vehicle body information including vehicle navigation map information, the current steering lamp of the vehicle and the three-axis acceleration information of the vehicle, and evaluating the running situation and the suggested speed of the vehicle;
step 8, according to the traffic road condition map established in the step 6, the driving situation of the vehicle and the turning radius of the vehicle are comprehensively considered, and the optimal driving path of the current vehicle is calculated through a local path planning algorithm;
and 9, transmitting the obtained optimal driving path and the vehicle information to the OBU in cooperation with the local path planning controller, broadcasting the information to other traffic objects by the OBU in a V2X communication mode, and transmitting the information to the vehicle networking road side equipment and the vehicle networking cloud platform.
Further, the on-board terminal OBU is equipped with an information detection module including a high-precision positioning module and a three-axis gyroscope and V2X communication equipment, and is used for completing vehicle information acquisition, road boundary information reception, obstacle information reception and V2X message reception and analysis; the collaborative local path planning controller is a vehicle-mounted computing center for local path planning, and mainly completes a traffic object mixed situation estimation model, a multi-source data fusion algorithm, traffic road condition map construction, a self-vehicle situation estimation model and a local path planning algorithm; the binocular camera consists of two cameras with the same specification parameters, which are horizontally fixed on the vehicle, and is used for acquiring and processing image data; the high-precision positioning module is used for acquiring high-precision longitude and latitude and altitude information of a vehicle in the driving process; the three-axis gyroscope is used for acquiring three-axis acceleration information of the vehicle during running; the V2X communication device is used for broadcasting the fused environment data and the optimal local planning path information.
Further, in the step 2, road boundary information and obstacle information are detected through the binocular camera; the road boundary information comprises the guardrail positions of the road and the road lane line positions, and the obstacle information comprises the obstacle type, position information and obstacle danger level; the method specifically comprises the following steps: calibrating the vehicle-mounted binocular camera by a stereo calibration method, thereby calculating the internal and external parameter matrices, eliminating distortion and aligning rows and columns; detecting the positions of lane lines and guardrails in the image by an image edge detection algorithm, and then detecting obstacle targets within the lane after image ROI segmentation; detecting obstacles in the left and right views through an object detection algorithm and marking the pixel positions and categories of the obstacles, the obstacle categories being mainly pedestrians, dangerous vehicles, animals, potholes and falling rocks; and then calculating the three-dimensional coordinates of the obstacles relative to the HV through a stereo matching algorithm.
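A minimal sketch of the stereo calibration and rectification steps named above is given below, written against the OpenCV API; the flag choice and the assumption that chessboard correspondences and per-camera intrinsics are already available are simplifications made here for illustration.

```python
import cv2
import numpy as np

def rectify_stereo_pair(objpoints, imgpoints_l, imgpoints_r, K_l, D_l, K_r, D_r,
                        image_size, img_l, img_r):
    """Stereo-calibrate, then rectify and row-align a left/right image pair."""
    # Joint calibration: recover rotation R and translation T between the cameras
    _, K_l, D_l, K_r, D_r, R, T, _, _ = cv2.stereoCalibrate(
        objpoints, imgpoints_l, imgpoints_r, K_l, D_l, K_r, D_r, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)

    # Rectification transforms so that epipolar lines become horizontal rows
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_l, D_l, K_r, D_r, image_size, R, T)

    map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, D_l, R1, P1, image_size, cv2.CV_32FC1)
    map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, D_r, R2, P2, image_size, cv2.CV_32FC1)

    rect_l = cv2.remap(img_l, map_lx, map_ly, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map_rx, map_ry, cv2.INTER_LINEAR)
    return rect_l, rect_r, Q   # Q later reprojects disparity values to 3-D coordinates
```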
Further, the multi-source traffic object mixed situation estimation model combines a traffic object situation estimation model of the binocular camera with a traffic object situation estimation model of V2X, involving information obtained both within the line of sight (camera) and beyond the line of sight (V2X); the motion state of a dynamic traffic object is predicted by the mixed situation estimation model from the traffic object time series data.
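For illustration only, one very simple form of the state prediction that such a model performs on traffic object time series data is sketched below; the constant-velocity assumption and the field names are simplifications introduced here, not the mixed situation estimation model of the invention.

```python
def predict_state(track, horizon_s):
    """Extrapolate a traffic object's position from its two most recent
    fused observations, each a dict with 't', 'x', 'y' (seconds / metres)."""
    prev, last = track[-2], track[-1]
    dt = last["t"] - prev["t"]
    vx = (last["x"] - prev["x"]) / dt
    vy = (last["y"] - prev["y"]) / dt
    return {"t": last["t"] + horizon_s,
            "x": last["x"] + vx * horizon_s,
            "y": last["y"] + vy * horizon_s,
            "vx": vx, "vy": vy}

# Illustrative track built from V2X messages or image key frames, sampled at 10 Hz
track = [{"t": 0.0, "x": 0.0, "y": 0.0}, {"t": 0.1, "x": 0.0, "y": 1.5}]
print(predict_state(track, 0.5))   # object expected about 7.5 m further ahead
```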
further, the step 4 divides the acquired road environment information into three categories, namely, vehicles, obstacles and pedestrians, and respectively establishes traffic object models as follows:
V(Position3D, TransmissionState, Speed, Heading, AccelerationSet4Way, BrakeSystemStatus, VehicleSize) (4)
wherein V represents a vehicle, Position3D (Latitude, Longitude, Elevation) represents the longitude, latitude and elevation information of the vehicle, TransmissionState (Neutral, Park, ForwardGears, ReverseGears) represents the vehicle gear state, Speed represents the vehicle speed, Heading represents the vehicle heading angle, AccelerationSet4Way (Long, Lat, Vert, Yaw) represents the four-axis acceleration of the vehicle, BrakeSystemStatus (brakePedal, wheelBrakes, traction, abs, scs, brakeBoost, auxBrakes) represents the brake system state of the vehicle, and VehicleSize (VehicleWidth, VehicleLength, VehicleHeight) represents the dimensions of the vehicle,
O(Position3D,ObstacleSize,DangerClasses) (5)
where O denotes an obstacle, Position3D (Latitude, Longitude, Elevation) denotes the Position of the obstacle, ObstacleSize denotes the size of the obstacle, and DangerClasses denotes the risk level of the obstacle.
H(Position3D,Speed,Heading,AccelerationSet4Way) (6)
Where H denotes a pedestrian, Position3D (Latitude, Longitude, Elevation) denotes Latitude and Longitude Position information of the pedestrian, Speed denotes the current Speed of the pedestrian, Heading denotes the Heading angle of the pedestrian, and AccelerationSet4Way (Long, Lat, Vert, Yaw) denotes the four-axis acceleration of the pedestrian.
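For illustration only, the three traffic object models (4)-(6) can be represented as simple data containers; the Python types below merely mirror the field definitions given above.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Position3D:
    latitude: float    # degrees
    longitude: float   # degrees
    elevation: float   # metres

@dataclass
class Vehicle:                        # model (4)
    position: Position3D
    transmission_state: str           # Neutral / Park / ForwardGears / ReverseGears
    speed: float                      # m/s
    heading: float                    # degrees
    acceleration_4way: Tuple[float, float, float, float]   # long, lat, vert, yaw
    brake_system_status: dict         # brakePedal, wheelBrakes, abs, ...
    size: Tuple[float, float, float]  # width, length, height

@dataclass
class Obstacle:                       # model (5)
    position: Position3D
    size: Tuple[float, float, float]
    danger_class: int

@dataclass
class Pedestrian:                     # model (6)
    position: Position3D
    speed: float
    heading: float
    acceleration_4way: Tuple[float, float, float, float]
```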
Further, the step 5 of calculating highly reliable traffic object information by a multi-source traffic object information fusion algorithm specifically comprises the following steps: firstly, preprocessing the multi-source data in combination with the traffic rules, analyzing and removing traffic object longitude/latitude data outside the path planning range, abnormal three-axis acceleration data and data from devices in an abnormal state, and then caching the data uniformly according to a pre-established standard format; since the data sampling frequencies of different sources differ, the sampling frequencies are unified so that data are acquired at the same moments, completing the time alignment of the data; since the coordinate systems of different data also differ, the multi-source data must be unified to the same coordinate system before fusion; an optimal fusion number and an optimal fusion set are then obtained from the data based on a bitmap method, and finally the fused characteristic value of the traffic information is obtained by calculating a confidence distance matrix and a relation matrix.
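One common formulation of the confidence-distance step mentioned above is sketched below, assuming every source reports a Gaussian-distributed measurement of the same quantity; the threshold, the support-counting rule and the inverse-variance averaging are illustrative assumptions, not the exact fusion algorithm of the invention.

```python
import numpy as np
from math import erf, sqrt

def confidence_distance_matrix(values, sigmas):
    """d_ij = 2 * integral from x_i to x_j of N(x; x_i, sigma_i^2) dx (one common definition)."""
    n = len(values)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d[i, j] = abs(erf((values[j] - values[i]) / (sqrt(2.0) * sigmas[i])))
    return d

def fuse_measurements(values, sigmas, threshold=0.6):
    """Build the relation matrix from the confidence distances and fuse the
    largest mutually supporting set of sources by inverse-variance weighting."""
    d = confidence_distance_matrix(values, sigmas)
    relation = (d < threshold) & (d.T < threshold)       # symmetric support matrix
    support = relation.sum(axis=1)
    best = np.flatnonzero(relation[np.argmax(support)])  # optimal fusion set
    w = 1.0 / np.square(np.asarray(sigmas)[best])
    return float(np.dot(w, np.asarray(values)[best]) / w.sum()), best

# Illustrative: three sources measuring the same longitudinal distance (metres)
print(fuse_measurements([25.1, 25.4, 31.0], [0.5, 0.4, 2.0]))
```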
Further, the step 6 of constructing a traffic road condition map by using the fused real-time traffic data through a coordinate mapping model specifically includes:
firstly, establishing a mapping relation model between an HV coordinate system and a geographic space coordinate system by taking longitude and latitude coordinates of a vehicle as reference points; and then converting the longitude and latitude coordinate data of other traffic objects into coordinate data under an HV coordinate system through a coordinate mapping algorithm, thereby constructing a traffic road condition map of real-time road conditions with high reliability.
Further, the step 7 of establishing a situation estimation model of the vehicle according to the vehicle body information including the vehicle navigation map information, the current turn signal of the vehicle and the three-axis acceleration information of the vehicle, and evaluating the driving situation and the suggested speed of the vehicle specifically includes:
firstly, dividing the driving action of the vehicle into 4 types of lane following, left lane changing, right lane changing and overtaking; and then establishing a situation estimation model of the vehicle, evaluating the driving situation of the vehicle by analyzing vehicle body information including navigation information of vehicle central control equipment, current steering lamps of the vehicle and three-axis acceleration information of the vehicle and applying a situation estimation algorithm of the vehicle, and evaluating the driving suggested speed of the vehicle according to information of other vehicles in the traffic environment and traffic rules.
Further, the step 8 of calculating the optimal driving path of the current vehicle by a local path planning algorithm according to the traffic road condition map established in the step 6, comprehensively considering the driving situation of the vehicle and the turning radius of the vehicle, specifically includes: firstly, taking the turning radius of the vehicle into account, setting an open node list containing the initial node on the map, calculating the cost function of each node in the list, taking the node with the minimum cost function value as the next point and moving it into the closed list, and repeating this process until the path planning is finished; then constraining the curvature of each node in the path and its distance to obstacles, and fitting the original driving path by a cubic Bezier curve smoothing method to obtain a smooth optimal driving path.
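The open-list/closed-list search described in step 8 can be sketched as follows on a plain occupancy grid; the grid representation, the cost function and the 8-connected neighbourhood are assumptions made for illustration, and the turning-radius and curvature constraints of the invention are omitted.

```python
import heapq, itertools

def plan_path(grid, start, goal):
    """Best-first search with open/closed lists on a 2-D occupancy grid.
    grid[y][x] == 1 marks an obstacle cell; start and goal are (x, y) tuples.
    Node cost = path length travelled so far + straight-line distance to goal."""
    def h(p):
        return ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2) ** 0.5

    tie = itertools.count()                      # avoids comparing nodes on cost ties
    open_list = [(h(start), next(tie), 0.0, start, None)]
    closed, parents = set(), {}
    while open_list:
        _, _, g, node, parent = heapq.heappop(open_list)
        if node in closed:
            continue
        closed.add(node)
        parents[node] = parent
        if node == goal:                         # walk the parent chain back to start
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = node[0] + dx, node[1] + dy
                if (dx or dy) and 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) \
                        and grid[ny][nx] == 0 and (nx, ny) not in closed:
                    step = (dx * dx + dy * dy) ** 0.5
                    heapq.heappush(open_list,
                                   (g + step + h((nx, ny)), next(tie), g + step, (nx, ny), node))
    return None                                  # no feasible path found
```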
The invention has the following advantages and beneficial effects:
1. By combining the V2X communication technology with the object detection and positioning technology of the binocular camera, the sensing range of the vehicle for the road traffic environment can be effectively expanded. Even in ranges that the binocular camera cannot detect, information can still be obtained through V2X messages broadcast by surrounding vehicles, so more road environment information is available and the perception range of the vehicle is effectively expanded. Combining line-of-sight (binocular camera) and non-line-of-sight (V2X communication) perception and prediction of traffic object states improves the accuracy of vehicle environment perception and thus the accuracy of local path planning and the safety of vehicle driving.
2. By combining the V2X communication technology with data fusion technology, the reliability of the vehicle's perception of the surrounding road environment can be effectively improved. In abnormal situations, a single vehicle's perception of the surrounding road environment may be biased or even wrong, which may bring significant safety hazards to the driver. Through V2X communication, obstacle information sent by surrounding vehicles, road side equipment, pedestrians' mobile devices or the Internet-of-Vehicles cloud platform can be acquired, changing the data source from single-vehicle perception to multi-device perception; the multi-source data are then fused, which effectively improves the reliability of the vehicle's perception of the road environment.
3. By combining the V2X communication technology with the local path planning algorithm, the calculation frequency of the local path planning algorithm and the computational load of the CPU (central processing unit) can be reduced. Compared with single-vehicle perception, V2X communication effectively expands the perception range of the vehicle, so a longer optimal path can be obtained from a single local path calculation, which effectively reduces the calculation frequency of the algorithm. Moreover, after the optimal local path is calculated, the path information can be broadcast to surrounding vehicles through V2X, which facilitates the development of multi-vehicle collaborative path planning.
Drawings
FIG. 1 is a general architecture diagram of a coordinated local path planning method based on V2X communication and binocular vision according to a preferred embodiment of the present invention;
FIG. 2 is a V2X communication frame diagram in a collaborative local path planning method based on V2X communication and binocular vision;
FIG. 3 is a schematic diagram of an OBU in a collaborative local path planning method based on V2X communication and binocular vision;
FIG. 4 is a block diagram of the collaborative local path planning method based on V2X communication and binocular vision according to the preferred embodiment of the present invention;
FIG. 5 is a flowchart of the steps of a collaborative local path planning method based on V2X communication and binocular vision;
FIG. 6 is a schematic view of obstacle coordinates in a vehicle coordinate system;
FIG. 7 is an input/output relationship diagram of a traffic object multi-source information fusion algorithm;
FIG. 8 is a flow chart of a traffic object multi-source information fusion algorithm;
FIG. 9 is a flowchart of a local path planning algorithm in the collaborative local path planning method based on V2X communication and binocular vision;
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
the method for collaborative local path planning based on V2X communication and binocular vision is mainly completed by an on-board terminal (OBU), a road side device (RSU), a cloud server, a mobile terminal, an OBD device, a vehicle central control device, a binocular camera, and a collaborative local path planning controller as shown in fig. 1, and is a frame diagram of V2X communication as shown in fig. 2. The OBU device on the main vehicle hv (host vehicle) is an information acquisition center for local path planning, and as shown in fig. 3, the OBU device includes a high-precision positioning module, a three-axis gyroscope and other information detection modules, and a V2X communication device, and needs to complete vehicle information acquisition, road boundary information detection, and V2X message receiving and analyzing; the cooperative local path planning controller is a vehicle-mounted computing center for local path planning, and mainly completes a traffic object mixed situation estimation model, a multi-source data fusion algorithm, traffic road condition map construction, a self-vehicle situation estimation model and a local path planning algorithm; the binocular camera consists of two cameras with completely same specification parameters, which are horizontally fixed on the vehicle according to a certain baseline distance, and is used for acquiring and processing image data; the three-axis gyroscope is used for acquiring three-axis acceleration information of the vehicle during running; the V2X communication device is used for broadcasting the fused environment data and the optimal local planning path information.
As shown in fig. 4, the OBU obtains vehicle body information and environment information through the vehicle-mounted binocular camera, the OBD equipment and V2X communication and transmits them to the collaborative local path planning controller; the collaborative local path planning controller then establishes a traffic object mixed situation estimation model from the traffic object information, performs space-time fusion of the situation-estimated traffic object information and the traffic rules through a multi-source data fusion algorithm, constructs a highly reliable traffic road condition map from the fused real-time traffic data through a coordinate mapping model, calculates the own-vehicle situation through the own-vehicle situation estimation model from the own-vehicle information, and then calculates the current real-time optimal smooth driving path of the vehicle through an improved local path planning algorithm; finally, the collaborative local path planning controller transmits the path to the OBU, and the OBU broadcasts it to other vehicles through V2X, so as to improve the real-time performance and accuracy of path planning and the cooperation between vehicles.
The collaborative local path planning method based on V2X communication and binocular vision is shown in FIG. 5, and comprises the following steps:
step one, during driving, the navigation path in the vehicle central control equipment and the vehicle turn signal information are combined, and the collaborative local path planning algorithm is triggered according to the corresponding driving scene. For example, if the navigation information in the central control equipment indicates that the vehicle should follow its lane but the left turn signal is on, the turn signal information is inconsistent with the navigation information; the driver is then considered to be overtaking, and the collaborative local path planning algorithm is triggered.
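For illustration only, the triggering condition of this example can be written as a simple consistency check between the navigation manoeuvre and the turn signal; the manoeuvre names are assumptions introduced here.

```python
def should_trigger_planning(nav_manoeuvre: str, turn_signal: str) -> bool:
    """Trigger collaborative local path planning when the driver's turn signal
    disagrees with the manoeuvre expected from the navigation path,
    e.g. navigation says 'lane_follow' but the left indicator is on."""
    expected_signal = {
        "lane_follow": "off",
        "turn_left": "left",
        "turn_right": "right",
    }.get(nav_manoeuvre, "off")
    return turn_signal != expected_signal

print(should_trigger_planning("lane_follow", "left"))   # True  -> start planning
print(should_trigger_planning("turn_left", "left"))     # False -> follow navigation
```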
Secondly, the OBU acquires vehicle body information of the host vehicle, such as turn signal and steering wheel information, through the OBD equipment; from this body information the future tendency of the vehicle can be pre-judged. Two cameras with identical specification parameters are horizontally and fixedly mounted on the vehicle at a certain baseline distance to form the binocular camera. The binocular camera is calibrated by a stereo calibration method to compute the internal and external parameters and the relative pose of the two cameras; according to the calculated parameters, a stereo rectification algorithm removes distortion and performs row (column) alignment for the two cameras, so that the epipolar lines of the two images lie exactly on the same horizontal line. In this way any point on one image and its corresponding point on the other image have the same row number, and the corresponding point can be matched by a one-dimensional search along that row; that is, the imaging origin coordinates of the left and right views are consistent. After the binocular camera acquires an image, the lane lines and guardrail positions in the image are detected by an image edge detection algorithm; ROI segmentation is then performed on the image, and the region inside the lane lines or guardrails is cropped for subsequent obstacle detection. Obstacles in the left and right views are detected by an object detection algorithm and their pixel positions and categories are marked, the obstacle categories being mainly pedestrians, dangerous vehicles, animals, potholes and falling rocks; the obstacle elements of the rectified left and right views are matched by a feature-based stereo matching algorithm to obtain the disparity maps of the left and right cameras, and the three-dimensional coordinates of the obstacles relative to the binocular camera are calculated from the disparity maps;
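A minimal sketch of the disparity computation and three-dimensional reprojection for the rectified image pair is given below, reusing the rect_l, rect_r and Q outputs of the rectification sketch shown earlier; the SGBM parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def obstacle_camera_coords(rect_l, rect_r, Q, pixel):
    """Compute the dense disparity of a rectified pair and return the 3-D
    coordinates (in the camera frame) of a detected obstacle pixel (u, v)."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
    disparity = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0  # SGBM output is fixed-point
    points_3d = cv2.reprojectImageTo3D(disparity, Q)                    # H x W x 3 coordinate map
    u, v = pixel                                                         # column, row of the obstacle
    return points_3d[v, u]                                               # (x, y, z) in metres
```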
as shown in fig. 6, taking the position of the binocular camera on the vehicle as the origin O of the coordinate system, with the heading direction of the vehicle parallel to the Y axis, let P be the position of an obstacle with three-dimensional coordinates (x, y, z) relative to the origin O; the angle α between the obstacle and the forward direction of the camera can be expressed as:
α = arctan(x / y) (1)
Because the heading direction of the vehicle is parallel to the Y axis and the heading angle of the vehicle at this moment is β, the geographic direction angle of the obstacle relative to the binocular camera is δ = α + β. If the GPS coordinates of point O are (O_j, O_w, O_h) and the longitude and latitude coordinates of point P are (P_j, P_w, P_h), then:
(Expression (2), which gives the longitude and latitude of P from those of O, the direction angle δ and the arc C, appears only as an image in the original publication.)
wherein, R is the average radius of the earth, and radian C is as follows:
C = √(x² + y²) / R (3)
By combining expressions (1), (2) and (3), the spatial coordinates of the obstacle in the longitude/latitude coordinate system can be obtained from the position of the obstacle relative to the vehicle coordinate system.
Thirdly, the collaborative local path planning algorithm acquires traffic object information during driving through V2X communication, including information of other vehicles (RV), road obstacle information detected by other vehicles, and information of pedestrians on the road. Compared with detecting the traffic environment only through on-vehicle sensors, V2X communication significantly expands the vehicle's sensing range, since road environment information can be acquired from other vehicles, road side equipment, pedestrians' mobile terminals and the cloud server. The acquired road environment information is divided into three categories, namely vehicles, obstacles and pedestrians, and the traffic object models are respectively established as follows:
V(Position3D, TransmissionState, Speed, Heading, AccelerationSet4Way, BrakeSystemStatus, VehicleSize) (4)
where V denotes a vehicle, Position3D (Latitude, Longitude, Elevation) denotes the longitude, latitude and elevation information of the vehicle, TransmissionState (Neutral, Park, ForwardGears, ReverseGears) denotes the vehicle gear state, Speed denotes the vehicle speed, Heading denotes the vehicle heading angle, AccelerationSet4Way (Long, Lat, Vert, Yaw) denotes the four-axis acceleration of the vehicle, BrakeSystemStatus (brakePedal, wheelBrakes, traction, abs, scs, brakeBoost, auxBrakes) denotes the brake system state of the vehicle, and VehicleSize (VehicleWidth, VehicleLength, VehicleHeight) denotes the dimensions of the vehicle.
O(Position3D,ObstacleSize,DangerClasses) (5)
Where O denotes an obstacle, Position3D (Latitude, Longitude, Elevation) denotes the Position of the obstacle, ObstacleSize denotes the size of the obstacle, and DangerClasses denotes the risk level of the obstacle.
H(Position3D,Speed,Heading,AccelerationSet4Way) (6)
Wherein, H represents the pedestrian, Position3D (Latitude, Longitude, Elevation) represents the Longitude and Latitude Position information of the pedestrian, Speed represents the current Speed of the pedestrian, Heading represents the Heading angle of the pedestrian, AccelerationSet4Way (Long, Lat, Vert, Yaw) represents the four-axis acceleration of the pedestrian;
and fourthly, the collaborative local path planning controller acquires the traffic object information through the OBU and establishes a traffic object mixed situation estimation model from the traffic object information obtained both within the line of sight (camera) and beyond the line of sight (V2X communication). The time series data received over V2X are first analyzed by a V2X-based traffic object situation estimation algorithm, and the image key-frame sequence computed by the binocular-camera-based traffic object situation estimation algorithm is then combined with it to obtain the motion situation information of the dynamic traffic objects;
and step five, as shown in fig. 7, the collaborative local path planning algorithm fuses the traffic information in the traffic object models through a traffic object information fusion algorithm: highly reliable traffic object information (such as traffic object type, position, size and speed) is calculated by the multi-source traffic object information fusion algorithm from environment perception information such as the high-precision positioning information, the V2X communication information and the binocular camera detection data. As shown in fig. 8, the multi-source data are first preprocessed in combination with the traffic rules, including analyzing and removing traffic object longitude/latitude data outside the path planning range, abnormal three-axis acceleration data and data from devices in an abnormal state, and the data are then cached uniformly according to a pre-established standard format; since the data sampling frequencies of different sources differ, the sampling frequencies are unified so that data are acquired at the same moments, completing the time alignment of the data; since the coordinate systems of different data also differ, the multi-source data must be unified to the same coordinate system (the HV coordinate system) before fusion. An optimal fusion number and an optimal fusion set are then obtained from the data based on a bitmap method, and finally the fused characteristic value of the traffic information is obtained by calculating the confidence distance matrix and the relation matrix;
and step six, the collaborative local path planning algorithm constructs a highly reliable traffic road condition map from the fused real-time traffic data through a coordinate mapping model. Firstly, a mapping relation model between the HV coordinate system and the geographic coordinate system is established with the longitude/latitude coordinates of the vehicle as reference point; then the longitude/latitude coordinate data of the other traffic objects are converted into coordinates in the HV coordinate system through a coordinate mapping algorithm, thereby constructing a highly reliable map of the real-time road conditions.
Let the known longitude/latitude coordinates of the HV be (O_j, O_w, O_h) and those of the RV be (P_j, P_w, P_h), and let the heading angle of the HV at this moment be β; in the HV coordinate system the Y axis is parallel to the heading direction and the Z axis is vertical, pointing upwards. Assuming the coordinates of the RV in the HV coordinate system are (x, y, z), the azimuth angle α of the line from the HV to the RV is:
(The expression for the azimuth angle α in terms of the longitude and latitude coordinates of the HV and the RV appears only as an image in the original publication.)
therefore, θ ═ α - β can be obtained.
(The two expressions relating x and y to the longitude and latitude differences between the HV and the RV and to the angle θ appear only as images in the original publication.)
x and y can be obtained by combining the two formulas, where R is the average radius of the earth. Since z = P_h − O_h, the coordinates of the RV in the HV coordinate system are obtained;
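The inverse mapping used in step six (RV longitude/latitude into the HV coordinate system) can be sketched in the same spirit; the small-distance approximation below is again an assumption made for illustration, not the exact formulas shown as images above.

```python
import math

R_EARTH = 6371000.0   # mean earth radius in metres

def rv_to_hv_frame(hv_lat, hv_lon, hv_alt, hv_heading_deg, rv_lat, rv_lon, rv_alt):
    """Map an RV's longitude/latitude to (x, y, z) in the HV coordinate system
    (Y axis along the HV heading, Z vertical), via azimuth alpha and theta = alpha - beta."""
    d_north = math.radians(rv_lat - hv_lat) * R_EARTH
    d_east = math.radians(rv_lon - hv_lon) * R_EARTH * math.cos(math.radians(hv_lat))
    alpha = math.degrees(math.atan2(d_east, d_north))    # azimuth of the HV -> RV line
    theta = math.radians(alpha - hv_heading_deg)
    dist = math.hypot(d_north, d_east)
    return dist * math.sin(theta), dist * math.cos(theta), rv_alt - hv_alt
```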
step seven, the driving actions of the host vehicle are divided into 4 categories: lane following, left lane change, right lane change and overtaking. A situation estimation model of the host vehicle is then established; the driving situation of the vehicle is estimated by analyzing vehicle body information such as the navigation information of the vehicle central control equipment, the current turn signals and the three-axis acceleration of the vehicle and applying the own-vehicle situation estimation algorithm, and the suggested driving speed of the vehicle is then estimated from the information of other vehicles in the traffic environment and the traffic rules;
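A rule-based sketch of the own-vehicle situation estimation of step seven is given below; the thresholds, the use of lateral acceleration as a lane-change cue and the speed recommendation rule are illustrative assumptions, not the situation estimation algorithm of the invention.

```python
def estimate_driving_situation(nav_manoeuvre, turn_signal, lat_accel):
    """Classify the host vehicle's intended action into one of the four
    categories used in step seven: lane_follow, lane_change_left,
    lane_change_right or overtake."""
    if turn_signal == "left":
        return "overtake" if nav_manoeuvre == "lane_follow" else "lane_change_left"
    if turn_signal == "right":
        return "lane_change_right"
    if abs(lat_accel) > 1.0:          # noticeable lateral acceleration without a signal
        return "lane_change_left" if lat_accel > 0 else "lane_change_right"
    return "lane_follow"

def suggested_speed(lead_vehicle_speed, speed_limit):
    """Recommend a driving speed bounded by the lead vehicle and the traffic rules."""
    return min(lead_vehicle_speed, speed_limit)
```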
and step eight, the driving situation of the vehicle and the turning radius of the vehicle are comprehensively considered and the optimal driving path of the current vehicle is calculated by the local path planning algorithm on the obtained real-time traffic road condition map. As shown in fig. 9, the turning radius of the vehicle is first taken into account; an open node list containing the initial node is then set up on the map, the cost function of each node in the list is calculated, the node with the minimum cost function value is taken as the next point and moved into the closed list, and this process is repeated until the path planning is complete. The curvature of each node in the path and its distance to obstacles are then constrained, and the original driving path is fitted by a cubic Bezier curve smoothing method to obtain a smooth optimal driving path;
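The cubic Bezier smoothing mentioned in step eight can be sketched as follows; fitting one Bezier segment per group of four consecutive way-points is an illustrative simplification of the smoothing scheme.

```python
def cubic_bezier(p0, p1, p2, p3, n=20):
    """Sample a cubic Bezier segment defined by four control points (x, y)."""
    pts = []
    for i in range(n + 1):
        t = i / n
        b = ((1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3)
        pts.append((sum(w * p[0] for w, p in zip(b, (p0, p1, p2, p3))),
                    sum(w * p[1] for w, p in zip(b, (p0, p1, p2, p3)))))
    return pts

def smooth_path(waypoints, n=20):
    """Smooth the raw planned path by fitting cubic Bezier segments over
    consecutive groups of four way-points (illustrative scheme)."""
    smooth = []
    for i in range(0, len(waypoints) - 3, 3):
        smooth.extend(cubic_bezier(*waypoints[i:i + 4], n=n))
    return smooth or list(waypoints)
```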
and step nine, the collaborative local path planning controller transmits the obtained optimal driving path to the OBU, and the OBU then broadcasts the optimal driving path and the vehicle information to other traffic objects through V2X communication and sends them to the Internet-of-Vehicles road side equipment and cloud platform, thereby facilitating cooperative driving between vehicles.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (7)

1. A collaborative local path planning method based on V2X communication and binocular vision is characterized in that the collaborative local path planning method is achieved through cooperation of an on-board terminal (OBU), road side equipment (RSU), a cloud server, a mobile terminal, OBD equipment, vehicle central control equipment, a binocular camera and a collaborative local path planning controller, and specifically comprises the following steps:
step 1, in the driving process of a vehicle, triggering a collaborative local path planning algorithm to work by combining navigation path and vehicle steering lamp information, and turning to step 2;
step 2, the vehicle-mounted terminal OBU collects vehicle information through OBD equipment and vehicle central control equipment, and detects road boundary information and obstacle information through a binocular camera;
step 3, the vehicle-mounted terminal OBU obtains traffic object information in the driving process of the vehicle from road side equipment, other vehicles and pedestrians in a V2X communication mode;
step 4, the cooperative local path planning controller acquires traffic object information through an OBU (on-board unit), and then establishes a traffic object mixed situation estimation model according to the traffic object information, wherein the traffic object mixed situation estimation model is used for acquiring dynamic traffic object information after situation estimation;
step 5, performing data fusion on the dynamic traffic object information after the situation estimation of the traffic object mixed situation estimation model, the static traffic object information and the traffic rules by adopting a traffic object information fusion algorithm;
step 6, constructing a traffic road condition map by the fused real-time traffic data through a coordinate mapping model, and specifically comprising the following steps:
firstly, establishing a mapping relation model between an HV coordinate system and a geographic space coordinate system by taking longitude and latitude coordinates of a vehicle as reference points; then, converting longitude and latitude coordinate data of other traffic objects into coordinate data under an HV coordinate system through a coordinate mapping algorithm, thereby constructing a traffic road condition map of real-time road conditions with high reliability;
step 7, establishing a situation estimation model of the vehicle according to the vehicle body information including vehicle navigation map information, the current steering lamp of the vehicle and the three-axis acceleration information of the vehicle, and evaluating the running situation and the suggested speed of the vehicle;
step 8, according to the traffic road condition map established in the step 6, comprehensively considering the driving situation of the vehicle and the turning radius of the vehicle, and calculating the optimal driving path of the current vehicle through a local path planning algorithm;
and 9, transmitting the obtained optimal driving path and the vehicle information from the collaborative local path planning controller to the vehicle-mounted terminal OBU, the vehicle-mounted terminal OBU broadcasting them to other traffic objects by V2X communication and sending them to the vehicle networking road side equipment and the vehicle networking cloud platform.
2. The collaborative local path planning method based on V2X communication and binocular vision according to claim 1, wherein the vehicle-mounted terminal OBU is equipped with an information detection module including a high-precision positioning module and a three-axis gyroscope, and a V2X communication device, and is used for completing vehicle information acquisition, road boundary information reception, obstacle information reception, and V2X message reception and analysis; the collaborative local path planning controller is a vehicle-mounted computing center for local path planning, and mainly completes a traffic object mixed situation estimation model, a multi-source data fusion algorithm, traffic road condition map construction, a self vehicle situation estimation model and a local path planning algorithm; the binocular camera consists of two cameras with the same specification parameters, which are horizontally fixed on the vehicle, and is used for acquiring and processing image data; the high-precision positioning module is used for acquiring high-precision longitude and latitude and altitude information of a vehicle in the driving process; the three-axis gyroscope is used for acquiring three-axis acceleration information of the vehicle during running; the V2X communication device is used for broadcasting the fused environment data and the optimal local planning path information.
3. The collaborative local path planning method based on V2X communication and binocular vision according to claim 1 or 2, wherein the step 2 detects road boundary information and obstacle information through the binocular camera, the road boundary information comprising the guardrail positions of the road and the road lane line positions, and the obstacle information comprising the obstacle type, position information and obstacle danger level; the method specifically comprises: calibrating the vehicle-mounted binocular camera by a stereo calibration method, thereby calculating the internal and external parameter matrices, eliminating distortion and aligning rows and columns; detecting the positions of lane lines and guardrails in the image by an image edge detection algorithm, and then detecting obstacle targets within the lane after image ROI segmentation; detecting obstacles in the left and right views through an object detection algorithm and marking the pixel positions and categories of the obstacles, the obstacle categories being mainly pedestrians, dangerous vehicles, animals, potholes and falling rocks; and then calculating the three-dimensional coordinates of the obstacles relative to the HV through a stereo matching algorithm.
4. The collaborative local path planning method based on V2X communication and binocular vision is characterized in that the traffic object mixed situation estimation model is a mixed traffic object situation estimation model of a binocular camera and a mixed traffic object situation estimation model of V2X, which relates to information obtained from both visual range and non-visual range, and the motion state of a dynamic traffic object is predicted through the traffic object mixed situation estimation model according to the traffic object time series data.
5. The collaborative local path planning method based on V2X communication and binocular vision according to claim 4, wherein the step 5 calculates highly reliable traffic object information by the traffic object information fusion algorithm, specifically comprising: firstly, preprocessing the multi-source data in combination with the traffic rules, analyzing and removing traffic object longitude/latitude data outside the path planning range, abnormal three-axis acceleration data and data from devices in an abnormal state, and then caching the data uniformly according to a pre-established standard format; since the data sampling frequencies of different sources differ, the sampling frequencies are unified so that data are acquired at the same moments, completing the time alignment of the data; since the coordinate systems of different data also differ, the multi-source data must be unified to the same coordinate system before fusion; then obtaining an optimal fusion number and an optimal fusion set from the data based on a bitmap method, and finally obtaining the fused characteristic value of the traffic information by calculating a confidence distance matrix and a relation matrix.
6. The collaborative local path planning method based on V2X communication and binocular vision according to claim 5, wherein the step 7 is to establish a situation estimation model of the host vehicle according to the host vehicle body information including vehicle navigation map information, current turn signals of the vehicle and three-axis acceleration information of the vehicle, and to evaluate the driving situation and the suggested speed of the host vehicle, and specifically comprises:
firstly, dividing the driving action of the vehicle into 4 types of lane following, left lane changing, right lane changing and overtaking; and then establishing a situation estimation model of the vehicle, evaluating the driving situation of the vehicle by analyzing vehicle body information including navigation information of vehicle central control equipment, current steering lamps of the vehicle and three-axis acceleration information of the vehicle and applying a situation estimation algorithm of the vehicle, and evaluating the driving suggested speed of the vehicle according to information of other vehicles in the traffic environment and traffic rules.
7. The collaborative local path planning method based on V2X communication and binocular vision according to claim 6, wherein the step 8 is to calculate the optimal driving path of the current vehicle by a local path planning algorithm according to the traffic road condition map established in the step 6, taking the driving situation of the vehicle and the turning radius of the vehicle into consideration, and specifically comprises: firstly, comprehensively considering the turning radius of a vehicle, then setting an opening node list of initial nodes on a map, calculating a cost function of each node in the list, taking the node with the minimum cost function value as the next point, moving the point into a closing list, and repeating the process until path planning is finished; and then, the curvature of each node in the path and the distance between each node and the obstacle are constrained, and the original driving path is fitted by adopting a cubic Bezier curve smoothing method to obtain a smooth optimal driving path.
CN201910133334.9A 2019-02-22 2019-02-22 Collaborative local path planning method based on V2X communication and binocular vision Active CN109920246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910133334.9A CN109920246B (en) 2019-02-22 2019-02-22 Collaborative local path planning method based on V2X communication and binocular vision


Publications (2)

Publication Number Publication Date
CN109920246A CN109920246A (en) 2019-06-21
CN109920246B true CN109920246B (en) 2022-02-11

Family

ID=66961879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910133334.9A Active CN109920246B (en) 2019-02-22 2019-02-22 Collaborative local path planning method based on V2X communication and binocular vision

Country Status (1)

Country Link
CN (1) CN109920246B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130122172A (en) * 2012-04-30 2013-11-07 서울시립대학교 산학협력단 Apparatus for detecting and transferring information about sharp turn and sudden stop of vehicle
US9740945B2 (en) * 2015-01-14 2017-08-22 Magna Electronics Inc. Driver assistance system for vehicle

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104503451A (en) * 2014-11-27 2015-04-08 华南农业大学 Obstacle-avoidance automatic guidance method and automatic guided vehicle based on vision and ultrasonic sensing
US9679487B1 (en) * 2015-01-20 2017-06-13 State Farm Mutual Automobile Insurance Company Alert notifications utilizing broadcasted telematics data
CN105551284A (en) * 2016-01-29 2016-05-04 武汉光庭科技有限公司 Open-type automatic driving system
CN105702083A (en) * 2016-04-13 2016-06-22 重庆邮电大学 Distributed vision-based parking lot-vehicle cooperative intelligent parking system and method
CN106773968A (en) * 2016-12-22 2017-05-31 北京航天益森风洞工程技术有限公司 Vehicle-mounted intelligent end device based on V2X radio communications
CN107223200A (en) * 2016-12-30 2017-09-29 深圳前海达闼云端智能科技有限公司 Navigation method, navigation device and terminal equipment
CN106846892A (en) * 2017-03-07 2017-06-13 重庆邮电大学 Parking lot vehicle cooperative intelligent shutdown system and method based on machine vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Trajectory Planning of Intelligent Vehicles Based on Comprehensive Information Perception; Pang Junkang; China Master's Theses Full-text Database (Electronic Journal), Engineering Science and Technology II; 2019-01-15; full text *

Also Published As

Publication number Publication date
CN109920246A (en) 2019-06-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240307

Address after: No. 10-20, Building 4, No. 170 Keyuan Fourth Road, Jiulongpo District, Chongqing, 400041

Patentee after: Chongqing Mouyi Technology Co.,Ltd.

Country or region after: China

Address before: 400065 Chongwen Road, Nanshan Street, Nanan District, Chongqing

Patentee before: CHONGQING University OF POSTS AND TELECOMMUNICATIONS

Country or region before: China