CN116853284A - Vehicle-road collaborative perception fusion method and system - Google Patents

Vehicle-road collaborative perception fusion method and system

Info

Publication number
CN116853284A
Authority
CN
China
Prior art keywords
vehicle
sensing
target
road
information
Prior art date
Legal status
Pending
Application number
CN202310588105.2A
Other languages
Chinese (zh)
Inventor
赵奕铭 (Zhao Yiming)
马泽 (Ma Ze)
熊吉 (Xiong Ji)
郭剑锐 (Guo Jianrui)
咸志伟 (Xian Zhiwei)
Current Assignee
Dongfeng Motor Corp
Original Assignee
Dongfeng Motor Corp
Priority date
Filing date
Publication date
Application filed by Dongfeng Motor Corp filed Critical Dongfeng Motor Corp
Priority to CN202310588105.2A
Publication of CN116853284A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W 4/44 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W 60/001 Planning or execution of driving tasks
    • B60W 60/0015 Planning or execution of driving tasks specially adapted for safety
    • B60W 60/0016 Planning or execution of driving tasks specially adapted for safety of the vehicle or its occupants
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/546 Message passing systems or structures, e.g. queues
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0108 Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G 1/0116 Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125 Traffic data processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W 4/46 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for vehicle-to-vehicle communication [V2V]
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 2050/0001 Details of the control system
    • B60W 2050/0019 Control system elements or transfer functions

Abstract

The invention discloses a vehicle-road collaborative perception fusion method comprising the following steps. S1: vehicle-end perception messages are acquired through the on-board perception devices of the ego vehicle, vehicle-road collaborative perception messages are acquired through the on-board perception devices of other vehicles and roadside perception devices, and the two kinds of message are placed, in time order, into a vehicle-end perception message buffer queue and a vehicle-road collaborative perception message buffer queue, respectively. S2: a corresponding vehicle-end perception message and vehicle-road collaborative perception message are taken from the two buffer queues. S3: the perceived-target coordinate information in the acquired vehicle-end perception message and vehicle-road collaborative perception message is synchronized into the same coordinate system. S4: Kalman filtering is applied to the perceived targets of both messages after coordinate synchronization. S5: the measurement values obtained by the different sensors in the vehicle-end perception message and the vehicle-road collaborative perception message are fused using the analytic hierarchy process.

Description

Vehicle-road collaborative perception fusion method and system
Technical Field
The invention relates to the technical field of vehicle perception, and in particular to a vehicle-road collaborative perception fusion method and system.
Background
At present, application and verification scenarios for autonomous vehicles are becoming ever broader: L2 and L3 systems have entered mass production, and L4 systems have begun testing and trial operation in specific scenarios. Because sensors such as cameras and radars have inherent perception limitations, single-vehicle intelligence for assisted/automated driving is correspondingly limited: for example, at an occluded intersection the vehicle cannot recognize whether other vehicles are approaching from other directions, at a signalized intersection it cannot recognize the traffic-light phase, and collision accidents have occurred during testing. Given complex Chinese traffic scenarios, it is increasingly important to break through single-vehicle perception limits and improve the safety of assisted/automated driving.
The intelligent-network perception cooperation system is one of the key research topics in intelligent transportation systems. Its core idea is to share the traffic-environment information perceived by vehicles and by the roadside, and to fuse the perception data of both. Compared with single-vehicle intelligent perception, such a system has a wider perception range and more accurate perception information.
The first prior art discloses a control decision method for a preceding-vehicle cut-in scenario based on vehicle-road perception fusion technology. The method comprises: acquiring first-type target information about the preceding vehicle (position, travelling speed, deceleration, and distance to the host vehicle) through the host vehicle's on-board perception devices; acquiring second-type target information about the preceding vehicle (body CAN data, position, steering-wheel angle, travelling speed, and braking state) through the host vehicle's on-board OBU; predicting, from the two types of target information, whether the preceding vehicle is about to cut into the host vehicle's lane; and, in response to a prediction that the preceding vehicle will cut in, determining an assisted-driving control mode for the host vehicle according to a preset control decision strategy, the mode being AEB braking, mild deceleration braking, or a cut-in warning to the driver. The disadvantages of this prior art are that, in order to reduce the complexity and requirements of the perception fusion algorithm, it uses different sensors to perceive different information separately and does not exploit the potential of vehicle-road cooperative V2X communication; moreover, it provides neither an explicit algorithm for fusing the environment information perceived by the on-board sensors and by vehicle-road cooperation, nor a method for computing the confidence of each sensor.
The second prior art discloses a target fusion method, system, vehicle and storage medium based on roadside equipment. Roadside cameras and radars serve as front-end devices; a video-microwave detection fusion computing unit fuses and deeply analyses the camera video and radar data, and outputs the position, speed, acceleration, heading angle and other information of detected target vehicles and pedestrians, thereby realizing perception fusion for the autonomous vehicle and decision control based on vehicle-road cooperative perception fusion. The disadvantages of this prior art are that it still relies only on cameras and radars to perceive the road environment and does not use information interaction between vehicles: the vehicle merely receives information sent by the roadside equipment and cannot send its own information to the roadside, so the shortcomings of camera and radar perception are not overcome; moreover, because of the limited sensing distance of cameras and radars, the implementation cost is extremely high, and the system risk is relatively high in severe weather.
Disclosure of Invention
The invention aims to provide a vehicle-road collaborative perception fusion method and system that overcome the single-vehicle perception limitations of intelligent assisted/automated driving caused by the inherent perception limitations of sensors such as on-board cameras and radars.
In order to solve the above technical problem, the invention provides the following technical solution: a vehicle-road collaborative perception fusion method comprising the following steps,
S1, acquiring a vehicle-end perception message through the on-board perception devices of the ego vehicle, acquiring a vehicle-road collaborative perception message through the on-board perception devices of other vehicles and roadside perception devices, and placing the vehicle-end perception message and the vehicle-road collaborative perception message, in time order, into a vehicle-end perception message buffer queue and a vehicle-road collaborative perception message buffer queue, respectively;
S2, acquiring, at a certain frequency, the corresponding vehicle-end perception message and vehicle-road collaborative perception message from the vehicle-end perception message buffer queue and the vehicle-road collaborative perception message buffer queue;
S3, synchronizing the perceived-target coordinate information in the vehicle-end perception message and the vehicle-road collaborative perception message acquired in S2 into the same coordinate system;
S4, performing Kalman filtering on the perceived targets of the vehicle-end perception message and the vehicle-road collaborative perception message after coordinate synchronization;
and S5, fusing, by the analytic hierarchy process, the measurement values obtained by the different sensors in the vehicle-end perception message and the vehicle-road collaborative perception message.
According to this scheme, the vehicle-end perception message and the vehicle-road collaborative perception message each contain perceived-target coordinate information and a timestamp; the vehicle-end perception message is obtained through the ego vehicle's on-board sensors, and the vehicle-road collaborative perception message is obtained through other vehicles' on-board sensors and sensors arranged at the roadside.
According to this scheme, the vehicle-end perception message buffer queue and the vehicle-road collaborative perception message buffer queue filter the buffered messages in real time; specifically, messages containing no perceived-target information are removed from the queues, the timestamps of corresponding vehicle-end and vehicle-road collaborative perception messages are differenced, and message pairs whose time difference exceeds a set time threshold are removed.
According to this scheme, a perceived target in S3 is represented as a target candidate box; the perceived-target coordinate information comprises the coordinates of the candidate box's center point in the world coordinate system and the length l and width w of the candidate box. The synchronization process is specifically as follows:
first, the coordinates of the four vertices of the target candidate box are determined in the candidate box's own coordinate system, whose origin is the box's center point, whose positive x-axis is horizontal, perpendicular to the box's direction of travel and pointing to its left, and whose positive y-axis is the direction of travel;
then, the four vertex coordinates of the target candidate box are rotated about the box's center point by the box's heading angle, so that the axes of the box coordinate system become parallel to the axes of the world coordinate system;
finally, the four vertices of the target candidate box are translated together with the box's center point so that the origin of the box coordinate system coincides with the origin of the world coordinate system.
According to this scheme, the Kalman filtering in S4 is specifically as follows.

The state of a perceived target is represented as $\theta = (p, v)$, where $p$ is the position of the perceived target and $v$ is its speed.

The distance travelled by the perceived target within time $t$ is expressed as

$$L_x = v_x t + \tfrac{1}{2} a t^2, \qquad L_y = v_y t + \tfrac{1}{2} a t^2,$$

where $L_x, L_y$ are the distances the target moves in the x-axis and y-axis directions within time $t$, $v_x, v_y$ are the target's movement speeds in the x-axis and y-axis directions, and $a$ is the target's acceleration.

Therefore

$$x_{k+t} = x_k + v_{x,k} t + \tfrac{1}{2} a t^2, \qquad y_{k+t} = y_k + v_{y,k} t + \tfrac{1}{2} a t^2,$$

$$v_{x,k+t} = v_{x,k} + a t, \qquad v_{y,k+t} = v_{y,k} + a t,$$

where $x_k$, $y_k$ are the x-axis and y-axis coordinates of the target at time $k$, $x_{k+t}$, $y_{k+t}$ are its coordinates at time $k+t$, and $v_{x,k+t}$, $v_{y,k+t}$ are its movement speeds in the x-axis and y-axis directions at time $k+t$.

The state of the perceived target is expressed in vector form as

$$\theta = (p_x, p_y, v_x, v_y)^T,$$

where $p_x$, $p_y$ are the components of $p$ in the x-axis and y-axis directions.

The state model of the perceived target is then established as

$$\theta_{k+t} = A_k \theta_k + B u_k,$$

where $\theta_{k+t}$ is the perceived-target state at time $k+t$, $\theta_k$ is the state at time $k$, $A_k$ is the system state-transition matrix, and $B u_k$ is the system error term. The model is therefore written as

$$\begin{pmatrix} x_{k+t} \\ y_{k+t} \\ v_{x,k+t} \\ v_{y,k+t} \end{pmatrix} = \begin{pmatrix} 1 & 0 & t & 0 \\ 0 & 1 & 0 & t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_k \\ y_k \\ v_{x,k} \\ v_{y,k} \end{pmatrix} + B u_k, \qquad B u_k = \begin{pmatrix} \tfrac{1}{2} t^2 \\ \tfrac{1}{2} t^2 \\ t \\ t \end{pmatrix} a.$$

The covariance matrix of the state model of the perceived target (the process-noise covariance arising from the unknown acceleration, with acceleration variance $\sigma_a^2$) is

$$Q = B B^T \sigma_a^2.$$

The observation model of the perceived target is then expressed as

$$Z_k = C_k \theta_k + v_k,$$

where $C_k = I_{4\times4}$ and $v_k$ is the observation noise of the model; the observation-noise covariance matrix $R$ is expressed as

$$R = \mathrm{diag}\!\left(\sigma_{p_x}^2, \sigma_{p_y}^2, \sigma_{v_x}^2, \sigma_{v_y}^2\right),$$

where $\sigma_{p_x}^2$, $\sigma_{p_y}^2$, $\sigma_{v_x}^2$, $\sigma_{v_y}^2$ are the variances of $p_x$, $p_y$, $v_x$, $v_y$ respectively.

Using Kalman filtering, the prediction of the perceived-target state is

$$\hat{\theta}_{k|k-1} = A_k \hat{\theta}_{k-1} + B u_k,$$

the prediction error is

$$P_{k|k-1} = A_k P_{k-1} A_k^T + Q,$$

the Kalman gain is

$$K_k = P_{k|k-1} C_k^T \left( C_k P_{k|k-1} C_k^T + R \right)^{-1},$$

and the best estimated state of the perceived target after the update at time $k$ is

$$\hat{\theta}_k = \hat{\theta}_{k|k-1} + K_k \left( Z_k - C_k \hat{\theta}_{k|k-1} \right).$$
according to the proposal, the process of fusing the measurement values by adopting the analytic hierarchy process in S5 is specifically as follows,
if there are p sensors for acquiring sensing information at time W, there are q sensing targets, and the measurement value of the sensing target j by the sensor i is m ij (i=1, 2, …, p; j=1, 2, …, q); determining the credibility of the sensor i as a according to a fusion algorithm of weighted average i (i=1, 2, …, p), the fusion result of the perceived target j is,
in the above, a i Satisfy the following requirements
For the perceived target j, the average of the measured values of all the corresponding sensors is,
the ratio of the credibility of the sensor s and the sensor t to the perceived target j is defined as,
for the perceived target j, the sensor's reliability judgment matrix is,
matrix D is then formed j Normalization processing of each column to obtain matrixThe elements of which are generally referred to as,
matrix is formedAdding by row to obtain an excessive confidence vector +.>The elements of which are generally referred to as,
will thenNormalizing to obtain a reliability vector W j Wherein the element->Representing the trustworthiness of the sensor r to the perceived target j,/>It is indicated that the number of the elements is,
a vehicle-road cooperative sensing fusion device receives a vehicle-end sensing message and a vehicle-road cooperative sensing message and executes the vehicle-road cooperative sensing fusion method.
A computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the vehicle-road collaborative perception fusion method described above.
An automobile is provided with the vehicle-road collaborative perception fusion device described above.
A vehicle-road collaborative perception fusion system for implementing the vehicle-road collaborative perception fusion method described above comprises,
an on-board perception device for acquiring the vehicle-end perception message;
a roadside perception device for acquiring roadside perception information;
a roadside communication unit for sending the roadside perception information to the on-board OBUs;
an on-board OBU, which forwards the ego vehicle's vehicle-end perception message to the perception fusion unit and sends it to the OBUs of other vehicles, and which meanwhile receives the roadside perception information from the roadside communication unit and the vehicle-end perception messages of other vehicles sent by their OBUs, forming the vehicle-road collaborative perception message;
and a perception fusion unit arranged on each vehicle, which receives the vehicle-end perception message and the vehicle-road collaborative perception message sent by the vehicle's OBU and executes the steps of the vehicle-road collaborative perception fusion method described above.
The beneficial effects of the invention are as follows: by integrating perception information from the ego vehicle, surrounding vehicles and the road in a timely manner, driving safety is improved, for example by preventing rear-end collisions or reducing pedestrian accidents; applying Kalman filtering to the perceived targets' coordinate information makes the data more accurate and stable, improving recognition of the driving environment and the accuracy of vehicle path prediction; and fusing the measurement values acquired by different sensors with the analytic hierarchy process raises the confidence of the perception information, optimizing the driving experience and making decision-making and planning easier. The scheme thus addresses the single-vehicle perception limitations of assisted/automated driving caused by the inherent limitations of sensors such as on-board cameras and radars (for example, the inability to recognize, at an occluded intersection, whether vehicles are approaching from other directions, or to recognize the traffic-light phase at a signalized intersection), and enhances the perception accuracy and perception range of an intelligent vehicle's perception system.
Drawings
FIG. 1 is a block diagram of a vehicle-road collaborative perception fusion system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the coordinate-rotation principle according to an embodiment of the present invention;
FIG. 3 is a flow chart of the vehicle-road collaborative perception fusion algorithm according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings of the embodiments of the present disclosure. It will be apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments, which can be made by one of ordinary skill in the art without the need for inventive faculty, are within the scope of the present disclosure, based on the described embodiments of the present disclosure.
(1) The vehicle-road cooperative system combines roadside perception devices and on-board perception devices, both of which mainly consist of sensors such as various radars and cameras. Vehicle-road collaborative post-perception fusion synchronizes, in time and space, the perception information from the on-board perception devices with the perception-result information of the roadside perception devices, fuses them and removes redundant targets, and finally obtains a consistent interpretation and description of the perceived targets best suited to driving behaviours such as vehicle early warning and trajectory planning. The message set fused by the vehicle-road collaborative post-perception fusion algorithm comes from the on-board perception devices and the on-board OBU (On-Board Unit). The roadside perception devices and the cloud system synchronize their perception data to the roadside computing unit in real time; the roadside computing unit parses the perception data and converts it into general V2X (Vehicle-to-Everything, i.e., the exchange of all information between the vehicle and the outside world) messages, which are sent to the on-board OBU through the RSU (Road-Side Unit). The on-board OBU simultaneously receives the V2X messages broadcast by other vehicles' OBUs within communication range and forwards all these messages to the perception fusion computing unit, while the on-board perception devices send the perceived environment information to the perception fusion computing unit, as shown in FIG. 1.
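As a concrete illustration of the message flow just described, the following minimal Python sketch shows one possible shape of the perception messages passed between the OBU, the RSU, and the perception fusion computing unit. The field names are hypothetical (the patent only requires perceived-target coordinates, box size, heading, and a timestamp); the actual V2X message set defines richer fields.

```python
# Hypothetical perception message types (assumption: field names are ours,
# not the patent's). A message carries a timestamp plus perceived targets.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PerceivedTarget:
    x: float        # candidate-box center x in the unified world frame (m)
    y: float        # candidate-box center y in the unified world frame (m)
    vx: float       # speed along the x-axis (m/s)
    vy: float       # speed along the y-axis (m/s)
    heading: float  # heading angle (rad), used later for box rotation
    length: float   # candidate-box length l (m)
    width: float    # candidate-box width w (m)

@dataclass
class PerceptionMessage:
    timestamp: float                       # message timestamp (s)
    source: str                            # "vehicle-end" or "vehicle-road"
    targets: List[PerceivedTarget] = field(default_factory=list)
```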
(2) In the first stage, the time synchronization of the perception fusion information is checked. The perception fusion computing unit places the received environment information perceived by the vehicle-end sensors (i.e., the vehicle-end messages) into a vehicle-end message buffer queue, and places the received vehicle-road cooperative messages forwarded by the OBU into a vehicle-road cooperative message buffer queue. To meet timeliness requirements, a timer may be set before fusion that triggers every t_g (100 ms, following the basic V2X performance requirement, whose update frequency depends on the GPS data update frequency) to take the head messages of the vehicle-end message queue and the vehicle-road cooperative message queue for fusion; t_g is chosen according to the transmission frequency of each perception device. The messages taken from the two queues are filtered: messages without target state information (i.e., empty obstacle lists) are removed from the message list, and each pair of messages to be fused undergoes a time synchronization check. If, at a timer trigger, the timestamp t_v of the vehicle-end message or the timestamp t_r of the vehicle-road cooperative message differs from the current timestamp t_n by more than the aging threshold τ_g (10 ms, determined by the vehicle application function), such messages are treated as stale and removed, in order to guarantee perception timeliness. If the timestamp difference is smaller than the aging threshold τ_g, the message set to be fused is considered to satisfy the timeliness required for perception cooperation. τ_g is chosen considering the perception frequency of the on-board perception devices and of the vehicle-road cooperative perception devices.
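A minimal sketch of this stage-one timeliness check is given below, assuming the message type sketched above; the queue names are hypothetical, while the 100 ms timer period and the 10 ms aging threshold follow the values quoted in the text.

```python
# Stage-one sketch (assumption): drop empty-obstacle and stale messages, then
# take one head pair from the two buffer queues for fusion.
import time
from collections import deque
from typing import Optional, Tuple

T_G = 0.100    # fusion timer period t_g (s), per the text
TAU_G = 0.010  # aging threshold tau_g (s), per the text

def pop_synchronized(vehicle_queue: "deque[PerceptionMessage]",
                     v2x_queue: "deque[PerceptionMessage]",
                     now: Optional[float] = None
                     ) -> Optional[Tuple["PerceptionMessage", "PerceptionMessage"]]:
    """Return one timely (vehicle-end, vehicle-road) message pair, or None."""
    now = time.time() if now is None else now
    for q in (vehicle_queue, v2x_queue):
        # Remove messages with no target state information (empty obstacles)
        # and messages whose timestamp lags the current time by more than tau_g.
        while q and (not q[0].targets or now - q[0].timestamp > TAU_G):
            q.popleft()
    if vehicle_queue and v2x_queue:
        return vehicle_queue.popleft(), v2x_queue.popleft()
    return None  # nothing timely to fuse on this timer tick
```

A scheduler would call pop_synchronized once per T_G tick and pass any returned pair on to the spatial-synchronization stage.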
(3) In the second stage, information spatial synchronization is performed. The target-state data perceived by the vehicle end and by vehicle-road cooperation are expressed in different coordinate systems, so the coordinates are unified with reference to the WGS84 coordinate system, taking due north as the positive direction. The target position information in the vehicle-end and vehicle-road cooperative message sets is spatially synchronized: the latitude and longitude of each point are projected into this coordinate system, and the center-point coordinates of each target candidate box and the coordinates of its four vertices about the center point are obtained for the subsequent fusion computation. The origin of the coordinate system is obtained from high-precision map measurements. Taking the world coordinate system as the reference, the coordinate system of each target candidate box is spatially synchronized by a rotation transformation followed by a translation.
As shown in FIG. 2, point A is rotated by θ degrees about the origin O to point B. Let the coordinates of point A be $(x_A, y_A)$ and those of point B be $(x_B, y_B)$, let the distance from the origin to point A be $r$, and let the angle between OA and the x-axis be $\varphi$. Then

$$x_A = r\cos\varphi, \qquad y_A = r\sin\varphi,$$

and

$$x_B = r\cos(\varphi + \theta), \qquad y_B = r\sin(\varphi + \theta).$$

Expanding by the trigonometric addition formulas and substituting $x_A$, $y_A$ gives

$$x_B = x_A\cos\theta - y_A\sin\theta, \qquad y_B = x_A\sin\theta + y_A\cos\theta,$$

which in matrix form is

$$\begin{pmatrix} x_B \\ y_B \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x_A \\ y_A \end{pmatrix}.$$
based on the description of the rotation matrix, when the spatial synchronization is performed, the center point of the target candidate frame is taken as the rotation center point, the coordinate system of the target candidate frame is rotated to the same direction as the world coordinate system, the coordinate of the center point of the target candidate frame under the world coordinate system is obtained according to the previous process (in the text, the coordinate system of WGS84 is selected to be referenced, the north direction is taken as the positive direction unified coordinate, the longitude and latitude of the point are projected into the coordinate system, the distance d and the offset angle c between the center point of the sensing target and the origin of the coordinate system under the same coordinate system are obtained, the center point coordinate of each target candidate frame under the coordinate system is obtained based on the distance d, the offset angle c and the length h and the width w of the sensing target, and the four vertex coordinates under the world coordinate system are obtained by translation.
In this scheme, the origin of the box coordinate system is the center point of the target candidate box, the rotation angle α is the heading-angle value of the box, and the point coordinates (x, y) are the four vertex coordinates of the box, whose values depend on the length l and width w of the perceived target. The spatially synchronized (rotated) coordinates, in clockwise order with the upper-left vertex as the first vertex, are therefore

$$\begin{pmatrix} x_n' \\ y_n' \end{pmatrix} = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix} \begin{pmatrix} x_n \\ y_n \end{pmatrix}, \qquad n \in \{1, 2, 3, 4\},$$

with $(x_1, y_1) = (-\tfrac{w}{2}, \tfrac{l}{2})$, $(x_2, y_2) = (\tfrac{w}{2}, \tfrac{l}{2})$, $(x_3, y_3) = (\tfrac{w}{2}, -\tfrac{l}{2})$, and $(x_4, y_4) = (-\tfrac{w}{2}, -\tfrac{l}{2})$.
after the coordinate system of the target candidate frame is rotated to the world coordinate system, translation is carried out according to the coordinate of the central point of the target candidate frame under the world coordinate system, so that each vertex coordinate is obtainedn.epsilon.1, 2,3,4 all have:
wherein, the liquid crystal display device comprises a liquid crystal display device,representing vertex coordinates in the final world coordinate system, +.>Representing the rotated vertex coordinates, < >>Represented are the coordinates of the center point in the world coordinate system.
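The following Python sketch combines the rotation and translation above for one candidate box; the vertex order (upper-left first, clockwise) follows the text, while treating the heading as a counter-clockwise rotation angle in the world frame is an assumption.

```python
# Stage-two sketch: rotate the four box vertices about the box center by the
# heading angle, then translate them by the center's world coordinates.
import math
from typing import List, Tuple

def box_vertices_world(cx: float, cy: float, heading: float,
                       length: float, width: float) -> List[Tuple[float, float]]:
    """World-frame vertices of an l x w target candidate box centered at (cx, cy)."""
    # Vertices in the box frame: origin at the center, +y along the travel direction.
    local = [(-width / 2,  length / 2),   # upper left (first vertex)
             ( width / 2,  length / 2),   # upper right
             ( width / 2, -length / 2),   # lower right
             (-width / 2, -length / 2)]   # lower left
    cos_a, sin_a = math.cos(heading), math.sin(heading)
    world = []
    for x, y in local:
        xr = x * cos_a - y * sin_a        # rotation about the box center ...
        yr = x * sin_a + y * cos_a
        world.append((xr + cx, yr + cy))  # ... then translation to the world frame
    return world

# Example: a 4.5 m x 1.8 m vehicle at (10, 20) with a 90-degree heading.
print(box_vertices_world(10.0, 20.0, math.pi / 2, 4.5, 1.8))
```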
(4) In the third stage, to make the target positions in the fusion result more accurate, Kalman-filter processing is applied separately to the vehicle-end information and to the vehicle-road cooperative information before fusion. To estimate the state information of a target, a state representation of the target is first established; here the state is $\theta = (p, v)$, where $p$ is the position of the perceived target and $v$ is its speed at that time.
The distance travelled by the perceived target within time $t$ can be expressed as

$$L_x = v_x t + \tfrac{1}{2} a t^2, \qquad L_y = v_y t + \tfrac{1}{2} a t^2,$$

where $L_x, L_y$ are the distances the target moves in the x- and y-directions within time $t$, $v_x, v_y$ are the target's movement speeds in the x- and y-directions, and $a$ is the target's acceleration.

Therefore

$$x_{k+t} = x_k + v_{x,k} t + \tfrac{1}{2} a t^2, \qquad y_{k+t} = y_k + v_{y,k} t + \tfrac{1}{2} a t^2,$$

$$v_{x,k+t} = v_{x,k} + a t, \qquad v_{y,k+t} = v_{y,k} + a t,$$

where $x_k$, $y_k$ give the position of the target at time $k$, $x_{k+t}$, $y_{k+t}$ its position at time $k+t$, and $v_{x,k+t}$, $v_{y,k+t}$ its speed at time $k+t$.

The state vector of the perceived target is expressed as

$$\theta = (p_x, p_y, v_x, v_y)^T.$$

The state model of the perceived target is

$$\theta_{k+t} = A_k \theta_k + B u_k,$$

where $A_k$ is the system state-transition matrix and $B u_k$ the system error term, namely

$$\begin{pmatrix} x_{k+t} \\ y_{k+t} \\ v_{x,k+t} \\ v_{y,k+t} \end{pmatrix} = \begin{pmatrix} 1 & 0 & t & 0 \\ 0 & 1 & 0 & t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_k \\ y_k \\ v_{x,k} \\ v_{y,k} \end{pmatrix} + B u_k.$$

The motion of the target contains a certain process noise, and $B u_k$ is the process noise of this model; since the acceleration of the motion is unknown,

$$B u_k = \begin{pmatrix} \tfrac{1}{2} t^2 \\ \tfrac{1}{2} t^2 \\ t \\ t \end{pmatrix} a.$$

The covariance matrix of this target model (with acceleration variance $\sigma_a^2$) is

$$Q = B B^T \sigma_a^2.$$

The observation-model expression is therefore

$$Z_k = C_k \theta_k + v_k,$$

where $C_k = I_{4\times4}$ and $v_k$ is the observation noise; the observation-noise covariance matrix $R$, obtainable from the perception data, is

$$R = \mathrm{diag}\!\left(\sigma_{p_x}^2, \sigma_{p_y}^2, \sigma_{v_x}^2, \sigma_{v_y}^2\right),$$

whose entries are the variances of the target's position and speed components. The prediction and update of the perceived target at time $k$ using Kalman filtering are then obtained as follows: the prediction is

$$\hat{\theta}_{k|k-1} = A_k \hat{\theta}_{k-1} + B u_k,$$

the prediction error is

$$P_{k|k-1} = A_k P_{k-1} A_k^T + Q,$$

the Kalman gain is

$$K_k = P_{k|k-1} C_k^T \left( C_k P_{k|k-1} C_k^T + R \right)^{-1},$$

and the updated optimal estimated state is therefore

$$\hat{\theta}_k = \hat{\theta}_{k|k-1} + K_k \left( Z_k - C_k \hat{\theta}_{k|k-1} \right), \qquad P_k = (I - K_k C_k) P_{k|k-1}.$$
according to the steps, the vehicle-mounted sensing equipment and the vehicle-road cooperative sensing equipment perform Kalman filtering calculation on the sensing targets, and then perform vehicle-road sensing cooperative fusion calculation.
(5) In the fourth stage, the credibility of the environment information perceived by the on-board perception devices (various radars and cameras), of the information broadcast by other vehicles' OBUs and received by the ego OBU, and of the fused environment information perceived by the roadside perception devices is determined based on the Analytic Hierarchy Process (AHP). Because this scheme fuses the perception information of each on-board sensor with the information the OBU obtains from other vehicles' OBU broadcasts and with the fusion-result information of the roadside sensors, the on-board OBU can be regarded as one sensor and the roadside sensors as one combined sensor.
Assume that at time $W$ the vehicle-road collaborative post-perception fusion system has $p$ sensors and $q$ targets, and that the measurement of target $j$ by sensor $i$ is $m_{ij}$ $(i = 1, 2, \dots, p;\ j = 1, 2, \dots, q)$. With the credibility of sensor $i$ determined as $a_i$ $(i = 1, 2, \dots, p)$, the weighted-average fusion algorithm gives the fusion result for target $j$ as

$$m_j = \sum_{i=1}^{p} a_i m_{ij},$$

where the $a_i$ satisfy $\sum_{i=1}^{p} a_i = 1$, and $a_i$ $(i = 1, 2, \dots, p)$ is the credibility of each sensor of the system that must be determined at a given moment. The $p$ sensors in the system measure the $q$ targets, i.e. $p \times q$ measurements in total; these are divided into $q$ groups, with the measurements of the same target in one group. For target $j$, the average of the sensor measurements is

$$\bar{m}_j = \frac{1}{p} \sum_{i=1}^{p} m_{ij},$$

where $m_{ij}$ is the measurement of target $j$ by the $i$-th sensor.

The ratio of the measurement credibilities of two sensors $s$ and $t$ for target $j$ is defined as

$$d_{st}^{j} = \frac{\left| m_{tj} - \bar{m}_j \right|}{\left| m_{sj} - \bar{m}_j \right|},$$

so that for target $j$ the credibility judgment matrix of the sensors is

$$D_j = \left( d_{st}^{j} \right)_{p \times p}.$$

This matrix is a positive reciprocal matrix whose diagonal elements are 1. The closer a sensor's measurement lies to the average $\bar{m}_j$, the higher its credibility ratio relative to the other sensors, the larger the corresponding elements in the judgment matrix, and the greater that sensor's credibility for target $j$. The credibility is solved from the judgment matrix in the following steps:

(1) normalize each column of the judgment matrix $D_j$ to obtain the matrix $\tilde{D}_j$, whose elements are

$$\tilde{d}_{rs}^{j} = \frac{d_{rs}^{j}}{\sum_{u=1}^{p} d_{us}^{j}};$$

(2) sum the column-normalized judgment matrix $\tilde{D}_j$ by rows to obtain the transition credibility vector $\tilde{W}_j$, whose elements are

$$\tilde{w}_{r}^{j} = \sum_{s=1}^{p} \tilde{d}_{rs}^{j};$$

(3) normalize the transition credibility vector $\tilde{W}_j$ to obtain the credibility vector $W_j$, whose element

$$w_{r}^{j} = \frac{\tilde{w}_{r}^{j}}{\sum_{u=1}^{p} \tilde{w}_{u}^{j}}$$

represents the credibility of the $r$-th sensor for target $j$.
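The three steps above reduce to a few lines of numpy, sketched below for a single scalar measurement per sensor; the small epsilon guarding the division is an implementation assumption for the case where a measurement coincides exactly with the mean.

```python
# Stage-four sketch: AHP credibility weights and weighted-average fusion for
# one perceived target j, given one scalar measurement per sensor.
import numpy as np

def ahp_fuse(measurements: np.ndarray, eps: float = 1e-9):
    """measurements: shape (p,), the p sensors' measurements of target j."""
    m_bar = measurements.mean()                   # average measurement
    dist = np.abs(measurements - m_bar) + eps     # each sensor's distance to it
    # Judgment matrix D_j: element (s, t) = |m_t - m_bar| / |m_s - m_bar|,
    # a reciprocal matrix with unit diagonal.
    D = dist[np.newaxis, :] / dist[:, np.newaxis]
    D_tilde = D / D.sum(axis=0, keepdims=True)    # (1) normalize each column
    w_tilde = D_tilde.sum(axis=1)                 # (2) row sums: transition vector
    w = w_tilde / w_tilde.sum()                   # (3) normalize: credibility vector W_j
    fused = float(w @ measurements)               # weighted-average fusion result m_j
    return w, fused

# Example: three sensors measure the same position component of target j.
weights, fused = ahp_fuse(np.array([10.2, 10.0, 11.5]))
print(weights, fused)   # the sensor closest to the mean gets the largest weight
```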
The above-described perception fusion process is illustrated in FIG. 3.
The foregoing description is only an embodiment of the present invention and is not intended to limit the scope of the invention; all equivalent structures or equivalent process transformations, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A vehicle-road collaborative perception fusion method, characterized by comprising the following steps,
S1, acquiring a vehicle-end perception message through the on-board perception devices of the ego vehicle, acquiring a vehicle-road collaborative perception message through the on-board perception devices of other vehicles and roadside perception devices, and placing the vehicle-end perception message and the vehicle-road collaborative perception message, in time order, into a vehicle-end perception message buffer queue and a vehicle-road collaborative perception message buffer queue, respectively;
S2, acquiring, at a certain frequency, the corresponding vehicle-end perception message and vehicle-road collaborative perception message from the vehicle-end perception message buffer queue and the vehicle-road collaborative perception message buffer queue;
S3, synchronizing the perceived-target coordinate information in the vehicle-end perception message and the vehicle-road collaborative perception message acquired in S2 into the same coordinate system;
S4, performing Kalman filtering on the perceived targets of the vehicle-end perception message and the vehicle-road collaborative perception message after coordinate synchronization;
and S5, fusing, by the analytic hierarchy process, the measurement values obtained by the different sensors in the vehicle-end perception message and the vehicle-road collaborative perception message.
2. The vehicle-road collaborative perception fusion method according to claim 1, characterized in that: the vehicle-end perception message and the vehicle-road collaborative perception message contain perceived-target coordinate information and a timestamp; the vehicle-end perception message is obtained through the ego vehicle's on-board sensors, and the vehicle-road collaborative perception message is obtained through other vehicles' on-board sensors and sensors arranged at the roadside.
3. The vehicle-road collaborative perception fusion method according to claim 2, characterized in that: the vehicle-end perception message buffer queue and the vehicle-road collaborative perception message buffer queue filter the buffered messages in real time; specifically, messages containing no perceived-target information are removed from the queues, the timestamps of corresponding vehicle-end and vehicle-road collaborative perception messages are differenced, and message pairs whose time difference exceeds a set time threshold are removed.
4. The vehicle-road collaborative perception fusion method according to claim 1, characterized in that: a perceived target in S3 is represented as a target candidate box, the perceived-target coordinate information comprising the coordinates of the candidate box's center point in the world coordinate system and the length l and width w of the candidate box, and the synchronization process is specifically as follows,
first, determining the coordinates of the four vertices of the target candidate box in the candidate box's own coordinate system, whose origin is the box's center point, whose positive x-axis is horizontal, perpendicular to the box's direction of travel and pointing to its left, and whose positive y-axis is the direction of travel;
then, rotating the four vertex coordinates of the target candidate box about the box's center point by the box's heading angle, so that the axes of the box coordinate system become parallel to the axes of the world coordinate system;
and finally, translating the four vertices of the target candidate box together with the box's center point so that the origin of the box coordinate system coincides with the origin of the world coordinate system.
5. The vehicle-road collaborative perception fusion method according to claim 4, characterized in that: the Kalman filtering in S4 is specifically as follows,
representing the state of a perceived target as $\theta = (p, v)$, where $p$ is the position of the perceived target and $v$ is its speed;
expressing the distance travelled by the perceived target within time $t$ as

$$L_x = v_x t + \tfrac{1}{2} a t^2, \qquad L_y = v_y t + \tfrac{1}{2} a t^2,$$

where $L_x, L_y$ are the distances the target moves in the x-axis and y-axis directions within time $t$, $v_x, v_y$ are the target's movement speeds in the x-axis and y-axis directions, and $a$ is the target's acceleration;
so that

$$x_{k+t} = x_k + v_{x,k} t + \tfrac{1}{2} a t^2, \qquad y_{k+t} = y_k + v_{y,k} t + \tfrac{1}{2} a t^2,$$

$$v_{x,k+t} = v_{x,k} + a t, \qquad v_{y,k+t} = v_{y,k} + a t,$$

where $x_k$, $y_k$ are the x-axis and y-axis coordinates of the target at time $k$, $x_{k+t}$, $y_{k+t}$ are its coordinates at time $k+t$, and $v_{x,k+t}$, $v_{y,k+t}$ are its movement speeds in the x-axis and y-axis directions at time $k+t$;
expressing the state of the perceived target in vector form as

$$\theta = (p_x, p_y, v_x, v_y)^T,$$

where $p_x$, $p_y$ are the components of $p$ in the x-axis and y-axis directions;
establishing the state model of the perceived target as

$$\theta_{k+t} = A_k \theta_k + B u_k,$$

where $\theta_{k+t}$ is the target state at time $k+t$, $\theta_k$ the state at time $k$, $A_k$ the system state-transition matrix, and $B u_k$ the system error term, so that the model is expressed as

$$\begin{pmatrix} x_{k+t} \\ y_{k+t} \\ v_{x,k+t} \\ v_{y,k+t} \end{pmatrix} = \begin{pmatrix} 1 & 0 & t & 0 \\ 0 & 1 & 0 & t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_k \\ y_k \\ v_{x,k} \\ v_{y,k} \end{pmatrix} + B u_k, \qquad B u_k = \begin{pmatrix} \tfrac{1}{2} t^2 \\ \tfrac{1}{2} t^2 \\ t \\ t \end{pmatrix} a;$$

the covariance matrix of the state model of the perceived target being

$$Q = B B^T \sigma_a^2;$$

expressing the observation model of the perceived target as

$$Z_k = C_k \theta_k + v_k,$$

where $C_k = I_{4\times4}$ and $v_k$ is the observation noise of the model, with observation-noise covariance matrix

$$R = \mathrm{diag}\!\left(\sigma_{p_x}^2, \sigma_{p_y}^2, \sigma_{v_x}^2, \sigma_{v_y}^2\right),$$

where $\sigma_{p_x}^2$, $\sigma_{p_y}^2$, $\sigma_{v_x}^2$, $\sigma_{v_y}^2$ are the variances of $p_x$, $p_y$, $v_x$, $v_y$ respectively;
expressing the Kalman-filter prediction of the perceived-target state as

$$\hat{\theta}_{k|k-1} = A_k \hat{\theta}_{k-1} + B u_k,$$

the prediction error as

$$P_{k|k-1} = A_k P_{k-1} A_k^T + Q,$$

the Kalman gain as

$$K_k = P_{k|k-1} C_k^T \left( C_k P_{k|k-1} C_k^T + R \right)^{-1},$$

and the best estimated state of the perceived target after the update at time $k$ as

$$\hat{\theta}_k = \hat{\theta}_{k|k-1} + K_k \left( Z_k - C_k \hat{\theta}_{k|k-1} \right).$$
6. The vehicle-road collaborative perception fusion method according to claim 1, characterized in that: the process of fusing the measurement values by the analytic hierarchy process in S5 is specifically as follows,
if at time $W$ there are $p$ sensors acquiring perception information and $q$ perceived targets, the measurement of perceived target $j$ by sensor $i$ being $m_{ij}$ $(i = 1, 2, \dots, p;\ j = 1, 2, \dots, q)$, and the credibility of sensor $i$ determined by a weighted-average fusion algorithm being $a_i$ $(i = 1, 2, \dots, p)$, then the fusion result for perceived target $j$ is

$$m_j = \sum_{i=1}^{p} a_i m_{ij},$$

where the $a_i$ satisfy $\sum_{i=1}^{p} a_i = 1$;
for perceived target $j$, the average of the measurements of all corresponding sensors is

$$\bar{m}_j = \frac{1}{p} \sum_{i=1}^{p} m_{ij};$$

the ratio of the credibility of sensor $s$ to that of sensor $t$ for perceived target $j$ is defined as

$$d_{st}^{j} = \frac{\left| m_{tj} - \bar{m}_j \right|}{\left| m_{sj} - \bar{m}_j \right|};$$

for perceived target $j$, the credibility judgment matrix of the sensors is

$$D_j = \left( d_{st}^{j} \right)_{p \times p};$$

each column of the matrix $D_j$ is then normalized to obtain the matrix $\tilde{D}_j$, whose elements are

$$\tilde{d}_{rs}^{j} = \frac{d_{rs}^{j}}{\sum_{u=1}^{p} d_{us}^{j}};$$

the matrix $\tilde{D}_j$ is summed by rows to obtain the transition credibility vector $\tilde{W}_j$, whose elements are

$$\tilde{w}_{r}^{j} = \sum_{s=1}^{p} \tilde{d}_{rs}^{j};$$

and $\tilde{W}_j$ is then normalized to obtain the credibility vector $W_j$, whose element

$$w_{r}^{j} = \frac{\tilde{w}_{r}^{j}}{\sum_{u=1}^{p} \tilde{w}_{u}^{j}}$$

represents the credibility of sensor $r$ for perceived target $j$.
7. A vehicle-road collaborative perception fusion device, characterized in that: it receives the vehicle-end perception message and the vehicle-road collaborative perception message and executes the vehicle-road collaborative perception fusion method according to any one of claims 1-6.
8. A computer-readable storage medium having a computer program stored thereon, characterized in that: the computer program, when executed by a processor, implements the steps of the vehicle-road collaborative perception fusion method according to any one of claims 1-6.
9. An automobile, characterized in that: it is provided with the vehicle-road collaborative perception fusion device according to claim 7.
10. A vehicle-road collaborative perception fusion system, characterized by comprising,
an on-board perception device for acquiring the vehicle-end perception message;
a roadside perception device for acquiring roadside perception information;
a roadside communication unit for sending the roadside perception information to the on-board OBUs;
an on-board OBU, which forwards the ego vehicle's vehicle-end perception message to the perception fusion unit and sends it to the OBUs of other vehicles, and which meanwhile receives the roadside perception information from the roadside communication unit and the vehicle-end perception messages of other vehicles sent by their OBUs, forming the vehicle-road collaborative perception message;
and a perception fusion unit arranged on each vehicle, which receives the vehicle-end perception message and the vehicle-road collaborative perception message sent by the vehicle's OBU and executes the steps of the vehicle-road collaborative perception fusion method according to any one of claims 1-6.
CN202310588105.2A 2023-05-23 2023-05-23 Vehicle-road collaborative perception fusion method and system Pending CN116853284A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310588105.2A CN116853284A (en) 2023-05-23 2023-05-23 Vehicle-road collaborative perception fusion method and system

Publications (1)

Publication Number Publication Date
CN116853284A 2023-10-10

Family

ID=88217821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310588105.2A Pending CN116853284A (en) 2023-05-23 2023-05-23 Vehicle-road collaborative perception fusion method and system

Country Status (1)

Country Link
CN (1) CN116853284A (en)

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination