CN117529935A - Vehicle-road cooperative communication and data processing method, detection system and fusion device


Info

Publication number
CN117529935A
Authority
CN
China
Prior art keywords: data, value, covariance, target, state
Legal status: Pending
Application number
CN202180099647.2A
Other languages
Chinese (zh)
Inventor
胡滨
勾鹏琪
花文健
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Publication of CN117529935A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor


Abstract

A vehicle-road cooperative communication method, a data processing method, a detection system and a fusion device, applicable to fields such as automatic driving, intelligent driving and assisted driving. The method includes obtaining data from a detection system and sending the data to a fusion device, where the data includes one or more of a state process calibration value, a motion model, state information, a covariance estimate and a timestamp. The state process calibration value is used by the fusion device to optimize the covariance estimate, the motion model identifies the motion state of a target, the state information identifies the motion features of the target, the covariance estimate identifies the error between the state information and the actual state information, and the timestamp identifies the moment at which the detection system sends the data. Because the covariance estimate can be optimized by means of the state process calibration value, the measurement error of the detection system can be characterized more accurately, which in turn improves the accuracy of the first fusion data.

Description

Vehicle-road cooperative communication and data processing method, detection system and fusion device
Technical Field
This application relates to the technical field of vehicle-road cooperation, and in particular to a vehicle-road cooperative communication method, a data processing method, a detection system and a fusion device.
Background
Intelligent connected vehicles, as products that integrate multiple fields such as artificial intelligence, intelligent transportation and electronic communication, have broad application prospects. An intelligent connected vehicle is a specific application of the intelligent vehicle in an internet-of-vehicles environment. It is usually equipped with sensing devices such as cameras, laser radars and inertial measurement units (IMUs) to perceive information inside and outside the vehicle, and can exchange information between the vehicle side and the road side based on communication technologies.
Vehicle-side sensing relies on on-board sensors that perceive the area around the vehicle body, so its sensing range is limited; road-side sensing relies on multiple stations and multiple sensors and can therefore cover a wider observation space. Through vehicle-road cooperation, the data sensed at the road side and the data sensed at the vehicle side can be fused, improving the vehicle's ability to perceive its surroundings. Vehicle-road cooperation refers to a road traffic system that uses technologies such as wireless communication and the new-generation internet to exchange dynamic real-time vehicle-to-vehicle and vehicle-to-road information in all directions, and, on the basis of acquiring and fusing full-time-space dynamic traffic information, performs active vehicle safety control and cooperative road management, so as to achieve effective cooperation among people, vehicles and roads, ensure traffic safety and improve traffic efficiency, thereby forming a safe, efficient and environmentally friendly road traffic system.
In vehicle-road cooperative data fusion, the data perceived at the road side and the data perceived at the vehicle side both need to be sent to a fusion device. The fusion device is usually located at the vehicle side, the road side performs scheduling and buffering during data processing, and the road side transmits its perceived data to the fusion device over an air interface. As a result, the data transmitted by the road side is subject to delay jitter and cannot reach the fusion device continuously at the expected times, so the data provided by the road side and the data provided by the vehicle side may not belong to the same fusion period, making the data fused by the fusion device inaccurate.
Disclosure of Invention
The application provides a vehicle-road cooperative communication and data processing method, a detection system and a fusion device, which are used for improving the accuracy of data fused by the fusion device as much as possible.
In a first aspect, the present application provides a vehicle-road cooperative communication method. The method includes obtaining data from a detection system and sending the data to a fusion device, where the data includes one or more of a state process calibration value, a motion model, state information, a covariance estimate and a timestamp; the state process calibration value is used for optimizing the covariance estimate, the motion model is used for identifying the motion state of a target, the state information is used for identifying the motion features of the target, the covariance estimate is used for identifying the error between the state information and the actual state information, and the timestamp is used for identifying the moment at which the detection system sends the data.
Based on this solution, the first state process calibration value and the first motion model are sent to the fusion device, so that the fusion device can optimize the first covariance estimate based on the first state process calibration value and thereby characterize the measurement error of the detection system more accurately. Further, the fusion device can determine a first confidence coefficient according to the first state process calibration value and the first motion model, which further improves the accuracy of the first data determined by the fusion device and, in turn, the accuracy of the first fusion data output by the fusion device.
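For illustration only, the sketch below lists, as a Python data structure, the items that such data may carry according to the first aspect; all field names and types are assumptions made for this example, not the patent's actual message format.

```python
# Illustrative only: assumed field names for the data sent by a detection system
# to the fusion device, mirroring the items listed in the first aspect.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DetectionData:
    state_process_calibration: Optional[float] = None        # used by the fusion device to optimize the covariance estimate
    motion_model: Optional[str] = None                        # e.g. "linear", "left" or "right"; identifies the target's motion state
    state_info: Optional[List[float]] = None                  # motion features of the target, e.g. position and velocity
    covariance_estimate: Optional[List[List[float]]] = None   # error between state_info and the actual state information
    timestamp: Optional[float] = None                         # moment at which the detection system sends the data
```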
In one possible implementation, the target includes a cooperative target. The method includes obtaining the actual position, at a preset moment, of the cooperative target conforming to the motion model and the estimated position of the cooperative target at the preset moment, and determining the estimated position of the cooperative target after m steps according to the estimated position and the motion model, where m is a positive integer; further, an estimation error corresponding to the m steps is determined according to the actual position and the estimated position after the m steps, and the state process calibration value corresponding to the motion model is then obtained according to the estimation error.
By obtaining the state process calibration value over m steps, the measurement error of the detection system can be characterized accurately, thereby improving the accuracy of the first data.
In one possible implementation, the method may include obtaining the n-m estimation errors produced in each of L cycles, where n is an integer greater than 1 and L is a positive integer, and determining the variance of the L×(n-m) estimation errors obtained over the L cycles as the state process calibration value corresponding to the motion model.
When L is greater than 1, this helps improve the accuracy of the state process calibration value.
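As a non-authoritative sketch of the calibration procedure described in these implementations, the following Python function accumulates the m-step estimation errors over L cycles and returns their variance; the helper predict_m_steps stands in for the model-specific m-step prediction, and all names (and the scalar-position simplification) are assumptions.

```python
# Hedged sketch: propagate each estimated position m steps ahead with the motion model,
# compare with the actual position m steps later, and take the variance of all
# L*(n-m) errors as the state process calibration value (scalar positions assumed).
import numpy as np

def state_process_calibration(actual, estimated, m, predict_m_steps):
    # actual, estimated: L cycles, each a list of n positions at moments k_1..k_n
    errors = []
    for x_act, x_est in zip(actual, estimated):      # L cycles
        n = len(x_act)
        for k in range(n - m):                       # n - m errors per cycle
            x_pred = predict_m_steps(x_est[k], m)    # estimated position after m steps
            errors.append(x_act[k + m] - x_pred)     # m-step estimation error
    return float(np.var(errors))                     # calibration value for this motion model
```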
In one possible implementation, the motion model includes any one of a linear motion model, a left motion model, or a right motion model.
In one possible implementation, the target further includes a random target, and the method further includes obtaining sampled data of the random target and determining the state information based on the sampled data.
In one possible implementation, the method may include transmitting an electromagnetic wave signal to a detection area, receiving an echo signal from the detection area, and determining the state information based on the echo signal; the detection area includes the random target, and the echo signal is obtained after the electromagnetic wave signal is reflected by the random target.
In a second aspect, the application provides a data processing method for vehicle-road cooperation, which includes obtaining first data from a road side detection system and second data from the vehicle side detection system, and obtaining first fusion data of a random target according to the first data and the second data. The first data comprises one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimation value and a first timestamp, wherein the first state process calibration value is used for optimizing the first covariance estimation value, the first motion model is used for identifying a first motion state of a target, the first state information is used for identifying a first motion feature of the target, the first covariance estimation value is used for identifying an error between the first state information and first actual state information, and the first timestamp is used for identifying a first moment when the road side detection system transmits the first data; the second data includes a second state process calibration value, a second motion model, second state information, a second covariance estimate, and a second timestamp, the second state process calibration value is used for optimizing the second covariance estimate, the second motion model is used for identifying a second motion state of the target, the second state information is used for identifying a second motion feature of the target, the second covariance estimate is used for identifying an error between the second state information and second actual state information, and the second timestamp is used for identifying a second time at which the vehicle-side detection system transmits the second data.
Based on the scheme, the fusion device can receive a first state process calibration value and a first motion model from the road side fusion module, and the first state process calibration value can optimize a first covariance estimation value so as to more accurately characterize a measurement error; further, the fusion device also receives a second state process calibration value and a second motion model from the vehicle-side fusion module, and the second state process calibration value can optimize the second covariance estimation value, so that the measurement error can be more accurately represented, and the accuracy of the obtained first fusion data of the random target can be improved.
In one possible implementation, the method may include: obtaining a first state predicted value and a first covariance predicted value of the random target at the fusion moment according to the first timestamp, the first state process calibration value, the first state information and the first covariance estimated value; obtaining a second state predicted value and a second covariance predicted value of the random target at the fusion moment according to the second timestamp, the second state process calibration value, the second state information and the second motion model; predicting a third state predicted value and a third covariance predicted value of the target according to the first motion model and the second fusion data of the previous frame of the target; predicting a fourth state predicted value and a fourth covariance predicted value of the target according to the second motion model and the second fusion data of the previous frame of the random target; obtaining a first filtering estimated value and a first filtering covariance estimated value according to the first state predicted value, the first covariance predicted value, the third state predicted value and the third covariance predicted value; obtaining a second filtering estimated value and a second filtering covariance estimated value according to the second state predicted value, the second covariance predicted value, the fourth state predicted value and the fourth covariance predicted value; obtaining a first confidence coefficient of the first data at the fusion moment according to the first timestamp, the first state process calibration value, the first motion model and the first covariance estimated value; obtaining a second confidence coefficient of the second data at the fusion moment according to the second timestamp, the second state process calibration value, the second motion model and the second covariance estimated value; and obtaining the first fusion data according to the first confidence coefficient, the second confidence coefficient, the first filtering estimated value, the first filtering covariance estimated value, the second filtering estimated value and the second filtering covariance estimated value.
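As an illustration of the first of these steps, the sketch below shows one plausible way a source's state and covariance could be extrapolated to the fusion moment; the constant-velocity transition, the form of the process-noise matrix and all names are assumptions for illustration, not the patent's exact formulation. The state process calibration value q enters as the process-noise intensity, which is one standard way to "optimize" the predicted covariance.

```python
# Illustrative sketch only (assumed constant-velocity model and names): extrapolate a
# source's state x and covariance P from its timestamp to the fusion moment, using the
# state process calibration value q as process-noise intensity.
import numpy as np

def predict_to_fusion_time(x, P, q, t_stamp, t_fusion):
    dt = t_fusion - t_stamp
    F = np.array([[1.0, dt],                      # assumed constant-velocity state transition
                  [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],     # process noise built from the calibration value q
                      [dt**2 / 2, dt]])
    x_pred = F @ x                                # state predicted value at the fusion moment
    P_pred = F @ P @ F.T + Q                      # covariance predicted value, adjusted by q
    return x_pred, P_pred
```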
Further optionally, when the first confidence coefficient meets the first preset confidence coefficient and the second confidence coefficient does not meet the second preset confidence coefficient, obtaining first fusion data according to the first filtering estimation value and the first filtering covariance estimation value; when the second confidence coefficient meets the second preset confidence coefficient and the first confidence coefficient does not meet the first preset confidence coefficient, obtaining first fusion data according to the second filtering estimated value and the second filtering covariance estimated value; when the first confidence coefficient meets the first preset confidence coefficient and the second confidence coefficient meets the second preset confidence coefficient, obtaining first fusion data according to the first filtering estimated value, the first filtering covariance estimated value, the second filtering estimated value and the second filtering covariance estimated value; and when the first confidence coefficient does not meet the first preset confidence coefficient and the second confidence coefficient does not meet the second preset confidence coefficient, acquiring first fusion data according to the second fusion data of the previous frame.
Because, before the first data and the second data are fused, the confidence of the data from the different sources is evaluated through the first confidence coefficient corresponding to the first data and the second confidence coefficient corresponding to the second data, the quality of the resulting first fusion data can be improved.
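A rough sketch of this confidence gate follows; the threshold comparison and the inverse-covariance-weighted combination used when both sources are trusted are illustrative assumptions, and all names are hypothetical.

```python
# Hedged sketch of the confidence gate: select which filtering estimates enter the first
# fusion data depending on whether each source's confidence meets its preset threshold.
import numpy as np

def gate_and_fuse(c1, thr1, x1, P1, c2, thr2, x2, P2, prev_fused):
    ok1, ok2 = c1 >= thr1, c2 >= thr2
    if ok1 and not ok2:
        return x1, P1                        # only the road side branch is trusted
    if ok2 and not ok1:
        return x2, P2                        # only the vehicle side branch is trusted
    if ok1 and ok2:
        P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
        P = np.linalg.inv(P1i + P2i)         # combine both filtering estimates
        x = P @ (P1i @ x1 + P2i @ x2)        # (assumed inverse-covariance weighting)
        return x, P
    return prev_fused                        # otherwise fall back to the previous frame's fused data
```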
In a third aspect, the present application provides a detection system for implementing any one of the above first aspect or the method of the first aspect, including corresponding functional modules for implementing the steps in the above method, respectively. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one possible implementation, the detection system may be a roadside detection system or a vehicle side detection system in a vehicle-road cooperative communication system, or a module, such as a chip or a chip system or a circuit, that may be used in the roadside detection system or the vehicle side detection system. The advantages can be seen from the description of the first aspect, and are not repeated here. The detection system may include: a transceiver and at least one processor. The processor may be configured to support the detection system to perform the corresponding functions shown above, the transceiver being used to support communication between the detection system and a fusion device or other detection system or the like. The transceiver may be a stand-alone receiver, a stand-alone transmitter, a transceiver with integrated transceiver functions, or an interface circuit. Optionally, the detection system may further comprise a memory, which may be coupled to the processor, which holds the program instructions and data necessary for the detection system.
The processor is used for acquiring data from the detection system, wherein the data comprises one or more of a state process calibration value, a motion model, state information, a covariance estimation value and a time stamp; the state process calibration value is used for optimizing a covariance estimation value, the motion model is used for identifying the motion state of the target, the state information is used for identifying the motion characteristic of the target, the covariance estimation value is used for identifying the error between the state information and the actual state information, and the time stamp is used for identifying the moment when the detection system sends data; the transceiver is configured to transmit data to the fusion device.
In one possible implementation, the targets include collaborative targets; the processor is specifically used for acquiring the actual position of the cooperative target conforming to the motion model at the preset moment and the estimated position of the cooperative target at the preset moment; according to the estimated position and the motion model, determining the estimated position of the cooperative target after m steps, wherein m is a positive integer; determining an estimated error corresponding to the m steps according to the actual position and the estimated position after the m steps; and obtaining a state process calibration value corresponding to the motion model according to the estimation error.
In one possible implementation, the processor is specifically configured to: obtaining n-m estimated errors obtained in each cycle of L cycles, wherein n is an integer greater than 1, and L is a positive integer; and determining the variance of the L× (n-m) estimation errors obtained by L times of circulation as a state process calibration value corresponding to the motion model.
In one possible implementation, the motion model includes any one of a linear motion model, a left motion model, or a right motion model.
In one possible implementation, the target further includes a random target; the processor is further configured to: acquire sampled data of the random target; and determine the state information based on the sampled data.
In one possible implementation, the transceiver is specifically configured to: transmitting an electromagnetic wave signal to a detection area, the detection area comprising a random target; receiving an echo signal from a detection area, wherein the echo signal is obtained after an electromagnetic wave signal is reflected by a random target; the processor is specifically configured to: and determining state information according to the echo signals.
In a fourth aspect, the present application provides a fusion device for implementing any one of the methods of the second aspect or the second aspect, including corresponding functional modules for implementing the steps of the methods. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one possible implementation, the fusion device may be a fusion device in a vehicle-road cooperative communication system, or a module, such as a chip or a chip system or a circuit, that may be used in the fusion device. The advantages can be seen from the description of the second aspect, and are not repeated here. The fusion device may include: a transceiver and at least one processor. The processor may be configured to support the fusion device to perform the corresponding functions shown above, the transceiver being used to support communication between the fusion device and a detection system (e.g., a roadside detection system or a vehicle-side detection system), etc. The transceiver may be a stand-alone receiver, a stand-alone transmitter, a transceiver with integrated transceiver functions, or an interface circuit. Optionally, the fusion device may further comprise a memory, which may be coupled to the processor, which holds the program instructions and data necessary for the fusion device.
The transceiver is used for acquiring first data from the road side detection system and second data from the vehicle side detection system, the first data comprises one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimation value and a first timestamp, the first state process calibration value is used for optimizing the first covariance estimation value, the first motion model is used for identifying a first motion state of a target, the first state information is used for identifying a first motion feature of the target, the first covariance estimation value is used for identifying an error between the first state information and first actual state information, and the first timestamp is used for identifying a first moment when the road side detection system transmits the first data; the second data comprises one or more of a second state process calibration value, a second motion model, second state information, a second covariance estimate and a second timestamp, the second state process calibration value is used for optimizing the second covariance estimate, the second motion model is used for identifying a second motion state of the target, the second state information is used for identifying a second motion feature of the target, the second covariance estimate is used for identifying an error between the second state information and second actual state information, and the second timestamp is used for identifying a second moment when the vehicle-side detection system transmits the second data; the processor is used for obtaining first fusion data of the random target according to the first data and the second data.
In one possible implementation, the processor has a logic circuit for: obtaining a first state predicted value and a first covariance predicted value of the random target at the fusion moment according to the first timestamp, the first state process calibration value, the first state information and the first covariance estimated value; obtaining a second state predicted value and a second covariance predicted value of the random target at the fusion moment according to the second timestamp, the second state process calibration value, the second state information and the second motion model; predicting a third state predicted value and a third covariance predicted value of the target according to the first motion model and second fusion data of a frame before the target; predicting a fourth state predicted value and a fourth covariance predicted value of the target according to the second motion model and second fusion data of a frame previous to the random target; obtaining a first filtering estimated value and a first filtering covariance estimated value according to the first state predicted value, the first covariance predicted value, the third state predicted value and the third covariance predicted value; obtaining a second filtering estimated value and a second filtering covariance estimated value according to the second state predicted value, the second covariance predicted value, the fourth state predicted value and the fourth covariance predicted value; obtaining a first confidence coefficient of the first data at the fusion moment according to the first timestamp, the first state process calibration value, the first motion model and the first covariance estimation value; obtaining a second confidence coefficient of the second data at the fusion moment according to the second timestamp, the second state process calibration value, the second motion model and the second covariance estimation value; and obtaining first fusion data according to the first confidence coefficient, the second confidence coefficient, the first filtering estimation value, the first filtering covariance estimation value, the second filtering estimation value and the second filtering covariance estimation value.
In one possible implementation, the processor is specifically configured to: when the first confidence coefficient meets the first preset confidence coefficient and the second confidence coefficient does not meet the second preset confidence coefficient, obtaining first fusion data according to the first filtering estimated value and the first filtering covariance estimated value; when the second confidence coefficient meets the second preset confidence coefficient and the first confidence coefficient does not meet the first preset confidence coefficient, obtaining first fusion data according to the second filtering estimated value and the second filtering covariance estimated value; when the first confidence coefficient meets the first preset confidence coefficient and the second confidence coefficient meets the second preset confidence coefficient, obtaining first fusion data according to the first filtering estimated value, the first filtering covariance estimated value, the second filtering estimated value and the second filtering covariance estimated value; and when the first confidence coefficient does not meet the first preset confidence coefficient and the second confidence coefficient does not meet the second preset confidence coefficient, acquiring first fusion data according to the second fusion data of the previous frame.
In a fifth aspect, the present application provides a detection system for implementing any one of the above first aspect or the method of the first aspect, including corresponding functional modules for implementing the steps in the above method, respectively. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In a possible implementation manner, the detection system may be a road side detection system or a vehicle side detection system, and the detection system may include a processing module and a transceiver module, where the modules may perform corresponding functions of the road side detection system or the vehicle side detection system in the above method example, and detailed descriptions in the method example are omitted herein.
In a sixth aspect, the present application provides a fusion device for implementing any one of the methods of the second aspect or the second aspect, including corresponding functional modules for implementing the steps of the above methods, respectively. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In a possible implementation manner, the fusion device may include a processing module and a transceiver module, where the modules may perform corresponding functions of the fusion device in the foregoing method examples, and detailed descriptions in the method examples are specifically referred to herein and are not repeated herein.
In a seventh aspect, the present application provides a communication system including a detection system (e.g., a roadside detection system and a vehicle-side detection system) and a fusion device. Wherein the detection system may be adapted to perform any of the above-described first aspect or the method of the first aspect, and the fusion device may be adapted to perform any of the above-described second aspect or the method of the second aspect.
In an eighth aspect, the present application provides a vehicle comprising a vehicle side detection system and/or a fusion device. The vehicle side detection system may be adapted to perform the method of the first aspect or any possible implementation of the first aspect, and the fusion device may be adapted to perform the method of the second aspect or any possible implementation of the second aspect.
In a ninth aspect, the present application provides a computer readable storage medium having stored therein a computer program or instructions which, when executed by a processor, cause the detection system to perform the method of the first aspect or any possible implementation of the first aspect, or cause the fusion apparatus to perform the method of the second aspect or any possible implementation of the second aspect.
In a tenth aspect, the present application provides a computer program product comprising a computer program or instructions which, when executed by a processor, cause the detection system to perform the method of the first aspect or any of the possible implementations of the first aspect, or cause the fusion device to perform the method of the second aspect or any of the possible implementations of the second aspect.
In an eleventh aspect, the present application provides a chip comprising a processor coupled to a memory for executing a computer program or instructions stored in the memory, such that the chip implements any one of the first or second aspects, and any possible implementation of any one of the aspects.
Drawings
Fig. 1a is a schematic diagram of a communication system architecture provided in the present application;
FIG. 1b is a schematic diagram of another communication system architecture provided herein;
fig. 2 is a schematic diagram of a radar detection target provided in the present application;
fig. 3 is a method flow diagram of a vehicle-road cooperative communication method provided in the present application;
FIG. 4 is a schematic flow chart of a method for obtaining a calibration value of a first state process and a first motion model according to the present application;
FIG. 5 is a schematic flow chart of a method, provided in the present application, in which a road side fusion module obtains first actual positions of a cooperative target at n different first moments;
fig. 6a is a schematic flow chart of a method for acquiring first state information by a road side detection system provided in the present application;
FIG. 6b is a flowchart illustrating a method for acquiring first status information by another roadside detection system provided in the present application;
FIG. 7a is a flowchart illustrating a method for acquiring first state information of a target by a roadside detection system provided in the present application;
FIG. 7b is a flowchart illustrating another method for acquiring first state information of a target by a roadside detection system provided in the present application;
fig. 8 is a schematic flow chart of a method for processing data of vehicle-road cooperation provided in the present application;
FIG. 9 is a flowchart of a method for determining first fusion data of a random target according to the present application;
FIG. 10 is a flowchart of a method for obtaining first fusion data based on a first confidence and a second confidence provided in the present application;
FIG. 11 is a schematic structural diagram of a detection system provided herein;
FIG. 12 is a schematic view of a fusion device according to the present disclosure;
FIG. 13 is a schematic diagram of a detection system provided herein;
fig. 14 is a schematic structural view of a fusion device provided in the present application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Hereinafter, some terms in the present application will be explained. It should be noted that these explanations are for the convenience of those skilled in the art, and do not limit the scope of protection claimed in the present application.
1. Cooperative targets
A cooperative target generally refers to a detected target whose real position information can be obtained not only by direct sensor measurement but also through other cooperation channels. For example, the position of a fixed target may be obtained in advance; for another example, a cooperative target may report its current position information wirelessly, or its current position information may be measured by a measuring instrument.
2. Non-cooperative targets (or referred to as random targets)
A non-cooperative target generally refers to a detected target whose real position information can only be obtained by direct sensor measurement, with no other technical means available.
3. Covariance (covariance)
Covariance is used to measure the overall error of two variables. In this application, the two variables may be, for example, a predicted value and an actual value. The covariance is usually a matrix and typically serves as an intermediate parameter.
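A small numerical illustration of this definition (example values only, not data from this application) is given below.

```python
# Covariance between a set of predicted values and the corresponding actual values
# (illustrative numbers only); np.cov returns the 2x2 covariance matrix of the two variables.
import numpy as np

predicted = np.array([1.0, 2.1, 2.9, 4.2])
actual = np.array([1.0, 2.0, 3.0, 4.0])
print(np.cov(predicted, actual))
```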
4. Kalman Filter (KF)
Kalman filtering is a highly efficient recursive filter (or autoregressive filter) that can estimate the state of a dynamic system from a series of incomplete measurements containing noise. Based on the values of each measured quantity at different times, Kalman filtering takes their joint distribution at those times into consideration and then produces an estimate of the unknown variable, which is more accurate than an estimate based on a single measurement.
It can also be understood that Kalman filtering is essentially a data fusion algorithm: it fuses data that come from different sensors, may have different units, and serve the same measurement purpose, to obtain a more accurate measurement.
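For intuition, a minimal scalar predict/update step is sketched below; it is a generic textbook Kalman step, not the specific filtering used later in this application, and the variable names are assumptions.

```python
# Minimal scalar Kalman filter step: fuse a prediction with a new measurement z,
# weighting them by their variances (q: process noise, r: measurement noise).
def kalman_step(x_est, p_est, z, q, r):
    x_pred, p_pred = x_est, p_est + q      # predict (static state assumed for simplicity)
    k = p_pred / (p_pred + r)              # Kalman gain
    x_new = x_pred + k * (z - x_pred)      # fused, more accurate estimate
    p_new = (1.0 - k) * p_pred             # reduced uncertainty after the update
    return x_new, p_new
```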
5. Image fusion (Image fusion)
Image fusion is an image processing technology: image data about the same target acquired through multiple source channels is processed with image processing and specific algorithms, the useful information in each channel is extracted to the greatest extent, and finally a high-quality image (for example, in terms of brightness, sharpness and color) is synthesized; compared with the original images, the fused image is more accurate.
Based on the foregoing, a possible applicable architecture and a possible application scenario of the present application are described below.
Referring to fig. 1a, a schematic diagram of a possible communication system architecture is provided. The communication system may include a vehicle side detection system and a road side detection system, and communication between the vehicle side detection system and the road side detection system may be carried out over a sidelink (SL) air interface or a Uu air interface. The vehicle side detection system may be mounted on a vehicle, including but not limited to an unmanned vehicle, a smart vehicle, an electric vehicle, a digital car, or the like. The road side detection system may be installed on road side infrastructure, including but not limited to traffic lights, traffic cameras, or road side units (RSUs); fig. 1a takes a traffic light as an example of the road side infrastructure.
The vehicle side detection system can acquire measurement information such as the longitude and latitude, speed and heading of the vehicle and the distance of surrounding objects in real time or periodically, and then, in combination with an advanced driving assistance system (ADAS), realize assisted driving or automatic driving of the vehicle based on the measurement information. For example, the longitude and latitude may be used to determine the position of the vehicle, the speed and heading may be used to determine the direction and destination of travel of the vehicle over a future period of time, and the distances of surrounding objects may be used to determine the number, density, and the like of obstacles around the vehicle. Further optionally, the vehicle side detection system may comprise a vehicle-mounted sensing module and a vehicle-mounted fusion module, see fig. 1b. The vehicle-mounted sensing module may be a sensor (the function of the sensor is described below) disposed around the vehicle body (for example, at the front left, front right, left, right, rear left or rear right of the vehicle); the position at which the vehicle-mounted sensing module is mounted on the vehicle is not limited in this application. The vehicle-mounted fusion module may be, for example, a processor in the vehicle, a domain controller in the vehicle, an electronic control unit (ECU) in the vehicle, or another chip mounted in the vehicle. The ECU may also be referred to as a "driving computer", "vehicle-mounted computer", "vehicle-specific microcomputer controller" or "lower computer", and is one of the core elements of the vehicle. Further optionally, the road side detection system may comprise a road side sensing module and a road side fusion module, see fig. 1b. The road side sensing module can be used to collect obstacle information, pedestrian information, signal lamp information, traffic flow information, traffic sign information and the like in real time or periodically. The road side sensing module may be, for example, a sensor; reference may be made to the following description of the sensor, which is not repeated here. The road side fusion module may be, for example, a chip or a processor.
Further optionally, the communication system may further comprise a cloud server. The cloud server may be a single server or a server cluster composed of a plurality of servers. The cloud server may also be referred to as a cloud, cloud server, cloud controller, or internet of vehicles server, etc. It is also understood that a cloud server is a generic term for devices or components of data processing capabilities, such as physical devices that may include hosts or processors, virtual devices that may include virtual machines or containers, and chips or integrated circuits.
In one possible implementation, the communication system may be an intelligent vehicle-road collaboration system (intelligent vehicle infrastructure cooperative systems, IVICS), abbreviated as vehicle-road collaboration system.
The communication system can be applied to fields such as unmanned driving, automatic driving, assisted driving, intelligent driving, connected vehicles, surveying and mapping, or security monitoring. It should be noted that the above application scenarios are merely examples, and the present application may be applied to various other scenarios, for example, an automated guided vehicle (AGV) scenario: an AGV is equipped with an electromagnetic or optical automatic navigation device and can travel along a predetermined navigation path; the vehicle side detection system may be mounted on the AGV, and the road side detection system may be mounted on roadside equipment in the AGV scenario.
In one possible implementation, the fusion device may be mounted on the vehicle, or the processor in the vehicle may serve as the fusion device, or the ECU in the vehicle may serve as the fusion device, or the domain controller in the vehicle may serve as the fusion device. Based on this, the fusion device may receive the information sent by the roadside detection system through an on board unit (OBU) in the vehicle. The OBU is a communication device that adopts dedicated short range communication (DSRC) technology and can be used for communication between the vehicle and the outside.
In another possible implementation, the fusion device may also be installed on road side infrastructure. The road side infrastructure may communicate with the vehicle through vehicle to everything (V2X) communication. The fusion device and the road side infrastructure may communicate via controller area network (CAN) lines, Ethernet, or wirelessly, among others.
In yet another possible implementation manner, the fusion device may also be installed on a cloud server, or the cloud server is used as the fusion device. The cloud server and the road side detection system can communicate in a wireless mode, and the cloud server and the vehicle side detection system can also communicate in a wireless mode.
In fig. 1b, the fusion device is mounted on the vehicle side as an example.
The following describes sensors suitable for use in a vehicle-mounted sensing module and a roadside sensing module.
Sensors can be divided into two main types according to their sensing mode: passive sensing sensors and active sensing sensors. A passive sensing sensor mainly relies on radiation information from the external environment, whereas an active sensing sensor perceives the environment by actively transmitting energy waves. The two types are described in detail below.
The passive sensing sensor may be, for example, a camera (also called a video camera), whose accuracy mainly depends on image processing and classification algorithms. The camera includes any camera (e.g., a still camera or a video camera) used to acquire images of the environment in which the vehicle is located. In some examples, the camera may be configured to detect visible light and is then referred to as a visible light camera. A visible light camera uses a charge-coupled device (CCD) or a standard complementary metal-oxide-semiconductor (CMOS) sensor to obtain an image corresponding to visible light. In other examples, the camera may also be configured to detect light from other portions of the spectrum (such as infrared light), in which case it may be referred to as an infrared camera. An infrared camera may also use a CCD or CMOS sensor, with a filter that allows only light in the visible color band and a set infrared band to pass through.
The active sensing sensor may be a radar. As shown in fig. 2, for example, a radar disposed at the front end of the vehicle may sense the sector area shown by the solid-line frame, which may be called the radar sensing area (or the detection area of the radar). The radar transmits electromagnetic wave signals outwards through its antenna, receives the echo signals obtained after the electromagnetic wave signals are reflected by a target, and then amplifies and down-converts the echo signals, among other processing, to obtain information such as the relative distance, relative speed and angle between the vehicle and the target.
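As general background (textbook radar relations, not taken from this application), the relative distance and relative radial speed can be derived from the echo as follows, where c is the speed of light, τ the round-trip delay of the echo, λ the carrier wavelength and f_d the measured Doppler shift.

```latex
% Standard radar relations (general background, not specific to this application).
R   = \frac{c\,\tau}{2}        % relative distance from the round-trip delay
v_r = \frac{\lambda\, f_d}{2}  % relative radial speed from the Doppler shift
```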
As described in the background, in the vehicle-road cooperative communication system, the road side data sent by the road side detection system and the vehicle side data sent by the vehicle side detection system that the fusion device fuses may not belong to the same period, so the fused data may be distorted.
In view of this, the present application provides a vehicle-road cooperative communication method. The method can improve the accuracy with which data from the road side detection system and data from the vehicle side detection system are fused. The communication method may be applied in a communication system as shown in fig. 1a, and may be performed by the road side detection system or by the vehicle side detection system.
Based on the foregoing, a specific explanation of the vehicle-road cooperative communication method proposed in the present application is provided below with reference to fig. 3 to 6 b.
Referring to fig. 3, a method flow diagram of a vehicle-road cooperative communication method provided in the present application is shown. The method is hereinafter performed by the roadside detection system as an example. The method comprises the following steps:
in step 301, a roadside detection system acquires first data.
Here, the first data includes one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimate, and a first timestamp.
The first state process calibration value is used for optimizing the first covariance estimation value (namely, the fusion device can optimize the first covariance estimation value based on the first state process calibration value in the data fusion process), so that the measurement error of the detection system can be more accurately represented; the first motion model is used to identify a motion state of the cooperative target. Further alternatively, the first state process calibration value and the first motion model may be pre-acquired or pre-stored by the roadside detection system. Here, a possible implementation method of the roadside detection system to obtain the first state process calibration value and the first motion model may be referred to the following description of fig. 4, which is not repeated herein.
The first state information is used to identify a motion characteristic of the random object. Illustratively, the first state information may include a first location and/or a first rate of random targets, etc. The possible implementation manner of obtaining the first state information may be referred to the following description of fig. 5, which is not repeated herein.
The first covariance estimate is used to identify the error between the first state information and the first actual state information. It should be noted that first initial state information may be preset for the initial state of the roadside detection system. The first covariance estimate generally converges as the filtering proceeds, and when it converges to a certain value, the first state information can be regarded as the first actual state information.
The first timestamp is used to identify the moment at which the roadside detection system sends the first data. It can also be understood that the first timestamp is the timestamp attached when the roadside detection system sends the first data.
In step 302, the roadside detection system sends the first data to the fusion device. Accordingly, the fusion device may receive the first data from the roadside detection system.
When the fusion device is mounted on the vehicle, or the processor in the vehicle is the fusion device, or the ECU in the vehicle is the fusion device, or the domain controller in the vehicle is the fusion device, the fusion device may receive the first data from the roadside detection system through the OBU in the vehicle. V2X communication is carried out between the road side detection system and the vehicle.
When the fusion device is installed on the cloud server or the cloud server is used as the fusion device, the road side detection system can send first data to the cloud server.
In one possible implementation, the first state process calibration value and the first motion model may be sent by the roadside detection system to the fusion device while the connection between the roadside detection system and the fusion device is being established; or they may be sent together with the first transmission of the first state information, the first covariance estimate and the first timestamp to the fusion device; or they may be sent before the roadside detection system sends the first state information, the first covariance estimate and the first timestamp to the fusion device for the first time. This is not limited in this application.
Through the above steps 301 and 302, by sending the first state process calibration value and the first motion model to the fusion device, the fusion device can be enabled to optimize the first covariance estimation value based on the first state process calibration value, so that the measurement error can be more accurately represented. Further, the fusion device can determine the first confidence coefficient according to the first state process calibration value and the first motion model, so that the accuracy of the first data determined by the fusion device can be further improved, and the accuracy of the first fusion data output by the fusion device can be further improved.
Possible implementations of the acquisition of the first data are exemplarily shown below.
1) A possible implementation of obtaining the first state process calibration value and the first motion model.
Referring to fig. 4, a flowchart of a method for obtaining the first state process calibration value and the first motion model is provided. In this method, the preset moment is exemplified by a first moment, the motion model by the first motion model, and the state process calibration value by the first state process calibration value; the actual position acquired at the first moment may be called the first actual position, the estimated position acquired at the first moment may be called the first estimated position, the estimated position after m steps may be called the second estimated position, and the estimation error corresponding to the m steps is called the first estimation error. The method is applicable to the system shown in fig. 1b and may comprise the following steps:
in step 401, the roadside fusion module obtains a first actual position of a cooperative target conforming to a first motion model at a first moment.
Here, the cooperative target moves according to the first motion model, for example a linear motion model, a left motion model or a right motion model. The linear motion model can be understood as the cooperative target moving in a uniform straight line at speed v. The left motion model can be understood as the cooperative target accelerating at speed v with acceleration a, and the right motion model as the cooperative target decelerating in a straight line at speed v with acceleration a; alternatively, the left motion model can be understood as the cooperative target decelerating at speed v with acceleration a, and the right motion model as the cooperative target accelerating in a straight line at speed v with acceleration a.
Next, a method for obtaining the first actual positions of the cooperative targets at n different first moments by using the roadside fusion module is provided, as shown in fig. 5, and the method includes the following steps:
in step 501, the measuring instrument measures first actual positions of the cooperative targets corresponding to the first motion model at n different first moments, where n is an integer greater than 1.
Here, the surveying instrument may be an equal-precision surveying instrument using real-time kinematic (RTK) technology, which may be mounted on the cooperative target to achieve a measurement of the actual position of the cooperative target.
In one possible implementation, as the cooperative target moves according to the first motion model, the measuring instrument may measure the first actual positions [x_1, x_2, x_3, …, x_n] of the cooperative target at the n first moments [k_1, k_2, k_3, …, k_n]. It can also be understood that at moment k_1 the first actual position of the cooperative target measured by the measuring instrument is x_1; at moment k_2 the first actual position measured is x_2; and so on, until at moment k_n the first actual position measured is x_n.
Illustratively, the surveying instrument may obtain a relationship of a first motion model, n first moments, and n first actual positions as shown in table 1.
TABLE 1 relation of first motion model, n first moments and n first actual positions
The relation among the first motion model, the n first moments, and the n first actual positions is only an example, and may be represented by other corresponding relations, which is not limited in this application. In addition, the table 1 may be three independent tables, that is, one table corresponds to one motion model.
Step 502, the measuring instrument sends first actual positions of the obtained cooperative targets corresponding to n different first moments to the roadside fusion module. Correspondingly, the road side fusion module receives first actual positions corresponding to n different first moments from the measuring instrument.
If the measuring instrument obtains Table 1 in step 501, step 502 may be that the measuring instrument sends Table 1 to the roadside fusion module. If the measuring instrument obtains three tables in step 501, step 502 may be that the measuring instrument sends the three tables to the roadside fusion module. It should be noted that the three tables may be sent to the roadside fusion module together, or may be sent to the roadside fusion module in three transmissions, which is not limited in this application.
Through the above steps 501 and 502, the road side fusion module may obtain the first actual positions corresponding to the n different first moments when the cooperative targets move according to the first motion model.
In step 402, the roadside awareness module obtains a first estimated position of the cooperative target conforming to the first motion model at a first time, and sends the first estimated position at the first time to the roadside fusion module.
In one possible implementation, the roadside awareness module may record the first estimated positions [x̂_1, x̂_2, x̂_3, …, x̂_n] of the cooperative target conforming to the first motion model at the n different first moments [k_1, k_2, k_3, …, k_n]. It can also be understood that at moment k_1 the roadside awareness module records the first estimated position of the cooperative target conforming to the first motion model as x̂_1; at moment k_2 it records the first estimated position as x̂_2; and so on, until at moment k_n it records the first estimated position as x̂_n.
With reference to fig. 1a, if the roadside sensing module is a radar, the distance of the cooperative target can be measured by electromagnetic waves, and then the angle of electromagnetic wave reception can be obtained by combining the multi-antenna technology, and the distance and the angle can be used for positioning in space, so that the first estimated positions corresponding to n different first moments can be obtained. If the road side sensing module is a camera, the corresponding relation between the image pixel coordinate system and the world coordinate system can be constructed, so that first estimated positions corresponding to n different first moments can be obtained.
Further optionally, the roadside perception module sends, to the roadside fusion module, the first estimated positions [x̂_1, x̂_2, x̂_3, …, x̂_n] of the cooperative target conforming to the first motion model at the n different first moments.
In one possible implementation, the roadside detection system may include P roadside perception modules, where each of the P roadside perception modules may perform step 402 described above and obtain n first estimated positions, and each of the P roadside perception modules may send its recorded n first estimated positions to the roadside fusion module. Accordingly, the roadside fusion module may receive n first estimated positions from each of the P roadside perception modules, resulting in P×n first estimated positions.
Further optionally, the roadside fusion module may first perform a weighted average on the first estimated positions from the P roadside perception modules to obtain the first estimated position at each first moment. It can also be understood that, at moment k_1, the roadside fusion module performs a weighted average on the first estimated positions from the P roadside perception modules to obtain the first estimated position at moment k_1; at moment k_2, it performs a weighted average to obtain the first estimated position at moment k_2; and so on, at moment k_n, it performs a weighted average to obtain the first estimated position at moment k_n. In other words, at the n first moments [k_1, k_2, k_3, …, k_n], the roadside fusion module obtains n first estimated positions.
In step 403, the roadside fusion module determines a second estimated position of the cooperative target after M steps according to the first estimated position and the first motion model.
Illustratively, if the first motion model is a linear motion model, the second estimated position of the cooperative target after M steps can be represented by the following equation 1.
If the first motion model is a left motion model, the second estimated position of the cooperative target after M steps can be represented by the following equation 2. It should be understood that, in this example, the left motion model is illustrated by taking the case where the cooperative target performs accelerated motion with speed v and acceleration a as an example.
If the first motion model is a right motion model, the second estimated position of the cooperative target after M steps can be represented by the following equation 3. It should be understood that, in this example, the right motion model is illustrated by taking the case where the cooperative target performs decelerated linear motion with speed v and acceleration a as an example.
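The formula images for equations 1 to 3 are not reproduced in this text. A plausible reconstruction, assuming a one-dimensional position, a constant sampling interval Δt between adjacent first moments, and the speed v and acceleration a mentioned above, is sketched below; the symbols x̂(k_i) and Δt and the exact form are assumptions introduced here, not a verbatim copy of the original formulas.

```latex
% Hedged reconstruction of equations 1-3; the original formula images are unavailable.
% \hat{x}(k_i) is the first estimated position at moment k_i, \Delta t the sampling interval.
\begin{align}
\hat{x}(k_i + M) &= \hat{x}(k_i) + v\,M\,\Delta t
  && \text{(equation 1, linear motion)}\\
\hat{x}(k_i + M) &= \hat{x}(k_i) + v\,M\,\Delta t + \tfrac{1}{2}\,a\,(M\,\Delta t)^{2}
  && \text{(equation 2, left motion, accelerating)}\\
\hat{x}(k_i + M) &= \hat{x}(k_i) + v\,M\,\Delta t - \tfrac{1}{2}\,a\,(M\,\Delta t)^{2}
  && \text{(equation 3, right motion, decelerating)}
\end{align}
```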
In step 404, the roadside fusion module determines a first estimation error of the M-step estimation according to the first actual position and the second estimated position.
For example, the first estimation error at a first moment may be expressed as the difference between the second estimated position predicted for that moment and the first actual position measured at that moment. Over the n first moments, n−M first estimation errors can be obtained. For example, when M=1, n−1 first estimation errors can be obtained; when M=2, n−2 first estimation errors can be obtained. It will be appreciated that the n−M first estimation errors may be identical, different, or partially identical.
In step 405, the roadside fusion module obtains a first state process calibration value corresponding to the first motion model according to the first estimation error.
In one possible implementation, the above steps 401 to 405 are performed in a loop L times, and n−M first estimation errors can be obtained each time steps 401 to 405 are performed. The roadside fusion module determines the variance of the L×(n−M) first estimation errors obtained over the L loops as the first state process calibration value corresponding to the first motion model. When L is greater than 1, this helps to improve the accuracy of the first state process calibration value.
Illustratively, the first state process calibration value Q_m can be represented by the following equation 4.
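The formula image for equation 4 is likewise not reproduced. Since step 405 defines the first state process calibration value as the variance of the first estimation errors collected over the L loops, a consistent reconstruction is the following, where e_j denotes the j-th first estimation error and ē their mean (both symbols introduced here for illustration):

```latex
% Hedged reconstruction of equation 4 (variance of the L x (n - M) first estimation errors).
\begin{align}
Q_m &= \frac{1}{L\,(n-M)} \sum_{j=1}^{L\,(n-M)} \bigl(e_j - \bar{e}\bigr)^{2},
&
\bar{e} &= \frac{1}{L\,(n-M)} \sum_{j=1}^{L\,(n-M)} e_j
&& \text{(equation 4)}
\end{align}
```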
When the first motion model is a linear motion model, a first state process calibration value corresponding to the linear motion model can be obtained; when the first motion model is a left motion model, a first state process calibration value corresponding to the left motion model can be obtained; when the first motion model is a right motion model, a first state process calibration value corresponding to the right motion model can be obtained; see Table 2 below.
TABLE 2 correspondence of first motion model, M steps, and first state process calibration values
It should be noted that, for the same motion model and the same M, when the motion speeds of the cooperative target are different, the corresponding first state process calibration values may also be different.
It should also be noted that the correspondence among the first motion model, the M steps, and the first state process calibration value shown in Table 2 is only an example and may be represented by other correspondences, which is not limited in this application.
2) Possible implementations of the acquisition of the first state information.
Based on the type of the road side perception module, the first state information can be acquired in the following two cases.
Case one: the roadside perception module is a radar.
In case one, the roadside detection system is taken to include P radars as an example, where P is a positive integer.
Fig. 6a is a schematic flow chart of a method for acquiring first status information by a roadside detection system provided by the present application. In this example, the transmitted electromagnetic wave signal is exemplified as the first electromagnetic wave signal, and the sampled data is exemplified as the first echo signal. The method comprises the following steps:
for each of the P radars, the following steps 601 to 603 are performed.
In step 601, the radar emits a first electromagnetic wave signal to a detection area.
The detection area of the radar can be seen from the description of fig. 2.
The radar receives a first echo signal from a detection zone, step 602.
Here, the first echo signal is obtained after the first electromagnetic wave signal is reflected by a random target. It should be understood that the first echo signal is the detection data (which may also be referred to as sampling data, measurement data, etc.) of the random target acquired by the radar.
In step 603, the radar sends a first echo signal to the roadside fusion module. Accordingly, the roadside fusion module receives the first echo signal from the radar.
Here, the roadside fusion module may receive the first echo signals from the P radars (i.e., sample data 1, sample data 2 … sample data P) to obtain P first echo signals.
In step 604, the roadside fusion module determines first state information of the random target according to the received P first echo signals.
In one possible implementation, the roadside fusion module obtains one piece of state information from each first echo signal, thereby obtaining P pieces of state information from the P first echo signals, and performs a weighted average on the P pieces of state information to obtain the first state information of the random target.
Illustratively, taking the case where the first state information includes the first position of the target at a given moment as an example, the roadside fusion module may perform a weighted average on the P obtained positions to obtain the first position included in the first state information.
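As an illustration of the weighted averaging described above, the following minimal sketch shows how P per-radar positions might be combined; the function name and the example weights and positions are assumptions introduced here and are not part of the original disclosure. In practice the weights would typically reflect the relative reliability of each roadside perception module.

```python
import numpy as np

def fuse_positions(positions, weights=None):
    """Weighted average of P position estimates (one per radar).

    positions: array of shape (P, D), one D-dimensional position per radar.
    weights:   array of shape (P,); uniform weights are assumed if omitted.
    """
    positions = np.asarray(positions, dtype=float)
    if weights is None:
        weights = np.ones(len(positions))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()   # normalize so the weights sum to 1
    return weights @ positions          # weighted average = first position of the first state information

# Hypothetical example: three radars observing the same random target
first_position = fuse_positions([[10.1, 5.0], [9.8, 5.2], [10.3, 4.9]],
                                weights=[0.5, 0.3, 0.2])
```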
As shown in fig. 6b, a flowchart of a method for acquiring first status information by another roadside detection system provided in the present application is shown. In this example, the transmitted electromagnetic wave signal is exemplified as the first electromagnetic wave signal, and the sampled data is exemplified as the first echo signal. The method comprises the following steps:
for each of the P radars, the following steps 611 to 614 are performed.
In step 611, the radar transmits a first electromagnetic wave signal to the detection area.
Here, the detection area of the radar can be seen from the description of fig. 2 above.
The radar receives a first echo signal from the detection zone, step 612.
Here, the first echo signal is obtained after the first electromagnetic wave signal is reflected by a random target.
In step 613, the radar determines first state information of the random target according to the received first echo signal.
In step 614, the radar sends the state information to the roadside fusion module. Correspondingly, the roadside fusion module receives the state information from the radar.
Here, the roadside fusion module may receive state information from P radars, obtaining P state information.
In step 615, the roadside fusion module determines first state information of the random target according to the received P state information.
In one possible implementation, the roadside fusion module performs a weighted average on the P pieces of state information from the P radars to obtain the first state information of the random target.
Case two: the roadside perception module is a camera.
Fig. 7a is a schematic flow chart of a method for acquiring first state information of a target by using a road side detection system provided in the present application. In this example, the first image is taken as sample data. The method comprises the following steps:
for each of the P cameras, the following steps 701 to 703 are performed.
In step 701, a camera captures a first image of the random target.
In step 702, the camera sends a first image to a roadside fusion module. Accordingly, the roadside fusion module receives a first image from the camera.
Here, the roadside fusion module may receive the first images from the P cameras, obtaining P first images.
In step 703, the roadside fusion module may obtain first state information of the random target according to the P first images.
In one possible implementation manner, the roadside fusion module may fuse the P first images, and determine the first state information of the random target according to the fused first images.
Fig. 7b is a schematic flow chart of a method for obtaining first state information of a target by using a road side detection system provided in the present application. The method comprises the following steps:
In step 711, the camera captures a first image of the random target.
In step 712, the camera obtains state information of the random target from the first image.
In step 713, the camera sends the state information of the random target to the roadside fusion module. Accordingly, the roadside fusion module receives the state information from the camera.
Here, the roadside fusion module may receive the state information from the P cameras, obtaining P pieces of state information.
In step 714, the roadside fusion module obtains the first state information of the random target according to the P pieces of state information from the P cameras.
In one possible implementation, the roadside fusion module performs a weighted average on the P pieces of state information from the P cameras to obtain the first state information of the random target.
3) A first timestamp.
In one possible implementation manner, the moment when the roadside fusion module sends the first data to the fusion device through the air interface is the first timestamp.
4) A first covariance estimate.
In one possible implementation, the first covariance estimate is a statistical error between the first state information and the actual state information.
Further optionally, the first covariance estimate can be represented by the following equation 5.
Here, k represents the fusion time, t represents the first timestamp, φ_r(k/t) represents the state equation, P̂_r(t) represents the covariance estimate at time t (at the initial time, i.e., the start of the process, it may be a preset value), which is an intermediate quantity, and Q_r(k/t) is the state process calibration value corresponding to the interval from t to k. It should be appreciated that the first state equation may be determined by the motion model of the random target detected by the roadside detection system and indicated to the fusion device, or may be determined by the fusion device based on the first state information. The motion model of the random target may be a linear motion model, a left motion model, or a right motion model.
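The formula image for equation 5 is not reproduced in this text. Based on the quantities listed above, a standard covariance-propagation form that is consistent with this description would be the following reconstruction (not a verbatim copy of the original formula):

```latex
% Hedged reconstruction of equation 5 (first covariance prediction at fusion time k).
\begin{equation}
\hat{P}_r(k/t) \;=\; \phi_r(k/t)\,\hat{P}_r(t)\,\phi_r(k/t)^{\mathsf{T}} \;+\; Q_r(k/t)
\tag{5}
\end{equation}
```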
Based on the above, the roadside fusion module may obtain the first state information, the first covariance estimate, the first timestamp, the first state process calibration value, and the first motion model.
Based on a similar process, the vehicle-side fusion module may obtain a second motion model, a second state process calibration value, second state information, a second covariance estimate, and a second timestamp. It should be understood that, in the above description, the roadside fusion module may be replaced with the vehicle-side fusion module, the roadside perception module with the vehicle-side perception module, the first motion model with the second motion model, the first state process calibration value with the second state process calibration value, and the first state information with the second state information.
Based on the foregoing, a specific description will be given below of a vehicle-road collaborative data processing method according to the present application with reference to fig. 8 to 10.
Fig. 8 is a schematic flow chart of a method for processing data of vehicle-road cooperation. The method is applicable to the communication system described above and shown in fig. 1 a. The method comprises the following steps:
in step 801, a roadside detection system sends first data to a fusion device. Accordingly, the fusion device receives the first data from the roadside detection system.
Here, the first data includes one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimate, and a first timestamp. Reference may be made specifically to the foregoing related description, and no further description is given here.
In step 802, the vehicle side detection system sends second data to the fusion device. Accordingly, the fusion device receives the second data from the vehicle side detection system.
Here, the second data includes a second state process calibration value, a second motion model, second state information, a second covariance estimate, and a second timestamp. The second state process calibration value is used for optimizing the second covariance estimate (that is, the fusion device may optimize the second covariance estimate based on the second state process calibration value in the data fusion process), so that the measurement error can be characterized more accurately; the second motion model is used to identify the motion state of the cooperative target. Further optionally, the second state process calibration value and the second motion model may be acquired and stored in advance by the vehicle-side detection system. A possible implementation in which the vehicle-side detection system obtains the second state process calibration value and the second motion model may be found in the foregoing description of fig. 4 and is not repeated here. The second state information is used to identify a motion characteristic of the target. For example, the second state information may include a second position and/or a second rate of the target, etc. A possible implementation of obtaining the second state information may be found in the foregoing description of fig. 5 and is not repeated here. The second covariance estimate is used to identify a statistical error between the second state information and the actual state information. It should be noted that, in the initial state, the vehicle-side detection system has no actual state information, and a piece of actual state information (referred to as initial state information) may be preset; the initial state information generally converges continuously along with the filtering process. The second timestamp is used to identify the time at which the second data is transmitted by the vehicle-side detection system. It can also be understood that the second timestamp is the timestamp carried when the vehicle-side detection system transmits the second data.
It should be noted that, there is no sequence between the step 801 and the step 802, the step 801 may be performed first and then the step 802 may be performed, or the step 802 may be performed first and then the step 801 may be performed, or the step 801 and the step 802 may be performed synchronously, which is not limited in this application.
In step 803, the fusion device obtains first fusion data of the target according to the first data and the second data.
Here, the first fused data refers to the first fused data of a random target. The first fused data includes, for example, position information, state information (e.g., rate, direction), covariance, etc. of the target. A possible implementation of this step 803 may be found in the following description of fig. 9 and is not repeated here.
Through the steps 801 to 803, the fusion device may receive the first state process calibration value and the first motion model from the road side fusion module, and the fusion device may optimize the first covariance estimation value through the first state process calibration value in the data fusion process, so as to more accurately characterize the measurement error; further, the fusion device also receives the second state process calibration value and the second motion model from the vehicle-side fusion module, and optimizes the second covariance estimation value through the second state process calibration value in the data fusion process, so that the measurement error can be more accurately represented, and the accuracy of the obtained first fusion data of the target is improved.
Referring to fig. 9, a flowchart of a method for obtaining first fusion data according to first data and second data is provided. The method comprises the following steps:
in step 901, the fusion device obtains a first state predicted value of the target at the fusion time according to the first timestamp and the first state information.
In one possible implementation, the first state prediction value X̂_r(k/t) can be represented by the following equation 6:
Here, X̂_r(t) represents the first state information at time t transmitted by the roadside detection system, and φ_r(k/t) represents the first state equation.
In step 902, the fusion device obtains a second state prediction value of the target at the fusion time according to the second timestamp and the second state information.
In one possible implementation, the second state prediction value X̂_v(k/t) can be represented by the following equation 7:
Here, X̂_v(t) represents the second state information at time t transmitted by the vehicle-side detection system, and φ_v(k/t) represents the second state equation.
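Equations 6 and 7 are not reproduced in this text. Under the state equations defined above, the usual one-step state prediction consistent with steps 901 and 902 would be the following reconstruction:

```latex
% Hedged reconstruction of equations 6 and 7 (state prediction at the fusion time k).
\begin{align}
\hat{X}_r(k/t) &= \phi_r(k/t)\,\hat{X}_r(t) \tag{6}\\
\hat{X}_v(k/t) &= \phi_v(k/t)\,\hat{X}_v(t) \tag{7}
\end{align}
```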
It should be noted that, there is no sequence between the step 901 and the step 902, the step 901 may be performed first and then the step 902 may be performed, or the step 902 may be performed first and then the step 901 may be performed, or the step 901 and the step 902 may be performed simultaneously.
In step 903, the fusion device obtains a first covariance predicted value of the target at the fusion time according to the first timestamp, the first state information, the first state process calibration value and the first covariance estimated value.
Here, the first covariance prediction value P̂_r(k/t) can be obtained from the description of equation 5 above, where P̂_r(t) represents the first covariance estimate output by the roadside detection system at time t.
Step 904, the fusion device obtains a second covariance predicted value of the target at the fusion time according to the second timestamp, the second state information, the second state process calibration value and the second motion model.
In one possible implementation, the second covariance prediction value P̂_v(k/t) can be represented by the following equation 8:
Here, k represents the k-th fusion time, φ_v(k/t) represents the state equation, P̂_v(t) represents the second covariance estimate output by the vehicle-side detection system at time t, which is an intermediate quantity, and Q_v(k/t) is the second state process calibration value corresponding to the interval from t to k. It should be appreciated that the second state equation may be determined by the motion model of the target detected by the vehicle-side detection system and indicated to the fusion device, or may be determined by the fusion device based on the second state information.
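Equation 8 mirrors equation 5 on the vehicle side; a reconstruction consistent with the quantities just listed is:

```latex
% Hedged reconstruction of equation 8 (second covariance prediction at fusion time k).
\begin{equation}
\hat{P}_v(k/t) \;=\; \phi_v(k/t)\,\hat{P}_v(t)\,\phi_v(k/t)^{\mathsf{T}} \;+\; Q_v(k/t)
\tag{8}
\end{equation}
```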
It should be noted that, there is no sequence between the step 903 and the step 904, the step 903 may be performed first and then the step 904 may be performed, or the step 903 and the step 904 may be performed simultaneously.
In step 905, the fusion device predicts a third state prediction value and a third covariance prediction value of the target according to the first motion model and the second fusion data of the previous frame.
In one possible implementation, the second fused data of the previous frame includes the state information X̂_f(k−1) output by the previous frame. The third state prediction value can be represented by equation 9.
Here, φ_f(k) = φ_r(k/t) represents the state equation, which can be determined from the first motion model.
Further optionally, the second fused data of the previous frame includes the covariance estimate P̂_f(k−1) output by the previous frame. The third covariance prediction value can be represented by equation 10.
Here, Q_f(k) is a measure of the process noise, typically an empirical value, and φ_f(k) represents the state equation.
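Equations 9 and 10 are not reproduced. Writing X̂_{p,r}(k) and P̂_{p,r}(k) for the third state prediction value and the third covariance prediction value (subscripts introduced here for illustration), a reconstruction consistent with the surrounding text is:

```latex
% Hedged reconstruction of equations 9 and 10 (prediction from the previous frame's fused data).
\begin{align}
\hat{X}_{p,r}(k) &= \phi_f(k)\,\hat{X}_f(k-1) \tag{9}\\
\hat{P}_{p,r}(k) &= \phi_f(k)\,\hat{P}_f(k-1)\,\phi_f(k)^{\mathsf{T}} + Q_f(k) \tag{10}
\end{align}
```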
Here, each frame of fusion data output by the fusion device may be cached first, and the processor may call the fusion data in the cache.
In step 906, the fusion device predicts a fourth state prediction value and a fourth covariance prediction value of the target according to the second motion model and the second fusion data of the previous frame of the target.
This step 906 is similar to the foregoing step 905 and is not repeated here.
It should be noted that, there is no sequence between the step 905 and the step 906, the step 905 may be performed first and then the step 906 may be performed, or the step 906 may be performed first and then the step 905 may be performed, or the step 905 and the step 906 may be performed simultaneously.
In step 907, the fusion device obtains a first filtered estimate and a first filtered covariance estimate of the target according to the first state predictor, the first covariance predictor, the third state predictor, and the third covariance predictor.
In one possible implementation, the first filtered estimate X̂_r(k) can be determined from the first state prediction value and the third state prediction value, as shown in equation 11 below; the first filtered covariance estimate P̂_r(k) can be determined from the first covariance prediction value and the third covariance prediction value, as shown in equation 12 below.
Here, H is the observation matrix, K_r(k) is the Kalman gain given by equation 13 below, and R is the noise coefficient, typically an empirical value.
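Equations 11 to 13 are not reproduced in this text. One plausible reconstruction is a Kalman-style update in which the third prediction (from the previous fused frame) serves as the prior and the first prediction serves as the observation; exactly how the first covariance prediction value enters the gain is an assumption made here, shown as part of the innovation covariance. Equations 14 to 16 on the vehicle side would take the same form with the subscript r replaced by v and the third prediction replaced by the fourth.

```latex
% Hedged reconstruction of equations 11-13 (roadside filtered estimate and covariance).
\begin{align}
K_r(k) &= \hat{P}_{p,r}(k)\,H^{\mathsf{T}}
          \bigl(H\,\hat{P}_{p,r}(k)\,H^{\mathsf{T}}
              + H\,\hat{P}_r(k/t)\,H^{\mathsf{T}} + R\bigr)^{-1} \tag{13}\\
\hat{X}_r(k) &= \hat{X}_{p,r}(k)
              + K_r(k)\,\bigl(H\,\hat{X}_r(k/t) - H\,\hat{X}_{p,r}(k)\bigr) \tag{11}\\
\hat{P}_r(k) &= \bigl(I - K_r(k)\,H\bigr)\,\hat{P}_{p,r}(k) \tag{12}
\end{align}
```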
Here, the first filtered estimate of the target includes a first rate, a first direction, first location information, etc. of the target.
In step 908, the fusion device obtains a second filtered estimation value and a second filtered covariance estimation value of the target according to the second state prediction value, the second covariance prediction value, the fourth state prediction value and the fourth covariance prediction value.
In one possible implementation, the second filtered estimate X̂_v(k) can be determined from the second state prediction value and the fourth state prediction value, as shown in equation 14 below; the second filtered covariance estimate P̂_v(k) can be determined from the second covariance prediction value and the fourth covariance prediction value, as shown in equation 15 below.
Here, H is the observation matrix, and K_v(k) is the Kalman gain given by equation 16 below.
Here, the second filtered estimate of the target includes a second rate, a second direction, second location information, etc. of the target.
It should be noted that, there is no sequence between the step 907 and the step 908, the step 907 may be performed first and then the step 908 may be performed, or the step 908 may be performed first and then the step 907 may be performed, or the step 907 and the step 908 may be performed simultaneously.
In step 909, the fusion device determines a first confidence level of the first data at the fusion time according to the first timestamp, the first state process calibration value, the first motion model and the first covariance estimation value.
The first confidence may also be referred to as a first confidence prediction value. Assume that the residual ΔX of the first data sent by the roadside detection system obeys the normal distribution N(0, Σ) and that the confidence interval is (0, S]. The first confidence U_r(k) can then be represented by the following equation 17.
Here, Σ is the sum of the first covariance estimate and the first state process calibration value, S is specified by a standard and may range from greater than 0 to less than or equal to 95%, and X is a variable.
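The formula image for equation 17 (and the analogous equation 18) is not reproduced. Under the stated normality assumption, one reconstruction, written in scalar form, is the probability mass of the residual falling inside the confidence interval:

```latex
% Hedged reconstruction of equation 17; equation 18 is analogous with subscript v.
\begin{equation}
U_r(k) \;=\; P\bigl(0 < X \le S\bigr)
        \;=\; \int_{0}^{S} \frac{1}{\sqrt{2\pi\,\Sigma}}
              \exp\!\Bigl(-\frac{x^{2}}{2\,\Sigma}\Bigr)\,\mathrm{d}x ,
\qquad \Sigma = \hat{P}_r(k/t) + Q_r(k/t)
\tag{17}
\end{equation}
```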
In step 910, the fusion device obtains a second confidence of the second data at the fusion time according to the second timestamp, the second state process calibration value, the second motion model, and the second covariance estimate.
The second confidence may also be referred to as a second confidence prediction value. Assume that the residual ΔX of the second data sent by the vehicle-side detection system obeys the normal distribution N(0, Σ) and that the confidence interval is (0, S]. The second confidence U_v(k) can then be represented by the following equation 18.
Here, Σ is the sum of the second covariance estimate and the second state process calibration value, S is specified by a standard and may range from greater than 0 to less than or equal to 95%, and X is a variable.
In step 911, the fusion device obtains the first fusion data of the target according to the first confidence coefficient, the second confidence coefficient, the first filter estimation value, the first filter covariance estimation value, the second filter estimation value and the second filter covariance estimation value.
A possible implementation of this step 911 can be seen in the description of fig. 10 below.
Fig. 10 is a schematic flow chart of another method for obtaining the first fusion data provided in the present application. The method comprises the following steps:
step 1001, the fusion device determines whether the first confidence coefficient meets a first preset confidence coefficient, and determines whether the second confidence coefficient meets a second preset confidence coefficient; if the first confidence level meets the first preset confidence level and the second confidence level also meets the second preset confidence level, executing step 1002; if the first confidence coefficient does not meet the first preset confidence coefficient and the second confidence coefficient does not meet the second preset confidence coefficient, executing step 1003; if the first confidence level meets the first preset confidence level and the second confidence level does not meet the second preset confidence level, executing step 1004; if the first confidence does not satisfy the first preset confidence and the second confidence satisfies the second preset confidence, step 1005 is performed.
Here, the first preset confidence C_r and the second preset confidence C_v may be two thresholds set in advance. The first preset confidence may be the same as, or different from, the second preset confidence, which is not limited in this application.
In one possible implementation, the case where the first confidence satisfies the first preset confidence and the second confidence also satisfies the second preset confidence may be expressed as U_r(k) ≥ C_r and U_v(k) ≥ C_v; the case where the first confidence does not satisfy the first preset confidence and the second confidence does not satisfy the second preset confidence may be expressed as U_r(k) < C_r and U_v(k) < C_v; the case where the first confidence satisfies the first preset confidence and the second confidence does not satisfy the second preset confidence may be expressed as U_r(k) ≥ C_r and U_v(k) < C_v; and the case where the first confidence does not satisfy the first preset confidence and the second confidence satisfies the second preset confidence may be expressed as U_r(k) < C_r and U_v(k) ≥ C_v.
In step 1002, the fusion device obtains first fusion data according to the first filter estimation value, the first filter covariance estimation value, the second filter covariance estimation value, and the second filter estimation value.
The first fusion data is described by taking as an example that it includes the target state estimate and the corresponding target covariance estimate. The target state estimate can be expressed by the following equation 19, and the target covariance estimate can be expressed by the following equation 20.
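Equations 19 and 20 are not reproduced in this text. A common track-to-track fusion rule that combines two filtered estimates with their covariances, offered here only as a plausible reconstruction rather than the original formulas, is the covariance-weighted (information-form) combination:

```latex
% Hedged reconstruction of equations 19 and 20 (fusion of the two filtered branches).
\begin{align}
\hat{X}_f(k) &= \bigl(\hat{P}_r(k)^{-1} + \hat{P}_v(k)^{-1}\bigr)^{-1}
               \bigl(\hat{P}_r(k)^{-1}\,\hat{X}_r(k) + \hat{P}_v(k)^{-1}\,\hat{X}_v(k)\bigr) \tag{19}\\
\hat{P}_f(k) &= \bigl(\hat{P}_r(k)^{-1} + \hat{P}_v(k)^{-1}\bigr)^{-1} \tag{20}
\end{align}
```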
In step 1003, the fusion device obtains the first fusion data according to the second fusion data output from the previous frame.
Here, the target state estimate and the corresponding target covariance estimate are taken from the second fusion data output by the previous frame.
In step 1004, the fusion device obtains first fusion data according to the first filter estimation value and the first filter covariance estimation value.
Here, the target state estimate is the first filtered estimate X̂_r(k), and the corresponding target covariance estimate is the first filtered covariance estimate P̂_r(k).
In step 1005, the fusion device obtains the first fusion data according to the second filter estimation value and the second filter covariance estimation value.
Here, the target state estimate is the second filtered estimate X̂_v(k), and the corresponding target covariance estimate is the second filtered covariance estimate P̂_v(k).
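The branching of steps 1001 to 1005 can be summarized by the following minimal sketch. The function name, the container of the result, and the covariance-weighted combination used in the both-valid branch are assumptions introduced here (consistent with the reconstruction of equations 19 and 20 above), not part of the original disclosure.

```python
import numpy as np

def select_fusion(U_r, U_v, C_r, C_v,
                  X_r, P_r,          # first filtered estimate / covariance (roadside branch)
                  X_v, P_v,          # second filtered estimate / covariance (vehicle-side branch)
                  X_prev, P_prev):   # second fusion data output by the previous frame
    """Return (target state estimate, target covariance estimate) following steps 1001-1005."""
    r_ok, v_ok = U_r >= C_r, U_v >= C_v
    if r_ok and v_ok:                              # step 1002: fuse both filtered branches
        P_f = np.linalg.inv(np.linalg.inv(P_r) + np.linalg.inv(P_v))
        X_f = P_f @ (np.linalg.inv(P_r) @ X_r + np.linalg.inv(P_v) @ X_v)
        return X_f, P_f
    if r_ok:                                       # step 1004: keep the roadside branch only
        return X_r, P_r
    if v_ok:                                       # step 1005: keep the vehicle-side branch only
        return X_v, P_v
    return X_prev, P_prev                          # step 1003: fall back to the previous frame
```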
It will be appreciated that in order to implement the functions of the above embodiments, the detection system and the fusion device comprise corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application scenario and design constraints imposed on the solution.
Based on the above and the same concepts, fig. 11 is a schematic structural diagram of a possible detection system provided in the present application. These detection systems may be used to implement the functions of the road side detection system or the vehicle side detection system in the above-described method embodiments, and thus may also implement the beneficial effects provided by the above-described method embodiments. In the present application, the detection system may be a road side detection system as shown in fig. 1a, or may be a vehicle side detection system as shown in fig. 1 a; the road side detection system shown in fig. 1b may be used, or the vehicle side detection system shown in fig. 1b may be used; but also a module (e.g. a chip) for application to a detection system.
As shown in fig. 11, the detection system 1100 includes a processing module 1101 and a transceiver module 1102. The detection system 1100 is used to implement the functionality of the roadside detection system in the method embodiments shown in fig. 3 or 8 described above.
When the detection system 1100 is used to implement the functionality of a roadside detection system of the method embodiment shown in fig. 3: the processing module 1101 is configured to obtain data of the detection system, where the data includes a state process calibration value, a motion model, state information, a covariance estimation value, and a timestamp; the state process calibration value is used for optimizing a covariance estimation value, the motion model is used for identifying the motion state of the target, the state information is used for identifying the motion characteristic of the target, the covariance estimation value is used for identifying the error between the state information and the actual state information, and the time stamp is used for identifying the moment when the detection system sends data; the transceiver module 1102 is configured to send data to the fusion device.
A more detailed description of the processing module 1101 and the transceiver module 1102 may be directly obtained by referring to the related description in the method embodiment shown in fig. 3, which is not repeated herein.
It should be appreciated that the processing module 1101 in the embodiments of the present application may be implemented by a processor or processor-related circuit component, and the transceiver module 1102 may be implemented by a transceiver or transceiver-related circuit component.
Based on the above and the same concepts, fig. 12 is a schematic structural view of a possible fusion device provided in the present application. These fusion devices may be used to perform the functions of the fusion devices of the above-described method embodiments, and thus may also perform the beneficial effects of the above-described method embodiments. In this application, the fusion device may be a cloud server as shown in fig. 1a, a processor, an ECU, or a domain controller in a vehicle as shown in fig. 1a, a fusion device in a road side detection system as shown in fig. 1b, or a module (such as a chip) applied to the fusion device.
As shown in fig. 12, the fusion device 1200 includes a processing module 1201 and a transceiver module 1202. The fusion device 1200 is used to implement the functionality of the fusion device described above in the method embodiments shown in fig. 3, 8, 9 or 10.
When the fusion device 1200 is used to implement the functionality of the fusion device of the method embodiment shown in fig. 8: the transceiver module 1202 is configured to obtain first data from a road side detection system and second data from a vehicle side detection system, where the first data includes one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimate, and a first timestamp, the first state process calibration value is used to optimize the first covariance estimate, the first motion model is used to identify a first motion state of a target, the first state information is used to identify a first motion feature of the target, the first covariance estimate is used to identify an error between the first state information and first actual state information, and the first timestamp is used to identify a first time when the road side detection system transmits the first data; the second data comprises a second state process calibration value, a second motion model, second state information, a second covariance estimation value and a second timestamp, the second state process calibration value is used for optimizing the second covariance estimation value, the second motion model is used for identifying a second motion state of the target, the second state information is used for identifying a second motion characteristic of the target, the second covariance estimation value is used for identifying an error between the second state information and second actual state information, and the second timestamp is used for identifying a second moment when the vehicle side detection system transmits the second data; the processing module 1201 is configured to obtain first fused data of a random target according to the first data and the second data.
A more detailed description of the processing module 1201 and the transceiver module 1202 is directly obtained with reference to the related description in the method embodiment shown in fig. 8, and will not be repeated here.
It is to be appreciated that the processing module 1201 in the embodiments of the present application may be implemented by a processor or processor-related circuit component, and the transceiver module 1202 may be implemented by a transceiver or transceiver-related circuit component.
Based on the foregoing and the same, as shown in fig. 13, the present application also provides a detection system 1300. The detection system 1300 may include at least one processor 1301 and a transceiver 1302. Processor 1301 and transceiver 1302 are coupled to each other. It is understood that the transceiver 1302 may be an interface circuit or an input-output interface. Optionally, the detection system 1300 may further include a memory 1303 for storing instructions executed by the processor 1301 or input data required by the processor 1301 to execute the instructions or data generated after the processor 1301 executes the instructions.
When the detection system 1300 is used to implement the method shown in fig. 3, the processor 1301 is configured to perform the functions of the processing module 1101, and the transceiver 1302 is configured to perform the functions of the transceiver module 1102.
Based on the foregoing and the same, as shown in fig. 14, the present application also provides a fusion device 1400. The fusion device 1400 may include at least one processor 1401 and a transceiver 1402. The processor 1401 and the transceiver 1402 are coupled to each other. It is to be appreciated that the transceiver 1402 may be an interface circuit or an input-output interface. Optionally, the fusion device 1400 may further comprise a memory 1403 for storing instructions executed by the processor 1401 or for storing input data required for the processor 1401 to execute instructions or for storing data generated after the processor 1401 executes instructions.
When the fusion device 1400 is used to implement the method shown in fig. 8, the processor 1401 is used to perform the functions of the processing module 1201, and the transceiver 1402 is used to perform the functions of the transceiver module 1202.
Based on the above and the same conception, the present application provides a communication system of a vehicle road system. The communication system of the vehicle road system may comprise one or more of the vehicle side detection systems described above, and one or more road side detection systems, and a fusion device. The vehicle side detection system can execute any method of the vehicle side detection system side, the road side detection system can execute any method of the road side detection system side, and the fusion device can execute any method of the fusion device side. Possible implementation manners of the road side detection system, the vehicle side detection system and the fusion device can be referred to the above description, and will not be repeated here.
Based on the above and the same conception, the present application provides a vehicle. The vehicle may include one or more of the vehicle side detection systems, and/or fusion devices described above. The vehicle side detection system can execute any method of the vehicle side detection system side, and the fusion device can execute any method of the fusion device side. Possible implementation manners of the vehicle-side detection system and the fusion device can be referred to the above description, and will not be repeated here. Further optionally, the vehicle may also include other devices, such as a processor, memory, wireless communication device, etc.
In one possible implementation, the vehicle may be, for example, an unmanned vehicle, a smart vehicle, an electric vehicle, a digital car, or the like.
It is to be appreciated that the processor in embodiments of the present application may be a central processing unit (central processing unit, CPU), but may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), field programmable gate arrays (field programmable gate array, FPGA) or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. The general purpose processor may be a microprocessor, but in the alternative, it may be any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules that may be stored in random access memory (random access memory, RAM), flash memory, read-only memory (read-only memory, ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may reside in a detection system. It is of course also possible that the processor and the storage medium reside as discrete components in a detection system.
In the above embodiments, the implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions of the embodiments of the present application are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a detection system, a user equipment, or other programmable apparatus. The computer program or instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, by wired or wireless means from one website, computer, server, or data center. The computer readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media such as floppy disks, hard disks, or magnetic tape; optical media, such as digital video discs (digital video disc, DVD); or semiconductor media such as solid state drives (solid state drive, SSD).
In the various embodiments of the application, if there is no specific description or logical conflict, terms and/or descriptions between the various embodiments are consistent and may reference each other, and features of the various embodiments may be combined to form new embodiments according to their inherent logical relationships.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a alone, a and B together, and B alone, wherein a, B may be singular or plural. In the text description of the present application, the character "/", generally indicates that the associated object is an or relationship; in the formulas of the present application, the character "/" indicates that the front and rear associated objects are a "division" relationship. In addition, in this application, the term "exemplary" is used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. It is to be understood that the use of the term "exemplary" is intended to present concepts in a concrete fashion and is not intended to be limiting.
It will be appreciated that the various numerical numbers referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application. The sequence number of each process does not mean the sequence of the execution sequence, and the execution sequence of each process should be determined according to the function and the internal logic. The terms "first," "second," and the like, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion, such as a series of steps or modules. The method, system, article, or apparatus is not necessarily limited to those explicitly listed but may include other steps or modules not explicitly listed or inherent to such process, method, article, or apparatus.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (23)

  1. The vehicle-road cooperative communication method is characterized by comprising the following steps of:
    acquiring data from a detection system, the data including one or more of state process calibration values, motion models, state information, covariance estimation values, and timestamps; the state process calibration value is used for optimizing the covariance estimation value, the motion model is used for identifying the motion state of a target, the state information is used for identifying the motion characteristic of the target, the covariance estimation value is used for identifying the error between the state information and actual state information, and the timestamp is used for identifying the moment when the detection system transmits the data;
    and sending the data to a fusion device.
  2. The method of claim 1, wherein the target comprises a collaborative target;
    the acquiring data from the detection system includes:
    acquiring an actual position of a cooperative target conforming to the motion model at a preset time and an estimated position of the cooperative target at the preset time;
    according to the estimated position and the motion model, determining the estimated position of the cooperative target after m steps, wherein m is a positive integer;
    determining an estimated error corresponding to the m steps according to the actual position and the estimated position after the m steps;
    And obtaining a state process calibration value corresponding to the motion model according to the estimation error.
  3. The method according to claim 2, wherein obtaining the state process calibration corresponding to the motion model according to the estimation error comprises:
    obtaining n-m estimated errors obtained in each cycle of L cycles, wherein n is an integer greater than 1, and L is a positive integer;
    and determining the variance of the L×(n-m) estimation errors obtained through the L cycles as the state process calibration value corresponding to the motion model.
  4. A method according to any one of claims 1 to 3, wherein the motion model comprises any one of:
    a linear motion model, a left motion model, or a right motion model.
  5. The method of any one of claims 1 to 4, wherein the target further comprises a random target;
    the acquiring data from the detection system further comprises:
    acquiring sampling data of the random target;
    and determining the state information according to the sampling data.
  6. The method of claim 5, wherein the acquiring sampling data of the random target comprises:
    transmitting an electromagnetic wave signal to a detection area, the detection area comprising the random target;
    Receiving an echo signal from the detection area, wherein the echo signal is obtained after the electromagnetic wave signal is reflected by the random target;
    and determining the state information according to the echo signals.
  7. The vehicle-road cooperative data processing method is characterized by comprising the following steps of:
    acquiring first data from a road side detection system and second data from a vehicle side detection system, wherein the first data comprises one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimation value and a first timestamp, the first state process calibration value is used for optimizing the first covariance estimation value, the first motion model is used for identifying a first motion state of a target, the first state information is used for identifying a first motion feature of the target, the first covariance estimation value is used for identifying an error between the first state information and first actual state information, and the first timestamp is used for identifying a first moment when the road side detection system transmits the first data; the second data includes one or more of a second state process calibration value, a second motion model, second state information, a second covariance estimate, and a second timestamp, the second state process calibration value is used for optimizing the second covariance estimate, the second motion model is used for identifying a second motion state of a target, the second state information is used for identifying a second motion feature of the target, the second covariance estimate is used for identifying an error between the second state information and second actual state information, and the second timestamp is used for identifying a second time at which the vehicle side detection system transmits the second data;
    And obtaining first fusion data of the target according to the first data and the second data.
  8. The method of claim 7, wherein the obtaining the first fused data of the target from the first data and the second data comprises:
    obtaining a first state predicted value and a first covariance predicted value of the target at a fusion moment according to the first timestamp, the first state process calibration value, the first state information and the first covariance estimated value; obtaining a second state predicted value and a second covariance predicted value of the target at the fusion moment according to the second timestamp, the second state process calibration value, the second state information and the second motion model;
    predicting a third state predicted value and a third covariance predicted value of the target according to second fusion data of a frame before the target and the first motion model; predicting a fourth state predicted value and a fourth covariance predicted value of the target according to the second fusion data of the frame before the target and the second motion model;
    obtaining a first filtering estimation value and a first filtering covariance estimation value according to the first state prediction value, the first covariance prediction value, the third state prediction value and the third covariance prediction value; obtaining a second filtering estimated value and a second filtering covariance estimated value according to the second state predicted value, the second covariance predicted value, the fourth state predicted value and the fourth covariance predicted value;
    Obtaining a first confidence coefficient of the first data at the fusion moment according to the first timestamp, the first state process calibration value, the first motion model and the first covariance estimation value; obtaining a second confidence coefficient of the second data at the fusion time according to the second timestamp, the second state process calibration value, the second motion model and the second covariance estimation value;
    and obtaining the first fusion data according to the first confidence coefficient, the second confidence coefficient, the first filtering estimation value, the first filtering covariance estimation value, the second filtering estimation value and the second filtering covariance estimation value.
  9. The method of claim 8, wherein the obtaining the first fused data based on the first confidence level, the second confidence level, the first filtered estimate, the first filtered covariance estimate, the second filtered estimate, and the second filtered covariance estimate comprises:
    when the first confidence coefficient meets a first preset confidence coefficient and the second confidence coefficient does not meet a second preset confidence coefficient, obtaining the first fusion data according to the first filtering estimated value and the first filtering covariance estimated value;
    When the second confidence coefficient meets a second preset confidence coefficient and the first confidence coefficient does not meet a first preset confidence coefficient, obtaining the first fusion data according to the second filtering estimated value and the second filtering covariance estimated value;
    when the first confidence coefficient meets a first preset confidence coefficient and the second confidence coefficient meets a second preset confidence coefficient, obtaining the first fusion data according to the first filtering estimated value, the first filtering covariance estimated value, the second filtering estimated value and the second filtering covariance estimated value;
    and when the first confidence coefficient does not meet the first preset confidence coefficient and the second confidence coefficient does not meet the second preset confidence coefficient, acquiring the first fusion data according to the second fusion data of the previous frame.
  10. A detection system, comprising:
    at least one processor for obtaining data from the detection system, the data comprising one or more of state process calibration values, motion models, state information, covariance estimates, and timestamps; the state process calibration value is used for optimizing a covariance estimation value, the motion model is used for identifying a motion state of a target, the state information is used for identifying a motion characteristic of the target, the covariance estimation value is used for identifying an error between the state information and actual state information, and the timestamp is used for identifying the moment when the detection system transmits the data;
    And the transceiver is used for sending the data to the fusion device.
  11. The system of claim 10, wherein the target comprises a collaboration target;
    the processor is specifically configured to:
    acquiring an actual position of a cooperative target conforming to the motion model at a preset time and an estimated position of the cooperative target at the preset time;
    according to the estimated position and the motion model, determining the estimated position of the cooperative target after m steps, wherein m is a positive integer;
    determining an estimated error corresponding to the m steps according to the actual position and the estimated position after the m steps;
    and obtaining a state process calibration value corresponding to the motion model according to the estimation error.
  12. The system of claim 11, wherein the processor is specifically configured to:
    obtaining n-m estimated errors obtained in each cycle of L cycles, wherein n is an integer greater than 1, and L is a positive integer;
    and determining the variance of the L×(n-m) estimation errors obtained through the L cycles as the state process calibration value corresponding to the motion model.
  13. The system of any of claims 10 to 12, wherein the motion model comprises any of:
    a linear motion model, a left-turn motion model, or a right-turn motion model.
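The claims do not prescribe how these models are parameterized; one common choice, shown here only as an assumption, is a constant-velocity matrix for linear motion and coordinated-turn matrices for the turning cases, with the sign of the turn rate distinguishing left from right.

```python
import numpy as np

def linear_model(dt):
    """Constant-velocity transition for a state [x, y, vx, vy]."""
    return np.array([[1, 0, dt, 0],
                     [0, 1, 0, dt],
                     [0, 0, 1,  0],
                     [0, 0, 0,  1]], dtype=float)

def turn_model(dt, omega):
    """Coordinated-turn transition; with x east / y north, omega > 0 gives a
    left turn and omega < 0 a right turn (omega must be nonzero)."""
    s, c = np.sin(omega * dt), np.cos(omega * dt)
    return np.array([[1, 0, s / omega,       -(1 - c) / omega],
                     [0, 1, (1 - c) / omega,  s / omega],
                     [0, 0, c,               -s],
                     [0, 0, s,                c]], dtype=float)
```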
  14. The system of any of claims 10 to 13, wherein the targets further comprise random targets;
    the processor is further configured to:
    acquiring sampling data of the random target;
    and determining the state information according to the sampling data.
  15. The system of claim 14, wherein the transceiver is configured to:
    transmitting an electromagnetic wave signal to a detection area, the detection area comprising the random target;
    receiving an echo signal from the detection area, wherein the echo signal is obtained after the electromagnetic wave signal is reflected by the random target;
    the processor is specifically configured to:
    and determining the state information according to the echo signals.
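At its simplest, a random target's range and radial velocity follow from the echo's round-trip delay and Doppler shift. The toy conversion below is only a reading aid; real roadside radar processing (FFTs, CFAR detection, angle estimation) is far more involved and is not detailed in the claims.

```python
C = 299_792_458.0  # speed of light, m/s

def state_from_echo(delay_s, doppler_hz, carrier_hz):
    """Derive a coarse range/radial-velocity measurement from one echo."""
    target_range = C * delay_s / 2.0                       # two-way propagation
    radial_velocity = C * doppler_hz / (2.0 * carrier_hz)  # Doppler-to-velocity conversion
    return target_range, radial_velocity
```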
  16. A detection system comprising means for performing the method of any one of claims 1-6.
  17. A fusion device, comprising:
    a transceiver configured to obtain first data from a roadside detection system and second data from a vehicle-side detection system, the first data including one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimate, and a first timestamp, the first state process calibration value being used to optimize the first covariance estimate, the first motion model being used to identify a first motion state of a target, the first state information being used to identify a first motion feature of the target, the first covariance estimate being used to identify an error between the first state information and first actual state information, the first timestamp being used to identify a first time at which the roadside detection system transmits the first data; the second data includes one or more of a second state process calibration value, a second motion model, second state information, a second covariance estimate, and a second timestamp, the second state process calibration value is used for optimizing the second covariance estimate, the second motion model is used for identifying a second motion state of a target, the second state information is used for identifying a second motion feature of the target, the second covariance estimate is used for identifying an error between the second state information and second actual state information, and the second timestamp is used for identifying a second time at which the vehicle-side detection system transmits the second data;
    and at least one processor for obtaining first fusion data of the target according to the first data and the second data.
  18. The apparatus of claim 17, wherein the processor is specifically configured to:
    obtaining a first state predicted value and a first covariance predicted value of the target at a fusion moment according to the first timestamp, the first state process calibration value, the first state information and the first covariance estimated value; obtaining a second state predicted value and a second covariance predicted value of the target at the fusion moment according to the second timestamp, the second state process calibration value, the second state information and the second motion model;
    predicting a third state predicted value and a third covariance predicted value of the target according to second fusion data of a previous frame of the target and the first motion model; predicting a fourth state predicted value and a fourth covariance predicted value of the target according to the second fusion data of the previous frame of the target;
    obtaining a first filtering estimation value and a first filtering covariance estimation value according to the first state prediction value, the first covariance prediction value, the third state prediction value and the third covariance prediction value; obtaining a second filtering estimated value and a second filtering covariance estimated value according to the second state predicted value, the second covariance predicted value, the fourth state predicted value and the fourth covariance predicted value;
    obtaining a first confidence coefficient of the first data at the fusion moment according to the first timestamp, the first state process calibration value, the first motion model and the first covariance estimation value; obtaining a second confidence coefficient of the second data at the fusion moment according to the second timestamp, the second state process calibration value, the second motion model and the second covariance estimation value;
    and obtaining the first fusion data according to the first confidence coefficient, the second confidence coefficient, the first filtering estimation value, the first filtering covariance estimation value, the second filtering estimation value and the second filtering covariance estimation value.
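Claim 18 describes a per-source predict / filter / confidence pipeline whose two outputs are then merged as in claim 19. The fragment below sketches one source (e.g. the road-side track) under heavy assumptions: the state process calibration value is treated as extra process noise, the filter step is a Kalman-style combination of the source prediction with the prediction from the previous frame's fusion data, and the confidence is a simple function of the propagated covariance; none of these formulas are taken from the patent.

```python
import numpy as np

def predict_to_fusion_time(x, P, F, Q_calibrated, steps):
    """Propagate reported state/covariance to the fusion moment, inflating the
    covariance with the state process calibration value at each step."""
    for _ in range(steps):
        x = F @ x
        P = F @ P @ F.T + Q_calibrated
    return x, P

def filter_update(x_meas, P_meas, x_track, P_track):
    """Kalman-style combination of the source prediction (x_meas, P_meas) with
    the prediction from the previous frame's fusion data (x_track, P_track)."""
    K = P_track @ np.linalg.inv(P_track + P_meas)
    x = x_track + K @ (x_meas - x_track)
    P = (np.eye(len(x)) - K) @ P_track
    return x, P

def confidence(P_at_fusion_time):
    """One possible scalar confidence: decreases as the propagated covariance grows."""
    return 1.0 / (1.0 + float(np.trace(P_at_fusion_time)))
```

The vehicle-side data would be handled symmetrically, and the two filtered estimates then passed to a selection step such as select_fusion in the sketch after claim 9.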
  19. The apparatus of claim 18, wherein the processor is configured to:
    when the first confidence coefficient meets a first preset confidence coefficient and the second confidence coefficient does not meet a second preset confidence coefficient, obtaining the first fusion data according to the first filtering estimated value and the first filtering covariance estimated value;
    when the second confidence coefficient meets a second preset confidence coefficient and the first confidence coefficient does not meet a first preset confidence coefficient, obtaining the first fusion data according to the second filtering estimated value and the second filtering covariance estimated value;
    when the first confidence coefficient meets a first preset confidence coefficient and the second confidence coefficient meets a second preset confidence coefficient, obtaining the first fusion data according to the first filtering estimated value, the first filtering covariance estimated value, the second filtering estimated value and the second filtering covariance estimated value;
    and when the first confidence coefficient does not meet the first preset confidence coefficient and the second confidence coefficient does not meet the second preset confidence coefficient, acquiring the first fusion data according to the second fusion data of the previous frame.
  20. A fusion device, comprising means for performing the method of any one of claims 7 to 9.
  21. A vehicle comprising a detection system according to any one of claims 10 to 16; and/or comprising a fusion device according to any one of claims 17 to 20.
  22. A computer-readable storage medium comprising computer instructions which, when run on a processor, cause a detection system to perform the method of any one of claims 1 to 6, or cause a fusion device to perform the method of any one of claims 7 to 9.
  23. A computer program product which, when run on a processor, causes a detection system to perform the method of any one of claims 1 to 6, or causes a fusion device to perform the method of any one of claims 7 to 9.
CN202180099647.2A 2021-06-22 2021-06-22 Vehicle-road cooperative communication and data processing method, detection system and fusion device Pending CN117529935A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/101660 WO2022266863A1 (en) 2021-06-22 2021-06-22 Vehicle-road coordination communication method, data processing method, detection system and fusion apparatus

Publications (1)

Publication Number Publication Date
CN117529935A true CN117529935A (en) 2024-02-06

Family

ID=84543887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180099647.2A Pending CN117529935A (en) 2021-06-22 2021-06-22 Vehicle-road cooperative communication and data processing method, detection system and fusion device

Country Status (2)

Country Link
CN (1) CN117529935A (en)
WO (1) WO2022266863A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8229663B2 (en) * 2009-02-03 2012-07-24 GM Global Technology Operations LLC Combined vehicle-to-vehicle communication and object detection sensing
US10281925B2 (en) * 2016-10-21 2019-05-07 Toyota Jidosha Kabushiki Kaisha Estimate of geographical position of a vehicle using wireless vehicle data
DE102017206446A1 (en) * 2017-04-13 2018-10-18 Knorr-Bremse Systeme für Schienenfahrzeuge GmbH Fusion of infrastructure-related data, in particular infrastructure-related data for rail vehicles
CN110430079B (en) * 2019-08-05 2021-03-16 腾讯科技(深圳)有限公司 Vehicle-road cooperation system

Also Published As

Publication number Publication date
WO2022266863A1 (en) 2022-12-29

Similar Documents

Publication Publication Date Title
US11113966B2 (en) Vehicular information systems and methods
US10317901B2 (en) Low-level sensor fusion
US10317522B2 (en) Detecting long objects by sensor fusion
WO2021077287A1 (en) Detection method, detection device, and storage medium
US20220178718A1 (en) Sensor fusion for dynamic mapping
US20220089153A1 (en) Scenario identification in autonomous driving environments
US20210070311A1 (en) Method and apparatus for multi vehicle sensor suite diagnosis
US20200118285A1 (en) Device and method for determining height information of an object in an environment of a vehicle
CN113129382A (en) Method and device for determining coordinate conversion parameters
US20210398425A1 (en) Vehicular information systems and methods
CN114968187A (en) Platform for perception system development of an autopilot system
CN109286785B (en) Environment information sharing system and method
CN117529935A (en) Vehicle-road cooperative communication and data processing method, detection system and fusion device
US20230103178A1 (en) Systems and methods for onboard analysis of sensor data for sensor fusion
CN114968189A (en) Platform for perception system development of an autopilot system
Kidambi et al. Sensitivity of automated vehicle Operational Safety Assessment (OSA) metrics to measurement and parameter uncertainty
Higuchi et al. Monitoring live parking availability by vision-based vehicular crowdsensing
JP7294323B2 (en) Moving body management device, moving body management system, moving body management method, and computer program
EP3936896A1 (en) Distance measurement method and device based on detection signal
US11682140B1 (en) Methods and apparatus for calibrating stereo cameras using a time-of-flight sensor
US10614711B2 (en) Concept for monitoring a parking facility for motor vehicles
WO2023123325A1 (en) State estimation method and device
US20230391372A1 (en) Method of detecting moving objects, device, electronic device, and storage medium
WO2019151109A1 (en) Road surface information acquisition method
CN114839624A (en) Parking space state detection method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination