WO2022266863A1 - Vehicle-road coordination communication method, data processing method, detection system and fusion apparatus - Google Patents

Vehicle-road coordination communication method, data processing method, detection system and fusion apparatus

Info

Publication number
WO2022266863A1
Authority
WO
WIPO (PCT)
Prior art keywords
value
data
covariance
target
state
Prior art date
Application number
PCT/CN2021/101660
Other languages
English (en)
Chinese (zh)
Inventor
胡滨
勾鹏琪
花文健
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2021/101660 priority Critical patent/WO2022266863A1/fr
Priority to CN202180099647.2A priority patent/CN117529935A/zh
Publication of WO2022266863A1 publication Critical patent/WO2022266863A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor

Definitions

  • The present application relates to the technical field of vehicle-road coordinated communication, and in particular to a vehicle-road coordination communication method, a data processing method, a detection system, and a fusion device.
  • Intelligent networked vehicles refer to the specific application of smart vehicles in the Internet of Vehicles environment. They are usually equipped with sensing devices such as cameras, lidars, and inertial measurement units (IMUs) to sense environmental information inside and outside the vehicle, and can realize information interaction between the vehicle side and the roadside based on communication technology.
  • Vehicle-side perception relies on on-board sensors perceiving the area around the vehicle body, so its sensing area is limited; roadside perception relies on multi-station, multi-sensor perception, which can obtain a relatively broad observation space.
  • Through vehicle-road coordination, the data sensed by the roadside and the data sensed by the vehicle side can be fused, thereby improving the vehicle's ability to perceive its surrounding environment.
  • Vehicle-road coordination refers to using technologies such as wireless communication and the new-generation Internet to implement dynamic, real-time vehicle-vehicle and vehicle-road information interaction, carry out active vehicle safety control and cooperative road management, and fully realize effective coordination of people, vehicles, and roads, thereby ensuring traffic safety, improving traffic efficiency, and forming a safe, efficient, and environmentally friendly road traffic system.
  • In the data fusion process of vehicle-road coordination, the perception data of the roadside and of the vehicle side need to be sent to the fusion device.
  • When the fusion device is located on the vehicle side, the roadside performs scheduling and buffering during data processing and transmits its sensing data to the fusion device over the air interface. The roadside sensing data therefore suffers from delay and jitter when sent, so it cannot be delivered to the fusion device continuously at the agreed time. As a result, during a data fusion period, the sensing data provided by the roadside and the sensing data provided by the vehicle side may not belong to the same fusion cycle, making the data fused by the fusion device inaccurate.
  • The present application provides a vehicle-road coordination communication method, a data processing method, a detection system, and a fusion device, which are used to improve, as much as possible, the accuracy of the data fused by the fusion device.
  • The present application provides a communication method for vehicle-road coordination. The method includes acquiring data from the detection system, where the data includes one or more of a state process calibration value, a motion model, state information, a covariance estimated value, and a time stamp, and sending the data to the fusion device. The state process calibration value is used to optimize the covariance estimated value, the motion model is used to identify the motion state of a target, the state information is used to identify the motion characteristics of the target, the covariance estimated value is used to identify the error between the state information and the actual state information, and the time stamp is used to identify the moment when the detection system sends the data.
  • In this way, the fusion device can optimize the first covariance estimated value based on the first state process calibration value, so that the measurement error of the detection system can be characterized more accurately. Further, the fusion device can determine a first confidence level according to the first state process calibration value and the first motion model, which further improves the accuracy of the first data determined by the fusion device and the accuracy of the first fusion data output by the fusion device.
  • The target includes a cooperative target. The method includes obtaining the actual position, at a preset moment, of the cooperative target conforming to the motion model and the estimated position of the cooperative target at the preset moment; determining, according to the estimated position and the motion model, the estimated position of the cooperative target after m steps, where m is a positive integer; determining the estimation error corresponding to the m steps according to the actual position and the estimated position after m steps; and obtaining the state process calibration value corresponding to the motion model according to the estimation error.
  • Obtaining the m-step state process calibration value helps to more accurately characterize the measurement error of the detection system, thereby helping to improve the accuracy of the first data.
  • The method may include obtaining the n−m estimation errors obtained in each of L cycles, where n is an integer greater than 1 and L is a positive integer, and determining the variance of the L×(n−m) estimation errors as the state process calibration value corresponding to the motion model, as sketched below.
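The following Python sketch only illustrates the variance computation described above, assuming scalar position errors and the n, m, and L quantities named in this aspect; the function name and the example numbers are hypothetical, not taken from the application.

```python
import numpy as np

def state_process_calibration(errors_per_cycle):
    """Illustrative sketch: derive a state process calibration value for one
    motion model from the m-step estimation errors collected over L cycles.

    errors_per_cycle: list of L arrays, each holding the n-m estimation
    errors (actual position minus estimated position after m steps) of one
    cycle. Returns the variance of all L*(n-m) errors, used here as the
    calibration value for that motion model."""
    all_errors = np.concatenate(errors_per_cycle)  # L*(n-m) errors in total
    return float(np.var(all_errors))

# Example with made-up numbers: L=2 cycles, each contributing n-m=4 errors.
q_linear = state_process_calibration([np.array([0.12, -0.05, 0.08, 0.01]),
                                      np.array([0.03, 0.10, -0.07, 0.04])])
```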
  • the motion model includes any one of a linear motion model, a left-turn motion model, or a right-turn motion model.
  • the target further includes a random target
  • the method further includes acquiring sampling data of the random target; and determining state information according to the sampling data.
  • The method may include transmitting an electromagnetic wave signal to a detection area, receiving an echo signal from the detection area, and determining the state information according to the echo signal; the detection area includes the random target, and the echo signal is obtained after the electromagnetic wave signal is reflected by the random target.
  • The present application provides a data processing method for vehicle-road coordination. The method includes acquiring first data from the roadside detection system and second data from the vehicle-side detection system, and obtaining first fusion data of a random target according to the first data and the second data.
  • The first data includes one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimated value, and a first time stamp. The first state process calibration value is used to optimize the first covariance estimated value, the first motion model is used to identify a first motion state of the target, the first state information is used to identify a first motion feature of the target, the first covariance estimated value is used to identify the error between the first state information and the first actual state information, and the first time stamp is used to identify the first moment when the roadside detection system sends the first data.
  • The second data includes one or more of a second state process calibration value, a second motion model, second state information, a second covariance estimated value, and a second time stamp. The second state process calibration value is used to optimize the second covariance estimated value, the second motion model is used to identify a second motion state of the target, the second state information is used to identify a second motion feature of the target, the second covariance estimated value is used to identify the error between the second state information and the second actual state information, and the second time stamp is used to identify the second moment when the vehicle-side detection system sends the second data.
  • In this way, the fusion device receives the first state process calibration value and the first motion model from the roadside fusion module, and the first state process calibration value can be used to optimize the first covariance estimated value so that the measurement error is characterized more accurately. The fusion device also receives the second state process calibration value and the second motion model from the vehicle-side fusion module, and the second state process calibration value can be used to optimize the second covariance estimated value, again characterizing the measurement error more accurately. This helps to improve the accuracy of the first fusion data obtained for the random target.
  • The first state predicted value and the first covariance predicted value of the random target at the fusion moment may be obtained according to the first time stamp, the first state process calibration value, the first state information, and the first covariance estimated value; the second state predicted value and the second covariance predicted value of the random target at the fusion moment may be obtained according to the second time stamp, the second state process calibration value, the second state information, and the second motion model; the third state predicted value and the third covariance predicted value of the target are predicted according to the first motion model and the second fusion data of the previous frame of the target; the fourth state predicted value and the fourth covariance predicted value of the target are predicted according to the second motion model and the second fusion data of the previous frame of the random target; the first filtered estimated value and the first filtered covariance estimated value are obtained according to the first state predicted value, the first covariance predicted value, the third state predicted value, and the third covariance predicted value; and the second filtered estimated value and the second filtered covariance estimated value are obtained according to the second state predicted value, the second covariance predicted value, the fourth state predicted value, and the fourth covariance predicted value. A simplified sketch of predicting a report to the fusion moment is given below.
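As a rough illustration of aligning one time-stamped report to the fusion moment (not the application's formulas): the sketch below assumes a scalar position/velocity state, a constant-velocity transition, and that the state process calibration value is added to the predicted covariance; all names are hypothetical.

```python
import numpy as np

def predict_to_fusion_time(state, cov, timestamp, fusion_time, q_calib):
    """Propagate a reported state [position, velocity] and its 2x2 covariance
    from the report time stamp to the fusion moment under an assumed
    constant-velocity model; q_calib inflates the predicted covariance to
    account for process noise."""
    dt = fusion_time - timestamp
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])                      # constant-velocity transition
    state_pred = F @ state                          # predicted state at fusion time
    cov_pred = F @ cov @ F.T + q_calib * np.eye(2)  # predicted covariance
    return state_pred, cov_pred

# e.g. a roadside report stamped 0.08 s before the fusion moment
x_pred, P_pred = predict_to_fusion_time(np.array([12.5, 4.0]),
                                        np.eye(2) * 0.2, 10.00, 10.08, 0.01)
```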
  • When the first confidence level satisfies a first preset confidence level and the second confidence level does not satisfy a second preset confidence level, the first fusion data is obtained according to the first filtered estimated value and the first filtered covariance estimated value; when the second confidence level satisfies the second preset confidence level and the first confidence level does not satisfy the first preset confidence level, the first fusion data is obtained according to the second filtered estimated value and the second filtered covariance estimated value; when the first confidence level satisfies the first preset confidence level and the second confidence level satisfies the second preset confidence level, the first fusion data is obtained according to the first filtered estimated value, the first filtered covariance estimated value, the second filtered estimated value, and the second filtered covariance estimated value; and when the first confidence level does not satisfy the first preset confidence level and the second confidence level does not satisfy the second preset confidence level, the first fusion data is obtained according to the second fusion data of the previous frame. An illustrative decision sketch follows these cases.
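The four cases above can be pictured with the following sketch; the comparison direction of the confidence checks and the inverse-covariance weighting used when both branches are trusted are assumptions for illustration, not the application's definitions.

```python
def select_fusion_output(est1, cov1, conf1, est2, cov2, conf2,
                         thr1, thr2, prev_fusion):
    """Illustrative decision logic over the four confidence cases
    (scalar estimates for simplicity)."""
    ok1, ok2 = conf1 >= thr1, conf2 >= thr2
    if ok1 and not ok2:
        return est1                        # only the first branch is trusted
    if ok2 and not ok1:
        return est2                        # only the second branch is trusted
    if ok1 and ok2:
        w1, w2 = 1.0 / cov1, 1.0 / cov2    # weight each estimate by its precision
        return (w1 * est1 + w2 * est2) / (w1 + w2)
    return prev_fusion                     # neither trusted: reuse the previous frame
```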
  • the present application provides a detection system, which is used to implement the first aspect or any one of the methods in the first aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • The detection system may be a roadside detection system or a vehicle-side detection system in a vehicle-road cooperative communication system, or a module that can be used in a roadside detection system or a vehicle-side detection system, such as a chip, a chip system, or a circuit.
  • the detection system may include: a transceiver and at least one processor.
  • the processor may be configured to support the detection system to perform the corresponding functions shown above, and the transceiver is used to support communication between the detection system and the fusion device or other detection systems.
  • the transceiver may be an independent receiver, an independent transmitter, a transceiver integrating transceiver functions, or an interface circuit.
  • the detection system may further include a memory, which may be coupled with the processor, and store necessary program instructions and data of the detection system.
  • The processor is configured to obtain data from the detection system, where the data includes one or more of the state process calibration value, the motion model, the state information, the covariance estimated value, and the time stamp; the state process calibration value is used to optimize the covariance estimated value, the motion model is used to identify the motion state of the target, the state information is used to identify the motion characteristics of the target, the covariance estimated value is used to identify the error between the state information and the actual state information, and the time stamp is used to identify the moment when the detection system sends the data.
  • The transceiver is configured to send the data to the fusion device.
  • the target includes a cooperative target;
  • The processor is specifically configured to: obtain the actual position, at a preset moment, of the cooperative target conforming to the motion model, and the estimated position of the cooperative target at the preset moment; determine, according to the estimated position and the motion model, the estimated position of the cooperative target after m steps, where m is a positive integer; determine the estimation error corresponding to the m steps according to the actual position and the estimated position after m steps; and obtain the state process calibration value corresponding to the motion model according to the estimation error.
  • The processor is specifically configured to: acquire the n−m estimation errors obtained in each of the L cycles, where n is an integer greater than 1 and L is a positive integer; and determine the variance of the L×(n−m) estimation errors as the state process calibration value corresponding to the motion model.
  • the motion model includes any one of a linear motion model, a left-turn motion model, or a right-turn motion model.
  • the target further includes a random target; the processor is further configured to: acquire sampling data of the random target; and determine state information according to the sampling data.
  • the transceiver is specifically used to: transmit electromagnetic wave signals to the detection area, and the detection area includes random targets; receive echo signals from the detection area, and the echo signals are obtained after the electromagnetic wave signals are reflected by random targets ;
  • the processor is specifically configured to: determine the state information according to the echo signal.
  • the present application provides a fusion device, which is used to implement the second aspect or any one of the methods in the second aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the fusion device may be a fusion device in a vehicle-road cooperative communication system, or a module that can be used in the fusion device, such as a chip or a chip system or a circuit.
  • the fusion device may include: a transceiver and at least one processor.
  • the processor may be configured to support the fusion device to perform the corresponding functions shown above, and the transceiver is used to support communication between the fusion device and a detection system (such as a roadside detection system or a vehicle side detection system).
  • the transceiver may be an independent receiver, an independent transmitter, a transceiver integrating transceiver functions, or an interface circuit.
  • the fusion device may further include a memory, which may be coupled with the processor, and store necessary program instructions and data of the fusion device.
  • the transceiver is used to obtain the first data from the roadside detection system and the second data from the vehicle side detection system
  • The first data includes one or more of the first state process calibration value, the first motion model, the first state information, the first covariance estimated value, and the first time stamp. The first state process calibration value is used to optimize the first covariance estimated value, the first motion model is used to identify the first motion state of the target, the first state information is used to identify the first motion feature of the target, the first covariance estimated value is used to identify the error between the first state information and the first actual state information, and the first time stamp is used to identify the first moment when the roadside detection system sends the first data.
  • The second data includes one or more of the second state process calibration value, the second motion model, the second state information, the second covariance estimated value, and the second time stamp. The second state process calibration value is used to optimize the second covariance estimated value, the second motion model is used to identify the second motion state of the target, the second state information is used to identify the second motion feature of the target, the second covariance estimated value is used to identify the error between the second state information and the second actual state information, and the second time stamp is used to identify the second moment when the vehicle-side detection system sends the second data.
  • the processor is used to obtain the first fusion data of the random target according to the first data and the second data .
  • The processor is configured to: obtain the first state predicted value and the first covariance predicted value of the random target at the fusion moment according to the first time stamp, the first state process calibration value, the first state information, and the first covariance estimated value; obtain the second state predicted value and the second covariance predicted value of the random target at the fusion moment according to the second time stamp, the second state process calibration value, the second state information, and the second motion model; predict the third state predicted value and the third covariance predicted value of the target according to the first motion model and the second fusion data of the previous frame of the target; predict the fourth state predicted value and the fourth covariance predicted value of the target according to the second motion model and the second fusion data of the previous frame of the random target; obtain the first filtered estimated value and the first filtered covariance estimated value according to the first state predicted value, the first covariance predicted value, the third state predicted value, and the third covariance predicted value; and obtain the second filtered estimated value and the second filtered covariance estimated value according to the second state predicted value, the second covariance predicted value, the fourth state predicted value, and the fourth covariance predicted value.
  • The processor is specifically configured to: when the first confidence level satisfies the first preset confidence level and the second confidence level does not satisfy the second preset confidence level, obtain the first fusion data according to the first filtered estimated value and the first filtered covariance estimated value; when the second confidence level satisfies the second preset confidence level and the first confidence level does not satisfy the first preset confidence level, obtain the first fusion data according to the second filtered estimated value and the second filtered covariance estimated value; when the first confidence level satisfies the first preset confidence level and the second confidence level satisfies the second preset confidence level, obtain the first fusion data according to the first filtered estimated value, the first filtered covariance estimated value, the second filtered estimated value, and the second filtered covariance estimated value; and when the first confidence level does not satisfy the first preset confidence level and the second confidence level does not satisfy the second preset confidence level, obtain the first fusion data according to the second fusion data of the previous frame.
  • the present application provides a detection system, which is used to implement the first aspect or any one of the methods in the first aspect, and includes corresponding functional modules, respectively used to implement the steps in the above method.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • The detection system can be a roadside detection system or a vehicle-side detection system, and the detection system can include a processing module and a transceiver module; these modules can implement the corresponding functions of the roadside detection system or the vehicle-side detection system in the above method examples.
  • the present application provides a fusion device, which is used to implement the second aspect or any one of the methods in the second aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the fusion device may include a processing module and a transceiver module, and these modules may perform corresponding functions of the fusion device in the above method example.
  • the present application provides a communication system, which includes a detection system (such as a roadside detection system and a vehicle side detection system) and a fusion device.
  • a fusion device may be used to implement the second aspect or any one of the methods in the second aspect.
  • The present application provides a vehicle, where the vehicle includes a vehicle-side detection system and/or a fusion device.
  • The vehicle-side detection system may be used to implement the first aspect or any one of the methods in the first aspect.
  • the fusion device may be used to implement the second aspect or any one of the methods in the second aspect.
  • The present application provides a computer-readable storage medium in which a computer program or instructions are stored; when the computer program or instructions are executed by a processor, the detection system is caused to perform the method in the above first aspect or in any possible implementation of the first aspect.
  • The present application provides a computer program product, where the computer program product includes a computer program or instructions; when the computer program or instructions are executed by a processor, the detection system is caused to perform the method in the above first aspect or in any possible implementation of the first aspect.
  • The present application provides a chip, including a processor, where the processor is coupled with a memory and is configured to execute a computer program or instructions stored in the memory, so that the chip implements the method in any one of the first aspect or the second aspect, or in any possible implementation of either aspect.
  • Figure 1a is a schematic diagram of a communication system architecture provided by the present application.
  • Figure 1b is a schematic diagram of another communication system architecture provided by the present application.
  • FIG. 2 is a schematic diagram of the principle of a radar detection target provided by the present application.
  • FIG. 3 is a schematic flow chart of a communication method for vehicle-road coordination provided by the present application.
  • Fig. 4 is a schematic flow chart of a method for obtaining a first state process calibration value and a first motion model provided by the present application
  • Fig. 5 is a schematic flowchart of a method for a roadside fusion module to obtain the first actual positions of the cooperative target at n different first moments provided by the present application;
  • Fig. 6a is a schematic flowchart of a method for obtaining first state information by a roadside detection system provided by the present application
  • Fig. 6b is a schematic flowchart of another method for acquiring first state information by a roadside detection system provided by the present application.
  • Fig. 7a is a schematic flowchart of another method for obtaining the first status information of the target by the roadside detection system provided by the present application;
  • Fig. 7b is a schematic flowchart of another method for obtaining the first status information of the target by the roadside detection system provided by the present application;
  • FIG. 8 is a schematic flow chart of a data processing method for vehicle-road coordination provided by the present application.
  • FIG. 9 is a schematic flowchart of a method for determining the first fusion data of a random target provided by the present application.
  • FIG. 10 is a schematic flowchart of a method for obtaining first fusion data based on the first confidence degree and the second confidence degree provided by the present application;
  • FIG. 11 is a schematic structural diagram of a detection system provided by the present application.
  • Figure 12 is a schematic structural view of a fusion device provided by the present application.
  • FIG. 13 is a schematic structural diagram of a detection system provided by the present application.
  • Fig. 14 is a schematic structural diagram of a fusion device provided by the present application.
  • A cooperative target usually means a detected target whose real position information can be obtained through other cooperative channels in addition to direct sensor measurement.
  • For example, the position of a certain fixed target can be obtained in advance; as another example, the cooperative target may report its current position information wirelessly, or its current position information can be obtained by measurement with a measuring instrument.
  • A non-cooperative target usually refers to a detected target whose real position information can only be obtained by direct sensor measurement, with no other technical means available.
  • Covariance is used to measure the overall error of two variables.
  • the two variables may be, for example, a predicted value and an actual value.
  • Covariance is usually expressed as a matrix and is typically used as an intermediate parameter.
  • A Kalman filter is a highly efficient recursive filter (or autoregressive filter) that can estimate the state of a dynamic system from a series of incomplete and random measurements. Kalman filtering uses the measurement values taken at different times, considering their joint distribution at each time, to generate an estimate of the unknown variables, so it is more accurate than estimation based only on a single measurement.
  • Kalman filtering is essentially a data fusion algorithm: it fuses data that has the same measurement purpose but comes from different sensors and may have different units, to obtain a more accurate measurement value, as in the sketch below.
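A minimal scalar sketch of this fusion idea, assuming two measurements of the same quantity with known variances (the numbers are made up):

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Combine two noisy measurements of the same quantity by inverse-variance
    weighting; the fused variance is never larger than either input."""
    k = var_a / (var_a + var_b)          # gain pulling the estimate toward measurement b
    mean = mean_a + k * (mean_b - mean_a)
    var = (1.0 - k) * var_a
    return mean, var

# e.g. a radar reports 10.2 m (variance 0.5) and a camera 9.8 m (variance 0.3)
fused_mean, fused_var = fuse(10.2, 0.5, 9.8, 0.3)   # -> about 9.95 m, variance ~0.19
```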
  • Image fusion is an image processing technology in which image data about the same target collected through multi-source channels is processed and computed with specific algorithms to extract as much beneficial information from each channel as possible and finally fuse it into a high-quality image (for example, in terms of brightness, clarity, and color); the fused image is more accurate than the original images.
  • FIG. 1 a is a schematic diagram of a possible communication system architecture provided in this application.
  • the communication system may include a vehicle side detection system and a roadside detection system, and the vehicle side detection system and the roadside detection system may communicate through a sidelink (SL) air interface or a Uu air interface.
  • the vehicle side detection system can be installed on vehicles, including but not limited to unmanned vehicles, smart vehicles, electric vehicles, or digital vehicles.
  • the roadside detection system can be installed on the roadside infrastructure, which includes but not limited to traffic lights, traffic cameras, or roadside units (RSU), etc.
  • Figure 1a uses traffic lights as an example of the roadside infrastructure.
  • The vehicle-side detection system can obtain measurement information such as latitude and longitude, speed, orientation, and the distance of surrounding objects in real time or periodically, thereby realizing assisted driving or automatic driving of the vehicle.
  • the latitude and longitude can be used to determine the position of the vehicle, or the speed and orientation can be used to determine the driving direction and destination of the vehicle in the future, or the distance of surrounding objects can be used to determine the number and density of obstacles around the vehicle.
  • the vehicle-side detection system may include a vehicle-mounted perception module and a vehicle-mounted fusion module, as shown in FIG. 1b.
  • the on-vehicle sensing module may be a sensor (the function of the sensor may refer to the following introduction) arranged around the body of the vehicle (for example, the front left, front right, left, right, rear left, rear right, etc. of the vehicle).
  • the location where the vehicle perception module is installed in the vehicle is not limited.
  • The on-vehicle fusion module can be, for example, a processor in the vehicle, a domain controller in the vehicle, an electronic control unit (ECU) in the vehicle, or another chip installed in the vehicle. The ECU can also be called a "driving computer", "vehicle computer", "vehicle-specific microcomputer controller" or "lower computer", and is one of the core components of the vehicle.
  • the roadside detection system may include a roadside perception module and a roadside fusion module, as shown in FIG. 1b.
  • the roadside perception module can be used to collect obstacle information, pedestrian information, signal light information, vehicle flow information and traffic sign information in real time or periodically.
  • the roadside sensing module may be, for example, a sensor.
  • the roadside fusion module can be, for example, a chip or a processor.
  • the communication system may also include a cloud server.
  • a cloud server can be a single server or a server cluster composed of multiple servers.
  • The cloud server may also be called the cloud, a cloud server, a cloud controller, or an Internet of Vehicles server. It can also be understood that a cloud server is a general term for devices with data processing capabilities, such as physical devices like hosts or processors, virtual devices like virtual machines or containers, and chips or integrated circuits.
  • The above-mentioned communication system may be an intelligent vehicle infrastructure cooperative system (IVICS), or vehicle-infrastructure cooperative system for short.
  • the above-mentioned communication system can be applied in areas such as unmanned driving, automatic driving, assisted driving, intelligent driving, connected vehicles, surveying and mapping, or security monitoring.
  • The above application scenarios are just examples; the method can also be applied to various other scenarios, for example, to automated guided vehicle (AGV) scenarios. An AGV is a transport vehicle equipped with an automatic navigation device, such as an electromagnetic or optical device, that can drive along a specified navigation path and has safety protection and various transfer functions. The vehicle-side detection system can be installed on the AGV, and the roadside detection system can be installed on the roadside equipment in the AGV scenario.
  • the fusion device may be installed on a vehicle, or a processor in the vehicle may serve as the fusion device, or an ECU in the vehicle may serve as the fusion device, or a domain controller in the vehicle may serve as the fusion device.
  • the fusion device can receive the information transmitted from the roadside detection system through the on board unit (OBU) in the vehicle.
  • OBU refers to a communication device using dedicated short range communication (DSRC) technology, which can be used for communication between the vehicle and the outside world.
  • the fusion device can also be installed on the roadside infrastructure.
  • Roadside infrastructure can communicate with vehicles through vehicle to everything (V2X).
  • The fusion device and the roadside infrastructure may communicate through a controller area network (CAN) bus, Ethernet, or a wireless connection.
  • the fusion device may also be installed on a cloud server, or the cloud server may serve as the fusion device.
  • the cloud server and the roadside detection system can communicate wirelessly, and the cloud server and the vehicle side detection system can also communicate wirelessly.
  • Fig. 1b is an example in which the fusion device is installed on the vehicle side.
  • Sensors can be divided into two categories according to their sensing methods, namely passive sensing sensors and active sensing sensors.
  • passive sensing sensors mainly rely on the radiation information of the external environment.
  • Active sensing sensors are used to sense the environment by actively emitting energy waves.
  • Passive sensing sensors and active sensing sensors are introduced in detail as follows.
  • the passive sensing sensor may be, for example, a camera (or called a camera or a video camera), and the accuracy of the camera sensing results mainly depends on image processing and classification algorithms.
  • the camera includes any camera (for example, a static camera, a video camera, etc.) for acquiring images of the environment in which the vehicle is located.
  • the camera may be configured to detect visible light, referred to as a visible light camera.
  • Visible light cameras use charge-coupled devices (CCD) or complementary metal-oxide semiconductors (CMOS) to obtain images corresponding to visible light.
  • the camera may also be configured to detect light from other parts of the spectrum, such as infrared light, and may be referred to as an infrared camera.
  • An infrared camera can also use a CCD or CMOS sensor, with an optical filter that only allows light in the color wavelength band and a set infrared wavelength band to pass.
  • the active perception sensor may be radar.
  • the radar can sense the fan-shaped area shown in the solid line box, and the fan-shaped area can be the radar sensing area (or called the radar detection area).
  • The radar transmits electromagnetic wave signals through its antenna, receives the echo signals reflected from targets, amplifies and down-converts the echo signals, and thereby obtains information such as the relative distance, relative speed, and angle between the vehicle and the target, as illustrated below.
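As a generic illustration of the relations involved (the application does not specify the radar waveform or processing chain; the constants and numbers below are only examples):

```python
C = 299_792_458.0  # speed of light, m/s

def target_range(round_trip_delay_s):
    """Range from the round-trip delay of the echo: R = c * tau / 2."""
    return C * round_trip_delay_s / 2.0

def radial_velocity(doppler_shift_hz, carrier_freq_hz):
    """Relative radial speed from the Doppler shift: v = c * f_d / (2 * f_c)."""
    return C * doppler_shift_hz / (2.0 * carrier_freq_hz)

# e.g. a 1 microsecond round trip is ~150 m; a 3.4 kHz shift at 77 GHz is ~6.6 m/s
r = target_range(1e-6)
v = radial_velocity(3.4e3, 77e9)
```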
  • Since the roadside data transmitted by the roadside detection system and the vehicle-side data from the vehicle detection system that are being fused may not belong to the same period, the fused data may be distorted.
  • the present application provides a communication method for vehicle-road coordination.
  • the communication method of the vehicle-road coordination can improve the accuracy of the data from the roadside detection system and the data after the fusion of the data from the vehicle side detection system.
  • the communication method can be applied to the communication system shown in FIG. 1a above, and the method can be executed by the above-mentioned roadside detection system, or can also be executed by the above-mentioned vehicle-side detection system.
  • FIG. 3 is a schematic flow chart of a communication method for vehicle-infrastructure coordination provided by the present application.
  • the method is implemented by a roadside detection system as an example. The method includes the following steps:
  • Step 301 the roadside detection system acquires first data.
  • the first data includes one or more of the first state process calibration value, the first motion model, the first state information, the first estimated covariance value, and the first time stamp.
  • The first state process calibration value is used to optimize the first covariance estimated value (that is, the fusion device can optimize the first covariance estimated value based on the first state process calibration value during the data fusion process), so as to more accurately characterize the measurement error of the detection system; the first motion model is used to identify the motion state of the cooperative target.
  • the first state process calibration value and the first motion model may be pre-acquired or pre-stored by the roadside detection system.
  • a possible implementation method for the roadside detection system to obtain the calibration value of the first state process and the first motion model can refer to the introduction in FIG. 4 below, and will not be repeated here.
  • the first state information is used to identify the motion characteristics of the random target.
  • the first state information may include a first position and/or a first velocity of the random target, and the like.
  • a possible implementation manner of acquiring the first state information here refer to the introduction of FIG. 5 below, and details are not repeated here.
  • the first covariance estimate is used to identify a statistical error between the first state information and the first actual state information.
  • the initial state of the roadside detection system may be preset with a first initial state information.
  • the estimated value of the first covariance usually converges continuously with the filtering process, and when it converges to a certain value, it can be considered that the first actual state information is the first state information.
  • the first time stamp is used to identify the moment when the roadside detection system sends the first data. It can also be understood that the first time stamp is a time stamp stamped when the roadside detection system sends the first data.
  • Step 302 the roadside detection system sends the first data to the fusion device.
  • Correspondingly, the fusion device may receive the first data from the roadside detection system. For example, the fusion device can receive the first data from the roadside detection system through the OBU in the vehicle, based on V2X communication between the roadside detection system and the vehicle.
  • the roadside detection system can send the first data to the cloud server.
  • The first state process calibration value and the first motion model may be sent by the roadside detection system to the fusion device during the process of establishing a connection between the roadside detection system and the fusion device; or when the roadside detection system sends the first state information, the first covariance estimated value, and the first time stamp to the fusion device for the first time; or before the roadside detection system sends the first state information, the first covariance estimated value, and the first time stamp for the first time. This is not limited in this application.
  • Based on the above solution, the fusion device can optimize the first covariance estimated value based on the first state process calibration value, so as to more accurately characterize the measurement error. Further, the fusion device can determine the first confidence level according to the first state process calibration value and the first motion model, which further improves the accuracy of the first data determined by the fusion device and the accuracy of the first fusion data output by the fusion device.
  • a possible implementation manner of acquiring the first data is exemplarily shown as follows.
  • FIG. 4 is a schematic flowchart of a method for obtaining the calibration value of the first state process and the first motion model provided by the present application.
  • In the following, the preset moment is exemplified by the first moment, the motion model by the first motion model, and the state process calibration value by the first state process calibration value; the actual position obtained at the first moment may be called the first actual position, the estimated position obtained at the first moment may be called the first estimated position, the estimated position after M steps may be called the second estimated position, and the estimation error corresponding to the M steps may be called the first estimation error.
  • the method may include the steps of:
  • step 401 the roadside fusion module acquires the first actual position of the cooperative target conforming to the first motion model at the first moment.
  • the cooperation target moves according to the first motion model, such as a linear motion model, a left-turn motion model, or a right-turn motion model.
  • The linear motion model can be understood as the cooperative target moving in a straight line at a constant speed v; the left-turn motion model can be understood as the cooperative target accelerating at speed v with acceleration a, and the right-turn motion model as the cooperative target decelerating in a straight line at speed v with acceleration a; or, alternatively, the left-turn motion model can be understood as the cooperative target decelerating at speed v with acceleration a, and the right-turn motion model as the cooperative target accelerating in a straight line at speed v with acceleration a.
  • the method includes the following steps:
  • step 501 the surveying instrument measures the first actual positions corresponding to n different first moments of the cooperation target conforming to the first motion model, where n is an integer greater than 1.
  • the surveying instrument may be a high-precision surveying instrument using real-time kinematic (RTK) technology, and may be installed on the cooperation target to realize the measurement of the actual position of the cooperation target.
  • When the cooperative target moves according to the first motion model, the measuring instrument can measure the first actual positions [x_1, x_2, x_3, ..., x_n] of the cooperative target at the n first moments [k_1, k_2, k_3, ..., k_n]. It can also be understood that at time k_1 the measuring instrument measures the first actual position of the cooperative target as x_1; at time k_2 it measures the first actual position as x_2; and so on, until at time k_n it measures the first actual position as x_n.
  • the measuring instrument can obtain the relationship between the first motion model, n first moments, and n first actual positions as shown in Table 1.
  • It should be noted that expressing the correspondence between the first motion model, the n first moments, and the n first actual positions in the form of a table is only an example; it can also be expressed by other correspondences, which is not limited in this application.
  • the above table 1 may also be three independent tables, that is, one motion model corresponds to one table.
  • step 502 the surveying instrument sends to the roadside fusion module the obtained first actual positions of the cooperative targets respectively corresponding to n different first moments.
  • the roadside fusion module receives the first actual positions respectively corresponding to n different first moments from the measuring instrument.
  • the measuring instrument may send Table 1 to the roadside fusion module. If the measuring instrument obtains three tables in step 501, in step 502, the measuring instrument may send the three tables to the roadside fusion module. It should be noted that the three tables may be sent to the roadside fusion module together, or may be sent to the roadside fusion module three times, which is not limited in this application.
  • the roadside fusion module can obtain the first actual positions respectively corresponding to n different first moments when the cooperative target moves according to the first motion model.
  • Step 402 the roadside perception module obtains the first estimated position of the cooperation target conforming to the first motion model at the first moment, and sends the first estimated position at the first moment to the roadside fusion module.
  • For example, the roadside perception module can record the first estimated positions [x̂_1, x̂_2, x̂_3, ..., x̂_n] of the cooperative target conforming to the first motion model at the n different first moments [k_1, k_2, k_3, ..., k_n]. It can also be understood that at time k_1 the roadside perception module records the first estimated position of the cooperative target conforming to the first motion model as x̂_1; at time k_2 it records the first estimated position as x̂_2; and so on, until at time k_n it records the first estimated position as x̂_n.
  • If the roadside sensing module is a radar, the distance of the cooperative target can be measured using electromagnetic waves and, combined with multi-antenna technology, the reception angle of the electromagnetic waves can be obtained; the distance and angle can then be used to locate the target in space and obtain the first estimated positions corresponding to the n different first moments.
  • If the roadside perception module is a camera, a correspondence between the image pixel coordinate system and the world coordinate system can be constructed, so that the first estimated positions corresponding to the n different first moments can be obtained.
  • the roadside perception module sends to the roadside fusion module the first estimated positions corresponding to the cooperative targets conforming to the first motion model at n different first moments
  • The roadside detection system may include P roadside sensing modules, and each of the P roadside sensing modules can perform the above step 402; each roadside sensing module can obtain n first estimated positions, and the P roadside sensing modules may all send the recorded n first estimated positions to the roadside fusion module.
  • the roadside fusion module may receive n first estimated positions from each of the P roadside sensing modules to obtain P ⁇ n first estimated positions.
  • In a possible implementation, the roadside fusion module may first perform a weighted average on the first estimated positions from the P roadside perception modules to obtain the first estimated position at each first moment, as sketched below. It can also be understood that the roadside fusion module takes a weighted average of the first estimated positions from the P roadside perception modules at time k_1 to obtain the first estimated position at time k_1, takes a weighted average of the first estimated positions from the P roadside perception modules at time k_2 to obtain the first estimated position at time k_2, and so on until time k_n. In other words, the roadside fusion module obtains one first estimated position for each of the n first moments [k_1, k_2, k_3, ..., k_n].
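A minimal sketch of the per-moment weighted average, assuming scalar positions and optional weights (equal weights by default; the names are hypothetical):

```python
import numpy as np

def fuse_module_estimates(positions, weights=None):
    """Weighted average of the first estimated positions reported by the P
    roadside perception modules for one moment."""
    positions = np.asarray(positions, dtype=float)
    if weights is None:
        weights = np.ones(len(positions))        # equal weights by default
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * positions) / np.sum(weights))

# e.g. three modules report slightly different positions for time k_1
x_hat_k1 = fuse_module_estimates([12.4, 12.6, 12.5])
```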
  • Step 403 the roadside fusion module determines the second estimated position of the cooperative target after M steps according to the first estimated position and the first motion model.
  • When the first motion model is a linear motion model, the second estimated position of the cooperative target after M steps can be represented by the following formula 1.
  • When the first motion model is a left-turn motion model, the second estimated position of the cooperative target after M steps can be represented by the following formula 2. It should be understood that this example takes the left-turn motion model as accelerated motion at speed v with acceleration a.
  • When the first motion model is a right-turn motion model, the second estimated position of the cooperative target after M steps can be represented by the following formula 3. It should be understood that this example takes the right-turn motion model as decelerated linear motion at speed v with acceleration a.
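Formulas 1 to 3 are not reproduced in this text; the sketch below only illustrates plausible M-step position predictions that match the verbal descriptions above (constant speed v for the linear model, acceleration or deceleration a for the turn models), with an assumed step duration T. The exact formulas in the application may differ.

```python
def predict_m_steps(x_hat, v, a, M, T, model):
    """Hypothetical M-step position prediction from the first estimated
    position x_hat, with an assumed step duration T."""
    dt = M * T
    if model == "linear":        # uniform straight-line motion at speed v
        return x_hat + v * dt
    if model == "left_turn":     # accelerated motion at speed v, acceleration a
        return x_hat + v * dt + 0.5 * a * dt ** 2
    if model == "right_turn":    # decelerated straight-line motion at speed v, acceleration a
        return x_hat + v * dt - 0.5 * a * dt ** 2
    raise ValueError("unknown motion model")
```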
  • Step 404 the roadside fusion module determines the first estimation error of the M-step estimation according to the first actual position and the second estimated position.
  • Step 405 the roadside fusion module obtains the first state process calibration value corresponding to the first motion model according to the first estimation error.
  • the above steps 401 to 405 are executed L times in a loop, and each time the above steps 401 to 405 are executed, n-m different first estimation errors can be obtained.
  • The roadside fusion module determines the variance of the L×(n−m) first estimation errors obtained in the L cycles as the first state process calibration value corresponding to the first motion model. When L is greater than 1, this helps to improve the accuracy of the first state process calibration value.
  • The first state process calibration value Q_m can be represented by the following formula 4.
  • When the first motion model is a linear motion model, the first state process calibration value corresponding to the linear motion model can be obtained; when the first motion model is a left-turn motion model, the first state process calibration value corresponding to the left-turn motion model can be obtained; and when the first motion model is a right-turn motion model, the first state process calibration value corresponding to the right-turn motion model can be obtained, as shown in Table 2 below.
  • For different motion models, the corresponding first state process calibration values may also be different.
  • the acquisition of the first state information can be introduced in the following two situations.
  • In the first situation, the roadside sensing module is a radar; for example, the roadside detection system includes P radars, where P is a positive integer.
  • FIG. 6a is a schematic flowchart of a method for acquiring the first state information by a roadside detection system provided in the present application.
  • the transmitted electromagnetic wave signal is taken as an example of the first electromagnetic wave signal
  • the sampling data is taken as an example of the first echo signal.
  • the method includes the following steps:
  • Step 601 the radar transmits a first electromagnetic wave signal to a detection area.
  • the detection area of the radar can refer to the introduction of the above-mentioned FIG. 2 .
  • Step 602 the radar receives a first echo signal from the detection area.
  • The first echo signal is obtained after the first electromagnetic wave signal is reflected by a random target. It should be understood that the first echo signal is the measurement data (which may also be referred to as detection data, sampling data, etc.) of the random target collected by the radar.
  • Step 603 the radar sends the first echo signal to the roadside fusion module.
  • the roadside fusion module receives the first echo signal from the radar.
  • the roadside fusion module may receive the first echo signals (ie sampled data 1, sampled data 2 . . . sampled data P) from P radars to obtain P first echo signals.
  • Step 604 the roadside fusion module determines the first state information of the random target according to the received P first echo signals.
  • Since one piece of state information can be obtained from one first echo signal, the roadside fusion module can obtain P pieces of state information from the P first echo signals, and take a weighted average of the P pieces of state information to obtain the first state information of the random target.
  • the roadside fusion module may perform a weighted average on the obtained P positions to obtain the first position included in the first state information.
  • FIG. 6b is a schematic flowchart of another method for obtaining the first state information by the roadside detection system provided in the present application.
  • the transmitted electromagnetic wave signal is taken as an example of the first electromagnetic wave signal
  • the sampling data is taken as an example of the first echo signal. The method includes the following steps:
  • Step 611 the radar transmits a first electromagnetic wave signal to the detection area.
  • the detection area of the radar can refer to the introduction of the above-mentioned FIG. 2 .
  • Step 612 the radar receives the first echo signal from the detection area.
  • the first echo signal is obtained after the first electromagnetic wave signal is reflected by a random target.
  • Step 613 the radar determines first state information of a random target according to the received first echo signal.
  • Step 614 the radar sends the state information to the roadside fusion module.
  • Correspondingly, the roadside fusion module receives the state information from the radar.
  • The roadside fusion module can receive state information from P radars and obtain P pieces of state information.
  • Step 615 the roadside fusion module determines the first state information of the random target according to the received P pieces of state information.
  • the roadside fusion module performs a weighted average on the P pieces of state information according to the state information from the P radars to obtain the first state information of the random target.
  • In the second situation, the roadside perception module is a camera.
  • FIG. 7a is a schematic flowchart of another method for obtaining the first state information of a target by the roadside detection system provided in the present application.
  • the sampling data is taken as the first image as an example. The method includes the following steps:
  • Step 701 the camera captures a first image of a random target.
  • Step 702 the camera sends the first image to the roadside fusion module. Accordingly, the roadside fusion module receives the first image from the camera.
  • the roadside fusion module may receive first images from P cameras and obtain P first images.
  • Step 703 the roadside fusion module can acquire the first state information of the random target according to the P first images.
  • the roadside fusion module may fuse the P first images, and determine the first state information of the random target according to the fused first images.
  • FIG. 7b is a schematic flowchart of another method for obtaining the first state information of a target by the roadside detection system provided in the present application.
  • the method includes the following steps:
  • Step 711 the camera captures a first image of a random target.
  • Step 712 the camera obtains state information of a random target according to the first image.
  • Step 713 the camera sends the state information of the random target to the roadside fusion module.
  • Correspondingly, the roadside fusion module receives the state information from the camera.
  • The roadside fusion module may receive state information from P cameras and obtain P pieces of state information.
  • Step 714 the roadside fusion module acquires the first state information of the random target according to the P pieces of state information from the camera.
  • the roadside fusion module performs a weighted average on the P pieces of state information according to the state information from the P cameras to obtain the first state information of the random target.
  • the time when the roadside fusion module sends the first data to the fusion device through the air interface is the first time stamp.
  • the first estimated covariance value is a statistical error between the first state information and the actual state information.
  • The first covariance estimation value can be represented by the following formula 5.
  • the first state equation may be a motion model of a random target detected by the roadside detection system and indicated to the fusion device; or may be determined by the fusion device according to the first state information.
  • the motion model of the random target may be a linear motion model, a left-turn motion model, or a right-turn motion model.
  • the roadside fusion module can obtain the first state information, the first estimated covariance value, the first time stamp, the first state process calibration value and the first motion model.
  • Similarly, the vehicle-side fusion module can obtain the second motion model, the second state process calibration value, the second state information, the second covariance estimation value and the second time stamp.
  • That is, in the above description, the roadside fusion module is replaced by the vehicle-side fusion module, the roadside perception module is replaced by the vehicle-side perception module, the first motion model is replaced by the second motion model, the first state process calibration value is replaced by the second state process calibration value, and the first state information is replaced by the second state information.
  • FIG. 8 is a schematic flowchart of a data processing method for vehicle-road coordination provided by the present application. This method can be applied to the communication system shown in FIG. 1a above. The method includes the following steps:
  • Step 801 the roadside detection system sends first data to the fusion device.
  • the fusion device receives the first data from the roadside detection system.
  • the first data includes one or more of the first state process calibration value, the first motion model, the first state information, the first estimated covariance value, and the first time stamp.
  • Step 802 the vehicle side detection system sends the second data to the fusion device.
  • the fusion device receives the second data from the vehicle side detection system.
  • the second data includes a second state process calibration value, a second motion model, second state information, a second covariance estimation value, and a second time stamp.
  • the second state process calibration value is used to optimize the second covariance estimation value (that is, the fusion device can optimize the second covariance estimation value based on the second state process calibration value during the data fusion process), so as to more accurately characterize the measurement error;
  • the second motion model is used to identify the motion state of the cooperative target.
  • the second state process calibration value and the second motion model may be pre-acquired and stored by the vehicle side detection system.
  • the second state information is used to identify the motion characteristics of the target.
  • the second state information may include a second position and/or a second velocity of the target, and the like.
  • the second covariance estimate is used to identify a statistical error between the second state information and the actual state information. It should be noted that the initial state in the vehicle side detection system does not have actual state information, and an actual state information (called initial state information) can be preset.
  • the initial state information usually converges continuously with the filtering process.
  • the second time stamp is used to identify the moment when the vehicle-side detection system sends the second data. It can also be understood that the second time stamp is a time stamp stamped when the vehicle side detection system sends the second data.
  • step 801 can be performed before step 802, step 802 can be performed before step 801, or step 801 and step 802 can be performed synchronously; this application does not limit this.
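  • Before step 803 is described, the following sketch summarizes the quantities carried in the first data and the second data as plain fields. The field names and the Python encoding are illustrative assumptions; the description only specifies which quantities the data includes.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class DetectionData:
        # one message from a detection system (roadside or vehicle side) to the fusion device
        state_process_calibration: np.ndarray  # used by the fusion device to optimize the covariance estimation value
        motion_model: str                      # e.g. "linear", "left_turn" or "right_turn"
        state: np.ndarray                      # motion characteristics of the target, e.g. position and/or velocity
        covariance: np.ndarray                 # statistical error between the state information and the actual state information
        timestamp: float                       # moment at which the detection system sends the data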
  • Step 803 the fusion device obtains the first fusion data of the target according to the first data and the second data.
  • the first fusion data refers to the first fusion data of the random target.
  • the first fused data includes, for example, target position information, state information (such as speed, direction), covariance, and the like.
  • In this way, the fusion device can receive the first state process calibration value and the first motion model from the roadside fusion module, and can use the first state process calibration value to optimize the first covariance estimation value during the data fusion process, so that the measurement error can be characterized more accurately. The fusion device also receives the second state process calibration value and the second motion model from the vehicle-side fusion module, and can use the second state process calibration value to optimize the second covariance estimation value during the data fusion process, so that the measurement error can be characterized more accurately, thereby helping to improve the accuracy of the obtained first fusion data of the target.
  • FIG. 9 is a schematic flowchart of a method for obtaining first fusion data according to first data and second data provided in this application. The method includes the following steps:
  • Step 901 the fusion device obtains the first state prediction value of the target at the fusion moment according to the first time stamp and the first state information.
  • The first state prediction value can be expressed by the following formula 6.
  • Step 902 the fusion device obtains a second state prediction value of the target at the time of fusion according to the second time stamp and the second state information.
  • The second state prediction value can be expressed by the following formula 7.
  • step 901 can be performed first and then step 902 can be performed, or step 902 can be performed first and then step 901 is performed, or step 901 and step 902 can be performed simultaneously.
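  • As an illustration of steps 901 and 902, the sketch below propagates a reported state from its time stamp to the fusion moment. Formulas 6 and 7 are not reproduced in this text, so a constant-velocity propagation matching the linear motion model is assumed; the function name and the [x, y, vx, vy] state layout are illustrative.

    import numpy as np

    def predict_state(state, t_stamp, t_fusion):
        # state: [x, y, vx, vy] reported by the detection system at time t_stamp
        # returns the state prediction value at the fusion moment t_fusion
        dt = t_fusion - t_stamp
        phi = np.array([[1.0, 0.0, dt, 0.0],
                        [0.0, 1.0, 0.0, dt],
                        [0.0, 0.0, 1.0, 0.0],
                        [0.0, 0.0, 0.0, 1.0]])   # state-transition matrix for the interval dt
        return phi @ state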
  • Step 903 the fusion device obtains the first covariance prediction value of the target at the fusion moment according to the first time stamp, the first state information, the first state process calibration value and the first covariance estimated value.
  • For the first covariance prediction value, refer to the introduction of formula 5 above, in which the first covariance estimation value output by the roadside detection system at time t is used.
  • step 904 the fusion device obtains a second covariance prediction value of the target at the fusion moment according to the second time stamp, the second state information, the second state process calibration value and the second motion model.
  • The second covariance prediction value can be expressed by the following formula 8, where k represents the k-th fusion moment, Φ v (k/t) represents the state equation, and Q v (k/t) is the second state process calibration value at time k/t.
  • the second state equation may be the motion model of the target detected by the vehicle side detection system and indicated to the fusion device; or it may be determined by the fusion device according to the second state information.
  • step 903 can be performed before step 904, step 904 can be performed before step 903, or step 903 and step 904 can be performed simultaneously.
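  • Since formulas 5 and 8 are not reproduced in this text, the covariance prediction of steps 903 and 904 can be read as the standard covariance propagation shown below; this is an assumption consistent with the named terms (the state equation Φ, the covariance estimation value at time t, and the state process calibration value Q):

    P_r(k \mid t) = \Phi_r(k \mid t)\, P_r(t \mid t)\, \Phi_r(k \mid t)^{\mathsf{T}} + Q_r(k \mid t)
    P_v(k \mid t) = \Phi_v(k \mid t)\, P_v(t \mid t)\, \Phi_v(k \mid t)^{\mathsf{T}} + Q_v(k \mid t)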
  • Step 905 the fusion device predicts the third state prediction value and the third covariance prediction value of the target according to the first motion model and the second fusion data of the previous frame.
  • The second fusion data of the previous frame includes the state information output in the previous frame, and the third state prediction value can be expressed by formula 9.
  • The second fusion data of the previous frame also includes the covariance estimation value output in the previous frame, and the third covariance prediction value can be expressed by formula 10.
  • In formula 10, Q f (k) is the process noise, which is generally an empirical value, and the state equation of the first motion model is used for the prediction.
  • each frame of fusion data output by the fusion device can be cached first, and the processor can call the fusion data in the cache.
  • Step 906 the fusion device predicts the fourth state prediction value and the fourth covariance prediction value of the target according to the second motion model and the second fusion data of the previous frame of the target.
  • For a possible implementation of step 906, reference may be made to the introduction of the foregoing step 905, which will not be repeated here.
  • step 905 can be performed first and then step 906 is performed, or step 906 can be performed first and then step 905 is performed, or step 905 and step 906 can be performed simultaneously.
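  • A minimal sketch of steps 905 and 906 is given below: the previous frame's fused state and covariance are propagated under a chosen motion model. Formulas 9 and 10 are assumed to be the standard prediction equations; the function name and arguments are illustrative.

    import numpy as np

    def predict_from_previous_fusion(x_prev, P_prev, phi, Q_f):
        # x_prev, P_prev: state and covariance from the second fusion data of the previous frame
        # phi: state equation of the first (step 905) or second (step 906) motion model
        # Q_f: process noise, generally an empirical value
        x_pred = phi @ x_prev                    # third/fourth state prediction value
        P_pred = phi @ P_prev @ phi.T + Q_f      # third/fourth covariance prediction value
        return x_pred, P_pred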
  • Step 907 the fusion device obtains a first filtered estimated value and a first filtered covariance estimated value of the target according to the first predicted state value, the first predicted covariance value, the third predicted state value, and the third predicted covariance value.
  • The first filtered estimated value can be determined according to the first state prediction value and the third state prediction value, as expressed by the following formula 11; the first filtered covariance estimated value can be determined according to the first covariance prediction value and the third covariance prediction value, as expressed by the following formula 12.
  • H is the observation matrix
  • K r (k) is the Kalman gain equation, which can be expressed in the following formula 13
  • R is the noise coefficient, which is generally an empirical value.
  • the first filtered estimated value of the target includes a first velocity, a first direction, a first position information, and the like of the target.
  • Step 908 the fusion device obtains a second filtered estimated value and a second filtered covariance estimated value of the target according to the second predicted state value, the second predicted covariance value, the fourth predicted state value, and the fourth predicted covariance value.
  • The second filtered estimated value can be determined according to the second state prediction value and the fourth state prediction value, as expressed by the following formula 14; the second filtered covariance estimated value can be determined according to the second covariance prediction value and the fourth covariance prediction value, as expressed by the following formula 15.
  • H is the observation matrix
  • K v (k) is the Kalman gain equation, which can be referred to the expression of the following formula 16.
  • the second filtered estimated value of the target includes a second velocity, a second direction, a second position information, and the like of the target.
  • step 907 can be performed first and then step 908 is performed, or step 908 can be performed first and then step 907 is performed, or step 907 and step 908 can be performed simultaneously.
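  • Steps 907 and 908 can be sketched as a standard Kalman update in which the sensor-side prediction is treated as the observation of the prediction from the previous fused frame. Formulas 11 to 16 are not reproduced in this text, so the standard forms below are an assumption; H is the observation matrix and R the noise coefficient, as named above.

    import numpy as np

    def filter_update(x_obs, x_pred, P_pred, H, R):
        # x_obs: first/second state prediction value, treated here as the observation
        # x_pred, P_pred: third/fourth state and covariance prediction values (from the previous fused frame)
        S = H @ P_pred @ H.T + R                             # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)                  # Kalman gain (cf. formulas 13 and 16)
        x_filt = x_pred + K @ (x_obs - H @ x_pred)           # filtered estimated value (cf. formulas 11 and 14)
        P_filt = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred  # filtered covariance estimated value (cf. formulas 12 and 15)
        return x_filt, P_filt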
  • Step 909 the fusion device determines the first confidence level of the first data at the fusion moment according to the first time stamp, the first state process calibration value, the first motion model and the first covariance estimation value.
  • the first confidence level may also be referred to as a first confidence level prediction value.
  • The first confidence level U r (k) can be expressed by the following formula 17, in which the covariance term is the sum of the first covariance estimation value and the first state process calibration value, S is a standard definition whose range can be greater than 0 and less than or equal to 95%, and X is a variable.
  • Step 910 the fusion device determines the second confidence level of the second data at the fusion moment according to the second time stamp, the second state process calibration value, the second motion model and the second covariance estimation value.
  • the second confidence level may also be referred to as a second confidence level prediction value.
  • The second confidence level U v (k) can be expressed by the following formula 18, in which the covariance term is the sum of the second covariance estimation value and the second state process calibration value, S is a standard definition whose range can be greater than 0 and less than or equal to 95%, and X is a variable.
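  • One possible reading of formulas 17 and 18, which are not reproduced in this text, is sketched below: the confidence of a data source is the probability that a zero-mean Gaussian error variable X, whose variance is the sum of the covariance value and the state process calibration value, stays within the standard bound S. This interpretation, the scalar simplification and the function name are assumptions.

    import math

    def confidence(cov_plus_calibration, S):
        # cov_plus_calibration: sum of the covariance value and the state process calibration value (scalar here)
        # S: standard bound, greater than 0 and less than or equal to 0.95 in this illustration
        sigma = math.sqrt(cov_plus_calibration)
        return math.erf(S / (sigma * math.sqrt(2.0)))   # P(|X| <= S) for X ~ N(0, sigma^2)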
  • Step 911 the fusion device obtains the first fusion data of the target according to the first confidence level, the second confidence level, the first filtered estimated value, the first filtered covariance estimated value, the second filtered estimated value and the second filtered covariance estimated value.
  • For a possible implementation of step 911, refer to the introduction of FIG. 10 below.
  • FIG. 10 is a schematic flowchart of another method for obtaining the first fusion data provided by the present application.
  • the method includes the following steps:
  • Step 1001 the fusion device determines whether the first confidence level meets the first preset confidence level and whether the second confidence level meets the second preset confidence level. If the first confidence level meets the first preset confidence level and the second confidence level also meets the second preset confidence level, step 1002 is executed; if the first confidence level does not meet the first preset confidence level and the second confidence level does not meet the second preset confidence level, step 1003 is executed; if the first confidence level meets the first preset confidence level and the second confidence level does not meet the second preset confidence level, step 1004 is executed; if the first confidence level does not meet the first preset confidence level and the second confidence level meets the second preset confidence level, step 1005 is executed. A sketch of this selection logic is given after the descriptions of steps 1002 to 1005 below.
  • The first preset confidence level C r and the second preset confidence level C v may be two preset indicators.
  • The first preset confidence level may be the same as or different from the second preset confidence level, which is not limited in this application.
  • That the first confidence level meets the first preset confidence level and the second confidence level also meets the second preset confidence level can be expressed as: U v (k) ≥ C v and U r (k) ≥ C r ; that neither confidence level meets its preset confidence level can be expressed as: U v (k) < C v and U r (k) < C r ; that the first confidence level meets the first preset confidence level and the second confidence level does not can be expressed as: U r (k) ≥ C r and U v (k) < C v ; and that the second confidence level meets the second preset confidence level and the first confidence level does not can be expressed as: U v (k) ≥ C v and U r (k) < C r .
  • Step 1002 the fusion device obtains the first fusion data according to the first filtered estimated value, the first filtered covariance estimated value, the second filtered estimated value, and the second filtered covariance estimated value.
  • The target state estimation value can be expressed by the following formula 19, and the target covariance can be expressed by the following formula 20.
  • Step 1003 the fusion device obtains the first fusion data according to the second fusion data output in the previous frame.
  • In this case, the target state estimation value and the corresponding target covariance estimation value are taken from the second fusion data output in the previous frame.
  • Step 1004 the fusion device obtains the first fusion data according to the first filtered estimated value and the first filtered covariance estimated value.
  • In this case, the target state estimation value is the first filtered estimated value, and the corresponding first fusion covariance estimation value is the first filtered covariance estimated value.
  • Step 1005 the fusion device obtains the first fusion data according to the second filtered estimated value and the second filtered covariance estimated value.
  • In this case, the target state estimation value is the second filtered estimated value, and the corresponding target fusion covariance estimation value is the second filtered covariance estimated value.
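  • The selection logic of steps 1001 to 1005 can be sketched as follows. The comparison direction (confidence at least the preset threshold) and the fusion rule used for step 1002 are assumptions, since formulas 19 and 20 are not reproduced in this text; covariance-weighted fusion is used purely as an illustration.

    import numpy as np

    def select_and_fuse(U_r, U_v, C_r, C_v, x_r, P_r, x_v, P_v, x_prev, P_prev):
        # U_r, U_v: first/second confidence levels; C_r, C_v: preset confidence levels
        # (x_r, P_r), (x_v, P_v): first/second filtered estimated values and filtered covariance estimated values
        # (x_prev, P_prev): second fusion data output in the previous frame
        if U_r >= C_r and U_v >= C_v:                        # step 1002: fuse both sources
            P_f = np.linalg.inv(np.linalg.inv(P_r) + np.linalg.inv(P_v))
            x_f = P_f @ (np.linalg.inv(P_r) @ x_r + np.linalg.inv(P_v) @ x_v)
        elif U_r >= C_r:                                     # step 1004: use roadside data only
            x_f, P_f = x_r, P_r
        elif U_v >= C_v:                                     # step 1005: use vehicle-side data only
            x_f, P_f = x_v, P_v
        else:                                                # step 1003: keep the previous frame's fusion data
            x_f, P_f = x_prev, P_prev
        return x_f, P_f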
  • the detection system and the fusion device include hardware structures and/or software modules corresponding to each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software in combination with the modules and method steps described in the embodiments disclosed in the present application. Whether a certain function is executed by hardware or by computer software driving the hardware depends on the specific application scenario and design constraints of the technical solution.
  • FIG. 11 is a schematic structural diagram of a possible detection system provided by the present application.
  • These detection systems can be used to realize the functions of the roadside detection system or the vehicle side detection system in the above method embodiments, and thus can also realize the beneficial effects of the above method embodiments.
  • the detection system may be the roadside detection system shown in FIG. 1a or the vehicle-side detection system shown in FIG. 1a; it may also be the roadside detection system shown in FIG. 1b above or the vehicle-side detection system shown in FIG. 1b above; or it may be a module (such as a chip) applied to the detection system.
  • the detection system 1100 includes a processing module 1101 and a transceiver module 1102 .
  • the detection system 1100 is used to realize the functions of the roadside detection system in the above method embodiment shown in FIG. 3 or FIG. 8 .
  • the processing module 1101 is used to obtain the data of the detection system, where the data includes a state process calibration value, a motion model, state information, a covariance estimation value and a time stamp; the state process calibration value is used to optimize the covariance estimation value, the motion model is used to identify the motion state of the target, the state information is used to identify the motion characteristics of the target, the covariance estimation value is used to identify the error between the state information and the actual state information, and the time stamp is used to identify the moment when the detection system sends the data; the transceiver module 1102 is used to send the data to the fusion device.
  • More detailed descriptions of the processing module 1101 and the transceiver module 1102 can be obtained directly by referring to the relevant descriptions in the method embodiment shown in FIG. 3, and will not be repeated here.
  • processing module 1101 in the embodiment of the present application may be implemented by a processor or a processor-related circuit component
  • transceiver module 1102 may be implemented by a transceiver or a transceiver-related circuit component.
  • FIG. 12 is a schematic structural diagram of a possible fusion device provided by the present application.
  • These fusion devices can be used to realize the functions of the fusion devices in the foregoing method embodiments, and thus can also realize the beneficial effects possessed by the foregoing method embodiments.
  • the fusion device may be the cloud server shown in FIG. 1a, a processor, ECU or domain controller in the vehicle shown in FIG. 1a, or the fusion device in the roadside detection system shown in FIG. 1b above; it may also be a module (such as a chip) applied to the fusion device.
  • the fusion device 1200 includes a processing module 1201 and a transceiver module 1202 .
  • the fusion device 1200 is used to implement the functions of the fusion device in the method embodiments shown in FIG. 3 , FIG. 8 , FIG. 9 or FIG. 10 above.
  • the transceiver module 1202 is used to receive the first data from the roadside detection system and the second data from the vehicle-side detection system; in the first data, the first state process calibration value is used to optimize the first covariance estimation value
  • the first motion model is used to identify the first motion state of the target
  • the first state information is used to identify the first motion feature of the target
  • the first covariance estimation value is used to identify the difference between the first state information and the first actual state information
  • the first time stamp is used to identify the first moment when the roadside detection system sends the first data
  • the second data includes the second state process calibration value, the second motion model, the second state information, the second covariance estimation value and the second time stamp
  • the second state process calibration value is used to optimize the second covariance estimation value
  • the second motion model is used to identify the second motion state of the target
  • the second state information is used to identify the second motion feature of the target, the second covariance estimation value is used to identify the difference between the second state information and the second actual state information, and the second time stamp is used to identify the second moment when the vehicle-side detection system sends the second data; the processing module 1201 is used to obtain the first fusion data of the target according to the first data and the second data.
  • More detailed descriptions of the processing module 1201 and the transceiver module 1202 can be obtained directly by referring to the relevant descriptions in the method embodiment shown in FIG. 8, and will not be repeated here.
  • processing module 1201 in the embodiment of the present application may be implemented by a processor or a processor-related circuit component
  • transceiver module 1202 may be implemented by a transceiver or a transceiver-related circuit component.
  • the present application also provides a detection system 1300 .
  • the detection system 1300 may include at least one processor 1301 and a transceiver 1302 .
  • the processor 1301 and the transceiver 1302 are coupled to each other.
  • the transceiver 1302 may be an interface circuit or an input and output interface.
  • the detection system 1300 may further include a memory 1303 for storing instructions executed by the processor 1301 or storing input data required by the processor 1301 to execute the instructions or storing data generated by the processor 1301 after executing the instructions.
  • the processor 1301 is used to execute the functions of the above-mentioned processing module 1101
  • the transceiver 1302 is used to execute the functions of the above-mentioned transceiver module 1102 .
  • the present application also provides a fusion device 1400 .
  • the fusion device 1400 may include at least one processor 1401 and a transceiver 1402 .
  • the processor 1401 and the transceiver 1402 are coupled to each other.
  • the transceiver 1402 may be an interface circuit or an input and output interface.
  • the fusion device 1400 may further include a memory 1403 for storing instructions executed by the processor 1401 or storing input data required by the processor 1401 to execute the instructions or storing data generated after the processor 1401 executes the instructions.
  • the processor 1401 is used to execute the functions of the above-mentioned processing module 1201
  • the transceiver 1402 is used to execute the functions of the above-mentioned transceiver module 1202 .
  • the present application provides a communication system for a vehicle-road system.
  • the communication system of the vehicle road system may include the aforementioned one or more vehicle side detection systems, one or more roadside detection systems, and a fusion device.
  • the vehicle side detection system can implement any method on the vehicle side detection system side
  • the roadside detection system can implement any method on the roadside detection system side
  • the fusion device can implement any method on the fusion device side.
  • the possible implementations of the roadside detection system, the vehicle side detection system, and the fusion device can be found in the introduction above, and will not be repeated here.
  • the present application provides a vehicle.
  • the vehicle may include one or more of the aforementioned vehicle side detection systems, and/or a fusion device.
  • the vehicle side detection system can execute any method on the vehicle side detection system side
  • the fusion device can execute any method on the fusion device side.
  • the possible implementations of the vehicle side detection system and the fusion device can be found in the above introduction, and will not be repeated here.
  • the vehicle may also include other components, such as a processor, a memory, a wireless communication device, and the like.
  • the vehicle may be, for example, an unmanned vehicle, a smart vehicle, an electric vehicle, a digital vehicle, and the like.
  • processor in the embodiments of the present application may be a central processing unit (central processing unit, CPU), and may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), field programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, transistor logic devices, hardware components or any combination thereof.
  • CPU central processing unit
  • DSP digital signal processor
  • ASIC application specific integrated circuit
  • FPGA field programmable gate array
  • a general-purpose processor can be a microprocessor, or any conventional processor.
  • the method steps in the embodiments of the present application may be implemented by means of hardware, or may be implemented by means of a processor executing software instructions.
  • Software instructions can be composed of corresponding software modules, and software modules can be stored in random access memory (random access memory, RAM), flash memory, read-only memory (read-only memory, ROM), programmable read-only memory (programmable ROM) , PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM), register, hard disk, mobile hard disk, CD-ROM or known in the art any other form of storage medium.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may also be a component of the processor.
  • the processor and storage medium can be located in the ASIC.
  • the ASIC may be located in the detection system.
  • the processor and the storage medium can also exist in the detection system as discrete components.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • a computer program product consists of one or more computer programs or instructions. When the computer programs or instructions are loaded and executed on the computer, the processes or functions of the embodiments of the present application are executed in whole or in part.
  • the computer can be a general purpose computer, special purpose computer, computer network, detection system, user equipment, or other programmable device.
  • Computer programs or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, computer programs or instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired or wireless manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media.
  • Available media can be magnetic media, such as floppy disks, hard disks, and tapes; optical media, such as digital video discs (DVDs); or semiconductor media, such as solid state drives (SSDs).
  • "At least one" means one or more, and "multiple" means two or more.
  • "At least one of the following (items)" or similar expressions refer to any combination of these items, including any combination of single items or plural items.
  • For example, at least one item (piece) of a, b or c can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c can be single or multiple.
  • “And/or” describes the association relationship of associated objects, indicating that there may be three types of relationships, for example, A and/or B, which can mean: A exists alone, A and B exist simultaneously, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the associated objects are in an “or” relationship; in the formulas of this application, the character “/” indicates that the associated objects are in a “division” relationship.
  • the word “exemplary” is used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as “exemplary” is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word “exemplary” is intended to present concepts in a specific manner, and does not constitute a limitation on the application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to a vehicle-road coordination communication method, a data processing method, a detection system and a fusion device, which can be used in the fields of automated driving, intelligent driving, assisted driving and the like. The vehicle-road coordination communication method comprises: acquiring data from a detection system and sending the data to a fusion device, the data comprising one or more of a state process calibration value, a motion model, state information, a covariance estimation value and a time stamp, wherein the state process calibration value is used by the fusion device to optimize the covariance estimation value; the motion model is used to identify the motion state of a target; the state information is used to identify the motion characteristics of the target; the covariance estimation value is used to identify errors between the state information and actual state information; and the time stamp is used to identify the moment at which the detection system sends the data. By means of the state process calibration value, the covariance estimation value can be optimized, so that the measurement errors of the detection system can be characterized more accurately, thereby helping to improve the accuracy of the first fusion data.
PCT/CN2021/101660 2021-06-22 2021-06-22 Procédé de communication de coordination véhicule-route, procédé de traitement de données, système de détection et appareil de fusion WO2022266863A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/101660 WO2022266863A1 (fr) 2021-06-22 2021-06-22 Procédé de communication de coordination véhicule-route, procédé de traitement de données, système de détection et appareil de fusion
CN202180099647.2A CN117529935A (zh) 2021-06-22 2021-06-22 车路协同的通信、数据处理方法、探测系统及融合装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/101660 WO2022266863A1 (fr) 2021-06-22 2021-06-22 Procédé de communication de coordination véhicule-route, procédé de traitement de données, système de détection et appareil de fusion

Publications (1)

Publication Number Publication Date
WO2022266863A1 true WO2022266863A1 (fr) 2022-12-29

Family

ID=84543887

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/101660 WO2022266863A1 (fr) 2021-06-22 2021-06-22 Procédé de communication de coordination véhicule-route, procédé de traitement de données, système de détection et appareil de fusion

Country Status (2)

Country Link
CN (1) CN117529935A (fr)
WO (1) WO2022266863A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101799992A (zh) * 2009-02-03 2010-08-11 通用汽车环球科技运作公司 组合的车辆到车辆通信和目标检测感测
US20180113472A1 (en) * 2016-10-21 2018-04-26 Toyota Jidosha Kabushiki Kaisha Estimate of geographical position of a vehicle using wireless vehicle data
EP3388307A2 (fr) * 2017-04-13 2018-10-17 KNORR-BREMSE Systeme für Schienenfahrzeuge GmbH Fusion de données liées à l'infrastructure, en particulier de données liées à l'infrastructure pour véhicules ferroviaires
CN110430079A (zh) * 2019-08-05 2019-11-08 腾讯科技(深圳)有限公司 车路协同系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101799992A (zh) * 2009-02-03 2010-08-11 通用汽车环球科技运作公司 组合的车辆到车辆通信和目标检测感测
US20180113472A1 (en) * 2016-10-21 2018-04-26 Toyota Jidosha Kabushiki Kaisha Estimate of geographical position of a vehicle using wireless vehicle data
EP3388307A2 (fr) * 2017-04-13 2018-10-17 KNORR-BREMSE Systeme für Schienenfahrzeuge GmbH Fusion de données liées à l'infrastructure, en particulier de données liées à l'infrastructure pour véhicules ferroviaires
CN110430079A (zh) * 2019-08-05 2019-11-08 腾讯科技(深圳)有限公司 车路协同系统

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU, SIYUAN: "Resrarch of Adaptive Cooperative Positioning Technology for Vehicular Network", DISSERTATION SUBMITTED TO SHANGHAI JIAO TONG UNIVERSITY FOR THE DEGREE OF MASTER, 1 June 2020 (2020-06-01), pages 1 - 100, XP093017291, [retrieved on 20230124] *
TENG YAN-FEI; HU BIN; LIU ZHI-WEI; HUANG JIAN; GUAN ZHI-HONG: "Adaptive neural network control for quadrotor unmanned aerial vehicles", 2017 11TH ASIAN CONTROL CONFERENCE (ASCC), IEEE, 17 December 2017 (2017-12-17), pages 988 - 992, XP033314485, DOI: 10.1109/ASCC.2017.8287305 *

Also Published As

Publication number Publication date
CN117529935A (zh) 2024-02-06

Similar Documents

Publication Publication Date Title
CN110268413B (zh) 低电平传感器融合
US11113966B2 (en) Vehicular information systems and methods
US10802450B2 (en) Sensor event detection and fusion
US10678240B2 (en) Sensor modification based on an annotated environmental model
US11067996B2 (en) Event-driven region of interest management
WO2022184127A1 (fr) Procédé et appareil de simulation pour véhicule et capteur
WO2021077287A1 (fr) Procédé de détection, dispositif de détection et support de stockage
KR20210077617A (ko) 융합된 카메라/LiDAR 데이터 포인트를 사용한 자동화된 대상체 주석 달기
US20220178718A1 (en) Sensor fusion for dynamic mapping
US9669838B2 (en) Method and system for information use
US20210065733A1 (en) Audio data augmentation for machine learning object classification
US20220215197A1 (en) Data processing method and apparatus, chip system, and medium
CN113129382A (zh) 一种确定坐标转换参数的方法及装置
US20210063165A1 (en) Adaptive map-matching-based vehicle localization
US20210398425A1 (en) Vehicular information systems and methods
WO2024078265A1 (fr) Procédé et appareil de génération de carte de haute précision multicouche
WO2022266863A1 (fr) Procédé de communication de coordination véhicule-route, procédé de traitement de données, système de détection et appareil de fusion
CN109286785B (zh) 一种环境信息共享系统及方法
US20230103178A1 (en) Systems and methods for onboard analysis of sensor data for sensor fusion
CN115359332A (zh) 基于车路协同的数据融合方法、装置、电子设备及系统
US20220198714A1 (en) Camera to camera calibration
EP3952359B1 (fr) Procédés et systèmes pour améliorer les capacités d'accès aux données de véhicule
US11682140B1 (en) Methods and apparatus for calibrating stereo cameras using a time-of-flight sensor
EP4357944A1 (fr) Identification d'objets de trafic inconnus
US20240135252A1 (en) Lane-assignment for traffic objects on a road

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21946365

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180099647.2

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE