WO2022266863A1 - Vehicle-road coordination communication method, data processing method, detection system and fusion apparatus - Google Patents


Info

Publication number: WO2022266863A1
Authority: WO (WIPO (PCT))
Prior art keywords: value, data, covariance, target, state
Application number: PCT/CN2021/101660
Other languages: French (fr), Chinese (zh)
Inventors: HU Bin (胡滨), GOU Pengqi (勾鹏琪), HUA Wenjian (花文健)
Original Assignee: Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to CN202180099647.2A (published as CN117529935A)
Priority to PCT/CN2021/101660
Publication of WO2022266863A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor

Definitions

  • The present application relates to the technical field of vehicle-infrastructure coordinated communication, and in particular to a vehicle-infrastructure coordinated communication method, a data processing method, a detection system, and a fusion device.
  • Intelligent connected vehicles are a specific application of smart vehicles in the Internet of Vehicles environment. They are usually equipped with sensing devices such as cameras, lidars, and inertial measurement units (IMUs) to sense environmental information inside and outside the vehicle, and can realize information interaction between the vehicle side and the roadside based on communication technology.
  • Vehicle-side perception relies on on-board sensors observing the area around the vehicle body, so its sensing area is limited; roadside perception is based on multi-station, multi-sensor observation and can cover a relatively broad observation space.
  • Through vehicle-road coordination, the data sensed by the roadside and the data sensed by the vehicle side can be fused, thereby improving the vehicle's ability to perceive its surrounding environment.
  • Vehicle-road coordination refers to the use of technologies such as wireless communication and the new-generation Internet to implement dynamic, real-time vehicle-vehicle and vehicle-road information interaction, and to carry out active vehicle safety control and cooperative road management. It fully realizes effective coordination of people, vehicles, and roads, ensures traffic safety, and improves traffic efficiency, thereby forming a safe, efficient, and environmentally friendly road traffic system.
  • In the data fusion process of vehicle-road coordination, the perception data of the roadside and the vehicle side must be sent to a fusion device.
  • When the fusion device is located on the vehicle side, the roadside performs scheduling and buffering during data processing and transmits its sensing data to the fusion device over the air interface. The transmission therefore suffers delay and jitter, so the roadside sensing data cannot be delivered to the fusion device continuously at the agreed time. As a result, during a fusion period, the sensing data from the roadside and the sensing data from the vehicle side may not belong to the same fusion cycle, making the data fused by the fusion device inaccurate.
  • In view of this, the present application provides a vehicle-road coordination communication and data processing method, a detection system, and a fusion device, which are used to improve, as much as possible, the accuracy of the data fused by the fusion device.
  • The present application provides a communication method for vehicle-road coordination. The method includes acquiring data from the detection system, where the data includes one or more of a state process calibration value, a motion model, state information, a covariance estimated value, and a time stamp, and sending the data to the fusion device. The state process calibration value is used to optimize the covariance estimated value, the motion model is used to identify the motion state of the target, the state information is used to identify the motion characteristics of the target, the covariance estimated value is used to identify the error between the state information and the actual state information, and the time stamp is used to identify the moment when the detection system sends the data.
  • Based on the above solution, the fusion device can optimize the first covariance estimated value based on the first state process calibration value, so that the measurement error of the detection system can be characterized more accurately. Further, the fusion device can determine the first confidence level according to the first state process calibration value and the first motion model, which further improves the accuracy of the first data determined by the fusion device and thus the accuracy of the first fusion data it outputs.
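  • As a concrete illustration of the data items listed above, the following Python sketch encodes one report from a detection system to the fusion device. The field names and types are assumptions for illustration only, not the message format defined by this application.

```python
from dataclasses import dataclass
from enum import Enum
import numpy as np

class MotionModel(Enum):
    LINEAR = 1      # uniform straight-line motion
    LEFT_TURN = 2   # described below as accelerated motion
    RIGHT_TURN = 3  # described below as decelerated motion

@dataclass
class DetectionReport:
    """One report sent by a (roadside or vehicle-side) detection system."""
    state_process_calibration: float  # used to optimize the covariance estimate
    motion_model: MotionModel         # identifies the motion state of the target
    state: np.ndarray                 # motion characteristics, e.g. [position, velocity]
    covariance: np.ndarray            # error between `state` and the actual state
    timestamp: float                  # moment the detection system sends the data
```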
  • The target includes a cooperative target. The method includes obtaining the actual position, at a preset moment, of the cooperative target conforming to the motion model, and the estimated position of the cooperative target at the preset moment; determining, according to the estimated position and the motion model, the estimated position of the cooperative target after m steps, where m is a positive integer; determining, according to the actual position and the estimated position after m steps, the estimation error corresponding to the m steps; and obtaining, according to the estimation error, the state process calibration value corresponding to the motion model.
  • Obtaining the m-step state process calibration value helps to more accurately characterize the measurement error of the detection system, thereby helping to improve the accuracy of the first data.
  • The method may include obtaining the n−m estimation errors obtained in each of L cycles, where n is an integer greater than 1 and L is a positive integer, and determining the variance of the L×(n−m) estimation errors as the state process calibration value corresponding to the motion model.
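  • To make the variance computation above concrete, the following sketch (an assumed implementation for a linear motion model with known speed v and sampling interval dt, not the application's own code) predicts each estimated position m steps ahead, accumulates the L×(n−m) estimation errors, and returns their variance as the calibration value.

```python
import numpy as np

def state_process_calibration(actual_runs, estimated_runs, v, dt, m):
    """actual_runs / estimated_runs: L sequences of n positions recorded at
    moments k_1..k_n while the cooperative target follows the motion model."""
    errors = []
    for actual, estimated in zip(actual_runs, estimated_runs):
        n = len(actual)
        for i in range(n - m):
            predicted = estimated[i] + v * m * dt     # estimated position after m steps
            errors.append(actual[i + m] - predicted)  # m-step estimation error
    return np.var(errors)  # variance of the L*(n-m) errors = calibration value
```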
  • the motion model includes any one of a linear motion model, a left-turn motion model, or a right-turn motion model.
  • the target further includes a random target
  • the method further includes acquiring sampling data of the random target; and determining state information according to the sampling data.
  • The method may include transmitting electromagnetic wave signals to the detection area, receiving echo signals from the detection area, and determining the state information according to the echo signals, where the detection area includes random targets, and the echo signals are obtained after the electromagnetic wave signals are reflected by the random targets.
  • The present application provides a data processing method for vehicle-road coordination. The method includes acquiring first data from the roadside detection system and second data from the vehicle-side detection system, and obtaining the first fusion data of the random target according to the first data and the second data.
  • The first data includes one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimated value, and a first time stamp. The first state process calibration value is used to optimize the first covariance estimated value, the first motion model is used to identify the first motion state of the target, the first state information is used to identify the first motion feature of the target, the first covariance estimated value is used to identify the error between the first state information and the first actual state information, and the first time stamp is used to identify the first moment when the roadside detection system sends the first data.
  • The second data includes the second state process calibration value, the second motion model, the second state information, the second covariance estimated value, and the second time stamp. The second state process calibration value is used to optimize the second covariance estimated value, the second motion model is used to identify the second motion state of the target, the second state information is used to identify the second motion feature of the target, the second covariance estimated value is used to identify the error between the second state information and the second actual state information, and the second time stamp is used to identify the second moment when the vehicle-side detection system sends the second data.
  • Based on the above solution, the fusion device can receive the first state process calibration value and the first motion model from the roadside fusion module, and the first state process calibration value can optimize the first covariance estimated value so that the measurement error is characterized more accurately. Further, the fusion device also receives the second state process calibration value and the second motion model from the vehicle-side fusion module, and the second state process calibration value can optimize the second covariance estimated value, again characterizing the measurement error more accurately. This helps to improve the accuracy of the obtained first fusion data of the random target.
  • The first state predicted value and the first covariance predicted value of the random target at the fusion moment can be obtained according to the first time stamp, the first state process calibration value, the first state information, and the first covariance estimated value; the second state predicted value and the second covariance predicted value of the random target at the fusion moment can be obtained according to the second time stamp, the second state process calibration value, the second state information, and the second motion model; the third state predicted value and the third covariance predicted value of the target are predicted according to the first motion model and the second fusion data of the previous frame of the target; the fourth state predicted value and the fourth covariance predicted value of the target are predicted according to the second motion model and the second fusion data of the previous frame of the random target; the first filtered estimated value and the first filtered covariance estimated value are obtained according to the first state predicted value, the first covariance predicted value, the third state predicted value, and the third covariance predicted value; and the second filtered estimated value and the second filtered covariance estimated value are obtained according to the second state predicted value, the second covariance predicted value, the fourth state predicted value, and the fourth covariance predicted value.
  • When the first confidence level satisfies the first preset confidence level and the second confidence level does not satisfy the second preset confidence level, the first fusion data is obtained according to the first filtered estimated value and the first filtered covariance estimated value; when the second confidence level satisfies the second preset confidence level and the first confidence level does not satisfy the first preset confidence level, the first fusion data is obtained according to the second filtered estimated value and the second filtered covariance estimated value; when the first confidence level satisfies the first preset confidence level and the second confidence level satisfies the second preset confidence level, the first fusion data is obtained according to the first filtered estimated value, the first filtered covariance estimated value, the second filtered estimated value, and the second filtered covariance estimated value; and when the first confidence level does not satisfy the first preset confidence level and the second confidence level does not satisfy the second preset confidence level, the first fusion data is obtained according to the second fusion data of the previous frame. A sketch of these four cases follows.
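  • A minimal sketch of the four cases, assuming scalar confidence values, preset thresholds, and an inverse-covariance weighting for the combined case (the weighting rule is an illustrative assumption, not stated by the application):

```python
import numpy as np

def fuse(est1, cov1, conf1, est2, cov2, conf2, prev_fusion, thr1, thr2):
    ok1, ok2 = conf1 >= thr1, conf2 >= thr2
    if ok1 and not ok2:
        return est1, cov1                 # only the roadside branch is trusted
    if ok2 and not ok1:
        return est2, cov2                 # only the vehicle-side branch is trusted
    if ok1 and ok2:                       # combine both filtered estimates
        w1, w2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
        cov = np.linalg.inv(w1 + w2)
        return cov @ (w1 @ est1 + w2 @ est2), cov
    return prev_fusion                    # fall back to the previous frame's fusion data
```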
  • the present application provides a detection system, which is used to implement the first aspect or any one of the methods in the first aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • The functions may be implemented by hardware, or by hardware executing corresponding software.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • The detection system may be a roadside detection system or a vehicle-side detection system in a vehicle-road cooperative communication system, or a module usable in a roadside detection system or a vehicle-side detection system, such as a chip, a chip system, or a circuit.
  • the detection system may include: a transceiver and at least one processor.
  • the processor may be configured to support the detection system to perform the corresponding functions shown above, and the transceiver is used to support communication between the detection system and the fusion device or other detection systems.
  • the transceiver may be an independent receiver, an independent transmitter, a transceiver integrating transceiver functions, or an interface circuit.
  • the detection system may further include a memory, which may be coupled with the processor, and store necessary program instructions and data of the detection system.
  • The processor is used to obtain data from the detection system, where the data includes one or more of the state process calibration value, the motion model, the state information, the covariance estimated value, and the time stamp; the state process calibration value is used to optimize the covariance estimated value, the motion model is used to identify the motion state of the target, the state information is used to identify the motion characteristics of the target, the covariance estimated value is used to identify the error between the state information and the actual state information, and the time stamp is used to identify the moment when the detection system sends the data.
  • the transceiver is used to send data to the fusion device.
  • the target includes a cooperative target;
  • The processor is specifically configured to: obtain the actual position, at a preset moment, of the cooperative target conforming to the motion model, and the estimated position of the cooperative target at the preset moment; determine, according to the estimated position and the motion model, the estimated position of the cooperative target after m steps, where m is a positive integer; determine the estimation error corresponding to the m steps according to the actual position and the estimated position after m steps; and obtain the state process calibration value corresponding to the motion model according to the estimation error.
  • The processor is specifically configured to: acquire the n−m estimation errors obtained in each of L cycles, where n is an integer greater than 1 and L is a positive integer; and determine the variance of the L×(n−m) estimation errors as the state process calibration value corresponding to the motion model.
  • the motion model includes any one of a linear motion model, a left-turn motion model, or a right-turn motion model.
  • the target further includes a random target; the processor is further configured to: acquire sampling data of the random target; and determine state information according to the sampling data.
  • The transceiver is specifically used to: transmit electromagnetic wave signals to the detection area, where the detection area includes random targets; and receive echo signals from the detection area, where the echo signals are obtained after the electromagnetic wave signals are reflected by the random targets.
  • the processor is specifically configured to: determine the state information according to the echo signal.
  • the present application provides a fusion device, which is used to implement the second aspect or any one of the methods in the second aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • The functions may be implemented by hardware, or by hardware executing corresponding software.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the fusion device may be a fusion device in a vehicle-road cooperative communication system, or a module that can be used in the fusion device, such as a chip or a chip system or a circuit.
  • the fusion device may include: a transceiver and at least one processor.
  • the processor may be configured to support the fusion device to perform the corresponding functions shown above, and the transceiver is used to support communication between the fusion device and a detection system (such as a roadside detection system or a vehicle side detection system).
  • the transceiver may be an independent receiver, an independent transmitter, a transceiver integrating transceiver functions, or an interface circuit.
  • the fusion device may further include a memory, which may be coupled with the processor, and store necessary program instructions and data of the fusion device.
  • The transceiver is used to obtain the first data from the roadside detection system and the second data from the vehicle-side detection system.
  • The first data includes one or more of the first state process calibration value, the first motion model, the first state information, the first covariance estimated value, and the first time stamp. The first state process calibration value is used to optimize the first covariance estimated value, the first motion model is used to identify the first motion state of the target, the first state information is used to identify the first motion feature of the target, the first covariance estimated value is used to identify the error between the first state information and the first actual state information, and the first time stamp is used to identify the first moment when the roadside detection system sends the first data.
  • The second data includes one or more of the second state process calibration value, the second motion model, the second state information, the second covariance estimated value, and the second time stamp. The second state process calibration value is used to optimize the second covariance estimated value, the second motion model is used to identify the second motion state of the target, the second state information is used to identify the second motion feature of the target, the second covariance estimated value is used to identify the error between the second state information and the second actual state information, and the second time stamp is used to identify the second moment when the vehicle-side detection system sends the second data.
  • The processor is used to obtain the first fusion data of the random target according to the first data and the second data.
  • The processor is configured to: obtain the first state predicted value and the first covariance predicted value of the random target at the fusion moment according to the first time stamp, the first state process calibration value, the first state information, and the first covariance estimated value; obtain the second state predicted value and the second covariance predicted value of the random target at the fusion moment according to the second time stamp, the second state process calibration value, the second state information, and the second motion model; predict the third state predicted value and the third covariance predicted value of the target according to the first motion model and the second fusion data of the previous frame of the target; predict the fourth state predicted value and the fourth covariance predicted value of the target according to the second motion model and the second fusion data of the previous frame of the random target; obtain the first filtered estimated value and the first filtered covariance estimated value according to the first state predicted value, the first covariance predicted value, the third state predicted value, and the third covariance predicted value; and obtain the second filtered estimated value and the second filtered covariance estimated value according to the second state predicted value, the second covariance predicted value, the fourth state predicted value, and the fourth covariance predicted value.
  • The processor is specifically configured to: when the first confidence level satisfies the first preset confidence level and the second confidence level does not satisfy the second preset confidence level, obtain the first fusion data according to the first filtered estimated value and the first filtered covariance estimated value; when the second confidence level satisfies the second preset confidence level and the first confidence level does not satisfy the first preset confidence level, obtain the first fusion data according to the second filtered estimated value and the second filtered covariance estimated value; when the first confidence level satisfies the first preset confidence level and the second confidence level satisfies the second preset confidence level, obtain the first fusion data according to the first filtered estimated value, the first filtered covariance estimated value, the second filtered estimated value, and the second filtered covariance estimated value; and when the first confidence level does not satisfy the first preset confidence level and the second confidence level does not satisfy the second preset confidence level, obtain the first fusion data according to the second fusion data of the previous frame.
  • the present application provides a detection system, which is used to implement the first aspect or any one of the methods in the first aspect, and includes corresponding functional modules, respectively used to implement the steps in the above method.
  • The functions may be implemented by hardware, or by hardware executing corresponding software.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • The detection system can be a roadside detection system or a vehicle-side detection system, and can include a processing module and a transceiver module; these modules can implement the corresponding functions of the roadside detection system or the vehicle-side detection system in the above method examples.
  • the present application provides a fusion device, which is used to implement the second aspect or any one of the methods in the second aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • The functions may be implemented by hardware, or by hardware executing corresponding software.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the fusion device may include a processing module and a transceiver module, and these modules may perform corresponding functions of the fusion device in the above method example.
  • the present application provides a communication system, which includes a detection system (such as a roadside detection system and a vehicle side detection system) and a fusion device.
  • The detection system may be used to implement the first aspect or any one of the methods in the first aspect, and the fusion device may be used to implement the second aspect or any one of the methods in the second aspect.
  • The present application provides a vehicle, where the vehicle includes a vehicle-side detection system and/or a fusion device.
  • The vehicle-side detection system may be used to implement the first aspect or any one of the methods in the first aspect.
  • the fusion device may be used to implement the second aspect or any one of the methods in the second aspect.
  • The present application provides a computer-readable storage medium in which a computer program or instructions are stored; when the computer program or instructions are executed by a processor, the detection system performs the method in the above first aspect or any possible implementation of the first aspect.
  • The present application provides a computer program product including a computer program or instructions; when the computer program or instructions are executed by a processor, the detection system performs the method in the above first aspect or any possible implementation of the first aspect.
  • The present application provides a chip including a processor. The processor is coupled with a memory and is used to execute a computer program or instructions stored in the memory, so that the chip implements the method in any one of the first aspect or the second aspect, or in any possible implementation of either aspect.
  • FIG. 1a is a schematic diagram of a communication system architecture provided by the present application.
  • FIG. 1b is a schematic diagram of another communication system architecture provided by the present application.
  • FIG. 2 is a schematic diagram of the principle of a radar detecting a target provided by the present application.
  • FIG. 3 is a schematic flowchart of a communication method for vehicle-road coordination provided by the present application.
  • FIG. 4 is a schematic flowchart of a method for obtaining a first state process calibration value and a first motion model provided by the present application.
  • FIG. 5 is a schematic flowchart of a method for a roadside fusion module to obtain the first actual positions of a cooperative target at n different first moments provided by the present application.
  • FIG. 6a is a schematic flowchart of a method for a roadside detection system to obtain first state information provided by the present application.
  • FIG. 6b is a schematic flowchart of another method for a roadside detection system to obtain first state information provided by the present application.
  • FIG. 7a is a schematic flowchart of another method for a roadside detection system to obtain the first state information of a target provided by the present application.
  • FIG. 7b is a schematic flowchart of another method for a roadside detection system to obtain the first state information of a target provided by the present application.
  • FIG. 8 is a schematic flowchart of a data processing method for vehicle-road coordination provided by the present application.
  • FIG. 9 is a schematic flowchart of a method for determining the first fusion data of a random target provided by the present application.
  • FIG. 10 is a schematic flowchart of a method for obtaining the first fusion data based on the first confidence level and the second confidence level provided by the present application.
  • FIG. 11 is a schematic structural diagram of a detection system provided by the present application.
  • FIG. 12 is a schematic structural diagram of a fusion device provided by the present application.
  • FIG. 13 is a schematic structural diagram of a detection system provided by the present application.
  • FIG. 14 is a schematic structural diagram of a fusion device provided by the present application.
  • A cooperative target generally means that the real position information of the detected target can be obtained through cooperative channels in addition to direct measurement by sensors.
  • For example, the position of a fixed target can be obtained in advance; as another example, a cooperative target may report its current position information wirelessly, or the current position information of the cooperative target can be obtained by measuring with a measuring instrument.
  • A non-cooperative target generally means that the real position information of the detected target can only be obtained by direct sensor measurement, and no other technical means is available.
  • Covariance is used to measure the overall error of two variables.
  • the two variables may be, for example, a predicted value and an actual value.
  • The covariance is usually expressed as a matrix and typically serves as an intermediate parameter.
  • A Kalman filter is an efficient recursive filter (autoregressive filter) that can estimate the state of a dynamic system from a series of incomplete and noisy measurements. Kalman filtering uses the values of measurements at different times, taking into account their joint distribution at each time, to generate an estimate of the unknown variables, and is therefore more accurate than estimation based on a single measurement alone.
  • Kalman filtering is essentially a data fusion algorithm that fuses data with the same measurement purpose, from different sensors, and possibly with different units, to obtain a more accurate measurement value.
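  • For reference, one predict/update cycle of a textbook linear Kalman filter is sketched below; this is the standard algorithm, not the specific filter of this application.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """x, P: previous state estimate and covariance; z: new measurement;
    F, H: state-transition and measurement matrices; Q, R: process and
    measurement noise covariances."""
    # Predict: propagate the state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```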
  • Image fusion is an image processing technology in which image data about the same target, collected through multi-source channels, is processed and computed with specific algorithms to maximally extract the beneficial information in each channel, finally fusing a high-quality image (e.g., in brightness, clarity, and color); the fused image is more accurate than the original images.
  • FIG. 1 a is a schematic diagram of a possible communication system architecture provided in this application.
  • the communication system may include a vehicle side detection system and a roadside detection system, and the vehicle side detection system and the roadside detection system may communicate through a sidelink (SL) air interface or a Uu air interface.
  • the vehicle side detection system can be installed on vehicles, including but not limited to unmanned vehicles, smart vehicles, electric vehicles, or digital vehicles.
  • the roadside detection system can be installed on the roadside infrastructure, which includes but not limited to traffic lights, traffic cameras, or roadside units (RSU), etc.
  • Figure 1a is introduced taking traffic lights as an example of the roadside infrastructure.
  • The vehicle-side detection system can obtain measurement information such as latitude and longitude, speed, orientation, and the distance of surrounding objects in real time or periodically, and thereby realize assisted driving or automatic driving of the vehicle.
  • For example, the latitude and longitude can be used to determine the position of the vehicle; the speed and orientation can be used to determine its future driving direction and destination; and the distance of surrounding objects can be used to determine the number and density of obstacles around the vehicle.
  • the vehicle-side detection system may include a vehicle-mounted perception module and a vehicle-mounted fusion module, as shown in FIG. 1b.
  • The on-vehicle sensing module may be a sensor (for the function of the sensor, refer to the introduction below) arranged around the body of the vehicle (for example, at the front left, front right, left, right, rear left, and rear right of the vehicle).
  • the location where the vehicle perception module is installed in the vehicle is not limited.
  • The on-vehicle fusion module can be, for example, a processor in the vehicle, a domain controller in the vehicle, an electronic control unit (ECU) in the vehicle, or another chip installed in the vehicle. The ECU can also be called the "driving computer", "on-board computer", "vehicle-specific microcomputer controller", or "lower computer", and is one of the core components of a vehicle.
  • the roadside detection system may include a roadside perception module and a roadside fusion module, as shown in FIG. 1b.
  • the roadside perception module can be used to collect obstacle information, pedestrian information, signal light information, vehicle flow information and traffic sign information in real time or periodically.
  • the roadside sensing module may be, for example, a sensor.
  • the roadside fusion module can be, for example, a chip or a processor.
  • the communication system may also include a cloud server.
  • a cloud server can be a single server or a server cluster composed of multiple servers.
  • The cloud server may also be called the cloud, cloud end, cloud server, cloud controller, or Internet of Vehicles server. It can also be understood that a cloud server is a general term for devices with data processing capabilities, such as physical devices like hosts or processors, virtual devices like virtual machines or containers, and chips or integrated circuits.
  • The above-mentioned communication system may be an intelligent vehicle infrastructure cooperative system (IVICS), referred to as a vehicle-infrastructure cooperative system.
  • the above-mentioned communication system can be applied in areas such as unmanned driving, automatic driving, assisted driving, intelligent driving, connected vehicles, surveying and mapping, or security monitoring.
  • The above application scenarios are just examples; the method can also be applied to various other scenarios, for example, the scenario of an automated guided vehicle (AGV). An AGV is equipped with automatic navigation devices such as electromagnetic or optical devices, can travel along a specified navigation path, and is a transport vehicle with safety protection and various transfer functions. In the AGV scenario, the vehicle-side detection system can be installed on the AGV, and the roadside detection system can be installed on roadside equipment.
  • the fusion device may be installed on a vehicle, or a processor in the vehicle may serve as the fusion device, or an ECU in the vehicle may serve as the fusion device, or a domain controller in the vehicle may serve as the fusion device.
  • the fusion device can receive the information transmitted from the roadside detection system through the on board unit (OBU) in the vehicle.
  • OBU refers to a communication device using dedicated short range communication (DSRC) technology, which can be used for communication between the vehicle and the outside world.
  • the fusion device can also be installed on the roadside infrastructure.
  • The roadside infrastructure can communicate with vehicles through vehicle-to-everything (V2X) communication.
  • The fusion device and the roadside infrastructure may communicate through a controller area network (CAN) bus, Ethernet, or wirelessly.
  • the fusion device may also be installed on a cloud server, or the cloud server may serve as the fusion device.
  • the cloud server and the roadside detection system can communicate wirelessly, and the cloud server and the vehicle side detection system can also communicate wirelessly.
  • Fig. 1b is an example in which the fusion device is installed on the vehicle side.
  • Sensors can be divided into two categories according to their sensing methods, namely passive sensing sensors and active sensing sensors.
  • passive sensing sensors mainly rely on the radiation information of the external environment.
  • Active sensing sensors are used to sense the environment by actively emitting energy waves.
  • Passive sensing sensors and active sensing sensors are introduced in detail as follows.
  • The passive sensing sensor may be, for example, a camera (also called a webcam or video camera), and the accuracy of the camera's sensing results mainly depends on the image processing and classification algorithms.
  • the camera includes any camera (for example, a static camera, a video camera, etc.) for acquiring images of the environment in which the vehicle is located.
  • the camera may be configured to detect visible light, referred to as a visible light camera.
  • A visible light camera uses a charge-coupled device (CCD) or a standard complementary metal-oxide semiconductor (CMOS) to obtain images corresponding to visible light.
  • the camera may also be configured to detect light from other parts of the spectrum, such as infrared light, and may be referred to as an infrared camera.
  • An infrared camera can use a CCD or CMOS with a filter that passes only light in the visible wavelength band and a set infrared wavelength band.
  • the active perception sensor may be radar.
  • the radar can sense the fan-shaped area shown in the solid line box, and the fan-shaped area can be the radar sensing area (or called the radar detection area).
  • The radar transmits electromagnetic wave signals through an antenna and receives the echo signals formed when targets reflect the electromagnetic wave signals; it amplifies and down-converts the echo signals to obtain information such as the relative distance, relative speed, and angle between the vehicle and the target.
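  • As a worked note on how such measurements arise (standard radar relations, not specific to this application): with c the speed of light, τ the round-trip delay of the echo, f_d the Doppler shift, and λ the carrier wavelength,

```latex
R = \frac{c\,\tau}{2}, \qquad v = \frac{f_d\,\lambda}{2}
```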
  • Because the roadside data transmitted by the roadside detection system and the vehicle-side data of the vehicle detection system to be fused may not belong to the same period, the fused data may be distorted.
  • the present application provides a communication method for vehicle-road coordination.
  • This vehicle-road coordination communication method can improve the accuracy of the data obtained by fusing the data from the roadside detection system with the data from the vehicle-side detection system.
  • the communication method can be applied to the communication system shown in FIG. 1a above, and the method can be executed by the above-mentioned roadside detection system, or can also be executed by the above-mentioned vehicle-side detection system.
  • FIG. 3 is a schematic flow chart of a communication method for vehicle-infrastructure coordination provided by the present application.
  • The method is described below taking implementation by a roadside detection system as an example, and includes the following steps:
  • Step 301 the roadside detection system acquires first data.
  • The first data includes one or more of the first state process calibration value, the first motion model, the first state information, the first covariance estimated value, and the first time stamp.
  • The first state process calibration value is used to optimize the first covariance estimated value (that is, the fusion device can optimize the first covariance estimated value based on the first state process calibration value during the data fusion process), so as to characterize the measurement error of the detection system more accurately; the first motion model is used to identify the motion state of the cooperative target.
  • the first state process calibration value and the first motion model may be pre-acquired or pre-stored by the roadside detection system.
  • For a possible implementation in which the roadside detection system obtains the first state process calibration value and the first motion model, refer to the introduction of FIG. 4 below; details are not repeated here.
  • the first state information is used to identify the motion characteristics of the random target.
  • the first state information may include a first position and/or a first velocity of the random target, and the like.
  • For a possible implementation of acquiring the first state information, refer to the introduction of FIG. 6a and FIG. 6b below; details are not repeated here.
  • the first covariance estimate is used to identify a statistical error between the first state information and the first actual state information.
  • In an initial state, the roadside detection system may be preset with first initial state information.
  • The first covariance estimated value usually converges continuously as the filtering proceeds; when it converges to a certain value, the first state information can be considered to coincide with the first actual state information.
  • the first time stamp is used to identify the moment when the roadside detection system sends the first data. It can also be understood that the first time stamp is a time stamp stamped when the roadside detection system sends the first data.
  • Step 302 the roadside detection system sends the first data to the fusion device.
  • Correspondingly, the fusion device may receive the first data from the roadside detection system.
  • For example, the fusion device can receive the first data from the roadside detection system through the OBU in the vehicle, i.e., via V2X communication between the roadside detection system and the vehicle.
  • the roadside detection system can send the first data to the cloud server.
  • The first state process calibration value and the first motion model may be sent by the roadside detection system to the fusion device during the process of establishing a connection between the roadside detection system and the fusion device; or when the roadside detection system sends the first state information, the first covariance estimated value, and the first time stamp to the fusion device for the first time; or before the roadside detection system sends the first state information, the first covariance estimated value, and the first time stamp for the first time. This is not limited in this application.
  • Based on the above solution, the fusion device can optimize the first covariance estimated value based on the first state process calibration value, so as to characterize the measurement error more accurately. Further, the fusion device can determine the first confidence level according to the first state process calibration value and the first motion model, which further improves the accuracy of the first data determined by the fusion device and the accuracy of the first fusion data it outputs.
  • a possible implementation manner of acquiring the first data is exemplarily shown as follows.
  • FIG. 4 is a schematic flowchart of a method for obtaining the calibration value of the first state process and the first motion model provided by the present application.
  • In FIG. 4, the preset moment is exemplified by the first moment, the motion model by the first motion model, and the state process calibration value by the first state process calibration value; the actual position obtained at the first moment is called the first actual position, the estimated position obtained at the first moment is called the first estimated position, the estimated position after M steps is called the second estimated position, and the estimation error corresponding to the M steps is called the first estimation error.
  • the method may include the steps of:
  • Step 401 the roadside fusion module acquires the first actual position, at the first moment, of the cooperative target conforming to the first motion model.
  • The cooperative target moves according to the first motion model, such as a linear motion model, a left-turn motion model, or a right-turn motion model.
  • The linear motion model can be understood as the cooperative target moving in a straight line at a constant speed v; the left-turn motion model can be understood as the cooperative target accelerating in a straight line at speed v with acceleration a, and the right-turn motion model as the cooperative target decelerating in a straight line at speed v with acceleration a. Alternatively, the left-turn motion model can be understood as the cooperative target decelerating at speed v with acceleration a, and the right-turn motion model as the cooperative target accelerating in a straight line at speed v with acceleration a.
  • the method includes the following steps:
  • Step 501 the surveying instrument measures the first actual positions corresponding to n different first moments of the cooperative target conforming to the first motion model, where n is an integer greater than 1.
  • The surveying instrument may be a high-precision surveying instrument using real-time kinematic (RTK) technology, and may be installed on the cooperative target to measure its actual position.
  • When the cooperative target moves according to the first motion model, the measuring instrument can measure the first actual positions [x1, x2, x3, …, xn] of the cooperative target at the n first moments [k1, k2, k3, …, kn]. It can also be understood that at time k1 the measuring instrument measures the first actual position of the cooperative target as x1; at time k2, as x2; and so on, until at time kn it measures the first actual position as xn.
  • The measuring instrument can thus obtain the relationship among the first motion model, the n first moments, and the n first actual positions, as shown in Table 1.
  • Table 1 lists, for each first motion model, the n first moments and the corresponding n first actual positions. Expressing the relationship in the form of a table is only an example; other forms of correspondence may also be used, which is not limited in this application.
  • the above table 1 may also be three independent tables, that is, one motion model corresponds to one table.
  • Step 502 the surveying instrument sends the obtained first actual positions of the cooperative target corresponding to the n different first moments to the roadside fusion module.
  • the roadside fusion module receives the first actual positions respectively corresponding to n different first moments from the measuring instrument.
  • The measuring instrument may send Table 1 to the roadside fusion module. If the measuring instrument obtained three tables in step 501, then in step 502 it may send the three tables to the roadside fusion module. It should be noted that the three tables may be sent together or in three separate transmissions, which is not limited in this application.
  • the roadside fusion module can obtain the first actual positions respectively corresponding to n different first moments when the cooperative target moves according to the first motion model.
  • Step 402 the roadside perception module obtains the first estimated position of the cooperative target conforming to the first motion model at the first moment, and sends the first estimated position at the first moment to the roadside fusion module.
  • For example, the roadside perception module can record the first estimated positions of the cooperative target conforming to the first motion model at the n different first moments [k1, k2, k3, …, kn], denoted here as [x̂1, x̂2, …, x̂n]. It can also be understood that at time k1 the roadside perception module records the first estimated position of the cooperative target as x̂1; at time k2, as x̂2; and so on, until at time kn it records the first estimated position as x̂n.
  • When the roadside sensing module is a radar, the distance of the cooperative target can be measured by electromagnetic waves; combined with multi-antenna technology, the angle of arrival of the electromagnetic waves can be obtained, and the distance and angle can be used to locate the target in space, thereby obtaining the first estimated positions corresponding to the n different first moments.
  • When the roadside perception module is a camera, a correspondence between the image pixel coordinate system and the world coordinate system can be constructed, so that the first estimated positions corresponding to the n different first moments can be obtained.
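  • One common way to construct such a correspondence is the standard pinhole-camera relation (assumed here for illustration), which maps a world point to pixel coordinates through the intrinsic matrix K and the extrinsic parameters [R | t]:

```latex
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  = K \, [\, R \mid t \,]
    \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
```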
  • Further, the roadside perception module sends the first estimated positions of the cooperative target conforming to the first motion model at the n different first moments to the roadside fusion module.
  • The roadside detection system may include P roadside sensing modules, each of which can perform the above step 402 and obtain n first estimated positions; the P roadside perception modules may all send their recorded n first estimated positions to the roadside fusion module.
  • The roadside fusion module may receive n first estimated positions from each of the P roadside sensing modules, obtaining P×n first estimated positions.
  • The roadside fusion module may first take a weighted average of the first estimated positions from the P roadside perception modules to obtain the first estimated position at each first moment. It can also be understood that at time k1 the roadside fusion module takes a weighted average of the P first estimated positions to obtain the first estimated position at time k1 (denoted here as x̂1); at time k2, the weighted average gives x̂2; and so on, until at time kn the weighted average gives x̂n. In other words, at the n first moments [k1, k2, k3, …, kn], the first estimated positions obtained by the roadside fusion module are [x̂1, x̂2, …, x̂n].
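  • A minimal sketch of this weighted averaging (the weights are an assumption, e.g. one weight per roadside perception module):

```python
import numpy as np

def fuse_positions(positions, weights):
    """Weighted average of the P first estimated positions at one moment."""
    return np.average(np.asarray(positions, dtype=float),
                      axis=0, weights=np.asarray(weights, dtype=float))

# e.g. three roadside perception modules reporting a position at time k1:
x_hat_k1 = fuse_positions([12.1, 11.8, 12.4], weights=[0.5, 0.3, 0.2])
```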
  • Step 403 the roadside fusion module determines the second estimated position of the cooperative target after M steps according to the first estimated position and the first motion model.
  • When the first motion model is the linear motion model, the second estimated position of the cooperative target after M steps can be represented by the following Formula 1.
  • When the first motion model is the left-turn motion model, the second estimated position of the cooperative target after M steps can be represented by the following Formula 2. It should be understood that, in this example, the left-turn motion model is taken as accelerated motion at speed v with acceleration a.
  • When the first motion model is the right-turn motion model, the second estimated position of the cooperative target after M steps can be represented by the following Formula 3. It should be understood that, in this example, the right-turn motion model is taken as decelerated linear motion at speed v with acceleration a.
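  • Formulas 1 to 3 appear as images in the source and do not survive extraction. A plausible reconstruction from the surrounding definitions, with T the assumed sampling interval and the hatted symbols the estimated positions, is:

```latex
\hat{x}_{k+M} = \hat{x}_k + vMT                          \quad \text{(Formula 1, uniform motion)}
\hat{x}_{k+M} = \hat{x}_k + vMT + \tfrac{1}{2}a(MT)^2    \quad \text{(Formula 2, accelerated)}
\hat{x}_{k+M} = \hat{x}_k + vMT - \tfrac{1}{2}a(MT)^2    \quad \text{(Formula 3, decelerated)}
```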
  • Step 404 the roadside fusion module determines the first estimation error of the M-step estimation according to the first actual position and the second estimated position.
  • Step 405 the roadside fusion module obtains the first state process calibration value corresponding to the first motion model according to the first estimation error.
  • The above steps 401 to 405 are executed in a loop L times; each execution of steps 401 to 405 yields n−M different first estimation errors.
  • The roadside fusion module determines the variance of the L×(n−M) first estimation errors obtained over the L cycles as the first state process calibration value corresponding to the first motion model. When L is greater than 1, this helps improve the accuracy of the first state process calibration value.
  • For example, the first state process calibration value Qm can be represented by the following Formula 4.
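  • Formula 4 is likewise an image in the source. Since the calibration value is defined as the variance of the L×(n−M) first estimation errors e_i, a plausible form is:

```latex
Q_m = \frac{1}{L(n-M)} \sum_{i=1}^{L(n-M)} \left( e_i - \bar{e} \right)^2,
\qquad
\bar{e} = \frac{1}{L(n-M)} \sum_{i=1}^{L(n-M)} e_i
```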
  • When the first motion model is the linear motion model, the first state process calibration value corresponding to the linear motion model can be obtained; when the first motion model is the left-turn motion model, the first state process calibration value corresponding to the left-turn motion model can be obtained; when the first motion model is the right-turn motion model, the first state process calibration value corresponding to the right-turn motion model can be obtained, as shown in Table 2 below.
  • For different motion models, the corresponding first state process calibration values may also be different.
  • The acquisition of the first state information is introduced below for the following two cases.
  • The roadside sensing module is a radar, and the number of radars is P, where P is a positive integer.
  • FIG. 6a is a schematic flowchart of a method for a roadside detection system to acquire the first state information provided in this application. In FIG. 6a, the transmitted electromagnetic wave signal is exemplified by the first electromagnetic wave signal, and the sampling data by the first echo signal.
  • the method includes the following steps:
  • Step 601 the radar transmits a first electromagnetic wave signal to a detection area.
  • the detection area of the radar can refer to the introduction of the above-mentioned FIG. 2 .
  • Step 602 the radar receives a first echo signal from the detection area.
  • The first echo signal is obtained after the first electromagnetic wave signal is reflected by a random target. It should be understood that the first echo signal is the detection data (also called sampling data) of the random target collected by the radar.
  • Step 603 the radar sends the first echo signal to the roadside fusion module.
  • the roadside fusion module receives the first echo signal from the radar.
  • For example, the roadside fusion module may receive the first echo signals (i.e., sampled data 1, sampled data 2, …, sampled data P) from the P radars, obtaining P first echo signals.
  • Step 604 the roadside fusion module determines the first state information of the random target according to the received P first echo signals.
  • Since one piece of state information can be obtained from one first echo signal, the roadside fusion module can obtain P pieces of state information from the P first echo signals, and take a weighted average of the P pieces of state information to obtain the first state information of the random target.
  • the roadside fusion module may perform a weighted average on the obtained P positions to obtain the first position included in the first state information.
  • FIG. 6b is a schematic flowchart of another method for the roadside detection system to obtain the first state information provided in this application. In FIG. 6b, the transmitted electromagnetic wave signal is exemplified by the first electromagnetic wave signal, and the sampling data by the first echo signal. The method includes the following steps:
  • Step 611 the radar transmits a first electromagnetic wave signal to the detection area.
  • the detection area of the radar can refer to the introduction of the above-mentioned FIG. 2 .
  • Step 612 the radar receives the first echo signal from the detection area.
  • the first echo signal is obtained after the first electromagnetic wave signal is reflected by a random target.
  • Step 613 the radar determines first state information of a random target according to the received first echo signal.
  • Step 614 the radar sends the state information to the roadside fusion module.
  • Correspondingly, the roadside fusion module receives the state information from the radar.
  • For example, the roadside fusion module can receive state information from the P radars, obtaining P pieces of state information.
  • Step 615 the roadside fusion module determines the first state information of the random target according to the received P pieces of state information.
  • For example, the roadside fusion module takes a weighted average of the P pieces of state information from the P radars to obtain the first state information of the random target.
• Case 2: the roadside perception module is a camera.
• FIG. 7a is a schematic flowchart of another method for obtaining the first state information of a target by the roadside detection system provided in the present application. In FIG. 7a, the sampled data is taken as the first image as an example. The method includes the following steps:
• Step 701: the camera captures a first image of a random target.
• Step 702: the camera sends the first image to the roadside fusion module. Correspondingly, the roadside fusion module receives the first image from the camera. When there are P cameras, the roadside fusion module may receive first images from the P cameras and obtain P first images.
• Step 703: the roadside fusion module acquires the first state information of the random target according to the P first images. For example, the roadside fusion module may fuse the P first images and determine the first state information of the random target according to the fused first images.
• FIG. 7b is a schematic flowchart of another method for obtaining the first state information of a target by the roadside detection system provided in the present application. The method includes the following steps:
• Step 711: the camera captures a first image of a random target.
• Step 712: the camera obtains state information of the random target according to the first image.
• Step 713: the camera sends the state information of the random target to the roadside fusion module. Correspondingly, the roadside fusion module receives the state information from the camera. When there are P cameras, the roadside fusion module may receive state information from the P cameras.
• Step 714: the roadside fusion module acquires the first state information of the random target according to the P pieces of state information from the cameras. Specifically, the roadside fusion module performs a weighted average on the P pieces of state information from the P cameras to obtain the first state information of the random target.
• The first time stamp is the moment at which the roadside fusion module sends the first data to the fusion device through the air interface.
• The first covariance estimated value identifies the statistical error between the first state information and the actual state information, and can be represented by Formula 5 below.
  • the first state equation may be a motion model of a random target detected by the roadside detection system and indicated to the fusion device; or may be determined by the fusion device according to the first state information.
  • the motion model of the random target may be a linear motion model, a left-turn motion model, or a right-turn motion model.
  • the roadside fusion module can obtain the first state information, the first estimated covariance value, the first time stamp, the first state process calibration value and the first motion model.
• Correspondingly, the vehicle side fusion module can obtain the second motion model, the second state process calibration value, the second state information, the second covariance estimated value and the second time stamp.
• That is, in the above description, the roadside fusion module can be replaced by the vehicle side fusion module, the roadside perception module can be replaced by the vehicle side perception module, the first motion model can be replaced by the second motion model, the first state process calibration value can be replaced by the second state process calibration value, and the first state information can be replaced by the second state information.
• FIG. 8 is a schematic flowchart of a data processing method for vehicle-road coordination provided by the present application. This method can be applied to the communication system shown in FIG. 1a above. The method includes the following steps:
• Step 801: the roadside detection system sends first data to the fusion device. Correspondingly, the fusion device receives the first data from the roadside detection system.
• The first data includes one or more of the first state process calibration value, the first motion model, the first state information, the first covariance estimated value, and the first time stamp.
• Step 802: the vehicle side detection system sends second data to the fusion device. Correspondingly, the fusion device receives the second data from the vehicle side detection system.
• The second data includes a second state process calibration value, a second motion model, second state information, a second covariance estimated value, and a second time stamp.
• The second state process calibration value is used to optimize the second covariance estimated value (that is, the fusion device can optimize the second covariance estimated value based on the second state process calibration value during the data fusion process), so as to characterize the measurement error more accurately; the second motion model is used to identify the motion state of the target.
  • the second state process calibration value and the second motion model may be pre-acquired and stored by the vehicle side detection system.
  • the second state information is used to identify the motion characteristics of the target.
  • the second state information may include a second position and/or a second velocity of the target, and the like.
• The second covariance estimated value is used to identify the statistical error between the second state information and the actual state information. It should be noted that in the initial state the vehicle side detection system has no actual state information, and actual state information (referred to as initial state information) can be preset. The initial state information usually converges gradually as the filtering proceeds.
  • the second time stamp is used to identify the moment when the vehicle-side detection system sends the second data. It can also be understood that the second time stamp is a time stamp stamped when the vehicle side detection system sends the second data.
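For illustration, the fields carried in the first data and the second data can be grouped as follows. This sketch assumes a simple in-memory container; the field names and types are hypothetical, since no message format is prescribed here:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DetectionData:
    """Container for the fields carried in the first data (roadside) or the
    second data (vehicle side). Field names are illustrative only."""
    state_process_calibration: np.ndarray  # used to optimize the covariance estimate
    motion_model: str                      # e.g. "linear", "left_turn", "right_turn"
    state: np.ndarray                      # state information, e.g. [x, y, vx, vy]
    covariance: np.ndarray                 # covariance estimated value
    timestamp: float                       # moment at which the data is sent
```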
• It should be noted that step 801 may be performed first and then step 802, or step 802 may be performed first and then step 801, or step 801 and step 802 may be performed synchronously; this application does not limit this.
• Step 803: the fusion device obtains the first fusion data of the target according to the first data and the second data.
• Here, the first fusion data refers to the first fusion data of the random target, and includes, for example, position information of the target, state information (such as speed and direction), covariance, and the like.
• It can be understood that the fusion device receives the first state process calibration value and the first motion model from the roadside fusion module, and during the data fusion process the fusion device can use the first state process calibration value to optimize the first covariance estimated value, so that the measurement error can be characterized more accurately. The fusion device also receives the second state process calibration value and the second motion model from the vehicle side fusion module, and during the data fusion process optimizes the second covariance estimated value through the second state process calibration value, so that the measurement error can be characterized more accurately, thereby helping to improve the accuracy of the obtained first fusion data of the target.
• FIG. 9 is a schematic flowchart of a method for obtaining the first fusion data according to the first data and the second data provided in this application. The method includes the following steps:
• Step 901: the fusion device obtains the first state prediction value of the target at the fusion moment according to the first time stamp and the first state information. The first state prediction value can be expressed by Formula 6 below.
• Step 902: the fusion device obtains the second state prediction value of the target at the fusion moment according to the second time stamp and the second state information. The second state prediction value can be expressed by Formula 7 below.
• Step 901 may be performed first and then step 902, or step 902 may be performed first and then step 901, or step 901 and step 902 may be performed simultaneously. A sketch of the state prediction in steps 901 and 902 follows.
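The prediction of steps 901 and 902 can be sketched as follows, assuming a planar constant-velocity state; the exact state equation behind Formulas 6 and 7 is not reproduced, so the transition matrix below is an assumption:

```python
import numpy as np

def predict_state(state, t_stamp, t_fuse):
    """Propagate a state [x, y, vx, vy] from its time stamp to the fusion
    moment, i.e. x(k|t) = Phi(k/t) x(t), under a constant-velocity model."""
    dt = t_fuse - t_stamp
    phi = np.array([[1.0, 0.0, dt, 0.0],
                    [0.0, 1.0, 0.0, dt],
                    [0.0, 0.0, 1.0, 0.0],
                    [0.0, 0.0, 0.0, 1.0]])  # state transition matrix Phi(k/t)
    return phi @ np.asarray(state, dtype=float)
```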
• Step 903: the fusion device obtains the first covariance prediction value of the target at the fusion moment according to the first time stamp, the first state information, the first state process calibration value and the first covariance estimated value.
• For the first covariance prediction value, see the introduction to Formula 5 above, where the first covariance estimated value is the value output by the roadside detection system at time t.
• Step 904: the fusion device obtains the second covariance prediction value of the target at the fusion moment according to the second time stamp, the second state information, the second state process calibration value and the second motion model.
• The second covariance prediction value can be expressed by Formula 8 below, where k represents the k-th fusion moment, Φ_v(k/t) represents the state equation, and Q_v(k/t) is the second state process calibration value for the prediction from time t to the k-th fusion moment.
  • the second state equation may be the motion model of the target detected by the vehicle side detection system and indicated to the fusion device; or it may be determined by the fusion device according to the second state information.
• It should be noted that step 903 may be performed first and then step 904, or step 904 may be performed first and then step 903, or step 903 and step 904 may be performed simultaneously. A sketch of the covariance prediction follows.
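The covariance prediction of steps 903 and 904 can be sketched as follows, under the standard Kalman prediction form suggested by the symbols defined around Formula 8; this form is an assumption, not a verbatim reproduction of Formulas 5 and 8:

```python
import numpy as np

def predict_covariance(P, phi, Q):
    """Assumed covariance prediction P(k/t) = Phi(k/t) P(t) Phi(k/t)^T + Q(k/t),
    where Q(k/t) is the state process calibration value for the motion model."""
    return phi @ P @ phi.T + Q
```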
• Step 905: the fusion device predicts the third state prediction value and the third covariance prediction value of the target according to the first motion model and the second fusion data of the previous frame.
• The second fusion data of the previous frame includes the state information output in the previous frame, and the third state prediction value can be expressed by Formula 9. The second fusion data of the previous frame also includes the covariance estimated value output in the previous frame, and the third covariance prediction value can be expressed by Formula 10, where Q_f(k) is the process noise (generally an empirical value) and the state transition matrix in Formula 10 represents the state equation.
• It should be noted that each frame of fusion data output by the fusion device can be cached first, and the processor can retrieve the fusion data from the cache.
• Step 906: the fusion device predicts the fourth state prediction value and the fourth covariance prediction value of the target according to the second motion model and the second fusion data of the previous frame of the target.
• For step 906, reference may be made to the introduction of step 905 above, which will not be repeated here.
• Step 905 may be performed first and then step 906, or step 906 may be performed first and then step 905, or step 905 and step 906 may be performed simultaneously.
• Step 907: the fusion device obtains the first filtered estimated value and the first filtered covariance estimated value of the target according to the first state prediction value, the first covariance prediction value, the third state prediction value and the third covariance prediction value.
• The first filtered estimated value can be determined according to the first state prediction value and the third state prediction value, as expressed in Formula 11 below; the first filtered covariance estimated value can be determined according to the first covariance prediction value and the third covariance prediction value, as expressed in Formula 12 below. In Formulas 11 and 12, H is the observation matrix, K_r(k) is the Kalman gain equation, which can be expressed in Formula 13 below, and R is the noise coefficient, which is generally an empirical value.
• The first filtered estimated value of the target includes the first velocity, the first direction, the first position information, and the like of the target.
• Step 908: the fusion device obtains the second filtered estimated value and the second filtered covariance estimated value of the target according to the second state prediction value, the second covariance prediction value, the fourth state prediction value and the fourth covariance prediction value.
• The second filtered estimated value can be determined according to the second state prediction value and the fourth state prediction value, as expressed in Formula 14 below; the second filtered covariance estimated value can be determined according to the second covariance prediction value and the fourth covariance prediction value, as expressed in Formula 15 below. In Formulas 14 and 15, H is the observation matrix, and K_v(k) is the Kalman gain equation, which can be expressed in Formula 16 below.
• The second filtered estimated value of the target includes the second velocity, the second direction, the second position information, and the like of the target.
• Step 907 may be performed first and then step 908, or step 908 may be performed first and then step 907, or step 907 and step 908 may be performed simultaneously. A sketch of the filtering in steps 907 and 908 follows.
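The filtering of steps 907 and 908 can be sketched as follows, under textbook Kalman-update assumptions implied by the observation matrix H, the gain K and the noise coefficient R; how the measurement-side covariance prediction enters the innovation covariance is an assumption here:

```python
import numpy as np

def filtered_estimate(x_meas, P_meas, x_track, P_track, H, R):
    """Assumed form of the filtering in Formulas 11 to 16.

    x_track, P_track: third (or fourth) state/covariance prediction values,
        propagated from the second fusion data of the previous frame.
    x_meas, P_meas: first (or second) state/covariance prediction values,
        treated here as the measurement side.
    H: observation matrix; R: noise coefficient (an empirical value).
    """
    # Innovation covariance; the measurement side's own uncertainty is folded in.
    S = H @ P_track @ H.T + H @ P_meas @ H.T + R
    K = P_track @ H.T @ np.linalg.inv(S)               # Kalman gain, cf. Formulas 13/16
    x_filt = x_track + K @ (H @ x_meas - H @ x_track)  # cf. Formulas 11/14
    P_filt = (np.eye(len(x_track)) - K @ H) @ P_track  # cf. Formulas 12/15
    return x_filt, P_filt
```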
• Step 909: the fusion device determines the first confidence level of the first data at the fusion moment according to the first time stamp, the first state process calibration value, the first motion model and the first covariance estimated value.
• The first confidence level may also be referred to as a first confidence level prediction value. The first confidence level U_r(k) can be expressed by Formula 17 below, in which the covariance term is the sum of the first covariance estimated value and the first state process calibration value, S is a standard definition whose range can be greater than 0 and less than or equal to 95%, and X is a variable.
• Step 910: the fusion device determines the second confidence level of the second data at the fusion moment according to the second time stamp, the second state process calibration value, the second motion model and the second covariance estimated value.
• The second confidence level may also be referred to as a second confidence level prediction value. The second confidence level U_v(k) can be expressed by Formula 18 below, in which the covariance term is the sum of the second covariance estimated value and the second state process calibration value, S is a standard definition whose range can be greater than 0 and less than or equal to 95%, and X is a variable. A sketch of this confidence computation follows.
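One plausible reading of the confidence computation of steps 909 and 910 is sketched below; treating S as a symmetric bound on a one-dimensional Gaussian variable X is an assumption, since only the names and ranges of S and X are stated:

```python
import math

def confidence(cov_plus_calibration, S):
    """Hedged sketch of Formulas 17 and 18: the confidence U(k) is read as
    the probability that a zero-mean Gaussian variable X, whose variance is
    the sum of the covariance estimated value and the state process
    calibration value, falls within the standard bound S."""
    sigma = math.sqrt(cov_plus_calibration)
    return math.erf(S / (sigma * math.sqrt(2.0)))  # P(|X| <= S) for X ~ N(0, sigma^2)
```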
• Step 911: the fusion device obtains the first fusion data of the target according to the first confidence level, the second confidence level, the first filtered estimated value, the first filtered covariance estimated value, the second filtered estimated value and the second filtered covariance estimated value.
• For a possible implementation of step 911, refer to the introduction of FIG. 10 below.
• FIG. 10 is a schematic flowchart of another method for obtaining the first fusion data provided by the present application. The method includes the following steps:
• Step 1001: the fusion device determines whether the first confidence level meets the first preset confidence level, and determines whether the second confidence level meets the second preset confidence level. If the first confidence level meets the first preset confidence level and the second confidence level also meets the second preset confidence level, step 1002 is executed; if the first confidence level does not meet the first preset confidence level and the second confidence level does not meet the second preset confidence level, step 1003 is executed; if the first confidence level meets the first preset confidence level and the second confidence level does not meet the second preset confidence level, step 1004 is executed; if the first confidence level does not meet the first preset confidence level and the second confidence level meets the second preset confidence level, step 1005 is executed.
• The first preset confidence level C_r and the second preset confidence level C_v may be two preset indicators. The first preset confidence level may be the same as or different from the second preset confidence level, which is not limited in this application.
• Here, that the first confidence level meets the first preset confidence level and the second confidence level also meets the second preset confidence level can be expressed as: U_r(k) ≥ C_r and U_v(k) ≥ C_v; that the first confidence level does not meet the first preset confidence level and the second confidence level does not meet the second preset confidence level can be expressed as: U_r(k) < C_r and U_v(k) < C_v; that the first confidence level meets the first preset confidence level and the second confidence level does not meet the second preset confidence level can be expressed as: U_r(k) ≥ C_r and U_v(k) < C_v; that the first confidence level does not meet the first preset confidence level and the second confidence level meets the second preset confidence level can be expressed as: U_r(k) < C_r and U_v(k) ≥ C_v.
• Step 1002: the fusion device obtains the first fusion data according to the first filtered estimated value, the first filtered covariance estimated value, the second filtered estimated value, and the second filtered covariance estimated value.
• In this case, the target state estimated value can be expressed by Formula 19 below, and the target covariance estimated value can be expressed by Formula 20 below.
• Step 1003: the fusion device obtains the first fusion data according to the second fusion data output in the previous frame.
• In this case, the target state estimated value and the corresponding target covariance estimated value are taken from the second fusion data output in the previous frame.
• Step 1004: the fusion device obtains the first fusion data according to the first filtered estimated value and the first filtered covariance estimated value.
• In this case, the target state estimated value is the first filtered estimated value, and the corresponding target fusion covariance estimated value is the first filtered covariance estimated value.
• Step 1005: the fusion device obtains the first fusion data according to the second filtered estimated value and the second filtered covariance estimated value.
• In this case, the target state estimated value is the second filtered estimated value, and the corresponding target fusion covariance estimated value is the second filtered covariance estimated value. The four branches of steps 1002 to 1005 are sketched below.
• It can be understood that, in order to implement the functions in the above method embodiments, the detection system and the fusion device include hardware structures and/or software modules corresponding to each function.
• The present application can be implemented in the form of hardware or a combination of hardware and computer software in combination with the modules and method steps described in the embodiments disclosed in the present application. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application scenario and design constraints of the technical solution.
  • FIG. 11 is a schematic structural diagram of a possible detection system provided by the present application.
  • These detection systems can be used to realize the functions of the roadside detection system or the vehicle side detection system in the above method embodiments, and thus can also realize the beneficial effects of the above method embodiments.
• The detection system may be the roadside detection system shown in FIG. 1a or the vehicle side detection system shown in FIG. 1a; it may also be the roadside detection system shown in FIG. 1b above or the vehicle side detection system shown in FIG. 1b above; it may also be a module (such as a chip) applied to the detection system.
  • the detection system 1100 includes a processing module 1101 and a transceiver module 1102 .
  • the detection system 1100 is used to realize the functions of the roadside detection system in the above method embodiment shown in FIG. 3 or FIG. 8 .
• The processing module 1101 is used to obtain the data of the detection system, where the data includes a state process calibration value, a motion model, state information, a covariance estimated value and a time stamp. The state process calibration value is used to optimize the covariance estimated value, the motion model is used to identify the motion state of the target, the state information is used to identify the motion characteristics of the target, the covariance estimated value is used to identify the error between the state information and the actual state information, and the time stamp is used to identify the moment when the detection system sends the data. The transceiver module 1102 is used to send the data to the fusion device.
• More detailed descriptions of the processing module 1101 and the transceiver module 1102 can be obtained directly by referring to the relevant descriptions in the method embodiment shown in FIG. 3, and will not be repeated here.
• It should be understood that the processing module 1101 in the embodiment of the present application may be implemented by a processor or a processor-related circuit component, and the transceiver module 1102 may be implemented by a transceiver or a transceiver-related circuit component.
  • FIG. 12 is a schematic structural diagram of a possible fusion device provided by the present application.
  • These fusion devices can be used to realize the functions of the fusion devices in the foregoing method embodiments, and thus can also realize the beneficial effects possessed by the foregoing method embodiments.
• The fusion device may be the cloud server shown in FIG. 1a, or the processor, ECU or domain controller in the vehicle shown in FIG. 1a, or the fusion device in the roadside detection system shown in FIG. 1b above; it may also be a module (such as a chip) applied to the fusion device.
  • the fusion device 1200 includes a processing module 1201 and a transceiver module 1202 .
  • the fusion device 1200 is used to implement the functions of the fusion device in the method embodiments shown in FIG. 3 , FIG. 8 , FIG. 9 or FIG. 10 above.
• When the fusion device 1200 is used to implement the functions of the method embodiment shown in FIG. 8, the first data includes one or more of the first state process calibration value, the first motion model, the first state information, the first covariance estimated value and the first time stamp, where the first state process calibration value is used to optimize the first covariance estimated value, the first motion model is used to identify the first motion state of the target, the first state information is used to identify the first motion feature of the target, the first covariance estimated value is used to identify the error between the first state information and the first actual state information, and the first time stamp is used to identify the first moment when the roadside detection system sends the first data; the second data includes the second state process calibration value, the second motion model, the second state information, the second covariance estimated value and the second time stamp, where the second state process calibration value is used to optimize the second covariance estimated value, the second motion model is used to identify the second motion state of the target, the second state information is used to identify the second motion feature of the target, and the second covariance estimated value is used to identify the error between the second state information and the second actual state information.
• More detailed descriptions of the processing module 1201 and the transceiver module 1202 can be obtained directly by referring to the relevant descriptions in the method embodiment shown in FIG. 8, and will not be repeated here.
• It should be understood that the processing module 1201 in the embodiment of the present application may be implemented by a processor or a processor-related circuit component, and the transceiver module 1202 may be implemented by a transceiver or a transceiver-related circuit component.
  • the present application also provides a detection system 1300 .
  • the detection system 1300 may include at least one processor 1301 and a transceiver 1302 .
  • the processor 1301 and the transceiver 1302 are coupled to each other.
  • the transceiver 1302 may be an interface circuit or an input and output interface.
  • the detection system 1300 may further include a memory 1303 for storing instructions executed by the processor 1301 or storing input data required by the processor 1301 to execute the instructions or storing data generated by the processor 1301 after executing the instructions.
• The processor 1301 is used to execute the functions of the above processing module 1101, and the transceiver 1302 is used to execute the functions of the above transceiver module 1102.
  • the present application also provides a fusion device 1400 .
  • the fusion device 1400 may include at least one processor 1401 and a transceiver 1402 .
  • the processor 1401 and the transceiver 1402 are coupled to each other.
  • the transceiver 1402 may be an interface circuit or an input and output interface.
  • the fusion device 1400 may further include a memory 1403 for storing instructions executed by the processor 1401 or storing input data required by the processor 1401 to execute the instructions or storing data generated after the processor 1401 executes the instructions.
• The processor 1401 is used to execute the functions of the above processing module 1201, and the transceiver 1402 is used to execute the functions of the above transceiver module 1202.
  • the present application provides a communication system for a vehicle-road system.
• The communication system of the vehicle-road system may include one or more of the aforementioned vehicle side detection systems, one or more roadside detection systems, and a fusion device. The vehicle side detection system can implement any method on the vehicle side detection system side, the roadside detection system can implement any method on the roadside detection system side, and the fusion device can implement any method on the fusion device side.
  • the possible implementations of the roadside detection system, the vehicle side detection system, and the fusion device can be found in the introduction above, and will not be repeated here.
  • the present application provides a vehicle.
• The vehicle may include one or more of the aforementioned vehicle side detection systems and/or a fusion device. The vehicle side detection system can execute any method on the vehicle side detection system side, and the fusion device can execute any method on the fusion device side.
  • the possible implementations of the vehicle side detection system and the fusion device can be found in the above introduction, and will not be repeated here.
  • the vehicle may also include other components, such as a processor, a memory, a wireless communication device, and the like.
  • the vehicle may be, for example, an unmanned vehicle, a smart vehicle, an electric vehicle, a digital vehicle, and the like.
  • processor in the embodiments of the present application may be a central processing unit (central processing unit, CPU), and may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), field programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, transistor logic devices, hardware components or any combination thereof.
  • a general-purpose processor can be a microprocessor, or any conventional processor.
  • the method steps in the embodiments of the present application may be implemented by means of hardware, or may be implemented by means of a processor executing software instructions.
• Software instructions can be composed of corresponding software modules, and software modules can be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disks, removable hard disks, CD-ROMs, or any other form of storage medium known in the art.
• An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may also be a component of the processor.
  • the processor and storage medium can be located in the ASIC.
  • the ASIC may be located in the detection system.
  • the processor and the storage medium can also exist in the detection system as discrete components.
• All or part of the above embodiments may be implemented by software, hardware, firmware or any combination thereof.
• When implemented using software, the embodiments may be implemented in whole or in part in the form of a computer program product. A computer program product consists of one or more computer programs or instructions. When the computer programs or instructions are loaded and executed on a computer, the processes or functions of the embodiments of the present application are executed in whole or in part.
  • the computer can be a general purpose computer, special purpose computer, computer network, detection system, user equipment, or other programmable device.
• Computer programs or instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, computer programs or instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired or wireless manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media.
• Available media can be magnetic media, such as floppy disks, hard disks, and magnetic tapes; optical media, such as digital video discs (DVDs); or semiconductor media, such as solid state drives (SSDs).
• "At least one" means one or more, and "multiple" means two or more.
• "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single items or plural items.
• For example, at least one item (piece) of a, b or c can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c can be single or multiple.
  • “And/or” describes the association relationship of associated objects, indicating that there may be three types of relationships, for example, A and/or B, which can mean: A exists alone, A and B exist simultaneously, and B exists alone, where A, B can be singular or plural.
• In the text of the present application, the character "/" generally indicates that the associated objects are in an "or" relationship; in the formulas of the present application, the character "/" indicates that the associated objects are in a "division" relationship.
• In the present application, the word "exemplary" is used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" is not to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of the word "exemplary" is intended to present concepts in a specific manner, and does not constitute a limitation on the present application.

Abstract

A vehicle-road coordination communication method, a data processing method, a detection system and a fusion apparatus, which may be used in the fields of automatic driving, intelligent driving, assisted driving or the like. The vehicle-road coordination communication method comprises: acquiring data from a detection system and sending the data to a fusion apparatus, the data comprising one or more of a state process calibration value, a motion model, state information, a covariance estimation value and a time stamp, wherein the state process calibration value is used for the fusion apparatus to optimize the covariance estimation value; the motion model is used to identify the motion state of a target; the state information is used to identify motion features of the target; the covariance estimation value is used to identify errors between the state information and actual state information; and the time stamp is used to identify the moment when the detection system sends the data. By means of the state process calibration value, the covariance estimation value may be optimized, so that measurement errors of the detection system may be characterized more accurately, which thus helps to improve the accuracy of first fusion data.

Description

Summary of the Invention
The present application provides a vehicle-road coordination communication method, a data processing method, a detection system and a fusion device, which are used to improve the accuracy of the data fused by the fusion device as much as possible.
In a first aspect, the present application provides a communication method for vehicle-road coordination. The method includes acquiring data from a detection system, the data including one or more of a state process calibration value, a motion model, state information, a covariance estimated value and a time stamp, and sending the data to a fusion device; the state process calibration value is used to optimize the covariance estimated value, the motion model is used to identify the motion state of a target, the state information is used to identify the motion characteristics of the target, the covariance estimated value is used to identify the error between the state information and the actual state information, and the time stamp is used to identify the moment when the detection system sends the data.

Based on this solution, by sending the first state process calibration value and the first motion model to the fusion device, the fusion device can optimize the first covariance estimated value based on the first state process calibration value, so that the measurement error of the detection system can be characterized more accurately. Further, the fusion device can determine the first confidence level according to the first state process calibration value and the first motion model, which can further improve the accuracy of the first data as used by the fusion device, and thus the accuracy of the first fusion data output by the fusion device.
In a possible implementation, the target includes a cooperative target. The method includes obtaining the actual position, at a preset moment, of a cooperative target conforming to the motion model and the estimated position of the cooperative target at the preset moment; determining, according to the estimated position and the motion model, the estimated position of the cooperative target after m steps, where m is a positive integer; determining, according to the actual position and the estimated position after m steps, the estimation error corresponding to the m steps; and obtaining, according to the estimation error, the state process calibration value corresponding to the motion model.

Obtaining the state process calibration value over m steps helps to characterize the measurement error of the detection system more accurately, and thus helps to improve the accuracy of the first data.
In a possible implementation, the method may include obtaining the n-m estimation errors obtained in each of L cycles, where n is an integer greater than 1 and L is a positive integer, and determining the variance of the L×(n-m) estimation errors obtained in the L cycles as the state process calibration value corresponding to the motion model.

When L is greater than 1, this helps to improve the accuracy of the state process calibration value.

In a possible implementation, the motion model includes any one of a linear motion model, a left-turn motion model, or a right-turn motion model.
In a possible implementation, the target further includes a random target, and the method further includes acquiring sampling data of the random target and determining the state information according to the sampling data.

In a possible implementation, the method may include transmitting an electromagnetic wave signal to a detection area, receiving an echo signal from the detection area, and determining the state information according to the echo signal; the detection area includes the random target, and the echo signal is obtained after the electromagnetic wave signal is reflected by the random target.
In a second aspect, the present application provides a data processing method for vehicle-road coordination. The method includes acquiring first data from a roadside detection system and second data from a vehicle side detection system, and obtaining first fusion data of a random target according to the first data and the second data. The first data includes one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimated value and a first time stamp, where the first state process calibration value is used to optimize the first covariance estimated value, the first motion model is used to identify the first motion state of the target, the first state information is used to identify the first motion feature of the target, the first covariance estimated value is used to identify the error between the first state information and the first actual state information, and the first time stamp is used to identify the first moment when the roadside detection system sends the first data. The second data includes a second state process calibration value, a second motion model, second state information, a second covariance estimated value and a second time stamp, where the second state process calibration value is used to optimize the second covariance estimated value, the second motion model is used to identify the second motion state of the target, the second state information is used to identify the second motion feature of the target, the second covariance estimated value is used to identify the error between the second state information and the second actual state information, and the second time stamp is used to identify the second moment when the vehicle side detection system sends the second data.

Based on this solution, the fusion device can receive the first state process calibration value and the first motion model from the roadside fusion module, and the first state process calibration value can optimize the first covariance estimated value, so that the measurement error can be characterized more accurately. Further, the fusion device also receives the second state process calibration value and the second motion model from the vehicle side fusion module, and the second state process calibration value can optimize the second covariance estimated value, so that the measurement error can be characterized more accurately, which helps to improve the accuracy of the obtained first fusion data of the random target.
In a possible implementation, the method may include: obtaining the first state prediction value and the first covariance prediction value of the random target at the fusion moment according to the first time stamp, the first state process calibration value, the first state information and the first covariance estimated value; obtaining the second state prediction value and the second covariance prediction value of the random target at the fusion moment according to the second time stamp, the second state process calibration value, the second state information and the second motion model; predicting the third state prediction value and the third covariance prediction value of the target according to the first motion model and the second fusion data of the previous frame of the target; predicting the fourth state prediction value and the fourth covariance prediction value of the target according to the second motion model and the second fusion data of the previous frame of the random target; obtaining the first filtered estimated value and the first filtered covariance estimated value according to the first state prediction value, the first covariance prediction value, the third state prediction value and the third covariance prediction value; obtaining the second filtered estimated value and the second filtered covariance estimated value according to the second state prediction value, the second covariance prediction value, the fourth state prediction value and the fourth covariance prediction value; obtaining the first confidence level of the first data at the fusion moment according to the first time stamp, the first state process calibration value, the first motion model and the first covariance estimated value; obtaining the second confidence level of the second data at the fusion moment according to the second time stamp, the second state process calibration value, the second motion model and the second covariance estimated value; and obtaining the first fusion data according to the first confidence level, the second confidence level, the first filtered estimated value, the first filtered covariance estimated value, the second filtered estimated value and the second filtered covariance estimated value.

Further, optionally: when the first confidence level meets the first preset confidence level and the second confidence level does not meet the second preset confidence level, the first fusion data is obtained according to the first filtered estimated value and the first filtered covariance estimated value; when the second confidence level meets the second preset confidence level and the first confidence level does not meet the first preset confidence level, the first fusion data is obtained according to the second filtered estimated value and the second filtered covariance estimated value; when the first confidence level meets the first preset confidence level and the second confidence level meets the second preset confidence level, the first fusion data is obtained according to the first filtered estimated value, the first filtered covariance estimated value, the second filtered estimated value and the second filtered covariance estimated value; when the first confidence level does not meet the first preset confidence level and the second confidence level does not meet the second preset confidence level, the first fusion data is obtained according to the second fusion data of the previous frame.

By determining, before the first data and the second data are fused, the first confidence level corresponding to the first data and the second confidence level corresponding to the second data, and performing data fusion based on the confidence levels of data from different sources, the quality of the fused first fusion data can be improved.
In a third aspect, the present application provides a detection system. The detection system is used to implement the first aspect or any one of the methods in the first aspect, and includes corresponding functional modules respectively used to implement the steps in the above methods. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.

In a possible implementation, the detection system may be a roadside detection system or a vehicle side detection system in a vehicle-road cooperative communication system, or a module usable in a roadside detection system or a vehicle side detection system, such as a chip, a chip system or a circuit. For beneficial effects, reference may be made to the description of the first aspect above, which will not be repeated here. The detection system may include a transceiver and at least one processor. The processor may be configured to support the detection system in performing the corresponding functions shown above, and the transceiver is used to support communication between the detection system and the fusion device, other detection systems, and the like. The transceiver may be an independent receiver, an independent transmitter, a transceiver integrating transceiving functions, or an interface circuit. Optionally, the detection system may further include a memory, which may be coupled with the processor and stores the program instructions and data necessary for the detection system.
The processor is used to obtain data from the detection system, the data including one or more of a state process calibration value, a motion model, state information, a covariance estimated value and a time stamp; the state process calibration value is used to optimize the covariance estimated value, the motion model is used to identify the motion state of the target, the state information is used to identify the motion characteristics of the target, the covariance estimated value is used to identify the error between the state information and the actual state information, and the time stamp is used to identify the moment when the detection system sends the data. The transceiver is used to send the data to the fusion device.

In a possible implementation, the target includes a cooperative target. The processor is specifically configured to obtain the actual position, at a preset moment, of a cooperative target conforming to the motion model and the estimated position of the cooperative target at the preset moment; determine, according to the estimated position and the motion model, the estimated position of the cooperative target after m steps, where m is a positive integer; determine, according to the actual position and the estimated position after m steps, the estimation error corresponding to the m steps; and obtain, according to the estimation error, the state process calibration value corresponding to the motion model.

In a possible implementation, the processor is specifically configured to obtain the n-m estimation errors obtained in each of L cycles, where n is an integer greater than 1 and L is a positive integer, and determine the variance of the L×(n-m) estimation errors obtained in the L cycles as the state process calibration value corresponding to the motion model.

In a possible implementation, the motion model includes any one of a linear motion model, a left-turn motion model, or a right-turn motion model.

In a possible implementation, the target further includes a random target, and the processor is further configured to acquire sampling data of the random target and determine the state information according to the sampling data.

In a possible implementation, the transceiver is specifically configured to transmit an electromagnetic wave signal to a detection area, the detection area including the random target, and receive an echo signal from the detection area, the echo signal being obtained after the electromagnetic wave signal is reflected by the random target; the processor is specifically configured to determine the state information according to the echo signal.
第四方面,本申请提供一种融合装置,该融合装置用于实现上述第二方面或第二方面中的任意一种方法,包括相应的功能模块,分别用于实现以上方法中的步骤。功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括一个或多个与上述功能相对应的模块。In a fourth aspect, the present application provides a fusion device, which is used to implement the second aspect or any one of the methods in the second aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods. The functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware. Hardware or software includes one or more modules corresponding to the above-mentioned functions.
在一种可能的实现方式中,该融合装置可以是车路协同通信系统中的融合装置,或者是可用于融合装置中的模块,例如芯片或芯片系统或者电路。有益效果可参见上述第二方面的描述,此处不再赘述。该融合装置可以包括:收发器和至少一个处理器。该处理器可被配置为支持该融合装置执行以上所示的相应功能,该收发器用于支持该融合装置与探测系统(如路侧探测系统或车侧探测系统)等之间的通信。其中,收发器可以为独立的接收器、独立的发射器、集成收发功能的收发器、或者是接口电路。可选地,该融合装置还可以包括存储器,该存储器可以与处理器耦合,其保存该融合装置必要的程序指令和数据。In a possible implementation manner, the fusion device may be a fusion device in a vehicle-road cooperative communication system, or a module that can be used in the fusion device, such as a chip or a chip system or a circuit. For the beneficial effects, reference may be made to the description of the second aspect above, and details are not repeated here. The fusion device may include: a transceiver and at least one processor. The processor may be configured to support the fusion device to perform the corresponding functions shown above, and the transceiver is used to support communication between the fusion device and a detection system (such as a roadside detection system or a vehicle side detection system). Wherein, the transceiver may be an independent receiver, an independent transmitter, a transceiver integrating transceiver functions, or an interface circuit. Optionally, the fusion device may further include a memory, which may be coupled with the processor, and store necessary program instructions and data of the fusion device.
其中,收发器用于获取来自路侧探测系统的第一数据及来自车侧探测系统的第二数据,第一数据包括第一状态过程标定值、第一运动模型、第一状态信息、第一协方差估计值及第一时间戳中的一个或多个,第一状态过程标定值用于优化第一协方差估计值,第一运动模型用于标识目标的第一运动状态,第一状态信息用于标识目标的第一运动特征,第一协方差估计值用于标识第一状态信息与第一实际状态信息之间的误差,第一时间戳用于标识路侧探测系统发送第一数据的第一时刻;第二数据包括第二状态过程标定值、第二运动模型、第二状态信息、第二协方差估计值及第二时间戳中的一个或多个,第二状态过程标定值用于优化第二协方差估计值,第二运动模型用于标识目标的第二运动状态,第二状态信息用于标识目标的第二运动特征,第二协方差估计值用于标识第二状态信息与第二实际状态信息之间的误差,第二时间戳用于标识车侧探测系统发送第二数据的第二时刻;处理器用于根据第一数据和第二数据,获得随机目标的第一融合数据。Wherein, the transceiver is used to obtain the first data from the roadside detection system and the second data from the vehicle side detection system, the first data includes the calibration value of the first state process, the first motion model, the first state information, the first coordination One or more of the variance estimate and the first time stamp, the first state process calibration value is used to optimize the first covariance estimate, the first motion model is used to identify the first motion state of the target, and the first state information is used to To identify the first motion feature of the target, the first estimated covariance value is used to identify the error between the first state information and the first actual state information, and the first time stamp is used to identify the first time when the roadside detection system sends the first data. A moment; the second data includes one or more of the second state process calibration value, the second motion model, the second state information, the second covariance estimation value and the second time stamp, and the second state process calibration value is used for Optimizing the second covariance estimated value, the second motion model is used to identify the second motion state of the target, the second state information is used to identify the second motion feature of the target, and the second covariance estimated value is used to identify the second state information and The error between the second actual state information, the second time stamp is used to identify the second moment when the vehicle side detection system sends the second data; the processor is used to obtain the first fusion data of the random target according to the first data and the second data .
In a possible implementation, the processor is specifically configured to: obtain a first state prediction and a first covariance prediction of the random target at the fusion moment according to the first time stamp, the first state process calibration value, the first state information, and the first covariance estimate; obtain a second state prediction and a second covariance prediction of the random target at the fusion moment according to the second time stamp, the second state process calibration value, the second state information, and the second motion model; predict a third state prediction and a third covariance prediction of the target according to the first motion model and the second fused data of the previous frame of the target; predict a fourth state prediction and a fourth covariance prediction of the target according to the second motion model and the second fused data of the previous frame of the random target; obtain a first filtered estimate and a first filtered covariance estimate according to the first state prediction, the first covariance prediction, the third state prediction, and the third covariance prediction; obtain a second filtered estimate and a second filtered covariance estimate according to the second state prediction, the second covariance prediction, the fourth state prediction, and the fourth covariance prediction; obtain a first confidence of the first data at the fusion moment according to the first time stamp, the first state process calibration value, the first motion model, and the first covariance estimate; obtain a second confidence of the second data at the fusion moment according to the second time stamp, the second state process calibration value, the second motion model, and the second covariance estimate; and obtain the first fused data according to the first confidence, the second confidence, the first filtered estimate, the first filtered covariance estimate, the second filtered estimate, and the second filtered covariance estimate.
In a possible implementation, the processor is specifically configured to: when the first confidence satisfies a first preset confidence and the second confidence does not satisfy a second preset confidence, obtain the first fused data according to the first filtered estimate and the first filtered covariance estimate; when the second confidence satisfies the second preset confidence and the first confidence does not satisfy the first preset confidence, obtain the first fused data according to the second filtered estimate and the second filtered covariance estimate; when the first confidence satisfies the first preset confidence and the second confidence satisfies the second preset confidence, obtain the first fused data according to the first filtered estimate, the first filtered covariance estimate, the second filtered estimate, and the second filtered covariance estimate; and when the first confidence does not satisfy the first preset confidence and the second confidence does not satisfy the second preset confidence, obtain the first fused data according to the second fused data of the previous frame.
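As a concrete illustration of this selection logic, the following sketch shows one way the four cases could be implemented. It is not part of the original application: all function and variable names are hypothetical, and the covariance-weighted combination used when both confidences are satisfied is an assumption, since the application does not fix a specific combination rule.

```python
import numpy as np

def fuse(x1, P1, c1, x2, P2, c2, prev_fused, c1_min, c2_min):
    """Select the first fused data from the two filtered branches.

    x1, P1: first filtered estimate and covariance (roadside branch)
    x2, P2: second filtered estimate and covariance (vehicle branch)
    c1, c2: first and second confidence; c1_min, c2_min: preset confidences
    prev_fused: (state, covariance) fused in the previous frame (fallback)
    """
    first_ok = c1 >= c1_min
    second_ok = c2 >= c2_min
    if first_ok and not second_ok:
        return x1, P1
    if second_ok and not first_ok:
        return x2, P2
    if first_ok and second_ok:
        # Covariance-weighted combination (an assumption; the application
        # only says both filtered estimates are used).
        I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
        P = np.linalg.inv(I1 + I2)
        return P @ (I1 @ x1 + I2 @ x2), P
    return prev_fused  # neither confidence satisfied: reuse previous frame
```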
In a fifth aspect, this application provides a detection system configured to implement the first aspect or any method of the first aspect, including corresponding functional modules that respectively implement the steps of the above method. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In a possible implementation, the detection system may be a roadside detection system or a vehicle-side detection system, and may include a processing module and a transceiver module. These modules can perform the corresponding functions of the roadside detection system or the vehicle-side detection system in the above method examples; for details, refer to the detailed description in the method examples, which is not repeated here.
In a sixth aspect, this application provides a fusion device configured to implement the second aspect or any method of the second aspect, including corresponding functional modules that respectively implement the steps of the above methods. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In a possible implementation, the fusion device may include a processing module and a transceiver module. These modules can perform the corresponding functions of the fusion device in the above method examples; for details, refer to the detailed description in the method examples, which is not repeated here.
In a seventh aspect, this application provides a communication system including a detection system (such as a roadside detection system and a vehicle-side detection system) and a fusion device. The detection system may be used to perform the first aspect or any method of the first aspect, and the fusion device may be used to perform the second aspect or any method of the second aspect.
In an eighth aspect, this application provides a vehicle, which includes a vehicle-side detection system and/or a fusion device. The vehicle-side detection system may be used to perform the first aspect or any method of the first aspect, and the fusion device may be used to perform the second aspect or any method of the second aspect.
In a ninth aspect, this application provides a computer-readable storage medium storing a computer program or instructions which, when executed by a processor, cause the detection system to perform the first aspect or the method in any possible implementation of the first aspect, or cause the fusion device to perform the second aspect or the method in any possible implementation of the second aspect.
In a tenth aspect, this application provides a computer program product including a computer program or instructions which, when executed by a processor, cause the detection system to perform the first aspect or the method in any possible implementation of the first aspect, or cause the fusion device to perform the second aspect or the method in any possible implementation of the second aspect.
In an eleventh aspect, this application provides a chip including a processor coupled to a memory and configured to execute a computer program or instructions stored in the memory, so that the chip implements the first aspect or the second aspect, or the method in any possible implementation of either aspect.
Description of Drawings
Figure 1a is a schematic diagram of a communication system architecture provided by this application;
Figure 1b is a schematic diagram of another communication system architecture provided by this application;
Figure 2 is a schematic diagram of the principle of radar target detection provided by this application;
Figure 3 is a schematic flowchart of a vehicle-road cooperative communication method provided by this application;
Figure 4 is a schematic flowchart of a method for obtaining a first state process calibration value and a first motion model provided by this application;
Figure 5 is a schematic flowchart of a method, provided by this application, for a roadside fusion module to obtain the first actual positions of a cooperative target at n different first moments;
Figure 6a is a schematic flowchart of a method for a roadside detection system to obtain first state information provided by this application;
Figure 6b is a schematic flowchart of another method for a roadside detection system to obtain first state information provided by this application;
Figure 7a is a schematic flowchart of yet another method for a roadside detection system to obtain first state information of a target provided by this application;
Figure 7b is a schematic flowchart of yet another method for a roadside detection system to obtain first state information of a target provided by this application;
Figure 8 is a schematic flowchart of a vehicle-road cooperative data processing method provided by this application;
Figure 9 is a schematic flowchart of a method for determining first fused data of a random target provided by this application;
Figure 10 is a schematic flowchart of a method for obtaining first fused data based on a first confidence and a second confidence provided by this application;
Figure 11 is a schematic structural diagram of a detection system provided by this application;
Figure 12 is a schematic structural diagram of a fusion device provided by this application;
Figure 13 is a schematic structural diagram of a detection system provided by this application;
Figure 14 is a schematic structural diagram of a fusion device provided by this application.
Detailed Description
The embodiments of this application are described in detail below with reference to the accompanying drawings.
Some terms used in this application are explained first. It should be noted that these explanations are provided to help those skilled in the art understand, and do not limit the scope of protection claimed by this application.
1. Cooperative target
A cooperative target generally means that the true position information of the detected target can be obtained through other cooperation channels in addition to direct sensor measurement. For example, the position of a fixed target may be obtained in advance; as another example, a cooperative target may report its current position information wirelessly, where that position information may be measured by a measuring instrument.
2. Non-cooperative target (also called a random target)
A non-cooperative target generally means that the true position information of the detected target can only be obtained by direct sensor measurement; no other technical means is available.
3. Covariance
Covariance measures the joint error of two variables. In this application, the two variables may be, for example, a predicted value and an actual value. The covariance is usually a matrix and generally serves as an intermediate parameter.
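As a minimal numerical illustration (not from the original application), the covariance matrix of a predicted sequence and an actual sequence can be computed as follows:

```python
# Illustrative only: covariance between predicted and actual values.
import numpy as np

predicted = np.array([1.0, 2.1, 2.9, 4.2])
actual = np.array([1.1, 2.0, 3.0, 4.0])
print(np.cov(predicted, actual))  # 2x2 covariance matrix
```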
4. Kalman filter (KF)
A Kalman filter is an efficient recursive filter (also called an autoregressive filter) that can estimate the state of a dynamic system from a series of incomplete and noisy measurements. Based on the values of each measurement at different times and the joint distribution across those times, a Kalman filter produces an estimate of the unknown variable that is more accurate than an estimate based on a single measurement alone.
It can also be understood that a Kalman filter is essentially a data fusion algorithm: it fuses data that serve the same measurement purpose, come from different sensors, and may be in different units, into a more accurate measurement.
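The recursion behind this description can be sketched in a few lines; the following generic predict/update step is illustrative and not taken from the application:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a Kalman filter.

    x, P: previous state estimate and its covariance
    z:    new measurement; F, Q: transition matrix and process noise;
    H, R: measurement matrix and measurement noise covariance
    """
    # Predict: propagate the state and covariance one step forward.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```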
5. Image fusion
Image fusion is an image processing technique in which the image data about the same target collected over multiple source channels is processed with specific algorithms so as to extract as much of the useful information in each channel as possible, finally fusing a high-quality image (for example, in brightness, sharpness, and color). The fused image is more accurate than the original images.
Based on the above, the applicable architectures and possible application scenarios of this application are introduced below.
Refer to Figure 1a, a schematic diagram of a possible communication system architecture provided by this application. The communication system may include a vehicle-side detection system and a roadside detection system, which may communicate over a sidelink (SL) air interface or a Uu air interface. The vehicle-side detection system may be installed on a vehicle, including but not limited to an unmanned vehicle, a smart vehicle, an electric vehicle, or a digital vehicle. The roadside detection system may be installed on roadside infrastructure, including but not limited to traffic lights, traffic cameras, or roadside units (RSUs); Figure 1a takes traffic lights as the example of roadside infrastructure.
The vehicle-side detection system can obtain, in real time or periodically, measurement information such as the vehicle's longitude and latitude, speed, heading, and the distances of surrounding objects, and then use this information together with an advanced driving assistant system (ADAS) to implement assisted or automatic driving. For example, the longitude and latitude can be used to determine the vehicle's position; the speed and heading can be used to determine the vehicle's driving direction and destination over a coming period; and the distances of surrounding objects can be used to determine the number and density of obstacles around the vehicle. Further, optionally, the vehicle-side detection system may include a vehicle-mounted perception module and a vehicle-mounted fusion module, as shown in Figure 1b. The vehicle-mounted perception module may be a sensor (the functions of sensors are described below) arranged around the vehicle body (for example, at the front left, front right, left, right, rear left, and rear right of the vehicle); this application does not limit where on the vehicle the perception module is installed. The vehicle-mounted fusion module may be, for example, a processor in the vehicle, a domain controller in the vehicle, an electronic control unit (ECU) in the vehicle, or another chip installed in the vehicle. The ECU, also called the "trip computer", "on-board computer", "vehicle-specific microcomputer controller", or "lower computer", is one of the core components of the vehicle. Further, optionally, the roadside detection system may include a roadside perception module and a roadside fusion module, as shown in Figure 1b. The roadside perception module can collect, in real time or periodically, obstacle information, pedestrian information, signal light information, traffic flow information, traffic sign information, and the like. The roadside perception module may be, for example, a sensor; see the description of sensors below, which is not repeated here. The roadside fusion module may be, for example, a chip or a processor.
Further, optionally, the communication system may also include a cloud server. The cloud server may be a single server or a server cluster composed of multiple servers. The cloud server may also be called a cloud, a cloud end, a cloud-end server, a cloud-end controller, or an Internet-of-Vehicles server. It can also be understood that "cloud server" is a general term for devices with data processing capability, which may include physical devices such as hosts or processors, virtual devices such as virtual machines or containers, and chips or integrated circuits.
In a possible implementation, the above communication system may be an intelligent vehicle infrastructure cooperative system (IVICS), referred to as a vehicle-infrastructure cooperative system for short.
The above communication system can be applied in fields such as unmanned driving, automatic driving, assisted driving, intelligent driving, connected vehicles, surveying and mapping, and security monitoring. It should be noted that the above application scenarios are only examples; the system can also be applied in many other scenarios, for example, the automated guided vehicle (AGV) scenario. An AGV is a transport vehicle equipped with an electromagnetic, optical, or other automatic navigation apparatus, able to travel along a prescribed navigation path, with safety protection and various load-transfer functions. The vehicle-side detection system may be installed on the AGV, and the roadside detection system may be installed on roadside equipment in the AGV scenario.
In a possible implementation, the fusion device may be installed on the vehicle, or a processor in the vehicle, an ECU in the vehicle, or a domain controller in the vehicle may serve as the fusion device. In that case, the fusion device may receive the information transmitted from the roadside detection system through an on-board unit (OBU) in the vehicle. An OBU is a communication apparatus using dedicated short range communication (DSRC) technology, which can be used for communication between the vehicle and the outside world.
In another possible implementation, the fusion device may be installed on the roadside infrastructure. The roadside infrastructure can communicate with vehicles through vehicle-to-everything (V2X) communication. The fusion device and the roadside infrastructure may communicate through a controller area network (CAN) bus, Ethernet, or wirelessly.
In yet another possible implementation, the fusion device may be installed on a cloud server, or the cloud server may serve as the fusion device. The cloud server can communicate wirelessly with the roadside detection system and, likewise, with the vehicle-side detection system.
It should be noted that Figure 1b above takes the case where the fusion device is installed on the vehicle side as an example.
The sensors applicable to the vehicle-mounted perception module and the roadside perception module are introduced below.
Sensors can be divided into two categories according to their sensing method: passive sensing sensors and active sensing sensors. Passive sensing sensors mainly rely on radiation information from the external environment, whereas active sensing sensors perceive the environment by actively emitting energy waves. Both categories are described in detail below.
A passive sensing sensor may be, for example, a camera, whose sensing accuracy mainly depends on the image processing and classification algorithms. Here, "camera" includes any camera used to acquire images of the environment in which the vehicle is located (for example, a still camera or a video camera). In some examples, the camera may be configured to detect visible light and is called a visible-light camera; a visible-light camera uses a charge-coupled device (CCD) or a standard complementary metal-oxide semiconductor (CMOS) to obtain images corresponding to visible light. In other examples, the camera may be configured to detect light from other parts of the spectrum (such as infrared light) and may be called an infrared camera; an infrared camera may use a CCD or CMOS together with a filter that passes only light in the color wavelength band and a set infrared wavelength band.
An active sensing sensor may be a radar. As shown in Figure 2, taking a radar deployed at the front of the vehicle as an example, the radar can sense the fan-shaped area shown by the solid-line box; this fan-shaped area is the radar sensing area (also called the radar detection area). The radar transmits electromagnetic wave signals through its antenna, receives the echo signals obtained when a target reflects those signals, and amplifies and down-converts the echo signals to obtain information such as the relative distance, relative speed, and angle between the vehicle and the target.
As described in the background, in a vehicle-road cooperative communication system, the roadside data transmitted by the roadside detection system and the vehicle-side data of the vehicle-mounted detection system being fused may not belong to the same period, which may distort the fused data.
In view of this, this application provides a vehicle-road cooperative communication method that improves the accuracy of the data obtained by fusing the data from the roadside detection system with the data from the vehicle-side detection system. The communication method can be applied to the communication system shown in Figure 1a above and may be performed by the roadside detection system or by the vehicle-side detection system.
Based on the above, the vehicle-road cooperative communication method proposed in this application is described in detail below with reference to Figures 3 to 6b.
Refer to Figure 3, a schematic flowchart of a vehicle-road cooperative communication method provided by this application. The description below takes the case where the method is performed by the roadside detection system as an example. The method includes the following steps:
Step 301: The roadside detection system obtains first data.
Here, the first data includes one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimate, and a first time stamp.
The first state process calibration value is used to optimize the first covariance estimate (that is, the fusion device can optimize the first covariance estimate based on the first state process calibration value during data fusion), so that the measurement error of the detection system is characterized more accurately; the first motion model identifies the motion state of a cooperative target. Further, optionally, the first state process calibration value and the first motion model may be obtained or stored in advance by the roadside detection system. A possible way for the roadside detection system to obtain them is described with reference to Figure 4 below and is not repeated here.
The first state information identifies the motion features of a random target. For example, the first state information may include a first position and/or a first velocity of the random target. Possible ways of obtaining the first state information are described with reference to Figures 6a to 7b below and are not repeated here.
The first covariance estimate identifies the statistical error between the first state information and the first actual state information. It should be noted that first initial state information may be preset as the initial state of the roadside detection system. The first covariance estimate generally keeps converging as filtering proceeds; once it has converged to a certain value, the first actual state information can be regarded as equal to the first state information.
The first time stamp identifies the moment at which the roadside detection system sends the first data; it can also be understood as the time stamp applied when the roadside detection system sends the first data.
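For illustration only, the first-data payload described above might be represented as follows. The class and field names are hypothetical, not prescribed by the application, and every field is optional because the first data includes "one or more" of the five items:

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class FirstData:
    # Used by the fusion device to optimize the covariance estimate.
    state_process_calibration: Optional[float] = None
    # e.g. "straight", "left_turn", "right_turn" (illustrative labels).
    motion_model: Optional[str] = None
    # Motion features of the random target, e.g. (position, velocity).
    state_info: Optional[Sequence[float]] = None
    # Error between state_info and the actual state information.
    covariance_estimate: Optional[Sequence[Sequence[float]]] = None
    # Moment at which the roadside detection system sends the data.
    timestamp: Optional[float] = None
```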
Step 302: The roadside detection system sends the first data to the fusion device. Correspondingly, the fusion device receives the first data from the roadside detection system.
When the fusion device is installed on the vehicle, or a processor, an ECU, or a domain controller in the vehicle serves as the fusion device, the fusion device may receive the first data from the roadside detection system through the OBU in the vehicle; the roadside detection system and the vehicle communicate via V2X.
When the fusion device is installed on a cloud server, or the cloud server serves as the fusion device, the roadside detection system may send the first data to the cloud server.
In a possible implementation, the first state process calibration value and the first motion model may be sent by the roadside detection system to the fusion device while the connection between them is being established; they may be carried the first time the roadside detection system sends the first state information, the first covariance estimate, and the first time stamp to the fusion device; or they may be sent before that first transmission. This application does not limit this.
Through the above steps 301 and 302, sending the first state process calibration value and the first motion model to the fusion device enables the fusion device to optimize the first covariance estimate based on the first state process calibration value, thereby characterizing the measurement error more accurately. Further, the fusion device can determine the first confidence according to the first state process calibration value and the first motion model, which further improves the precision of the first data as used by the fusion device and thus the accuracy of the first fused data it outputs.
Possible implementations of obtaining the first data are illustrated below.
1) Possible implementations of obtaining the first state process calibration value and the first motion model.
Refer to Figure 4, a schematic flowchart of a method for obtaining the first state process calibration value and the first motion model provided by this application. In this method, the preset moment is exemplified by the first moment, the motion model by the first motion model, and the state process calibration value by the first state process calibration value; the actual position obtained at the first moment is called the first actual position, the estimated position obtained at the first moment is called the first estimated position, the estimated position after M steps is called the second estimated position, and the estimation error corresponding to the M steps is called the first estimation error. The method is applicable to the system shown in Figure 1b above and may include the following steps:
Step 401: The roadside fusion module obtains the first actual position, at the first moment, of a cooperative target conforming to the first motion model.
Here, the cooperative target moves according to the first motion model, for example a straight-line motion model, a left-turn motion model, or a right-turn motion model. The straight-line motion model can be understood as the cooperative target moving in a straight line at a constant speed v. The left-turn motion model can be understood as the cooperative target accelerating at speed v and acceleration a, and the right-turn motion model as the cooperative target decelerating in a straight line at speed v and acceleration a; alternatively, the left-turn motion model can be understood as the cooperative target decelerating at speed v and acceleration a, and the right-turn motion model as the cooperative target accelerating in a straight line at speed v and acceleration a.
A method by which the roadside fusion module obtains the first actual positions of the cooperative target at n different first moments is given below, as shown in Figure 5. The method includes the following steps:
Step 501: The measuring instrument measures the first actual positions of the cooperative target conforming to the first motion model at n different first moments, where n is an integer greater than 1.
Here, the measuring instrument may be a high-precision instrument using real-time kinematic (RTK) technology and may be installed on the cooperative target to measure its actual position.
In a possible implementation, when the cooperative target moves according to the first motion model, the measuring instrument measures the first actual positions $[x_1, x_2, x_3, \ldots, x_n]$ of the cooperative target at the n first moments $[k_1, k_2, k_3, \ldots, k_n]$. In other words, at moment $k_1$ the measuring instrument measures the first actual position of the cooperative target as $x_1$; at moment $k_2$, as $x_2$; and so on, until at moment $k_n$, as $x_n$.
For example, the measuring instrument can obtain the relationship among the first motion model, the n first moments, and the n first actual positions shown in Table 1.
Table 1: Relationship among the first motion model, the n first moments, and the n first actual positions
First motion model | n first moments | n first actual positions
Straight-line motion model | $k_1, k_2, \ldots, k_n$ | $x_1, x_2, \ldots, x_n$
Left-turn motion model | $k_1, k_2, \ldots, k_n$ | $x_1, x_2, \ldots, x_n$
Right-turn motion model | $k_1, k_2, \ldots, k_n$ | $x_1, x_2, \ldots, x_n$
It should be noted that expressing the relationship among the first motion model, the n first moments, and the n first actual positions in table form is only an example; other correspondences may also be used, and this application does not limit this. In addition, Table 1 may instead be three independent tables, one per motion model.
Step 502: The measuring instrument sends the obtained first actual positions of the cooperative target at the n different first moments to the roadside fusion module. Correspondingly, the roadside fusion module receives the first actual positions at the n different first moments from the measuring instrument.
If the measuring instrument obtains Table 1 in step 501, step 502 may consist of the measuring instrument sending Table 1 to the roadside fusion module. If the measuring instrument obtains three tables in step 501, step 502 may consist of the measuring instrument sending the three tables. It should be noted that the three tables may be sent to the roadside fusion module together or in three separate transmissions; this application does not limit this.
Through the above steps 501 and 502, the roadside fusion module obtains the first actual positions at the n different first moments when the cooperative target moves according to the first motion model.
Step 402: The roadside perception module obtains the first estimated position, at the first moment, of the cooperative target conforming to the first motion model, and sends the first estimated position at the first moment to the roadside fusion module.
In a possible implementation, the roadside perception module records the first estimated positions $[\hat{x}_1, \hat{x}_2, \hat{x}_3, \ldots, \hat{x}_n]$ of the cooperative target conforming to the first motion model at the n different first moments $[k_1, k_2, k_3, \ldots, k_n]$. In other words, at moment $k_1$ the roadside perception module records the first estimated position of the cooperative target as $\hat{x}_1$; at moment $k_2$, as $\hat{x}_2$; and so on, until at moment $k_n$, as $\hat{x}_n$.
With reference to Figure 1a above: if the roadside perception module is a radar, it can measure the distance of the cooperative target with electromagnetic waves and, combined with multi-antenna technology, obtain the angle of arrival of the electromagnetic waves; distance and angle together localize the target in space, yielding the first estimated positions at the n different first moments. If the roadside perception module is a camera, the correspondence between the image pixel coordinate system and the world coordinate system can be constructed to obtain the first estimated positions at the n different first moments.
Further, optionally, the roadside perception module sends to the roadside fusion module the first estimated positions $[\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n]$ of the cooperative target conforming to the first motion model at the n different first moments.
In a possible implementation, the roadside detection system may include P roadside perception modules, each of which performs step 402 above and obtains n first estimated positions; all P roadside perception modules send their recorded n first estimated positions to the roadside fusion module. Correspondingly, the roadside fusion module receives n first estimated positions from each of the P roadside perception modules, obtaining P×n first estimated positions.
Further, optionally, the roadside fusion module may first take a weighted average of the first estimated positions from the P roadside perception modules to obtain the first estimated position at each first moment. In other words, at moment $k_1$ the roadside fusion module takes the weighted average of the first estimated positions from the P roadside perception modules to obtain the first estimated position $\hat{x}_1$; at moment $k_2$, it obtains $\hat{x}_2$; and so on, until at moment $k_n$, it obtains $\hat{x}_n$. Thus, at the n first moments $[k_1, k_2, k_3, \ldots, k_n]$, the roadside fusion module obtains the first estimated positions $[\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n]$.
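One way to realize this weighted average is sketched below; the uniform weights and array shapes are assumptions of the sketch, since the application does not fix a particular weighting:

```python
# Weighted average of the first estimated positions reported by P
# roadside perception modules at each of the n first moments.
# estimates has shape (P, n); weights has shape (P,) and sums to 1.
import numpy as np

def fuse_estimated_positions(estimates: np.ndarray,
                             weights: np.ndarray) -> np.ndarray:
    """Return the n fused first estimated positions."""
    # Each moment's fused estimate is sum_p w_p * x_hat[p, i].
    return weights @ estimates  # shape (n,)

# Example with P = 3 modules and n = 4 moments, equal weights.
est = np.array([[1.0, 2.0, 3.1, 4.0],
                [1.1, 2.1, 3.0, 4.2],
                [0.9, 1.9, 2.9, 3.9]])
w = np.full(3, 1 / 3)
print(fuse_estimated_positions(est, w))
```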
Step 403: The roadside fusion module determines, according to the first estimated positions and the first motion model, the second estimated position of the cooperative target after M steps (the step count is denoted m below).
For example, if the first motion model is the straight-line motion model, the second estimated position $\hat{x}_{i+m|i}$ of the cooperative target after m steps can be expressed by Formula 1 below, where $\Delta t$ denotes the duration of one step:

$$\hat{x}_{i+m|i} = \hat{x}_i + v\,m\,\Delta t \qquad (1)$$
If the first motion model is the left-turn motion model, the second estimated position $\hat{x}_{i+m|i}$ of the cooperative target after m steps can be expressed by Formula 2 below. It should be understood that this example takes the left-turn motion model as the cooperative target accelerating at speed v and acceleration a:

$$\hat{x}_{i+m|i} = \hat{x}_i + v\,m\,\Delta t + \tfrac{1}{2}\,a\,(m\,\Delta t)^2 \qquad (2)$$
If the first motion model is the right-turn motion model, the second estimated position $\hat{x}_{i+m|i}$ of the cooperative target after m steps can be expressed by Formula 3 below. It should be understood that this example takes the right-turn motion model as the cooperative target decelerating in a straight line at speed v and acceleration a:

$$\hat{x}_{i+m|i} = \hat{x}_i + v\,m\,\Delta t - \tfrac{1}{2}\,a\,(m\,\Delta t)^2 \qquad (3)$$
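Under the kinematics of Formulas 1 to 3 as reconstructed above, the m-step prediction can be sketched as follows; the model names and the uniform step duration dt are assumptions of the sketch:

```python
# m-step prediction of the second estimated position under the three
# motion models of Formulas 1-3. dt is the assumed step duration.
def predict_m_steps(x_est: float, v: float, a: float,
                    m: int, dt: float, model: str) -> float:
    t = m * dt
    if model == "straight":      # Formula 1: constant speed
        return x_est + v * t
    if model == "left_turn":     # Formula 2: accelerating at a
        return x_est + v * t + 0.5 * a * t**2
    if model == "right_turn":    # Formula 3: decelerating at a
        return x_est + v * t - 0.5 * a * t**2
    raise ValueError(f"unknown motion model: {model}")
```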
Step 404: The roadside fusion module determines the first estimation errors of the M-step estimation according to the first actual positions and the second estimated positions.
For example, the first estimation error at moment $k_{i+m}$ can be expressed as $e_{i+m} = x_{i+m} - \hat{x}_{i+m|i}$. Over the n moments, n-m first estimation errors $[e_{1+m}, e_{2+m}, \ldots, e_n]$ can be obtained. For example, when m = 1, n-1 first estimation errors are obtained; when m = 2, n-2 first estimation errors are obtained. It should be understood that these n-m first estimation errors may be identical, all different, or partly identical.
Step 405: The roadside fusion module obtains, according to the first estimation errors, the first state process calibration value corresponding to the first motion model.
In a possible implementation, the above steps 401 to 405 are executed in a loop L times, each pass yielding n-m first estimation errors. The roadside fusion module determines the variance of the L×(n-m) first estimation errors obtained over the L passes as the first state process calibration value corresponding to the first motion model. When L is greater than 1, the accuracy of the first state process calibration value is improved.
For example, the first state process calibration value $Q_m$ can be expressed by Formula 4 below, where $N = L\,(n-m)$ is the total number of first estimation errors and $\bar{e}$ is their mean:

$$Q_m = \frac{1}{N}\sum_{j=1}^{N}\left(e_j - \bar{e}\right)^2 \qquad (4)$$
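Step 405 under Formula 4 can be sketched as follows; pooling the errors from all L passes into single arrays is an assumption of the sketch:

```python
# Compute the first state process calibration value Q_m as the variance
# of the L x (n - m) first estimation errors (Formula 4).
import numpy as np

def state_process_calibration(actual: np.ndarray,
                              predicted: np.ndarray) -> float:
    """actual, predicted: pooled actual positions x_{i+m} and m-step
    predictions over all L passes, each of length L * (n - m)."""
    errors = actual - predicted   # first estimation errors
    return float(np.var(errors))  # population variance, as in Formula 4
```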
It should be noted that when the first motion model is the straight-line motion model, the first state process calibration value corresponding to the straight-line motion model is obtained; when it is the left-turn motion model, the calibration value corresponding to the left-turn motion model is obtained; and when it is the right-turn motion model, the calibration value corresponding to the right-turn motion model is obtained, as shown in Table 2 below.
Table 2: Correspondence among the first motion model, the M steps, and the first state process calibration value
First motion model | Number of steps m | First state process calibration value
Straight-line motion model | 1, 2, ..., n-1 | $Q_1, Q_2, \ldots, Q_{n-1}$
Left-turn motion model | 1, 2, ..., n-1 | $Q_1, Q_2, \ldots, Q_{n-1}$
Right-turn motion model | 1, 2, ..., n-1 | $Q_1, Q_2, \ldots, Q_{n-1}$
It should be noted that, for the same motion model and the same m, when the movement speed of the cooperative target differs, the corresponding first state process calibration value may also differ.
It should also be noted that expressing the correspondence among the first motion model, the M steps, and the first state process calibration value in table form is only an example; other correspondences may also be used, and this application does not limit this.
2) Possible implementations of obtaining the first state information.
Based on the type of the roadside perception module, obtaining the first state information is introduced in the following two cases.
Case 1: the roadside perception module is a radar.
In Case 1, the roadside detection system is taken to include P radars, where P is a positive integer.
As shown in Figure 6a, a schematic flowchart of a method for the roadside detection system to obtain the first state information provided by this application. In this example, the transmitted electromagnetic wave signal is the first electromagnetic wave signal, and the sampled data is the first echo signal. The method includes the following steps:
Steps 601 to 603 below are performed for each of the P radars.
Step 601: The radar transmits a first electromagnetic wave signal to its detection area.
The radar's detection area is described with reference to Figure 2 above.
Step 602: The radar receives a first echo signal from the detection area.
Here, the first echo signal is obtained after the first electromagnetic wave signal is reflected by a random target. It should be understood that the first echo signal is the detection data (also called measurement data or sampled data) of the random target collected by the radar.
Step 603: The radar sends the first echo signal to the roadside fusion module. Correspondingly, the roadside fusion module receives the first echo signal from the radar.
Here, the roadside fusion module receives the first echo signals from the P radars (that is, sampled data 1, sampled data 2, ..., sampled data P), obtaining P first echo signals.
Step 604: The roadside fusion module determines the first state information of the random target according to the P received first echo signals.
In a possible implementation, the roadside fusion module obtains one piece of state information from each first echo signal, and thus P pieces of state information from the P first echo signals, and takes their weighted average to obtain the first state information of the random target.
For example, taking the case where the first state information includes the first position of the random target, the roadside fusion module may take the weighted average of the P obtained positions to obtain the first position included in the first state information.
As shown in Figure 6b, a schematic flowchart of another method for the roadside detection system to obtain the first state information provided by this application. In this example, the transmitted electromagnetic wave signal is the first electromagnetic wave signal, and the sampled data is the first echo signal. The method includes the following steps:
Steps 611 to 614 below are performed for each of the P radars.
Step 611: The radar transmits a first electromagnetic wave signal to its detection area.
The radar's detection area is described with reference to Figure 2 above.
Step 612: The radar receives a first echo signal from the detection area.
Here, the first echo signal is obtained after the first electromagnetic wave signal is reflected by a random target.
Step 613: The radar determines state information of the random target according to the received first echo signal.
Step 614: The radar sends the state information to the roadside fusion module. Correspondingly, the roadside fusion module receives the state information from the radar.
Here, the roadside fusion module receives the state information from the P radars, obtaining P pieces of state information.
Step 615: The roadside fusion module determines the first state information of the random target according to the P pieces of received state information.
In a possible implementation, the roadside fusion module takes the weighted average of the P pieces of state information from the P radars to obtain the first state information of the random target.
Case 2: the roadside perception module is a camera.
As shown in Figure 7a, a schematic flowchart of yet another method for the roadside detection system to obtain the first state information of a target provided by this application. In this example, the sampled data is a first image. The method includes the following steps:
Steps 701 and 702 below are performed for each of the P cameras.
Step 701: The camera captures a first image of the random target.
Step 702: The camera sends the first image to the roadside fusion module. Correspondingly, the roadside fusion module receives the first image from the camera.
Here, the roadside fusion module receives the first images from the P cameras, obtaining P first images.
Step 703: The roadside fusion module obtains the first state information of the random target according to the P first images.
In a possible implementation, the roadside fusion module fuses the P first images and determines the first state information of the random target according to the fused first image.
如图7b所示,为本申请提供的又一种路侧探测系统获取目标的第一状态信息的方法流程示意图。该方法包括以下步骤:As shown in FIG. 7 b , it is a schematic flowchart of another method for obtaining the first state information of a target by a roadside detection system provided in the present application. The method includes the following steps:
步骤711,相机拍摄随机目标的第一图像。Step 711, the camera captures a first image of a random target.
步骤712,相机根据第一图像,获得随机目标的状态信息。Step 712, the camera obtains state information of a random target according to the first image.
步骤713,相机向路侧融合模块发送随机目标的状态信息。相应地,路侧融合模块接收来自相机的状态信息。Step 713, the camera sends the state information of the random target to the roadside fusion module. Correspondingly, the roadside fusion module receives status information from the cameras.
此处,路侧融合模块可接收来自P个相机的状态信息。Here, the roadside fusion module may receive status information from P cameras.
步骤714,路侧融合模块根据来自相机的P个状态信息,获取随机目标的第一状态信息。Step 714, the roadside fusion module acquires the first state information of the random target according to the P pieces of state information from the camera.
在一种可能的实现方式中,路侧融合模块根据来自P个相机的状态信息,对这P个状态信息进行加权平均,得到随机目标的第一状态信息。In a possible implementation manner, the roadside fusion module performs a weighted average on the P pieces of state information according to the state information from the P cameras to obtain the first state information of the random target.
3) The first timestamp.

In a possible implementation, the moment at which the roadside fusion module sends the first data to the fusion device over the air interface is the first timestamp.

4) The first covariance estimate.

In a possible implementation, the first covariance estimate characterizes the statistical error between the first state information and the actual state information.
Further, optionally, the first covariance prediction value P̂_r(k/t) can be expressed by the following formula 5:

P̂_r(k/t) = φ_r(k/t) · P̂_r(t) · φ_r(k/t)^T + Q_r(k/t)    (formula 5)

where k denotes the fusion moment, t denotes the first timestamp, φ_r(k/t) denotes the state equation, and P̂_r(t) denotes the covariance estimate output by the roadside detection system at time t (at the initial moment, P̂_r(t) may be a preset value); this value is an intermediate quantity, and Q_r(k/t) is the state process calibration value for the interval k−t. It should be understood that the first state equation may be the motion model of the random target detected by the roadside detection system and indicated to the fusion device, or it may be determined by the fusion device according to the first state information. The motion model of the random target may be a straight-line motion model, a left-turn motion model, or a right-turn motion model.
Based on the above, the roadside fusion module can obtain the first state information, the first covariance estimate, the first timestamp, the first state process calibration value, and the first motion model.

Based on a similar process, the vehicle-side fusion module can obtain the second motion model, the second state process calibration value, the second state information, the second covariance estimate, and the second timestamp. It should be understood that, in the above description, the roadside fusion module is replaced by the vehicle-side fusion module, the roadside perception module is replaced by the vehicle-side perception module, the first motion model is replaced by the second motion model, the first state process calibration value is replaced by the second state process calibration value, and the first state information is replaced by the second state information.
Based on the above, the data processing method for vehicle-road coordination proposed by this application is described in detail below with reference to FIG. 8 to FIG. 10.

As shown in FIG. 8, it is a schematic flowchart of a data processing method for vehicle-road coordination provided by this application. The method can be applied to the communication system shown in FIG. 1a above. The method includes the following steps:
Step 801: the roadside detection system sends first data to the fusion device. Correspondingly, the fusion device receives the first data from the roadside detection system.

Here, the first data includes one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimate, and a first timestamp. For details, reference may be made to the foregoing related description, which is not repeated here.
Step 802: the vehicle-side detection system sends second data to the fusion device. Correspondingly, the fusion device receives the second data from the vehicle-side detection system.

Here, the second data includes a second state process calibration value, a second motion model, second state information, a second covariance estimate, and a second timestamp. The second state process calibration value is used to optimize the second covariance estimate (that is, during data fusion the fusion device can optimize the second covariance estimate based on the second state process calibration value), so that the measurement error can be characterized more accurately; the second motion model is used to identify the motion state of the cooperative target. Further, optionally, the second state process calibration value and the second motion model may be acquired and stored in advance by the vehicle-side detection system; a possible implementation by which the vehicle-side detection system acquires them can be found in the foregoing description of FIG. 4 and is not repeated here. The second state information is used to identify the motion characteristics of the target; exemplarily, it may include a second position and/or a second velocity of the target. A possible implementation for obtaining the second state information can be found in the foregoing description of FIG. 5 and is not repeated here. The second covariance estimate is used to identify the statistical error between the second state information and the actual state information. It should be noted that at the initial moment the vehicle-side detection system has no actual state information, so a piece of actual state information (called initial state information) can be preset; the initial state information usually converges continuously as the filtering proceeds. The second timestamp is used to identify the moment at which the vehicle-side detection system sends the second data; in other words, the second timestamp is the timestamp stamped when the vehicle-side detection system sends the second data.
It should be noted that there is no fixed order between step 801 and step 802: step 801 may be performed before step 802, step 802 may be performed before step 801, or steps 801 and 802 may be performed simultaneously, which is not limited in this application.
Step 803: the fusion device obtains first fused data of the target according to the first data and the second data.

Here, the first fused data refers to the first fused data of the random target, and includes, for example, the target's position information, state information (such as velocity and direction), covariance, and the like. A possible implementation of step 803 can be found in the description of FIG. 9 below and is not repeated here.
Through steps 801 to 803 above, the fusion device receives the first state process calibration value and the first motion model from the roadside fusion module, and during data fusion it can optimize the first covariance estimate using the first state process calibration value, so that the measurement error is characterized more accurately. Further, the fusion device also receives the second state process calibration value and the second motion model from the vehicle-side fusion module, and during data fusion it can optimize the second covariance estimate using the second state process calibration value, again characterizing the measurement error more accurately, which helps to improve the accuracy of the obtained first fused data of the target.
Referring to FIG. 9, it is a schematic flowchart of a method for obtaining the first fused data according to the first data and the second data provided in this application. The method includes the following steps:

Step 901: the fusion device obtains a first state prediction value of the target at the fusion moment according to the first timestamp and the first state information.
In a possible implementation, the first state prediction value X̂_r(k/t) can be expressed by the following formula 6:

X̂_r(k/t) = φ_r(k/t) · X̂_r(t)    (formula 6)

where X̂_r(t) denotes the first state information at time t sent by the roadside detection system, and φ_r(k/t) denotes the first state equation.
Step 902: the fusion device obtains a second state prediction value of the target at the fusion moment according to the second timestamp and the second state information.

In a possible implementation, the second state prediction value X̂_v(k/t) can be expressed by the following formula 7:

X̂_v(k/t) = φ_v(k/t) · X̂_v(t)    (formula 7)

where X̂_v(t) denotes the second state information at time t sent by the vehicle-side detection system, and φ_v(k/t) denotes the second state equation.
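Formulas 6 and 7 are the same propagation applied to the state vector; a minimal sketch follows, under the same assumed constant-velocity state equation and illustrative numbers as before.

```python
import numpy as np

def predict_state(phi, x_t):
    """Formulas 6/7: propagate a state estimate from time t to fusion time k."""
    return phi @ x_t

dt = 0.1
phi = np.array([[1.0, dt], [0.0, 1.0]])  # assumed state equation phi(k/t)
x_t = np.array([10.0, 1.5])              # assumed state [position, velocity] at time t
x_pred = predict_state(phi, x_t)         # predicted state at the fusion moment
```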
It should be noted that there is no fixed order between step 901 and step 902: step 901 may be performed before step 902, step 902 may be performed before step 901, or the two steps may be performed simultaneously.
Step 903: the fusion device obtains a first covariance prediction value of the target at the fusion moment according to the first timestamp, the first state information, the first state process calibration value, and the first covariance estimate.

Here, the first covariance prediction value P̂_r(k/t) is given by formula 5 above, in which P̂_r(t) denotes the first covariance estimate output by the roadside detection system at time t.
Step 904: the fusion device obtains a second covariance prediction value of the target at the fusion moment according to the second timestamp, the second state information, the second state process calibration value, and the second motion model.

In a possible implementation, the second covariance prediction value P̂_v(k/t) can be expressed by the following formula 8:

P̂_v(k/t) = φ_v(k/t) · P̂_v(t) · φ_v(k/t)^T + Q_v(k/t)    (formula 8)

where k denotes the k-th fusion moment, φ_v(k/t) denotes the state equation, P̂_v(t) denotes the second covariance estimate output by the vehicle-side detection system at time t (this value is an intermediate quantity), and Q_v(k/t) is the second state process calibration value for the interval k−t. It should be understood that the second state equation may be the motion model of the target detected by the vehicle-side detection system and indicated to the fusion device, or it may be determined by the fusion device according to the second state information.
It should be noted that there is no fixed order between step 903 and step 904: step 903 may be performed before step 904, step 904 may be performed before step 903, or the two steps may be performed simultaneously.
Step 905: the fusion device predicts a third state prediction value and a third covariance prediction value of the target according to the first motion model and the second fused data of the previous frame.

In a possible implementation, the second fused data of the previous frame includes the state information X̂_f(k−1) output for the previous frame, and the third state prediction value X̂_f(k/k−1) can be expressed by formula 9:

X̂_f(k/k−1) = φ_f(k) · X̂_f(k−1)    (formula 9)

where φ_f(k) = φ_r(k/t) denotes the state equation, which can be determined according to the first motion model.

Further, optionally, the second fused data of the previous frame includes the covariance estimate P̂_f(k−1) output for the previous frame, and the third covariance prediction value P̂_f(k/k−1) can be expressed by formula 10:

P̂_f(k/k−1) = φ_f(k) · P̂_f(k−1) · φ_f(k)^T + Q_f(k)    (formula 10)

where Q_f(k) is the process noise, generally an empirical value, and φ_f(k) denotes the state equation.
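The state equation φ_f(k) depends on the chosen motion model. As an illustrative assumption (not specified by this application), a straight-line model in the plane could use a constant-velocity transition matrix, while the left-turn and right-turn models could use a coordinated-turn matrix with turn rate ω, as sketched below for a state [x, y, vx, vy].

```python
import numpy as np

def straight_line_phi(dt):
    """Assumed constant-velocity state equation for state [x, y, vx, vy]."""
    return np.array([[1, 0, dt, 0],
                     [0, 1, 0, dt],
                     [0, 0, 1,  0],
                     [0, 0, 0,  1]], dtype=float)

def turn_phi(dt, omega):
    """Assumed coordinated-turn state equation; omega > 0 turns left, omega < 0
    turns right (omega must be nonzero)."""
    s, c = np.sin(omega * dt), np.cos(omega * dt)
    return np.array([[1, 0, s / omega,       -(1 - c) / omega],
                     [0, 1, (1 - c) / omega,  s / omega],
                     [0, 0, c,               -s],
                     [0, 0, s,                c]], dtype=float)
```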
Here, each frame of fused data output by the fusion device may first be buffered, and the processor may retrieve the fused data from the buffer.
Step 906: the fusion device predicts a fourth state prediction value and a fourth covariance prediction value of the target according to the second motion model and the second fused data of the previous frame of the target.

For step 906, reference may be made to the description of step 905 above, which is not repeated here.

It should be noted that there is no fixed order between step 905 and step 906: step 905 may be performed before step 906, step 906 may be performed before step 905, or the two steps may be performed simultaneously.
Step 907: the fusion device obtains a first filtered estimate and a first filtered covariance estimate of the target according to the first state prediction value, the first covariance prediction value, the third state prediction value, and the third covariance prediction value.

In a possible implementation, the first filtered estimate X̂_fr(k) is determined from the first state prediction value and the third state prediction value as in formula 11, and the first filtered covariance estimate P̂_fr(k) is determined from the first covariance prediction value and the third covariance prediction value as in formula 12:

X̂_fr(k) = X̂_f(k/k−1) + K_r(k) · [X̂_r(k/t) − H · X̂_f(k/k−1)]    (formula 11)

P̂_fr(k) = (I − K_r(k) · H) · P̂_f(k/k−1)    (formula 12)

where H is the observation matrix, R is a noise coefficient (generally an empirical value), and K_r(k) is the Kalman gain given by formula 13:

K_r(k) = P̂_f(k/k−1) · H^T · [H · P̂_f(k/k−1) · H^T + P̂_r(k/t) + R]^(−1)    (formula 13)

Here, the first filtered estimate of the target includes the target's first velocity, first direction, first position information, and the like.
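A minimal sketch of the update in formulas 11 to 13 follows. It assumes the reconstructed gain above, in which the detection system's covariance prediction enters the innovation covariance alongside the empirical noise R; that placement, and all matrix shapes, are assumptions for illustration.

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, Pz, H, R):
    """Formulas 11-13: correct the fused prediction with one system's prediction.

    x_pred, P_pred: third (fused) state/covariance prediction
    z, Pz:          the detection system's state/covariance prediction
    H:              observation matrix; R: empirical noise coefficient matrix
    """
    S = H @ P_pred @ H.T + Pz + R                    # innovation covariance (assumed form)
    K = P_pred @ H.T @ np.linalg.inv(S)              # formula 13: Kalman gain
    x_est = x_pred + K @ (z - H @ x_pred)            # formula 11: filtered state estimate
    P_est = (np.eye(len(x_pred)) - K @ H) @ P_pred   # formula 12: filtered covariance
    return x_est, P_est
```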
Step 908: the fusion device obtains a second filtered estimate and a second filtered covariance estimate of the target according to the second state prediction value, the second covariance prediction value, the fourth state prediction value, and the fourth covariance prediction value.

In a possible implementation, the second filtered estimate X̂_fv(k) is determined from the second state prediction value and the fourth state prediction value as in formula 14, and the second filtered covariance estimate P̂_fv(k) is determined from the second covariance prediction value and the fourth covariance prediction value as in formula 15:

X̂_fv(k) = X̂_f(k/k−1) + K_v(k) · [X̂_v(k/t) − H · X̂_f(k/k−1)]    (formula 14)

P̂_fv(k) = (I − K_v(k) · H) · P̂_f(k/k−1)    (formula 15)

where H is the observation matrix and K_v(k) is the Kalman gain given by formula 16:

K_v(k) = P̂_f(k/k−1) · H^T · [H · P̂_f(k/k−1) · H^T + P̂_v(k/t) + R]^(−1)    (formula 16)

Here, the second filtered estimate of the target includes the target's second velocity, second direction, second position information, and the like.
It should be noted that there is no fixed order between step 907 and step 908: step 907 may be performed before step 908, step 908 may be performed before step 907, or the two steps may be performed simultaneously.
Step 909: the fusion device determines a first confidence of the first data at the fusion moment according to the first timestamp, the first state process calibration value, the first motion model, and the first covariance estimate.

The first confidence may also be called a first confidence prediction value. Assuming that the residual ΔX of the first data sent by the roadside detection system follows a normal distribution N(0, Σ) and the confidence interval is (0, S], the first confidence U_r(k) can be expressed by the following formula 17:

U_r(k) = ∫₀^S f(X) dX, where f is the probability density of N(0, Σ)    (formula 17)

where Σ is the sum of the first covariance estimate and the first state process calibration value, S is defined by a standard and may range over values greater than 0 and less than or equal to 95%, and X is the integration variable.
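For a scalar residual, the integral in formula 17 is the Gaussian probability mass over (0, S], which reduces to a closed form via the error function; the scalar reduction and the example numbers below are assumptions for illustration.

```python
import math

def confidence(sigma2, S):
    """Formula 17 for a scalar residual dX ~ N(0, sigma2).

    sigma2: sum of the covariance estimate and the state process calibration value
    S:      upper end of the confidence interval (0, S]
    Returns P(0 < dX <= S) = 0.5 * erf(S / sqrt(2 * sigma2)).
    """
    return 0.5 * math.erf(S / math.sqrt(2.0 * sigma2))

U_r = confidence(sigma2=0.36, S=0.95)  # hypothetical values
```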
Step 910: the fusion device determines a second confidence of the second data at the fusion moment according to the second timestamp, the second state process calibration value, the second motion model, and the second covariance estimate.

The second confidence may also be called a second confidence prediction value. Assuming that the residual ΔX of the second data sent by the vehicle-side detection system follows a normal distribution N(0, Σ) and the confidence interval is (0, S], the second confidence U_v(k) can be expressed by the following formula 18:

U_v(k) = ∫₀^S f(X) dX, where f is the probability density of N(0, Σ)    (formula 18)

where Σ is the sum of the second covariance estimate and the second state process calibration value, S is defined by a standard and may range over values greater than 0 and less than or equal to 95%, and X is the integration variable.
Step 911: the fusion device obtains the first fused data of the target according to the first confidence, the second confidence, the first filtered estimate, the first filtered covariance estimate, the second filtered estimate, and the second filtered covariance estimate.

A possible implementation of step 911 can be found in the description of FIG. 10 below.
As shown in FIG. 10, it is a schematic flowchart of yet another method for obtaining the first fused data provided by this application. The method includes the following steps:

Step 1001: the fusion device determines whether the first confidence satisfies a first preset confidence and whether the second confidence satisfies a second preset confidence. If the first confidence satisfies the first preset confidence and the second confidence also satisfies the second preset confidence, step 1002 is performed; if the first confidence does not satisfy the first preset confidence and the second confidence does not satisfy the second preset confidence, step 1003 is performed; if the first confidence satisfies the first preset confidence and the second confidence does not satisfy the second preset confidence, step 1004 is performed; if the first confidence does not satisfy the first preset confidence and the second confidence satisfies the second preset confidence, step 1005 is performed.

Here, the first preset confidence C_r and the second preset confidence C_v may be two preset indicators. The first preset confidence may be the same as or different from the second preset confidence, which is not limited in this application.

In a possible implementation, the case where the first confidence satisfies the first preset confidence and the second confidence also satisfies the second preset confidence can be expressed as U_r(k) ≥ C_r and U_v(k) ≥ C_v; the case where neither is satisfied can be expressed as U_r(k) < C_r and U_v(k) < C_v; the case where the first confidence satisfies the first preset confidence and the second confidence does not can be expressed as U_r(k) ≥ C_r and U_v(k) < C_v; and the case where the first confidence does not satisfy the first preset confidence and the second confidence does can be expressed as U_r(k) < C_r and U_v(k) ≥ C_v.
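The four branches of step 1001 reduce to a simple gate on the two confidences; a minimal sketch follows (the function name and return labels are illustrative, not part of this application).

```python
def select_fusion_branch(U_r, U_v, C_r, C_v):
    """Step 1001: choose how the first fused data is formed (steps 1002-1005)."""
    if U_r >= C_r and U_v >= C_v:
        return "fuse_both"             # step 1002: combine both filtered estimates
    if U_r >= C_r:
        return "use_roadside_only"     # step 1004: first filtered estimate only
    if U_v >= C_v:
        return "use_vehicle_only"      # step 1005: second filtered estimate only
    return "reuse_previous_frame"      # step 1003: previous frame's fused data
```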
Step 1002: the fusion device obtains the first fused data according to the first filtered estimate, the first filtered covariance estimate, the second filtered estimate, and the second filtered covariance estimate.

Taking the case where the first fused data includes a target state estimate and a corresponding target covariance estimate as an example, the target state estimate X̂_f(k) can be expressed by the following formula 19, and the target covariance estimate P̂_f(k) by the following formula 20:

X̂_f(k) = P̂_fv(k) · [P̂_fr(k) + P̂_fv(k)]^(−1) · X̂_fr(k) + P̂_fr(k) · [P̂_fr(k) + P̂_fv(k)]^(−1) · X̂_fv(k)    (formula 19)

P̂_f(k) = P̂_fr(k) · [P̂_fr(k) + P̂_fv(k)]^(−1) · P̂_fv(k)    (formula 20)
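A minimal sketch of the fusion in formulas 19 and 20 follows; the covariance-weighted convex form is an assumption consistent with standard track-to-track fusion, not a verbatim statement of this application's formulas.

```python
import numpy as np

def fuse_tracks(x_r, P_r, x_v, P_v):
    """Formulas 19/20: covariance-weighted combination of the two filter outputs."""
    S_inv = np.linalg.inv(P_r + P_v)
    x_f = P_v @ S_inv @ x_r + P_r @ S_inv @ x_v  # formula 19: fused state estimate
    P_f = P_r @ S_inv @ P_v                       # formula 20: fused covariance estimate
    return x_f, P_f
```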
Step 1003: the fusion device obtains the first fused data according to the second fused data output for the previous frame.

Here, the target state estimate is X̂_f(k) = X̂_f(k/k−1), and the corresponding target covariance estimate is P̂_f(k) = P̂_f(k/k−1).
Step 1004: the fusion device obtains the first fused data according to the first filtered estimate and the first filtered covariance estimate.

Here, the target state estimate is X̂_f(k) = X̂_fr(k), and the corresponding fused covariance estimate is P̂_f(k) = P̂_fr(k).
Step 1005: the fusion device obtains the first fused data according to the second filtered estimate and the second filtered covariance estimate.

Here, the target state estimate is X̂_f(k) = X̂_fv(k), and the corresponding fused covariance estimate is P̂_f(k) = P̂_fv(k).
It can be understood that, to implement the functions in the above embodiments, the detection system and the fusion device include hardware structures and/or software modules corresponding to each function. Those skilled in the art will readily appreciate that, in combination with the modules and method steps of the examples described in the embodiments disclosed in this application, this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the specific application scenario and design constraints of the technical solution.
Based on the above content and the same concept, FIG. 11 is a schematic structural diagram of a possible detection system provided by this application. These detection systems can be used to implement the functions of the roadside detection system or the vehicle-side detection system in the above method embodiments, and can therefore also achieve the beneficial effects of those embodiments. In this application, the detection system may be the roadside detection system shown in FIG. 1a or the vehicle-side detection system shown in FIG. 1a; it may also be the roadside detection system shown in FIG. 1b or the vehicle-side detection system shown in FIG. 1b; it may further be a module (such as a chip) applied to a detection system.
As shown in FIG. 11, the detection system 1100 includes a processing module 1101 and a transceiver module 1102. The detection system 1100 is used to implement the functions of the roadside detection system in the method embodiment shown in FIG. 3 or FIG. 8.

When the detection system 1100 is used to implement the functions of the roadside detection system of the method embodiment shown in FIG. 3: the processing module 1101 is used to acquire data of the detection system, the data including a state process calibration value, a motion model, state information, a covariance estimate, and a timestamp; the state process calibration value is used to optimize the covariance estimate, the motion model is used to identify the motion state of the target, the state information is used to identify the motion characteristics of the target, the covariance estimate is used to identify the error between the state information and the actual state information, and the timestamp is used to identify the moment at which the detection system sends the data; the transceiver module 1102 is used to send the data to the fusion device.
A more detailed description of the processing module 1101 and the transceiver module 1102 can be obtained directly from the related description in the method embodiment shown in FIG. 3 and is not repeated here.

It should be understood that the processing module 1101 in this embodiment of the application may be implemented by a processor or processor-related circuit components, and the transceiver module 1102 may be implemented by a transceiver or transceiver-related circuit components.
Based on the above content and the same concept, FIG. 12 is a schematic structural diagram of a possible fusion device provided by this application. These fusion devices can be used to implement the functions of the fusion device in the above method embodiments, and can therefore also achieve the beneficial effects of those embodiments. In this application, the fusion device may be the cloud server shown in FIG. 1a; a processor, ECU, or domain controller in the vehicle shown in FIG. 1a; or the fusion device in the roadside detection system shown in FIG. 1b; it may also be a module (such as a chip) applied to a fusion device.
As shown in FIG. 12, the fusion device 1200 includes a processing module 1201 and a transceiver module 1202. The fusion device 1200 is used to implement the functions of the fusion device in the method embodiments shown in FIG. 3, FIG. 8, FIG. 9, or FIG. 10.

When the fusion device 1200 is used to implement the functions of the fusion device of the method embodiment shown in FIG. 8: the transceiver module 1202 is used to acquire first data from the roadside detection system and second data from the vehicle-side detection system. The first data includes one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimate, and a first timestamp; the first state process calibration value is used to optimize the first covariance estimate, the first motion model is used to identify the first motion state of the target, the first state information is used to identify the first motion characteristics of the target, the first covariance estimate is used to identify the error between the first state information and the first actual state information, and the first timestamp is used to identify the first moment at which the roadside detection system sends the first data. The second data includes a second state process calibration value, a second motion model, second state information, a second covariance estimate, and a second timestamp; the second state process calibration value is used to optimize the second covariance estimate, the second motion model is used to identify the second motion state of the target, the second state information is used to identify the second motion characteristics of the target, the second covariance estimate is used to identify the error between the second state information and the second actual state information, and the second timestamp is used to identify the second moment at which the vehicle-side detection system sends the second data. The processing module 1201 is used to obtain the first fused data of the random target according to the first data and the second data.
A more detailed description of the processing module 1201 and the transceiver module 1202 can be obtained directly from the related description in the method embodiment shown in FIG. 8 and is not repeated here.

It should be understood that the processing module 1201 in this embodiment of the application may be implemented by a processor or processor-related circuit components, and the transceiver module 1202 may be implemented by a transceiver or transceiver-related circuit components.
Based on the above content and the same concept, as shown in FIG. 13, this application further provides a detection system 1300. The detection system 1300 may include at least one processor 1301 and a transceiver 1302, which are coupled to each other. It can be understood that the transceiver 1302 may be an interface circuit or an input/output interface. Optionally, the detection system 1300 may further include a memory 1303 for storing instructions executed by the processor 1301, input data required by the processor 1301 to run the instructions, or data generated after the processor 1301 runs the instructions.

When the detection system 1300 is used to implement the method shown in FIG. 3, the processor 1301 performs the functions of the processing module 1101, and the transceiver 1302 performs the functions of the transceiver module 1102.
Based on the above content and the same concept, as shown in FIG. 14, this application further provides a fusion device 1400. The fusion device 1400 may include at least one processor 1401 and a transceiver 1402, which are coupled to each other. It can be understood that the transceiver 1402 may be an interface circuit or an input/output interface. Optionally, the fusion device 1400 may further include a memory 1403 for storing instructions executed by the processor 1401, input data required by the processor 1401 to run the instructions, or data generated after the processor 1401 runs the instructions.

When the fusion device 1400 is used to implement the method shown in FIG. 8, the processor 1401 performs the functions of the processing module 1201, and the transceiver 1402 performs the functions of the transceiver module 1202.
Based on the above content and the same concept, this application provides a communication system for a vehicle-road system. The communication system may include one or more of the aforementioned vehicle-side detection systems, one or more roadside detection systems, and a fusion device. The vehicle-side detection system may perform any method on the vehicle-side detection system side, the roadside detection system may perform any method on the roadside detection system side, and the fusion device may perform any method on the fusion device side. Possible implementations of the roadside detection system, the vehicle-side detection system, and the fusion device can be found in the above description and are not repeated here.
Based on the above content and the same concept, this application provides a vehicle. The vehicle may include one or more of the aforementioned vehicle-side detection systems and/or a fusion device. The vehicle-side detection system may perform any method on the vehicle-side detection system side, and the fusion device may perform any method on the fusion device side; possible implementations can be found in the above description and are not repeated here. Further, optionally, the vehicle may also include other components, such as a processor, a memory, and a wireless communication apparatus.

In a possible implementation, the vehicle may be, for example, an unmanned vehicle, a smart vehicle, an electric vehicle, or a digital vehicle.
It can be understood that the processor in the embodiments of this application may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. A general-purpose processor may be a microprocessor or any conventional processor.
The method steps in the embodiments of this application may be implemented by hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium well known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium; the storage medium may also be a component of the processor. The processor and the storage medium may be located in an ASIC, and the ASIC may be located in the detection system. The processor and the storage medium may also exist in the detection system as discrete components.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using software, they may be implemented in whole or in part in the form of a computer program product. A computer program product includes one or more computer programs or instructions; when the computer programs or instructions are loaded and executed on a computer, the processes or functions of the embodiments of this application are executed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a detection system, user equipment, or another programmable apparatus. The computer programs or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired or wireless means. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium such as a floppy disk, hard disk, or magnetic tape; an optical medium such as a digital video disc (DVD); or a semiconductor medium such as a solid state drive (SSD).
In the embodiments of this application, unless otherwise specified or in case of logical conflict, the terms and/or descriptions in different embodiments are consistent and may be cited in each other, and the technical features in different embodiments may be combined according to their inherent logical relationships to form new embodiments.
In this application, "at least one" means one or more, and "multiple" means two or more. "At least one of the following" or similar expressions refer to any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be singular or plural. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. In the text descriptions of this application, the character "/" generally indicates an "or" relationship between the associated objects; in the formulas of this application, the character "/" indicates a "division" relationship between the associated objects. In addition, in this application, the word "exemplary" is used to mean serving as an example, instance, or illustration. Any embodiment or design described as an "example" in this application should not be construed as preferred or advantageous over other embodiments or designs; rather, the word is intended to present a concept in a concrete manner and does not limit this application.
It can be understood that the various numerical designations in the embodiments of this application are only distinctions made for convenience of description and are not intended to limit the scope of the embodiments. The sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic. The terms "first", "second", and similar expressions are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. Furthermore, the terms "including" and "having" and any variations thereof are intended to cover a non-exclusive inclusion, for example, the inclusion of a series of steps or modules. A method, system, product, or device is not necessarily limited to the steps or modules explicitly listed, but may include other steps or modules not explicitly listed or inherent to the process, method, product, or device.
Obviously, those skilled in the art can make various changes and modifications to this application without departing from its scope of protection. If these modifications and variations of this application fall within the scope of the claims of this application and their equivalent technologies, this application is also intended to include them.

Claims (23)

1. A communication method for vehicle-road coordination, comprising:

acquiring data from a detection system, the data comprising one or more of a state process calibration value, a motion model, state information, a covariance estimate, and a timestamp; wherein the state process calibration value is used to optimize the covariance estimate, the motion model is used to identify a motion state of a target, the state information is used to identify motion characteristics of the target, the covariance estimate is used to identify an error between the state information and actual state information, and the timestamp is used to identify a moment at which the detection system sends the data; and

sending the data to a fusion device.
2. The method according to claim 1, wherein the target comprises a cooperative target; and

the acquiring data from the detection system comprises:

acquiring an actual position, at a preset moment, of the cooperative target conforming to the motion model, and an estimated position of the cooperative target at the preset moment;

determining, according to the estimated position and the motion model, an estimated position of the cooperative target after m steps, where m is a positive integer;

determining an estimation error corresponding to the m steps according to the actual position and the estimated position after the m steps; and

obtaining, according to the estimation error, the state process calibration value corresponding to the motion model.
3. The method according to claim 2, wherein the obtaining, according to the estimation error, the state process calibration value corresponding to the motion model comprises:

acquiring n−m estimation errors obtained in each of L cycles, where n is an integer greater than 1 and L is a positive integer; and

determining the variance of the L×(n−m) estimation errors obtained in the L cycles as the state process calibration value corresponding to the motion model.
4. The method according to any one of claims 1 to 3, wherein the motion model comprises any one of the following: a straight-line motion model, a left-turn motion model, or a right-turn motion model.
5. The method according to any one of claims 1 to 4, wherein the target further comprises a random target; and

the acquiring data from the detection system further comprises:

acquiring sampled data of the random target; and

determining the state information according to the sampled data.
6. The method according to claim 5, wherein the acquiring sampled data of the random target comprises:

transmitting an electromagnetic wave signal to a detection area, the detection area comprising the random target;

receiving an echo signal from the detection area, the echo signal being obtained after the electromagnetic wave signal is reflected by the random target; and

determining the state information according to the echo signal.
7. A data processing method for vehicle-road coordination, comprising:

acquiring first data from a roadside detection system and second data from a vehicle-side detection system, wherein the first data comprises one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimate, and a first timestamp; the first state process calibration value is used to optimize the first covariance estimate, the first motion model is used to identify a first motion state of a target, the first state information is used to identify first motion characteristics of the target, the first covariance estimate is used to identify an error between the first state information and first actual state information, and the first timestamp is used to identify a first moment at which the roadside detection system sends the first data; and wherein the second data comprises one or more of a second state process calibration value, a second motion model, second state information, a second covariance estimate, and a second timestamp; the second state process calibration value is used to optimize the second covariance estimate, the second motion model is used to identify a second motion state of the target, the second state information is used to identify second motion characteristics of the target, the second covariance estimate is used to identify an error between the second state information and second actual state information, and the second timestamp is used to identify a second moment at which the vehicle-side detection system sends the second data; and

obtaining first fused data of the target according to the first data and the second data.
  8. The method according to claim 7, wherein the obtaining first fusion data of the target according to the first data and the second data comprises:
    obtaining a first state prediction value and a first covariance prediction value of the target at a fusion moment according to the first timestamp, the first state process calibration value, the first state information, and the first covariance estimate; and obtaining a second state prediction value and a second covariance prediction value of the target at the fusion moment according to the second timestamp, the second state process calibration value, the second state information, and the second motion model;
    predicting a third state prediction value and a third covariance prediction value of the target according to the first motion model and second fusion data of a previous frame of the target; and predicting a fourth state prediction value and a fourth covariance prediction value of the target according to the second motion model and the second fusion data of the previous frame of the target;
    obtaining a first filtered estimate and a first filtered covariance estimate according to the first state prediction value, the first covariance prediction value, the third state prediction value, and the third covariance prediction value; and obtaining a second filtered estimate and a second filtered covariance estimate according to the second state prediction value, the second covariance prediction value, the fourth state prediction value, and the fourth covariance prediction value;
    obtaining a first confidence of the first data at the fusion moment according to the first timestamp, the first state process calibration value, the first motion model, and the first covariance estimate; and obtaining a second confidence of the second data at the fusion moment according to the second timestamp, the second state process calibration value, the second motion model, and the second covariance estimate; and
    obtaining the first fusion data according to the first confidence, the second confidence, the first filtered estimate, the first filtered covariance estimate, the second filtered estimate, and the second filtered covariance estimate.
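The filtering step of claim 8 pairs each measurement-based prediction with the corresponding model-based prediction (first with third, second with fourth). One plausible reading is a standard Kalman-style correction, sketched below under the assumption that the measurement-based prediction observes the full state; the claim itself does not fix these equations.

```python
import numpy as np

def filtered_estimate(x_meas, P_meas, x_model, P_model):
    """Fuse a measurement-based prediction with a model-based prediction.

    With the measurement treated as a full-state observation (H = I), the
    Kalman gain reduces to P_model @ inv(P_model + P_meas).
    """
    K = P_model @ np.linalg.inv(P_model + P_meas)   # gain
    x_filt = x_model + K @ (x_meas - x_model)       # filtered state estimate
    P_filt = (np.eye(len(x_model)) - K) @ P_model   # filtered covariance estimate
    return x_filt, P_filt
```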
  9. The method according to claim 8, wherein the obtaining the first fusion data according to the first confidence, the second confidence, the first filtered estimate, the first filtered covariance estimate, the second filtered estimate, and the second filtered covariance estimate comprises:
    when the first confidence satisfies a first preset confidence and the second confidence does not satisfy a second preset confidence, obtaining the first fusion data according to the first filtered estimate and the first filtered covariance estimate;
    when the second confidence satisfies the second preset confidence and the first confidence does not satisfy the first preset confidence, obtaining the first fusion data according to the second filtered estimate and the second filtered covariance estimate;
    when the first confidence satisfies the first preset confidence and the second confidence satisfies the second preset confidence, obtaining the first fusion data according to the first filtered estimate, the first filtered covariance estimate, the second filtered estimate, and the second filtered covariance estimate; and
    when the first confidence does not satisfy the first preset confidence and the second confidence does not satisfy the second preset confidence, obtaining the first fusion data according to the second fusion data of the previous frame.
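A minimal sketch of the four confidence branches of claim 9, assuming scalar thresholds for the two preset confidences and an information-weighted mean when both sources qualify; both assumptions go beyond the claim text.

```python
import numpy as np

def fuse(conf1, conf2, est1, cov1, est2, cov2, prev_est, prev_cov,
         thr1=0.5, thr2=0.5):
    """Confidence-gated fusion (claim 9). thr1/thr2 stand in for the two
    'preset confidences'."""
    ok1, ok2 = conf1 >= thr1, conf2 >= thr2
    if ok1 and not ok2:
        return est1, cov1                      # roadside branch only
    if ok2 and not ok1:
        return est2, cov2                      # vehicle-side branch only
    if ok1 and ok2:
        W1, W2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
        P = np.linalg.inv(W1 + W2)             # fused covariance
        return P @ (W1 @ est1 + W2 @ est2), P  # information-weighted mean
    return prev_est, prev_cov                  # fall back to the previous frame
```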
  10. A detection system, characterized in that it comprises:
    at least one processor, configured to acquire data from the detection system, the data comprising one or more of a state process calibration value, a motion model, state information, a covariance estimate, and a timestamp, wherein the state process calibration value is used to optimize the covariance estimate, the motion model is used to identify a motion state of a target, the state information is used to identify a motion feature of the target, the covariance estimate is used to identify an error between the state information and actual state information, and the timestamp is used to identify a moment at which the detection system sends the data; and
    a transceiver, configured to send the data to a fusion apparatus.
  11. The system according to claim 10, wherein the target comprises a cooperative target; and
    the processor is specifically configured to:
    acquire an actual position, at a preset moment, of a cooperative target conforming to the motion model, and an estimated position of the cooperative target at the preset moment;
    determine an estimated position of the cooperative target after m steps according to the estimated position and the motion model, where m is a positive integer;
    determine an estimation error corresponding to the m steps according to the actual position and the estimated position after the m steps; and
    obtain, according to the estimation error, the state process calibration value corresponding to the motion model.
  12. The system according to claim 11, wherein the processor is specifically configured to:
    acquire the n − m estimation errors obtained in each of L cycles, where n is an integer greater than 1 and L is a positive integer; and
    determine a variance of the L × (n − m) estimation errors obtained in the L cycles as the state process calibration value corresponding to the motion model.
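Read together, claims 11 and 12 describe an offline calibration loop: propagate the estimated position of the cooperative target m steps under the motion model, compare against the actual position, and take the variance of all L × (n − m) errors. A sketch under assumed array layouts and an assumed one-step propagation callback:

```python
import numpy as np

def state_process_calibration(actual_pos, est_pos, propagate, m):
    """Variance of m-step prediction errors over L cycles (claims 11-12).

    actual_pos[l][k] / est_pos[l][k]: actual and estimated positions of the
    cooperative target at step k of cycle l (n samples per cycle);
    propagate(p): advances one estimate a single step under the motion model.
    Returns the variance of the L x (n - m) scalar errors.
    """
    errors = []
    for act, est in zip(actual_pos, est_pos):  # L cycles
        n = len(est)
        for k in range(n - m):                 # n - m errors per cycle
            p = est[k]
            for _ in range(m):                 # roll the estimate forward m steps
                p = propagate(p)
            errors.append(np.linalg.norm(act[k + m] - p))
    return float(np.var(errors))
```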
  13. The system according to any one of claims 10 to 12, wherein the motion model comprises any one of the following:
    a straight-line motion model, a left-turn motion model, or a right-turn motion model.
  14. The system according to any one of claims 10 to 13, wherein the target further comprises a random target; and
    the processor is further configured to:
    acquire sampling data of the random target; and
    determine the state information according to the sampling data.
  15. The system according to claim 14, wherein the transceiver is specifically configured to:
    transmit an electromagnetic wave signal to a detection area, the detection area comprising the random target; and
    receive an echo signal from the detection area, the echo signal being obtained after the electromagnetic wave signal is reflected by the random target; and
    the processor is specifically configured to:
    determine the state information according to the echo signal.
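Claim 15 leaves the echo processing open. As one conventional possibility for a monostatic sensor, range and radial velocity follow from the round-trip delay and the Doppler shift; the formulas below are standard textbook relations, not part of the claim.

```python
C = 299_792_458.0  # speed of light, m/s

def state_from_echo(delay_s, doppler_hz, carrier_hz):
    """Recover range and radial velocity of a target from one echo.

    Range is half the round-trip distance; velocity uses the classical
    monostatic Doppler relation v = f_d * c / (2 * f_c).
    """
    rng = C * delay_s / 2.0
    vel = doppler_hz * C / (2.0 * carrier_hz)
    return rng, vel
```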
  16. A detection system, characterized in that it comprises modules configured to perform the method according to any one of claims 1 to 6.
  17. A fusion apparatus, characterized in that it comprises:
    a transceiver, configured to acquire first data from a roadside detection system and second data from a vehicle-side detection system, the first data comprising one or more of a first state process calibration value, a first motion model, first state information, a first covariance estimate, and a first timestamp, wherein the first state process calibration value is used to optimize the first covariance estimate, the first motion model is used to identify a first motion state of a target, the first state information is used to identify a first motion feature of the target, the first covariance estimate is used to identify an error between the first state information and first actual state information, and the first timestamp is used to identify a first moment at which the roadside detection system sends the first data; and the second data comprising one or more of a second state process calibration value, a second motion model, second state information, a second covariance estimate, and a second timestamp, wherein the second state process calibration value is used to optimize the second covariance estimate, the second motion model is used to identify a second motion state of the target, the second state information is used to identify a second motion feature of the target, the second covariance estimate is used to identify an error between the second state information and second actual state information, and the second timestamp is used to identify a second moment at which the vehicle-side detection system sends the second data; and
    at least one processor, configured to obtain first fusion data of the target according to the first data and the second data.
  18. The apparatus according to claim 17, wherein the processor is specifically configured to:
    obtain a first state prediction value and a first covariance prediction value of the target at a fusion moment according to the first timestamp, the first state process calibration value, the first state information, and the first covariance estimate; and obtain a second state prediction value and a second covariance prediction value of the target at the fusion moment according to the second timestamp, the second state process calibration value, the second state information, and the second motion model;
    predict a third state prediction value and a third covariance prediction value of the target according to the first motion model and second fusion data of a previous frame of the target; and predict a fourth state prediction value and a fourth covariance prediction value of the target according to the second motion model and the second fusion data of the previous frame of the target;
    obtain a first filtered estimate and a first filtered covariance estimate according to the first state prediction value, the first covariance prediction value, the third state prediction value, and the third covariance prediction value; and obtain a second filtered estimate and a second filtered covariance estimate according to the second state prediction value, the second covariance prediction value, the fourth state prediction value, and the fourth covariance prediction value;
    obtain a first confidence of the first data at the fusion moment according to the first timestamp, the first state process calibration value, the first motion model, and the first covariance estimate; and obtain a second confidence of the second data at the fusion moment according to the second timestamp, the second state process calibration value, the second motion model, and the second covariance estimate; and
    obtain the first fusion data according to the first confidence, the second confidence, the first filtered estimate, the first filtered covariance estimate, the second filtered estimate, and the second filtered covariance estimate.
  19. The apparatus according to claim 18, wherein the processor is specifically configured to:
    when the first confidence satisfies a first preset confidence and the second confidence does not satisfy a second preset confidence, obtain the first fusion data according to the first filtered estimate and the first filtered covariance estimate;
    when the second confidence satisfies the second preset confidence and the first confidence does not satisfy the first preset confidence, obtain the first fusion data according to the second filtered estimate and the second filtered covariance estimate;
    when the first confidence satisfies the first preset confidence and the second confidence satisfies the second preset confidence, obtain the first fusion data according to the first filtered estimate, the first filtered covariance estimate, the second filtered estimate, and the second filtered covariance estimate; and
    when the first confidence does not satisfy the first preset confidence and the second confidence does not satisfy the second preset confidence, obtain the first fusion data according to the second fusion data of the previous frame.
  20. A fusion apparatus, characterized in that it comprises modules configured to perform the method according to any one of claims 7 to 9.
  21. A vehicle, characterized in that it comprises the detection system according to any one of claims 10 to 16; and/or the fusion apparatus according to any one of claims 17 to 20.
  22. A computer-readable storage medium, characterized in that it comprises computer instructions which, when run on a processor, cause the detection system to perform the method according to any one of claims 1 to 6, or cause the fusion apparatus to perform the method according to any one of claims 7 to 9.
  23. A computer program product, characterized in that, when the computer program product runs on a processor, the detection system is caused to perform the method according to any one of claims 1 to 6, or the fusion apparatus is caused to perform the method according to any one of claims 7 to 9.
PCT/CN2021/101660 2021-06-22 2021-06-22 Vehicle-road coordination communication method, data processing method, detection system and fusion apparatus WO2022266863A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180099647.2A CN117529935A (en) 2021-06-22 2021-06-22 Vehicle-road cooperative communication and data processing method, detection system and fusion device
PCT/CN2021/101660 WO2022266863A1 (en) 2021-06-22 2021-06-22 Vehicle-road coordination communication method, data processing method, detection system and fusion apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/101660 WO2022266863A1 (en) 2021-06-22 2021-06-22 Vehicle-road coordination communication method, data processing method, detection system and fusion apparatus

Publications (1)

Publication Number Publication Date
WO2022266863A1

Family

ID=84543887

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/101660 WO2022266863A1 (en) 2021-06-22 2021-06-22 Vehicle-road coordination communication method, data processing method, detection system and fusion apparatus

Country Status (2)

Country Link
CN (1) CN117529935A (en)
WO (1) WO2022266863A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101799992A (en) * 2009-02-03 2010-08-11 通用汽车环球科技运作公司 The vehicle-to-vehicle communication and the object detection sensing of combination
US20180113472A1 (en) * 2016-10-21 2018-04-26 Toyota Jidosha Kabushiki Kaisha Estimate of geographical position of a vehicle using wireless vehicle data
EP3388307A2 (en) * 2017-04-13 2018-10-17 KNORR-BREMSE Systeme für Schienenfahrzeuge GmbH Merging of infrastructure-related data, in particular of infrastructure-related data for railway vehicles
CN110430079A (en) * 2019-08-05 2019-11-08 腾讯科技(深圳)有限公司 Bus or train route cooperative system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU, SIYUAN: "Research of Adaptive Cooperative Positioning Technology for Vehicular Network", Dissertation submitted to Shanghai Jiao Tong University for the degree of Master, 1 June 2020 (2020-06-01), pages 1 - 100, XP093017291, [retrieved on 20230124] *
TENG YAN-FEI; HU BIN; LIU ZHI-WEI; HUANG JIAN; GUAN ZHI-HONG: "Adaptive neural network control for quadrotor unmanned aerial vehicles", 2017 11TH ASIAN CONTROL CONFERENCE (ASCC), IEEE, 17 December 2017 (2017-12-17), pages 988 - 992, XP033314485, DOI: 10.1109/ASCC.2017.8287305 *

Also Published As

Publication number Publication date
CN117529935A (en) 2024-02-06

Similar Documents

Publication Publication Date Title
US11113966B2 (en) Vehicular information systems and methods
US10317901B2 (en) Low-level sensor fusion
US10802450B2 (en) Sensor event detection and fusion
US10678240B2 (en) Sensor modification based on an annotated environmental model
US11067996B2 (en) Event-driven region of interest management
WO2022184127A1 (en) Simulation method and apparatus for vehicle and sensor
WO2021077287A1 (en) Detection method, detection device, and storage medium
US20220178718A1 (en) Sensor fusion for dynamic mapping
US9669838B2 (en) Method and system for information use
KR20210077617A (en) AUTOMATED OBJECT ANNOTATION USING FUSED CAMERA/LiDAR DATA POINTS
US20210065733A1 (en) Audio data augmentation for machine learning object classification
US20220215197A1 (en) Data processing method and apparatus, chip system, and medium
CN113129382A (en) Method and device for determining coordinate conversion parameters
US20210063165A1 (en) Adaptive map-matching-based vehicle localization
US20210398425A1 (en) Vehicular information systems and methods
WO2024078265A1 (en) Multi-layer high-precision map generation method and apparatus
WO2022266863A1 (en) Vehicle-road coordination communication method, data processing method, detection system and fusion apparatus
CN109286785B (en) Environment information sharing system and method
US20230103178A1 (en) Systems and methods for onboard analysis of sensor data for sensor fusion
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
US20220198714A1 (en) Camera to camera calibration
EP3952359B1 (en) Methods and systems for enhancing vehicle data access capabilities
US11682140B1 (en) Methods and apparatus for calibrating stereo cameras using a time-of-flight sensor
EP4357944A1 (en) Identification of unknown traffic objects
US20240135719A1 (en) Identification of unknown traffic objects

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21946365

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE