CN112817301A - Fusion method, device and system of multi-sensor data - Google Patents

Fusion method, device and system of multi-sensor data

Info

Publication number
CN112817301A
CN112817301A
Authority
CN
China
Prior art keywords
sensor data
current
time
data
storage space
Prior art date
Legal status
Granted
Application number
CN201911041828.0A
Other languages
Chinese (zh)
Other versions
CN112817301B (en)
Inventor
管守奎
李元
胡佳兴
段睿
韩永根
穆北鹏
Current Assignee
Beijing Momenta Technology Co Ltd
Original Assignee
Beijing Chusudu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Chusudu Technology Co Ltd
Priority to CN201911041828.0A
Publication of CN112817301A
Application granted
Publication of CN112817301B
Status: Active

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Arrangements For Transmission Of Measured Signals (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the invention disclose a method, device, and system for fusing multi-sensor data. In the method, after determining that the current designated sensor data collected by a designated sensor has been obtained, the processor obtains a first time corresponding to the current designated sensor data; obtains, from a preset storage space, target sensor data whose corresponding acquisition times are before the first time and after a second time; filters the target sensor data with a current filter in a preset data processing order to obtain a filtering fusion result corresponding to the current designated sensor data; and determines the current pose information of the target vehicle by using a current pose predictor, the filtering fusion result, the current designated sensor data, and the designated sensor data between the current acquisition time and the first time. This ensures that the vehicle positioning result obtained during real-vehicle positioning is consistent with the vehicle positioning result obtained in an offline platform test.

Description

Fusion method, device and system of multi-sensor data
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a method, a device and a system for fusing multi-sensor data.
Background
In unmanned-driving technology, vehicle positioning is essential. In the related art, vehicle positioning typically fuses sensor data collected by multiple sensors arranged in a target vehicle, such as an image acquisition unit, an inertial measurement unit (IMU), a wheel speed sensor, and an inertial navigation unit, to obtain a vehicle positioning result for the target vehicle.
During real-vehicle positioning, problems in the positioning algorithm inevitably arise and cause deviations in the vehicle positioning result. To verify the feasibility of the algorithm and to ensure the safety of vehicles and drivers, such problems must be reproduced on an offline platform; correspondingly, the deviated vehicle positioning result from the real-vehicle run must also be reproducible on the offline platform. However, the real-vehicle platform and the offline platform differ in computing power, so the fusion speed during multi-sensor data fusion may differ between them, and the vehicle positioning results then fail to match. How to provide a multi-sensor data fusion method that guarantees consistency between the vehicle positioning result from a real-vehicle run and the result from an offline platform test has therefore become an urgent problem to be solved.
Disclosure of Invention
The invention provides a method, a device and a system for fusing multi-sensor data, which are used for ensuring the consistency of a vehicle positioning result in an actual vehicle positioning process and a vehicle positioning result in an offline platform test. The specific technical scheme is as follows:
In a first aspect, an embodiment of the present invention provides a method for fusing multi-sensor data, applied to a processor of a multi-sensor data fusion system, where the system further includes at least two types of sensors and a preset storage space; each sensor collects its corresponding sensor data, and all sensors are arranged in the same target vehicle; the preset storage space is configured to store the sensor data collected by the at least two types of sensors. The method comprises the following steps:
after determining that the current designated sensor data collected by a designated sensor has been obtained, obtaining a first time corresponding to the current designated sensor data, wherein the difference between the current acquisition time corresponding to the current designated sensor data and the first time is a preset time difference;
obtaining, from the preset storage space, target sensor data whose corresponding acquisition times are before the first time and after a second time, wherein the difference between the acquisition time corresponding to the designated sensor data preceding the current designated sensor data and the second time is the preset time difference;
filtering the target sensor data by using a current filter according to a preset data processing sequence to obtain a filtering fusion result corresponding to the current specified sensor data;
and determining the current pose information of the target vehicle corresponding to the current specified sensor data by using a current pose predictor, a filtering fusion result corresponding to the current specified sensor data, the current specified sensor data and the specified sensor data between the current acquisition time and the first time.
Optionally, the step of obtaining the target sensor data of which the corresponding acquisition time is before the first time and after the second time from the preset storage space includes:
determining whether the preset storage space stores sensor data whose corresponding acquisition times are before the first time;
and if so, obtaining, from the preset storage space, target sensor data whose corresponding acquisition times are before the first time and after the second time.
Optionally, the processor is a processor disposed in a vehicle-mounted platform of the target vehicle;
after the step of obtaining the first time corresponding to the currently specified sensor data, the method further includes:
and storing the first time in the preset storage space in correspondence with the current designated sensor data, the first time serving as the fusion time of the filtering fusion result corresponding to the current designated sensor data.
Optionally, the processor is a processor disposed on the off-board device;
the step of obtaining a first time corresponding to the currently specified sensor data includes:
and obtaining a first moment corresponding to the current specified sensor data from the preset storage space.
Optionally, the processor is a processor disposed in a vehicle-mounted platform of the target vehicle;
the step of obtaining a first time corresponding to the currently specified sensor data includes:
obtaining a preset time difference;
and calculating the time corresponding to the difference value between the current acquisition time corresponding to the current designated sensor data and the preset time difference, and taking the time as the first time corresponding to the current designated sensor data.
In a second aspect, an embodiment of the present invention provides a device for fusing multi-sensor data, applied to a processor of a multi-sensor data fusion system, where the system further includes at least two types of sensors and a preset storage space; each sensor collects its corresponding sensor data, and all sensors are arranged in the same target vehicle; the preset storage space is configured to store the sensor data collected by the at least two types of sensors. The device comprises:
the device comprises a first obtaining module, a second obtaining module and a third obtaining module, wherein the first obtaining module is configured to obtain a first moment corresponding to current specified sensor data after determining that the current specified sensor data collected by a specified sensor is obtained, and a difference value between the current collecting moment corresponding to the current specified sensor data and the first moment is a preset time difference;
a second obtaining module configured to obtain, from the preset storage space, target sensor data of which the corresponding acquisition time is before a first time and after a second time, where a difference between the acquisition time corresponding to a previous designated sensor data of the currently designated sensor data and the second time is the preset time difference;
the filtering module is configured to perform filtering processing on the target sensor data by using a current filter according to a preset data processing sequence to obtain a filtering fusion result corresponding to the current specified sensor data;
a determining module, configured to determine current pose information of the target vehicle corresponding to the current designated sensor data by using a current pose predictor, the filtering fusion result corresponding to the current designated sensor data, the current designated sensor data, and the designated sensor data between the current acquisition time and the first time.
Optionally, the second obtaining module is specifically configured to determine whether the preset storage space stores sensor data whose corresponding acquisition times are before the first time;
and if so, to obtain, from the preset storage space, target sensor data whose corresponding acquisition times are before the first time and after the second time.
Optionally, the processor is a processor disposed in a vehicle-mounted platform of the target vehicle;
the device further comprises:
and a storage module, configured to, after the target sensor data is filtered by the current filter to obtain the filtering fusion result, store the first time in the preset storage space in correspondence with the current designated sensor data, the first time serving as the fusion time of the filtering fusion result corresponding to the current designated sensor data.
Optionally, the processor is a processor disposed on the off-board device;
the first obtaining module is specifically configured to obtain a first time corresponding to the currently specified sensor data from the preset storage space.
Optionally, the processor is a processor disposed in a vehicle-mounted platform of the target vehicle;
the first obtaining module is specifically configured to obtain a preset time difference;
and calculating the time corresponding to the difference value between the current acquisition time corresponding to the current designated sensor data and the preset time difference, and taking the time as the first time corresponding to the current designated sensor data.
In a third aspect, an embodiment of the present invention provides a system for fusing multi-sensor data, where the system includes a processor, at least two types of sensors, and a preset storage space; each sensor collects its corresponding sensor data, and all sensors are arranged in the same target vehicle; the preset storage space is configured to store the sensor data collected by the at least two types of sensors. The processor is configured to obtain a first time corresponding to the current designated sensor data after determining that the current designated sensor data collected by a designated sensor has been obtained, wherein the difference between the current acquisition time corresponding to the current designated sensor data and the first time is a preset time difference;
obtain, from the preset storage space, target sensor data whose corresponding acquisition times are before the first time and after a second time, wherein the difference between the acquisition time corresponding to the designated sensor data preceding the current designated sensor data and the second time is the preset time difference;
filtering the target sensor data by using a current filter according to a preset data processing sequence to obtain a filtering fusion result corresponding to the current specified sensor data;
and determining the current pose information of the target vehicle corresponding to the current specified sensor data by using a current pose predictor, a filtering fusion result corresponding to the current specified sensor data, the current specified sensor data and the specified sensor data between the current acquisition time and the first time.
Optionally, the processor is specifically configured to determine whether the preset storage space stores sensor data whose corresponding acquisition times are before the first time;
and if so, to obtain, from the preset storage space, target sensor data whose corresponding acquisition times are before the first time and after the second time.
Optionally, the processor is a processor disposed in a vehicle-mounted platform of the target vehicle;
the processor is further configured to, after obtaining the first time corresponding to the current designated sensor data, store the first time in the preset storage space in correspondence with the current designated sensor data, the first time serving as the fusion time of the filtering fusion result corresponding to the current designated sensor data.
Optionally, the processor is a processor disposed on the off-board device;
the processor is specifically configured to obtain a first time corresponding to the currently specified sensor data from the preset storage space.
Optionally, the processor is a processor disposed in a vehicle-mounted platform of the target vehicle;
the processor is specifically configured to obtain a preset time difference;
and calculating the time corresponding to the difference value between the current acquisition time corresponding to the current designated sensor data and the preset time difference, and taking the time as the first time corresponding to the current designated sensor data.
As can be seen from the above, the method, device, and system for fusing multi-sensor data provided in the embodiments of the present invention are applied to a processor of a multi-sensor data fusion system; the system further includes at least two types of sensors and a preset storage space, where each sensor collects its corresponding sensor data, all sensors are arranged in the same target vehicle, and the preset storage space stores the sensor data collected by the at least two types of sensors. After determining that the current designated sensor data collected by a designated sensor has been obtained, the processor obtains a first time corresponding to the current designated sensor data, where the difference between the first time and the current acquisition time of the current designated sensor data is a preset time difference; obtains, from the preset storage space, target sensor data whose corresponding acquisition times are before the first time and after a second time, where the difference between the second time and the acquisition time of the preceding designated sensor data is the preset time difference; filters the target sensor data with a current filter in a preset data processing order to obtain a filtering fusion result corresponding to the current designated sensor data; and determines the current pose information of the target vehicle by using a current pose predictor, the filtering fusion result, the current designated sensor data, and the designated sensor data between the current acquisition time and the first time.
By applying the embodiment of the invention, the trigger condition of the multi-sensor data fusion process can be constrained: the fusion process is triggered only after the current designated sensor data collected by the designated sensor is obtained. In addition, when the current filter processes the sensor data, the data are processed in the preset data processing order, which guarantees both the orderliness of data processing and a fixed correspondence between the times and the positioning-result information of the target vehicle in the filtering fusion result corresponding to the current designated sensor data. By constraining the output time of the filter's fusion result, namely outputting the filtering fusion result corresponding to the current designated sensor data, the filters produce consistent fusion results when the fusion process runs on platforms with different computing power, realizing a multi-sensor data fusion process that is independent of platform efficiency.
Before the sensor data are filtered by the current filter, the first time corresponding to the current designated sensor data is first determined, the target sensor data whose corresponding acquisition times are before the first time and after the second time are obtained from the preset storage space, and the current filter then filters the target sensor data. As a result, for the same designated sensor data, the filtering fusion results output by the filter are identical on platforms with different computing power, and so are the inputs to the current pose predictor and to every subsequent pose predictor. This realizes consistency between the vehicle positioning result in the real-vehicle positioning process and the vehicle positioning result in the offline platform test, that is, consistent positioning results across platforms with different computing power. It solves the problem that an algorithm fault occurring during a real-vehicle run cannot be reproduced on the offline (off-board) platform, and greatly improves the efficiency of reproducing and solving such problems. Of course, practicing any one product or method of the invention does not require achieving all of the advantages described above at the same time.
The innovation points of the embodiment of the invention comprise:
1. The trigger condition of the multi-sensor data fusion process can be constrained: the fusion process is triggered only upon obtaining the current designated sensor data collected by the designated sensor. In addition, when the current filter processes the sensor data, the data are processed in the preset data processing order, which guarantees both the orderliness of data processing and a fixed correspondence between the times and the positioning-result information of the target vehicle in the filtering fusion result corresponding to the current designated sensor data. By constraining the output time of the filter's fusion result, namely outputting the filtering fusion result corresponding to the current designated sensor data, the filters produce consistent fusion results when the fusion process runs on platforms with different computing power, realizing a multi-sensor data fusion process that is independent of platform efficiency.
Before the sensor data are filtered by the current filter, the first time corresponding to the current designated sensor data is first determined, the target sensor data whose corresponding acquisition times are before the first time and after the second time are obtained from the preset storage space, and the current filter then filters the target sensor data. As a result, for the same designated sensor data, the filtering fusion results output by the filter are identical on platforms with different computing power, and so are the inputs to the current pose predictor and to every subsequent pose predictor. This realizes consistency between the vehicle positioning result in the real-vehicle positioning process and the vehicle positioning result in the offline platform test, that is, consistent positioning results across platforms with different computing power. It solves the problem that an algorithm fault occurring during a real-vehicle run cannot be reproduced on the offline (off-board) platform, and greatly improves the efficiency of reproducing and solving such problems.
2. When the processor is arranged in the on-board platform of the target vehicle, after the first time corresponding to the current designated sensor data is obtained, the first time is stored in the preset storage space in correspondence with the current designated sensor data. This ensures that when the multi-sensor data fusion process is later executed for the same designated sensor data on the offline platform, the same sensor data are selected from the preset storage space as were used on the on-board platform, providing a basis for ensuring consistency between the vehicle positioning result in the real-vehicle positioning process and the vehicle positioning result in the offline platform test.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are merely some embodiments of the invention; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a method for fusing multi-sensor data according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a multi-sensor data fusion apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a multi-sensor data fusion system according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The invention provides a method, a device and a system for fusing multi-sensor data, which are used for ensuring the consistency of a vehicle positioning result in an actual vehicle positioning process and a vehicle positioning result in an offline platform test. The following provides a detailed description of embodiments of the invention.
Fig. 1 is a schematic flow chart of a method for fusing multi-sensor data according to an embodiment of the present invention. The method is applied to a processor of a multi-sensor data fusion system; the system may further include at least two types of sensors and a preset storage space, where each sensor collects its corresponding sensor data, all sensors are arranged in the same target vehicle, and the preset storage space stores the sensor data collected by the at least two types of sensors. The method may include the following steps:
s101: after the current appointed sensor data acquired by the appointed sensor is determined to be obtained, a first moment corresponding to the current appointed sensor data is obtained.
The difference between the current acquisition time corresponding to the current designated sensor data and the first time is a preset time difference.
In one implementation, the processor may be disposed in an on-board platform of the target vehicle or in an off-board platform; the off-board platform may be an electronic device such as a desktop computer, a notebook computer, or an all-in-one machine. The processor is in data communication with the at least two types of sensors disposed in the target vehicle and can obtain the data they collect.
In one implementation, the at least two types of sensors may include, but are not limited to, at least two of: an IMU (inertial measurement unit), a wheel speed sensor, an inertial navigation unit, and an image acquisition unit. The inertial navigation unit may be a GNSS (Global Navigation Satellite System) positioning unit or a GPS (Global Positioning System) positioning unit. The image acquisition unit may be a camera or the like.
In the embodiment of the invention, one sensor may be designated in advance, automatically or manually, from the at least two types of sensors as the designated sensor; after determining that sensor data collected by the designated sensor has been obtained, the processor immediately triggers the multi-sensor data fusion process. The sensor data collected by the designated sensor is referred to as designated sensor data, and the current designated sensor data may be any designated sensor data currently awaiting processing.
In this step, after determining that the current designated sensor data collected by the designated sensor has been obtained, the processor may first obtain the first time corresponding to the current designated sensor data and then execute the subsequent fusion process. The difference between the current acquisition time corresponding to the current designated sensor data and the first time is a preset time difference. The preset time difference is determined by the combination of the at least two types of sensors in the multi-sensor data fusion system, and its specific value follows a preset principle: it covers the interval from the moment a frame of data is generated by one of the sensors to the moment that frame is stored in the preset storage space, so that after waiting for the preset time difference, the data of the slowest-transmitting sensor has reached the preset storage space. Accordingly, the preset time difference may be greater than or equal to the transmission delay of a target sensor among the at least two types of sensors, where the target sensor is the sensor that takes the longest to transmit its collected data to the preset storage space.
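Under the principle just described, the preset time difference can be chosen from measured worst-case sensor-to-storage latencies. The sketch below is an illustration only; the latency figures, the mapping layout, and the safety margin are invented for the example.

```python
# Hedged sketch: pick the preset time difference to cover the slowest
# sensor's transmission delay, plus an optional safety margin.

def preset_time_diff(worst_case_latency_ms, margin_ms=0):
    """worst_case_latency_ms: mapping of sensor name to its worst-case
    delay (ms) from data generation to arrival in the storage space."""
    # The target sensor is the one with the longest transmission delay;
    # the preset time difference must be at least that delay.
    return max(worst_case_latency_ms.values()) + margin_ms
```

With a slow camera pipeline dominating, the camera's latency sets the lower bound for the preset time difference.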
Wherein the preset storage space may be provided through a buffer.
The designated sensor may be any sensor in the multi-sensor data fusion system. In one implementation, considering that the transmission delay of IMU data is short, the designated sensor may be the IMU, which improves the real-time performance of determining the pose information of the target vehicle to a certain extent.
The first time may be obtained in different ways depending on the platform in which the processor is disposed. In one embodiment of the invention, the processor is disposed on the off-board device, and S101 may include:
and obtaining a first moment corresponding to the currently specified sensor data from a preset storage space.
It is understood that when the processor is disposed on the off-board device, the preset storage space stores the data collected by the at least two types of sensors during the target vehicle's drive, with each piece of data stored in correspondence with its acquisition time. For designated sensor data, the preset storage space additionally stores the fusion time of the corresponding filtering fusion result; for the current designated sensor data, that fusion time is the first time. The fusion time of the filtering fusion result corresponding to each piece of designated sensor data is the latest time represented in the filtering fusion result obtained by the filter when the processor in the on-board platform input that designated sensor data into the pose predictor.
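For the off-board case, the behavior amounts to a lookup: the fusion time recorded on the vehicle is reused rather than recomputed. A minimal sketch follows, with an assumed dictionary layout standing in for the preset storage space.

```python
# Minimal sketch of the off-board case: the storage space (here a dict,
# an assumed layout) maps each designated frame's acquisition time to
# the fusion time (first time) recorded during the real-vehicle run.

def first_time_offline(storage, acq_time_ms):
    # Reusing the recorded fusion time guarantees the offline replay
    # selects exactly the data window the vehicle used.
    return storage[acq_time_ms]
```

This is what makes the offline replay deterministic: the window boundary comes from the recording, not from the offline platform's clock or speed.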
In another embodiment of the present invention, in the case where the processor is a processor provided in an onboard platform of the target vehicle; the S101 may include:
obtaining a preset time difference;
and calculating the time corresponding to the difference between the current acquisition time corresponding to the current designated sensor data and the preset time difference, and taking that time as the first time corresponding to the current designated sensor data.
In the case where the processor is disposed in the vehicle-mounted platform of the target vehicle, the processor needs to calculate, in real time, the first time corresponding to the current designated sensor data from the current acquisition time of the current designated sensor data and the preset time difference, and then execute the subsequent fusion process.
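The on-board computation of the first time described above reduces to a single subtraction. A minimal Python sketch follows; the 0.2 s value and all names are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch of S101 on the vehicle-mounted platform: the first time
# is the current acquisition time minus the preset time difference. The 0.2 s
# value and all names here are assumptions for demonstration only.
PRESET_TIME_DIFF_S = 0.2  # chosen so the slowest sensor's frame has arrived

def first_time_for(current_acquisition_time_s: float,
                   preset_time_diff_s: float = PRESET_TIME_DIFF_S) -> float:
    """Return the first time corresponding to the current designated sensor data."""
    return current_acquisition_time_s - preset_time_diff_s
```

On the off-board platform, by contrast, the same first time is simply read back from the preset storage space rather than computed.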
S102: and obtaining target sensor data of which the corresponding acquisition time is before the first time and after the second time from a preset storage space.
The difference between the acquisition time corresponding to the designated sensor data immediately preceding the current designated sensor data and the second time is the preset time difference.
In this step, after the processor obtains the first time corresponding to the currently specified sensor data, the processor may continue to traverse the preset storage space, and obtain, from the preset storage space, the sensor data of the corresponding acquisition time before the first time and after the second time as the target sensor data.
In one case, the sensor data stored in the preset storage space may include data with a long transmission delay, for example, data whose transmission delay exceeds the preset time difference. To ensure the accuracy of the determined pose information of the target vehicle, the processor may filter out, from the sensor data whose acquisition time is before the first time and after the second time, the data whose transmission delay exceeds the preset time difference, and use the remaining data as the target sensor data. The transmission delay of a piece of sensor data is the difference between the time at which it is transmitted to the preset storage space and its acquisition time.
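The window selection and delay filtering of S102 can be sketched as a single filter over the buffer. This is a hypothetical illustration; the field names and dict-based storage layout are assumptions, not the patent's data structures:

```python
# Hypothetical sketch of S102: keep sensor data acquired inside the window
# (second_time, first_time) and drop frames whose transmission delay
# (arrival minus acquisition) exceeds the preset time difference.
def select_target_data(buffer, first_time, second_time, preset_time_diff):
    return [frame for frame in buffer
            if second_time < frame["acq_time"] < first_time
            and frame["arrival_time"] - frame["acq_time"] <= preset_time_diff]
```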
S103: and performing filtering processing on the target sensor data by using the current filter according to a preset data processing sequence to obtain a filtering fusion result corresponding to the current specified sensor data.
In this step, after the processor obtains the target sensor data corresponding to the currently specified sensor data, the processor may input the target sensor data into the current filter, and perform filtering processing on each type of target sensor data according to a preset data processing sequence by using the current filter to obtain a filtering fusion result corresponding to the currently specified sensor data, and output the filtering fusion result.
In one case, the filter may be a kalman filter, a preset positioning fusion algorithm may be preset in the kalman filter, and the target sensor data may be fused according to a preset data processing sequence through the preset positioning fusion algorithm set in the kalman filter to obtain a filtering fusion result. The filtering fusion result may include a vehicle positioning result, i.e., pose information, of the target vehicle at the first time. The preset positioning fusion algorithm can be any positioning fusion algorithm in related vehicle positioning, and the specific type of the preset positioning fusion algorithm is not limited in the embodiment of the invention.
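The patent leaves the preset positioning fusion algorithm unspecified. As a toy stand-in only, the core of a Kalman measurement update in one dimension looks like this (a sketch of the general technique, not the patent's filter):

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update: the prior state x with variance p
    is fused with a measurement z of variance r. The gain k weights the
    measurement by the relative uncertainties, and the fused variance shrinks."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1.0 - k) * p  # fused state, reduced variance
```

In the real system the state would be the full vehicle pose and the measurements the various target sensor frames, processed in the preset data processing sequence.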
By setting the data processing sequence, processors disposed on different platforms process the target sensor data corresponding to the current designated sensor data in the same order and through the same procedure, which guarantees that filters disposed in different platforms produce consistent processing results for that target sensor data and realizes a multi-sensor data fusion process independent of platform performance. In addition, considering that the pose predictor needs the latest filtering fusion result output by the filter each time it performs a prediction, the output time of the filter is fixed in the embodiment of the invention: the filtering fusion result corresponding to the current designated sensor data is output as soon as it is obtained. This ensures that, when processors disposed on different platforms perform pose prediction for the same current designated sensor data, the filtering fusion result fed to the current pose predictor is the same.
To ensure that processors disposed on different platforms perform the same fusion process on the sensor data acquired by the at least two types of sensors during the target driving of the target vehicle, so that the target vehicle positioning result can be reproduced, the filters corresponding to the processors disposed on the different platforms are identical, and so are the corresponding pose predictors. Reproduction here means that the real-time vehicle positioning result, i.e., the pose information determined in real time by the processor disposed in the vehicle-mounted platform during target driving, is consistent with the vehicle positioning result later reproduced by the processor disposed in the non-vehicle-mounted platform.
S104: and determining the current pose information of the target vehicle corresponding to the current specified sensor data by using the current pose predictor, the filtering fusion result corresponding to the current specified sensor data, the current specified sensor data and the specified sensor data between the current acquisition time and the first time.
After obtaining the filtering fusion result corresponding to the current designated sensor data, the processor inputs that filtering fusion result, the current designated sensor data, and the designated sensor data between the current acquisition time and the first time into the current pose predictor, and uses them to determine the current pose information of the target vehicle corresponding to the current designated sensor data. The current pose information may include the position information and orientation information of the target vehicle at the acquisition time of the current designated sensor data, that is, at the current acquisition time.
The pose predictor can be preset with a preset pose prediction algorithm, and the preset pose prediction algorithm set by the pose predictor, the filtering fusion result corresponding to the current appointed sensor data, the current appointed sensor data and the appointed sensor data between the current acquisition time and the first time can be used for determining the current pose information of the target vehicle corresponding to the current appointed sensor data. The preset pose prediction algorithm can be any pose prediction algorithm in related vehicle positioning, and the specific type of the preset pose prediction algorithm is not limited in the embodiment of the invention. For determining the current pose information of the target vehicle corresponding to the current designated sensor data by using a preset pose prediction algorithm in the current pose predictor, reference may be made to related technologies, which are not described herein again.
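The pose predictor bridges the gap between the first time (where the filtering fusion result is valid) and the current acquisition time by integrating the buffered designated sensor data. A one-dimensional dead-reckoning sketch under assumed names illustrates the data flow only; the actual preset pose prediction algorithm works in full pose space and is not specified by the patent:

```python
def predict_position(fused_pos, fused_vel, imu_samples):
    """Dead-reckon (1-D toy model) from the filtering fusion result at the
    first time up to the current acquisition time, integrating buffered IMU
    samples given as (dt, acceleration) pairs."""
    pos, vel = fused_pos, fused_vel
    for dt, acc in imu_samples:
        pos += vel * dt + 0.5 * acc * dt * dt  # constant-acceleration step
        vel += acc * dt
    return pos, vel
```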
In one implementation, after determining the current pose information of the target vehicle, the processor may output the current pose information to the corresponding pose use application.
By applying the embodiment of the invention, the triggering condition of the multi-sensor data fusion process can be constrained: the fusion process is triggered after the current designated sensor data acquired by the designated sensor is obtained. When the current filter filters the sensor data, the data are processed according to the preset data processing sequence, which guarantees the orderliness of data processing and fixes the time to which the target vehicle positioning result in the filtering fusion result corresponds. By constraining the output time of the filtering fusion result of the filter, namely outputting the filtering fusion result corresponding to the current designated sensor data as soon as it is obtained, the consistency of the filtering fusion results of the filter is guaranteed when the fusion process of this embodiment runs on platforms with different computing power.
Before the sensor data is filtered by the current filter, the first time corresponding to the current designated sensor data is determined, the target sensor data whose acquisition time is before the first time and after the second time is obtained from the preset storage space, and the current filter then filters the target sensor data. Consequently, the filtering fusion results output by the filter for the current designated sensor data are the same on platforms with different computing power, the inputs to the current pose predictor for the same designated sensor data are the same on those platforms, and the vehicle positioning result of the real vehicle positioning process is consistent with the vehicle positioning result of the offline platform test. This solves the problem that an algorithm issue occurring during real vehicle driving cannot be reproduced on the offline, i.e., non-vehicle-mounted, platform, and greatly improves the efficiency of problem reproduction and resolution.
In another embodiment of the present invention, the S102 may include:
judging whether the preset storage space stores sensor data of the corresponding acquisition time before the first time;
and if the preset storage space is judged to store the sensor data of which the corresponding acquisition time is before the first time, acquiring target sensor data of which the corresponding acquisition time is before the first time and after the second time from the preset storage space.
In this embodiment, in the case where the processor is disposed in the vehicle-mounted platform of the target vehicle and obtains, in real time, the sensor data acquired by the at least two types of sensors disposed on the target vehicle, the processor may, after obtaining the first time corresponding to the current designated sensor data, first judge whether the preset storage space stores sensor data whose acquisition time is before the first time. If so, it obtains, from the preset storage space, the target sensor data whose acquisition time is before the first time and after the second time. Otherwise, the fusion process for the current designated sensor data may be ended, and the processor may continue to monitor whether new current designated sensor data is obtained.
In another embodiment, in a case that the processor is a processor disposed in the off-board platform, the processor may also first determine whether sensor data corresponding to the acquisition time before the first time is stored in the preset storage space, and if so, obtain target sensor data corresponding to the acquisition time before the first time and after the second time from the preset storage space; otherwise, the fusion process for the currently specified sensor data may be ended; subsequent processors may continue to monitor whether new currently specified sensor data is obtained.
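The guard described in this embodiment, on either platform, can be sketched as follows. Field names and the dict-based buffer are illustrative assumptions:

```python
def try_fuse(buffer, first_time, second_time):
    """Guard from the embodiment above: if no sample was acquired before the
    first time, end this round of fusion (return None) and keep monitoring;
    otherwise return the window of target sensor data."""
    if not any(f["acq_time"] < first_time for f in buffer):
        return None  # end fusion for this designated sensor frame
    return [f for f in buffer if second_time < f["acq_time"] < first_time]
```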
In another embodiment of the present invention, the processor is disposed in a processor of an onboard platform of the target vehicle; after the S101, the method may further include:
and storing the first moment in a preset storage space corresponding to the current specified sensor data as the fusion moment of the fusion filtering fusion result corresponding to the current specified sensor data.
In the case where the processor is disposed in the vehicle-mounted platform of the target vehicle, after the first time corresponding to the current designated sensor data is calculated, it can be stored in the preset storage space as the fusion time of the filtering fusion result corresponding to the current designated sensor data. When the vehicle positioning result of the target vehicle during target driving is later reproduced offline, the target sensor data corresponding to the current designated sensor data can then be determined based on the first time, so that the subsequent fusion process can be executed and the offline reproduction succeeds, that is, the vehicle positioning result obtained by fusion on the vehicle-mounted platform, i.e., the pose information of the target vehicle, is consistent with the vehicle positioning result obtained by fusion on the non-vehicle-mounted platform.
In another embodiment of the invention, the designated sensor is an IMU (inertial measurement unit), and the current designated sensor data is current IMU data; the processor is disposed in the vehicle-mounted platform of the target vehicle; before S101, the method may further include:
a process of obtaining current IMU data, wherein the process may include:
obtaining initial IMU data acquired by an IMU;
converting the initial IMU data into data in a first specified format to obtain intermediate IMU data corresponding to the initial IMU data;
determining the current IMU data corresponding to the round (whole-point) timestamp by using the intermediate IMU data corresponding to the previous IMU data acquired by the IMU and the intermediate IMU data corresponding to the initial IMU data;
storing the current IMU data and the corresponding acquisition time in a preset storage space;
after the S104, the method may further include:
determining a map area corresponding to the current pose information from a target map based on the current pose information, wherein the map area is used as a map area corresponding to the current designated sensor data, and the target map comprises map data;
and storing the map area corresponding to the currently specified sensor data converted into the second specified format and the corresponding acquisition time to a preset storage space.
The IMU may include: a gyroscope for acquiring an angular velocity of the target vehicle, and an acceleration sensor for acquiring an acceleration of the target vehicle.
In this embodiment, the designated sensor is an IMU, and correspondingly the current designated sensor data acquired by the designated sensor is current IMU data. Because IMUs of different models output IMU data in different formats, the processor, to facilitate the subsequent multi-sensor data fusion process, first converts the obtained IMU data into the uniform format used in the multi-sensor data fusion system before executing the subsequent process. Correspondingly, the processor obtains, in real time, the initial IMU data acquired by the IMU and processes it into IMU data in a format convenient for the subsequent process, namely the current IMU data: after obtaining the initial IMU data, the processor converts it into data in the first specified format to obtain the intermediate IMU data corresponding to the initial IMU data; then, to facilitate subsequent fusion, it performs whole-point alignment on the intermediate IMU data, that is, it determines the current IMU data corresponding to the round timestamp by using the intermediate IMU data corresponding to the previous IMU data acquired by the IMU and the intermediate IMU data corresponding to the initial IMU data; finally it stores the current IMU data and its corresponding acquisition time in the preset storage space.
The current IMU data corresponding to the round timestamp may be determined from the intermediate IMU data corresponding to the previous IMU data and the intermediate IMU data corresponding to the initial IMU data by an interpolation algorithm. For example, take a 100 Hz IMU, for which the time interval between every two frames of IMU data is 10 milliseconds. The real acquisition times of two consecutive frames may be 1.1234 seconds and 1.1334 seconds, and for the convenience of subsequent processing a whole-point alignment operation needs to be performed, that is, the IMU data corresponding to 1.120 seconds and 1.130 seconds is calculated. Correspondingly, the intermediate IMU data of the frame acquired at 1.1234 seconds and the intermediate IMU data of the frame acquired at 1.1334 seconds can be interpolated to obtain the IMU data corresponding to 1.130 seconds.
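The whole-point alignment in the example above is a linear interpolation between two consecutive readings. A minimal sketch (function name assumed; the patent does not prescribe linear interpolation, only an interpolation algorithm):

```python
def interp_to_round_time(t0, v0, t1, v1, t_round):
    """Linearly interpolate two consecutive IMU readings (t0, v0) and (t1, v1)
    onto a round grid timestamp t_round, as in the 100 Hz example above
    (readings at 1.1234 s and 1.1334 s aligned to 1.130 s)."""
    w = (t_round - t0) / (t1 - t0)  # fractional position of t_round in [t0, t1]
    return v0 + w * (v1 - v0)
```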
The first designated format may be any format in the related art that is convenient for the subsequent fusion process, and the embodiment of the present invention does not limit the specific type of the first designated format. For example: the first designated format may include an estimated speed and an estimated pose of the target vehicle at the current acquisition time, which are calculated through the initial IMU data and the speed and pose information of the target vehicle at the time before the current acquisition time; or may include a speed variation amount and a pose information variation amount of the target vehicle between the current acquisition time and a time immediately before the current acquisition time. Wherein the current IMU data in the first specified format may be represented as an ImuFrame data frame.
Subsequently, in the embodiment of the present invention, the multi-sensor data fusion system further includes a target map, which is a map corresponding to the driving scene of the target vehicle and contains map data. After the processor determines the current pose information of the target vehicle corresponding to the current designated sensor data, it can determine, from the target map and based on the current pose information, the map area corresponding to the current pose information, as the map area corresponding to the current designated sensor data; it then converts that map area into the second specified format and stores the converted map area and its corresponding acquisition time in the preset storage space. The acquisition time corresponding to the map area may be the acquisition time of the current designated sensor data. In one case, the target map may be a high-precision map. The map area in the second specified format may be represented as an HdmapGeometryFrame data frame. The second specified format may be any map-area format in the related art that is convenient for the subsequent fusion process, and the embodiment of the present invention does not limit it.
The area within the preset range with the current pose information as the center in the target map can be used as the map area corresponding to the current pose information.
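Cutting out the map area around the current pose can be sketched as a simple range query. The circular range is an assumption for illustration; the patent says only "preset range":

```python
def map_area_around(map_points, cx, cy, radius):
    """Return the map data within a preset range centred on the current pose
    (cx, cy). A circular range is assumed here purely for illustration."""
    return [(x, y) for (x, y) in map_points
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2]
```

A real high-precision map would be indexed spatially (e.g. by tiles) rather than scanned linearly.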
In this embodiment, the designated sensor is set to the IMU; owing to the low data delay of the IMU, the real-time performance of the current pose information of the target vehicle is improved to a certain extent. After the initial IMU data is obtained, the current IMU data corresponding to the round timestamp is determined by using the intermediate IMU data, in the first specified format, corresponding to the previous IMU data acquired by the IMU and the intermediate IMU data, in the first specified format, corresponding to the initial IMU data, so that additional interpolation work is avoided when the positioning result is later evaluated for precision against other high-precision integrated navigation equipment.
In another embodiment of the present invention, if at least two types of sensors include: the wheel speed sensor, the sensor data that two kinds of sensors at least gathered include: spare wheel speed data collected by a wheel speed sensor; in the case where the processor is a processor disposed within an onboard platform of the target vehicle, the method may further include:
a process of obtaining backup wheel speed data collected by a wheel speed sensor, wherein the process may include:
obtaining initial wheel speed data collected by a wheel speed sensor;
converting the initial wheel speed data into data in a third specified format to obtain standby wheel speed data;
and storing the standby wheel speed data and the corresponding acquisition time to a preset storage space.
In this embodiment, the at least two types of sensors may include a wheel speed sensor, which may acquire the modulus values of the wheel speeds of the four wheels of the target vehicle. Wheel speed sensors of different models collect data in different formats; for example, the collected data may be the angular velocity of the wheel or the linear velocity of the wheel. To facilitate the subsequent fusion process, the obtained initial wheel speed data collected by the wheel speed sensor is converted into data in the third specified format to obtain the standby wheel speed data, and the standby wheel speed data and its corresponding acquisition time are stored in the preset storage space. The standby wheel speed data in the third specified format may be represented as an OdoFrame data frame. The third specified format may be any wheel-speed-data format in the related art that is convenient for the subsequent fusion process, and the embodiment of the present invention does not limit it.
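The format normalisation described here can be sketched as follows. The wheel radius, unit strings, and function name are all assumptions for illustration; the patent does not define the third specified format:

```python
WHEEL_RADIUS_M = 0.32  # assumed wheel radius; vehicle-specific in practice

def to_standby_wheel_speed(raw_value, unit):
    """Normalise a raw wheel-speed reading into one common linear-speed format
    (m/s): some sensor models report angular velocity (rad/s), others report
    linear velocity directly, mirroring the two cases named in the text."""
    if unit == "rad_per_s":
        return raw_value * WHEEL_RADIUS_M  # v = omega * r
    return raw_value  # already m/s
```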
In another embodiment of the present invention, if at least two types of sensors include: the inertial navigation unit, the sensor data that two kinds of sensors at least gathered include: standby inertial navigation data collected by an inertial navigation unit; in the case where the processor is a processor disposed within an onboard platform of the target vehicle, the method may further include:
a process of obtaining standby inertial navigation data acquired by an inertial navigation unit, wherein the process may comprise:
acquiring initial inertial navigation data acquired by an inertial navigation unit;
converting the initial inertial navigation data into data in a fourth specified format to obtain standby inertial navigation data;
and storing the standby inertial navigation data and the corresponding acquisition time to a preset storage space.
In view of different formats of inertial navigation data acquired by different inertial navigation units, after the initial inertial navigation data acquired by the inertial navigation unit is obtained, the processor firstly converts the initial inertial navigation data into data of a fourth specified format to obtain standby inertial navigation data, and then stores the standby inertial navigation data and the corresponding acquisition time thereof into a preset storage space. For example, in one case, the inertial navigation unit is a GNSS, the initial inertial navigation data acquired by the GNSS may include position information and speed information, and the format of the initial inertial navigation data generally includes NMEA statements or binary statements with a higher compression rate. The alternate inertial navigation data in the fourth specified format may be represented as a GnssFrame data frame.
In another embodiment of the present invention, if at least two types of sensors include: the image acquisition unit, the sensor data that two kinds of sensors at least gathered include: standby image data acquired by an image acquisition unit; in the case where the processor is a processor disposed within an onboard platform of the target vehicle, the method may further include:
a process of obtaining the standby image data acquired by the image acquisition unit, wherein the process may include:
acquiring an image acquired by an image acquisition unit;
detecting the image by using a pre-trained target detection model to obtain perception data corresponding to the image;
converting the perception data corresponding to the image into data in a fifth specified format to obtain intermediate perception data;
storing the intermediate sensing data and the corresponding acquisition time to a preset storage space;
determining map data matched with the intermediate perception data from a target map based on the intermediate perception data and pose information of the target vehicle corresponding to the image, wherein the target map comprises the map data;
storing the map data matched with the intermediate sensing data converted into the sixth specified format to a preset storage space;
extracting characteristic points of the image, and determining characteristic point information in the image;
coding the feature point information in the image to obtain an image containing the feature point information and a coding result;
and storing the image which is converted into the seventh appointed format and contains the characteristic point information and the coding result, and the corresponding acquisition time to a preset storage space.
In this embodiment, the at least two types of sensors may include an image acquisition unit, and correspondingly the sensor data acquired by the at least two types of sensors includes the standby image data acquired by the image acquisition unit. After the processor obtains an image acquired by the image acquisition unit, on the one hand, the image can be converted into a preset image format and input into a pre-trained target detection model, which detects the image to obtain the perception data corresponding to the image. The pre-trained target detection model is a neural network model trained on sample images annotated with targets, where the targets may include traffic markers such as lane lines, parking spaces, light poles, and traffic signs, and the annotation information may include the position of each target in the corresponding sample image. For the specific training process, reference may be made to model training processes in the related art, which are not described herein again.
The perception data corresponding to the image may include the position and type of each target contained in the image, such as a traffic marker and its position, for example a lane line, a parking space, a light pole, and/or a traffic sign. The perception data corresponding to the image is converted into data in the fifth specified format to obtain the intermediate perception data, and the intermediate perception data in the fifth specified format and its corresponding acquisition time are stored in the preset storage space. The acquisition time corresponding to the intermediate perception data in the fifth specified format is the acquisition time of the corresponding image. The intermediate perception data in the fifth specified format may be represented as a PerceptionFrame data frame.
Map data matched with each piece of intermediate perception data is determined from the target map based on the intermediate perception data and the pose information of the target vehicle corresponding to the image; the matched map data is converted into the sixth specified format and stored, together with its corresponding acquisition time, in the preset storage space. The acquisition time corresponding to the matched map data in the sixth specified format is the acquisition time of the image. The map data matched with each piece of intermediate perception data in the image in the sixth specified format may be represented as a SemanticMatchFrame data frame, which contains the intermediate perception data and the map data matched with it.
The processor can directly read the pose information of the target vehicle, the corresponding acquisition time of which is closest to the acquisition time of the image, from the preset storage space as the pose information of the target vehicle corresponding to the image.
On the other hand, feature points are extracted from the image with a preset feature point extraction algorithm, and the feature point information in the image is determined; the feature points may include corner points in the image, and correspondingly the feature point information may include the position information of the corner points in the image. The feature point information in the image is then encoded to obtain the image containing the feature point information together with the encoding result, where the encoding ensures that identically encoded feature point information on different images corresponds to the same object in the actual scene. The image containing the feature point information and the encoding result is converted into the seventh specified format and stored, together with the corresponding acquisition time, in the preset storage space. The acquisition time corresponding to the image containing the feature point information and the encoding result in the seventh specified format is the acquisition time of the original image. The image containing the feature point information and the encoding result in the seventh specified format may be represented as a FeatureFrame data frame.
In one implementation, when the processor stores the sensor data acquired by the at least two types of sensors into the preset storage space, the sensor data may be stored in a sequence from front to back or from back to front according to the acquisition time corresponding to each sensor data.
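Keeping the preset storage space ordered by acquisition time makes the time-window lookups of S102 simple range scans. A minimal sketch using an ordered insert (the list-plus-bisect layout is an assumption, not the patent's storage design):

```python
import bisect

def store_frame(buffer, acq_time, frame):
    """Insert (acq_time, frame) into the preset storage space so that entries
    stay ordered front-to-back by acquisition time, as described above."""
    bisect.insort(buffer, (acq_time, frame))
```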
Corresponding to the method embodiment, an embodiment of the present invention provides a multi-sensor data fusion device, which is applied to a processor of a multi-sensor data fusion system, wherein the system further comprises at least two types of sensors and a preset storage space; each sensor is configured to collect corresponding sensor data, and all sensors are arranged in the same target vehicle; the preset storage space is configured to store the sensor data collected by the at least two types of sensors. As shown in fig. 2, the apparatus includes:
a first obtaining module 210, configured to obtain a first time corresponding to current specified sensor data after determining that the current specified sensor data collected by a specified sensor is obtained, where a difference between the current collection time corresponding to the current specified sensor data and the first time is a preset time difference;
a second obtaining module 220, configured to obtain, from the preset storage space, target sensor data whose corresponding acquisition time is before a first time and after a second time, where a difference between the acquisition time corresponding to the previous designated sensor data of the currently designated sensor data and the second time is the preset time difference;
a filtering module 230 configured to perform filtering processing on the target sensor data according to a preset data processing sequence by using a current filter to obtain a filtering fusion result corresponding to the current specified sensor data;
a determining module 240 configured to determine the current pose information of the target vehicle corresponding to the current designated sensor data by using a current pose predictor, the filtering fusion result corresponding to the current designated sensor data, the current designated sensor data, and the designated sensor data between the current acquisition time and the first time.
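Taken together, modules 210 to 240 describe one fusion round per arriving designated-sensor frame. A minimal Python sketch of that round, assuming integer-millisecond timestamps, a half-open `(second_time, first_time]` window, and the hypothetical names `fuse_on_trigger`, `storage`, and `kalman_filter` (the embodiment does not fix the filter type; a Kalman-style filter is one common choice):

```python
def fuse_on_trigger(current_frame, diff_ms, storage, kalman_filter, pose_predictor):
    """One fusion round, triggered by a new designated-sensor frame."""
    t_acq = current_frame.acq_time
    first_time = t_acq - diff_ms                          # first time
    second_time = storage.prev_designated_time - diff_ms  # second time

    # Target sensor data: acquisition time inside the window.
    target = [f for f in storage.frames
              if second_time < f.acq_time <= first_time]
    # Filter in the preset data processing sequence (here: time order),
    # which fixes the time the filtering fusion result corresponds to.
    for frame in sorted(target, key=lambda f: f.acq_time):
        kalman_filter.update(frame)
    fusion_result = kalman_filter.state()

    # The pose predictor consumes the fusion result, the current frame,
    # and the designated-sensor data newer than the first time.
    recent = [f for f in storage.frames
              if first_time < f.acq_time <= t_acq and f.sensor == "designated"]
    return pose_predictor(fusion_result, current_frame, recent)
```

Because the window and processing order depend only on timestamps, not on how fast a platform delivers frames, two platforms replaying the same data produce the same fusion result.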
By applying the embodiment of the present invention, the triggering condition of the multi-sensor data fusion process can be constrained: the fusion process is triggered only after the current specified sensor data collected by the specified sensor is obtained. In addition, when the current filter performs filtering on the sensor data, the data are processed according to the preset data processing sequence, which guarantees the orderliness of the data processing and fixes the time corresponding to the positioning result information of the target vehicle in the filtering fusion result corresponding to the current specified sensor data. By constraining the output time of the filtering fusion result of the filter, that is, by outputting the filtering fusion result corresponding to the currently specified sensor data, the filtering fusion results of the filters remain consistent when the fusion process of this embodiment runs on platforms with different computing power, realizing a multi-sensor data fusion process that is independent of platform efficiency.
Before the sensor data is filtered by the current filter, the first time corresponding to the currently specified sensor data is determined, the target sensor data whose corresponding acquisition time is before the first time and after the second time is obtained from the preset storage space, and the current filter then filters this target sensor data. As a result, on platforms with different computing power, the filter outputs the same filtering fusion result for the same currently specified sensor data, so the inputs of the current pose predictor, and hence its outputs, are also the same for the same specified sensor data. This makes the vehicle positioning result of the real-vehicle positioning process consistent with the vehicle positioning result of the offline platform test, that is, it ensures the consistency of the vehicle positioning results across platforms with different computing power. It solves the problem that an algorithm fault occurring while the real vehicle is running cannot be reproduced on an offline platform, i.e. an off-board platform, and greatly improves the efficiency of reproducing and resolving such problems.
In another embodiment of the present invention, the second obtaining module 220 is specifically configured to determine whether the preset storage space stores sensor data whose corresponding acquisition time is before the first time;
and if the preset storage space is judged to store the sensor data of which the corresponding acquisition time is before the first time, acquiring target sensor data of which the corresponding acquisition time is before the first time and after the second time from the preset storage space.
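The guard performed by the second obtaining module can be sketched as follows. The `None` return convention and the half-open window boundaries are illustrative assumptions; the point is that the window query runs only once data older than the first time already exists:

```python
def fetch_target_data(frames, first_time, second_time):
    """Return target sensor data in (second_time, first_time], or None
    when the store holds nothing older than first_time yet, in which
    case the caller should retry this fusion round later.

    `frames` is a sequence of (acquisition_time, data) pairs.
    """
    if not any(t <= first_time for t, _ in frames):
        return None  # nothing old enough yet; fusion must wait
    return [(t, d) for t, d in frames if second_time < t <= first_time]
```

Without the guard, a fusion round triggered too early would filter an empty or partial window and diverge between platforms.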
In another embodiment of the present invention, the processor is a processor disposed within an onboard platform of the target vehicle;
the device further comprises: and the storage module is configured to store the first moment in the preset storage space corresponding to the currently-specified sensor data after the current filter is used for filtering the target sensor data to obtain a filtering fusion result, and the first moment is used as a fusion moment of a fusion filtering fusion result corresponding to the currently-specified sensor data.
In another embodiment of the present invention, the processor is a processor disposed on an off-board device;
the first obtaining module 210 is specifically configured to obtain a first time corresponding to the currently specified sensor data from the preset storage space.
In another embodiment of the present invention, the processor is a processor disposed within an onboard platform of the target vehicle;
the first obtaining module 210 is specifically configured to obtain a preset time difference;
and calculating the time corresponding to the difference value between the current acquisition time corresponding to the current designated sensor data and the preset time difference, and taking the time as the first time corresponding to the current designated sensor data.
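The two embodiments above differ only in where the first time comes from. A minimal sketch, using integer-millisecond timestamps to sidestep float rounding (the names and the dictionary-based store are illustrative assumptions):

```python
def first_time_onboard(current_acq_time_ms, preset_time_diff_ms):
    """On-board platform: derive the first time directly from the
    current acquisition time and the preset time difference."""
    return current_acq_time_ms - preset_time_diff_ms

def first_time_offboard(stored_fusion_times, frame_id):
    """Off-board replay: read the fusion time that the on-board run
    stored in the preset storage space, so both platforms use the
    identical first time for the same specified sensor data frame."""
    return stored_fusion_times[frame_id]
```

Recording the on-board value and replaying it off-board is what removes any dependence on when each platform happens to process the frame.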
Corresponding to the above method embodiment, an embodiment of the present invention provides a system for fusing multi-sensor data. As shown in fig. 3, the system includes a processor 310, at least two types of sensors 320, and a preset storage space 330; each sensor 320 is configured to collect corresponding sensor data, and all sensors are disposed in the same target vehicle; the preset storage space 330 is configured to store the sensor data acquired by the at least two types of sensors. The processor 310 is configured to obtain, after determining that the current specified sensor data acquired by a specified sensor is obtained, a first time corresponding to the current specified sensor data, where the difference between the current acquisition time corresponding to the current specified sensor data and the first time is a preset time difference;
obtaining target sensor data of which the corresponding acquisition time is before a first time and after a second time from the preset storage space, wherein the difference value between the acquisition time corresponding to the previous appointed sensor data of the current appointed sensor data and the second time is the preset time difference;
filtering the target sensor data by using a current filter according to a preset data processing sequence to obtain a filtering fusion result corresponding to the current specified sensor data;
and determining the current pose information of the target vehicle corresponding to the current specified sensor data by using a current pose predictor, a filtering fusion result corresponding to the current specified sensor data, the current specified sensor data and the specified sensor data between the current acquisition time and the first time.
By applying the embodiment of the present invention, the triggering condition of the multi-sensor data fusion process can be constrained: the fusion process is triggered only after the current specified sensor data collected by the specified sensor is obtained. In addition, when the current filter performs filtering on the sensor data, the data are processed according to the preset data processing sequence, which guarantees the orderliness of the data processing and fixes the time corresponding to the positioning result information of the target vehicle in the filtering fusion result corresponding to the current specified sensor data. By constraining the output time of the filtering fusion result of the filter, that is, by outputting the filtering fusion result corresponding to the currently specified sensor data, the filtering fusion results of the filters remain consistent when the fusion process of this embodiment runs on platforms with different computing power, realizing a multi-sensor data fusion process that is independent of platform efficiency.
Before the sensor data is filtered by the current filter, the first time corresponding to the currently specified sensor data is determined, the target sensor data whose corresponding acquisition time is before the first time and after the second time is obtained from the preset storage space, and the current filter then filters this target sensor data. As a result, on platforms with different computing power, the filter outputs the same filtering fusion result for the same currently specified sensor data, so the inputs of the current pose predictor, and hence its outputs, are also the same for the same specified sensor data. This makes the vehicle positioning result of the real-vehicle positioning process consistent with the vehicle positioning result of the offline platform test, that is, it ensures the consistency of the vehicle positioning results across platforms with different computing power. It solves the problem that an algorithm fault occurring while the real vehicle is running cannot be reproduced on an offline platform, i.e. an off-board platform, and greatly improves the efficiency of reproducing and resolving such problems.
In another embodiment of the present invention, the processor 310 is specifically configured to determine whether the preset storage space stores sensor data whose corresponding acquisition time is before the first time; and if it is determined that the preset storage space stores sensor data whose corresponding acquisition time is before the first time, obtain, from the preset storage space, target sensor data whose corresponding acquisition time is before the first time and after the second time.
In another embodiment of the present invention, the processor 310 is a processor disposed in an onboard platform of the target vehicle; the processor 310 is further configured to, after the first time corresponding to the currently specified sensor data is obtained, store the first time in the preset storage space in correspondence with the currently specified sensor data, as the fusion time of the filtering fusion result corresponding to the currently specified sensor data.
In another embodiment of the present invention, the processor 310 is a processor disposed on an off-board device;
the processor 310 is specifically configured to obtain a first time corresponding to the currently specified sensor data from the preset storage space.
In another embodiment of the present invention, the processor 310 is a processor disposed in an onboard platform of the target vehicle; the processor 310 is specifically configured to obtain a preset time difference;
and calculating the time corresponding to the difference value between the current acquisition time corresponding to the current designated sensor data and the preset time difference, and taking the time as the first time corresponding to the current designated sensor data.
The device and system embodiments correspond to the method embodiment and have the same technical effects; for a detailed description, refer to the method embodiment, which is not repeated here.
Those of ordinary skill in the art will understand that the accompanying drawings are merely schematic diagrams of one embodiment, and that the modules or flows in the drawings are not necessarily required for practicing the present invention.
Those of ordinary skill in the art will understand that the modules in the devices of the embodiments may be distributed in the devices as described in the embodiments, or may be located, with corresponding changes, in one or more devices different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. The fusion method of the multi-sensor data is characterized in that the fusion method is applied to a processor of a fusion system of the multi-sensor data, and the system further comprises at least two types of sensors and a preset storage space; each sensor is configured to collect corresponding sensor data, all arranged in the same target vehicle; the preset storage space is configured to store sensor data acquired by the at least two types of sensors, and the method comprises the following steps:
after determining that current appointed sensor data acquired by an appointed sensor are acquired, acquiring a first moment corresponding to the current appointed sensor data, wherein a difference value between the current acquisition moment corresponding to the current appointed sensor data and the first moment is a preset time difference;
obtaining target sensor data of which the corresponding acquisition time is before a first time and after a second time from the preset storage space, wherein the difference value between the acquisition time corresponding to the previous appointed sensor data of the current appointed sensor data and the second time is the preset time difference;
filtering the target sensor data by using a current filter according to a preset data processing sequence to obtain a filtering fusion result corresponding to the current specified sensor data;
and determining the current pose information of the target vehicle corresponding to the current specified sensor data by using a current pose predictor, a filtering fusion result corresponding to the current specified sensor data, the current specified sensor data and the specified sensor data between the current acquisition time and the first time.
2. The method of claim 1, wherein the step of obtaining target sensor data from the predetermined storage space corresponding to acquisition times before the first time and after the second time comprises:
judging whether the preset storage space stores sensor data of the corresponding acquisition time before the first time;
and if the preset storage space is judged to store the sensor data of which the corresponding acquisition time is before the first time, acquiring target sensor data of which the corresponding acquisition time is before the first time and after the second time from the preset storage space.
3. The method of claim 1 or 2, wherein the processor is a processor disposed within an onboard platform of the target vehicle;
after the step of obtaining the first time corresponding to the currently specified sensor data, the method further includes:
and storing the first time in the preset storage space in correspondence with the current specified sensor data, the first time serving as the fusion time of the filtering fusion result corresponding to the current specified sensor data.
4. The method of claim 1 or 2, wherein the processor is a processor disposed on an off-board device;
the step of obtaining a first time corresponding to the currently specified sensor data includes:
and obtaining a first moment corresponding to the current specified sensor data from the preset storage space.
5. The method of any one of claims 1-3, wherein the processor is a processor disposed within an onboard platform of the target vehicle;
the step of obtaining a first time corresponding to the currently specified sensor data includes:
obtaining a preset time difference;
and calculating the time corresponding to the difference value between the current acquisition time corresponding to the current designated sensor data and the preset time difference, and taking the time as the first time corresponding to the current designated sensor data.
6. The fusion device of the multi-sensor data is characterized by being applied to a processor of a fusion system of the multi-sensor data, wherein the system further comprises at least two types of sensors and a preset storage space; each sensor is configured to collect corresponding sensor data, all arranged in the same target vehicle; the preset storage space is configured to store sensor data collected by the at least two types of sensors, and the device comprises:
the device comprises a first obtaining module, a second obtaining module and a third obtaining module, wherein the first obtaining module is configured to obtain a first moment corresponding to current specified sensor data after determining that the current specified sensor data collected by a specified sensor is obtained, and a difference value between the current collecting moment corresponding to the current specified sensor data and the first moment is a preset time difference;
a second obtaining module configured to obtain, from the preset storage space, target sensor data of which the corresponding acquisition time is before a first time and after a second time, where a difference between the acquisition time corresponding to a previous designated sensor data of the currently designated sensor data and the second time is the preset time difference;
the filtering module is configured to perform filtering processing on the target sensor data by using a current filter according to a preset data processing sequence to obtain a filtering fusion result corresponding to the current specified sensor data;
a determining module configured to determine the current pose information of the target vehicle corresponding to the current designated sensor data by using a current pose predictor, the filtering fusion result corresponding to the current designated sensor data, the current designated sensor data, and the designated sensor data between the current acquisition time and the first time.
7. The apparatus according to claim 6, wherein the second obtaining module is specifically configured to determine whether the preset storage space stores sensor data corresponding to an acquisition time before the first time;
and if the preset storage space is judged to store the sensor data of which the corresponding acquisition time is before the first time, acquiring target sensor data of which the corresponding acquisition time is before the first time and after the second time from the preset storage space.
8. The apparatus of claim 6 or 7, wherein the processor is a processor disposed within an onboard platform of the target vehicle;
the device further comprises:
and the storage module is configured to, after the current filter filters the target sensor data to obtain the filtering fusion result, store the first time in the preset storage space in correspondence with the currently specified sensor data, as the fusion time of the filtering fusion result corresponding to the currently specified sensor data.
9. The apparatus of claim 6 or 7, wherein the processor is a processor disposed on an off-board device;
the first obtaining module is specifically configured to obtain a first time corresponding to the currently specified sensor data from the preset storage space.
10. The system for fusing the data of the multiple sensors is characterized by comprising a processor, at least two types of sensors and a preset storage space; each sensor is configured to collect corresponding sensor data, all arranged in the same target vehicle; the preset storage space is configured to store sensor data acquired by the at least two types of sensors, and the processor is configured to obtain a first moment corresponding to current specified sensor data after determining that the current specified sensor data acquired by a specified sensor is obtained, wherein a difference value between the current acquisition moment corresponding to the current specified sensor data and the first moment is a preset time difference;
obtaining target sensor data of which the corresponding acquisition time is before a first time and after a second time from the preset storage space, wherein the difference value between the acquisition time corresponding to the previous appointed sensor data of the current appointed sensor data and the second time is the preset time difference;
filtering the target sensor data by using a current filter according to a preset data processing sequence to obtain a filtering fusion result corresponding to the current specified sensor data;
and determining the current pose information of the target vehicle corresponding to the current specified sensor data by using a current pose predictor, a filtering fusion result corresponding to the current specified sensor data, the current specified sensor data and the specified sensor data between the current acquisition time and the first time.
CN201911041828.0A 2019-10-30 2019-10-30 Fusion method, device and system of multi-sensor data Active CN112817301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911041828.0A CN112817301B (en) 2019-10-30 2019-10-30 Fusion method, device and system of multi-sensor data


Publications (2)

Publication Number Publication Date
CN112817301A true CN112817301A (en) 2021-05-18
CN112817301B CN112817301B (en) 2023-05-16

Family

ID=75851371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911041828.0A Active CN112817301B (en) 2019-10-30 2019-10-30 Fusion method, device and system of multi-sensor data

Country Status (1)

Country Link
CN (1) CN112817301B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112747754A (en) * 2019-10-30 2021-05-04 北京初速度科技有限公司 Fusion method, device and system of multi-sensor data
CN112859659A (en) * 2019-11-28 2021-05-28 初速度(苏州)科技有限公司 Multi-sensor data acquisition method, device and system
CN113327344A (en) * 2021-05-27 2021-08-31 北京百度网讯科技有限公司 Fusion positioning method, device, equipment, storage medium and program product

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103930797A (en) * 2011-09-12 2014-07-16 大陆-特韦斯贸易合伙股份公司及两合公司 Time-corrected sensor system
WO2015189144A1 (en) * 2014-06-11 2015-12-17 Continental Teves Ag & Co. Ohg Method and system for correcting measurement data and/or navigation data of a sensor base system
CN105682222A (en) * 2016-03-01 2016-06-15 西安电子科技大学 Vehicle location positioning information fusion method based on vehicular ad hoc network
CN106840179A (en) * 2017-03-07 2017-06-13 中国科学院合肥物质科学研究院 A kind of intelligent vehicle localization method based on multi-sensor information fusion
US20170307379A1 (en) * 2016-04-20 2017-10-26 Honda Research Institute Europe Gmbh Navigation system and method for error correction
WO2018077176A1 (en) * 2016-10-26 2018-05-03 北京小鸟看看科技有限公司 Wearable device and method for determining user displacement in wearable device
CN108535755A (en) * 2018-01-17 2018-09-14 南昌大学 The vehicle-mounted combined in real time air navigation aids of GNSS/IMU based on MEMS
CN108573271A (en) * 2017-12-15 2018-09-25 蔚来汽车有限公司 Optimization method and device, computer equipment and the recording medium of Multisensor Target Information fusion
CN109059927A (en) * 2018-08-21 2018-12-21 南京邮电大学 The mobile robot slam of multisensor builds drawing method and system under complex environment
CN109947103A (en) * 2019-03-18 2019-06-28 深圳一清创新科技有限公司 Unmanned control method, device, system and load bearing equipment
CN110231028A (en) * 2018-03-05 2019-09-13 北京京东尚科信息技术有限公司 Aircraft navigation methods, devices and systems


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112747754A (en) * 2019-10-30 2021-05-04 北京初速度科技有限公司 Fusion method, device and system of multi-sensor data
CN112859659A (en) * 2019-11-28 2021-05-28 初速度(苏州)科技有限公司 Multi-sensor data acquisition method, device and system
CN112859659B (en) * 2019-11-28 2022-05-13 魔门塔(苏州)科技有限公司 Method, device and system for acquiring multi-sensor data
CN113327344A (en) * 2021-05-27 2021-08-31 北京百度网讯科技有限公司 Fusion positioning method, device, equipment, storage medium and program product

Also Published As

Publication number Publication date
CN112817301B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN108571974B (en) Vehicle positioning using a camera
CN107024215B (en) Tracking objects within a dynamic environment to improve localization
CN109583415B (en) Traffic light detection and identification method based on fusion of laser radar and camera
CN112817301B (en) Fusion method, device and system of multi-sensor data
CN112116654B (en) Vehicle pose determining method and device and electronic equipment
US11740093B2 (en) Lane marking localization and fusion
EP3104284A1 (en) Automatic labeling and learning of driver yield intention
CN110945320B (en) Vehicle positioning method and system
CN112747754A (en) Fusion method, device and system of multi-sensor data
Liu et al. Bigroad: Scaling road data acquisition for dependable self-driving
CN111832376B (en) Vehicle reverse running detection method and device, electronic equipment and storage medium
CN111812698A (en) Positioning method, device, medium and equipment
CN110119138A (en) For the method for self-locating of automatic driving vehicle, system and machine readable media
KR20160112580A (en) Apparatus and method for reconstructing scene of traffic accident using OBD, GPS and image information of vehicle blackbox
CN110244742A (en) Method, equipment and the storage medium that automatic driving vehicle is cruised
CN111521192A (en) Positioning method, navigation information display method, positioning system and electronic equipment
CN113191030A (en) Automatic driving test scene construction method and device
CN111152792A (en) Device and method for determining the level of attention demand of a vehicle driver
CN113743312B (en) Image correction method and device based on vehicle-mounted terminal
CN104422426A (en) Method and apparatus for providing vehicle navigation information within elevated road region
US20210300372A1 (en) Driving assistant method, vehicle-mounted device and readable storage medium
JP2008082925A (en) Navigation device, its control method, and control program
CN106840181B (en) System and method for determining vehicle position
CN109145908A (en) Vehicle positioning method, system, device, test equipment and storage medium
CN111854770B (en) Vehicle positioning system and method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220308

Address after: 100083 unit 501, block AB, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing

Applicant after: BEIJING MOMENTA TECHNOLOGY Co.,Ltd.

Address before: 100083 room 28, 4 / F, block a, Dongsheng building, 8 Zhongguancun East Road, Haidian District, Beijing

Applicant before: BEIJING CHUSUDU TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant