CN112733907A - Data fusion method and device, electronic equipment and storage medium


Publication number: CN112733907A
Application number: CN202011628706.4A
Authority: CN (China)
Prior art keywords: target, data, observation, observation data, fusion
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 张世权, 马全盟, 罗铨, 蒋沁宏, 石建萍
Current Assignee: Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee: Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202011628706.4A
Publication of CN112733907A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/251 - Fusion techniques of input or preprocessed data
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/93 - Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 - Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Abstract

The present disclosure provides a data fusion method, an apparatus, an electronic device, and a computer-readable storage medium, wherein the method comprises: determining a target sensor and a fusion algorithm flow indicated by a target configuration file; acquiring observation data collected by the target sensor; and performing data fusion processing on the observation data collected by the target sensor according to the fusion algorithm flow to obtain tracking state information of a target. By specifying the target sensor and the fusion algorithm flow for its observation data in the target configuration file, the embodiments of the present disclosure allow the fusion scheme for the observation data of different sensors installed in different application scenes to be configured freely, thereby alleviating the technical problem that existing data perception fusion systems are difficult to adapt freely to various sensor configuration schemes.

Description

Data fusion method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a data fusion method, an apparatus, an electronic device, and a computer-readable storage medium.
Background
Existing sensor data perception fusion methods are mainly divided into perception fusion methods using a synchronous sensor system and perception fusion methods using an asynchronous sensor system, and the asynchronous variety is the one most used at present. Because the types of asynchronous sensors differ across application scenes, the data fusion algorithm of a sensor data perception fusion method is tied to the scene in which it is applied. In the field of automatic driving, for example, sensor configurations are diverse. Grouped by category, the sensors can roughly be classified into laser radar (Lidar), cameras (Camera), millimeter-wave radar (Radar), ultrasonic sensors (Ultrasonic), and the like. Different application scenes call for different sensor arrangements according to the use requirements. The multiple sensors are heterogeneous in structure; they are arranged at different spatial positions and orientations; and their perception results are usually asynchronous in time, i.e., the timestamps are not aligned. The perception fusion system is the important module that synthesizes the perception results of all sensors so as to restore and estimate the states of multiple targets in the real world.
A real automatic driving application scene places strong demands on the perception fusion system: it needs to fuse the advantages and perception areas of the sensors so that various state information of a target, including but not limited to position, orientation, category, bounding box, speed, acceleration, and existence, can be provided stably, accurately, and in real time.
Disclosure of Invention
The embodiment of the disclosure at least provides a data fusion method, a data fusion device, electronic equipment and a computer-readable storage medium.
In a first aspect, an embodiment of the present disclosure provides a data fusion method, including: determining a target sensor and a fusion algorithm process indicated by a target configuration file; acquiring observation data acquired by the target sensor; and performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain the tracking state information of the target.
As can be seen from the above description, in the embodiment of the present disclosure, by setting a target configuration file and using it to specify the fusion algorithm flow for the observation data, the fusion schemes of different sensors installed in different application scenes can be configured freely, which alleviates the technical problem that existing data perception fusion systems are difficult to adapt freely to various sensor configuration schemes.
In an alternative embodiment, the tracking status information includes a motion status; performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target, including: performing data association on the observation data and at least one target according to the prediction data in the annular cache region to obtain target observation data associated with each target; the predicted data in the annular cache area represent the motion state of the target predicted according to the observation data acquired at each observation moment in the target time window; performing timestamp alignment processing on the target observation data and the prediction data in the annular cache region; and updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target.
In the embodiment of the present disclosure, by storing the predicted data of the target predicted according to the observation data acquired at each observation time in the target time window through the annular buffer, and determining the motion state of the target according to the predicted data in the annular buffer, the motion state of the target at the historical time can be stored by using the annular buffer technology, so that all data of each sensor can be effectively utilized, and meanwhile, the stability of the motion state of the target can be ensured in the process of data sensing fusion.
In an optional implementation manner, the performing data association on the observation data and at least one target according to the predicted data in the ring buffer to obtain target observation data associated with each target includes: calculating the similarity between the predicted data and the observed data in the annular cache region according to a data association matching algorithm indicated in the target configuration file, determining a target associated with the observed data according to the similarity, and determining the observed data as target observed data of the associated target.
As can be seen from the above description, in the embodiment of the present disclosure, by configuring the data association matching manner in the target configuration file, the algorithm used for data association matching can be configured at will, so that it can meet the data fusion requirements of different application scenes; the target configuration file is thus used to break out the algorithm corresponding to the data association matching sub-flow within the fusion algorithm flow.
In an optional embodiment, the performing a timestamp alignment process on the target observation data and the prediction data in the ring buffer includes: determining a matching relationship between a target observation time of the target observation data and the target time window; and under the condition that the target observation time is determined to be in the target time window according to the matching relation, or the target observation time is greater than the maximum time stamp of the predicted data in the target time window, interpolating to obtain the predicted data corresponding to the target observation data and storing the predicted data in the annular cache region, and taking the target observation time as the time stamp of the predicted data obtained by interpolation.
In the embodiment of the present disclosure, by performing timestamp alignment processing on the target observation data and the prediction data in the annular buffer in the manner described above, the stability of the latest motion state of the target can be ensured.
In an optional implementation, the time-stamp aligning the target observation data and the prediction data in the ring buffer further includes: under the condition that the annular cache region does not contain the prediction data corresponding to the target observation data, predicting the prediction data corresponding to the target observation data according to the prediction data of the target moment in the annular cache region to obtain target prediction data; the target time is a time before the target observation time of the target observation data in the target time window of the annular cache region and/or a time after the target observation time; and inserting the target prediction data into the storage position of the annular cache region corresponding to the target observation time.
As can be seen from the above description, in the embodiment of the present disclosure, by performing timestamp alignment processing on the annular buffer queue and the target observation data, interpolation calculation on the prediction data of the target corresponding to the target observation data in the target time window can be performed, so as to achieve accurate alignment between the timestamp of the annular buffer queue and the timestamp of the target observation data. After the time stamps are aligned, when updating is carried out according to the prediction data after the time stamp alignment processing, a more accurate fusion motion state can be obtained.
In an optional implementation manner, the predicting data corresponding to the target observation data according to the prediction data of the target time in the ring buffer to obtain target prediction data includes: predicting the prediction data corresponding to the target observation data through each motion model in the interactive multi-model; and fusing the predicted data corresponding to the target observation data predicted by each motion model to obtain the target predicted data.
In the embodiment of the disclosure, the target prediction data is predicted through the interactive multi-model, and the motion state of the complex target can be effectively fitted, so that a better perception fusion result can be obtained.
In an optional embodiment, the predicting, by each motion model in the interactive multi-model, prediction data corresponding to the target observation data includes: predicting model probabilities according to the prediction data of the target time through each motion model; the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time; acquiring the fusion motion state of each target at the target moment determined according to each motion model; and determining the prediction data which is predicted by each motion model and corresponds to the target observation data based on the model probability and the fusion motion state of each target at the target moment determined according to each motion model.
In the embodiment of the disclosure, the prediction data of each motion model at the target observation time is determined by the model probability of each motion model in the interactive multi-model and the fusion state data of the target determined by each motion model at the target time, so that the motion state of the complex target can be effectively fitted, and a better perception fusion result is obtained.
In an optional implementation manner, the updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target includes: selecting at least one motion model from the interactive multi-models; and updating the prediction data after the timestamp alignment processing according to the selected motion model and the target observation data to obtain the motion state of each target.
In the embodiment of the disclosure, a mode of selecting one or more target motion models from the interactive multi-models can be realized, and a plurality of motion models most conforming to the target motion mode are selected in real time in the motion process, so that the optimization speed and the optimization effect of the interactive multi-model method are improved.
In an optional implementation manner, the updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target includes: and updating the prediction data after the timestamp alignment processing through a motion model in the interactive multi-model and the target observation data to obtain the motion state of each target.
As can be seen from the above description, in the embodiment of the present disclosure, the interactive multi-model is used to update the prediction data after the timestamp alignment processing, so as to obtain the motion state of the target, and the motion state of the complex target can be effectively fitted, thereby obtaining a better perception fusion result.
In an optional implementation manner, the updating, by using a motion model in an interactive multi-model and the target observation data, the prediction data after the timestamp alignment processing to obtain the motion state of each target includes: determining the confidence of each motion model according to the target observation data, wherein the confidence represents the matching degree between the motion state of each target predicted by the motion model at the target observation time and the actual motion state of each target; updating the model probability of each motion model according to the confidence coefficient, wherein the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time; and determining the motion state and covariance matrix of each target according to the predicted data of the target observation data through the updated model probability and each motion model, wherein the covariance matrix of one target is used for representing the correlation degree between the predicted motion state of the target and the actual motion state of the target.
In the embodiment of the disclosure, the confidence of each motion model is determined through a multi-interactive multi-model, and the model probability of each motion model is updated according to the confidence, so that the motion state and the covariance matrix of the target are determined according to the updated model probability and the prediction data predicted by each motion model according to the target observation data, a more accurate perception fusion result can be obtained, and the motion state of the complex target can be effectively fitted.
In an optional implementation manner, the updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target includes: determining a plurality of target prediction data in the ring cache, wherein the time represented by the timestamp is located before the target observation time; and updating the determined target prediction data according to the target observation data, and determining the motion state of each target according to the updated target prediction data.
In the embodiment of the present disclosure, by updating the plurality of target prediction data according to the target observation data, the accuracy and smoothness of the predicted motion state can be further improved, so that a more accurate motion state is obtained.
In an optional implementation manner, the updating the determined target prediction data according to the target observation data, and determining the motion state of each target according to the updated target prediction data includes: determining time windows corresponding to the target prediction data; determining observation data positioned in the corresponding time window in the target observation data; determining a loss function value between any two adjacent target prediction data in the plurality of target prediction data according to the determined observation data, wherein the loss function value comprises at least one of the following: a motion loss function value, a measurement loss function value, and a prior loss function value; updating the plurality of target prediction data according to the loss function value; and determining the motion state of each target according to the updated target prediction data until the loss function value meets the iteration stop condition.
In the embodiment of the present disclosure, by calculating the loss function value between any two adjacent target prediction data among the plurality of target prediction data and updating the plurality of target prediction data according to the loss function value, the motion state of each target can be determined by combining richer information, thereby improving the accuracy of the determined motion state of each target.
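For illustration, one common way to write such a windowed loss over consecutive prediction states is sketched below; the concrete functional forms (motion model f, measurement model h, and the weighting matrices Q, R, P0) are assumptions, since this passage only names the motion, measurement, and prior loss terms:

```latex
L(\{x_k\}) = \sum_{k} \lVert x_{k+1} - f(x_k) \rVert_{Q}^{2}
           + \sum_{k} \lVert z_k - h(x_k) \rVert_{R}^{2}
           + \lVert x_{0} - \bar{x}_{0} \rVert_{P_{0}}^{2}
```

Here x_k denotes the target prediction data in the window and z_k the observation data falling inside the corresponding time windows; the first sum is the motion loss, the second the measurement loss, and the last term the prior loss. The prediction data are updated iteratively until the loss value satisfies the iteration stop condition.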
In an optional embodiment, the method further comprises: after obtaining tracking state information of a target, executing target operation on the information of the target in a target pool according to the tracking state information, wherein the target operation comprises any one of the following operations: data updating operation, data creating operation and data transmission operation.
In the embodiment of the present disclosure, by the above setting manner, corresponding operations on relevant data of the target in the target pool can be performed in real time through the tracking state information of the target, so that efficient management of the data can be realized.
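As a minimal sketch only (the disclosure names the operations but not a data structure), a target pool keyed by target identifier could support the three operations as follows; all field and method names are assumptions:

```python
# Hypothetical target pool sketch; "tracking_state" is assumed to be a dict of
# tracking state information (motion state, existence estimate, type, ...).
class TargetPool:
    def __init__(self):
        self.targets = {}

    def apply(self, target_id, tracking_state):
        if target_id not in self.targets:
            self.targets[target_id] = dict(tracking_state)    # data creating operation
        else:
            self.targets[target_id].update(tracking_state)    # data updating operation

    def publish(self):
        return list(self.targets.values())                    # data transmission operation
```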
In an optional embodiment, the method further comprises: preprocessing the observation data to obtain the preprocessed observation data; the data fusion processing is performed on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain the tracking state information of the target, and the method comprises the following steps: and performing data fusion on the preprocessed observation data according to the fusion algorithm flow to obtain the tracking state information of the target.
In the embodiment of the disclosure, before the data fusion processing is performed on the observation data detected by the target sensor according to the fusion algorithm flow, the useless data in the observation data can be removed by performing the data preprocessing on the observation data, so that the accuracy of the observation data is improved. When data fusion processing is carried out according to observation data after data preprocessing, the efficiency of the data fusion processing can be improved, and the accuracy of the determined tracking state information is improved.
In an optional embodiment, the fusion algorithm flow includes a plurality of sub-flows, and the target configuration file is further configured to indicate an execution order of the plurality of sub-flows and a flow algorithm corresponding to each sub-flow; the plurality of sub-processes includes at least one of the following sub-processes: a sub-process for determining a motion state of the object, a sub-process for determining presence estimation information of the object, a sub-process for determining type information of the object; the data fusion processing of the observation data acquired by the target sensor according to the fusion algorithm flow to obtain the tracking state information of the target comprises the following steps: and performing data fusion on the observation data acquired by the target sensor according to the execution sequence of the plurality of sub-processes in the target configuration file and the process algorithm corresponding to each sub-process to obtain tracking state information of the target, wherein the tracking state information of the target comprises at least one of a motion state, existence estimation information and type information.
In the embodiment of the present disclosure, the fusion schemes of different sensors installed in different sensing environments are freely configured through the processing manner, so that the technical problem that the existing data sensing fusion system in the prior art is difficult to freely adapt to various sensor configuration schemes is solved.
In an alternative embodiment, the target profile is determined according to the following steps: determining a sensor matched with an environment to be sensed as a target sensor; determining a profile indicative of a target sensor; and selecting a configuration file indicating a fusion algorithm flow matched with the environment to be perceived from the determined configuration files as a target configuration file.
In the embodiment of the disclosure, the observation data are fused according to the fusion algorithm process in the corresponding perception environment in the target configuration file, so that the tracking state of the target can be accurately predicted, and a more accurate tracking state can be obtained.
In a second aspect, an embodiment of the present disclosure further provides a data fusion apparatus, including: the determining unit is used for determining a target sensor and a fusion algorithm process indicated by the target configuration file; the acquisition unit is used for acquiring the observation data acquired by the target sensor; and the fusion processing unit is used for carrying out data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain the tracking state information of the target.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the data fusion method according to any one of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the data fusion method according to any one of the first aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a data fusion method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of a sensor configuration provided by an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a specific method for performing data fusion processing on the observation data to obtain the tracking state information of the target in the data fusion method provided by the embodiment of the present disclosure;
FIG. 4 illustrates a data timing diagram of a timestamp alignment process provided by an embodiment of the present disclosure;
fig. 5 is a flowchart illustrating a specific method for performing data association between the observation data and the target in the data fusion method provided in the embodiment of the present disclosure;
fig. 6 is a flowchart illustrating a specific method of performing timestamp alignment processing on the target observation data and the predicted data in the ring buffer in the data fusion method provided by the embodiment of the present disclosure;
fig. 7 is a schematic structural diagram illustrating a data processing method provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of another fusion algorithm flow provided by the embodiments of the present disclosure;
fig. 9 is a schematic diagram illustrating a data fusion apparatus provided in an embodiment of the present disclosure;
fig. 10 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an association relationship, meaning that three relationships may exist; e.g., A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the group consisting of A, B, and C.
Research shows that the existing data perception fusion system needs to be correspondingly changed aiming at different sensor configuration schemes, or the existing data perception fusion system is difficult to freely adapt to various sensor configuration schemes.
Based on the research, the present disclosure provides a data fusion method, an apparatus, an electronic device, and a computer-readable storage medium. In the embodiment of the disclosure, the target sensor and the fusion algorithm process of the acquired observation data thereof are set through the target configuration file, so that the fusion schemes of different sensors installed in different application scenes can be freely configured, and the technical problem that the existing data perception fusion system in the prior art is difficult to freely adapt to various sensor configuration schemes is solved.
To facilitate understanding of the present embodiment, first, a data fusion method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the data fusion method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the data fusion method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a data fusion method provided in the embodiment of the present disclosure is shown, where the method includes steps S101 to S105, where:
s101: and determining the target sensor indicated by the target configuration file and the fusion algorithm flow.
In the embodiment of the disclosure, the target configuration file is used for indicating a target sensor in a corresponding sensing environment and a fusion algorithm process for fusing observation data acquired by the target sensor; the perception environment can be understood as an application scenario of the target sensor. For example, the sensing environment may be a driving environment of an autonomous vehicle in which the target sensor is located, and when the driving environment of the autonomous vehicle changes, the sensing environment of the target sensor changes accordingly. For example, when an autonomous vehicle runs on a highway and a rural area, the sensing environment of the target sensor is not the same.
For the automatic driving vehicle, when the vehicle is in different driving environments, the sensing environments of the automatic driving vehicle are also different, and at this time, in order to more accurately track the target in the corresponding sensing environment, the types and/or the number of the target sensors used in the corresponding sensing environments may also be different. Therefore, in the embodiment of the present disclosure, the type and/or the number of the corresponding target sensors may also be determined according to the type of the sensing environment, and then, the fusion algorithm flow of the observation data acquired by the target sensors is stored in the corresponding target configuration file.
Based on this, in the embodiments of the present disclosure, the target profile is determined according to the following steps: determining a sensor matched with an environment to be sensed as a target sensor; determining a profile indicative of a target sensor; and selecting a configuration file indicating a fusion algorithm flow matched with the environment to be perceived from the determined configuration files as a target configuration file.
In the embodiment of the disclosure, the observation data are fused by setting the fusion algorithm flow under the corresponding perception environment in the target configuration file, so that the tracking state of the target can be accurately predicted, and a more accurate tracking state can be obtained.
It should be noted that, in the embodiment of the present disclosure, the type of the target sensor in the target configuration file may be modified, and the fusion algorithm flow may also be adjusted. As shown in fig. 2, in the embodiment of the present disclosure, the target profile may be determined according to the kind of the target sensor and the number of the target sensors.
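For illustration, a target configuration file could look roughly like the following sketch; the file format, key names, and pipeline steps shown here are assumptions for readability, not a format defined by this disclosure:

```python
import json

# Hypothetical target profile: names the target sensors for one sensing
# environment and the ordered sub-flows of the fusion algorithm flow.
target_profile = {
    "sensing_environment": "highway",
    "target_sensors": ["lidar_top", "camera_front", "radar_front"],
    "fusion_flow": [
        {"step": "data_preprocessing"},
        {"step": "timestamp_alignment"},
        {"step": "data_association", "matcher": "hungarian",
         "similarity": "center_point"},
        {"step": "motion_state_estimation", "model": "interactive_multi_model"},
        {"step": "existence_estimation"},
        {"step": "class_estimation"},
        {"step": "target_pool_management"},
    ],
}

with open("highway_profile.json", "w") as f:
    json.dump(target_profile, f, ensure_ascii=False, indent=2)
```

Changing the sensor list or reordering the sub-flows in such a file would then reconfigure the fusion scheme without touching the fusion code itself.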
S103: and acquiring the observation data acquired by the target sensor.
In the embodiment of the present disclosure, after the target sensor is determined, the observation data collected by the target sensor may be acquired. It should be noted that a sensor with a high data transmission delay needs to perform data preprocessing on the observation data after collecting it and only then uploads the preprocessed observation data, so a certain delay exists in the data transmission process of such a sensor; that is, in this case, a certain time difference may exist between the time when the observation data collected by the target sensor is received and the time when the target sensor actually collected it.
S105: and performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain the tracking state information of the target.
In an embodiment of the present disclosure, the fusion algorithm process includes a plurality of sub-processes, and the target configuration file is further configured to indicate an execution order of the plurality of sub-processes and a process algorithm corresponding to each sub-process, where the plurality of sub-processes includes at least one of the following sub-processes: a sub-process for determining a motion state of the object, a sub-process for determining presence estimation information of the object, a sub-process for determining type information of the object.
It should be noted that, in the embodiment of the present disclosure, the number and/or the order of the sub-processes in the target configuration file may be adjustable, and the process algorithm corresponding to each sub-process may be adjustable.
As can be seen from the above description, in the embodiment of the present disclosure, the target configuration file is used to set the fusion algorithm process of the target sensor and the collected observation data thereof, so that the fusion schemes of the observation data of different sensors installed in different sensing environments can be freely configured, and each process in the fusion algorithm process can be disassembled through the target configuration file, thereby alleviating the technical problem that the existing data sensing fusion system in the prior art is difficult to freely adapt to various sensor configuration schemes.
As can be seen from the above description, in the embodiment of the present disclosure, the plurality of sub-processes includes at least one of the following sub-processes: a sub-process for determining a motion state of the object, a sub-process for determining presence estimation information of the object; a sub-process for determining type information of the target. Besides, the plurality of sub-processes may also include other sub-processes, and the plurality of sub-processes included in the fusion algorithm process will be described in detail below.
In an alternative embodiment, the plurality of sub-processes may include at least one of the following sub-processes: data preprocessing, timestamp alignment processing, data association matching, motion state estimation (i.e. the above-mentioned sub-process for determining the motion state of the target), target presence estimation (i.e. the above-mentioned sub-process for determining the presence estimation information of the target), target class estimation (i.e. the above-mentioned sub-process for determining the type information of the target), target pool management, and the like. The sub-processes described above will be described with reference to specific embodiments.
Firstly, the method comprises the following steps: and (4) preprocessing data.
For the sub-process of data preprocessing, in the embodiment of the present disclosure, the data fusion method further includes: and preprocessing the observation data to obtain the preprocessed observation data.
In this case, when data fusion processing is performed on observation data acquired by a target sensor according to a fusion algorithm flow to obtain tracking state information of a target, data fusion may be performed on the observation data after preprocessing according to the fusion algorithm flow to obtain the tracking state information of the target.
Specifically, the data preprocessing sub-process refers to acquiring the observation data collected by the target sensor indicated in the target configuration file and then performing data preprocessing on it. In an alternative embodiment, the target sensor may preprocess the observation data and send the preprocessed observation data to the computer device; alternatively, the target sensor may send the raw observation data to the computer device so that the computer device performs the data preprocessing. The data preprocessing may include any of the following processing methods: de-duplication matching, deletion of abnormal values, and pre-computation (such as pre-computing the viewing-angle range, projection frames, and the like).
In the embodiment of the present disclosure, the data preprocessing process may be performed on multi-sensor data such as a multi-camera, a camera-lidar, a camera-millimeter-wave radar, and a lidar-millimeter-wave radar. In addition, the observation data may be processed by other data preprocessing methods, such as data denoising and data smoothing. It should be noted that, in the embodiment of the present disclosure, a specific processing method of data preprocessing is associated with a type of the target sensor, for example, data preprocessing methods corresponding to observation data of different types of target sensors are different.
In the embodiment of the disclosure, before the data fusion processing is performed on the observation data detected by the target sensor according to the fusion algorithm flow, the useless data in the observation data can be removed by performing the data preprocessing on the observation data, so that the accuracy of the observation data is improved. When data fusion processing is carried out according to observation data after data preprocessing, the efficiency of the data fusion processing can be improved, and the accuracy of the determined tracking state information is improved.
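As a minimal sketch (assuming a simple list-of-dicts representation for observations; the disclosure names the operations but not their implementation), de-duplication and abnormal-value deletion could look like:

```python
def preprocess(observations, min_score=0.1, max_range=200.0):
    """Drop duplicate detections, low-confidence detections, and out-of-range outliers.

    Each observation is assumed to be a dict with "id", "position" (x, y) and "score".
    """
    seen, cleaned = set(), []
    for obs in observations:
        if obs["id"] in seen:                              # de-duplication matching
            continue
        if obs["score"] < min_score:                       # deletion of abnormal values
            continue
        x, y = obs["position"]
        if (x * x + y * y) ** 0.5 > max_range:             # out-of-range outlier
            continue
        seen.add(obs["id"])
        cleaned.append(obs)
    return cleaned
```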
Secondly, the method comprises the following steps: and (5) time stamp alignment processing.
In the embodiment of the present disclosure, the timestamp alignment process refers to aligning the timestamp of the prediction data of a target whose existence has already been estimated to the timestamp of new observation data of a sensor (i.e., the observation time of the observation data, or the time at which it was collected), so that the motion states can be aligned and precisely matched. In the embodiment of the disclosure, accurate alignment between the prediction data of an already-estimated target and the new observation data of the target sensor can be achieved by maintaining a ring buffer of the target's motion state. It should be noted that, as can be seen from the above description, since there is a delay in the data transmission of some target sensors, the time difference between the time when the target sensor collects the observation data and the time when the vehicle-mounted host receives it is the delay time. In this case, after the vehicle-mounted host receives the observation data transmitted with a delay, the timestamps of the prediction data in the ring buffer need to be aligned to the observation time of that observation data.
Thirdly, the method comprises the following steps: and (6) data association matching.
In the embodiment of the present disclosure, data association matching refers to an operation of performing matching association on an object that is estimated to exist and observation data collected by an object sensor.
For the sub-flow of the timestamp alignment process and the data association matching, in an alternative embodiment, as shown in fig. 3, the following steps are performed: and performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm process to obtain the tracking state information of the target, wherein the process comprises the following steps:
step S1051, according to the prediction data in the ring buffer, performing data association to the observation data and at least one target to obtain target observation data associated with each target; and the predicted data in the annular cache area represents the motion state of the target predicted according to the observation data acquired at each observation moment in the target time window.
In the embodiment of the present disclosure, if the number of the targets is multiple, the observation data of the multiple sensors is the observation data of the multiple targets, and at this time, the observation data of the multiple target sensors and the multiple targets need to be subjected to data association according to the predicted data in the ring buffer, so as to obtain the target observation data belonging to each target.
It should be noted that, in the embodiment of the present disclosure, each target corresponds to a ring buffer, and prediction data indicating a motion state of the target is stored in the ring buffer.
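A minimal per-target ring buffer sketch is given below (field names and capacity are assumptions); each entry is a timestamped prediction of the target's motion state within the target time window:

```python
from collections import deque


class StateRingBuffer:
    """Per-target ring buffer of timestamped motion-state predictions."""

    def __init__(self, capacity=50):
        # When full, appending automatically drops the oldest prediction,
        # which slides the target time window forward.
        self.entries = deque(maxlen=capacity)

    def append(self, timestamp, state):
        self.entries.append({"t": timestamp, "state": state})

    def window(self):
        """Return (earliest, latest) timestamps currently covered by the buffer."""
        return self.entries[0]["t"], self.entries[-1]["t"]
```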
Step S1052, performing timestamp alignment processing on the target observation data and the prediction data in the ring buffer.
In the embodiment of the present disclosure, after the observed data and the multiple targets are subjected to data association, since the time stamp (or observed data) of the observed data of the targets obtained after the association does not correspond to the time stamp of the predicted data in the ring buffer, at this time, the observed data of the targets and the predicted data in the ring buffer need to be subjected to time stamp alignment processing.
For example, as shown in fig. 4, the plurality of sensors includes a camera, a laser radar sensor, and a millimeter wave radar sensor. For the target observation data, at times A1 and A2 the millimeter wave radar sensor collects corresponding observation data, denoted for example as M1 and M2 (M1 and M2 are shown in fig. 4); at times A3 and A5, the lidar sensor collects corresponding observations, denoted as M3 and M5 (M3 and M5 are shown in fig. 4); and at time A4, the camera collects data M4. The timestamps of the prediction data stored in the ring buffer are B1, B2, B3, B4, B5, and B6, respectively. As can be seen from fig. 4, at the observation time A2 there is no prediction data corresponding to A2 in the ring buffer, and at this point it can be determined that the timestamps of the target observation data and the prediction data are not aligned.
It should be noted that one possible reason why there is no prediction data corresponding to observation time A2 in the ring buffer is that the observation data collected at time A2 was not uploaded to the computer device in time, but was transmitted to the computer device after a certain delay. At that point, there may be no prediction data in the ring buffer corresponding to time A2. However, in order to use the observation data collected at time A2 in the data fusion method, a prediction data entry needs to be interpolated into the ring buffer, with time A2 as the timestamp of the interpolated entry. The data transmitted with a delay can then be applied to the data fusion method, so the delayed observation data does not need to be discarded; all of the observation data collected by the target sensor can be utilized, all data of each sensor can be used effectively, and the state stability of the target can be ensured in the process of data perception fusion.
Based on this, in the embodiment of the present disclosure, time stamp alignment processing needs to be performed on the target observed data and the predicted data in the ring buffer, and as can be seen from fig. 4, the predicted data corresponding to the a2 time can be obtained by interpolation according to the predicted data corresponding to the B2 or B3 time, so as to achieve time stamp alignment between the target observed data and the predicted data in the ring buffer.
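A sketch of such an interpolation, producing a prediction stamped at observation time A2 from the buffered predictions at B2 and B3, might look like the following (linear interpolation over a flat state vector is an assumption; the disclosure does not fix the interpolation scheme):

```python
def interpolate_prediction(entry_before, entry_after, t_obs):
    """Linearly interpolate a prediction entry at t_obs between two buffered entries."""
    t0, t1 = entry_before["t"], entry_after["t"]
    w = (t_obs - t0) / (t1 - t0)                 # 0 at t0, 1 at t1
    state = [(1.0 - w) * s0 + w * s1
             for s0, s1 in zip(entry_before["state"], entry_after["state"])]
    return {"t": t_obs, "state": state}
```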
And step S1053, updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target.
After the time stamps of the target observation data and the prediction data in the annular cache region are aligned, the prediction data after time stamp alignment processing can be updated according to the target observation data, and the motion state of each target is obtained. When the motion state of each target is determined by using the prediction data after the timestamp alignment processing, the stability of the motion state of each target after updating can be improved, and a more accurate motion state can be obtained.
In the embodiment of the disclosure, the annular buffer area is used for storing the prediction data determined according to the observation data acquired at each observation time in the target time window, and the motion state of the target is determined according to the prediction data in the annular buffer area, so that the motion state at the historical time can be stored by adopting the annular buffer technology, all data of each sensor can be effectively utilized, and meanwhile, the state stability of the target can be ensured in the process of data perception fusion.
In an alternative embodiment, as shown in fig. 5, performing data association on the observation data and the targets to obtain target observation data associated with each target includes the following processes:
step S501, calculating the similarity between the predicted data and the observed data in the annular cache region according to a data association matching algorithm indicated in the target configuration file;
step S502, determining a target associated with the observation data according to the similarity, and determining the observation data as target observation data of the associated target.
In the embodiment of the present disclosure, the data association matching algorithm may be a "one-to-one" matching algorithm or a "one-to-many" matching algorithm; the "one-to-one" matching algorithm may be Hungarian matching or greedy matching, and the "one-to-many" matching algorithm may be multiple bipartite graph matching, simple greedy matching, and the like. The similarity calculation algorithms include center point similarity, orientation-weighted center point similarity, shape similarity, similarity of 2D image frames, Euclidean distance, and the like.
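For the "one-to-one" case, a minimal sketch of Hungarian matching on a similarity matrix is shown below; the center-point similarity used here is only one of the configurable options named above, and all field names are assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate(predictions, observations, min_similarity=0.3):
    """Return (prediction index, observation index) pairs with acceptable similarity."""
    sim = np.zeros((len(predictions), len(observations)))
    for i, pred in enumerate(predictions):
        for j, obs in enumerate(observations):
            dist = np.linalg.norm(np.asarray(pred["center"]) - np.asarray(obs["center"]))
            sim[i, j] = 1.0 / (1.0 + dist)        # center point similarity (assumed form)
    rows, cols = linear_sum_assignment(sim, maximize=True)   # Hungarian matching
    return [(i, j) for i, j in zip(rows, cols) if sim[i, j] >= min_similarity]
```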
In the embodiment of the present disclosure, in addition to the data association matching algorithm and the similarity calculation algorithm described above, an algorithm that can replace the data association matching algorithm and an algorithm that can replace the similarity calculation algorithm may be adopted, and the present disclosure is not particularly limited thereto.
In the embodiment of the present disclosure, as can be known from the above description, for the sub-process "data association matching", the algorithm corresponding to the sub-process may be configured at will, so that the algorithm corresponding to each sub-process in the fusion algorithm process is disassembled through the target configuration file, and further the technical problem that the existing data-aware fusion system in the prior art is difficult to adapt to various sensor configuration schemes freely is solved.
It should be noted that, in the embodiment of the present disclosure, in addition to performing data association matching according to the above-described method, data association matching may also be performed in the following described manner, which specifically includes:
in the embodiment of the present disclosure, after the observation data with the timestamp (i.e., the observation time) is acquired, the prediction data corresponding to the timestamp, i.e., the first prediction data, may be searched in the ring cache. As can be seen from the above description, each target corresponds to one ring buffer, and therefore, if there are a plurality of targets, the first prediction data may also be prediction data determined in different ring buffers.
After the first prediction data is determined, a similarity between the first prediction data and the observation data needs to be determined. After the similarity is determined, the target associated with the observation data can be determined according to the similarity, and the observation data can be determined as the target observation data of the associated target.
According to the description, the observation data and the targets are associated through the similarity, the target observation data of each target can be quickly and accurately determined from a large amount of sensor observation data, and therefore when the fusion motion state of the targets is determined according to the target observation data and the prediction data, an accurate prediction result can be obtained.
In an optional implementation manner, in the embodiment of the present disclosure, after the observation data and the target are subjected to data association according to the method described above, the target observation data and the predicted data in the ring buffer may be subjected to time stamp alignment processing according to the target observation data.
In an alternative embodiment, as shown in fig. 6, the step of performing timestamp alignment on the target observation data and the predicted data in the ring buffer includes the following steps:
step S601, determining a matching relation between the target observation time and a target time window;
step S602, when it is determined that the target observation time is within the target time window according to the matching relationship, or the target observation time is greater than the maximum time stamp of the predicted data within the target time window, interpolating to obtain the predicted data of the target observation data and store the predicted data in the annular cache region, and taking the target observation time as the time stamp of the predicted data obtained by interpolation.
In the embodiment of the present disclosure, each piece of target observation data obtained latest may be processed through the following three cases, which specifically include:
the first condition is as follows:
and if the target observation time of the target observation data is determined to be smaller than the timestamp of the earliest predicted data in the annular cache region according to the matching relation, discarding the target observation data. For example, as shown in FIG. 4, the start timestamp of the target time window is B1 and the end timestamp of the target time window is B6. As can be seen from fig. 4, the sensor observation data (millimeter wave radar sensor observation data) with the time stamps of C1 and C2 is smaller than the time stamp B1 of the earliest predicted data in the status buffer, and at this time, the target observation data can be discarded.
Case two:
if the target observation time of the target observation data is determined to be greater than the timestamp of the oldest prediction data in the annular cache region and less than the timestamp of the newest prediction data in the annular cache region according to the matching relationship, the target observation time can be determined to be in a target time window according to the matching relationship, and at the moment, the prediction data corresponding to the target observation time can be searched in the annular cache region; and under the condition that the corresponding prediction data is not found, inserting the prediction data of the target observation data in the annular cache region, adding the prediction data obtained by interpolation into the annular cache region, and then updating the prediction data. In case that the corresponding prediction data is found, the step of updating the found prediction data may be performed.
Case three:
if it is determined according to the matching relationship that the target observation time of the target observation data is greater than the timestamp of the latest prediction data in the annular cache region (i.e., the maximum timestamp of the prediction data within the target time window), the prediction data of the target observation data can be interpolated in the annular cache region, the prediction data obtained by interpolation is added to the annular cache region, and then the prediction data is updated.
In the embodiment of the present disclosure, by performing timestamp alignment processing on the target observation data and the prediction data in the annular buffer in the manner described above, the stability of the latest motion state of the target can be ensured.
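Putting the three cases together, a sketch of the alignment step could look like the following; StateRingBuffer and interpolate_prediction refer to the sketches above, and predict_forward stands in for a hypothetical motion-model extrapolation step. In a full implementation the interpolated or extrapolated entry would then be inserted into the buffer in timestamp order before the update step:

```python
def align_timestamp(buffer, t_obs):
    """Return a prediction entry stamped at observation time t_obs, per the three cases."""
    earliest, latest = buffer.window()
    if t_obs < earliest:                                  # case one: stale observation, discard
        return None
    exact = next((e for e in buffer.entries if e["t"] == t_obs), None)
    if exact is not None:                                 # prediction already stamped at t_obs
        return exact
    if t_obs <= latest:                                   # case two: interpolate inside the window
        before = max((e for e in buffer.entries if e["t"] < t_obs), key=lambda e: e["t"])
        after = min((e for e in buffer.entries if e["t"] > t_obs), key=lambda e: e["t"])
        return interpolate_prediction(before, after, t_obs)
    # case three: newer than every buffered prediction, extrapolate from the latest entry
    return predict_forward(buffer.entries[-1], t_obs)     # hypothetical motion-model step
```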
In this embodiment of the present disclosure, the timestamp alignment processing may be further performed on the target observation data and the prediction data in the ring cache region in a manner described in the following steps, specifically including the following processes:
(1) under the condition that the annular cache region does not contain the prediction data corresponding to the target observation data, predicting the prediction data corresponding to the target observation data according to the prediction data of the target moment in the annular cache region to obtain target prediction data; the target time is a time before the target observation time of the target observation data in the target time window of the annular cache region and/or a time after the target observation time;
(2) and inserting the target prediction data into the storage position of the annular cache region corresponding to the target observation time so as to obtain the prediction data subjected to the timestamp alignment processing.
As can be seen from the above three cases, in case two and case three, if it is determined according to the matching relationship that the target observation time of the target observation data is greater than the timestamp of the earliest predicted data in the annular cache region and less than the timestamp of the latest predicted data in the annular cache region, and the predicted data corresponding to the target observation time is not found in the annular cache region; or if the target observation time of the target observation data is determined to be greater than the timestamp of the latest prediction data in the annular cache region according to the matching relation, predicting the prediction data of the target observation data according to the prediction data corresponding to the target time in the annular cache region to obtain target prediction data, adding the target prediction data obtained through interpolation into the annular cache region, taking the target observation time as the timestamp of the target prediction data obtained through interpolation, and then updating the target prediction data.
As can be seen from the above description, in the embodiment of the present disclosure, by performing timestamp alignment processing on the annular buffer queue and the target observation data, interpolation calculation on the prediction data of the target corresponding to the target observation data in the target time window can be performed, so as to achieve accurate alignment between the timestamp of the annular buffer queue and the timestamp of the target observation data. After the time stamps are aligned, when updating is carried out according to the prediction data after the time stamp alignment processing, a more accurate fusion motion state can be obtained.
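As a non-limiting illustration of the timestamp alignment described above, the following Python sketch shows one possible way to organize the annular cache region and the three cases; the class name RingBuffer, the method align_timestamp, and the linear interpolation used here are assumptions for the example only and are not the claimed prediction procedure.

```python
from bisect import bisect_left

class RingBuffer:
    """Hypothetical annular cache of (timestamp, predicted_state) pairs, kept sorted by time."""

    def __init__(self, capacity=32):
        self.capacity = capacity
        self.entries = []                            # (timestamp, state) tuples, oldest first

    def add(self, ts, state):
        self.entries.append((ts, state))
        self.entries.sort(key=lambda e: e[0])
        if len(self.entries) > self.capacity:        # drop the oldest entry when the window is full
            self.entries.pop(0)

    def align_timestamp(self, obs_ts):
        """Return the predicted state aligned to obs_ts, inserting interpolated data when needed."""
        times = [t for t, _ in self.entries]
        if not times or obs_ts < times[0]:
            return None                              # case one: older than the window, discard
        idx = bisect_left(times, obs_ts)
        if idx < len(times) and times[idx] == obs_ts:
            return self.entries[idx][1]              # exact match already stored in the buffer
        if obs_ts > times[-1]:                       # case three: newer than the newest prediction
            state = list(self.entries[-1][1])        # placeholder: hold the last predicted state
        else:                                        # case two: inside the window, no exact match
            (t0, s0), (t1, s1) = self.entries[idx - 1], self.entries[idx]
            w = (obs_ts - t0) / (t1 - t0)
            state = [a + w * (b - a) for a, b in zip(s0, s1)]   # linear interpolation
        self.add(obs_ts, state)                      # store with obs_ts as the timestamp
        return state

# usage: predictions at t = 0, 1, 2; an observation at t = 1.5 is aligned by interpolation
buf = RingBuffer()
for t, s in [(0.0, [0.0, 0.0]), (1.0, [1.0, 0.5]), (2.0, [2.0, 1.0])]:
    buf.add(t, s)
print(buf.align_timestamp(1.5))                      # -> [1.5, 0.75]
```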
In an optional implementation manner of the embodiment of the present disclosure, predicting, according to prediction data of a target time in the ring buffer, prediction data corresponding to the target observation data to obtain target prediction data includes the following processes:
(1) predicting the corresponding prediction data of the target observation data through each motion model in the interactive multi-model;
(2) and fusing the predicted data corresponding to the target observation data predicted by each motion model to obtain the target predicted data.
The existing sensor data perception fusion method usually adopts a single motion model to fit the motion state of a target. However, due to the complexity of target motion patterns, it is difficult to fit the motion state of the target effectively with a single motion model. For example, in the field of vehicle automatic driving, a target may execute a complex motion pattern during vehicle driving, such as going straight, turning right, going straight again, and then merging. In this case, one motion model cannot fit such a complex motion state.
Based on this, in the embodiment of the present disclosure, the prediction data corresponding to the target observation data is predicted by each motion model in the interactive multi-model, and the prediction data predicted by the individual motion models is then fused by summation to obtain the target prediction data.
As can be seen from the above description, in the embodiment of the present disclosure, the interactive multi-model is used to predict the target prediction data corresponding to the target observation data. The motion state of the complex target can be effectively fitted by predicting the target prediction data through the interactive multi-model, so that a better perception fusion result can be obtained.
In an optional embodiment, the predicting data corresponding to the target observation data may be predicted in the following manner, specifically including:
(1) predicting model probability according to the prediction data of the target moment through each motion model; the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time.
Specifically, for each motion model, a target transition probability is determined, where the target transition probability represents the probability of transitioning from another motion model in the interactive multi-model to that motion model. In the disclosed embodiment, for the i-th motion model, $\pi_{ji}$ denotes the probability that the target transitions from motion model j to motion model i, where motion model j is one of the other motion models described above.
Then, a first confidence of the other motion model at the target time can be determined; the first confidence is used to represent the probability that the actual motion of the target at the target time conforms to that other motion model.
If the target observation time is denoted as time k, the target time may be denoted as time k-1. The first confidence of the other motion model j at time k-1 (i.e., the target time) is denoted as $\mu_j(k-1)$.
Finally, the model probability can be determined based on the target transition probability $\pi_{ji}$ and the first confidence $\mu_j(k-1)$. In the disclosed embodiment, after the target transition probability $\pi_{ji}$ and the first confidence $\mu_j(k-1)$ are determined, they can be weighted and summed to obtain the model probability of motion model i, where the calculation formula for determining the model probability based on the target transition probability and the first confidence can be expressed as:

$$\bar{\mu}_i(k \mid k-1) = \sum_{j} \pi_{ji}\, \mu_j(k-1)$$

wherein $\bar{\mu}_i(k \mid k-1)$ denotes the model probability of the above-mentioned motion model i.
After determining the model probability of motion model i in the above-described manner, the model probability is further normalized according to the following formula to obtain the normalized model probability:

$$\mu_{j \mid i}(k-1) = \frac{\pi_{ji}\, \mu_j(k-1)}{\bar{\mu}_i(k \mid k-1)} = \frac{\pi_{ji}\, \mu_j(k-1)}{\sum_{l} \pi_{li}\, \mu_l(k-1)}$$

wherein $\mu_{j \mid i}(k-1)$ denotes the model probability after the normalization processing.
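As a non-limiting numerical illustration of the two formulas above, the following sketch assumes an interactive multi-model with two motion models; the transition probabilities and confidences are invented example values.

```python
import numpy as np

pi = np.array([[0.9, 0.1],        # pi[j, i]: probability of transitioning from model j to model i
               [0.2, 0.8]])
mu_prev = np.array([0.6, 0.4])    # first confidences mu_j(k-1) of the two motion models

# model probability of model i: weighted sum of transition probabilities and first confidences
mu_pred = pi.T @ mu_prev          # mu_pred[i] = sum_j pi[j, i] * mu_prev[j]

# normalized model probability mu_{j|i}(k-1)
mix = (pi * mu_prev[:, None]) / mu_pred[None, :]

print(mu_pred)                    # [0.62 0.38]
print(mix.sum(axis=0))            # each column sums to 1 after normalization
```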
(2) And acquiring the fusion motion state of each target determined at the target moment according to each motion model.
In the disclosed embodiment, after the model probability is predicted according to the prediction data of the target time, the final state result (i.e., the motion state) of the target determined by motion model j at time k-1 may also be obtained; this motion state is denoted, for example, as $\hat{x}_j(k-1 \mid k-1)$.
(3) and determining the prediction data corresponding to the target observation data predicted by each motion model based on the model probability and the fusion motion state of each target at the target moment determined according to each motion model.
After the fusion motion state of the target at the target time is determined through step (2), the fusion motion state $\hat{x}_j(k-1 \mid k-1)$ determined at time k-1 can be fused with the normalized model probability $\mu_{j \mid i}(k-1)$ at time k-1, so as to obtain the prediction data corresponding to the target observation data predicted by each motion model at time k. Then, the prediction data predicted by all the motion models can be summed to obtain the target prediction data.
In the embodiment of the present disclosure, the prediction data corresponding to the target observation data predicted by each motion model can be determined by the following formula:

$$\hat{x}_{0i}(k-1 \mid k-1) = \sum_{j} \hat{x}_j(k-1 \mid k-1)\, \mu_{j \mid i}(k-1)$$

Then, by the formula

$$\hat{x}(k \mid k-1) = \sum_{i} \bar{\mu}_i(k \mid k-1)\, \hat{x}_{0i}(k-1 \mid k-1)$$

the prediction data predicted by all the motion models are summed (weighted by the model probabilities) to obtain the target prediction data.
It should be noted that, in the embodiment of the present disclosure, each motion model may predict a corresponding model probability, and in addition, each motion model may also predict a corresponding covariance matrix $P_j(k-1 \mid k-1)$. In step (3), in addition to determining the prediction data corresponding to the target observation data predicted by each motion model, a corresponding covariance matrix may be predicted, where the covariance matrix is used to represent the degree of correlation between the predicted motion state of a target and the actual motion state of the target.
In the disclosed embodiment, the covariance matrix may be determined by the following formula:

$$P_{0i}(k-1 \mid k-1) = \sum_{j} \mu_{j \mid i}(k-1)\Big\{ P_j(k-1 \mid k-1) + \big[\hat{x}_j(k-1 \mid k-1) - \hat{x}_{0i}(k-1 \mid k-1)\big]\big[\hat{x}_j(k-1 \mid k-1) - \hat{x}_{0i}(k-1 \mid k-1)\big]^{T} \Big\}$$
as can be seen from the above description, for each motion model, first, a matrix difference between the target prediction data and the fusion motion state determined by each motion model at the time k-1 may be calculated, then, according to the matrix difference, a transposed matrix of the matrix difference, and a model probability corresponding to each motion model, a degree of association between the motion model and the target prediction data is determined, and then, the degree of association is added to a covariance matrix determined by each motion model at the time k-1, so as to obtain an addition calculation result. For each motion model, the addition calculation result can be determined in the manner described above, and then the addition calculation result is subjected to summation operation to obtain the covariance matrix determined at the time k.
As can be seen from the above description, in the embodiment of the present disclosure, the interactive multi-model is used to predict the target prediction data corresponding to the target observation data. The motion state of the complex target can be effectively fitted by predicting the target prediction data through the interactive multi-model, so that a better perception fusion result can be obtained.
And fourthly, estimating the motion state.
The motion state estimation sub-process is to estimate the motion state of the target according to the prediction data after the timestamp alignment process, wherein the motion state includes, but is not limited to, position, orientation, speed, and acceleration.
In the embodiment of the present disclosure, after performing timestamp alignment processing on the target observation data and the prediction data in the annular cache region according to the above-described manner, the prediction data after the timestamp alignment processing may be updated to obtain the motion state of the target.
In an optional implementation manner, updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target includes the following steps:
and updating the prediction data after the timestamp alignment processing through the motion models in the interactive multi-model and the target observation data to obtain the motion state of each target.
Specifically, first, a confidence level of each motion model in the interactive multi-model may be determined, where the confidence level is used to characterize a degree of matching between a motion state of the target predicted by the motion model at the target observation time and an actual motion state of the target.
In the embodiment of the disclosure, each motion model is utilized to perform kalman filtering updating on the prediction data after the timestamp alignment processing according to the target observation data, so as to obtain a kalman filtering result; the prediction data after the timestamp alignment processing is the target prediction data determined according to the steps (1) to (3). Specifically, kalman filtering updating may be performed on the target prediction data by using the target observation data and the extended kalman filtering or the unscented kalman filtering to obtain a kalman filtering result, where the kalman filtering result may include a measurement residual and a measurement residual covariance matrix. Then, the confidence level of each motion model in the interactive multi-model is determined according to the Kalman filtering result. Specifically, the measurement residual and the measurement residual covariance matrix can be used as the mean and the variance of a gaussian model, respectively, and then the confidence of each motion model is determined by using the gaussian model.
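As a hedged illustration of this filtering step (the disclosure uses an extended or unscented Kalman filter; a plain linear update is shown here only for brevity), the measurement residual and the measurement residual covariance matrix that feed the confidence calculation can be obtained as follows; the matrices in the usage example are invented.

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    """Standard Kalman update; returns the residual, residual covariance, updated state and covariance."""
    residual = z - H @ x_pred                     # measurement residual
    S = H @ P_pred @ H.T + R                      # measurement residual covariance matrix
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_upd = x_pred + K @ residual
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return residual, S, x_upd, P_upd

# usage with a 2-D state observed directly (illustrative matrices)
res, S, x, P = kalman_update(np.array([1.0, 0.5]), np.eye(2) * 0.1,
                             np.array([1.2, 0.4]), np.eye(2), np.eye(2) * 0.05)
print(res, S)
```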
Specifically, in the disclosed embodiment, the confidence can be determined by the formula

$$\Lambda_i(k) = \mathcal{N}\big(\tilde{y}_i(k),\, S_i(k)\big)$$

wherein $\Lambda_i(k)$ is the confidence, $\mathcal{N}(\cdot)$ is the Gaussian model, $\tilde{y}_i(k)$ is the measurement residual determined from the target observation data (i.e., the mean of the Gaussian model), and $S_i(k)$ is the measurement residual covariance matrix (i.e., the variance of the Gaussian model).
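The confidence itself can then be evaluated as the Gaussian likelihood of the residual, as in the following non-limiting sketch; the function name model_confidence and the example values are assumptions.

```python
import numpy as np

def model_confidence(residual, residual_cov):
    """Gaussian likelihood of the measurement residual, with the residual covariance as its variance."""
    residual = np.asarray(residual, dtype=float)
    S = np.asarray(residual_cov, dtype=float)
    d = residual.size
    norm = 1.0 / np.sqrt(((2 * np.pi) ** d) * np.linalg.det(S))
    maha = residual @ np.linalg.solve(S, residual)   # Mahalanobis distance of the residual
    return norm * np.exp(-0.5 * maha)

# usage: a small residual with moderate covariance yields a relatively high confidence
print(model_confidence([0.2, -0.1], np.diag([0.5, 0.5])))
```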
After the confidence is determined, the model probability of each motion model may be updated according to the confidence, wherein the model probability is used to represent the probability that the actual motion state of each target matches the motion model at the target observation time.
Specifically, the model probability of each motion model can be updated according to the formula

$$\mu_i(k) = \frac{\Lambda_i(k)\, \bar{\mu}_i(k \mid k-1)}{\sum_{j} \Lambda_j(k)\, \bar{\mu}_j(k \mid k-1)}$$

For motion model i, the confidence $\Lambda_j(k)$ of each motion model j is first multiplied by the model probability $\bar{\mu}_j(k \mid k-1)$ of motion model j to obtain a product calculation result Z1; then, the product calculation results Z1 of all motion models j are added to obtain an addition operation result P1, i.e., $P1 = \sum_{j} \Lambda_j(k)\, \bar{\mu}_j(k \mid k-1)$. Next, the confidence $\Lambda_i(k)$ of motion model i is multiplied by the model probability $\bar{\mu}_i(k \mid k-1)$ of motion model i to obtain a product calculation result Z2, and the ratio between the product calculation result Z2 and the addition operation result P1 is calculated and determined as the updated model probability of motion model i.
After the updated model probability is determined, the motion state of the target and a covariance matrix can be determined through the updated model probability and prediction data predicted by each motion model according to the target observation data, wherein the covariance matrix is used for representing the correlation degree between the motion state of the target and the actual motion state of the target.
Specifically, in the disclosed embodiment, the motion state of the target can be calculated by the formula

$$\hat{x}(k \mid k) = \sum_{i} \mu_i(k)\, \hat{x}_i(k \mid k)$$

wherein $\mu_i(k)$ is the updated model probability, and $\hat{x}_i(k \mid k)$ is the prediction data predicted by motion model i from the target observation data. In the embodiment of the present disclosure, the updated model probability of each motion model is multiplied by the prediction data predicted by that motion model, and the products are then summed over all motion models, so as to obtain the motion state of each target.
After the motion state of the target is obtained, the covariance matrix of the target can be determined according to the formula:

$$P(k \mid k) = \sum_{i} \mu_i(k)\Big\{ P_i(k \mid k) + \big[\hat{x}_i(k \mid k) - \hat{x}(k \mid k)\big]\big[\hat{x}_i(k \mid k) - \hat{x}(k \mid k)\big]^{T} \Big\}$$

As can be seen from the above description, for each motion model, the matrix difference between the motion state of the target and the prediction data predicted by that motion model is first calculated; then, the degree of association between the prediction data of the motion model and the motion state is determined according to the matrix difference, the transposed matrix of the matrix difference, and the model probability corresponding to the motion model; next, the degree of association is added to the covariance matrix determined by the motion model at time k to obtain an addition calculation result. The addition calculation results of all the motion models are then summed to obtain the covariance matrix determined at time k.
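Putting the update step together, the following non-limiting sketch updates the model probabilities with the confidences and then combines the per-model estimates into the motion state and covariance matrix; all input values are invented for illustration.

```python
import numpy as np

def imm_combine(conf, mu_prior, x_models, P_models):
    """conf: per-model confidences; mu_prior: model probabilities before the update;
    x_models / P_models: per-model updated states and covariance matrices."""
    z = conf * mu_prior                    # product Z2 for each model: confidence * model probability
    mu_new = z / z.sum()                   # divide by P1 = sum of all products to update probabilities
    x = np.einsum('i,ij->j', mu_new, x_models)            # combined motion state
    P = np.zeros_like(P_models[0])
    for i, mu_i in enumerate(mu_new):
        d = (x_models[i] - x)[:, None]
        P += mu_i * (P_models[i] + d @ d.T)               # combined covariance with spread term
    return mu_new, x, P

mu_new, x, P = imm_combine(
    conf=np.array([0.8, 0.3]),
    mu_prior=np.array([0.62, 0.38]),
    x_models=np.array([[1.1, 0.55], [1.25, 0.45]]),
    P_models=np.stack([np.eye(2) * 0.05, np.eye(2) * 0.08]),
)
print(mu_new, x)
```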
As can be seen from the above description, in the embodiment of the present disclosure, the prediction data after the timestamp alignment processing is updated by using an interactive multi-model, so as to obtain the motion state of the target, and the motion state of the complex target can be effectively fitted, so that a better perception fusion result can be obtained.
The data processing is described below with reference to fig. 7, and as can be seen from fig. 7, the plurality of sensors includes a camera, a lidar sensor, and a millimeter-wave radar sensor.
As can be seen from fig. 7, the camera, the lidar sensor, and the millimeter wave radar sensor collect observation data to obtain an image frame, a lidar data frame, and a millimeter wave radar data frame (i.e., observation data), respectively. And then, performing data association operation on the image frame, the laser radar data frame and the millimeter wave radar data frame according to the predicted data in the annular buffer area, thereby determining a target corresponding to each observation datum. After the data association operation is performed, target observation data to which each target belongs can be obtained. Then, the target observation data and the prediction data in the ring buffer may be subjected to a time stamp alignment process, so that the prediction data after the time stamp alignment process includes the prediction data of the target corresponding to each target observation time of the target observation data.
For example, as shown in fig. 7, the plurality of sensors includes: a camera, a laser radar sensor, and a millimeter wave radar sensor. As can be seen from fig. 7, for the target observation data, the millimeter wave radar sensor acquires data M1 and M2 at times A1 and A2, the laser radar sensor acquires data M3 and M5 at times A3 and A5, and the camera acquires data M4 at time A4. For the prediction data, as can be seen from fig. 4, the timestamps of the prediction data stored in the ring buffer are B1, B2, B3, B4, B5, and B6, respectively; and as can be seen from fig. 7, the timestamps of the target observation data and the prediction data are not aligned.
Based on this, in the embodiment of the present disclosure, timestamp alignment processing needs to be performed on the target observation data and the prediction data in the ring buffer. As can be seen from fig. 7, the prediction data corresponding to the observation time of data M2 can be obtained by interpolation from the prediction data corresponding to timestamp B2 and/or timestamp B3, so as to achieve timestamp alignment between the target observation data and the prediction data in the ring buffer.
After the timestamp alignment processing is performed on the target observation data and the prediction data in the annular cache region, the prediction data after the timestamp alignment processing can be updated to obtain the fusion motion state of the target. As shown in fig. 7, the prediction data after the timestamp alignment processing may be updated by an Interactive Multiple Model (IMM), so as to obtain the motion state of the target.
As can be seen from the above description, embodiments of the present disclosure propose to use a buffer technique to efficiently use all measurements of each sensor and maintain stability of the latest state of the target. Compared with the prior art, the method provided by the embodiment of the disclosure can effectively fit the motion state of the complex target, so that a better perception fusion result can be obtained.
In another optional implementation manner of the embodiment of the present disclosure, the step of updating the prediction data after the timestamp alignment processing by using an interactive multi-model to obtain the motion state of the target includes the following steps:
(1) selecting one or more object motion models from the interactive multi-models;
(2) and updating the prediction data after the timestamp alignment processing according to the selected motion model to obtain the motion state of each target.
In the embodiment of the present disclosure, in order to simulate the motion states of various targets, more motion models need to be added continuously to simulate the motion patterns of the targets. However, when the number of motion models is large, the optimization speed of the interactive multi-model IMM algorithm decreases significantly, and its optimization effect also degrades. Therefore, to prevent these problems, the interactive multi-model needs to be adjusted so that, during the motion process, it can select in real time the motion models that best match the motion mode of the target, and update the prediction data after the timestamp alignment processing according to the selected motion models to obtain the motion state of the target.
Specifically, in the embodiment of the present disclosure, one or more motion models may be selected from the interactive multi-models through a likelihood model set selection LMS algorithm, and a specific selection process is described as follows:
the Model set adaptive selection step is a likelihood Model set selection method (LMS), and the main processes include Model classification, Model activation, Model work update, and the like. Firstly, initializing and grouping a plurality of motion models to obtain a model all set (total model set), a model active set (active model set) and a model working set (working model set). Wherein the initialization grouping of models is associated with an initialization confidence for each model, wherein the confidence is used to characterize the probability that the predicted motion state of the object conforms to each motion model. The model all sets comprise model active sets, the model active sets comprise model working sets, and in an initial state, the models in the model all sets are the same as the models in the model active sets. Next, the motion state of the object may be predicted by each motion model in the active set of models. In the embodiment of the present disclosure, the motion state of the target may be predicted by the above-described interactive multi-model IMM algorithm, which is not described in detail herein.
Thereafter, the confidence level of each motion model is updated based on the predicted motion state. Next, a final confidence is determined according to the updated confidence and the initialized confidence, and then the motion models included in the model active set (active model set) and the model working set (working model set) are adjusted according to the final confidence, for example, a model with a final confidence greater than a confidence threshold may be determined as a model in the model working set. And finally, updating the prediction data after the timestamp alignment processing through the adjusted model working set to obtain the motion state of each target.
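A non-limiting sketch of the likelihood model set (LMS) selection idea is given below; the blending of the updated and initialization confidences and the threshold value are assumptions for illustration, not the exact selection rule of the disclosure.

```python
def select_working_set(updated_confidences, init_confidences, threshold=0.15):
    """updated_confidences / init_confidences: dicts mapping model name -> confidence.
    Returns the names of the models whose final confidence exceeds the threshold."""
    working = []
    for name, updated in updated_confidences.items():
        # final confidence: simple blend of updated and initialization confidence (illustrative)
        final = 0.5 * updated + 0.5 * init_confidences.get(name, 0.0)
        if final > threshold:
            working.append(name)
    return working

active_set = {"constant_velocity": 0.55, "constant_turn": 0.30, "constant_acceleration": 0.05}
init_set = {"constant_velocity": 0.4, "constant_turn": 0.4, "constant_acceleration": 0.2}
print(select_working_set(active_set, init_set))   # ['constant_velocity', 'constant_turn']
```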
In addition to the aforementioned interactive multi-model IMM algorithm and the improved variable-structure interactive multi-model VSIMM algorithm, in the embodiment of the present disclosure, batch optimization can optionally be configured to further improve the precision and smoothness of the motion state estimation. The main idea is to perform a unified optimization iteration on the prediction data within the target time window. In this case, updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target includes the following steps:
(1) determining a plurality of target prediction data of which the time represented by the timestamp is located before the target observation time in the annular cache region;
(2) updating the determined target prediction data according to the target observation data, and determining the motion state of each target according to the updated target prediction data, specifically comprising:
determining time windows corresponding to the target prediction data; determining observation data positioned in the corresponding time window in the target observation data; determining a loss function value between any two adjacent target prediction data in the plurality of target prediction data according to the determined observation data, wherein the loss function value comprises at least one of the following: a motion loss function value, a measurement loss function value, and a prior loss function value; updating the plurality of target prediction data according to the loss function value; and determining the motion state of each target according to the updated target prediction data until the loss function value meets the iteration stop condition.
Supposing that the target observation time is k time, at this time, a plurality of target prediction data before the k time can be determined in the annular cache region, then, time windows corresponding to the plurality of target prediction data are determined, and the observation data located in the corresponding time windows are determined in the target observation data; then, according to the determined observation data, calculating a loss function value between any two adjacent target prediction data, wherein the loss function comprises: a motion loss function value, a measurement loss function value, and a priori loss function value.
In the disclosed embodiment, the calculation formula of the motion loss function value can be described as $e_{motion} = \big(f(x_{k-1}, 0) - x_k\big)^{T} P \big(f(x_{k-1}, 0) - x_k\big)$, wherein $x_{k-1}$ denotes the target prediction data at time k-1, $f(x_{k-1}, 0)$ denotes the prediction data of the target at time k predicted from the target prediction data at time k-1, and $x_k$ denotes the actual motion state at time k.
The calculation formula of the measurement loss function value can be described as $e_{measure} = \big(y_k - g(x_{k-1}, 0)\big)^{T} R \big(y_k - g(x_{k-1}, 0)\big)$, wherein $y_k$ denotes the target observation data at time k, $g(x_{k-1}, 0)$ is a measurement function for inferring the observation data at time k-1 from the target prediction data at time k-1, and R denotes a constant.
The calculation formula of the prior loss function value can be described in the same quadratic form as $e_{prior} = \big(\hat{x}_{k-n} - x_0\big)^{T} W \big(\hat{x}_{k-n} - x_0\big)$, wherein $\hat{x}_{k-n}$ denotes the predicted target prediction data at time k-n, time k-n is the starting time of the corresponding time window, $x_0$ denotes the target prediction data at time k-n, and W denotes a weighting matrix.
After the plurality of loss function values are obtained, they can be summed to obtain a total loss function value. Optimization iteration can then be performed on the plurality of target prediction data according to the total loss function value; when the total loss function value no longer changes, it is determined that the loss function value meets the iteration stop condition, and the motion state of each target is determined according to the plurality of updated target prediction data.
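As a hedged illustration of the batch optimization, the following sketch accumulates the three loss terms over the target time window and iterates until the total loss no longer changes; the motion model f, the measurement function g, the weights, and the example values are placeholders, and the optimizer step itself is omitted.

```python
import numpy as np

def window_losses(xs, ys, x0, f, g, P=1.0, R=1.0, W=1.0):
    """xs: target prediction data in the time window; ys: observations aligned to the same steps."""
    xs = [np.asarray(x, float) for x in xs]
    e_prior = W * float((xs[0] - x0) @ (xs[0] - x0))                 # prior loss at the window start
    e_motion = sum(P * float((xs[k] - f(xs[k - 1])) @ (xs[k] - f(xs[k - 1])))
                   for k in range(1, len(xs)))                       # motion loss between adjacent states
    e_measure = sum(R * float((ys[k] - g(xs[k])) @ (ys[k] - g(xs[k])))
                    for k in range(len(ys)))                         # measurement loss against observations
    return e_motion, e_measure, e_prior

f = lambda x: x                          # placeholder motion model
g = lambda x: x                          # placeholder measurement function
xs = [np.array([0.0]), np.array([0.9]), np.array([2.0])]
ys = [np.array([0.0]), np.array([1.0]), np.array([2.1])]
prev = np.inf
for _ in range(100):                     # iterate until the total loss value no longer changes
    total = sum(window_losses(xs, ys, np.array([0.0]), f, g))
    if abs(prev - total) < 1e-9:
        break
    prev = total
    # ... update xs here with the optimizer of choice (e.g. Gauss-Newton) ...
print(total)
```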
And fifthly, estimating the existence of the target, wherein the estimation result of the target existence estimation is the existence estimation information in the tracking state information.
Target existence estimation refers to the step of determining, or probabilistically estimating, the evidence of existence of a target, where the evidence of existence is the target observation data corresponding to the target. In the embodiment of the disclosure, the target observation data of each target is processed by an evidence-based existence estimation method based on the Dempster-Shafer theory to obtain the existence estimation information. The existence estimation information may be an existence probability: when the existence probability is greater than a certain value, the evidence that the target exists is considered sufficient, and the related information of the target may be retained in the target pool; otherwise, the target is considered absent, and the related information of the target may be deleted from the target pool.
And sixthly, estimating the object class.
Different sensors differ in their ability to identify the target category, and more accurate category information can be obtained by fusing the category estimation results of multiple sensors. In the disclosed embodiment, the category information of the target may be determined through an evidence theory algorithm based on the Dempster-Shafer theory.
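Both the existence estimation and the class estimation rely on combining evidence from multiple sensors with Dempster-Shafer theory; the following minimal sketch of Dempster's combination rule uses invented mass assignments for a camera and a lidar, purely for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over frozenset hypotheses using Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                        # mass assigned to the empty set
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

CAR, PED, ANY = frozenset({"car"}), frozenset({"pedestrian"}), frozenset({"car", "pedestrian"})
camera = {CAR: 0.7, PED: 0.1, ANY: 0.2}                # illustrative masses from the camera
lidar = {CAR: 0.6, PED: 0.2, ANY: 0.2}                 # illustrative masses from the lidar
print(dempster_combine(camera, lidar))                 # fused belief concentrates on "car"
```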
And seventhly, managing the target pool.
Based on the sub-process "target pool management", in an optional embodiment, the method further includes: after obtaining tracking state information of a target, executing target operation on the information of the target in a target pool according to the tracking state information, wherein the target operation comprises any one of the following operations: data updating operation, data creating operation and data transmission operation.
The target pool contains related information of the targets, and the target pool management can be understood as operations of new creation, deletion, update and the like of the targets in the target pool. Specifically, in the embodiments of the present disclosure, different selectable target pool tracking management methods are provided for different sensor configuration schemes (such as camera-lidar, multiple cameras, camera-millimeter wave radar, etc.). The user may also add other target pool management methods in this step, which is not specifically limited by this disclosure.
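A hypothetical sketch of the target pool operations (creation, update, and deletion driven by the existence estimation) is given below; the class name TargetPool, the threshold, and the field names are assumptions for the example.

```python
class TargetPool:
    """Minimal illustrative target pool keyed by target id."""

    def __init__(self, keep_threshold=0.3):
        self.targets = {}
        self.keep_threshold = keep_threshold

    def apply(self, target_id, tracking_state):
        """Create or update a target from its tracking state information; delete it when evidence is weak."""
        if tracking_state.get("existence", 0.0) < self.keep_threshold:
            self.targets.pop(target_id, None)                 # deletion: evidence of existence too weak
        elif target_id in self.targets:
            self.targets[target_id].update(tracking_state)    # data updating operation
        else:
            self.targets[target_id] = dict(tracking_state)    # data creating operation

pool = TargetPool()
pool.apply(7, {"existence": 0.9, "state": [1.0, 0.5], "category": "car"})
pool.apply(7, {"existence": 0.1})                             # target 7 is removed when evidence drops
print(pool.targets)
```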
The above fusion algorithm flow is described below with reference to fig. 8.
As can be seen from fig. 8, the plurality of sensors includes a camera, a lidar sensor, and a millimeter-wave radar sensor.
As can be seen from fig. 8, the camera, the lidar sensor, and the millimeter wave radar sensor collect observation data to obtain an image frame, a lidar data frame, and a millimeter wave radar data frame (i.e., observation data), respectively. And then, carrying out data preprocessing on the image frames, the laser radar data frames and the millimeter wave radar data frames, and carrying out data association operation on the image frames, the laser radar data frames and the millimeter wave radar data frames according to the predicted data in the annular buffer area after the data preprocessing, thereby determining a target corresponding to each observation data. The sub-flow "time stamp alignment processing" described above needs to be performed before the data association operation is performed. After the data association operation is performed, the sub-flow "target pool management" may be performed. In the sub-process "target pool management", target state update may be performed according to a result of the data association operation, where the target state update refers to: performing timestamp alignment processing on the target observation data and the prediction data in the annular cache region; and estimating the motion state of the target according to the prediction data after the timestamp alignment processing. After the target state is updated, the corresponding information may be modified in the target pool to the updated tracking state information. Thereafter, a state post-processing flow and a deletion flow of the target may be executed. The state post-processing flow refers to executing corresponding operation on the tracking state information after the tracking state information is obtained, and the specific operation type may be set according to the actual needs of the user, which is not specifically limited by the present disclosure.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Example two:
based on the same inventive concept, a data fusion device corresponding to the data fusion method is also provided in the embodiments of the present disclosure, and as the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the data fusion method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 9, a schematic diagram of a data fusion apparatus provided in an embodiment of the present disclosure is shown, where the apparatus includes: a determination unit 91, an acquisition unit 92, and a fusion processing unit 93; wherein the content of the first and second substances,
a determining unit 91, configured to determine a target sensor and a fusion algorithm process indicated by the target configuration file;
an obtaining unit 92, configured to obtain observation data acquired by the target sensor;
and the fusion processing unit 93 is configured to perform data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target.
As can be seen from the above description, in the embodiment of the present disclosure, by setting the target configuration file, in a manner of setting the fusion algorithm flow of the observation data by using the target configuration file, the fusion schemes of different sensors installed in different application scenes can be freely configured, and each flow in the fusion algorithm flow can be disassembled by using the target configuration file, so that the technical problem that the existing data sensing fusion system in the prior art is difficult to freely adapt to various sensor configuration schemes is solved.
In one possible embodiment, the fusion processing unit 93 is configured to: under the condition that the tracking state information comprises a motion state, performing data association on the observation data and at least one target according to predicted data in the annular cache region to obtain target observation data associated with each target; the predicted data in the annular cache area represent the motion state of the target predicted according to the observation data acquired at each observation moment in the target time window; performing timestamp alignment processing on the target observation data and the prediction data in the annular cache region; and updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target.
In a possible implementation, the fusion processing unit 93 is further configured to: calculating the similarity between the predicted data and the observed data in the annular cache region according to a data association matching algorithm indicated in the target configuration file, determining a target associated with the observed data according to the similarity, and determining the observed data as target observed data of the associated target.
In a possible implementation, the fusion processing unit 93 is further configured to: determining a matching relationship between a target observation time of the target observation data and the target time window; and under the condition that the target observation time is determined to be in the target time window according to the matching relation, or the target observation time is greater than the maximum time stamp of the predicted data in the target time window, interpolating to obtain the predicted data corresponding to the target observation data and storing the predicted data in the annular cache region, and taking the target observation time as the time stamp of the predicted data obtained by interpolation.
In a possible implementation, the fusion processing unit 93 is further configured to: under the condition that the annular cache region does not contain the prediction data corresponding to the target observation data, predicting the prediction data corresponding to the target observation data according to the prediction data of the target moment in the annular cache region to obtain target prediction data; the target time is a time before the target observation time of the target observation data in the target time window of the annular cache region and/or a time after the target observation time; and inserting the target prediction data into the storage position of the annular cache region corresponding to the target observation time.
In a possible implementation, the fusion processing unit 93 is further configured to: predicting the prediction data corresponding to the target observation data through each motion model in the interactive multi-model; and fusing the predicted data corresponding to the target observation data predicted by each motion model to obtain the target predicted data.
In a possible implementation, the fusion processing unit 93 is further configured to: predicting model probabilities according to the prediction data of the target time through each motion model; the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time; acquiring the fusion motion state of each target at the target moment determined according to each motion model; and determining the prediction data which is predicted by each motion model and corresponds to the target observation data based on the model probability and the fusion motion state of each target at the target moment determined according to each motion model.
In a possible implementation, the fusion processing unit 93 is further configured to: selecting at least one motion model from the interactive multi-models; and updating the prediction data after the timestamp alignment processing according to the selected motion model and the target observation data to obtain the motion state of each target.
In a possible implementation, the fusion processing unit 93 is further configured to: and updating the prediction data after the timestamp alignment processing through a motion model in the interactive multi-model and the target observation data to obtain the motion state of each target.
In a possible implementation, the fusion processing unit 93 is further configured to: determining the confidence of each motion model according to the target observation data, wherein the confidence represents the matching degree between the motion state of each target predicted by the motion model at the target observation time and the actual motion state of each target; updating the model probability of each motion model according to the confidence coefficient, wherein the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time; and determining the motion state and covariance matrix of each target according to the predicted data of the target observation data through the updated model probability and each motion model, wherein the covariance matrix of one target is used for representing the correlation degree between the predicted motion state of the target and the actual motion state of the target.
In a possible implementation, the fusion processing unit 93 is further configured to: determining a plurality of target prediction data in the ring cache, wherein the time represented by the timestamp is located before the target observation time; and updating the determined target prediction data according to the target observation data, and determining the motion state of each target according to the updated target prediction data.
In a possible implementation, the fusion processing unit 93 is further configured to: determining time windows corresponding to the target prediction data; determining observation data positioned in the corresponding time window in the target observation data; determining a loss function value between any two adjacent target prediction data in the plurality of target prediction data according to the determined observation data, wherein the loss function value comprises at least one of the following: a motion loss function value, a measurement loss function value, and a prior loss function value; updating the plurality of target prediction data according to the loss function value; and determining the motion state of each target according to the updated target prediction data until the loss function value meets the iteration stop condition.
In a possible embodiment, the apparatus is further configured to: after obtaining tracking state information of a target, executing target operation on the information of the target in a target pool according to the tracking state information, wherein the target operation comprises any one of the following operations: data updating operation, data creating operation and data transmission operation.
In a possible embodiment, the apparatus is further configured to: preprocessing the observation data to obtain the preprocessed observation data; a fusion processing unit further configured to: and performing data fusion on the preprocessed observation data according to the fusion algorithm flow to obtain the tracking state information of the target.
In a possible implementation, the fusion algorithm process comprises a plurality of sub-processes, and the target configuration file is also used for indicating the execution sequence of the sub-processes and the process algorithm corresponding to each sub-process; the plurality of sub-processes includes at least one of the following sub-processes: a sub-process for determining the motion state of the target, a sub-process for determining the existence estimation information of the target, and a sub-process for determining the type information of the target. The fusion processing unit 93 is further configured to perform data fusion on the observation data acquired by the target sensor according to the execution sequence of the plurality of sub-processes in the target configuration file and the process algorithm corresponding to each sub-process to obtain tracking state information of the target, wherein the tracking state information of the target comprises at least one of a motion state, existence estimation information, and type information.
In one possible embodiment, the apparatus is further configured to: determining a sensor matched with an environment to be sensed as a target sensor; determining a configuration file indicating the target sensor; and selecting, from the determined configuration files, a configuration file indicating a fusion algorithm process matched with the environment to be perceived as the target configuration file.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Example three:
corresponding to the data fusion method in fig. 1, an embodiment of the present disclosure further provides an electronic device 100, as shown in fig. 10, which is a schematic structural diagram of the electronic device 100 provided in the embodiment of the present disclosure, and includes:
a processor 11, a memory 12, and a bus 13; the memory 12 is used for storing execution instructions and includes a memory 121 and an external memory 122. The memory 121, also referred to as an internal memory, is configured to temporarily store operation data in the processor 11 and data exchanged with the external memory 122 such as a hard disk. The processor 11 exchanges data with the external memory 122 through the memory 121. When the electronic device 100 operates, the processor 11 communicates with the memory 12 through the bus 13, so that the processor 11 executes the following instructions:
determining a target sensor and a fusion algorithm process indicated by a target configuration file; acquiring observation data acquired by the target sensor; and performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain the tracking state information of the target.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the data fusion method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the data fusion method in the foregoing method embodiments, which may be referred to specifically in the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (19)

1. A method of data fusion, comprising:
determining a target sensor and a fusion algorithm process indicated by a target configuration file;
acquiring observation data acquired by the target sensor;
and performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain the tracking state information of the target.
2. The method of claim 1, wherein the tracking status information includes a motion status;
performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target, including:
performing data association on the observation data and at least one target according to the prediction data in the annular cache region to obtain target observation data associated with each target; the predicted data in the annular cache area represent the motion state of the target predicted according to the observation data acquired at each observation moment in the target time window;
performing timestamp alignment processing on the target observation data and the prediction data in the annular cache region;
and updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target.
3. The method of claim 2, wherein the performing data association between the observation data and at least one target according to the predicted data in the ring buffer to obtain target observation data associated with each target comprises:
calculating the similarity between the predicted data and the observed data in the annular cache region according to a data association matching algorithm indicated in the target configuration file;
and determining a target associated with the observed data according to the similarity, and determining the observed data as target observed data of the associated target.
4. The method of claim 2 or 3, wherein the time-stamp aligning the target observation data and the prediction data in the ring buffer comprises:
determining a matching relationship between a target observation time of the target observation data and the target time window;
and under the condition that the target observation time is determined to be in the target time window according to the matching relation, or the target observation time is greater than the maximum time stamp of the predicted data in the target time window, interpolating to obtain the predicted data corresponding to the target observation data and storing the predicted data in the annular cache region, and taking the target observation time as the time stamp of the predicted data obtained by interpolation.
5. The method of any of claims 2 to 4, wherein the time stamp aligning the target observation data and the prediction data in the ring buffer further comprises:
under the condition that the annular cache region does not contain the prediction data corresponding to the target observation data, predicting the prediction data corresponding to the target observation data according to the prediction data of the target moment in the annular cache region to obtain target prediction data; the target time is a time before the target observation time of the target observation data in the target time window of the annular cache region and/or a time after the target observation time;
and inserting the target prediction data into the storage position of the annular cache region corresponding to the target observation time.
6. The method according to claim 5, wherein the predicting data corresponding to the target observation data according to the prediction data of the target time in the ring buffer to obtain target prediction data comprises:
predicting the prediction data corresponding to the target observation data through each motion model in the interactive multi-model;
and fusing the predicted data corresponding to the target observation data predicted by each motion model to obtain the target predicted data.
7. The method of claim 6, wherein predicting the predicted data corresponding to the target observation data through each motion model in the interactive multi-model comprises:
predicting model probabilities according to the prediction data of the target time through each motion model; the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time;
acquiring the fusion motion state of each target at the target moment determined according to each motion model;
and determining the prediction data which is predicted by each motion model and corresponds to the target observation data based on the model probability and the fusion motion state of each target at the target moment determined according to each motion model.
8. The method according to claim 2, wherein the updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target comprises:
selecting at least one motion model from the interactive multi-models;
and updating the prediction data after the timestamp alignment processing according to the selected motion model and the target observation data to obtain the motion state of each target.
9. The method according to claim 2, wherein the updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target comprises:
and updating the prediction data after the timestamp alignment processing through a motion model in the interactive multi-model and the target observation data to obtain the motion state of each target.
10. The method according to claim 9, wherein the updating the prediction data after the timestamp alignment processing through a motion model in an interactive multi-model and the target observation data to obtain the motion state of each target comprises:
determining the confidence of each motion model according to the target observation data, wherein the confidence represents the matching degree between the motion state of each target predicted by the motion model at the target observation time and the actual motion state of each target;
updating the model probability of each motion model according to the confidence coefficient, wherein the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time;
and determining the motion state and covariance matrix of each target according to the predicted data of the target observation data through the updated model probability and each motion model, wherein the covariance matrix of one target is used for representing the correlation degree between the predicted motion state of the target and the actual motion state of the target.
11. The method according to claim 2, wherein the updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target comprises:
determining a plurality of target prediction data in the ring cache, wherein the time represented by the timestamp is located before the target observation time;
and updating the determined target prediction data according to the target observation data, and determining the motion state of each target according to the updated target prediction data.
12. The method of claim 11, wherein the updating the determined target prediction data according to the target observation data and determining the motion state of each target according to the updated target prediction data comprises:
determining a time window corresponding to each piece of target prediction data;
determining, in the target observation data, the observation data located within the corresponding time window;
determining a loss function value between any two adjacent pieces of target prediction data among the plurality of pieces of target prediction data according to the determined observation data, wherein the loss function value comprises at least one of the following: a motion loss function value, a measurement loss function value, and a prior loss function value;
and iteratively updating the plurality of pieces of target prediction data according to the loss function values until the loss function values meet an iteration stop condition, and determining the motion state of each target according to the updated target prediction data.
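Claim 12 amounts to a small sliding-window optimization: adjacent predictions are tied by a motion loss, each prediction is tied to the observations inside its time window by a measurement loss, and a prior loss keeps the result close to the original predictions. The sketch below is one way such a refinement could be written, assuming linear motion and measurement matrices F and H and plain gradient descent; none of the names, matrices, or the stop criterion are specified by the patent.

```python
import numpy as np

def refine_predictions(preds, window_obs, F, H, lr=0.01, tol=1e-6, max_iters=200):
    """Illustrative windowed refinement in the spirit of claim 12.

    preds      : (K, D) target prediction data, ordered by timestamp
    window_obs : list of length K; window_obs[k] holds the observation
                 vectors falling inside the time window of prediction k
    F, H       : assumed linear motion and measurement matrices
    """
    x = preds.astype(float)
    prior = x.copy()
    prev_loss = np.inf
    for _ in range(max_iters):
        grad = np.zeros_like(x)
        loss = 0.0
        # Motion loss between adjacent predictions: ||x[k+1] - F x[k]||^2
        for k in range(len(x) - 1):
            r = x[k + 1] - F @ x[k]
            loss += r @ r
            grad[k + 1] += 2.0 * r
            grad[k] += -2.0 * (F.T @ r)
        # Measurement loss against observations in each window: ||z - H x[k]||^2
        for k, zs in enumerate(window_obs):
            for z in zs:
                r = z - H @ x[k]
                loss += r @ r
                grad[k] += -2.0 * (H.T @ r)
        # Prior loss keeping the refinement close to the original predictions.
        r = x - prior
        loss += np.sum(r * r)
        grad += 2.0 * r
        x -= lr * grad
        if abs(prev_loss - loss) < tol:   # iteration stop condition
            break
        prev_loss = loss
    return x  # refined prediction data; motion states are read from here
```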
13. The method according to any one of claims 1 to 12, further comprising:
after obtaining the tracking state information of a target, executing a target operation on the information of the target in a target pool according to the tracking state information, wherein the target operation comprises any one of the following: a data updating operation, a data creating operation, and a data transmission operation.
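Claim 13 maintains a target pool on which one of three operations is executed once tracking state information is available. The following is only one possible software reading of that pool; the class, the keying by target identifier, and the dispatch rule are assumptions made for the example, not the patent's implementation.

```python
class TargetPool:
    """Illustrative target pool supporting the operations named in claim 13."""

    def __init__(self):
        self._targets = {}  # target_id -> tracking state information

    def update(self, target_id, tracking_state):
        """Data updating operation for a target already in the pool."""
        self._targets[target_id].update(tracking_state)

    def create(self, target_id, tracking_state):
        """Data creating operation for a newly tracked target."""
        self._targets[target_id] = dict(tracking_state)

    def transmit(self, target_id, publish):
        """Data transmission operation, e.g. forwarding the state downstream."""
        publish(target_id, self._targets[target_id])

    def apply(self, target_id, tracking_state, publish=None):
        """Choose one of the three operations based on the pool contents."""
        if target_id in self._targets:
            self.update(target_id, tracking_state)
        else:
            self.create(target_id, tracking_state)
        if publish is not None:
            self.transmit(target_id, publish)
```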
14. The method according to any one of claims 1 to 13,
the method further comprises the following steps: preprocessing the observation data to obtain the preprocessed observation data;
the performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain the tracking state information of the target comprises: performing data fusion on the preprocessed observation data according to the fusion algorithm flow to obtain the tracking state information of the target.
15. The method according to any one of claims 1 to 14, wherein the fusion algorithm flow comprises a plurality of sub-processes, and the target configuration file is further used for indicating an execution order of the plurality of sub-processes and a flow algorithm corresponding to each sub-process; the plurality of sub-processes comprise at least one of the following: a sub-process for determining the motion state of a target, a sub-process for determining existence estimation information of a target, and a sub-process for determining type information of a target;
the data fusion processing of the observation data acquired by the target sensor according to the fusion algorithm flow to obtain the tracking state information of the target comprises the following steps:
and performing data fusion on the observation data acquired by the target sensor according to the execution order of the plurality of sub-processes in the target configuration file and the flow algorithm corresponding to each sub-process to obtain the tracking state information of the target, wherein the tracking state information of the target comprises at least one of a motion state, existence estimation information, and type information.
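In claim 15 the target configuration file dictates which sub-processes run, in what order, and with which algorithm. As an illustration only (the patent does not fix a file format or algorithm names), a small in-memory configuration and dispatcher might look like the sketch below; every key, algorithm name, and placeholder function is a hypothetical example.

```python
# Hypothetical configuration: target sensors, sub-process order, per-sub-process algorithm.
EXAMPLE_CONFIG = {
    "sensors": ["lidar_front", "camera_front"],
    "sub_processes": [
        {"name": "motion_state", "algorithm": "imm_kalman"},
        {"name": "existence",    "algorithm": "bayes_existence"},
        {"name": "type_info",    "algorithm": "majority_vote"},
    ],
}

# Placeholder algorithm registry; real implementations would go here.
ALGORITHMS = {
    "imm_kalman":      lambda tracks, obs: tracks,
    "bayes_existence": lambda tracks, obs: tracks,
    "majority_vote":   lambda tracks, obs: tracks,
}

def run_fusion(config, observations, tracks):
    """Execute the configured sub-processes in the configured order."""
    for step in config["sub_processes"]:
        algorithm = ALGORITHMS[step["algorithm"]]
        tracks = algorithm(tracks, observations)
    return tracks  # tracking state info: motion state, existence estimate, type
```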
16. The method according to any one of claims 1 to 15, wherein the target configuration file is determined according to the following steps:
determining a sensor matched with the environment to be perceived as the target sensor;
determining configuration files indicating the target sensor;
and selecting, from the determined configuration files, a configuration file indicating a fusion algorithm flow matched with the environment to be perceived as the target configuration file.
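Claim 16 selects the target configuration file in two steps: first matching sensors to the environment to be perceived, then matching the fusion algorithm flow. The helper below merely illustrates that two-step narrowing; the matching criteria and field names are invented for the example.

```python
def select_target_config(configs, environment):
    """Illustrative two-step selection of a target configuration file.

    configs     : iterable of dicts, each with "sensors" and "scene" fields
    environment : dict describing the environment to be perceived, e.g.
                  {"sensors_available": ["lidar_front"], "scene": "highway"}
    """
    # Step 1: keep configurations whose target sensors are available here.
    candidates = [
        c for c in configs
        if set(c["sensors"]) <= set(environment["sensors_available"])
    ]
    # Step 2: among those, prefer the one whose fusion flow matches the scene.
    for c in candidates:
        if c.get("scene") == environment.get("scene"):
            return c
    return candidates[0] if candidates else None
```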
17. A data fusion apparatus, comprising:
a determining unit, configured to determine a target sensor and a fusion algorithm flow indicated by a target configuration file;
an acquisition unit, configured to acquire observation data acquired by the target sensor;
and a fusion processing unit, configured to perform data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of a target.
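Claim 17 decomposes the apparatus into a determining unit, an acquisition unit, and a fusion processing unit. Mapping the three units to three methods of one class, as sketched below, is just one possible software reading of that structure; the constructor arguments and method names are assumptions, not the patent's implementation.

```python
class DataFusionApparatus:
    """Illustrative decomposition mirroring the units of claim 17."""

    def __init__(self, config_loader, sensor_bus, fusion_pipeline):
        self._config_loader = config_loader    # parses configuration files
        self._sensor_bus = sensor_bus          # provides sensor readings
        self._pipeline = fusion_pipeline       # runs the fusion algorithm flow

    def determine(self, config_path):
        """Determining unit: read the target sensors and fusion algorithm flow."""
        return self._config_loader(config_path)

    def acquire(self, target_sensors):
        """Acquisition unit: collect observation data from the target sensors."""
        return {s: self._sensor_bus.read(s) for s in target_sensors}

    def fuse(self, flow, observations):
        """Fusion processing unit: run the flow to get tracking state information."""
        return self._pipeline(flow, observations)
```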
18. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device operates, the processor communicates with the memory via the bus; and the machine-readable instructions, when executed by the processor, perform the steps of the data fusion method according to any one of claims 1 to 16.
19. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the data fusion method according to any one of claims 1 to 16.
CN202011628706.4A 2020-12-31 2020-12-31 Data fusion method and device, electronic equipment and storage medium Pending CN112733907A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011628706.4A CN112733907A (en) 2020-12-31 2020-12-31 Data fusion method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011628706.4A CN112733907A (en) 2020-12-31 2020-12-31 Data fusion method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112733907A (en) 2021-04-30

Family

ID=75608154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011628706.4A Pending CN112733907A (en) 2020-12-31 2020-12-31 Data fusion method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112733907A (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105759295A (en) * 2014-09-02 2016-07-13 现代自动车株式会社 Apparatus And Method For Recognizing Driving Environment For Autonomous Vehicle
CN105424043A (en) * 2015-11-02 2016-03-23 北京航空航天大学 Motion state estimation method based on maneuver judgment
CN105719312A (en) * 2016-01-19 2016-06-29 深圳大学 Multi-target tracking method and tracking system based on sequential Bayes filtering
CN106054170A (en) * 2016-05-19 2016-10-26 哈尔滨工业大学 Maneuvering target tracking method under constraint conditions
CN109789842A (en) * 2016-10-03 2019-05-21 日立汽车系统株式会社 On-board processing device
CN109086788A (en) * 2017-06-14 2018-12-25 通用汽车环球科技运作有限责任公司 The equipment of the multi-pattern Fusion processing of data for a variety of different-formats from isomery device sensing, method and system
CN107462882A (en) * 2017-09-08 2017-12-12 深圳大学 Multiple maneuvering target tracking method and system suitable for flicker noise
CN111417869A (en) * 2017-11-20 2020-07-14 三菱电机株式会社 Obstacle recognition device and obstacle recognition method
CN108226920A (en) * 2018-01-09 2018-06-29 电子科技大学 Maneuvering target tracking system and method for processing Doppler measurements based on predicted values
CN111348046A (en) * 2018-12-24 2020-06-30 长城汽车股份有限公司 Target data fusion method, system and machine-readable storage medium
US20200363816A1 (en) * 2019-05-16 2020-11-19 WeRide Corp. System and method for controlling autonomous vehicles
CN110543850A (en) * 2019-08-30 2019-12-06 上海商汤临港智能科技有限公司 Target detection method and device and neural network training method and device
CN110850403A (en) * 2019-11-18 2020-02-28 中国船舶重工集团公司第七0七研究所 Water surface target perception and identification method for intelligent ships based on multi-sensor decision-level fusion
CN111310840A (en) * 2020-02-24 2020-06-19 北京百度网讯科技有限公司 Data fusion processing method, device, equipment and storage medium
CN111860589A (en) * 2020-06-12 2020-10-30 中山大学 Multi-sensor multi-target cooperative detection information fusion method and system
CN111860604A (en) * 2020-06-24 2020-10-30 国汽(北京)智能网联汽车研究院有限公司 Data fusion method, system and computer storage medium
CN111721238A (en) * 2020-07-22 2020-09-29 上海图漾信息科技有限公司 Depth data measuring apparatus and target object data collecting method
CN112033429A (en) * 2020-09-14 2020-12-04 吉林大学 Target-level multi-sensor fusion method for intelligent automobile

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GIANCARMINE FASANO ET AL.: "Radar/electro-optical data fusion for non-cooperative UAS sense and avoid", Aerospace Science and Technology, 28 August 2015 (2015-08-28), pages 436-450 *
JIANG QINHONG: "High-Precision Monocular Image Depth Estimation Based on Decoupling", China Master's Theses Full-Text Database, 15 January 2019 (2019-01-15), pages 1-60 *
CHEN YAN: "Time Alignment in Multi-Platform Multi-Sensor Data Fusion", Fire Control & Command Control, vol. 32, no. 11, 30 November 2007 (2007-11-30), pages 71-73 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113965879A (en) * 2021-05-13 2022-01-21 深圳市速腾聚创科技有限公司 Perception information fusion method of multiple sensors and related equipment
CN113965879B (en) * 2021-05-13 2024-02-06 深圳市速腾聚创科技有限公司 Multi-sensor perception information fusion method and related equipment
CN113612567A (en) * 2021-10-11 2021-11-05 树根互联股份有限公司 Alignment method and device for data collected by multiple sensors of equipment and electronic equipment
CN113612567B (en) * 2021-10-11 2021-12-14 树根互联股份有限公司 Alignment method and device for data collected by multiple sensors of equipment and electronic equipment
CN113965289A (en) * 2021-10-29 2022-01-21 际络科技(上海)有限公司 Time synchronization method and device based on multi-sensor data
CN113965289B (en) * 2021-10-29 2024-03-12 际络科技(上海)有限公司 Time synchronization method and device based on multi-sensor data
CN114241749A (en) * 2021-11-26 2022-03-25 深圳市戴升智能科技有限公司 Video beacon data association method and system based on time sequence
CN114241749B (en) * 2021-11-26 2022-12-13 深圳市戴升智能科技有限公司 Video beacon data association method and system based on time sequence

Similar Documents

Publication Publication Date Title
CN112733907A (en) Data fusion method and device, electronic equipment and storage medium
CN110335316B (en) Depth information-based pose determination method, device, medium and electronic equipment
EP2858008B1 (en) Target detecting method and system
CN107480704B (en) Real-time visual target tracking method with shielding perception mechanism
Agamennoni et al. An outlier-robust Kalman filter
JP4849464B2 (en) Computerized method of tracking objects in a frame sequence
CN110782483A (en) Multi-view multi-target tracking method and system based on distributed camera network
KR20210005621A (en) Method and system for use in coloring point clouds
CN108198172B (en) Image significance detection method and device
CN112991389A (en) Target tracking method and device and mobile robot
CN108053424A (en) Method for tracking target, device, electronic equipment and storage medium
CN111402303A (en) Target tracking architecture based on KFSTRCF
CN112712549A (en) Data processing method, data processing device, electronic equipment and storage medium
JP2004220292A (en) Object tracking method and device, program for object tracking method, and recording medium with its program recorded
US20240020870A1 (en) Method, electronic device and medium for target state estimation
CN107665495B (en) Object tracking method and object tracking device
CN110580483A (en) indoor and outdoor user distinguishing method and device
CN115144828B (en) Automatic online calibration method for intelligent automobile multi-sensor space-time fusion
WO2020149044A1 (en) Parameter selection device, parameter selection method, and parameter selection program
CN112927258A (en) Target tracking method and device
CN107154052B (en) Object state estimation method and device
JP6866621B2 (en) Moving object state quantity estimation device and program
CN115655291A (en) Laser SLAM closed-loop mapping method and device, mobile robot, equipment and medium
CN114299115A (en) Method and device for multi-target tracking, storage medium and electronic equipment
Juang et al. Comparative performance evaluation of GM-PHD filter in clutter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination