CN112733907B - Data fusion method, device, electronic equipment and storage medium


Info

Publication number
CN112733907B
Authority
CN
China
Prior art keywords
target
data
predicted
observation
time
Prior art date
Legal status
Active
Application number
CN202011628706.4A
Other languages
Chinese (zh)
Other versions
CN112733907A (en)
Inventor
张世权
马全盟
罗铨
蒋沁宏
石建萍
Current Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202011628706.4A
Publication of CN112733907A
Application granted
Publication of CN112733907B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/251 Fusion techniques of input or preprocessed data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The present disclosure provides a data fusion method, apparatus, electronic device, and computer-readable storage medium, wherein the method includes: determining a target sensor and a fusion algorithm flow indicated by a target configuration file; obtaining observation data acquired by the target sensor; and performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of a target. In the embodiments of the present disclosure, the target sensor and the fusion algorithm flow for its observation data are set through the target configuration file, so fusion schemes for the observation data of different sensors installed in different application scenarios can be configured freely, which alleviates the technical problem that existing data perception fusion systems are difficult to adapt freely to various sensor configuration schemes.

Description

Data fusion method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of data processing, and in particular, to a data fusion method, a data fusion device, an electronic device, and a computer readable storage medium.
Background
Existing sensor data perception fusion methods are mainly divided into perception fusion methods using synchronous sensor systems and perception fusion methods using asynchronous sensor systems, with asynchronous sensor systems being the more common choice at present. Because the types of asynchronous sensors differ across application scenarios, the data fusion algorithm of a sensor data perception fusion method is tied to its application scenario. For example, in the field of automatic driving, sensor configurations are diverse. Sensors include laser radar (Lidar), cameras (Camera), millimeter wave radar (Radar), ultrasonic sensors (Ultrasonic), and the like. According to usage requirements, the sensors are arranged differently in different application scenarios, and the multiple sensors are heterogeneous in structure, misaligned in space (i.e., their spatial positions and orientations differ), and asynchronous in time (i.e., the timestamps of their sensing results are not aligned). The perception fusion system is an important module that integrates the perception results of the various sensors in order to reconstruct and estimate the states of multiple targets in the real world.
Actual automatic driving application scenarios place strong demands on the perception fusion system: it is required to combine the respective advantages and perception areas of each sensor so as to provide various state information of a target stably, accurately, and in real time, including but not limited to position, orientation, category, bounding box, speed, acceleration, and existence.
Disclosure of Invention
The embodiment of the disclosure at least provides a data fusion method, a data fusion device, electronic equipment and a computer readable storage medium.
In a first aspect, an embodiment of the present disclosure provides a data fusion method, including: determining a target sensor and a fusion algorithm flow indicated by a target configuration file; obtaining observation data acquired by the target sensor; and carrying out data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target.
As can be seen from the foregoing description, in the embodiments of the present disclosure, the fusion algorithm flow for the observation data is set through the target configuration file, so fusion schemes for different sensors installed in different application scenarios can be configured freely, alleviating the technical problem that existing data perception fusion systems are difficult to adapt freely to various sensor configuration schemes.
In an alternative embodiment, the tracking state information includes a motion state, and performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target includes the following steps: according to the predicted data in the ring buffer, performing data association between the observation data and at least one target to obtain target observation data associated with each target, where the predicted data in the ring buffer represents the motion state of the target predicted from the observation data acquired at each observation time within a target time window; performing timestamp alignment processing on the target observation data and the predicted data in the ring buffer; and updating the predicted data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target.
In the embodiments of the present disclosure, the ring buffer stores the predicted data of the target predicted from the observation data acquired at each observation time within the target time window, and the motion state of the target is determined from the predicted data in the ring buffer. By adopting the ring buffer technique, motion states at historical times can be retained, all data from each sensor can be used effectively, and the stability of the target's motion state can be ensured during data perception fusion.
In an alternative embodiment, performing data association between the observation data and at least one target according to the predicted data in the ring buffer to obtain target observation data associated with each target includes: calculating the similarity between the predicted data in the ring buffer and the observation data according to the data association matching algorithm indicated in the target configuration file, determining the target associated with the observation data according to the similarity, and determining the observation data as the target observation data of the associated target.
As can be seen from the above description, in the embodiments of the present disclosure, by configuring the data association matching mode in the target configuration file, the data association matching algorithm can be configured arbitrarily, so that it can meet the data fusion requirements of different application scenarios; in this way, the algorithm of the data-association sub-flow within the fusion algorithm flow is decoupled through the target configuration file.
In an alternative embodiment, performing timestamp alignment processing on the target observation data and the predicted data in the ring buffer includes: determining a matching relationship between the target observation time of the target observation data and the target time window; and, when it is determined from the matching relationship that the target observation time is within the target time window, or that the target observation time is greater than the maximum timestamp of the predicted data within the target time window, interpolating to obtain the predicted data corresponding to the target observation data, storing it in the ring buffer, and taking the target observation time as the timestamp of the interpolated predicted data.
In the embodiment of the disclosure, the timestamp alignment processing is performed on the target observation data and the prediction data in the ring buffer in the above-described manner, so that the stability of the latest motion state of the target can be ensured.
In an optional embodiment, performing timestamp alignment processing on the target observation data and the predicted data in the ring buffer further includes: when the ring buffer does not contain predicted data corresponding to the target observation data, predicting the data corresponding to the target observation data from the predicted data at a target time in the ring buffer to obtain target predicted data, where the target time is a time before the target observation time of the target observation data and/or a time after the target observation time within the target time window of the ring buffer; and inserting the target predicted data at the storage position of the ring buffer corresponding to the target observation time.
As can be seen from the foregoing description, in the embodiment of the present disclosure, by performing the timestamp alignment processing on the ring buffer queue and the target observation data, interpolation calculation may be performed on the predicted data of the target corresponding to the target observation data within the target time window, so as to accurately align the timestamp of the ring buffer queue and the timestamp of the target observation data. After the time stamps are aligned, when the predicted data after the time stamp alignment is updated, a more accurate fusion motion state can be obtained.
In an optional implementation manner, the predicting the prediction data corresponding to the target observation data according to the prediction data of the target time in the ring buffer to obtain target prediction data includes: predicting prediction data corresponding to the target observation data through each motion model in the interactive multi-model; and fusing the predicted data corresponding to the target observation data predicted by each motion model to obtain the target predicted data.
In the embodiments of the present disclosure, the target predicted data is predicted through the interactive multi-model, so the motion state of a complex target can be fitted effectively and a better perception fusion result can be obtained.
In an optional implementation manner, the predicting, by each motion model in the interactive multi-model, the prediction data corresponding to the target observation data includes: predicting model probabilities according to the predicted data of the target time through each motion model; the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time; acquiring a fusion motion state of each target at the target moment determined according to each motion model; and determining prediction data which is predicted by each motion model and corresponds to the target observation data based on the model probability and the fusion motion state of each target at the target moment determined according to each motion model.
In the embodiment of the disclosure, the prediction data of each motion model at the target observation time is determined by the model probability of each motion model in the interactive multi-model and the fusion state data of the target determined by each motion model at the target time, so that the motion state of a complex target can be effectively fitted, and a better perception fusion result is obtained.
In an optional implementation manner, the updating the predicted data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target includes: selecting at least one motion model from the interactive multi-model; and updating the predicted data after the time stamp alignment processing according to the selected motion model and the target observation data to obtain the motion state of each target.
In the embodiments of the present disclosure, one or more target motion models can be selected from the interactive multi-model, so that the motion models that best match the target's motion are selected in real time during the motion process, which improves the optimization speed and optimization effect of the interactive multi-model method.
In an optional implementation manner, the updating the predicted data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target includes: and updating the predicted data after the time stamp alignment processing through a motion model in the interactive multi-model and the target observation data to obtain the motion state of each target.
As can be seen from the foregoing description, in the embodiment of the present disclosure, the manner of updating the prediction data after the timestamp alignment processing by using the interactive multi-model to obtain the motion state of the target may effectively fit the motion state of the complex target, so as to obtain a better sensing fusion result.
In an optional implementation manner, the updating the predicted data after the timestamp alignment processing to obtain the motion state of each target through the motion model in the interactive multi-model and the target observation data includes: determining the confidence coefficient of each motion model according to the target observation data, wherein the confidence coefficient represents the matching degree between the motion state of each target predicted by the motion model at the target observation time and the actual motion state of each target; updating the model probability of each motion model according to the confidence, wherein the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation moment; and determining the motion state and covariance matrix of each target according to the predicted data predicted by the target observation data through the updated model probability and each motion model, wherein the covariance matrix of one target is used for representing the association degree between the predicted motion state of the target and the actual motion state of the target.
In the embodiments of the present disclosure, the confidence of each motion model is determined through the interactive multi-model, and the model probability of each motion model is updated according to the confidence, so that the motion state and covariance matrix of the target are determined from the updated model probabilities and the predicted data predicted by each motion model against the target observation data; a more accurate perception fusion result can thus be obtained, effectively fitting the motion state of a complex target.
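To make this concrete, the following is a minimal sketch of one such update step, assuming linear-Gaussian motion and observation models: each motion model performs a Kalman update, its confidence is taken as the Gaussian likelihood of the target observation, the model probabilities are re-weighted by those confidences, and the fused motion state and covariance matrix are obtained by moment matching. All function and variable names here are illustrative and are not the disclosure's own.

    # A sketch of the IMM-style measurement update described above; every name
    # below is an assumption for exposition, not the patent's implementation.
    import numpy as np

    def imm_update(x_preds, P_preds, z, H, R, mu):
        """x_preds/P_preds: per-model predicted state/covariance at the target
        observation time; z: target observation; mu: prior model probabilities."""
        n = len(x_preds)
        likelihoods = np.zeros(n)
        x_upds, P_upds = [], []
        for i in range(n):
            y = z - H @ x_preds[i]                    # innovation
            S = H @ P_preds[i] @ H.T + R              # innovation covariance
            K = P_preds[i] @ H.T @ np.linalg.inv(S)   # Kalman gain
            x_upds.append(x_preds[i] + K @ y)
            P_upds.append((np.eye(len(x_preds[i])) - K @ H) @ P_preds[i])
            # confidence of model i: likelihood of the observation under model i
            likelihoods[i] = np.exp(-0.5 * y @ np.linalg.solve(S, y)) \
                / np.sqrt(np.linalg.det(2.0 * np.pi * S))
        mu_new = likelihoods * mu
        mu_new /= mu_new.sum()                        # updated model probabilities
        # fused motion state and covariance matrix (moment matching)
        x = sum(m * xi for m, xi in zip(mu_new, x_upds))
        P = sum(m * (Pi + np.outer(xi - x, xi - x))
                for m, xi, Pi in zip(mu_new, x_upds, P_upds))
        return x, P, mu_new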
In an optional implementation manner, the updating the predicted data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target includes: determining a plurality of target prediction data of which the time represented by the timestamp is positioned before the target observation time in the annular buffer; and updating the determined target prediction data according to the target observation data, and determining the motion state of each target according to the updated target prediction data.
In the embodiment of the disclosure, the accuracy and smoothness of the predicted motion state can be further improved by updating the plurality of target prediction data according to the target observation data, so that a more accurate motion state is obtained.
In an optional implementation manner, the updating the determined target prediction data according to the target observation data, and determining the motion state of each target according to the updated target prediction data, includes: determining the time window corresponding to the plurality of target prediction data; determining the observation data located within that time window among the target observation data; determining a loss function value between any two adjacent target prediction data among the plurality of target prediction data according to the determined observation data, wherein the loss function value comprises at least one of the following: a motion loss function value, a measurement loss function value, and a prior loss function value; updating the plurality of target prediction data according to the loss function value; and determining the motion state of each target according to the updated target prediction data once the loss function value meets the iteration stop condition.
In the embodiments of the present disclosure, by calculating the loss function value between any two adjacent target prediction data among the plurality of target prediction data and updating the plurality of target prediction data according to the loss function value, the motion state of each target can be determined by combining richer information, which improves the accuracy of the determined motion state of each target.
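As an illustration, such a windowed refinement could be sketched as follows: the target prediction data in the window are adjusted jointly by gradient descent on the sum of a motion loss between adjacent predictions (under a linear motion model F), a measurement loss against the observations in the window, and a prior loss against the original predictions, stopping once the change in the loss satisfies the iteration stop condition. The quadratic loss forms, the weights, and the optimizer are assumptions for exposition only.

    # A sketch of windowed refinement with motion / measurement / prior losses;
    # the loss forms and hyper-parameters below are illustrative assumptions.
    import numpy as np

    def refine_window(states, obs, F, H, w_motion=1.0, w_meas=1.0, w_prior=0.1,
                      lr=0.05, max_iters=100, tol=1e-6):
        x = np.array(states, dtype=float)   # (T, d): predicted states in the window
        prior = x.copy()                    # original predictions, kept as a prior
        prev_loss = np.inf
        for _ in range(max_iters):
            motion = x[1:] - x[:-1] @ F.T   # residual between adjacent predictions
            meas = np.array([z - H @ xi for xi, z in zip(x, obs)])  # one obs per state
            loss = (w_motion * (motion ** 2).sum()
                    + w_meas * (meas ** 2).sum()
                    + w_prior * ((x - prior) ** 2).sum())
            if abs(prev_loss - loss) < tol:           # iteration stop condition
                break
            prev_loss = loss
            g = 2.0 * w_prior * (x - prior)           # gradient of the prior loss
            g[:-1] += -2.0 * w_motion * motion @ F    # motion loss w.r.t. x_t
            g[1:] += 2.0 * w_motion * motion          # motion loss w.r.t. x_{t+1}
            g += -2.0 * w_meas * meas @ H             # measurement loss
            x -= lr * g
        return x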
In an alternative embodiment, the method further comprises: after tracking state information of a target is obtained, performing a target operation on the information of the target in a target pool according to the tracking state information, wherein the target operation comprises any one of the following: a data update operation, a data creation operation, a data transmission operation.
In the embodiment of the disclosure, through the setting manner, corresponding operation on the related data of the target in the target pool can be realized through the tracking state information of the target, so that efficient management of the data can be realized.
In an alternative embodiment, the method further comprises: preprocessing the observed data to obtain preprocessed observed data; the data fusion processing is carried out on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target, and the method comprises the following steps: and carrying out data fusion on the observed data after preprocessing according to the fusion algorithm flow to obtain tracking state information of the target.
In the embodiment of the disclosure, before the data fusion processing is performed on the observed data detected by the target sensor according to the fusion algorithm flow, useless data in the observed data can be removed by performing data preprocessing on the observed data, so that the accuracy of the observed data is improved. When data fusion processing is carried out according to the observed data after data preprocessing, the efficiency of the data fusion processing can be further improved, and the accuracy of the determined tracking state information is improved.
In an optional implementation manner, the fusion algorithm flow includes a plurality of sub-flows, and the target configuration file is further configured to indicate an execution sequence of the plurality of sub-flows and a flow algorithm corresponding to each sub-flow; the plurality of sub-processes includes at least one of the following sub-processes: a sub-process for determining a motion state of the object, a sub-process for determining presence estimation information of the object, a sub-process for determining type information of the object; the data fusion processing is performed on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target, including: and carrying out data fusion on the observation data acquired by the target sensor according to the execution sequence of a plurality of sub-processes in the target configuration file and a process algorithm corresponding to each sub-process to obtain tracking state information of the target, wherein the tracking state information of the target comprises at least one of motion state, existence estimation information and type information.
In the embodiment of the disclosure, the above processing manner is used to implement free configuration of fusion schemes of different sensors installed in different sensing environments, so as to alleviate the technical problem that the existing data sensing fusion system in the prior art is difficult to freely adapt to various sensor configuration schemes.
In an alternative embodiment, the target configuration file is determined according to the following steps: determining a sensor matched with the environment to be sensed as the target sensor; determining configuration files indicating the target sensor; and selecting, from the determined configuration files, a configuration file indicating a fusion algorithm flow matched with the environment to be sensed as the target configuration file.
In the embodiments of the present disclosure, because the target configuration file specifies how the observation data are fused according to the fusion algorithm flow for the corresponding perception environment, the tracking state of the target can be predicted accurately, yielding a more accurate tracking state.
In a second aspect, an embodiment of the present disclosure further provides a data fusion apparatus, including: the determining unit is used for determining the target sensor and the fusion algorithm flow indicated by the target configuration file; the acquisition unit is used for acquiring the observation data acquired by the target sensor; and the fusion processing unit is used for carrying out data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the data fusion method as described in any of the first aspects above.
In a fourth aspect, an embodiment of the present disclosure further provides a computer readable storage medium, wherein the computer readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the data fusion method according to any one of the first aspects.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. These drawings, which are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure and, together with the description, serve to illustrate the technical solutions of the present disclosure. It is to be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of a data fusion method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of one sensor configuration provided by embodiments of the present disclosure;
FIG. 3 is a flowchart of a specific method for performing data fusion processing on the observation data in the data fusion method according to the embodiment of the disclosure;
FIG. 4 illustrates a data timing diagram of a time stamp alignment process provided by an embodiment of the present disclosure;
FIG. 5 is a flowchart of a specific method for performing data association on the observed data and the target in the data fusion method according to the embodiment of the disclosure;
Fig. 6 is a flowchart of a specific method for performing time stamp alignment processing on the target observation data and the prediction data in the ring buffer in the data fusion method according to the embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a fusion algorithm flow provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of another fusion algorithm flow provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a data fusion device according to an embodiment of the present disclosure;
Fig. 10 shows a schematic diagram of an electronic device provided by an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" is used herein to describe only one relationship, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, may mean including any one or more elements selected from the group consisting of A, B and C.
Research shows that an existing data perception fusion system often needs to be modified for each different sensor configuration scheme; in other words, it is difficult for an existing system to adapt freely to various sensor configuration schemes.
Based on the above study, the present disclosure provides a data fusion method, apparatus, electronic device, and computer readable storage medium. In the embodiment of the disclosure, the fusion algorithm flow of the target sensor and the acquired observation data thereof is set through the target configuration file, so that the fusion schemes of different sensors installed in different application scenes can be freely configured, and the technical problem that the conventional data perception fusion system is difficult to freely adapt to various sensor configuration schemes in the prior art is solved.
To facilitate understanding of the present embodiment, a data fusion method disclosed in an embodiment of the present disclosure is first described in detail. The execution body of the data fusion method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, for example: a terminal device, a server, or another processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc. In some possible implementations, the data fusion method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a data fusion method according to an embodiment of the disclosure is shown, where the method includes steps S101 to S105, where:
S101: and determining the target sensor and the fusion algorithm flow indicated by the target configuration file.
In the embodiments of the present disclosure, the target configuration file is used to indicate the target sensor under the corresponding perception environment and the fusion algorithm flow for fusing the acquired observation data. The perception environment can be understood as the application scenario of the target sensor. For example, the perception environment may be the driving environment of the autonomous vehicle on which the target sensor is mounted; when the driving environment of the autonomous vehicle changes, the perception environment of the target sensor changes accordingly. For example, when an autonomous vehicle travels on a highway versus a rural road, the perception environments of the target sensors are not the same.
For an autonomous vehicle, when the vehicle is in different driving environments, the sensing environments of the autonomous vehicle are also different, and in this case, in order to track the targets in the corresponding sensing environments more accurately, the types and/or the numbers of the target sensors used in the corresponding sensing environments may also be different. Therefore, in the embodiment of the present disclosure, the type and/or number of the corresponding target sensors may also be determined according to the type of the sensing environment, and then, the fusion algorithm flow of the observation data collected by the target sensors is stored in the corresponding target configuration file.
Based on this, in the disclosed embodiments, the target configuration file is determined according to the following steps: determining a sensor matched with the environment to be sensed as the target sensor; determining configuration files indicating the target sensor; and selecting, from the determined configuration files, a configuration file indicating a fusion algorithm flow matched with the environment to be sensed as the target configuration file.
In the embodiment of the disclosure, the observation data is fused according to the fusion algorithm flow in the corresponding perception environment in the target configuration file, so that the tracking state of the target can be accurately predicted, and the more accurate tracking state is obtained.
It should be noted that, in the embodiments of the present disclosure, the kinds of target sensors in the target configuration file may be modified, and the fusion algorithm flow may be adjusted. As shown in fig. 2, in the embodiments of the present disclosure, the target configuration file may be determined according to the kind and number of target sensors.
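By way of illustration, a target configuration file of this kind might take the following form. This is only a sketch in Python notation; every field name, algorithm name, and value below is an assumption made for exposition and is not prescribed by the present disclosure.

    # A hypothetical target configuration file for a highway perception
    # environment; all keys and values are illustrative only.
    HIGHWAY_PROFILE = {
        "perception_environment": "highway",
        # the target sensors indicated by the configuration file
        "target_sensors": [
            {"type": "lidar", "count": 1},
            {"type": "camera", "count": 4},
            {"type": "millimeter_wave_radar", "count": 2},
        ],
        # the execution order of the sub-flows and the flow algorithm
        # corresponding to each sub-flow
        "fusion_algorithm_flow": [
            {"sub_flow": "data_preprocessing", "algorithm": "dedup_outlier_removal"},
            {"sub_flow": "timestamp_alignment", "algorithm": "ring_buffer_interpolation"},
            {"sub_flow": "data_association_matching", "algorithm": "hungarian",
             "similarity": "weighted_center_point"},
            {"sub_flow": "motion_state_estimation", "algorithm": "interactive_multi_model"},
            {"sub_flow": "presence_estimation"},
            {"sub_flow": "type_estimation"},
            {"sub_flow": "target_pool_management"},
        ],
    }

Swapping this file for one listing different sensors or a different sub-flow order would re-target the same fusion code to another perception environment.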
S103: and obtaining the observation data acquired by the target sensor.
In the embodiments of the present disclosure, after the target sensor is determined, the observation data acquired by the target sensor can be obtained. It should be noted that, for a sensor with a higher data transmission delay, the sensor needs to preprocess the observation data after acquiring it and before uploading it, so a certain delay exists in the sensor's data transmission process; that is, in this case, a certain time difference may exist between the time when the target sensor collects the observation data and the time when that observation data is obtained.
S105: and carrying out data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target.
In an embodiment of the present disclosure, the fusion algorithm flow includes a plurality of sub-flows, and the target configuration file is further configured to indicate an execution sequence of the plurality of sub-flows and a flow algorithm corresponding to each sub-flow, where the plurality of sub-flows includes at least one of the following sub-flows: a sub-process for determining a motion state of the object, a sub-process for determining presence estimation information of the object, a sub-process for determining type information of the object.
It should be noted that, in the embodiment of the present disclosure, the number and/or the sequence of the sub-flows in the target configuration file may be adjusted, and the flow algorithm corresponding to each sub-flow may be adjusted.
As can be seen from the above description, in the embodiment of the present disclosure, the fusion algorithm flow of the target sensor and the acquired observation data thereof is set through the target configuration file, so that the fusion schemes of the observation data of different sensors installed in different sensing environments can be freely configured, and each flow in the fusion algorithm flow can be disassembled through the target configuration file, thereby alleviating the technical problem that the existing data perception fusion system in the prior art is difficult to freely adapt to various sensor configuration schemes.
As can be seen from the above description, in the embodiments of the present disclosure, the plurality of sub-processes includes at least one of the following: a sub-process for determining the motion state of the target, a sub-process for determining presence estimation information of the target, and a sub-process for determining type information of the target. In addition, the plurality of sub-processes may further include other sub-processes; the sub-processes included in the fusion algorithm flow are described in detail below.
In an alternative embodiment, the plurality of sub-processes may include at least one of the following sub-processes: data preprocessing, timestamp alignment processing, data association matching, motion state estimation (i.e., the above-mentioned sub-flow for determining the motion state of the target), target presence estimation (i.e., the above-mentioned sub-flow for determining the presence estimation information of the target), target category estimation (i.e., the above-mentioned sub-flow for determining the type information of the target), target pool management, and the like. The above sub-processes will be described in connection with specific embodiments.
First: and (5) preprocessing data.
For the sub-process of data preprocessing, in an embodiment of the present disclosure, the data fusion method further includes: and preprocessing the observed data to obtain the preprocessed observed data.
In this case, data fusion processing is performed on the observation data acquired by the target sensor according to a fusion algorithm flow, and when tracking state information of the target is obtained, data fusion can be performed on the observation data after preprocessing according to the fusion algorithm flow, so as to obtain the tracking state information of the target.
Specifically, the data preprocessing sub-process refers to acquiring the observation data collected by the target sensor indicated in the target configuration file and then preprocessing that data. In an alternative embodiment, the target sensor may preprocess the observation data itself and send the preprocessed observation data to the computer device; alternatively, the target sensor may send the raw observation data to the computer device, which then performs the data preprocessing. The data preprocessing may include any of the following: de-duplication matching, outlier deletion, and pre-computation (e.g., computing quantities such as the view angle range and the projection frame in advance).
In the embodiments of the present disclosure, the data preprocessing may target multi-sensor data such as multi-camera, camera-lidar, camera-millimeter wave radar, and lidar-millimeter wave radar data. The observation data may also be processed with other preprocessing methods, for example data denoising and data smoothing. It should be noted that, in the embodiments of the present disclosure, the specific preprocessing method is associated with the type of the target sensor; for example, the preprocessing methods for the observation data of different types of target sensors differ.
In the embodiment of the disclosure, before the data fusion processing is performed on the observed data detected by the target sensor according to the fusion algorithm flow, useless data in the observed data can be removed by performing data preprocessing on the observed data, so that the accuracy of the observed data is improved. When data fusion processing is carried out according to the observed data after data preprocessing, the efficiency of the data fusion processing can be further improved, and the accuracy of the determined tracking state information is improved.
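As a concrete illustration of this sub-flow, the sketch below applies the three operations named above (de-duplication matching, outlier deletion, and pre-computation) to a list of observations. The observation format, the thresholds, and the field names are assumptions, not part of the disclosure.

    # A sketch of the data preprocessing sub-flow; all fields and thresholds
    # are illustrative assumptions.
    import numpy as np

    def preprocess(observations, dup_dist=0.5, min_score=0.1, fov_deg=120.0):
        kept = []
        for obs in sorted(observations, key=lambda o: -o["score"]):
            if obs["score"] < min_score:      # outlier / low-confidence deletion
                continue
            p = np.asarray(obs["position"])
            if any(np.linalg.norm(p - np.asarray(k["position"])) < dup_dist
                   for k in kept):            # de-duplication across sensors
                continue
            # pre-computation: cache the view angle for later sub-flows
            obs["view_angle"] = np.degrees(np.arctan2(p[1], p[0]))
            kept.append(obs)
        return [o for o in kept if abs(o["view_angle"]) <= fov_deg / 2]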
Second,: and (5) performing time stamp alignment processing.
In the embodiments of the present disclosure, the timestamp alignment processing refers to aligning the timestamps of the predicted data of targets that have been estimated to exist with the timestamp of the observation data newly observed by the sensor (i.e., the observation time, or acquisition time, of the observation data), so that the motion states are matched accurately. In the embodiments of the present disclosure, the precise alignment between the predicted data of targets estimated to exist and the new observation data of the target sensor can be achieved by maintaining a ring buffer of each target's motion states. It should be noted that, as described above, since some target sensors have a data transmission delay, the time difference between the time when the target sensor collects the observation data and the time when the vehicle-mounted host obtains the observation data is the delay time. After the vehicle-mounted host obtains the observation data transmitted with delay, the timestamps of the predicted data in the ring buffer are aligned to the observation time of the observation data.
Third,: and (5) matching the data association.
In the embodiment of the present disclosure, data association matching refers to an operation of matching and associating an object that has been estimated to exist with observation data acquired by an object sensor.
For the above sub-flows of timestamp alignment processing and data association matching, in an alternative embodiment, as shown in fig. 3, performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target includes the following flows:
Step S1051, according to the predicted data in the ring buffer, performing data association between the observation data and at least one target to obtain the target observation data associated with each target; the predicted data in the ring buffer represents the motion state of the target predicted from the observation data acquired at each observation time within the target time window.
In the embodiments of the present disclosure, if there are multiple targets, the observation data of the multiple sensors covers multiple targets; in this case, the observation data of the multiple target sensors needs to be associated with the multiple targets according to the predicted data in the ring buffer, so as to obtain the target observation data of each target.
It should be noted that, in the embodiments of the present disclosure, each target corresponds to one ring buffer, and the ring buffer stores the predicted data indicating that target's motion state.
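To make the data structure concrete, the following is a minimal sketch of such a per-target ring buffer of timestamped predicted data. The class and method names are illustrative (the disclosure does not prescribe an API), and Python 3.10+ is assumed for the key= argument of the bisect functions.

    # A sketch of a per-target ring buffer of (timestamp, predicted state)
    # entries; capacity and API are illustrative assumptions.
    import bisect

    class StateRingBuffer:
        def __init__(self, capacity=50):
            self.capacity = capacity
            self.entries = []                 # kept sorted by timestamp

        def insert(self, timestamp, state):
            bisect.insort(self.entries, (timestamp, state), key=lambda e: e[0])
            if len(self.entries) > self.capacity:
                self.entries.pop(0)           # evict the oldest prediction

        def window(self):
            """Earliest and latest timestamps of the target time window."""
            return self.entries[0][0], self.entries[-1][0]

        def neighbors(self, t):
            """Predicted data bracketing time t, for interpolation."""
            i = bisect.bisect_left(self.entries, t, key=lambda e: e[0])
            before = self.entries[i - 1] if i > 0 else None
            after = self.entries[i] if i < len(self.entries) else None
            return before, after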
Step S1052, performing a time stamp alignment process on the target observation data and the prediction data in the ring buffer.
In the embodiment of the present disclosure, after the observation data and the plurality of targets are data-associated, since the time stamp of the target observation data (or the observation data) obtained after the association does not correspond to the time stamp of the predicted data in the ring buffer, at this time, it is necessary to perform time stamp alignment processing on the target observation data and the predicted data in the ring buffer.
For example, as shown in fig. 4, the plurality of sensors includes a camera, a lidar sensor, and a millimeter wave radar sensor. As can be seen from fig. 4, for the target observation data: at times A1 and A2, the millimeter wave radar sensor acquires the corresponding observation data, denoted M1 and M2; at times A3 and A5, the lidar sensor acquires the corresponding observation data, denoted M3 and M5; and at time A4, the camera acquires data M4. The timestamps of the predicted data stored in the ring buffer are B1, B2, B3, B4, B5, and B6. As fig. 4 shows, at observation time A2 there is no predicted data corresponding to A2 in the ring buffer, so it can be determined that the timestamps of the target observation data and the predicted data are not aligned.
It should be noted that one possible reason no predicted data corresponding to observation time A2 exists in the ring buffer is that the observation data collected at time A2 was not uploaded to the computer device in time, but was transmitted after a certain delay; at that point, no predicted data for time A2 exists in the ring buffer. However, in order to apply the observation data acquired at time A2 to the data fusion method, a predicted data entry must be obtained by interpolation in the ring buffer, with time A2 as its timestamp. In this way the data transmitted with delay can still be used by the data fusion method and does not need to be discarded, so all the observation data acquired by the target sensors can be used: all data from each sensor is utilized effectively, and the state stability of the target is ensured during data perception fusion.
Based on this, in the embodiments of the present disclosure, the timestamp alignment processing needs to be performed on the target observation data and the predicted data in the ring buffer. As can be seen from fig. 4, the predicted data corresponding to time A2 may be obtained by interpolation from the predicted data corresponding to times B2 and/or B3, thereby aligning the timestamps of the target observation data and the predicted data in the ring buffer.
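For instance, the interpolation at time A2 between the bracketing predictions at B2 and B3 could be sketched as follows, assuming the motion state is a numeric vector and that linear interpolation is acceptable (the disclosure does not fix a particular interpolation scheme):

    # Linear interpolation of a predicted state at time t; an illustrative
    # choice, reusing the StateRingBuffer.neighbors() sketch above.
    import numpy as np

    def interpolate_state(t, before, after):
        (t0, x0), (t1, x1) = before, after
        alpha = (t - t0) / (t1 - t0)          # relative position of t in [t0, t1]
        return (1.0 - alpha) * np.asarray(x0) + alpha * np.asarray(x1)

The interpolated state is then stored in the ring buffer with t (here, A2) as its timestamp.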
Step S1053, updating the predicted data after the time stamp alignment processing according to the target observation data, to obtain the motion state of each target.
After the time stamps of the target observation data and the predicted data in the annular buffer area are aligned, the predicted data after the time stamp alignment treatment can be updated according to the target observation data, so that the motion state of each target can be obtained. When the motion state of each target is determined by using the predicted data after the time stamp alignment processing, the stability of the motion state of each target after updating can be improved, and a more accurate motion state can be obtained.
In the embodiments of the present disclosure, the ring buffer stores the predicted data determined from the observation data acquired at each observation time within the target time window, and the motion state of the target is determined from the predicted data in the ring buffer. By adopting the ring buffer technique, motion states at historical times can be saved, all data from each sensor can be used effectively, and the state stability of the target can be ensured during data perception fusion.
In an alternative embodiment, as shown in fig. 5, the data association is performed on the observation data and the targets, so as to obtain target observation data associated with each target, which includes the following procedures:
Step S501, calculating the similarity between the predicted data in the ring buffer and the observation data according to the data association matching algorithm indicated in the target configuration file;
Step S502, determining a target associated with the observed data according to the similarity, and determining the observed data as target observed data of the associated target.
In the embodiments of the present disclosure, the data association matching algorithm may be a "one-to-one" matching algorithm or a "one-to-many" matching algorithm; the "one-to-one" matching algorithm may be Hungarian matching or greedy matching, and the "one-to-many" matching algorithm may be multiple bipartite graph matching or simple greedy matching, etc. The similarity calculation algorithms include center point similarity, weighted center point similarity, shape similarity, similarity of 2D image frames, Euclidean distance, and other similarity calculation algorithms.
In the embodiment of the present disclosure, in addition to including the above-described data association matching algorithm and similarity calculation algorithm, an algorithm capable of replacing the above-described data association matching algorithm and an algorithm capable of replacing the above-described similarity calculation algorithm may be adopted, which is not particularly limited in this disclosure.
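As a sketch of the "one-to-one" case, the following pairs predicted and observed center points using Hungarian matching via scipy; the Euclidean-distance cost and the gating threshold are illustrative choices rather than the disclosure's prescribed algorithm.

    # Center-point association with Hungarian matching; the cost metric and
    # gate are illustrative assumptions.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate(predictions, observations, max_dist=5.0):
        """predictions: (N, 2) predicted center points; observations: (M, 2)
        observed center points. Returns (target_index, observation_index) pairs."""
        cost = np.linalg.norm(predictions[:, None, :] - observations[None, :, :],
                              axis=-1)
        rows, cols = linear_sum_assignment(cost)  # Hungarian matching
        # keep only pairs whose similarity passes the gate
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]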
In the embodiment of the disclosure, as can be seen from the above description, for the "data association matching" of the sub-process, the algorithm corresponding to the sub-process can be configured arbitrarily, so as to realize the disassembly of the algorithm corresponding to each sub-process in the fusion algorithm process through the target configuration file, thereby alleviating the technical problem that the existing data perception fusion system in the prior art is difficult to freely adapt to various sensor configuration schemes.
It should be noted that, in the embodiment of the present disclosure, in addition to performing data association matching according to the above-described method, data association matching may also be performed in the following manner, which specifically includes:
In the embodiment of the present disclosure, after the observation data with the time stamp (i.e., the observation time) is acquired, the prediction data corresponding to the time stamp, that is, the first prediction data, may be searched in the ring buffer. As can be seen from the above description, each target corresponds to one ring buffer, and thus, if the targets are plural, the first prediction data may also be prediction data determined in different ring buffers.
After the first predicted data is determined, the similarity between the first predicted data and the observation data also needs to be determined. After the similarity is determined, the target associated with the observation data can be determined based on the similarity, and the observation data can be determined as the target observation data of the associated target.
According to the above description, the observation data and the targets are associated through the similarity, and the target observation data of each target can be quickly and accurately determined from a large number of sensor observation data, so that an accurate prediction result can be obtained when the fusion motion state of the targets is determined according to the target observation data and the prediction data.
In an alternative implementation manner, in an embodiment of the present disclosure, after the observation data and the target are associated with each other according to the method described above, timestamp alignment processing may be performed on the target observation data and the predicted data in the ring buffer according to the target observation data.
In an alternative embodiment, as shown in fig. 6, the step of performing a time stamp alignment process on the target observation data and the predicted data in the ring buffer includes the following procedures:
Step S601, determining a matching relationship between the target observation time and a target time window;
Step S602, when it is determined according to the matching relationship that the target observation time is within the target time window or the target observation time is greater than the maximum timestamp of the predicted data within the target time window, interpolating to obtain the predicted data of the target observation data and storing the predicted data in the ring buffer, and taking the target observation time as the timestamp of the interpolated predicted data.
In the embodiment of the present disclosure, for each target observation data that is acquired recently, the processing may be performed by the following three cases, which specifically include:
case one:
If it is determined from the matching relationship that the target observation time of the target observation data is smaller than the timestamp of the earliest predicted data in the ring buffer, the target observation data is discarded. For example, as shown in fig. 4, the start timestamp of the target time window is B1 and the end timestamp is B6. As can be seen from fig. 4, the millimeter wave radar sensor observations with timestamps C1 and C2 are earlier than the timestamp B1 of the earliest predicted data in the ring buffer, so those target observations can be discarded.
And a second case:
If it is determined from the matching relationship that the target observation time of the target observation data is greater than the timestamp of the oldest predicted data in the ring buffer and smaller than the timestamp of the latest predicted data (i.e., the target observation time is within the target time window), the predicted data corresponding to the target observation time is searched for in the ring buffer. If no corresponding predicted data is found, the predicted data of the target observation data is obtained by interpolation, the interpolated predicted data is added into the ring buffer, and the predicted data is then updated. If the corresponding predicted data is found, the step of updating the found predicted data can be performed directly.
And a third case:
If it is determined according to the matching relationship that the target observation time of the target observation data is greater than the timestamp of the latest predicted data in the ring buffer (i.e., the maximum timestamp of the predicted data in the target time window), the predicted data of the target observation data can be obtained by interpolation in the ring buffer, the predicted data obtained by interpolation is added into the ring buffer, and then the predicted data is updated.
In the embodiment of the disclosure, the timestamp alignment processing is performed on the target observation data and the prediction data in the ring buffer in the above-described manner, so that the stability of the latest motion state of the target can be ensured.
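The three cases can be tied together as in the following sketch, which reuses the StateRingBuffer and interpolate_state sketches above; predict_forward is a hypothetical helper that extrapolates the latest prediction to a later time with a motion model.

    # A sketch dispatching the three timestamp-alignment cases described above;
    # predict_forward is a hypothetical extrapolation helper.
    def align_timestamp(buf, t_obs, predict_forward):
        earliest, latest = buf.window()
        if t_obs < earliest:                  # case one: older than the window
            return None                       # discard the target observation
        before, after = buf.neighbors(t_obs)
        if after is not None and after[0] == t_obs:
            return after                      # a prediction at t_obs already exists
        if t_obs <= latest:                   # case two: inside the target time window
            state = interpolate_state(t_obs, before, after)
        else:                                 # case three: newer than all predictions
            state = predict_forward(before, t_obs)
        buf.insert(t_obs, state)              # timestamp of the new entry is t_obs
        return t_obs, state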
In the embodiment of the present disclosure, the time stamp alignment process may be further performed on the target observation data and the predicted data in the ring buffer in a manner described by the following steps, and specifically includes the following procedures:
(1) When the ring buffer does not contain predicted data corresponding to the target observation data, predicting the data corresponding to the target observation data from the predicted data at a target time in the ring buffer to obtain target predicted data; the target time is a time before the target observation time of the target observation data and/or a time after the target observation time within the target time window of the ring buffer;
(2) Inserting the target predicted data at the storage position of the ring buffer corresponding to the target observation time, thereby obtaining the predicted data after the timestamp alignment processing.
As the three cases show, in cases two and three, if it is determined according to the matching relationship that the target observation time of the target observation data is later than the timestamp of the earliest predicted data in the ring buffer and earlier than the timestamp of the latest predicted data, but no predicted data corresponding to the target observation time is found in the ring buffer, or if the target observation time is later than the timestamp of the latest predicted data in the ring buffer, then the predicted data for the target observation data can be predicted from the predicted data corresponding to the target time in the ring buffer to obtain the target predicted data. The interpolated target predicted data is added to the ring buffer with the target observation time as its timestamp, and the target predicted data is then updated.
As can be seen from the foregoing description, in the embodiment of the present disclosure, by performing the timestamp alignment processing on the ring buffer queue and the target observation data, interpolation calculation may be performed on the predicted data of the target corresponding to the target observation data within the target time window, so as to accurately align the timestamp of the ring buffer queue and the timestamp of the target observation data. After the time stamps are aligned, when the predicted data after the time stamp alignment is updated, a more accurate fusion motion state can be obtained.
In an optional implementation of the embodiment of the present disclosure, predicting the predicted data corresponding to the target observation data from the predicted data at the target time in the ring buffer to obtain the target predicted data includes the following steps:
(1) Predicting prediction data corresponding to the target observation data through each motion model in the interactive multi-model;
(2) And fusing the predicted data corresponding to the target observation data predicted by each motion model to obtain the target predicted data.
Existing sensor data perception fusion methods often adopt a single motion model to fit the motion state of a target; however, because of the complexity of target motion patterns, a single motion model can hardly fit the motion state effectively. For example, in the field of automatic vehicle driving, the motion pattern of a given target during driving can be complex, e.g., a combination of going straight, turning right, going straight again, and changing lanes. Such a complex motion state cannot be fitted by one motion model.
Based on this, in the embodiment of the present disclosure, the predicted data corresponding to the target observation data is predicted by each motion model in the interactive multi-model, and the data predicted by the individual motion models are then fused by summation to obtain the target predicted data.
As can be seen from the above description, in the embodiment of the present disclosure, the target predicted data corresponding to the target observation data is predicted using the interactive multi-model. Predicting the target predicted data through the interactive multi-model can effectively fit the motion state of a complex target, so that a better perception fusion result can be obtained.
In an alternative embodiment, the prediction data corresponding to the target observation data may be predicted in a manner described below, which specifically includes:
(1) Predicting model probabilities according to the predicted data of the target moment through each motion model; the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time.
Specifically, for each of the motion models, a target transition probability is determined, where the target transition probability represents the probability of transitioning to the motion model from another motion model in the interactive multi-model. In the embodiment of the disclosure, for the i-th motion model, $\pi_{ji}$ is used to represent the target transition probability from motion model j to motion model i, where motion model j is one of the other motion models described above.
Then, a first confidence level of the other motion models at the target moment can be determined; the first confidence level is used to represent the probability that the actual motion of the target at the target moment conforms to the other motion model.
If the target observation time is denoted as time k, the target time may be denoted as time k-1. The first confidence level of the other motion model j at time k-1 (i.e., the target time) is then expressed as $\mu_j(k-1)$.

Finally, the model probabilities are determined based on the target transition probability and the first confidence level. In the disclosed embodiment, after the target transition probability $\pi_{ji}$ and the first confidence level $\mu_j(k-1)$ are determined, they can be combined by weighted summation to obtain the model probability of motion model i, where the calculation formula for determining the model probability based on the target transition probability and the confidence level can be expressed as:

$$\bar{\mu}_i(k)=\sum_{j}\pi_{ji}\,\mu_j(k-1)$$

where $\bar{\mu}_i(k)$ denotes the model probability of the above-mentioned motion model i.

After determining the model probability of motion model i in the manner described above, the model probability must also be normalized by the following formula to obtain the normalized model probability:

$$\mu_{j|i}(k-1)=\frac{\pi_{ji}\,\mu_j(k-1)}{\sum_{l}\pi_{li}\,\mu_l(k-1)}$$

where $\mu_{j|i}(k-1)$ represents the model probability after the normalization processing.
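As an illustrative, non-limiting sketch of the two formulas above, the predicted and normalized model probabilities can be computed as follows; the transition matrix values and the two-model setup are hypothetical:

```python
import numpy as np

def mixing_probabilities(Pi: np.ndarray, mu: np.ndarray):
    """Pi[j, i] is the transition probability pi_ji from model j to model i;
    mu[j] is the first confidence of model j at time k-1."""
    c = Pi.T @ mu                          # c[i] = sum_j pi_ji * mu_j
    mix = (Pi * mu[:, None]) / c[None, :]  # mix[j, i] = pi_ji * mu_j / c[i]
    return c, mix

Pi = np.array([[0.9, 0.1],                 # assumed two-model transition matrix
               [0.2, 0.8]])
mu = np.array([0.6, 0.4])                  # assumed confidences at time k-1
c, mix = mixing_probabilities(Pi, mu)      # c: model probabilities; mix: normalized
```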
(2) And acquiring the fusion motion state of each target determined at the target moment according to each motion model.
In the disclosed embodiment, after predicting the model probability from the predicted data at the target time, the final state result (i.e., the motion state) of the target determined by motion model j at time k-1 may also be determined; for example, the motion state is expressed as $\hat{x}_j(k-1\,|\,k-1)$.
(3) And determining prediction data which is predicted by each motion model and corresponds to the target observation data based on the model probability and the fusion motion state of each target at the target moment determined according to each motion model.
After the fusion motion state of the target at the target time has been determined through step (2), the fusion motion state $\hat{x}_j(k-1\,|\,k-1)$ determined at time k-1 can be fused with the model probability $\mu_{j|i}(k-1)$ at time k-1, thereby obtaining the predicted data corresponding to the target observation data predicted by each motion model at time k. The predicted data predicted by all the motion models can then be summed to obtain the target predicted data.
In the embodiment of the present disclosure, the predicted data corresponding to the target observation data predicted by each motion model may be determined by the following formula:

$$\hat{x}_{0i}(k-1\,|\,k-1)=\sum_{j}\mu_{j|i}(k-1)\,\hat{x}_j(k-1\,|\,k-1)$$

Then, through the formula $\hat{x}(k)=\sum_{i}\bar{\mu}_i(k)\,\hat{x}_i(k)$, where $\hat{x}_i(k)$ is the prediction that motion model i makes from its input $\hat{x}_{0i}(k-1\,|\,k-1)$, a summation operation is carried out over the predicted data predicted by all the motion models, thereby obtaining the target predicted data.
It should be noted that, in the embodiment of the present disclosure, each motion model may predict a corresponding model probability, and each motion model may also predict a corresponding covariance $P_{0i}(k-1\,|\,k-1)$. In step (3), in addition to determining the predicted data corresponding to the target observation data predicted by each motion model, a corresponding covariance matrix may be predicted, where the covariance matrix is used to characterize the degree of association between a predicted motion state of a target and the actual motion state of the target.
In the disclosed embodiments, the covariance matrix may be determined in the manner described by the following equation:

$$P_{0i}(k-1\,|\,k-1)=\sum_{j}\mu_{j|i}(k-1)\Big[P_j(k-1\,|\,k-1)+\big(\hat{x}_j(k-1\,|\,k-1)-\hat{x}_{0i}(k-1\,|\,k-1)\big)\big(\hat{x}_j(k-1\,|\,k-1)-\hat{x}_{0i}(k-1\,|\,k-1)\big)^{T}\Big]$$
As can be seen from the above description, for each motion model, first, a matrix difference between the target prediction data and the fused motion state determined at time k-1 for each motion model may be calculated, then, according to the transposed matrix of the matrix difference and the matrix difference, and the model probability corresponding to each motion model, the degree of association between the motion model and the target prediction data is determined, and then, the degree of association is added to the covariance matrix determined at time k-1 for each motion model, so as to obtain an addition calculation result. For each motion model, the addition calculation result may be determined in the manner described above, and then the addition calculation result may be subjected to a summation operation to obtain the covariance matrix determined at the k time.
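The mixed state and covariance computation described above can be sketched as follows. This is a minimal illustration assuming the standard interacting-multiple-model mixing step, with all variable names and numeric values hypothetical:

```python
import numpy as np

def mix_states(xs, Ps, mix):
    """xs[j]/Ps[j]: state and covariance of motion model j at time k-1;
    mix[j, i]: normalized model probability. Returns the mixed state and
    covariance that serve as the input of each motion model i."""
    n = mix.shape[0]
    x0 = [sum(mix[j, i] * xs[j] for j in range(n)) for i in range(n)]
    P0 = []
    for i in range(n):
        P = np.zeros_like(Ps[0])
        for j in range(n):
            d = (xs[j] - x0[i]).reshape(-1, 1)   # matrix difference
            P += mix[j, i] * (Ps[j] + d @ d.T)   # covariance plus spread of means
        P0.append(P)
    return x0, P0

xs = [np.array([0.0, 1.0]), np.array([0.2, 0.8])]
Ps = [np.eye(2) * 0.1, np.eye(2) * 0.2]
mix = np.array([[0.9, 0.3], [0.1, 0.7]])          # columns sum to 1
x0, P0 = mix_states(xs, Ps, mix)
```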
As can be seen from the above description, in the embodiment of the present disclosure, the target predicted data corresponding to the target observation data is predicted using the interactive multi-model. Predicting the target predicted data through the interactive multi-model can effectively fit the motion state of a complex target, so that a better perception fusion result can be obtained.
Fourth, motion state estimation.
The motion state estimation sub-process refers to estimating the motion state of the target according to the predicted data after the timestamp alignment processing, wherein the motion state includes, but is not limited to, position, orientation, speed and acceleration.
In the embodiment of the present disclosure, after performing the time stamp alignment processing on the target observation data and the predicted data in the ring buffer in the manner described above, the predicted data after the time stamp alignment processing may be updated to obtain the motion state of the target.
In an optional embodiment, updating the predicted data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target includes the following step:

And updating the predicted data after the timestamp alignment processing through the motion models in the interactive multi-model and the target observation data to obtain the motion state of each target.
In particular, first, a confidence level for each motion model in the interactive multi-model may be determined, where the confidence level is used to characterize a degree of matching between a motion state of the target predicted by the motion model and an actual motion state of the target at the target observation time.
In the embodiment of the disclosure, each motion model is used to perform a Kalman filtering update on the predicted data after the timestamp alignment processing according to the target observation data, so as to obtain a Kalman filtering result; the predicted data after the timestamp alignment processing is the target predicted data determined according to steps (1) to (3) above. Specifically, an extended Kalman filter or an unscented Kalman filter may be used together with the target observation data to perform the Kalman filtering update on the target predicted data, where the Kalman filtering result may include a measurement residual and a measurement residual covariance matrix. The confidence of each motion model in the interactive multi-model is then determined from the Kalman filtering result. Specifically, the measurement residual and the measurement residual covariance matrix may be taken as the mean and the variance of a Gaussian model, respectively, and the confidence of each motion model is then determined using the Gaussian model.
Specifically, in embodiments of the present disclosure, the confidence may be determined by the formula $\Lambda_i(k)=\mathcal{N}\big(\nu_i(k);\,0,\,S_i(k)\big)$, where $\Lambda_i(k)$ is the confidence, $\mathcal{N}(\cdot)$ is the Gaussian model, $\nu_i(k)$ is the measurement residual determined from the target observation data (i.e., the mean of the Gaussian model), and $S_i(k)$ is the measurement residual covariance matrix (i.e., the variance of the Gaussian model).
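A minimal sketch of evaluating this Gaussian model is given below; it assumes a zero-mean multivariate normal evaluated at the measurement residual, with illustrative values:

```python
import numpy as np

def model_likelihood(residual: np.ndarray, S: np.ndarray) -> float:
    """Evaluate N(residual; 0, S): the measurement residual plays the role of
    the Gaussian mean argument, its covariance S the role of the variance."""
    k = residual.size
    norm = np.sqrt((2.0 * np.pi) ** k * np.linalg.det(S))
    expo = -0.5 * float(residual @ np.linalg.solve(S, residual))
    return float(np.exp(expo) / norm)

lam = model_likelihood(np.array([0.5, -0.2]), np.eye(2) * 0.5)  # one model's confidence
```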
After determining the confidence level, the model probability of each motion model can be updated according to the confidence level, wherein the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time.
Specifically, the model probability of each motion model can be updated according to the formula $\mu_i(k)=\dfrac{\Lambda_i(k)\,\mu_i(k-1)}{\sum_{j}\Lambda_j(k)\,\mu_j(k-1)}$.
For motion model i, the confidence $\Lambda_j(k)$ of motion model j is first multiplied by the model probability $\mu_j(k-1)$ of motion model j to obtain a product calculation result Z1; the products Z1 over all motion models j are then added to obtain the addition calculation result $P1=\sum_{j}\Lambda_j(k)\,\mu_j(k-1)$. Afterwards, the confidence $\Lambda_i(k)$ of motion model i is multiplied by the model probability $\mu_i(k-1)$ of motion model i to obtain a product calculation result Z2, and the ratio between the product calculation result Z2 and the addition calculation result P1 is calculated; this ratio is determined as the updated model probability of motion model i.
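Expressed as code, the update is the normalized product of confidence and prior model probability; a brief sketch with hypothetical values:

```python
import numpy as np

def update_model_probabilities(likelihoods: np.ndarray, mu: np.ndarray) -> np.ndarray:
    """mu_i(k) = L_i * mu_i / sum_j (L_j * mu_j), per the ratio described above."""
    z = likelihoods * mu          # product Z for every model
    return z / z.sum()            # divide by the summed products P1

mu_new = update_model_probabilities(np.array([0.8, 0.1]), np.array([0.6, 0.4]))
```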
After determining the model probability after updating, determining a motion state of the target and a covariance matrix according to the model probability after updating and predicted data predicted by each motion model according to the target observation data, wherein the covariance matrix is used for representing the association degree between the motion state of the target and the actual motion state of the target.
Specifically, in embodiments of the present disclosure, the motion state of the target may be calculated by the formula $\hat{x}(k\,|\,k)=\sum_{i}\mu_i(k)\,\hat{x}_i(k\,|\,k)$, where $\mu_i(k)$ is the updated model probability and $\hat{x}_i(k\,|\,k)$ is the predicted data that motion model i predicts from the target observation data. In the embodiment of the present disclosure, the updated model probability of each motion model may be multiplied by the predicted data predicted by that motion model, and the products may then be summed over all motion models to obtain the motion states of the respective targets.
After obtaining the motion state of the target, the covariance matrix of the target can be determined by the formula:

$$P(k\,|\,k)=\sum_{i}\mu_i(k)\Big[P_i(k\,|\,k)+\big(\hat{x}_i(k\,|\,k)-\hat{x}(k\,|\,k)\big)\big(\hat{x}_i(k\,|\,k)-\hat{x}(k\,|\,k)\big)^{T}\Big]$$

As is apparent from the above description, for each motion model, the matrix difference between the motion state of each target and the predicted data predicted by that motion model is first calculated; the degree of association between the predicted data of the motion model and the motion state is then determined from the matrix difference, its transpose, and the model probability corresponding to that motion model, and this degree of association is added to the covariance matrix determined by that motion model at time k to obtain an addition calculation result. The addition calculation results determined in this manner for all the motion models are then summed to obtain the covariance matrix determined at time k.
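Both fusion formulas can be sketched together as follows; this is a non-limiting illustration, and all variable names and values are hypothetical:

```python
import numpy as np

def fuse_outputs(xs, Ps, mu):
    """Combine per-model states xs[i] and covariances Ps[i] with the updated
    model probabilities mu[i] into one motion state and covariance matrix."""
    x = sum(m * xi for m, xi in zip(mu, xs))
    P = np.zeros_like(Ps[0])
    for m, xi, Pi in zip(mu, xs, Ps):
        d = (xi - x).reshape(-1, 1)     # matrix difference to the fused state
        P += m * (Pi + d @ d.T)         # weighted by the model probability
    return x, P

xs = [np.array([1.0, 0.5]), np.array([1.2, 0.4])]
Ps = [np.eye(2) * 0.05, np.eye(2) * 0.08]
x, P = fuse_outputs(xs, Ps, np.array([0.7, 0.3]))
```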
As can be seen from the above description, in the embodiment of the present disclosure, the manner of updating the prediction data after the timestamp alignment processing by using the interactive multi-model to obtain the motion state of the target may effectively fit the motion state of the complex target, so that a better sensing fusion result may be obtained.
The above data processing procedure is described below in conjunction with fig. 7, and it can be seen from fig. 7 that the plurality of sensors includes a camera, a lidar sensor, and a millimeter wave radar sensor.
As can be seen from fig. 7, the camera, the lidar sensor and the millimeter wave radar sensor acquire observation data, so as to obtain an image frame, a lidar data frame and a millimeter wave radar data frame (i.e. observation data) respectively. And then, according to the predicted data in the annular buffer area, carrying out data association operation on the image frame, the laser radar data frame and the millimeter wave radar data frame, thereby determining the target corresponding to each observation data. After the data association operation is performed, object observation data to which each object belongs can be obtained. And then, performing time stamp alignment processing on the target observation data and the prediction data in the annular buffer area, so that the prediction data after the time stamp alignment processing contains the prediction data of the corresponding target at each target observation time of the target observation data.
For example, as shown in fig. 7, the plurality of sensors includes a camera, a lidar sensor and a millimeter wave radar sensor. As can be seen from fig. 7, for the target observation data, the millimeter wave radar sensor acquires data M1 and M2 at times A1 and A2; the lidar sensor acquires data M3 and M5 at times A3 and A5; and the camera acquires data M4 at time A4. As can be seen from fig. 4, for the predicted data, the timestamps of the predicted data stored in the ring buffer are B1, B2, B3, B4, B5 and B6, respectively, and as can be seen from fig. 7, the timestamps of the target observation data and of the predicted data are misaligned.
Based on this, in the embodiment of the present disclosure, timestamp alignment needs to be performed on the target observation data and the predicted data in the ring buffer. As can be seen from fig. 7, the predicted data corresponding to time M2 can be interpolated from the predicted data corresponding to time B2 or B3, thereby aligning the timestamps of the target observation data and the predicted data in the ring buffer.
After the target observation data and the predicted data in the ring buffer have been timestamp-aligned, the predicted data after the timestamp alignment can be updated to obtain the fused motion state of the target. As shown in fig. 7, the predicted data after the timestamp alignment processing may be updated by an interactive multi-model (Interacting Multiple Model, abbreviated IMM) to obtain the motion state of the target.
From the above description, embodiments of the present disclosure propose to use a buffer technique to efficiently use all measurements of each sensor and maintain the stability of the latest state of the target. Compared with the prior art, the method provided by the embodiment of the disclosure can effectively fit the motion state of the complex target, so that a better perception fusion result can be obtained.
In another optional implementation manner of the embodiment of the present disclosure, the step of updating the predicted data after the timestamp alignment processing through the interactive multi-model to obtain the motion state of the target includes the following processes:

(1) Selecting one or more motion models from the interactive multi-model;
(2) And updating the predicted data after the time stamp alignment processing according to the selected motion model to obtain the motion state of each target.
In the embodiment of the disclosure, in order to simulate the motion states of various targets, more and more motion models need to be added to simulate the motion modes of the targets; however, when the number of motion models is large, the optimization speed of the interactive multi-model IMM algorithm drops significantly, and its optimization effect also degrades. Therefore, to prevent these problems, the interactive multi-model needs to be adjusted so that, during motion, it can pick out in real time the several motion models that best fit the target motion pattern, and the predicted data after the timestamp alignment processing is then updated according to the selected motion models to obtain the motion state of the target.
Specifically, in the embodiments of the present disclosure, one or more motion models may be selected among the interactive multi-models by the likelihood model set selection LMS algorithm, and the specific selection process is described as follows:
The model set adaptive selection step uses a likelihood model set selection method (LMS), whose main processes include model classification, model activation, model work update, and the like. First, a plurality of motion models are initialized and grouped to obtain a model total set (total model set), a model active set (active model set) and a model working set (working model set). The initialization grouping of the models is associated with an initialization confidence for each model, where the confidence is used to characterize the probability that the predicted motion state of the target conforms to each motion model. The model total set contains the model active set, and the model active set contains the model working set; in the initial state, the models in the model total set are the same as the models in the model active set. Next, the motion state of the target may be predicted by each motion model in the model active set. In the embodiment of the present disclosure, the motion state of the target may be predicted by the interactive multi-model IMM algorithm described above, which is not repeated here.
Thereafter, the confidence level of each motion model is updated according to the predicted motion state. Next, a final confidence level is determined based on the updated confidence level and the initialized confidence level, and the motion models contained in the model active set and the model working set are adjusted based on the final confidence level; for example, a model whose final confidence is greater than a confidence threshold may be placed in the model working set. Finally, the step of updating the predicted data after the timestamp alignment processing to obtain the motion state of each target can be executed through the adjusted model working set.
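A loose sketch of this selection step follows. How the initialized and updated confidences are combined is not specified above, so a simple average and a fixed threshold are assumed here, and all model names and values are hypothetical:

```python
def final_confidence(init_conf: float, updated_conf: float) -> float:
    # Combination rule not specified in the text; a simple average is assumed.
    return 0.5 * (init_conf + updated_conf)

def select_working_set(init: dict, updated: dict, threshold: float = 0.2) -> list:
    """Keep the motion models whose final confidence exceeds the threshold."""
    return [name for name in init
            if final_confidence(init[name], updated[name]) > threshold]

init = {"constant_velocity": 0.4, "constant_turn": 0.4, "constant_accel": 0.2}
updated = {"constant_velocity": 0.7, "constant_turn": 0.25, "constant_accel": 0.05}
working_set = select_working_set(init, updated)   # models used for the update step
```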
Described above are the interactive multi-model IMM algorithm and its adjusted variant, the variable-structure interactive multi-model VSIMM algorithm. In the embodiment of the disclosure, the accuracy and smoothness of the motion state estimation can be further improved by optionally configuring batch optimization, whose main idea is to perform a unified optimization iteration over the predicted data within the target time window. In this case, updating the predicted data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target includes the following steps:
(1) Determining, in the ring buffer, a plurality of target prediction data whose timestamps represent times before the target observation time;
(2) Updating the determined target prediction data according to the target observation data, and determining the motion state of each target according to the updated target prediction data, wherein the method specifically comprises the following steps:
Determining time windows corresponding to the target prediction data; determining observation data located in the corresponding time window in the target observation data; determining a loss function value between any two adjacent target prediction data in the plurality of target prediction data according to the determined observation data, wherein the loss function value comprises at least one of the following: a motion loss function value, a measured loss function value and an a priori loss function value; updating the plurality of target prediction data according to the loss function value; and determining the motion state of each target according to the updated target prediction data until the loss function value meets the iteration stop condition.
Assuming that the target observation time is k time, at this time, a plurality of target prediction data before k time can be determined in the ring buffer, then, a time window corresponding to the plurality of target prediction data is determined, and observation data located in the corresponding time window is determined in the target observation data; and then, calculating a loss function value between any two adjacent target prediction data according to the determined observation data, wherein the loss function comprises the following steps: motion loss function value, measured loss function value, and a priori loss function value.
In the embodiment of the present disclosure, the calculation formula of the motion loss function value may be described as: $E_{motion}=\big(f(x_{k-1},0)-x_k\big)^{T}P\,\big(f(x_{k-1},0)-x_k\big)$, where $x_{k-1}$ denotes the target prediction data at time k-1, $f(x_{k-1},0)$ is a prediction function for predicting the predicted data of the target at time k from the target prediction data at time k-1, and $x_k$ denotes the actual motion state at time k.
The calculation formula of the measurement loss function value may be described as: $E_{measure}=\big(y_k-g(x_{k-1},0)\big)^{T}R\,\big(y_k-g(x_{k-1},0)\big)$, where $y_k$ denotes the target observation data at time k, $g(x_{k-1},0)$ is a measurement function for deriving observation data from the target prediction data at time k-1, and $R$ denotes a constant.
The calculation formula of the a priori loss function value may be described as: $E_{prior}=\big(x_0-\hat{x}_{k-n}\big)^{T}P_{0}\,\big(x_0-\hat{x}_{k-n}\big)$, where $\hat{x}_{k-n}$ represents the previously predicted target prediction data at time k-n, time k-n is the start time of the corresponding time window, $x_0$ represents the target prediction data at time k-n, and $P_0$ denotes a constant weight.
After the multiple loss function values are obtained, they can be summed to obtain a total loss function value. Optimization iterations can then be carried out on the plurality of target prediction data according to the total loss function value; when the total loss function value no longer changes, the loss function value is determined to meet the iteration stop condition, and the motion state of each target is then determined according to the plurality of updated target prediction data.
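The three losses and their sum can be sketched as below, with the noise argument of f and g fixed to zero and all weights and values hypothetical:

```python
import numpy as np

def total_loss(xs, ys, f, g, P, R, x_prior, P0):
    """Sum of motion, measurement and prior losses over one window; xs is the
    list of target prediction data x_{k-n}..x_k, ys the observations aligned
    to xs (ys[k] pairs with xs[k])."""
    loss = 0.0
    for k in range(1, len(xs)):
        dm = f(xs[k - 1]) - xs[k]        # motion residual f(x_{k-1}, 0) - x_k
        loss += dm @ P @ dm              # E_motion
        dz = ys[k] - g(xs[k - 1])        # measurement residual y_k - g(x_{k-1}, 0)
        loss += dz @ R @ dz              # E_measure
    dp = xs[0] - x_prior                 # deviation from the window-start prior
    loss += dp @ P0 @ dp                 # E_prior
    return float(loss)

# Toy usage: 1-D state, identity prediction and measurement functions
xs = [np.array([0.0]), np.array([0.1]), np.array([0.2])]
ys = [np.array([0.0]), np.array([0.12]), np.array([0.18])]
I = np.eye(1)
val = total_loss(xs, ys, lambda x: x, lambda x: x, I, I, np.array([0.0]), I)
```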
Fifth, target existence estimation. The result of the target existence estimation is the existence estimation information in the tracking state information.
Target existence estimation refers to judging or estimating the probability that evidence of the target's existence is present, where the evidence of a target's existence refers to the target observation data corresponding to that target. In the embodiment of the disclosure, the existence estimation information is obtained by processing the target observation data of each target with an evidential existence estimation method based on the Dempster-Shafer theory. The existence estimation information may be an existence probability: when the existence probability is greater than a certain value, the evidence that the target exists is considered sufficient, and the related information of the target is retained in the target pool; otherwise, the target is considered absent, and its related information is deleted from the target pool.
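A minimal sketch of evidential combination under the Dempster-Shafer theory is shown below, over an assumed frame {exists, absent} with 'unknown' standing for the whole frame; the rule of combination is standard, but its use and the mass values here are only illustrative:

```python
def ds_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination over the frame {'exists', 'absent'};
    the key 'unknown' stands for the whole frame (uncommitted mass)."""
    agree, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            if a == b or a == "unknown" or b == "unknown":
                key = a if b == "unknown" else (b if a == "unknown" else a)
                agree[key] = agree.get(key, 0.0) + wa * wb
            else:
                conflict += wa * wb      # contradictory evidence
    # Normalize by the non-conflicting mass (assumes conflict < 1)
    return {k: v / (1.0 - conflict) for k, v in agree.items()}

# Two sensors both provide evidence that the target exists
m = ds_combine({"exists": 0.7, "unknown": 0.3},
               {"exists": 0.6, "unknown": 0.4})   # -> exists ≈ 0.88
```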
Sixth, target category estimation.
Different sensors differ somewhat in their classification and recognition performance for targets, and fusing the category estimates of multiple sensors yields more accurate category information. In the disclosed embodiments, the category information of the target may be determined by an evidence theory algorithm based on the Dempster-Shafer theory.
Seventh, target pool management.
Based on the above-described sub-process "target pool management," in an alternative embodiment, the method further comprises: after tracking state information of a target is obtained, performing target operation on information of the target in a target pool according to the tracking state information, wherein the target operation comprises any one of the following steps: data update operation, data creation operation, data transmission operation.
The target pool contains the related information of the targets, and target pool management can be understood as creating, deleting and updating targets in the target pool. In particular, in embodiments of the present disclosure, different selectable target pool tracking management methods are provided for different sensor configurations (e.g., camera-lidar, multi-camera, camera-millimeter wave radar, etc.). The user may also conveniently add other target pool management methods in this step, which is not specifically limited by the present disclosure.
The above fusion algorithm flow is described below in conjunction with fig. 8.
As can be seen in fig. 8, the plurality of sensors includes a camera, a lidar sensor, and a millimeter wave radar sensor.
As can be seen from fig. 8, the camera, the lidar sensor and the millimeter wave radar sensor acquire observation data, and respectively obtain an image frame, a lidar data frame and a millimeter wave radar data frame (i.e. observation data). And then, carrying out data preprocessing on the image frames, the laser radar data frames and the millimeter wave radar data frames, and carrying out data association operation on the image frames, the laser radar data frames and the millimeter wave radar data frames according to the predicted data in the annular buffer area after the data preprocessing, so as to determine the target corresponding to each observation data. The above-described sub-flow "time stamp alignment process" also needs to be performed before the data association operation is performed. After performing the data association operation, a sub-flow "target pool management" may be performed. In the sub-process of "target pool management", a target state update may be performed according to a result of the data association operation, where the target state update refers to: performing time stamp alignment processing on the target observation data and the predicted data in the annular cache region; and estimating the motion state of the target according to the predicted data after the time stamp alignment processing. After updating the target state, the corresponding information can be modified in the target pool to be the updated tracking state information. Thereafter, a state post-processing flow and a deletion flow of the target may be performed. The state post-processing flow refers to performing a corresponding operation on the tracking state information after the tracking state information is obtained, and a specific operation type may be set according to a user's actual needs, which is not specifically limited in the present disclosure.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Embodiment two:
based on the same inventive concept, the embodiments of the present disclosure further provide a data fusion device corresponding to the data fusion method, and since the principle of solving the problem by the device in the embodiments of the present disclosure is similar to that of the data fusion method in the embodiments of the present disclosure, implementation of the device may refer to implementation of the method, and repeated descriptions are omitted.
Referring to fig. 9, a schematic diagram of a data fusion device according to an embodiment of the disclosure is shown, where the device includes: a determination unit 91, an acquisition unit 92, a fusion processing unit 93; wherein,
A determining unit 91, configured to determine a target sensor and a fusion algorithm flow indicated by the target configuration file;
An acquisition unit 92 configured to acquire observation data acquired by the target sensor;
And the fusion processing unit 93 is used for performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target.
As can be seen from the foregoing description, in the embodiments of the present disclosure, by setting the target configuration file, in a manner of setting the fusion algorithm flow of the observation data through the target configuration file, the fusion schemes of different sensors installed in different application scenarios may be freely configured, and each flow in the fusion algorithm flow may be disassembled through the target configuration file, so that the technical problem that it is difficult for the existing data perception fusion system in the prior art to freely adapt to various sensor configuration schemes is alleviated.
In a possible implementation manner, the fusion processing unit 93 is configured to: under the condition that the tracking state information comprises a motion state, carrying out data association on the observation data and at least one target according to the prediction data in the annular cache region to obtain target observation data associated with each target; the predicted data in the annular buffer area represents the motion state of the target predicted according to the observed data acquired at each observation time in a target time window; performing time stamp alignment processing on the target observation data and the predicted data in the annular cache region; and updating the predicted data after the time stamp alignment processing according to the target observation data to obtain the motion state of each target.
In a possible implementation manner, the fusion processing unit 93 is further configured to: and calculating the similarity between the predicted data and the observed data in the annular cache region according to a data association matching algorithm indicated in the target configuration file, determining a target associated with the observed data according to the similarity, and determining the observed data as target observed data of the associated target.
In a possible implementation manner, the fusion processing unit 93 is further configured to: determining a matching relationship between a target observation time of the target observation data and the target time window; and under the condition that the target observation time is determined to be within the target time window or the target observation time is greater than the maximum timestamp of the predicted data in the target time window according to the matching relation, interpolating to obtain the predicted data corresponding to the target observation data, storing the predicted data in the annular buffer, and taking the target observation time as the timestamp of the interpolated predicted data.
In a possible implementation manner, the fusion processing unit 93 is further configured to: under the condition that the annular cache area does not contain the predicted data corresponding to the target observed data, predicting the predicted data corresponding to the target observed data according to the predicted data at the target moment in the annular cache area to obtain target predicted data; the target time is a time before the target observation time of the target observation data and/or a time after the target observation time in a target time window of the annular cache region; and inserting the target prediction data in a storage position of the annular buffer area corresponding to the target observation time.
In a possible implementation manner, the fusion processing unit 93 is further configured to: predicting prediction data corresponding to the target observation data through each motion model in the interactive multi-model; and fusing the predicted data corresponding to the target observation data predicted by each motion model to obtain the target predicted data.
In a possible implementation manner, the fusion processing unit 93 is further configured to: predicting model probabilities according to the predicted data of the target time through each motion model; the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time; acquiring a fusion motion state of each target at the target moment determined according to each motion model; and determining prediction data which is predicted by each motion model and corresponds to the target observation data based on the model probability and the fusion motion state of each target at the target moment determined according to each motion model.
In a possible implementation manner, the fusion processing unit 93 is further configured to: selecting at least one motion model from the interactive multi-model; and updating the predicted data after the time stamp alignment processing according to the selected motion model and the target observation data to obtain the motion state of each target.
In a possible implementation manner, the fusion processing unit 93 is further configured to: and updating the predicted data after the time stamp alignment processing through a motion model in the interactive multi-model and the target observation data to obtain the motion state of each target.
In a possible implementation manner, the fusion processing unit 93 is further configured to: determining the confidence coefficient of each motion model according to the target observation data, wherein the confidence coefficient represents the matching degree between the motion state of each target predicted by the motion model at the target observation time and the actual motion state of each target; updating the model probability of each motion model according to the confidence, wherein the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation moment; and determining the motion state and covariance matrix of each target according to the predicted data predicted by the target observation data through the updated model probability and each motion model, wherein the covariance matrix of one target is used for representing the association degree between the predicted motion state of the target and the actual motion state of the target.
In a possible implementation manner, the fusion processing unit 93 is further configured to: determining a plurality of target prediction data of which the time represented by the timestamp is positioned before the target observation time in the annular buffer; and updating the determined target prediction data according to the target observation data, and determining the motion state of each target according to the updated target prediction data.
In a possible implementation manner, the fusion processing unit 93 is further configured to: determining time windows corresponding to the target prediction data; determining observation data located in the corresponding time window in the target observation data; determining a loss function value between any two adjacent target prediction data in the plurality of target prediction data according to the determined observation data, wherein the loss function value comprises at least one of the following: a motion loss function value, a measured loss function value and an a priori loss function value; updating the plurality of target prediction data according to the loss function value; and determining the motion state of each target according to the updated target prediction data until the loss function value meets the iteration stop condition.
In a possible embodiment, the device is further configured to: after tracking state information of a target is obtained, performing target operation on information of the target in a target pool according to the tracking state information, wherein the target operation comprises any one of the following steps: data update operation, data creation operation, data transmission operation.
In a possible embodiment, the device is further configured to: preprocessing the observed data to obtain preprocessed observed data; the fusion processing unit is further used for: and carrying out data fusion on the observed data after preprocessing according to the fusion algorithm flow to obtain tracking state information of the target.
In a possible implementation manner, the fusion processing unit 93 is further configured to: the fusion algorithm flow comprises a plurality of sub-flows, and the target configuration file is also used for indicating the execution sequence of the plurality of sub-flows and the flow algorithm corresponding to each sub-flow; the plurality of sub-processes includes at least one of the following sub-processes: a sub-process for determining a motion state of the object, a sub-process for determining presence estimation information of the object; under the condition of determining the sub-processes of the type information of the target, data fusion is carried out on the observation data acquired by the target sensor according to the execution sequence of a plurality of sub-processes in the target configuration file and a process algorithm corresponding to each sub-process, so as to obtain tracking state information of the target, wherein the tracking state information of the target comprises at least one of motion state, existence estimation information and type information.
In a possible embodiment, the device is further configured to: determining a sensor matched with the environment to be sensed as a target sensor; determining a configuration file indicating the target sensor; selecting a configuration file indicating a fusion algorithm flow matched with the environment to be perceived from the determined configuration files as a target configuration file.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Embodiment III:
Corresponding to the data fusion method in fig. 1, the embodiment of the present disclosure further provides an electronic device 100, as shown in fig. 10, which is a schematic structural diagram of the electronic device 100 provided in the embodiment of the present disclosure, including:
a processor 11, a memory 12, and a bus 13; the memory 12 is used for storing execution instructions, including a memory 121 and an external memory 122; the memory 121 is also referred to as an internal memory, and is used for temporarily storing operation data in the processor 11 and data exchanged with the external memory 122 such as a hard disk, and the processor 11 exchanges data with the external memory 122 through the memory 121, and when the electronic device 100 is operated, the processor 11 and the memory 12 communicate through the bus 13, so that the processor 11 executes the following instructions:
Determining a target sensor and a fusion algorithm flow indicated by a target configuration file; obtaining observation data acquired by the target sensor; and carrying out data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the data fusion method described in the method embodiments above. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product carries a program code, where instructions included in the program code may be used to perform the steps of the data fusion method described in the foregoing method embodiments, and specifically reference may be made to the foregoing method embodiments, which are not described herein in detail.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the foregoing examples are merely specific embodiments of the present disclosure, and are not intended to limit the scope of the disclosure, but the present disclosure is not limited thereto, and those skilled in the art will appreciate that while the foregoing examples are described in detail, it is not limited to the disclosure: any person skilled in the art, within the technical scope of the disclosure of the present disclosure, may modify or easily conceive changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features thereof; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and are intended to be included within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (14)

1. A method of data fusion, comprising:
determining a sensor matched with the environment to be sensed as a target sensor; determining a configuration file indicating the target sensor; selecting a configuration file indicating a fusion algorithm flow matched with the environment to be perceived from the determined configuration files as a target configuration file;
obtaining observation data acquired by the target sensor;
Performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target;
the tracking state information includes a motion state; performing data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of the target, wherein the data fusion processing comprises the following steps:
Calculating the similarity between the predicted data and the observed data in the annular cache region according to the data association matching algorithm indicated in the target configuration file;
Determining a target associated with the observed data according to the similarity, and determining the observed data as target observed data of the associated target; the predicted data in the annular buffer area represents the motion state of the target predicted according to the observed data acquired at each observation time in a target time window;
Performing time stamp alignment processing on the target observation data and the predicted data in the annular cache region;
Updating the predicted data after the time stamp alignment processing according to the target observation data to obtain the motion state of each target;
The performing timestamp alignment processing on the target observed data and the predicted data in the ring buffer area includes:
under the condition that the annular cache area does not contain the prediction data corresponding to the target observation data, predicting the prediction data corresponding to the target observation data through each motion model in the interactive multi-model; fusing predicted data corresponding to target observation data predicted by each motion model to obtain target predicted data; the target time is a time before the target observation time of the target observation data and/or a time after the target observation time in a target time window of the annular cache region;
And inserting the target prediction data in a storage position of the annular buffer area corresponding to the target observation time.
2. The method of claim 1, wherein the performing a time stamp alignment process on the target observation data and the predicted data in the ring buffer comprises:
determining a matching relationship between a target observation time of the target observation data and the target time window;
And under the condition that the target observation time is determined to be within the target time window or the target observation time is greater than the maximum timestamp of the predicted data in the target time window according to the matching relation, interpolating to obtain the predicted data corresponding to the target observation data, storing the predicted data in the annular buffer, and taking the target observation time as the timestamp of the interpolated predicted data.
3. The method according to claim 1, wherein predicting the predicted data corresponding to the target observation data by each motion model in the interactive multi-model comprises:
Predicting model probabilities according to the predicted data of the target time through each motion model; the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time;
acquiring a fusion motion state of each target at the target moment determined according to each motion model;
And determining prediction data which is predicted by each motion model and corresponds to the target observation data based on the model probability and the fusion motion state of each target at the target moment determined according to each motion model.
4. The method according to claim 1, wherein updating the predicted data after the time stamp alignment processing according to the target observation data to obtain the motion state of each target comprises:
selecting at least one motion model from the interactive multi-model;
And updating the predicted data after the time stamp alignment processing according to the selected motion model and the target observation data to obtain the motion state of each target.
5. The method according to claim 1, wherein updating the predicted data after the time stamp alignment processing according to the target observation data to obtain the motion state of each target comprises:
And updating the predicted data after the time stamp alignment processing through a motion model in the interactive multi-model and the target observation data to obtain the motion state of each target.
6. The method according to claim 5, wherein updating the predicted data after the time stamp alignment processing to obtain the motion state of each target by the motion model in the interactive multi-model and the target observation data comprises:
Determining the confidence coefficient of each motion model according to the target observation data, wherein the confidence coefficient represents the matching degree between the motion state of each target predicted by the motion model at the target observation time and the actual motion state of each target;
Updating the model probability of each motion model according to the confidence, wherein the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation moment;
And determining the motion state and covariance matrix of each target according to the predicted data predicted by the target observation data through the updated model probability and each motion model, wherein the covariance matrix of one target is used for representing the association degree between the predicted motion state of the target and the actual motion state of the target.
7. The method according to claim 1, wherein updating the predicted data after the time stamp alignment processing according to the target observation data to obtain the motion state of each target comprises:
determining a plurality of target prediction data of which the time represented by the timestamp is positioned before the target observation time in the annular buffer;
and updating the determined target prediction data according to the target observation data, and determining the motion state of each target according to the updated target prediction data.
8. The method of claim 7, wherein updating the determined target prediction data based on the target observation data and determining the motion state of each target based on the updated target prediction data comprises:
determining time windows corresponding to the target prediction data;
Determining observation data located in the corresponding time window in the target observation data;
Determining a loss function value between any two adjacent target prediction data in the plurality of target prediction data according to the determined observation data, wherein the loss function value comprises at least one of the following: a motion loss function value, a measured loss function value and an a priori loss function value;
Updating the plurality of target prediction data according to the loss function value; and determining the motion state of each target according to the updated target prediction data until the loss function value meets the iteration stop condition.
9. The method according to claim 1, wherein the method further comprises:
After tracking state information of a target is obtained, performing target operation on information of the target in a target pool according to the tracking state information, wherein the target operation comprises any one of the following steps: data update operation, data creation operation, data transmission operation.
10. The method according to claim 1, wherein:
the method further comprises: preprocessing the observation data to obtain preprocessed observation data;
and performing the data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain the tracking state information of the target comprises: performing data fusion on the preprocessed observation data according to the fusion algorithm flow to obtain the tracking state information of the target.
11. The method according to any one of claims 1 to 10, wherein the fusion algorithm flow comprises a plurality of sub-flows, and the target configuration file further indicates an execution order of the plurality of sub-flows and a flow algorithm corresponding to each sub-flow; the plurality of sub-flows comprises at least one of the following: a sub-flow for determining the motion state of a target, a sub-flow for determining the existence estimation information of a target, and a sub-flow for determining the type information of a target;
performing the data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain the tracking state information of the target comprises:
performing data fusion on the observation data acquired by the target sensor according to the execution order of the plurality of sub-flows in the target configuration file and the flow algorithm corresponding to each sub-flow, to obtain the tracking state information of the target, wherein the tracking state information of the target comprises at least one of a motion state, existence estimation information, and type information.
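Claim 11's target configuration file amounts to an ordered list of sub-flows, each bound to a flow algorithm. A minimal sketch, assuming a JSON layout and an algorithm registry that the patent does not specify:

```python
import json

def motion_subflow(obs):
    return {"motion_state": obs.get("kinematics")}

def existence_subflow(obs):
    return {"existence": obs.get("score", 0.0) > 0.5}

def type_subflow(obs):
    return {"type": obs.get("class_label")}

# Hypothetical registry mapping flow-algorithm names to implementations.
ALGORITHMS = {
    "imm_motion": motion_subflow,
    "existence_estimator": existence_subflow,
    "type_classifier": type_subflow,
}

def run_fusion(config_path, observation):
    """Run the sub-flows in the order listed in the target configuration file."""
    with open(config_path) as f:
        cfg = json.load(f)
    tracking_info = {}
    for sub in cfg["sub_flows"]:  # ordered list, e.g. [{"algorithm": "imm_motion"}, ...]
        tracking_info.update(ALGORITHMS[sub["algorithm"]](observation))
    return tracking_info
```

Reordering or editing `sub_flows` in the file changes the pipeline without touching code, which is the point of driving the flow from the configuration file.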
12. A data fusion device, comprising:
a determining unit, configured to determine a sensor matching an environment to be sensed as a target sensor, determine configuration files indicating the target sensor, and select, from the determined configuration files, a configuration file indicating a fusion algorithm flow matching the environment to be sensed as a target configuration file;
an acquisition unit, configured to acquire the observation data acquired by the target sensor;
a fusion processing unit, configured to perform data fusion processing on the observation data acquired by the target sensor according to the fusion algorithm flow to obtain tracking state information of a target, the tracking state information comprising a motion state;
wherein the fusion processing unit is specifically configured to: calculate, according to a data association matching algorithm indicated in the target configuration file, the similarity between the predicted data in a ring buffer and the observation data; determine a target associated with the observation data according to the similarity, and determine the observation data as the target observation data of the associated target, wherein the predicted data in the ring buffer represent the motion states of targets predicted from the observation data acquired at each observation time within a target time window; perform timestamp alignment processing on the target observation data and the predicted data in the ring buffer; and update the predicted data after the timestamp alignment processing according to the target observation data to obtain the motion state of each target; and wherein performing the timestamp alignment processing on the target observation data and the predicted data in the ring buffer comprises: when the ring buffer does not contain predicted data corresponding to the target observation data, predicting the predicted data corresponding to the target observation data through each motion model in the interactive multi-model, and fusing the predicted data corresponding to the target observation data predicted by each motion model to obtain target prediction data, wherein the target time is a time before the target observation time of the target observation data and/or a time after the target observation time within the target time window of the ring buffer; and inserting the target prediction data at the storage position in the ring buffer corresponding to the target observation time.
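The timestamp alignment inside the fusion processing unit keeps predictions ordered by timestamp within a bounded window and fills a missing slot at the observation time by predicting with each motion model and fusing the results. The sketch below assumes per-model `predict` hooks and mean fusion, both illustrative choices:

```python
from bisect import bisect_left, insort

class PredictionRingBuffer:
    """Illustrative buffer of (timestamp, prediction) pairs for claim 12."""

    def __init__(self, window, predict_fns):
        self.window = window            # target time window, in seconds
        self.entries = []               # kept sorted by timestamp
        self.predict_fns = predict_fns  # one hook per IMM motion model (assumed)

    def align(self, t_obs):
        """Ensure a prediction exists at the target observation time t_obs."""
        times = [t for t, _ in self.entries]
        i = bisect_left(times, t_obs)
        if i < len(times) and times[i] == t_obs:
            return self.entries[i][1]   # already aligned, nothing to insert
        # No prediction at t_obs: let each motion model predict one, then
        # fuse them (here a plain mean) into the target prediction data.
        preds = [f(self.entries, t_obs) for f in self.predict_fns]
        target_pred = sum(preds) / len(preds)
        # Insert at the storage position corresponding to t_obs.
        insort(self.entries, (t_obs, target_pred))
        # Evict predictions that have fallen out of the target time window.
        self.entries = [(t, p) for t, p in self.entries
                        if t_obs - t <= self.window]
        return target_pred
```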
13. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the data fusion method according to any one of claims 1 to 11.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the data fusion method according to any of claims 1 to 11.
CN202011628706.4A 2020-12-31 2020-12-31 Data fusion method, device, electronic equipment and storage medium Active CN112733907B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011628706.4A CN112733907B (en) 2020-12-31 2020-12-31 Data fusion method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112733907A CN112733907A (en) 2021-04-30
CN112733907B true CN112733907B (en) 2024-09-17

Family

ID=75608154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011628706.4A Active CN112733907B (en) 2020-12-31 2020-12-31 Data fusion method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112733907B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113965879B (en) * 2021-05-13 2024-02-06 深圳市速腾聚创科技有限公司 Multi-sensor perception information fusion method and related equipment
CN113612567B (en) * 2021-10-11 2021-12-14 树根互联股份有限公司 Alignment method and device for data collected by multiple sensors of equipment and electronic equipment
CN113965289B (en) * 2021-10-29 2024-03-12 际络科技(上海)有限公司 Time synchronization method and device based on multi-sensor data
CN114241749B (en) * 2021-11-26 2022-12-13 深圳市戴升智能科技有限公司 Video beacon data association method and system based on time sequence

Citations (1)

Publication number Priority date Publication date Assignee Title
CN111348046A (en) * 2018-12-24 2020-06-30 长城汽车股份有限公司 Target data fusion method, system and machine-readable storage medium

Family Cites Families (17)

Publication number Priority date Publication date Assignee Title
KR101610502B1 (en) * 2014-09-02 2016-04-07 현대자동차주식회사 Apparatus and method for recognizing driving enviroment for autonomous vehicle
CN105424043B (en) * 2015-11-02 2018-03-09 北京航空航天大学 It is a kind of based on judging motor-driven estimation method of motion state
CN105719312B (en) * 2016-01-19 2018-07-27 深圳大学 Multi-object tracking method based on sequential Bayesian filter and tracking system
CN106054170B (en) * 2016-05-19 2017-07-25 哈尔滨工业大学 A kind of maneuvering target tracking method under constraints
JP6652477B2 (en) * 2016-10-03 2020-02-26 日立オートモティブシステムズ株式会社 In-vehicle processing unit
US10602242B2 (en) * 2017-06-14 2020-03-24 GM Global Technology Operations LLC Apparatus, method and system for multi-mode fusion processing of data of multiple different formats sensed from heterogeneous devices
CN107462882B (en) * 2017-09-08 2020-06-02 深圳大学 Multi-maneuvering-target tracking method and system suitable for flicker noise
JP6818907B2 (en) * 2017-11-20 2021-01-27 三菱電機株式会社 Obstacle recognition device and obstacle recognition method
CN108226920B (en) * 2018-01-09 2021-07-06 电子科技大学 Maneuvering target tracking system and method for processing Doppler measurement based on predicted value
US20200363816A1 (en) * 2019-05-16 2020-11-19 WeRide Corp. System and method for controlling autonomous vehicles
CN110543850B (en) * 2019-08-30 2022-07-22 上海商汤临港智能科技有限公司 Target detection method and device and neural network training method and device
CN110850403B (en) * 2019-11-18 2022-07-26 中国船舶重工集团公司第七0七研究所 Intelligent ship water-surface target perception and identification method based on multi-sensor decision-level fusion
CN111310840B (en) * 2020-02-24 2023-10-17 北京百度网讯科技有限公司 Data fusion processing method, device, equipment and storage medium
CN111860589B (en) * 2020-06-12 2023-07-18 中山大学 Multi-sensor multi-target collaborative detection information fusion method and system
CN111860604B (en) * 2020-06-24 2024-02-02 国汽(北京)智能网联汽车研究院有限公司 Data fusion method, system and computer storage medium
CN111721238A (en) * 2020-07-22 2020-09-29 上海图漾信息科技有限公司 Depth data measuring apparatus and target object data collecting method
CN112033429B (en) * 2020-09-14 2022-07-19 吉林大学 Target-level multi-sensor fusion method for intelligent automobile

Similar Documents

Publication Publication Date Title
CN112733907B (en) Data fusion method, device, electronic equipment and storage medium
US9798929B2 (en) Real-time pose estimation system using inertial and feature measurements
CN110782483B (en) Multi-view multi-target tracking method and system based on distributed camera network
Agamennoni et al. An outlier-robust Kalman filter
EP2858008B1 (en) Target detecting method and system
CN112712549B (en) Data processing method, device, electronic equipment and storage medium
Niedfeldt et al. Recursive RANSAC: Multiple signal estimation with outliers
US10019801B2 (en) Image analysis system and method
CN112991389A (en) Target tracking method and device and mobile robot
US20130080111A1 (en) Systems and methods for evaluating plane similarity
CN115144828B (en) Automatic online calibration method for intelligent automobile multi-sensor space-time fusion
CN113066127B (en) Visual inertial odometer method and system for calibrating equipment parameters on line
KR20190001086A (en) Sliding windows based structure-less localization method using inertial and single optical sensor, recording medium and device for performing the method
CN116645396A (en) Track determination method, track determination device, computer-readable storage medium and electronic device
JP2004220292A (en) Object tracking method and device, program for object tracking method, and recording medium with its program recorded
CN107665495B (en) Object tracking method and object tracking device
CN115655291B (en) Method, device, mobile robot, equipment and medium for laser SLAM closed loop mapping
CN112927258A (en) Target tracking method and device
KR101483549B1 (en) Method for Camera Location Estimation with Particle Generation and Filtering and Moving System using the same
CN115993791A (en) Method and apparatus for providing tracking data identifying the movements of a person and a hand to control a technical system and a sensor system
KR20110131675A (en) Color region segmentation system for intelligent transportation system
Juang et al. Comparative performance evaluation of GM-PHD filter in clutter
CN113591017B (en) Method, system, device and readable storage medium for indoor navigation
CN115994934B (en) Data time alignment method and device and domain controller
Khasnabish et al. A stochastic resampling based selective particle filter for visual object tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant