CN112712549A - Data processing method, data processing device, electronic equipment and storage medium


Info

Publication number
CN112712549A
CN112712549A
Authority
CN
China
Prior art keywords
target
data
model
observation
time
Prior art date
Legal status
Pending
Application number
CN202011634778.XA
Other languages
Chinese (zh)
Inventor
马全盟
罗铨
张世权
蒋沁宏
石建萍
Current Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202011634778.XA
Publication of CN112712549A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/25 Fusion techniques
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Abstract

The present disclosure provides a data processing method, an apparatus, an electronic device, and a storage medium. The method includes: acquiring observation data collected by a plurality of sensors; associating the observation data with at least one target according to the prediction data in a ring buffer to obtain the target observation data associated with each target, where the prediction data in the ring buffer represents the motion state of a target predicted from the observation data collected at each observation time within a target time window; performing timestamp alignment between the target observation data and the prediction data in the ring buffer; and updating the timestamp-aligned prediction data according to the target observation data to obtain the fused motion state of each target. By using a ring buffer to store the fused motion states at historical times, the embodiments of the disclosure can make effective use of all the data from each sensor while keeping the predicted state of each target stable.

Description

Data processing method, data processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
Existing sensor-data perception fusion methods fall mainly into two categories: those built on synchronous sensor systems and those built on asynchronous sensor systems. A synchronous sensor system requires every sensor to share the same observation time and the same data transmission delay. In practice, a trigger pulse is typically used to start data acquisition on all sensors simultaneously, ensuring consistent observation times, while a low-latency transmission network keeps the transmission delays equal. However, such methods require complex hardware circuits to implement the simultaneous triggering, making them complicated and costly. An asynchronous sensor system only requires each asynchronous sensor to share the same time source, for example, the same data acquisition interval.
This can be achieved by setting a corresponding timer for each asynchronous sensor. However, some asynchronous sensors must preprocess their observation data before transmitting it to the next-stage device, which can introduce a large data transmission delay. Other asynchronous sensors, such as cameras, acquire data at a high frequency. Existing perception fusion methods for asynchronous sensor systems must therefore discard data from sensors with long transmission delays or high acquisition frequencies; as a result, it is difficult to fuse the data of all asynchronous sensors efficiently, and the predicted motion state of the target is unstable.
Disclosure of Invention
The embodiment of the disclosure at least provides a data processing method, a data processing device, an electronic device and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a data processing method, including: acquiring observation data acquired by a plurality of sensors; performing data association on the observation data and at least one target according to the prediction data in the annular cache region to obtain target observation data associated with each target; the predicted data in the annular cache area represent the motion state of the target predicted according to the observation data collected at each observation moment in the target time window; performing timestamp alignment processing on the target observation data and the prediction data in the annular cache region; and updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the fusion motion state of each target.
In the embodiment of the disclosure, the motion state of the target predicted according to the observation data acquired at each observation time in the target time window is stored through the annular cache region, and the fusion motion state of the target is determined according to the prediction data in the annular cache region, so that the fusion motion state at the historical time can be stored by adopting the annular cache technology, all data of each sensor can be effectively utilized, and meanwhile, the stability of the prediction state of the target can be ensured in the process of data perception fusion.
In an optional implementation manner, the performing data association on the observation data and at least one target according to the predicted data in the ring buffer to obtain target observation data associated with each target includes: searching first prediction data with time stamps of all observation moments of the observation data in the annular cache region; determining a similarity between the first predicted data and the observed data; and if the similarity is greater than or equal to a preset threshold value, determining the observation data as target observation data of a target corresponding to the annular cache region.
According to the description, the observation data and each target are associated by calculating the similarity, the target observation data of each target can be quickly and accurately determined from a large amount of observation data, and therefore, when the fusion motion state of the target is determined according to the target observation data and the prediction data, an accurate prediction result can be obtained.
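The association step above can be sketched as follows. This is a minimal illustration only: the patent does not fix a similarity metric, so the inverse-distance score, the two-dimensional state vectors, and the names `associate` and `predictions` are assumptions made for the example.

```python
import numpy as np

def associate(observation, first_predictions, threshold=0.5):
    """Associate one observation with the target whose ring-buffer
    prediction at the observation's timestamp is most similar.

    `first_predictions` maps target_id -> the first prediction data
    looked up from that target's ring buffer. The similarity here is a
    simple inverse-distance score in (0, 1]; any metric with a preset
    threshold fits the scheme described in the text."""
    best_id, best_sim = None, threshold
    for target_id, predicted in first_predictions.items():
        distance = np.linalg.norm(observation - predicted)
        similarity = 1.0 / (1.0 + distance)
        if similarity >= best_sim:
            best_id, best_sim = target_id, similarity
    return best_id  # None if no target reaches the threshold

predictions = {1: np.array([0.0, 0.0]), 2: np.array([10.0, 10.0])}
obs = np.array([0.1, -0.1])
assert associate(obs, predictions) == 1          # close to target 1
assert associate(np.array([50.0, 50.0]), predictions) is None  # unmatched
```

An observation that matches no target above the threshold is left unassociated, which is consistent with the thresholding described above.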
In an optional embodiment, the performing a timestamp alignment process on the target observation data and the prediction data in the ring buffer includes: determining a matching relationship between a target observation time of the target observation data and the target time window; and, when the matching relationship indicates that the target observation time falls within the target time window, or that the target observation time is greater than the maximum timestamp of the prediction data in the target time window, interpolating prediction data for the target observation data, storing the interpolated prediction data in the ring buffer, and using the target observation time as the timestamp of the interpolated prediction data.
In the embodiment of the present disclosure, by performing timestamp alignment processing on the target observation data and the prediction data in the annular buffer in the manner described above, the stability of the latest motion state of the target can be ensured.
In an optional embodiment, the performing a timestamp alignment process on the target observation data and the prediction data in the ring buffer includes: under the condition that the annular cache region does not contain the prediction data corresponding to the target observation data, predicting the prediction data corresponding to the target observation data according to the prediction data of the target moment in the annular cache region to obtain second prediction data; the target time is a time before the target observation time of the target observation data in the target time window of the annular cache region and/or a time after the target observation time; and inserting the second prediction data into the storage position of the annular buffer zone corresponding to the target observation time.
As can be seen from the above description, in the embodiment of the present disclosure, by performing timestamp alignment processing on the circular buffer queue and the target observation data, interpolation calculation on the prediction data of the target corresponding to the target observation data in the target time window can be implemented, so as to implement accurate alignment between the timestamp of the circular buffer queue and the timestamp of the target observation data. After the time stamps are aligned, when updating is carried out according to the prediction data after the time stamp alignment processing, a more accurate fusion motion state can be obtained.
In an optional implementation manner, the predicting, according to the predicted data of the target time in the ring buffer, the predicted data corresponding to the target observation data to obtain second predicted data includes: predicting the prediction data corresponding to the target observation data through each motion model in the interactive multi-model; and fusing the predicted data corresponding to the target observation data predicted by each motion model to obtain the second predicted data.
In the embodiment of the disclosure, the second prediction data is predicted through the interactive multi-model, so that the motion state of the complex target can be effectively fitted, and a better perception fusion result can be obtained.
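The fusion of per-model predictions can be sketched as a probability-weighted combination, which is the standard mixing step of interacting-multiple-model (IMM) estimators. The function name, the two example models (constant velocity and constant turn), and the concrete numbers below are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def fuse_model_predictions(states, probabilities):
    """Fuse the states predicted by each motion model into one second
    prediction, weighting each model's output by its model probability
    (IMM-style mixing). Probabilities are renormalized defensively."""
    probabilities = np.asarray(probabilities, dtype=float)
    probabilities = probabilities / probabilities.sum()
    states = np.asarray(states, dtype=float)
    return sum(p * s for p, s in zip(probabilities, states))

# Two models predict different states; the fused second prediction is
# their probability-weighted average.
cv_state = np.array([2.0, 1.0])   # e.g. a constant-velocity prediction
ct_state = np.array([4.0, 3.0])   # e.g. a constant-turn prediction
fused = fuse_model_predictions([cv_state, ct_state], [0.75, 0.25])
assert np.allclose(fused, [2.5, 1.5])
```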
In an optional embodiment, the predicting, by each motion model in the interactive multi-model, prediction data corresponding to the target observation data includes: predicting model probabilities according to the prediction data of the target time through each motion model; the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time; acquiring the fusion motion state of each target at the target moment determined according to each motion model; and determining the prediction data which is predicted by each motion model and corresponds to the target observation data based on the model probability and the fusion motion state of each target at the target moment determined according to each motion model.
In the embodiment of the disclosure, the prediction data of each motion model at the target observation time is determined by the model probability of each motion model in the interactive multi-model and the fusion state data of the target determined by each motion model at the target time, so that the motion state of the complex target can be effectively fitted, and a better perception fusion result can be obtained.
In an alternative embodiment, the predicting, by each of the motion models, a model probability from the prediction data of the target time includes: determining a target transition probability for each of the motion models, wherein the target transition probability represents the probability of transitioning to that motion model from the other motion models in the interactive multi-model; determining a first confidence of the other motion models at the target moment, wherein the first confidence represents the probability that the actual motion of the target conforms to those motion models at the target moment; and determining the model probability based on the target transition probability and the first confidence.
In the embodiment of the disclosure, the model probability of the motion model is determined according to the target transition probability and the first confidence of other motion models at the target time, so that more accurate model probability can be obtained, and when the prediction data of each motion model at the target observation time is determined according to the model probability, the prediction accuracy of the data can be improved, so that a better perception fusion result can be obtained.
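The model-probability prediction described above can be written as the standard IMM time update, where each model's new probability is the sum of the prior probabilities (first confidences) weighted by the transition probabilities into that model. The matrix values below are illustrative assumptions.

```python
import numpy as np

def predicted_model_probabilities(transition, prior):
    """IMM time update of model probabilities: mu_j = sum_i p[i, j] * prior[i],
    where p[i, j] is the probability of switching from model i to model j
    and `prior` holds the first confidences at the target moment."""
    transition = np.asarray(transition, dtype=float)
    prior = np.asarray(prior, dtype=float)
    mu = prior @ transition
    return mu / mu.sum()  # renormalize against rounding error

# Two-model example; each row of `transition` sums to 1.
transition = np.array([[0.9, 0.1],
                       [0.2, 0.8]])
prior = np.array([0.5, 0.5])
mu = predicted_model_probabilities(transition, prior)
assert np.allclose(mu, [0.55, 0.45])
```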
In an optional implementation manner, the updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the fusion motion state of the target includes: and updating the prediction data after the timestamp alignment processing through a motion model in the interactive multi-model and the target observation data to obtain the fusion motion state of each target.
As can be seen from the above description, in the embodiment of the present disclosure, the interactive multi-model is used to update the prediction data after the timestamp alignment processing, so as to obtain the motion state of the target, and the motion state of the complex target can be effectively fitted, so as to obtain a better perception fusion result.
In an optional implementation manner, updating the prediction data after the timestamp alignment processing by using a motion model in an interactive multi-model and the target observation data to obtain a fused motion state of each target includes: determining a second confidence coefficient of each motion model according to the target observation data, wherein the second confidence coefficient represents the matching degree between the motion state of each target predicted by each motion model at the target observation time and the actual motion state of each target; updating the model probability of each motion model according to the second confidence coefficient, wherein the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time; and determining the fusion motion state and the covariance matrix of each target according to the updated model probability and the prediction data predicted by each motion model according to the target observation data, wherein the covariance matrix of one target is used for representing the correlation degree between the predicted fusion motion state of the target and the actual motion state of the target.
In the embodiment of the disclosure, the second confidence of each motion model is determined through the interactive multi-model, and the model probability of each motion model is updated according to the second confidence, so that the fusion motion state and the covariance matrix of the target are determined according to the updated model probability and the prediction data predicted by each motion model according to the target observation data, a more accurate perception fusion result can be obtained, and the motion state of the complex target can be effectively fitted.
In an alternative embodiment, the determining the second confidence of each motion model according to the target observation data includes: performing Kalman filtering updating on the prediction data subjected to the timestamp alignment processing by using each motion model and the target observation data to obtain a Kalman filtering updating result; determining the second confidence level of each motion model in the interactive multi-model according to the Kalman filtering updating result.
In the embodiment of the disclosure, the second confidence of each motion model in the interactive multi-model is determined through a kalman filtering update result, so that a more accurate second confidence can be obtained, and a more accurate sensing fusion result can be obtained when the fusion motion state and the covariance matrix of the target are determined according to the second confidence, thereby effectively fitting the motion state of the complex target.
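A Kalman filter measurement update yields both the corrected state and a Gaussian measurement likelihood; in an IMM estimator that likelihood is what serves as the per-model (second) confidence. The following is a minimal sketch with an assumed one-dimensional position/velocity state and position-only measurement; the patent does not specify the state or measurement models.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman filter measurement update. Returns the updated state,
    updated covariance, and the Gaussian innovation likelihood usable
    as an IMM model confidence."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    k = len(z)
    likelihood = np.exp(-0.5 * y @ np.linalg.inv(S) @ y) / np.sqrt(
        (2 * np.pi) ** k * np.linalg.det(S))
    return x_new, P_new, likelihood

# Position/velocity state, position-only measurement.
x = np.array([0.0, 1.0])
P = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
z = np.array([1.0])
x_new, P_new, lik = kalman_update(x, P, z, H, R)
assert np.allclose(x_new, [0.5, 1.0])  # gain 0.5 pulls position halfway
assert lik > 0
```

In an IMM, the likelihood returned for each model would be multiplied into that model's predicted probability and renormalized, which is the model-probability update described above.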
In a second aspect, an embodiment of the present disclosure provides a data processing apparatus, including: the acquisition unit is used for acquiring observation data acquired by a plurality of sensors; the data association unit is used for performing data association on the observation data and at least one target according to the prediction data in the annular cache region to obtain target observation data associated with each target; the predicted data in the annular cache area represent the motion state of the target predicted according to the observation data collected at each observation moment in the target time window; the alignment processing unit is used for carrying out time stamp alignment processing on the target observation data and the prediction data in the annular cache region; and the data updating unit is used for updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the fusion motion state of each target.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the data processing method according to any one of the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the steps of the data processing method as described in any one of the above first aspects.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The following drawings depict only certain embodiments of the disclosure and should therefore not be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without creative effort.
Fig. 1 shows a flow chart of a data processing method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a data timing diagram of a timestamp alignment process provided by an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a specific method for performing data association between observed data and a target in a data processing method provided by an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating a specific method of performing timestamp alignment processing on target observation data and predicted data in a ring cache in a data processing method provided by an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram illustrating a data processing method provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a data processing apparatus provided by an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Research shows that, for asynchronous sensors, because some sensors have long data transmission delays and others have high acquisition frequencies, setting a timer for each asynchronous sensor still cannot guarantee consistent data acquisition intervals; in this case, data from sensors with long transmission delays or high acquisition frequencies must be discarded. It then becomes difficult to fuse the data of the asynchronous sensors efficiently, resulting in poor stability of the predicted motion state of the target.
Based on the above research, the present disclosure provides a data processing method, an apparatus, an electronic device, and a computer-readable storage medium. To use the data of every asynchronous sensor, embodiments of the disclosure store, in a ring buffer, the motion states of each target predicted from the observation data collected within a target time window, perform timestamp alignment between the prediction data in the ring buffer and the target observation data of each target, and determine the fused motion state of each target from the aligned result. Because the fused motion states at historical times are kept in the ring buffer, all data from each sensor can be used effectively without setting a timer for each asynchronous sensor, and the predicted state of each target remains stable during perception fusion.
To facilitate understanding of the present embodiment, first, a data processing method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the data processing method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the data processing method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a data processing method provided in an embodiment of the present disclosure is shown, where the method includes steps S101 to S107, where:
s101: and acquiring observation data acquired by a plurality of sensors.
In the disclosed embodiments, the types of the plurality of sensors depend on the installation scenario. For example, if the sensors are installed on an autonomous vehicle, they may include a camera, a lidar sensor, a millimeter wave radar sensor, and the like, used to detect targets in the vehicle's driving environment.
S103: performing data association on the observation data and at least one target according to the prediction data in the annular cache region to obtain target observation data associated with each target; the predicted data in the annular buffer area represents the motion state of the target predicted according to the observation data collected at each observation moment in the target time window.
If there are multiple targets, the observation data of the plurality of sensors covers multiple targets. In this case, the observation data must be associated with the targets according to the prediction data in the ring buffer to obtain the target observation data belonging to each target.
It should be noted that, in the embodiment of the present disclosure, each target corresponds to a ring buffer, and the prediction data of the target is stored in the ring buffer.
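The per-target ring buffer described above can be sketched as a fixed-capacity queue of timestamped states, where filling the buffer evicts the oldest entry and thereby slides the target time window forward. The class name, capacity, and state format below are assumptions for illustration.

```python
from collections import deque

class StateRingBuffer:
    """Ring buffer holding one target's predicted motion states, keyed
    by timestamp. When full, the oldest entry (the start of the sliding
    target time window) is evicted automatically."""

    def __init__(self, capacity):
        self._entries = deque(maxlen=capacity)  # (timestamp, state) pairs

    def append(self, timestamp, state):
        self._entries.append((timestamp, state))

    def lookup(self, timestamp):
        """Return the prediction stored for `timestamp`, or None."""
        for ts, state in self._entries:
            if ts == timestamp:
                return state
        return None

    @property
    def window(self):
        """Timestamps currently covered by the target time window."""
        return [ts for ts, _ in self._entries]

buf = StateRingBuffer(capacity=3)
for t in (1, 2, 3, 4):
    buf.append(t, {"x": float(t)})
assert buf.window == [2, 3, 4]        # timestamp 1 was evicted
assert buf.lookup(3) == {"x": 3.0}
assert buf.lookup(1) is None
```

Each tracked target would own one such buffer, matching the one-buffer-per-target arrangement noted above.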
S105: and carrying out time stamp alignment processing on the target observation data and the prediction data in the annular cache region.
In the embodiment of the present disclosure, after the observation data is associated with the multiple targets, the timestamps (i.e., observation times) of the associated target observation data do not necessarily correspond to the timestamps of the prediction data in the ring buffer; timestamp alignment between the target observation data and the prediction data in the ring buffer is therefore required.
For example, as shown in fig. 2, the plurality of sensors includes a camera, a lidar sensor, and a millimeter wave radar sensor. As shown in fig. 2, the millimeter wave radar sensor acquires observation data M1 and M2 at times A1 and A2; the lidar sensor acquires observation data M3 and M5 at times A3 and A5; and the camera acquires observation data M4 at time A4. The timestamps of the prediction data stored in the ring buffer are B1, B2, B3, B4, B5, and B6. At observation time A2, no prediction data corresponding to A2 exists in the ring buffer, so it can be determined that the timestamps of the target observation data and the prediction data are not aligned.
It should be noted that one possible reason why no prediction data corresponding to observation time A2 exists in the ring buffer is that the observation data collected at time A2 was not uploaded to the computer device in time, but arrived after some delay, so no prediction data was created for time A2. To still apply this observation in the data fusion method, a prediction data entry must be interpolated into the ring buffer, with time A2 as the timestamp of the interpolated prediction data. In this way, delayed data can participate in fusion rather than being discarded: all observation data acquired by the sensors can be used effectively, and the stability of the target's predicted state is preserved during perception fusion.
Based on this, in the embodiment of the present disclosure, time stamp alignment processing needs to be performed on the target observed data and the predicted data in the ring buffer, and as can be seen from fig. 2, the predicted data corresponding to the a2 time can be obtained by interpolation according to the predicted data corresponding to the B2 or B3 time, so as to achieve time stamp alignment between the target observed data and the predicted data in the ring buffer.
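The interpolation at A2 from the bracketing buffered predictions (B2 and B3 in fig. 2) can be sketched as a linear interpolation over timestamps. Linear interpolation is an assumption here; the patent only requires that a prediction be interpolated at the observation timestamp.

```python
import numpy as np

def interpolate_state(t_obs, t_before, state_before, t_after, state_after):
    """Linearly interpolate a prediction at the observation timestamp
    (e.g. A2) from the buffered predictions that bracket it (e.g. the
    predictions stamped B2 and B3), so a delayed observation can still
    be fused with a timestamp-aligned prediction."""
    alpha = (t_obs - t_before) / (t_after - t_before)
    return (1 - alpha) * np.asarray(state_before) + alpha * np.asarray(state_after)

# Buffered states at t=10 and t=20; a delayed observation is stamped t=14.
s = interpolate_state(14.0, 10.0, np.array([0.0, 0.0]), 20.0, np.array([10.0, 5.0]))
assert np.allclose(s, [4.0, 2.0])
```

The interpolated entry would then be stored in the ring buffer with the observation time as its timestamp, completing the alignment.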
S107: and updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the fusion motion state of the target.
After the time stamps of the target observation data and the prediction data in the annular cache region are aligned, the prediction data after time stamp alignment processing can be updated according to the target observation data, and the motion state of each target is obtained. When the motion state of each target is determined by using the prediction data after the timestamp alignment processing, the stability of the motion state of each target after updating can be improved, and a more accurate motion state can be obtained.
In the embodiment of the present disclosure, the ring buffer is used to store the prediction data determined according to the observation data acquired at each observation time in the target time window, and the motion state of the target is determined according to the prediction data in the ring buffer, so that the motion state at the historical time can be stored by using the ring buffer technology, and thus, all data of each sensor can be effectively used, and meanwhile, the stability of the prediction state of the target can be ensured in the data perception fusion process.
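A minimal sketch of such a per-target ring buffer, assuming one buffer per target holding (timestamp, prediction) pairs; the capacity, timestamps, and class name are hypothetical:

```python
from collections import deque

class PredictionRingBuffer:
    """Fixed-capacity buffer of (timestamp, prediction) pairs, one per target;
    once full, pushing a new prediction evicts the oldest one automatically."""

    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)

    def push(self, timestamp, prediction):
        self._buf.append((timestamp, prediction))

    def find(self, timestamp):
        """Return the prediction stored for this timestamp, or None."""
        for ts, pred in self._buf:
            if ts == timestamp:
                return pred
        return None

    def oldest_ts(self):
        return self._buf[0][0] if self._buf else None

    def newest_ts(self):
        return self._buf[-1][0] if self._buf else None

# A target time window of six predictions; the seventh push evicts the oldest.
rb = PredictionRingBuffer(capacity=6)
for i, ts in enumerate([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]):
    rb.push(ts, {"state": i})
```

The eviction of the oldest entry is what bounds the target time window while retaining the motion states of recent historical times.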
In an alternative embodiment, as shown in fig. 3, in step S103, performing data association on the observation data and the targets to obtain target observation data associated with each target, including the following processes:
step S1031, searching the annular cache region for first prediction data whose timestamps correspond to the observation times of the observation data;
step S1032, determining the similarity between the first prediction data and the observation data;
step S1033, if the similarity is greater than or equal to a preset threshold, determining the observation data as target observation data of a target corresponding to the ring cache region.
In the embodiment of the present disclosure, after the observation data with the timestamp (i.e., the observation time) is acquired, the prediction data corresponding to the timestamp, i.e., the first prediction data, may be searched in the ring cache. As can be seen from the above description, each target corresponds to one ring buffer, and therefore, if there are a plurality of targets, the first prediction data may also be prediction data determined in different ring buffers.
Therefore, in the embodiment of the present disclosure, after the first prediction data is determined, the similarity between the first prediction data and the observation data needs to be determined. For example, a Euclidean distance between the first prediction data and the observation data may be calculated, and whether the observation data is the target observation data of the corresponding target is then determined according to the Euclidean distance: a smaller Euclidean distance indicates a higher similarity between the first prediction data and the observation data, and a larger Euclidean distance indicates a lower similarity.
It should be noted that, in the embodiment of the present disclosure, in addition to determining the similarity between the first prediction data and the observation data by calculating the Euclidean distance as described above, other similarity calculation methods that can replace the Euclidean distance may also be adopted.
After the similarity is determined, the similarity may be compared with a preset threshold. If the similarity is greater than or equal to the preset threshold, the observation data is determined to be the target observation data of the target corresponding to the annular buffer; at this time, the observation data may be stored in the measurement buffer queue corresponding to the target, where each target corresponds to one measurement buffer queue.
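The association step can be sketched as follows. Mapping the Euclidean distance to a similarity via 1/(1+d) is an assumption on our part (the embodiment only requires that a smaller distance mean a higher similarity), and the target names and threshold are hypothetical:

```python
import math

def euclidean_distance(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)))

def associate(observation, first_predictions, threshold=0.5):
    """Associate one observation with the target whose first prediction data is
    most similar; similarity is 1 / (1 + Euclidean distance), so a smaller
    distance yields a higher similarity."""
    best_target, best_sim = None, 0.0
    for target_id, pred in first_predictions.items():
        sim = 1.0 / (1.0 + euclidean_distance(pred, observation))
        if sim > best_sim:
            best_target, best_sim = target_id, sim
    return best_target if best_sim >= threshold else None

preds = {"target_1": [10.0, 5.0], "target_2": [40.0, -3.0]}
match = associate([10.2, 5.1], preds)        # close to target_1's prediction
no_match = associate([100.0, 100.0], preds)  # too dissimilar to any target
```

An observation that matches no target above the threshold would be left unassociated rather than stored in any measurement buffer queue.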
According to the description, the observation data and the targets are associated through similarity calculation, the target observation data of each target can be quickly and accurately determined from a large amount of observation data, and therefore when the fusion motion state of the targets is determined according to the target observation data and the prediction data, an accurate prediction result can be obtained.
In the embodiment of the present disclosure, after the observation data and the target are subjected to data association according to the method described above, the target observation data and the prediction data in the ring buffer may be subjected to time stamp alignment processing.
In an alternative embodiment, as shown in fig. 4, in step S105, performing a timestamp alignment process on the target observation data and the predicted data in the ring buffer, including the following steps:
step S1051, determining the matching relation between the target observation time of the target observation data and the target time window;
step S1052, when it is determined that the target observation time is within the target time window according to the matching relationship, or the target observation time is greater than the maximum timestamp of the prediction data in the target time window, interpolating to obtain the prediction data of the target observation data and store the prediction data in the ring buffer, and using the target observation time as the timestamp of the prediction data obtained by interpolation.
In the embodiment of the present disclosure, each target observation datum that has most recently entered the measurement buffer queue may be processed according to the following three cases, specifically including:
the first condition is as follows:
and if the target observation time of the target observation data is determined to be smaller than the timestamp of the earliest predicted data in the annular cache region according to the matching relation, discarding the target observation data. For example, as shown in FIG. 2, the start timestamp of the target time window is B1, and the end timestamp of the target time window is B6. As can be seen from fig. 2, the sensor observation data (millimeter wave radar sensor observation data) with the time stamps of C1 and C2 is smaller than the time stamp B1 of the earliest predicted data in the status buffer, and at this time, the target observation data can be discarded.
Case two:
if the target observation time of the target observation data is determined to be greater than the timestamp of the oldest prediction data in the annular cache region and less than the timestamp of the newest prediction data in the annular cache region according to the matching relationship, the target observation time can be determined to be in a target time window according to the matching relationship, and at the moment, the prediction data corresponding to the target observation time can be searched in the annular cache region; and under the condition that the corresponding prediction data is not found, inserting the prediction data of the target observation data in the annular cache region, adding the prediction data obtained by interpolation into the annular cache region, and then updating the prediction data. In case that the corresponding prediction data is found, the step of updating the found prediction data may be performed.
Case three:
if it is determined according to the matching relationship that the target observation time of the target observation data is greater than the timestamp of the latest prediction data in the annular cache region (i.e., the maximum timestamp of the prediction data within the target time window), the prediction data of the target observation data can be interpolated in the annular cache region, the prediction data obtained by interpolation is added to the annular cache region, and then the prediction data is updated.
In the embodiment of the present disclosure, by performing timestamp alignment processing on the target observation data and the prediction data in the annular buffer in the manner described above, the stability of the latest motion state of the target can be ensured.
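The three cases above can be sketched as follows, assuming the buffered prediction timestamps are available as a sorted list and `insert_prediction` stands in for the interpolation/extrapolation step (both names are hypothetical):

```python
def align_observation(obs_ts, timestamps, insert_prediction):
    """Dispatch one target observation to the three alignment cases.
    `timestamps` is the sorted list of prediction timestamps in the ring
    buffer; `insert_prediction` interpolates or extrapolates a prediction
    for obs_ts and stores it in the buffer."""
    oldest, newest = timestamps[0], timestamps[-1]
    if obs_ts < oldest:
        return "discard"              # case one: observation predates the window
    if obs_ts <= newest:              # case two: inside the target time window
        if obs_ts not in timestamps:
            insert_prediction(obs_ts) # no prediction at this time: interpolate
        return "update"
    insert_prediction(obs_ts)         # case three: newer than the newest prediction
    return "update"

inserted = []
ts = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
results = [align_observation(t, ts, inserted.append) for t in (0.05, 0.25, 0.3, 0.7)]
```

Only the observations at 0.25 (case two, no stored prediction) and 0.7 (case three) trigger an interpolated or extrapolated insertion.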
In this embodiment of the present disclosure, the timestamp alignment processing may be further performed on the target observation data and the prediction data in the ring cache region in a manner described in the following steps, specifically including the following processes:
step S1051, under the condition that the annular cache region does not contain the prediction data corresponding to the target observation data, predicting the prediction data corresponding to the target observation data according to the prediction data of the target moment in the annular cache region to obtain second prediction data; the target time is a time before the target observation time of the target observation data in the target time window of the annular cache region and/or a time after the target observation time;
step S1052, inserting the second prediction data into the storage location of the ring buffer corresponding to the target observation time.
As can be seen from the above three cases, in case two and case three: if it is determined according to the matching relationship that the target observation time of the target observation data is greater than the timestamp of the earliest prediction data in the annular cache region and less than the timestamp of the latest prediction data, and no prediction data corresponding to the target observation time is found in the annular cache region; or if the target observation time is determined, according to the matching relationship, to be greater than the timestamp of the latest prediction data in the annular cache region; then the prediction data of the target observation data is predicted according to the prediction data corresponding to the target time in the annular cache region to obtain the second prediction data. The interpolated second prediction data is added to the annular cache region, the target observation time is taken as the timestamp of the interpolated second prediction data, and the second prediction data is then updated.
As can be seen from the above description, in the embodiment of the present disclosure, by performing timestamp alignment processing on the circular buffer queue and the target observation data, interpolation calculation on the prediction data of the target corresponding to the target observation data in the target time window can be implemented, so as to implement accurate alignment between the timestamp of the circular buffer queue and the timestamp of the target observation data. After the time stamps are aligned, when updating is carried out according to the prediction data after the time stamp alignment processing, a more accurate fusion motion state can be obtained.
In an optional implementation manner of the embodiment of the present disclosure, in step S1051, predicting, according to the prediction data of the target time in the ring buffer, the prediction data corresponding to the target observation data to obtain the second prediction data includes the following processes:
(1) predicting the prediction data corresponding to the target observation data through each motion model in the interactive multi-model;
(2) and fusing the predicted data corresponding to the target observation data predicted by each motion model to obtain the second predicted data.
The existing sensor data perception fusion method usually adopts a single motion model to fit the motion state of a target; however, due to the complexity of target motion patterns, it is difficult to effectively fit the motion state of the target with a single motion model. For example, in the field of vehicle automatic driving, the motion pattern executed by a certain target during vehicle driving may be complex, such as a sequence of going straight, turning right, going straight again, and merging. In this case, the complex motion state cannot be fitted by one motion model.
Based on this, in the embodiment of the present disclosure, the prediction data corresponding to the target observation data is predicted by each motion model in the interactive multi-model, and the prediction data predicted by the motion models are then fused by summation to obtain the second prediction data.
As can be seen from the above description, in the embodiment of the present disclosure, the interactive multi-model is used to predict the second prediction data corresponding to the target observation data. The second prediction data is predicted through the interactive multi-model, the motion state of the complex target can be effectively fitted, and therefore a better perception fusion result can be obtained.
In an optional embodiment, the predicting data corresponding to the target observation data may be predicted in the following manner, specifically including:
(1) predicting model probability according to the prediction data of the target moment through each motion model; the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time.
Specifically, for each of the motion models, a target transition probability is determined, where the target transition probability represents the probability of transitioning to that motion model from the other motion models in the interactive multi-model. In the disclosed embodiment, for the i-th motion model, $\pi_{ji}$ denotes the probability that the target transitions from motion model $j$ to motion model $i$, where motion model $j$ is one of the other motion models described above.

Then, a first confidence of the other motion model at the target time can be determined; the first confidence is used to represent the probability that the actual motion of the target at the target time conforms to the other motion model.

If the target observation time is denoted as time $k$, the target time may be denoted as time $k-1$. At this time, the first confidence of the other motion model $j$ at time $k-1$ (i.e., the target time) is expressed as $\mu_j(k-1)$.

Finally, the model probability may be determined based on the target transition probability and the first confidence. In the disclosed embodiment, after the target transition probability $\pi_{ji}$ and the first confidence $\mu_j(k-1)$ are determined, they can be weighted and summed to obtain the model probability of motion model $i$, where the calculation formula for determining the model probability based on the target transition probability and the confidence can be expressed as:

$$\bar{c}_i = \sum_{j=1}^{r} \pi_{ji}\,\mu_j(k-1)$$

where $\bar{c}_i$ is the model probability of the above-mentioned motion model $i$ and $r$ is the number of motion models in the interactive multi-model.

After determining the model probability of motion model $i$ in the above-described manner, each term $\pi_{ji}\,\mu_j(k-1)$ is further normalized by $\bar{c}_i$ according to the following formula to obtain the normalized model probability:

$$\mu_{j|i}(k-1\,\vert\,k-1) = \frac{\pi_{ji}\,\mu_j(k-1)}{\bar{c}_i}$$

where $\mu_{j|i}(k-1\,\vert\,k-1)$ represents the model probability after the normalization process.
(2) And acquiring the fusion motion state of each target at the target moment determined according to each motion model.
In the embodiment of the present disclosure, after the model probability is predicted from the prediction data of the target time, the final state result (i.e., the fused motion state) of the target determined by motion model $j$ at time $k-1$ may also be determined; for example, the fused motion state is represented as $\hat{x}_j(k-1\,\vert\,k-1)$.
(3) and determining the prediction data corresponding to the target observation data predicted by each motion model based on the model probability and the fusion motion state of each target at the target moment determined according to each motion model.
After the fused motion state of the target at the target time is determined through step (2), the fused motion state $\hat{x}_j(k-1\,\vert\,k-1)$ determined at time $k-1$ can be fused with the normalized model probability $\mu_{j|i}(k-1\,\vert\,k-1)$ at time $k-1$ to obtain the prediction data corresponding to the target observation data predicted by each motion model at time $k$. Then, the prediction data predicted by all motion models can be summed to obtain the second prediction data.

In the embodiment of the present disclosure, the prediction data corresponding to the target observation data predicted by each motion model may be determined by the following formula:

$$\hat{x}_{i}(k) = \sum_{j=1}^{r} \mu_{j|i}(k-1\,\vert\,k-1)\,\hat{x}_j(k-1\,\vert\,k-1)$$

Then, the prediction data predicted by all motion models are summed according to the formula

$$\hat{x}(k) = \sum_{i=1}^{r} \bar{c}_i\,\hat{x}_{i}(k)$$

with the model probabilities $\bar{c}_i$ serving as the summation weights, to obtain the second prediction data $\hat{x}(k)$.
It should be noted that, in the embodiment of the present disclosure, each motion model may predict a corresponding model probability; in addition, each motion model may also predict a corresponding covariance matrix $P_j(k-1\,\vert\,k-1)$.

In step (3), in addition to determining the fused motion state corresponding to the target observation data predicted by each motion model, a corresponding covariance matrix may be predicted, where the covariance matrix is used to represent the degree of correlation between the fused motion state and the actual motion state of the target.

In the disclosed embodiment, the covariance matrix may be determined as described by the following equation:

$$P(k) = \sum_{j=1}^{r} \bar{c}_j\left[P_j(k-1\,\vert\,k-1) + \bigl(\hat{x}_j(k-1\,\vert\,k-1)-\hat{x}(k)\bigr)\bigl(\hat{x}_j(k-1\,\vert\,k-1)-\hat{x}(k)\bigr)^{\top}\right]$$
as can be seen from the above description, for each motion model, first, a matrix difference between the second prediction data and the fusion motion state determined by each motion model at the time k-1 may be calculated, then, according to the matrix difference, a transposed matrix of the matrix difference, and a model probability corresponding to each motion model, a degree of association between the motion model and the second prediction data is determined, and then, the degree of association is added to the covariance matrix determined at the time k-1 of each motion model, so as to obtain an addition calculation result. For each motion model, the addition calculation result can be determined in the manner described above, and then the addition calculation result is subjected to summation operation to obtain the covariance matrix determined at the time k.
As can be seen from the above description, in the embodiment of the present disclosure, the interactive multi-model is used to predict the second prediction data corresponding to the target observation data. The second prediction data is predicted through the interactive multi-model, the motion state of the complex target can be effectively fitted, and therefore a better perception fusion result can be obtained.
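The mixing computation in steps (1) to (3) above follows the standard interacting-multiple-model pattern; under that reading (an assumption on our part, with illustrative transition probabilities), the model probabilities and normalized mixing probabilities can be sketched as:

```python
def mixing_probabilities(pi, mu_prev):
    """IMM mixing step: pi[j][i] is the transition probability from motion
    model j to motion model i, and mu_prev[j] is model j's probability at
    time k-1. Returns the per-model probabilities c_bar and the normalized
    mixing probabilities mu_mix[j][i]."""
    r = len(mu_prev)
    c_bar = [sum(pi[j][i] * mu_prev[j] for j in range(r)) for i in range(r)]
    mu_mix = [[pi[j][i] * mu_prev[j] / c_bar[i] for i in range(r)]
              for j in range(r)]
    return c_bar, mu_mix

# Two hypothetical motion models, e.g. constant-velocity vs. turning.
pi = [[0.9, 0.1],
      [0.2, 0.8]]
c_bar, mu_mix = mixing_probabilities(pi, [0.7, 0.3])
```

Each column of the mixing probabilities sums to one, which is what the normalization by the model probability achieves.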
In the embodiment of the present disclosure, after performing timestamp alignment on the target observation data and the prediction data in the annular cache region in the manner described above, the prediction data after the timestamp alignment may be updated according to the target observation data, so as to obtain the fusion motion state of the target.
In an optional embodiment, in step S107, updating the prediction data after the timestamp alignment processing according to the target observation data to obtain a fusion motion state of the target, includes the following steps:
and updating the prediction data after the timestamp alignment processing through a motion model in the interactive multi-model and the target observation data to obtain the fusion motion state of each target.
Specifically, first, a second confidence level of each motion model in the interactive multi-model may be determined, where the second confidence level is used to characterize a degree of matching between the motion state of the target predicted by the motion model at the target observation time and the actual motion state of the target.
In the embodiment of the present disclosure, each motion model and the target observation data are used to perform a Kalman filtering update on the prediction data after the timestamp alignment processing, so as to obtain a Kalman filtering update result; here, the prediction data after the timestamp alignment processing is the second prediction data determined according to steps (1) to (3). Specifically, the Kalman filtering update may be performed on the second prediction data using the target observation data together with an extended Kalman filter update or an unscented Kalman filter update, and the resulting Kalman filtering update result may include a measurement residual and a measurement residual covariance matrix. Then, the second confidence of each motion model in the interactive multi-model is determined according to the Kalman filtering update result: the measurement residual and the measurement residual covariance matrix are taken as the mean and the variance of a Gaussian model, respectively, and the Gaussian model is then used to determine the second confidence of each motion model.
In particular, in the disclosed embodiments, the second confidence may be determined by the formula

$$\Lambda_i(k) = \mathcal{N}\bigl(\tilde{z}_i(k);\,0,\,S_i(k)\bigr)$$

where $\Lambda_i(k)$ is the second confidence, $\mathcal{N}(\cdot)$ is the Gaussian model, $\tilde{z}_i(k)$ is the measurement residual determined from the target observation data (i.e., the mean of the Gaussian model), and $S_i(k)$ is the measurement residual covariance matrix (i.e., the variance of the Gaussian model).
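The second confidence as a Gaussian density of the measurement residual can be sketched as follows; for simplicity the sketch assumes a diagonal measurement residual covariance matrix, and the residual and covariance values are illustrative:

```python
import math

def model_likelihood(residual, s_diag):
    """Second confidence of one motion model: the Gaussian density of the
    measurement residual, assuming (for simplicity) a diagonal measurement
    residual covariance matrix given by s_diag."""
    mahal = sum(r * r / s for r, s in zip(residual, s_diag))
    norm = math.sqrt((2.0 * math.pi) ** len(residual) * math.prod(s_diag))
    return math.exp(-0.5 * mahal) / norm

# A small residual under a tight covariance yields a high confidence,
# a large residual a low one (values are illustrative).
hi = model_likelihood([0.1, 0.0], [0.5, 0.5])
lo = model_likelihood([3.0, 3.0], [0.5, 0.5])
```

A motion model whose prediction closely matches the observation (small residual) thus receives a high second confidence.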
After the second confidence level is determined, the model probability of each motion model may be updated according to the second confidence level, where the model probability is used to represent a probability that the actual motion state of the object matches the motion model at the observation time of the object.
In particular, the model probability of each motion model can be updated according to the formula

$$\mu_i(k) = \frac{\Lambda_i(k)\,\mu_i(k-1)}{\sum_{j=1}^{r}\Lambda_j(k)\,\mu_j(k-1)}$$

For motion model $i$, the second confidence $\Lambda_j(k)$ of motion model $j$ is first multiplied by the model probability $\mu_j(k-1)$ of motion model $j$ to obtain a product calculation result Z1; then, the product calculation results Z1 of all motion models $j$ are added to obtain the addition result P1, i.e., $\sum_{j=1}^{r}\Lambda_j(k)\,\mu_j(k-1)$. Then, the second confidence $\Lambda_i(k)$ of motion model $i$ is multiplied by the model probability $\mu_i(k-1)$ of motion model $i$ to obtain a product calculation result Z2, and the ratio between the product calculation result Z2 and the addition result P1 is calculated and determined as the model probability of motion model $i$ after updating.
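The Z1/P1/Z2 computation above reduces to weighting each model's prior probability by its second confidence and renormalizing; a sketch with hypothetical confidence values:

```python
def update_model_probabilities(confidences, mu_prev):
    """Weight each model probability by the model's second confidence
    (the Z products), then divide by their sum (P1) to renormalize."""
    z = [lam * mu for lam, mu in zip(confidences, mu_prev)]
    p1 = sum(z)
    return [zi / p1 for zi in z]

# Two models with equal prior probabilities; the model whose second
# confidence is higher ends up with the higher updated probability.
mu = update_model_probabilities([0.8, 0.2], [0.5, 0.5])
```

The updated probabilities always sum to one, since each Z product is divided by P1.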
After the updated model probability is determined, the fusion motion state and the covariance matrix of the target can be determined according to the updated model probability and the prediction data predicted by each motion model according to the target observation data, wherein the covariance matrix is used for representing the correlation degree between the fusion motion state and the actual motion state of the target.
In particular, in the disclosed embodiments, the fused motion state of the target may be calculated by the formula

$$\hat{x}(k\,\vert\,k) = \sum_{i=1}^{r} \mu_i(k)\,\hat{x}_i(k\,\vert\,k)$$

where $\mu_i(k)$ is the updated model probability and $\hat{x}_i(k\,\vert\,k)$ is the prediction data predicted by motion model $i$ from the target observation data. In the embodiment of the present disclosure, the updated model probability of each motion model may be multiplied by the prediction data predicted by that motion model, and the products then summed over all motion models to obtain the fused motion state of the target.
After the fused motion state is obtained, the covariance matrix of the target can be determined according to the formula:

$$P(k\,\vert\,k) = \sum_{i=1}^{r} \mu_i(k)\left[P_i(k\,\vert\,k) + \bigl(\hat{x}_i(k\,\vert\,k)-\hat{x}(k\,\vert\,k)\bigr)\bigl(\hat{x}_i(k\,\vert\,k)-\hat{x}(k\,\vert\,k)\bigr)^{\top}\right]$$

As can be seen from the above description, for each motion model, the matrix difference between the prediction data predicted by that motion model and the fused motion state of the target may first be calculated; then, the degree of association between the prediction data of the motion model and the fused motion state is determined according to the matrix difference, the transpose of the matrix difference, and the model probability corresponding to that motion model; the degree of association is then added to the covariance matrix $P_i(k\,\vert\,k)$ determined by that motion model at time $k$ to obtain an addition calculation result. The addition calculation result can be determined in the manner described above for each motion model, and the addition calculation results are then summed to obtain the covariance matrix determined at time $k$.
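The probability-weighted combination of the per-model estimates described above can be sketched as follows; the state vectors, covariance matrices, and model probabilities are illustrative:

```python
def combine_estimates(mu, states, covs):
    """Fuse per-model estimates: probability-weighted mean of the states,
    with the spread-of-means outer product added to each model's covariance."""
    n = len(states[0])
    x = [sum(m * s[d] for m, s in zip(mu, states)) for d in range(n)]
    P = [[0.0] * n for _ in range(n)]
    for m, s, C in zip(mu, states, covs):
        diff = [s[d] - x[d] for d in range(n)]      # matrix difference
        for a in range(n):
            for b in range(n):
                P[a][b] += m * (C[a][b] + diff[a] * diff[b])
    return x, P

mu = [0.6, 0.4]
states = [[1.0, 0.0], [2.0, 1.0]]
covs = [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]]
x, P = combine_estimates(mu, states, covs)
```

The outer-product term inflates the covariance when the motion models disagree, which is exactly the degree-of-association contribution described above.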
As can be seen from the above description, in the embodiment of the present disclosure, the interactive multi-model is used to update the prediction data after the timestamp alignment processing, so as to obtain the motion state of the target, and the motion state of the complex target can be effectively fitted, so that a better perception fusion result can be obtained.
The data processing process is described below with reference to fig. 5, and as can be seen from fig. 5, the plurality of sensors includes a camera, a lidar sensor, and a millimeter-wave radar sensor.
As can be seen from fig. 5, the camera, the lidar sensor, and the millimeter wave radar sensor collect observation data to obtain an image frame, a lidar data frame, and a millimeter wave radar data frame (i.e., observation data), respectively. And then, performing data association operation on the image frame, the laser radar data frame and the millimeter wave radar data frame according to the predicted data in the annular buffer area, thereby determining a target corresponding to each observation datum. After the data association operation is performed, target observation data to which each target belongs can be obtained. Then, the target observation data and the prediction data in the ring buffer may be subjected to a time stamp alignment process, so that the prediction data after the time stamp alignment process includes the prediction data of the target corresponding to each target observation time of the target observation data.
For example, as shown in fig. 5, the plurality of sensors includes: a camera, a lidar sensor, and a millimeter wave radar sensor. As can be seen from fig. 5, for the target observation data, at times A1 and A2, the millimeter wave radar sensor acquires data M1 and M2; at times A3 and A5, the lidar sensor acquires data M3 and M5; at time A4, the camera acquires data M4. As can also be seen from fig. 5, the timestamps of the prediction data stored in the ring buffer are B1, B2, B3, B4, B5, and B6, respectively, and the timestamps of the target observation data and the prediction data are not aligned.
Based on this, in the embodiment of the present disclosure, timestamp alignment processing needs to be performed on the target observation data and the prediction data in the ring buffer. As can be seen from fig. 5, the prediction data corresponding to the observation time of M2 can be obtained by interpolation from the prediction data corresponding to time B2 or B3, so as to achieve timestamp alignment between the target observation data and the prediction data in the ring buffer.
After the target observation data and the prediction data in the annular cache region are subjected to the timestamp alignment processing, the prediction data subjected to the timestamp alignment processing can be updated, and the fusion motion state of the target is obtained. As shown in fig. 5, the prediction data after the timestamp alignment processing may be updated through an Interactive Multiple Model (IMM), so as to obtain a fusion motion state of the target.
As can be seen from the above description, embodiments of the present disclosure propose to use a buffer technique to efficiently use all measurements of each sensor and maintain stability of the latest state of the target. Compared with the prior art, the method provided by the embodiment of the disclosure can effectively fit the motion state of the complex target, so that a better perception fusion result can be obtained.
It will be understood by those skilled in the art that, in the method of the present invention, the order in which the steps are written does not imply a strict order of execution or constitute any limitation on the implementation; the specific order of execution of the steps should be determined by their function and possible inherent logic.
Example two:
based on the same inventive concept, a data processing apparatus corresponding to the data processing method is also provided in the embodiments of the present disclosure, and because the principle of the apparatus in the embodiments of the present disclosure for solving the problem is similar to the data processing method described above in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 6, a schematic diagram of a data processing apparatus provided in an embodiment of the present disclosure is shown, where the apparatus includes: an acquisition unit 61, a data association unit 62, an alignment processing unit 63, and a data update unit 64; wherein,
an acquisition unit 61 configured to acquire observation data acquired by a plurality of sensors;
the data association unit 62 is configured to perform data association on the observation data and at least one target according to the predicted data in the annular cache region, so as to obtain target observation data associated with each target; the predicted data in the annular cache area represent the motion state of the target predicted according to the observation data collected at each observation moment in the target time window;
an alignment processing unit 63, configured to perform timestamp alignment processing on the target observation data and the prediction data in the ring buffer;
and the data updating unit 64 is configured to update the prediction data after the timestamp alignment processing according to the target observation data, so as to obtain a fusion motion state of each target.
In the embodiment of the disclosure, the motion state of the target predicted according to the observation data acquired at each observation time in the target time window is stored through the annular cache region, and the fusion motion state of the target is determined according to the prediction data in the annular cache region, so that the fusion motion state at the historical time can be stored by adopting the annular cache technology, all data of each sensor can be effectively utilized, and meanwhile, the stability of the prediction state of the target can be ensured in the process of data perception fusion.
In a possible implementation, the data association unit 62 is configured to: searching for first prediction data corresponding to the observation data in the annular cache region; determining a similarity between the first predicted data and the observed data; and if the similarity is greater than or equal to a preset threshold value, determining the observation data as target observation data of a target corresponding to the annular cache region.
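A minimal sketch of this association step follows. The disclosure does not fix a similarity metric, so the inverse-distance similarity, the data layout, and all names here are assumptions of the example only.

```python
import math

def associate(observation, predictions, threshold=0.5):
    """Decide whether an observation is target observation data of a target.

    `predictions` maps timestamp -> predicted position (x, y) for one
    target's buffer. The first prediction data is looked up by the
    observation's timestamp, a similarity is computed, and the observation
    is accepted only if similarity >= threshold, as in the disclosure.
    The similarity metric (inverse distance) is an assumed example.
    """
    first_pred = predictions.get(observation["t"])
    if first_pred is None:
        return False
    dist = math.hypot(observation["x"] - first_pred[0],
                      observation["y"] - first_pred[1])
    similarity = 1.0 / (1.0 + dist)
    return similarity >= threshold
```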
In a possible embodiment, the alignment processing unit 63 is further configured to: determine a matching relationship between a target observation time of the target observation data and the target time window; and, when the matching relationship indicates that the target observation time falls within the target time window, or the target observation time is later than the maximum timestamp of the prediction data in the target time window, obtain prediction data for the target observation data by interpolation, store it in the annular cache region, and take the target observation time as the timestamp of the interpolated prediction data.
In a possible embodiment, the alignment processing unit 63 is further configured to: under the condition that the annular cache region does not contain the prediction data corresponding to the target observation data, predicting the prediction data corresponding to the target observation data according to the prediction data of the target moment in the annular cache region to obtain second prediction data; the target time is a time before the target observation time of the target observation data in the target time window of the annular cache region and/or a time after the target observation time; and inserting the second prediction data into the storage position of the annular buffer zone corresponding to the target observation time.
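As an illustrative sketch of obtaining second prediction data from the prediction data at target times before and/or after the target observation time, the example below uses linear interpolation; this choice and the flat-extrapolation fallback are assumptions of the sketch, since the disclosure does not mandate a specific scheme.

```python
def interpolate_state(buffer, t_obs):
    """Interpolate a predicted state at observation time t_obs.

    `buffer` maps timestamp -> state vector (list of floats). The target
    times are the nearest buffered timestamps before and/or after t_obs,
    matching the disclosure; linear interpolation between them is this
    sketch's assumption.
    """
    times = sorted(buffer)
    before = [t for t in times if t <= t_obs]
    after = [t for t in times if t >= t_obs]
    if before and after:
        t0, t1 = before[-1], after[0]
        if t0 == t1:  # exact timestamp hit
            return list(buffer[t0])
        w = (t_obs - t0) / (t1 - t0)
        return [(1 - w) * a + w * b for a, b in zip(buffer[t0], buffer[t1])]
    if before:  # t_obs is newer than the window: hold the last state
        return list(buffer[before[-1]])
    return None  # older than the window: no prediction available
```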
In a possible embodiment, the alignment processing unit 63 is further configured to: predicting the prediction data corresponding to the target observation data through each motion model in the interactive multi-model; and fusing the predicted data corresponding to the target observation data predicted by each motion model to obtain the second predicted data.
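The fusion of per-model predictions can be sketched as the probability-weighted mixture used by standard interacting multiple model (IMM) estimators; the specific weighting below is this example's assumption.

```python
def fuse_model_predictions(predictions, model_probs):
    """Fuse per-model predicted states into the second prediction data.

    `predictions` is a list of state vectors, one per motion model (e.g.
    constant-velocity, constant-turn), and `model_probs` their model
    probabilities. The fusion is the standard probability-weighted
    mixture of IMM estimation.
    """
    assert abs(sum(model_probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    dim = len(predictions[0])
    return [sum(p * x[i] for p, x in zip(model_probs, predictions))
            for i in range(dim)]
```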
In a possible embodiment, the alignment processing unit 63 is further configured to: predicting model probabilities according to the prediction data of the target time through each motion model; the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time; acquiring the fusion motion state of each target at the target moment determined according to each motion model; and determining the prediction data which is predicted by each motion model and corresponds to the target observation data based on the model probability and the fusion motion state of each target at the target moment determined according to each motion model.
In a possible embodiment, the alignment processing unit 63 is further configured to: determine a target transition probability for each motion model, wherein the target transition probability represents the probability of transitioning to that motion model from the other motion models in the interactive multi-model; determine a first confidence of the other motion models at the target time, the first confidence representing the probability that the actual motion of the target conforms to those other motion models at the target time; and determine the model probability based on the target transition probability and the first confidence.
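Predicting the model probability from the target transition probabilities and the first confidences corresponds to the standard IMM mixing step, sketched below; the function and variable names are assumptions of this example.

```python
def predict_model_probabilities(transition, confidences):
    """Predicted model probability for each motion model.

    transition[i][j] is the probability of switching from model i to
    model j (read column-wise, the 'target transition probability' of
    model j), and `confidences` are the first confidences of the models
    at the target time. This is the standard IMM mixing step:
    mu_j = sum_i transition[i][j] * c_i.
    """
    n = len(confidences)
    return [sum(transition[i][j] * confidences[i] for i in range(n))
            for j in range(n)]
```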
In a possible implementation, the data updating unit 64 is configured to: and updating the prediction data after the timestamp alignment processing through a motion model in the interactive multi-model and the target observation data to obtain the fusion motion state of each target.
In a possible implementation, the data updating unit 64 is further configured to: determine a second confidence of each motion model according to the target observation data, wherein the second confidence represents the degree of matching between the motion state of each target predicted by each motion model at the target observation time and the actual motion state of that target; update the model probability of each motion model according to the second confidence, wherein the model probability represents the probability that the actual motion state of each target matches the motion model at the target observation time; and determine the fusion motion state and covariance matrix of each target according to the updated model probability and the prediction data predicted by each motion model from the target observation data, wherein the covariance matrix of a target represents the degree of correlation between the predicted motion state of the target and the actual motion state of the target.
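The update of the model probabilities with the second confidences, followed by fusion of the per-model states, can be sketched as follows; treating the second confidence as a measurement likelihood is this example's assumption, consistent with common IMM practice but not stated in the disclosure.

```python
def imm_update_step(model_probs, likelihoods, model_states):
    """Update model probabilities and fuse per-model states.

    `likelihoods` play the role of the second confidence (how well each
    model's prediction matches the observation). The posterior model
    probability is mu_k proportional to mu_k * L_k, and the fused state
    is the probability-weighted mixture, as in a standard IMM update.
    """
    weighted = [m * l for m, l in zip(model_probs, likelihoods)]
    total = sum(weighted)
    new_probs = [w / total for w in weighted]  # normalize
    dim = len(model_states[0])
    fused = [sum(p * s[i] for p, s in zip(new_probs, model_states))
             for i in range(dim)]
    return new_probs, fused
```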
In a possible implementation, the data updating unit 64 is further configured to: performing Kalman filtering updating on the prediction data subjected to the timestamp alignment processing by using each motion model and the target observation data to obtain a Kalman filtering updating result; determining the second confidence level of each motion model in the interactive multi-model according to the Kalman filtering updating result.
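A scalar Kalman filter update that also yields a Gaussian innovation likelihood — one plausible realization of deriving the second confidence from the Kalman filtering update result — is sketched below; the scalar (one-dimensional) model and all names are assumptions of the example.

```python
import math

def kalman_update_1d(x_pred, p_pred, z, r):
    """Scalar Kalman filter update plus a Gaussian likelihood.

    Returns the updated state, updated variance, and the likelihood of
    the measurement z under the innovation distribution N(0, S); that
    likelihood is what this sketch uses as the second confidence of a
    motion model.
    """
    s = p_pred + r                 # innovation covariance
    k = p_pred / s                 # Kalman gain
    innovation = z - x_pred
    x_new = x_pred + k * innovation
    p_new = (1.0 - k) * p_pred
    likelihood = math.exp(-0.5 * innovation ** 2 / s) / math.sqrt(2 * math.pi * s)
    return x_new, p_new, likelihood
```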
Example three:
corresponding to the data processing method in fig. 1, an embodiment of the present disclosure further provides an electronic device 700, as shown in fig. 7, which is a schematic structural diagram of the electronic device 700 provided in the embodiment of the present disclosure, and includes:
a processor 71, a memory 72, and a bus 73. The memory 72 is used for storing execution instructions and includes an internal memory 721 and an external memory 722. The internal memory 721 temporarily stores operation data for the processor 71 and data exchanged with the external memory 722, such as a hard disk; the processor 71 exchanges data with the external memory 722 through the internal memory 721. When the electronic device 700 operates, the processor 71 communicates with the memory 72 through the bus 73, so that the processor 71 executes the following instructions:
acquiring observation data acquired by a plurality of sensors; performing data association on the observation data and at least one target according to the prediction data in the annular cache region to obtain target observation data associated with each target; the predicted data in the annular cache area represent the motion state of the target predicted according to the observation data collected at each observation moment in the target time window; performing timestamp alignment processing on the target observation data and the prediction data in the annular cache region; and updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the fusion motion state of each target.
The embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the data processing method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product carrying program code, where the instructions included in the program code may be used to execute the steps of the data processing method described in the foregoing method embodiments; details may be found in the foregoing method embodiments and are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, it is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative; for example, the division of the units is only one logical division, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes or equivalent substitutions of some technical features, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered thereby. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. A data processing method, comprising:
acquiring observation data acquired by a plurality of sensors;
performing data association on the observation data and at least one target according to the prediction data in the annular cache region to obtain target observation data associated with each target; the predicted data in the annular cache area represent the motion state of the target predicted according to the observation data collected at each observation moment in the target time window;
performing timestamp alignment processing on the target observation data and the prediction data in the annular cache region;
and updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the fusion motion state of each target.
2. The method of claim 1, wherein the performing data association between the observation data and at least one target according to the predicted data in the ring buffer to obtain target observation data associated with each target comprises:
searching first prediction data with a timestamp of the observation time of the observation data in the annular cache region;
determining a similarity between the first predicted data and the observed data;
and if the similarity is greater than or equal to a preset threshold value, determining the observation data as target observation data of a target corresponding to the annular cache region.
3. The method of claim 1 or 2, wherein the time-stamp aligning the target observation data and the prediction data in the ring buffer comprises:
determining a matching relationship between a target observation time of the target observation data and the target time window;
and under the condition that the target observation time is determined to be in the target time window according to the matching relation, or the target observation time is greater than the maximum time stamp of the predicted data in the target time window, interpolating to obtain the predicted data of the target observation data and storing the predicted data in the annular cache region, and taking the target observation time as the time stamp of the predicted data obtained by interpolation.
4. The method of any of claims 1 to 3, wherein the time-stamp aligning the target observation data and the prediction data in the ring buffer comprises:
under the condition that the annular cache region does not contain the prediction data corresponding to the target observation data, predicting the prediction data corresponding to the target observation data according to the prediction data of the target moment in the annular cache region to obtain second prediction data; the target time is a time before the target observation time of the target observation data in the target time window of the annular cache region and/or a time after the target observation time;
and inserting the second prediction data into the storage position of the annular buffer zone corresponding to the target observation time.
5. The method according to claim 4, wherein the predicting data corresponding to the target observation data according to the predicted data of the target time in the ring buffer to obtain second predicted data includes:
predicting the prediction data corresponding to the target observation data through each motion model in the interactive multi-model;
and fusing the predicted data corresponding to the target observation data predicted by each motion model to obtain the second predicted data.
6. The method of claim 5, wherein predicting the corresponding prediction data of the target observation data through each motion model in the interactive multi-model comprises:
predicting model probabilities according to the prediction data of the target time through each motion model; the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time;
acquiring the fusion motion state of each target at the target moment determined according to each motion model;
and determining the prediction data which is predicted by each motion model and corresponds to the target observation data based on the model probability and the fusion motion state of each target at the target moment determined according to each motion model.
7. The method of claim 6, wherein predicting model probabilities from the prediction data for the target time instants with each of the motion models comprises:
determining a target transition probability for each of the motion models, wherein the target transition probability represents a probability of transition to that motion model by other motion models in the interactive multi-model;
determining a first confidence level of the other motion model at the target moment; the first confidence is used for representing the probability that the actual motion of the target accords with the other motion models at the target moment;
determining the model probability based on the target transition probability and the first confidence.
8. The method according to any one of claims 1 to 7, wherein the updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the fused motion state of the target comprises:
and updating the prediction data after the timestamp alignment processing through a motion model in the interactive multi-model and the target observation data to obtain the fusion motion state of each target.
9. The method according to claim 8, wherein the updating the prediction data after the timestamp alignment processing through the motion model in the interactive multi-model and the target observation data to obtain the fused motion state of each target comprises:
determining a second confidence coefficient of each motion model according to the target observation data, wherein the second confidence coefficient represents the matching degree between the motion state of each target predicted by each motion model at the target observation time and the actual motion state of each target;
updating the model probability of each motion model according to the second confidence coefficient, wherein the model probability is used for representing the probability that the actual motion state of each target is matched with the motion model at the target observation time;
and determining the fusion motion state and the covariance matrix of each target according to the updated model probability and the prediction data predicted by each motion model according to the target observation data, wherein the covariance matrix of one target is used for representing the correlation degree between the predicted fusion motion state of the target and the actual motion state of the target.
10. The method of claim 9, wherein determining a second confidence level for each motion model from the target observation data comprises:
performing Kalman filtering updating on the prediction data subjected to the timestamp alignment processing by using each motion model and the target observation data to obtain a Kalman filtering updating result;
determining the second confidence level of each motion model in the interactive multi-model according to the Kalman filtering updating result.
11. A data processing apparatus, comprising:
the acquisition unit is used for acquiring observation data acquired by a plurality of sensors;
the data association unit is used for performing data association on the observation data and at least one target according to the prediction data in the annular cache region to obtain target observation data associated with each target; the predicted data in the annular cache area represent the motion state of the target predicted according to the observation data collected at each observation moment in the target time window;
the alignment processing unit is used for carrying out time stamp alignment processing on the target observation data and the prediction data in the annular cache region;
and the data updating unit is used for updating the prediction data after the timestamp alignment processing according to the target observation data to obtain the fusion motion state of each target.
12. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the data processing method of any of claims 1 to 10.
13. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, performs the steps of the data processing method according to one of the claims 1 to 10.
CN202011634778.XA 2020-12-31 2020-12-31 Data processing method, data processing device, electronic equipment and storage medium Pending CN112712549A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011634778.XA CN112712549A (en) 2020-12-31 2020-12-31 Data processing method, data processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011634778.XA CN112712549A (en) 2020-12-31 2020-12-31 Data processing method, data processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112712549A true CN112712549A (en) 2021-04-27

Family

ID=75547871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011634778.XA Pending CN112712549A (en) 2020-12-31 2020-12-31 Data processing method, data processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112712549A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130159350A1 (en) * 2011-12-19 2013-06-20 Microsoft Corporation Sensor Fusion Interface for Multiple Sensor Input
CN108573270A (en) * 2017-12-15 2018-09-25 蔚来汽车有限公司 Multisensor Target Information is set to merge method and device, computer equipment and the recording medium synchronous with multisensor sensing
CN109343051A (en) * 2018-11-15 2019-02-15 众泰新能源汽车有限公司 A kind of multi-Sensor Information Fusion Approach driven for advanced auxiliary
CN110850403A (en) * 2019-11-18 2020-02-28 中国船舶重工集团公司第七0七研究所 Multi-sensor decision-level fused intelligent ship water surface target feeling knowledge identification method
CN111721238A (en) * 2020-07-22 2020-09-29 上海图漾信息科技有限公司 Depth data measuring apparatus and target object data collecting method
CN111860589A (en) * 2020-06-12 2020-10-30 中山大学 Multi-sensor multi-target cooperative detection information fusion method and system
CN111985300A (en) * 2020-06-29 2020-11-24 魔门塔(苏州)科技有限公司 Automatic driving dynamic target positioning method and device, electronic equipment and storage medium
CN112017216A (en) * 2020-08-06 2020-12-01 影石创新科技股份有限公司 Image processing method, image processing device, computer-readable storage medium and computer equipment
CN112033429A (en) * 2020-09-14 2020-12-04 吉林大学 Target-level multi-sensor fusion method for intelligent automobile
CN112083725A (en) * 2020-09-04 2020-12-15 湖南大学 Structure-shared multi-sensor fusion positioning system for automatic driving vehicle


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
EMRAH ADAMEY et al.: "Poster Abstract: State Estimation and Sensor Fusion for Autonomous Driving in Mixed-Traffic Urban Environments", 2012 IEEE/ACM Third International Conference on Cyber-Physical Systems, page 229
SUN XINGYAN et al.: "Motion state estimation of dynamic positioning ships based on UKF federated filtering", Shipbuilding of China, vol. 54, no. 1, pages 114-128
ZHANG JUN et al.: "Research on data fusion in multi-sensor data acquisition systems", Transducer and Microsystem Technologies, vol. 33, no. 3, pages 52-57

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113985909A (en) * 2021-12-07 2022-01-28 北京和德宇航技术有限公司 Satellite trajectory prediction method, device, equipment and storage medium
CN116990622A (en) * 2023-09-26 2023-11-03 国网辽宁省电力有限公司电力科学研究院 Fault wave recording method, device, equipment and medium of transformer substation direct current system
CN116990622B (en) * 2023-09-26 2023-12-15 国网辽宁省电力有限公司电力科学研究院 Fault wave recording method, device, equipment and medium of transformer substation direct current system

Similar Documents

Publication Publication Date Title
EP3404556B1 (en) Information recommendation method and apparatus, and server
CN112733907A (en) Data fusion method and device, electronic equipment and storage medium
Li et al. Effectiveness of Bayesian filters: An information fusion perspective
CN112712549A (en) Data processing method, data processing device, electronic equipment and storage medium
CN108053424B (en) Target tracking method and device, electronic equipment and storage medium
US20220121641A1 (en) Multi-sensor-based state estimation method and apparatus and terminal device
US10374786B1 (en) Methods of estimating frequency skew in networks using timestamped packets
CN110111367B (en) Model particle filtering method, device, equipment and storage medium for target tracking
Yoo Change detection of RSSI fingerprint pattern for indoor positioning system
WO2022247915A1 (en) Fusion positioning method and apparatus, device, storage medium and program product
CN107918688B (en) Scene model dynamic estimation method, data analysis method and device and electronic equipment
Zheng et al. A robust approach to sequential information theoretic planning
CN112907671B (en) Point cloud data generation method and device, electronic equipment and storage medium
CN113869526A (en) Data processing model performance improving method and device, storage medium and electronic equipment
CN110580483A (en) indoor and outdoor user distinguishing method and device
Li et al. A track-oriented approach to target tracking with random finite set observations
US20140085138A1 (en) Efficient detection of movement using satellite positioning systems
CN116887396A (en) Training method of position prediction model, terminal positioning method and device
CN111538918A (en) Recommendation method and device, electronic equipment and storage medium
CN108093153B (en) Target tracking method and device, electronic equipment and storage medium
WO2016000487A1 (en) Target tracking method and tracking system based on variable coefficient α-β filter
Shan et al. Delayed-state nonparametric filtering in cooperative tracking
CN114202804A (en) Behavior action recognition method and device, processing equipment and storage medium
JP2018092368A (en) Moving object state amount estimation device and program
Ristic et al. Calibration of tracking systems using detections from non-cooperative targets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination