CN115630721A - Track prediction method and device based on potential graph structure generation - Google Patents


Info

Publication number
CN115630721A
CN115630721A (application CN202210483529.8A)
Authority
CN
China
Prior art keywords
graph
neural network
potential
predicted
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210483529.8A
Other languages
Chinese (zh)
Inventor
高跃
张子昭
闫循石
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202210483529.8A
Publication of CN115630721A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/901 Indexing; Data structures therefor; Storage structures
    • G06F16/9024 Graphs; Linked lists
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Marketing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to the technical field of time-series prediction, and in particular to a trajectory prediction method and device based on potential graph structure generation. The method includes the following steps: collecting the position and the speed of a target to be predicted at any moment; inputting the position and the speed at that moment into a pre-trained graph recurrent neural network to obtain the position and the speed at each of one or more moments in a time series; and generating a predicted trajectory of the target to be predicted based on the position and the speed at that moment and the position and the speed at each of the one or more moments. In this way, more accurate multivariate time-series prediction can be achieved, and the accuracy of trajectory prediction is improved.

Description

Track prediction method and device based on potential graph structure generation
Technical Field
The present disclosure relates to the field of time-series prediction technologies, and in particular to a trajectory prediction method and apparatus based on potential graph structure generation.
Background
Time-series prediction is an important task on time-series data: predicting the data at subsequent moments from the data at preceding moments. A time-series prediction model must respect the chronological order of the data, so it is inherently sequential. Time series include stationary and non-stationary sequences, and accurately predicting time-series data across diverse scenarios is an important problem. Conventional time-series prediction methods include mining data-evolution characteristics from artificial experience, statistical estimation of time-series model parameters, and time-series decomposition. With the rise of machine learning and neural network methods, more and more approaches realize time-series prediction by constructing a sample dataset and learning the mapping between temporal features and sample values; common neural network models include recurrent neural networks and attention-based models.
It should be noted that most existing neural network models are aimed at univariate time series. In complex systems such as the industrial Internet of Things, social networks, and brain networks, complex associations often exist among the constituent elements: in a brain network, for example, synaptic connections exist among neurons and functional connections exist among brain regions, forming brain-network connections at scales from the microscopic to the macroscopic. When performing time-series prediction on such multivariate time series, simply treating each component of the system as an independent variable while ignoring the correlations between components can lead to problems such as inefficiency and overfitting. In these networks, the complex correlation structure between elements is the basis of the network's function; therefore, to build a temporal model of these systems, the correlations between the observable variables must be analyzed. In practical applications, the correlations between variables are difficult to observe directly due to the limitations of observation techniques, and must be inferred indirectly through repeated trials and statistical analysis, for example with correlation analysis methods such as mutual information, rank correlation coefficients, and the Pearson correlation coefficient, or with causal analysis methods such as Granger causality analysis.
In recent years, with the development of big data, system complexity has grown exponentially with network scale, and the resulting multivariate time-series data are difficult to model by correlation or causal analysis. In addition, when a sample dataset is acquired, the influence of the acquisition method and of external environmental noise gives many complex-system time series high volatility and high randomness, so a consistent correlation relationship is difficult to learn with a linear model. To address these problems, a more robust method for modeling the complex relationships among multiple variables is needed to realize more accurate time-series prediction.
Disclosure of Invention
The application provides a trajectory prediction method and device based on potential graph structure generation. A recurrent neural network based on a potential graph structure is designed to realize multivariate time-series prediction, enabling more accurate multivariate time-series prediction and improving the accuracy of trajectory prediction.
An embodiment of a first aspect of the present application provides a trajectory prediction method generated based on a potential graph structure, including the following steps: collecting the position and the speed of a target to be predicted at any moment; inputting the position and the speed at that moment into a pre-trained graph recurrent neural network to obtain the position and the speed at each of one or more moments in a time series; and generating a predicted trajectory of the target to be predicted based on the position and the speed at that moment and the position and the speed at each of the one or more moments.
Further, the graph recurrent neural network is obtained by training a potential graph structure generation module, a multilayer-perceptron-based encoder module, a graph recurrent neural network control module, and a graph-convolution-network-based decoder module.
Further, training the graph recurrent neural network by the potential graph structure generation module, the multilayer-perceptron-based encoder module, the graph recurrent neural network control module, and the graph-convolution-network-based decoder module comprises: acquiring a sample dataset of a reference target, wherein the sample dataset comprises a plurality of time series; initializing learnable parameters of all modules in the graph recurrent neural network, and inputting the plurality of time series into the initialized graph recurrent neural network in batches to obtain a predicted value and a potential graph structure at each of one or more moments; and calculating a loss function value based on the predicted value, the potential graph structure, and the sample values of the sample dataset, calculating gradients of the learnable parameters of all modules according to the loss function value, and updating the learnable parameters until iteration converges on the sample dataset to obtain the graph recurrent neural network.
Further, the inputting the time series into the initialized graph recurrent neural network in batches to obtain the predicted value and the potential graph structure at each or a plurality of moments includes: inputting the time series data into the potential graph structure generation module and the multilayer perceptron-based encoder module in batches to respectively obtain a potential graph structure and codes at each or a plurality of moments; and inputting the potential graph structure and the codes of each or a plurality of moments into the graph recurrent neural network control module to obtain potential dynamic factors, and inputting the potential dynamic factors into the graph convolution network-based decoder module to obtain the predicted values of each or a plurality of moments.
Further, the generating of the predicted trajectory of the target to be predicted based on the position and the speed at any moment and the position and the speed at each of one or more moments comprises: performing curve fitting on the basis of the position and the speed at that moment and the position and the speed at each of the one or more moments to obtain the predicted trajectory.
The embodiment of the second aspect of the present application provides a trajectory prediction device generated based on a potential graph structure, including: an acquisition module, configured to acquire the position and the speed of the target to be predicted at any moment; an input module, configured to input the position and the speed at that moment into a pre-trained graph recurrent neural network to obtain the position and the speed at each of one or more moments in a time series; and a prediction module, configured to generate a predicted trajectory of the target to be predicted based on the position and the speed at that moment and the position and the speed at each of the one or more moments.
Further, the graph recurrent neural network is obtained by training a potential graph structure generation module, a multilayer-perceptron-based encoder module, a graph recurrent neural network control module, and a graph-convolution-network-based decoder module.
Further, the device further includes: a training module, configured to acquire a sample dataset of a reference target, wherein the sample dataset comprises a plurality of time series; initialize learnable parameters of all modules in the graph recurrent neural network, and input the plurality of time series into the initialized graph recurrent neural network in batches to obtain a predicted value and a potential graph structure at each of one or more moments; and calculate a loss function value based on the predicted value, the potential graph structure, and the sample values of the sample dataset, calculate gradients of the learnable parameters of all modules according to the loss function value, and update the learnable parameters until iteration converges on the sample dataset to obtain the graph recurrent neural network.
An embodiment of a third aspect of the present application provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the trajectory prediction method generated based on the potential graph structure as described in the above embodiments.
A fourth aspect of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement a trajectory prediction method generated based on a potential graph structure as described in the foregoing embodiments.
Therefore, the application has at least the following beneficial effects:
the potential distribution of the correlations among the variables is learned by directly optimizing the parameters of the graph structure generation module; a large sample dataset is collected and the parameters are updated iteratively during training until convergence, so the method can, to a certain extent, reduce the differences among trials and the influence of the random noise of a single trial, and thus model the correlations among variables more robustly. By introducing graph structures into the recurrent neural network and the decoder, the advantages of recurrent neural networks and graph learning methods are combined: the multivariate dynamical system is fitted while the correlations among the variables are taken into account, and the loss function design balances prediction accuracy against the graph structure constraint, so more accurate multivariate time-series prediction can be realized.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flowchart of a trajectory prediction method generated based on a latent graph structure according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a recurrent neural network based on a potential graph structure provided in an embodiment of the present application;
FIG. 3 is an exemplary diagram of a trajectory prediction device generated based on a potential graph structure according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative and intended to explain the present application and should not be construed as limiting the present application.
The track prediction method and apparatus based on potential graph structure generation according to the embodiment of the present application will be described below with reference to the drawings. Specifically, fig. 1 is a schematic flowchart of a trajectory prediction method generated based on a potential graph structure according to an embodiment of the present disclosure.
As shown in fig. 1, the trajectory prediction method based on the potential graph structure generation includes the following steps:
in step S101, the position and speed of the target to be predicted at any one time are acquired.
The target to be predicted can be selected according to the actual application scenario and can be, for example, charged particles; the moment can likewise be selected according to actual conditions and can be, for example, the current moment.
In step S102, the position and the velocity at that moment are input into a pre-trained graph recurrent neural network to obtain the position and the velocity at each of one or more moments in a time series.
It is to be appreciated that embodiments of the present application can predict the position and velocity at each or multiple time instants in the future based on the position and velocity at any time instant. Taking the current time as an example, the embodiments of the present application may predict the position and the speed at a future time based on the position and the speed at the current time.
In the embodiment of the present application, as shown in fig. 2, the overall framework of the graph recurrent neural network mainly includes a potential graph structure generation module, a multilayer-perceptron-based encoder module, a graph recurrent neural network control module, and a graph-convolution-network-based decoder module. The potential graph structure generation module models the correlations among the multiple variables through a graph structure, records the probability distribution of the graph structure with learnable parameters, and obtains graph structures by sampling via the reparameterization trick. In addition, the graph recurrent neural network control module maintains a set of variables that record the potential dynamic factors of the system at different moments, thereby describing the dynamics of the multivariable system. At each moment, the multilayer-perceptron-based encoder module first takes the currently observed variable sample values as input and outputs the corresponding temporal features; the graph recurrent neural network control module then takes the current encoder output as input and updates the potential dynamic factor according to the graph structure produced by the potential graph structure generation module; finally, the graph-convolution-network-based decoder module takes the updated potential dynamic factor as input and outputs the predicted value at the next moment. After the sample dataset is collected, a loss function is designed, and the parameters in each module of the recurrent neural network based on the potential graph structure are updated through the back-propagation algorithm, thereby achieving accurate multivariate time-series data prediction.
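As a concrete illustration of how the four modules compose at a single time step, the following numpy sketch wires a toy encoder, recurrent update, and decoder together. All dimensions, the weight names (W_e, W_h, W_g, W_out), and the simplified single-gate update are illustrative stand-ins, not the patent's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)

N, d, d_e, d_f = 5, 4, 8, 8  # variables, input dim, code dim, dynamic-factor dim

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Potential graph structure generation module: learnable logits over edges.
Z_A = rng.normal(size=(N, N))
A_tilde = sigmoid(Z_A)            # stand-in for the averaged sampled graphs

# Encoder module (a single linear layer + sigmoid stands in for the MLP).
W_e, b_e = rng.normal(size=(d, d_e)), np.zeros(d_e)

# Recurrent control module: one weight stands in for the gated update.
W_h = rng.normal(size=(d_e + d_f, d_f))

# Decoder module: one graph-convolution layer + linear readout.
W_g, W_out, b_out = rng.normal(size=(d_f, d_f)), rng.normal(size=(d_f, d)), np.zeros(d)

def step(x_t, f_prev):
    """One time step of the graph recurrent network: encode, update, decode."""
    e_t = sigmoid(x_t @ W_e + b_e)                                        # (N, d_e) codes
    f_t = np.tanh(np.concatenate([e_t, A_tilde @ f_prev], axis=1) @ W_h)  # (N, d_f) factor
    h = np.maximum(A_tilde @ f_t @ W_g, 0.0)                              # graph conv + ReLU
    x_next = h @ W_out + b_out                                            # (N, d) prediction
    return x_next, f_t

x0 = rng.normal(size=(N, d))      # observed sample values at one moment
f0 = np.zeros((N, d_f))           # initial potential dynamic factor
x1_hat, f1 = step(x0, f0)
```

The point of the sketch is only the data flow: the graph structure enters both the recurrent update (through the aggregation A_tilde @ f_prev) and the decoder.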
In an embodiment of the present application, the training process of the graph recurrent neural network includes: acquiring a sample dataset of a reference target, wherein the sample dataset comprises a plurality of time series; initializing the learnable parameters of all modules in the graph recurrent neural network, and inputting the plurality of time series into the initialized network in batches to obtain a predicted value and a potential graph structure at each of one or more moments; and calculating a loss function value based on the predicted values, the potential graph structure, and the sample values of the sample dataset, calculating the gradients of the learnable parameters of all modules from the loss function value, and updating the learnable parameters until iteration converges on the sample dataset, thereby obtaining the trained graph recurrent neural network. Inputting the time series into the initialized network in batches to obtain the predicted values and the potential graph structure includes: inputting the time-series data in batches into the potential graph structure generation module and the multilayer-perceptron-based encoder module to obtain, respectively, a potential graph structure and the codes at each of one or more moments; and inputting the potential graph structure and the codes into the graph recurrent neural network control module to obtain the potential dynamic factors, and inputting the potential dynamic factors into the graph-convolution-network-based decoder module to obtain the predicted value at each of the one or more moments.
Specifically: 1. Consider a time-series system containing N variables. The embodiment of the present application samples the system C times with time length T, and each variable represents its sample value at the current moment by a vector of dimension d, so that sampling yields a sample dataset {S_1, S_2, ..., S_C}, where S_i denotes the time-series data obtained from the i-th sampling.
For example, consider a dynamical system composed of 50 charged particles. The particles move constantly under mutual attraction or repulsion; the forces between particles cannot be observed directly, and only the positions and velocities of the particles can be observed. In the embodiment of the present application, the system may be sampled 5000 times with a time length of 100 and an interval of 1 ms between time points, recording the positions and velocities of the 50 particles at each time point, so the sample value at the current moment is recorded as a vector of dimension 4.
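Under this setup the sample dataset is naturally a 4-way tensor. A sketch with reduced sizes (the patent's example uses C = 5000; a smaller C stands in here to keep the sketch light, and random data stands in for real observations):

```python
import numpy as np

# The patent's example: C = 5000 samplings, time length T = 100, N = 50
# particles, d = 4 per-particle features [pos_x, pos_y, vel_x, vel_y].
C, T, N, d = 20, 100, 50, 4

dataset = np.random.default_rng(0).normal(size=(C, T, N, d))
S_1 = dataset[0]   # one sampling: a length-T multivariate time series
```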
2. Initialize the learnable parameters of each module in the recurrent neural network based on the potential graph structure. The network includes a potential graph structure generation module, a multilayer-perceptron-based encoder module, a graph recurrent neural network control module, and a graph-convolution-network-based decoder module. The learnable parameters of the potential graph structure generation module include a parameter Z_A ∈ R^(N×N) that controls the distribution of the potential graph structures. The multilayer-perceptron-based encoder module is realized by a two-layer perceptron whose input layer has d neurons and whose output layer has d_e neurons; its learnable parameters include the output-layer weight W_e ∈ R^(d×d_e) and the bias b_e ∈ R^(1×d_e). The graph recurrent neural network control module maintains a variable recording the current potential dynamic factor and updates it by taking the encoder output as input; its learnable parameters comprise three weight parameters (denote them W_r, W_u, and W_c; their dimensions appear only as formula images in the source). The graph-convolution-network-based decoder is realized by two graph convolution layers followed by one fully connected layer; its learnable parameters comprise the weight of each layer (denote the graph convolution weights W_1 and W_2 and the fully connected weight W_out; their dimensions likewise appear only as images) and the fully-connected-layer bias b_out ∈ R^(1×d).
3. Independently sample 2KN² times from the Gumbel(0, 1) distribution to obtain independent, identically distributed random variables {g_{k,1}^{ij}, g_{k,2}^{ij}} (k = 1, ..., K; i, j = 1, ..., N).
4. After randomly shuffling the sample dataset, divide it into batches, each containing the time-series data of B samplings, spliced into a tensor along a new dimension. The data from time 1 to T in a batch are recorded as x_t ∈ R^(B×N×d) (t = 1, 2, ..., T).
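The shuffling and batching step can be sketched as follows (all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
C, T, N, d, B = 20, 10, 6, 4, 5   # small stand-in sizes; the patent's example is far larger

dataset = rng.normal(size=(C, T, N, d))

# Randomly permute the samplings, then split into batches of B series each.
perm = rng.permutation(C)
batches = [dataset[perm[i:i + B]] for i in range(0, C, B)]

# Within one batch, the data at time t across the B series form the tensor x_t.
batch = batches[0]                     # (B, T, N, d)
x = np.moveaxis(batch, 1, 0)           # (T, B, N, d): x[t] is x_t with shape (B, N, d)
```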
5. Input the time-series data in batches into the recurrent neural network based on the potential graph structure; after each batch is input, calculate the loss function value and update the learnable parameters in the network through the back-propagation algorithm.
Specifically, after a new batch is input, the potential graph structure generation module first generates, based on the learnable parameter Z_A and the random variables {g_{k,1}^{ij}, g_{k,2}^{ij}}, K N×N matrices {A_1, A_2, ..., A_K} representing the adjacency matrices of K graph structures, calculated by a Gumbel-sigmoid relaxation of the form:

A_k^{ij} = sigmoid((Z_A^{ij} + g_{k,1}^{ij} - g_{k,2}^{ij}) / τ)

where τ is a temperature hyperparameter that controls the distribution tendency of the sampling result: the smaller τ is, the closer A_k^{ij} will be to 0 or 1; the larger τ is, the less close A_k^{ij} will be to 0 or 1. The potential graph structure Ã is obtained by averaging the K graph structures:

Ã = (1/K) · Σ_{k=1}^{K} A_k
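The sampling step can be sketched with a Gumbel-sigmoid (binary-Concrete) relaxation, which is one standard construction consistent with drawing 2KN² Gumbel variables; the patent's exact formula survives only as an image, so this form is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, tau = 6, 4, 0.5   # variables, sampled graphs, temperature

def gumbel(size):
    """Sample from Gumbel(0, 1) via the inverse transform -log(-log(U))."""
    u = rng.uniform(1e-10, 1.0, size=size)
    return -np.log(-np.log(u))

Z_A = rng.normal(size=(N, N))          # learnable edge logits

g1 = gumbel((K, N, N))                 # together with g2: 2*K*N^2 Gumbel draws
g2 = gumbel((K, N, N))

# Binary-Concrete relaxation: entries approach {0, 1} as tau -> 0.
A = 1.0 / (1.0 + np.exp(-(Z_A + g1 - g2) / tau))   # K sampled adjacency matrices
A_tilde = A.mean(axis=0)                            # averaged potential graph structure
```

Lowering `tau` sharpens each sampled A_k toward a hard 0/1 adjacency matrix while keeping the sampling differentiable in Z_A, which is what makes end-to-end training possible.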
Meanwhile, the multilayer-perceptron-based encoder module encodes x_t (t = 1, 2, ..., T), where the code e_t at time t is calculated as:

e_t = σ(b_e + W_e x_t)
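A minimal sketch of this encoding applied to one batch tensor (a single linear layer plus sigmoid stands in for the two-layer perceptron; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
B, N, d, d_e = 3, 6, 4, 8   # batch, variables, input dim, code dim

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_e = rng.normal(size=(d, d_e))
b_e = np.zeros(d_e)

x_t = rng.normal(size=(B, N, d))       # observed sample values at time t
e_t = sigmoid(b_e + x_t @ W_e)         # e_t = sigma(b_e + W_e x_t), applied per variable
```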
Furthermore, the graph recurrent neural network control module takes the codes e_t (t = 1, 2, ..., T) and the potential graph structure Ã as input and obtains the potential dynamic factors f_t (t = 1, 2, ..., T) from time 1 to T by recurrent computation, where the initial potential dynamic factor is f_0 = 0. At time t, f_t is obtained by a gated update that combines the current code e_t with the neighbor-aggregated previous factor Ã f_{t-1} through the three weight parameters [the four update formulas appear only as images in the source].
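Since the four update formulas survive only as images, the following sketch assumes a standard GRU-style gated update over the graph; the gate names r and u and the exact combination rule are assumptions, and only the use of three weight matrices and the aggregation Ã f_{t-1} follows the text:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d_e, d_f = 6, 8, 8   # variables, code dim, dynamic-factor dim

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The three weight parameters of the control module (dimensions assumed).
W_r = rng.normal(size=(d_e + d_f, d_f)) * 0.1
W_u = rng.normal(size=(d_e + d_f, d_f)) * 0.1
W_c = rng.normal(size=(d_e + d_f, d_f)) * 0.1

def update(e_t, f_prev, A_tilde):
    """One assumed gated update of the potential dynamic factor over the graph."""
    m = A_tilde @ f_prev                                     # neighbor-aggregated state
    r = sigmoid(np.concatenate([e_t, m], axis=1) @ W_r)      # reset gate
    u = sigmoid(np.concatenate([e_t, m], axis=1) @ W_u)      # update gate
    c = np.tanh(np.concatenate([e_t, r * m], axis=1) @ W_c)  # candidate factor
    return u * f_prev + (1.0 - u) * c

A_tilde = sigmoid(rng.normal(size=(N, N)))
e_t = rng.normal(size=(N, d_e))
f_t = update(e_t, np.zeros((N, d_f)), A_tilde)   # f_0 = 0, as in the text
```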
Further, the graph-convolution-network-based decoder takes the potential dynamic factor f_t (t = 1, 2, ..., T) and the potential graph structure Ã as input, and obtains the predicted value x̂_{t+1} at time t+1 through two graph convolution layers and one fully connected layer, of the form:

h_t^(1) = ReLU(Ã f_t W_1)
h_t^(2) = ReLU(Ã h_t^(1) W_2)
x̂_{t+1} = h_t^(2) W_out + b_out

[the exact decoder formulas appear only as images in the source]
Finally, from the above predicted values x̂_t, the sample values x_t (t = 2, ..., T), and the potential graph structure Ã, the total loss function value is calculated based on the mean square error loss and the graph-structure smoothness constraint, of the form:

L = Σ_{t=2}^{T} ||x̂_t - x_t||² + λ · Σ_{t=1}^{T} tr(x_tᵀ (D̃ - Ã) x_t)

where λ is a weighting hyperparameter and D̃ is the degree matrix, i.e., each column of Ã is summed to obtain N numbers that are then placed on the diagonal. The first term is the mean square error loss: the closer the predicted values are to the sample values, the smaller the loss. The second term is the graph-structure smoothness regularization term: the closer the observed values of two nodes are, the closer their relationship in the generated graph structure should be.
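A sketch of this loss under the stated reading, i.e. mean square error plus a Laplacian smoothness term tr(xᵀ(D̃ - Ã)x); the weighting λ is an assumed hyperparameter, since the exact loss formula survives only as an image:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, d = 10, 6, 4
lam = 0.1   # assumed weighting of the smoothness regularizer

A_tilde = rng.uniform(size=(N, N))       # stand-in potential graph structure
D_tilde = np.diag(A_tilde.sum(axis=0))   # degree matrix: column sums on the diagonal
L_graph = D_tilde - A_tilde              # graph-Laplacian-style matrix

x = rng.normal(size=(T, N, d))                  # sample values x_1 .. x_T
x_hat = x + 0.01 * rng.normal(size=(T, N, d))   # stand-in predictions

mse = ((x_hat[1:] - x[1:]) ** 2).sum()                             # MSE term over t = 2..T
smooth = sum(np.trace(x[t].T @ L_graph @ x[t]) for t in range(T))  # smoothness term
loss = mse + lam * smooth
```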
After the loss function value is obtained, the gradients of all learnable parameters are calculated through the back-propagation algorithm, and the parameter values are updated with the Adam algorithm until iteration converges on the sample dataset. After convergence on the sample dataset, the optimal parameters of each module are obtained and fixed.
In step S103, a predicted trajectory of the target to be predicted is generated based on the position and velocity at any one time and the position and velocity at each or a plurality of times.
In the embodiment of the application, generating the predicted trajectory of the target to be predicted based on the position and the speed at any moment and the position and the speed at each of one or more moments includes: performing curve fitting on the position and the speed at that moment together with the position and the speed at each of the one or more moments to obtain the predicted trajectory.
For example, to predict the trajectory of a charged-particle system at t future moments: input the positions and velocities of the charged particles at the current moment into the network and output the predicted positions and velocities at the next moment; feed those predictions back into the network to obtain the predictions for the moment after that; repeat t times to obtain discrete values at t time points; and fit the discrete values into a curve with a curve-fitting algorithm to estimate the trajectory of the charged-particle system over the t future moments.
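The rollout-then-fit procedure can be sketched as follows; the one-step network is a placeholder, and a degree-2 polynomial stands in for the unspecified curve-fitting algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, t_steps = 6, 4, 8   # particles, [pos_x, pos_y, vel_x, vel_y], future steps

def network(state):
    """Stand-in for the trained graph recurrent network's one-step prediction."""
    return state + 0.1 * np.ones_like(state)   # placeholder dynamics

state = rng.normal(size=(N, d))                # current positions and velocities
trajectory = [state]
for _ in range(t_steps):                       # feed each prediction back in
    state = network(state)
    trajectory.append(state)
traj = np.stack(trajectory)                    # (t_steps + 1, N, d) discrete values

# Curve-fit one particle's x-position over the discrete time points.
times = np.arange(traj.shape[0])
coeffs = np.polyfit(times, traj[:, 0, 0], deg=2)
fitted = np.polyval(coeffs, times)
```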
According to the trajectory prediction method based on potential graph structure generation provided by the embodiment of the application, the potential distribution of the correlations among variables is learned by directly optimizing the parameters of the graph structure generation module; a large sample dataset is collected and the parameters are updated iteratively during training until convergence, so the method can, to a certain extent, reduce the differences among trials and the influence of the random noise of a single trial, and model the correlations among variables more robustly. By introducing graph structures into the recurrent neural network and the decoder, the advantages of recurrent neural networks and graph learning methods are combined: the multivariate dynamical system is fitted while the correlations among the variables are taken into account, and the loss function design balances prediction accuracy against the graph structure constraint, enabling more accurate multivariate time-series prediction.
Next, a trajectory prediction device generated based on a latent graph structure according to an embodiment of the present application will be described with reference to the drawings.
Fig. 3 is a block diagram of a trajectory prediction device generated based on a potential graph structure according to an embodiment of the present application.
As shown in fig. 3, the trajectory prediction device 10 generated based on the latent graph structure includes: an acquisition module 100, an input module 200, and a prediction module 300.
The acquisition module 100 is configured to acquire a position and a speed of a target to be predicted at any time; the input module 200 is configured to input the position and the speed at any time into a pre-trained graph-looping neural network, so as to obtain the position and the speed at each or multiple times in a time sequence; the prediction module 300 is used to generate a predicted trajectory of the object to be predicted based on the position and velocity at any one time and the position and velocity at each or more times.
In the embodiment of the present application, the graph recurrent neural network is trained using a potential graph structure generation module, a multilayer perceptron-based encoder module, a graph recurrent neural network control module, and a graph convolution network-based decoder module.
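One prediction step through the four modules named above can be sketched as follows. This is an illustrative sketch only: every function body, shape, and name (`latent_graph`, `mlp_encode`, `graph_rnn_step`, `gcn_decode`) is an assumption standing in for the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_graph(logits):
    # Potential graph structure generation module: edge probabilities
    # obtained from directly learnable logits (sigmoid squashing).
    return 1.0 / (1.0 + np.exp(-logits))

def mlp_encode(x, W):
    # Multilayer perceptron-based encoder module (one linear layer
    # plus ReLU as a stand-in).
    return np.maximum(x @ W, 0.0)

def graph_rnn_step(h, enc, adj, U):
    # Graph recurrent neural network control module: neighbour states
    # aggregated through the latent adjacency before a tanh recurrent
    # update (a GRU-like cell in spirit).
    return np.tanh(adj @ (h + enc) @ U)

def gcn_decode(h, V):
    # Graph convolution network-based decoder module mapping latent
    # dynamic factors back to predicted position and velocity.
    return h @ V

n, d, k = 5, 4, 8                  # 5 targets, 4 features (2-D pos+vel), hidden size 8
x = rng.normal(size=(n, d))        # observed positions and velocities
logits = rng.normal(size=(n, n))   # learnable graph logits
W = rng.normal(size=(d, k))
U = rng.normal(size=(k, k))
V = rng.normal(size=(k, d))

adj = latent_graph(logits)                                  # potential graph structure
h = graph_rnn_step(np.zeros((n, k)), mlp_encode(x, W), adj, U)
pred = gcn_decode(h, V)                                     # next-step positions and velocities
```

The point of the sketch is the data flow: the graph generator produces an adjacency from learnable parameters, the encoder embeds the raw positions and velocities, the graph recurrent cell propagates information along the latent edges, and the decoder maps the hidden state back to the prediction.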
In the embodiment of the present application, the apparatus 10 further includes a training module, configured to: obtain a sample data set of a reference target, the sample data set comprising a plurality of time series; initialize the learnable parameters of all modules in the graph recurrent neural network, and input the plurality of time series, in batches, into the initialized graph recurrent neural network to obtain a predicted value and a potential graph structure at each of one or more time instants; and calculate a loss function value based on the predicted values, the potential graph structures, and the sample values of the sample data set, calculate the gradients of the learnable parameters of all modules from the loss function value, and iteratively update the learnable parameters over the sample data set until convergence, thereby obtaining the trained graph recurrent neural network.
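The training procedure just described (initialize parameters, feed batches, compute the loss and its gradients, update until convergence) can be sketched with a toy one-parameter model. The `train` function and its quadratic loss are illustrative assumptions standing in for the real network.

```python
import numpy as np

def train(batches, lr=0.1, max_iter=500, tol=1e-8):
    theta = 0.0                            # initialise the learnable parameter
    for _ in range(max_iter):
        prev = theta
        for batch in batches:              # feed the time-series batches
            target = float(np.mean(batch))
            grad = 2.0 * (theta - target)  # gradient of the loss (theta - target)^2
            theta -= lr * grad             # gradient-descent parameter update
        if abs(theta - prev) < tol:        # iterate until convergence on the data set
            break
    return theta
```

Here the parameter converges to the batch mean; in the actual device the same loop updates the learnable parameters of all four modules jointly against the combined loss.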
It should be noted that the foregoing explanation of the trajectory prediction method embodiment also applies to the trajectory prediction apparatus based on potential graph structure generation of this embodiment, and is not repeated here.
According to the trajectory prediction device based on potential graph structure generation provided by the embodiments of the present application, the latent distribution of the correlations among variables is learned by directly optimizing the parameters of the graph structure generation module: a large sample data set is collected, and the parameters are updated iteratively during training until convergence. The device can therefore reduce, to a certain extent, both the differences between trials and the influence of the random noise of any single trial, and model the correlations among the variables more robustly. By introducing graph structures into the recurrent neural network and the decoder, the device combines the advantages of recurrent neural networks and graph learning: it accounts for the correlations among the multiple variables while fitting a multivariate dynamical system, and its loss function balances prediction accuracy against a graph structure constraint, so that more accurate multivariate time series prediction can be achieved.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include: a memory 401, a processor 402, and a computer program stored in the memory 401 and executable on the processor 402.
When executing the program, the processor 402 implements the trajectory prediction method based on potential graph structure generation provided in the above embodiments.
Further, the electronic device further includes:
a communication interface 403 for communication between the memory 401 and the processor 402.
The memory 401 is configured to store a computer program executable on the processor 402.
The memory 401 may include high-speed RAM (Random Access Memory) and may also include non-volatile memory, such as at least one disk memory.
If the memory 401, the processor 402, and the communication interface 403 are implemented independently, they may be connected to and communicate with each other through a bus. The bus may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in Fig. 4, but this does not mean that there is only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 401, the processor 402, and the communication interface 403 are integrated on a chip, the memory 401, the processor 402, and the communication interface 403 may complete mutual communication through an internal interface.
Processor 402 may be a CPU (Central Processing Unit), an ASIC (Application Specific Integrated Circuit), or one or more Integrated circuits configured to implement embodiments of the present Application.
Embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for predicting a trajectory based on potential graph structure generation as above is implemented.
In the description herein, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic use of these terms does not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Moreover, those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification, provided they are not mutually inconsistent.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing steps of a custom logic function or process. The scope of the preferred embodiments of the present application includes alternative implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, a plurality of steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or a combination of the following techniques, which are well known in the art, may be used: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays, field programmable gate arrays, and the like.
It will be understood by those skilled in the art that all or part of the steps carried by the methods of the above embodiments may be implemented by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A trajectory prediction method based on potential graph structure generation, characterized by comprising the following steps:
acquiring the position and velocity of a target to be predicted at a time instant;
inputting the position and velocity at the time instant into a pre-trained graph recurrent neural network to obtain the position and velocity at each of one or more time instants in a time series; and
generating a predicted trajectory of the target to be predicted based on the position and velocity at the time instant and the position and velocity at each of the one or more time instants.
2. The method of claim 1, wherein the graph-recurrent neural network is trained from a latent graph structure generation module, a multilayer perceptron-based encoder module, a graph-recurrent neural network control module, and a graph convolution network-based decoder module.
3. The method of claim 2, wherein the graph-recurrent neural network is trained by the latent graph structure generation module, the multi-layer perceptron-based encoder module, the graph-recurrent neural network control module, and the graph convolution network-based decoder module, and comprises:
acquiring a sample data set of a reference target, wherein the sample data set comprises a plurality of time sequences;
initializing learnable parameters of all modules in the graph recurrent neural network, and inputting the plurality of time sequences into the initialized graph recurrent neural network in batches to obtain a predicted value and a potential graph structure at each or a plurality of moments;
calculating a loss function value based on the predicted values, the potential graph structures, and the sample values of the sample data set, calculating gradients of the learnable parameters of all modules from the loss function value, and iteratively updating the learnable parameters over the sample data set until convergence, to obtain the graph recurrent neural network.
4. The method of claim 3, wherein inputting the plurality of time series into the initialized graph recurrent neural network in batches to obtain the predicted value and the potential graph structure at each or a plurality of time instants comprises:
inputting the plurality of time series in batches into the potential graph structure generation module and the multilayer perceptron-based encoder module to obtain, respectively, a potential graph structure and an encoding at each of one or more time instants; and
inputting the potential graph structure and the encoding at each of the one or more time instants into the graph recurrent neural network control module to obtain latent dynamic factors, and inputting the latent dynamic factors into the graph convolution network-based decoder module to obtain the predicted value at each of the one or more time instants.
5. The method according to any one of claims 1 to 4, wherein generating the predicted trajectory of the target to be predicted based on the position and velocity at the time instant and the position and velocity at each of the one or more time instants comprises:
performing curve fitting based on the position and velocity at the time instant and the position and velocity at each of the one or more time instants to obtain the predicted trajectory.
6. A trajectory prediction apparatus based on potential graph structure generation, characterized by comprising:
an acquisition module configured to acquire the position and velocity of a target to be predicted at a time instant;
an input module configured to input the position and velocity at the time instant into a pre-trained graph recurrent neural network to obtain the position and velocity at each of one or more time instants in a time series; and
a prediction module configured to generate a predicted trajectory of the target to be predicted based on the position and velocity at the time instant and the position and velocity at each of the one or more time instants.
7. The apparatus of claim 6, wherein the graph recurrent neural network is trained using a potential graph structure generation module, a multilayer perceptron-based encoder module, a graph recurrent neural network control module, and a graph convolution network-based decoder module.
8. The apparatus of claim 7, further comprising:
a training module configured to: acquire a sample data set of a reference target, the sample data set comprising a plurality of time series; initialize learnable parameters of all modules in the graph recurrent neural network, and input the plurality of time series in batches into the initialized graph recurrent neural network to obtain a predicted value and a potential graph structure at each of one or more time instants; and calculate a loss function value based on the predicted values, the potential graph structures, and the sample values of the sample data set, calculate gradients of the learnable parameters of all modules from the loss function value, and iteratively update the learnable parameters over the sample data set until convergence, to obtain the graph recurrent neural network.
9. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the trajectory prediction method based on potential graph structure generation according to any one of claims 1-5.
10. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the trajectory prediction method based on potential graph structure generation according to any one of claims 1-5.
CN202210483529.8A 2022-05-05 2022-05-05 Track prediction method and device based on potential graph structure generation Pending CN115630721A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210483529.8A CN115630721A (en) 2022-05-05 2022-05-05 Track prediction method and device based on potential graph structure generation


Publications (1)

Publication Number Publication Date
CN115630721A true CN115630721A (en) 2023-01-20



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination