CN112732777A  Position prediction method, apparatus, device and medium based on time series
 Publication number
 CN112732777A (application number CN202011564567.3A)
 Authority
 CN
 China
 Prior art keywords
 time
 data
 network
 training
 time series
 Prior art date
 Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
 Pending
Classifications

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06F—ELECTRIC DIGITAL DATA PROCESSING
 G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
 G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
 G06F16/24—Querying
 G06F16/245—Query processing
 G06F16/2458—Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
 G06F16/2474—Sequence data queries, e.g. querying versioned data

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
 G06K9/00—Methods or arrangements for recognising patterns
 G06K9/62—Methods or arrangements for pattern recognition using electronic means
 G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
 G06K9/6256—Obtaining sets of training patterns; Bootstrap methods, e.g. bagging, boosting

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
 G06K9/00—Methods or arrangements for recognising patterns
 G06K9/62—Methods or arrangements for pattern recognition using electronic means
 G06K9/6267—Classification techniques

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
 G06N3/00—Computing arrangements based on biological models
 G06N3/02—Computing arrangements based on biological models using neural network models
 G06N3/04—Architectures, e.g. interconnection topology
 G06N3/0445—Feedback networks, e.g. hopfield nets, associative networks

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
 G06N3/00—Computing arrangements based on biological models
 G06N3/02—Computing arrangements based on biological models using neural network models
 G06N3/08—Learning methods
 G06N3/084—Backpropagation

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06Q—DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
 G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
Abstract
The application discloses a position prediction method, apparatus, device and medium based on a time series. The time-series-based position prediction method includes the following steps: acquiring time series data, and performing data enhancement on the time series data to expand the number of samples of the time series data and obtain enhanced data; performing time feature decomposition on the enhanced data to expand the feature dimension of the enhanced data and obtain optimized time series data; constructing a target time series model based on the optimized time series data; and acquiring position data to be predicted, inputting the position data to be predicted into the target time series model, and performing time-series-based position prediction on the position data to be predicted to obtain a position prediction result. The method and the device solve the technical problem of inaccurate position prediction.
Description
Technical Field
The application relates to the technical field of machine learning of financial technology (Fintech), in particular to a position prediction method, a position prediction device, position prediction equipment and a position prediction medium based on a time sequence.
Background
With the continuous development of financial technology, especially internet technology, more and more technologies (such as distributed technology, artificial intelligence and the like) are applied to the financial field. The financial industry, however, also places higher requirements on these technologies; for example, higher requirements are likewise placed on position prediction in the financial industry.
With the continuous development of computer technology, the application field of machine learning is becoming ever wider. A time series is a data sequence that is arranged in time order, changes over time, and whose elements are mutually correlated. At present, a time series model is usually constructed directly on the basis of data with the stationarity property. However, in some modeling scenarios the modeling data are small-sample data, such as position data. Because the number of samples and the number of features of position data are small, the time series model is prone to overfitting during modeling, and predictions made on position data are therefore often inaccurate.
Disclosure of Invention
The application mainly aims to provide a position prediction method, a position prediction device, position prediction equipment and a position prediction medium based on a time sequence, and aims to solve the technical problem of low position prediction accuracy in the prior art.
To achieve the above object, the present application provides a time-series-based position prediction method applied to a time-series-based position prediction apparatus, the time-series-based position prediction method including:
acquiring time sequence data, and performing data enhancement on the time sequence data to expand the number of samples of the time sequence data and obtain enhanced data;
performing time characteristic decomposition on the enhanced data to expand the characteristic dimension of the enhanced data and obtain optimized time sequence data;
constructing a target time series model based on the optimized time series data; and
acquiring position data to be predicted, inputting the position data to be predicted into the target time series model, and performing time-series-based position prediction on the position data to be predicted to obtain a position prediction result.
The present application also provides a position prediction apparatus based on a time series, the position prediction apparatus based on a time series is a virtual apparatus, and the position prediction apparatus based on a time series is applied to a position prediction device based on a time series, the position prediction apparatus based on a time series includes:
the sample number expansion module is used for acquiring time series data and performing data enhancement on the time series data so as to expand the sample number of the time series data and obtain enhanced data;
the characteristic dimension expansion module is used for performing time characteristic decomposition on the enhanced data so as to expand the characteristic dimension of the enhanced data and obtain optimized time sequence data;
the model building module is used for building a target time sequence model based on the optimized time sequence data;
and the position prediction module is used for acquiring position data to be predicted, inputting the position data to be predicted into the target time sequence model, and performing position prediction based on a time sequence on the position data to be predicted to obtain a position prediction result.
The present application also provides a time-series-based position prediction device, the time-series-based position prediction device being an entity apparatus and including: a memory, a processor, and a program of the time-series-based position prediction method stored on the memory and executable on the processor, the program, when executed by the processor, performing the steps of the time-series-based position prediction method as described above.
The present application also provides a medium, which is a readable storage medium having stored thereon a program for implementing the time-series-based position prediction method, the program, when executed by a processor, implementing the steps of the time-series-based position prediction method as described above.
Compared with the prior-art technical means of directly constructing a time series model based on data with stationarity, the method, apparatus, device and medium for time-series-based position prediction first perform data enhancement on the acquired time series data to expand its number of samples and obtain enhanced data, thereby expanding the time series data in the sample dimension. Time feature decomposition is then performed on the enhanced data to expand its feature dimension and obtain optimized time series data, thereby expanding the time series data in the feature dimension. A target time series model is then constructed based on the optimized time series data, so that the time series model is built on data with a higher sample number and feature dimension. This improves the generalization capability of the time series model and makes it less prone to overfitting. Position data to be predicted is then acquired and input into the target time series model, and time-series-based position prediction is performed on the position data to be predicted to obtain a position prediction result.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below; other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of a method for predicting a position based on time series according to the present application;
FIG. 2 is a schematic diagram, in the time-series-based position prediction method of the present application, of converting the enhanced time feature data into derived time feature data having various time variable features based on a one-hot coding method;
FIG. 3 is a schematic flow chart of a second embodiment of the method for predicting a position based on time series according to the present application;
fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the position prediction method based on time series according to the present application, referring to fig. 1, the position prediction method based on time series includes:
step S10, acquiring time series data, and performing data enhancement on the time series data to expand the sample number of the time series data and obtain enhanced data;
in this embodiment, it should be noted that the time series data is data with time series information, such as time series position data, regional population growth data, and the like, the data enhancement is to perform expansion of the number of samples of the data, and the time series based position prediction method is applied to a time series based position prediction device, and the time series based position prediction device includes a preset time series enhancement model for performing data enhancement, where the preset time series enhancement model includes a TSGAN network (time series generation countermeasure network).
Acquiring time series data and performing data enhancement on the time series data to expand the number of samples of the time series data and obtain enhanced data. Specifically, the time series data is acquired, and extended time series data corresponding to the time series data is generated through a sequence generator in the preset time series enhancement model, whereby the extended time series data and the time series data are used together as the enhanced data. The sample number of the enhanced data is therefore higher than that of the original time series data, achieving the purpose of expanding the sample number of the time series data. The extended time series data matches the data distribution of the time series data; for example, if the fitting function corresponding to the time series data is y = x, the fitting function corresponding to the extended time series data is also y = x.
Further, in another embodiment, the step of performing data enhancement on the time series data to expand the number of samples of the time series data and obtain the enhanced data further includes:
performing data enhancement on the time series data in a preset potential characterization space to expand the number of samples of the time series data and obtain enhanced data. That is, the time series data is mapped to the preset potential characterization space to obtain time series characterization data; extended time series characterization data corresponding to the time series characterization data is generated through a preset sequence generator; and the extended time series characterization data is then reconstructed into extended time series data, where the extended time series characterization data is consistent with the data distribution of the time series data. The time series data and the extended time series data are then used together as the enhanced data. It should be noted that the data dimension of the time series data is reduced after the time series data is mapped into the preset potential characterization space; performing data enhancement in this low dimension can improve the calculation efficiency during data enhancement and thus the data enhancement efficiency.
Further, in step S10, the step of performing data enhancement on the time series data to expand the number of samples of the time series data and obtain enhanced data includes:
step S11, inputting the time series data into an embedded network in a preset time series enhancement model to map the time series data to a preset potential characterization space to obtain time series characterization data;
in this embodiment, it should be noted that the timeseries data includes static feature data and time feature data, where the static feature data is data corresponding to a static feature, and the time feature data is data corresponding to a time feature, where the static feature is a feature without time information, and the time feature is a feature with time information, for example, assuming that a timeseries sample in the timeseries data is position data (2020.10.12,100), where "2020.10.12" represents a time value, "100" represents a position value, "2020.10.12" belongs to the time feature data, and "100" belongs to the static feature data.
Inputting the time series data into an embedded network in a preset time series enhanced model to map the time series data to a preset potential characterization space, and obtaining time series characterization data, specifically, mapping both the static feature data and the time feature data to potential codes in the preset potential characterization space based on the embedded network in the preset time series enhanced model, and obtaining static feature characterization data corresponding to the static feature data and time feature characterization data corresponding to the time feature data.
Further, in step S11, the embedded network includes a static feature embedded network and a time feature embedded network, the time series data includes at least a sample static feature and a sample time feature, the preset potential feature space includes a static feature space and a time feature space, the time series characterization data includes a static feature characterization and a time feature characterization,
the step of inputting the time series data into an embedded network in a preset time series enhancement model to map the time series data to a preset potential characterization space to obtain time series characterization data comprises:
step S111, based on the static feature embedded network, mapping the sample static feature to the static feature space to obtain the static feature representation;
in this embodiment, it should be noted that the sample static feature is a static feature value of a time series sample in the time series data, and the static feature is a corresponding potential encoding value of the static feature in the static feature space, for example, assuming that a position value in the position data is 100, the static feature is characterized as a vector (11110000) for identifying the position value of 100.
Based on the static feature embedding network, mapping the sample static features to the static feature space to obtain the static feature characterization, specifically, inputting the sample static features into the static feature embedding network to map the sample static features to the encoding values of the static feature space to obtain the static feature characterization, where the encoding values of the sample static features mapped to the static feature space are as follows:
h_S = e_S(s)
where h_S is the static feature characterization, s is the sample static feature, and e_S is the static feature embedding network.
Step S112, obtaining the time feature characterization of the last time step, and, based on the time feature embedding network, mapping the static feature characterization, the time feature characterization of the last time step and the sample time feature together to the time feature space to obtain the time feature characterization.
In this embodiment, it should be noted that the time feature embedding network is a recurrent neural network, the time feature characterization of the last time step is the output of the time feature embedding network at the last time step, the sample time feature is a time feature value of a time series sample in the time series data, and the time feature characterization is the corresponding potential encoding value of the time feature in the time feature space.
Obtaining the time feature characterization of the last time step, and mapping the static feature characterization, the time feature characterization of the last time step and the sample time feature together to the time feature space based on the time feature embedding network to obtain the time feature characterization. Specifically, the output of the time feature embedding network at the last time step is obtained as the time feature characterization of the last time step; the static feature characterization, the time feature characterization of the last time step and the sample time feature are then used together as the network input, which is input into the time feature embedding network and mapped to an encoding value of the time feature space to obtain the time feature characterization, wherein the process of generating the time feature characterization is as follows:
h_t = e_X(h_S, h_{t-1}, X_t)
where h_t is the time feature characterization, X_t is the sample time feature, e_X is the time feature embedding network, h_{t-1} is the time feature characterization of the last time step, and h_S is the static feature characterization.
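The two embedding steps above can be sketched as follows. This is a minimal numpy illustration, not the patent's implementation: the network dimensions, weight initialization, and tanh activation are all assumptions, and the recurrent cell is a plain Elman-style update standing in for the recurrent neural network e_X.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not specified in the patent): static input dim,
# time-feature input dim, and latent characterization dim.
D_S, D_X, H = 1, 1, 8

# e_S: static feature embedding network, h_S = e_S(s)
W_s = rng.normal(size=(H, D_S))
def e_S(s):
    return np.tanh(W_s @ s)

# e_X: recurrent time feature embedding, h_t = e_X(h_S, h_{t-1}, X_t)
W_h = rng.normal(size=(H, H))
U_h = rng.normal(size=(H, H))
V_h = rng.normal(size=(H, D_X))
def e_X(h_S, h_prev, x_t):
    return np.tanh(W_h @ h_S + U_h @ h_prev + V_h @ x_t)

# Embed one time series sample: a static position value plus a short sequence.
s = np.array([100.0])
h_S = e_S(s)
h_t = np.zeros(H)                      # h_0: initial latent state
latents = []
for x_val in [0.0, 1.0, 2.0, 3.0, 4.0]:
    h_t = e_X(h_S, h_t, np.array([x_val]))
    latents.append(h_t)
```

Each element of `latents` is the latent code h_t for one time step; training these weights (jointly with the recovery network) is what the preset time series enhancement model would do.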
Step S12, generating extended time series representation data corresponding to the time series representation data based on a sequence generator in the preset time series enhancement model;
in this embodiment, it should be noted that the extended time series representation data at least includes an extended time series representation.
Generating extended time series characterization data corresponding to the time series characterization data based on a sequence generator in the preset time series enhancement model, specifically, extracting a vector tuple from the time series characterization data, where the vector tuple is composed of a sample static feature and a sample time feature, and then inputting the vector tuple into the sequence generator in the preset time series enhancement model to generate the extended time series characterization.
Further, in step S12, the sequence generator includes a static feature generator network and a time feature cycle generator network, the extended time series characterization data includes at least an extended static feature characterization and an extended time feature characterization,
the step of generating extended time series characterization data corresponding to the time series characterization data based on the sequence generator in the preset time series enhancement model includes:
step S121, randomly extracting static characteristic elements and time characteristic elements from the time series characterization data;
in this embodiment, it should be noted that the time series characterization data includes static feature characterization data and time feature characterization data, and the sequence generator is a recurrent neural network.
And randomly extracting static characteristic elements and time characteristic elements from the time series characterization data, specifically, randomly extracting a static characteristic characterization from the static characteristic characterization data as the static characteristic elements, and randomly extracting a time characteristic characterization from the time characteristic characterization data as the time characteristic elements.
Step S122, inputting the static feature elements into the static feature generator network, and generating the extended static feature representation;
in this embodiment, the process of generating the extended static feature representation by the static feature generator network is as follows:
wherein the content of the first and second substances,for the extended static feature characterization, z_{S}As the static characteristic element, g_{S}And embedding the static features into a network.
Step S123, obtaining the last time step extension time characteristic representation, and inputting the time characteristic element, the extension static characteristic representation and the last time step extension time characteristic representation into the time characteristic cycle generator network together to generate the extension time characteristic representation.
In this embodiment, a last time step extension time feature representation is obtained, and the time feature element, the extension static feature representation, and the last time step extension time feature representation are input to the time feature cycle generator network together to generate the extension time feature representation, specifically, an output of the time feature cycle generator network at a last time step is obtained as a last time step extension time feature representation, and the time feature element, the extension static feature representation, and the last time step extension time feature representation are further used as generator input values together, and the generator input values are input to the time feature cycle generator network to generate the extension time feature representation, where a process of generating the extension time feature representation is as follows:
ĥ_t = g_X(ĥ_S, ĥ_{t-1}, z_t)
where ĥ_t is the extended time feature characterization, z_t is the time feature element, g_X is the time feature cycle generator network, ĥ_{t-1} is the extended time feature characterization of the last time step, and ĥ_S is the extended static feature characterization.
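A minimal numpy sketch of the generator side, mirroring the structure above. Dimensions, weights, and the tanh recurrent cell are illustrative assumptions; a trained g_S and g_X would be fitted adversarially against the embedded real data.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 8  # latent dimension (illustrative)

# g_S: static feature generator network, mapping a noise element to h_S_hat.
A_s = rng.normal(size=(H, H))
def g_S(z_S):
    return np.tanh(A_s @ z_S)

# g_X: time feature cycle generator, h_t_hat = g_X(h_S_hat, h_{t-1}_hat, z_t)
A_h = rng.normal(size=(H, H))
B_h = rng.normal(size=(H, H))
C_h = rng.normal(size=(H, H))
def g_X(h_S_hat, h_prev, z_t):
    return np.tanh(A_h @ h_S_hat + B_h @ h_prev + C_h @ z_t)

# Generate a 5-step extended latent sequence from randomly drawn elements.
h_S_hat = g_S(rng.normal(size=H))
h_hat = np.zeros(H)
extended = []
for _ in range(5):
    h_hat = g_X(h_S_hat, h_hat, rng.normal(size=H))
    extended.append(h_hat)
```

The outputs live in the same latent characterization space as the embedded real samples; step S13's recovery network then maps them back to data space.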
Step S13, reconstructing the extended time series characterization data based on the recovery network in the preset time series enhancement model to obtain extended time series data;
in this embodiment, it should be noted that the recovery network is a feedforward neural network, and is used for recovering the potential codes of the preset potential characterization space into time series data.
And reconstructing the extended time series characterization data based on a recovery network in the preset time series enhancement model to obtain extended time series data, specifically, inputting the extended time series characterization data into the recovery network in the preset time series enhancement model, and reconstructing the extended time series characterization data into the extended time series data by performing data processing, such as convolution processing, pooling processing and the like, on the extended time series characterization data.
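The recovery step can be sketched as a one-layer feedforward map from latent code to data space. The layer shape and linear form are assumptions for illustration; the patent only states that the recovery network is a feedforward neural network.

```python
import numpy as np

rng = np.random.default_rng(2)
H, D = 8, 1  # latent dim and data dim (illustrative)

# r: a one-layer feedforward recovery network, latent code -> data space.
W_r = rng.normal(size=(D, H))
b_r = np.zeros(D)
def recover(h):
    return W_r @ h + b_r

# Reconstruct a 5-step extended latent sequence into extended time series data.
latent_seq = [rng.normal(size=H) for _ in range(5)]
extended_series = [float(recover(h)[0]) for h in latent_seq]
```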
And step S14, performing stationarity conversion on the extended time series data to obtain stationarity extended time series data, and using the stationarity extended time series data and the time series data together as the enhancement data.
In this embodiment, it should be noted that the time series model needs to be constructed from time series data with stationarity; if the time series model is constructed based on non-stationary time series data, model distortion is likely to occur.
And performing stationarity conversion on the extended time series data to obtain stationarity extended time series data, and using the stationarity extended time series data and the time series data as the enhancement data together, specifically, converting the extended time series data into time series data within a preset value range, so that the extended time series data has stationarity, obtaining stationarity extended time series data, and using the stationarity extended time series data and the time series data as the enhancement data together.
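The conversion into a preset value range can be sketched with a plain min-max scaling; first differencing is shown alongside as another common stationarity transform. Both are standard techniques offered as illustrations, not the patent's specific conversion.

```python
def to_range(series, lo=0.0, hi=1.0):
    """Min-max scale a series into a preset value range [lo, hi]."""
    s_min, s_max = min(series), max(series)
    span = (s_max - s_min) or 1.0   # guard against a constant series
    return [lo + (hi - lo) * (v - s_min) / span for v in series]

# Extended position values scaled into [0, 1].
scaled = to_range([120.0, 80.0, 100.0])

# First differencing, another common way to make a series stationary.
diffs = [b - a for a, b in zip(scaled, scaled[1:])]
```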
Step S20, performing time characteristic decomposition on the enhanced data to expand the characteristic dimension of the enhanced data and obtain optimized time sequence data;
in this embodiment, the enhanced data is subjected to time feature decomposition to expand the feature dimension of the enhanced data to obtain optimized time sequence data, and specifically, the enhanced time feature data in the enhanced data is decomposed into expanded time feature data with a decomposition time feature of a time feature variable to realize feature dimension expansion on the enhanced data, wherein the time feature variable includes a year feature, a month feature, a week feature, a holiday feature and the like, wherein the year feature indicates which year the time belongs to, the month feature indicates which month the time belongs to, the week feature indicates which week the time belongs to which month the change, and the holiday feature indicates whether the time is on a holiday, so as to obtain optimized time sequence data with decomposition time features, and the feature dimension of the optimized time sequence data is higher than the feature dimension of the enhanced data due to the time feature decomposition, and further, the purpose of performing feature dimension expansion on the enhanced data is achieved.
Further, in step S20, the enhancement data includes enhancement time characteristic data;
the step of performing temporal feature decomposition on the enhanced data to expand feature dimensions of the enhanced data to obtain optimized time series data comprises:
step S21, carrying out onehot coding on the enhanced time characteristic data to decompose the time characteristics in the enhanced time characteristic data into time variable characteristics with preset quantity, and obtaining decomposed time characteristic data;
in this embodiment, the enhanced temporal feature data is subjected to unique hot coding to decompose temporal features in the enhanced temporal feature data into a preset number of time variable features, so as to obtain decomposed temporal feature data, and specifically, the enhanced temporal feature data is subjected to unique hot coding based on preset time variable features, so as to convert the enhanced temporal feature data into derivative temporal feature data having time variable features, and further, the derivative temporal feature data and the enhanced temporal feature data are used together as the decomposed temporal feature data, where fig. 2 is a schematic diagram of converting the enhanced temporal feature data into derivative temporal feature data having time variable features based on a unique hot coding manner, where "2020082909: 00: 00' is a time feature value in the enhanced time feature data, the first row in the table is each time variable feature, and the second row in the table is a feature value corresponding to each time variable feature, wherein 1 indicates that the time feature value has a corresponding time variable feature, and 0 indicates that the time feature value does not have a corresponding time variable feature.
Step S22, using the decomposed time feature data and the enhanced data together as the optimized time series data.
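As an illustration of the one-hot conversion described for fig. 2, the following minimal Python sketch decomposes a raw timestamp into binary time variable features. The concrete feature names and the holiday set here are hypothetical stand-ins, not taken from the patent:

```python
from datetime import datetime

def decompose_time_feature(ts: str) -> dict:
    """Decompose a raw timestamp into one-hot style time variable features.

    Feature names and the holiday set are illustrative assumptions; the
    patent's fig. 2 uses the value '2020-08-29 09:00:00' as its example."""
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    holidays = {(1, 1), (10, 1)}  # hypothetical (month, day) holiday set
    return {
        "year_2020": 1 if t.year == 2020 else 0,
        "month_08": 1 if t.month == 8 else 0,
        # which week of the month the day falls in (1-based)
        "week_5_of_month": 1 if ((t.day - 1) // 7 + 1) == 5 else 0,
        "is_holiday": 1 if (t.month, t.day) in holidays else 0,
    }
```

Concatenating such derivative features with the original timestamp widens the feature dimension, which is the decomposition's purpose.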
Step S30, constructing a target time series model based on the optimized time series data;
In this embodiment, a target time series model is constructed based on the optimized time series data. Specifically, a preset time series model to be trained is obtained, and each decomposition feature term in the model is fitted based on the optimized time series data until the preset time series model to be trained satisfies a preset fitting end condition; the fitted model is then used as the target time series model. The preset fitting end condition includes loss function convergence, reaching a maximum iteration threshold, and the like. In one implementable manner, the expression of the target time series model is as follows:
y(t) = g(t) + s(t) + h(t) + ε_t
where y(t) is the target time series model, g(t) is the trend term among the decomposition feature terms, s(t) is the period term, h(t) is the holiday term, and ε_t is the noise term. The trend term characterizes the non-periodic variation trend of the time series, the period term characterizes its periodic regularity, the holiday term characterizes whether the current day is a holiday, and the noise term prevents the model from producing excessive errors due to abnormal points.
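The additive structure y(t) = g(t) + s(t) + h(t) + ε_t can be sketched directly; the concrete trend, period, and holiday functions below are illustrative stand-ins, not the fitted terms of the patent:

```python
import numpy as np

def additive_forecast(g, s, h, t, noise_scale=0.0, seed=0):
    """Evaluate y(t) = g(t) + s(t) + h(t) + eps_t over a vector of times t.
    eps_t is Gaussian noise; noise_scale=0 gives the noiseless decomposition."""
    rng = np.random.default_rng(seed)
    eps = noise_scale * rng.standard_normal(len(t))
    return g(t) + s(t) + h(t) + eps

t = np.arange(14.0)
y = additive_forecast(
    lambda t: 0.5 * t,                      # trend term g(t)
    lambda t: np.sin(2 * np.pi * t / 7.0),  # weekly period term s(t)
    lambda t: np.where(t == 3.0, 2.0, 0.0), # holiday bump h(t)
    t,
)
```

Each component is fitted independently in the steps that follow, then summed at prediction time.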
Further, in step S30, the target time series model includes at least one of a target trend term, a target period term, and a target holiday term,
the step of constructing a target time series model based on the optimized time series data comprises:
Step S31, fitting the target trend term through a preset linear model based on the optimized time series data; and/or,
In this embodiment, the target trend term is fitted through a preset linear model based on the optimized time series data. Specifically, the optimized time series data is input into a preset linear model, and fitting optimization is performed on the preset linear model until it reaches a preset fitting optimization end condition; the fitted linear model is then used as the target trend term. In one implementable manner, the target trend term is as follows:
g(t) = (k + a(t)^T δ)t + (m + a(t)^T γ)
where k is the growth rate, a(t) is an indicator function taking the value 0 or 1, δ is the change in the growth rate, m is an offset parameter, γ is a curve smoothing parameter, g(t) is the target trend term, and t is time.
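A minimal sketch of this piecewise-linear trend term, assuming the common Prophet-style choice γ_j = −s_j·δ_j (an assumption not stated in the patent) so that the trend stays continuous at each changepoint s_j:

```python
import numpy as np

def trend(t, k, m, changepoints, deltas):
    """Piecewise-linear trend g(t) = (k + a(t)^T delta)*t + (m + a(t)^T gamma).

    a(t) is the 0/1 indicator of which changepoints s_j have been passed;
    gamma_j = -s_j * delta_j keeps the trend continuous at each changepoint."""
    t = np.asarray(t, dtype=float)
    s = np.asarray(changepoints, dtype=float)
    d = np.asarray(deltas, dtype=float)
    a = (t[:, None] >= s[None, :]).astype(float)  # indicator matrix a(t)
    gamma = -s * d
    return (k + a @ d) * t + (m + a @ gamma)
```

With one changepoint at t = 5 and growth-rate change δ = 0.5, the slope rises from k to k + 0.5 without a jump in the trend value.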
Step S32, fitting the target period term through a preset sine and cosine function based on the optimized time series data; and/or,
In this embodiment, the target period term is fitted through a preset sine and cosine function based on the optimized time series data. Specifically, the optimized time series data is input into the preset sine and cosine function, and fitting optimization is performed on it until a preset fitting optimization end condition is reached; the fitted function is then used as the target period term. In one implementable manner, the target period term is as follows:
s(t) = Σ_{n=1}^{N} ( a_n·cos(2πnt/P) + b_n·sin(2πnt/P) )

where P represents the time period (P = 365.25 corresponds to a yearly period and P = 7 to a weekly period), and a_n and b_n are coefficients from which a parameter vector can be constructed: β = (a_1, b_1, …, a_N, b_N)^T; in the actual calculation, a normal distribution is used for the approximation.
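The Fourier-series period term can be evaluated directly from the coefficients; this sketch assumes the reconstructed formula above with N harmonics:

```python
import numpy as np

def seasonality(t, period, a, b):
    """Fourier period term s(t) = sum_n a_n*cos(2*pi*n*t/P) + b_n*sin(2*pi*n*t/P).
    a and b hold the N cosine and sine coefficients respectively."""
    t = np.asarray(t, dtype=float)
    n = np.arange(1, len(a) + 1)
    ang = 2 * np.pi * np.outer(t, n) / period  # shape (T, N)
    return np.cos(ang) @ np.asarray(a) + np.sin(ang) @ np.asarray(b)
```

By construction the output repeats every P time units, which is the periodic regularity the term is meant to capture.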
Step S33, fitting the target holiday term through a preset indicator function and a preset range influence parameter based on the optimized time series data.
In this embodiment, the target holiday term is fitted through a preset indicator function and a preset range influence parameter based on the optimized time series data. Specifically, the optimized time series data is input into the preset indicator function, and fitting optimization is performed on the preset indicator function and the preset range influence parameter together until a preset fitting optimization end condition is reached, obtaining the target holiday term. In one implementable manner, the target holiday term is as follows:
h(t) = Z(t)k
where h(t) is the target holiday term, Z(t) is a preset indicator function indicating whether t falls within the holiday set, and k is a preset range influence parameter indicating the influence range of the holiday. k follows a normal distribution, i.e. k ~ Normal(0, v²), where v is a holiday adjustment factor; adjusting the magnitude of v prevents the holiday term from overfitting.
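A sketch of h(t) = Z(t)·k with a fixed value of k for illustration; in the fitted model k would be drawn from Normal(0, v²):

```python
import numpy as np

def holiday_term(t, holiday_set, kappa):
    """h(t) = Z(t) * k, where Z(t) is the 0/1 indicator of membership
    in the holiday set and kappa is the range influence parameter
    (Normal(0, v^2) in the fitted model; fixed here for illustration)."""
    z = np.array([1.0 if day in holiday_set else 0.0 for day in t])
    return z * kappa
```

Days outside the holiday set contribute nothing, so the term only shifts the forecast on holidays.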
Step S40, acquiring position data to be predicted, inputting the position data to be predicted into the target time series model, and performing time-series-based position prediction on the position data to be predicted to obtain a position prediction result.
In this embodiment, it should be noted that the position prediction method based on time series is applied to a position management system, the position data to be predicted is historical position data stored in the position management system, and the position prediction based on time series aims to predict future position data in a period of time after a current time point based on the historical position data.
Additionally, it should be noted that because the target time series model is a time series model formed by a recurrent network, the predicted position value of the position data to be predicted at each time step after the current time step can be predicted, and thus the change of the position value over a period after the current time step can be obtained, yielding the predicted position data, that is, the position prediction result.
Position data to be predicted is acquired and input into the target time series model, and time-series-based position prediction is performed on it to obtain a position prediction result. Specifically, each predicted position value of the position data to be predicted over a preset time sequence is predicted, and the predicted position values together with the corresponding preset time sequence are used as the position prediction result.
Compared with the prior-art approach of directly constructing a time series model from data with stationarity, this embodiment provides a time-series-based position prediction method. After the time series data is obtained, data enhancement is first performed on it to expand its sample number and obtain enhanced data, achieving expansion of the time series data in sample number. Time feature decomposition is then performed on the enhanced data to expand its feature dimension and obtain optimized time series data, achieving expansion in feature dimension. A target time series model is constructed based on the optimized time series data, so that the model is built on data with a higher sample number and feature dimension; this improves the generalization capability of the time series model and makes it less prone to overfitting. Finally, position data to be predicted is acquired and input into the target time series model, time-series-based position prediction is performed on it, and the position prediction result is obtained.
Further, referring to fig. 3, in another embodiment of the present application, based on the first embodiment of the present application, before the step of inputting the time series data into an embedding network in a preset time series enhancement model to map the time series data to a preset potential characterization space to obtain a time series characterization, the time-series-based position prediction method further includes:
Step A10, acquiring a generative adversarial network to be trained and training time series data;
In this embodiment, it should be noted that the generative adversarial network to be trained includes an embedding network to be trained, a recovery network to be trained, a generator network to be trained, and a discriminator network to be trained. The embedding network to be trained is configured to map time series data to a preset potential characterization space, the recovery network to be trained is configured to restore characterizations in the preset potential characterization space to time series data, the generator network to be trained is configured to generate characterizations of new time series data, and the discriminator network to be trained is configured to determine whether a characterization of new time series data generated by the generator network to be trained is real.
Step A20, inputting the training time series data into the generative adversarial network to be trained, and performing iterative training on it to calculate the target model loss of the generative adversarial network to be trained;
In this embodiment, the training time series data is input into the generative adversarial network to be trained, and iterative training is performed on it to calculate its target model loss. Specifically, the training time series data is input into the network, a training network output is obtained, and the target model loss is calculated based on the training network output and the training time series data.
Further, in step A20, the generative adversarial network to be trained includes an embedding network to be trained, a recovery network to be trained, a generator network to be trained, and a discriminator network to be trained,
and the step of inputting the training time series data into the generative adversarial network to be trained and performing iterative training on it to calculate its target model loss includes:
Step A21, mapping the training time series data to the preset potential characterization space based on the embedding network to be trained, to obtain training potential characterization data;
in this embodiment, it should be noted that the training time series data at least includes a training time series sample, and the training potential characterization data at least includes a training potential characterization corresponding to the training time series sample.
The training time series data is mapped to the preset potential characterization space based on the embedding network to be trained, to obtain training potential characterization data. Specifically, each training time series sample in the training time series data is mapped to a code value in the preset potential characterization space, obtaining a training potential characterization corresponding to each sample. This reduces the dimensionality of the original training data, and thus of the adversarial learning space, improving the calculation efficiency of the adversarial learning between the generator network to be trained and the discriminator network to be trained.
Step A22, randomly extracting training characterization data corresponding to the training potential characterization data in the preset potential characterization space, and generating training extended characterization data corresponding to the training characterization data based on the generator network to be trained and the training potential characterization data;
In this embodiment, it should be noted that the training characterization data at least includes a training sample static characterization and a training sample time characterization, the generator network to be trained includes a static feature network to be trained and a recurrent network to be trained, the training extended characterization data at least includes a training extended static feature characterization and a training extended time feature characterization, and the generator network to be trained is a recurrent neural network.
Training characterization data corresponding to the training potential characterization data is randomly extracted in the preset potential characterization space, and training extended characterization data corresponding to it is generated based on the generator network to be trained and the training potential characterization data. Specifically, a training sample static characterization and a training sample time characterization are randomly extracted from the training potential characterization data in the preset potential characterization space and combined into a vector tuple as a training characterization in the training characterization data, where the training sample static characterization is a characterization of the static features in the training time series data and the training sample time characterization is a characterization of its time features. The training sample static characterization is input into the static feature network to be trained to generate the training extended static feature characterization. The real static feature characterization of the current time step and the real time feature characterization of the previous time step are extracted from the training potential characterization data, and together with the training sample time characterization are used as input to the recurrent network to be trained to generate the training extended time feature characterization.
Step A23, calculating a penalty loss based on the training extended characterization data and the training potential characterization data;
In this embodiment, it should be noted that because the training extended time feature characterization is generated based on the real static feature characterization of the current time step and the real time feature characterization of the previous time step in the training potential characterization data, the penalty loss calculated from the training extended characterization data keeps the model parameters of the generative adversarial network to be trained updating in the direction of convergence. This reduces updates in non-convergent directions, reduces the number of invalid calculations during training, and thus improves the calculation efficiency of training the generative adversarial network.
Further, in step A23, the training extended characterization data includes at least one training extended characterization,
and the step of calculating the penalty loss based on the training extended characterization data and the training potential characterization data includes:
Step A231, determining, in the training potential characterization data, a target training potential characterization belonging to the same time step as the training extended characterization;
In this embodiment, a target training potential characterization belonging to the same time step as the training extended characterization is determined in the training potential characterization data. Specifically, the current time step corresponding to the training extended characterization is determined, and the target training potential characterization is queried in the training potential characterization data using the current time step as an index.
Step A232, generating the penalty loss by calculating a difference loss between the target training potential characterization and the training extended characterization.
In this embodiment, the penalty loss is generated by calculating a difference loss between the target training potential characterization and the training extended characterization. Specifically, a supervised loss is generated by the maximum likelihood method from the difference between the two characterizations, and the penalty loss is obtained by the following calculation formula:
L_S = E[ Σ_t ‖h_t − g_X(h_S, h_{t−1}, z_t)‖₂ ]

where L_S is the penalty loss, h_t is the target training potential characterization, g_X is the training extended characterization produced by the generator, h_S is the static feature characterization of the current time step in the target training potential characterization data, h_{t−1} is the time feature characterization of the previous time step in the target training potential characterization data, z_t is the random vector input to the generator at time step t, and E is the expectation symbol.
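Under the reconstructed formula above, once the generator outputs g_X have been computed for each time step, the penalty (supervised) loss reduces to an average L2 distance between matching latent characterizations. A numpy sketch, with the generator outputs precomputed as an array (an assumption for illustration):

```python
import numpy as np

def supervised_loss(h_true, h_gen):
    """Penalty (supervised) loss: mean L2 distance between the target
    latent characterizations h_t and the generator's one-step-ahead
    outputs g_X at the same time steps."""
    h_true = np.asarray(h_true, dtype=float)
    h_gen = np.asarray(h_gen, dtype=float)
    return float(np.mean(np.linalg.norm(h_true - h_gen, axis=-1)))
```

The loss is zero exactly when every generated characterization matches its real counterpart, which is why it steers training toward convergent updates.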
Step A24, performing category discrimination on the training extended characterizations in the training extended characterization data based on the discriminator network to be trained, to obtain a category discrimination result;
In this embodiment, category discrimination is performed on the training extended characterizations in the training extended characterization data based on the discriminator network to be trained, to obtain a category discrimination result. Specifically, the training extended characterizations are classified by the discriminator network to be trained to obtain classification probabilities, which are used as the category discrimination result.
Step A25, calculating an adversarial loss based on the category discrimination result and the real discrimination result corresponding to the training time series data;
in this embodiment, it should be noted that the category determination result includes a static feature classification probability and a time feature classification probability corresponding to the static feature, where the static feature classification probability is a classification probability obtained by classifying the static feature, and the time feature classification probability is a classification probability obtained by classifying the time feature.
The real discrimination result includes a real static feature classification probability and a real time feature classification probability, and the category discrimination result and the real discrimination result correspond to the same time step.
The adversarial loss is calculated based on the category discrimination result and the real discrimination result corresponding to the training time series data. Specifically, the real discrimination result corresponding to the training time series data is obtained, and the static feature classification probability and time feature classification probability, together with the real static feature classification probability and real time feature classification probability in the real discrimination result, are input into a preset adversarial loss calculation formula, which is as follows:
L_U = E[ log y_S + Σ_t log y_t ] + E[ log(1 − ŷ_S) + Σ_t log(1 − ŷ_t) ]

where L_U is the adversarial loss, y_S is the real static feature classification probability, y_t is the real time feature classification probability, ŷ_S is the static feature classification probability, ŷ_t is the time feature classification probability, and E is the expectation symbol.
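A numpy sketch of the reconstructed adversarial objective for a single sequence, assuming the classification probabilities are given as a scalar (static) plus a per-time-step array (temporal); the small eps guards the logarithms and is not part of the formula:

```python
import numpy as np

def adversarial_loss(y_real_s, y_real_t, y_fake_s, y_fake_t, eps=1e-12):
    """Discriminator objective: log-likelihood of labeling real static and
    per-step temporal characterizations 1, and generated ones 0."""
    real = np.log(y_real_s + eps) + np.sum(np.log(np.asarray(y_real_t) + eps))
    fake = np.log(1.0 - y_fake_s + eps) + np.sum(np.log(1.0 - np.asarray(y_fake_t) + eps))
    return float(real + fake)
```

A perfect discriminator (probability 1 on real, 0 on generated) attains the maximum value 0; the generator is trained to push this quantity down.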
Step A26, reconstructing the training extended characterization data based on the recovery network to be trained, to obtain training extended time series data;
In this embodiment, it should be noted that the recovery network to be trained includes a feedforward neural network, which is used to convert characterizations in the preset potential characterization space into time series data.
The training extended characterization data is reconstructed based on the recovery network to be trained to obtain training extended time series data; specifically, the training extended characterizations are reconstructed into time series data by the recovery network to be trained.
Step A27, calculating a reconstruction loss based on the training extended time series data and the training time series data;
in this embodiment, it should be noted that the training extended time series data at least includes a training extended time series sample, and the training time series data at least includes a training time series sample.
Calculating a reconstruction loss based on the training extended time series data and the training time series data, specifically, inputting training extended time series samples in the training extended time series data and training time series samples in the training time series data into a preset reconstruction loss calculation formula, and calculating the reconstruction loss, wherein the preset reconstruction loss calculation formula is as follows:
L_R = E[ ‖S − S̃‖₂ + Σ_t ‖X_t − X̃_t‖₂ ]

where L_R is the reconstruction loss, S is the sample static feature in the training time series samples, S̃ is the sample static feature in the training extended time series samples, X_t is the sample time feature at the t-th time step in the training time series samples, X̃_t is the sample time feature at the t-th time step in the training extended time series samples, and E is the expectation symbol.
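A numpy sketch of the reconstructed reconstruction loss for one sample, assuming the static features are a vector and the temporal features a (time × feature) array:

```python
import numpy as np

def reconstruction_loss(s, s_tilde, x, x_tilde):
    """L_R: L2 error of the recovered static features plus the summed
    per-time-step L2 error of the recovered temporal features."""
    s_err = np.linalg.norm(np.asarray(s, float) - np.asarray(s_tilde, float))
    x_err = np.sum(
        np.linalg.norm(np.asarray(x, float) - np.asarray(x_tilde, float), axis=-1)
    )
    return float(s_err + x_err)
```

Driving this loss to zero means the embedding/recovery pair can round-trip the original series through the potential characterization space without information loss.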
Step A28, generating the target model loss based on the penalty loss, the adversarial loss, and the reconstruction loss.
In this embodiment, it should be noted that the target model loss includes a first target model loss and a second target model loss, where the first target model loss is used to optimize the generator network to be trained and the discriminator network to be trained, and the second target model loss is used to optimize the embedding network to be trained and the recovery network to be trained.
Further, in step A28, the target model loss includes a first target model loss and a second target model loss,
the step of generating the target model loss based on the penalty loss, the adversarial loss, and the reconstruction loss includes:
Step A281, generating the first target model loss by calculating the sum of the penalty loss and the minimized adversarial loss;
In this embodiment, the first target model loss is generated by calculating the sum of the penalty loss and the minimized adversarial loss. Specifically, the penalty loss and the adversarial loss are input into a preset first model loss calculation formula to calculate the first target model loss, which is as follows:
κ_1 = min_{θ_g} max_{θ_d} ( η·L_S + L_U )

where κ_1 is the first target model loss, θ_g represents the generator network to be trained, θ_d represents the discriminator network to be trained, η is a first hyperparameter for balancing the penalty loss and the adversarial loss, L_S is the penalty loss, and L_U is the adversarial loss.
Step A282, generating the second target model loss by calculating the sum of the penalty loss and the minimized reconstruction loss.
κ_2 = min_{θ_e, θ_r} ( λ·L_S + L_R )

where κ_2 is the second target model loss, θ_e represents the embedding network to be trained, θ_r represents the recovery network to be trained, λ is a second hyperparameter for balancing the penalty loss and the reconstruction loss, L_S is the penalty loss, and L_R is the reconstruction loss. Based on the second target model loss, the embedding network and the generator network are trained jointly and the reconstruction loss can be minimized, so that the preset potential characterization space not only improves the parameter conversion efficiency, that is, the conversion efficiency between features and characterizations, but also helps the generator network learn temporal relationships more efficiently, thereby improving the efficiency of data enhancement.
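Given the three component losses, the two joint objectives are simple weighted sums; `eta` and `lam` below stand in for the two balancing hyperparameters (the patent uses the same symbol for both, distinguished here for clarity):

```python
def combined_losses(l_s, l_u, l_r, eta=1.0, lam=1.0):
    """The two joint objectives: kappa_1 (generator/discriminator side)
    balances the penalty loss against the adversarial loss, while
    kappa_2 (embedding/recovery side) balances it against the
    reconstruction loss."""
    kappa_1 = eta * l_s + l_u
    kappa_2 = lam * l_s + l_r
    return kappa_1, kappa_2
```

Because the penalty loss appears in both objectives, the supervised signal regularizes the adversarial game and the autoencoding pair simultaneously.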
Step A30, updating the network parameters of the generative adversarial network to be trained based on the target model loss, and determining whether the updated network satisfies a preset iterative training end condition;
In this embodiment, the network parameters of the generative adversarial network to be trained are updated based on the target model loss, and it is determined whether the updated network satisfies a preset iterative training end condition. Specifically, a model gradient is calculated based on the target model loss, the network parameters are updated according to the model gradient, and it is then determined whether the updated network satisfies the preset iterative training end condition, which includes convergence of the model loss, reaching the maximum number of model iterations, and the like.
In step A30, the target model loss includes a first target model loss and a second target model loss, and the generative adversarial network to be trained includes an embedding network to be trained, a recovery network to be trained, a generator network to be trained, and a discriminator network to be trained,
and the step of updating the generative adversarial network to be trained based on the target model loss includes:
Step A31, updating the network parameters of the generator network to be trained and the network parameters of the discriminator network to be trained based on the first target model loss;
In this embodiment, the network parameters of the generator network to be trained and of the discriminator network to be trained are updated based on the first target model loss. Specifically, a first gradient corresponding to the generator network to be trained and a second gradient corresponding to the discriminator network to be trained are calculated based on the first target model loss, and the network parameters of each network are updated according to its respective gradient.
Step A32, updating the network parameters of the embedding network to be trained and the network parameters of the recovery network to be trained based on the second target model loss.
In this embodiment, the network parameters of the embedding network to be trained and of the recovery network to be trained are updated based on the second target model loss. Specifically, a third gradient corresponding to the embedding network to be trained and a fourth gradient corresponding to the recovery network to be trained are calculated based on the second target model loss, and the network parameters of each network are updated according to its respective gradient.
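Each of these updates amounts to a gradient step on the relevant network's parameters; a generic sketch with plain lists standing in for parameter tensors (the learning rate is an illustrative assumption):

```python
def sgd_step(params, grads, lr=0.01):
    """One gradient-descent update, as in steps A31/A32: each network's
    parameters move against its loss gradient, scaled by the learning rate."""
    return [p - lr * g for p, g in zip(params, grads)]
```

In practice each of the four networks holds its own parameters and receives its own gradient from κ_1 or κ_2.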
Step A40, if yes, using the generative adversarial network to be trained as the preset time series enhancement model;
In this embodiment, if the condition is satisfied, the generative adversarial network to be trained is proved to have finished training, and it is then used as the preset time series enhancement model.
Step A50, if not, returning to the step of performing iterative training on the generative adversarial network to be trained.
In this embodiment, if the condition is not satisfied, the method returns to the step of performing iterative training on the generative adversarial network to be trained, and the next round of iterative training is performed until the network satisfies the preset iterative training end condition.
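The A20–A50 control flow can be sketched as a loop that stops on loss convergence or a maximum iteration count; here a pre-computed loss sequence stands in for real training iterations (an assumption for illustration — a real implementation would compute κ_1/κ_2 and update parameters each round):

```python
def train_gan(losses, max_iters=100, tol=1e-4):
    """Iterate until the model loss converges (change below tol) or the
    maximum iteration count is reached; returns (iterations_run, final_loss),
    after which the network serves as the time series enhancement model."""
    prev = float("inf")
    for i, loss in enumerate(losses):
        if abs(prev - loss) < tol or i + 1 >= max_iters:
            return i + 1, loss  # end condition met: training finished
        prev = loss
    return len(losses), prev
```

The two end conditions mirror the patent's: convergence of the model loss and a maximum number of model iterations.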
This embodiment of the application provides a method for constructing the preset time series enhancement model: a generative adversarial network to be trained and training time series data are acquired; the training time series data is input into the network and iterative training is performed to calculate its target model loss; the network parameters are updated based on the target model loss, and it is determined whether the updated network satisfies the preset iterative training end condition; if yes, the network is used as the preset time series enhancement model, and if not, the method returns to the iterative training step. Further, based on the trained preset time series enhancement model, data enhancement can be performed on the time series data to expand its sample number and obtain enhanced data, achieving expansion of the time series data in sample number. After the target time series model is generated based on the enhanced data, position data to be predicted is acquired and input into the target time series model, and time-series-based position prediction is performed on it to obtain the position prediction result. This lays a foundation for overcoming the technical defect in the prior art that the small sample number and feature number of position data make the time series model prone to overfitting during modeling, resulting in inaccurate position prediction.
Referring to fig. 4, fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 4, the timeseriesbased position prediction apparatus may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a highspeed RAM memory or a nonvolatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the position prediction apparatus based on time series may further include a rectangular user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The rectangular user interface may comprise a Display screen (Display), an input submodule such as a Keyboard (Keyboard), and the optional rectangular user interface may also comprise a standard wired interface, a wireless interface. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WIFI interface).
Those skilled in the art will appreciate that the time-series-based position prediction device configuration shown in fig. 4 does not constitute a limitation on the device, which may include more or fewer components than those shown, a combination of some components, or a different arrangement of components.
As shown in fig. 4, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, and a time-series-based position prediction program. The operating system is a program that manages and controls the hardware and software resources of the time-series-based position prediction device and supports the operation of the time-series-based position prediction program as well as other software and/or programs. The network communication module is used to enable communication between the various components within the memory 1005, as well as with other hardware and software in the time-series-based position prediction system.
In the time-series-based position prediction device shown in fig. 4, the processor 1001 is configured to execute the time-series-based position prediction program stored in the memory 1005 and implement the steps of any one of the time-series-based position prediction methods described above.
The specific implementation of the position prediction device based on time series in the present application is substantially the same as the embodiments of the position prediction method based on time series, and is not described herein again.
The embodiment of the present application further provides a position prediction apparatus based on a time series, where the position prediction apparatus based on a time series is applied to a position prediction device based on a time series, and the position prediction apparatus based on a time series includes:
the sample number expansion module is used for acquiring time series data and performing data enhancement on the time series data so as to expand the sample number of the time series data and obtain enhanced data;
the characteristic dimension expansion module is used for performing time characteristic decomposition on the enhanced data so as to expand the characteristic dimension of the enhanced data and obtain optimized time sequence data;
the model building module is used for building a target time sequence model based on the optimized time sequence data;
and the position prediction module is used for acquiring position data to be predicted, inputting the position data to be predicted into the target time sequence model, and performing position prediction based on a time sequence on the position data to be predicted to obtain a position prediction result.
Optionally, the sample number expansion module is further configured to:
inputting the time series data into an embedded network in a preset time series enhancement model to map the time series data to a preset potential characterization space to obtain time series characterization data;
generating extended time series characterization data corresponding to the time series characterization data based on a sequence generator in the preset time series enhancement model;
reconstructing the extended time series characterization data based on a recovery network in the preset time series enhancement model to obtain extended time series data;
and performing stationarity conversion on the extended time series data to obtain stationarized extended time series data, and using the stationarized extended time series data together with the time series data as the enhanced data.
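The enhancement steps above can be sketched end to end. The snippet below is a minimal illustration only: simple affine maps stand in for the trained embedding, generator, and recovery networks, and first-order differencing is used as one plausible stationarity conversion; none of these stand-ins come from the patent itself.

```python
import random

def embed(series, scale=0.1):
    # stand-in for the embedding network: map data into the latent characterization space
    return [v * scale for v in series]

def generate(latent):
    # stand-in for the sequence generator: perturb latent codes to synthesize new ones
    return [z + random.gauss(0, 0.01) for z in latent]

def recover(latent, scale=0.1):
    # stand-in for the recovery network: map latent codes back to the data space
    return [z / scale for z in latent]

def stationarize(series):
    # stationarity conversion via first-order differencing
    return [b - a for a, b in zip(series, series[1:])]

random.seed(0)
time_series = [float(t) for t in range(10)]
extended = recover(generate(embed(time_series)))
enhanced = time_series + stationarize(extended)   # original plus stationarized extension
print(len(enhanced))   # 19: the 10 original samples plus 9 differenced extended samples
```

Differencing shortens the extended sequence by one sample; any other stationarity transform (log returns, detrending) could be substituted at that step.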
Optionally, the sample number expansion module is further configured to:
mapping the static features to the static feature space based on the static feature embedded network to obtain the static feature characterization;
and acquiring the time characteristic representation of the previous time step, and, based on the time characteristic embedded network, mapping the static characteristic representation, the previous time step's time characteristic representation and the sample time characteristic together into the time characteristic space to obtain the time characteristic representation.
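A minimal sketch of this two-part embedding, assuming scalar features and hypothetical affine coefficients: the static feature is embedded once, while the temporal embedding is unrolled step by step over the static characterization, the previous step's characterization, and the current sample time feature.

```python
def static_embedding(x_static):
    # hypothetical static-feature embedding network: a single affine map
    return 0.5 * x_static + 0.1

def temporal_embedding(h_static, h_prev, x_t):
    # hypothetical temporal embedding step: combines the static characterization,
    # the previous time step's characterization, and the current sample time feature
    return 0.5 * h_prev + 0.3 * h_static + 0.2 * x_t

h_static = static_embedding(1.0)          # static feature characterization

h_prev, temporal_reps = 0.0, []
for x_t in [0.4, 0.5, 0.6]:               # sample time features, one per time step
    h_prev = temporal_embedding(h_static, h_prev, x_t)
    temporal_reps.append(h_prev)

print(len(temporal_reps))                 # one characterization per time step
```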
Optionally, the sample number expansion module is further configured to:
randomly extracting static characteristic elements and time characteristic elements from the time series characterization data;
inputting the static feature elements into the static feature generator network to generate the extended static feature representation;
and acquiring the extended time characteristic representation of the previous time step, and inputting the time characteristic element, the extended static characteristic representation and the previous time step's extended time characteristic representation together into the time characteristic cycle generator network to generate the extended time characteristic representation.
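The generator side mirrors that recurrence. In the sketch below (toy affine stand-ins again, not the patent's networks), a randomly extracted static feature element is mapped once by the static generator, and the time feature cycle generator is unrolled with the previous step's extended characterization fed back in.

```python
import random

def static_generator(z_static):
    # hypothetical static-feature generator network
    return 0.8 * z_static

def temporal_cycle_generator(z_t, g_static, g_prev):
    # hypothetical recurrent generator step: current time feature element,
    # extended static characterization, and previous extended characterization
    return 0.6 * g_prev + 0.3 * g_static + 0.1 * z_t

random.seed(42)
z_static = random.gauss(0, 1)             # randomly extracted static feature element
g_static = static_generator(z_static)     # extended static feature characterization

g_prev, extended_seq = 0.0, []
for _ in range(5):
    z_t = random.gauss(0, 1)              # randomly extracted time feature element
    g_prev = temporal_cycle_generator(z_t, g_static, g_prev)
    extended_seq.append(g_prev)

print(len(extended_seq))                  # one extended characterization per step
```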
Optionally, the time-series-based position prediction device is further configured to:
acquiring a generative adversarial network to be trained and training time series data;
inputting the training time series data into the generative adversarial network to be trained, and performing iterative training on the generative adversarial network to be trained so as to calculate its target model loss;
updating the network parameters of the generative adversarial network to be trained based on the target model loss, and judging whether the updated generative adversarial network to be trained meets the preset iterative training end condition;
if so, taking the generative adversarial network to be trained as the preset time series enhancement model;
if not, returning to the step of performing iterative training on the generative adversarial network to be trained.
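The training procedure above reduces to a loop with an end condition. This skeleton uses a geometrically decaying stand-in for the target model loss and a loss tolerance plus an iteration cap as one plausible reading of the "preset iterative training end condition"; both choices are assumptions for illustration.

```python
def model_loss(step):
    # stand-in for computing the target model loss of the network at this step
    return 0.9 ** step

def train(max_iters=200, tol=1e-3):
    loss = model_loss(0)
    for step in range(max_iters):
        loss = model_loss(step)
        # ... the network parameters would be updated from `loss` here ...
        if loss < tol:            # preset iterative-training end condition satisfied
            return step, loss     # trained network is kept as the enhancement model
    return max_iters, loss        # condition not met yet: stop at the iteration cap

steps_used, final_loss = train()
print(steps_used, final_loss < 1e-3)
```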
Optionally, the time-series-based position prediction device is further configured to:
mapping the training time sequence data to the preset potential characterization space based on the embedded network to be trained to obtain training potential characterization data;
randomly extracting training representation data corresponding to the training potential representation data in the preset potential representation space, and generating training extended representation data corresponding to the training representation data based on the generator network to be trained and the training potential representation data;
calculating a penalty loss based on the training extended characterization data and the training potential characterization data;
based on the discriminator network to be trained, performing category discrimination on the training extension characterizations in the training extension characterization data to obtain a category discrimination result;
calculating an adversarial loss based on the category discrimination result and the real discrimination result corresponding to the training time series data;
reconstructing the training expansion representation data based on the recovery network to be trained to obtain training expansion time sequence data;
calculating a reconstruction loss based on the training extended time series data and the training time series data;
generating the target model loss based on the penalty loss, the adversarial loss, and the reconstruction loss.
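The three component losses can be made concrete with toy values. The sketch below assumes mean squared error for the penalty and reconstruction terms and a generator-side log loss for the adversarial term, and simply sums them; these are common choices, not formulas stated in the patent.

```python
import math

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# toy values standing in for the networks' outputs
latent_real      = [0.20, 0.40, 0.60]   # training potential characterization data
latent_generated = [0.25, 0.35, 0.65]   # training extended characterization data
series_real      = [2.0, 4.0, 6.0]      # training time series data
series_recovered = [1.9, 4.1, 6.0]      # reconstruction from the recovery network

penalty_loss = mse(latent_generated, latent_real)         # latent-space penalty term
adversarial_loss = -math.log(0.8)                         # discriminator score of 0.8
reconstruction_loss = mse(series_recovered, series_real)  # data-space reconstruction

target_model_loss = penalty_loss + adversarial_loss + reconstruction_loss
print(round(target_model_loss, 4))
```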
Optionally, the time-series-based position prediction device is further configured to:
determining a target training potential representation in the training potential representation data that belongs to the same time step as the training extension representation;
generating the penalty loss by calculating a difference loss between the target training potential representation and the training extension representation.
Optionally, the time-series-based position prediction device is further configured to:
generating the first target model loss by calculating the sum of the penalty loss and the adversarial loss to be minimized;
and calculating the sum of the penalty loss and the reconstruction loss to be minimized to obtain the second target model loss.
Optionally, the time-series-based position prediction device is further configured to:
updating the network parameters of the embedded network to be trained and the network parameters of the recovery network to be trained based on the first target model loss;
and updating the network parameters of the generator network to be trained and the network parameters of the discriminator network to be trained based on the second target model loss.
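How the two composite losses drive the four sub-networks can be sketched with scalar stand-ins; a plain scaled step replaces real backpropagation, and the loss values are arbitrary. The pairing (the first loss updates the embedded and recovery networks, the second loss updates the generator and discriminator) follows the text above.

```python
# toy scalar values standing in for the three component losses
penalty_loss, adversarial_loss, reconstruction_loss = 0.20, 0.50, 0.30

first_target_loss = penalty_loss + adversarial_loss      # drives embedded + recovery
second_target_loss = penalty_loss + reconstruction_loss  # drives generator + discriminator

# one scalar "parameter" per sub-network; a scaled step stands in for backpropagation
params = {"embedded": 1.0, "recovery": 1.0, "generator": 1.0, "discriminator": 1.0}
lr = 0.1

def update(names, loss):
    for name in names:
        params[name] -= lr * loss    # gradient-style step driven by the given loss

update(["embedded", "recovery"], first_target_loss)
update(["generator", "discriminator"], second_target_loss)
print(params)
```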
Optionally, the feature dimension extension module is further configured to:
performing one-hot encoding on the enhanced time characteristic data so as to decompose the time characteristics in the enhanced time characteristic data into a preset number of time variable characteristics, thereby obtaining decomposed time characteristic data;
and using the decomposed time characteristic data together with the enhanced data as the optimized time series data.
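One-hot decomposition of a time feature is easy to show concretely. The choice of month-of-year and day-of-week as the decomposed time variable features is an assumption; the patent only fixes that a preset number of such variables is produced.

```python
def one_hot(index, size):
    # encode a categorical value as a 0/1 indicator vector
    vec = [0] * size
    vec[index] = 1
    return vec

# decompose one timestamp-like feature into several time variable features
month, weekday = 3, 6                      # e.g. April (0-indexed) and Sunday
decomposed = one_hot(month, 12) + one_hot(weekday, 7)

print(len(decomposed), sum(decomposed))    # 19 feature dimensions, 2 of them active
```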
Optionally, the model building module is further configured to:
fitting the target trend item through a preset linear model based on the optimized time series data; and/or,
fitting the target periodic item through preset sine and cosine functions based on the optimized time series data; and/or,
and fitting the target holiday term through a preset indication function and a preset range influence parameter based on the optimized time series data.
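An additive combination of the three fitted terms resembles a Prophet-style decomposition. The sketch below is illustrative only: the coefficients, the weekly period, the holiday set, and the fixed holiday effect are all assumed values, and the patent does not state that the terms are summed.

```python
import math

def trend_term(t, k=0.5, b=1.0):
    # target trend item fitted by a preset linear model
    return k * t + b

def period_term(t, period=7.0, a=0.3, c=0.2):
    # target periodic item fitted by preset sine and cosine functions
    w = 2.0 * math.pi * t / period
    return a * math.sin(w) + c * math.cos(w)

def holiday_term(t, holidays=frozenset({10, 11}), effect=2.0):
    # target holiday item: indicator function with a preset range influence
    return effect if t in holidays else 0.0

def predict(t):
    # illustrative additive combination of the three terms
    return trend_term(t) + period_term(t) + holiday_term(t)

print(round(predict(3), 3), round(predict(10), 3))
```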
The specific implementation of the position prediction device based on time series in the present application is substantially the same as the embodiments of the position prediction method based on time series, and is not described herein again.
The present application provides a readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of any one of the above time-series-based position prediction methods.
The specific implementation manner of the readable storage medium of the present application is substantially the same as that of each embodiment of the position prediction method based on the time series, and is not described herein again.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.
Claims (14)
1. A position prediction method based on time series is characterized by comprising the following steps:
acquiring time sequence data, and performing data enhancement on the time sequence data to expand the number of samples of the time sequence data and obtain enhanced data;
performing time characteristic decomposition on the enhanced data to expand the characteristic dimension of the enhanced data and obtain optimized time sequence data;
constructing a target time series model based on the optimized time series data;
and acquiring position data to be predicted, inputting the position data to be predicted into the target time sequence model, and performing position prediction based on a time sequence on the position data to be predicted to obtain a position prediction result.
2. The time-series-based position prediction method according to claim 1, wherein the step of performing data enhancement on the time series data to expand the number of samples of the time series data to obtain enhanced data comprises:
inputting the time series data into an embedded network in a preset time series enhancement model to map the time series data to a preset potential characterization space to obtain time series characterization data;
generating extended time series characterization data corresponding to the time series characterization data based on a sequence generator in the preset time series enhancement model;
reconstructing the extended time series characterization data based on a recovery network in the preset time series enhancement model to obtain extended time series data;
and performing stationarity conversion on the extended time series data to obtain stationarized extended time series data, and using the stationarized extended time series data together with the time series data as the enhanced data.
3. The method of claim 2, wherein the embedded network comprises a static feature embedded network and a time feature embedded network, the time series data comprises at least a sample static feature and a sample time feature, the preset potential characterization space comprises a static feature space and a time feature space, and the time series characterization data comprises a static feature characterization and a time feature characterization,
the step of inputting the time series data into an embedded network in a preset time series enhancement model to map the time series data to a preset potential characterization space to obtain time series characterization data comprises:
mapping the static features to the static feature space based on the static feature embedded network to obtain the static feature characterization;
and acquiring the time characteristic representation of the previous time step, and, based on the time characteristic embedded network, mapping the static characteristic representation, the previous time step's time characteristic representation and the sample time characteristic together into the time characteristic space to obtain the time characteristic representation.
4. The method of time series based position prediction according to claim 2, wherein the sequence generator includes a network of static feature generators and a network of time feature cycle generators, the extended time series characterization data includes at least an extended static feature characterization and an extended time feature characterization,
the step of generating extended time series characterization data corresponding to the time series characterization data based on the sequence generator in the preset time series enhancement model includes:
randomly extracting static characteristic elements and time characteristic elements from the time series characterization data;
inputting the static feature elements into the static feature generator network to generate the extended static feature representation;
and acquiring the extended time characteristic representation of the previous time step, and inputting the time characteristic element, the extended static characteristic representation and the previous time step's extended time characteristic representation together into the time characteristic cycle generator network to generate the extended time characteristic representation.
5. The time-series-based position prediction method according to claim 2, wherein prior to the step of inputting the time series data into the embedded network in the preset time series enhancement model to map the time series data to the preset potential characterization space to obtain the time series characterization data, the method further comprises:
acquiring a generative adversarial network to be trained and training time series data;
inputting the training time series data into the generative adversarial network to be trained, and performing iterative training on the generative adversarial network to be trained so as to calculate its target model loss;
updating the network parameters of the generative adversarial network to be trained based on the target model loss, and judging whether the updated generative adversarial network to be trained meets the preset iterative training end condition;
if so, taking the generative adversarial network to be trained as the preset time series enhancement model;
if not, returning to the step of performing iterative training on the generative adversarial network to be trained.
6. The time-series-based position prediction method according to claim 5, wherein the generative adversarial network to be trained includes an embedded network to be trained, a recovery network to be trained, a generator network to be trained, and a discriminator network to be trained,
and the step of inputting the training time series data into the generative adversarial network to be trained and performing iterative training on it to calculate the target model loss of the generative adversarial network to be trained comprises:
mapping the training time sequence data to the preset potential characterization space based on the embedded network to be trained to obtain training potential characterization data;
randomly extracting training representation data corresponding to the training potential representation data in the preset potential representation space, and generating training extended representation data corresponding to the training representation data based on the generator network to be trained and the training potential representation data;
calculating a penalty loss based on the training extended characterization data and the training potential characterization data;
based on the discriminator network to be trained, performing category discrimination on the training extension characterizations in the training extension characterization data to obtain a category discrimination result;
calculating an adversarial loss based on the category discrimination result and the real discrimination result corresponding to the training time series data;
reconstructing the training expansion representation data based on the recovery network to be trained to obtain training expansion time sequence data;
calculating a reconstruction loss based on the training extended time series data and the training time series data;
generating the target model loss based on the penalty loss, the adversarial loss, and the reconstruction loss.
7. The time-series-based position prediction method according to claim 6, wherein the training extension characterization data includes at least one training extension characterization,
the step of calculating penalty losses based on the training extended characterization data and the training potential characterization data comprises:
determining a target training potential representation in the training potential representation data that belongs to the same time step as the training extension representation;
generating the penalty loss by calculating a difference loss between the target training potential representation and the training extension representation.
8. The time-series-based position prediction method according to claim 6, wherein the target model loss includes a first target model loss and a second target model loss,
the step of generating the target model loss based on the penalty loss, the challenge loss, and the reconstruction loss comprises:
generating the first target model loss by calculating the sum of the penalty loss and the adversarial loss to be minimized;
and calculating the sum of the penalty loss and the reconstruction loss to be minimized to obtain the second target model loss.
9. The time-series-based position prediction method according to claim 5, wherein the target model loss includes a first target model loss and a second target model loss, and the generative adversarial network to be trained includes an embedded network to be trained, a recovery network to be trained, a generator network to be trained, and a discriminator network to be trained,
and the step of updating the network parameters of the generative adversarial network to be trained based on the target model loss comprises:
updating the network parameters of the embedded network to be trained and the network parameters of the recovery network to be trained based on the first target model loss;
and updating the network parameters of the generator network to be trained and the network parameters of the discriminator network to be trained based on the second target model loss.
10. The time-series-based position prediction method according to claim 1, wherein the enhanced data includes enhanced time characteristic data;
the step of performing temporal feature decomposition on the enhanced data to expand feature dimensions of the enhanced data to obtain optimized time series data comprises:
performing one-hot encoding on the enhanced time characteristic data so as to decompose the time characteristics in the enhanced time characteristic data into a preset number of time variable characteristics, thereby obtaining decomposed time characteristic data;
and using the decomposed time characteristic data together with the enhanced data as the optimized time series data.
11. The time-series-based position prediction method according to claim 1, wherein the target time series model includes at least one of a target trend term, a target period term, and a target holiday term,
the step of constructing a target time series model based on the optimized time series data comprises:
fitting the target trend item through a preset linear model based on the optimized time series data; and/or,
fitting the target periodic item through preset sine and cosine functions based on the optimized time series data; and/or,
and fitting the target holiday term through a preset indication function and a preset range influence parameter based on the optimized time series data.
12. A time-series-based position prediction apparatus, characterized in that the time-series-based position prediction apparatus comprises:
the sample number expansion module is used for acquiring time series data and performing data enhancement on the time series data so as to expand the sample number of the time series data and obtain enhanced data;
the characteristic dimension expansion module is used for performing time characteristic decomposition on the enhanced data so as to expand the characteristic dimension of the enhanced data and obtain optimized time sequence data;
the model building module is used for building a target time sequence model based on the optimized time sequence data;
and the position prediction module is used for acquiring position data to be predicted, inputting the position data to be predicted into the target time sequence model, and performing position prediction based on a time sequence on the position data to be predicted to obtain a position prediction result.
13. A time-series-based position prediction device, characterized by comprising: a memory, a processor, and a program, stored on the memory, for implementing the time-series-based position prediction method, wherein:
the memory is used for storing the program for implementing the time-series-based position prediction method;
and the processor is configured to execute the program for implementing the time-series-based position prediction method to implement the steps of the time-series-based position prediction method according to any one of claims 1 to 11.
14. A readable storage medium, characterized in that a program for implementing a time-series-based position prediction method is stored on the readable storage medium, and the program is executed by a processor to implement the steps of the time-series-based position prediction method according to any one of claims 1 to 11.
Priority Applications (1)
Application Number: CN202011564567.3A | Priority Date: 2020-12-25 | Filing Date: 2020-12-25 | Title: Position prediction method, apparatus, device and medium based on time series
Publications (1)
Publication Number: CN112732777A | Publication Date: 2021-04-30
Family ID: 75616223
Country Status: CN | CN112732777A (en)

2020
 20201225 CN CN202011564567.3A patent/CN112732777A/en active Pending
Legal Events
PB01: Publication
SE01: Entry into force of request for substantive examination