CN113222265A - Mobile multi-sensor space-time data prediction method and system in Internet of things - Google Patents

Mobile multi-sensor space-time data prediction method and system in Internet of things

Info

Publication number
CN113222265A
Authority
CN
China
Prior art keywords
time
data
sensor
prediction
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110558678.1A
Other languages
Chinese (zh)
Inventor
张颖慧
邢雅轩
刘洋
白戈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inner Mongolia Tata Power Transmission And Transformation Engineering Co ltd
Inner Mongolia University
Original Assignee
Inner Mongolia Tata Power Transmission And Transformation Engineering Co ltd
Inner Mongolia University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inner Mongolia Tata Power Transmission And Transformation Engineering Co ltd, Inner Mongolia University filed Critical Inner Mongolia Tata Power Transmission And Transformation Engineering Co ltd
Priority to CN202110558678.1A priority Critical patent/CN113222265A/en
Publication of CN113222265A publication Critical patent/CN113222265A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y20/00Information sensed or collected by the things
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y20/00Information sensed or collected by the things
    • G16Y20/10Information sensed or collected by the things relating to the environment, e.g. temperature; relating to location
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00IoT characterised by the purpose of the information processing
    • G16Y40/10Detection; Monitoring
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00IoT characterised by the purpose of the information processing
    • G16Y40/20Analytics; Diagnosis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/38Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W84/00Network topologies
    • H04W84/18Self-organising networks, e.g. ad-hoc networks or sensor networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Signal Processing (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Environmental & Geological Engineering (AREA)
  • Toxicology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a method and a system for predicting mobile multi-sensor space-time data in the Internet of Things. The method specifically comprises the following steps: acquiring monitoring data from multiple mobile sensors; feeding the acquired data into a network model and extracting temporal features to predict the temporal information in the mobile multi-sensor data; extracting spatial features based on the extracted temporal features; alternately extracting temporal and spatial features according to the number of layers of the network model; and performing prediction from the alternately extracted temporal and spatial features and outputting the prediction result. The method and system provide accurate and reliable multi-node prediction for intelligent monitoring systems and enhance the perception and utilization of environmental information by intelligent devices.

Description

Mobile multi-sensor space-time data prediction method and system in Internet of things
Technical Field
The application relates to the field of computers, in particular to a method and a system for predicting space-time data of mobile multi-sensors in the Internet of things.
Background
Currently, human daily activities rely heavily on Internet of Things (IoT) services, which provide users with various intelligent services based on environmental monitoring and personal sensing technologies [1]. Developing more valuable IoT-based applications has attracted extensive attention from both academia and industry. The IoT offers tremendous opportunities to improve people's lives and to create a wide variety of intelligent systems, such as healthcare, smart cities, smart agriculture, industrial automation and autonomous driving. The IoT architecture consists of three layers: an IoT sensor layer (ISL), a network layer and an application layer. The ISL is composed of physical-layer systems (e.g., intelligent sensors and sensing devices), and its main task is to sense environmental information and communicate with the network layer. By sensing the environment through the ISL, users can perform predictive analysis on the collected data, make seamless decisions, and automatically, intelligently and quickly respond to and control certain devices, thereby developing more valuable applications. The ISL is a key element of the IoT. When key information about the external environment and the internal system is sensed and collected, the data are strongly affected by various external signals, the sensors themselves and other factors, and therefore easily exhibit highly unstructured and dynamic characteristics such as nonlinearity, instability and rapid change; sensor data prediction under such characteristics is very challenging. Recently, in view of the complex characteristics of sensor node data, the prior art has proposed various machine learning and deep learning algorithms to study data prediction in different ISL application scenarios. One method uses an Autoregressive Integrated Moving Average (ARIMA) model to predict from historical information, alternately selecting one sensor node to work and record data. Similarly, in the prior art, data are also monitored and collected by deploying an ISL, and prediction is performed using an ARIMA model and a Support Vector Regression (SVR) model respectively. The prior art further proposes modeling critical time data monitored and collected by the ISL using an integrated framework of three machine learning algorithms: multivariate regression, random forest and support vector regression. To address the difficulty of solving kernel functions in the SVR algorithm, IoT-based multi-feature data are predicted using an SVR in which multiple kernels are linearly combined; the adaptive SVR model can autonomously provide a suitable kernel for the prediction of given environmental factors. These methods verify the excellent data-modeling capability of the multi-kernel SVR algorithm. Although data acquired by a single sensor can be used as an index for environmental monitoring, a single sensor can hardly achieve high-precision prediction and suffers from low prediction accuracy and a small prediction range. With the rapid development of the IoT, a single sensor cannot accomplish the demanding tasks of various fields, and a sufficient degree of sensor freedom is required to ensure the effective transmission of various functions and various kinds of information. Installing multiple measurement sensors can increase the robustness and fault tolerance of the prediction algorithm.
Therefore, on this basis, researchers have conducted a great deal of work on multi-sensor systems. Multiple sensor nodes can be used for internal or external data communication between deployment layers (sensor nodes/gateways); the collected data are sent to users through communication among the multiple sensor nodes, enabling more accurate monitoring. The prior art provides a multi-sensor network monitoring system based on fuzzy logic rules, in which multiple sensor nodes are used for state monitoring, overcoming the limitations of coverage and equipment installation and improving the detection area and the prediction accuracy for the environment. To obtain a larger detection range and a longer distance, the prior art also proposes distributing multiple sensors in an autonomous driving system and using an artificial neural network algorithm for prediction, thereby achieving accurate judgment. Although multi-sensor data prediction has advanced to a certain extent, the spatial information and the high-dimensional, nonlinear, complex features of the ISL are difficult to extract effectively with shallow machine learning algorithms and artificial neural networks. Therefore, advanced deep-learning-based spatio-temporal prediction frameworks have been proposed for functional representation and feature extraction. A prediction algorithm for sensor arrays based on a Long Short-Term Memory network (LSTM) has also been proposed in the prior art, achieving fast and accurate classification prediction with less data. However, this approach ignores the physical distribution of the sensors, assuming that there is no correlation between the order in which the data are collected by the multiple sensors and the sensors' physical distribution. In the prior art, hormone levels are predicted using a convolutional LSTM algorithm, in which relevant features of the surrounding area are captured by two-dimensional convolution and features of the time dimension are extracted by the LSTM. Furthermore, a unified framework integrating a Convolutional Neural Network (CNN) and an LSTM has been proposed for multi-node prediction, and a combination of Gated Recurrent Unit (GRU) and CNN models has been proposed to analyze the relationship between spatio-temporal data. The two-dimensional CNN has an excellent ability to capture spatially related features and makes better use of the spatial distribution of the data. However, the data processed by a CNN must form an ordered matrix, so the CNN is not applicable to network structures of topological nodes. To overcome the limitation of CNNs in extracting spatial features from non-Euclidean structured data, researchers use graph convolutional networks (GCN) to extract the spatial information of complex topological structures. Document [28] proposes a Temporal Graph Convolutional Network (T-GCN) to address the constraints of the network topology. First, a graph convolutional network is used to capture the topology of an urban road network, with historical time-series data as input, so as to obtain spatial features. Second, the resulting time series with spatial characteristics are input into GRU units, and the dynamic changes are obtained through information transfer between the units so as to capture temporal features.
The prior art proposes an attention-based multi-level co-occurrence graph convolution LSTM algorithm, which achieves spatio-temporal modeling by performing multi-node gesture recognition in the IoT while capturing co-occurrence features from different positions. Similarly, with predefined graph-structured data, algorithms combining graph neural networks with Recurrent Neural Network (RNN) variants are widely used to extract the spatial and temporal information of a network topology. These spatio-temporal prediction algorithms show that arranging multiple sensors with spatial correlation adds information to the data captured by the sensors, thereby improving the quality of the monitored data. As IoT deployments continue to expand, the amount of data and the number of sensors collecting it keep growing. An excessive number of sensors gradually increases the scale and energy consumption of the system. By introducing mobile nodes in the IoT sensor layer, a wider sensing coverage can be provided, the number of nodes can be further reduced, and the cost of the system can be lowered. To realize economical and efficient IoT sensor deployment, IoT sensor nodes with mobility have become a development trend; mobile data collection, monitoring, transmission, prediction and analysis in different fields enable various applications such as environmental monitoring, health assessment, object tracking and pattern recognition. Because multiple sensors can be flexibly deployed in a region, a mobile multi-sensor system can establish an all-around, intelligent monitoring system. In contrast to static sensor networks, the mobile nature allows sensors to interface with a variety of platforms, such as autonomous cars, animals and humans, for continuous monitoring (data generated on a regular basis) or event-triggered monitoring (data generated by trigger events). Thus, a mobile multi-sensor system can cope with a changing network topology. Most existing related research is based on the ideal assumption that node positions in the ISL are fixed, and ignores the influence of node position changes on IoT performance. Furthermore, the application of GCN to spatio-temporal data mining is still in the exploration phase.
In conclusion, how to perform mobile multi-sensor space-time data prediction is a technical problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
On this basis, the application provides a method and a system that deliver accurate and reliable multi-node prediction and enhance the perception and utilization of environmental information by intelligent devices.
To achieve this purpose, the application provides a method for predicting mobile multi-sensor space-time data in the Internet of Things, which comprises the following steps: acquiring monitoring data from multiple mobile sensors; feeding the acquired data into a network model and extracting temporal features to predict the temporal information in the mobile multi-sensor data; extracting spatial features based on the extracted temporal features; alternately extracting temporal and spatial features according to the number of layers of the network model; and performing prediction from the alternately extracted temporal and spatial features and outputting a prediction result.
As above, the acquired monitoring data of the mobile multi-sensor comprise the spatial location information of the plurality of mobile sensors as well as the temporal information generated by the plurality of mobile sensors in the multivariate responses they produce over time.
As above, feeding the acquired data into the network model for temporal feature extraction to predict the temporal information in the mobile multi-sensor data specifically includes the following sub-steps: feeding the monitoring data into the network model and initializing a graph adjacency matrix; feeding the initialized graph adjacency matrix into a graph learning module to update the node information and reconstruct the graph adjacency matrix; and, in response to completion of the construction of the graph adjacency matrix, acquiring time-related information.
As above, the mappings M1 and M2 of the randomly initialized graph adjacency matrix are represented as:
M1 = tanh(αE1θ1),
M2 = tanh(αE2θ2),
where tanh(·) denotes the hyperbolic tangent activation function, E1 and E2 denote randomly initialized node embeddings, θ1 and θ2 are model parameters, and α is a hyper-parameter used to control the saturation rate of the activation function.
As described above, after the model parameters and the hyper-parameter controlling the saturation rate of the activation function have been introduced, the initialized graph adjacency matrix is reconstructed; the reconstructed graph adjacency matrix A is represented as:
A = ReLU(tanh(α(M1M2^T - M2M1^T))),
where tanh(·) denotes the hyperbolic tangent activation function, α is the hyper-parameter controlling the saturation rate of the activation function, M1 and M2 denote the mappings of the initialized graph adjacency matrix, and the subtraction term together with the ReLU activation function regularizes the adjacency matrix so that an asymmetric matrix A is obtained.
As above, where the index of a node is defined as idx, i.e. representing the sequence number of k randomly selected nodes, the node sampling process for spatial correlation between mobile multi-sensor nodes is represented as:
idx=argtopk(A[i,:])
A[i,-idx]=0
wherein argtopk(·) returns the indices of the top-k maxima of a vector; A denotes the constructed graph adjacency matrix; and i denotes the sequentially selected sensor node; when the i-th node is selected, it is judged whether the k nodes corresponding to idx are related to the i-th node, with related entries set to 1 and unrelated entries set to 0.
As above, in the process of acquiring the time-related information, temporal modeling is further performed; the object of the temporal modeling is the time-series data acquired by multiple groups of sensors at a fixed sampling frequency.
As above, the time-related information is obtained through a dilated temporal convolutional network model consisting of two dilated inception layers, and the output x_out obtained from the two dilated inception layers represents the predicted time-related information; x_out is specifically expressed as:
x_out = tanh(f1 * x_in) × sigmoid(f2 * x_in),
where x_in denotes the input node information, f1 and f2 denote the dilated temporal convolution filters, sigmoid(·) denotes the sigmoid activation function, and tanh(·) denotes the hyperbolic tangent activation function.
As above, extracting spatial features from the extracted temporal features specifically includes the following sub-steps: constructing a neighborhood; continuously updating the neighborhood so as to continuously update the graph adjacency matrix; and, in response to completion of the update of the graph adjacency matrix, fusing the information of the propagation layers and acquiring the spatial correlation between the nodes.
A mobile multi-sensor space-time data prediction system in the Internet of Things specifically comprises a data acquisition unit, a time information prediction unit, a spatial feature extraction unit, an alternating execution unit and a prediction output unit. The data acquisition unit is used for acquiring monitoring data from multiple mobile sensors; the time information prediction unit is used for feeding the acquired data into a prediction network model and extracting temporal features so as to predict the temporal information in the mobile multi-sensor data; the spatial feature extraction unit is used for extracting spatial features based on the extracted temporal features; the alternating execution unit is used for alternately executing the extraction of temporal and spatial features according to the number of layers of the network model; and the prediction output unit is used for performing prediction from the alternately extracted temporal and spatial features and outputting a prediction result.
The application has the following beneficial effects:
the method and the system for predicting the mobile multi-sensor space-time data in the internet of things provide accurate and reliable multi-node prediction for an intelligent monitoring system, and the perception and utilization of intelligent equipment to environmental information are enhanced. First, an adaptive learning graph adjacency matrix is designed according to the multi-sensor dynamics. And secondly, the expansion rate of the expansion time convolution network model is improved to fully mine time characteristics and improve prediction efficiency. And designing a graph convolution network model to effectively capture the hidden relation among the nodes according to the dynamic graph adjacency matrix, thereby realizing the prediction of the monitoring data of a plurality of mobile sensor nodes at the next moment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be obtained by those skilled in the art from these drawings.
Fig. 1 is a schematic structural diagram of a mobile multi-sensor space-time data prediction system in the internet of things provided by the present application;
fig. 2 is a flowchart of a method for predicting space-time data of mobile multiple sensors in the internet of things, which is provided by the application.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the mobile multi-sensor space-time data prediction system in the Internet of Things provided by the present application specifically includes: a data acquisition unit 110, a time information prediction unit 120, a spatial feature extraction unit 130, an alternating execution unit 140, and a prediction output unit 150.
The data acquisition unit 110 is used for acquiring monitoring data of the mobile multi-sensor.
Specifically, monitoring data are acquired through the integration of a plurality of mobile sensors of the same type. The integrated sensors form a whole with a certain function and effect, so that the monitoring data can be acquired and extracted.
The time information prediction unit 120 is connected to the data acquisition unit 110 and is configured to feed the acquired data into a prediction network model and extract temporal features, so as to predict the temporal information in the mobile multi-sensor data.
Specifically, the time information prediction unit 120 performs time feature extraction and prediction of time-related information by using a time convolution network model.
The spatial feature extraction unit 130 is connected to the temporal information prediction unit 120, and is configured to perform spatial feature extraction according to the extracted temporal features.
The spatial feature extraction unit 130 specifically performs spatial feature extraction by using a dynamic graph convolution network model.
The alternating execution unit 140 is connected to the time information prediction unit 120 and the spatial feature extraction unit 130, respectively, and is configured to alternately execute the extraction of the temporal and spatial features according to the number of network model layers.
The prediction output unit 150 is connected to the alternating execution unit 140, and is configured to perform prediction according to the alternately extracted temporal and spatial features and to output a prediction result.
As shown in fig. 2, the method for predicting the mobile multi-sensor space-time data in the internet of things specifically includes the following steps:
step S210: and acquiring monitoring data of the mobile multi-sensor.
Specifically, the monitoring data are acquired through the integration of a plurality of sensors of the same type. The integrated sensors form a whole with a certain function and effect, so that the monitoring data can be acquired and extracted.
The monitoring data include the spatial location information of the plurality of mobile sensors and the temporal information generated by the plurality of mobile sensors in the multivariate responses they produce over time; that is, the acquired data are two-dimensional data containing spatial and temporal information.
Step S220: and sending the acquired data into a network model, and extracting time characteristics to realize the prediction of time information in the data of the mobile multi-sensor.
Because the monitoring data collected by the integrated mobile multi-sensor include both the spatial position information of the multiple sensors and the temporal information generated by the sensors in the multivariate responses produced over time, the spatio-temporal data need to be modeled as a whole.
Specifically, the present embodiment models the data with an improved dilated temporal convolutional network model (DTCN) and a dynamic graph convolutional network model (DGCN): the DTCN performs temporal feature extraction and predicts the time-related information from the temporal features of the historical multi-sensor monitoring data.
The step S220 specifically includes the following sub-steps:
step S2201: and sending the monitoring data to a network model to initialize the graph adjacency matrix.
Specifically, the two-dimensional data containing spatial and temporal information are fed into the network model, and the graph adjacency matrix is initialized according to the spatial feature dimension.
The monitoring data of the multiple sensors comprise the spatial position information of the multiple sensor nodes and the historical time information monitored by each sensor node. In order to improve the accuracy of predicting the temporal information of the monitoring data by capturing the spatial correlation of the mobile multi-sensor nodes, the multiple sensors arranged in a certain space with a certain layout are regarded as a graph structure with attribute information and structure information. The attribute information of each sensor node is the time data monitored by the mobile sensor, and the structure information is the geometric distribution of the sensor nodes. For the time data monitored by the multiple sensors, the data monitored by each node are divided into sequences of time-step length P, and a sequence containing P time steps is fed into the temporal convolutional network in the subsequent step, so that future data can be predicted from the historical time information contained in the sequence. Given multiple sensing sequences with historical time-step length P, the monitoring data X of the multiple sensor nodes divided into time steps P are represented as:
X = {z_1[i], z_2[i], ···, z_P[i]}
where z_P ∈ R^N represents the monitoring values of the N sensors at time step P, and z_P[i] ∈ R represents the value of the i-th (i ∈ N) sensor at time step P. The value of the multi-sensor monitoring data at the next time step is predicted from the historical time steps P, i.e., the predicted value Y of the time information is expressed as:
Y = {z_{P+1}[i]}.
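As an illustration only, the following Python sketch shows how such sliding windows could be built from a multi-sensor record; the array shapes and the helper name make_windows are assumptions made for this example, not details taken from the disclosure.

import numpy as np

def make_windows(series: np.ndarray, P: int):
    """Split a (T, N) record of N sensors into inputs of length P and
    one-step-ahead targets, following X = {z_1, ..., z_P}, Y = {z_{P+1}}."""
    X, Y = [], []
    for t in range(series.shape[0] - P):
        X.append(series[t:t + P])      # historical window, shape (P, N)
        Y.append(series[t + P])        # next-step values, shape (N,)
    return np.stack(X), np.stack(Y)

# Example: 1000 time steps from N = 8 mobile sensors, window length P = 12
record = np.random.rand(1000, 8).astype(np.float32)
X, Y = make_windows(record, P=12)
print(X.shape, Y.shape)  # (988, 12, 8) (988, 8)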
based on the perspective of the graph, individual sensor nodes in the multi-sensor are treated as nodes in the graph structure, and the graph adjacency matrix is used to describe the relationship between the nodes. However, due to the layout of the motion sensors, the graph adjacency matrix cannot be directly defined. Therefore, the automatic updating of the node relationship is realized according to the graph learning module in the embodiment.
The graph adjacency matrix is a description of the relationship between the nodes of the multi-sensor, the graph adjacency matrix is specifically a matrix with the number 0, 1, and the relationship between the nodes is represented by the number 0, 1. Wherein 0 represents that the correlation between any two nodes in the plurality of nodes is small, and 1 represents that the correlation between two nodes in the plurality of nodes is large, so that the spatial correlation among the plurality of nodes can be embodied in the graph adjacency matrix. According to the one-to-one correspondence relationship between the plurality of sensor nodes and the plurality of sensor monitoring data, the graph adjacency matrix effectively guides the correlation of the monitoring data sequence of the graph adjacency matrix by indicating whether every two nodes are correlated or not.
Further, where each node corresponds to a time series in the acquired time information, the time series may be represented as X ═ X (X)1,x2,···,xT) And T represents time.
In particular, to construct a learnable graph adjacency matrix, learnable parameters are introduced into its representation, so that the representation of the graph adjacency matrix can be continuously updated as the model iterates. After nonlinear activation, the mappings M1 and M2 of the randomly initialized graph adjacency matrix are expressed as:
M1=tanh(αE1θ1),
M2=tanh(αE2θ2),
where tanh(·) denotes the hyperbolic tangent activation function, E1 and E2 denote randomly initialized node embeddings, θ1 and θ2 denote the learnable parameters of the graph adjacency matrix that are continuously updated as the network model iterates, and α is a hyper-parameter that controls the saturation rate of the activation function while a new graph adjacency matrix is learned during training.
Step S2202: sending the initialized graph adjacency matrix into a graph learning module to update node information, and reconstructing the graph adjacency matrix.
Specifically, after the model parameters and the hyper-parameter controlling the saturation rate of the activation function have been introduced, the initialized graph adjacency matrix is reconstructed; the reconstructed graph adjacency matrix A is represented as:
A = ReLU(tanh(α(M1M2^T - M2M1^T))),
where tanh(·) denotes the hyperbolic tangent activation function, α is the hyper-parameter controlling the saturation rate of the activation function, M1 and M2 denote the mappings of the initialized graph adjacency matrix, and the subtraction term together with the ReLU activation function regularizes the adjacency matrix so that an asymmetric matrix A is obtained. If an element A_vu of the adjacency matrix is positive, its diagonally corresponding element A_uv will be zero, which better represents the interrelationships between the mobile nodes.
Wherein the index of the node is defined as idx, i.e. representing the sequence numbers of k randomly selected nodes, the node sampling process of spatial correlation between mobile multi-sensor nodes is represented as:
idx=argtopk(A[i,:])
A[i,-idx]=0
where argtopk(·) returns the indices of the top-k maxima of a vector, A represents the pairwise relations of the nodes in the constructed graph adjacency matrix, and i denotes the sequentially selected sensor node. When the i-th node is selected, it is judged whether the k nodes corresponding to idx are related to the i-th node: related entries are set to 1, unrelated entries to 0, and the node's own diagonal element is set to 0. For each sensor node, the k closest (most relevant) nodes are selected as its neighbors, i.e., the top-k most relevant subset of the sensor nodes is screened out.
In this way, the position of a mobile node is captured automatically according to its relevance, so changing a node's position does not affect the capture of the top-k most relevant nodes. The weights of unconnected nodes are set to zero while the weights of connected nodes are retained, thereby obtaining an asymmetric dynamic graph adjacency matrix that maps the spatial positions of the multiple sensors.
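For illustration, a minimal PyTorch sketch of such a graph learning module is given below; the class name GraphLearner, the embedding size and the use of torch.topk are assumptions made for this example rather than details taken from the disclosure.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphLearner(nn.Module):
    """Learns an asymmetric adjacency A = ReLU(tanh(alpha*(M1 M2^T - M2 M1^T)))
    from randomly initialized node embeddings, then keeps only the top-k
    neighbors of each node (all other entries are set to zero)."""
    def __init__(self, num_nodes: int, emb_dim: int = 40, k: int = 5, alpha: float = 3.0):
        super().__init__()
        self.E1 = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.E2 = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.theta1 = nn.Linear(emb_dim, emb_dim, bias=False)
        self.theta2 = nn.Linear(emb_dim, emb_dim, bias=False)
        self.k, self.alpha = k, alpha

    def forward(self) -> torch.Tensor:
        M1 = torch.tanh(self.alpha * self.theta1(self.E1))
        M2 = torch.tanh(self.alpha * self.theta2(self.E2))
        A = F.relu(torch.tanh(self.alpha * (M1 @ M2.T - M2 @ M1.T)))
        # node sampling: keep the k largest entries per row (idx = argtopk(A[i, :]))
        mask = torch.zeros_like(A)
        _, idx = A.topk(self.k, dim=1)
        mask.scatter_(1, idx, 1.0)
        return A * mask

adj = GraphLearner(num_nodes=8)()
print(adj.shape)  # torch.Size([8, 8])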
Step S2203: in response to completion of the construction of the graph adjacency matrix, acquisition of time-related information is performed.
The DTCN is used to extract the time-related information of the sequences and to predict the data of the next time step from the time series with historical time-step length P.
Specifically, the object of the temporal modeling is the time-series data acquired by multiple groups of sensors at a fixed sampling frequency; these time series accurately record the changes of the observation target in real time, reflect the development trend of the parameters within a certain time range, and yield a large data volume in a short time.
Due to the short sampling interval and the large data volume, useful information needs to be extracted from a longer time sequence. To better capture the long-term variation pattern, the present embodiment uses the DTCN to extract the global information of the time series. The DTCN is a model structure based on the TCN that samples at intervals, alleviating the limitation that the temporal modeling length is bounded by the convolution kernel size, which makes it suitable for sensor time series with large data volumes. Since the data collected in this embodiment are two-dimensional data containing spatial and temporal information, the two-dimensional convolution filter is set as:
F = (f_{1×1}, f_{1×2}, ···, f_{1×c}),
the monitoring data X of the plurality of sensor nodes divided into the time step P is:
X = {z_1[i], z_2[i], ···, z_P[i]}
then the dilated convolution at x_t is:
F * x_t = Σ_{m=0}^{c-1} f_{1×(m+1)} · x_{t-m·d},
where * denotes the convolution operation, m denotes a natural number, and d is the dilation factor, i.e., the sampling interval of the one-dimensional convolution, which increases as the number of network layers increases. c is the filter kernel size, and the subscript x_{t-m·d} indicates that time runs toward the past. The temporal convolutional network captures the sequential patterns of time-series data through one-dimensional convolution filters, but its sequence feature extraction is limited by the convolution kernel length.
When processing long sequences, either very deep networks or very large filters are required, i.e., the receptive field must be enlarged. Because sensors often collect large amounts of time-related data, in order to process long time sequences in the sensor data, this embodiment adopts a dilation strategy and enlarges the receptive field of the temporal convolutional network so that features can be extracted effectively.
The receptive field size R of the time convolutional network is:
R=d(c-1)+1
r represents the receptive field size of the d-layer dilated convolutional network with kernel size c, and to enlarge the receptive field, assuming that the dilation factor of each layer grows exponentially at a rate of q (q > 1) (assuming that the initial dilation factor is 1), the receptive field size R of the d-layer dilated convolutional network with kernel size c is:
Figure BDA0003078094420000121
the receptive field size of the network grows exponentially q with the number of hidden layers. Thus, longer sequences and more raw information can be captured using this dilation strategy.
To further extract temporal features, the method also applies a set of standard dilated one-dimensional convolution filters through the dilated temporal convolutional network model so as to extract high-level temporal features from the mobile sensor data.
The dilated temporal convolutional network model consists of two dilated inception layers. One dilated inception layer is activated by the hyperbolic tangent function and acts as a filter; the other is activated by the sigmoid function and acts as a gate controlling the information transfer. The output x_out obtained from the two dilated inception layers represents the predicted time-related information and is specifically expressed as:
x_out = tanh(f1 * x_in) × sigmoid(f2 * x_in),
where x_in denotes the input time information of the multiple nodes, f1 and f2 denote the dilated temporal convolution filters, sigmoid(·) denotes the sigmoid activation function, and tanh(·) denotes the hyperbolic tangent activation function.
The DTCN adopted in this embodiment enlarges the receptive field by increasing the growth rate of the dilation factor d, flexibly captures the long-term and short-term historical information of the sensor time series, avoids the vanishing-gradient and exploding-gradient problems of RNNs, and enables large-scale parallel processing, thereby improving training and prediction speed.
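A minimal PyTorch sketch of this gated structure is shown below for illustration; the module name GatedDTC and the channel sizes are assumptions, and each branch here uses a single dilated convolution rather than a full inception layer.

import torch
import torch.nn as nn

class GatedDTC(nn.Module):
    """Gated dilated temporal convolution: x_out = tanh(f1*x) * sigmoid(f2*x)."""
    def __init__(self, channels: int, kernel_size: int = 2, dilation: int = 1):
        super().__init__()
        pad = (kernel_size - 1) * dilation
        self.pad = nn.ConstantPad1d((pad, 0), 0.0)   # causal left padding
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x_in: torch.Tensor) -> torch.Tensor:
        x = self.pad(x_in)
        return torch.tanh(self.filter_conv(x)) * torch.sigmoid(self.gate_conv(x))

x = torch.randn(4, 16, 12)                # (batch, channels, time steps P)
print(GatedDTC(16, dilation=2)(x).shape)  # torch.Size([4, 16, 12])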
Step S230: and extracting the spatial features according to the extracted temporal features and the predicted temporal related information.
Specifically, the present embodiment performs spatial feature extraction according to a dynamic graph convolutional network model (DGCN).
The time-related information x_out predicted in step S220 and the graph adjacency matrix A reconstructed in step S2202 are fed into the dynamic graph convolutional network model to extract the spatial features.
Specifically, the stacked dynamic graph convolutional network model DGCN extracts the spatial features of the different nodes. In essence, the DGCN applies convolution within a graph neural network to extract the graph-structured data while reducing the computational cost of the graph neural network. The DGCN overcomes the limitation that conventional CNNs are only applicable to Euclidean data with stable invariance to displacement, scaling and warping transformations. The DGCN can learn attribute information and structure information at the same time, letting them jointly influence the representation of the graph nodes and fully learning the complementary relations in the graph structure.
Specifically, the predicted time-related information x_out and the reconstructed graph adjacency matrix A are fed into the dynamic graph convolutional network DGCN, and the spatial features between the nodes are acquired by dynamically constructing the graph adjacency matrix.
The step S230 specifically includes the following sub-steps:
step S2301: a neighborhood is constructed.
Wherein, a neighborhood is constructed by using a node sampling method.
For mobile multi-sensor graph data, there is no fixed 8-neighborhood as in Euclidean data: the neighborhood size of each node varies constantly, and there is no ordering of the nodes within the same neighborhood. Therefore, a node sampling method is used to screen out a fixed number of related nodes, which are ranked according to their relevance to construct the neighborhood.
Step S2302: and continuously updating the neighborhood to realize continuous updating of the graph adjacency matrix.
The dynamically constructed graph adjacency matrix updates the neighborhood information through the inner product of the points in the neighborhood with the convolution kernel parameters, and the graph adjacency matrix is updated according to the information aggregated from the nodes.
In order to introduce the graph adjacency matrix construction method of steps S2201 and S2202 into the DGCN and implement the node aggregation process, the way in which the GCN uses the graph adjacency matrix is derived as follows. The mathematical representation of the graph is defined as:
G=(V,E),
where V = {v_1, v_2, ···, v_N} is the set of vertices, i.e., the mathematical representation of the multiple sensor nodes, and the edges connecting the vertices form the set E = {e_ij = (v_i, v_j)}, where i and j denote two different vertices. The graph adjacency matrix A_ij is expressed mathematically as:
A_ij = 1 if (v_i, v_j) ∈ E, and A_ij = 0 otherwise.
If A_ij > 0, the two vertices v_i and v_j in the graph adjacency matrix are related and the edge connecting them belongs to the set E; if A_ij = 0, they are unrelated.
To realize feature extraction, the GCN trains the convolution kernel coefficients and computes the convolution of the vertices adjacent to a central vertex with the convolution kernel, thereby achieving effective extraction of spatial matrix features. The propagation layer H^l of a single-layer GCN is:
H^l = σ(Â H^{l-1} W^{l-1}), with Â = D^{-1/2}(A + I)D^{-1/2}, where D is the degree matrix of (A + I),
where A is the adjacency matrix of the graph and I is the identity matrix; by adding the identity matrix to the initial adjacency matrix, the diagonal elements become 1 and the nodes' own information is preserved. Â is the normalized symmetric adjacency matrix, H^{l-1} denotes the vertex features of the previous layer, W^{l-1} is the graph convolution weight of layer l, and σ is the activation function.
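As a small, generic illustration of this propagation rule (not specific to the patented model), the snippet below implements one GCN layer H^l = σ(Â H^{l-1} W^{l-1}) with the symmetric normalization Â = D^{-1/2}(A + I)D^{-1/2}:

import torch
import torch.nn as nn

def gcn_layer(A: torch.Tensor, H: torch.Tensor, W: nn.Linear) -> torch.Tensor:
    """One GCN propagation step: relu(A_hat @ H @ W) with
    A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(dim=1).clamp(min=1e-6).pow(-0.5)
    A_norm = d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)
    return torch.relu(A_norm @ W(H))

A = (torch.rand(8, 8) > 0.7).float()      # toy binary adjacency for 8 nodes
H = torch.randn(8, 16)                    # node features
print(gcn_layer(A, H, nn.Linear(16, 32)).shape)  # torch.Size([8, 32])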
Dynamic sampling is applied to Â to obtain the propagation layer H^l of the DGCN:
H^l = σ1(Ã H^{l-1} W^{l-1}),
where Ã denotes the dynamically sampled graph adjacency matrix, σ1 and σ2 are the activation functions through which the asymmetry of the graph adjacency matrix is realized so as to accommodate the propagation of mobile nodes, H^{l-1} denotes the vertex features of the previous layer, and W^{l-1} is the graph convolution weight of layer l.
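Purely as an illustration of one common way to normalize such a dynamically sampled adjacency matrix (the disclosure does not spell out the normalization, so the row-wise form used here is an assumption), the helper below adds self-loops and applies a degree normalization that keeps the matrix asymmetric:

import torch

def normalize_adj(A: torch.Tensor) -> torch.Tensor:
    """Row-normalized adjacency with self-loops: A_tilde = D^{-1} (A + I).

    Row normalization (rather than the symmetric D^{-1/2} A D^{-1/2} form)
    preserves the asymmetry of the learned adjacency matrix."""
    A_hat = A + torch.eye(A.size(0), device=A.device)
    deg = A_hat.sum(dim=1, keepdim=True).clamp(min=1e-6)
    return A_hat / deg

A = torch.rand(8, 8)
print(normalize_adj(A).sum(dim=1))  # each row sums to 1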
Step S2303: and in response to the completion of the updating of the adjacent matrix of the graph, fusing the information of the propagation layer and acquiring the spatial characteristics among the nodes.
The graph convolutional network is composed of two mix-hop propagation layers. When the update of the graph adjacency matrix is completed, the mix-hop propagation layers are used to process the information flow over the spatially dependent nodes, and the information propagation step H^(k) is defined as follows:
H^(k) = β·H_in + (1 - β)·Ã·H^(k-1),
where Ã is the dynamically sampled graph adjacency matrix, k denotes a natural number, and β is a hyper-parameter controlling the ratio in which the original state of the root node is preserved. H^(k-1) denotes the previous propagation layer and H^(k) the current propagation layer. H_in represents the spatial features of the nodes, whose states are constantly updated by the updated graph adjacency matrix as the depth of the graph convolution increases, and H^(k-1) represents the retained state of the previous step.
Thus, the propagated node states not only preserve locality but also explore deeper neighborhoods. As information is transmitted through the graph convolutional network, the continuously updated graph adjacency matrix and the spatial features are propagated together recursively, the parameters of the graph adjacency matrix are dynamically updated, and at the same time the monitoring data of the sensor nodes are fitted to the target observations.
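For illustration, a minimal PyTorch sketch of such a mix-hop propagation layer is given below; the class name MixHop, the propagation depth K and the output projection are assumptions of this example, and the recursion follows H^(k) = β·H_in + (1 - β)·Ã·H^(k-1).

import torch
import torch.nn as nn

class MixHop(nn.Module):
    """Mix-hop propagation: H^(k) = beta * H_in + (1 - beta) * A_hat @ H^(k-1),
    with the hop-wise states concatenated and projected to the output size."""
    def __init__(self, in_dim: int, out_dim: int, K: int = 2, beta: float = 0.05):
        super().__init__()
        self.K, self.beta = K, beta
        self.proj = nn.Linear((K + 1) * in_dim, out_dim)

    def forward(self, H_in: torch.Tensor, A_hat: torch.Tensor) -> torch.Tensor:
        # H_in: (num_nodes, in_dim); A_hat: normalized adjacency (num_nodes, num_nodes)
        H, states = H_in, [H_in]
        for _ in range(self.K):
            H = self.beta * H_in + (1.0 - self.beta) * A_hat @ H
            states.append(H)
        return self.proj(torch.cat(states, dim=-1))

H_in = torch.randn(8, 16)
A_hat = torch.softmax(torch.rand(8, 8), dim=1)   # stand-in normalized adjacency
print(MixHop(16, 32)(H_in, A_hat).shape)          # torch.Size([8, 32])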
Further, the output H_out of the stacked mix-hop propagation layers, i.e., the extracted spatial correlation information, is expressed as:
H_out = Σ_{k=0}^{K} H^(k) W^(k),
where the propagation is carried out with both the dynamically sampled graph adjacency matrix Ã and its transpose Ã^T (the transpose of the sampled graph adjacency matrix), x_out denotes the extracted time-related information that serves as the input to the propagation, H^(k) denotes the current propagation layer, and k is a natural number.
After the spatial features are extracted, the graph adjacency matrix is updated, so the time series corresponding to each node in the graph adjacency matrix changes. The changed time series are used as input to extract temporal features again, and spatial features are then extracted from the newly extracted temporal features. In this way the temporal convolutional network and the graph convolutional network are processed interactively, and time-related and space-related information is continuously re-extracted.
Step S240: and extracting the time characteristic and the spatial characteristic alternately according to the model layer number, and continuously predicting the time-related information according to the time characteristic after alternate execution.
And sending the data into a time convolution network and a dynamic graph convolution network for alternative processing, respectively extracting time characteristics and space characteristics, and predicting time related information in the time convolution network. First, the monitoring data of the multiple sensors is executed to step S220, and time feature extraction and time-related information prediction are performed using the DTCN network. Then, the data containing the time characteristics and the dynamic graph adjacency matrix obtained by the method in step S210 are sent to the DGCN to execute step S230, the graph adjacency matrix is updated according to the spatial characteristics of the multi-sensor nodes, and a subset of the relevant nodes is screened out. Secondly, the step S220 is executed again on the multi-sensor data with the excavated spatial features, and unlike the step of directly sending the unearthed spatial features to the DTCN for the first time, the second round of multi-sensor data is a spatially-related node subset, so that effective node information is provided for improving the time prediction accuracy of the node monitoring data in the step S220. And then, monitoring data of the node subset containing the time features to extract the spatial features. And finally, repeatedly extracting the time characteristic and the space characteristic according to the model layer number set by the prediction neural network.
Through the DTCN and the DGCN which are executed alternately, the related node subsets are screened for many times and finally traverse all the nodes of the mobile sensor, the feature extraction of the multi-sensor monitoring time dimension and the space dimension is realized, the time feature and the space feature of the dynamic multi-sensor node are effectively ensured to be fully utilized, and the space feature is mined to improve the accuracy of the time related information prediction on the basis of only utilizing the time feature to predict the time related information in the prior art.
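To make the alternating structure concrete, the sketch below combines a learned top-k adjacency, gated dilated temporal convolutions and a simple graph convolution into one model skeleton with an output convolution; the class name STPredictor, the number of layers and all dimensions are assumptions of this example, not the patented implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class STPredictor(nn.Module):
    """Alternating temporal (gated dilated conv) and spatial (graph conv) layers
    over a learned top-k adjacency, followed by an output convolution that maps
    the fused features to the next-step prediction for every sensor node."""
    def __init__(self, num_nodes: int, hidden: int = 16, layers: int = 3,
                 k: int = 4, alpha: float = 3.0):
        super().__init__()
        self.E1 = nn.Parameter(torch.randn(num_nodes, hidden))
        self.E2 = nn.Parameter(torch.randn(num_nodes, hidden))
        self.alpha, self.k = alpha, k
        self.start = nn.Conv2d(1, hidden, kernel_size=(1, 1))
        self.t_filter, self.t_gate, self.g_conv = (nn.ModuleList() for _ in range(3))
        for l in range(layers):
            d = 2 ** l                                    # exponentially growing dilation
            self.t_filter.append(nn.Conv2d(hidden, hidden, (1, 2), dilation=(1, d)))
            self.t_gate.append(nn.Conv2d(hidden, hidden, (1, 2), dilation=(1, d)))
            self.g_conv.append(nn.Linear(hidden, hidden))
        self.out = nn.Conv2d(hidden, 1, kernel_size=(1, 1))

    def learned_adj(self) -> torch.Tensor:
        M1, M2 = torch.tanh(self.alpha * self.E1), torch.tanh(self.alpha * self.E2)
        A = F.relu(torch.tanh(self.alpha * (M1 @ M2.T - M2 @ M1.T)))
        mask = torch.zeros_like(A).scatter_(1, A.topk(self.k, 1).indices, 1.0)
        A = A * mask
        return A / A.sum(1, keepdim=True).clamp(min=1e-6)    # row-normalize

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, num_nodes, P historical steps)
        A = self.learned_adj()
        h = self.start(x)
        for f, g, gc in zip(self.t_filter, self.t_gate, self.g_conv):
            pad = (f.dilation[1] * (f.kernel_size[1] - 1), 0)
            hp = F.pad(h, pad)                             # causal padding on time axis
            h = torch.tanh(f(hp)) * torch.sigmoid(g(hp))   # temporal step (DTCN-style)
            h = torch.einsum('vw,bcwt->bcvt', A,           # spatial step (DGCN-style)
                             gc(h.transpose(1, 3)).transpose(1, 3))
        return self.out(h[..., -1:]).squeeze(1).squeeze(-1)  # (batch, num_nodes)

x = torch.randn(2, 1, 8, 12)              # batch of 2, 8 sensors, 12 historical steps
print(STPredictor(num_nodes=8)(x).shape)  # torch.Size([2, 8])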
Step S250: and predicting according to the time characteristic and the space characteristic which are alternately extracted, and outputting a prediction result.
In step S220, the DTCN extracts the historical temporal features of the multi-sensor monitoring data and predicts the time-related information of the next time step. Step S230 uses the DGCN to extract the spatial correlation of the multi-sensor monitoring data, providing effective information for the prediction of the next time step. Step S240 obtains a feature layer containing both temporal and spatial features and, according to the dimension of the input multi-sensor monitoring data, transforms this feature layer into the output prediction result using an output convolution layer. Step S240 ensures sufficient extraction of the temporal and spatial features, and finally the prediction results of the monitoring data of the multiple mobile sensor nodes at the next moment are obtained, for example, predictions of the sound, light, gas and other data that the multiple mobile sensors will monitor.
The application has the following beneficial effects:
the method and the system for predicting the mobile multi-sensor space-time data in the internet of things provide accurate and reliable multi-node prediction for an intelligent monitoring system, and the perception and utilization of intelligent equipment to environmental information are enhanced. First, an adaptive learning graph adjacency matrix is designed according to the multi-sensor dynamics. And secondly, the expansion rate of the expansion time convolution network model is improved to fully mine time characteristics and improve prediction efficiency. And designing a graph convolution network model to effectively capture the hidden relation among the nodes according to the dynamic graph adjacency matrix, thereby realizing the prediction of the monitoring data of a plurality of mobile sensor nodes at the next moment.
The above-mentioned embodiments are only specific embodiments of the present application, which are used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some technical features within the technical scope disclosed by the present application; such modifications, changes or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments and are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A mobile multi-sensor space-time data prediction method in the Internet of things is characterized by comprising the following steps:
acquiring monitoring data of a mobile multi-sensor;
sending the acquired data into a network model, and extracting time characteristics to realize the prediction of time information in the data of the mobile multi-sensor;
extracting spatial features according to the extracted temporal features;
alternately extracting the time characteristic and the space characteristic according to the number of layers of the network model;
and predicting according to the time characteristic and the space characteristic which are alternately extracted, and outputting a prediction result.
2. The method for predicting mobile multi-sensor space-time data in the internet of things according to claim 1, wherein the acquired monitoring data of the mobile multi-sensor comprise the spatial location information of the plurality of mobile sensors and the temporal information generated by the plurality of mobile sensors in the multivariate responses produced over time.
3. The method for predicting the space-time data of the mobile multi-sensor in the internet of things as claimed in claim 2, wherein the step of sending the acquired data to a network model for time feature extraction to realize the prediction of the time information in the data of the mobile multi-sensor specifically comprises the following substeps:
sending the monitoring data to a network model, and initializing a graph adjacency matrix;
sending the initialized graph adjacency matrix into a graph learning module to update node information, and reconstructing the graph adjacency matrix;
in response to completion of the construction of the graph adjacency matrix, acquisition of time-related information is performed.
4. The method for predicting the space-time data of the mobile multi-sensor in the internet of things according to claim 3, wherein the mappings M1 and M2 of the randomly initialized graph adjacency matrix are represented as:
M1=tanh(αE1θ1),
M2=tanh(αE2θ2),
wherein tanh(·) denotes the hyperbolic tangent activation function; E1 and E2 denote randomly initialized node embeddings; θ1 and θ2 are model parameters; and α is the hyper-parameter used to control the saturation rate of the activation function.
5. The method for predicting the space-time data of the mobile multi-sensor in the internet of things according to claim 4, wherein the initialized graph adjacency matrix is reconstructed after the model parameters and the hyper-parameter controlling the saturation rate of the activation function have been introduced, and the reconstructed graph adjacency matrix A is represented as:
A = ReLU(tanh(α(M1M2^T - M2M1^T))),
wherein tanh(·) denotes the hyperbolic tangent activation function, α is the hyper-parameter controlling the saturation rate of the activation function, M1 and M2 denote the mappings of the initialized graph adjacency matrix, and the subtraction term together with the ReLU activation function regularizes the adjacency matrix so that an asymmetric matrix A is obtained.
6. The method for predicting space-time data of mobile multi-sensors in the internet of things according to claim 5, wherein an index of a node is defined as idx, that is, a sequence number of k randomly selected nodes is represented, and a node sampling process of spatial correlation among mobile multi-sensor nodes is represented as follows:
idx=argtopk(A[i,:])
A[i,-idx]=0
wherein argtopk(·) returns the indices of the top-k maxima of a vector; A denotes the constructed graph adjacency matrix; and i denotes the sequentially selected sensor node; when the i-th node is selected, it is judged whether the k nodes corresponding to idx are related to the i-th node, with related entries set to 1 and unrelated entries set to 0.
7. The method for predicting the space-time data of the mobile multi-sensor in the Internet of Things according to claim 3, wherein temporal modeling is further performed in the process of acquiring the time-related information; the object of the temporal modeling is the time-series data acquired by multiple groups of sensors at a fixed sampling frequency.
8. The method for predicting the space-time data of the mobile multi-sensor in the Internet of Things according to claim 7, wherein the time-related information is obtained through a dilated temporal convolution network model; the dilated temporal convolution network model is composed of two dilated inception layers, and the output x_out of the two dilated inception layers represents the predicted time-related information; x_out is specifically expressed as:
x_out = tanh(f_1 * x_in) × sigmoid(f_2 * x_in),
where x_in denotes the input node information, f_1 and f_2 denote the dilated temporal convolution filters, sigmoid(·) denotes the sigmoid activation function, and tanh(·) denotes the hyperbolic tangent activation function.
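A minimal sketch of this gated dilated temporal convolution, using plain dilated convolutions rather than the dilated inception layers named in the claim. The tensor layout (batch, channels, nodes, time) and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class GatedDilatedTCN(nn.Module):
    """x_out = tanh(f1 * x_in) elementwise-times sigmoid(f2 * x_in)."""
    def __init__(self, in_ch=32, out_ch=32, kernel=2, dilation=2):
        super().__init__()
        # two parallel dilated convolutions along the time axis
        self.filter_conv = nn.Conv2d(in_ch, out_ch, kernel_size=(1, kernel), dilation=(1, dilation))
        self.gate_conv = nn.Conv2d(in_ch, out_ch, kernel_size=(1, kernel), dilation=(1, dilation))

    def forward(self, x_in):
        # tanh branch extracts the features, sigmoid branch gates them
        return torch.tanh(self.filter_conv(x_in)) * torch.sigmoid(self.gate_conv(x_in))

x_in = torch.randn(8, 32, 20, 12)   # (batch, channels, nodes, time steps)
x_out = GatedDilatedTCN()(x_in)     # gated temporal features / predicted time-related information
```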
9. The method for predicting the space-time data of the mobile multi-sensor in the Internet of Things according to claim 7, wherein extracting the spatial features on the basis of the extracted temporal features specifically comprises the following substeps:
constructing a neighborhood;
continuously updating the neighborhood so as to continuously update the graph adjacency matrix;
and in response to completion of the updating of the graph adjacency matrix, fusing the information of the propagation layers and acquiring the spatial correlation between the nodes.
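A sketch of the spatial step described in this claim: node states are propagated over the (continuously updated) adjacency matrix for a few hops and the per-hop states are fused. The depth, the retention factor beta and the row normalisation are illustrative assumptions, not values from the patent.

```python
import torch

def graph_propagate(x, A, depth=2, beta=0.05):
    # x: (num_nodes, features) node states; A: (num_nodes, num_nodes) adjacency matrix
    A_hat = A + torch.eye(A.size(0))                 # add self-loops
    A_hat = A_hat / A_hat.sum(dim=1, keepdim=True)   # row-normalise the propagation matrix
    h, fused = x, x
    for _ in range(depth):
        h = beta * x + (1.0 - beta) * (A_hat @ h)    # keep part of the original node state
        fused = fused + h                            # fuse information across propagation layers
    return fused                                     # spatially correlated node features
```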
10. A mobile multi-sensor space-time data prediction system in the Internet of Things, characterized by comprising a data acquisition unit, a time information prediction unit, a spatial feature extraction unit, an alternate execution unit and a prediction output unit;
the data acquisition unit is used for acquiring the monitoring data of the mobile multi-sensor;
the time information prediction unit is used for feeding the acquired data into a prediction network model and extracting temporal features, so as to predict the time information in the mobile multi-sensor data;
the spatial feature extraction unit is used for extracting spatial features on the basis of the extracted temporal features;
the alternate execution unit is used for alternately executing the extraction of the temporal features and the spatial features according to the number of layers of the network model;
and the prediction output unit is used for performing prediction based on the alternately executed extraction of the temporal and spatial features, and outputting the prediction result.
CN202110558678.1A 2021-05-21 2021-05-21 Mobile multi-sensor space-time data prediction method and system in Internet of things Pending CN113222265A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110558678.1A CN113222265A (en) 2021-05-21 2021-05-21 Mobile multi-sensor space-time data prediction method and system in Internet of things

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110558678.1A CN113222265A (en) 2021-05-21 2021-05-21 Mobile multi-sensor space-time data prediction method and system in Internet of things

Publications (1)

Publication Number Publication Date
CN113222265A true CN113222265A (en) 2021-08-06

Family

ID=77097787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110558678.1A Pending CN113222265A (en) 2021-05-21 2021-05-21 Mobile multi-sensor space-time data prediction method and system in Internet of things

Country Status (1)

Country Link
CN (1) CN113222265A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200151288A1 (en) * 2018-11-09 2020-05-14 Nvidia Corp. Deep Learning Testability Analysis with Graph Convolutional Networks
CN111540199A (en) * 2020-04-21 2020-08-14 浙江省交通规划设计研究院有限公司 High-speed traffic flow prediction method based on multi-mode fusion and graph attention machine mechanism
CN111639787A (en) * 2020-04-28 2020-09-08 北京工商大学 Spatio-temporal data prediction method based on graph convolution network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZONGHAN WU et al.: "Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks", arXiv:2005.11650v1 [cs.LG] *
ZONGHAN WU et al.: "Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks", arXiv:2005.11650v1 [cs.LG], 24 May 2020 (2020-05-24), pages 1-11 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023056786A1 (en) * 2021-10-06 2023-04-13 International Business Machines Corporation Attenuation weight tracking in graph neural networks
CN114169466A (en) * 2021-12-24 2022-03-11 马上消费金融股份有限公司 Graph data processing method, article classification method, article traffic prediction method, apparatus, device and storage medium
CN115549823A (en) * 2022-11-23 2022-12-30 中国人民解放军战略支援部队航天工程大学 Radio environment map prediction method
CN115828165A (en) * 2023-02-15 2023-03-21 南京工大金泓能源科技有限公司 New energy intelligent micro-grid data processing method and system
CN115828165B (en) * 2023-02-15 2023-05-02 南京工大金泓能源科技有限公司 New energy intelligent micro-grid data processing method and system
CN115834433A (en) * 2023-02-17 2023-03-21 杭州沄涞科技有限公司 Data processing method and system based on Internet of things technology

Similar Documents

Publication Publication Date Title
Ren et al. Deep learning-based weather prediction: a survey
Liu et al. Forecast methods for time series data: a survey
CN113222265A (en) Mobile multi-sensor space-time data prediction method and system in Internet of things
Cheng et al. Multi-step data prediction in wireless sensor networks based on one-dimensional CNN and bidirectional LSTM
Khodayar et al. Spatio-temporal graph deep neural network for short-term wind speed forecasting
Barzegar et al. Coupling a hybrid CNN-LSTM deep learning model with a boundary corrected maximal overlap discrete wavelet transform for multiscale lake water level forecasting
US11714937B2 (en) Estimating physical parameters of a physical system based on a spatial-temporal emulator
Wang et al. Multiple convolutional neural networks for multivariate time series prediction
Jin et al. Spatio-temporal graph neural networks for predictive learning in urban computing: A survey
US11966670B2 (en) Method and system for predicting wildfire hazard and spread at multiple time scales
CN110570035B (en) People flow prediction system for simultaneously modeling space-time dependency and daily flow dependency
Du et al. Missing data problem in the monitoring system: A review
Ruiz et al. Gated graph convolutional recurrent neural networks
US11720727B2 (en) Method and system for increasing the resolution of physical gridded data
Jin et al. A GAN-based short-term link traffic prediction approach for urban road networks under a parallel learning framework
Li et al. Decomposition integration and error correction method for photovoltaic power forecasting
Jin et al. Adaptive dual-view wavenet for urban spatial–temporal event prediction
Hwang et al. Climate modeling with neural diffusion equations
Liu et al. Deep fusion of heterogeneous sensor data
Bai et al. Graph neural network for groundwater level forecasting
Wu et al. Msstn: Multi-scale spatial temporal network for air pollution prediction
Dick et al. Embedded intelligence in the internet-of-things
Dhamge et al. Genetic algorithm driven ANN model for runoff estimation
Ren et al. The data-based adaptive graph learning network for analysis and prediction of offshore wind speed
Rico et al. Graph neural networks for traffic forecasting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210806)