CN111222666A - Data calculation method and device


Info

Publication number
CN111222666A
CN111222666A (application CN201811416428.9A)
Authority
CN
China
Prior art keywords
data
prediction model
neural network
layer
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811416428.9A
Other languages
Chinese (zh)
Inventor
毛晓飞 (Mao Xiaofei)
丁岩 (Ding Yan)
胡晓 (Hu Xiao)
Current Assignee
Jinzhuan Xinke Co Ltd
Original Assignee
ZTE Corp
Priority date
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201811416428.9A priority Critical patent/CN111222666A/en
Publication of CN111222666A publication Critical patent/CN111222666A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/40 Business processes related to the transportation industry

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Marketing (AREA)
  • Biophysics (AREA)
  • General Business, Economics & Management (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Primary Health Care (AREA)
  • Traffic Control Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a data calculation method and device, wherein the method comprises the following steps: setting a prediction model, wherein the prediction model comprises a transverse structure and each unit of the transverse structure is set to process characteristic data of a different time; training the prediction model according to the characteristic data corresponding to a predetermined indication event; and inputting the characteristic data of the area to be predicted into the trained prediction model and acquiring, through the trained prediction model, the prediction data corresponding to the predetermined indication event. Through the technical scheme of the invention, the sequential relationship between time series and the mutual influence between spatial sequences are effectively preserved, the width of the neural network is increased, the depth of the neural network is reduced, the expressive power of the neural network is enhanced, the probability of model overfitting is reduced, and an efficient intelligent traffic data analysis and prediction platform based on the deep spatio-temporal interactive residual network structural module is realized.

Description

Data calculation method and device
Technical Field
The invention relates to the field of intelligent traffic control, in particular to a data calculation method and device.
Background
With the continued rapid development of China's economy and society, the number of motor vehicles has kept growing quickly, more and more residents choose driving themselves as their main mode of travel, and vehicles have an ever larger influence on urban traffic. By the end of 2017, the number of motor vehicles nationwide had reached 310 million. In 2017, 33.52 million motor vehicles were newly registered with public security traffic management departments, of which 28.13 million were newly registered automobiles, both record highs. The data show that in 2017 the number of automobiles nationwide reached 217 million, an increase of 23.04 million, or 11.85%, over 2016. The proportion of automobiles among motor vehicles keeps rising, having grown from 54.93% to 70.17% over the past five years, so automobiles have become the main component of the motor vehicle fleet. By vehicle type, 185 million passenger cars are in use, of which small and miniature passenger cars registered to individuals (private cars) reach 170 million, or 91.89% of passenger cars, the highest level in history. In terms of distribution, 53 cities nationwide have more than one million automobiles, 24 cities have more than two million, and 7 cities have more than three million.
With the rapid growth of urban automobile numbers, traffic management pressure and safety risks keep increasing, and monitoring and handling sudden crises becomes more difficult, so traffic monitoring and security have become very important. Detecting abnormal vehicle behavior and predicting regional crime rates from vehicle behavior are indispensable tasks of intelligent urban traffic management, yet the related art lacks an effective means of predicting vehicle anomalies.
Disclosure of Invention
In order to solve the problems, the invention provides a data calculation method and a data calculation device, and can provide an intelligent traffic data analysis and prediction scheme based on a deep space-time interaction residual error network structure module.
The invention provides a data calculation method, which comprises the following steps:
setting a prediction model, wherein the prediction model comprises a transverse structure, and each unit of the transverse structure is set to process characteristic data of different time respectively;
training a prediction model according to the characteristic data corresponding to the predetermined indication event;
inputting the characteristic data of the area to be predicted into the trained prediction model, and acquiring the prediction data corresponding to the predetermined indication event through the trained prediction model.
The invention also proposes a data computing device, said device comprising:
the model establishing unit is used for setting a prediction model, the prediction model comprises a transverse structure, and each unit of the transverse structure is set to process characteristic data of different time respectively;
the model training unit is used for training a prediction model according to the characteristic data corresponding to the predetermined indication event;
and the prediction unit is used for inputting the characteristic data of the area to be predicted into the trained prediction model and acquiring the prediction data corresponding to the predetermined indication event through the trained prediction model.
The invention also provides a terminal, which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the computer program to realize the processing of any data calculation method provided by the invention.
The present invention also proposes a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the processing of any of the data calculation methods provided by the present invention.
Compared with the prior art, the technical scheme provided by the invention comprises the following steps: setting a prediction model, wherein the prediction model comprises a transverse structure, and each unit of the transverse structure is set to process characteristic data of different time respectively; training a prediction model according to the characteristic data corresponding to the predetermined indication event; inputting the characteristic data of the area to be predicted into the trained prediction model, and acquiring the prediction data corresponding to the predetermined indication event through the trained prediction model. According to the scheme, the characteristic of the front-back relation between time sequences and the characteristic of the mutual influence between space sequences are effectively kept by utilizing the deep space-time interaction residual error network structure, the width of a neural network is increased, the depth of the neural network is reduced, the expression capacity of the neural network is enhanced, the probability of model overfitting is reduced, and the efficient intelligent traffic data analysis and prediction platform based on the deep space-time interaction residual error network structure module is realized.
Drawings
The accompanying drawings in the embodiments of the present invention are described below, and the drawings in the embodiments are provided for further understanding of the present invention, and together with the description serve to explain the present invention without limiting the scope of the present invention.
FIG. 1A is a schematic flow chart of a data calculation method according to an embodiment of the present invention;
fig. 1B is an intelligent transportation platform architecture provided in an embodiment of the present invention;
FIG. 2 is a block diagram of a data training module architecture according to an embodiment of the present invention;
FIG. 3 is a grid structure of an urban traffic "picture" according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an urban traffic timing grid matrix structure according to an embodiment of the present invention;
FIG. 5 is a convolutional network structure provided by an embodiment of the present invention;
fig. 6 is an interactive residual error network unit structure provided by the embodiment of the present invention;
fig. 7 is an example of a Sigmoid function provided in the embodiment of the present invention;
fig. 8 is a block diagram of a standardized BN layer structure according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of data flow provided by an embodiment of the present invention;
fig. 10 is a schematic diagram of an implementation of an intelligent transportation platform according to an embodiment of the present invention;
fig. 11 is a schematic diagram of an implementation model of a vehicle crime prediction interactive residual error neural network according to an embodiment of the present invention;
FIG. 12 is a geographic area grid structure provided by an embodiment of the present invention;
fig. 13 is a schematic diagram of an implementation model of a traffic flow prediction interactive residual error neural network according to an embodiment of the present invention;
fig. 14 is a schematic diagram of a human traffic prediction interactive residual error neural network implementation model provided in an embodiment of the present invention;
fig. 15 is a structure of a mesh people stream change according to an embodiment of the present invention;
fig. 16 is a schematic diagram of an implementation model of a human crime prediction interactive residual error neural network according to an embodiment of the present invention.
Detailed Description
To facilitate the understanding of those skilled in the art, the present invention is further described below in conjunction with the accompanying drawings; this description is not intended to limit the scope of the present invention. In the present application, the embodiments and the various aspects of the embodiments may be combined with each other without conflict.
The invention mainly provides a regional vehicle crime rate prediction platform based on deep learning over vehicle gate (checkpoint) trajectory data and gridded geographic area data. Based on the vehicle gate trajectory data and the geographic area grid division data, the intelligent traffic system achieves the goal of smart-city intelligent traffic through a neural network training module and a neural network prediction module.
As shown in fig. 1A, an embodiment of the present invention provides a data calculation method, where the method includes:
step 100, setting a prediction model, wherein the prediction model comprises a transverse structure, and each unit of the transverse structure is set to process characteristic data of different time respectively;
step 200, training a prediction model according to characteristic data corresponding to a predetermined indication event;
step 300, inputting the feature data of the area to be predicted into the trained prediction model, and obtaining the prediction data corresponding to the predetermined indication event through the trained prediction model, wherein the prediction data comprises the occurrence probability or frequency corresponding to the predetermined indication event.
Fig. 1B is a schematic diagram of an intelligent transportation platform architecture for implementing the data calculation method according to the embodiment of the present invention;
the characteristic data comprises data indicating geographic positions and event data corresponding to each geographic area;
the prediction model also includes a longitudinal structure, which captures the spatial influence between grids at different geographic locations through a deep convolutional neural network.
The step 100 includes: setting a transverse structure and a longitudinal structure of a prediction model;
the characteristic data of different time refers to the characteristic data of different time or the characteristic data of different time periods;
for example, the number and time intervals of time sequences formed by different moments or time periods in the transverse structure are set; the number of layers of the deep convolutional neural network is set, and the like.
The set prediction model further comprises an interactive residual network unit array, which comprises network units arranged in N layers and M columns, wherein N and M are positive integers;
the input of the first layer of the array of interactive residual network elements is the output of an input pre-processing neural network,
in the interactive residual error network unit array, the output of the network unit of the upper layer is used as the input corresponding to the network unit of the lower layer; and the output of the network element in column K will also be part of the input of the network element in column K + 1; the time interval or moment corresponding to the network unit in the K column is the time interval or moment before the K +1 column, and the output of the Nth layer in the interactive residual error network unit array is connected to the output processing neural network layer; k is an integer greater than or equal to 1 and less than M.
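The column-to-column and layer-to-layer wiring described above can be sketched as a toy dataflow. Everything here is illustrative: the `unit` function is a numeric placeholder, not the actual interactive residual network unit, and none of the names come from the patent.

```python
import numpy as np

def unit(top_input, left_input):
    # Placeholder for one interactive residual network unit: combines the
    # output of the unit in the layer above (top_input) with part of the
    # output of the unit in the previous column (left_input).
    return top_input + (0 if left_input is None else 0.5 * left_input)

def run_array(inputs, n_layers):
    """inputs: list of M arrays, one per time interval (one per column)."""
    m_cols = len(inputs)
    prev_layer = list(inputs)            # outputs of the layer above
    for _ in range(n_layers):
        layer_out, left = [], None
        for k in range(m_cols):
            out = unit(prev_layer[k], left)  # column K feeds column K+1
            layer_out.append(out)
            left = out
        prev_layer = layer_out           # layer N output goes onward
    return prev_layer                    # fed to the output network layer

cols = [np.ones((2, 2)) * (k + 1) for k in range(3)]  # M = 3 time steps
outs = run_array(cols, n_layers=2)
print(len(outs), outs[0].shape)
```

The time ordering of the columns (column K precedes column K+1) is what lets later time steps see earlier ones.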
Wherein the step 200 comprises: step 210, acquiring characteristic data corresponding to a predetermined indication event required by training; step 210 comprises:
step 211, obtaining source data corresponding to a predetermined indication event;
step 212, cleaning the source data, mainly to filter out various erroneous data;
step 213, extracting characteristic data from each type of data after data cleaning according to a characteristic extraction algorithm;
step 214, the extracted feature data are respectively normalized by columns, so that all the data are mapped between [ -1,1], and structured feature data are obtained. The feature data in one column indicates feature data corresponding to a certain time or a specific time period.
Step 215, the structured feature data is stored.
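The column-wise normalization of step 214 can be sketched with a simple min-max mapping into [-1, 1]; the function name and the guard for constant columns are assumptions, not from the patent.

```python
import numpy as np

def normalize_columns(x):
    """Min-max normalize each column of x into [-1, 1]."""
    lo = x.min(axis=0)
    hi = x.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return 2.0 * (x - lo) / span - 1.0

# each column holds the feature values for one moment or time period
data = np.array([[10.0, 100.0],
                 [20.0, 150.0],
                 [30.0, 200.0]])
print(normalize_columns(data))
```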
Wherein, in one embodiment, the source data comprises vehicle trajectory source data, geographic grid source data, external influence source data, and vehicle case source data;
as shown in Table 1, for the description of the various types of source data:
Figure BDA0001879585080000051
Figure BDA0001879585080000061
table 1 the various types of source data and meanings are shown in table 2, which are descriptions of vehicle trajectory feature data fields.
Figure BDA0001879585080000062
Figure BDA0001879585080000071
TABLE 2 vehicle trajectory characterization data field
In step 214, the specific standardization methods for the external influence data, which include weather, temperature, wind speed and holidays, are described in Table 3, the external influencing factor data description and normalization method. (Table 3 is reproduced as an image in the original publication.)
In step 214, obtaining the corresponding formatted data from the source data mainly consists of normalizing the unstructured data into structured data.
In step 215, the feature data are stored in the format described in Table 4, the structured data field description. (Table 4 is reproduced as an image in the original publication.)
The step 200 of training the prediction model according to the feature data corresponding to the predetermined indication event includes:
step 210, respectively acquiring monitoring data of each region corresponding to M time periods; m is a positive integer; wherein each region is defined by geographic grid source data and the monitoring data is defined by vehicle trajectory source data; a corresponding relation exists between the geographic grid source data and the vehicle track source data;
step 220, inputting the monitoring data of each region corresponding to the M time intervals and the external influence source data into an interaction residual error neural network model;
step 230, training a deep interaction residual error neural network prediction model by adopting an Adam algorithm through a minimum loss function;
and 240, saving the trained prediction model when a preset ending condition is met.
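Steps 220 through 240 amount to a loop that minimizes a loss until a predetermined ending condition holds. The sketch below uses a toy linear model and a plain gradient step as stand-ins for the deep interactive residual network and the Adam update; only the K1/K2 stopping logic mirrors the text, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)                          # toy "model" parameters
X = rng.normal(size=(64, 2))             # stand-in for region feature data
y = X @ np.array([0.5, -0.3])            # stand-in for the target values

K1 = 10_000    # max iterations (the patent suggests 500000 < K1 < 2000000)
K2 = 1e-6      # error threshold (the patent suggests 0.000005 < K2 < 0.00002)
lr = 0.1
for it in range(K1):
    err = X @ w - y
    loss = float(np.mean(err ** 2))      # minimum-mean-square-error loss
    if loss < K2:                        # predetermined ending condition
        break                            # step 240: save the trained model here
    w -= lr * (2.0 / len(X)) * (X.T @ err)  # gradient step (Adam stand-in)
print(it, loss)
```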
The deep interactive residual error neural network comprises an input preprocessing neural network, an interactive residual error network unit array, an output processing neural network layer and an external influence full-connection neural network;
wherein the input preprocessing neural network is used for preprocessing input vehicle trajectory data; the M time intervals determine that the network element array has M inputs;
the external influence full-connection neural network is used for processing the data of the external influence and taking the influence as an output result;
the output processing neural network layer is used for setting the dimension of the output of the external influence fully-connected neural network and the dimension of the output data of the interactive residual error network unit array to be the same, so that correlation calculation can be carried out.
Wherein each network element in the interactive residual network element array comprises: the device comprises a first input end, a second input end, a calculation module, a first output end and a second output end;
the first input end and the second input end are connected to a computing module; the first output end and the second output end are both connected to the computing module;
the computing module comprises a lateral structure and a longitudinal structure;
the longitudinal structure comprises: the input end sequentially passes through a BN (Batch Normalization) layer, an activation function ReLU (Rectified Linear Unit) layer, a convolutional layer and a dropout structural layer;
the input before the BN unit in the previous layer is also connected to the input of the dropout structure layer in the current layer and the input of the dropout structure layer in the next layer, as shown by arrows a1 and a2 in fig. 6, respectively, and the output of the dropout structure layer in the previous layer is connected to the input of the dropout structure layer in the next layer, as shown by arrow A3 in fig. 6. The BN unit at the upper layer refers to the BN unit in the network unit at the upper layer, the dropout structural layer at the upper layer refers to the dropout structural layer in the network unit at the upper layer, and the dropout structural layer at the lower layer refers to the dropout structural layer in the network unit at the lower layer.
The lateral structure comprises: the convolution result of the previous time series is injected into the input of the BN unit of the next time series below, together with the input before the BN unit, as indicated by the arrow a4 in fig. 6.
Specifically, the convolution result of the t-1 time series is injected into the input of the BN unit of the t time series together with the original input of the t time series, that is, the input of the t time is added with the convolution result of the t-1 time series in addition to its own sequence, as shown by the curve with arrows in the shaded portion in fig. 6.
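A minimal numeric sketch of this lateral injection, with a single multiplicative tap standing in for the convolution and the identity standing in for BN + ReLU (both simplifications, not the patent's actual layers):

```python
import numpy as np

def conv(x, w):
    # toy "convolution": a single-tap scaling of the sequence
    return w * x

def lateral_step(x_t, prev_conv, w):
    # Input of the BN unit at time t = its own input plus the convolution
    # result of the t-1 time series (arrow A4 in fig. 6).
    bn_in = x_t if prev_conv is None else x_t + prev_conv
    bn_out = bn_in                       # identity stand-in for BN + ReLU
    return conv(bn_out, w)

seq = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
prev, outs = None, []
for x_t in seq:
    prev = lateral_step(x_t, prev, w=0.5)
    outs.append(prev)
print(outs)
```

Each time step therefore carries forward the influence of the previous one, which is how the mutual influence among time series is captured.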
Wherein the predetermined end condition comprises:
the number of iterations of the algorithm reaches K1, or the prediction error rate is less than K2; wherein K1 is a positive integer and K2 is a real number greater than zero and less than 1. For example, set 500000 < K1 < 2000000; set 0.000005 < K2 < 0.00002.
This is explained below with reference to a specific example.
As shown in fig. 2, the data training module based on the deep convolution residual neural network architecture mainly comprises two parts, namely an external influence factor module and a time sequence trend module. The time sequence trend module includes an interactive residual network unit array, as shown in fig. 2, where each inter-ResNetUnit is a network unit in the interactive residual network unit array, and may be set as network units in M columns in N layers, where M and N are equal in the illustrated example. A plurality of conv1 modules before the interactive residual error network unit array form a first layer convolution layer; the plurality of conv2 modules after the inter-residual network element array constitute a second layer convolutional layer.
First, the urban area is divided into a two-dimensional grid according to the distribution of traffic gates (checkpoints), so that the data of all grid areas jointly form the urban traffic data, as shown in fig. 3. If the data of the whole urban traffic situation in a certain time period is regarded as a picture, then the data of each grid area is the data of one pixel of that picture.
Then, the time axis is divided into intervals, for example one hour per interval, so that a number of pictures for different time periods is obtained, and the value of each pixel matrix of a picture is exactly the matrix formed by the various characteristic values of the vehicles at the gates of each grid region. In the example shown in fig. 4, starting from time a, three city traffic "pictures" are obtained over three time intervals of length t, and the data of grid A form a 3 × 3 matrix after the 3 time intervals.
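Building these time-interval "pictures" from per-grid records can be sketched as a simple binning step; the record layout and the 3 × 3 grid with 3 intervals are illustrative assumptions, not the patent's data format.

```python
import numpy as np

# Hypothetical gate records: (grid_row, grid_col, interval_index, value)
records = [(0, 0, 0, 2.0), (0, 0, 1, 3.0), (1, 2, 0, 1.0),
           (0, 0, 2, 5.0), (1, 2, 2, 4.0)]

H, W, T = 3, 3, 3                 # grid height/width, number of intervals
pictures = np.zeros((T, H, W))    # one "picture" per time interval
for r, c, t, v in records:
    pictures[t, r, c] += v        # each grid cell acts as one pixel

# Per-cell time series, e.g. the values of cell (0, 0) over the 3 intervals:
print(pictures[:, 0, 0])
```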
These time-series sequences of "pictures" at different times are then input into the interactive residual neural network, whose specific structure is shown in figs. 5 and 6. In fig. 2, a deep convolutional neural network is used to extract the spatial influence between grids at different geographic positions, for example the spatial dependence of nearby grids and distant grids, while the interactive residual neural network is used to extract the ordering of the temporal features. In the external model in fig. 2, external influence features such as weather, temperature, wind speed and holidays are extracted from an external data set and then input into a two-layer fully connected neural network. The vector formed by the external influence features at time t is E_t. E_t passes through the two fully connected layers: the first layer is an embedding layer for each type of external influence feature (the four external features, namely weather, temperature, wind speed and holidays, correspond to four input dimensions), with an activation function attached behind the embedding layer; the second layer maps the input data from low-dimensional features to high-dimensional features so that its dimensions match those of the input X_t of the time-series trend module. Through these two layers, the output of the external factor module is X_E.
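The external factor module just described (an embedding layer plus a low-to-high-dimensional mapping) can be sketched as follows; the random weights are placeholders for parameters that would be learned in training, and the layer sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def external_module(e_t, d_hidden, d_out):
    """Two fully connected layers: embed the external features, then map
    them up to the dimension of X_t. Weights here are random placeholders;
    in the patent they are learned."""
    W1 = rng.normal(size=(e_t.size, d_hidden))
    W2 = rng.normal(size=(d_hidden, d_out))
    h = np.maximum(0.0, e_t @ W1)   # embedding layer + ReLU activation
    return h @ W2                   # low-dim -> high-dim mapping, gives X_E

e_t = np.array([0.3, -0.5, 0.1, 1.0])   # weather, temperature, wind, holiday
x_e = external_module(e_t, d_hidden=8, d_out=32 * 32)
print(x_e.shape)
```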
For fig. 2, assume that the matrix composed of the various feature values of the "picture" at time t is X_t; then the matrix sequence at the N times before t is [X_{t-N}, X_{t-(N-1)}, ... X_{t-1}]. Arranged in time order, these matrices form a tensor

X^(0) = [X_{t-N}, X_{t-(N-1)}, ..., X_{t-1}]

After convolution by the first layer Conv1, the output of the first convolutional layer is

X^(1) = f(W^(1) * X^(0) + b^(1))

where "*" is the convolution calculation and f is the activation function, i.e. f(z) = max(0, z).
X^(1) is then fed into the N interactive residual network units (InterResNetUnit), whose structure is shown in fig. 6. The matrix sequence output after the N interactive residual network units is fed into the last convolutional layer, and the convolution outputs together form the output X_out of the time-series trend module:

X_out = Σ_{i=1}^{N} W_i ∘ f(W_i^(2) * X_i + b_i^(2))

where "∘" is the Hadamard product, [W_N, W_{N-1}, ... W_1] comprises N matrices whose values are parameters to be learned, W with a superscript is the weight parameter matrix of the corresponding convolutional layer and b the corresponding offset parameter matrix; the subscripts of W and b indicate the column of the corresponding interactive residual network unit, and the superscripts indicate the convolutional layer.
Finally, the external influence factor module output X_E and the time-series trend module output X_out are combined, and the predicted value at time t is

X̂_t = tanh(X_out + X_E)

The model is trained by computing the minimum mean square error (MMSE) between the true value and the predicted value:

L(θ) = || X_t - X̂_t ||²
In the actual training process, adaptive moment estimation (Adam algorithm) is adopted, independent adaptive learning rates are designed for different parameters by calculating first moment estimation and second moment estimation of the gradient, and the optimized neural network model is obtained more efficiently.
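The Adam update the text refers to, in which first and second moment estimates of the gradient yield a per-parameter adaptive learning rate, can be sketched as below; minimizing f(x) = x² is a toy objective, not the patent's loss.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: per-parameter adaptive step from moment estimates."""
    m = b1 * m + (1 - b1) * grad        # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)           # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(x) = x^2 from x = 3; the gradient is 2x
x, m, v = np.array([3.0]), np.zeros(1), np.zeros(1)
for t in range(1, 5001):
    x, m, v = adam_step(x, 2.0 * x, m, v, t, lr=0.05)
print(x)
```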
FIG. 6 shows an array structure composed of structure-optimized interactive residual network units, where input 1 through input N are the input sequences ordered in time series, i.e. the sequence output by the first convolutional layer.
The first-layer convolution output is first fed to the BN layer, then through the activation function, and then convolved again.
Viewed transversely, the convolution result of the previous time series, together with the input before the BN unit, is injected into the input of the BN unit of the next time series, so that the influence of the previous time series is added to the subsequent convolutional layer and the mutual influence among the time series is captured.
Viewed longitudinally, all inputs pass through the first convolutional layer and then, in sequence, through "BN + ReLU + Conv" and "dropout" structural layers. There are three residual structures within "BN + ReLU + Conv"; placing several residual structures in parallel greatly increases the expressive power of the neural network without increasing the network depth, while the random disconnections of the "dropout" layer reduce the co-adaptation of neural network nodes and markedly reduce the possibility of model overfitting.
Two considerations motivate this design. First, because urban areas are huge and the grids are numerous after gridding the geographic area, the model needs a very deep convolutional network to capture the dependence and influence relationships among grids over large ranges at different distances, and the expressive power of a deep convolutional neural network grows with network depth.
On the other hand, a deep network easily causes the gradient to vanish during back-propagation, making training ineffective, and the deeper the network, the longer the model training takes. The interactive residual neural network unit is therefore introduced: it increases the width of the neural network, reduces its depth, enhances its expressive power, and reduces the probability of model overfitting.
The basic unit architecture of the interactive residual neural network is shown in the solid-line block portion of fig. 6.
H(x)=x+F(x)
Where H (x) is the input of the final ReLU activation function of the basic residual unit, assuming N layers of interactive residual neural networks, then:
Figure BDA0001879585080000132
Figure BDA0001879585080000133
then
Figure BDA0001879585080000134
……
Figure BDA0001879585080000135
That is to say any layer x that followsLThe content of the vector will have a part of x from a certain layer in front of itlA linear contribution. Then define:
Loss = (1/2)(x_s − x_L)²
∂Loss/∂x_l = ∂Loss/∂x_L · ∂x_L/∂x_l = ∂Loss/∂x_L · (1 + ∂(∑_{i=l}^{L-1} F(x_i))/∂x_l)
where x_s denotes the ideal vector value of a given layer x_L for the current sample and label. Because the gradient of x_l always contains the additive term contributed by the identity shortcut, pure multiplication chains are avoided when the back-propagation weights of the deep neural network are adjusted, so the gradient does not vanish even at great depth; and for sequences with strong temporal relationships, the influence of the temporal order between sequences is better preserved.
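A toy numerical check of this claim — comparing a plain multiplicative chain with a residual chain using constant scalar per-layer derivatives (the factor 0.01 is an arbitrary illustration):

```python
def grad_plain(depth, a=0.01):
    # plain chain x_{i+1} = g(x_i) with g'(x) = a everywhere:
    # the gradient is a**depth and vanishes as depth grows
    g = 1.0
    for _ in range(depth):
        g *= a
    return g

def grad_residual(depth, a=0.01):
    # residual chain x_{i+1} = x_i + F(x_i) with F'(x) = a everywhere:
    # each layer contributes a factor (1 + a), never smaller than 1,
    # because the identity shortcut adds "1" to every local derivative
    g = 1.0
    for _ in range(depth):
        g *= (1.0 + a)
    return g

print(grad_plain(100))     # ~1e-200: effectively zero
print(grad_residual(100))  # ~2.70: still order 1 after 100 layers
```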
Through the operation of the BN layer, a better balance point between linearity and nonlinearity is sought: nonlinearity enhances the expressive power of the neural network, while BN avoids the excessively slow network convergence caused by activations crowding the two saturated ends of the nonlinear region.
Before a deep neural network applies its nonlinear transformation, the distribution of the activation-function input H(x) shifts or drifts as network depth increases or as training proceeds, and the overall distribution generally approaches the two saturated ends of the nonlinear function's range (e.g., the Sigmoid function shown in fig. 7). This causes the gradients of lower layers to vanish during back-propagation, so convergence becomes slower and slower when training a deep network. By introducing a BN layer, the distribution of the inputs to each layer's neurons is pulled back to a standard normal distribution with mean 0 and variance 1, so that activation inputs fall in the region where the nonlinear function is sensitive to its input; small input changes then cause large changes in the loss function, and vanishing gradients are avoided. Fig. 8 is a schematic diagram of the structure of the BN layer. Meanwhile, to preserve nonlinearity, the BN layer applies a mapping y = a·x + b to the transformed x satisfying mean 0 and variance 1; that is, two parameters a and b are added for each neuron and learned during training.
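The effect can be seen numerically: for activations drifted deep into the Sigmoid's saturated tail, the average derivative is near zero, while after a BN-style pull-back to mean 0 / variance 1 the same inputs sit in the input-sensitive region. (A NumPy sketch; the drifted distribution N(6, 0.5) is an arbitrary illustration, not a value from the patent.)

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)  # derivative of the Sigmoid

rng = np.random.default_rng(1)
z = rng.normal(loc=6.0, scale=0.5, size=10_000)  # drifted into the saturated tail
z_bn = (z - z.mean()) / z.std()                  # BN core step: mean 0, variance 1

saturated = sigmoid_grad(z).mean()    # tiny: gradients vanish here
centered = sigmoid_grad(z_bn).mean()  # ~0.2: the input-sensitive region
```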
The deep interactive residual neural network model is trained with the Adam algorithm by minimizing the loss function. Training data are input; convolutional layer Conv1 and the convolutional layers of all interactive residual neural network units use 64 filters of size 3 × 3, while convolutional layer Conv2 uses 2 filters of size 3 × 3. The maximum number of iterations is set to 1,000,000, the prediction-error-rate threshold to 0.00001, and the batch size to 32. When the number of iterations reaches 1,000,000 or the prediction error rate falls below 0.00001, training is considered complete, and the trained model is saved to the distributed storage system as a file.
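The training procedure just described — Adam with a dual stopping rule (iteration budget, or error below a threshold) — can be sketched on a toy quadratic loss. The function names and the toy objective are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def adam_train(grad_fn, w, lr=0.05, max_iters=10_000, tol=1e-5,
               beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimize a loss via Adam; stop at the iteration budget or when the
    gradient norm (standing in for the prediction-error threshold) < tol."""
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t in range(1, max_iters + 1):
        g = grad_fn(w)
        if np.linalg.norm(g) < tol:
            break
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, t

# toy loss L(w) = ||w||^2 / 2, whose gradient is simply w
w0 = np.array([1.0, -2.0])
w_star, iters = adam_train(lambda w: w, w0)
```

In the patent's setting, `grad_fn` would be the gradient of the loss over a training batch and the stopping threshold would be applied to the prediction error rate rather than the gradient norm.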
Fig. 9 is a schematic data-flow diagram of an interactive residual neural network with N layers. Assuming the geographic region is divided into a 32 × 32 grid and 10 types of feature data are extracted from each cell at 30-minute intervals, the input data matrix has size 32 × 32 × 10. Each input path first passes through a first layer of 64 filters of size 3 × 3; with 'same' padding, the first-layer output matrix has size 32 × 32 × 64. It then passes through the array formed by the N layers of interactive residual neural network units, each layer producing output of size 32 × 32 × 64, and finally through a layer of 3 × 3 × 2 filters, giving an output matrix of size 32 × 32 × 2. There are four types of external influence factors; partitioned over the geographic grid, their input data matrix has size 32 × 32 × 4, each type of data passes through a 2-layer fully connected neural network, and the final output comprises 4 groups of matrices of size 32 × 32 × 2. The output of each data path and the outputs of the 4 types of external influence factors are Hadamard-multiplied with their respective weight matrices to obtain the final result, a matrix of size 32 × 32 × 1. The model training process mainly uses the Adam algorithm to update the weight matrix of each layer of the interactive neural network by minimizing the loss function.
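The shape bookkeeping in this data flow can be checked with a minimal 'same'-padding convolution in NumPy (a naive stride-1 sketch written for clarity, not the patent's TensorFlow implementation; it assumes the 32 × 32 grid with 10 features per cell described above):

```python
import numpy as np

def conv2d_same(x, kernels):
    # x: (H, W, C_in); kernels: (kh, kw, C_in, C_out); stride 1, zero 'same' padding
    kh, kw, c_in, c_out = kernels.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw), (0, 0)))
    h, w = x.shape[:2]
    out = np.zeros((h, w, c_out))
    for i in range(h):
        for j in range(w):
            window = xp[i:i + kh, j:j + kw, :]  # kh x kw x c_in receptive field
            out[i, j] = np.tensordot(window, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 32, 10))                      # 32x32 grid, 10 features
y1 = conv2d_same(x, rng.normal(size=(3, 3, 10, 64)))   # Conv1: 64 filters -> (32, 32, 64)
y2 = conv2d_same(y1, rng.normal(size=(3, 3, 64, 2)))   # Conv2: 2 filters  -> (32, 32, 2)
```

'Same' padding with stride 1 preserves the 32 × 32 spatial size at every layer, which is what lets all N residual units share the 32 × 32 × 64 shape.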
The following description is made with reference to specific embodiments.
Example 1
With continued economic development, the pace of urbanization keeps accelerating, and highly developed urban civilization attracts ever more people into cities. Cities create boundless opportunities for those living in them, while bearing tremendous pressure from newly emerging problems. Urban public-security problems are increasingly prominent; in particular, with the rapid growth in urban vehicle ownership, more and more people choose to drive, vehicle-related crime cases are increasing, and the challenge to safe-city construction grows. Using deep learning technology to analyze the massive stored urban road-monitoring data has therefore become important for achieving proactive prevention beforehand, real-time perception and rapid response during an incident, and rapid investigation and analysis afterwards.
Step 11: the distributed data extraction module extracts the various types of source data in table 1.
Step 12: the extracted data are cleaned, mainly to filter out erroneous data such as expired bayonet information, bayonets with no data, and case information of irrelevant vehicles.
Step 13: feature data are extracted from each type of cleaned data by the feature extraction algorithm; the specific features are shown in table 4.
Step 14: the extracted feature-data matrices are each normalized column by column so that all data are mapped into [-1, 1]. The processed feature data are stored in a distributed storage system (HDFS).
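The column-wise mapping into [-1, 1] described in this step can be sketched with min-max scaling (a minimal illustration; the helper name and the constant-column guard are assumptions, not part of the patent):

```python
import numpy as np

def scale_to_unit_interval(X):
    # column-wise min-max scaling into [-1, 1]: x' = 2*(x - min)/(max - min) - 1
    lo = X.min(axis=0)
    hi = X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    return 2.0 * (X - lo) / span - 1.0

# three rows of two feature columns with very different ranges
X = np.array([[10.0, 0.2],
              [20.0, 0.4],
              [30.0, 0.6]])
Xs = scale_to_unit_interval(X)   # every column now spans exactly [-1, 1]
```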
Step 15: the data prediction module constructs the interactive residual neural network model shown in fig. 10 through TensorFlow (an open-source software library for machine learning across a variety of perception and language-understanding tasks), extracts all bayonet feature data at intervals of 1440 minutes, sets the width of the interactive neural network to 720 and the depth to 80 layers, and initializes the weights of each hidden layer with random normal-distribution initialization. The maximum number of iterations is set to 1,000,000, the threshold of Loss (a function of the difference between predicted and actual values) to 0.00001, and the batch size to 32; the training data set is loaded to train the model parameters (using back-propagation and the Adam algorithm). When the number of iterations reaches 1,000,000 or the Loss is below 0.00001, model training is considered complete, and the trained model is saved to the distributed storage system as a file.
Fig. 11 shows the implementation model of the interactive residual neural network for vehicle-crime prediction.
Step 16: the system loads the model file, inputs the various feature data stored in the distributed file system, and computes the predicted vehicle crime rate of each grid cell.
Step 17: the output of the prediction model is passed to the front-end display module for display.
Table 5 table for setting structural parameters of embodiment 1
Fig. 12 is an example of gridded division of a geographic area. In the grid shown in fig. 12, diamonds mark locations related to vehicle crimes, solid arrows represent traffic-inflow data for a grid cell, and dashed arrows represent vehicle-outflow data for that grid cell.
Example 2
With continued economic development, the pace of urbanization keeps accelerating, the urban motor-vehicle population maintains rapid growth, more and more residents choose driving as their main mode of travel, and the influence of vehicles on urban traffic grows ever larger. By the end of 2017, the number of motor vehicles nationwide had reached 310 million. In 2017, 33.52 million motor vehicles were newly registered with public-security traffic-management departments, including 28.13 million newly registered automobiles, both record highs. Analysis and prediction of urban road traffic flow has thus become very important for traffic management and public safety. Predicting the traffic flow of road grid cells over a coming period from the historical traffic-flow data of the urban road grid formed by the individual road sections greatly improves the early-warning capability for urban traffic-flow changes and the capability to respond rapidly to abnormal flow changes on important road sections.
TABLE 6 traffic prediction characteristics data field description
Step 1: the distributed data extraction module extracts the gridding data of the geographic area, the bayonet distribution data within the geographic grid, vehicle-flow-related data (shown in table 6), and the weather, temperature, and wind-speed data of each geographic grid cell.
Step 2: the extracted data are cleaned, mainly to filter out erroneous data such as positioning errors.
Step 3: feature data are extracted from each type of cleaned data by the feature extraction algorithm, according to the features of table 6 and table 3.
Step 4: the extracted feature-data matrices are normalized column by column so that all data are mapped into [-1, 1]. The processed feature data are stored in a distributed storage system (HDFS).
Step 5: the data prediction module constructs the interactive residual neural network model shown in fig. 13 through a TensorFlow cluster, extracts each bayonet's feature data at 30-minute intervals, sets the width of the interactive neural network to 336 (2 × 24 × 7) and the depth to 160 layers, initializes the weights of each hidden layer with random normal initialization, and sets the initial learning rate to 0.0006. The maximum number of iterations is set to 100,000, the Loss threshold to 0.0001, and the batch size to 64; the training data set is loaded to train the model parameters. When the number of iterations reaches 100,000 or the Loss is below 0.0001, model training is considered complete, and the trained model is saved to the distributed storage system as a file. Fig. 13 is a schematic diagram of the implementation model of the interactive residual neural network for traffic-flow prediction.
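The quoted width of 336 (2 × 24 × 7) corresponds to one 30-minute sample per time step over a 7-day window; the 2016 (12 × 24 × 7) used with 5-minute sampling elsewhere in this document follows the same arithmetic. A quick check (the helper name is an assumption for illustration):

```python
def network_width(sample_minutes, days=7):
    # one column (time step) per sample over a `days`-day history window
    samples_per_hour = 60 // sample_minutes
    return samples_per_hour * 24 * days

print(network_width(30))  # 336: 30-minute sampling over one week
print(network_width(5))   # 2016: 5-minute sampling over one week
```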
Step 6: the system loads the model file, inputs the various feature data stored in the distributed file system, and computes the predicted traffic flow of each grid cell over a coming period.
Step 7: the output of the prediction model is passed to the front-end display module for display.
Table 7 table for setting configuration parameters in the present embodiment
Example 3
With continued economic development, urbanization keeps accelerating and more and more people pour into cities. Cities create boundless opportunities for those living in them while bearing the enormous pressure of new problems, and predicting pedestrian flow is very important for traffic management and public safety. Traditional pedestrian-flow monitoring and early-warning means remain at the level of on-site manual monitoring and real-time statistics from gates or ticket counting, making early warning difficult. In an intelligent transportation platform, through positioning and counting by telecom operators' mobile-phone base stations and the gridding of the geographic area, the pedestrian volume of an area over a coming period can be predicted from the historical pedestrian-flow data of each geographic grid cell, greatly improving the early-warning capability for urban pedestrian-volume changes and the capability to respond rapidly to crowd-crush incidents.
Step 1: the distributed data extraction module extracts the gridding data of the geographic area (the city is divided into cells of size 2 KM × 2 KM), the number of base stations and of public WIFI (Wireless Fidelity) access points within each geographic grid cell, mobile-phone connection data of the base stations, positioning data, connection data of public WIFI devices, public WIFI positioning data, and the weather, temperature, and wind-speed data of each geographic grid cell.
Step 2: the extracted data are cleaned, mainly to filter out erroneous data such as positioning errors.
Step 3: feature data are extracted from each type of cleaned data by the feature extraction algorithm, including, for each geographic grid cell, the number of base-station device connections, the number of public-WIFI device connections, the number of base stations in the area, and the weather, temperature, and wind-speed data.
Step 4: the extracted feature-data matrices are normalized column by column so that all data are mapped into [-1, 1]. The processed feature data are stored in a distributed storage system (HDFS).
Step 5: the data prediction module constructs the interactive residual neural network model shown in fig. 14 through a TensorFlow cluster, extracts each bayonet's feature data at 5-minute intervals, sets the width of the interactive neural network to 2016 (12 × 24 × 7) and the depth to 160 layers, initializes the weights of each hidden layer with orthogonal initialization, and sets the initial learning rate to 0.001. The maximum number of iterations is set to 100,000, the Loss threshold to 0.0001, and the batch size to 64; the training data set is loaded to train the model parameters (using back-propagation and the Adam algorithm). When the number of iterations reaches 100,000 or the Loss is below 0.0001, model training is considered complete, and the trained model is saved to the distributed storage system as a file. Fig. 14 shows the implementation model of the interactive residual neural network for pedestrian-flow prediction.
Step 6: the system loads the model file, inputs the various feature data stored in the distributed file system, and computes the predicted pedestrian volume of each grid cell over a coming period.
Step 7: the output of the prediction model is passed to the front-end display module for display.
Fig. 15 is a schematic diagram of pedestrian-flow changes in a grid cell.
Fig. 15 shows the pedestrian flow of an urban grid cell; the example cell contains a telecom base station. Arrows of the first type, pointing into the example cell, represent crowd inflow, with the starting point of each arrow marking the original position of the inflowing people; arrows of the second type, pointing from inside the example cell to outside it, represent crowd outflow, with the box at the end of each arrow marking the position of the outflowing people.
Table 8 table for setting configuration parameters in the present embodiment
Example 4
With rapid economic development, more and more people choose cities in search of opportunity, and urban population density keeps rising. Crime, among the most serious social problems in cities, severely affects public safety. Analyzing such criminal activity and predicting regional crime rates is therefore increasingly important for city management and safe-city construction.
Step 1: the geographic area is divided into 2 KM × 2 KM grid cells, and the distributed data extraction module extracts the various types of source data of each cell, as shown in table 9;
Figure BDA0001879585080000221
table 9 example 4 human crime rate prediction source data description
Step 3: the data cleaning module cleans the received data, mainly filtering out erroneous data such as data with longitude/latitude errors.
Step 4: corresponding feature data are extracted from the cleaned data according to the feature extraction algorithm, as shown in table 10;
Step 5: the extracted feature data are normalized column by column so that all data are mapped into [-1, 1]; the structured data format is shown in table 10. The processed feature data are stored in the HDFS in the following format:
table 10 example 4 data characteristics field description
Step 6: the data prediction module constructs the interactive residual neural network model shown in fig. 16 through a TensorFlow cluster, extracts each bayonet's feature data at 1440-minute intervals, sets the width of the interactive neural network to 720 and the depth to 160 layers, initializes the weights of each hidden layer with orthogonal initialization, and sets the initial learning rate to 0.0006. The maximum number of iterations is set to 1,000,000, the Loss threshold to 0.00001, and the batch size to 64; the training data set is loaded to train the model parameters (using back-propagation and the Adam algorithm). When the number of iterations reaches 1,000,000 or the Loss is below 0.00001, model training is considered complete, and the trained model is saved to the distributed storage system as a file, yielding the model file.
Step 7: the system loads the model file, inputs the various feature data stored in the distributed file system, and computes the predicted crime rate of each grid cell over a coming period.
Step 8: the output of the prediction model is passed to the front-end display module for display.
Table 11 example 4 structure parameter setting table
Based on the same or a similar conception as the above embodiments, an embodiment of the present invention further provides a data computing apparatus, which includes:
the model establishing unit is used for setting a prediction model, the prediction model comprises a transverse structure, and each unit of the transverse structure is set to process characteristic data of different time respectively;
the model training unit is used for training a prediction model according to the characteristic data corresponding to the predetermined indication event;
and the prediction unit is used for inputting the characteristic data of the area to be predicted into the trained prediction model and acquiring the prediction data corresponding to the predetermined indication event through the trained prediction model.
In an embodiment of the present invention, the apparatus further includes a data acquisition unit, configured to acquire the source data required for training the prediction model before the prediction model is trained according to the feature data corresponding to the predetermined indication event; further configured to acquire the source data corresponding to the area to be predicted before the feature data of the area to be predicted are input into the trained prediction model; and to acquire the corresponding source data according to the training data;
in this embodiment of the present invention, the acquiring, by the data acquiring unit, the source data corresponding to the training data includes:
performing a data cleaning process;
performing a data feature extraction process;
performing a feature data normalization process;
storing the normalized data.
In the embodiment of the invention, the characteristic data comprises data indicating geographic positions and event data corresponding to each geographic area;
the training model also includes a vertical structure that acquires spatial influence between different geographical location grids through a deep convolutional neural network.
In the embodiment of the invention, the model establishing unit is further used for setting an interactive residual network unit array in the prediction model, the interactive residual network unit array comprising network units in N layers and M columns, N and M being positive integers; wherein:
the longitudinal structure is arranged as follows: the input end of each network unit passes in sequence through a batch-normalization (BN) layer, an activation-function rectified-linear-unit (ReLU) layer, a convolutional layer, and a dropout structural layer; the input of the BN layer of the previous network unit is also connected to the input of the dropout structural layer of the current network unit and to the input of the dropout structural layer of the next network unit, and the output of the dropout structural layer of the previous network unit is connected to the input of the dropout structural layer of the next network unit;
the transverse structure is arranged as follows: the output results of the convolutional layers of the network units in the previous column, together with the inputs ahead of the BN layer, are injected into the inputs of the BN units of the network units in the next column.
In an embodiment of the present invention, the prediction model further includes: an input preprocessing neural network, an output processing neural network layer and an external influence full-connection neural network;
the model establishing unit is also used for setting an input preprocessing neural network, an output processing neural network layer and an external influence full-connection neural network in the prediction model;
the model training unit trains a prediction model according to the characteristic data corresponding to the predetermined indication event, and the model training unit comprises:
inputting source data corresponding to the characteristic data into the input preprocessing neural network to obtain corresponding standardized data;
inputting the standardized data into an interactive residual error network unit array to obtain corresponding first processing data;
inputting the first processing data into the output processing neural network layer to obtain corresponding second processing data; the output processing neural network layer is used for setting the dimensionality of the output data of the external-influence fully connected neural network to be the same as the dimensionality of the output data of the interactive residual network unit array;
inputting external data into an external influence full-connection neural network to obtain first external data;
and acquiring trained data according to the second processing data and the second external data.
In one embodiment of the invention, the characteristic data comprises: vehicle trajectory data, vehicle case data, geographic grid data, and external influence data.
The model training unit trains a prediction model according to the feature data corresponding to the predetermined indication event, and the model training unit comprises:
respectively acquiring vehicle track data and vehicle case data of each area corresponding to M time intervals with equal intervals, wherein M is a positive integer; wherein the regions are defined by geography grid data; a corresponding relation exists between the geographic grid data and the vehicle track data; a corresponding relation exists between the geographic grid data and the vehicle case data;
and inputting the vehicle track data and the vehicle case data of each region corresponding to the M time periods into the input preprocessing neural network, and respectively injecting the monitoring data of each region corresponding to the M time periods into the M columns of the interactive residual network unit array for processing.
In an embodiment of the present invention, the inputting, by the prediction unit, feature data of the area to be predicted into a trained prediction model, and the obtaining, by the trained prediction model, prediction data corresponding to the predetermined indication event includes:
and loading the trained prediction model, and inputting vehicle track data and external influence data corresponding to the area to be predicted to obtain the predicted vehicle crime rate of each area.
In this embodiment, the external influence data includes data related to at least one of the following: weather characteristics, temperature characteristics, wind speed characteristics, holiday characteristics.
In another embodiment of the present invention, the characteristic data includes geographic grid data, traffic data corresponding to each area, and external influence data;
the external influence data comprises data relating to at least one of: school, bank, hospital, or community distribution.
In another embodiment of the present invention, the feature data includes geography grid data, and human flow data corresponding to each region.
In yet another embodiment of the present invention, the characteristic data includes geographic grid data, crime data corresponding to each area, and external influence data; the external influence data comprises at least one of: entertainment venue data, financial institution data, or daily event venue data.
In the embodiment of the present invention, the model training unit sets a predetermined termination condition as:
the iteration times of the algorithm reach K1 times or the prediction error rate is less than K2; the K1 is a positive integer; the K2 is a real number greater than zero and less than 1.
It should be noted that the above-mentioned embodiments are only for facilitating the understanding of those skilled in the art, and are not intended to limit the scope of the present invention, and any obvious substitutions, modifications, etc. made by those skilled in the art without departing from the inventive concept of the present invention are within the scope of the present invention.

Claims (12)

1. A method of data computation, the method comprising:
setting a prediction model, wherein the prediction model comprises a transverse structure, and each unit of the transverse structure is set to process characteristic data of different time respectively;
training a prediction model according to the characteristic data corresponding to the predetermined indication event;
inputting the characteristic data of the area to be predicted into the trained prediction model, and acquiring the prediction data corresponding to the predetermined indication event through the trained prediction model.
2. The data calculation method of claim 1,
the characteristic data comprises data indicating geographic positions and event data corresponding to each geographic area;
the training model also includes a vertical structure that acquires spatial influence between different geographical location grids through a deep convolutional neural network.
3. The data calculation method according to claim 2, wherein the prediction model is configured to include an interactive residual network element array, the interactive residual network element array includes N layers and M columns of network elements, and N and M are positive integers;
the longitudinal structure is arranged as follows: the input end of each network unit passes in sequence through a batch-normalization (BN) layer, an activation-function rectified-linear-unit (ReLU) layer, a convolutional layer, and a dropout structural layer; the input of the BN layer of the previous network unit is also connected to the input of the dropout structural layer of the current network unit and to the input of the dropout structural layer of the next network unit, and the output of the dropout structural layer of the previous network unit is connected to the input of the dropout structural layer of the next network unit;
the transverse structure is arranged as follows: the output results of the convolutional layers of the network units in the previous column, together with the inputs ahead of the BN layer, are injected into the inputs of the BN units of the network units in the next column.
4. The data calculation method of claim 3, wherein the predictive model further comprises:
an input preprocessing neural network, an output processing neural network layer and an external influence full-connection neural network;
the training of the prediction model according to the feature data corresponding to the predetermined indication event comprises:
inputting source data corresponding to the characteristic data into the input preprocessing neural network to obtain corresponding standardized data;
inputting the standardized data into an interactive residual error network unit array to obtain corresponding first processing data;
inputting the first processing data into the output processing neural network layer to obtain corresponding second processing data; the output processing neural network layer is used for setting the dimensionality of the output data of the external-influence fully connected neural network to be the same as the dimensionality of the output data of the interactive residual network unit array;
inputting external data into an external influence full-connection neural network to obtain first external data;
and acquiring trained data according to the second processing data and the second external data.
5. The data computing method of claim 1, wherein before training the predictive model based on the feature data corresponding to the predetermined indicative event, further comprising: acquiring source data and acquiring the characteristic data according to the source data;
the obtaining the feature data according to the source data comprises:
performing a data cleaning process;
performing a data feature extraction process;
performing a feature data normalization process;
storing the normalized data.
6. The data calculation method according to any one of claims 1 to 5,
the characteristic data includes: vehicle trajectory data, vehicle case data, geographic grid data, and external influence data.
7. The data computing method of claim 6, wherein training the predictive model based on the feature data corresponding to the predetermined indicative event comprises:
respectively acquiring vehicle track data and vehicle case data of each area corresponding to M time intervals with equal intervals, wherein M is a positive integer; wherein the regions are defined by geography grid data; a corresponding relation exists between the geographic grid data and the vehicle track data; a corresponding relation exists between the geographic grid data and the vehicle case data;
and inputting the vehicle track data and the vehicle case data of each region corresponding to the M time periods into the input preprocessing neural network, and respectively injecting the monitoring data of each region corresponding to the M time periods into the M columns of the interactive residual network unit array for processing.
8. The data calculation method of claim 7, wherein
inputting the feature data of the region to be predicted into the trained prediction model, and obtaining the prediction data corresponding to the predetermined indication event through the trained prediction model, comprises:
loading the trained prediction model, and inputting the vehicle trajectory data and external influence data corresponding to the region to be predicted, to obtain the predicted vehicle crime rate of each region.
9. The data calculation method according to any one of claims 1 to 5, wherein
the feature data is set in any one of the following ways:
the feature data is set to comprise geographic grid data, traffic data corresponding to each region, and external influence data, wherein the external influence data comprises data relating to at least one of the following: the distribution of schools, banks, hospitals, or communities;
the feature data is set to comprise geographic grid data and pedestrian flow data corresponding to each region;
or the feature data is set to comprise geographic grid data, crime data corresponding to each region, and external influence data, wherein the external influence data comprises at least one of the following: entertainment venue data, financial institution data, or daily event venue data.
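One possible in-memory layout for the third configuration in claim 9 is sketched below; the claim names only the data categories, so the field names, grid parameters, and values are all hypothetical:

```python
# Illustrative feature-data layout: geographic grid data, per-region crime
# data, and the three named kinds of external influence data.
feature_data = {
    "grid": {"cell_size_deg": 0.01, "bounds": [39.8, 116.2, 40.0, 116.5]},
    "crime_by_region": {"r0": [3, 1, 0, 2], "r1": [0, 0, 4, 1]},
    "external": {
        "entertainment_venues": {"r0": 12, "r1": 3},
        "financial_institutions": {"r0": 5, "r1": 8},
        "daily_event_venues": {"r0": 2, "r1": 1},
    },
}

# The external influence data covers exactly the categories the claim lists.
assert set(feature_data["external"]) == {
    "entertainment_venues", "financial_institutions", "daily_event_venues"}
```

The per-region crime lists here align with the M time periods of claim 7 (M = 4 in this toy example).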
10. A data computing apparatus, the apparatus comprising:
a model establishing unit, configured to set a prediction model, wherein the prediction model comprises a transverse structure, and each unit of the transverse structure is configured to process feature data of a different time;
a model training unit, configured to train the prediction model according to the feature data corresponding to the predetermined indication event;
and a prediction unit, configured to input the feature data of the region to be predicted into the trained prediction model, and to obtain, through the trained prediction model, the prediction data corresponding to the predetermined indication event.
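The three units of claim 10 could be organized as methods of one class, as in the hedged sketch below; the stand-in "model" (one weight table per time unit, filled directly from the training series) replaces the patent's neural network purely so the structure is runnable:

```python
class DataComputingApparatus:
    """Toy stand-in for the apparatus of claim 10 (not the patented network)."""

    def __init__(self):
        self.model = None      # set by the model-establishing unit
        self.trained = False

    def establish_model(self, num_time_units):
        # Model-establishing unit: a transverse structure whose units each
        # handle feature data for a different time period.
        self.model = [{"period": t, "weights": {}} for t in range(num_time_units)]

    def train(self, feature_data):
        # Model-training unit: fit each transverse unit on its own period.
        for t, unit in enumerate(self.model):
            for region, series in feature_data.items():
                unit["weights"][region] = series[t]
        self.trained = True

    def predict(self, region):
        # Prediction unit: query the trained model for the target region.
        assert self.trained, "model must be trained before prediction"
        return [unit["weights"].get(region, 0) for unit in self.model]

app = DataComputingApparatus()
app.establish_model(num_time_units=3)
app.train({"r0": [1, 2, 3], "r1": [4, 5, 6]})
```

The one-unit-per-time-period structure is the only element taken from the claim; everything inside the units is placeholder logic.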
11. A terminal, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method of any one of claims 1 to 9.
12. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of claims 1 to 9.
CN201811416428.9A 2018-11-26 2018-11-26 Data calculation method and device Pending CN111222666A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811416428.9A CN111222666A (en) 2018-11-26 2018-11-26 Data calculation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811416428.9A CN111222666A (en) 2018-11-26 2018-11-26 Data calculation method and device

Publications (1)

Publication Number Publication Date
CN111222666A true CN111222666A (en) 2020-06-02

Family

ID=70830295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811416428.9A Pending CN111222666A (en) 2018-11-26 2018-11-26 Data calculation method and device

Country Status (1)

Country Link
CN (1) CN111222666A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117527390A (en) * 2023-11-21 2024-02-06 河北师范大学 Network security situation prediction method, system and electronic equipment

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020147623A1 (en) * 2001-03-02 2002-10-10 Rifaat Ismail Ibrahim Method for interactive communication and information processing
CN1380625A * 2002-05-23 2002-11-20 Li Zhaoxiang Mobile object identity recorder, management system and management method
US20040010481A1 (en) * 2001-12-07 2004-01-15 Whitehead Institute For Biomedical Research Time-dependent outcome prediction using neural networks
CN1543622A * 2001-08-10 2004-11-03 System and method for collecting vehicle data and diagnosing the vehicle, and method for automatically setting the vehicle convenience apparatus using smart card
TW201001322A (en) * 2008-06-16 2010-01-01 Chunghwa Telecom Co Ltd Dynamic vehicle auditing system
WO2015030606A2 (en) * 2013-08-26 2015-03-05 Auckland University Of Technology Improved method and system for predicting outcomes based on spatio / spectro-temporal data
CN104662533A (en) * 2012-09-20 2015-05-27 汽车云股份有限公司 Collection and use of captured vehicle data
WO2016145516A1 (en) * 2015-03-13 2016-09-22 Deep Genomics Incorporated System and method for training neural networks
CN106846801A * 2017-02-06 2017-06-13 安徽新华博信息技术股份有限公司 Regional loitering anomaly detection method based on vehicle trajectories
CN107251058A * 2014-12-24 2017-10-13 Locator IP Corp Crime forecast system
US20170358154A1 (en) * 2016-06-08 2017-12-14 Hitachi, Ltd Anomality Candidate Information Analysis Apparatus and Behavior Prediction Device
CN107506368A * 2017-07-04 2017-12-22 青岛海信网络科技股份有限公司 Method and device for determining suspected vehicles in similar cases
CN107516061A * 2016-06-17 2017-12-26 北京市商汤科技开发有限公司 Image classification method and system
CN107633674A * 2017-09-14 2018-01-26 王淑芳 Method and system for eliminating abnormal trajectory points of key commercial vehicles
CN107967323A * 2017-11-24 2018-04-27 泰华智慧产业集团股份有限公司 Method and system for analyzing abnormal traveling vehicles based on big data
CN108280795A * 2018-01-29 2018-07-13 西安鲲博电子科技有限公司 Screening method for abnormal highway green-channel vehicles based on a dynamic database
CN108288109A * 2018-01-11 2018-07-17 安徽优思天成智能科技有限公司 Motor vehicle exhaust concentration prediction method based on LSTM deep spatio-temporal residual networks
CN108345666A * 2018-02-06 2018-07-31 南京航空航天大学 Vehicle abnormal trajectory detection method based on spatio-temporal outliers
CN108710637A * 2018-04-11 2018-10-26 上海交通大学 Real-time taxi abnormal trajectory detection method based on spatio-temporal relationships
CN108805345A * 2018-06-01 2018-11-13 广西师范学院 Spatio-temporal crime risk prediction method based on a deep convolutional neural network model

Similar Documents

Publication Publication Date Title
CN109697852B (en) Urban road congestion degree prediction method based on time sequence traffic events
CN111223301B (en) Traffic flow prediction method based on graph attention convolution network
Zhang et al. Trafficgan: Network-scale deep traffic prediction with generative adversarial nets
Liu et al. Dynamic spatial-temporal representation learning for traffic flow prediction
CN110827544B (en) Short-term traffic flow control method based on graph convolution recurrent neural network
CN114220271B (en) Traffic flow prediction method, equipment and storage medium based on dynamic space-time diagram convolution circulation network
CN109117987B (en) Personalized traffic accident risk prediction recommendation method based on deep learning
CN112257934A (en) Urban people flow prediction method based on space-time dynamic neural network
CN115240425B (en) Traffic prediction method based on multi-scale space-time fusion graph network
CN109830102A Short-term traffic flow forecasting method for complex urban traffic networks
CN110570035B (en) People flow prediction system for simultaneously modeling space-time dependency and daily flow dependency
Shen et al. Research on traffic speed prediction by temporal clustering analysis and convolutional neural network with deformable kernels (May, 2018)
CN115578851A (en) Traffic prediction method based on MGCN
CN112598165B (en) Urban functional area transfer flow prediction method and device based on private car data
CN115204478A (en) Public traffic flow prediction method combining urban interest points and space-time causal relationship
CN114692984A (en) Traffic prediction method based on multi-step coupling graph convolution network
CN112862177B (en) Urban area aggregation degree prediction method, device and medium based on deep neural network
CN115565369B (en) Space-time hypergraph convolution traffic flow prediction method and system based on hypergraph
CN111242395A (en) Method and device for constructing prediction model for OD (origin-destination) data
CN115273466B (en) Monitoring method and system based on flexible lane management and control algorithm
CN116307152A (en) Traffic prediction method for space-time interactive dynamic graph attention network
CN112529284A (en) Private car residence time prediction method, device and medium based on neural network
CN115762147B Traffic flow prediction method based on an adaptive graph attention neural network
CN111222666A (en) Data calculation method and device
CN114781696B (en) Model-free accident influence range prediction method for urban road network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220211

Address after: 100176 floor 18, building 8, courtyard 10, KEGU 1st Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing (Yizhuang group, high-end industrial area of Beijing Pilot Free Trade Zone)

Applicant after: Jinzhuan Xinke Co.,Ltd.

Address before: 518057 Ministry of justice, Zhongxing building, South Science and technology road, Nanshan District hi tech Industrial Park, Shenzhen, Guangdong

Applicant before: ZTE Corp.