CN112446516A - Travel prediction method and device - Google Patents
- Publication number
- CN112446516A (application CN201910796838.9A)
- Authority
- CN
- China
- Prior art keywords
- lstm
- data set
- original data
- training
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/26—Government or public services
Abstract
An embodiment of the invention provides a travel prediction method and a travel prediction device, wherein the method comprises the following steps: counting the number of trips in one or more traffic cells to obtain an original data set; determining a training set and a test set from the original data set; constructing a multi-layer LSTM; training the multi-layer LSTM on the training set and the test set to determine its model parameters; and performing travel prediction with the trained multi-layer LSTM. In this embodiment, trips in the traffic cells are counted to obtain an original data set, a multi-layer LSTM is constructed and trained on that data set, and travel prediction is performed with the trained multi-layer LSTM. Applying neural network techniques to traffic travel prediction improves prediction accuracy and meets the demands of urban travel forecasting.
Description
Technical Field
Embodiments of the invention relate to the technical field of traffic, and in particular to a travel prediction method and device.
Background
The core problem of urban traffic planning is traffic prediction, and the basis of traffic prediction is travel prediction. Travel prediction means forecasting, under given conditions, the total travel demand that each traffic cell may generate. Reliable travel prediction not only provides a reference for building an effective scheduling system, but also provides useful route-choice information for urban residents.
The traditional travel prediction method is built around the classic four-step model: trip generation, trip distribution, modal split, and traffic assignment.
Disclosure of Invention
An embodiment of the invention provides a travel prediction method and device to address the low accuracy of existing travel prediction methods.
According to a first aspect of the embodiments of the present invention, there is provided a travel prediction method, including: counting the number of trips of one or more traffic districts to obtain an original data set; determining a training set and a testing set according to the original data set; constructing a multilayer LSTM; training the multilayer LSTM through the training set and the testing set to determine model parameters of the multilayer LSTM; travel prediction is performed by the trained multi-layer LSTM.
Optionally, the counting the number of trips of one or more traffic cells to obtain an original data set includes: mapping a trip starting point and a trip end point in the traffic cell; determining time sequences of leaving the traffic cell and arriving at the traffic cell in each unit time according to a preset time unit; determining the time series as the original data set.
Optionally, the determining a training set and a test set according to the raw data set includes: and dividing the original data set into the training set and the test set according to a preset proportion, wherein the preset proportion is the proportion of the training set in the original data set.
Optionally, before the determining a training set and a test set from the raw data set, the method further comprises: converting the format of the original data set into a sample characteristic format and a sample label format according to a preset time interval; wherein the sample feature format represents input variables and the sample label format represents output variables.
Optionally, the preset ratio is 0.7 to 0.8.
Optionally, the constructing the multilayer LSTM comprises: when a traffic cell is predicted, a plurality of layers of LSTMs are constructed; when multiple traffic cells are predicted, a multi-layered encoding-decoding LSTM is constructed.
Optionally, the model parameters of the multi-layer LSTM are connection weight parameters between neurons and/or neural network layers.
Optionally, the loss function in the multi-layer LSTM training process is a mean square error function.
Optionally, the model parameter adjustment rule in the multi-layer LSTM training process is an adaptive moment estimation Adam algorithm.
According to a second aspect of the embodiments of the present invention, there is provided a travel prediction apparatus, including: the statistical module is used for counting the trip quantity of one or more traffic districts to obtain an original data set; the first determining module is used for determining a training set and a testing set according to the original data set; the building module is used for building a plurality of layers of LSTMs; the second determining module is used for training the multilayer LSTM through the training set and the testing set to determine model parameters of the multilayer LSTM; and the prediction module is used for predicting the trip through the trained multilayer LSTM.
Optionally, the statistics module includes: the mapping unit is used for mapping a trip starting point and a trip end point in the traffic cell; the first determining unit is used for determining time sequences of leaving the traffic cell and arriving at the traffic cell in each unit time according to a preset time unit; a second determining unit for determining the time series as the original data set.
Optionally, the first determining module includes: and the dividing unit is used for dividing the original data set into the training set and the test set according to a preset proportion, wherein the preset proportion is the proportion of the training set in the original data set.
Optionally, the first determining module further includes: the conversion unit is used for converting the format of the original data set into a sample characteristic format and a sample label format according to a preset time interval; wherein the sample feature format represents input variables and the sample label format represents output variables.
Optionally, the preset ratio is 0.7 to 0.8.
Optionally, the building module comprises: the first construction unit is used for constructing a plurality of layers of LSTMs when a traffic cell is predicted; and a second construction unit for constructing a multi-layer encoding-decoding LSTM when predicting a plurality of traffic cells.
Optionally, the model parameters of the multi-layer LSTM are connection weight parameters between neurons and/or neural network layers.
Optionally, the loss function in the multi-layer LSTM training process is a mean square error function.
Optionally, the model parameter adjustment rule in the multi-layer LSTM training process is Adam algorithm.
According to a third aspect of the embodiments of the present invention, there is provided a communication device, including a processor, a memory, and a program stored on the memory and executable on the processor, where the program, when executed by the processor, implements the steps of the travel prediction method according to the first aspect.
According to a fourth aspect of the embodiments of the present invention, there is provided a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the travel prediction method according to the first aspect.
In this embodiment of the invention, trips in the traffic cells are counted to obtain an original data set, a multi-layer LSTM is constructed and trained on that data set, and travel prediction is performed with the trained multi-layer LSTM. Applying neural network techniques to traffic travel prediction improves prediction accuracy and meets the demands of urban travel forecasting.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a travel prediction method according to an embodiment of the present invention;
FIG. 2a is a schematic structural diagram of an LSTM neuron according to an embodiment of the present invention;
FIG. 2b is a schematic structural diagram of a multi-layer encoding-decoding LSTM according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a trip prediction apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a travel prediction method, including the following steps:
step 101: counting the number of trips of one or more traffic districts to obtain an original data set;
in the embodiment of the invention, the traffic cell is taken as the basic unit of travel prediction: the number of trips in each traffic cell is counted, and these counts are taken as the original data set.
Specifically, the statistical raw data set comprises the following sub-steps:
(1) mapping a trip starting point and a trip end point in a traffic cell;
Trip starting points and trip end points are mapped into the traffic cell: a trip whose starting point falls in the cell represents a departure from the cell, and a trip whose end point falls in the cell represents an arrival at the cell.
(2) Determining time sequences of leaving a traffic cell and arriving at the traffic cell in each unit time according to a preset time unit;
the preset time unit is a time unit for counting the trip amount, and the preset time unit may be one hour, or half an hour, and this is not specifically limited in the embodiment of the present invention.
And counting the number of trips leaving the traffic cell and arriving at the traffic cell in each unit time according to the determined preset time unit, and combining the number of trips and corresponding time to obtain a time sequence of the number of trips of the traffic cell.
(3) Determining the time series as an original data set;
the time series is determined as the raw data set for subsequent use.
Step 102: determining a training set and a testing set according to an original data set;
in the embodiment of the invention, the original data set is divided into the training set and the test set, wherein the training set is used for training the prediction model subsequently, and the test set is used for testing the trained prediction model so as to test the accuracy of the prediction model.
Specifically, the original data set is divided into a training set and a test set according to a preset proportion, and the preset proportion is the proportion of the training set in the original data set.
Optionally, the predetermined ratio is 0.7-0.8.
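A minimal sketch of the proportional split described above, assuming a simple list-based series and a ratio of 0.75 within the suggested 0.7 to 0.8 range:

```python
def split_series(series, ratio=0.75):
    """Split a time series into training and test portions;
    `ratio` is the share of the original data used for training."""
    cut = int(len(series) * ratio)
    return series[:cut], series[cut:]

train, test = split_series(list(range(20)), ratio=0.75)
```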
Optionally, before the training set and the test set are divided, the format of the original data set is converted into a sample feature format and a sample label format according to a preset time interval, wherein the sample feature format represents an input variable, and the sample label format represents an output variable.
The preset time interval may be selected from 1 to 10 hours, and the specific setting of the preset time interval is not limited in the embodiment of the present invention, and a person skilled in the art may select an appropriate value based on the prediction result.
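The format conversion can be sketched as a sliding window over the trip-count series; using the window length to stand in for the preset time interval is an illustrative choice:

```python
def to_supervised(series, window):
    """Convert a raw trip-count series into (sample feature, sample label)
    pairs: each sample's features (input variables) are `window` consecutive
    counts, and its label (output variable) is the count that follows them."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return X, y

X, y = to_supervised([10, 12, 9, 14, 11, 13], window=3)
```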
Step 103: constructing a multilayer LSTM;
in the embodiment of the invention, a neural network is applied to traffic travel prediction. A neural network is a mathematical model that simulates the structure and function of biological neural networks and is mainly used to estimate or approximate functions. The recurrent neural network, an important branch of neural network models, is the primary method for processing sequence data.
Compared with ordinary recurrent-network neurons, the neurons of the Long Short-Term Memory (LSTM) network contain a forget gate, an input gate and an output gate. This gated internal structure effectively avoids vanishing and exploding gradients and makes it possible to process sequences spanning long time intervals.
Referring to FIG. 2a, the structure of an LSTM neuron is shown, where Xt is the input of the LSTM cell at time t and ht is its output. In the figure, tanh denotes the hyperbolic tangent function and σ denotes the Sigmoid function, also known as the logistic function or S-shaped growth curve.
Each gate in the LSTM neuron consists of a Sigmoid function and a pointwise multiplication. The Sigmoid outputs a value between 0 and 1 that describes how much of each component is allowed through: 0 means discard completely, and 1 means pass completely.
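The gate arithmetic of a single LSTM cell can be sketched in NumPy; the weight shapes and random initialization below are purely illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM cell step. W, U and b hold the parameters of the
    forget (f), input (i), output (o) and candidate (g) transforms."""
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate
    g = np.tanh(W["g"] @ x_t + U["g"] @ h_prev + b["g"])   # candidate state
    c_t = f * c_prev + i * g          # new cell state
    h_t = o * np.tanh(c_t)            # new hidden state / output
    return h_t, c_t

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = {k: rng.normal(size=(n_hid, n_in)) for k in "fiog"}
U = {k: rng.normal(size=(n_hid, n_hid)) for k in "fiog"}
b = {k: np.zeros(n_hid) for k in "fiog"}
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, b)
```

Because the output gate o lies in (0, 1) and tanh maps into (-1, 1), the hidden state stays bounded, which is part of how the gating avoids exploding activations.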
The multi-layer LSTM can be constructed with a high-level neural network library; for example, it can be built with the Keras library.
Further, in the multi-layer LSTM the output of each layer serves as the input of the next layer, and each layer contains multiple neurons; for example, 300 neurons per layer.
Further, when prediction is made for one traffic cell, a multi-layer LSTM is constructed, for example, a three-layer LSTM is constructed. When prediction is made for multiple traffic cells, a multi-layer encoding-decoding LSTM is constructed, for example, a three-layer encoding-decoding LSTM is constructed.
It is understood that the number of layers of the multi-layer LSTM may be any number, and those skilled in the art can set the number of layers of the multi-layer LSTM according to actual needs.
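A possible construction of a three-layer LSTM with the Keras library, as suggested above; the 300-neuron layers, the window length and the single-output head are illustrative assumptions:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

window = 3  # length of each input sample (the preset time interval)
model = keras.Sequential([
    keras.Input(shape=(window, 1)),           # one trip count per time step
    layers.LSTM(300, return_sequences=True),  # layer 1 feeds layer 2
    layers.LSTM(300, return_sequences=True),  # layer 2 feeds layer 3
    layers.LSTM(300),                         # layer 3 emits its final state
    layers.Dense(1),                          # predicted trip count
])
model.compile(loss="mse", optimizer="adam")
out = model.predict(np.zeros((2, window, 1)), verbose=0)
```

`return_sequences=True` on the lower layers makes each of them emit its full output sequence, which is what the next LSTM layer consumes.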
Referring to fig. 2b, a schematic diagram of a multi-layer encoding-decoding LSTM structure is shown, wherein the structure of each neuron can refer to the structure of the LSTM neuron shown in fig. 2 a.
Encoding refers to converting the input sequence into a fixed-length vector; decoding refers to converting that fixed-length vector into an output sequence. Both the encoder and the decoder in FIG. 2b are LSTMs.
The above multilayer encoding-decoding LSTM has the following features:
(1) in the LSTM, the hidden state at the current time step is determined by the hidden state at the previous step and the current input;
(2) after the hidden states of all time steps have been obtained, their information is aggregated to generate the final intermediate vector C.
In FIG. 2b, the encoder's last hidden state is taken as the intermediate vector C. Since the LSTM takes the input information of every previous step into account, the intermediate vector C can contain information about the entire input sequence.
(3) The decoding stage can be seen as the inverse of encoding. During decoding, the next output YT is predicted from the intermediate vector C and the previously generated outputs Y1, Y2, ..., YT-1. In FIG. 2b, the intermediate vector C output by the encoder acts only on the first time step of the decoder;
the encoder's final state, i.e. the fixed-length vector described above, serves as the initial state of the decoder. In the decoder, the output at each time step becomes the input of the next step, and so on, until the decoder predicts a special end-of-sequence symbol.
Step 104: training the multilayer LSTM through a training set and a testing set to determine model parameters of the multilayer LSTM;
in the embodiment of the invention, the model parameters of the multilayer LSTM are connection weight parameters among the neurons and/or the neural network layers, the multilayer LSTM is trained through multiple groups of data in a training set, the model parameters are gradually adjusted, and the final model parameters are determined when the training is finished.
Optionally, parameters such as the batch size and the number of training iterations (epochs) are set during training. The batch size is the number of samples selected per step and is usually a power of 2; common values are 64, 128 and 256, where 256 may be chosen when the network is small and 64 when it is large. The number of epochs is the number of passes over all samples; for example, 20.
It should be noted that the values of the above parameters are only examples, and those skilled in the art can determine the values of the parameters according to actual requirements.
Optionally, mean squared error (MSE) is chosen as the loss function during multi-layer LSTM training.
Optionally, Adaptive moment estimation (Adam) is used as a model parameter adjustment rule in the multi-layer LSTM training process.
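The MSE loss and a single Adam parameter update can be sketched as follows; the learning rate and moment coefficients are the commonly used defaults, given here only for illustration:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error loss."""
    return np.mean((y_true - y_pred) ** 2)

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam parameter update with bias-corrected moment estimates."""
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

loss = mse(np.array([1.0, 2.0]), np.array([1.5, 2.5]))
theta, m, v = adam_step(np.zeros(2), np.array([0.5, -0.5]),
                        np.zeros(2), np.zeros(2), t=1)
```

Each Adam step moves the parameters opposite to the sign of the gradient at a roughly constant per-parameter rate, which is why it is robust to the scale of the trip-count data.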
Step 105: travel prediction is performed by the trained multi-layer LSTM.
In the embodiment of the invention, after the training of the multi-layer LSTM is finished, the trained multi-layer LSTM is used for travel prediction, so that the prediction precision is ensured.
In the embodiment of the invention, the travel quantity of a traffic cell is counted to obtain an original data set, the original data set is divided into a training set and a testing set, a plurality of layers of LSTMs are constructed and trained through the training set and the testing set, model parameters of the plurality of layers of LSTMs are determined, and the travel prediction is carried out through the trained plurality of layers of LSTMs, so that the prediction precision of travel behaviors is improved.
Referring to fig. 3, an embodiment of the present invention provides a travel prediction apparatus 300, including:
a counting module 301, configured to count the number of trips in one or more traffic cells to obtain an original data set;
a first determining module 302, configured to determine a training set and a testing set according to the original data set;
a building module 303 for building a multilayer LSTM;
a second determining module 304, configured to train the multi-layer LSTM through the training set and the test set, and determine model parameters of the multi-layer LSTM;
and a prediction module 305, configured to perform travel prediction through the trained multi-layer LSTM.
Optionally, the statistic module 301 includes:
a mapping unit 3011, configured to map a trip starting point and a trip ending point in the traffic cell;
a first determining unit 3012, configured to determine, according to a preset time unit, a time sequence of leaving the traffic cell and arriving at the traffic cell in each unit time;
a second determining unit 3013, configured to determine the time series as the original data set.
Optionally, the first determining module 302 includes:
a dividing unit 3021, configured to divide the original data set into the training set and the test set according to a preset ratio, where the preset ratio is a ratio of the training set in the original data set.
Optionally, the first determining module 302 further includes:
a conversion unit 3022, configured to convert the format of the original data set into a sample feature format and a sample label format according to a preset time interval;
wherein the sample feature format represents input variables and the sample label format represents output variables.
Optionally, the preset ratio is 0.7 to 0.8.
Optionally, the building module 303 includes:
a first constructing unit 3031, configured to construct a plurality of layers of LSTM when predicting a traffic cell;
a second constructing unit 3032, configured to construct a multi-layer encoding-decoding LSTM when predicting a plurality of traffic cells.
Optionally, the model parameters of the multi-layer LSTM are connection weight parameters between neurons and/or neural network layers.
Optionally, the loss function in the multi-layer LSTM training process is a mean square error function.
Optionally, the model parameter adjustment rule in the multi-layer LSTM training process is Adam algorithm.
In the embodiment of the invention, the travel quantity of a traffic cell is counted to obtain an original data set, the original data set is divided into a training set and a testing set, a plurality of layers of LSTMs are constructed and trained through the training set and the testing set, model parameters of the plurality of layers of LSTMs are determined, and the travel prediction is carried out through the trained plurality of layers of LSTMs, so that the prediction precision of travel behaviors is improved.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (18)
1. A travel prediction method, the method comprising:
counting the number of trips of one or more traffic districts to obtain an original data set;
determining a training set and a testing set according to the original data set;
constructing a multi-layer long-time memory neural network LSTM;
training the multilayer LSTM through the training set and the testing set to determine model parameters of the multilayer LSTM;
travel prediction is performed by the trained multi-layer LSTM.
2. The method of claim 1, wherein the counting the number of trips of one or more traffic cells to obtain an original data set comprises:
mapping a trip starting point and a trip end point in the traffic cell;
determining time sequences of leaving the traffic cell and arriving at the traffic cell in each unit time according to a preset time unit;
determining the time series as the original data set.
3. The method of claim 1, wherein determining a training set and a test set from the raw data set comprises:
and dividing the original data set into the training set and the test set according to a preset proportion, wherein the preset proportion is the proportion of the training set in the original data set.
4. The method of claim 3, wherein prior to said determining a training set and a test set from said raw data set, said method further comprises:
converting the format of the original data set into a sample characteristic format and a sample label format according to a preset time interval;
wherein the sample feature format represents input variables and the sample label format represents output variables.
5. The method according to claim 3, wherein the preset ratio is 0.7 to 0.8.
6. The method of claim 1, wherein constructing the multilayer LSTM comprises:
when a traffic cell is predicted, a plurality of layers of LSTMs are constructed;
when multiple traffic cells are predicted, a multi-layered encoding-decoding LSTM is constructed.
7. The method of claim 1 where the model parameters of the multi-layer LSTM are connection weight parameters between neurons and/or neural network layers.
8. The method of claim 1, where the loss function in the multi-layer LSTM training process is a mean square error function.
9. The method of claim 1, wherein the model parameter adjustment rule in the multi-layer LSTM training process is an adaptive moment estimation Adam algorithm.
10. A travel prediction apparatus, comprising:
the statistical module is used for counting the trip quantity of one or more traffic districts to obtain an original data set;
the first determining module is used for determining a training set and a testing set according to the original data set;
the building module is used for building a plurality of layers of LSTMs;
the second determining module is used for training the multilayer LSTM through the training set and the testing set to determine model parameters of the multilayer LSTM;
and the prediction module is used for predicting the trip through the trained multilayer LSTM.
11. The apparatus of claim 10, wherein the statistics module comprises:
the mapping unit is used for mapping a trip starting point and a trip end point in the traffic cell;
the first determining unit is used for determining time sequences of leaving the traffic cell and arriving at the traffic cell in each unit time according to a preset time unit;
a second determining unit for determining the time series as the original data set.
12. The apparatus of claim 10, wherein the first determining module comprises:
and the dividing unit is used for dividing the original data set into the training set and the test set according to a preset proportion, wherein the preset proportion is the proportion of the training set in the original data set.
13. The apparatus of claim 12, wherein the first determining module further comprises:
the conversion unit is used for converting the format of the original data set into a sample characteristic format and a sample label format according to a preset time interval;
wherein the sample feature format represents input variables and the sample label format represents output variables.
14. The apparatus of claim 12, wherein the predetermined ratio is 0.7 to 0.8.
15. The apparatus of claim 10, wherein the building module comprises:
the first construction unit is used for constructing a plurality of layers of LSTMs when a traffic cell is predicted;
and a second construction unit for constructing a multi-layer encoding-decoding LSTM when predicting a plurality of traffic cells.
16. The apparatus of claim 10, wherein the model parameters of the multilayer LSTM are connection weight parameters between neurons and/or neural network layers.
17. The apparatus of claim 10, wherein the loss function in the multilayer LSTM training process is a mean square error function.
18. The apparatus of claim 10, wherein the model parameter adjustment rule in the multilayer LSTM training process is the Adam algorithm.
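Claims 17 and 18 pair a mean-square-error loss with Adam updates. A self-contained sketch on a hypothetical scalar regression follows; the toy data, learning rate, and step count are assumptions made only for illustration:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: moving averages of the gradient (m) and its
    square (v), bias-corrected, drive an adaptive step size."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy problem: fit scalar w so that w * x approximates y under MSE loss.
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    grad = np.mean(2 * (w * x - y) * x)   # d/dw of the mean squared error
    w, m, v = adam_step(w, grad, m, v, t, lr=0.05)
mse = np.mean((w * x - y) ** 2)
```

In the apparatus, the same rule would update the connection weights of claim 16 rather than a single scalar.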
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910796838.9A CN112446516A (en) | 2019-08-27 | 2019-08-27 | Travel prediction method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112446516A true CN112446516A (en) | 2021-03-05 |
Family
ID=74741510
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910796838.9A Pending CN112446516A (en) | 2019-08-27 | 2019-08-27 | Travel prediction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112446516A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108734338A (en) * | 2018-04-24 | 2018-11-02 | 阿里巴巴集团控股有限公司 | Credit risk forecast method and device based on LSTM models |
CN109243172A (en) * | 2018-07-25 | 2019-01-18 | 华南理工大学 | Traffic flow forecasting method based on genetic algorithm optimization LSTM neural network |
CN110070713A (en) * | 2019-04-15 | 2019-07-30 | 浙江工业大学 | A kind of traffic flow forecasting method based on two-way nested-grid ocean LSTM neural network |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113837383A (en) * | 2021-10-18 | 2021-12-24 | 中国联合网络通信集团有限公司 | Model training method and device, electronic equipment and storage medium |
CN113837383B (en) * | 2021-10-18 | 2023-06-23 | 中国联合网络通信集团有限公司 | Model training method and device, electronic equipment and storage medium |
Similar Documents
Publication | Title
---|---
CN111784041B (en) | Wind power prediction method and system based on graph convolution neural network
CN111260030B (en) | A-TCN-based power load prediction method and device, computer equipment and storage medium
CN110210644A (en) | Traffic flow forecasting method based on deep neural network integration
CN110516833A (en) | Method for predicting road traffic state based on Bi-LSTM with feature extraction
CN111861013B (en) | Power load prediction method and device
CN109146156B (en) | Method for predicting charging amount of charging pile system
CN112001556B (en) | Reservoir downstream water level prediction method based on deep learning model
CN111027772A (en) | Multi-factor short-term load prediction method based on PCA-DBILSTM
CN110633859B (en) | Hydrologic sequence prediction method integrating two-stage decomposition
CN113852432A (en) | Spectrum prediction sensing method based on RCS-GRU model
CN113361803A (en) | Ultra-short-term photovoltaic power prediction method based on generative adversarial network
CN111915081A (en) | Peak-sensitive travel demand prediction method based on deep learning
CN113947182A (en) | Traffic flow prediction model construction method based on two-stage stacked graph convolution network
CN116050595A (en) | Runoff prediction method coupling attention and decomposition mechanisms
CN116227716A (en) | Multi-factor energy demand prediction method and system based on Stacking
CN115577748A (en) | Dual-channel wind power prediction method with squeeze-and-excitation attention mechanism
CN110807508A (en) | Bus peak load prediction method considering complex meteorological influence
CN117172390B (en) | Charging amount prediction method and terminal based on scene division
CN110007371A (en) | Wind speed forecasting method and device
CN117833231A (en) | Load prediction method based on Bi-LSTM and dual-attention mechanism
CN116227738B (en) | Method and system for predicting traffic interval of power grid customer service
CN112446516A (en) | Travel prediction method and device
CN111783688B (en) | Remote sensing image scene classification method based on convolutional neural network
CN117409578A (en) | Traffic flow prediction method combining empirical mode decomposition and deep learning
CN116822722A (en) | Water level prediction method, system, device, electronic equipment and medium
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2021-03-05 |