CN107886159A - Virtual site - Google Patents

Virtual site

Info

Publication number
CN107886159A
CN107886159A CN201710751524.8A
Authority
CN
China
Prior art keywords
layer
value
neuron
output
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710751524.8A
Other languages
Chinese (zh)
Inventor
刘纯阳
鲍士要
张国涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Enjoy Ride Electric Vehicle Service Co Ltd
Original Assignee
Shanghai Enjoy Ride Electric Vehicle Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Enjoy Ride Electric Vehicle Service Co Ltd filed Critical Shanghai Enjoy Ride Electric Vehicle Service Co Ltd
Priority to CN201710751524.8A priority Critical patent/CN107886159A/en
Publication of CN107886159A publication Critical patent/CN107886159A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/951Indexing; Web crawling techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance

Abstract

The invention discloses a virtual site: 1) a data crawler system crawls pedestrian flow data at subway stations and bus stops; 2) the crawled data are cleaned, filtered and converted, and the required data are extracted; 3) prediction is performed by a dual-BP neural network combination forecasting model. In the virtual site provided by the invention, the crawler data are predicted with a neural network, so no stationarity assumption about the time series of the data is needed; the mapping relations between the data are learned solely by training on sample data, thereby establishing an accurate data prediction model.

Description

Virtual site
Technical field
The invention belongs to the field of electric power maintenance and repair, and in particular relates to a virtual site.
Background technology
With the flourishing of power systems, the number and coverage of public power facilities keep growing, and power failures are also on the rise. In a traditional power-failure repair process, workers need to place signal lamps or warning boards at the construction site in time, and some job sites even require fences for additional warning. However, during electric power emergency repair and daily construction, the lamps or boards are often placed in inconspicuous positions, so non-workers stray into the site, the construction process is disturbed, and safety hazards arise. To overcome these shortcomings of the prior art, the present invention provides a virtual site that is highly distinguishable and easy to deploy.
The content of the invention
To solve the above technical problem, the present invention provides a virtual site, the specific technical scheme of which is:
1) a data crawler system crawls pedestrian flow data at subway stations and bus stops;
2) the crawled data are cleaned, filtered and converted, and the required data are extracted;
3) prediction is performed by a dual-BP neural network combination forecasting model;
A BP neural network is also called the error back-propagation algorithm. Its basic idea is: an input signal enters through the input layer, is computed by the hidden layers and output by the output layer; the output value is compared with the label value, and if there is an error, the error is propagated backwards from the output layer towards the input layer, during which process the neuron weights are adjusted with the gradient-descent algorithm;
Mathematical tool of the algorithm:
The most crucial mathematical tool in the BP algorithm is the chain rule of differentiation from calculus.
Explanation: if z is a differentiable function of y, and y is a differentiable function of x, then $\frac{dz}{dx} = \frac{dz}{dy}\cdot\frac{dy}{dx}$.
In the above virtual site:
In the BP algorithm, except for the neurons of the input layer, every neuron has an input value z obtained by a weighted sum, and an output value a obtained after z passes through the nonlinear Sigmoid function; the formulas relating them are
$z_j^l = \sum_i w_{ij}^l\,a_i^{l-1} + b_j^l, \qquad a_j^l = \sigma(z_j^l)$
where the superscript l and subscript j denote the j-th neuron of layer l;
the subscript ij denotes the connection from the i-th neuron to the j-th neuron;
w denotes a weight;
b denotes a bias.
The reason for the activation function is that the expressive power of a linear model is insufficient; the Sigmoid function is added to introduce a nonlinear factor and produce the neuron's output value.
In the above virtual site:
The BP algorithm executes as follows:
After the number of layers of the neural network, the number of neurons per layer and the learning rate η have been set by hand, the BP algorithm first randomly initialises the weight of every connection and every bias; then, for each input x and output y in the training set, the BP algorithm first performs forward propagation to obtain a predicted value, and then, according to the error between the actual value and the predicted value, performs backward feedback to update the weight of every connection and the biases of every layer in the network; this process is repeated as long as no stop condition has been reached;
The stop condition can be any of the following three:
1) the weight updates fall below some threshold;
2) the prediction error rate falls below some threshold;
3) a preset number of iterations is reached.
In the above virtual site:
There are 200,000 neurons in the input layer, and the output layer has 10 neurons representing the digits 0~9; the value of each output neuron lies in 0~1 and represents the probability of that digit.
Whenever a batch of samples is input, the neural network performs forward propagation layer by layer to compute the values of the output-layer neurons, and predicts the crowd flow according to which output neuron has the maximum value.
Then, from the values of the output neurons, the error between the predicted value and the actual value is computed, and backward feedback updates the weight of every connection and the bias of every neuron in the network.
Forward propagation (Feed-Forward)
The process of computing all neuron output values layer by layer, from the input layer through the hidden layers to the output layer.
Backward feedback (Back Propagation)
Because the values of the output layer differ from the real values, the error between the predicted and actual values is measured by the mean square error
$E = \frac{1}{2}\sum_j (y_j - a_j)^2$
The goal of backward feedback is to make the value of the function E as small as possible. The output value of each neuron is determined by the weights of its incoming connections and the corresponding biases of the layer; therefore, to make the error function reach its minimum, we adjust the values of w and b so that the value of the error function is minimal;
In the above virtual site:
Update formulas for the weights and biases:
The update amounts of w and b are obtained by taking the partial derivatives of the objective function E with respect to w and b; the derivation for the partial derivative with respect to w is taken below.
$\Delta w_{hj} = -\eta\,\frac{\partial E}{\partial w_{hj}}$
where η is the learning rate, usually 0.1~0.3, which can be understood as the step taken along each gradient. Note that the value of $w_{hj}$ first affects the input value $z_j$ of the j-th output-layer neuron and then the output value $a_j$, so by the chain rule:
$\frac{\partial E}{\partial w_{hj}} = \frac{\partial E}{\partial a_j}\cdot\frac{\partial a_j}{\partial z_j}\cdot\frac{\partial z_j}{\partial w_{hj}}$
From the definition of the neuron output value, $a_j = \sigma(z_j)$, and the derivative of the Sigmoid is
$f'(x) = f(x)(1 - f(x))$
So, writing $\delta_j = (a_j - y_j)\,a_j(1 - a_j)$, the update amount of the weight w is
$\Delta w_{hj} = -\eta\,\delta_j\,a_h$
and the update amount of b is
$\Delta b_j = -\eta\,\delta_j$
But these two formulas can only update the weights of the connections between the output layer and the preceding layer and the biases of the output layer, because the δ values depend on the actual value y, and only the actual values of the output layer are known, not the true values of each hidden layer, so the δ values of the hidden layers cannot be computed this way; the following formula is therefore used:
$\delta_i^l = a_i^l\,(1 - a_i^l)\sum_j w_{ij}^{l+1}\,\delta_j^{l+1}$
that is, the δ values of a layer can be computed from the weights and δ values of the next layer; by repeatedly applying this formula, all the weights and biases of the hidden layers can be updated.
In the above virtual site:
The i-th neuron of layer l is connected to all neurons of layer l+1, and δ can be expanded as
$\delta_i^l = \frac{\partial E}{\partial z_i^l} = \sum_{j=1}^{n} \frac{\partial E}{\partial z_j^{l+1}}\cdot\frac{\partial z_j^{l+1}}{\partial z_i^l}$
that is, E can be regarded as a function of the input values z of all the neurons of layer l+1; the n in the formula above denotes the number of neurons of layer l+1, and after simplification the formula described above is obtained.
Compared with the prior art, the present invention has the following advantages:
1) the crawler data are predicted with a neural network, so no stationarity assumption about the time series of the data is needed; the mapping relations between the data are learned solely by training on sample data, thereby establishing an accurate data prediction model.
2) the dual-BP neural network combination forecasting model improves prediction accuracy without constraining the model structure; it can integrate the time-series information provided by the recurrent neural network forecasting method and the spatial-mapping information provided by the time-delay neural network forecasting method, thereby obtaining the best prediction effect. Taking the application of the forecast model to crawler data as an example, the computed results show that the dual-BP neural network combination forecasting model has higher prediction accuracy than a recurrent or time-delay neural network used alone.
3) the topology of the neural network affects the accuracy of the forecast model; appropriately enlarging the input sample of historical data lets the network obtain more information about the sequence itself, making the neural network's prediction more accurate. When performing data prediction, the numbers of training and test samples are best kept around 100~200 and around 100, respectively.
Brief description of the drawings
Fig. 1 is the data prediction flow chart.
Fig. 2 is the nonlinear combination forecasting model.
Fig. 3 is the dual-BP neural network combination forecasting model.
Fig. 4 is the structure of the BP algorithm.
Fig. 5 is a schematic diagram of updating all hidden-layer weights and biases.
Fig. 6 is a schematic diagram of the check sample numbers.
Fig. 7 is the graph of the Sigmoid function.
Embodiment
The invention is further described below with reference to the accompanying drawings and embodiments.
Regarding the accompanying drawings:
Data prediction flow chart (Fig. 1):
Explanation:
Data crawler system: crawls pedestrian flow data at subway stations and bus stops;
Clean data: the data are cleaned, filtered and converted, and the required data are extracted;
Dual-BP neural network combination forecasting model: models the cleaned data and makes predictions;
Application program: the subway or bus stations with large pedestrian flow are predicted and handed to the application program, which decides the position of the virtual site;
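The clean-data step above can be sketched as a small filter, convert and extract pass over the crawled records. The record layout and field names used here (`station`, `count`) are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the clean-data step: filter out malformed crawled
# records, convert the count field to an integer, and extract only the
# fields we need. Field names are assumptions for illustration.

def clean_records(raw_records):
    cleaned = []
    for rec in raw_records:
        if "station" not in rec or "count" not in rec:
            continue                      # filter: drop incomplete rows
        try:
            count = int(rec["count"])     # convert: string -> int
        except ValueError:
            continue                      # filter: drop unparsable counts
        if count < 0:
            continue                      # filter: drop impossible values
        cleaned.append({"station": rec["station"], "count": count})  # extract
    return cleaned

raw = [{"station": "A", "count": "120"},
       {"station": "B", "count": "oops"},
       {"count": "5"},
       {"station": "C", "count": "-3"}]
print(clean_records(raw))  # -> [{'station': 'A', 'count': 120}]
```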
Nonlinear combination forecasting model (Fig. 2):
Explanation:
Its mathematical expression is $y = f(x_1, x_2, \ldots, x_i, \ldots, x_n)$, where
y denotes the output of the neural-network combiner, i.e. the predicted value of the combination forecasting model, namely the region predicted to have large pedestrian flow;
$x_i$ denotes the predicted value of the i-th forecast model;
$f(\cdot)$ denotes the nonlinear mapping function of the neural network.
Dual-BP neural network combination forecasting model (Fig. 3):
Explanation:
The first stage consists of a recurrent neural network and a time-delay neural network: the recurrent neural network model uses the crawler data as input, while the time-delay neural network model uses operating parameters related to the prediction parameters as input;
The second stage is the neural-network combiner, which optimally combines the two predictions of the first stage, so as to make full use of the historical variation trend of the measured data and the mapping-principle information of the related parameters, improving prediction accuracy.
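As a minimal sketch of the second-stage combiner $y = f(x_1, x_2)$: a one-hidden-layer sigmoid network maps the two first-stage predictions to one combined forecast. The weights below are illustrative placeholders, not trained values from the patent.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def combine(x1, x2, w_hidden, b_hidden, w_out, b_out):
    """Nonlinear combiner: hidden sigmoid layer over (x1, x2), linear output."""
    h = [sigmoid(w[0] * x1 + w[1] * x2 + b) for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * hi for wo, hi in zip(w_out, h)) + b_out

# x1, x2 stand for the recurrent-model and time-delay-model predictions.
y = combine(0.0, 0.0, w_hidden=[[1.0, 1.0]], b_hidden=[0.0], w_out=[1.0], b_out=0.0)
print(y)  # sigmoid(0) = 0.5
```

In practice the combiner's weights would themselves be trained with the BP algorithm described below, using the two first-stage predictions as inputs.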
Analysis method
The key problem of this product is choosing the position of the virtual site within the white lines permitted by regulation.
Method
From a big-data perspective, the data source is the data crawled by the company, including subway station and bus stop data; the crowd flow is predicted with a BP neural network so that a suitable position can be chosen to establish the virtual site.
1) BP neural network
A BP neural network is also called the error back-propagation (error Back Propagation) algorithm. Its basic idea is: an input signal enters through the input layer, is computed by the hidden layers and output by the output layer; the output value is compared with the label value, and if there is an error, the error is propagated backwards from the output layer towards the input layer, during which process the neuron weights are adjusted with the gradient-descent algorithm.
① Mathematical tool of the algorithm
The most crucial mathematical tool in the BP algorithm is the chain rule of differentiation from calculus.
Explanation: if z is a differentiable function of y, and y is a differentiable function of x, then $\frac{dz}{dx} = \frac{dz}{dy}\cdot\frac{dy}{dx}$.
Example:
Find the derivative of the function $f(x) = (x^2 + 1)^3$. Let
$g(x) = x^2 + 1$ and $h(y) = y^3$, so that $f(x) = h(g(x))$. Then
$f'(x) = h'(g(x))\,g'(x) = 3(g(x))^2\,(2x) = 3(x^2 + 1)^2\,(2x) = 6x(x^2 + 1)^2$.
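The worked chain-rule example can be checked numerically: the analytic derivative should match a central-difference approximation of f at any point (x = 1.5 here is an arbitrary choice).

```python
# Numerical check of the chain-rule example: f(x) = (x^2 + 1)^3,
# whose derivative by the chain rule is f'(x) = 6x (x^2 + 1)^2.

def f(x):
    return (x**2 + 1)**3

def f_prime(x):
    return 6 * x * (x**2 + 1)**2

x, h = 1.5, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # central-difference approximation
print(f_prime(x), numeric)                  # the two values agree closely
```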
② Structure of the BP algorithm, as shown in Fig. 4.
Except for the neurons of the input layer, every neuron has an input value z obtained by a weighted sum, and an output value a obtained after z passes through the nonlinear Sigmoid function (that is, the activation function); the formulas relating them are
$z_j^l = \sum_i w_{ij}^l\,a_i^{l-1} + b_j^l, \qquad a_j^l = \sigma(z_j^l), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}$
Explanation:
the superscript l and subscript j denote the j-th neuron of layer l;
the subscript ij denotes the connection from the i-th neuron to the j-th neuron;
w denotes a weight;
b denotes a bias.
The reason for the activation function is that the expressive power of a linear model (which cannot handle linearly inseparable cases) is insufficient, so the Sigmoid function is added to introduce a nonlinear factor and produce the neuron's output value.
The codomain of the Sigmoid function is (0, 1), so each neuron of the output layer can represent the probability of its class.
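The neuron formulas above can be sketched directly; the input values and weights below are arbitrary illustrative numbers.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One non-input neuron: the input value z is the weighted sum of the previous
# layer's outputs plus the bias; the output value a is sigmoid(z).
def neuron_output(inputs, weights, bias):
    z = sum(w * a for w, a in zip(weights, inputs)) + bias
    return sigmoid(z)

a = neuron_output([1.0, 0.5], [0.4, -0.2], 0.1)  # z = 0.4
print(a)  # about 0.5987, inside the sigmoid codomain (0, 1)
```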
③ Execution flow of the BP algorithm
After the number of layers of the neural network, the number of neurons per layer and the learning rate η have been set by hand, the BP algorithm first randomly initialises the weight of every connection and every bias; then, for each input x and output y in the training set, the BP algorithm first performs forward propagation to obtain a predicted value, and then, according to the error between the actual value and the predicted value, performs backward feedback to update the weight of every connection and the biases of every layer in the network. This process is repeated as long as no stop condition has been reached.
The stop condition can be any of the following three:
● the weight updates fall below some threshold;
● the prediction error rate falls below some threshold;
● a preset number of iterations is reached.
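The execution flow above (random initialisation, forward pass, backward feedback, stopping after a preset number of iterations) can be sketched on a toy problem. The XOR data set, the 2-2-1 topology, the random seed and the iteration count are our illustrative choices, not values from the patent, whose actual inputs are crawler flow data.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)                       # random initialisation, made reproducible
n_in, n_hid = 2, 2
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [random.uniform(-1, 1) for _ in range(n_hid)]
b2 = 0.0
eta = 0.3                            # learning rate, within the 0.1~0.3 guideline
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # toy XOR set

def forward(x):
    """Forward propagation through the single hidden layer to the output."""
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in zip(w1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

def total_error():
    """Mean-square-error objective E = 1/2 * sum (y - a)^2 over the data."""
    return sum(0.5 * (t - forward(x)[1]) ** 2 for x, t in data)

e0 = total_error()
for _ in range(2000):                # stop condition: preset iteration count
    for x, t in data:
        h, y = forward(x)
        delta_o = (y - t) * y * (1 - y)                        # output-layer delta
        delta_h = [hi * (1 - hi) * w2[j] * delta_o for j, hi in enumerate(h)]
        for j in range(n_hid):       # update output-layer weights, then bias
            w2[j] -= eta * delta_o * h[j]
        b2 -= eta * delta_o
        for j in range(n_hid):       # update hidden-layer weights and biases
            for i in range(n_in):
                w1[j][i] -= eta * delta_h[j] * x[i]
            b1[j] -= eta * delta_h[j]
print(e0, total_error())             # the error shrinks as training proceeds
```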
There are 200,000 neurons in the input layer, and the output layer has 10 neurons representing the digits 0~9; the value of each output neuron lies in 0~1 and represents the probability of that digit.
Whenever a batch of samples is input, the neural network performs forward propagation layer by layer to compute the values of the output-layer neurons, and predicts the crowd flow according to which output neuron has the maximum value.
Then, from the values of the output neurons, the error between the predicted value and the actual value is computed, and backward feedback updates the weight of every connection and the bias of every neuron in the network.
Forward propagation (Feed-Forward)
The process of computing all neuron output values layer by layer, from the input layer through the hidden layers to the output layer.
Backward feedback (Back Propagation)
Because the values of the output layer differ from the real values, the error between the predicted and actual values is measured by the mean square error
$E = \frac{1}{2}\sum_j (y_j - a_j)^2$
The goal of backward feedback is to make the value of the function E as small as possible. The output value of each neuron is determined by the weights of its incoming connections and the corresponding biases of the layer; therefore, to make the error function reach its minimum, we adjust the values of w and b so that the value of the error function is minimal.
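The mean-square-error objective used to drive backward feedback is a one-liner; the sample values below are arbitrary.

```python
# Mean square error E = 1/2 * sum_j (y_j - a_j)^2 between the actual
# values y and the output-layer values a.
def mse(actual, predicted):
    return 0.5 * sum((y - a) ** 2 for y, a in zip(actual, predicted))

print(mse([1.0, 0.0], [0.8, 0.1]))  # 0.5 * (0.04 + 0.01) = 0.025
```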
Update formulas for the weights and biases
The update amounts of w and b are obtained by taking the partial derivatives of the objective function E with respect to w and b; the derivation for the partial derivative with respect to w is taken below.
$\Delta w_{hj} = -\eta\,\frac{\partial E}{\partial w_{hj}}$
where η is the learning rate, usually 0.1~0.3, which can be understood as the step taken along each gradient. Note that the value of $w_{hj}$ first affects the input value $z_j$ of the j-th output-layer neuron and then the output value $a_j$, so by the chain rule:
$\frac{\partial E}{\partial w_{hj}} = \frac{\partial E}{\partial a_j}\cdot\frac{\partial a_j}{\partial z_j}\cdot\frac{\partial z_j}{\partial w_{hj}}$
From the definition of the neuron output value, $a_j = \sigma(z_j)$, and the derivative of the Sigmoid is
$f'(x) = f(x)(1 - f(x))$
So, writing $\delta_j = (a_j - y_j)\,a_j(1 - a_j)$, the update amount of the weight w is
$\Delta w_{hj} = -\eta\,\delta_j\,a_h$
and similarly the update amount of b is
$\Delta b_j = -\eta\,\delta_j$
But these two formulas can only update the weights of the connections between the output layer and the preceding layer and the biases of the output layer, because the δ values depend on the actual value y, and only the actual values of the output layer are known, not the true values of each hidden layer, so the δ values of the hidden layers cannot be computed this way; the following formula is therefore used:
$\delta_i^l = a_i^l\,(1 - a_i^l)\sum_j w_{ij}^{l+1}\,\delta_j^{l+1}$
that is, the δ values of a layer can be computed from the weights and δ values of the next layer; by repeatedly applying this formula, all the weights and biases of the hidden layers can be updated.
As shown in Fig. 5, the i-th neuron of layer l is connected to all neurons of layer l+1, and δ can be expanded as
$\delta_i^l = \frac{\partial E}{\partial z_i^l} = \sum_{j=1}^{n} \frac{\partial E}{\partial z_j^{l+1}}\cdot\frac{\partial z_j^{l+1}}{\partial z_i^l}$
that is, E can be regarded as a function of the input values z of all the neurons of layer l+1; the n in the formula denotes the number of neurons of layer l+1, and after simplification the formula described above is obtained.
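The delta recursion for the hidden layers can be sketched as a small function; the layer sizes and numbers below are illustrative.

```python
# Propagating delta one layer back:
# delta_i^l = a_i^l * (1 - a_i^l) * sum_j w_ij^{l+1} * delta_j^{l+1}
def hidden_deltas(a_l, w_next, delta_next):
    # w_next[i][j] is the weight from neuron i of layer l to neuron j of layer l+1
    return [a * (1 - a) * sum(w_row[j] * delta_next[j] for j in range(len(delta_next)))
            for a, w_row in zip(a_l, w_next)]

# One neuron in layer l (output 0.5) feeding two neurons in layer l+1:
print(hidden_deltas([0.5], [[1.0, -1.0]], [0.2, 0.1]))  # [0.25 * 0.1] = [0.025]
```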
Interpretation of results
200 groups of crawler-data training samples are taken to train the dual-BP neural network combination forecasting model. The number of inputs of the combination model is 2, namely the prediction outputs of the recurrent neural network and the time-delay neural network; the number of hidden-layer neurons is set to 10 according to tests; the number of outputs is 1, i.e. the final predicted value; the learning error is controlled within 0.00001. Then 60 groups of actual crawler data from a certain continuous operating period are chosen for check analysis: the recurrent neural network model, the time-delay neural network model and the dual-BP neural network combination forecasting model are each used to predict the variation trend of the pedestrian flow; the predicted trend of the check sample data and the check sample numbers are shown in Fig. 6.
It can be seen that, apart from the fairly large relative deviation between the predicted and measured values of the recurrent neural network model at sample points 19~22, the prediction results of the time-delay neural network model and of the dual-BP neural network combination forecasting model follow roughly the same variation trend as the measured crowd flow, and the prediction-result curve of the dual-BP combination model coincides very well with the measured curve. On the check samples, the computed average relative error of the dual-BP neural network combination forecasting model is 1.5%, while the average relative errors of the predictions of the recurrent neural network and the time-delay neural network used alone are 2.7% and 1.9% respectively. The recurrent neural network model predicts the variation trend of the data from the timing of the data, while the time-delay neural network model predicts the data from the spatial mapping principle of the related operating parameters; the dual-BP neural network combination forecasting model integrates the respective advantages of the two preceding models and makes full use of the information contained in each component model, so it has higher prediction accuracy.
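The average relative error used in the check analysis above can be computed as follows; the measured and predicted values are made-up illustrative numbers, not the patent's 60 check samples.

```python
# Average relative error over check samples:
# mean of |predicted - measured| / |measured|.
def avg_relative_error(measured, predicted):
    return sum(abs(p - m) / abs(m) for m, p in zip(measured, predicted)) / len(measured)

print(avg_relative_error([100.0, 200.0], [98.0, 205.0]))  # (0.02 + 0.025) / 2 = 0.0225
```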
Summary:
1) the crawler data are predicted with a neural network, so no stationarity assumption about the time series of the data is needed; the mapping relations between the data are learned solely by training on sample data, thereby establishing an accurate data prediction model.
2) the dual-BP neural network combination forecasting model improves prediction accuracy without constraining the model structure; it can integrate the time-series information provided by the recurrent neural network forecasting method and the spatial-mapping information provided by the time-delay neural network forecasting method, thereby obtaining the best prediction effect. Taking the application of the forecast model to crawler data as an example, the computed results show that the dual-BP neural network combination forecasting model has higher prediction accuracy than a recurrent or time-delay neural network used alone.
3) the topology of the neural network affects the accuracy of the forecast model; appropriately enlarging the input sample of historical data lets the network obtain more information about the sequence itself, making the neural network's prediction more accurate. When performing data prediction, the numbers of training and test samples are best kept around 100~200 and around 100, respectively.
Although the present invention is disclosed above with preferred embodiments, they are not intended to limit the invention; any person skilled in the art may make slight modifications and refinements without departing from the spirit and scope of the invention, so the protection scope of the invention shall be defined by the claims.

Claims (6)

  1. A virtual site, characterised in that:
    1) a data crawler system crawls pedestrian flow data at subway stations and bus stops;
    2) the crawled data are cleaned, filtered and converted, and the required data are extracted;
    3) prediction is performed by a dual-BP neural network combination forecasting model;
    a BP neural network is also called the error back-propagation algorithm, whose basic idea is: an input signal enters through the input layer, is computed by the hidden layers and output by the output layer; the output value is compared with the label value, and if there is an error, the error is propagated backwards from the output layer towards the input layer, during which process the neuron weights are adjusted with the gradient-descent algorithm;
    the most crucial mathematical tool in the BP algorithm is the chain rule of differentiation from calculus: if z is a differentiable function of y, and y is a differentiable function of x, then $\frac{dz}{dx} = \frac{dz}{dy}\cdot\frac{dy}{dx}$.
  2. The virtual site as claimed in claim 1, characterised in that:
    in the BP algorithm, except for the neurons of the input layer, every neuron has an input value z obtained by a weighted sum, and an output value a obtained after z passes through the nonlinear Sigmoid function; the formulas relating them are $z_j^l = \sum_i w_{ij}^l\,a_i^{l-1} + b_j^l$ and $a_j^l = \sigma(z_j^l)$;
    the superscript l and subscript j denote the j-th neuron of layer l;
    the subscript ij denotes the connection from the i-th neuron to the j-th neuron;
    w denotes a weight;
    b denotes a bias;
    the activation function is used because the expressive power of a linear model is insufficient; the Sigmoid function is added to introduce a nonlinear factor and produce the neuron's output value.
  3. The virtual site as claimed in claim 2, characterised in that:
    the BP algorithm executes as follows:
    after the number of layers of the neural network, the number of neurons per layer and the learning rate η have been set by hand, the BP algorithm first randomly initialises the weight of every connection and every bias; then, for each input x and output y in the training set, the BP algorithm first performs forward propagation to obtain a predicted value, and then, according to the error between the actual value and the predicted value, performs backward feedback to update the weight of every connection and the biases of every layer in the network; this process is repeated as long as no stop condition has been reached;
    the stop condition can be any of the following three:
    1) the weight updates fall below some threshold;
    2) the prediction error rate falls below some threshold;
    3) a preset number of iterations is reached.
  4. The virtual site as claimed in claim 3, characterised in that:
    there are 200,000 neurons in the input layer, and the output layer has 10 neurons representing the digits 0~9; the value of each output neuron lies in 0~1 and represents the probability of that digit;
    whenever a batch of samples is input, the neural network performs forward propagation layer by layer to compute the values of the output-layer neurons, and predicts the crowd flow according to which output neuron has the maximum value;
    then, from the values of the output neurons, the error between the predicted value and the actual value is computed, and backward feedback updates the weight of every connection and the bias of every neuron in the network;
    forward propagation (Feed-Forward): the process of computing all neuron output values layer by layer, from the input layer through the hidden layers to the output layer;
    backward feedback (Back Propagation): because the values of the output layer differ from the real values, the error between the predicted and actual values is measured by the mean square error $E = \frac{1}{2}\sum_j (y_j - a_j)^2$;
    the goal of backward feedback is to make the value of the function E as small as possible; the output value of each neuron is determined by the weights of its incoming connections and the corresponding biases of the layer; therefore, to make the error function reach its minimum, we adjust the values of w and b so that the value of the error function is minimal;
  5. The virtual site as claimed in claim 4, characterised in that:
    update formulas for the weights and biases:
    the update amounts of w and b are obtained by taking the partial derivatives of the objective function E with respect to w and b; the derivation for the partial derivative with respect to w is taken below;
    $\Delta w_{hj} = -\eta\,\partial E/\partial w_{hj}$, where η is the learning rate, usually 0.1~0.3, which can be understood as the step taken along each gradient; the value of $w_{hj}$ first affects the input value $z_j$ of the j-th output-layer neuron and then the output value $a_j$, so by the chain rule $\partial E/\partial w_{hj} = (\partial E/\partial a_j)(\partial a_j/\partial z_j)(\partial z_j/\partial w_{hj})$;
    from the definition of the neuron output value, $a_j = \sigma(z_j)$;
    the derivative of the Sigmoid is $f'(x) = f(x)(1 - f(x))$;
    so, writing $\delta_j = (a_j - y_j)\,a_j(1 - a_j)$, the update amount of the weight w is $\Delta w_{hj} = -\eta\,\delta_j\,a_h$;
    and the update amount of b is $\Delta b_j = -\eta\,\delta_j$;
    but these two formulas can only update the weights of the connections between the output layer and the preceding layer and the biases of the output layer, because the δ values depend on the actual value y, and only the actual values of the output layer are known, not the true values of each hidden layer, so the δ values of the hidden layers cannot be computed this way; the following formula is therefore used:
    $\delta_i^l = a_i^l\,(1 - a_i^l)\sum_j w_{ij}^{l+1}\,\delta_j^{l+1}$
    that is, the δ values of a layer can be computed from the weights and δ values of the next layer; by repeatedly applying this formula, all the weights and biases of the hidden layers can be updated.
  6. The virtual site as claimed in claim 5, characterised in that:
    the i-th neuron of layer l is connected to all neurons of layer l+1, and δ can be expanded as $\delta_i^l = \partial E/\partial z_i^l = \sum_{j=1}^{n} (\partial E/\partial z_j^{l+1})\,(\partial z_j^{l+1}/\partial z_i^l)$;
    that is, E can be regarded as a function of the input values z of all the neurons of layer l+1; the n in the formula denotes the number of neurons of layer l+1, and after simplification the formula described above is obtained.
CN201710751524.8A 2017-08-28 2017-08-28 Virtual site Pending CN107886159A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710751524.8A CN107886159A (en) 2017-08-28 2017-08-28 Virtual site


Publications (1)

Publication Number Publication Date
CN107886159A true CN107886159A (en) 2018-04-06

Family

ID=61780571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710751524.8A Pending CN107886159A (en) 2017-08-28 2017-08-28 Virtual site

Country Status (1)

Country Link
CN (1) CN107886159A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222844A (en) * 2019-05-30 2019-09-10 西安交通大学 A kind of compressor performance prediction technique based on artificial neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103236163A (en) * 2013-04-28 2013-08-07 北京航空航天大学 Traffic jam avoiding prompting system based on collective intelligence network
CN106503869A (en) * 2016-11-14 2017-03-15 东南大学 A kind of public bicycles dynamic dispatching method that is predicted based on website short-term needs
CN107067076A (en) * 2017-05-27 2017-08-18 重庆大学 A kind of passenger flow forecasting based on time lag NARX neutral nets


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI Wei et al.: "Application of the dual BP neural network combination model in real-time data prediction", Proceedings of the CSEE (《中国电机工程学报》) *
YIN Zhen: "Research on internal fault diagnosis methods for power transformers based on BP neural networks", China Master's Theses Full-text Database, Engineering Science and Technology II (《中国优秀硕士学位论文全文数据库工程科技Ⅱ辑》) *


Similar Documents

Publication Publication Date Title
Ozgür Daily river flow forecasting using artificial neural networks and auto-regressive models
Berneti et al. An imperialist competitive algorithm artificial neural network method to predict oil flow rate of the wells
Bianchini et al. Prediction of pavement performance through neuro‐fuzzy reasoning
Lam et al. Decision support system for contractor pre‐qualification—artificial neural network model
CN110555230B (en) Rotary machine residual life prediction method based on integrated GMDH framework
CN103678952A (en) Elevator risk evaluation method
CN105117602A (en) Metering apparatus operation state early warning method
CN107909206A (en) A kind of PM2.5 Forecasting Methodologies based on deep structure Recognition with Recurrent Neural Network
CN106682781A (en) Power equipment multi-index prediction method
CN107016469A (en) Methods of electric load forecasting
CN107085941A (en) A kind of traffic flow forecasting method, apparatus and system
Jalalkamali et al. Groundwater modeling using hybrid of artificial neural network with genetic algorithm
Alqahtani et al. Artificial neural networks incorporating cost significant items towards enhancing estimation for (life-cycle) costing of construction projects
El-Sawalhi et al. A neural network model for building construction projects cost estimating
CN105046389A (en) Intelligent risk assessment method for electric power security risk assessment, and system thereof
CN108053052A (en) A kind of oil truck oil and gas leakage speed intelligent monitor system
KR20200008621A (en) Electrical Equipment Safety Evaluation System Using Artificial Intelligence Technique
CN105046377A (en) Method for screening optimum indexes of reservoir flood control dispatching scheme based on BP neural network
Ruslan et al. Flood modelling using artificial neural network
CN115953252A (en) Method for determining construction safety liability insurance premium
CN107886159A (en) Virtual site
CN108535434A (en) Method based on Neural Network model predictive building site surrounding body turbidity
Shafabakhsh et al. Determining the relative importance of parameters affecting concrete pavement thickness
Mohammed et al. Predicting performance measurement of residential buildings using machine intelligence techniques (MLR, ANN and SVM)
Yang et al. Risk prediction of city distribution engineering based on BP

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180406