CN110147874B - Intelligent optimization method for environmental factor level of long and large tunnel vehicle speed distribution - Google Patents


Info

Publication number
CN110147874B
CN110147874B · Application CN201910349189.8A
Authority
CN
China
Prior art keywords
time period
value
vehicle speed
hidden layer
speed distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910349189.8A
Other languages
Chinese (zh)
Other versions
CN110147874A (en)
Inventor
刘兵
龚子任
罗寅杰
郑凯淘
鲁哲
王沛源
贝润钊
朱顺应
夏晶
邓正步
成萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN201910349189.8A
Publication of CN110147874A
Application granted
Publication of CN110147874B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
                        • G06N3/006 Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
                    • G06N3/02 Neural networks
                        • G06N3/04 Architecture, e.g. interconnection topology
                            • G06N3/045 Combinations of networks
                            • G06N3/048 Activation functions
                        • G06N3/08 Learning methods
            • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
                • G06Q10/00 Administration; Management
                    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
                    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
                        • G06Q10/063 Operations research, analysis or management
                            • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
                                • G06Q10/06393 Score-carding, benchmarking or key performance indicator [KPI] analysis
                • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
                    • G06Q50/40 Business processes related to the transportation industry

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Primary Health Care (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention provides an intelligent optimization method for the environmental factor levels of the vehicle speed distribution of a long and large tunnel. The method takes environmental factor and vehicle speed distribution data collected over a plurality of consecutive historical days as learning samples; constructs an extreme learning machine neural network from this historical data; and solves for the optimal level of each environmental factor under the vehicle speed distribution expected by the manager using the particle swarm algorithm. The invention establishes the nonlinear relation between the environmental factors and the vehicle speed distribution through the neural network, and solves for the level required of each environmental factor to realize the vehicle speed distribution expected by the manager.

Description

Intelligent optimization method for environmental factor level of long and large tunnel vehicle speed distribution
Technical Field
The invention belongs to the technical field of road engineering and traffic information engineering, and particularly relates to an intelligent optimization method for the environmental factor levels of the vehicle speed distribution of a long and large tunnel.
Background
The traffic flow speed distribution in a long and large tunnel shows strong spatio-temporal variation, which is expressed in three indexes: the 85th-percentile speed, the average speed, and the speed standard deviation. In time, the speed distribution changes over the tunnel's operating life and differs markedly between the time periods of each day. In space, the entrance and exit of a long and large tunnel exhibit the black-hole and white-hole effects, so vehicle speeds there are low and their dispersion small, while speeds in the middle section of the tunnel are high and their dispersion large. Traffic efficiency and safety inside the tunnel are closely related to the traffic flow speed and its dispersion: low speeds easily form bottlenecks and reduce traffic efficiency, while high dispersion raises the risk of rear-end collisions and lowers the safety level. The speed distribution is influenced by many environmental factors and can be influenced by changing their levels; however, no effective model currently establishes the complex nonlinear relation between the environmental factors and the speed distribution, or uses that relation to obtain the optimal level of each environmental factor under the vehicle speed distribution expected by a manager.
To achieve nonlinear regression on large-sample data, supervised learning with artificial neural networks is now widely used. The extreme learning machine is a machine learning algorithm based on a feedforward neural network; its main characteristic is that the hidden layer node parameters can be chosen randomly or given in advance and need no adjustment, so only the output weights need to be computed during learning, which gives it high learning efficiency and strong generalization capability. To obtain the level of each environmental factor under an expected vehicle speed distribution, i.e. to solve for the extremum of a multivariate nonlinear function, heuristic algorithms are commonly used. Among them, the particle swarm algorithm, inspired by the regularity of bird flocking, builds a simplified model of swarm intelligence: by sharing information among individuals in the swarm, the motion of the whole swarm evolves from disorder to order in the solution space and converges to an optimal solution.
In summary, the invention provides an artificial intelligence calculation method for the environmental factor levels influencing the vehicle speed distribution of a long and large tunnel, which takes historical observations of environmental factors and vehicle speeds as learning samples, constructs the nonlinear mapping between the environmental factors and the speed distribution through an extreme learning machine neural network, and solves for the optimal levels of all environmental factors under the expected speed distribution using the particle swarm algorithm.
Disclosure of Invention
Based on the theoretical premise that a nonlinear mapping exists between the environmental factors and the vehicle speed distribution, the invention combines artificial intelligence algorithms into an intelligent optimization method for the environmental factor levels of the long and large tunnel vehicle speed distribution: the nonlinear mapping between the environmental factors and the speed distribution is established through an extreme learning machine, and the optimal level of each environmental factor under the vehicle speed distribution expected by the manager is solved using the particle swarm algorithm, so that the manager obtains the levels of the environmental factors required to realize the expected speed distribution.
The technical scheme of the invention is an intelligent optimization method for the environmental factor levels of the long and large tunnel vehicle speed distribution, which specifically comprises the following steps:
Step 1: using historically collected environmental factor and vehicle speed distribution data of a plurality of consecutive days as learning samples;
Step 2: constructing an extreme learning machine neural network from the environmental factor and vehicle speed distribution historical data acquired over the consecutive days;
Step 3: solving the optimal level of each environmental factor under the vehicle speed distribution expected by the manager by using the particle swarm algorithm.
Preferably, in step 1, the environmental factor and vehicle speed distribution data collected over a plurality of consecutive historical days are taken as learning samples:
An extreme learning machine neural network with a single hidden layer is adopted: the input layer has n neurons, corresponding to the n environmental factor input variables; the hidden layer has l neurons; the output layer has m neurons, corresponding to the m output variables.
The 24 h of each day are divided into S time periods (S a positive integer), and the environmental factors and vehicle speed distribution indexes are collected at the end of each period. The data collected in each period form one learning sample: the n environmental factors correspond to the n inputs, and the m vehicle speed distribution indexes correspond to the m outputs.
The input sample IP_{s,i,q} denotes the level of the i-th (i = 1, 2, …, n) environmental factor collected in the s-th (s = 1, 2, …, S) period of the q-th (q = 1, 2, …, Q) day; the output sample OP_{s,j,q} denotes the value of the j-th (j = 1, 2, …, m) vehicle speed distribution index collected in the s-th period of the q-th day.
preferably, the construction of the neural network of the extreme learning machine in the step 2 is specifically as follows:
the learning samples for the S (S =1,2,3, \ 8230;, S) th epoch are Q in total, and the neural network map f for the S-th epoch s The construction method of x → y is as follows:
training set input matrix X with Q samples in s-th period s And an output matrix Y s Respectively as follows:
Figure GDA0002129381120000021
wherein x is s,i,q Representing the value of the ith (i =1,2,3, \8230;, n) input in the qth (Q =1,2, \8230;, Q) learning sample over the S (S =1,2,3, \8230;, S) th epoch in the history data;
y s,j,q representing the value of the jth (j =1,2, \8230;, m) output in the qth (Q =1,2, \8230;, Q) learning sample over the S (S =1,2,3, \8230;, S) th epoch in the history data;
x s,i,q and IP q,s,i 、y s,j,q And OP q,s,j Has the following relationship
x s,i,q =IP s,i,q ,y s,j,q =OP s,j,q
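As a concrete illustration of how the per-period training sets above can be assembled, the following Python/NumPy sketch (the array names, dimensions, and random placeholder data are assumptions for demonstration, not part of the patent) arranges hypothetical observations IP_{s,i,q} and OP_{s,j,q} into one (X_s, Y_s) pair per period:

```python
import numpy as np

# Hypothetical dimensions (the patent leaves n, m, S, Q symbolic here):
# S periods per day, Q days, n environmental factors, m speed indexes.
S, Q, n, m = 12, 365, 9, 3

rng = np.random.default_rng(0)
# Random placeholders stand in for the historically collected observations:
# IP[s, i, q] plays the role of IP_{s,i,q}, OP[s, j, q] of OP_{s,j,q}.
IP = rng.uniform(size=(S, n, Q))
OP = rng.uniform(size=(S, m, Q))

# One training set per period s: X_s is (n x Q), Y_s is (m x Q),
# with x_{s,i,q} = IP_{s,i,q} and y_{s,j,q} = OP_{s,j,q}.
X = [IP[s] for s in range(S)]
Y = [OP[s] for s in range(S)]
```

Each list entry is the training set of one period, matching the per-period matrices X_s and Y_s defined above.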
The connection weights between the input layer and the hidden layer in the s-th period are W_s, the connection weights between the hidden layer and the output layer are β_s, and the thresholds of the hidden layer neurons are b_s:
W_s = [w_{s,e,i}]_{l×n}, β_s = [β_{s,e,j}]_{l×m}, b_s = [b_{s,1}, b_{s,2}, …, b_{s,l}]^T
where w_{s,e,i} is the connection weight between the e-th (e = 1, 2, …, l) hidden layer neuron and the i-th (i = 1, 2, …, n) input layer neuron in the s-th period of the historical data; β_{s,e,j} is the connection weight between the e-th hidden layer neuron and the j-th (j = 1, 2, …, m) output layer neuron in the s-th period; b_{s,e} is the bias of the e-th hidden layer neuron in the s-th period.
The activation function of the hidden layer neurons in the s-th period is g_s(x), and the network output is
T_s = [t_{s,1}, t_{s,2}, …, t_{s,Q}]_{m×Q}
where t_{s,q} is an (m×1) column vector giving the computed output of the q-th input sample in the network over the s-th period:
t_{s,q} = [t_{s,1,q}, …, t_{s,m,q}]^T, with t_{s,j,q} = Σ_{e=1}^{l} β_{s,e,j} g_s(w_{s,e} x_{s,q} + b_{s,e}) (j = 1, 2, …, m)
where t_{s,j,q} is the computed output of the q-th input sample at the j-th (j = 1, 2, …, m) output layer neuron;
w_{s,e} = [w_{s,e,1}, w_{s,e,2}, …, w_{s,e,n}]_{1×n} is a row vector and x_{s,q} = [x_{s,1,q}, x_{s,2,q}, …, x_{s,n,q}]^T a column vector; the other parameters are defined as above.
the relationship of network input to output can be represented by:
Figure GDA0002129381120000041
the simplification is as follows:
Η s β s =Τ s T
in the formula
Figure GDA0002129381120000044
Is a matrix beta s Transposition of (T) s T Is a matrix T s Transpose of (1), matrix [ B ] s ] l×Q =b s ×[1,1,…1] 1×Q ,[Η s ] Q×l Is the hidden layer output matrix of the s-th period, H s =[g s (W s X s +B s )] T The specific form is as follows:
Figure GDA0002129381120000042
when the number of hidden layer neurons equals the number of training samples, i.e., l = Q, for Q different training sample sets
Figure GDA0002129381120000045
Wherein x is s,q =[x s,1,q ,x s,2,q ,…x s,n,q ] T ∈R n ,y s,q =[y s,1,q ,y s,2,q ,…,y s,m,q ] T ∈R m If the function g is activated s R → R satisfies infinite or infinitesimal over an arbitrary interval, then for arbitrary R n And randomly generating w according to any continuous probability distribution in any interval of R space s,e And b s,e Then its hidden layer outputs matrix H s Reversible with probability 1, and with a | Η s β ss T I | =0 holds with probability 1. Then for any W s And b s The neural network can approximate the training sample with zero error, namely:
Figure GDA0002129381120000043
when the Q of the training sample is larger, in order to reduce the calculation amount, the value of the number l of the neurons in the hidden layer is usually smaller than Q, and an arbitrary is givenAn arbitrary number of positive e > 0 and arbitrary Q different training sample sets
Figure GDA0002129381120000046
Wherein x is s,q =[x s,1,q ,x s,2,q ,…x s,n,q ] T ∈R n ,y s,q =[y s,1,q ,y s,2,q ,…,y s,m,q ] T ∈R m If activating the function g s R → R satisfies infinite differentiable over arbitrary intervals, then for arbitrary R n And randomly generating w according to any continuous probability distribution in any interval of R space s,e And b s,e There is a feedforward neural network containing l (l ≦ Q) hidden layer neurons, such that | | | H s β ss T I < ε holds with a probability of 1. The training error of the neural network can be approximated to an arbitrary epsilon > 0, namely:
Figure GDA0002129381120000051
activation function g of hidden layer s (x) Using a Sigmoid function infinitely differentiable over intervals
Figure GDA0002129381120000052
Thus, when the activation function g_s(x) is infinitely differentiable, W_s and b_s can be selected randomly before training and kept fixed during training, and the connection weights β_s between the hidden layer and the output layer are obtained as the least squares solution of the minimum-norm equation
min_{β_s} ||H_s β_s − T_s^T||
whose solution is:
β̂_s = H_s^+ T_s^T
where H_s^+ is the Moore-Penrose generalized inverse of the hidden layer output matrix H_s, usually computed by the orthogonal projection method:
when H_s^T H_s is nonsingular, H_s^+ = (H_s^T H_s)^{−1} H_s^T;
when H_s H_s^T is nonsingular, H_s^+ = H_s^T (H_s H_s^T)^{−1}.
The single-hidden-layer extreme learning machine neural network thus fits the nonlinear mapping f_s: x_{s,q} → y_{s,q} between the input vector x_{s,q} and the output vector y_{s,q} in the s-th period.
For an arbitrary input vector x_{s,q}, the computed output vector of the network is t_{s,q} = f_s(x_{s,q}).
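The training procedure described above (random W_s and b_s, β_s from the Moore-Penrose generalized inverse) can be sketched in NumPy as follows. This is a minimal illustration under stated assumptions, not the patent's implementation; `np.linalg.pinv` stands in for the orthogonal projection formulas, and the demo data are random placeholders:

```python
import numpy as np

def sigmoid(x):
    # Hidden layer activation g_s(x) = 1 / (1 + e^{-x})
    return 1.0 / (1.0 + np.exp(-x))

def elm_fit(X, Y, l, rng):
    # X: (n, Q) input matrix X_s; Y: (m, Q) output matrix Y_s; l: hidden neurons.
    # W_s and b_s are drawn randomly and never adjusted; only beta_s is solved.
    n, Q = X.shape
    W = rng.normal(size=(l, n))        # input-to-hidden weights W_s
    b = rng.normal(size=(l, 1))        # hidden layer thresholds b_s
    H = sigmoid(W @ X + b).T           # hidden layer output matrix H_s, (Q, l)
    beta = np.linalg.pinv(H) @ Y.T     # beta_s = H_s^+ T_s^T (Moore-Penrose)
    return W, b, beta

def elm_predict(W, b, beta, X):
    # t_{s,q} = f_s(x_{s,q}) for every column of X; returns an (m, Q) array.
    return (sigmoid(W @ X + b).T @ beta).T

# Toy demonstration with l = Q, where the fitted network interpolates the
# training set exactly (zero training error up to numerical precision).
rng = np.random.default_rng(0)
n, m, Q = 4, 2, 10
X_demo = rng.normal(size=(n, Q))
Y_demo = rng.normal(size=(m, Q))
W, b, beta = elm_fit(X_demo, Y_demo, l=Q, rng=rng)
train_err = np.abs(elm_predict(W, b, beta, X_demo) - Y_demo).max()
```

The l = Q case illustrates the zero-error property stated above; in practice l < Q would be used, as the description notes.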
Preferably, in step 3, the optimal level of each environmental factor under the vehicle speed distribution expected by the manager is solved by the particle swarm algorithm as follows:
Let the vehicle speed distribution expected by the manager for the road section in the s-th period be [E_s]_{m×1}. The objective is to adjust the environmental factor levels [I_s]_{n×1} of the s-th period so that the actual vehicle speed distribution [O_s]_{m×1} on the section approaches the expected distribution E_s; the objective function of the s-th period is expressed as
min F_s(O_s) = ||E_s − O_s||.
According to the mapping relation f_s: I_s → O_s, the optimal environmental factor levels I_s^* are obtained by solving the objective function.
Among the n environmental factors influencing the vehicle speed in the s-th period, the levels are I_s = [d_{s,1}, d_{s,2}, …, d_{s,n}]^T, where d_{s,i} (i = 1, 2, …, n) is the level of the i-th environmental factor in the s-th period. The upper limit of the value interval of I_s is [ucl_s]_{n×1} = [ucl_{s,1}, ucl_{s,2}, …, ucl_{s,n}]^T, where ucl_{s,i} is the upper limit of the i-th (i = 1, 2, …, n) factor level in the s-th period; the lower limit of I_s is [lcl_s]_{n×1} = [lcl_{s,1}, lcl_{s,2}, …, lcl_{s,n}]^T, where lcl_{s,i} is the lower limit of the i-th factor level in the s-th period. This gives the constraint condition
lcl_{s,i} ≤ d_{s,i} ≤ ucl_{s,i}
and the extreme value of the objective function is then solved by the particle swarm algorithm.
Step 3.1: initializing N particles in an N-dimensional search space to form a primary (evolution algebra r = 0) population
Figure GDA0002129381120000063
Wherein the kth (k =1,2, \8230;, N) particle represents one N-dimensional vector of the s-th time period
Figure GDA0002129381120000064
d s,k,i Vector the position of the kth particle in the ith dimension search space in the s period of the initial generation population
Figure GDA0002129381120000065
A potential solution may be represented;
step 3.2: according to the mapping relation f of the neural network in the s-th time period s :I s →O s Calculating the output of the kth particle in the network in the r generation
Figure GDA0002129381120000066
Further calculate the objective function value
Figure GDA0002129381120000067
Since the problem is the minimum value problem and the individual fitness is the benefit index, the fitness function G in the s-th time period can be made s (x)=-F s (x) Then the individual fitness of the s-th time period
Figure GDA0002129381120000068
And comparing the fitness function values of all the particles in the generation population in the s-th time period to obtain the e (e =1,2, \ 8230; N) th particle with the maximum fitness function value and the position I s,e r If the position of the individual extremum of the r-th generation in the s-th time period is P s r =I s,e r The position of the extreme value of the population is Z s r The calculation method comprises the following steps: if r =0, then Z s r =I s,e r (ii) a If r is more than or equal to 1, the position Z of the extreme value of the population s r Taking the value of Z s r-1 And P s r The fitness function value at two positions is larger;
Step 3.3: update the evolution generation by r = r + 1. If the generation r exceeds the maximum number of generations MG, i.e. r > MG, go to step 3.4; otherwise continue as follows.
Update the velocities and positions of the particles in the population according to the extreme positions of the previous generation:
v_{s,k}^r = α v_{s,k}^{r−1} + c_1 λ_1 (P_s^{r−1} − I_{s,k}^{r−1}) + c_2 λ_2 (Z_s^{r−1} − I_{s,k}^{r−1})
I_{s,k}^r = I_{s,k}^{r−1} + v_{s,k}^r
where α is the inertia weight; c_1 is the acceleration factor of movement toward the individual extremum, a non-negative constant; c_2 is the acceleration factor of movement toward the population extremum, a non-negative constant; λ_1 and λ_2 are random numbers uniformly distributed on the interval [0, 1];
v_{s,k}^r = [v_{s,k,1}, …, v_{s,k,n}]^T is the velocity vector of the k-th particle in the r-th generation of the s-th period, v_{s,k,i} being the search velocity of the k-th particle in the i-th dimension of the space in the s-th period. The velocity has upper limit [uv_s]_{n×1} = [uv_{s,1}, uv_{s,2}, …, uv_{s,n}]^T, uv_{s,i} being the upper limit of the search velocity in the i-th dimension in the s-th period, and lower limit [lv_s]_{n×1} = [lv_{s,1}, lv_{s,2}, …, lv_{s,n}]^T, lv_{s,i} being the lower limit of the search velocity in the i-th dimension in the s-th period.
I_{s,k}^{r−1} is the position vector of the k-th particle in the (r−1)-th generation population of the s-th period, and I_{s,k}^r is the position vector of the k-th particle in the r-th generation population of the s-th period. The upper limit of the value interval of the position vector is [ucl_s]_{n×1} = [ucl_{s,1}, ucl_{s,2}, …, ucl_{s,n}]^T, ucl_{s,i} being the upper limit of the i-th factor level in the s-th period; the lower limit is [lcl_s]_{n×1} = [lcl_{s,1}, lcl_{s,2}, …, lcl_{s,n}]^T, lcl_{s,i} being the lower limit of the i-th factor level in the s-th period.
Go to step 3.2.
Step 3.4: from the neural network mapping f_s: I_s → O_s of the s-th period, compute the output of the k-th particle of the last generation (r = MG) in the network
O_{s,k}^{MG} = f_s(I_{s,k}^{MG})
and then the objective function value
F_s(O_{s,k}^{MG}) = ||E_s − O_{s,k}^{MG}||
with individual fitness
G_s(O_{s,k}^{MG}) = −F_s(O_{s,k}^{MG}).
Comparing the fitness function values of all particles in the last generation (r = MG) population, the particle with the maximum fitness in the s-th period, say the e-th (e = 1, 2, …, N), has position P_s^{MG} = I_{s,e}^{MG}. The population extreme position of the last generation Z_s^{MG} takes whichever of Z_s^{MG−1} and P_s^{MG} has the larger fitness function value.
The particle swarm algorithm thus yields the optimal level I_s^* of each environmental factor in the s-th period.
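Steps 3.1 to 3.4 can be sketched as the following NumPy implementation. It is a hedged illustration of the PSO variant described above (a generation-best position P and a running population extremum Z), not the patent's code; the function name, all parameter values, and the toy identity mapping in the demonstration are assumptions:

```python
import numpy as np

def pso_optimize(f_s, E_s, lcl, ucl, lv, uv, N=30, MG=100,
                 alpha=0.7, c1=1.5, c2=1.5, seed=0):
    # Minimize F_s(I) = ||E_s - f_s(I)|| subject to lcl <= I <= ucl,
    # with velocity components clipped to [lv, uv].
    rng = np.random.default_rng(seed)
    E_s = np.asarray(E_s, dtype=float)
    lcl, ucl, lv, uv = (np.asarray(a, dtype=float) for a in (lcl, ucl, lv, uv))
    n = lcl.size
    I = rng.uniform(lcl, ucl, size=(N, n))   # step 3.1: initial population
    v = np.zeros((N, n))                     # initial velocities

    def cost(pos):                           # F_s(O) = ||E_s - O||
        return np.linalg.norm(E_s - np.asarray(f_s(pos)))

    costs = np.array([cost(p) for p in I])   # step 3.2 on generation 0
    k = int(np.argmin(costs))
    P = I[k].copy()                          # generation-best position P_s^0
    Z, Z_cost = P.copy(), float(costs[k])    # population extremum Z_s^0

    for r in range(1, MG + 1):               # step 3.3: evolution loop
        lam1, lam2 = rng.uniform(size=(2, N, n))
        v = alpha * v + c1 * lam1 * (P - I) + c2 * lam2 * (Z - I)
        v = np.clip(v, lv, uv)               # velocity limits
        I = np.clip(I + v, lcl, ucl)         # position (factor-level) limits
        costs = np.array([cost(p) for p in I])
        k = int(np.argmin(costs))
        P = I[k].copy()                      # generation-best P_s^r
        if costs[k] < Z_cost:                # keep the better of Z^{r-1}, P^r
            Z, Z_cost = P.copy(), float(costs[k])
    return Z, Z_cost                         # step 3.4: I_s^* and its cost

# Toy check: with f_s the identity map, the optimum should sit near E_s.
target = np.array([2.0, 5.0, 7.0])
I_star, best_cost = pso_optimize(lambda I: I, target,
                                 lcl=[0, 0, 0], ucl=[10, 10, 10],
                                 lv=[-1, -1, -1], uv=[1, 1, 1])
```

In the patent's setting, `f_s` would be the fitted extreme learning machine mapping of the s-th period rather than the identity map used here for checking.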
The invention provides an artificial intelligence calculation method for the environmental factor levels influencing the vehicle speed distribution of a long and large tunnel, which establishes the nonlinear relation between the environmental factors and the speed distribution through a neural network and solves, using the particle swarm algorithm, for the level required of each environmental factor under the vehicle speed distribution expected by the manager.
Drawings
FIG. 1: an extreme learning machine neural network structure;
FIG. 2: solving a flow chart by a particle swarm algorithm;
FIG. 3: a method flow diagram.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The following describes the intelligent optimization method for the environmental factor levels of the long and large tunnel vehicle speed distribution with reference to figs. 1 to 2, comprising the following steps:
Step 1: historically collected environmental factor and vehicle speed distribution data of multiple consecutive days are used as learning samples.
In step 1, the environmental factor and vehicle speed distribution data collected over consecutive historical days are taken as learning samples:
An extreme learning machine neural network with a single hidden layer is adopted: the input layer has 9 neurons, corresponding to the 9 environmental factor input variables, namely the observed values of ambient temperature, lane width, road cross slope, road longitudinal slope, traffic flow density, vehicle type proportion, tunnel illumination, light color temperature, and tunnel noise; the hidden layer has 3 neurons; the output layer has 3 neurons, corresponding to the 3 output variables, namely the 85th-percentile speed, the average speed, and the speed standard deviation.
The 24 h of each day are divided into 12 periods, and the environmental factors and vehicle speed distribution indexes are collected at the end of each period; the data collected in each period form one learning sample, with the 9 environmental factors corresponding to the 9 inputs and the 3 speed distribution indexes corresponding to the 3 outputs.
The input sample IP_{s,i,q} denotes the level of the i-th (i = 1, 2, …, 9) environmental factor collected in the s-th (s = 1, 2, …, 12) period of the q-th (q = 1, 2, …, 365) day; the output sample OP_{s,j,q} denotes the value of the j-th (j = 1, 2, 3) vehicle speed distribution index collected in the s-th period of the q-th day.
Step 2: construct the extreme learning machine neural network from the environmental factor and vehicle speed distribution historical data collected over 365 days.
The construction of the extreme learning machine neural network in step 2 is specifically as follows:
There are 365 learning samples for the s-th (s = 1, 2, …, 12) period, and the neural network mapping f_s: x → y of the s-th period is constructed as follows.
The training-set input matrix X_s and output matrix Y_s of the 365 samples in the s-th period are, respectively:
X_s = [x_{s,i,q}]_{9×365}, Y_s = [y_{s,j,q}]_{3×365}
where x_{s,i,q} is the value of the i-th (i = 1, 2, …, 9) input in the q-th (q = 1, 2, …, 365) learning sample of the s-th (s = 1, 2, …, 12) period in the historical data;
y_{s,j,q} is the value of the j-th (j = 1, 2, 3) output in the q-th (q = 1, 2, …, 365) learning sample of the s-th (s = 1, 2, …, 12) period in the historical data;
x_{s,i,q} and IP_{s,i,q}, and y_{s,j,q} and OP_{s,j,q}, are related by:
x_{s,i,q} = IP_{s,i,q}, y_{s,j,q} = OP_{s,j,q}.
the connection weight between the input layer and the hidden layer in the s-th time period is W s The connection weight between the hidden layer and the output layer is beta s With a threshold of b for hidden layer neurons s
Figure GDA0002129381120000092
Wherein, w s,e,i Representing connection weights between the e (e =1,2, 3) th hidden layer neuron and the i (i =1,2, \ 8230;, 9) th input layer neuron over the s-th time period in the historical data; beta is a s,e,j Representing the connection weights between the e-th hidden layer neuron and the j (j =1,2, 3) th output layer neuron over the s-th time period in the historical data; b s,e An offset value representing the e-th hidden layer neuron over the s-th time period;
the hidden layer neuron activation function for the s-th period is g s (x) The output is T s ,t s,q Is a (3 × 1) column vector, tableShowing the calculated output value of the qth input sample in the network over the s-th time period:
T=[t 1 ,t 2 ,...,t 365 ] 3×365
Figure GDA0002129381120000093
in the formula, t j,q Represents the calculated output value of the qth input sample at the jth (j =1,2, 3) output layer neuron; w is a s,e =[w s,e,1 ,w s,e,2 ,…,w s,e,9 ] 1×9 Is a row vector, x s,q =[x s,1,q ,x s,2,q ,…,x s,9,q ] T The column vector, the other parameters are defined as above;
the relationship of network input to output can be represented by:
Figure GDA0002129381120000101
This simplifies to:

H_s β_s = T_s^T

where T_s^T is the transpose of the matrix T_s, β_s is the hidden-to-output weight matrix defined above, [B_s]_{3×365} = b_s × [1, 1, …, 1]_{1×365}, and [H_s]_{365×3} is the hidden-layer output matrix of the s-th time period, H_s = [g_s(W_s X_s + B_s)]^T, whose explicit form is:

H_s = [g_s(w_{s,e} x_{s,q} + b_{s,e})]_{365×3}, with rows indexed by q = 1, 2, …, 365 and columns by e = 1, 2, 3
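The construction of the hidden-layer output matrix H_s above can be sketched in a few lines of NumPy; the shapes follow the description (l = 3 hidden neurons, n = 9 inputs, Q = 365 samples), and the function name is our own illustrative choice:

```python
import numpy as np

def hidden_output(W, b, X):
    """Hidden-layer output matrix H_s = [g_s(W_s X_s + B_s)]^T.

    W : (l, n) input-to-hidden weights  (W_s)
    b : (l, 1) hidden-neuron thresholds (b_s)
    X : (n, Q) training inputs          (X_s)
    Returns H of shape (Q, l): entry (q, e) is g_s(w_{s,e} x_{s,q} + b_{s,e}).
    """
    # Sigmoid activation applied elementwise; b broadcasts across the Q columns
    G = 1.0 / (1.0 + np.exp(-(W @ X + b)))
    return G.T
```

With zero weights and thresholds every entry is the sigmoid of zero, which makes the broadcasting easy to check by hand.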
When the number of hidden-layer neurons equals the number of training samples, i.e. l = Q = 365, then for 365 distinct training samples (x_{s,q}, y_{s,q}), where x_{s,q} = [x_{s,1,q}, x_{s,2,q}, …, x_{s,9,q}]^T ∈ R^9 and y_{s,q} = [y_{s,1,q}, y_{s,2,q}, y_{s,3,q}]^T ∈ R^3, if the activation function g_s: R → R is infinitely differentiable on any interval, then for w_{s,e} and b_{s,e} randomly generated from any continuous probability distribution on any interval of the R^9 and R spaces, the hidden-layer output matrix H_s is invertible with probability 1, and ||H_s β_s − T_s^T|| = 0 holds with probability 1. Then for any W_s and b_s the neural network can approximate the training samples with zero error, namely:

Σ_{q=1}^{365} ||t_{s,q} − y_{s,q}|| = 0
When the number of training samples Q is large, the number of hidden-layer neurons l is usually taken smaller than Q to reduce the computational load. Given an arbitrarily small positive number ε (here ε = 0.001) and arbitrary Q (Q = 365) distinct training samples (x_{s,q}, y_{s,q}), where x_{s,q} = [x_{s,1,q}, x_{s,2,q}, …, x_{s,9,q}]^T ∈ R^9 and y_{s,q} = [y_{s,1,q}, y_{s,2,q}, y_{s,3,q}]^T ∈ R^3, if the activation function g_s: R → R is infinitely differentiable on any interval, then for w_{s,e} and b_{s,e} randomly generated from any continuous probability distribution on any interval of the R^9 and R spaces, there exists a feedforward neural network with l (l ≤ Q) hidden-layer neurons such that ||H_s β_s − T_s^T|| < ε holds with probability 1. The training error of the neural network can thus be made smaller than any ε > 0, namely:

Σ_{q=1}^{365} ||t_{s,q} − y_{s,q}|| < ε
The hidden-layer activation function g_s(x) adopts the Sigmoid function, which is infinitely differentiable on any interval:

g_s(x) = 1 / (1 + e^{−x})
Thus, when the activation function g_s(x) is infinitely differentiable, W_s and b_s can be selected randomly before training and kept unchanged during training, and the connection weight β_s between the hidden layer and the output layer is obtained by solving the least-squares solution of the following minimum-norm equation:

min_{β_s} ||H_s β_s − T_s^T||

whose solution is:

β̂_s = H_s^+ T_s^T
where H_s^+ is the Moore-Penrose generalized inverse of the hidden-layer output matrix H_s, usually computed by the orthogonal projection method:

when H_s^T H_s is nonsingular, H_s^+ = (H_s^T H_s)^{−1} H_s^T;

when H_s H_s^T is nonsingular, H_s^+ = H_s^T (H_s H_s^T)^{−1}.
The single-hidden-layer extreme learning machine neural network can thus be fitted to obtain the nonlinear mapping f_s: x_{s,q} → y_{s,q} between the input vector x_{s,q} and the output vector y_{s,q} in the s-th time period. For an arbitrary input vector x_{s,q}, the calculated output vector of the network is t_{s,q}, written t_{s,q} = f_s(x_{s,q}).
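Putting the pieces together, a minimal ELM fit for one time period can be sketched as follows. This is NumPy-based; the function names, the uniform weight-initialization range, and the fixed random seed are our illustrative choices, not taken from the patent:

```python
import numpy as np

def elm_fit(X, Y, l, seed=0):
    """Fit a single-hidden-layer ELM: random frozen (W_s, b_s),
    beta_s solved as the minimum-norm least-squares solution
    beta = H^+ T^T via the Moore-Penrose pseudoinverse.

    X: (n, Q) inputs, Y: (m, Q) targets, l: hidden neurons.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = rng.uniform(-1.0, 1.0, size=(l, n))   # W_s, drawn once, never trained
    b = rng.uniform(-1.0, 1.0, size=(l, 1))   # b_s, drawn once, never trained
    H = (1.0 / (1.0 + np.exp(-(W @ X + b)))).T  # hidden-layer output, (Q, l)
    beta = np.linalg.pinv(H) @ Y.T              # beta_s = H^+ T^T, (l, m)
    return W, b, beta

def elm_predict(W, b, beta, x):
    """Evaluate f_s: map one input vector x (shape (n,)) to the m outputs."""
    h = 1.0 / (1.0 + np.exp(-(W @ x.reshape(-1, 1) + b)))   # (l, 1)
    return (h.T @ beta).ravel()                              # (m,)
```

With l = Q the theory above predicts zero training error with probability 1, which is easy to verify on a small synthetic set.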
Step 3: solve for the optimal level of each environmental factor under the manager's expected vehicle speed distribution using the particle swarm algorithm.

In step 3, the particle swarm algorithm solves for the optimal level of each environmental factor under the manager's expected vehicle speed distribution as follows:
Let the manager's expected vehicle speed distribution on the road section in the s-th time period be [E_s]_{3×1}. An objective function is established so that, by adjusting the environmental factor levels [I_s]_{9×1} of the s-th time period, the actual vehicle speed distribution [O_s]_{3×1} on the road section approaches the expected distribution E_s. The objective function of the s-th time period is expressed as:

min F_s(O_s) = ||E_s − O_s||

According to the mapping relation f_s: I_s → O_s, the optimal environmental factor level I_s^* is obtained by solving the objective function.

The levels of the 9 environmental factors affecting vehicle speed in the s-th time period are I_s = [d_{s,1}, d_{s,2}, …, d_{s,9}]^T, where d_{s,i} is the i-th (i = 1, 2, …, 9) environmental factor level in the s-th time period. The upper limit of the value interval of I_s is [ucl_s]_{9×1} = [ucl_{s,1}, ucl_{s,2}, …, ucl_{s,9}]^T, where ucl_{s,i} is the upper limit of the i-th (i = 1, 2, …, 9) factor level in the s-th time period; the lower limit is [lcl_s]_{9×1} = [lcl_{s,1}, lcl_{s,2}, …, lcl_{s,9}]^T, where lcl_{s,i} is the lower limit of the i-th factor level in the s-th time period. These bounds serve as constraints when solving the objective function:

lcl_{s,i} ≤ d_{s,i} ≤ ucl_{s,i}
And solving the extreme value of the objective function through a particle swarm algorithm.
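The objective, fitness, and box constraint are straightforward to express in code; this is a minimal sketch (function names are ours) of F_s, G_s = −F_s, and the feasibility check on the factor levels:

```python
import numpy as np

def objective(E, O):
    """F_s(O_s) = ||E_s - O_s||: Euclidean distance between the expected
    and the predicted vehicle speed distributions."""
    return float(np.linalg.norm(np.asarray(E, float) - np.asarray(O, float)))

def fitness(E, O):
    """G_s = -F_s: PSO treats fitness as a benefit index to maximize,
    so minimizing F_s equals maximizing its negative."""
    return -objective(E, O)

def feasible(d, lcl, ucl):
    """Check the constraint lcl_{s,i} <= d_{s,i} <= ucl_{s,i}."""
    d = np.asarray(d, float)
    return bool(np.all(d >= lcl) and np.all(d <= ucl))
```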
Step 3.1: initialize N (N = 200) particles in the 9-dimensional search space to form the initial population (evolution generation r = 0)

{I_{s,1}^0, I_{s,2}^0, …, I_{s,200}^0}

where the k-th (k = 1, 2, …, 200) particle of the s-th time period is a 9-dimensional vector

I_{s,k}^0 = [d_{s,k,1}, d_{s,k,2}, …, d_{s,k,9}]^T

and d_{s,k,i} is the position of the k-th particle in the i-th dimension of the search space in the s-th time period; each position vector I_{s,k}^0 of the initial population represents a potential solution;
Step 3.2: according to the neural network mapping f_s: I_s → O_s of the s-th time period, calculate the output of the k-th particle of generation r in the network,

O_{s,k}^r = f_s(I_{s,k}^r)

and from it the objective function value

F_s(O_{s,k}^r) = ||E_s − O_{s,k}^r||

Since the problem is a minimization problem while individual fitness is a benefit-type index, the fitness function of the s-th time period is taken as G_s(x) = −F_s(x), so the individual fitness of the s-th time period is

G_s(O_{s,k}^r) = −F_s(O_{s,k}^r)
Compare the fitness function values of all particles of the generation-r population in the s-th time period, and let the e-th (e = 1, 2, …, 200) particle have the largest fitness function value, with position I_{s,e}^r. The position of the individual extremum of generation r in the s-th time period is then P_s^r = I_{s,e}^r. The position of the population extremum is Z_s^r, computed as follows: if r = 0, then Z_s^0 = I_{s,e}^0; if r ≥ 1, Z_s^r takes whichever of Z_s^{r−1} and P_s^r has the larger fitness function value;
Step 3.3: update the evolution generation r = r + 1; if r exceeds the maximum number of generations MG, i.e. r > MG, go to step 3.4; otherwise continue with the following operations.
Update the positions of the other particles in the population according to the extremum positions of the previous generation:

v_{s,k}^r = α v_{s,k}^{r−1} + c_1 λ_1 (P_s^{r−1} − I_{s,k}^{r−1}) + c_2 λ_2 (Z_s^{r−1} − I_{s,k}^{r−1})

I_{s,k}^r = I_{s,k}^{r−1} + v_{s,k}^r

where α is the inertia weight; c_1 is the acceleration factor toward the individual extremum, a non-negative constant; c_2 is the acceleration factor toward the population extremum, a non-negative constant; λ_1 and λ_2 are random numbers uniformly distributed on the interval [0, 1];
v_{s,k}^r = [v_{s,k,1}, v_{s,k,2}, …, v_{s,k,9}]^T is the velocity vector of the k-th particle in generation r in the s-th time period, and v_{s,k,i} is the search velocity of the k-th particle in the i-th dimension in the s-th time period. The upper limit of the velocity interval is [LV_s] = [lv_{s,1}, lv_{s,2}, …, lv_{s,9}]^T, where lv_{s,i} is the upper limit of the search velocity in the i-th dimension in the s-th time period; the lower limit of the velocity interval is [UV_s] = [uv_{s,1}, uv_{s,2}, …, uv_{s,9}]^T, where uv_{s,i} is the lower limit of the search velocity in the i-th dimension in the s-th time period.

I_{s,k}^{r−1} is the position vector of the k-th particle in the generation-(r−1) population in the s-th time period, and I_{s,k}^r is the position vector of the k-th particle in the generation-r population in the s-th time period. The upper limit of the value interval of the position vector is [ucl_s]_{9×1} = [ucl_{s,1}, ucl_{s,2}, …, ucl_{s,9}]^T, where ucl_{s,i} is the upper limit of the i-th factor level in the s-th time period; the lower limit is [lcl_s]_{9×1} = [lcl_{s,1}, lcl_{s,2}, …, lcl_{s,9}]^T, where lcl_{s,i} is the lower limit of the i-th factor level in the s-th time period.
Go to step 3.2;
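One iteration of the velocity/position update in step 3.3 can be sketched as follows (NumPy; the default α, c_1, c_2 values are illustrative — the patent only requires non-negative acceleration constants — and the sensible parameter names v_min/v_max map to the patent's uv_s/lv_s bounds):

```python
import numpy as np

def pso_update(x, v, p_best, z_best, x_min, x_max, v_min, v_max,
               alpha=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO velocity/position update for a single particle.

    x, v        : current position and velocity vectors
    p_best      : individual extremum position P_s^{r-1}
    z_best      : population extremum position Z_s^{r-1}
    x_min/x_max : factor-level bounds lcl_s / ucl_s
    v_min/v_max : velocity bounds (the patent's uv_s / lv_s)
    """
    rng = rng or np.random.default_rng()
    lam1, lam2 = rng.uniform(0.0, 1.0, 2)    # lambda_1, lambda_2 ~ U[0, 1]
    v_new = alpha * v + c1 * lam1 * (p_best - x) + c2 * lam2 * (z_best - x)
    v_new = np.clip(v_new, v_min, v_max)     # keep velocity in its interval
    x_new = np.clip(x + v_new, x_min, x_max) # keep levels within [lcl, ucl]
    return x_new, v_new
```

When p_best = z_best = x the random terms vanish and only the clipped inertia term remains, which makes the bound handling easy to check.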
Step 3.4: from the neural network mapping f_s: I_s → O_s of the s-th time period, calculate the output of the k-th particle of the last generation (r = MG) in the network,

O_{s,k}^{MG} = f_s(I_{s,k}^{MG})

from which the objective function value

F_s(O_{s,k}^{MG}) = ||E_s − O_{s,k}^{MG}||

and the individual fitness

G_s(O_{s,k}^{MG}) = −F_s(O_{s,k}^{MG})

are obtained. Compare the fitness function values of all particles in the last-generation (r = MG) population; let the e-th (e = 1, 2, …, 200) particle have the largest fitness function value in the s-th time period, with position I_{s,e}^{MG}. The individual extremum position of the last generation is P_s^{MG} = I_{s,e}^{MG}, and the population extremum position Z_s^{MG} takes whichever of Z_s^{MG−1} and P_s^{MG} has the larger fitness function value.
The particle swarm algorithm thus yields the optimal level I_s^* of each environmental factor in the s-th time period.
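An end-to-end sketch of the whole optimization loop (steps 3.1–3.4), with the trained mapping f_s folded into an arbitrary objective callable, might look like the following. Particle count, generation count, and the PSO constants are illustrative values; unlike the patent's generation-best scheme, this sketch uses the common per-particle individual best, and velocity clipping is omitted for brevity:

```python
import numpy as np

def pso_minimize(f, lcl, ucl, n_particles=50, max_gen=100,
                 alpha=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO sketch for I_s* = argmin F_s(I_s) subject to
    lcl <= I_s <= ucl; f maps a factor-level vector to the
    objective value (e.g. ||E_s - f_s(I_s)||)."""
    rng = np.random.default_rng(seed)
    lcl, ucl = np.asarray(lcl, float), np.asarray(ucl, float)
    dim = lcl.size
    x = rng.uniform(lcl, ucl, (n_particles, dim))   # step 3.1: random init
    v = np.zeros((n_particles, dim))
    scores = np.apply_along_axis(f, 1, x)           # step 3.2: evaluate
    p_best, p_score = x.copy(), scores.copy()
    g = p_best[np.argmin(p_score)].copy()           # population extremum
    for _ in range(max_gen):                        # step 3.3: iterate
        lam1 = rng.uniform(size=(n_particles, dim))
        lam2 = rng.uniform(size=(n_particles, dim))
        v = alpha * v + c1 * lam1 * (p_best - x) + c2 * lam2 * (g - x)
        x = np.clip(x + v, lcl, ucl)                # respect level bounds
        scores = np.apply_along_axis(f, 1, x)
        better = scores < p_score                   # update individual bests
        p_best[better] = x[better]
        p_score[better] = scores[better]
        g = p_best[np.argmin(p_score)].copy()
    return g, float(p_score.min())                  # step 3.4: best found
```

On a simple convex objective the swarm should land very close to the true minimizer.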
It should be understood that the above description of the preferred embodiments is illustrative, and not restrictive, and that various changes and modifications may be made therein by those skilled in the art without departing from the scope of the invention as defined in the appended claims.

Claims (1)

1. An intelligent optimization method for the environmental factor level of the vehicle speed distribution of a long tunnel is characterized by comprising the following steps:
step 1: historically collected environmental factors and vehicle speed distribution data of multiple continuous days are used as learning samples;
step 2: constructing an extreme learning machine neural network according to the environmental factor and vehicle speed distribution historical data collected on a plurality of historically continuous days;

step 3: solving for the optimal level of each environmental factor under the manager's expected vehicle speed distribution by using a particle swarm algorithm;
in the step 1, the environment factors and vehicle speed distribution data collected in history for a plurality of continuous days are taken as learning samples:
an extreme learning machine neural network with a single hidden layer is adopted, an input layer is provided with n neurons, and n environment factor input variables are corresponding to the input layer; the hidden layer has l neurons; the output layer is provided with m neurons which respectively correspond to m output variables;
dividing 24h of a day into S time intervals, wherein S is a positive integer, collecting data of environmental factors and vehicle speed distribution indexes at the end of each time interval, wherein the data collected in each time interval can form a learning sample, n environmental factors correspond to n inputs, and m vehicle speed distribution indexes correspond to m outputs;
as an input sample, IP_{s,i,q} represents the level of the i-th (i = 1, 2, 3, …, n) environmental factor collected in the s-th (s = 1, 2, 3, …, S) time period on the q-th (q = 1, 2, 3, …, Q) day; as an output sample, OP_{s,j,q} represents the value of the j-th (j = 1, 2, 3, …, m) vehicle speed distribution indicator collected in the s-th time period on the q-th day;
the construction of the neural network of the extreme learning machine in the step 2 is specifically as follows:
for the s-th (s = 1, 2, 3, …, S) time period there are Q learning samples in total, and the neural network mapping f_s: x → y of that time period is constructed as follows:

the training-set input matrix X_s and output matrix Y_s with Q samples in the s-th time period are:

X_s = [x_{s,i,q}]_{n×Q}, Y_s = [y_{s,j,q}]_{m×Q}

where x_{s,i,q} represents the value of the i-th (i = 1, 2, 3, …, n) input in the q-th (q = 1, 2, 3, …, Q) learning sample over the s-th time period in the historical data; y_{s,j,q} represents the value of the j-th (j = 1, 2, 3, …, m) output in the q-th learning sample over the s-th time period in the historical data;

x_{s,i,q} and IP_{s,i,q}, and y_{s,j,q} and OP_{s,j,q}, are related by

x_{s,i,q} = IP_{s,i,q}, y_{s,j,q} = OP_{s,j,q}
the connection weight between the input layer and the hidden layer in the s-th time period is W_s, the connection weight between the hidden layer and the output layer is β_s, and the threshold of the hidden-layer neurons is b_s:

W_s = [w_{s,e,i}]_{l×n}, β_s = [β_{s,e,j}]_{l×m}, b_s = [b_{s,1}, b_{s,2}, …, b_{s,l}]^T

where w_{s,e,i} represents the connection weight between the e-th (e = 1, 2, 3, …, l) hidden-layer neuron and the i-th (i = 1, 2, 3, …, n) input-layer neuron over the s-th time period in the historical data; β_{s,e,j} represents the connection weight between the e-th hidden-layer neuron and the j-th (j = 1, 2, 3, …, m) output-layer neuron over the s-th time period in the historical data; b_{s,e} represents the offset value of the e-th hidden-layer neuron over the s-th time period;
the hidden-layer neuron activation function of the s-th time period is g_s(x) and the output is T_s; t_{s,q} is an (m×1) column vector representing the calculated output of the q-th input sample in the network over the s-th time period:

T_s = [t_{s,1}, t_{s,2}, …, t_{s,Q}]_{m×Q}

t_{s,q} = [t_{1,q}, t_{2,q}, …, t_{m,q}]^T, with t_{j,q} = Σ_{e=1}^{l} β_{s,e,j} g_s(w_{s,e} x_{s,q} + b_{s,e})

where t_{j,q} represents the calculated output of the q-th input sample at the j-th (j = 1, 2, 3, …, m) output-layer neuron; w_{s,e} = [w_{s,e,1}, w_{s,e,2}, …, w_{s,e,n}] is a row vector and x_{s,q} = [x_{s,1,q}, x_{s,2,q}, …, x_{s,n,q}]^T is a column vector; the other parameters are defined as above;
the relationship between the network input and output can be represented by:

Σ_{e=1}^{l} β_{s,e} g_s(w_{s,e} x_{s,q} + b_{s,e}) = t_{s,q}, q = 1, 2, …, Q, where β_{s,e} = [β_{s,e,1}, β_{s,e,2}, …, β_{s,e,m}]^T

which simplifies to:

H_s β_s = T_s^T

where T_s^T is the transpose of the matrix T_s, β_s is the hidden-to-output weight matrix defined above, the matrix [B_s]_{l×Q} = b_s × [1, 1, …, 1]_{1×Q}, and [H_s]_{Q×l} is the hidden-layer output matrix of the s-th time period, H_s = [g_s(W_s X_s + B_s)]^T, whose explicit form is:

H_s = [g_s(w_{s,e} x_{s,q} + b_{s,e})]_{Q×l}, with rows indexed by q = 1, 2, …, Q and columns by e = 1, 2, …, l;
when the number of hidden-layer neurons equals the number of training samples, i.e. l = Q, then for Q distinct training samples (x_{s,q}, y_{s,q}), where x_{s,q} = [x_{s,1,q}, x_{s,2,q}, …, x_{s,n,q}]^T ∈ R^n and y_{s,q} = [y_{s,1,q}, y_{s,2,q}, …, y_{s,m,q}]^T ∈ R^m, if the activation function g_s: R → R is infinitely differentiable on any interval, then for w_{s,e} and b_{s,e} randomly generated from any continuous probability distribution on any interval of the R^n and R spaces, the hidden-layer output matrix H_s is invertible with probability 1 and ||H_s β_s − T_s^T|| = 0 holds with probability 1; then for any W_s and b_s the neural network can approximate the training samples with zero error, namely:

Σ_{q=1}^{Q} ||t_{s,q} − y_{s,q}|| = 0
when the number of training samples Q is large, the number l of hidden-layer neurons is usually taken smaller than Q to reduce the computational load; given an arbitrarily small positive number ε > 0 and arbitrary Q distinct training samples (x_{s,q}, y_{s,q}), where x_{s,q} = [x_{s,1,q}, x_{s,2,q}, …, x_{s,n,q}]^T ∈ R^n and y_{s,q} = [y_{s,1,q}, y_{s,2,q}, …, y_{s,m,q}]^T ∈ R^m, if the activation function g_s: R → R is infinitely differentiable on any interval, then for w_{s,e} and b_{s,e} randomly generated from any continuous probability distribution on any interval of the R^n and R spaces, there exists a feedforward neural network with l (l ≤ Q) hidden-layer neurons such that ||H_s β_s − T_s^T|| < ε holds with probability 1; the training error of the neural network can thus be made smaller than any ε > 0, namely:

Σ_{q=1}^{Q} ||t_{s,q} − y_{s,q}|| < ε
the activation function g_s(x) of the hidden layer adopts the Sigmoid function, which is infinitely differentiable on any interval:

g_s(x) = 1 / (1 + e^{−x})
thus, when the activation function g_s(x) is infinitely differentiable, W_s and b_s can be selected randomly before training and kept unchanged during training, and the connection weight β_s between the hidden layer and the output layer is obtained by solving the least-squares solution of the following minimum-norm equation:

min_{β_s} ||H_s β_s − T_s^T||

whose solution is:

β̂_s = H_s^+ T_s^T
where H_s^+ is the Moore-Penrose generalized inverse of the hidden-layer output matrix H_s, usually computed by the orthogonal projection method:

when H_s^T H_s is nonsingular, H_s^+ = (H_s^T H_s)^{−1} H_s^T;

when H_s H_s^T is nonsingular, H_s^+ = H_s^T (H_s H_s^T)^{−1};
the single-hidden-layer extreme learning machine neural network can thus be fitted to obtain the nonlinear mapping f_s: x_{s,q} → y_{s,q} between the input vector x_{s,q} and the output vector y_{s,q} in the s-th time period; for an arbitrary input vector x_{s,q}, the calculated output vector of the network is t_{s,q}, written t_{s,q} = f_s(x_{s,q});
in step 3, the particle swarm algorithm solves for the optimal level of each environmental factor under the manager's expected vehicle speed distribution as follows:

let the manager's expected vehicle speed distribution on the road section in the s-th time period be [E_s]_{m×1}; an objective function is established so that, by adjusting the environmental factor levels [I_s]_{n×1} of the s-th time period, the actual vehicle speed distribution [O_s]_{m×1} on the road section approaches the expected distribution E_s; the objective function of the s-th time period is expressed as

min F_s(O_s) = ||E_s − O_s||

according to the mapping relation f_s: I_s → O_s, the optimal environmental factor level I_s^* is obtained by solving the objective function;

the levels of the n environmental factors affecting vehicle speed in the s-th time period are I_s = [d_{s,1}, d_{s,2}, …, d_{s,n}]^T, where d_{s,i} (i = 1, 2, 3, …, n) is the i-th environmental factor level in the s-th time period; the upper limit of the value interval of I_s is [ucl_s]_{n×1} = [ucl_{s,1}, ucl_{s,2}, …, ucl_{s,n}]^T, where ucl_{s,i} is the upper limit of the i-th (i = 1, 2, 3, …, n) factor level in the s-th time period; the lower limit is [lcl_s]_{n×1} = [lcl_{s,1}, lcl_{s,2}, …, lcl_{s,n}]^T, where lcl_{s,i} is the lower limit of the i-th factor level in the s-th time period, and these bounds serve as constraints when solving the objective function:

lcl_{s,i} ≤ d_{s,i} ≤ ucl_{s,i}
Solving the extreme value of the objective function through a particle swarm algorithm;
step 3.1: initialize N particles in the n-dimensional search space to form the initial population (evolution generation r = 0)

{I_{s,1}^0, I_{s,2}^0, …, I_{s,N}^0}

where the k-th (k = 1, 2, 3, …, N) particle of the s-th time period is an n-dimensional vector

I_{s,k}^0 = [d_{s,k,1}, d_{s,k,2}, …, d_{s,k,n}]^T

and d_{s,k,i} is the position of the k-th particle in the i-th dimension of the search space in the s-th time period; each position vector I_{s,k}^0 of the initial population represents a potential solution;
step 3.2: according to the neural network mapping f_s: I_s → O_s of the s-th time period, calculate the output of the k-th particle of generation r in the network,

O_{s,k}^r = f_s(I_{s,k}^r)

and from it the objective function value

F_s(O_{s,k}^r) = ||E_s − O_{s,k}^r||

since the problem is a minimization problem while individual fitness is a benefit-type index, the fitness function of the s-th time period is taken as G_s(x) = −F_s(x), so the individual fitness of the s-th time period is

G_s(O_{s,k}^r) = −F_s(O_{s,k}^r)
compare the fitness function values of all particles of the generation-r population in the s-th time period, and let the e-th (e = 1, 2, 3, …, N) particle have the largest fitness function value, with position I_{s,e}^r; the position of the individual extremum of generation r in the s-th time period is then P_s^r = I_{s,e}^r; the position of the population extremum is Z_s^r, computed as follows: if r = 0, Z_s^0 = I_{s,e}^0; if r ≥ 1, Z_s^r takes whichever of Z_s^{r−1} and P_s^r has the larger fitness function value;
step 3.3: update the evolution generation r = r + 1; if r exceeds the maximum number of generations MG, i.e. r > MG, go to step 3.4; otherwise continue with the following operations;
update the positions of the other particles in the population according to the extremum positions of the previous generation:

v_{s,k}^r = α v_{s,k}^{r−1} + c_1 λ_1 (P_s^{r−1} − I_{s,k}^{r−1}) + c_2 λ_2 (Z_s^{r−1} − I_{s,k}^{r−1})

I_{s,k}^r = I_{s,k}^{r−1} + v_{s,k}^r

where α is the inertia weight; c_1 is the acceleration factor toward the individual extremum, a non-negative constant; c_2 is the acceleration factor toward the population extremum, a non-negative constant; λ_1 and λ_2 are random numbers uniformly distributed on the interval [0, 1];

v_{s,k}^r = [v_{s,k,1}, v_{s,k,2}, …, v_{s,k,n}]^T is the velocity vector of the k-th particle in generation r in the s-th time period, and v_{s,k,i} is the search velocity of the k-th particle in the i-th dimension in the s-th time period; the upper limit of the velocity interval is [LV_s] = [lv_{s,1}, lv_{s,2}, …, lv_{s,n}]^T, where lv_{s,i} is the upper limit of the search velocity in the i-th dimension in the s-th time period; the lower limit of the velocity interval is [UV_s] = [uv_{s,1}, uv_{s,2}, …, uv_{s,n}]^T, where uv_{s,i} is the lower limit of the search velocity in the i-th dimension in the s-th time period;

I_{s,k}^{r−1} is the position vector of the k-th particle in the generation-(r−1) population in the s-th time period, and I_{s,k}^r is the position vector of the k-th particle in the generation-r population in the s-th time period; the upper limit of the value interval of the position vector is [ucl_s]_{n×1} = [ucl_{s,1}, ucl_{s,2}, …, ucl_{s,n}]^T, where ucl_{s,i} is the upper limit of the i-th factor level in the s-th time period; the lower limit is [lcl_s]_{n×1} = [lcl_{s,1}, lcl_{s,2}, …, lcl_{s,n}]^T, where lcl_{s,i} is the lower limit of the i-th factor level in the s-th time period;
go to step 3.2;
step 3.4: from the neural network mapping f_s: I_s → O_s of the s-th time period, calculate the output of the k-th particle of the last generation (r = MG) in the network,

O_{s,k}^{MG} = f_s(I_{s,k}^{MG})

from which the objective function value

F_s(O_{s,k}^{MG}) = ||E_s − O_{s,k}^{MG}||

and the individual fitness

G_s(O_{s,k}^{MG}) = −F_s(O_{s,k}^{MG})

are obtained; compare the fitness function values of all particles in the last-generation (r = MG) population; let the e-th (e = 1, 2, 3, …, N) particle have the largest fitness function value in the s-th time period, with position I_{s,e}^{MG}; the individual extremum position of the last generation is P_s^{MG} = I_{s,e}^{MG}, and the population extremum position Z_s^{MG} takes whichever of Z_s^{MG−1} and P_s^{MG} has the larger fitness function value;

the particle swarm algorithm thus yields the optimal level I_s^* of each environmental factor in the s-th time period.
CN201910349189.8A 2019-04-28 2019-04-28 Intelligent optimization method for environmental factor level of long and large tunnel vehicle speed distribution Active CN110147874B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910349189.8A CN110147874B (en) 2019-04-28 2019-04-28 Intelligent optimization method for environmental factor level of long and large tunnel vehicle speed distribution


Publications (2)

Publication Number Publication Date
CN110147874A CN110147874A (en) 2019-08-20
CN110147874B true CN110147874B (en) 2022-12-16

Family

ID=67594504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910349189.8A Active CN110147874B (en) 2019-04-28 2019-04-28 Intelligent optimization method for environmental factor level of long and large tunnel vehicle speed distribution

Country Status (1)

Country Link
CN (1) CN110147874B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114819191B (en) * 2022-06-24 2022-10-11 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) High-emission road moving source identification method, system and storage medium
CN117236137B (en) * 2023-11-01 2024-02-02 龙建路桥股份有限公司 Winter continuous construction control system for deep tunnel in high and cold area

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106251661A (en) * 2016-08-09 2016-12-21 福州大学 Tunnel portal section wagon flow control method
CN106897826A (en) * 2017-02-23 2017-06-27 吉林大学 A kind of street accidents risks appraisal procedure and system
WO2017197626A1 (en) * 2016-05-19 2017-11-23 江南大学 Extreme learning machine method for improving artificial bee colony optimization


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An Extreme Learning Machine Based on Particle Swarm Optimization; Wang Jie et al.; Journal of Zhengzhou University (Natural Science Edition); 2013-03-15 (No. 01); full text *
Running Speed Prediction Model for Two-Lane Highways Based on BP Neural Network; Wang Jianqiang et al.; Highways & Automotive Applications; 2009-09-25 (No. 05); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant