CN101488913A - Application layer multicasting tree constructing method based on two-layer recurrent neural network - Google Patents

Application layer multicasting tree constructing method based on two-layer recurrent neural network

Info

Publication number
CN101488913A
CN101488913A (application CN200810243911A / CNA2008102439111A)
Authority
CN
China
Prior art keywords
neuron
multicast
neural network
convergence
recurrent neural
Prior art date
Legal status
Pending
Application number
CNA2008102439111A
Other languages
Chinese (zh)
Inventor
张顺颐 (Zhang Shunyi)
刘世栋 (Liu Shidong)
王攀 (Wang Pan)
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing Post and Telecommunication University
Priority date
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University filed Critical Nanjing Post and Telecommunication University
Priority to CNA2008102439111A priority Critical patent/CN101488913A/en
Publication of CN101488913A publication Critical patent/CN101488913A/en
Pending legal-status Critical Current

Abstract

A method of constructing an application-layer multicast tree based on a two-layer recurrent neural network. The weights and bias currents are determined by comparing the neuron equations of motion with the energy function; the feedback weights and biases are then adjusted accordingly and the network is iterated until it converges to a stable state, at which point the neuron outputs encode the solution of the actual optimization problem. The invention follows the Hopfield approach to solving optimization problems with a neural network, but adds a Kirchhoff constraint to the two-layer recurrent network to improve the validity of solutions, introduces LP-type (linear-programming) neurons so that the constraints are satisfied during route computation, and couples the unicast-routing neuron matrices to one another so that the final multicast route is optimal.

Description

Application-layer multicast tree construction method based on a two-layer recurrent neural network
Technical field
The present invention concerns tree-construction algorithms for application-layer multicast on proxy-based overlay networks. Its main subject is how to solve the application-layer multicast routing problem with an improved two-layer recurrent neural network model, and it relates to the technical fields of overlay networks, neural network models, and multicast routing algorithms.
Background technology
Multicast applications such as video conferencing, video on demand, and interactive games occupy an important position on today's Internet. Network-layer multicast, however, depends on protocol support in the network equipment, which has hindered its Internet-wide deployment. In response, virtual overlay networks and the application-layer multicast solutions built on them have emerged on the Internet; because they are simple to deploy and place no special requirements on network equipment, they have attracted growing attention. Architecturally, current overlay networks fall into two classes: proxy-based overlays and end-host-based overlays, the latter being the familiar P2P networks. Since proxy servers offer better performance and stability than end hosts, proxy-based solutions achieve higher bandwidth utilization, lower end-to-end delay, and higher reliability than end-host-based schemes.
Existing approaches to multicast routing on proxy-based overlay networks fall into two broad classes: heuristic routing algorithms and neural-network solvers. This invention focuses on solving the constrained application-layer multicast routing problem with a neural network model.
Existing neural-network schemes for optimal routing have the following limitations:
First, they mainly address optimal unicast routing.
Second, constraints, and inequality constraints in particular, cannot be introduced into the unicast solution process.
Third, they require many neurons; for example, the well-known Hopfield network proposed by Ali and Kamoun needs a number of neurons equal to the square of the number of nodes in the network.
It is therefore difficult to obtain a constrained application-layer multicast tree solver from a traditional neural network model, and a different approach must be found.
Summary of the invention
Technical problem: the object of the invention is an application-layer multicast tree construction method based on a two-layer recurrent neural network that guarantees the optimality of the multicast route while satisfying the application's constraints, and that uses as few neurons as possible in the solution process so as to improve the scalability of the scheme.
Technical scheme: the invention proposes a neural network model for solving constrained multicast routing, shown in Figure 1. The model consists of one neuron matrix per multicast member. Each matrix contains three types of neurons: independent-variable neurons, dependent-variable neurons, and LP-type neurons. The independent- and dependent-variable neurons correspond to the edges of the directed graph of the overlay network, and their outputs encode the routing solution: an output of 1 means the edge is selected in the final route, 0 that it is not. Within each matrix, the LP-type neurons enforce the multicast constraints, while the connections between the matrices guarantee the optimality of the final multicast route.
Several terms used in this description must first be defined.
Kirchhoff constraint: if the value 1 of a branch decision variable on a path $p_{sd}$ is interpreted as a unit current flowing through that branch, and the value 0 as zero current, then by Kirchhoff's current law the branch decision variables that form a valid path must satisfy the relation below. Because this constraint derives from Kirchhoff's current law, we also call it the Kirchhoff constraint; it guarantees the validity of the solution.

$$Av = \Phi, \qquad A = [a_{ij}] \quad (i = 1, 2, \ldots, n-1;\; j = 1, 2, \ldots, m)$$

where $a_{ij}$ is an entry of the reduced node-edge incidence matrix ($+1$ if edge $j$ leaves node $i$, $-1$ if it enters node $i$, $0$ otherwise; the original figure defining $a_{ij}$ is not reproduced here), and $\Phi$ is a vector of order $n-1$ encoding the unit flow injected at the source and extracted at the destination (the figure giving its exact form is likewise not reproduced here).
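A minimal sketch of the Kirchhoff validity check described above (the graph, helper names, and the choice of dropped node are illustrative assumptions, not from the patent): for a directed graph with $n$ nodes and $m$ edges, a 0/1 edge vector $v$ marks a valid source-to-destination path only if $Av = \Phi$.

```python
def incidence_matrix(n, edges, drop_node):
    """Reduced node-edge incidence matrix: +1 if an edge leaves a node, -1 if it enters."""
    rows = [i for i in range(n) if i != drop_node]
    A = [[0] * len(edges) for _ in rows]
    for j, (u, w) in enumerate(edges):
        for r, node in enumerate(rows):
            if node == u:
                A[r][j] = 1
            elif node == w:
                A[r][j] = -1
    return A, rows

def kirchhoff_ok(A, rows, v, src, dst):
    """Check A v == Phi: +1 at the source row, -1 at the destination row, 0 elsewhere."""
    for r, node in enumerate(rows):
        flow = sum(A[r][j] * v[j] for j in range(len(v)))
        expect = 1 if node == src else (-1 if node == dst else 0)
        if flow != expect:
            return False
    return True

# Example: 4 nodes, edges 0->1, 1->2, 0->2, 2->3; path 0->1->2->3 uses edges 0, 1, 3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
A, rows = incidence_matrix(4, edges, drop_node=3)  # drop the destination row
print(kirchhoff_ok(A, rows, [1, 1, 0, 1], src=0, dst=3))  # valid path
print(kirchhoff_ok(A, rows, [1, 0, 0, 1], src=0, dst=3))  # broken path
```

Any edge vector that fails this check cannot represent a connected path, which is exactly the validity guarantee the constraint provides.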
Independent-variable neurons: each decision variable of the computed path is regarded as a branch-current variable. From network theory, the chord current variables are independent, while the tree-branch current variables are not: they can be expressed in terms of the chord current variables.

The independent variables are chosen as follows:
(1) Pick any spanning tree of the directed graph and number its tree branches 1 to n-1; the corresponding tree-branch variables $v_t = [v_1, v_2, \ldots, v_{n-1}]^T$ are dependent.
(2) The remaining branches are chords, numbered n to m; the corresponding chord variables $v_l = [v_n, v_{n+1}, \ldots, v_m]^T$ are the independent variables.
In our model, the outputs of the independent- and dependent-variable neurons in each matrix determine the final route, and the dependent-variable neurons are linearly related to the independent-variable neurons through a fixed relation, so route computation chiefly concerns the independent-variable neurons. These are still Hopfield-type neurons. The core idea of Hopfield network theory is that the network moves from a higher-energy state to a minimum-energy state; once it does, it has converged, a stable solution is obtained, and the network's function is complete. When a Hopfield network is used to solve an optimization problem, the weight matrix W is known and the goal is to find the stable state of minimum energy E. To this end, the problem to be optimized must be mapped onto a specific network configuration whose states correspond to feasible solutions of the problem, and an energy function for the problem is constructed. Comparing the energy function with the cost function yields the weights and bias currents, which are used to set the feedback weights and biases; the network is then iterated until it converges to a stable state, and that stable state is finally transformed back into a solution of the actual optimization problem.
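The descend-to-a-stable-state idea can be sketched on a single neuron using the double-well term $v^2(1-v)^2$ that appears in the energy function (the step size and iteration count here are illustrative assumptions): negative-gradient dynamics drive the state into whichever well it starts nearest.

```python
def dE_dv(v):
    # d/dv [ v^2 (1 - v)^2 ] = 4v^3 - 6v^2 + 2v
    return 4 * v**3 - 6 * v**2 + 2 * v

def settle(v0, dt=0.01, steps=5000):
    """Euler-integrate dv/dt = -dE/dv until the state reaches a well."""
    v = v0
    for _ in range(steps):
        v -= dt * dE_dv(v)
    return v

print(settle(0.4))  # below the 0.5 barrier -> settles near 0
print(settle(0.6))  # above the barrier -> settles near 1
```

The barrier at $v = 0.5$ separates the two minima at 0 and 1, which is why the final outputs encode a binary edge selection.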
The key to the model is therefore the definition of the energy function, which takes the form (the constraint term is written with $\rho_3$, consistent with the equation of motion below):

$$E^k = \sum_{i=1}^{m} \left( \rho_1 C_i f_i^k(\cdot)\, v_i^k + \rho_2 (v_i^k)^2 (1 - v_i^k)^2 \right) + \rho_3 \int h(z)\, dz$$
The first term guarantees that the cost of the computed path is minimal, where

$$f_i^k(\cdot) = \frac{1}{1 + \sum_{j=1,\, j \neq k,\, j \in D}^{n} v_i^j}$$

ensures that the cost of any selected link is counted only once in the final multicast route.
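The cost-sharing factor $f_i^k$ above can be sketched directly (the encoding of member selections as a 0/1 matrix is an assumption for illustration): the more other multicast members already use link $i$, the smaller member $k$'s share of its cost, so a shared link's cost is effectively counted once across the tree.

```python
def f(v, i, k):
    """Cost-sharing factor: v[j][i] is 1 if member j's path uses link i; k is the current member."""
    return 1.0 / (1 + sum(v[j][i] for j in range(len(v)) if j != k))

# Link 0 is also used by members 1 and 2, so member 0 pays one third of its cost.
v = [[1], [1], [1]]
print(f(v, 0, 0))  # 1/3
```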
The second term is minimized when $v_i^k \in \{0, 1\}$ $(i = 1, 2, \ldots, m)$, so that the final solution has only two states: 1 for a selected edge, 0 for an unselected one.
The third term $\int h(z)\,dz$ guarantees that the node constraints (such as the connection-degree constraint) are satisfied, where $h(z)$ is the transfer function of the LP-type neurons introduced into the model. Its value is zero only when the node constraint is met and positive in all other cases; a standard choice consistent with this description is $h(z) = \max(0, z)$, with $z$ measuring the constraint violation (the figure giving its exact form is not reproduced here).
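A sketch of the LP-type penalty idea (the $\max(0, z)$ form and the degree example are assumptions consistent with the description above, not the patent's exact figure): the penalty measures how far a node's degree exceeds its limit and vanishes when the constraint holds.

```python
def h(degree, limit):
    """LP-type penalty: zero when the degree constraint is satisfied, positive otherwise."""
    return max(0, degree - limit)

print(h(2, 3))  # constraint satisfied -> 0
print(h(5, 3))  # exceeded by 2 -> positive penalty
```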
Differentiating the energy function and applying the well-known negative-gradient rule yields the equation of motion of the independent-variable neurons:

$$\frac{dv_i^k}{dt} = -\frac{\partial E^k}{\partial v_i^k} = -\rho_2 \left(4 (v_i^k)^3 - 6 (v_i^k)^2 + 2 v_i^k\right) - \rho_2 \sum_{j=1}^{n-1} w_{tji} \left(4 (v_i^k)^3 - 6 (v_i^k)^2 + 2 v_i^k\right) - \rho_1 f_i^k(\cdot)\, C_i - \rho_1 \sum_{j=1}^{n-1} f_j^k(\cdot)\, w_{tji}\, C_j - \rho_3 L_i h(z)$$
The application-layer multicast tree construction method based on a two-layer recurrent neural network specifically comprises:
A. each multicast member sends the link information it knows, including link delays, costs, and its adjacency relations to the other members, to a master node;
B. the master node assembles the topology of the whole network from these adjacency relations and attaches the delays and costs to the corresponding edges of the topology;
C. the master node solves the differential equations of the neural network described above with the fourth-order Runge-Kutta method, using a time step Δt = 10⁻⁵ s and an output convergence precision Δv = 10⁻⁵: when the difference between two successive solutions is less than 10⁻⁵, the solution is considered obtained and the method proceeds to step F to output the result to each multicast member;
D. convergence proceeds in two phases. In the first, the bias term I₁ is kept active and the network state converges to the specified precision under the shortest-path and degree constraints; because the connection-degree constraint must be satisfied, the parameters are revised adaptively during the iteration, each neuron having its own value $\rho_{1,i}^{k,j}$, where k indexes the neuron matrix and j is the iteration count. During convergence, whenever the delay constraint is violated, the cost parameters of the links forming the offending path are changed, so that the neuron dynamics eventually build a route that satisfies the constraints;
E. in the second convergence phase, the bias term I₁ is set to 0, letting the state variables converge to 0 or 1;
F. once the solution is stable, the master node notifies the corresponding multicast member nodes of the results whose value is 1; when an edge takes the value 1, that edge will participate in the transmission of the multicast data. Each multicast member then sets up its own multicast forwarding table from the data it receives.
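Steps C–E can be sketched on a single toy neuron (the scalar motion equation, step size, and tolerance here are illustrative assumptions, not the patent's full matrix dynamics): integrate with fourth-order Runge-Kutta until two successive solutions differ by less than the tolerance, first with the bias term I₁ active, then with I₁ = 0 so the state snaps to {0, 1}.

```python
def dvdt(v, I1):
    # Toy motion equation: double-well gradient plus a bias current I1.
    return -(4 * v**3 - 6 * v**2 + 2 * v) + I1

def rk4_step(v, I1, dt):
    """One fourth-order Runge-Kutta step of dv/dt = dvdt(v, I1)."""
    k1 = dvdt(v, I1)
    k2 = dvdt(v + dt * k1 / 2, I1)
    k3 = dvdt(v + dt * k2 / 2, I1)
    k4 = dvdt(v + dt * k3, I1)
    return v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def solve(v, I1, dt=1e-2, tol=1e-8):
    """Iterate until two successive solutions differ by less than tol (step C)."""
    while True:
        v_next = rk4_step(v, I1, dt)
        if abs(v_next - v) < tol:
            return v_next
        v = v_next

v = solve(0.5, I1=0.2)  # phase 1 (step D): the bias pushes the state off the barrier
v = solve(v, I1=0.0)    # phase 2 (step E): bias removed, state converges to 0 or 1
print(round(v, 3))
```

With the bias active the state drifts to a biased equilibrium above the barrier; removing the bias then lets it settle cleanly at 1, mirroring the two-phase convergence.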
Beneficial effects: the multicast routing scheme derived from the improved two-layer recurrent network model solves the following problems:
(1) the Kirchhoff constraint introduced into the two-layer recurrent network model guarantees the validity of the solution;
(2) one neuron matrix is set up per multicast member, and the interconnection between the matrices guarantees the optimality of the multicast route;
(3) LP-type (linear-programming) neurons are introduced into each neuron matrix to guarantee that the node connection-degree constraint is satisfied.
Description of drawings
Fig. 1 is a schematic diagram of the present invention.
Embodiment
Improving the accuracy of the computed path and making the final output variables converge to 0 or 1 are conflicting goals, but the two effects are separated in the equation of motion: the cost parameters and constraints appear only in the bias term I₁ of the neural network. In implementation we therefore proceed as follows: first keep the bias term I₁ active and let the network state converge to a certain precision under the shortest-path and delay constraints, then set I₁ to 0 so the state variables converge to 0 or 1. The convergence of the network thus passes through two stages. In the first stage, because the connection-degree constraint must be satisfied, the parameters are revised adaptively during the iteration. Each neuron k has its own $\rho_1$ value; during convergence, whenever the constraint

$$\sum_{k=1,\, k \in D}^{n} \operatorname{sgn}(v_i^k) \le \Delta$$

is violated, the update

$$\rho_{1,i}^{k,j+1} = \rho_{1,i}^{k,j}\left(1 + \frac{1}{1000}\right)$$

changes the weight coefficients of the links forming the offending path, and through the neuron dynamics a route satisfying the constraint is eventually built. Here $\rho_{1,i}^{k,j}$ denotes the $\rho_1$ value of neuron i of neuron matrix k in the j-th iteration. The cost-related coefficients are kept small at the start of the cycle so that a minimum-cost path can be found, while the constraint-related coefficient is taken larger than the other two; it must not be set too large, however, lest convergence proceed too fast and hinder the cooperation between the matrices.
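The adaptive update above can be sketched as follows (the 1/1000 increment is from the text; representing the violating links as a set is an assumed stand-in for the full constraint test): whenever a link participates in a constraint violation, its $\rho_1$ value is inflated so that link becomes more expensive on the next cycle.

```python
def update_rho(rho, violated):
    """Apply rho1 <- rho1 * (1 + 1/1000) to every link in the violating path."""
    return [r * (1 + 1 / 1000) if i in violated else r
            for i, r in enumerate(rho)]

rho = [1.0, 1.0, 1.0]
for _ in range(100):                  # 100 cycles in which link 1 keeps violating
    rho = update_rho(rho, violated={1})
print([round(r, 4) for r in rho])     # only link 1 has grown (by about 10.5%)
```

The gentle 0.1% increment is what keeps convergence from running away: offending links are penalized gradually rather than excluded outright.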
The application-layer multicast tree construction method based on a two-layer recurrent neural network specifically comprises:
A. each multicast member sends the link information it knows, including link delays, costs, and its adjacency relations to the other members, to a master node;
B. the master node assembles the topology of the whole network from these adjacency relations and attaches the delays and costs to the corresponding edges of the topology;
C. the master node solves the differential equations of the neural network described above with the fourth-order Runge-Kutta method, using a time step Δt = 10⁻⁵ s and an output convergence precision Δv = 10⁻⁵: when the difference between two successive solutions is less than 10⁻⁵, the solution is considered obtained and the method proceeds to step F to output the result to each multicast member;
D. convergence proceeds in two phases. In the first, the bias term I₁ is kept active and the network state converges to the specified precision under the shortest-path and degree constraints; because the connection-degree constraint must be satisfied, the parameters are revised adaptively during the iteration, each neuron having its own value $\rho_{1,i}^{k,j}$, where k indexes the neuron matrix and j is the iteration count. During convergence, whenever the delay constraint is violated, the cost parameters of the links forming the offending path are changed, so that the neuron dynamics eventually build a route that satisfies the constraints;
E. in the second convergence phase, the bias term I₁ is set to 0, letting the state variables converge to 0 or 1;
F. once the solution is stable, the master node notifies the corresponding multicast member nodes of the results whose value is 1; when an edge takes the value 1, that edge will participate in the transmission of the multicast data. Each multicast member then sets up its own multicast forwarding table from the data it receives.
Choosing the coefficients in the equation of motion is a difficult task; poorly chosen coefficients can render the final solution unusable. The two-layer recurrent neural network has only three adjustable coefficients. The first is the capacitance m: the smaller m is, the smaller the time constant and the faster the state converges; but if m is too small, there is not enough time to search for the optimum, the computation error grows, and a solution may not be found at all. The second is the proportionality coefficient n, which affects the precision of the path computation: the smaller n is, the smaller the share of the path-cost term in the energy function and the larger the possible error between the computed path and the shortest path; the larger n is, the closer the computed path is to the shortest path, but too large a value prevents the network from converging to 0 or 1. The third coefficient l behaves like n and likewise affects the precision of the path computation: the smaller l is, the smaller the share of the connection-degree term in the energy function, and the computed path may violate the nodes' connection-degree constraints; the larger l is, the better the computed path satisfies the constraints, but again at the risk of hindering the network's convergence.
Repeated simulation experiments show that, with edge costs taken between 1 and 3, suitable ranges for m, n, and l are (0.001, 0.02), (4.5, 13.5), and (7.5, 22.5), respectively.
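The simulation ranges above can be captured as a small configuration sketch (the midpoint defaults are an assumption for illustration, not values stated in the patent):

```python
# Empirical coefficient ranges from the simulation experiments (edge costs in [1, 3]).
RANGES = {"m": (0.001, 0.02), "n": (4.5, 13.5), "l": (7.5, 22.5)}

def default_coeffs():
    """Pick the midpoint of each empirical range as an assumed starting point."""
    return {name: (lo + hi) / 2 for name, (lo, hi) in RANGES.items()}

print(default_coeffs())
```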

Claims (1)

1. An application-layer multicast tree construction method based on a two-layer recurrent neural network, characterized in that the method specifically comprises:
A. each multicast member sends the link information it knows, including link delays, costs, and its adjacency relations to the other members, to a master node;
B. the master node assembles the topology of the whole network from these adjacency relations and attaches the delays and costs to the corresponding edges of the topology;
C. the master node solves the differential equations of the neural network described above with the fourth-order Runge-Kutta method, using a time step Δt = 10⁻⁵ s and an output convergence precision Δv = 10⁻⁵: when the difference between two successive solutions is less than 10⁻⁵, the solution is considered obtained and the method proceeds to step F to output the result to each multicast member;
D. convergence proceeds in two phases. In the first, the bias term I₁ is kept active and the network state converges to the specified precision under the shortest-path and degree constraints; because the connection-degree constraint must be satisfied, the parameters are revised adaptively during the iteration, each neuron having its own value $\rho_{1,i}^{k,j}$, where k indexes the neuron matrix and j is the iteration count; during convergence, whenever the delay constraint is violated, the cost parameters of the links forming the offending path are changed, so that the neuron dynamics eventually build a route that satisfies the constraints;
E. in the second convergence phase, the bias term I₁ is set to 0, letting the state variables converge to 0 or 1;
F. once the solution is stable, the master node notifies the corresponding multicast member nodes of the results whose value is 1; when an edge takes the value 1, that edge will participate in the transmission of the multicast data; each multicast member then sets up its own multicast forwarding table from the data it receives.
CNA2008102439111A 2008-12-10 2008-12-10 Application layer multicasting tree constructing method based on two-layer recurrent neural network Pending CN101488913A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2008102439111A CN101488913A (en) 2008-12-10 2008-12-10 Application layer multicasting tree constructing method based on two-layer recurrent neural network


Publications (1)

Publication Number Publication Date
CN101488913A true CN101488913A (en) 2009-07-22

Family

ID=40891596

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008102439111A Pending CN101488913A (en) 2008-12-10 2008-12-10 Application layer multicasting tree constructing method based on two-layer recurrent neural network

Country Status (1)

Country Link
CN (1) CN101488913A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103139612A (en) * 2011-12-01 2013-06-05 苏州达联信息科技有限公司 Method of managing dynamic network distribution trees of video live broadcast distribution network
CN103139612B (en) * 2011-12-01 2017-03-29 苏州达联信息科技有限公司 A kind of dynamic network distribution tree management method of live video distribution
CN106548207A (en) * 2016-11-03 2017-03-29 北京图森互联科技有限责任公司 A kind of image processing method and device based on neutral net
CN107357757A (en) * 2017-06-29 2017-11-17 成都考拉悠然科技有限公司 A kind of algebra word problems automatic calculation device based on depth enhancing study
CN107357757B (en) * 2017-06-29 2020-10-09 成都考拉悠然科技有限公司 Algebraic application problem automatic solver based on deep reinforcement learning
CN114500407A (en) * 2022-01-13 2022-05-13 厦门大学 Scheduling method for single-multicast mixed-transmission switching network
CN114500407B (en) * 2022-01-13 2023-10-27 厦门大学 Scheduling method for switching network for unicast and multicast mixed transmission


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20090722