WO1994024637A1 - Hopfield neural networks - Google Patents
- Publication number
- WO1994024637A1 (PCT/GB1994/000818)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- network
- neurons
- hopfield
- values
- operating
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
Definitions
- the present invention relates to an improved Hopfield neural network.
- the improved neural network may be used to control packet switching in a packet switching network.
- one of the essential features of such a system is the availability of fast packet switches to route the individual packets reliably and rapidly to their addressed destinations.
- [17] uses a rather similar approach, but based upon the analogy of a gradually decreasing annealing temperature.
- Cheung [18] and Ghosh [19] disclose other approaches for changing the operation of individual neurons in a defined way as the calculations proceed.
- Hopfield suggested first of all determining an energy function for the problem to be solved, and then comparing that energy function with what is now called the Hopfield Energy Function to determine the weights and biases for the network.
- Neural networks have been applied in a variety of circumstances, and these include routing systems and crossbar switches - see for example Fujitsu [20] and Troudet et al [12]. It is an object of the present invention to provide an improved neural network, based upon the Hopfield model, and particularly for use in operating high speed packet switches (although many other applications may be envisaged). It is a further object to improve on the work of Ali and Nguyen [6] and to provide a neural network which converges more rapidly to a guaranteed, or virtually guaranteed, valid solution.
- a method of operating a Hopfield network to solve a problem comprising forcing at least some of the neurons for which the correct values are known to take on the respective correct values, and operating the network to complete the solution to the problem.
- the invention also extends to apparatus for carrying out the method, and it accordingly extends to a Hopfield network adapted to solve a problem the solution to which is partially known, the network including means adapted to force at least some of the neurons for which the correct values are known to take on the respective correct values, and means for operating the network to complete the solution to the problem.
- the individual elements of the solution matrix will converge to either an upper attractor (which may be 1) or a lower attractor (which may be 0).
- the relevant entries may be forced to the corresponding attractor.
- a method of operating a Hopfield network incorporating neurons having a transfer function with a graded response comprising repeatedly updating the neuron outputs according to an updating rule, characterised in that the transfer function has at least one parameter which changes randomly or pseudorandomly between iterations.
- the invention also extends to apparatus for carrying out the method, and accordingly also extends to a Hopfield network incorporating neurons having a transfer function with a graded response, the network including means for updating the neuron outputs according to an updating rule, characterised by means for varying a parameter of the transfer function randomly or pseudorandomly between iterations.
- Changing the updating rule from iteration to iteration introduces noise into the system, and enables the algorithm to avoid at least some of the non-global minima in which it might otherwise become trapped. This may be achieved by randomly or pseudorandomly changing the gain constant (β) of the sigmoid function.
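A minimal sketch of such a noisy update, assuming the β range of 0.08 to 0.16 given later in the specific embodiment; the function names are illustrative.

```python
import math
import random

def sigmoid(x, beta):
    """Graded-response activation: maps an activity x onto (0, 1)."""
    return 1.0 / (1.0 + math.exp(-beta * x))

def noisy_update(x_values, beta_min=0.08, beta_max=0.16, seed=None):
    """Compute one set of neuron outputs with a randomly drawn gain.

    Redrawing beta on every iteration perturbs the transfer function,
    injecting noise that helps the network escape non-global minima.
    """
    rng = random.Random(seed)
    beta = rng.uniform(beta_min, beta_max)
    return [sigmoid(x, beta) for x in x_values]
```

Seeding the generator, as here, makes the "pseudorandom" variant reproducible; omitting the seed gives a fresh gain on every call.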
- x_ij is an input to the neuron referenced by ij
- y_ij is an output of the neuron referenced by ij
- A, B and C are optimisation values, and, further, where
- x_ij and y_ij are related by a monotonic activation (transfer) function
- the invention also extends to an apparatus for carrying out the method, and accordingly extends to a Hopfield network adapted to operate using an updating rule
- x_ij(t) = x_ij(t-1) + Δt · (dx_ij/dt), for all index pairs (ij)
- a value of Δt as high as 0.1 or 0.2 or even greater may be used provided that A, B and C are appropriately chosen.
- C lies within the range 40 to 150, and
- A and B are at least ten times greater than C (e.g. 20 times or 30 times greater).
- the preferred application of the present invention is in a telecommunications switch, preferably a packet switch for use in a packet switching system.
- the practical realisation of the switch will depend very much upon the application, and the limitations of the various technologies involved. VLSI, optics, software and so on could all be exploited.
- the neural network could be embodied either in hardware or in software. It could also be split between the two (hybridware).
- a packet switch will desirably be associated with a separate queue manager, which sits in front of the switch, and provides prioritisation information to the neural network. If an incoming packet is experiencing an unacceptable delay (or alternatively if an incoming packet is designated as having a high priority) , the queue manager may modify the input matrix to the neural network to take account of the desired sequencing.
- the function of the queue manager could be achieved using conventional, neural, genetic algorithm, fuzzy algorithm or hybrid techniques.
- where the present invention is used for problem solving other than in telecommunications switches, it may still be desirable to have an input manager, sitting in front of the neural network, and modifying the input matrix according to known constraints on the problem to be solved.
- the input manager will provide the neural network with prioritisation information where the network is undertaking some sort of sequencing task.
- Figure 1 is a schematic diagram of a high speed packet switch
- Figure 2 is a graph showing the Sigmoid function for values of β of 0.08 and 0.16;
- Figure 4 shows the values of f(x) for the simulations of Figure 3;
- Figure 6 shows the values of f(x) for the simulations of Figure 5;
- Figure 8 is a graph corresponding to that of Figure 7 but in which the non-requested neurons are not connected and are consequently excluded from the calculations;
- Figure 9 shows the relationship of the neural network and the queue manager.
- the specific embodiment relates particularly to a neural network for operating a high speed packet switch, of the type illustrated in Figure 1.
- This figure is adapted from a similar drawing in Ali and Nguyen [6] .
- the purpose of a switch is to ensure that the addressed packets within the system are rapidly routed to their correct destination, along their respective requested pathways.
- a typical switch has a plurality of inputs and a plurality of outputs, the inputs receiving the packets to be routed and the outputs being connected to the various available pathways.
- an incoming packet on input 1 may request to be routed via any one of the switch's outputs.
- packets arriving on any of the other inputs may request to be routed to any output.
- the switch of the present embodiment is an n×n switch; in other words there are n inputs and n outputs.
- Each of the n inputs has n separate input queues, one for each of the outputs. Accordingly, an incoming packet on input 1 which requests to be routed to output 1 will be queued in the first queue of input 1. Other packets on that input requesting to be routed to output 2 would be stored on the second queue of input 1, and so on. It will be evident that there are a total of n² input queues.
- the switch is operated synchronously, and its task is to transfer the packets from the input queues as rapidly as possible, to the requested outputs. Where the requests for transfers are arriving more rapidly than the capacity of the switch, the switch has to choose the packets from the various input queues in such a way as to maximise the throughput. In the present embodiment, it is a neural network which solves this optimisation problem.
- a Hopfield neural network is used.
- the general Hopfield energy function is compared with the calculated energy function that will maximise the throughput to give the desired differential equation which is the solution to the switching problem.
- the differential equation includes undefined constants (optimisation parameters) which Ali and Nguyen determined by simulation.
- the present invention provides a substantially improved method for determining what the optimisation parameters should be for stable and rapid convergence. We will start by considering the basic Hopfield model, and then go on to consider how this can be used to deal with the particular switching problem under consideration.
- the neural network used in this present embodiment is the Hopfield model [8,9]. This consists of a large number of processing elements (the neural cells) which are interconnected via neural weights.
- each neuron can be described by two continuous variables, the neural activity level x_ij and the neural output y_ij. These variables are related by the non-linear processing function f, as follows:
- f is taken to be some non-linear monotonically increasing function. This function is called the activation (or transfer) function. The exact form of f is not particularly important, and any appropriate non-linear monotonically increasing function could be used. In the preferred embodiment, however, f is taken to be the sigmoid function
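For illustration, the sigmoid of equation (2) might be implemented as follows. The default gain β = 0.08 is taken from the figures; the exact form 1/(1 + e^(-βx)) is an assumption consistent with the limits described in the text (y tends to 0 as x tends to minus infinity, and to 1 as x tends to plus infinity).

```python
import math

def sigmoid(x, beta=0.08):
    """Assumed equation-(2) activation f: maps the neural activity
    level x_ij onto the neural output y_ij in the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-beta * x))

# The output only reaches its limiting values 0 and 1 asymptotically,
# which is why the text later requires beta*x > 4 or so to get
# outputs "reasonably close" to 0 and 1.
```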
- T_ij,kl is the weight matrix which describes the connection strength between the neurons indexed by (ij) and (kl).
- I_ij describes the external bias (sometimes referred to as the "external bias current") which is supplied to each neuron.
- equation (4) must continually decrease, and hence the right hand side of equation (7) must be less than or equal to 0. In general, this would not be the case, because of the second term on the right hand side. But in the large gain limit (in other words as equation (2) tends towards a step function, and β is large) the derivative df_ij/dx_ij becomes a delta function and therefore (df_ij/dx_ij)^-1 tends to zero. This establishes a result discussed by Hopfield [8]. Accordingly, provided that the value of β in equation (2) is appropriately chosen, we can be certain that the system converges and that at equilibrium we have not introduced any inaccuracies by dropping the integral term from equation (4).
- r_ij is unity if a particular input queue is busy, and is zero if it is idle.
- the rows of the matrix y represent the input lines, and the columns represent the output lines. Every index pair (ij) defines a connection channel.
- only one packet can be permitted per channel: in other words, during each time slot at most one packet may be sent to each of the outputs, and at most one packet may be chosen from each of the inputs.
- the task of the neural network is to take the input matrix y, and operate upon it, repeatedly, to produce an output or configuration matrix which actually sets up the channel connections, that is defines the packets that are to be chosen to achieve maximum throughput within the switching constraints.
- the output (configuration) matrix can have at most one non-vanishing element in each row and one non-vanishing element in each column. More than a single element in each row, or a single element in each column, would mean that the switching constraints have been violated in that the switch was either trying to pass two packets at once from a single input or to pass two packets at once to a single output.
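The row and column constraints on the configuration matrix can be checked directly; a minimal sketch (names illustrative):

```python
def is_valid_configuration(m):
    """True if the 0/1 configuration matrix selects at most one packet
    from each input (one non-vanishing element per row) and sends at
    most one packet to each output (one per column)."""
    row_ok = all(sum(row) <= 1 for row in m)
    col_ok = all(sum(col) <= 1 for col in zip(*m))
    return row_ok and col_ok

assert is_valid_configuration([[0, 1], [1, 0]])
# Two packets at once from input 1 violates the switching constraints:
assert not is_valid_configuration([[1, 1], [0, 0]])
```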
- if the input matrix contains more than one non-vanishing element in any row, or more than one non-vanishing element in any column, then there are more requests for connections than the switch can handle during that time slot. Since only one input can be connected to one output at any time, the switching mechanism will have to choose only one request and force the rest to wait.
- the first of these satisfies the switching constraints but does not maximise the throughput because no packet is chosen from the first input even though there is at least one packet waiting.
- the second matrix violates the switching constraints in that it attempts to select two packets at once from the first input, and to send two packets at once to the first output.
- the input matrix y will not be full. In that case, the number of valid solutions will be fewer.
- the input matrix is as follows:
- equation (13) which is the desired differential equation for the switching problem.
- the parameters A, B and C are known as "optimisation parameters", and in previous work [6] these have been determined purely by trial and error. If the optimisation parameters are not chosen carefully, equation (13) will either not converge at all, or it will converge only slowly. A further possibility is that the equation might converge to a solution which is not "valid" in the sense described above, for example, because it does not maximise the throughput of the switch.
- equation (13) is used in its iterated form as follows:
- x_ij(t) = x_ij(t-1) + Δt [ -A Σ_{k≠j} f(x_ik(t-1)) - B Σ_{k≠i} f(x_kj(t-1)) + ... ]
- Equation (16), where x_1 denotes the first equilibrium solution. Because we are at equilibrium, we know that the associated y value must be close to zero, and from equation (2) we know that y only tends to zero as x tends to minus infinity. Accordingly, we can rewrite equation (16) as the following inequality
- This solution may be referred to as the "negative attractor” solution, as it is the equilibrium solution obtained as x tends to minus infinity.
- a practical point is that A is always taken to be very much larger than C. This is due to the fact that a large proportion of the neurons have to approach the negative attractor while only a small number of them will approach the positive attractor. Taking A much greater than C speeds convergence, and allows us to use a large value of ⁇ t.
- Equation (18). If a small value of Δt were to be acceptable in equation (13a), then the term βx in equation (2) can be very large. To make the limiting values of y reasonably close to 0 and 1 we choose, approximately, βx greater than about 4. Hence, from equation (18):
- Figure 3 shows how dx/dt varies with time, for each of the n² neurons
- Δt has been set to 0.2, as before, but now because A is very much greater than C the system is not unstable at larger values of Δt. Taking A very much greater than C allows us to use large Δt and hence obtain more rapid convergence than was possible in the prior art.
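As an illustration of these dynamics, here is a toy Euler iteration with A and B very much greater than C and Δt = 0.2. The exact update terms (row and column inhibition weighted by A and B, excitation of requested neurons weighted by C) are an assumption modelled on the description of equation (13), not the patent's literal equation, and the parameter values are likewise illustrative.

```python
import math

def sigmoid(v, beta=0.08):
    """Assumed equation-(2) activation, clamped to avoid floating-point
    overflow for the strongly negative activities that develop near the
    negative attractor."""
    a = beta * v
    if a < -60.0:
        return 0.0
    if a > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-a))

def step(x, r, A=2000.0, B=2000.0, C=100.0, dt=0.2):
    """One Euler step of the assumed switching dynamics: each neuron is
    inhibited by the other outputs in its row (weight A) and in its
    column (weight B), and excited by its request bit r_ij (weight C)."""
    n = len(x)
    y = [[sigmoid(x[i][j]) for j in range(n)] for i in range(n)]
    new = [row[:] for row in x]
    for i in range(n):
        for j in range(n):
            dx = (-A * sum(y[i][k] for k in range(n) if k != j)
                  - B * sum(y[k][j] for k in range(n) if k != i)
                  + C * r[i][j])
            new[i][j] = x[i][j] + dt * dx
    return new

# Non-conflicting requests on the diagonal: the requested neurons are
# driven towards output 1, the others towards the negative attractor.
r = [[1, 0], [0, 1]]
x = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(50):
    x = step(x, r)
```

With A and B twenty times C, as suggested above, the large Δt of 0.2 does not destabilise the iteration.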
- the value of β is set at 0.08, and in subsequent iterations the value of β is randomly chosen to lie somewhere within the range 0.08 to 0.16.
- β is the gain factor which controls the steepness of the sigmoid function, as illustrated in Figure 2. Where β is taken at a value greater than 0.08 (see equation 21(c)), the maximum amount of noise can be taken to be equivalent to the value of β.
- the connection matrix (12) therefore defines 2n²(n-1) connections. This has to be compared with the maximum possible number of connections that a network of n² neurons can have: that is n⁴ - n².
- the output matrix should also contain a corresponding null row or column, since otherwise the neural network will have introduced connections where none have been requested.
- the energy function (11) and the connection matrix (12) do not guarantee this to be the case. This follows from the fact that the energy function (11) does not take on minima if one of the rows or columns has only vanishing entries. To avoid this occurrence, we propose that the null rows and columns should be decoupled from the Hopfield dynamics. We have found in practice that this both improves the convergence of the other neurons, and also permits considerable increases in speed to be achieved.
- the null rows and columns may be decoupled from the Hopfield dynamics in a number of ways.
- One simple mechanism would simply be not to include the null rows and columns at all in any of the calculations.
- An alternative, and preferred, method is to force all of the neurons in a null row or a null column onto the negative attractor. What we do in practice is to decouple the input to the null rows and columns from all the other neurons. However, their constant outputs are still coupled. Hence the forced neurons remain fixed at all times at the negative attractor, but their outputs are still fed into the rest of the calculations, and will affect the time evolution of the unforced neurons. The external biases of all the forced neurons are set to zero.
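A sketch of this preferred decoupling method. The request-matrix representation and the attractor value used here are illustrative (the text fixes forced neurons at -2A and zeroes their external biases); only the forcing bookkeeping is shown, not the subsequent dynamics.

```python
def decouple_null_lines(x, r, negative_attractor=-2.0):
    """Force every neuron in a null row or null column of the request
    matrix r onto the negative attractor, and report which were forced.

    The forced neurons take no further input from the rest of the
    network, but their constant (near-zero) outputs still feed into the
    unforced neurons' updates.
    """
    n = len(r)
    null_rows = {i for i in range(n) if not any(r[i])}
    null_cols = {j for j in range(n) if not any(r[i][j] for i in range(n))}
    forced = set()
    for i in range(n):
        for j in range(n):
            if i in null_rows or j in null_cols:
                x[i][j] = negative_attractor
                forced.add((i, j))
    return x, forced

# Example: no requests arrive for output 1, and none on input 2,
# so the whole of column 1 and row 2 are decoupled.
r = [[1, 0, 0], [0, 0, 1], [0, 0, 0]]
x = [[0.0] * 3 for _ in range(3)]
x, forced = decouple_null_lines(x, r)
```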
- the queue manager will modify the input matrix (9) to take account of the desired sequencing.
- When the queue manager receives a packet from the input line, it will examine the destination address to determine the appropriate destination output line. It will also determine the priority and/or delay of that particular packet. Based upon the information it has read, it will then appropriately update the input matrix (9), so changing the initial conditions that the neural network has to operate upon. It may also impose attractors, either positive or negative, or in other ways decouple certain neurons from the network according to the requested priorities of the individual packets.
- the function of the queue manager could be achieved using conventional, neural, genetic algorithm, fuzzy algorithm or hybrid techniques.
- One particular function that might be provided by the queue manager would be to adjust the input matrix (9) to take account of the fact that there may be more than one packet which is waiting for the particular connection. If a large number of packets start to build up, all waiting for a particular connection, the queue manager should have some mechanism for effectively increasing the priority of those packets to ensure that the queue does not become unacceptably long.
- An alternative method of achieving the same result might be to make use of an input request matrix in which each element is not merely 0 or 1, but is an integer representing the number of packets that are awaiting that particular connection. The higher the number waiting for a particular connection, the greater would be the initial value of y, according to equations (1) and (2) , and accordingly the greater likelihood there would be of that particular neuron converging to the value 1.
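One plausible reading of this scheme, sketched below: queue lengths map monotonically to initial activities, so longer queues start closer to the positive attractor and are more likely to converge to 1. The particular mapping function and its parameters are hypothetical.

```python
import math

def initial_activity(count, scale=10.0):
    """Map a queue length to an initial neural activity x_ij.

    Hypothetical mapping: an empty queue gives a strongly negative
    activity (output near 0 under the sigmoid of equations (1)-(2));
    longer queues give monotonically larger activities.
    """
    if count == 0:
        return -5.0 * scale
    return scale * math.log(count + 1)

# Integer request matrix: each entry counts the packets awaiting that
# particular input-to-output connection.
queues = [[0, 3], [7, 1]]
x0 = [[initial_activity(c) for c in row] for row in queues]
# The connection with 7 waiting packets starts with the largest activity.
```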
- the modified input matrix is then replaced by a neuron matrix using equation (2) (with the non-requested or forced elements remaining at the negative attractor, -2A).
- the elements of this neuron matrix are the y_ij.
- the network of the present invention could be embodied either in hardware or in software, or in a combination of the two.
- Potential hardware technologies would include VLSI, optics and so on.
- the environment in which the network is to be embedded will of course play a key role in the determination of the medium. For example, if speed is of the essence, a hardware implementation would appear preferable, but of course that has to be offset against the fact that hardware implementations for large switches would be exceedingly complicated.
- a specification for a hardware (electrical) realisation of a Hopfield neural network has already been published by Brown [5] .
- the various parameters in the Hopfield model can be related to values of the various electrical components in the circuitry.
- the present invention will have application in very many different fields, and in particular to any problem in which a Hopfield energy can be calculated, and there is a requirement for an input matrix having at most one non-vanishing element per row and at most one non-vanishing element per column, in other words where the problem is equivalent to the Travelling Salesman Problem.
- Potential application areas include network and service management (including switching of lines, channels, cards, circuits, networks etc.) ; congestion control; distributed computer systems (including load balancing in microprocessor systems, cards, circuits etc.) and decentralized computer systems; work management systems; financial transaction systems
- the present work can be extended to the continuous case, where the inputs to the network can take on any value within a given range, rather than being restricted to 0 and 1.
- One particular way of doing this, as previously described, would be to allow the inputs to be any positive integer, for example an integer corresponding to the number of packets awaiting switching in the respective queue.
- the inputs could be truly continuous (not just stepped) .
- the inputs may be multiplied by a spread factor f, prior to the network calculations being started, to vary the range that the input values span. Because the network calculations are non-linear, altering the input range may have profound effects on the operation and speed of convergence of the net.
- Imposed attractors may be used but, in practice, it has been found that they do not add very much to the speed of convergence of the net in many cases.
- the continuous case can be considered for performing first order task assignment in a multi-service or heterogeneous environment by maximising the sum of indicators of suitability of tasks for the chosen resources.
- the network can also be used for higher order task assignment taking into account, amongst others, intertask communication costs.
- Application areas include, amongst others, network and service management, distributed computer systems, systems for work management, financial transactions, traffic scheduling, reservation, storage, cargo handling, database control, automated production, and general scheduling, control or resource allocation problems.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002160027A CA2160027C (en) | 1993-04-20 | 1994-04-19 | Hopfield neural networks |
NZ263870A NZ263870A (en) | 1993-04-20 | 1994-04-19 | Robust hopfield model neural network suitable for control of packet switching |
DE69420134T DE69420134T2 (en) | 1993-04-20 | 1994-04-19 | HOPFIELD NEURONAL NETWORK AND METHOD FOR OPERATING IT |
JP6522921A JPH08509083A (en) | 1993-04-20 | 1994-04-19 | Popfield neural network |
EP94912645A EP0695447B1 (en) | 1993-04-20 | 1994-04-19 | Hopfield neural network and method of operating it |
KR1019950704543A KR960702130A (en) | 1993-04-20 | 1994-04-19 | Absorbed field neural network and its operation method |
AU65108/94A AU690904B2 (en) | 1993-04-20 | 1994-04-19 | Hopfield neural networks |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB939308165A GB9308165D0 (en) | 1993-04-20 | 1993-04-20 | Hopfield neural networks |
GB9308165.1 | 1993-04-20 | ||
US10378093A | 1993-08-10 | 1993-08-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1994024637A1 true WO1994024637A1 (en) | 1994-10-27 |
Family
ID=26302783
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB1994/000818 WO1994024637A1 (en) | 1993-04-20 | 1994-04-19 | Hopfield neural networks |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO1994024637A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0840238A1 (en) | 1996-10-30 | 1998-05-06 | BRITISH TELECOMMUNICATIONS public limited company | An artificial neural network |
US6377545B1 (en) | 1996-11-18 | 2002-04-23 | Telefonaktiebolaget Lm Ericsson (Publ) | Open loop adaptive access control of ATM networks using a neural network |
CN113377335A (en) * | 2021-05-14 | 2021-09-10 | 长沙理工大学 | Pseudo-random number generator, pseudo-random number generation method and processor chip |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4660166A (en) * | 1985-01-22 | 1987-04-21 | Bell Telephone Laboratories, Incorporated | Electronic network for collective decision based on large number of connections between signals |
EP0340742A2 (en) * | 1988-05-06 | 1989-11-08 | Honeywell Inc. | Mask controlled neural networks |
- 1994-04-19: WO PCT/GB1994/000818 patent/WO1994024637A1/en, active IP Right Grant
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4660166A (en) * | 1985-01-22 | 1987-04-21 | Bell Telephone Laboratories, Incorporated | Electronic network for collective decision based on large number of connections between signals |
EP0340742A2 (en) * | 1988-05-06 | 1989-11-08 | Honeywell Inc. | Mask controlled neural networks |
Non-Patent Citations (7)
Title |
---|
CHEUNG: "Neuron variable structure controller", IECON89 : 15TH ANNUAL CONFERENCE OF IEEE INDUSTRIAL ELECTRONICS SOCIETY, vol. 1, 6 November 1989 (1989-11-06), PHILADELPHIA , USA, pages 759 - 763 * |
CHU: "Using a semi-asynchronous hopfield network to obtain optimal coverage in logic minimization", IJCNN-91 : INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, vol. 1, 8 July 1991 (1991-07-08), SEATTLE , USA, pages 141 - 146 * |
FOO: "Stochastic neural networks for solving job-shop scheduling : part 1. Problem representation", IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS, vol. 2, 24 July 1988 (1988-07-24), SAN DIEGO , USA, pages 275 - 282 * |
GHOSH: "A temporal memory network with state-dependent thresholds", 1993 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS, vol. 1, 28 March 1993 (1993-03-28), SAN FRANCISCO , USA, pages 359 - 364 * |
NEELAKANTA: "Langevin machine: a neural network based on stochastically justifiable sigmoidal function", BIOLOGICAL CYBERNETICS, vol. 65, no. 5, September 1991 (1991-09-01), HEIDELBERG DE, pages 331 - 338 * |
TROUDET: "Neural network architecture for crossbar switch control", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS, vol. 38, no. 1, January 1991 (1991-01-01), NEW YORK US, pages 42 - 56 * |
UEDA: "Hopfield-type neural networks with fuzzy sets to gather the convergent speed", IJCNN INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, vol. 4, 7 June 1992 (1992-06-07), BALTIMORE , USA, pages 624 - 629 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0840238A1 (en) | 1996-10-30 | 1998-05-06 | BRITISH TELECOMMUNICATIONS public limited company | An artificial neural network |
US6377545B1 (en) | 1996-11-18 | 2002-04-23 | Telefonaktiebolaget Lm Ericsson (Publ) | Open loop adaptive access control of ATM networks using a neural network |
CN113377335A (en) * | 2021-05-14 | 2021-09-10 | 长沙理工大学 | Pseudo-random number generator, pseudo-random number generation method and processor chip |
CN113377335B (en) * | 2021-05-14 | 2022-07-01 | 长沙理工大学 | Pseudo-random number generator, pseudo-random number generation method and processor chip |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yuhas et al. | Neural networks in telecommunications | |
Hiramatsu | Integration of ATM call admission control and link capacity control by distributed neural networks | |
Park et al. | Applications of neural networks in high-speed communication networks | |
Feng et al. | Optimal state-free, size-aware dispatching for heterogeneous M/G/-type systems | |
Verloop et al. | Heavy-traffic analysis of a multiple-phase network with discriminatory processor sharing | |
CN116225649A (en) | Fine-granularity electric power task cloud edge collaborative optimization scheduling method | |
Nordstrom et al. | Neural networks for adaptive traffic control in ATM networks | |
CA2160027C (en) | Hopfield neural networks | |
WO1994024637A1 (en) | Hopfield neural networks | |
Chong | A neural-network contention controller for packet switching networks | |
Mehmet-Ali et al. | The performance analysis and implementation of an input access scheme in a high-speed packet switch | |
Vahidipour et al. | Priority assignment in queuing systems with unknown characteristics using learning automata and adaptive stochastic Petri nets | |
Ali et al. | The performance analysis of an input access scheme in a high-speed packet switch | |
Brown | Neural networks for switching | |
Jawad et al. | Prototype design for routing load balancing algorithm based on fuzzy logic | |
Davoli et al. | A two-level stochastic approximation for admission control and bandwidth allocation | |
Kojic et al. | Neural network based dynamic multicast routing | |
Kurokawa et al. | The neural network approach to a parallel decentralized network routing | |
Liu | A hybrid queueing model for fast broadband networking simulation | |
Arulambalam et al. | Traffic management of a satellite communication network using mean field annealing | |
Nimisha et al. | Polling Models: A Short Survey and Some New Results | |
Matsuda | Guaranteeing efficiency and safety of a Hopfield network for crossbar switching | |
Marek et al. | A Diffusion Approximation model of Active Queue Management | |
Chen et al. | An Online RBF Network Approach for Adaptive Message Scheduling on Controller Area Networks. | |
Necker et al. | Bitrate management in ATM systems using recurrent neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 94191841.6 Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AU CA CN FI JP KR NZ US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 263870 Country of ref document: NZ |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2160027 Country of ref document: CA Ref document number: 1994912645 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 1996 537767 Country of ref document: US Date of ref document: 19960118 Kind code of ref document: A |
|
WWP | Wipo information: published in national office |
Ref document number: 1994912645 Country of ref document: EP |
|
WWG | Wipo information: grant in national office |
Ref document number: 1994912645 Country of ref document: EP |