CN1206526A - Method and arrangement for network resource administration - Google Patents

Method and arrangement for network resource administration

Info

Publication number
CN1206526A
CN1206526A (application CN96199374A)
Authority
CN
China
Prior art keywords
node
token
capacity
network
time slot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 96199374
Other languages
Chinese (zh)
Inventor
Christer Bohm
Per Lindgren
Lars Ramfelt
Markus Hidell
Peter Sjödin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dynarc AB
Original Assignee
Dynarc AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dynarc AB filed Critical Dynarc AB
Priority to CN 96199374 priority Critical patent/CN1206526A/en
Publication of CN1206526A publication Critical patent/CN1206526A/en
Pending legal-status Critical Current

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention relates to a method and an arrangement for centralized and distributed management of the capacity in a circuit-switched network with a bus or ring structure, especially a network of the DTM type (Dynamic synchronous Transfer Mode). Such a network uses a bandwidth which is partitioned into cycles, which in turn are partitioned into control time slots for signalling and data time slots for transfer of data, and each data time slot is associated with a token. In the centralized version, a first node, called the server node, is allocated tokens corresponding to all data time slots flowing in one direction on a bus or ring. A second node requests tokens corresponding to a certain capacity from the server node, and the server node reserves and transfers tokens corresponding to the requested capacity to the other node, provided that the server node has the requested capacity unutilized. In the distributed version, the number of server nodes is between two and the total number of nodes in the bus or ring.

Description

Method and arrangement for network resource administration
A. Fast circuit switching for the next generation of high-performance networks
DTM (Dynamic synchronous Transfer Mode) is a broadband network architecture based on fast circuit switching augmented with dynamic reallocation of resources. It provides multicast, multi-rate channels with short set-up delay, and supports both applications with real-time requirements on quality of service and applications with bursty, asynchronous traffic. This application describes the DTM architecture and its distributed resource-management scheme, and presents performance results obtained from network simulations. The analysis covers throughput and access delay for two network topologies: a dual bus and a grid of dual buses. Starting from a uniform traffic pattern, we study the effects of varying user demands, distances between nodes and transfer lengths. The results show that the overhead for channel establishment is low (a few hundred microseconds) and that utilization is high, even for short transfers. The analysis also shows that signalling capacity limits performance when channels are established very frequently.
1. Introduction
The next generation of networks will integrate different classes of applications: delay-insensitive asynchronous applications such as fax, mail and file transfer, and delay-sensitive applications with real-time requirements such as audio and video. Traditionally, these different classes of applications have been supported by networks with different topologies. Computer networks (such as the Internet) use packet switching and store-and-forward techniques to support asynchronous communication. Circuit-switched, time-division multiplexed telephone networks, on the other hand, support real-time communication.
Circuit-switched networks have many attractive properties. Circuits are isolated from one another in the sense that traffic on one circuit is unaffected by traffic on other circuits. This makes it possible to guarantee a transfer quality with constant delay, which suits applications with timing requirements. Furthermore, data and control are separated in circuit-switched networks. Control information is processed only when circuits are established and released; during the actual data transfer no processing of the data stream, such as flow or congestion control, is needed. Large volumes of data can therefore be transferred efficiently [1]. We believe this will become even more important in the future, because advances in photonics have dramatically reduced the cost of transmission, making switching the main bottleneck of communication.
The static nature of ordinary circuit-switched networks makes them unsuitable for certain types of traffic. Typically, circuits have fixed capacity, long set-up delay and poor support for multicast. These shortcomings make it difficult to support, for example, computer communication efficiently in a circuit-switched network. This has motivated the search for alternative solutions, and the prevailing view is that the next generation of telecommunication networks should be cell-switched networks based on ATM [2]. Cells are small, fixed-length packets, so cell switching is a form of packet switching [3]. This means that many of the weaknesses of packet switching are also present in cell-switched networks, in particular in the area of providing guaranteed quality of service. Additional mechanisms, such as admission control, traffic policing, packet scheduling on the links and resynchronization at the receiver, are therefore needed to support different classes of service [4]. An open question regarding packet-switched networks, and ATM in particular, is whether these mechanisms can be realized in a cost-effective way [5], [6], [7], [8], [9], [10], [11].
DTM (Dynamic synchronous Transfer Mode) is a broadband network architecture developed at the Royal Institute of Technology in Sweden. It is an attempt to combine the advantages of circuit switching and packet switching. The architecture is based on fast circuit switching augmented with dynamic reallocation of resources, supports multicast channels, and has mechanisms that provide short access delay. The DTM architecture spans from medium access, including a synchronization scheme, up to routing and addressing of logical ports at the receiver. DTM supports various classes of traffic and can be used directly for application-to-application communication, or as a carrier network for other protocols such as ATM or IP. A prototype implementation based on 622 Mbps optical fibre has been operational for two years, and work is in progress on a version using four wavelength-division multiplexed channels. An overview of DTM and a detailed description of the prototype implementation are given in [12].
Fast circuit switching was proposed for telephone systems as early as the beginning of the 1980s [13]. A fast circuit-switched telephone network attempts to allocate a transmission path of a given data rate to a group of network users only when they are actively transmitting information. This means that a circuit is established for each burst of information [14], [15]. When silence is detected, the transmission capacity is quickly reallocated to other users. In the form used in TASI-E [13], fast circuit switching was deployed for intercontinental communication between Europe and the United States. Burst switching is another form of fast circuit switching, in which a burst (consisting of a header, an arbitrary amount of data and a termination character) is sent on a time-division channel of fixed bit rate and interleaved with other bursts [16]. This distinguishes burst switching from fast packet switching, where each packet is sent at the full link bandwidth. Furthermore, in contrast to packets, the length of a burst need not be known before transmission begins.
It has been shown that the signalling delay associated with establishing and releasing communication channels is the main factor determining the efficiency of fast circuit switching [14]. DTM is therefore designed to establish channels quickly, within a few hundred microseconds. DTM differs from burst switching in that control and data are separated, and in that DTM uses multicast, multi-rate, high-capacity channels to support a variety of traffic classes. For example, DTM can increase or decrease the resources allocated to an existing channel. Even though a DTM network has the capability of creating a channel for every message, we do not believe this approach is suitable for all classes of traffic. Instead, it is up to the user to decide whether to establish a channel for each information burst, or to keep a channel established even during idle periods.
The objective of the present invention is to study the performance of fast circuit switching in DTM, with emphasis on the dynamic resource-management scheme, and thereby to show that DTM can support traffic consisting of short, frequent transfers. The rest of this part is organized as follows: Section 2 gives an introduction to DTM, describes the channel concept and the resource-management scheme, and discusses how some of the problems of traditional circuit switching are addressed. Section 3 reports and discusses simulation results for various configurations with single-hop and multi-hop connections. Finally, conclusions are presented in Section 4.
2. DTM — Dynamic synchronous Transfer Mode
DTM is designed for a unidirectional multi-access medium, i.e. a medium whose capacity is shared by all connected nodes. It can be built on various topologies, such as a ring, a folded bus or a dual bus. We have chosen the dual bus, since it has a shorter average inter-node distance than a folded bus, and we find the DTM synchronization scheme easier to implement on a dual bus than on a ring.
The service provided is based on channels. A channel is a set of time slots with one sender and an arbitrary number of receivers; it guarantees that data reaches the receivers at the rate given by the capacity of the channel. The channels on the physically shared medium are realized by a time-division multiplexing (TDM) scheme (see Figure 1). The total capacity is divided into cycles of 125 microseconds, which in turn are divided into 64-bit time slots.
The slots are divided into data slots and control slots. Each node has access to at least one control slot, which it uses to send control information to other nodes. Control messages are sent in response to control messages from other nodes, upon requests from users, or spontaneously for management purposes. The control slots constitute a small fraction of the total capacity, while most of the slots are data slots carrying payload. At system start-up, the data slots are distributed among the nodes according to some predefined distribution. This means that each node "owns" a share of the data slots. A node needs to own a slot in order to use it for sending data, and the ownership of slots can change dynamically among the nodes during the operation of the network.
2.1 Slot allocation
DTM uses a distributed algorithm for slot reallocation, where the pool of free slots is distributed among the nodes. There are two main reasons for using a distributed scheme instead of a centralized slot pool. First, when a node uses only slots from its local pool to establish a channel, the overhead of channel establishment is very low. Second, a distributed algorithm does not depend on a single node, and therefore provides a certain tolerance to node failures. The main drawback of a distributed implementation is the overhead of communication and synchronization between nodes.
When a user request arrives, the node first checks whether its local slot pool contains enough slots to satisfy the request and, if so, immediately sends a channel establishment message towards the next hop. Otherwise, the node first has to request more slots from the other nodes on the bus. Each node maintains a status table containing information about the free slots of the other nodes, and consults this table to decide which nodes to ask for slots when more slots are needed. Each node regularly broadcasts a status message with information about its local slot pool. Status messages have lower priority than other control messages, so a status message is transmitted only when the control slot would otherwise go unused. Furthermore, the reallocation algorithm does not depend on nodes processing all status messages, so a node can safely ignore status information when it is busy.
The slot reallocation procedure is quite simple and works as follows: if a user requests a channel of M slots and the node has only N free slots, the node sends out slot reallocation requests asking for the missing slots. It first sends a request to the closest node that has free slots. If, according to the status table, that node does not have enough free slots, a request is also sent to the second closest node with free slots, and so on. The node then waits until it has received responses to all its requests, and grants or rejects the channel request depending on the outcome of the reallocation procedure.
A node that has J free slots and receives a slot reallocation request asking for K slots always gives away min(J, K) slots. The node responds by sending a slot reallocation confirmation indicating which slots, if any, it gives away. If the requested node has no free slots at all, it responds with a slot reallocation reject. In addition to this algorithm, resources can be controlled by a network management system. For example, to prevent starvation, a node can be configured not to give away all its slots, but to retain a certain share of its initial allocation.
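The exchange described in the two preceding paragraphs can be summarized by the following minimal sketch in Python. The class, attribute and function names are illustrative assumptions of this sketch, not part of the specification; requests are served synchronously for brevity.

# Minimal sketch of the slot reallocation rule described above: the requester tries its
# local pool first, then asks other nodes (closest first, guided by the status table),
# and a node holding J free slots that is asked for K slots gives away min(J, K).
class Node:
    def __init__(self, name, free_slots):
        self.name = name
        self.free = set(free_slots)              # local pool of free slot numbers

    def handle_request(self, k):
        """Responder side: give away min(J, K) slots, or none if the pool is empty."""
        give = set(list(self.free)[:k])
        self.free -= give
        return give

def request_channel(requester, peers_by_distance, m):
    """Requester side: strict demand without retry, as in the basic scheme."""
    allocated = set(list(requester.free)[:m])
    requester.free -= allocated
    for peer in peers_by_distance:               # ordering would come from the status table
        if len(allocated) >= m:
            break
        allocated |= peer.handle_request(m - len(allocated))
    if len(allocated) >= m:
        return allocated                         # channel can be established
    requester.free |= allocated                  # not enough: return slots, reject request
    return None

# Example: node A needs 6 slots but owns only 3; B and C contribute the rest.
a, b, c = Node("A", {0, 1, 2}), Node("B", {3, 4}), Node("C", {5, 6, 7})
print(request_channel(a, [b, c], 6))             # e.g. {0, 1, 2, 3, 4, 5}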
2.2 Switching
A DTM network can be expanded by interconnecting several buses with switch nodes (see Figure 2). In DTM, any node attached to two or more buses can switch data between them; in this sense DTM uses distributed switching. The advantage of this is that the switching capacity grows as more switch nodes are added. Switching is synchronous, which means that the switching delay is constant for a channel. Consequently, a multi-hop channel has roughly the same properties as a channel on a single bus; the only difference is that a switched channel has a slightly longer delay (at most 125 microseconds per hop). If a switch node can buffer one cycle of data for each of its buses, no blocking or overflow can occur at the node.
The buses are synchronized on a per-cycle basis. Cycles are started on all buses at the same frequency. This can be achieved by letting one network node be responsible for generating cycles on all its outgoing buses. For every new cycle, this node generates a cycle-start marker that is forwarded to all buses in the network. Each bus has a switch node that is responsible for forwarding the marker onto that bus. These switch nodes must be organized so that the marker reaches each bus exactly once. When the marker arrives at a bus, a new cycle is started on that bus. The reader is referred to [12] for further details on the synchronization scheme.
The cycle time and the slot length can be kept constant for all buses, which means that the synchronization scheme allows different buses to run at different bit rates. It is therefore possible to upgrade or reconfigure an individual bus without affecting the rest of the network.
2.3 DTM channels
The channel abstraction in DTM differs from an ordinary circuit in the following respects.
Simplex: a channel is established from a sender to the receivers. A duplex connection consists of two channels, one in each direction.
Multi-rate: a channel can consist of an arbitrary number of slots, which means that the channel capacity can be any multiple of 512 Kbps, up to the entire data capacity of the bus.
Multicast: a channel can have several receivers.
A node creates a channel by allocating a set of data slots for the channel and sending a channel establishment control message. The control message is addressed either to a single node or to a multicast group, and announces that the channel has been created and which slots it uses.
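As an illustration only, such an announcement could be modelled as the small record below; the field names are assumptions of this sketch, not a message layout defined by the protocol.

# Illustrative model of a channel establishment announcement (field names are assumed).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ChannelEstablish:
    sender: int                   # originating node
    destination: int              # single node or multicast group identifier
    slots: Tuple[int, ...]        # data slots reserved for the channel in each cycle

msg = ChannelEstablish(sender=1, destination=4, slots=(100, 101, 102, 103))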
To establish a channel, slots must be allocated at the sender and at every switch node along the route of the channel. The switch nodes allocate slots for the channel on behalf of the sender. A switch node then switches the channel by copying the slots of the channel from the incoming bus to the outgoing bus. If any of the involved switch nodes cannot allocate the required slots, the attempt to establish the multi-hop channel fails. In that case other routes have to be tried. Under normal conditions there are several routes between each pair of nodes in a network. The current version of the protocol uses source routing [17] and an addressing scheme based on coordinates in the grid (X, Y). A simple load-balancing scheme for two-hop routes is realized by letting each switch node use status messages to distribute information about the number of free slots on its outgoing buses. For example, between node 1 and node 4 in Figure 2 there are two possible paths, so if node 1 wants to establish a connection to node 4, it can choose between switching at node 2 and switching at node 3. Node 1 receives status information from nodes 2 and 3 and makes its routing decision based on this information. This algorithm is well suited for a compact grid, where most routes use at most two hops, whereas networks with arbitrary topologies need a more general routing algorithm.
2.3.1 Multicast channels
A traditional circuit is a point-to-point connection between a sender and a receiver. DTM uses a shared medium which inherently supports multicast, since a slot can be read by several nodes on a bus. A multicast channel can easily be extended to span several hops, because the switch operation is in fact a multicast operation in the sense that it copies data onto another bus (see Figure 3).
3. Network performance
In this section we study throughput and delay under various traffic conditions. We simulate two different network configurations:
a dual bus with 100 nodes,
a fully connected grid of dual buses with 20 × 20 nodes.
In the simulation model, a node receives transfer requests from a traffic generator and control messages from other network nodes. These events are placed in the node's input queue, and the node processes one event at a time. The time to process one event is 5 microseconds. Transfer requests are generated by a Poisson process, and source and destination addresses are uniformly distributed. For each transfer request, the node attempts to allocate slots and, if successful, establishes a channel, sends the data and releases the channel. This means that slots are released as soon as the transfer is completed.
The simulations are carried out for different distances between nodes (0.01 to 10 km), different transfer lengths (1 to 256 kilobytes) and different types of user demands. The link bit rate is 4.8 Gbps, which gives a slot rate of 75 MHz and a cycle size of 9600 slots. Each transfer requests 40 slots per cycle. This corresponds to a channel of 20.48 Mbps, which means that, for example, a transfer of 16 kilobytes takes about 6 ms.
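The channel capacity and transfer time above follow directly from the cycle format (64-bit slots every 125 microsecond cycle). A short check of the arithmetic, under those assumptions:

# Arithmetic behind the figures above: 64-bit slots, 125 microsecond cycles.
slot_bits, cycle_s = 64, 125e-6
channel_bps = 40 * slot_bits / cycle_s           # 40 slots per cycle
print(channel_bps)                               # 20480000.0 -> 20.48 Mbps
transfer_bits = 16 * 1024 * 8                    # a 16 kilobyte transfer
print(transfer_bits / channel_bps)               # ~0.0064 s, i.e. about 6 ms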
We obtain the throughput by dividing the total amount of user data transferred by the simulated time, and normalize this value to the capacity of one dual bus (9.6 Gbps). The maximum achievable throughput is always less than the link capacity, since some capacity is used for control messages. In the single dual-bus simulations with 100 nodes, the control capacity per node is 5 control slots, which corresponds to 5% overhead. The maximum possible throughput is therefore 0.95.
The grid has more nodes than the single dual bus, but fewer nodes per bus (20 instead of 100). Since the traffic load is distributed over more nodes for a given bus load, each node in the grid will see more traffic. The nodes in the grid therefore need more control slots. Moreover, a node has a limited capacity for processing control messages, and with an event-processing time of 5 microseconds we found that using more than 10 control slots per node gives only a slight performance improvement. In the grid we therefore use 10 control slots per node, which gives a maximum possible throughput of 0.98.
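The stated maxima of 0.95 and 0.98 correspond to the control-slot overhead, assuming the 9600-slot cycle given above:

# Control-slot overhead for the two configurations (9600 slots per cycle assumed).
slots_per_cycle = 9600
dual_bus_overhead = 100 * 5 / slots_per_cycle    # 100 nodes x 5 control slots, ~5.2%
grid_overhead     = 20 * 10 / slots_per_cycle    # 20 nodes per bus x 10 slots, ~2.1%
print(1 - dual_bus_overhead)                     # ~0.95 maximum possible throughput
print(1 - grid_overhead)                         # ~0.98 maximum possible throughput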
The access delay is the average time from the arrival of a request at a node until the data transmission starts. It is a measure of the channel establishment overhead, and includes the time to allocate slots, the time to send a channel establishment message to the receiver, and the time to send the first data slot. In the multi-hop case, the sender waits for a confirmation from the receiver that the channel has been established on both buses before it starts sending data. In the single-hop case, the sender alone establishes the channel to the receiver, so data transmission can start as soon as slots have been allocated.
3.1 Dual bus
The first part of the simulations concerns the performance of a single dual bus. The main purpose of these simulations is to study the slot allocation schemes under different user requirements. Since slot allocation is performed independently on different buses, the conclusions also apply to the multi-hop case.
3.1.1 Strict capacity demands without retry
Figure 4 shows the basic simulation result, where a node makes at most one attempt to allocate the requested capacity for a channel, and rejects the request if the full requested capacity cannot be allocated. Transfer lengths between 1 and 256 kilobytes are simulated. The smallest transfers (1 and 2 kilobytes in Figure 4) are not simulated at the highest loads, because the event queues of the simulator then grow too large, indicating that the control capacity is exhausted.
At low load, most transfer requests are accepted immediately, so throughput increases linearly with load. At higher load, slot reallocations become frequent, some transfer requests are rejected, and throughput increases only to a limited extent with load. If the load becomes even higher, the signalling capacity is exhausted and throughput no longer increases (or even decreases, as for the 1-kilobyte transfers in Figure 4). For a given load, small transfers require more frequent control signalling than large transfers, so throughput is lower for small transfers than for large ones (up to 0.47 for 1-kilobyte transfers, i.e. only 50% of the maximum possible throughput, compared with 85% for 256-kilobyte transfers). Throughput is also limited by the strict user behaviour, where the user requires the whole requested capacity to be allocated in a single attempt. The simulation results reported below show that throughput can be increased by relaxing this requirement.
At low load, the access delay consists mainly of the time for a node to process the transfer request and the time to wait for the first free control slot (for the channel establishment message) and the first data slot, in total about 80 microseconds on average. As the load increases, the node has to request slots from other nodes, which introduces additional delay.
3.1.2 Strict capacity demands with retry
Throughput can be increased by allowing a node to retry, i.e. to make more than one attempt to allocate slots for a channel. Figure 5 shows throughput and access delay for different values of the maximum number of attempts allowed. When nodes are allowed to retry, more channels can be established and throughput increases (up to 92% of the maximum possible throughput), but at the cost of longer access delays and more signalling. Retrying is thus best suited for applications that have strict bandwidth requirements but can tolerate some access delay. However, if a large number of requests are retried persistently (as in overload situations), performance degrades. Figure 5 shows the reduced performance at high load when nodes are allowed to retry 20 times, indicating that the signalling capacity is insufficient for 20 retries.
3.1.3 Flexible capacity demands
Applications with less stringent capacity requirements may accept channels with less than the requested capacity. More channels can then be established with a single slot reallocation. Figure 6 shows throughput and delay for three cases: the user is satisfied with any amount of capacity (minimum 1 slot); the user requires at least half of the requested capacity (20 slots); and the user requires the full capacity (40 slots). Throughput increases as the user demand decreases. When the user requires only 1 slot, throughput reaches 94% of the maximum possible throughput. In all three cases, however, the slot reallocation procedure is exactly the same, which explains why the access delay is essentially the same in all three cases.
3.1.4 Performance as a function of distance
When the distance between nodes increases, the control messages that carry out slot reallocation take longer to exchange. This can be seen in Figure 7, which shows throughput and access delay for different bus lengths (strict capacity demands without retry, i.e. the same slot allocation principle as in Figure 4). When the bus becomes longer, the access delay increases considerably. However, this mainly affects channel establishment; throughput is comparatively independent of distance. Another effect of a longer bus is that it takes longer to distribute status information, which increases the probability that a slot reallocation fails. This would mainly affect throughput, but the simulation results show only a slight influence.
3.2 Grid network
Figure 8 shows the results of the multi-hop channel simulations. The network is a fully connected grid of 20 × 20 nodes. Channels use at most two hops, and routes are chosen based on the information in status messages. The slot allocation principle is the same as in Figure 4, i.e. strict capacity demands without retry.
With uniformly distributed source-destination pairs, the theoretical maximum throughput of a fully connected grid can be reached, where n is the number of buses. With 2.1% signalling overhead, the 20 × 20 grid has a maximum possible throughput of approximately 20.6. Figure 8 shows that the maximum throughput for 256-kilobyte transfers is 97.5% of this maximum value, compared with 95.1% for 16-kilobyte transfers. This is clearly higher than for a single dual bus (Figure 4); our explanation is that there are fewer nodes per bus in the grid, so the pool of free slots is less dispersed and a slot reallocation is more likely to succeed.
The access delay for multi-hop channels is higher than in the single-hop case, since slots have to be allocated on two buses. For 256-kilobyte transfers, the access delay in the multi-hop case is approximately 50% longer than in the single-hop case. One might expect the access delay in the grid to be even longer than this. There are two main reasons why it is not. First, there is a certain amount of parallelism in establishing the channel over the two hops. Second, the interval between control slots is shorter in the grid, so a node spends less time waiting for a control slot. However, for 16-kilobyte transfers at higher loads the delay increases considerably, which indicates insufficient signalling capacity.
4. Conclusions and future work
Dynamic synchronous Transfer Mode (DTM) is a network architecture for integrated services. It is based on fast circuit switching and provides multi-rate, multicast channels with short set-up times. Channels give guaranteed capacity and constant delay, which makes them suitable for real-time traffic such as audio and video. In contrast to traditional circuit-switched networks, DTM also provides dynamic reallocation of resources between nodes in order to support dynamically varying traffic.
We have reported performance results from computer simulations of various configurations. The focus of the analysis is on packet-like traffic, since this type of traffic is considered the most challenging for a circuit-switched network. The intention is to study how frequent establishment and release of channels affect network utilization and access delay. We use 20 Mbps channels, where a channel is established for each transfer, and transfer sizes vary between 1 and 256 kilobytes. The analysis is carried out for two topologies: a dual bus and a grid of dual buses. The protocol requires a fixed fraction of the total capacity to be reserved for control information: we use 5% for the dual bus and 2% for the grid.
The results show that when the traffic consists of short, frequent transfers (a few kilobytes), performance depends on the available signalling capacity. The dual bus gives good results even when the network is loaded with transfers of only a few kilobytes, whereas the grid shows saturation effects already for 16-kilobyte transfers. In summary, this work shows that fast circuit switching as used in DTM performs well for packet-like traffic, even when short-lived connections are used. Combined with its inherent support for real-time traffic, this makes fast circuit switching a compelling alternative to B-ISDN and other networks.
As for further work, the performance results suggest that, in order to send packets over DTM, a scheme is needed for multiplexing packets onto channels, i.e. for aggregating packets into larger units. Assuming that computers generate traffic according to a train model [18], with trains of packets to the same destination, a scheme similar to those used in early fast circuit-switched networks seems appropriate [13], [15]. In such a scheme, the channel is released only when the sender has been idle for more than a certain period. This is, however, an area that requires further study. The performance results are encouraging, and we will further investigate the influence of non-uniform traffic models, such as bursty sources and asymmetric sender-receiver distributions. In addition, we will study mechanisms such as fast channel creation and slot reuse in the simulator [19], [20], [21].
B. Performance of slot management in DTM networks
This part presents performance results and analysis for a fast circuit-switched, next-generation high-speed network architecture: the Dynamic synchronous Transfer Mode (DTM) network. The performance analysis considers utilization, delay and blocking. The analysis is biased towards packet-like traffic, which is regarded as the most difficult traffic to support on a circuit-switched network.
Resource management uses tokens to guarantee conflict-free access to the slots. The DTM model includes two different token management schemes. The first is an asymmetric scheme based on a central token manager that manages all free tokens. The second is a symmetric scheme based on a distributed token manager, where the token pool is shared among all nodes. In the distributed scheme, tokens are transferred between nodes by a reallocation protocol. The distributed scheme is found to suffer from token fragmentation, and a solution based on a fragment-merging (defragmentation) mechanism is provided. Both models support slot reuse and a fast connection establishment scheme.
The traffic generator used in the simulations produces traffic with exponentially distributed inter-arrival times, as well as bursty, asymmetric client-server-like traffic. The results show that even if a circuit is established for every single packet, the connection establishment overhead is low enough, utilization is high and the reallocation protocol works well. When the slot reuse scheme is used, utilization doubles and the other performance measures also improve. Finally, the results show that when the average packet size is small, signalling capacity is the most important factor limiting performance.
1. Introduction
The communication industry and the research community are constantly striving to develop new high-capacity communication networks and protocols. Such developments and changes occur frequently and are very important for application developers who want to integrate audio, video and asynchronous communication services into their applications. These applications can appear on a wide range of network access terminals. A terminal can be any electronic equipment acting as a network host, from small mobile telephones or television sets to multimedia workstations and million-dollar supercomputers. The hosts differ by several orders of magnitude in their processing capabilities and their communication service requirements. These widely differing requirements are currently reflected in a set of independent network classes. Each network class is optimized for its particular traffic and applications: cable television networks use unidirectional broadcast networks in which the capacity is divided into fixed-size subchannels carrying video. Telephone networks use only 64 Kbit/s duplex circuits with guaranteed throughput and tightly controlled delay variation. Computer networks, such as the Internet, use connectionless packet switching to allow a large number of parallel network sessions, and also use statistical multiplexing to utilize the links efficiently. The backbone networks used for mobile systems need extra control (or signalling) capacity to dynamically track all active terminals.
With the current wide range of applications, and with this trend continuing to strengthen in the future, it is neither practical nor economical to keep developing new global networks adapted to every new service and terminal interface. Instead, a single integrated services network is needed that supports both existing and new services. The overall goals for such a network are that it must scale to global size and that it must allow expensive network components to be shared to the highest possible degree in order to minimize cost. Optical transmission technology has shown that it can provide the necessary link capacity at a price low enough to make an integrated services network a realistic possibility.
However, a new generation of integrated optical networks with very high capacity will introduce problems not present in today's more specialized, lower-performance networks. First, when network capacity increases while the propagation latency of information remains limited by the speed of light, the increased bandwidth-delay product places higher demands on the mechanisms that isolate a user's traffic from third-party network traffic. For example, a telephone conversation should not be affected by another user opening a high-capacity video channel. Second, in order to benefit from the increased network capacity, applications and protocols must work reliably with an increasing amount of information in transit. This results in larger bursts of packets and larger amounts of information per transaction in the network.
Stateless packet-switched protocols such as the Internet Protocol (IP) [rfc791, come91:Internetworking1] are used today in networks of considerable scale. From a small network connecting a handful of computers used for DARPA research in the mid-1970s, they have evolved into the ubiquitous, world-wide Internet of today [rfc1118]. Shared-medium local area networks such as CSMA/CD, token ring and FDDI (see [stallings94:data]) are used as simple building blocks of the Internet, connected by routers or bridges. The combination of easy expansion, low incremental cost per node and tolerance to faulty nodes has resulted in simple, flexible and robust networks. In addition, the shared medium allows efficient implementation of new multicast protocols such as IP multicast [rfc988].
A drawback of the shared medium is that it typically allows only one terminal to transmit at a time, so the whole network segment is not utilized efficiently. Schemes that allow the capacity of the medium to be reused can be designed [7, 8, 9, 10, 11], but this usually comes at the cost of complex hardware for fast access control. The access control mechanisms for a shared medium are also strongly tied to the size of the network and are usually efficient only in a local area environment.
The two main network types used as examples are the connection-oriented circuit-switched networks used for telephony, and the connectionless packet-switched networks. When a circuit-switched network is used for data communication, the circuit must remain connected between bursts of information, which wastes network capacity. This problem arises because circuit management operations are slow compared with the dynamic variations in user demand. Another source of overhead in circuit-switched networks is the requirement of symmetric duplex channels, which introduces 100% overhead when the information flow is unidirectional. This constraint also makes multicast circuits inefficient and difficult to implement. A connectionless packet-switched network, on the other hand, lacks resource reservation and must add header information to every message before it is sent. Furthermore, latency cannot be predicted accurately in a connectionless packet-switched network, and packets may be lost because of buffer overflow or corrupted headers. The latter two factors make it difficult to support real-time traffic. Congestion avoidance mechanisms [12] can isolate the traffic streams of different users, but such schemes are limited to operating on time scales comparable to the round-trip packet delay.
To address the problems described above, the communication industry is focusing on the development of Asynchronous Transfer Mode (ATM) [13], which has been proposed for LANs and for many future public networks. It has also been suggested by CCITT as the transport standard for Broadband ISDN (B-ISDN) [14]. ATM networks are connection-oriented and establish channels just as circuit-switched networks do, but use small fixed-length packets, called cells, to carry the information. The packet-switched nature of ATM means that the network needs new mechanisms, such as buffer resource managers and link schedulers, in order to provide real-time guarantees for a connection.
We provide real-time guarantees by means of a circuit-switched network, so we must address the circuit-switching problems described above. We also use a new shared-medium control protocol, and therefore have to consider the general shared-medium problems. Our design, called Dynamic synchronous Transfer Mode (DTM), uses channels as the communication abstraction [15, 16, 17]. Our channels differ from telephone circuits in several ways. First, the establishment delay is short enough that resources can be allocated and released dynamically, as fast as user demands change. Second, they are simplex, which minimizes the overhead when the communication is unidirectional. Third, they offer multiple bit rates, to support the large variations in user capacity requirements. Finally, they are multicast channels, allowing an arbitrary number of destinations.
DTM channels share many beneficial properties with circuits. No control information is transferred after channel establishment, which results in very high utilization of network resources for large data transfers. Support for real-time traffic comes naturally; no policing, congestion control or flow control is needed within the network. The control information is separated from the data, which makes multicast less complex. The transmission delay is negligible (i.e. less than 125 µs), and there is no potential for data loss caused by buffer overflow as in ATM. The bit error rate depends on the underlying link technology, and switching is simple and fast because of the strict reservation of resources at channel establishment. The main purpose of this work is to study the performance of DTM in the areas where traditional circuit-switched networks fall short: dynamic bandwidth allocation, channel establishment delay, and operation as a shared-medium network. A principle for resource management (referred to as token management) is proposed and evaluated. We report simulation results for DTM under traffic models resembling computer communication, with relatively short transfers (4-400 kilobytes). The traffic has bursty arrivals, a client-server orientation, and exponentially distributed inter-arrival times.
Section 2 describes the DTM protocol and its channel concept. Section 3 describes the token protocol. Section 4 discusses the simulation set-up, and Section 5 presents simulation results for various configurations. Finally, conclusions are drawn and future work is outlined in Section 6.
2. The DTM medium access control protocol (DTM MAC)
The basic topology of a DTM network is a bus with two unidirectional optical fibres connecting all nodes (see Figure 9). Several buses with different speeds can be connected together to form an arbitrary multistage network. In the current prototype implementation [15], buses can be combined into a two-dimensional grid [16]. A node at the junction of two buses can synchronously switch data slots between the two lines, which allows efficient switching with constant delay through the node. The basic communication abstraction in DTM is a unidirectional, multi-rate, multicast channel [18].
The DTM medium access protocol is a time-division multiplexing scheme. The bandwidth of the bus is divided into 125 µs cycles (Figure 10), which in turn are divided into 64-bit slots (or shorter slots). The number of slots in a cycle thus depends on the network's bit rate; for example, on a 6.4 Gbit/s network there are approximately 12500 slots per cycle.
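The slot count follows directly from the cycle format; a quick check of the arithmetic, assuming 64-bit slots:

# Number of 64-bit slots in a 125 microsecond cycle on a 6.4 Gbit/s bus.
bit_rate, cycle_s, slot_bits = 6.4e9, 125e-6, 64
print(bit_rate * cycle_s / slot_bits)    # 12500.0 slots per cycle
print(slot_bits / cycle_s)               # 512000.0 -> each slot corresponds to 512 Kbit/s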
The slots are divided into two groups: control slots and data slots. Control slots are used to carry messages for the network's internal operation, such as messages for channel establishment and bandwidth reallocation. Data slots are used to transfer user data and are not interpreted by intermediate network nodes. Intermediate nodes are the nodes between the source and the destination.
In each network node there is a node controller, which controls access to the data slots and performs network management operations, such as network start-up and error recovery. The main tasks of the node controller are to create and terminate channels on demand from users, and to manage network resources in response to user requests and in the background.
Control slots are used exclusively for messages between node controllers. Each node controller has write permission to at least one control slot in each cycle, which it uses to broadcast control messages downstream to other nodes. Since write access to control slots is exclusive, the node controller always has access to its control slots, regardless of other nodes and the network load. The number of control slots a node uses may vary during network operation.
3. Token management
Most of the slots in a cycle are data slots. Access to data slots changes over time, according to traffic demands. Write access to slots is controlled by slot tokens (or tokens for short). A node controller may write data into a slot only if it owns the corresponding token. The token protocol guarantees that slot access is conflict-free, which means that at most one node writes data into the same slot.
Control messages for channel establishment and bandwidth reallocation carry sets of tokens as parameters. However, a control message is only 64 bits long and can therefore carry only a small number of parameters. This means that if a user requests a large-bandwidth transfer, several control messages may be needed to create the channel. This introduces extra access delay and consumes signalling capacity.
We consider several measures to reduce the amount of information that needs to be sent during channel creation and token reallocation. The first preferred option is to introduce block tokens in the token management. A block token can be transferred in a single control message and represents a group of tokens, but it can only be used for certain combinations of tokens. For example, in the network simulations described here, a block token is represented by a slot number and an offset giving the number of contiguous slots in the group. The block token optimization assumes that the token pool is not fragmented into small pieces. Having only small blocks of tokens in the free token pool may be a problem; this is referred to as fragmentation.
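A block token as used in the simulations can thus be modelled as a (slot number, offset) pair denoting a contiguous run of slots. A minimal sketch, with assumed names:

# Minimal model of a block token: a contiguous run of slots, transferable in one message.
from dataclasses import dataclass

@dataclass(frozen=True)
class BlockToken:
    slot: int       # first slot number in the group
    count: int      # offset giving the number of contiguous slots

    def slots(self):
        return range(self.slot, self.slot + self.count)

# One control message can thus move e.g. slots 200..219 as a single parameter:
print(list(BlockToken(slot=200, count=20).slots())[:3])    # [200, 201, 202]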
3.1 Slot reuse
The token protocol guarantees that a data slot is never used by two nodes simultaneously on the bus. Sometimes this protocol is too conservative. Figure 11 shows an example of how three tokens are reserved for three channels (A, B and C). The nodes are connected by bus segments, and a channel typically uses only a subset (grey) of the segments on the bus, while the remaining segments are reserved (white) but unused, which wastes shared resources. A better alternative is to let channels reserve capacity only on the segments between the sender and the receiver, as in the example of Figure 12. In that case a single slot can be used multiple times on the bus: channels D and E use the same slots as channels A and C, but on different segments of the bus. This is referred to as slot reuse. Slot reuse enables simultaneous transmissions in the same slot over disjoint segments of the bus.
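With slot reuse, two reservations of the same slot conflict only if their bus segments overlap. A minimal sketch of that check, representing a segment as a (start, end) pair of node indices along one bus direction (an assumption of this sketch):

# Slot reuse: the same slot may be reserved twice as long as the bus segments are disjoint.
def segments_overlap(a, b):
    """Segments are (first_node, last_node) intervals along one bus direction."""
    return a[0] < b[1] and b[0] < a[1]

def reservations_conflict(r1, r2):
    """A reservation is (slot_number, segment); a conflict needs the same slot AND overlapping segments."""
    return r1[0] == r2[0] and segments_overlap(r1[1], r2[1])

# Two channels could share slot 7 on disjoint segments, as in Figure 12:
print(reservations_conflict((7, (0, 4)), (7, (5, 9))))    # False -> reuse allowed
print(reservations_conflict((7, (0, 4)), (7, (3, 9))))    # True  -> conflict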
Slot reuse is a general method for better utilizing the shared resources of ring and bus networks. Slot reuse algorithms can be found in DQDB [11, 7], Simple [19] and CRMA [20], where they are tied to the control information carried in the slots. Buffer insertion networks such as METARING [9], when combined with destination release of slots, can also reuse the capacity of individual links; an elastic buffer that delays the incoming packet stream is used to resolve the conflicts that may occur.
Slot reuse increases the complexity of the access scheme, regardless of whether it is implemented in hardware, as in DQDB, Simple and CRMA, or in software, as in DTM. In systems other than DTM, the reuse implementation adds complex hardware to the critical high-speed path through the node and therefore increases the node delay. To allow slot reuse in DTM, the block token format is extended with parameters describing the bus segment(s) it represents. The token management protocol is also modified to avoid conflicts in the slot number dimension as well as in the segment dimension. The most important precondition is that no hardware changes relative to the original prototype implementation are required. The performance gain is also clear: on a dual bus with uniformly distributed sources and destinations, throughput has been seen to double [21]. A potential drawback of slot reuse in DTM is that the algorithm has higher complexity, and the load on the node controllers and the signalling channel may be higher (particularly if the average channel holding time is short).
3.2 The centralized token manager
Two token management schemes are proposed. The first, and simplest, is to let a single node controller manage all the free tokens of the fibre. Centralized server mechanisms of this kind are used in systems such as CRMA [22], where a head-end node distributes the fibre capacity to all other nodes. We configured the simulator so that, for each fibre, the third node from the slot generator is the token server. In this way the token server has approximately the same amount of traffic on both sides.
Each time a user request arrives at a node, the node first requests tokens from the manager, then locks them for the entire lifetime of the channel, and returns them to the manager as soon as the user requests that the channel be released. Requests are delayed while waiting for tokens, and all accesses are serialized through the central manager.
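The centralized scheme described above reduces to a simple request/release cycle against one manager node. A minimal sketch, with assumed class and method names (whether partial grants are allowed is not specified here; this sketch grants only full requests):

# Minimal sketch of the centralized token manager: one node holds all free tokens,
# lends them for the lifetime of a channel and receives them back at tear-down.
class CentralTokenManager:
    def __init__(self, all_tokens):
        self.free = set(all_tokens)

    def request(self, amount):
        """Requests are served serially; grant tokens only if the full amount is free."""
        if len(self.free) < amount:
            return None                      # request blocked
        granted = set(list(self.free)[:amount])
        self.free -= granted
        return granted

    def release(self, tokens):
        self.free |= tokens                  # returned when the channel is torn down

manager = CentralTokenManager(range(9600))
channel_tokens = manager.request(40)         # locked for the whole channel lifetime
manager.release(channel_tokens)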
3.3 The distributed token manager
A distributed token manager is inherently more complicated than a centralized one. We try to keep it as simple as possible. In our method, each node regularly broadcasts status information about how many free tokens it has. The other nodes store this information in their status tables. A node that needs more capacity consults its status table to decide which node to ask for slots. When a user request arrives at a node, the protocol on the initiating side works as follows:
1. If the node has enough free tokens to satisfy the request, it allocates the requested number of slots to the user, starts the channel by sending a channel establishment message to the destination node, and then sends data using the reserved slots.
2. Otherwise, the node marks its available tokens as reserved and then checks its status table: if the total number of free tokens in the network is not sufficient to fulfil the request, the request is rejected (blocked). Otherwise, the node sends token requests to nodes with unused capacity.
3. If a node that receives a token request does not have as many free slots as requested, it gives away all the free slots it has. In either case, it returns a response to the requesting node. Nodes process incoming requests in strict FIFO order.
4. When the requesting node receives a response to a token request, it marks the slots received in the response (if any) as reserved. When the node has received responses to all the requests it has sent, it either starts the channel or rejects the user request, depending on whether it has acquired sufficient capacity. If the user request is rejected, the reserved slots are marked as free again. (A sketch of this requester-side procedure is given below.)
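The following self-contained sketch illustrates the reserve/commit logic of steps 1-4 above on the requesting side. Class, attribute and parameter names are assumptions of this sketch, and replies are gathered synchronously for brevity.

# Requester-side logic of the distributed token manager (steps 1-4 above), sketched.
class TMNode:
    def __init__(self, free):
        self.free = set(free)         # tokens this node owns and has not reserved
        self.reserved = set()         # tokens set aside while a reallocation is pending

    def reserve(self, n):
        taken = set(list(self.free)[:n])
        self.free -= taken
        self.reserved |= taken
        return taken

    def token_request(self, k):       # responder side (step 3): give away up to k tokens
        give = set(list(self.free)[:k])
        self.free -= give
        return give

def handle_user_request(node, peers, network_free_total, amount):
    """network_free_total: total free tokens in the network according to the status table."""
    if len(node.free) >= amount:                        # step 1: no reallocation needed
        return node.reserve(amount)
    node.reserve(len(node.free))                        # step 2: reserve local tokens
    if network_free_total < amount:                     # status table says it cannot succeed
        node.free |= node.reserved; node.reserved = set()
        return None                                     # request blocked
    for peer in peers:                                  # step 2: ask nodes with spare capacity
        if len(node.reserved) >= amount:
            break
        node.reserved |= peer.token_request(amount - len(node.reserved))   # steps 3-4
    if len(node.reserved) >= amount:
        granted, node.reserved = node.reserved, set()
        return granted                                  # start the channel
    node.free |= node.reserved; node.reserved = set()   # step 4: reject, free reservations
    return None

a, b = TMNode(range(5)), TMNode(range(100, 110))
print(len(handle_user_request(a, [b], network_free_total=15, amount=12)))   # 12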
At start-up, all free tokens are distributed among the nodes of the network. Each node takes at least one free token, turns it into active state and declares it to be a control slot. User requests can then be received, and tokens can be transferred between nodes on demand. When the local node has enough tokens to satisfy an arriving user request, the request can be accepted without any token reallocation.
A drawback of this scheme is that slot reallocation is triggered only by user requests, and user requests are delayed by pending token reallocations. An optimization we have implemented compensates for this drawback by performing slot reallocation in the background, so that token reallocation is not needed for small to medium-sized requests.
Instead of distributing the tokens evenly, other distributions of the free token pool can be used to increase the probability of successful reallocation and to improve utilization. If the free token pool is managed by fewer nodes, channel blocking decreases, since the probability that tokens have to be reallocated is reduced.
In this case, the whole token pool is distributed proportionally among all nodes (nodes close to the slot generator hold more tokens than nodes farther away from the slot generator). Instead of using dedicated server nodes, token transfers may take place between any pair of nodes. When the local node holds enough tokens to satisfy an arriving user request, the request can be accepted without any token reallocation. Moreover, as long as arriving user requests match the token distribution well, reallocation will never be necessary.
Before deciding how to distribute the token pool, a number of questions need to be answered; we address the following:
1. When a node's local resources cannot satisfy a user request, which node should it ask for more tokens?
2. If a node asks several other nodes for tokens, how many should it ask for, and should the channel be rejected if only part of the requested capacity is received?
3. If tokens may move freely between nodes, will the token pool be fragmented into small pieces, rendering the block token optimization useless?
3.3.1 Status messages
We decided to use status messages to distribute information about the free token pool. The status information helps a node to choose a suitable node when asking for more resources. This approach addresses the first question above.
Our scheme works as follows. Each node regularly broadcasts status information about how many free tokens it has. The other nodes store this information in their status tables. A node that needs more capacity consults its status table to decide which node(s) to ask for slots. The broadcast status information thus gives only an approximate view of the current state of token ownership, so token requests may still be rejected, because they may be sent to nodes that no longer have any tokens to give away.
Status tables contain "soft" information, in the sense that the system keeps working even if the information is out of date or unavailable. However, they improve the success rate of the reallocation procedure.
3.3.2 Avoiding unnecessary rejections
When comparing the basic behaviour of the centralized and the distributed token management schemes (Figures 17 and 20), we can see that in the distributed scheme user requests may frequently be rejected even though there are unused resources in the system.
A node user mode table is selected the node that can ask token to it.When request arrived destination node, variation may take place in active volume, may return requesting node than lacking of asking for, so the user goes whistle.
Produce unnecessary token passing and waste bandwidth resource like this, because during token passing, time slot can not use.More frequent when mobile when token, the efficient of status message mechanism works is also very low.If when the token pond distributed in a large amount of (hundreds of is individual) node in proportion, the big young pathbreaker in average token pond was less relatively.When load is high, the free token number in the pond will further reduce.If node is also set up with very high speed and removed channel, then free capacity will be at low capacity and is not had to change between the capacity at all in each node.If the average size of user's request is now compared greatly with the free token number of node, then necessarily require more node to satisfy this request.At this moment, requested node does not have the possibility of free capacity will cause the user to be rejected.
Need not get back to Centralized Mode just can have many modes to address this problem.At first, if can not satisfy whole request, we just needn't send any time slot.Even only ask for free token to a node, this agreement also is suitable for, if but ask for to many nodes, it will produce moving of token or token is lockable and can not uses.Secondly, if we receive after token request than lacking of being asked, we can carry out the one or many token simply again and ask for process.This will improve the user asks received possibility, and the token that receives can be used.The expense of duplicate test will increase signaling and visit time-delay, and may destroy the performance of overload network.Even user's repeated attempt causes increasing the foundation time-delay of the request of having submitted duplicate test to.The 3rd, the user may wish to accept to be lower than the channel of the capacity of asking for rather than to go whistle sometimes.If, for example the user receive asked 50%, he just is ready to accept.Figure 13 shows for having the different minimum performances of accepting little (16K byte) user request of capacity [100% (40 time slots), 50% (20 time slots) and 5% (1 time slot)].One than the harmonic(-)mean level, the minimum bandwidth of accepting causes higher utilance.Illustrated among Figure 14 final block request before, if maximum repeated attempts of user 8 times, the results of property of generation.It is to be cost with more signaling, longer time-delay that utilance increases.If recurrent words, duplicate test many times will be offset the benefit that it brings.
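The three remedies can be combined into one request policy. A rough sketch is shown below, assuming a helper ask_for_tokens(n) that returns the number of tokens actually granted, and using the 50% minimum and the limit of 8 retries mentioned above; the combination itself is only one possible reading of the text.

    def request_channel(ask_for_tokens, requested, min_fraction=0.5, max_retries=8):
        """Gather tokens for a channel; accept once at least min_fraction is reached."""
        minimum = max(1, int(requested * min_fraction))
        granted = 0
        for _ in range(1 + max_retries):
            granted += ask_for_tokens(requested - granted)
            if granted >= requested:
                return granted          # full requested capacity obtained
            if granted >= minimum:
                return granted          # accept a channel smaller than requested
        return 0                        # block the request; gathered tokens are released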
In summary, the benefit of a flexible user request policy is that it reduces the probability of rejection and improves the overall throughput. The user can decide, when the request arrives, which of the configurations shown in Figure 13 and Figure 14 to use. A user with strict capacity requirements can ask for retries until enough capacity has been allocated, while other users may prefer to accept a channel with less than the requested capacity. For the remaining simulations presented here, we set the minimum acceptable bandwidth to 50% of the requested capacity.
3.3.3 Fragmentation
In general, the number of contiguous free blocks in a node is small, owing to the random movement of tokens and the varying capacity of user requests. This fragmentation makes the block token optimization scheme practically useless, and the access delay for high-capacity channels becomes relatively long (milliseconds). For block allocation to be more efficient, the fragmentation of free tokens must be reduced; otherwise it becomes the dominant factor in the access delay of high-bandwidth channels at moderate to high load. Low-capacity channels always have a short set-up delay regardless of fragmentation. With slot reuse the fragmentation problem is even more severe, because fragmentation can occur both in the slot (time) and in the segment (space) dimension (see Figure 12). In the centralized server scheme this is a special case of the general dynamic memory allocation problem [23]. In the distributed token manager, most of the fragmentation results from the use of many free token pools (one per node): two adjacent free tokens can only be merged if they end up in the same node.
We have implemented a mechanism called the fragment merging scheme, which avoids fragmentation as far as possible and increases the average block size of free tokens in the nodes. The scheme can be used with or without slot reuse.
Our fragment merging scheme works as follows:
1. At network start-up, a home node is defined for each token, and the tokens are distributed in such a way that tokens sharing the same home node always define a contiguous slot range. This produces a large average token area in the token map shown in Figure 12.
2. When two adjacent tokens in the free pool span the same slot range or the same segment range, they are merged into a single token (sometimes requiring recursive split and merge operations). When merging, merging in the segment dimension always takes precedence over merging in the slot dimension.
The reason for this is that a token covering only a few segments is less useful to other nodes than a token spanning many segments.
3. When a node receives a token request from a local or remote user, it selects tokens from its pool using a best-fit algorithm in the slot-number and segment-number dimensions (see Figure 12). The value of a token is calculated as its area in the token map, and we try to pick the token with the smallest area that satisfies the requested capacity.
4. When a node needs to request tokens from other nodes, it asks for larger blocks from a few nodes rather than smaller blocks from many nodes, whenever possible. The status tables provide the information needed for this. Token transfers then become more efficient, with fewer set-up messages and less fragmentation.
5. Free tokens are sent back to their home node after they have been idle for a long time or have been away from their home node for a long time.
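To make the rules concrete, the sketch below models a block token as a rectangle in the token map of Figure 12 (a slot range times a segment range) and shows the segment-first merge of rule 2 and the best-fit selection of rule 3; the representation is ours and purely illustrative.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class BlockToken:
        slot_start: int
        slot_count: int
        seg_start: int
        seg_count: int

        @property
        def area(self):                      # token value = its area in the token map
            return self.slot_count * self.seg_count

    def try_merge(a, b):
        """Merge two tokens adjacent in one dimension and identical in the other.
        Merging in the segment dimension is tried first, as in rule 2."""
        if (a.slot_start, a.slot_count) == (b.slot_start, b.slot_count):
            lo, hi = sorted((a, b), key=lambda t: t.seg_start)
            if lo.seg_start + lo.seg_count == hi.seg_start:
                return BlockToken(a.slot_start, a.slot_count,
                                  lo.seg_start, lo.seg_count + hi.seg_count)
        if (a.seg_start, a.seg_count) == (b.seg_start, b.seg_count):
            lo, hi = sorted((a, b), key=lambda t: t.slot_start)
            if lo.slot_start + lo.slot_count == hi.slot_start:
                return BlockToken(lo.slot_start, lo.slot_count + hi.slot_count,
                                  a.seg_start, a.seg_count)
        return None

    def best_fit(pool, slots_needed, segs_needed):
        """Rule 3: the smallest-area token that still satisfies the request."""
        fitting = [t for t in pool
                   if t.slot_count >= slots_needed and t.seg_count >= segs_needed]
        return min(fitting, key=lambda t: t.area, default=None)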
Returning tokens to their home node increases the probability that two consecutive tokens in the free list can be merged. If the "gravity" of the home node is too strong, the scheme reduces the degree of resource sharing and causes unnecessary signalling; if it is too weak, fragmentation remains a problem.
To evaluate the fragment merging mechanism we ran another set of simulations with three different simulator configurations [A, B, C]. Simulator A starts without any fragmentation and uses the fragment merging scheme described above. Simulator B starts with a maximally fragmented resource pool, in which every token covers a single slot and no token is connected to its home node, and then lets the fragment merging mechanism work. Finally, simulator C starts with the maximally fragmented resource pool but does not use the fragment merging mechanism. Slot reuse is enabled in all cases, and the load is fixed at 80%.
Figure 15 shows the access delay as a function of simulated time for a 10 km network. Simulator C starts with a long access delay, and the delay keeps growing as the signalling channels become overloaded and the message queues build up. Simulator B, which uses the fragment merging mechanism, starts out as badly as C, but after 10 ms the average access delay falls below 500 ms. Later, after about 1 second of simulated time, the B curve approaches A, i.e. it converges towards the behaviour of a simulator that starts without any fragmentation. The convergence speed depends on the amount of free capacity in the network and thus on the load, which is 80% in these simulations. Clearly, the fragment merging mechanism improves the access delay, and the block token optimization scheme is also of real value in the distributed implementation.
3.4 DTM performance
In this paper we are mainly interested in two performance metrics: utilization and access delay. Utilization is the fraction of the nominal network capacity that is actually used for data transfer, and is a measure of the efficiency of the network. Access delay is the time from the arrival of a user request until the first data of that request is sent; we consider it an important measure of how well computer communication traffic can be supported.
Two main factors influence utilization in DTM. First, each node is assigned signalling capacity in the form of control slots, which means that on a bus with many nodes and a given fixed link capacity, fewer slots are available for data transfer. Second, token reallocation incurs overhead, since a slot cannot be used for data transfer while its token is being reallocated between two nodes.
The access delay depends mainly on the load on the control slots and on how many control messages need to be sent to establish a channel. The access delay is normally the sum of several delays (typical values in brackets): node controller processing delay [5 μs], the delay in finding and allocating free tokens [100 μs], waiting for the first available control slot to pass [50 μs], and waiting for the first allocated data slot that can carry user data [62.5 μs]. In addition, messages arriving at a node controller are queued while waiting to be processed. In the simulations presented in 5.2, the average queueing delay is at most a few hundred microseconds.
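Summing the typical fixed components quoted above (and ignoring the queueing delay) gives a rough lower bound on the access delay; the sum itself is our own back-of-the-envelope arithmetic:

    5 us + 100 us + 50 us + 62.5 us = 217.5 us

so a queueing delay of a few hundred microseconds at a loaded node controller is of the same order as all the fixed components together.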
4. Network simulation
In the simulation model, each transfer begins with the arrival of a new "packet" of information. The node controller tries to allocate resources for the transfer, sends the packet, and finally releases the channel. This is a simplification of the mechanism in a real system, where channel establishment, data transfer and channel tear-down are independent operations initiated by the user. For example, a user who knows that a transfer is about to take place can "hide" the channel set-up delay by requesting the channel in advance, so that the channel is already established when the transfer starts. Between channel establishment and tear-down, the capacity of the channel is reserved entirely for the user. The most straightforward use of a channel is for a single transfer, such as a file transfer or a video stream.
The use of a channel can be optimized according to the characteristics of the application. For example, a channel can be used to carry sequences of higher-layer messages such as ATM cells or IP packets. If it is a multicast channel, messages for different destinations can be multiplexed onto it. This means that every message reaches every receiver on the multicast channel, and the receivers must be able to filter the messages. An alternative solution is to create and tear down a channel for each message, but to retain the tokens between messages so that they are immediately available for the next message in the queue. We do not add this kind of user behaviour to the simulations, since it is only an optimization for specific applications; instead we concentrate on how the network performs without user-level optimizations.
The sender may start sending data as soon as it has been allocated the resources, even if the receiver has not yet received the channel establishment message. This is called fast channel establishment [24]. The receiver eventually replies with a control message accepting or rejecting the channel.
A user request has the following parameters:
Packet size: the amount of user data transferred between channel establishment and channel tear-down. We simulate packet sizes from a few kilobytes up to a few megabytes.
Requested channel capacity: the number of slots the node tries to allocate. In all simulations in this paper the requested channel capacity is fixed at 40 slots, i.e. 20.48 Mbit/s.
Minimum acceptable capacity: if a node cannot allocate this number of slots, it blocks the request. It is normally set to 40 or 20 slots (100% or 50% of the requested capacity).
Source address
Destination address
Source and destination addresses are generated randomly (all nodes with equal probability), and the inter-arrival times of user requests are exponentially distributed. The simulations study utilization, channel set-up delay and blocking, and the influence of signalling capacity and slot reallocation overhead.
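For illustration, the simulated request stream can be sketched as below; the record mirrors the parameters listed above, and excluding the case source = destination is our own assumption, since the text only states that all nodes are equally probable.

    import random
    from dataclasses import dataclass

    @dataclass
    class UserRequest:
        arrival_time: float
        packet_size: int         # user data sent between set-up and tear-down
        requested_slots: int     # 40 slots = 20.48 Mbit/s in all simulations
        min_slots: int           # minimum acceptable capacity (40 or 20 slots)
        source: int
        destination: int

    def generate_requests(num_nodes, mean_interarrival, packet_size, horizon):
        t, requests = 0.0, []
        while t < horizon:
            t += random.expovariate(1.0 / mean_interarrival)    # Poisson arrivals
            src = random.randrange(num_nodes)
            dst = random.randrange(num_nodes)
            while dst == src:
                dst = random.randrange(num_nodes)
            requests.append(UserRequest(t, packet_size, 40, 20, src, dst))
        return requests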
We simulate a topology with the following characteristics:
A dual bus with 100 nodes. Although more nodes could in principle be attached to a bus, we consider it unlikely that a network with more than 100 nodes attached to one bus would be managed as a single unit. With 100 nodes attached, the capacity sharing is sufficient to exercise the token management protocol.
The capacity of each bus is 6.4 Gbit/s. We believe such a capacity is realistic within a year or two: 2.4 Gbit/s fibre links have been available for several years, and 10 Gbit/s links have been announced and will soon reach the market. 6.4 Gbit/s corresponds to a 100 MHz slot rate, which is the rate at which the slot-processing MAC hardware operates; 100 MHz is feasible in current CMOS technology.
The total signalling capacity is the same for all nodes, but the control slots are divided between the two fibre directions in proportion to where on the bus a node is located: the closer a node is to the slot generator, the more control capacity it needs in that direction. The aggregate control capacity is nevertheless the same for all nodes on the two buses. In the networks with token servers, the server nodes are given more control capacity and higher processing capacity than the other nodes.
The bus length is 10 km, which gives a sufficiently large network while the influence of propagation delay can still be neglected. The influence of propagation delay is studied further in the simulations shown in Figure 19 and Figure 21, which use bus lengths of 1 km, 10 km, 100 km and 1000 km.
Two different token management schemes are simulated: an asymmetric scheme in which all tokens of a fibre are managed by a single token server, and a symmetric scheme in which each node manages its own fraction of the global token pool.
5. Performance
5.1 The ideal protocol
When analysing the performance of the DTM dual bus network, the question of the theoretical maximum performance must be addressed, so that the simulated performance can be compared against it. We also use the theoretical maximum performance to compare the different schemes and implementations evaluated in this paper.
The maximum throughput of a dual bus system without slot reuse can be defined as twice the link capacity, assuming that both fibres receive the same amount of traffic. In a system with slot reuse, the system throughput also depends on the source-destination distribution. To obtain this throughput for the dual bus, we use a Monte Carlo simulation with uniformly distributed source and destination addresses (see the left graph of Figure 16). The right graph of Figure 16 also includes the performance of a DTM network. The DTM network uses the centralized token manager, and each user request transfers 4 kbyte of information. In this system the signalling capacity is not a bottleneck, and the utilization is found to be close to the ideal case. Realistic traffic with this behaviour would be bulk data transfer and audio. The small differences between them arise because: first, some capacity in DTM is used for control slots, which reduces the number of slots available for data transfer; second, the random generator used in the DTM simulation does not produce exactly the same amount of traffic in the two directions, so blocking can occur in one direction while capacity is still available in the other; third, resources may be locked temporarily during channel establishment, wasting some capacity.
5.2 The centralized token manager
With a centralized token manager, the managing node needs considerably more signalling capacity than the other nodes (we assign the server node eight times as many control slots as the other nodes).
Figure 17 shows the first set of simulation results. Users request 20 Mbit/s channels, the inter-arrival times are exponentially distributed (generated by a Poisson process), and the simulation is repeated for different packet sizes. If the full capacity of the channel cannot be allocated, the request is rejected and the tokens are returned to the token server. The packet size is varied between 4 Mbyte and 2 kbyte, at which point we can see the throughput degrade.
Throughput degradation can occur if the processing capacity or the control channel capacity of a node is too small. In particular, the server node may become overloaded, with the result that its queue of control messages grows very large. The capacity represented by the tokens held up in the queue is not used, and so the throughput drops.
In the simulation with 4 kbyte per channel, the control capacity is the limiting factor; if more control slots (signalling capacity) are added, packets of 4 kbyte and smaller can be supported more efficiently.
The next set of curves, in Figure 18, shows how the slot reuse mechanism improves the performance of the system. Throughput almost doubles before any large number of channels is rejected. The uniform distribution of channel sources and destinations limits the capacity gain that slot reuse can bring: it has been shown that if sources and destinations are generated uniformly over a dual bus, as we do, throughput can be doubled [21]. The simulation also shows that at an offered load of 2.5 we can in fact obtain a throughput higher than 2.0. This throughput level cannot be reached, however, without rejecting some channels; the channels with the highest probability of being rejected are those that use many slots or whole segments, so the system "filters" the user requests in favour of the less greedy ones, and the remaining requests are lost. Normally this is unacceptable behaviour, so we have not studied it further.
For 4 kbyte transfers the throughput degrades at an offered load of 1: even though the token server has sufficient resources, the control channel becomes a bottleneck and channels cannot be established and torn down quickly enough. Likewise, for 8 kbyte transfers the throughput degrades at an offered load of 1.8, for the same reason.
From the simulations of Figure 18 we conclude that, as long as the control capacity and the server processing capacity are not bottlenecks, the slot reuse mechanism doubles the system throughput with only minor modifications to the centralized token protocol. The curves also show that the access delay actually decreases when the load increases from 0.1 to 0.5; this is a consequence of how slots are assigned to channels and has nothing to do with the token request procedure. At high load, the time needed to request tokens from the server increases sharply.
Comparing the DTM performance of Figure 18 with the theoretical values of Figure 16, we see that even very short bursts (a few milliseconds in duration) can be supported efficiently.
5.2.1 Performance of the centralized token server as a function of bus length
With a single token server, every channel establishment requires tokens to be requested from the server. If the bus length is increased, the token requests take longer, which may limit the throughput and increase the access delay.
In Figure 19 the bus length is increased by a factor of 100, to 1000 km (a delay of 50 μs between nodes). Both access delay and throughput are now limited by the round-trip waiting time to the token server.
In this case the access delay depends on the distance to the server but is independent of the transfer size. Throughput depends strongly on the average transfer size, since the set-up phase is amortized over the data transfer phase.
Channels transferring large amounts of information, such as 256 kbyte over a duration of a tenth of a second, are still supported efficiently on a 1000 km bus.
5.2.2 Discussion
The centralized token manager has several benefits. The client side can be kept simple, holding only state information about its own open channels. Slot reuse is also simple and efficient, because the slot server holds all free tokens and can choose among them when trying to satisfy a user request. The server can also implement other policies, such as admission control and fairness. Fragmentation in the server's free token pool is normally modest, so even high-capacity user requests need only a few connection establishment messages per channel.
There are also drawbacks. A user that frequently establishes and tears down channels, always returning its tokens after use only to request them again shortly afterwards, may introduce excessive signalling. With many nodes on the bus, or a very small average packet size, the processing capacity of the server node may become overloaded. The round-trip time to the server may also limit performance if the length of the medium is large in relation to the time needed to transmit a packet (the number of bits per packet times the bit period). Finally, the server node holds state information relevant to all nodes and used for establishing channels, so a failure of the server node may affect all nodes.
5.3 Performance of distributed token control
In this part we simulate and study the behaviour of a fully distributed token manager.
5.3.1 Distributed token management
When evaluating the performance of the distributed token manager with slot reuse, we use the same traffic and parameters as in 5.2, except that a request is also accepted if at least 50% of the requested capacity can be allocated.
The results in Figure 20 are taken from simulations of a fully distributed token manager with slot reuse, with status messages describing how much capacity each node holds, and with the fragment merging scheme. All nodes have the same processing capacity, and the processing load is much lower than that received by the server in Figure 18. The dependence between nodes is also low, which gives higher reliability. This system performs better than a system without slot reuse, but not as well as the centralized system discussed earlier.
Compared with the centralized scheme (Figure 18), blocking is higher and sets in at a lower load.
An unexpected result is that the actual performance decreases when the packet size increases! After checking the results and rerunning the simulations, we found that a larger average transfer size gives the tokens lower mobility, and the picture that the status information gives of where free resources appear in the network is poorer than for short transfers. In this case a request is rejected outright if we believe that no resources can be found; this mechanism was introduced to avoid wasting control capacity when the resources are exhausted.
The reason is that the status messages only describe "global" tokens, i.e. tokens covering all segments of the bus. A global token can be used by any node, and global tokens are the only form of token in a DTM system without slot reuse. At loads above 1.0 a large number of tokens are segmented, and the reuse scheme needs them for new requests. The status message mechanism we use (designed for a system without slot reuse) is therefore limited in its ability to help new requests find free capacity, and in the worst case this leads to a higher degree of blocking.
5.3.2 Performance of the distributed token server as a function of bus length
Figure 21 shows the throughput and access delay of the distributed token manager when the bus length is varied from 1 km to 1000 km. A 16 kbyte packet is sent between channel establishment and tear-down. The 1 km and 100 km buses give roughly the same throughput and access delay as the 10 km bus, because the waiting time introduced by the 125 μs cycle dominates over the time-of-flight in the system. For the 1000 km bus, the access delay is shorter than for the 1000 km system with a centralized token server (see Figure 19). At low load, tokens are found close by and the access delay is independent of the bus length, roughly the same for all systems. Even at higher load the access delay is only about 1 millisecond, shorter than for the centralized system in Figure 19.
5.3.3 Bursty and client-server traffic
As long as the processing capacity and the signalling capacity are sufficient, the centralized token manager system has the same performance almost independently of the traffic pattern. To evaluate the distributed system we therefore use two additional traffic generators. First, a generator that simulates user requests arriving in bursts: when a request arrives, a new request arriving 200 μs later is generated with 90% probability. Requests thus arrive at a node in bursts, giving the source addresses a strong temporal locality. Second, to produce traffic more like client-server behaviour, we increase the amount of traffic arriving at five server nodes (0, 25, 50, 75 and 99), so that the probability of a server node being the destination is also higher.
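A sketch of the bursty generator, using the 200 μs spacing and the 90% continuation probability given above, could look as follows; the mean burst length is then 1/(1 - 0.9) = 10 requests.

    import random

    def burst_arrival_times(first_arrival_us, p_more=0.9, spacing_us=200.0):
        """Arrival times (in microseconds) of one burst of user requests at a node."""
        times = [first_arrival_us]
        while random.random() < p_more:        # 90% chance of one more request
            times.append(times[-1] + spacing_us)
        return times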
Figure 22 shows the throughput and access delay of the distributed token server system under these traffic conditions.
5.3.4 Discussion
Clearly, compared with the centralized token server, the distributed implementation has several advantages: the nodes share the processing load, there is little need for a high-performance token server, redundancy can be high, and the access delay for low-capacity requests is low. It also copes with long buses. The drawback is higher blocking. It is also clear that the status message and status table mechanisms must be revised when slot reuse is allowed, in order to avoid unnecessary blocking.
6. Further work and conclusions
We find that the DTM fast circuit-switching protocol works well in a dual bus shared-medium environment. Two slot (token) management schemes have been analysed. The centralized scheme comes closest to the ideal protocol and is relatively simple.
The distributed scheme is more sensitive to user behaviour; it relies on frequently broadcast status information and needs the fragment merging scheme. The main advantage of the distributed scheme is that the access delay is decoupled from the round-trip time on the bus.
One conclusion is that the status message scheme does not work well with slot reuse and needs to be redesigned. Further work includes evaluating a combination of the distributed and centralized token managers that uses a small number of token server nodes.
Another conclusion is that even for small transfers (a few kilobytes) the channel set-up overhead is very low, resulting in high utilization. Even at high load the access delay is only a few hundred microseconds. The slot reuse scheme doubles the throughput and can be realised without introducing any additional hardware in the nodes.
C. Supplement
The network is not limited to a dual bus but can be realised with any kind of structure, for example a ring structure with an arbitrary number of nodes. Apart from optical fibre, the transmission medium can be coaxial cable or any other medium with high bandwidth. In the description the transmission medium is referred to as optical fibre. In a preferred embodiment, the bandwidth of the DTM dual bus is divided into cycles of 125 μs, which in turn are divided into 64-bit time slots. The invention is not restricted to DTM networks using these values; it can be used in networks with cycles and time slots of arbitrary size.
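With these preferred values, the numbers quoted earlier fit together: 64-bit slots at the 6.4 Gbit/s bus rate give the 100 MHz slot rate mentioned in the simulation section, and a 125 μs cycle then contains

    6.4 Gbit/s / 64 bits per slot = 100 * 10^6 slots/s (100 MHz)
    100 * 10^6 slots/s * 125 us   = 12 500 slots per cycle and fibre

(the figure of 12 500 slots per cycle is our own back-of-the-envelope arithmetic, not a number stated in the text).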
Clearly, system performance increases with slot reuse: in a dual bus with uniformly distributed source-destination pairs, the throughput has been confirmed to double. The performance can improve even more in other types of network; for example, in a dual ring with uniformly distributed source-destination pairs the throughput is quadrupled.
A consequence of slot reallocation in DTM is that channels requiring large bandwidth take longer to establish. This trade-off is reasonable: traffic types that need a low medium-access delay are normally not very sensitive to how much bandwidth is allocated to the transfer, so such traffic can also be accepted without using the reallocation protocol. For transfers needing large bandwidth the access delay will be higher and the reallocation protocol is always used; broadband transfers, however, are likely to be insensitive to access delay.
The simulations described above show that the fast circuit-switching protocol works well in a dual bus medium environment divided according to DTM. Two slot management schemes have been analysed; both work well and benefit from slot reuse. The centralized scheme comes closer to the ideal protocol and is at the same time easier to implement. The distributed scheme is more sensitive to user behaviour, relies on frequently transmitted status information, and needs the fragment merging mechanism to reduce the number of control messages required for channel establishment and slot reallocation. On a long bus, the distributed scheme with fragment merging gives better performance than the centralized scheme. A resource management system that combines the centralized and distributed schemes, using a small number of token server nodes, is also possible.
In addition, the connection set-up overhead can be made very low, which results in a very high level of utilization even for small (a few kilobyte) transfers. Even at high load the access delay is just a few hundred microseconds. The slot reuse method can double the performance (on a dual bus) at the price of additional hardware in the nodes. When slot reuse is used, fragmentation can occur both in the slot dimension and across bus segments, so the fragment merging scheme becomes even more important.
D. More detailed discussion of and remarks on the above description
In the description above, a centralized system has been described in particular detail, characterised in that there is a single home node for all tokens, and further in that free (idle) tokens are always returned directly to the home node.
In a fully distributed system, all nodes are home nodes (server nodes), and the tokens are distributed evenly among all the home nodes.
With the fragment merging method, tokens finally return to their home node after a validity time has expired (for example when they have been idle for a certain time, the so-called "idle time", or when they have not visited their home node for a certain time, the so-called "latest home node time"). We speak of this in terms of home node gravity: low gravity corresponds to a long validity time, and high gravity to a short validity time. The centralized system described above has infinitely high gravity (validity time = 0), while a distributed system that does not use the fragment merging mechanism has gravity = 0 (validity time = ∞).
It should also be pointed out that the resource management scheme can be applied anywhere between these extreme cases. The number of home nodes can vary between 1 and the total number of nodes, and the gravity can vary independently between infinity and 0. Merging of block tokens towards whole segments should preferably be carried out before merging in the slot direction.
Some examples are given below, with reference to the accompanying drawings, by way of illustration.
Figure 23 shows a token map with two block tokens A and B that are adjacent in the slot dimension. Normally they are not split/merged, because the resulting segments would become shorter.
Figure 24 shows two block tokens C and D that are adjacent in the segment dimension. Normally the two block tokens C and D are split along the dotted lines, after which the two resulting tokens C' and D' are merged into C'D', whose segment length is twice the original. Altogether this produces three new block tokens: C'', D'' and C'D'.
Figure 25 shows three block tokens E, F and G. Preferably, F and G are merged into FG, which increases the segment length.
Figure 26 shows two block tokens H and I. The requested capacity corresponds to half of H plus the whole of I, and is to be sent from node A, NA, to node B, NB. H is selected for the transfer and is split along the dotted line.
Figure 27 shows token I. The requested capacity corresponds to half of I and is to be sent from node A, NA, to node B, NB. A part of I is selected for the transfer; this part can be taken from the top or, as in Figure 27, from the bottom of I. The remainder must then be split (block tokens are normally rectangles); preferably the split is made along the dotted line, so that the segments are kept as large as possible.
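A sketch of such a split, reusing the rectangle representation of the earlier BlockToken example (repeated here so that the snippet is self-contained), is shown below; it cuts off the requested slots over the requested segment range and keeps the leftover pieces with as large a segment span as possible, in the spirit of Figures 26 and 27. It is our illustration of the idea, not the patented procedure itself.

    from collections import namedtuple

    BlockToken = namedtuple("BlockToken", "slot_start slot_count seg_start seg_count")

    def split_for_request(token, slots_needed, seg_start, seg_count):
        """Return (granted, leftovers) when part of a block token is handed out."""
        granted = BlockToken(token.slot_start, slots_needed, seg_start, seg_count)
        leftovers = []
        if slots_needed < token.slot_count:
            # slots not granted keep the full segment range, so segments stay large
            leftovers.append(BlockToken(token.slot_start + slots_needed,
                                        token.slot_count - slots_needed,
                                        token.seg_start, token.seg_count))
        if seg_start > token.seg_start:
            # segments below the requested range, for the granted slot rows
            leftovers.append(BlockToken(token.slot_start, slots_needed,
                                        token.seg_start, seg_start - token.seg_start))
        seg_end, token_end = seg_start + seg_count, token.seg_start + token.seg_count
        if seg_end < token_end:
            # segments above the requested range, for the granted slot rows
            leftovers.append(BlockToken(token.slot_start, slots_needed,
                                        seg_end, token_end - seg_end))
        return granted, leftovers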
This need not, of course, be done in exactly this way. In some applications one may encounter situations where merging in the slot dimension should be carried out before merging in the segment dimension.
Normally, the status tables only contain information about time slots that are free over all segments. One may also encounter cases where information about segments is included in the status tables; this, however, means that more information has to be distributed to the other nodes.
A further preferred mechanism is described below with reference to Figure 28. If node O, NO, holds access rights to the capacity (b-a) over the whole bus but only needs to send the capacity (b-a) as far as node N, NN, then it can pass on the rights to the segments at NN and beyond; these are sent to NN for future demands. If node N, NN, holds access rights to the capacity (time slots) (d-c) and sends this capacity on to node M, NM, it additionally sends the free segments at NM and beyond to node M, NM, for future demands. The token block (NO-NN) × (d-c) is sent back down to node O, NO, for possible future demands. This requires extra signalling, so the token block (NO-NN) × (d-c) may be sent with the lowest priority.
Fig. 1 The DTM multiplexing format;
Fig. 2 DTM nodes attached in a network configuration;
Fig. 3 A multicast group;
Fig. 4 Throughput and access delay for different packet sizes;
Fig. 5 Throughput and access delay for strict capacity requests when retries are allowed (16 kbyte transfers);
Fig. 6 Throughput and access delay for different user requirements (16 kbyte transfers);
Fig. 7 Network throughput and access delay as a function of bus length (16 kbyte transfers);
Fig. 8 Throughput and access delay on a 20 x 20 grid;
Fig. 9 A DTM network with dual bus structure;
Fig. 10 The 125 μs DTM cycle;
Fig. 11 A token map showing slot numbers and segments;
Fig. 12 A slot-segment map with slot reuse;
Fig. 13 The distributed token server;
Fig. 14 User requests with an increasing number of retries (16 kB);
Fig. 15 Fragment merging: access delay as a function of simulated time;
Fig. 16 Theoretical dual bus throughput and DTM throughput;
Fig. 17 The centralized token server;
Fig. 18 The centralized token server with slot reuse;
Fig. 19 A 1000 km bus with a centralized token server;
Fig. 20 Utilization and access delay: the distributed token server;
Fig. 21 1-1000 km buses with a distributed token server;
Fig. 22 The distributed token server;
Figs. 23-28 Slot-segment maps.

Claims (40)

1. A method for centralized management of capacity in a circuit-switched network having a bus or ring structure, wherein the bandwidth used by the network is divided into cycles, which in turn are divided into control time slots for signalling and data time slots for the transfer of data, each data time slot being associated with a token, the method being characterised in that:
a first node, called the server node, is allocated the tokens corresponding to all data time slots flowing in one direction on the bus or ring;
a second node requests from the server node tokens corresponding to a certain capacity; and
when the server node has the requested capacity unused, it reserves the tokens corresponding to the requested capacity and transfers them to the other node.
2. according to the method for claim 1, it is characterized in that this method realizes in DTM (dynamic synchronous transfer mode) type network.
3. according to the method for claim 1 or 2, it is characterized in that when corresponding data slot was not used further to transmit data, the token of transmission was returned to server node and is released.
4. according to the method for claim 1 or 2, it is characterized in that when corresponding data slot in effective time also is not used to transmit the data of described other node, just resending the token that is transmitted to server node and discharging them.
5. the method any according to claim 1-4, it is up to it is characterized in that selecting to make 1/3 node to be suitable for to server node, and 2/3 node is suitable for descending.
6. A method for distributed management of capacity in a circuit-switched network having a bus or ring structure, wherein the bandwidth is divided into cycles, each cycle in turn being divided into control time slots for signalling and data time slots for the transfer of data, each data time slot being associated with a token (write access), characterised in that:
at least two nodes, called server nodes, are defined, among which the tokens corresponding to all data time slots flowing in one direction on the bus or ring are distributed;
a node requests tokens corresponding to a certain capacity from at least one server node; and
when a server node has the requested capacity unused, that server node reserves the tokens corresponding to the requested capacity and transfers them to the requesting node.
7. according to the method for claim 6, it is characterized in that described method realizes on DTM (dynamic synchronous transfer mode) type network.
8. according to the method for claim 6 or 7, it is characterized in that when described several server node stack ups have the not use capacity of request, require several server nodes to keep the token of stack up, and send them to requesting node corresponding to whole request capacity.
9. the method any according to claim 6-8, it is characterized in that when a node has the token that receives corresponding to the request capacity, making the token that receives be used to set up channel by setting up message to one or several node transmitting channel that is designated as the data receiver.
10. the method any according to claim 6-8, it is characterized in that when described one or several server node stack up does not have the not use capacity of being asked they keep corresponding to all not using the token of capacity and send them to requesting node.
11. the method any according to claim 6-8 is characterized in that they do not transmit any token to requesting node when described one or several server node stack up does not have the not use capacity of request.
12., it is characterized in that when the token that may receive does not correspond to the capacity of being asked that node is done the request in a step to token according to the method for claim 10 or 11.
13., it is characterized in that the token that is received does not correspond to the capacity of being asked, and just discharges these tokens in node according to the method for claim 10 or 11.
14., it is characterized in that when node does not receive token corresponding to the request capacity,, using the token that is received to set up channel by setting up message to the node transmitting channel that is designated as the data receiver according to the method for claim 10 or 11.
15. The method according to any of claims 6-14, characterised in that each server node periodically sends information about its free capacity to the other nodes, and each node stores the information received from the other nodes in its status table.
16. The method according to any of claims 6-15, characterised in that a node requests tokens from the server node which at that moment has the most unused capacity.
17. The method according to any of claims 6-15, characterised in that a node requests tokens from the nearest server node having the requested capacity unused.
18. The method according to claim 16 or 17, characterised in that the server node from which capacity is requested is found by consulting the status table.
19. The method according to any of claims 16-18, characterised in that all nodes on the bus or ring are defined as server nodes and are each allocated at least one token.
20. The method according to claim 19, characterised in that all nodes on the bus or ring are allocated approximately the same number of tokens.
21. The method according to any of claims 6-20, characterised in that when the corresponding data time slots are no longer used for data transfer, the transferred tokens are returned to the respective node, the so-called server node, and released.
22. The method according to any of claims 6-20, characterised in that the transferred tokens are released when the corresponding data time slots are no longer used for data transfer.
23. The method according to claim 22, characterised in that when the corresponding data time slots have not been used for transferring data within a validity time, the transferred tokens are reserved, returned to the respective node, the so-called server node, and released.
24. The method according to any of claims 1-23, characterised in that adjacent tokens in a node are represented by a single token, called a block token, which is designated by the slot number of the first time slot in a row and the total number of time slots in the row.
25. The method according to any of claims 1-24, characterised in that a token is divided into at least two tokens corresponding to the same time slots but to different segments, a segment being the part of the transmission medium that connects at least two nodes.
26. The method according to any of claims 1-25, characterised in that when a server node has several slot- or segment-adjacent token groups to choose from, the server node reserves and transfers tokens from the smallest slot- or segment-adjacent token group that satisfies the requested capacity, the smallest token group being defined as the group for which the product of the number of time slots, with a predetermined weight, and the number of segments is smallest.
27. The method according to any of claims 1-26, characterised in that at least segment-adjacent token groups are recombined so that the number of adjacent segments of each token group is maximized at the expense of the number of adjacent time slots.
28. A node controller in a node of a circuit-switched network having a bus or ring structure, wherein the bandwidth used by the network is divided into cycles, which in turn are divided into control time slots for signalling and data time slots for the transfer of data, each data time slot being associated with a token, the node controller being characterised in that:
the node controller is allocated tokens corresponding to a predetermined number of data time slots flowing in one direction on the bus or ring; and
the node controller is arranged to, when receiving a token request from a second node controller of a second node and having the requested capacity unused, reserve tokens and transfer them to the second node controller.
29. The node controller according to claim 28, characterised in that the network is a network of DTM (Dynamic synchronous Transfer Mode) type.
30. A node, also called a server node, in a circuit-switched network having a bus or ring structure, wherein the bandwidth used by the network is divided into cycles, which in turn are divided into control time slots for signalling and data time slots for the transfer of data, each data time slot being associated with a token, the node being characterised in that its node controller is allocated tokens corresponding to a number of data time slots flowing in one direction on the bus or ring, and in that the node controller is arranged to, when receiving a token request from a second node controller of a second node and having the requested capacity unused, reserve tokens and transfer them to the second node controller.
31. The node according to claim 30, characterised in that the network is of DTM (Dynamic synchronous Transfer Mode) type.
32. The node according to claim 30 or 31, characterised in that the node is allocated tokens corresponding to all data time slots flowing in one direction on the bus or ring.
33. The node according to claim 32, characterised in that the node serves 1/3 of the nodes of the bus or ring in the up direction and 2/3 of the nodes of the bus or ring in the down direction.
34. A circuit-switched network with a bus or ring structure, wherein the bandwidth used by the network is divided into cycles, which in turn are divided into control time slots for signalling and data time slots for data transfer, each data time slot being associated with a token, the circuit-switched network being characterised in that a node, called the server node, is allocated tokens corresponding to a predetermined number of data time slots flowing in one direction on the bus or ring, and is arranged to, when receiving a token request from a second node and having the requested capacity unused, reserve tokens and transfer them to the second node.
35. A circuit-switched network with a bus or ring structure, wherein the bandwidth used by the network is divided into cycles, which in turn are divided into control time slots for signalling and data time slots for the transfer of data, each data time slot being associated with a token, the network being characterised in that at least two nodes, called server nodes, are defined, among which the tokens corresponding to all data time slots flowing in one direction on the bus or ring are distributed, and further in that when a server node receives a token request from a third node and has the requested capacity unused, it reserves tokens and transfers them to the third node.
36. The circuit-switched network according to claim 34 or 35, characterised in that the network is a network of DTM (Dynamic synchronous Transfer Mode) type.
37. The circuit-switched network according to any of claims 34-36, characterised in that a node is arranged to, once it has received tokens corresponding to the requested capacity, establish a channel using the received tokens by transmitting a channel establishment message to the node acting as data receiver.
38. The circuit-switched network according to any of claims 35-37, characterised in that when one or several server nodes together do not have the requested capacity unused, they are arranged to reserve the tokens corresponding to all unused capacity and transfer them to the requesting node.
39. The circuit-switched network according to any of claims 35-37, characterised in that when one or several server nodes together do not have the requested capacity unused, they are arranged not to transfer any tokens to the requesting node.
40. The circuit-switched network according to any of claims 35-39, characterised in that the nodes are arranged to use a status table on the basis of which a node decides which server node to request tokens from, the status table being a list, revised from time to time, of the unused capacity of all the server nodes.