CN101188556A - Efficient multicast forwarding method based on a sliding window in a shared-memory switching fabric - Google Patents


Info

Publication number: CN101188556A
Application number: CNA2007101640166A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 汪洋, 余少华
Current Assignee: Wuhan FiberHome Networks Co Ltd
Original Assignee: Wuhan FiberHome Networks Co Ltd
Application filed by Wuhan FiberHome Networks Co Ltd
Legal status: Pending
Prior art keywords: address, cell, sliding window, multicast, sent
Classification: Data Exchanges In Wide-Area Networks
Abstract

The invention belongs to the front-end switching technology of Ethernet switching systems, and in particular relates to an efficient multicast forwarding method based on a sliding window in a shared-memory switching fabric. Instead of forwarding cells in strict FIFO order, the method adopts an output optimization based on a sliding window: it selects a suitable cell to forward within a certain range (the window width). This forwarding scheme significantly reduces the forwarding span of a multicast cell, and thereby reduces the residence time of the multicast cell in the shared memory area. The two most direct advantages of the method are that it improves the degree to which the forwarding of a multicast cell is synchronized across its destination ports, and that it reduces the consumption of shared memory space. Simulation results show that choosing a suitable window width effectively improves system performance in both respects, whereas widening the window excessively does not improve performance without bound; on the contrary, it increases computational complexity.

Description

Efficient multicast forwarding method based on a sliding window in a shared-memory switching fabric
Technical field
The invention belongs to the front-end switching technology of Ethernet switching systems, and specifically relates to an efficient multicast forwarding method based on a sliding window in a shared-memory switching fabric.
Background technology
A crossbar is a switching fabric widely used in high-speed backplane switching. An M*N crossbar switching fabric has M input ports and N output ports, which together form M*N crosspoint switches. If the switch at the intersection of input port i and output port j is closed, a path is formed from input port i to output port j, realizing the exchange from i to j. Clearly, at any moment each input port can be connected through a closed crosspoint to only one output port. A common crossbar has equal numbers of input and output ports; if M ≠ N, the number of crosspoints closed at the same time is bounded by the smaller of M and N. Fig. 1 shows a general crossbar-based switching model, which uses an N*N crossbar as the switching fabric and has both input and output queues. Earlier switching fabrics had only output queues; to improve throughput and avoid packet loss, combined input-output queueing (CIOQ) is now generally used. A single input queue, however, suffers from head-of-line (HOL) blocking, so virtual output queues (VOQ) were introduced: the input queue of each port is subdivided into N subqueues according to the N output ports, so that a packet that temporarily cannot be scheduled does not block the scheduling of the packets behind it. The most important crossbar scheduling algorithms include iSLIP and WFA, of which iSLIP has been applied successfully on the Cisco GSR 12000. iSLIP is essentially a heuristic algorithm that took shape from the earlier algorithms PIM and iRRM. What this family of algorithms has in common is a strategy of trading space for time: a centralized scheduling algorithm requires intensive computation, and given the computing power of the time, designing a centralized scheduler was impractical. The scheduling strategy is therefore distributed over 2*N arbiters, which speeds up its computation.
Scholars in Taiwan have recently proposed a two-stage scheduling model [1,2], shown in Fig. 2. This model adds a matrix with a load-balancing function in front of the traditional switching matrix. The crossbar switching fabric here may be called a Birkhoff-von Neumann switch. Denote the traffic from port i to port j by r_ij ≥ 0, so that R = [r_ij] is the traffic matrix of the switching fabric. When no bandwidth is oversubscribed, the traffic matrix satisfies

Σ_{i=1}^{N} r_ij ≤ 1 and Σ_{j=1}^{N} r_ij ≤ 1, for i, j = 1, ..., N.

Such a matrix can be augmented to a doubly stochastic matrix, i.e. one satisfying

Σ_{i=1}^{N} r_ij = 1 and Σ_{j=1}^{N} r_ij = 1, for i, j = 1, ..., N.

By the Birkhoff-von Neumann theorem, a doubly stochastic matrix admits a convex decomposition into permutation matrices: there exist an integer K, permutation matrices P_k, and coefficients φ_k such that

R = Σ_{k=1}^{K} φ_k P_k, where φ_k > 0 and Σ_{k=1}^{K} φ_k = 1.

Because every P_k is a permutation matrix, such a decomposition can achieve 100% throughput, as shown in Fig. 3.
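The decomposition above can be sketched with a greedy procedure: repeatedly find a permutation whose entries are all positive in the remaining matrix, take its smallest entry as the coefficient φ_k, and subtract. The following Python sketch is illustrative only (function names and the example matrix are not from the patent); Birkhoff's theorem guarantees the loop terminates for a doubly stochastic input.

```python
# Greedy Birkhoff-von Neumann decomposition sketch: express a doubly
# stochastic traffic matrix R as sum_k phi_k * P_k over permutation
# matrices P_k, with phi_k > 0 and sum_k phi_k = 1.

def find_permutation(R, eps=1e-9):
    """Backtracking search for a permutation with all-positive entries
    in R; returns perm with perm[row] = col, or None if none exists."""
    n = len(R)
    perm = [-1] * n
    used = [False] * n

    def extend(row):
        if row == n:
            return True
        for col in range(n):
            if not used[col] and R[row][col] > eps:
                used[col] = True
                perm[row] = col
                if extend(row + 1):
                    return True
                used[col] = False
        return False

    return perm if extend(0) else None

def birkhoff_decompose(R):
    """Return a list of (phi_k, perm_k) terms exhausting R."""
    R = [row[:] for row in R]              # work on a copy
    terms = []
    while True:
        perm = find_permutation(R)
        if perm is None:
            break
        phi = min(R[i][perm[i]] for i in range(len(R)))
        for i in range(len(R)):
            R[i][perm[i]] -= phi           # peel off phi * P_k
        terms.append((phi, perm))
    return terms

# A small 3x3 doubly stochastic example
R = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
terms = birkhoff_decompose(R)
total = sum(phi for phi, _ in terms)       # coefficients sum to 1
```

Scheduling then amounts to serving each permutation P_k for a fraction φ_k of the time.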
The crossbar is fast but relatively costly, so current crossbar-based switching fabrics are mainly used for backplane switching in large switching systems.
In a shared-memory switching fabric, the messages of all input/output ports are cut into fixed-length "cells", and these cells "share" one common memory area; after queueing and reassembly, the cells are output from their respective output ports. This model is characterized by high throughput, low delay, and efficient memory utilization. The operation of the chip is divided into "time slots"; in each slot, every input port can store one cell into the common memory, and at the same time every output port can take one cell out of it (if a cell destined for that port is present). A shared-memory switch is somewhat similar to an output-buffered switch, and likewise needs a centralized memory-management mechanism for cell scheduling and queue management in order to reach optimal throughput and delay. This centrally managed buffer makes the switching capacity limited by the read/write access time of the memory, since each time slot must accommodate N cell writes and N cell reads.
Fig. 5 shows the conceptual model of a linked-list-based shared-memory switching fabric. Cells from different ports are time-division multiplexed into a single stream, which is then converted into a parallel "word stream" whose width equals the cell length and written into the common memory. Inside the memory, cells destined for different output ports form different logical queues; the heads of line (Head of Line, HOL) of these output queues are taken out to the output ports in order, one per queue. The output stream is then time-division demultiplexed so that the cells appear at the output ports. Each logical queue is managed by two pointers, a head pointer (HP) and a tail pointer (TP). The head pointer points to the head of the queue, i.e. the HOL; the tail pointer points to the last cell of the queue, or to the next idle address, so that the next cell entering this queue can find where to be stored.
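The linked-list model above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation; the class and field names are assumptions, with the per-address "next" array playing the role of the link field and HP/TP kept per logical queue.

```python
# Minimal sketch of a shared-memory switch with linked-list logical
# queues: one common cell store, per-output-port queues threaded
# through a parallel "next" array, managed by head/tail pointers.

class SharedMemorySwitch:
    def __init__(self, n_ports, n_cells):
        self.cells = [None] * n_cells      # shared cell store
        self.next = [None] * n_cells       # link field per address
        self.free = list(range(n_cells))   # free-address list
        self.hp = [None] * n_ports         # head pointer per queue
        self.tp = [None] * n_ports         # tail pointer per queue

    def enqueue(self, port, cell):
        if not self.free:
            return None                    # buffer full: drop the cell
        addr = self.free.pop()
        self.cells[addr] = cell
        self.next[addr] = None
        if self.hp[port] is None:          # queue was empty
            self.hp[port] = addr
        else:
            self.next[self.tp[port]] = addr  # link behind the old tail
        self.tp[port] = addr
        return addr

    def dequeue(self, port):
        addr = self.hp[port]
        if addr is None:
            return None                    # nothing for this port
        cell = self.cells[addr]
        self.hp[port] = self.next[addr]    # advance HP along the list
        if self.hp[port] is None:
            self.tp[port] = None
        self.free.append(addr)             # release the address
        return cell
```

In one time slot, each input port performs one enqueue and each output port one dequeue against the common store.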
The switching fabric is the core that accomplishes the forwarding function in a switch/router. According to the buffering policy applied to arriving cells, switching fabrics can be divided into input-queued, output-queued, and shared-memory-queued fabrics. The first two implementations usually also contain a space-division element, such as the crossbar backplane introduced above. Existing studies show that output queueing achieves optimal performance while avoiding factors that reduce system efficiency, such as head-of-line (HOL) blocking. Unlike the input- or output-queued types, a shared-memory switching fabric realizes the whole switching function in one common memory. Its queueing discipline is very similar to output queueing, and because all queues share the memory space, utilization of the storage resource is improved. Since no additional space-division structure is used, the shared-memory switching fabric achieves higher efficiency at lower implementation cost, and therefore dominates in single-board switches and in the front-end stages of large switching systems.
Traditional multicast switching fabrics include cell-copy switching, address-copy switching, and prioritized switching. For simplicity of description, assume the switching fabric has 4 input and 4 output ports, denoted #0 to #3, and that the shared memory holds 9 cells. A unicast cell entering at input port #0 is to be switched to output port #0, a multicast cell entering at input port #3 is to be switched to output ports #2 and #3, and buffer overflow may occur.
The multicast cell-copy switching fabric shown in Fig. 6 first copies an arriving multicast cell to the output queue of each of its destination ports and stores the copies in the shared buffer area. This fabric therefore only requires an added cell-copy circuit at the input side. Suppose the cells destined for ports #0 and #2 are stored at addresses 0 and 5 of the shared memory respectively, while the copy destined for output port #3 is dropped because the shared memory is exhausted. The memory cells storing the newly arrived cells are linked to the tails of the corresponding output queues. In the figure, the mark ○ indicates a newly allocated address, and the cell stored at the first address of each queue will be sent out of that queue's output port.
The multicast address-copy switching fabric shown in Fig. 7 does not copy the arriving multicast cell itself; instead it copies the cell's memory-block address into the address copy queue (ACQ) of each destination port. In this example, the address of memory block 5 is replicated and placed into the output queues of #2 and #3. The new unicast cell is stored at address 0, and this address joins the output queue of #0. Since this structure does not duplicate the multicast cell, a multicast cell cannot be released from the buffer before it has been forwarded to all of its destination ports. To determine when a multicast cell can be released, a counter is usually employed: when the multicast cell arrives, the counter is set to the number of its destination ports; each time a destination port sends this multicast cell once, the counter is decremented by 1; when the counter reaches 0, the multicast cell is released from the buffer and the corresponding address becomes free. In the example of Fig. 7, the counters of memory addresses 0 and 5 are set to 1 and 2 respectively; meanwhile, the counters of addresses 1, 2, 4, and 6 have each been decremented by 1. At this point addresses 2 and 4 become free because their counters have reached 0, while addresses 1 and 6 are still in use.
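The counter-based address-copy mechanism just described can be sketched as below. This is a hedged illustration of the general ACQ idea, not the patent's circuit; class and method names are assumptions, and addresses are allocated by a simple increasing counter for clarity.

```python
# Sketch of the address-copy queue (ACQ) mechanism: a multicast cell
# is stored once; its address is copied into the ACQ of each
# destination port; a counter tracks how many destinations still must
# send it, and the address is freed when the counter reaches 0.

from collections import deque

class AddressCopySwitch:
    def __init__(self, n_ports):
        self.store = {}                    # addr -> cell payload
        self.count = {}                    # addr -> pending destinations
        self.acq = [deque() for _ in range(n_ports)]
        self.next_addr = 0                 # toy address allocator

    def arrive(self, cell, dest_ports):
        addr = self.next_addr
        self.next_addr += 1
        self.store[addr] = cell
        self.count[addr] = len(dest_ports)  # one per destination port
        for p in dest_ports:
            self.acq[p].append(addr)        # copy the address only
        return addr

    def send_one(self, port):
        if not self.acq[port]:
            return None
        addr = self.acq[port].popleft()
        cell = self.store[addr]
        self.count[addr] -= 1               # one destination served
        if self.count[addr] == 0:           # last copy sent:
            del self.store[addr]            # release the shared memory
            del self.count[addr]
        return cell
```

A unicast cell is simply the special case with a single destination port, released after its one send.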
The prioritized handling of multicast cells uses a dedicated multicast queue (DMQ) to hold multicast cells, and this queue is given priority over the other queues; in this way a cell need not be duplicated into multiple copies in the shared memory. In the example of this model shown in Fig. 8, the multicast cell stored at shared-buffer address 1 is to be forwarded to output ports #1 and #2; only afterwards are the unicast cells bound for #0 and #3 (stored at addresses 2 and 4) sent. A masking circuit is usually adopted to realize this: when the circuit is about to send a multicast cell on certain ports, it makes the unicast cells invisible to those ports, i.e. the unicast output queues are masked on those ports. Unicast cells are therefore temporarily blocked while a multicast cell is being sent.
The defect of the DMQ scheme is that once the multicast traffic load reaches a certain level it severely hinders the forwarding of unicast cells, because multicast cells have higher priority. To address this, researchers have proposed round-robin and weighted round-robin strategies [3], but in many cases these in turn cause head-of-line blocking of the multicast queue, since there is only one multicast queue. ACQ solves the blocking problem better, but brings extra overhead: besides the memory for the cells themselves, additional address queues are needed, and they coexist in the common shared memory space.
Another shortcoming of an ACQ implemented in FIFO (First In First Out) fashion is that, under heavy traffic, it becomes a bottleneck for memory utilization. In the address queues shown in Fig. 9, if ports 0 through 3 send the cells pointed to by their address queues strictly in FIFO order, then after one full sending cycle the shared memory has released only the storage addresses of two unicast cells, namely addresses 5 and 9.
Summary of the invention
The object of the invention is, in view of the problems of existing multicast forwarding mechanisms, to provide an efficient multicast forwarding method based on a sliding window in a shared-memory switching fabric, thereby improving the degree to which multicast cell forwarding is synchronized across the destination ports and reducing the consumption of shared memory space.
The technical scheme of the invention is as follows. An efficient multicast forwarding method based on a sliding window in a shared-memory switching fabric comprises the steps of:
(1) The cell address queues waiting for output are numbered in order of arrival; several consecutive columns of cell addresses constitute the sliding window, the number of columns constitutes the window width Wid(SW), and the two sides of the sliding window are its front edge FE(SW) and back edge BE(SW) respectively;
(2) the window width Wid(SW) is set to a value W0, with front edge FE(SW) = 0 and back edge BE(SW) = W0 - 1;
(3) for a multicast cell address at the front edge of the sliding window, let the cell indicated by the head of the address queue of port i be the multicast cell MC_i, for i = 0 to N-1, where N is the number of output ports, and examine the current counter value Count(MC_i) of this multicast cell:
in the first case, if Count(MC_i) > Inst(MC_i, SW), where Inst(MC_i, SW) is the number of instances within the sliding window, search backward through the cells behind MC_i; if within the window width some address points to a unicast cell, temporarily mark that unicast address as an address to be sent; otherwise, temporarily mark the head address as an address to be sent;
in the second case, if Count(MC_i) = Inst(MC_i, SW), temporarily mark all instances of MC_i within the current sliding window, in every address queue, as addresses to be sent;
(4) if every cell address at the front edge of the sliding window has been marked to be sent, translate the sliding window backward by one address;
(5) send the corresponding cells according to the to-be-sent addresses of each port address queue; for a unicast cell, release its memory immediately; for a multicast cell, decrement its counter Count(MC) by the number of instances forwarded, and release its memory if the counter reaches 0; then go to step (3).
In the above efficient sliding-window multicast forwarding method in a shared-memory switching fabric, in step (3), if there is no multicast cell address at the front edge of the sliding window, each unicast cell address at the front edge is marked as an address to be sent, the sliding window is translated backward by one address, and the method goes directly to step (5).
In the above efficient sliding-window multicast forwarding method in a shared-memory switching fabric, in step (3), for the first case, if some address within the window width is already marked to be sent, then let the cell indicated by the head of the address queue of port i be the unicast cell UC_i and search backward through the cells behind UC_i; if some address within the window width is marked to be sent, go to step (4); otherwise temporarily mark this unicast address as an address to be sent.
In the above efficient sliding-window multicast forwarding method in a shared-memory switching fabric, in step (3), for the second case, if some other multicast cell address in the port address queues is already marked to be sent, compare the distances from the front edge of the already-marked address and of the address about to be marked: the one nearer to the front edge is set as the address to be sent, and if the farther one had been set as the address to be sent, its mark is erased. If the cell previously marked to be sent is a unicast cell, it gives way to this multicast cell address.
The beneficial effects of the invention are as follows. The method significantly reduces the forwarding span of a multicast cell and thereby its residence time in the shared memory. Its two most direct advantages are that it improves the degree to which multicast cell forwarding is synchronized across the destination ports and that it reduces the consumption of shared memory space. Its performance is at least that of the existing address-copy mechanism, and under given multicast traffic ratios and multicast fan-outs it improves significantly on DMQ and ACQ.
Description of drawings
Fig. 1 is a schematic diagram of a switching fabric model with input/output queues and VOQ.
Fig. 2 is a schematic diagram of the two-stage scheduling model.
Fig. 3 is a schematic diagram showing that the Birkhoff-von Neumann permutation-matrix decomposition achieves 100% throughput.
Fig. 4 is a schematic diagram of equivalently solving for the permutation matrices with a bipartite graph.
Fig. 5 is a schematic diagram of the linked-list-based shared-memory switching fabric model.
Fig. 6 is a schematic diagram of the multicast cell-copy switching fabric.
Fig. 7 is a schematic diagram of multicast by address copy.
Fig. 8 is a schematic diagram of the prioritized handling of multicast cells.
Fig. 9 is a schematic diagram of the performance bottleneck of ACQ under the FIFO discipline and a heuristic improvement.
Fig. 10 is a schematic diagram of the definition of the sliding window.
Fig. 11 is a schematic diagram of how addresses to be sent are chosen.
Fig. 12 is a schematic diagram of the preemption mode of multicast cell scheduling.
Fig. 13 is a schematic diagram of the forwarding span of multicast cell scheduling under preemption.
Fig. 14 is a schematic diagram of the forwarding span of a multicast cell preempted multiple times.
Fig. 15 compares sliding window widths against multicast cell forwarding spans.
Fig. 16 compares memory occupancy under different sliding window widths.
Embodiment
The invention is described in detail below with reference to the drawings and embodiments.
The sliding-window multicast-unicast combined address queue (Sliding Window Based Promiscuous Address Copy Queue, SWPACQ) aims to provide a local optimization strategy in which the width of the sliding window delimits the range over which optimization is possible. By choosing which cells within the window range to forward, the storage and forwarding inefficiencies that the traditional ACQ faces when forwarding mixed unicast-multicast traffic can be effectively mitigated.
First, some notation and definitions are introduced.
Definition 1. A sliding window consists of several consecutive columns, each column being the cell addresses waiting for output, queued in order of arrival. The number of columns constitutes the window width Wid(SW), as shown in Fig. 10, where the shaded boxes indicate multicast cells and the letters only identify queue positions, not addresses. The two sides of the sliding window are called its front edge FE(SW) and back edge BE(SW).
Definition 2. Within the sliding window, a position in an address queue that holds the same memory address as a multicast cell MC is called an instance of that multicast cell at that position. The maximum column distance spanned by these instances is called the queue span of the multicast cell, abbreviated span and denoted QueueSpan(MC, SW); the number of instances within the sliding window is denoted Inst(MC, SW). In Fig. 10, the multicast cell indicated at queue positions A, B, and C has span 3 within the sliding window and instance count 2; the multicast cell indicated at positions D and E has span 4 and instance count 2.
Every multicast cell that uses the address-copy queue mechanism has its own forwarding counter, which records the number of ports that still have to forward it. Each time an instance of the multicast cell is forwarded, the counter is decremented by 1, and when its value reaches 0 the address of the multicast cell is released. The current counter value of a multicast cell MC is denoted Count(MC). The following property is immediate:
Property 1. Count(MC) ≥ Inst(MC, SW).
The sliding window is used to select, within the chosen window width, the cell with the highest forwarding priority for each output address queue. To this end the window width Wid(SW) must first be fixed; its value depends on estimates from historical data and on the computational speed of the forwarding engine, since too large a window width increases the computational load and time of the forwarding engine.
The specific algorithm of the invention is described as follows.
First, each address queue is numbered in order; as shown in Fig. 10, numbering starts from 0 at the side closest to the outputs. The sliding-window scheduling mechanism proceeds in the following steps.
Step 0: set the window width Wid(SW) to W0, with front edge FE(SW) = 0 and back edge BE(SW) = W0 - 1.
Step 1: if there is no multicast cell address at the front edge of the sliding window, mark each (unicast) cell address at the front edge as an address to be sent, translate the sliding window backward by one address, and go to Step 4.
Step 2: if the front edge of the sliding window contains a multicast cell address, process the following substeps for i = 0 to N-1, where N is the number of output ports.
Step 2.1: let the cell indicated by the head of the address queue of port i be the multicast cell MC_i, and consider its current counter value Count(MC_i). There are two cases:
(1) If Count(MC_i) > Inst(MC_i, SW), search backward through the cells behind MC_i. If some address within the window width is already marked to be sent, go to Step 2.2. Otherwise, if some address within the window width points to a unicast cell, temporarily mark that (unicast) address as an address to be sent; otherwise, temporarily mark the head address as an address to be sent.
(2) If Count(MC_i) = Inst(MC_i, SW), attempt to temporarily mark all instances of MC_i within the current sliding window, in every address queue, as addresses to be sent. If some other multicast cell address in the port address queues is already marked to be sent, compare the distances from the front edge of the already-marked address and of the address about to be marked: the nearer one is set as the address to be sent, and if the farther one had been marked, its mark is erased. If the cell previously marked to be sent is a unicast cell, it gives way to this multicast cell address. As Fig. 11 shows, when choosing the address to be sent, output port 1 finally makes the cell address indicated by ● give way to the cell address indicated by ○.
Step 2.2: let the cell indicated by the head of the address queue of port i be the unicast cell UC_i, and search backward through the cells behind UC_i. If some address within the window width is marked to be sent, go to the next step; otherwise, temporarily mark this (unicast) address as an address to be sent.
Step 3: if every cell address at the front edge of the sliding window has been marked to be sent, translate the sliding window backward by one address.
Step 4: send the corresponding cells according to the to-be-sent addresses of each port address queue. For a unicast cell, release its memory immediately; for a multicast cell, decrement its counter Count(MC) by the number of instances forwarded, and release its memory if the counter reaches 0.
Step 5: go to Step 1.
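The selection idea of the steps above can be illustrated with a greatly simplified Python sketch. It is not the full algorithm (the marker bookkeeping and preemption rules of Step 2 are omitted): for each port it merely prefers, within the window, a multicast address all of whose remaining instances lie inside the window, so that they can be forwarded in the same slot; function and variable names are assumptions.

```python
# Simplified sliding-window selection sketch: for each output port,
# pick an address among the first `wid` entries of its queue,
# preferring a multicast address completable inside the window.

def pick_to_send(queues, count, wid):
    """queues: per-port lists of addresses (index 0 = queue head).
    count[addr]: remaining destinations (1 for a unicast cell).
    Returns a per-port list of chosen addresses (None if queue empty)."""
    n = len(queues)
    # count the instances of each multicast address inside the window
    inst = {}
    for q in queues:
        for addr in q[:wid]:
            if count[addr] > 1:
                inst[addr] = inst.get(addr, 0) + 1
    chosen = [None] * n
    for p, q in enumerate(queues):
        window = q[:wid]
        if not window:
            continue
        # prefer a multicast address whose remaining instances all lie
        # inside the window (Count == Inst), so all copies go together
        mc = next((a for a in window
                   if count[a] > 1 and inst.get(a, 0) == count[a]), None)
        if mc is not None:
            chosen[p] = mc
        else:
            # else the first unicast inside the window, else the head
            uc = next((a for a in window if count[a] == 1), None)
            chosen[p] = uc if uc is not None else window[0]
    return chosen

# Address 5 is a 2-destination multicast; 1, 2, 3 are unicast cells.
queues = [[5, 1], [2, 5], [3]]
count = {5: 2, 1: 1, 2: 1, 3: 1}
picked = pick_to_send(queues, count, 2)   # both instances of 5 chosen
```

With wid = 1 the sketch degenerates to strict FIFO heads, mirroring the remark that W0 = 1 recovers the traditional ACQ.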
The performance of the algorithm provided by the invention is analyzed as follows.
First consider the algorithmic complexity of SWPACQ. Steps 1 and 3 must examine the N addresses at the front edge of the sliding window, a computation of size N. Step 2 contains N iterations, each of which contains at most N inner iterations with search depth at most W0, so the computation in Step 2 is at most N^2 * W0. The complexity of the whole SWPACQ algorithm is therefore O(N^2 * W0). In particular, taking W0 = 1 yields the traditional ACQ algorithm without a sliding window.
As mentioned above, SWPACQ can optimize the utilization of shared memory within a local range. To study its influence on system performance, the following definition is given first.
Definition 3. The residence time of a multicast cell is the interval from the time the cell is written into the shared memory to the time its last instance is forwarded, denoted Soj(MC). The interval between the forwarding of the first and the last instance of a multicast cell is called the forwarding span of the cell, denoted ForwardSpan(MC).
The sliding-window algorithm optimizes forwarding within the window range, so the following property is obvious:
Property 2. ForwardSpan(MC) ≤ QueueSpan(MC, SW).
The forwarding span characterizes how synchronized in time the forwarding of a multicast cell to its multiple destination ports is; a smaller forwarding span is beneficial to interactive real-time applications (such as web conferencing). The order of a multicast cell's addresses is consistent across the port address queues, so the following property holds:
Property 3. Let instances of multicast cells MC_1 and MC_2 coexist in some port address queue, with the instance of MC_1 preceding that of MC_2 (i.e. the instance of MC_1 is nearer to the output port). If instances of MC_1 and MC_2 also coexist in other port address queues, then in all of those queues the instance of MC_1 precedes that of MC_2. The "precedes" relation is therefore independent of the port at which the multicast cell instances are located.
Table 1. Release of the memory space of the multicast cells of Fig. 6 (number of addresses released per forwarding cycle; symbols identify the cells)

Forwarding cycle    ACQ        SWPACQ
1                   0          1 (○)
2                   0          0
3                   2 (○◆)     1 (◆)
4                   0          1 (●)
6                   1 (●)      0
As mentioned above, during scheduling with the sliding window a multicast cell may be selected for sending more than once at the same time; this is called a scheduling conflict of multicast cells. In Fig. 12, a scheduling conflict occurs at port 3: according to case (2) of Step 2.1 of the algorithm, the cell address indicated by ○ wins, but this delays the sending of the multicast cells indicated by ● and ◆; we say that ○ preempts ● and ◆. The situation at port 5 is similar. Table 1 compares the cell memory release times of the multicast cell distribution of Fig. 6 under the traditional ACQ and under SWPACQ. Scheduling conflicts are unavoidable, and a scheduling conflict can significantly reduce the forwarding efficiency of SWPACQ, so an estimate of its impact on performance is needed. We first have the following property.
Property 4. If all instances of the multicast cells in the sliding window lie inside the window, and at least one multicast cell address is at the front edge, then during one scheduling cycle of the front-edge cell addresses, the memory space of at least one multicast cell in the common buffer is released.
Proof: suppose the address of a multicast cell MC_1 is at the front edge of the sliding window. A necessary and sufficient condition for all instances of MC_1 to be forwarded within the period T_0 in which the front-edge cell addresses are scheduled is that none of its instances is preempted. If not all instances of MC_1 are forwarded in this scheduling cycle, then MC_1 is preempted by another multicast cell, denoted MC_2. Applying the same reasoning to MC_2, and since the number of output ports is finite, there must finally be a multicast cell MC_n all of whose instances lie inside the sliding window and which is not preempted by any other multicast cell. Therefore, within the period T_0, at least all instances of MC_n are forwarded, releasing its memory space.
From Property 3 it is easy to see that the "precedes" relation between multicast cell addresses is equivalent to the "preempts" relation: an instance of MC_1 precedes an instance of MC_2 if and only if an instance of MC_1 preempts an instance of MC_2. To study the forwarding span of multicast cell scheduling under preemption, define the front distance of a multicast cell address as the distance from that instance to the front edge of the sliding window. In Fig. 13 the front edge of the sliding window is the bottom row; in the left part of the figure, the multicast cell address marked ● has front distance 1 at port 2, while in the right part it has front distance 2 at port 2. When a conflict occurs, the forwarding span of the preempted multicast cell has the following property.
In the character 5 SWPACQ scheduling modes, establish multicast cell MC 1And MC 2All there is example to be positioned at the sliding window forward position, if MC 1Seized MC 2And whole examples of two multicast cells all are positioned at sliding window, then MC 2Forwarding span ForwardSpan (MC 2) be MC 2Minimum Front distance+1 on the port that clashes.
Proof: According to steps 2 and 3 of the SWPACQ algorithm, the instances of MC2 on non-conflicting ports and all instances of MC1 are scheduled in the cycle in which they reach the front edge of the sliding window. Scheduling the remaining instances of MC2 must wait until those instances move down to the front edge, at which point they can all be forwarded in one cycle. The number of cycles of this movement is exactly the minimum Front distance of MC2 on the conflicting ports, so its forwarding span is that minimum Front distance plus 1.
According to Property 5, the forwarding span of the multicast cell marked ● in the left figure of Figure 13 is 2, and in the right figure it is 3. This property gives the forwarding-span calculation when one multicast cell preempts another. Below, the instance of a preempted cell located at the minimum Front distance on a conflicting port is called the key instance of that cell; of the two instances in Figure 13, the key instance of the multicast cell marked ● is at port 2. The case of repeated preemption is more complicated. Suppose multicast cells MC1, MC2, MC3 all have instances at the front edge of the sliding window, MC1 preempts MC2, MC2 preempts MC3, and all instances of the three multicast cells lie inside the sliding window. The forwarding span of MC3 then falls into two cases: (1) if there is a subsequent instance of MC3 on the port holding the key instance of MC2, the cycle in which that instance is scheduled determines the forwarding span of MC3; (2) if there is no subsequent instance of MC3 on that port, then among the instances of MC3 on other ports, the one with minimum Front distance after the cycle in which the key instance of MC2 is scheduled determines the forwarding span of MC3. As shown in Figure 14, multicast cell ● preempts multicast cell ○, and ○ preempts ◆. The key instance of ○ is at port 3; there is no subsequent ◆ instance on that port; among the ◆ instances on other ports, the one with minimum Front distance forwarded after the cycle of ○'s key instance is at port 2, and that instance's Front distance plus 1 is the forwarding span of ◆.
Suppose that in each time slot the proportion of multicast cells among the cells entering the shared buffer is p, and the average fan-out of a multicast cell is f. Further assume that the instances of each multicast cell are distributed equiprobably over the N output ports. Then the probability that a multicast cell instance enters a given port's address queue in a time slot is p·C(N-1, f-1)/C(N, f) = p·f/N. Combining this with Property 5 yields a useful estimate:
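The combinatorial identity behind this estimate can be checked directly; the sketch below (the function name is illustrative) uses Python's standard-library `math.comb`:

```python
from math import comb

def instance_prob(p: float, f: int, N: int) -> float:
    """Probability that a multicast cell instance enters a given port's
    address queue in a time slot: p * C(N-1, f-1) / C(N, f)."""
    return p * comb(N - 1, f - 1) / comb(N, f)

# C(N-1, f-1) / C(N, f) reduces to f / N, so the probability is p*f/N.
p, f, N = 0.2, 4, 8
assert abs(instance_prob(p, f, N) - p * f / N) < 1e-12
```

The reduction holds because choosing f of N ports so that one fixed port is included leaves C(N-1, f-1) of the C(N, f) equally likely fan-out sets.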
Property 6: Let P0 = p·f/N. Suppose at some moment multicast cells MC1 and MC2 reach the front edge of the sliding window simultaneously, and MC1 preempts MC2. When the sliding window is wide enough, the probability that the forwarding span of MC2 does not exceed S (S ≥ 2) satisfies
Pro(ForwardSpan(MC2) ≤ S) ≥ Σ_{i=1}^{S-1} (1-P0)^{i-1}·P0
Proof: By assumption, MC1 has an instance at the front edge of the sliding window and MC1 preempts MC2, so the forwarding span of MC2 is at least 2. Let q denote the port holding MC1's front-edge instance. By Property 5, if an instance of MC2 enters port q with Front distance n, the forwarding span of MC2 is at most n+1. By the preceding analysis, in each time slot MC2 enters port q with probability P0; hence the probability that the Front distance of MC2's instance on port q equals n is (1-P0)^{n-1}·P0. The total probability that the forwarding span of MC2 does not exceed S is therefore at least Σ_{i=1}^{S-1} (1-P0)^{i-1}·P0.
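As a numerical sketch (function name illustrative), the bound of Property 6 with the per-distance probability (1-P0)^{n-1}·P0 from the proof is a geometric sum that telescopes to 1 - (1-P0)^{S-1}, so the bound approaches 1 as S (and hence the window) grows:

```python
def span_bound(P0: float, S: int) -> float:
    """Lower bound on Pr(ForwardSpan(MC2) <= S): sum over Front
    distances n = 1 .. S-1 of the geometric term (1-P0)**(n-1) * P0."""
    return sum((1 - P0) ** (i - 1) * P0 for i in range(1, S))

# The partial geometric sum equals 1 - (1-P0)**(S-1).
P0 = 0.1
for S in (2, 5, 20):
    assert abs(span_bound(P0, S) - (1 - (1 - P0) ** (S - 1))) < 1e-12
```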
The characteristics of the present invention are further described below through simulation study and analysis.
An 8×8 shared-memory switching structure was simulated, with input traffic generated by the widely accepted ON-OFF model. The total memory size is 2000 cells. Data are generated by superposing 100·μi ON-OFF data sources, where μi adjusts the traffic intensity, i.e., the load. A single ON-OFF source enters the ON state with probability 0.2, stays there for 10 time slots on average, and produces 2 cells per slot on average; it enters the OFF state with probability 0.8 and stays there for 3.75 time slots on average. The input and output line rate is taken as the average traffic of the superposed sources at μi = 1. In the ON state, multicast cells are produced in each time slot with ratio p. The multicast fan-out model proposed in [8] is retained: the number of fan-out ports X of each multicast cell follows the truncated geometric distribution g(k) = Pro[X = k] = (1-q)·q^{k-1}/(1-q^n), 0 < q < 1, 1 ≤ k ≤ n, where n is the number of output ports and q is a constant. Its mean is
E(X) = 1/(1-q) - n·q^n/(1-q^n).
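A minimal sketch (illustrative names, not part of the patent) that samples the truncated geometric fan-out by inverse CDF and checks its empirical mean against the closed form above:

```python
import random

def sample_fanout(q: float, n: int, rng: random.Random) -> int:
    """Sample fan-out X from the truncated geometric distribution
    g(k) = (1-q) * q**(k-1) / (1 - q**n), 1 <= k <= n, by inverse CDF."""
    u = rng.random() * (1 - q ** n)   # un-normalized CDF reaches 1-q**n at k=n
    k = 1
    cdf = 1 - q                       # un-normalized CDF after k terms: 1-q**k
    while cdf < u and k < n:
        k += 1
        cdf += (1 - q) * q ** (k - 1)
    return k

def mean_fanout(q: float, n: int) -> float:
    """Closed-form mean: E(X) = 1/(1-q) - n*q**n/(1-q**n)."""
    return 1 / (1 - q) - n * q ** n / (1 - q ** n)

# Empirical mean should approach the closed form (q=0.3, n=8 as in the
# simulation setup above).
rng = random.Random(1)
q, n = 0.3, 8
est = sum(sample_fanout(q, n, rng) for _ in range(100000)) / 100000
assert abs(est - mean_fanout(q, n)) < 0.02
```

The closed form follows from summing k·g(k) over 1 ≤ k ≤ n with the standard identity for Σ k·q^{k-1}.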
Two key performance measures are compared. The first is the average forwarding span of multicast cells. Three multicast densities are chosen, with p equal to 0.2, 0.25, and 0.3 respectively, and q = 0.3. The resulting comparison of forwarding spans is shown in Figure 15. It is easy to see that with the SWPACQ algorithm, as the sliding window width grows from 1 (i.e., the traditional ACQ algorithm) in 9 increments of step 10 up to 91, the average forwarding span of multicast cells decreases markedly with increasing window width under every multicast density; as noted above, this result is highly beneficial for interactive real-time applications. The second comparison concerns the occupancy of shared memory space. Here the multicast density is fixed at p = 0.2, an observation interval of 400 time slots is taken, and the occupancy of the common storage area during the run is compared for window widths 1, 51, and 91. The results in Figure 16 show that widening the sliding window markedly improves forwarding efficiency: under identical traffic conditions, the number of cells queued in the system falls noticeably compared with ACQ. At window width 51, shared-memory occupancy drops by 12% to 18%, and with a wider window the drop is larger. Therefore, under heavily loaded offered traffic, SWPACQ with a suitably chosen window width can effectively reduce the system's packet loss rate.
List of references:
[1] Cheng-Shang Chang, Duan-Shin Lee and Yi-Shean Jou, Load balanced Birkhoff-von Neumann switches, part I: one-stage buffering, Computer Communications, Vol. 25, pp. 611-622, 2002.
[2] Cheng-Shang Chang, Duan-Shin Lee and Ching-Ming Lien, Load balanced Birkhoff-von Neumann switches, part II: multi-stage buffering, Computer Communications, Vol. 25, pp. 623-634, 2002.
[3] H. Jonathan Chao, Cheuk H. Lam, Eiji Oki, Broadband Packet Switching Technologies - A Practical Guide to ATM Switches and IP Routers, John Wiley & Sons Inc, 2001.
[4] Takahiro Okoge, Hiroshi Inai, and Jiro Yamakita, A Shared-Memory ATM Switch with Multicast Function, Electronics and Communications in Japan, Part 1, Vol. 82, No. 10, pp. 316-324, 1999.
[5] Sanjeev Kumar, The Sliding-Window Packet Switch: A New Class of Packet Switch Architecture With Plural Memory Modules and Decentralized Control, IEEE Journal on Selected Areas in Communications, Vol. 21, No. 4, pp. 656-673, May 2003.
[6] Zhu Shengqiong, "Research and Design of the Fast Filtering Unit in a Self-Developed 10-Gigabit Ethernet Core Switching Chip (12GE+1×10GE)", Master's thesis, Wuhan Research Institute of Posts & Telecommunications, March 2006.
[7] Peng Jianhui, "Research and Design of the Memory Management Unit in a Self-Developed 10-Gigabit Ethernet Core Switching Chip (12GE+1×10GE)", Master's thesis, Wuhan Research Institute of Posts & Telecommunications, March 2006.

Claims (4)

  1. An efficient multicast forwarding method based on a sliding window in a shared-memory switching structure, comprising the steps of:
    (1) numbering the cell addresses waiting for output in order of arrival, several consecutive rows of cell addresses constituting the sliding window, the number of rows constituting the sliding window width Wid(SW), and the two sides of the sliding window being its front edge FE(SW) and back edge BE(SW) respectively;
    (2) setting the sliding window width Wid(SW) to W0, the front edge FE(SW) = 0, and the back edge BE(SW) = W0 - 1;
    (3) for a multicast cell address at the front edge of the sliding window, letting the cell indicated by the head of the address queue of port i be multicast cell MCi, i = 0 to N-1, where N is the number of output ports, and examining the current counter value Count(MCi) of this multicast cell:
    in the first case, if Count(MCi) > Inst(MCi, SW), where Inst(MCi, SW) is the number of instances inside the sliding window, searching backward through the cells behind MCi; if within the sliding window width some address points to a unicast cell, temporarily marking that unicast address as an address to be sent; otherwise, temporarily marking the head address as an address to be sent;
    in the second case, if Count(MCi) = Inst(MCi, SW), temporarily marking all instances of MCi in every address queue within the current sliding window as addresses to be sent;
    (4) if every cell address at the front edge of the sliding window has been marked as to be sent, shifting the sliding window backward by one address;
    (5) sending the corresponding cells according to the addresses to be sent of each port address queue; for a unicast cell, releasing its memory immediately; for a multicast cell, decrementing the cell's counter Count(MC) by the number of instances forwarded and, if the counter reaches 0, releasing its memory; then going to step (3).
  2. The efficient multicast forwarding method based on a sliding window in a shared-memory switching structure as claimed in claim 1, characterized in that: in step (3), if there is no multicast cell address at the front edge of the sliding window, each unicast cell address at the front edge is marked as an address to be sent, the sliding window is shifted backward by one address, and the method goes directly to step (5).
  3. The efficient multicast forwarding method based on a sliding window in a shared-memory switching structure as claimed in claim 1, characterized in that: in step (3), for the first case, when the cell indicated by the head of the address queue of port i is a unicast cell UCi, the cells behind UCi are searched backward; if an address within the sliding window width is already marked as to be sent, the method goes to step (4); otherwise, this unicast address is temporarily marked as an address to be sent.
  4. The efficient multicast forwarding method based on a sliding window in a shared-memory switching structure as claimed in claim 1, characterized in that: in step (3), for the second case, if another multicast cell address in some port address queue is already marked as an address to be sent, the positions of the already-marked address and the address about to be marked relative to the front edge are compared, and the one nearer to the front edge is set as the address to be sent; if the one farther from the front edge had been set as the address to be sent, that mark is erased; if the cell previously marked as to be sent is a unicast cell, it yields to this multicast cell address.
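To make the flow of claim 1 concrete, the following is a loose Python sketch of steps (3)-(5), with hypothetical names and several simplifications: multicast instances are modeled as separate objects per queue, at most one address per port is sent each slot, and the conflict-resolution rules of claims 3 and 4 are omitted. It illustrates the sliding-window idea, not the patented method verbatim:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    cid: int          # cell address / identifier
    multicast: bool
    count: int        # remaining instances still to be forwarded

def schedule_slot(queues, wid):
    """One simplified scheduling step: choose at most one 'to-be-sent'
    address per port queue, preferring a unicast cell inside the window
    when the head multicast cell still has instances outside the window
    (case 1), and sending the multicast head when all of its remaining
    instances are inside the window (case 2)."""
    to_send = {}  # port -> index of the chosen address in that queue
    for port, q in enumerate(queues):
        if not q:
            continue
        head = q[0]
        if head.multicast:
            inst_in_window = sum(c.cid == head.cid
                                 for pq in queues for c in pq[:wid])
            if head.count == inst_in_window:
                to_send[port] = 0          # case 2: send the head
            else:
                # case 1: look for a unicast cell within the window;
                # fall back to the head address if none exists
                to_send[port] = next((i for i, c in enumerate(q[:wid])
                                      if not c.multicast), 0)
        else:
            to_send[port] = 0              # unicast head: send directly
    sent = []
    for port, idx in to_send.items():
        cell = queues[port].pop(idx)       # step (5): send the cell
        cell.count -= 1                    # decrement multicast counter
        sent.append(cell.cid)
    return sent
```

For example, if cell 1 is a multicast cell with both remaining instances at the heads of two port queues (case 2), both instances are sent in the same slot; if a third instance lies outside the window (case 1), a unicast cell inside the window is sent instead on that port.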
CNA2007101640166A 2007-10-16 2007-10-16 Efficient multicast forward method based on sliding window in share memory switching structure Pending CN101188556A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2007101640166A CN101188556A (en) 2007-10-16 2007-10-16 Efficient multicast forward method based on sliding window in share memory switching structure


Publications (1)

Publication Number Publication Date
CN101188556A true CN101188556A (en) 2008-05-28

Family

ID=39480753

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2007101640166A Pending CN101188556A (en) 2007-10-16 2007-10-16 Efficient multicast forward method based on sliding window in share memory switching structure

Country Status (1)

Country Link
CN (1) CN101188556A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107005489A (en) * 2014-12-29 2017-08-01 甲骨文国际公司 For supporting the efficient VOQ in networked devices(VOQ)The system and method that scheme is washed away in packet
CN107005489B (en) * 2014-12-29 2021-02-02 甲骨文国际公司 System, method, medium, and apparatus for supporting packet switching
CN107222435A (en) * 2016-03-21 2017-09-29 深圳市中兴微电子技术有限公司 Eliminate the method and device for exchanging head resistance of message
CN107222435B (en) * 2016-03-21 2020-07-24 深圳市中兴微电子技术有限公司 Method and device for eliminating exchange head resistance of message
CN109845199A (en) * 2016-09-12 2019-06-04 马维尔国际贸易有限公司 Merge the read requests in network device architecture
CN109845199B (en) * 2016-09-12 2022-03-04 马维尔亚洲私人有限公司 Merging read requests in a network device architecture
CN110858791A (en) * 2018-08-22 2020-03-03 华为技术有限公司 Distributed parallel transmission method, device, equipment and storage medium
WO2020134949A1 (en) * 2018-12-29 2020-07-02 香港乐蜜有限公司 Session request sending method and apparatus, electronic device and storage medium

Similar Documents

Publication Publication Date Title
US5241536A (en) Broadband input buffered atm switch
US5636210A (en) Asynchronous transfer mode packet switch
US6876629B2 (en) Rate-controlled multi-class high-capacity packet switch
CA2247447C (en) Efficient output-request packet switch and method
EP0884876B1 (en) Improved packet switching
CN1201532C (en) Quick-circulating port dispatcher for high-volume asynchronous transmission mode exchange
US20020169921A1 (en) Packet buffer
WO2003090017A2 (en) Data forwarding engine
CN101478483A (en) Method for implementing packet scheduling in switch equipment and switch equipment
Sivaram et al. HIPIQS: A high-performance switch architecture using input queuing
CN101188556A (en) Efficient multicast forward method based on sliding window in share memory switching structure
Nikologiannis et al. Efficient per-flow queueing in DRAM at OC-192 line rate using out-of-order execution techniques
US7110405B2 (en) Multicast cell buffer for network switch
US7675930B2 (en) Chip circuit for combined and data compressed FIFO arbitration for a non-blocking switch
CN104333516A (en) Rotation rotation scheduling method for combined virtual output queue and crosspoint queue exchange structure
CN109218220A (en) A kind of single multicast mix of traffic exchange method of load balancing
Sharma Review of recent shared memory based ATM switches
Pattavina Design and performance evaluation of a packet switch for broadband central offices
CN100425035C (en) Switching system and switching method based on length variable packet
Berger Delivering 100% throughput in a buffered crossbar with round robin scheduling
CN107222435B (en) Method and device for eliminating exchange head resistance of message
CN100495974C (en) Flow shaping method in data transmission process
CN103731359A (en) FIFO cache sharing router based on fiber delay lines and working method thereof
Zhou et al. Design of per-VC queueing ATM switches
Jung et al. Banyan multipath self-routing ATM switches with shared buffer type switch elements

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20080528