CN115720216A - Switching network performance analysis method, network structure and scheduling method thereof - Google Patents


Info

Publication number
CN115720216A
Authority
CN
China
Prior art keywords
network
queue
buffer
scheduling
queue length
Prior art date
Legal status
Pending
Application number
CN202211069087.9A
Other languages
Chinese (zh)
Inventor
郑凌
张科遥
姜静
冯丹
褚宏云
魏国栋
Current Assignee
Shaanxi Unmanned System Engineering Research Institute Co ltd
Xian University of Posts and Telecommunications
Original Assignee
Shaanxi Unmanned System Engineering Research Institute Co ltd
Xian University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Shaanxi Unmanned System Engineering Research Institute Co ltd, Xian University of Posts and Telecommunications filed Critical Shaanxi Unmanned System Engineering Research Institute Co ltd
Priority to CN202211069087.9A
Publication of CN115720216A

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a Combined Input and Crosspoint Queued (CICQ) switching network structure and a performance analysis method thereof. The structure comprises Virtual Output Queues (VOQ) and a Buffered Crossbar (BX). The performance analysis method comprises the following steps: obtaining the steady state of the network system by increasing the simulation time; once the steady-state distribution of the system is obtained, calculating the queue length N by accumulation and summation; calculating the queue length average value with the queue length formula, and from it obtaining the average queuing delay W, the packet loss rate L and the throughput rate T of the system; and analyzing the relation between the crosspoint buffer size B and the switch size C on the one hand and the throughput rate T, the packet loss rate L, the average queuing delay W and the queue length N on the other. The invention places a small amount of buffer space at each crosspoint, so a packet need not be forwarded to the output port immediately during scheduling. The design needs no centralized scheduling, greatly reduces scheduling complexity and achieves high-speed pipelined operation.

Description

Switching network performance analysis method, network structure and scheduling method thereof
The invention belongs to the technical field of networks, and particularly relates to a switching network performance analysis method, a network structure and a scheduling method thereof.
Background
With the rapid growth of the Internet, especially of multimedia and data services, its data transmission capacity has increased at an astonishing rate. Routers are the core nodes of a communication network, and their performance directly affects the network's communication quality. The switching network is the key component of a router that interconnects data from inputs to outputs. The design and implementation of high-performance switching networks is critical to improving overall network performance; research into switching networks and their scheduling is therefore very important.
Switching network structures can be divided into shared-buffer, input-queued, output-queued and other types, but these structures suffer from high scheduling complexity, high buffer overhead, poor scalability and other drawbacks, making them difficult to apply in high-speed, large-capacity switches and routers.
The crosspoint-buffered switch fabric is the most popular approach in modern high-speed switch fabric and router design, and it is widely used for its simplicity and non-blocking capability. In the late 1980s, buffered-crosspoint switches were studied as multi-stage switches, but most designs used few inputs and outputs and little buffering: due to technological limits, large buffers could not be provided at the crosspoints. Switch fabrics with large buffers had therefore not been evaluated in detail at that time.
However, most other popular buffering techniques require intensive control communication between the line cards (where the buffers are typically located) and a centralized scheduler. Such dense communication is feasible, with no significant performance penalty, when the line cards and the switch fabric sit in the same rack a very short distance apart. In modern switch architectures, however, the line cards are far from the switch fabric, so the control traffic may not be fast enough to meet the transmission speed requirements of modern switches. The best way to overcome this problem is to move the buffers from the line cards into the switch fabric, completely eliminating the effect of the slow control communication caused by the long distance between the line cards and the switch.
Another concern is that the amount of memory required by a crosspoint-buffered switch fabric is proportional to the square of the number of ports, i.e., the space complexity is O(N²). With a large number of switch ports this causes a large buffering overhead. Therefore, how to set the capacity of the crosspoint buffers is a technical problem to be solved in practical applications.
Disclosure of Invention
The present application aims to provide a method for analyzing performance of a switching network with a cache at a cross node, a network structure thereof, and a scheduling method, so as to solve the technical problem that the switching speed cannot meet the requirement in the prior art.
In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present application are as follows:
the application provides a method for analyzing the performance of a switching network, which is characterized by comprising the following steps:
step 1: obtaining the system steady state of the network by increasing the simulation time of the network;
step 2: obtaining the queue length through accumulation and summation;
step 3: obtaining the queue length average value from the queue length, and obtaining the average queuing delay of the network from the queuing model;
step 4: obtaining the packet loss rate and the throughput rate of the network;
step 5: analyzing the relation between the crosspoint buffer size B and the switch size C and the throughput rate T, the packet loss rate L, the average queuing delay W and the queue length N, to obtain the network performance;
optionally, obtaining a system steady state of the network based on a markov process model;
optionally, the throughput rate of the network is the ratio of the number of packets leaving the network to the number of packets arriving at the network over a period of time, expressed as:
T = lim_{t→∞} [ Σ_{τ=0}^{t} A(τ)·(1 − π_B(τ)) ] / [ Σ_{τ=0}^{t} A(τ) ]
wherein T is the throughput of the crossbar output bus, A(t) is the packet arrival probability in time slot t, π_i(t) is the probability that there are i packets in the crosspoint queue in time slot t, and B is the buffer size;
optionally, the network packet loss rate is the ratio of the number of lost packets to the number of transmitted packets, expressed as L = A(t)·π_B(t), wherein A(t) is the packet arrival probability in time slot t, π_B(t) is the probability that the crosspoint queue holds B packets in time slot t, and B is the buffer size;
optionally, the queue length average value is
N = Σ_{i=0}^{B} i·π_i(t)
wherein π_i(t) is the probability that there are i packets in the crosspoint queue in time slot t;
optionally, the average queuing delay is equal to the queue length average value divided by the packet arrival rate, expressed as:
W = N / A(t) = (C/p) · Σ_{i=0}^{B} i·π_i(t)
wherein π_i(t) is the probability that there are i packets in the crosspoint queue in time slot t, p is the traffic load of the input bus, and C is the switch size;
optionally, in the initial state all crosspoint queues are empty, that is, their length is 0;
In a second aspect, the present application provides a switching network structure, characterized in that its performance is analyzed using the performance analysis method according to claim 1; the network comprises: an input end, virtual output queues, a buffered crossbar and an output end, wherein a small amount of buffer space is arranged at each crosspoint;
In a third aspect, the present application provides a method for scheduling a switching network, wherein the scheduling method is applied to the network structure of claim 8; the scheduling method comprises the following steps:
step 1: after a packet arrives at an input port, buffering the packet into the corresponding queue of the corresponding port according to the packet's information;
step 2: the scheduling process from the virtual output queues into the crosspoint buffers;
step 3: the scheduling process of the crosspoint input buffers;
step 4: the scheduling process of the crosspoint output buffers;
optionally, the scheduling process comprises the following steps:
step 1: detecting whether there is data in the virtual output queues at the input end, and monitoring whether the crosspoint buffer is full;
step 2: generating a port request signal for arbitration;
step 3: arbitrating according to the priority order of the ports;
step 4: reading data from the input virtual output queue buffer according to the arbitration result;
step 5: buffering the data at the crosspoint.
The beneficial effect of this application is:
a method for analyzing the performance of a switching network is characterized by comprising the following steps:
step 1: obtaining the system steady state of the network by increasing simulation time for the network;
and 2, step: obtaining the length of the queue through accumulation and summation;
and step 3: obtaining a queue length average value according to the queue length, and obtaining the average queuing time delay of the network according to a queuing mode;
and 4, step 4: obtaining the packet loss rate and the throughput rate of the network;
and 5: analyzing the relation among the cross point buffer area B, the exchange scale C, the throughput rate T, the packet loss rate L, the average queuing delay W and the queue length N to obtain the network performance; through the design, internal acceleration and centralized scheduling are not needed, decisions can be made according to self conditions, and distributed implementation is realized at the input end and the output end. The arrangement of the switching network structure greatly reduces the scheduling complexity and can realize high-speed flow operation.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic structural design diagram of a CICQ switching network structure provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a system model of a discrete Markov chain according to an embodiment of the present application;
fig. 3 is a flowchart of performance analysis of a CICQ switching network structure according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a relationship between a queue length N and a buffer size B according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating a relationship between a queue length N and a switch size C according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a relationship between an average queuing delay W and a buffer size B according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating a relationship between an average queuing delay W and a switch size C according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating a relationship between a packet loss rate L and a cache size B according to an embodiment of the present application;
fig. 9 is a schematic diagram of a relationship between a packet loss rate L and a switch size C according to an embodiment of the present application;
fig. 10 is a schematic diagram illustrating a relationship between a throughput rate T and a buffer size B according to an embodiment of the present application;
fig. 11 is a schematic diagram of a relationship between a throughput T and a switch size C according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Fig. 1 is a schematic architecture diagram of a CICQ switching network structure according to an embodiment of the present application. The architecture shown in fig. 1 includes: an input end, Virtual Output Queues (VOQ), a Buffered Crossbar (BX), and an output end. Virtual output queues are mainly used to eliminate head-of-line (HOL) blocking: the input buffer is divided into n logical queues, called VOQs, each storing the packets destined for the corresponding output port. In each unit time slot the HOL cells of all VOQs are candidates for scheduling, whereas a single input queue can forward only its one HOL cell.
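As a concrete illustration of this structure, the sketch below models the queues of an n x n CICQ fabric in Python; the class and member names are our own naming for illustration, not taken from the application.

```python
from collections import deque

class CICQFabric:
    """n x n combined input- and crosspoint-queued (CICQ) switch fabric.

    Each input port keeps n virtual output queues (VOQs), one per output,
    which eliminates head-of-line blocking.  Each crosspoint (i, j) holds
    a small FIFO buffer of at most B packets.
    """

    def __init__(self, n, B):
        self.n, self.B = n, B
        # voq[i][j]: packets at input i destined for output j (unbounded here)
        self.voq = [[deque() for _ in range(n)] for _ in range(n)]
        # xp[i][j]: crosspoint buffer between input i and output j (size B)
        self.xp = [[deque() for _ in range(n)] for _ in range(n)]

    def enqueue(self, i, j, pkt):
        """A packet arriving at input i for output j is stored in VOQ(i, j)."""
        self.voq[i][j].append(pkt)

    def crosspoint_full(self, i, j):
        """True when crosspoint (i, j) cannot accept another packet."""
        return len(self.xp[i][j]) >= self.B
```

Because every packet is sorted into its own VOQ on arrival, a blocked head-of-line packet only delays traffic bound for the same output.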
The crosspoint queues (CQ) store a small amount of buffering at the crosspoints of the input and output buses; they have good distributed, parallel scheduling characteristics and need no internal speedup. It is assumed that all buffers on one output line have the same state distribution and that the input traffic follows a Bernoulli distribution. The CICQ then operates according to the following workflow:
(1) After the packet is input at the input port, caching the packet to a corresponding queue of a corresponding port according to the information of the packet;
(2) Inputting a virtual cache queue to a scheduling process of cross node cache;
(3) The scheduling process of the crosspoint input buffers;
(4) The scheduling process of the crosspoint output buffers.
The scheduling process comprises the following steps:
(1) Detecting whether data exist in the VOQ virtual cache of the input end, and monitoring whether the cache of the cross node is full, so that the cross node can receive the data input by the input end;
(2) Generating a port request signal for arbitration;
(3) Arbitrating according to the priority order of the ports;
(4) Reading data from the input VOQ buffer according to the result;
(5) Data is cached to the cross node.
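Steps (1) to (5) above can be sketched as one input-side scheduling slot. The data layout (lists of deques) and the fixed lowest-index port priority are illustrative assumptions, since the application only specifies arbitration "according to the priority order of the ports".

```python
from collections import deque

def schedule_input_side(voq, xp, B):
    """One scheduling slot on the input side of an n x n CICQ fabric.

    voq[i][j]: VOQ at input i for output j; xp[i][j]: crosspoint buffer.
    For every input i: (1) find the VOQs holding data whose crosspoint
    buffer is not full, (2) these form the request set, (3) arbitrate by
    a fixed port priority (lowest output index wins, an assumption),
    (4) read the HOL packet from the winning VOQ and (5) write it into
    the crosspoint buffer.
    """
    n = len(voq)
    for i in range(n):
        requests = [j for j in range(n)
                    if voq[i][j] and len(xp[i][j]) < B]   # steps (1)-(2)
        if requests:
            j = min(requests)                             # step (3): priority
            xp[i][j].append(voq[i][j].popleft())          # steps (4)-(5)
```

Note that each input port arbitrates on its own: no information from other inputs or from a central scheduler is needed, which is what makes the scheme distributed.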
Fig. 2 is a schematic diagram of a system model of a discrete Markov chain according to an embodiment of the present disclosure. As shown in fig. 2, the CICQ performance analysis is based on a Markov chain model. A Markov process is a type of stochastic process, and a Markov chain is the simplest Markov process, namely one whose time parameter and state space are both discrete. Due to the symmetry of the switch fabric and the Bernoulli nature of the incoming traffic, the performance of each output bus can be considered independently, and the system can therefore be modeled as a discrete Markov chain.
Let A(t) denote the packet arrival probability in time slot t and let the traffic load of the input bus be p, where 0 ≤ p ≤ 1; then

A(t) = p/C (1)

Let D(t) denote the packet departure probability in time slot t. Assuming the output scheduler selects uniformly at random among the nonempty crosspoint queues of its output bus, a nonempty queue is served with the probability shown in formula (2):

D(t) = (1 − π_0(t)^C) / (C·(1 − π_0(t))) (2)

Let π_i(t) denote the probability that there are i packets in the crosspoint queue in time slot t; the state transition equation is then obtained as shown in equation (3):

π_j(t+1) = Σ_{i=0}^{B} π_i(t)·P_{ij}, 0 ≤ j ≤ B (3)

The matrix form of the state transition equation can be written as shown in equation (4):

π(t+1) = π(t)·P, π(t) = [π_0(t), π_1(t), …, π_B(t)] (4)

wherein the state transition matrix P is the (B+1)×(B+1) tridiagonal matrix shown in formula (5), written with a = A(t) and d = D(t); a packet that arrives at a full queue is dropped, so the last row contains only the departure terms:

P =
[ 1−a       a              0        ⋯  0      ]
[ (1−a)d    (1−a)(1−d)+ad  a(1−d)   ⋯  0      ]
[ ⋮         ⋱              ⋱        ⋱  ⋮      ]
[ 0         ⋯  (1−a)d  (1−a)(1−d)+ad   a(1−d) ]
[ 0         ⋯              0        d   1−d   ] (5)

All crosspoint queues are empty in the initial state, as shown in equation (6):

π_0(0) = 1, π_i(0) = 0, 1 ≤ i ≤ B (6)
From the above analysis, the state probabilities of the Markov chain in any time slot t can be calculated iteratively. As the time slot t tends to infinity, the system reaches an equilibrium state and each state probability π_i(t) converges to a constant.
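A minimal Python sketch of this iteration follows. The birth-death transition structure follows equations (3) to (6); the departure probability used here (an output arbiter choosing uniformly among its C crosspoint queues) is an illustrative assumption, not necessarily the application's exact formula.

```python
def crosspoint_steady_state(B, C, p, slots=10_000):
    """Iterate the (B+1)-state crosspoint-queue chain towards steady state.

    pi[i] approximates the probability that the crosspoint queue holds i
    packets.  Per-slot arrival probability a = p / C (equation (1)); a
    packet arriving at a full queue is dropped.  The departure probability
    d is an assumption: the output serves one of its C crosspoint queues
    chosen uniformly at random, so a nonempty queue is served with
    probability (1 - pi_0^C) / (C * (1 - pi_0)).
    """
    a = p / C
    pi = [1.0] + [0.0] * B                 # pi_0(0) = 1: queues start empty
    for _ in range(slots):
        p0 = pi[0]
        d = (1.0 - p0 ** C) / (C * (1.0 - p0)) if p0 < 1.0 else 1.0 / C
        nxt = [0.0] * (B + 1)
        nxt[0] += (1 - a) * pi[0]          # empty stays empty if no arrival
        nxt[1] += a * pi[0]                # arrival at an empty queue
        for i in range(1, B):              # interior states 1 .. B-1
            nxt[i - 1] += (1 - a) * d * pi[i]
            nxt[i] += ((1 - a) * (1 - d) + a * d) * pi[i]
            nxt[i + 1] += a * (1 - d) * pi[i]
        nxt[B - 1] += d * pi[B]            # full queue: arrivals are dropped,
        nxt[B] += (1 - d) * pi[B]          # so only departures change state
        pi = nxt
    return pi
```

Each update conserves total probability, so after enough slots the vector approximates the equilibrium distribution to which the chain converges.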
Let T denote the throughput of the horizontal output bus and W denote the average packet delay. Throughput is defined as the ratio of the cumulative number of packets entering the crosspoint queues of the output bus to the cumulative number of arrivals. Thus, the throughput rate can be written as shown in equation (7):

T = lim_{t→∞} [ Σ_{τ=0}^{t} A(τ)·(1 − π_B(τ)) ] / [ Σ_{τ=0}^{t} A(τ) ] (7)
The packet loss rate is defined as the ratio of the number of lost packets to the number of transmitted packets; a packet is lost when it arrives while the queue is already full and cannot be received. The packet loss rate can therefore be written as shown in equation (8):

L = A(t)·π_B(t) (8)
the average queue length can be written as shown in equation (9):
N = Σ_{i=0}^{B} i·π_i(t) (9)
furthermore, according to Little's law, the average packet delay is equal to the average queue length divided by the packet arrival rate. The expression for the average packet delay is therefore shown in equation (10):
W = N / A(t) = (C/p) · Σ_{i=0}^{B} i·π_i(t) (10)
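Once a steady-state distribution is available, equations (7) to (10) reduce to a few lines of Python. In this sketch T is written as 1 − π_B, the steady-state limit of the ratio in equation (7) (equivalently, 1 minus the fraction of arrivals that are dropped); the example distribution is arbitrary and only exercises the formulas.

```python
def performance(pi, p, C):
    """Compute throughput T, loss L, mean queue length N and delay W
    from a steady-state distribution pi over queue lengths 0..B.

    a = p / C is the per-slot arrival probability of equation (1);
    L = a * pi[B] (eq. (8)), N = sum(i * pi[i]) (eq. (9)), W = N / a
    (Little's law, eq. (10)), and T = 1 - pi[B] is the steady-state
    limit of the arrival-weighted ratio in eq. (7).
    """
    a = p / C
    B = len(pi) - 1
    L = a * pi[B]                         # packets lost per slot
    N = sum(i * q for i, q in enumerate(pi))
    W = N / a                             # average queuing delay in slots
    T = 1.0 - pi[B]                      # fraction of arrivals accepted
    return T, L, N, W

# Example with an arbitrary steady-state distribution for B = 3:
pi = [0.4, 0.3, 0.2, 0.1]
T, L, N, W = performance(pi, p=0.8, C=4)   # T ≈ 0.9, L ≈ 0.02, N ≈ 1.0, W ≈ 5.0
```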
in order to select a suitable buffer capacity, the performance analysis of the switched network structure is required, which comprises the following steps:
step 1) solving a system steady state based on a Markov process model;
step 2), solving the queue length N of the system;
step 3) solving the average queuing time delay W of the system;
step 4), solving the packet loss rate L of the system;
step 5), solving the throughput rate T of the system;
step 6) analyzing the relation between the crosspoint buffer size B and the switch size C and the throughput rate T, the packet loss rate L, the average queuing delay W and the queue length N.
Further, the step 1) is specifically:
The system steady state is solved on the basis of a Markov process model. Because a closed-form expression for the system steady state cannot be derived, the state distribution is solved by increasing the simulation time: letting the time grow large is regarded as letting the state tend to the steady state, i.e. the steady state is obtained by increasing the number of cycles. When the number of cycles is large enough, the result approximates the steady state.
Further, the step 2) is specifically:
After the steady-state distribution of the system is obtained, the queue length is calculated by accumulation and summation. Specifically, a loop runs over all queue-length states from 0 to B, accumulating the product of each state and its probability to obtain the queue length.
Further, the step 3) is specifically:
The average queue length is calculated with the queue length formula above. Then, according to the Little theorem of queuing theory, the average queue length in a queuing system equals the product of the arrival rate and the average waiting time, so the average queuing delay of the system is obtained by dividing the average queue length by the arrival rate.
Further, the step 4) is specifically:
A packet is lost if and only if a queue has reached its maximum length and data still arrives. The packet loss rate of the system is therefore equal to the probability that the queue is full multiplied by the arrival probability of the system.
Further, the step 5) is specifically:
The system throughput is defined as the ratio of the number of packets leaving the system to the number of packets arriving at the system over a sufficiently long period of time. According to this definition, there are two ways to solve for it.
The first method considers that the dequeue strategy of the system is a greedy strategy, that is, as long as there is data in the queue, a packet must be selected to be dequeued. The throughput of the system is therefore equal to 1 minus the probability that the queue is empty while no packets arrive.
The second method considers that when the system reaches steady state the queue length remains balanced: the enqueued data approximately equals the dequeued data. The throughput is then determined only by the lost packets, i.e. packet loss is the cause of throughput degradation, so the throughput equals 1 minus the packet loss rate of the system.
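The agreement of the two methods can be checked numerically for a single crosspoint queue. In the Monte Carlo sketch below (an illustration, not the application's code), method one counts departures over arrivals and method two counts 1 minus the fraction of dropped arrivals; over a long run the two agree up to the handful of packets still queued at the end.

```python
import random

def simulate_throughput(B, a, d, slots=200_000, seed=1):
    """Monte-Carlo check that the two throughput definitions agree for a
    single Geo/Geo/1/B crosspoint queue.

    a: per-slot arrival probability; d: per-slot service probability.
    Returns (t1, t2) where t1 = departures / arrivals and
    t2 = 1 - dropped / arrivals.
    """
    rng = random.Random(seed)
    q = arrivals = departures = dropped = 0
    for _ in range(slots):
        if q > 0 and rng.random() < d:   # greedy dequeue: serve if nonempty
            q -= 1
            departures += 1
        if rng.random() < a:             # Bernoulli arrival
            arrivals += 1
            if q < B:
                q += 1
            else:
                dropped += 1             # arrival at a full queue is lost
    t1 = departures / arrivals
    t2 = 1 - dropped / arrivals
    return t1, t2
```

The two estimates differ only by the packets left in the queue when the run stops, a quantity bounded by B, so the difference vanishes as the number of slots grows.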
Further, the step 6) is specifically:
The obtained performance indexes, such as the throughput rate, queue length, average delay and packet loss rate, are displayed visually, and the variation of the performance parameters with the buffer size and the switch size is obtained, so as to analyze the influence of the system's buffer configuration on switch performance.
Fig. 3 is a flowchart of performance analysis of a CICQ switching network structure according to an embodiment of the present application. The network performance analysis method shown in fig. 3 includes the following steps:
s301: by means of a recursion mode, the simulation time is increased, so that the system state distribution is solved, the time tends to infinity and is regarded as the state tends to be stable, and the system stable state is obtained by increasing the cycle number. When the number of cycles is sufficiently large, the obtained result can be approximated to a theoretical steady state. In the present embodiment, the number of cycles is set to 10 4 Secondly;
s302, in the process of realizing the queue length solving formula, circulation is set, and from circulation to circulation, the addend circulation is complete to obtain the queue length;
s303: dividing the queue length by the arrival rate of the time is the average queuing delay of the system;
s304: calling the previous steady state solution, and then multiplying the probability that the queue is full by the probability that the system arrives to obtain the packet loss rate of the system;
s305: the throughput rate formula selects 1 to subtract the packet loss rate, so that a steady state solving part and a packet loss rate solving part can be called, and the difficulty in writing the formula is reduced. In this experiment, the arrival rate was ρ =1
S306: and researching the relation between the size B of the cross point buffer area and the size C of the exchange scale and the throughput rate T, the packet loss rate L, the average queuing delay W and the queue length N.
Fig. 4 is a schematic diagram illustrating a relationship between a queue length N and a buffer size B according to an embodiment of the present disclosure. Fig. 4 shows the relation between the queue length and the buffer size B under the conditions 401: C = 2, 402: C = 8, 403: C = 16, and 404: C = 32. As shown, the queue length increases as the buffer size B increases, because a larger buffer means more space for arriving packets, so more packets accumulate in the queue. It can also be seen from fig. 4 that the growth rate of the queue length falls off, to different degrees, once the buffer size B exceeds 30; the drop in growth rate is especially pronounced for a switch size of C = 32. This is because once the buffer space grows beyond the average queue length of the system in steady state, no more data accumulates in the queue even if the buffer capacity keeps increasing; the growth of the queue length then slows and approaches the limit of an infinite-capacity queuing system.
Fig. 5 is a schematic diagram illustrating a relationship between a queue length N and a switch size C according to an embodiment of the present application. Fig. 5 shows the relation between the queue length and the switch size C under the conditions 501: B = 2, 502: B = 4, 503: B = 8, and 504: B = 16. As shown in fig. 5, when the buffer size B is fixed, the queue length decreases as the switch size C increases. This is because the input traffic is assumed to follow a Bernoulli distribution, i.e. the traffic of one input port is evenly distributed over the different output ports. As the switch size increases, the traffic is spread more evenly, the probability of queuing collisions at different ports falls, and hence the queue length decreases. It can be seen, however, that the queue length does not vary significantly with the switch size: once the switch size exceeds 5, the queue length N tends to a stable value. The queue length does vary significantly with the buffer size B: as B increases, the queue length changes markedly.
Fig. 6 is a schematic diagram illustrating a relationship between an average queuing delay W and a buffer size B according to an embodiment of the present application. Fig. 6 shows the relation between the average queuing delay W and the buffer size B under the conditions 601: C = 2, 602: C = 4, 603: C = 8, and 604: C = 16. As shown in fig. 6, for a given switch size C the average queuing delay increases as the buffer size B increases, because a larger buffer leads to a longer average queue, which in turn increases the packet queuing delay. Across different switch sizes, a larger switch size yields a larger average queuing delay. When the buffer size is smaller than 4, the average queuing delays for switch sizes 2 and 4 are very close.
Fig. 7 is a schematic diagram illustrating a relationship between an average queuing delay W and a switch size C according to an embodiment of the present application. Fig. 7 shows the relation between the average queuing delay W and the switch size C under the conditions 701: B = 2, 702: B = 4, 703: B = 8, and 704: B = 16. As shown in fig. 7, increasing the switch size also increases the packet delay. The reason is that the incoming traffic is evenly distributed among the crosspoints, so the arrival rate of each crosspoint queue is p/C. As the switch size C increases, both the packet arrival rate and the packet departure rate of a given queue decrease. For a queue of infinite capacity, when the arrival and departure rates of the incoming traffic both decrease, the average queue length remains constant but the average queuing time increases. Thus, the packet delay increases with the switch size.
Fig. 8 is a schematic diagram illustrating a relationship between a packet loss rate L and a buffer size B according to an embodiment of the present application. Fig. 8 shows the relation between the packet loss rate L and the buffer size B under the conditions 801: C = 2, 802: C = 4, 803: C = 8, and 804: C = 16. As shown in fig. 8, the packet loss rate decreases as the buffer size increases, because a larger buffer leaves more room in the queue for arriving packets. The packet loss rate of the system changes markedly with the buffer size B: when the buffer size grows from 1 to 20 the packet loss rate drops sharply, whereas further enlarging the buffer improves it only slowly; expanding the buffer from 20 to 60 reduces the packet loss rate only slightly. The switch size also affects the packet loss rate: the smaller the switch size, the more significant the influence of the buffer size B, and for the same buffer size, a larger switch capacity gives a lower packet loss rate. For a switch size of 2 the packet loss rate approaches 0 once the buffer size exceeds 50, whereas for a switch size of 16 it approaches 0 once the buffer exceeds 15. When the buffer size is larger than 30, the packet loss rates for switch sizes 4, 8 and 16 are very close.
Fig. 9 is a schematic diagram of a relationship between a packet loss rate L and a switch size C according to an embodiment of the present application. Fig. 9 shows the relation between the packet loss rate L and the switch size C under the conditions 901: B = 2, 902: B = 4, 903: B = 8, and 904: B = 16. As shown in fig. 9, the packet loss rate decreases as the switch size increases. The reason is similar to that given for the queue length in the embodiment above: considering the same port, the switching load of the system falls as the switch size grows, so the loss rate falls accordingly. For different buffer sizes, a larger buffer means a lower packet loss rate. When the switch size is larger than 20, the packet loss rates for buffer sizes 8 and 16 are very close, and both approach 0.
Fig. 10 is a schematic diagram illustrating the relationship between the throughput rate T and the buffer size B according to an embodiment of the present application. In fig. 10, curve 1001 corresponds to C = 2, curve 1002 to C = 4, curve 1003 to C = 8, and curve 1004 to C = 16. As shown in fig. 10, the throughput T increases with the buffer size B, because a larger crosspoint buffer can hold more packets, which reduces the packet loss caused by scheduling contention and so yields higher throughput. As the crosspoint buffer size B keeps growing, the throughput tends to 1. The throughput also increases with the switch size. For a switch size of 16, throughput above 98% is reached once the buffer exceeds roughly 10. For a switch size of 2, the throughput improves sharply for buffer sizes between 0 and 10 but remains below the throughput of the larger switch sizes; above a buffer size of 32 it too exceeds 98%. For buffer sizes above 50, the throughput difference between the different switch sizes is no more than 5%.
Fig. 11 is a schematic diagram illustrating the relationship between the throughput rate T and the switch size C according to an embodiment of the present application. In fig. 11, curve 1101 corresponds to B = 2, curve 1102 to B = 4, curve 1103 to B = 8, and curve 1104 to B = 16. As shown in fig. 11, the throughput T increases with the switch size C, because a larger switch can carry more signals to be switched simultaneously, which reduces the packet loss caused by scheduling contention and so yields higher throughput. As the switch size C keeps growing, the throughput tends to 1. The throughput also increases with the buffer size. For a buffer size of 16, the throughput rises from 95% and approaches 1 as the switch size grows. When the switch size exceeds 20, the throughput rates for buffer sizes 8 and 16 are nearly identical.
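The saturation behaviour in figs. 10 and 11 can likewise be illustrated with the same toy birth-death chain. Here throughput is taken as the admitted fraction of arrivals, T = 1 − π_B, which is a modelling assumption of this sketch (the patent's exact throughput formula is given only as an image in claim 3):

```python
def steady_state(a, s, B):
    """Geometric birth-death sketch as before (an illustrative assumption,
    not the patent's exact model)."""
    r = (a * (1 - s)) / (s * (1 - a))
    weights = [r ** i for i in range(B + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def throughput(a, s, B):
    """Admitted fraction of arrivals: T = 1 - pi_B, tending to 1 as B grows."""
    return 1.0 - steady_state(a, s, B)[B]

# Throughput climbs quickly for small buffers and flattens toward 1,
# matching the qualitative shape of figs. 10 and 11.
ts = [throughput(0.5, 0.6, B) for B in (1, 10, 32, 64)]
```

With these example parameters the throughput already exceeds 98% well before B = 32, consistent with the diminishing returns described above.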
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in its protection scope. It should be noted that like reference numbers and letters refer to like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures.

Claims (10)

1. A method for analyzing performance of a switching network, comprising the steps of:
step 1: obtaining the system steady state of the network by increasing the simulation time of the network;
step 2: obtaining the queue length through accumulation and summation;
step 3: obtaining the average queue length from the queue length, and obtaining the average queuing delay of the network according to the queuing model;
step 4: obtaining the packet loss rate and the throughput rate of the network;
step 5: analyzing the relationships among the crosspoint buffer size B, the switch size C, the throughput rate T, the packet loss rate L, the average queuing delay W, and the queue length N to obtain the network performance.
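The five steps of claim 1 can be sketched as a single-queue Monte-Carlo simulation. This is an illustrative reading, not the patent's implementation: the warm-up length, the arrival/service probabilities `a` and `s`, and the use of Little's law with the offered rate are all assumptions of the sketch:

```python
import random

def simulate(a, s, B, slots=200_000, warmup=20_000, seed=1):
    """Sketch of steps 1-5 for one crosspoint queue: run past a warm-up
    period (step 1, steady state), accumulate the queue length (step 2),
    then derive N, W, L, T (steps 3-5). Parameters are assumed, not
    taken from the patent."""
    random.seed(seed)
    q = 0
    n_sum = arrivals = departures = losses = measured = 0
    for t in range(slots):
        if random.random() < a:            # packet arrival in this slot
            if q < B:
                q += 1
            elif t >= warmup:
                losses += 1                # buffer full: packet dropped
            if t >= warmup:
                arrivals += 1              # offered traffic, lost or not
        if q > 0 and random.random() < s:  # service completion
            q -= 1
            if t >= warmup:
                departures += 1
        if t >= warmup:                    # step 2: accumulate queue length
            n_sum += q
            measured += 1
    N = n_sum / measured                   # step 3: mean queue length
    W = N / a                              # Little's law, offered rate (approx.)
    L = losses / max(arrivals, 1)          # step 4: packet loss ratio
    T = departures / max(arrivals, 1)      # step 4: departed / arrived
    return N, W, L, T

N, W, L, T = simulate(0.5, 0.7, 20)
```

Sweeping `B` and the number of ports over such runs yields the kind of trade-off curves discussed for figs. 8 through 11 (step 5).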
2. The method of claim 1, wherein the system steady state of the network is derived based on a Markov process model.
3. The method of claim 1, wherein the throughput rate of the network is the ratio, over a period of time, of the number of packets leaving the network to the number of packets arriving at the network, expressed as:
Figure FDA0003828518360000011
wherein T is the throughput of the crossbar output bus, A(t) is the packet arrival probability in time slot t, π_i(t) is the probability that there are i packets in the crosspoint queue in time slot t, and B is the buffer size.
4. The method according to claim 1, wherein the network packet loss rate is the ratio of the number of lost packets to the number of transmitted packets, expressed as L = A(t)π_B(t), where A(t) is the packet arrival probability in time slot t, π_B(t) is the probability that there are B packets in the crosspoint queue (i.e., the buffer is full) in time slot t, and B is the buffer size.
5. The method of claim 1, wherein the average queue length is
Figure FDA0003828518360000021
where A(t) is the packet arrival probability in time slot t, and π_i(t) is the probability that there are i packets in the crosspoint queue in time slot t.
6. The method of claim 1, wherein the average queuing delay is equal to the average queue length divided by the packet arrival rate, expressed as:
Figure FDA0003828518360000022
where π_i(t) is the probability that there are i packets in the crosspoint queue in time slot t, p is the traffic load on the input bus, and C is the switch size.
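The formulas of claims 5 and 6 appear only as images in this record. One consistent reading, assuming a steady-state distribution π_i(t) and a per-crosspoint arrival rate of p/C under uniform traffic (both assumptions, not stated verbatim in the text), is:

```latex
% Hedged reconstruction of the image formulas in claims 5 and 6.
\bar{N} = \sum_{i=0}^{B} i \, \pi_i(t)
\qquad
W = \frac{\bar{N}}{p/C} = \frac{C}{p} \sum_{i=0}^{B} i \, \pi_i(t)
```

This matches the stated relation that the average queuing delay equals the average queue length divided by the packet arrival rate.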
7. A method for analyzing the performance of a switching network according to any of claims 2-6, characterized in that all crosspoint queues are empty in the initial state, i.e., the initial queue length is 0.
8. A switching network structure with buffered crosspoints, characterized in that its performance is analyzed by the performance analysis method according to claim 1; the network comprises: an input end, virtual output queues, a buffered crossbar, and an output end, wherein a small buffer space is provided at each crosspoint.
9. A method for scheduling in a switching network, characterized in that the scheduling method is applied to the network structure of claim 8; the scheduling method comprises the following steps:
step 1: after a data packet arrives at an input port, caching the data packet into the corresponding queue of the corresponding port according to the information carried by the data packet;
step 2: a scheduling process from the input virtual output queue to the crosspoint buffer;
step 3: a scheduling process at the input side of the crosspoint buffer;
step 4: a scheduling process at the output side of the crosspoint buffer.
10. The method for scheduling in a switching network as claimed in claim 9, wherein the scheduling procedure comprises the steps of:
step 1: detecting whether there is data in the virtual output queue at the input end, and monitoring whether the crosspoint buffer is full;
step 2: generating a port request signal for arbitration;
step 3: arbitrating according to the priority order of the ports;
step 4: reading data from the input virtual output queue buffer according to the arbitration result;
step 5: caching the data at the crosspoint.
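The five steps of claim 10 can be sketched for a single input port as one arbitration round. This is an illustrative sketch under assumptions: the function name, the use of round-robin as the "priority order of the ports," and the packet values are all hypothetical:

```python
from collections import deque

def schedule_one_round(voqs, xbufs, B, start):
    """One arbitration round for one input port (sketch of claim 10).
    A request exists when a VOQ is non-empty AND its crosspoint buffer
    is not full (steps 1-2); ports are scanned in round-robin order
    from `start` (step 3, an assumed priority rule); the first eligible
    request is granted (steps 4-5)."""
    n = len(voqs)
    for k in range(n):
        j = (start + k) % n                   # step 3: priority order
        if voqs[j] and len(xbufs[j]) < B:     # steps 1-2: detect + request
            pkt = voqs[j].popleft()           # step 4: read from VOQ
            xbufs[j].append(pkt)              # step 5: cache at crosspoint
            return j                          # granted output port
    return None                               # no eligible request

# Demo with hypothetical packets: port 0 has a queued packet, port 1 none.
voqs = [deque(["pkt_a"]), deque()]
xbufs = [deque(), deque()]
granted = schedule_one_round(voqs, xbufs, B=2, start=0)
```

Because each input port arbitrates independently against its own crosspoint buffers, no centralized scheduler is needed, which is the complexity reduction the abstract claims for the CICQ structure.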
CN202211069087.9A 2022-09-01 2022-09-01 Switching network performance analysis method, network structure and scheduling method thereof Pending CN115720216A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211069087.9A CN115720216A (en) 2022-09-01 2022-09-01 Switching network performance analysis method, network structure and scheduling method thereof

Publications (1)

Publication Number Publication Date
CN115720216A true CN115720216A (en) 2023-02-28

Family

ID=85254004

Country Status (1)

Country Link
CN (1) CN115720216A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination