EP2632099B1 - Data flow switch control method and relevant device - Google Patents
- Publication number
- EP2632099B1 (application EP11871845A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- bandwidth
- data stream
- port
- sequencing
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/25—Routing or path finding in a switch fabric
- H04L49/253—Routing or path finding in a switch fabric using establishment or release of connections between ports
- H04L49/50—Overload detection or protection within a single switching element
Definitions
- Embodiments of the present invention provide a method for controlling data stream switch and a relevant equipment, so as to solve the problem of scale limitation on a bufferless switch structure, and reduce the delay jitter during switch processing.
- the embodiments of the present invention provide the following technical solutions.
- a method for controlling data stream switch includes:
- a switch equipment includes:
- the input device includes: an obtaining unit, a first sending unit, a cell sequencing unit and a second sending unit.
- the switch device includes: a first calculation unit, a bandwidth sequencing unit and a third sending unit.
- the obtaining unit is configured to obtain bandwidth demand information of a data stream.
- the first sending unit is configured to encapsulate in a first control cell the bandwidth demand information obtained by the obtaining unit and the correspondence information between the bandwidth demand information and the data stream, and send the first control cell to the first calculation unit.
- the first calculation unit is configured to calculate a bandwidth map BWM according to the bandwidth demand information, the physical bandwidth of at least one ingress port corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and time division multiplexing TDM service bandwidth information, where the ingress port is a connection port of the input device and the switch device, the egress port is a connection port of the switch device and the output device, and the TDM service bandwidth information is used to indicate a bandwidth occupied by a TDM service on the egress port.
- the bandwidth sequencing unit is configured to perform sequencing processing on entries of the BWM according to a preset sequencing criterion, to obtain a bandwidth sequencing information table, where the sequencing criterion is that no more than two ingress ports send a cell to a same egress port in a same time slot.
- the third sending unit is configured to encapsulate the bandwidth sequencing information table in a second control cell, and send the second control cell to the cell sequencing unit.
- the cell sequencing unit is configured to perform cell even sequencing processing on the data stream according to the bandwidth sequencing information table, to obtain a cell table, where the cell table includes a sequential position of each cell in the data stream, and sequential positions of cells of a same data stream are distributed at intervals.
- the second sending unit is configured to control sending of cells of the data stream according to the cell table.
- Dynamic bandwidth arbitration processing is introduced into the switch fabric, that is, the BWM is calculated by obtaining the bandwidth demand information, the ingress port physical bandwidth information and the egress port physical bandwidth information, to adjust a scheduling policy of the switch fabric.
- sequencing is performed on the entries of the BWM and cell even sequencing processing is performed on the sequencing information table, which avoids the problem of "many-to-one" conflict between ingress ports and egress ports, and ensures fair distribution of end-to-end switch resources and even sending of cells, thereby reducing the delay jitter during the switch processing.
- Embodiments of the present invention provide a method for controlling data stream switch and a relevant equipment.
- a method for controlling data stream switch in an embodiment of the present invention is described in the following. Referring to FIG. 2 , the method for controlling data stream switch in the embodiment of the present invention includes:
- the input end (that is, an ingress end) of the switch equipment receives a data stream sent from an upstream equipment, and the input end may acquire, according to header information of the data stream and a locally-stored forwarding table, a switched path of the data stream, that is, an ingress port and an egress port of the data stream, where the ingress port refers to a connection port of the input end and a switch fabric of the switch equipment, and the egress port refers to a connection port of the switch fabric and an output end (that is, an egress end) of the switch equipment.
- the input end obtains, in real time, the bandwidth demand information of the received data stream.
- the input end obtains bandwidth demand information of each data stream. Steps of obtaining the bandwidth demand information of the data stream may include:
- the bandwidth demand information of each data stream may be different.
- The input end needs to encapsulate the bandwidth demand information, together with the correspondence information (for example, a stream identifier) between the bandwidth demand information and the data stream, in the first control cell, and then send the first control cell to the switch fabric of the switch equipment.
- the switch fabric calculates a bandwidth map (BWM, Bandwidth Map) according to the bandwidth demand information, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and time division multiplexing (TDM, Time Division Multiplexing) service bandwidth information.
- When receiving the first control cell from the input end, the switch fabric extracts the bandwidth demand information from the first control cell, determines from the correspondence information in the first control cell which data stream the bandwidth demand information belongs to, and thereby acquires the bandwidth demand of the corresponding data stream. The switch fabric may then calculate the BWM according to the received bandwidth demand information, the physical bandwidth of the ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of the egress port of the data stream corresponding to the bandwidth demand information, and the TDM service bandwidth information, and output the BWM, where the TDM service bandwidth information may be obtained from the TDM service configuration information.
- a certain time length is preset as a switch period, and the switch fabric calculates the BWM according to all bandwidth demand information received in a switch period, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and the TDM service bandwidth information.
- time lengths of different switch periods may not be fixed, which is not limited here.
- The BWM meets the following conditions: the sum of bandwidth demands of all data streams with a same ingress port is less than or equal to the physical bandwidth of the ingress port; the sum of bandwidth demands of all data streams with a same egress port is less than or equal to the allocatable physical bandwidth of the egress port, where the allocatable physical bandwidth of the egress port equals the physical bandwidth of the egress port minus the bandwidth occupied by a TDM service on the egress port, as indicated by the TDM service bandwidth information.
- bandwidth allocation may be performed according to the bandwidth demand information of each data stream, to obtain the BWM.
- bandwidth allocation may be performed according to a proportion of the bandwidth demand of each data stream to the sum of bandwidth demands of all data streams, to obtain the BWM.
- bandwidth allocation may also be performed after subtracting a preset value from the bandwidth demand of each data stream at the same time, to obtain the BWM. This is not limited here.
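As an illustrative sketch of the proportional allocation manner described above (not the patent's exact arbitration algorithm; the function name, slot-granular bandwidth units, and dictionary layout are assumptions for the example):

```python
def calculate_bwm(demands, ingress_capacity, egress_allocatable):
    """Scale per-stream bandwidth demands (in time slots per switch period)
    proportionally, so that no ingress port exceeds its physical bandwidth
    and no egress port exceeds its allocatable bandwidth (physical bandwidth
    minus the bandwidth reserved for TDM services).

    demands:            {(ingress_port, egress_port): requested_slots}
    ingress_capacity:   {ingress_port: physical_slots}
    egress_allocatable: {egress_port: physical_slots - tdm_slots}
    Returns the BWM as {(ingress_port, egress_port): granted_slots}.
    """
    bwm = dict(demands)
    # First enforce the ingress-side condition ...
    for port, cap in ingress_capacity.items():
        total = sum(s for (i, _), s in bwm.items() if i == port)
        if total > cap:
            for key in [k for k in bwm if k[0] == port]:
                bwm[key] = bwm[key] * cap // total
    # ... then the egress side; scaling down never re-violates the ingress side.
    for port, cap in egress_allocatable.items():
        total = sum(s for (_, e), s in bwm.items() if e == port)
        if total > cap:
            for key in [k for k in bwm if k[1] == port]:
                bwm[key] = bwm[key] * cap // total
    return bwm
```

For instance, two streams each demanding 4 slots of an egress port whose allocatable bandwidth is 4 slots (2 of 6 slots taken by a TDM service) would each be granted 2 slots.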
- the switch fabric performs sequencing on entries of the BWM according to a preset sequencing criterion, to obtain a bandwidth sequencing information table.
- the entries of the BWM include bandwidth information of each data stream. Multiple data streams with a same ingress port are arranged at different time slot positions of the switch period, and time slot positions where multiple data streams with different ingress ports but with a same egress port are located possibly overlap. Therefore, the switch fabric performs sequencing on the entries of the BWM according to the preset sequencing criterion, where the sequencing criterion is that no more than two ingress ports send a cell to a same egress port in a same time slot.
- a level-1 port and a level-2 port are set at each egress port, where the level-1 port is a bus port from the switch fabric to the output end, and the level-2 port is a buffer port from the switch fabric to the output end.
- the switch fabric may, starting from a first time slot of the switch period, detect statuses of ingress ports and egress ports of data streams in turn according to a numbering sequence of the ingress port and the egress port that correspond to their respective data streams in the BWM, and perform sequencing processing on the entries of the BWM according to the preset sequencing criterion. For example, it is assumed that a current switch period is 6 time slots, and an input BWM as shown in FIG.
- 3-a includes 11 data streams and bandwidth information of each data stream, which are respectively as follows: a data stream from an ingress port 1 to an egress port 1, occupying 2 time slots of the switch period; a data stream from the ingress port 1 to an egress port 2, occupying 2 time slots of the switch period; a data stream from the ingress port 1 to an egress port 3, occupying 2 time slots of the switch period; a data stream from an ingress port 2 to the egress port 1, occupying 4 time slots of the switch period; a data stream from the ingress port 2 to the egress port 3, occupying 2 time slots of the switch period; a data stream from an ingress port 3 to the egress port 2, occupying 2 time slots of the switch period; a data stream from the ingress port 3 to the egress port 3, occupying 1 time slot of the switch period; a data stream from the ingress port 3 to an egress port 4, occupying 3 time slots of the switch period; a data stream from an ingress port 4
- level-1 ports of all egress ports may be occupied first.
- a sequencing processing procedure is as follows.
- sequencing processing may be performed on the BWM in other sequencing manners, which is not limited here.
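One possible sequencing manner is a greedy first-fit assignment over the time slots of the switch period, filling the level-1 (bus) port of an egress port before its level-2 (buffer) port. This sketch is an assumption for illustration, not the exact procedure of the embodiment:

```python
def sequence_bwm(entries, period):
    """Greedily assign time slots to BWM entries.

    entries: list of (ingress_port, egress_port, slots_needed)
    period:  number of time slots in the switch period
    Criterion: an ingress port sends at most one cell per slot, and no more
    than two ingress ports send a cell to a same egress port in a same slot
    (one via the level-1 bus port, one via the level-2 buffer port).
    Returns {(ingress_port, egress_port): [assigned slot indices]}.
    """
    ingress_busy = {}   # ingress_port -> set of occupied slots
    egress_load = {}    # (egress_port, slot) -> number of senders (at most 2)
    table = {}
    for ingress, egress, need in entries:
        granted = []
        for slot in range(period):
            if len(granted) == need:
                break
            if slot in ingress_busy.setdefault(ingress, set()):
                continue                      # ingress already sending in this slot
            if egress_load.get((egress, slot), 0) >= 2:
                continue                      # both level-1 and level-2 ports taken
            ingress_busy[ingress].add(slot)
            egress_load[(egress, slot)] = egress_load.get((egress, slot), 0) + 1
            granted.append(slot)
        if len(granted) < need:
            raise ValueError("cannot satisfy entry (%s, %s)" % (ingress, egress))
        table[(ingress, egress)] = granted
    return table
```

A greedy pass can fail on inputs that a smarter ordering would satisfy; it only illustrates how the "no more than two ingress ports per egress port per slot" criterion constrains the table.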
- the switch fabric encapsulates the bandwidth sequencing information table in a second control cell, and sends the second control cell to the input end.
- the input end performs cell even sequencing processing on the data stream according to the bandwidth sequencing information table, to obtain a cell table.
- When receiving the second control cell from the switch fabric, the input end extracts the bandwidth sequencing information table from the second control cell and then performs cell even sequencing processing on the data stream according to the bandwidth sequencing information table, to obtain the cell table.
- the cell table includes a sequential position of each cell of each data stream, and sequential positions of cells of a same data stream are distributed at intervals.
- a total of cells expected to be sent in the current switch period may be divided into multiple arrays, so the number of time slots included in each array is equal to the total of the cells/the number of the arrays. Sequencing is performed on each array according to a set method, so that adjacent time slots of a same array have an interval of (M-1) sequential positions, where M is equal to the number of the arrays. Starting from a first array, according to an array sequence table obtained by sequencing, cells of each data stream are filled in turn into sequential positions where time slots of each array are located.
- cell even sequencing processing may be performed on the data stream by adopting a manner of dividing the arrays through negative carry, and a sequencing processing procedure is as follows.
- A total N of cells expected to be sent in the switch period is divided into 2^L arrays, so each array includes N/2^L time slots, where L is an integer greater than 1.
- carry from a high bit to a low bit is performed in turn by adopting a negative carry manner
- Sequencing is performed on the decimal values of the sequentially-obtained 2^L L-bit binary numbers, where the decimal values are used in turn as the sequential positions of the first time slots of the arrays.
- sequencing is performed on remaining time slots of the arrays according to a sequence relation of the sequential positions of the first time slots of the arrays, to obtain an array sequence table.
- a total of the cells expected to be sent in the current switch period is 16.
- The 16 cells are divided into 8 arrays (that is, L is equal to 3); it is indicated in the bandwidth sequencing information table that a data stream 1 includes 8 cells, and a data stream 2 includes 4 cells.
- Sequential positions of the arrays are described by using a 3-bit binary number (b2 b1 b0), and a binary negative carry table shown in Table 1 may be obtained by adopting a negative carry manner.
- Sequencing may be performed on the 8 arrays according to the binary negative carry table shown in Table 1, to obtain an array sequence table shown in FIG. 4-a .
- Sequential positions of the first time slots of the arrays are distributed as shown in Table 1, and sequential positions of the second time slots of the arrays are distributed according to a sequence relation of the sequential positions of the first time slots of the arrays.
- all the cells of the data stream 1 and the data stream 2 are filled into sequential positions where time slots of corresponding arrays are located, to obtain a cell table shown in FIG. 4-b .
- the cells of the data stream 1 occupy two time slots (sequential positions are 0 and 8) of the first array, two time slots (sequential positions are 4 and 12) of a second array, two time slots (sequential positions are 2 and 10) of a third array, and two time slots (sequential positions are 6 and 14) of the fourth array in turn; therefore, the sequential positions occupied by the data stream 1 are (0, 2, 4, 6, 8, 10, 12, 14).
- the data stream 2 occupies two time slots (sequential positions are 1 and 9) of a fifth array and two time slots (sequential positions are 5 and 13) of a sixth array in turn; therefore, the sequential positions occupied by the data stream 2 are (1, 5, 9, 13).
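The negative carry manner amounts to counting in bit-reversed binary, so the worked example above can be reproduced with a small bit-reversal helper (function names and data layout are illustrative, not from the patent):

```python
def bit_reverse(value, bits):
    """Reverse the low `bits` bits of `value`; counting up in ordinary binary
    and reading the bits in reverse yields the negative carry sequence."""
    out = 0
    for _ in range(bits):
        out = (out << 1) | (value & 1)
        value >>= 1
    return out

def build_cell_table(stream_cells, total_cells, L):
    """stream_cells: list of (stream_id, cell_count) taken from the bandwidth
    sequencing information table; total_cells cells are split into 2**L arrays."""
    m = 2 ** L
    per_array = total_cells // m
    # Sequential positions of the first time slots of the arrays (Table 1).
    first_slots = [bit_reverse(i, L) for i in range(m)]
    # Each array also owns first + m, first + 2*m, ..., so adjacent time slots
    # of a same array are separated by (M - 1) sequential positions.
    positions = (first + j * m for first in first_slots for j in range(per_array))
    table = {}
    for stream_id, count in stream_cells:
        table[stream_id] = sorted(next(positions) for _ in range(count))
    return table
```

With L = 3 the array sequence table is [0, 4, 2, 6, 1, 5, 3, 7], and `build_cell_table([(1, 8), (2, 4)], 16, 3)` places data stream 1 at sequential positions (0, 2, 4, 6, 8, 10, 12, 14) and data stream 2 at (1, 5, 9, 13), matching FIG. 4-b.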
- the input end merely needs to obtain the array sequence table in the foregoing manner in a first switch period, and may directly use the array sequence table for sequencing processing in a subsequent switch period to obtain a cell table.
- the input end controls sending of cells of the data stream according to the cell table.
- The input end sends the cells to the switch fabric according to the sequential position of each cell in the cell table, so that the switch fabric delivers the received cells to the output end.
- Dynamic bandwidth arbitration processing is introduced into the switch fabric, that is, the BWM is calculated by obtaining the bandwidth demand information, the physical bandwidth information of the ingress port and the physical bandwidth information of the egress port, to adjust a scheduling policy of the switch fabric.
- sequencing is performed on the entries of the BWM and cell even sequencing processing is performed on the sequencing information table, which avoids the problem of "many-to-one" conflict between ingress ports and egress ports, and ensures fair distribution of end-to-end switch resources and even sending of the cells, thereby reducing the delay jitter during switch processing.
- a switch equipment 500 in the embodiment of the present invention includes:
- the input device 501 includes: an obtaining unit 5011, a first sending unit 5012, a cell sequencing unit 5013 and a second sending unit 5014.
- the switch device 502 includes: a first calculation unit 5021, a bandwidth sequencing unit 5022 and a third sending unit 5023.
- the obtaining unit 5011 is configured to obtain bandwidth demand information of a data stream.
- the obtaining unit 5011 specifically includes:
- the first sending unit 5012 is configured to encapsulate in a first control cell the bandwidth demand information obtained by the obtaining unit 5011 and correspondence information between the bandwidth demand information and the data stream, and send the first control cell to the first calculation unit 5021.
- bandwidth demand information of each data stream may be different.
- The first sending unit 5012 needs to encapsulate the bandwidth demand information, together with the correspondence information (for example, a stream identifier) between the bandwidth demand information and the data stream, in the first control cell, and then send the first control cell to the switch device of the switch equipment.
- the first calculation unit 5021 is configured to calculate a BWM according to the bandwidth demand information, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and TDM service bandwidth information, where an ingress port is a connection port of the input device 501 and the switch device 502, an egress port is a connection port of the switch device 502 and the output device 503, and the TDM service bandwidth information is used to indicate the bandwidth occupied by a TDM service on each egress port.
- a certain time length is preset as a switch period, and the first calculation unit 5021 calculates the BWM according to all bandwidth demand information received in a switch period, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and the TDM service bandwidth information.
- time lengths of different switch periods may not be fixed, which is not limited here.
- The BWM meets the following conditions: the sum of bandwidth demands of all data streams with a same ingress port is less than or equal to the physical bandwidth of the ingress port; the sum of bandwidth demands of all data streams with a same egress port is less than or equal to the allocatable physical bandwidth of the egress port, where the allocatable physical bandwidth of the egress port equals the physical bandwidth of the egress port minus the bandwidth occupied by a TDM service on the egress port, as indicated by the TDM service bandwidth information.
- the bandwidth sequencing unit 5022 is configured to perform sequencing processing on entries of the BWM according to a preset sequencing criterion, to obtain a bandwidth sequencing information table, where the sequencing criterion is that no more than two ingress ports send a cell to a same egress port in a same time slot.
- the third sending unit 5023 is configured to encapsulate the bandwidth sequencing information table in a second control cell, and send the second control cell to the cell sequencing unit 5013.
- the cell sequencing unit 5013 is configured to perform cell even sequencing processing on the data stream according to the bandwidth sequencing information table, to obtain a cell table, where the cell table includes a sequential position of each cell in the data stream, and sequential positions of cells of a same data stream are distributed at intervals.
- the cell sequencing unit 5013 may divide a total of cells expected to be sent in the current switch period into multiple arrays, so the number of time slots included in each array is equal to the total of the cells/the number of the arrays. Sequencing is performed on each array according to a set method, so that adjacent time slots of a same array have an interval of (M-1) sequential positions, where M is equal to the number of the arrays. Starting from a first array, according to an array sequence table obtained by sequencing, cells of each data stream are filled in turn into sequential positions where time slots of each array are located.
- the cell sequencing unit 5013 may include:
- the second sending unit 5014 is configured to control sending of cells of the data stream according to the cell table.
- the second sending unit 5014 sends the cells to the switch device 502 according to the sequential position of each cell in the cell table, so that the switch device delivers the received cells to the output device 503.
- the input device 501, the switch device 502 and the output device 503 in this embodiment may be used as the input end, the switch fabric and the output end in the foregoing method embodiment, to implement all the technical solutions in the foregoing method embodiment.
- Functions of various functional modules may be specifically implemented according to the method in the foregoing method embodiment. Reference may be made to the relevant description in the foregoing embodiment for a specific implementation process, and the details are not repeatedly described here.
- Dynamic bandwidth arbitration processing is introduced into the switch device of the switch equipment, that is, the BWM is calculated by obtaining the bandwidth demand information, the physical bandwidth information of the ingress port and the physical bandwidth information of the egress port, to adjust a scheduling policy of the switch device.
- sequencing is performed on the entries of the BWM and cell even sequencing processing is performed on the sequencing information table, which avoids the problem of "many-to-one" conflict between ingress ports and egress ports, and ensures fair distribution of end-to-end switch resources and even sending of the cells, thereby reducing a delay jitter during switch processing.
Description
- The present invention relates to the field of communications, and in particular, to a method for controlling data stream switch and a relevant equipment.
- The networking of mobile communication is developing towards all-IP (ALL-IP). Internet Protocol (IP) technologies will be applied to every layer of a mobile network. Therefore, more and more service types, such as mobile services, voice services, video services, network gaming services, and network browsing services, emerge on a current IP network (that is, a packet switched network), and bandwidth requirements also become higher and higher. In order to ensure that a large number of real-time services originally running on a time division multiplexing (TDM, Time Division Multiplexing) network can run well on the IP network, complicated service classification needs to be performed on the IP network, and the processing procedures of IP switch equipments need to be simplified as much as possible, so as to improve the processing efficiency and quality of IP switch equipments.
- In a communication equipment system such as a large-scale switch or router, a switched network is a necessary technology and means for connecting different line cards and simplifying equipment architecture. The switched network and line cards constitute an integrated packet switched system that implements switch of data packets. Indexes of data switch include throughput, average cell (packet) delay, cell (packet) delay jitter, cell (packet) loss rate, blocking probability, and so on. Certainly, a good switch equipment enables the throughput to be close to 100% as much as possible, and enables the average cell (packet) delay, the cell (packet) delay jitter, the cell (packet) loss rate, and the blocking probability to be as low as possible.
- A current switch equipment in the network (for example, a switch or a router with a switch function) generally adopts a virtual output queue (VOQ, Virtual Output Queue) for data switch. The VOQ is mainly used to perform classification and queuing on received packets according to destination addresses and priorities, and each input port sets a VOQ queue for each output port.
FIG. 1 is the switch architecture of an existing switch equipment. An input end (that is, an ingress end) includes multiple VOQ queues corresponding to multiple switched paths from ingress ports to egress ports, where an ingress port refers to a connection port of the input end and a switch fabric, and an egress port refers to a connection port of the switch fabric and an output end. The switch fabric (SF, Switch Fabric) includes a data switching matrix and a control switching matrix. The data switching matrix is configured to establish a switch channel for a received data stream, and the control switching matrix is configured to establish a switch channel for control information (for example, information such as a switch request, authorization and a queue status). A switch procedure is as follows. The ingress end finds out a switched path from an input port to an output port according to header information of a received data stream, and then transfers the data stream to the VOQ queue corresponding to the switched path. When it is time to switch the data stream, the ingress end requests the control switching matrix of the SF to establish a physical link for the switch of the data stream, and the ingress end sends a switch request message carrying a destination address to the output end (that is, an egress end) through the physical link. The egress end locally searches for queue information of an out queue (OQ, Out Queue) corresponding to the destination address in the switch request message according to the received switch request message, and when the OQ queue can still accommodate data, the egress end returns token information to the ingress end through the SF, allowing the ingress end to send the data stream to the egress end. When the token information reaches the SF, the data switching matrix of the SF establishes a physical channel for subsequent switch. 
After receiving the token information, the ingress end sends the data stream to the egress end through the physical channel established by the SF, and the egress end stores the received data in the OQ. When the data stream is scheduled, the egress end encapsulates the data in the OQ according to the above destination address, and then sends the data to a corresponding downstream equipment. - However, because the number of cells (or packets) from an upstream equipment and the number of packets reaching a certain port of the downstream equipment cannot be predicted, in the foregoing switch structure, a large number of buffers need to be used to store VOQs and OQs in the ingress end and the egress end. In addition, the SF needs to prepare two switching matrixes for data switch and control information switch, and the ingress end and the egress end exchange messages through the SF to determine whether the data switch is allowed, so the entire switch procedure is overly complicated, and a packet loss at the ingress end is easily caused. Moreover, a situation in which multiple ingress ports send data to a same egress port at the same time easily occurs in the foregoing switch architecture, which results in blocking of the egress port.
- Examples of data stream switches and methods for controlling data stream switches are disclosed in patent publications
US 2006/013133 A1, WO 01/95661 A1, and US 2007/280261 A1. - Embodiments of the present invention provide a method for controlling data stream switch and a relevant equipment, so as to solve the problem of scale limitation on a bufferless switch structure, and reduce the delay jitter during switch processing.
- In order to solve the foregoing technical problems, the embodiments of the present invention provide the following technical solutions.
- A method for controlling data stream switch includes:
- obtaining, by an input end of a switch equipment, bandwidth demand information of a data stream, encapsulating in a first control cell the bandwidth demand information and the correspondence information between the bandwidth demand information and the data stream, and sending the first control cell to a switch fabric of the switch equipment;
- calculating, by the switch fabric, a bandwidth map BWM according to the bandwidth demand information, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and time division multiplexing TDM service bandwidth information, where the ingress port is a connection port of the input end and the switch fabric, and the egress port is a connection port of the switch fabric and the output end, and the TDM service bandwidth information is used to indicate the bandwidth occupied by a TDM service on the egress port;
- performing, by the switch fabric, sequencing processing on entries of the BWM according to a preset sequencing criterion, to obtain a bandwidth sequencing information table, encapsulating the bandwidth sequencing information table in a second control cell, and sending the second control cell to the input end, where the sequencing criterion is that no more than two ingress ports send a cell to a same egress port in a same time slot;
- performing, by the input end, cell even sequencing processing on the data stream according to the bandwidth sequencing information table, to obtain a cell table, where the cell table includes a sequential position of each cell in the data stream, and cells of a same data stream are distributed at intervals in terms of sequential positions; and
- controlling, by the input end, sending of cells of the data stream according to the cell table.
- A switch equipment includes:
- an input device, a switch device and an output device.
- The input device includes: an obtaining unit, a first sending unit, a cell sequencing unit and a second sending unit.
- The switch device includes: a first calculation unit, a bandwidth sequencing unit and a third sending unit.
- The obtaining unit is configured to obtain bandwidth demand information of a data stream.
- The first sending unit is configured to encapsulate in a first control cell the bandwidth demand information obtained by the obtaining unit and the correspondence information between the bandwidth demand information and the data stream, and send the first control cell to the first calculation unit.
- The first calculation unit is configured to calculate a bandwidth map BWM according to the bandwidth demand information, the physical bandwidth of at least one ingress port corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and time division multiplexing TDM service bandwidth information, where the ingress port is a connection port of the input device and the switch device, the egress port is a connection port of the switch device and the output device, and the TDM service bandwidth information is used to indicate a bandwidth occupied by a TDM service on the egress port.
- The bandwidth sequencing unit is configured to perform sequencing processing on entries of the BWM according to a preset sequencing criterion, to obtain a bandwidth sequencing information table, where the sequencing criterion is that no more than two ingress ports send a cell to a same egress port in a same time slot.
- The third sending unit is configured to encapsulate the bandwidth sequencing information table in a second control cell, and send the second control cell to the cell sequencing unit.
- The cell sequencing unit is configured to perform cell even sequencing processing on the data stream according to the bandwidth sequencing information table, to obtain a cell table, where the cell table includes a sequential position of each cell in the data stream, and sequential positions of cells of a same data stream are distributed at intervals.
- The second sending unit is configured to control sending of cells of the data stream according to the cell table.
- It can be seen from the foregoing description that, in the embodiments of the present invention, dynamic bandwidth arbitration processing is introduced into the switch fabric, that is, the BWM is calculated by obtaining the bandwidth demand information, the ingress port physical bandwidth information and the egress port physical bandwidth information to adjust a scheduling policy of the switch fabric. There is no need to construct a data switching matrix and a control switching matrix in the switch fabric, which reduces the processing complexity of the switch fabric. Meanwhile, there is no need to use a large number of buffers to store VOQs and OQs in the input end and the output end, which solves the problem of scale limitation on a bufferless switch structure. In addition, sequencing is performed on the entries of the BWM, and cell even sequencing processing is performed according to the bandwidth sequencing information table, which avoids the problem of "many-to-one" conflict between ingress ports and egress ports, and ensures fair distribution of end-to-end switch resources and even sending of cells, thereby reducing the delay jitter during the switch processing.
- To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, accompanying drawings for describing the embodiments or the prior art are introduced briefly in the following.
- FIG. 1 is a schematic diagram of switch architecture of a switch equipment in the prior art;
- FIG. 2 is a schematic flow chart of an embodiment of a method for controlling data stream switch provided in an embodiment of the present invention;
- FIG. 3-a is a schematic diagram of a BWM in an application scenario provided in an embodiment of the present invention;
- FIG. 3-b is a schematic diagram of a bandwidth sequencing information table obtained in an application scenario provided in an embodiment of the present invention;
- FIG. 3-c is a schematic diagram of a bandwidth sequencing information table obtained in another application scenario provided in an embodiment of the present invention;
- FIG. 4-a is a schematic diagram of an array sequence table obtained in an application scenario provided in an embodiment of the present invention;
- FIG. 4-b is a schematic diagram of a cell table obtained in an application scenario provided in an embodiment of the present invention; and
- FIG. 5 is a schematic structural diagram of a switch equipment provided in an embodiment of the present invention.
- Embodiments of the present invention provide a method for controlling data stream switch and a relevant equipment.
- In order to make the objectives, features and advantages of the present invention more obvious and comprehensible, the technical solutions in the embodiments of the present invention are clearly and completely described in the following with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the embodiments to be described are only part rather than all of the embodiments of the present invention.
- A method for controlling data stream switch in an embodiment of the present invention is described in the following. Referring to FIG. 2, the method for controlling data stream switch in the embodiment of the present invention includes:
- 201: An input end of a switch equipment obtains bandwidth demand information of a data stream.
- In the embodiment of the present invention, the input end (that is, an ingress end) of the switch equipment receives a data stream sent from an upstream equipment, and the input end may acquire, according to header information of the data stream and a locally-stored forwarding table, a switched path of the data stream, that is, an ingress port and an egress port of the data stream, where the ingress port refers to a connection port of the input end and a switch fabric of the switch equipment, and the egress port refers to a connection port of the switch fabric and an output end (that is, an egress end) of the switch equipment.
- In the embodiment of the present invention, the input end obtains, in real time, the bandwidth demand information of the received data stream. When receiving multiple data streams, the input end obtains bandwidth demand information of each data stream. Steps of obtaining the bandwidth demand information of the data stream may include:
- A1: Obtain a stream rate of the data stream.
- A2: Obtain a length of buffered information of the data stream.
- A3: Divide the length of the buffered information of the data stream by a unit time, and then add the quotient to the stream rate of the data stream, to obtain the bandwidth demand information of the data stream.
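A minimal sketch of steps A1 to A3 (the rate units, the one-second default unit time, and the function name are illustrative assumptions, not taken from the embodiment):

```python
def bandwidth_demand(stream_rate_bps: float,
                     buffered_bits: float,
                     unit_time_s: float = 1.0) -> float:
    """Estimate the bandwidth demand of a data stream (steps A1-A3).

    The demand is the current stream rate (A1) plus the rate needed to
    drain the already-buffered data (A2) within one unit time (A3).
    """
    # A3: divide the buffered length by the unit time, then add the
    # quotient to the stream rate of the data stream
    return stream_rate_bps + buffered_bits / unit_time_s

# Example: a 100 Mbit/s stream with 5 Mbit buffered, unit time 1 s
demand = bandwidth_demand(100e6, 5e6)
```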
- 202: The input end encapsulates in a first control cell the bandwidth demand information and correspondence information between the bandwidth demand information and the data stream, and sends the first control cell to the switch fabric of the switch equipment.
- In the embodiment of the present invention, the bandwidth demand information of each data stream may be different. The input end needs to encapsulate the bandwidth demand information, together with the correspondence information (for example, an identifier of the data stream) between the bandwidth demand information and the data stream, in the first control cell, and then send the first control cell to the switch fabric of the switch equipment.
- 203: The switch fabric calculates a bandwidth map (BWM, Bandwidth Map) according to the bandwidth demand information, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and time division multiplexing (TDM, Time Division Multiplexing) service bandwidth information.
- When receiving the first control cell from the input end, the switch fabric extracts the bandwidth demand information in the first control cell, acquires which data stream the bandwidth demand information belongs to from the correspondence information between the bandwidth demand information and the data stream in the first control cell, and thereby acquires the bandwidth demand of the corresponding data stream. Therefore, the switch fabric may calculate the BWM according to the received bandwidth demand information, the physical bandwidth of the ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of the egress port of the data stream corresponding to the bandwidth demand information, and the TDM service bandwidth information, and output the BWM, where the TDM service bandwidth information may be obtained from the TDM service configuration information.
- In the embodiment of the present invention, a certain time length is preset as a switch period, and the switch fabric calculates the BWM according to all bandwidth demand information received in a switch period, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and the TDM service bandwidth information. Certainly, time lengths of different switch periods may not be fixed, which is not limited here.
- In the embodiment of the present invention, the BWM meets the following conditions: the sum of bandwidth demands of all data streams with a same ingress port is less than or equal to the physical bandwidth of the ingress port; and the sum of bandwidth demands of all data streams with a same egress port is less than or equal to the allocatable physical bandwidth of the egress port, where the allocatable physical bandwidth of the egress port equals the physical bandwidth of the egress port minus the bandwidth occupied by a TDM service on the egress port, as indicated by the TDM service bandwidth information. In an actual application, if the sum of bandwidth demands of all data streams with a same ingress port is less than or equal to the physical bandwidth of the ingress port, and the sum of bandwidth demands of all data streams with a same egress port is less than or equal to the allocatable physical bandwidth of the egress port, bandwidth allocation may be performed according to the bandwidth demand information of each data stream, to obtain the BWM. If the sum of bandwidth demands of all data streams with a same ingress port is greater than the physical bandwidth of the ingress port, and/or the sum of bandwidth demands of all data streams with a same egress port is greater than the allocatable physical bandwidth of the egress port, bandwidth allocation may be performed according to a proportion of the bandwidth demand of each data stream to the sum of bandwidth demands of all data streams, to obtain the BWM. Alternatively, bandwidth allocation may also be performed after subtracting a preset value from the bandwidth demand of each data stream, to obtain the BWM. This is not limited here.
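One possible reading of the allocation rules above is the following sketch; the dictionary layout, the port keys, and the per-port proportional scaling are illustrative assumptions, not the patented implementation:

```python
from collections import defaultdict

def allocate_bwm(demands, ingress_bw, egress_bw, tdm_bw):
    """Compute a bandwidth map (BWM) from per-stream demands.

    demands:    {(ingress_port, egress_port): demanded bandwidth}
    ingress_bw: {ingress_port: physical bandwidth}
    egress_bw:  {egress_port: physical bandwidth}
    tdm_bw:     {egress_port: bandwidth occupied by TDM services}
    """
    in_sum, out_sum = defaultdict(float), defaultdict(float)
    for (i, e), d in demands.items():
        in_sum[i] += d
        out_sum[e] += d
    bwm = {}
    for (i, e), d in demands.items():
        # allocatable egress bandwidth = physical bandwidth minus TDM share
        egress_avail = egress_bw[e] - tdm_bw.get(e, 0.0)
        # grant the demand when both ports have room; otherwise scale all
        # demands on the oversubscribed port down proportionally
        scale = min(1.0,
                    ingress_bw[i] / in_sum[i],
                    egress_avail / out_sum[e])
        bwm[(i, e)] = d * scale
    return bwm
```

For example, two streams each demanding 60 units toward an egress port whose allocatable bandwidth is 80 units are both scaled to 40 units, so the egress condition holds.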
- 204: The switch fabric performs sequencing on entries of the BWM according to a preset sequencing criterion, to obtain a bandwidth sequencing information table.
- In the embodiment of the present invention, the entries of the BWM include bandwidth information of each data stream. Multiple data streams with a same ingress port are arranged at different time slot positions of the switch period, and time slot positions where multiple data streams with different ingress ports but with a same egress port are located possibly overlap. Therefore, the switch fabric performs sequencing on the entries of the BWM according to the preset sequencing criterion, where the sequencing criterion is that no more than two ingress ports send a cell to a same egress port in a same time slot.
- In an actual application, a level-1 port and a level-2 port are set at each egress port, where the level-1 port is a bus port from the switch fabric to the output end, and the level-2 port is a buffer port from the switch fabric to the output end.
- In an application scenario, after receiving the BWM of the input end, the switch fabric may, starting from a first time slot of the switch period, detect statuses of ingress ports and egress ports of data streams in turn according to a numbering sequence of the ingress port and the egress port that correspond to their respective data streams in the BWM, and perform sequencing processing on the entries of the BWM according to the preset sequencing criterion. For example, it is assumed that a current switch period is 6 time slots, and an input BWM as shown in
FIG. 3-a includes 11 data streams and bandwidth information of each data stream, which are respectively as follows: a data stream from an ingress port 1 to an egress port 1, occupying 2 time slots of the switch period; a data stream from the ingress port 1 to an egress port 2, occupying 2 time slots of the switch period; a data stream from the ingress port 1 to an egress port 3, occupying 2 time slots of the switch period; a data stream from an ingress port 2 to the egress port 1, occupying 4 time slots of the switch period; a data stream from the ingress port 2 to the egress port 3, occupying 2 time slots of the switch period; a data stream from an ingress port 3 to the egress port 2, occupying 2 time slots of the switch period; a data stream from the ingress port 3 to the egress port 3, occupying 1 time slot of the switch period; a data stream from the ingress port 3 to an egress port 4, occupying 3 time slots of the switch period; a data stream from an ingress port 4 to the egress port 2, occupying 2 time slots of the switch period; a data stream from the ingress port 4 to the egress port 4, occupying 3 time slots of the switch period; and a data stream from the ingress port 4 to the egress port 3, occupying 1 time slot of the current switch period. Starting from the first time slot of the switch period, sequencing processing is performed on data streams corresponding to different ingress ports according to a numbering sequence of the ingress ports. A sequencing processing procedure may be as follows. - In the first time slot:
- when it is detected that a level-1 port and a level-2 port of the
egress port 1 are idle, arrange a sending start point of the data stream from theingress port 1 to theegress port 1 at the first time slot, and indicate that the data stream occupies the first and the second time slots, and occupies the level-1 port of theegress port 1; - when it is detected that the level-2 port of the
egress port 1 is idle, arrange a sending start point of the data stream from theingress port 2 to theegress port 1 at the first time slot, and indicate that the data stream occupies the first, the second, the third and the fourth time slots, and occupies the level-2 port of theegress port 1; - when it is detected that the level-1 port and the level-2 port of the
egress port 1 are busy, and that a level-1 port and a level-2 port of theegress port 2 are idle, arrange a sending start point of the data stream from theingress port 3 to theegress port 2 at the first time slot, and indicate that the data stream occupies the first and the second time slots, and occupies the level-1 port of theegress port 2; - when it is detected that the level-1 port and the level-2 port of the
egress port 1 are busy, and that the level-2 port of theegress port 2 is idle, arrange a sending start point of the data stream from theingress port 4 to theegress port 2 at the first time slot, and indicate that the data stream occupies the first and the second time slots, and occupies the level-2 port of theegress port 2; and - when it is detected that ingress ports of all data streams are busy, enter a next time slot.
- In the second time slot:
- when it is detected that the ingress ports of all the data streams are busy, enter a next time slot.
- In the third time slot:
- when it is detected that the
ingress port 1 and the level-1 port and the level-2 port of theegress port 2 are idle, arrange a sending start point of the data stream from theingress port 1 to theegress port 2 at the third time slot, and indicate that the data stream occupies the third and the fourth time slots, and occupies the level-1 port of theegress port 2; - when it is detected that the
ingress port 3 and a level-1 port and a level-2 port of theegress port 3 are idle, arrange a sending start point of the data stream from theingress port 3 to theegress port 3 at the third time slot, and indicate that the data stream occupies the third time slot, and occupies the level-1 port of theegress port 3; - when it is detected that the
ingress port 4 and the level-2 port of theegress port 3 are idle, arrange a sending start point of the data stream from theingress port 4 to theegress port 3 at the third time slot, and indicate that the data stream occupies the third time slot, and occupies the level-2 port of theegress port 3; and - when it is detected that the ingress ports of all the data streams are busy, enter a next time slot.
- In the fourth time slot:
- when it is detected that the
ingress port 3 and a level-1 port and a level-2 port of theegress port 4 are idle, arrange a sending start point of the data stream from theingress port 3 to theegress port 2 at the fourth time slot, and indicate that the data stream occupies the fourth, a fifth and a sixth time slots, and occupies the level-1 port of theegress port 4; - when it is detected that the
ingress port 4 and the level-2 port of theegress port 4 are idle, arrange a sending start point of the data stream from theingress port 4 to theegress port 4 at the fourth time slot, and indicate that the data stream occupies the fourth, the fifth and the sixth time slots, and occupies the level-2 port of theegress port 4; and - when it is detected that the ingress ports of all the data streams are busy, enter a next time slot.
- In the fifth time slot:
- when it is detected that the
ingress port 1 and the level-1 port and the level-2 port of theegress port 3 are idle, arrange a sending start point of the data stream from theingress port 1 to theegress port 3 at the fifth time slot, and indicate that the data stream occupies the fifth and the sixth time slots, and occupies the level-1 port of theegress port 3; - when it is detected that the
ingress port 2 and the level-2 port of theegress port 3 are idle, arrange a sending start point of the data stream from theingress port 2 to theegress port 3 at the fifth time slot, and indicate that the data stream occupies the fifth and the sixth time slots, and occupies the level-2 port of theegress port 3; and - when it is detected that the ingress ports of all the data streams are busy, enter a next time slot.
- In the sixth time slot:
- when it is detected that the ingress ports of all the data streams are busy and the switch period is finished, end the sequencing processing procedure, and obtain a bandwidth sequencing information table shown in
FIG. 3-b . - In another application scenario, level-1 ports of all egress ports may be occupied first. Assuming that a BWM received by the switch fabric is as shown in
FIG. 3-a , a sequencing processing procedure is as follows. - In the first time slot:
- when it is detected that a level-1 port of the
egress port 1 is idle, arrange a sending start point of the data stream from theingress port 1 to theegress port 1 at the first time slot, and indicate that the data stream occupies the first and a second time slots, and occupies the level-1 port of theegress port 1; - when it is detected that a level-1 port of the
egress port 3 is idle, arrange a sending start point of the data stream from theingress port 2 to theegress port 3 at the first time slot, and indicate that the data stream occupies the first and the second time slots, and occupies the level-1 port of theegress port 3; - when it is detected that a level-1 port of the
egress port 2 is idle, arrange a sending start point of the data stream from theingress port 3 to theegress port 2 at the first time slot, and indicate that the data stream occupies the first and the second time slots, and occupies the level-1 port of theegress port 2; - when it is detected that a level-1 port of the
egress port 4 is idle, arrange a sending start point of the data stream from theingress port 4 to theegress port 4 at the first time slot, and indicate that the data stream occupies the first, the second and a third time slots, and occupies the level-1 port of theegress port 4; and - when it is detected that ingress ports of all data streams are busy, enter a next time slot.
- In the second time slot:
- when it is detected that the ingress ports of all the data streams are busy, enter a next time slot.
- In the third time slot:
- when it is detected that the
ingress port 1 and the level-1 port of theegress port 2 are idle, arrange a sending start point of the data stream from theingress port 1 to theegress port 2 at the third time slot, and indicate that the data stream occupies the third and a fourth time slots, and occupies the level-1 port of theegress port 2; - when it is detected that the
ingress port 2 and the level-1 port of theegress port 1 are idle, arrange a sending start point of the data stream from theingress port 2 to theegress port 1 at the third time slot, and indicate that the data stream occupies the third, the fourth, a fifth and a sixth time slots, and occupies the level-1 port of theegress port 1; - when it is detected that the
ingress port 3 and the level-1 port of theegress port 3 are idle, arrange a sending start point of the data stream from theingress port 3 to theegress port 3 at the third time slot, and indicate that the data stream occupies the third time slot, and occupies the level-1 port of theegress port 3; and - when it is detected that the ingress ports of all the data streams are busy, enter a next time slot.
- In the fourth time slot:
- when it is detected that the
ingress port 3 and the level-1 port of theegress port 4 are idle, arrange a sending start point of the data stream from theingress port 3 to theegress port 4 at the fourth time slot, and indicate that the data stream occupies the fourth, the fifth and the sixth time slots, and occupies the level-1 port of theegress port 4; - when it is detected that the
ingress port 4 and the level-1 port of theegress port 3 are idle, arrange a sending start point of the data stream from theingress port 4 to theegress port 3 at the fourth time slot, and indicate that the data stream occupies the fourth time slot, and occupies the level-1 port of theegress port 3; and - when it is detected that the ingress ports of all the data streams are busy, enter a next time slot.
- In the fifth time slot:
- when it is detected that the
ingress port 1 and the level-1 port of theegress port 3 are idle, arrange a sending start point of the data stream from theingress port 1 to theegress port 3 at the fifth time slot, and indicate that the data stream occupies the fifth and the sixth time slots, and occupies the level-1 port of theegress port 3; - when it is detected that the
ingress port 4 and the level-1 port of theegress port 2 are idle, arrange a sending start point of the data stream from theingress port 4 to theegress port 2 at the fifth time slot, and indicate that the data stream occupies the fifth and the sixth time slots, and occupies the level-1 port of theegress port 2; and - when it is detected that the ingress ports of all the data streams are busy, enter a next time slot.
- In the sixth time slot:
- when it is detected that the ingress ports of all the data streams are busy and the switch period is finished, end the sequencing processing procedure, and obtain a bandwidth sequencing information table shown in
FIG. 3-c . - Certainly, under a precondition that no more than two ingress ports send a cell to the same egress port in the same time slot, sequencing processing may be performed on the BWM in other sequencing manners, which is not limited here.
- 205: The switch fabric encapsulates the bandwidth sequencing information table in a second control cell, and sends the second control cell to the input end.
- 206: The input end performs cell even sequencing processing on the data stream according to the bandwidth sequencing information table, to obtain a cell table.
- When receiving the second control cell from the switch fabric, the input end extracts the bandwidth sequencing information table in the second control cell and then performs cell even sequencing processing on the data stream in the bandwidth sequencing information table, to obtain the cell table. The cell table includes a sequential position of each cell of each data stream, and sequential positions of cells of a same data stream are distributed at intervals.
- In the embodiment of the present invention, the total number of cells expected to be sent in the current switch period may be divided into multiple arrays, so that the number of time slots included in each array equals the total number of cells divided by the number of arrays. Sequencing is performed on each array according to a set method, so that adjacent time slots of a same array have an interval of (M-1) sequential positions, where M equals the number of the arrays. Starting from a first array, according to an array sequence table obtained by the sequencing, cells of each data stream are filled in turn into the sequential positions where the time slots of each array are located.
- In an application scenario of the embodiment of the present invention, cell even sequencing processing may be performed on the data stream by adopting a manner of dividing the arrays through negative carry, and a sequencing processing procedure is as follows.
- A total N of cells expected to be sent in the switch period are divided into 2^L arrays, so that each array includes N/2^L time slots, where L is an integer greater than 1. Starting from an L-bit binary number equal to 0, carry is performed in turn from the high bit to the low bit in a negative carry manner; the decimal values of the sequentially-obtained 2^L L-bit binary numbers are used in turn as the sequential positions of the first time slots of the arrays, and the remaining time slots of the arrays are sequenced according to the sequence relation of the sequential positions of the first time slots of the arrays, to obtain an array sequence table. Starting from the first time slot of the first array, according to the criterion that all time slots of a single array are first completely filled, all cells of each data stream in the bandwidth sequencing information table are filled in turn into the sequential positions where the time slots of the corresponding array are located, until all cells of all data streams in the bandwidth sequencing information table are completely filled, where there is a one-to-one correspondence between the cells and the time slots.
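The negative carry described here — starting from an all-zero L-bit number and carrying from the high bit toward the low bit — amounts to bit-reversed counting. A sketch of such a counter (the function name and list representation are illustrative, not from the patent):

```python
def negative_carry_positions(l_bits):
    """Generate the 2**l_bits first-slot positions in creation order by
    incrementing an l_bits-wide counter at its high bit and propagating
    the carry toward the low bit (the negative carry manner)."""
    bits = [0] * l_bits          # bits[0] is the most significant bit
    positions = []
    for _ in range(1 << l_bits):
        # read the register as an ordinary binary number b_{L-1} ... b_0
        value = 0
        for b in bits:
            value = (value << 1) | b
        positions.append(value)
        # add 1 at the high bit; the carry moves toward the low bit
        i = 0
        while i < l_bits and bits[i] == 1:
            bits[i] = 0
            i += 1
        if i < l_bits:
            bits[i] = 1
    return positions
```

For L = 3 this produces 0, 4, 2, 6, 1, 5, 3, 7 — the "Sequential Position" column of Table 1 below.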
- For example, it is assumed that the total number of cells expected to be sent in the current switch period is 16. The 16 cells are divided into 8 arrays (that is, L is equal to 3), and it is indicated in the bandwidth sequencing information table that a data stream 1 includes 8 cells and a data stream 2 includes 4 cells. Sequential positions of the arrays are described by using a 3-bit binary number (b2 b1 b0), and the binary negative carry table shown in Table 1 may be obtained by adopting a negative carry manner.

Table 1

| b2 | b1 | b0 | Array | Sequential Position |
|---|---|---|---|---|
| 0 | 0 | 0 | First array | 0 |
| 1 | 0 | 0 | Second array | 4 |
| 0 | 1 | 0 | Third array | 2 |
| 1 | 1 | 0 | Fourth array | 6 |
| 0 | 0 | 1 | Fifth array | 1 |
| 1 | 0 | 1 | Sixth array | 5 |
| 0 | 1 | 1 | Seventh array | 3 |
| 1 | 1 | 1 | Eighth array | 7 |

- Sequencing may be performed on the 8 arrays according to the binary negative carry table shown in Table 1, to obtain the array sequence table shown in FIG. 4-a. The sequential positions of the first time slots of the arrays are distributed as shown in Table 1, and the sequential positions of the second time slots of the arrays are distributed according to the sequence relation of the sequential positions of the first time slots.
- Sequencing is performed on the cells of the data stream 1 and the data stream 2 according to the obtained array sequence table. The procedure may be as follows: input the bandwidth information of the two streams (data stream 1 = 8, data stream 2 = 4); judge whether all data streams in the bandwidth sequencing information table have been allocated time slots, and if yes, end the sequencing procedure; if no, extract in turn, according to the bandwidth sequencing information table, the data streams that have not yet been allocated time slots, and perform sequencing on the extracted data streams. Starting from the first time slot of the first array, according to the criterion that all time slots of a single array are first completely filled, all cells of the data stream 1 and the data stream 2 are filled into the sequential positions where the time slots of the corresponding arrays are located, to obtain the cell table shown in FIG. 4-b. The cells of the data stream 1 occupy in turn two time slots of the first array (sequential positions 0 and 8), two time slots of the second array (sequential positions 4 and 12), two time slots of the third array (sequential positions 2 and 10), and two time slots of the fourth array (sequential positions 6 and 14); therefore, the sequential positions occupied by the data stream 1 are (0, 2, 4, 6, 8, 10, 12, 14). The data stream 2 occupies in turn two time slots of the fifth array (sequential positions 1 and 9) and two time slots of the sixth array (sequential positions 5 and 13); therefore, the sequential positions occupied by the data stream 2 are (1, 5, 9, 13).
- It can be understood that, if the switch period is a fixed value, the input end merely needs to obtain the array sequence table in the foregoing manner in the first switch period, and may directly use this array sequence table for sequencing processing in subsequent switch periods to obtain a cell table.
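The worked example above can be reproduced end to end. In this sketch, bit reversal stands in for the negative-carry order, and the (stream, cell count) pairs and dictionary result are illustrative data structures rather than the patent's formats:

```python
def bit_reverse(x, bits):
    """Reverse the low `bits` bits of x; equivalent to the negative-carry order."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (x & 1)
        x >>= 1
    return r

def build_cell_table(stream_cells, n_total, l_bits):
    """stream_cells: list of (stream_id, cell_count) taken from the bandwidth
    sequencing information table; n_total cells are split into 2**l_bits arrays."""
    m = 1 << l_bits                      # number of arrays
    slots_per_array = n_total // m       # N / 2**L time slots per array
    # slot positions grouped per array, arrays taken in negative-carry order;
    # the j-th slot of an array sits m positions after its (j-1)-th slot
    order = [bit_reverse(k, l_bits) + j * m
             for k in range(m) for j in range(slots_per_array)]
    table, i = {}, 0
    for stream_id, cells in stream_cells:  # fill whole arrays first, in turn
        table[stream_id] = sorted(order[i:i + cells])
        i += cells
    return table

# the worked example: 16 cells, 8 arrays, data stream 1 = 8 cells, data stream 2 = 4 cells
cell_table = build_cell_table([(1, 8), (2, 4)], n_total=16, l_bits=3)
```

Running this yields sequential positions (0, 2, 4, 6, 8, 10, 12, 14) for data stream 1 and (1, 5, 9, 13) for data stream 2, matching the distribution described for FIG. 4-b.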
- 207: The input end controls sending of cells of the data stream according to the cell table.
- The input end sends the cells to the switch fabric according to the sequential position of each cell in the cell table, so that the switch fabric delivers the received cells to the output end.
- It can be seen from the foregoing description that, in the embodiment of the present invention, dynamic bandwidth arbitration processing is introduced into the switch fabric: the BWM is calculated from the bandwidth demand information, the physical bandwidth information of the ingress port and the physical bandwidth information of the egress port, so as to adjust the scheduling policy of the switch fabric. There is no need to construct a data switching matrix and a control switching matrix in the switch fabric, which reduces the processing complexity of the switch fabric. Meanwhile, there is no need to use a large number of buffers to store VOQs and OQs at the input end and the output end, which solves the problem of scale limitation on a bufferless switch structure. In addition, sequencing is performed on the entries of the BWM and cell even sequencing processing is performed on the sequencing information table, which avoids the problem of "many-to-one" conflict between ingress ports and egress ports, and ensures fair distribution of end-to-end switch resources and even sending of the cells, thereby reducing the delay jitter during switch processing.
- A switch equipment in an embodiment of the present invention is described in the following. As shown in
FIG. 5, a switch equipment 500 in the embodiment of the present invention includes: - an
input device 501, a switch device 502 and an output device 503. - The
input device 501 includes: an obtaining unit 5011, a first sending unit 5012, a cell sequencing unit 5013 and a second sending unit 5014. - The
switch device 502 includes: a first calculation unit 5021, a bandwidth sequencing unit 5022 and a third sending unit 5023. - The obtaining
unit 5011 is configured to obtain bandwidth demand information of a data stream. - In the embodiment of the present invention, the obtaining
unit 5011 specifically includes: - a first obtaining sub-unit, configured to obtain a stream rate of the data stream;
- a second obtaining sub-unit, configured to obtain a length of buffered information of the data stream; and
- a calculation obtaining sub-unit, configured to divide the length of the buffered information of the data stream by a unit time, and then add the quotient to the stream rate of the data stream, to obtain the bandwidth demand information of the data stream.
- The
first sending unit 5012 is configured to encapsulate in a first control cell the bandwidth demand information obtained by the obtaining unit 5011 and the correspondence information between the bandwidth demand information and the data stream, and send the first control cell to the first calculation unit 5021. - In the embodiment of the present invention, the bandwidth demand information of each data stream may be different. The
first sending unit 5012 therefore needs to encapsulate the bandwidth demand information, together with the correspondence information (for example, an identifier) between the bandwidth demand information and the data stream, in the first control cell, and then send the first control cell to the switch device of the switch equipment. - The
first calculation unit 5021 is configured to calculate a BWM according to the bandwidth demand information, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and TDM service bandwidth information, where an ingress port is a connection port of the input device 501 and the switch device 502, an egress port is a connection port of the switch device 502 and the output device 503, and the TDM service bandwidth information is used to indicate the bandwidth occupied by a TDM service on each egress port. - In the embodiment of the present invention, a certain time length is preset as a switch period, and the
first calculation unit 5021 calculates the BWM according to all bandwidth demand information received in a switch period, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and the TDM service bandwidth information. Certainly, time lengths of different switch periods may not be fixed, which is not limited here. - In the embodiment of the present invention, the BWM meets the following conditions: the sum of bandwidth demands of all data streams with a same ingress port is less than or equal to an ingress port physical bandwidth of the ingress port; the sum of bandwidth demands of all data streams with a same egress port is less than or equal to an allocatable physical bandwidth of the egress port, where the allocatable physical bandwidth of the egress port equals the difference left by subtracting the bandwidth occupied by a TDM service on the egress port and indicated by the TDM service bandwidth information from the physical bandwidth of the egress port.
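The demand formula (stream rate plus buffered backlog per unit time) and the per-port capacity conditions the BWM must meet can be sketched together. The dictionary layouts and function names below are assumptions for illustration, not the patent's data formats:

```python
def bandwidth_demand(stream_rate, buffered_len, unit_time=1.0):
    # demand = buffered backlog drained over one unit time, plus the stream rate
    return stream_rate + buffered_len / unit_time

def bwm_is_feasible(bwm, ingress_bw, egress_bw, tdm_bw):
    """bwm: {(ingress_port, egress_port): bandwidth granted to one data stream}.
    Checks the two BWM conditions: per-ingress sums within the ingress physical
    bandwidth, per-egress sums within (egress physical bandwidth - TDM share)."""
    in_sum, out_sum = {}, {}
    for (ing, egr), bw in bwm.items():
        in_sum[ing] = in_sum.get(ing, 0) + bw
        out_sum[egr] = out_sum.get(egr, 0) + bw
    return (all(s <= ingress_bw[p] for p, s in in_sum.items()) and
            all(s <= egress_bw[p] - tdm_bw.get(p, 0) for p, s in out_sum.items()))
```

For instance, two streams granted 40 and 30 units toward egress port 9 fit a 100-unit egress port carrying a 20-unit TDM reservation, while a single 60-unit grant overflows a 50-unit ingress port.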
- The
bandwidth sequencing unit 5022 is configured to perform sequencing processing on entries of the BWM according to a preset sequencing criterion, to obtain a bandwidth sequencing information table, where the sequencing criterion is that no more than two ingress ports send a cell to a same egress port in a same time slot. - The
third sending unit 5023 is configured to encapsulate the bandwidth sequencing information table in a second control cell, and send the second control cell to the cell sequencing unit 5013. - The
cell sequencing unit 5013 is configured to perform cell even sequencing processing on the data stream according to the bandwidth sequencing information table, to obtain a cell table, where the cell table includes a sequential position of each cell in the data stream, and sequential positions of cells of a same data stream are distributed at intervals. - In the embodiment of the present invention, the
cell sequencing unit 5013 may divide the total number of cells expected to be sent in the current switch period into multiple arrays, so that the number of time slots in each array equals the total number of cells divided by the number of arrays. Sequencing is performed on each array according to a set method, so that adjacent time slots of a same array are separated by (M-1) sequential positions, where M equals the number of arrays. Starting from a first array, cells of each data stream are filled in turn, according to the array sequence table obtained by the sequencing, into the sequential positions where the time slots of each array are located. - In an application scenario of the present invention, the
cell sequencing unit 5013 may include: - a dividing unit, configured to divide a total N of cells expected to be sent in a switch period into 2L arrays, where each array includes N/2L time slots;
- an array sequencing unit, configured to, starting from an L-bit binary number which equals 0, perform carry from a high bit to a low bit in turn by adopting a negative carry manner, perform sequencing on the decimal values of sequentially-obtained 2L L-bit binary numbers where the decimal values are used in turn as sequential positions of first time slots of the arrays, and perform sequencing on remaining time slots of the arrays according to a sequence relation of the sequential positions of the first time slots of the arrays, to obtain an array sequence table; and
- a cell filling unit, configured to, starting from a first time slot of the first array, according to a criterion that all time slots of a single array are first completely filled, fill all cells of each data stream in the bandwidth sequencing information table in turn into sequential positions where time slots of a corresponding arrays are located, until all cells of all data streams in the bandwidth sequencing information table are completely filled, where there is one-to-one correspondence between the cells and the time slots.
- The
second sending unit 5014 is configured to control sending of cells of the data stream according to the cell table. - The
second sending unit 5014 sends the cells to the switch device 502 according to the sequential position of each cell in the cell table, so that the switch device delivers the received cells to the output device 503. - It should be noted that, the
input device 501, the switch device 502 and the output device 503 in this embodiment may be used as the input end, the switch fabric and the output end in the foregoing method embodiment, to implement all the technical solutions in the foregoing method embodiment. Functions of the various functional modules may be specifically implemented according to the method in the foregoing method embodiment; reference may be made to the relevant description in the foregoing embodiment for the specific implementation process, and the details are not repeated here. - It can be seen from the foregoing description that, in the embodiment of the present invention, dynamic bandwidth arbitration processing is introduced into the switch device of the switch equipment: the BWM is calculated from the bandwidth demand information, the physical bandwidth information of the ingress port and the physical bandwidth information of the egress port, so as to adjust the scheduling policy of the switch device. There is no need to construct a data switching matrix and a control switching matrix in the switch device, which reduces the processing complexity of the switch device. Meanwhile, there is no need to use a large number of buffers to store VOQs and OQs in the input device and the output device, which solves the problem of scale limitation on a bufferless switch structure. In addition, sequencing is performed on the entries of the BWM and cell even sequencing processing is performed on the sequencing information table, which avoids the problem of "many-to-one" conflict between ingress ports and egress ports, and ensures fair distribution of end-to-end switch resources and even sending of the cells, thereby reducing the delay jitter during switch processing.
Claims (10)
- A method for controlling data stream switch, comprising: obtaining, by an input end of a switch equipment, bandwidth demand information of a data stream, encapsulating in a first control cell the bandwidth demand information and the correspondence information between the bandwidth demand information and the data stream, and sending the first control cell to a switch fabric of the switch equipment; calculating, by the switch fabric, a bandwidth map BWM according to the bandwidth demand information, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and time division multiplexing TDM service bandwidth information, wherein the ingress port is a connection port of the input end and the switch fabric, the egress port is a connection port of the switch fabric and an output end, and the TDM service bandwidth information is used to indicate the bandwidth occupied by a TDM service on the egress port; performing, by the switch fabric, sequencing processing on entries of the BWM according to a preset sequencing criterion, to obtain a bandwidth sequencing information table, encapsulating the bandwidth sequencing information table in a second control cell, and sending the second control cell to the input end, wherein the sequencing criterion is that no more than two ingress ports send a cell to a same egress port in a same time slot; performing, by the input end, cell even sequencing processing on the data stream according to the bandwidth sequencing information table, to obtain a cell table, wherein the cell table comprises a sequential position of each cell in the data stream, and cells of a same data stream are distributed at intervals in terms of sequential positions; and controlling, by the input end, sending of cells of the data stream according to the cell table.
- The method for controlling data stream switch according to claim 1, wherein
obtaining the bandwidth demand information of a data stream comprises: obtaining a stream rate of the data stream; obtaining a length of buffered information of the data stream; and dividing the length of the buffered information of the data stream by a unit time, and then adding the quotient to the stream rate of the data stream, to obtain the bandwidth demand information of the data stream. - The method for controlling data stream switch according to claim 1 or 2, wherein
the BWM meets the following conditions: a sum of bandwidth demands of all data streams with a same ingress port is less than or equal to the physical bandwidth of the ingress port; and a sum of bandwidth demands of all data streams with a same egress port is less than or equal to the allocatable physical bandwidth of the egress port, wherein the allocatable physical bandwidth of the egress port equals the difference left by subtracting the bandwidth occupied by a TDM service on the egress port and indicated by the TDM service bandwidth information from the physical bandwidth of the egress port. - The method for controlling data stream switch according to any one of claims 1 to 3,
wherein
the step of calculating a bandwidth map BWM according to the bandwidth demand information, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and time division multiplexing TDM service bandwidth information specifically comprises: using a preset time length as a switch period, and calculating the BWM according to all bandwidth demand information received in a switch period, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and the time division multiplexing TDM service bandwidth information. - The method for controlling data stream switch according to claim 4, wherein
the step of performing the cell even sequencing processing on the data stream according to the bandwidth sequencing information table comprises: dividing a total N of cells expected to be sent in the switch period into 2^L arrays, wherein each array comprises N/2^L time slots, and L is an integer greater than 1; starting from an L-bit binary number which equals 0, performing carry from a high bit to a low bit in turn by adopting a negative carry manner to sequentially obtain 2^L L-bit binary numbers, performing sequencing on the decimal values of the sequentially-obtained 2^L L-bit binary numbers to obtain sequenced decimal values, using the sequenced decimal values in turn as sequential positions of first time slots of the arrays, and performing sequencing on remaining time slots of the arrays according to a sequence relation of the sequential positions of the first time slots of the arrays, to obtain an array sequence table; and starting from a first time slot of a first array, according to a criterion that all time slots of a single array are first completely filled, filling all cells of each data stream in the bandwidth sequencing information table in turn into sequential positions where time slots of the arrays are located, until all cells of all data streams are completely filled, wherein there is one-to-one correspondence between the cells and the time slots.
- A switch equipment (500), comprising: an input device (501), a switch device (502) and an output device (503); wherein the input device (501) comprises: an obtaining unit (5011), a first sending unit (5012), a cell sequencing unit (5013) and a second sending unit (5014); the switch device (502) comprises: a first calculation unit (5021), a bandwidth sequencing unit (5022) and a third sending unit (5023); the obtaining unit (5011) is configured to obtain bandwidth demand information of a data stream; the first sending unit (5012) is configured to encapsulate in a first control cell the bandwidth demand information obtained by the obtaining unit (5011) and the correspondence information between the bandwidth demand information and the data stream, and send the first control cell to the first calculation unit (5021); the first calculation unit (5021) is configured to calculate a bandwidth map BWM according to the bandwidth demand information, the physical bandwidth of at least one ingress port corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and time division multiplexing TDM service bandwidth information, wherein the ingress port is a connection port of the input device (501) and the switch device (502), the egress port is a connection port of the switch device (502) and the output device (503), and the TDM service bandwidth information is used to indicate a bandwidth occupied by a TDM service on the egress port; the bandwidth sequencing unit (5022) is configured to perform sequencing processing on entries of the BWM according to a preset sequencing criterion, to obtain a bandwidth sequencing information table, wherein the sequencing criterion is that no more than two ingress ports send a cell to a same egress port in a same time slot; the third sending unit (5023) is configured to encapsulate the bandwidth sequencing information table in a second control cell, and send the second control cell to the cell sequencing unit (5013); the cell sequencing unit (5013) is configured to perform cell even sequencing processing on the data stream according to the bandwidth sequencing information table, to obtain a cell table, wherein the cell table comprises a sequential position of each cell in the data stream, and sequential positions of cells of a same data stream are distributed at intervals; and the second sending unit (5014) is configured to control sending of cells of the data stream according to the cell table.
- The switch equipment (500) according to claim 6, wherein
the obtaining unit (5011) specifically comprises: a first obtaining sub-unit, configured to obtain a stream rate of the data stream; a second obtaining sub-unit, configured to obtain a length of buffered information of the data stream; and a calculation obtaining sub-unit, configured to divide the length of the buffered information of the data stream by a unit time, and then add the quotient to the stream rate of the data stream, to obtain the bandwidth demand information of the data stream. - The switch equipment (500) according to claim 6 or 7, wherein
the BWM calculated by the first calculation unit (5021) meets the following conditions: a sum of bandwidth demands of all data streams with a same ingress port is less than or equal to the physical bandwidth of the ingress port; and a sum of bandwidth demands of all data streams with a same egress port is less than or equal to the allocatable physical bandwidth of the egress port, wherein the allocatable physical bandwidth of the egress port equals the difference left by subtracting the bandwidth occupied by a TDM service on the egress port and indicated by the TDM service bandwidth information from the physical bandwidth of the egress port. - The switch equipment (500) according to any one of claims 6 to 8, wherein
the first calculation unit (5021) is specifically configured to use a preset time length as a switch period, and calculate the BWM according to all bandwidth demand information received in a switch period, the physical bandwidth of at least one ingress port of the data stream corresponding to the bandwidth demand information, the physical bandwidth of at least one egress port of the data stream corresponding to the bandwidth demand information, and the time division multiplexing TDM service bandwidth information. - The switch equipment (500) according to claim 9, wherein
the cell sequencing unit (5013) specifically comprises: a dividing unit, configured to divide a total N of cells expected to be sent in the switch period into 2^L arrays, wherein each array comprises N/2^L time slots, and L is an integer greater than 1; an array sequencing unit, configured to, starting from an L-bit binary number being completely 0, perform carry from a high bit to a low bit in turn by adopting a negative carry manner to sequentially obtain 2^L L-bit binary numbers, perform sequencing on the decimal values of the sequentially-obtained 2^L L-bit binary numbers to obtain sequenced decimal values, use the sequenced decimal values in turn as sequential positions of first time slots of the arrays, and perform sequencing on remaining time slots of the arrays according to a sequence relation of the sequential positions of the first time slots of the arrays, so as to obtain an array sequence table; and a cell filling unit, configured to, starting from a first time slot of a first array, according to a criterion that all time slots of a single array are first completely filled, fill all cells of each data stream in the bandwidth sequencing information table in turn into sequential positions where time slots of the arrays are located, until all cells of all data streams are completely filled, wherein the cells one-to-one correspond to the time slots.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2011/083030 WO2013078584A1 (en) | 2011-11-28 | 2011-11-28 | Data flow switch control method and relevant device |
Publications (3)
Publication Number | Publication Date |
---|---|
EP2632099A1 (en) | 2013-08-28 |
EP2632099A4 (en) | 2014-12-03 |
EP2632099B1 (en) | 2016-02-03 |
Family
ID=46950483
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP11871845.1A Active EP2632099B1 (en) | 2011-11-28 | 2011-11-28 | Data flow switch control method and relevant device |
Country Status (4)
Country | Link |
---|---|
US (1) | US8971317B2 (en) |
EP (1) | EP2632099B1 (en) |
CN (1) | CN102726009B (en) |
WO (1) | WO2013078584A1 (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20130311 |
|
AK | Designated contracting states |
Kind code of ref document: A1 |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20141104 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04L 12/931 20130101ALI20141029BHEP |
Ipc: H04L 12/70 20130101AFI20141029BHEP |
Ipc: H04L 12/937 20130101ALI20141029BHEP |
Ipc: H04L 12/947 20130101ALI20141029BHEP |
|
DAX | Request for extension of the European patent (deleted) |
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04L 12/937 20130101ALI20150528BHEP |
Ipc: H04L 12/947 20130101ALI20150528BHEP |
Ipc: H04L 12/70 20130101AFI20150528BHEP |
Ipc: H04L 12/931 20130101ALI20150528BHEP |
|
INTG | Intention to grant announced |
Effective date: 20150623 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 774116 Country of ref document: AT Kind code of ref document: T Effective date: 20160215 |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602011023151 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
Ref country code: NL Ref legal event code: MP Effective date: 20160203 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 774116 Country of ref document: AT Kind code of ref document: T Effective date: 20160203 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160503 |
Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160504 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160603 |
Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160603 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 6 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602011023151 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
|
26N | No opposition filed |
Effective date: 20161104 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160503 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20161130 |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20161130 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20161130 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 7 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20161128 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20111128 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20161128 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 8 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160203 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20230929 Year of fee payment: 13 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20231006 Year of fee payment: 13 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20231003 Year of fee payment: 13 |