WO2013173966A1 - Method, device and system for scheduling based on a three-level interconnected switching network - Google Patents

Method, device and system for scheduling based on a three-level interconnected switching network

Info

Publication number
WO2013173966A1
Authority
WO
WIPO (PCT)
Prior art keywords
switching unit
bandwidth
level switching
map information
port
Prior art date
Application number
PCT/CN2012/075832
Other languages
English (en)
Chinese (zh)
Inventor
陈志云
胡幸
周建林
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to CN201280000846.4A priority Critical patent/CN102835081B/zh
Priority to PCT/CN2012/075832 priority patent/WO2013173966A1/fr
Publication of WO2013173966A1 publication Critical patent/WO2013173966A1/fr

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/15Interconnection of switching modules

Definitions

  • The present invention relates to the field of data communications, and in particular to a scheduling method, apparatus, and system based on a three-level interconnected switching network.

Background
  • The switching network is a "bridge" connecting the input ports and output ports of a router, and is the core network for packet forwarding. As the traffic carried by the switching network increases, the capacity of the switching network needs to be upgraded from time to time; therefore, a multi-level interconnection switching network composed of multiple switching units is used.
  • The three-level interconnection switching network is the most commonly used multi-level interconnection switching network. It is composed of three levels of switching units, namely the first-level switching unit, the intermediate-level switching unit, and the third-level switching unit.
  • The main indicators of a switching network are: throughput (the closer to 100%, the better), average cell (packet) delay, cell (packet) delay jitter, cell (packet) loss rate, blocking probability, and so on.
  • Many adaptive scheduling algorithms are used in switching networks; a commonly used one performs adaptive scheduling with the CRRD algorithm. Two kinds of matching occur during adaptive scheduling: inbound/outbound port matching and path matching. Among them:
  • The port matching occurs in the first-level switching unit (hereinafter referred to as S1). Specifically, a virtual output queue (Virtual Output Queue, hereinafter referred to as VOQ) in S1 that has queued cells issues requests to all outbound ends of S1; each outbound end reads its window position register value and, in round-robin fashion, selects one request and sends an acknowledgement back to the VOQ corresponding to that request. The VOQ arbiter then reads the window position register value of the VOQ and, in round-robin fashion, accepts one of the possibly multiple acknowledgements. The above steps are repeated until the VOQs with queued cells in S1 receive acknowledgements from the outbound ends.
  • The path matching occurs in the intermediate-level switching unit (hereinafter referred to as S2). Specifically, if an inbound end of S2 learns from the corresponding outbound end of S1 that a cell is to be forwarded at that outbound end, the inbound end of S2 issues a request to the outbound end of S2 determined by the destination address of the cell; the outbound end of S2 reads its window position register value and, in round-robin fashion, selects one of the possibly multiple inbound requests to accept.
  • After both matches succeed, the corresponding VOQ of S1 sends its queued cells.
  • Embodiments of the present invention provide a scheduling method, device, and system based on a three-level interconnected switching network, to solve the problem in the prior art that, when the CRRD algorithm is used for adaptive scheduling, the size of the switching network is restricted while the switching throughput is guaranteed.
  • an embodiment of the present invention provides a scheduling method based on a three-level interconnected switching network, including:
  • The first-level switching unit obtains outbound port bandwidth map information, where the outbound port bandwidth map information is the bandwidth allocated to the input module according to the rated bandwidth of the output port of the output module and the bandwidth requirement of the input module;
  • The first-level switching unit generates channel bandwidth map information according to the outbound port bandwidth map information, where the channel bandwidth map information is the bandwidth allowed to be carried between the same first-level switching unit and the same third-level switching unit;
  • The first-level switching unit generates path bandwidth map information according to the channel bandwidth map information, where the path bandwidth map information is the bandwidth allowed to be carried by each path between the same first-level switching unit and the same third-level switching unit;
  • When receiving a data stream, the first-level switching unit buffers into one queue, according to the destination address of the data stream, the data streams that need to reach the output port of the output module through the same first-level switching unit and the same third-level switching unit, and sends the data streams to the third-level switching unit through each path according to the path bandwidth map information of each path between the first-level switching unit and the third-level switching unit.
  • An embodiment of the present invention provides a first-level switching unit, including: a first acquiring unit, configured to acquire outbound port bandwidth map information, where the outbound port bandwidth map information is the bandwidth allocated to the input module according to the rated bandwidth of the output port of the output module and the bandwidth requirement of the input module;
  • a first generating unit, configured to generate channel bandwidth map information according to the outbound port bandwidth map information acquired by the first acquiring unit, where the channel bandwidth map information is the bandwidth allowed to be carried between the same first-level switching unit and the same third-level switching unit;
  • a second generating unit, configured to generate path bandwidth map information according to the channel bandwidth map information generated by the first generating unit, where the path bandwidth map information is the bandwidth allowed to be carried by each path between the same first-level switching unit and the same third-level switching unit;
  • a first scheduling unit, configured to, when receiving a data stream, buffer into one queue, according to the destination address of the data stream, the data streams that need to reach the output port of the output module through the same first-level switching unit and the same third-level switching unit, and to send the data streams to the third-level switching unit through each path according to the path bandwidth map information of each path between the first-level switching unit and the third-level switching unit.
  • An embodiment of the present invention provides a three-level interconnected switching network, including the above first-level switching unit.
  • In the method, apparatus, and system provided by the embodiments of the present invention, the first-level switching unit sets, according to the outbound port bandwidth map information, the bandwidth allowed to be carried by each path between the first-level switching unit and the third-level switching unit, and generates path bandwidth map information corresponding to each path. Thus, when the first-level switching unit receives a data stream, the data streams that need to reach the egress port of the output module through the same first-level switching unit and the same third-level switching unit are buffered into one queue according to the destination address of the data stream, and the data streams are sent to the third-level switching unit according to the path bandwidth map information. Scheduling of the data streams is thereby achieved without a request-arbitration being performed every time a cell is transmitted, which solves the problem in the prior art that the size of the switching network is limited because one request-arbitration is required for each cell to be transmitted.
  • FIG. 1(a) is a first flowchart of a scheduling method based on a three-level interconnected switching network according to Embodiment 1 of the present invention;
  • FIG. 1(b) is a second flowchart of a scheduling method based on a three-level interconnected switching network according to Embodiment 1 of the present invention;
  • FIG. 2(a) is a three-stage CLOS switching network system architecture provided by the second embodiment of the present invention;
  • FIG. 2(a2) is a three-level CLOS switching network system architecture provided by the second embodiment of the present invention;
  • FIG. 2(b) is a flowchart of a scheduling method based on a three-level interconnected switching network according to Embodiment 2 of the present invention
  • Figure 2 (c) is a schematic diagram of the uniform interleaving of the switching cell cycle in the second embodiment of the present invention
  • FIG. 3(a) is a first schematic structural diagram of a first-stage switching unit according to Embodiment 3 of the present invention;
  • FIG. 3(b) is a schematic structural diagram of a first scheduling unit in the first-stage switching unit shown in FIG. 3(a);
  • FIG. 3(c) is a second schematic structural diagram of a first-stage switching unit according to Embodiment 3 of the present invention;
  • FIG. 4(a) is a first schematic structural diagram of an input module in a three-level interconnected switching network according to Embodiment 3 of the present invention;
  • FIG. 4(b) is a second schematic structural diagram of an input module in a three-level interconnected switching network according to Embodiment 3 of the present invention;
  • FIG. 5 is a schematic structural diagram of an output module in a three-level interconnected switching network according to Embodiment 3 of the present invention.

Detailed Description
  • Embodiment 1. For purposes of illustration and description, specific details are set forth below; however, detailed descriptions of well-known devices, circuits, and methods may be omitted so as not to obscure the description of the present invention.
  • The embodiment of the present invention provides a scheduling method based on a three-level interconnected switching network, which is applied to a first-level switching unit in the three-level interconnected switching network. As shown in FIG. 1(a), the method may include the following steps.
  • 101. The first-level switching unit acquires outbound port bandwidth map (Out Port Bandwidth Map, hereinafter referred to as OP-BWM) information.
  • the OP-BWM information is a bandwidth allocated to the input module according to a rated bandwidth of the egress port and a bandwidth requirement of the input module.
  • the OP-BWM information may be generated and sent by the output module, or may be a preset empirical value, which is not limited herein.
  • When the OP-BWM information is generated and sent by the output module, the output module needs to acquire the outbound port bandwidth requirements (Out Port Requirement, hereinafter referred to as OP-REQ) of a plurality of input modules for its output port, so that the output port of the output module allocates bandwidth, according to each OP-REQ, to the input module that sent that OP-REQ, and generates the corresponding OP-BWM information.
  • The OP-REQ may be sent to the corresponding egress port in the form of a control cell via the three-level interconnected switching network; that is, the control cell carrying the OP-REQ is received by the first-level switching unit, which sends the control cell to the corresponding third-level switching unit according to the destination address in the control cell, and the third-level switching unit forwards it to the egress port of the corresponding output module. Optionally, the OP-REQ may also be transmitted, in the form of a control cell, to the egress port of the output module corresponding to the destination address in the control cell through a dedicated control channel.
  • The manner in which the OP-REQ is transmitted by the input module to the corresponding output module is not limited to the above two specific ways, and details are not repeated here.
  • 102. The first-level switching unit generates channel bandwidth map (S1S3 Bandwidth Map, hereinafter referred to as S13-BWM) information according to the OP-BWM information.
  • the S13-BWM information is a bandwidth allowed to be carried between the same first-level switching unit and the same third-level switching unit.
  • 103. The first-level switching unit generates path bandwidth map (S1S2S3 Bandwidth Map, hereinafter referred to as S123-BWM) information according to the S13-BWM information.
  • the S123-BWM information is a bandwidth allowed for each path between the same first-level switching unit and the same third-level switching unit.
  • 104. When receiving a data stream, the first-level switching unit buffers into one queue, according to the destination address of the data stream, the data streams that need to reach the output port of the output module through the same first-level switching unit and the same third-level switching unit, and sends the data streams to the third-level switching unit through each path according to the S123-BWM information of each path between the first-level switching unit and the third-level switching unit.
  • Further, in order to guarantee the performance of time division multiplexing (TDM) type service flows, the sending of the data stream to the third-level switching unit through each path according to the S123-BWM information of each path between the first-level switching unit and the third-level switching unit may include: if the data stream is a packet type service flow, the first-level switching unit distributes the packet type service flow over each path between the first-level switching unit and the third-level switching unit according to the S123-BWM information of each path; if the data stream is a TDM type service flow, the first-level switching unit selects one path according to the S123-BWM information of each path and distributes the TDM type service flow to the selected path.
  • Further, the sending of the data stream to the third-level switching unit through each path according to the S123-BWM information of each path between the first-level switching unit and the third-level switching unit may further include: the first-level switching unit performs uniform cell interleaving and sorting on the S123-BWM information of each path and outputs a first cell output sorting table; the first-level switching unit then sends the data stream over the path corresponding to the S123-BWM information to the third-level switching unit according to the first cell output sorting table.
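By way of illustration only, the following Python sketch models the behaviour described above: data streams destined for the same third-level switching unit share one queue, a packet type flow is spread over the paths in proportion to the per-path S123-BWM values, and a TDM type flow is pinned to a single path. The class name, the weighted random spreading, and the rule of pinning the TDM flow to the path with the largest allowed bandwidth are assumptions of this sketch, not details taken from the patent.

```python
import random
from collections import defaultdict

class S1Scheduler:
    """Toy model of a first-level switching unit (S1) using S123-BWM info."""

    def __init__(self, s123_bwm):
        # s123_bwm: {s3_id: [allowed Mbits/s on path 0 .. path m-1]}
        self.s123_bwm = s123_bwm
        self.queues = defaultdict(list)   # one queue per destination S3

    def enqueue(self, cell, s3_id):
        # cells heading to the same S3 share a single queue
        self.queues[s3_id].append(cell)

    def pick_path(self, s3_id, tdm=False):
        weights = self.s123_bwm[s3_id]
        if tdm:
            # assumption: pin the whole TDM flow to the path with the most
            # allowed bandwidth (the patent only says "select one path")
            return max(range(len(weights)), key=lambda p: weights[p])
        # packet flow: spread cells over all paths in proportion to S123-BWM
        return random.choices(range(len(weights)), weights=weights, k=1)[0]

sched = S1Scheduler({"S3-1": [152.0, 152.0, 148.0, 148.0]})
sched.enqueue("cell-0", "S3-1")
print(sched.pick_path("S3-1"))            # one of paths 0..3, weighted
print(sched.pick_path("S3-1", tdm=True))  # always path 0 here
```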
  • Optionally, as shown in FIG. 1(b), the method further includes:
  • The first-level switching unit sends the OP-BWM information to the corresponding input module, so that the input module performs bandwidth allocation according to the OP-BWM information and the bandwidth requirements of the ingress port data streams in the input module for the egress port corresponding to the OP-BWM information.
  • The input module performing bandwidth allocation according to the OP-BWM information and the bandwidth requirements of the ingress port data streams in the input module for the egress port that sent the OP-BWM information may specifically be: the input module generates data stream bandwidth map (F-BWM) information corresponding to each ingress port according to the OP-BWM information and the bandwidth requirement (Flow Requirement, hereinafter referred to as F-REQ) of each data stream in the input module for the egress port that sent the OP-BWM information; the F-BWM information is then subjected to uniform cell interleaving and sorting to output a second cell output sorting table, so that the data streams of the ingress ports are sent to the first-level switching unit according to the second cell output sorting table.
  • In the scheduling method provided by this embodiment, the first-level switching unit sets, according to the OP-BWM information, the bandwidth allowed to be carried by each path between the first-level switching unit and the third-level switching unit, and generates S123-BWM information corresponding to each path. Thus, when the first-level switching unit receives a data stream, the data streams that need to reach the egress port of the output module through the same first-level switching unit and the same third-level switching unit are buffered into one queue according to the destination address of the data stream, and the data streams are sent to the third-level switching unit according to the S123-BWM information. Scheduling of the data streams is thereby achieved without a request-arbitration being performed for each cell to be sent, which solves the problem in the prior art that the size of the switching network is limited because one request-arbitration is required for each cell to be transmitted.
  • the scheduling method based on the three-level interconnected switching network provided by the second embodiment of the present invention is described in detail in order to enable a person skilled in the art to better understand the technical solutions provided by the embodiments of the present invention.
  • An N*N three-stage CLOS switching network system architecture is taken as an example for detailed description. The three-stage CLOS switching network system architecture includes N ingress ports and N egress ports, provided by multiple input (Ingress) modules and multiple output (Egress) modules, where N is the number of ingress ports and the number of egress ports. The N*N three-stage CLOS switching network is built from switching units and includes first-level switching units, intermediate-level switching units, and third-level switching units. Each first-level switching unit is an n*m crossbar and each third-level switching unit is an m*n crossbar, with k of each; each intermediate-level switching unit is a k*k crossbar, and there are m of them in total.
  • the three-stage CLOS switching network can be written as (n, m, k). Obviously, the three-stage CLOS switching network is a multi-path network. There are m paths between a pair of specified inbound and outbound ends, and each path passes through a different intermediate level switching unit.
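As a rough illustration of the (n, m, k) notation and the m-path property just described, the following sketch enumerates the paths between a first-level unit and a third-level unit; the class name and the example figures are assumptions for illustration, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class ClosNetwork:
    n: int  # ports per first-level (and third-level) unit on the line side
    m: int  # number of intermediate-level switching units = paths per pair
    k: int  # number of first-level units and of third-level units

    @property
    def total_ports(self) -> int:
        # N ingress ports and N egress ports
        return self.n * self.k

    def paths(self, s1_index: int, s3_index: int):
        """The m paths between an S1 and an S3, one per intermediate unit."""
        return [(s1_index, s2_index, s3_index) for s2_index in range(self.m)]

clos = ClosNetwork(n=8, m=8, k=8)     # illustrative figures only
print(clos.total_ports)               # 64 (= N)
print(len(clos.paths(0, 3)))          # 8 paths, one per S2
```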
  • The Ingress module introduces a data stream bandwidth requirement collection (Requirement Convergence, hereinafter referred to as R-Con) processing unit, which collects the data stream bandwidth requirements of the physical ports of the module, ensures that the total data stream bandwidth requirement is smaller than the physical port bandwidth, and combines the data stream bandwidth requirements destined for the same egress port into an outbound port bandwidth requirement (Out Port Requirement, hereinafter referred to as OP-REQ);
  • The Egress module introduces a distributed real-time bandwidth arbitration (Real Time Bandwidth Arbitrate, hereinafter referred to as RBA) unit, which collects the OP-REQ sent by each Ingress module, distributes the egress port bandwidth among the aggregated data streams of the ingress ports, and outputs outbound port bandwidth map (Out Port Bandwidth Map, hereinafter referred to as OP-BWM) information.
  • S1 collects the OP-BWM information and, in a load balancing sort (hereinafter referred to as LB SORT) unit, performs a secondary aggregation of the OP-BWM entries that pass through the same S1 and map to the same S3, thereby generating S1-to-S3 channel bandwidth map (S1S3 Bandwidth Map, hereinafter referred to as S13-BWM) information; the aggregated bandwidth is then balanced over the S2 paths to output path bandwidth map (S1S2S3 Bandwidth Map, hereinafter referred to as S123-BWM) information, which is finally subjected to uniform cell interleaving and sorting to output the first cell output sorting table (this table determines the traffic path matching and cell scheduling transmission processing of S1).
  • The Ingress module collects the OP-BWM information; its Ingress Bandwidth Allocate SORT (hereinafter referred to as IBA SORT) unit distributes the bandwidth that the egress port allocated to the module (that is, to the Ingress module) among the ingress port data streams according to their data stream bandwidth requirements, and outputs data stream bandwidth map (Flow Bandwidth Map, hereinafter referred to as F-BWM) information; the second cell output sorting table is then output through the uniform cell interleaving and sorting process (this table determines the input and output port traffic matching and cell scheduling transmission processing of the Ingress module).
  • the scheduling method based on the three-level interconnection switching network may include:
  • The traffic management (Traffic Manager, hereinafter referred to as TM) module in the Ingress module detects the data stream bandwidth requirements (Flow Requirement, hereinafter referred to as F-REQ), and the R-Con unit aggregates them per egress port corresponding to the destination address of each data stream.
  • the OP-REQ is encapsulated into a control cell and sent to the Egress module.
  • Specifically, the traffic measurement module in the TM module obtains the flow rate V of a data stream, and the queue cache control module in the TM module obtains the data stream buffer depth Depth; the F-REQ is then generated from the flow rate and the cache information as:
  • F-REQ = V + (Depth / unit time)
  • The unit time is a set time within which the data buffered to depth Depth in the cache is to be sent, as sketched below.
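A minimal sketch of the F-REQ formula above; the function name and the use of Mbits/s units are illustrative assumptions.

```python
def flow_requirement(flow_rate_mbps: float, buffer_depth_mbit: float,
                     unit_time_s: float) -> float:
    """F-REQ = measured flow rate + buffered data drained over one unit time."""
    return flow_rate_mbps + buffer_depth_mbit / unit_time_s

# e.g. a 0.8 Mbits/s flow with 0.2 Mbit queued, to be drained within 1 second
print(flow_requirement(0.8, 0.2, 1.0))   # 1.0 Mbits/s requested for this flow
```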
  • The F-REQs corresponding to the same egress port of the same output module are aggregated to generate an OP-REQ; a corresponding identifier ID is assigned to the OP-REQ, which is encapsulated into a control cell, where the identifier ID indicates the Ingress module to which the control cell belongs and the egress port it targets. The control cell is then sent into the three-level interconnected switching network and forwarded through it to the corresponding output module.
  • For example, ingress port 1, ingress port 3, ingress port 4, and ingress port 6 of Ingress 1 have data streams destined for egress port 1 of Egress 1: the F-REQ of port 1 is 0.3 Mbits/s, the F-REQ of port 3 is 0.2 Mbits/s, the F-REQ of port 4 is 0.5 Mbits/s, and the F-REQ of port 6 is 1 Mbits/s. The bandwidth requirement (OP-REQ) of Ingress 1 for egress port 1 of Egress 1 is the sum of the above F-REQs: 2 Mbits/s. This aggregation is sketched below.
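The per-egress-port aggregation performed by the R-Con unit can be pictured with the following sketch, which reproduces the example numbers above; the data layout and helper name are assumptions made for illustration.

```python
from collections import defaultdict

def aggregate_op_req(flow_requirements):
    """flow_requirements: (ingress_port, egress_port, f_req_mbps) tuples.
    Returns the OP-REQ (summed F-REQ) per egress port."""
    op_req = defaultdict(float)
    for _ingress_port, egress_port, f_req in flow_requirements:
        op_req[egress_port] += f_req
    return dict(op_req)

# Ports 1, 3, 4 and 6 of Ingress 1 all target egress port 1 of Egress 1.
print(aggregate_op_req([(1, ("Egress 1", 1), 0.3),
                        (3, ("Egress 1", 1), 0.2),
                        (4, ("Egress 1", 1), 0.5),
                        (6, ("Egress 1", 1), 1.0)]))
# {('Egress 1', 1): 2.0}  -> an OP-REQ of 2 Mbits/s
```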
  • The Egress module receives the control cell through the three-level interconnected switching network, extracts the OP-REQ carried in the control cell, and sends the OP-REQ to the distributed RBA module; the distributed RBA module calculates the OP-BWM information according to the OP-REQ and the rated bandwidth of the egress port corresponding to the destination address carried by the control cell, and sends the OP-BWM information to the corresponding Ingress module through the three-level interconnected switching network.
  • An egress port of the Egress module may receive control cells sent by multiple Ingress modules, that is, multiple Ingress modules may request bandwidth from the same egress port; the egress port then allocates bandwidth fairly to each Ingress module according to its own rated bandwidth and the request of each Ingress module. For example, Ingress 1, Ingress 2, and Ingress 3 request 1 Mbits/s, 2 Mbits/s, and 3 Mbits/s respectively from egress port 1 of Egress 1, whose rated bandwidth is 3 Mbits/s; egress port 1 allocates 0.5 Mbits/s, 1 Mbits/s, and 1.5 Mbits/s to Ingress 1, Ingress 2, and Ingress 3 respectively, generates the corresponding OP-BWM information accordingly, and sends it to Ingress 1, Ingress 2, and Ingress 3 through the three-level interconnected switching network. A sketch of this proportional allocation follows.
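The fair allocation in this example is proportional to the requests; a minimal sketch of such a proportional RBA allocation follows (the function and argument names are assumptions, and proportional scaling is one possible fairness rule consistent with the numbers above).

```python
def rba_allocate(rated_bandwidth_mbps: float, op_reqs: dict) -> dict:
    """op_reqs: {ingress_module: requested Mbits/s}. Returns OP-BWM per module."""
    total_requested = sum(op_reqs.values())
    if total_requested <= rated_bandwidth_mbps:
        return dict(op_reqs)                       # every request fits as-is
    scale = rated_bandwidth_mbps / total_requested
    return {ingress: req * scale for ingress, req in op_reqs.items()}

# Requests of 1, 2 and 3 Mbits/s against a 3 Mbits/s egress port
print(rba_allocate(3.0, {"Ingress 1": 1.0, "Ingress 2": 2.0, "Ingress 3": 3.0}))
# {'Ingress 1': 0.5, 'Ingress 2': 1.0, 'Ingress 3': 1.5}
```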
  • S1 in the three-level interconnected switching network extracts the OP-BWM information sent to the Ingress modules connected to it, performs per-S3 aggregation of the OP-BWM information through the LB SORT unit and outputs the S13-BWM information; it then performs traffic load balancing over the S2 intermediate paths to output the S123-BWM information, performs uniform cell interleaving and sorting, and outputs the first cell output sorting table, which controls the path matching of S1 and the uniform scheduling of cells.
  • Specifically, the OP-BWM information is aggregated into S13-BWM information per S1-S3 pair; the S13-BWM information is then distributed according to the number of paths between S1 and S3, that is, the number m of S2 units. For example, if the minimum granularity is 4 Mbits/s and the S13-BWM is 1200 Mbits/s, then (S13-BWM / granularity) / m rounds down to 37 with a remainder of 4, and each S2 path is allocated 37 or 38 granules, i.e. 148 Mbits/s or 152 Mbits/s, which is output as the S123-BWM information (see the sketch below). The S123-BWM information is then subjected to uniform cell interleaving and sorting, and the first cell output sorting table is output.
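The granularity-based split of an S13-BWM value over the S2 paths can be sketched as follows, reproducing the 1200 Mbits/s example; the function name and the assumption of m = 8 paths are illustrative only.

```python
def split_s13_to_s123(s13_bwm_mbps: float, num_paths: int,
                      granularity_mbps: float = 4.0):
    """Cut the S1->S3 bandwidth into granules and spread them over the paths;
    leftover granules go to the first paths."""
    granules = int(s13_bwm_mbps // granularity_mbps)   # 1200 / 4 = 300
    base, remainder = divmod(granules, num_paths)       # 300 / 8 = 37 rem 4
    return [(base + (1 if path < remainder else 0)) * granularity_mbps
            for path in range(num_paths)]

print(split_s13_to_s123(1200.0, 8))
# [152.0, 152.0, 152.0, 152.0, 148.0, 148.0, 148.0, 148.0]  (S123-BWM per path)
```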
  • The Ingress module extracts the OP-BWM information and distributes it proportionally, according to the F-REQ of each ingress port data stream for the egress port that sent the OP-BWM information, to obtain the F-BWM information corresponding to each ingress port. The F-BWM information is then subjected to uniform cell interleaving processing, and the second cell output sorting table is calculated and output.
  • The foregoing steps implement global ingress and egress port bandwidth allocation, so the Egress module has no traffic blocking; this simplifies the quality of service (QoS) design of the Egress module, significantly reduces the output module cache requirement, and at the same time provides strict QoS guarantees for different data streams.
  • The Switch Management (hereinafter referred to as SM) module of the Ingress module controls data stream cell transmission according to the second cell output sorting table of the module, ensuring that the cells of the ingress and egress ports are sent in a matched manner and that the cell delay jitter is guaranteed.
  • S1 receives the cells from the Ingress module, buffers them into per-S3 queues according to the cell ID, and then sends them over the S2 paths according to the first cell output sorting table of the first level, which resolves the path matching of cell transmission.
  • S2 receives the cells from S1 and sends them to the respective S3 according to the cell ID; S3 then sends the cells to the corresponding Egress module according to the cell ID.
  • The Egress module receives the cells, reorders them according to the cell sequence numbers, and reassembles the data frames.
  • The processing by which S1 and the Ingress module uniformly interleave and sort to generate a cell output sorting table may be implemented in the following manner: the granted bandwidth is mapped to timing positions within a switching cell period, and the corresponding cell output sorting table is output according to those positions.
  • For example, the preset switching cell period is 16 cell units, and the 16 cell units of the switching cell period are divided into 8 groups of 2 slots each; the slot positions are taken in binary-reverse (bit-reversal) order, so that a uniformly spaced cell output sorting table as in Table 1 is obtained.
  • For example, an ingress port of the Ingress module is to send data stream 1 and data stream 2, where stream 1 requires a bandwidth of 8 cells per period and stream 2 requires a bandwidth of 4 cells per period. After interleaving, the 1st cell slot carries the 1st cell of stream 1, the 2nd cell slot carries the 1st cell of stream 2, the 4th cell slot carries no data stream, ..., the 14th cell slot carries the 4th cell of stream 2, the 15th cell slot carries the 8th cell of stream 1, and the 16th cell slot carries no data stream. A sketch of this bit-reversal interleaving follows.
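A minimal sketch of binary-reverse (bit-reversal) interleaving over a 16-slot switching cell period, reproducing the example above; the function names and the table format are assumptions of this sketch rather than the patented sorting-table layout.

```python
def bit_reverse(value: int, bits: int) -> int:
    """Reverse the low `bits` bits of `value` (e.g. 1 -> 8 for 4 bits)."""
    result = 0
    for _ in range(bits):
        result = (result << 1) | (value & 1)
        value >>= 1
    return result

def build_sorting_table(grants, period=16):
    """grants: {stream_name: cells per period}. Returns a slot -> stream list."""
    bits = period.bit_length() - 1                 # 16 slots -> 4 address bits
    order = [bit_reverse(slot, bits) for slot in range(period)]
    table = [None] * period
    position = 0
    for stream, cells in grants.items():
        for _ in range(cells):
            table[order[position]] = stream        # place cells at bit-reversed slots
            position += 1
    return table

print(build_sorting_table({"stream 1": 8, "stream 2": 4}))
# stream 1 lands on every 2nd slot, stream 2 on every 4th; 4 slots stay idle
```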
  • In the scheduling method provided by this embodiment, the first-level switching unit sets, according to the OP-BWM information, the bandwidth allowed to be carried by each path between the first-level switching unit and the third-level switching unit, and generates S123-BWM information corresponding to each path, so that when the first-level switching unit receives a data stream, the data streams that need to reach the egress port through the same third-level switching unit are buffered into one queue according to the destination address of the data stream and are sent to the third-level switching unit according to the S123-BWM information, thereby scheduling the data streams without a request-arbitration for every cell.
  • the embodiment of the present invention provides a first-level switching unit corresponding to the method in the foregoing first embodiment, including:
  • The first acquiring unit 31 is configured to acquire outbound port bandwidth map information, where the outbound port bandwidth map information is the bandwidth allocated to the input module according to the rated bandwidth of the output port of the output module and the bandwidth requirement of the input module;
  • the first generating unit 32 is configured to generate channel bandwidth map information according to the outbound port bandwidth map information acquired by the first acquiring unit, where the channel bandwidth map information is the bandwidth allowed to be carried between the same first-level switching unit and the same third-level switching unit;
  • the second generating unit 33 is configured to generate path bandwidth map information according to the channel bandwidth map information generated by the first generating unit, where the path bandwidth map information is the bandwidth allowed to be carried by each path between the same first-level switching unit and the same third-level switching unit;
  • the first scheduling unit 34 is configured to, when receiving a data stream, buffer into one queue, according to the destination address of the data stream, the data streams that need to reach the output port of the output module through the same first-level switching unit and the same third-level switching unit, and to send the data streams to the third-level switching unit through each path according to the path bandwidth map information of each path between the first-level switching unit and the third-level switching unit.
  • the first scheduling unit includes:
  • a first scheduling sub-unit 341, configured to, if the data stream is a packet type service flow, distribute the packet type service flow over each path between the first-level switching unit and the third-level switching unit according to the path bandwidth map information of each path between the first-level switching unit and the third-level switching unit;
  • a second scheduling sub-unit 342, configured to, if the data stream is a time division multiplexing type service flow, select one path according to the path bandwidth map information of each path between the first-level switching unit and the third-level switching unit, and distribute the time division multiplexing type service flow to the selected path.
  • the first scheduling unit further includes:
  • a first sorting output sub-unit 343, configured to perform uniform cell interleaving and sorting on the path bandwidth map information of each path, and output a first cell output sorting table;
  • a first sending sub-unit 344, configured to send, according to the first cell output sorting table, the data stream over the path corresponding to the path bandwidth map information to the third-level switching unit.
  • Optionally, the first-stage switching unit further includes: a first receiving unit 35, configured to receive a control cell sent by the input module, where the control cell includes a destination address and the bandwidth requirement of the input module for the output port of the output module corresponding to the destination address;
  • a first sending unit 36, configured to send, according to the destination address, the control cell received by the first receiving unit toward the egress port of the output module, so that the egress port of the output module allocates bandwidth to the input module according to the rated bandwidth of the egress port itself and the bandwidth requirement carried in the control cell, and generates outbound port bandwidth map information;
  • The first acquiring unit 31 is specifically configured to receive the outbound port bandwidth map information sent by the output module.
  • Optionally, the first-stage switching unit further includes: a second sending unit 37, configured to send the outbound port bandwidth map information to the corresponding input module, so that the input module performs bandwidth allocation according to the outbound port bandwidth map information and the bandwidth requirements of the ingress port data streams in the input module for the egress port corresponding to the outbound port bandwidth map information.
  • The actions performed by the first acquiring unit 31, the first generating unit 32, the second generating unit 33, the first scheduling unit 34, the first receiving unit 35, the first sending unit 36, and the second sending unit 37 described above may be performed by an electronic circuit, chip, or processor having a certain structure.
  • The embodiment of the present invention further provides a three-level interconnected switching network, including the first-level switching unit shown in FIG. 3(a) to FIG. 3(c), and further including an input module and an output module;
  • the input module is configured to send a control cell to the output module, where the control cell includes a destination address and the bandwidth requirement of the input module for the output port of the output module corresponding to the destination address;
  • the output module is configured to receive the control cell sent by the input module, allocate bandwidth to the input module according to the bandwidth requirement carried by the control cell, generate outbound port bandwidth map information, and send the information to the input module and the first-level switching unit;
  • the input module is further configured to perform bandwidth allocation according to the outbound port bandwidth map information and the bandwidth requirements of the ingress port data streams in the input module for the egress port corresponding to the outbound port bandwidth map information.
  • the input module specifically includes: a second acquiring unit 41, configured to acquire a flow rate and cache information of an ingress port data stream of the input module;
  • a third generating unit 42, configured to generate, according to the flow rate and the cache information acquired by the second acquiring unit, the bandwidth requirement of each ingress port data stream for the egress port;
  • an aggregation unit 43, configured to aggregate the bandwidth requirements for the same egress port generated by the third generating unit, to generate the bandwidth requirement of the input module for that egress port;
  • a third sending unit 44, configured to allocate a corresponding identifier to the bandwidth requirement for the egress port obtained by the aggregation unit and encapsulate it into a control cell, where the identifier includes an identifier of the input module and an address of the egress port.
  • the input module further includes:
  • a fourth generating unit 45, configured to generate data stream bandwidth map information corresponding to each ingress port according to the outbound port bandwidth map information and the bandwidth requirements of the data streams in the input module for the egress port that sent the outbound port bandwidth map information;
  • a sorting unit 46, configured to perform uniform cell interleaving and sorting on the data stream bandwidth map information to output a second cell output sorting table, so that the data streams of the ingress ports are sent to the first-level switching unit according to the second cell output sorting table.
  • the operations performed by the third transmitting unit 44, the fourth generating unit 45, and the sorting unit 46 may be performed by an electronic circuit, chip, or processor having a certain structure.
  • the output module specifically includes:
  • a second receiving unit 51 configured to receive a control cell sent by the input module
  • a fifth generating unit 52 configured to allocate bandwidth to the input module according to a bandwidth requirement carried by the control cell received by the second receiving unit, to generate port bandwidth map information
  • the fourth sending unit 53 is configured to send the port bandwidth map information generated by the fifth generating unit to the input module and the first level switching unit.
  • the operations performed by the second receiving unit 51, the fifth generating unit 52, and the fourth transmitting unit 53 described above may be performed by an electronic circuit, chip or processor having a certain structure.
  • In the first-level switching unit and the three-level interconnected switching network provided by the embodiments of the present invention, the first-level switching unit sets, according to the OP-BWM information, the bandwidth allowed to be carried by each path between the first-level switching unit and the third-level switching unit, and generates S123-BWM information corresponding to each path, so that when the first-level switching unit receives a data stream, the data streams that need to reach the egress port through the same third-level switching unit are buffered into one queue according to the destination address of the data stream and sent to the third-level switching unit according to the S123-BWM information, thereby scheduling the data streams without a request-arbitration for each cell to be transmitted.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention belongs to the technical field of data communications. In one embodiment, the present invention relates to a method, a device, and a system for performing scheduling based on a three-level interconnected switching network. The object of the invention is to solve the problem in the prior art that the size of the switching network is restricted while the switching throughput is guaranteed during adaptive scheduling using the CRRD algorithm. The method described in the technical solution of the present invention comprises the following steps: a first-level switching unit obtains outbound port bandwidth map information; it generates channel bandwidth map information based on the outbound port bandwidth map information; it generates path bandwidth map information based on the channel bandwidth map information; finally, upon receiving a data stream, based on the destination addresses of the data stream, it buffers into a queue the data stream that needs to reach an output port of an output module via the same first-level switching unit and the same third-level switching unit, and it transmits the data stream to the third-level switching unit via the respective paths, based on the path bandwidth map information of each path between the first-level switching unit and the third-level switching unit.
PCT/CN2012/075832 2012-05-21 2012-05-21 Procédé, dispositif et système pour une programmation basée sur un réseau commuté interconnecté à trois étages WO2013173966A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201280000846.4A CN102835081B (zh) 2012-05-21 2012-05-21 基于三级互联交换网络的调度方法、装置及系统
PCT/CN2012/075832 WO2013173966A1 (fr) 2012-05-21 2012-05-21 Procédé, dispositif et système pour une programmation basée sur un réseau commuté interconnecté à trois étages

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/075832 WO2013173966A1 (fr) 2012-05-21 2012-05-21 Procédé, dispositif et système pour une programmation basée sur un réseau commuté interconnecté à trois étages

Publications (1)

Publication Number Publication Date
WO2013173966A1 true WO2013173966A1 (fr) 2013-11-28

Family

ID=47336885

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/075832 WO2013173966A1 (fr) 2012-05-21 2012-05-21 Procédé, dispositif et système pour une programmation basée sur un réseau commuté interconnecté à trois étages

Country Status (2)

Country Link
CN (1) CN102835081B (fr)
WO (1) WO2013173966A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112019620A (zh) * 2020-08-28 2020-12-01 中南大学 基于Nginx动态加权的Web集群负载均衡算法及系统

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013173966A1 (fr) * 2012-05-21 2013-11-28 华为技术有限公司 Procédé, dispositif et système pour une programmation basée sur un réseau commuté interconnecté à trois étages
CN106375218B (zh) * 2015-07-23 2019-06-21 华为技术有限公司 一种报文转发方法及相关装置
CN107196862B (zh) * 2016-03-14 2021-05-14 深圳市中兴微电子技术有限公司 一种流量拥塞控制方法及系统
CN106453072A (zh) * 2016-06-22 2017-02-22 中国科学院计算技术研究所 片上网络路由器通道资源的贪婪分配方法、装置及路由器
CN107959642B (zh) * 2016-10-17 2020-08-07 华为技术有限公司 用于测量网络路径的方法、装置和系统
CN107979544A (zh) * 2016-10-25 2018-05-01 华为技术有限公司 一种ip报文的转发方法、设备和系统
CN108574642B (zh) * 2017-03-14 2020-03-31 深圳市中兴微电子技术有限公司 一种交换网络的拥塞管理方法及装置
WO2019000340A1 (fr) * 2017-06-29 2019-01-03 华为技术有限公司 Procédé et dispositif de mappage de structure de topologie de réseau, terminal, et support de stockage
CN107480229B (zh) * 2017-08-03 2020-10-30 太原学院 用于对象检索的分布式计算机数据库系统及其检索方法
CN108040302B (zh) * 2017-12-14 2020-06-12 天津光电通信技术有限公司 基于Clos和T-S-T的自适应交换网络路由方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1848803A (zh) * 2005-07-27 2006-10-18 华为技术有限公司 一种基于三级交换网的下行队列快速反压传送装置及方法
WO2007125527A2 (fr) * 2006-04-27 2007-11-08 Dune Networks Inc. Procédé, dispositif et système de planification de transport de données sur une toile
WO2011014304A1 (fr) * 2009-07-29 2011-02-03 New Jersey Institute Of Technology Transfert de données via un commutateur de paquets d’un réseau de clos à trois étages comportant chacun une mémoire

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013173966A1 (fr) * 2012-05-21 2013-11-28 华为技术有限公司 Procédé, dispositif et système pour une programmation basée sur un réseau commuté interconnecté à trois étages

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1848803A (zh) * 2005-07-27 2006-10-18 华为技术有限公司 一种基于三级交换网的下行队列快速反压传送装置及方法
WO2007125527A2 (fr) * 2006-04-27 2007-11-08 Dune Networks Inc. Procédé, dispositif et système de planification de transport de données sur une toile
WO2011014304A1 (fr) * 2009-07-29 2011-02-03 New Jersey Institute Of Technology Transfert de données via un commutateur de paquets d’un réseau de clos à trois étages comportant chacun une mémoire

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112019620A (zh) * 2020-08-28 2020-12-01 中南大学 基于Nginx动态加权的Web集群负载均衡算法及系统
CN112019620B (zh) * 2020-08-28 2021-12-28 中南大学 基于Nginx动态加权的Web集群负载均衡方法及系统

Also Published As

Publication number Publication date
CN102835081B (zh) 2015-07-08
CN102835081A (zh) 2012-12-19

Similar Documents

Publication Publication Date Title
WO2013173966A1 (fr) Procédé, dispositif et système pour une programmation basée sur un réseau commuté interconnecté à trois étages
CN110620731B (zh) 一种片上网络的路由装置及路由方法
AU746166B2 (en) Fair and efficient cell scheduling in input-buffered multipoint switch
US6810031B1 (en) Method and device for distributing bandwidth
US7042883B2 (en) Pipeline scheduler with fairness and minimum bandwidth guarantee
US7161906B2 (en) Three-stage switch fabric with input device features
AU736406B2 (en) Method for providing bandwidth and delay guarantees in a crossbar switch with speedup
US7023841B2 (en) Three-stage switch fabric with buffered crossbar devices
US7221652B1 (en) System and method for tolerating data link faults in communications with a switch fabric
US7221647B2 (en) Packet communication apparatus and controlling method thereof
US6185221B1 (en) Method and apparatus for fair and efficient scheduling of variable-size data packets in an input-buffered multipoint switch
US8971317B2 (en) Method for controlling data stream switch and relevant equipment
US20050135356A1 (en) Switching device utilizing requests indicating cumulative amount of data
US20080267182A1 (en) Load Balancing Algorithms in Non-Blocking Multistage Packet Switches
WO2006081129A1 (fr) Reproduction de paquets de donnees multi-diffusion dans un systeme de commutation a etages multiples
EP2134037B1 (fr) Procédé et appareil de planification de flux de paquets de données
US8559443B2 (en) Efficient message switching in a switching apparatus
US7623456B1 (en) Apparatus and method for implementing comprehensive QoS independent of the fabric system
US7602797B2 (en) Method and apparatus for request/grant priority scheduling
Li System architecture and hardware implementations for a reconfigurable MPLS router
JP2004140538A (ja) 大容量化と低遅延化に対応したアービタおよびそれを用いたルータ
Hu et al. A distributed scheduling algorithm in central-stage buffered multi-stage switching fabrics
Dastangoo Performance consideration for building the next generation multi-service optical communications platforms
Fan et al. Global and dynamic round-robin scheduler for terabit routers

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201280000846.4

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12877418

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12877418

Country of ref document: EP

Kind code of ref document: A1