WO2021189994A1 - Method and device for transmitting service flows based on FlexE - Google Patents

Method and device for transmitting service flows based on FlexE

Info

Publication number
WO2021189994A1
WO2021189994A1 (PCT/CN2020/137333, CN2020137333W)
Authority
WO
WIPO (PCT)
Prior art keywords
time slot
network device
service flow
allocation strategy
required bandwidth
Prior art date
Application number
PCT/CN2020/137333
Other languages
English (en)
French (fr)
Inventor
张坚
韩涛
赵巍
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP20927878.7A (EP4106284A4)
Publication of WO2021189994A1

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04J: MULTIPLEX COMMUNICATION
          • H04J 3/00: Time-division multiplex systems
            • H04J 3/02: Details
              • H04J 3/14: Monitoring arrangements
            • H04J 3/16: Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
              • H04J 3/1605: Fixed allocated frame structures
                • H04J 3/1652: Optical Transport Network [OTN]
                  • H04J 3/1658: Optical Transport Network [OTN] carrying packets or ATM cells
          • H04J 2203/00: Aspects of optical multiplex systems other than those covered by H04J 14/05 and H04J 14/07
            • H04J 2203/0001: Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
              • H04J 2203/0057: Operations, administration and maintenance [OAM]
                • H04J 2203/006: Fault tolerance and recovery
              • H04J 2203/0073: Services, e.g. multimedia, GOS, QOS
                • H04J 2203/0082: Interaction of SDH with non-ATM protocols
                  • H04J 2203/0085: Support of Ethernet
        • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 47/00: Traffic control in data switching networks
            • H04L 47/50: Queue scheduling
              • H04L 47/52: Queue scheduling by attributing bandwidth to queues
                • H04L 47/522: Dynamic queue service slot or variable bandwidth allocation
                • H04L 47/525: Queue scheduling by attributing bandwidth to queues by redistribution of residual bandwidth
              • H04L 47/62: Queue scheduling characterised by scheduling criteria
                • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
                  • H04L 47/6275: Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
            • H04L 47/70: Admission control; Resource allocation
              • H04L 47/72: Admission control; Resource allocation using reservation actions during connection setup
              • H04L 47/80: Actions related to the user profile or the type of traffic
                • H04L 47/801: Real time traffic

Definitions

  • This application relates to the field of communication technologies, and in particular, to a method and device for transmitting service flows based on flexible Ethernet (Flex Eth or FlexE).
  • FlexE mainly provides three functions, namely bundling, channelization, and sub-rate.
  • Bundling refers to bundling multiple PHYs into a FlexE group; the multiple PHYs in the same FlexE group can jointly transmit service flows (clients), thereby supporting a higher rate.
  • The user needs to manually perform configuration operations on network device A to configure the physical interface number (PHY number) of the PHY used to transmit the service flow, and also to configure the time slot number of the time slot occupied by the service flow on that PHY.
  • Network device A obtains a time slot configuration table according to the user's configuration operations, carries the time slot configuration table in a FlexE overhead frame, and sends the FlexE overhead frame to network device B.
  • Network device B extracts the time slot configuration table from the FlexE overhead frame, parses it, and obtains the physical interface number and the time slot number.
  • Network device B then finds the corresponding PHY according to the physical interface number, finds the corresponding time slot according to the time slot number, and reconstructs the service flow from that time slot of that PHY.
  • The embodiments of the present application provide a method and device for transmitting service flows based on FlexE, which can improve the efficiency of allocating time slots in FlexE.
  • The technical solutions are as follows:
  • In a first aspect, a method for transmitting service flows based on FlexE is provided.
  • A first network device obtains a time slot allocation strategy, where the time slot allocation strategy is used to allocate time slots according to the bandwidth required by a first service flow.
  • The first network device determines a first time slot according to the time slot allocation strategy and the required bandwidth, where the first time slot is a time slot of a physical layer (PHY) link between the first network device and a second network device.
  • The first network device sends the first service flow to the second network device according to the first time slot.
  • The above provides a method for efficiently allocating time slots in FlexE.
  • The network device automatically allocates time slots on the PHY link for the service flow by using the time slot allocation strategy and the required bandwidth of the service flow, and uses the allocated time slots to transmit the service flow.
  • Because the user does not need to manually specify time slots for the service flow, the learning cost of understanding how to arrange time slots is eliminated, and the cumbersome per-flow time slot configuration operations are avoided, which greatly reduces configuration complexity and improves the efficiency of time slot allocation.
  • The first network device determining the first time slot according to the time slot allocation strategy and the required bandwidth includes: if the idle time slots meet the required bandwidth, the first network device determines, according to the time slot allocation strategy and the required bandwidth, a first time slot that satisfies the required bandwidth from the idle time slots.
  • Because the time slot allocation strategy automatically determines time slots that meet the required bandwidth and allocates them to the service flow, the service flow is transmitted over time slots whose bandwidth is guaranteed, which helps the service meet its SLA requirements. In particular, when the required bandwidth is specified by the user, allocating time slots in this optional manner ensures that the bandwidth of the service flow meets the user's expectation.
  • The first network device determining the first time slot according to the time slot allocation strategy and the required bandwidth includes: if the idle time slots do not meet the required bandwidth, the first network device determines, according to the time slot allocation strategy and an activation bandwidth, a first time slot that satisfies the activation bandwidth from the idle time slots, where the activation bandwidth is less than the required bandwidth and is the minimum bandwidth needed for the first network device to start transmitting the first service flow.
  • When the idle time slots are insufficient, the network device may be unable to find idle time slots that meet the required bandwidth.
  • In that case, time slots that meet the activation bandwidth are automatically determined and allocated to the service flow. Therefore, even if the idle time slots are insufficient, the network device can start transmitting the service flow using the time slots corresponding to the activation bandwidth, so the connectivity of the service flow is ensured and the service flow is not disconnected, which allows the maximum number of service flows to start transmission.
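  • As a minimal illustrative sketch (not part of the claimed method; the 5 Gbps slot granularity and the helper names are assumptions), the fallback described above can be expressed as: try to satisfy the required bandwidth from the idle slots first, and otherwise fall back to the smaller activation bandwidth.

```python
import math

SLOT_BW_GBPS = 5  # assumed 5G slot granularity on a 100G PHY

def slots_needed(bandwidth_gbps: float) -> int:
    """Number of whole time slots needed to carry the given bandwidth."""
    return math.ceil(bandwidth_gbps / SLOT_BW_GBPS)

def allocate_with_fallback(idle_slots: list, required_gbps: float, activation_gbps: float):
    """Try to satisfy the required bandwidth; otherwise fall back to the
    activation bandwidth (the minimum needed to start transmitting the flow)."""
    need = slots_needed(required_gbps)
    if len(idle_slots) >= need:
        return idle_slots[:need], "required"
    need_min = slots_needed(activation_gbps)
    if len(idle_slots) >= need_min:
        return idle_slots[:need_min], "activation"
    return [], "blocked"

# Example: only 2 idle slots left; the flow needs 20G but can start with 5G.
print(allocate_with_fallback([("PHY1", 7), ("PHY1", 9)], 20, 5))
# -> ([('PHY1', 7)], 'activation')
```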
  • The first network device determining the first time slot according to the time slot allocation strategy and the required bandwidth includes: if the idle time slots do not meet the required bandwidth, the first network device determines, according to the time slot allocation strategy and the priority of the first service flow, the first time slot from the time slots occupied by a second service flow, where the priority of the second service flow is lower than the priority of the first service flow.
  • Because the time slot allocation strategy automatically reallocates time slots originally occupied by a low-priority service flow to a high-priority service flow, the high-priority service flow can preempt the low-priority service flow even when idle time slots are insufficient, and can be transmitted over the time slots originally occupied by the low-priority service flow, thereby guaranteeing the bandwidth or the connectivity of the high-priority service flow.
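  • The priority-based preemption described above can be sketched as follows; the data structures and names are illustrative assumptions, not the application's actual implementation.

```python
def allocate_with_preemption(idle_slots, occupied, flow_priority, need):
    """Sketch of priority-based preemption: if idle slots are insufficient,
    take slots currently occupied by strictly lower-priority flows.
    `occupied` maps slot -> (flow_id, priority); a higher number means a
    higher priority."""
    chosen = list(idle_slots[:need])
    if len(chosen) < need:
        preemptable = [slot for slot, (fid, prio) in occupied.items()
                       if prio < flow_priority]
        chosen += preemptable[:need - len(chosen)]
    return chosen if len(chosen) == need else None

occupied = {("PHY1", 3): ("client2", 1), ("PHY1", 4): ("client3", 5)}
print(allocate_with_preemption([("PHY1", 0)], occupied, flow_priority=4, need=2))
# -> [('PHY1', 0), ('PHY1', 3)]   (client2's slot is preempted, client3's is not)
```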
  • The first network device determining the first time slot according to the time slot allocation strategy and the required bandwidth includes: the first network device determines, according to the time slot allocation strategy, a first PHY link with the smallest physical interface number from the available PHY links of the FlexE group; the first network device determines, according to the required bandwidth, the first time slot with the smallest time slot number from the idle time slots of the first PHY link.
  • Because the time slot with the smallest time slot number on the available PHY link with the smallest physical interface number is determined automatically, this provides a simple way to allocate time slots automatically and makes it convenient to manage the idle time slots of the FlexE group.
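  • A hedged sketch of this "smallest physical interface number first" strategy, with assumed data structures (a map from PHY number to its idle slot numbers):

```python
def allocate_lowest_numbered(free_slots_by_phy: dict, need: int):
    """Sketch: pick the available PHY link with the smallest number, then
    take the idle slots with the smallest slot numbers on that link.
    Whether to try the next PHY link when one link has too few idle slots
    is an assumption of this sketch."""
    for phy in sorted(free_slots_by_phy):
        idle = sorted(free_slots_by_phy[phy])
        if len(idle) >= need:
            return [(phy, ts) for ts in idle[:need]]
    return None  # no single PHY link has enough idle slots

free = {2: [0, 1, 5, 6], 1: [3, 4]}
print(allocate_lowest_numbered(free, need=3))
# -> [(2, 0), (2, 1), (2, 5)]  (PHY 1 is tried first but has only 2 idle slots)
```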
  • The first network device determining, according to the time slot allocation strategy, the first time slot that satisfies the required bandwidth from the idle time slots includes: the first network device determines, according to the time slot allocation strategy, a second PHY link with the smallest load from the available PHY links of the FlexE group; the first network device determines, according to the required bandwidth, the first time slot with the smallest time slot number from the idle time slots of the second PHY link.
  • Without this strategy, all service flows might be concentrated on one or a few PHY links, leaving some PHY links fully loaded while others are empty.
  • With this strategy, service flows can be evenly shared across different PHY links, which reduces the pressure on any single PHY link and realizes load sharing.
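  • A comparable sketch of the least-loaded strategy, again with assumed data structures (the load of a PHY link is taken here as its count of occupied slots):

```python
def allocate_least_loaded(free_slots_by_phy: dict, total_slots_per_phy: int, need: int):
    """Sketch of the load-sharing strategy: pick the available PHY link with
    the smallest load (fewest occupied slots), then take its lowest-numbered
    idle slots."""
    def load(phy):
        return total_slots_per_phy - len(free_slots_by_phy[phy])
    for phy in sorted(free_slots_by_phy, key=lambda p: (load(p), p)):
        idle = sorted(free_slots_by_phy[phy])
        if len(idle) >= need:
            return [(phy, ts) for ts in idle[:need]]
    return None

free = {1: [18, 19], 2: list(range(5, 20))}   # PHY 1 nearly full, PHY 2 lightly loaded
print(allocate_least_loaded(free, total_slots_per_phy=20, need=2))
# -> [(2, 5), (2, 6)]
```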
  • The first network device determining the first time slot according to the time slot allocation strategy and the required bandwidth includes: the first network device determines, according to the time slot allocation strategy and the required bandwidth, the first time slot from the idle time slots of multiple PHY links, and the first time slot is evenly distributed among the different PHY links of the multiple PHY links.
  • In other words, the bandwidth required by the same service flow is balanced across as many available PHY links as possible.
  • This greatly reduces the impact on the service flow after a single PHY link fails: even if one PHY link fails, the service flow can still be transmitted using the time slots on the other PHY links, so the service flow keeps available bandwidth and its transmission is not interrupted.
  • For example, if the required bandwidth of the first service flow is equally shared among N PHY links, each PHY link carries 1/N of the time slots corresponding to the required bandwidth.
  • If one PHY link fails, the remaining (N-1) PHY links still transmit the first service flow, so the first service flow retains (N-1)/N of its available bandwidth and can recover from the failure quickly without human intervention. In addition, this reduces the pressure on any single PHY link and realizes load sharing.
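  • The even-distribution strategy can be sketched as a round-robin pick across the available PHY links; the helper below is an illustrative assumption, not the claimed algorithm itself.

```python
def allocate_evenly(free_slots_by_phy: dict, need: int):
    """Sketch of even distribution: spread the slots of one service flow
    across all available PHY links in round-robin order, so that the loss
    of a single PHY link removes only about 1/N of the flow's bandwidth."""
    iters = {phy: iter(sorted(slots)) for phy, slots in sorted(free_slots_by_phy.items())}
    chosen = []
    while len(chosen) < need and iters:
        for phy in list(iters):
            try:
                chosen.append((phy, next(iters[phy])))
            except StopIteration:
                del iters[phy]          # this PHY link has no idle slots left
            if len(chosen) == need:
                break
    return chosen if len(chosen) == need else None

# A 20G flow (4 x 5G slots) spread over 2 PHY links: 2 slots on each.
free = {1: [0, 1, 2], 2: [0, 1, 2]}
print(allocate_evenly(free, need=4))
# -> [(1, 0), (2, 0), (1, 1), (2, 1)]
```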
  • The method further includes: when the PHY link where the first time slot is located fails, the first network device determines a second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow, where the second time slot is different from the first time slot; the first network device sends the first service flow to the second network device according to the second time slot.
  • In this way, the first network device can dynamically migrate the first service flow from the original time slot to the newly determined time slot according to the time slot allocation strategy, thereby redeploying time slots for the first service flow and realizing time slot rearrangement.
  • Because the delay caused by a negotiation process is eliminated, the outage time can be controlled within 50 milliseconds, ensuring that the service flow recovers within 50 milliseconds and greatly improving the speed of service recovery from failures.
  • The transmitting and receiving ends re-determine time slots according to the same time slot allocation strategy and the same required bandwidth, so the new time slots determined at both ends are identical. After the time slot migration, the time slot arrangement at both ends remains consistent, so both ends can transmit the service flow normally. This realizes the protection switching function between different PHY links in the FlexE group: the service flow on the failed PHY link is switched to a normal PHY link, avoiding interruption of service flow transmission.
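  • A sketch of this protection-switching idea, under the assumption that both ends run the same deterministic allocator over the same surviving PHY links (helper names are illustrative):

```python
def lowest_numbered(free_slots_by_phy, need):
    """Minimal deterministic allocator: smallest PHY number, smallest slot numbers."""
    for phy in sorted(free_slots_by_phy):
        idle = sorted(free_slots_by_phy[phy])
        if len(idle) >= need:
            return [(phy, ts) for ts in idle[:need]]
    return None

def reallocate_after_failure(free_slots_by_phy, failed_phy, need, allocate=lowest_numbered):
    """Sketch of protection switching: drop the failed PHY link from the
    candidate set and re-run the same deterministic strategy. Because TX
    and RX apply the same strategy to the same inputs, both ends derive
    the same new slots without an extra negotiation round."""
    surviving = {p: s for p, s in free_slots_by_phy.items() if p != failed_phy}
    return allocate(surviving, need)

free = {1: [0, 1, 2, 3], 2: [4, 5, 6, 7]}
print(reallocate_after_failure(free, failed_phy=1, need=2))
# -> [(2, 4), (2, 5)]
```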
  • The method further includes: the first network device pushes the time slot allocation strategy to the second network device.
  • This ensures policy consistency between the RX side and the TX side, so that in time slot migration scenarios such as PHY link failure, PHY link addition or deletion, and required bandwidth updates, the RX side and the TX side use the same time slot allocation strategy, and the time slots redeployed on the RX side are consistent with the time slots redeployed on the TX side, which helps traffic recover quickly.
  • It also eliminates the need for the user to configure the time slot allocation strategy on the RX side, thereby reducing configuration complexity and improving the efficiency of deploying the time slot allocation strategy.
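  • How the strategy push is encoded is not specified in this summary; the following is a purely hypothetical sketch of a compact strategy/bandwidth message, for illustration only.

```python
import struct

# Hypothetical numeric codes for the strategies named in this summary; the
# actual on-wire encoding (for example inside an LLDP TLV or the overhead
# frame management channel) is not given here.
STRATEGY_CODES = {"lowest_phy_first": 1, "least_loaded_phy": 2, "even_spread": 3}

def encode_policy_push(strategy: str, required_bw_gbps: int) -> bytes:
    """Sketch of pushing the time slot allocation strategy from TX to RX so
    that both ends later recompute identical slot layouts."""
    return struct.pack("!BH", STRATEGY_CODES[strategy], required_bw_gbps)

def decode_policy_push(payload: bytes):
    code, bw = struct.unpack("!BH", payload)
    names = {v: k for k, v in STRATEGY_CODES.items()}
    return names[code], bw

msg = encode_policy_push("even_spread", 20)
print(decode_policy_push(msg))   # -> ('even_spread', 20)
```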
  • The first network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: if the idle time slots meet the required bandwidth, the first network device determines, according to the time slot allocation strategy and the required bandwidth, a second time slot that satisfies the required bandwidth from the idle time slots.
  • The re-determined time slot is then used to transmit the service flow, so that after the service flow is migrated from the original time slot to the re-determined time slot, the bandwidth of the service flow still meets the required bandwidth, ensuring that the maximum number of service flows keep operating normally.
  • The bandwidth of the service flow continues to be guaranteed, which helps guarantee the service-level agreement (SLA) of the service.
  • In particular, when the required bandwidth is specified by the user, reallocating the time slots in this way ensures that the bandwidth of the service flow after the PHY failure still meets the user's expectation.
  • The first network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: if the idle time slots do not meet the required bandwidth, the first network device determines, according to the time slot allocation strategy and the activation bandwidth, a second time slot that satisfies the activation bandwidth from the idle time slots, where the activation bandwidth is less than the required bandwidth and is the minimum bandwidth needed for the first network device to start transmitting the first service flow.
  • The determined time slot is then used to transmit the service flow, so that the service flow stays connected and can be transmitted to the opposite end, avoiding interruption of the service flow after the PHY link fails and ensuring, as far as possible, that the maximum number of service flows remain in transmission after the failure.
  • The first network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: if the idle time slots do not meet the required bandwidth, the first network device determines, according to the time slot allocation strategy and the priority of the first service flow, the second time slot from the time slots occupied by the second service flow, where the priority of the second service flow is lower than the priority of the first service flow.
  • The first network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: the first network device determines, according to the time slot allocation strategy, the first PHY link with the smallest physical interface number from the available PHY links of the FlexE group; the first network device determines, according to the required bandwidth, the second time slot with the smallest time slot number from the idle time slots of the first PHY link.
  • The first network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: the first network device determines, according to the time slot allocation strategy, the second PHY link with the smallest load from the available PHY links of the FlexE group; the first network device determines, according to the required bandwidth, the second time slot with the smallest time slot number from the idle time slots of the second PHY link.
  • The first network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: the first network device determines, according to the time slot allocation strategy and the required bandwidth, the second time slot from the idle time slots of multiple PHY links, and the second time slot is evenly distributed among the different PHY links of the multiple PHY links.
  • The method further includes: when the required bandwidth of the first service flow is updated, the first network device determines a third time slot according to the time slot allocation strategy and the updated required bandwidth of the first service flow, where the third time slot is different from the first time slot; the first network device sends the first service flow to the second network device according to the third time slot.
  • Because the transmitting and receiving ends re-allocate time slots using the same strategy, they automatically arrive at the same time slots, which eliminates the communication overhead of negotiating the time slot configuration and helps realize a lossless update of the required bandwidth.
  • The method further includes: when a PHY link is added to the FlexE group where the first time slot is located, the first network device determines, according to the time slot allocation strategy and the required bandwidth of the first service flow, a fourth time slot from the time slots of the FlexE group to which the PHY link has been added, where the fourth time slot is different from the first time slot; the first network device sends the first service flow to the second network device according to the fourth time slot.
  • The method further includes: when a PHY link is deleted from the FlexE group where the first time slot is located, the first network device determines, according to the time slot allocation strategy and the required bandwidth of the first service flow, a fifth time slot from the time slots of the FlexE group from which the PHY link has been deleted, where the fifth time slot is different from the first time slot; the first network device sends the first service flow to the second network device according to the fifth time slot.
  • When a PHY link is added or deleted, the time slot allocation strategy is used to reallocate time slots. Because the transmitting and receiving ends use the same time slot allocation strategy and see the same FlexE group after the addition or deletion, they automatically reallocate consistent time slots, which eliminates the communication overhead of negotiating the time slot configuration and helps realize lossless addition and deletion of PHY links.
  • The method further includes: when a third service flow to be transmitted is added or an original fourth service flow to be transmitted is deleted, the first network device determines a sixth time slot according to the time slot allocation strategy and the required bandwidth of the first service flow, where the sixth time slot is different from the first time slot; the first network device sends the first service flow to the second network device according to the sixth time slot.
  • Because the transmitting and receiving ends automatically re-allocate the same time slots, the communication overhead of negotiating the time slot configuration is eliminated, which helps realize lossless addition and deletion of service flows.
  • The first network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: the first network device deletes, from the first FlexE group, the PHY link where the first time slot is located to obtain a second FlexE group, where the second FlexE group does not include the PHY link where the first time slot is located; the first network device determines the second time slot from the second FlexE group according to the time slot allocation strategy and the required bandwidth of the first service flow.
  • In this way, the process of removing the failed PHY link from the FlexE group is started quickly, so the failed PHY link is removed automatically and the remaining PHY links in the FlexE group stay active, which keeps the FlexE group available and avoids the entire FlexE group becoming unavailable after a single PHY link failure.
  • In a second aspect, a method for transmitting service flows based on FlexE is provided.
  • A second network device obtains a time slot allocation strategy, where the time slot allocation strategy is used to allocate time slots according to the bandwidth required by the first service flow.
  • The second network device determines the first time slot according to the time slot allocation strategy and the required bandwidth, where the first time slot is a time slot of the physical layer (PHY) link between the second network device and the first network device; the second network device receives the first service flow from the first network device according to the first time slot.
  • The second network device determining the first time slot according to the time slot allocation strategy and the required bandwidth includes: if the idle time slots meet the required bandwidth, the second network device determines, according to the time slot allocation strategy and the required bandwidth, the first time slot that satisfies the required bandwidth from the idle time slots.
  • The second network device determining the first time slot according to the time slot allocation strategy and the required bandwidth includes: if the idle time slots do not meet the required bandwidth, the second network device determines, according to the time slot allocation strategy and the activation bandwidth, the first time slot that satisfies the activation bandwidth from the idle time slots, where the activation bandwidth is less than the required bandwidth and is the minimum bandwidth needed for the second network device to start transmitting the first service flow.
  • The second network device determining the first time slot according to the time slot allocation strategy and the required bandwidth includes: if the idle time slots do not meet the required bandwidth, the second network device determines, according to the time slot allocation strategy and the priority of the first service flow, the first time slot from the time slots occupied by the second service flow, where the priority of the second service flow is lower than the priority of the first service flow.
  • The second network device determining the first time slot according to the time slot allocation strategy and the required bandwidth includes: the second network device determines, according to the time slot allocation strategy, the first PHY link with the smallest physical interface number from the available PHY links of the FlexE group; the second network device determines, according to the required bandwidth, the first time slot with the smallest time slot number from the idle time slots of the first PHY link.
  • The second network device determining, according to the time slot allocation strategy, the first time slot that satisfies the required bandwidth from the idle time slots includes: the second network device determines, according to the time slot allocation strategy, the second PHY link with the smallest load from the available PHY links of the FlexE group; the second network device determines, according to the required bandwidth, the first time slot with the smallest time slot number from the idle time slots of the second PHY link.
  • The second network device determining the first time slot according to the time slot allocation strategy and the required bandwidth includes: the second network device determines, according to the time slot allocation strategy and the required bandwidth, the first time slot from the idle time slots of multiple PHY links, and the first time slot is evenly distributed among the different PHY links of the multiple PHY links.
  • The method further includes: when the PHY link where the first time slot is located fails, the second network device determines a second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow, where the second time slot is different from the first time slot; the second network device receives the first service flow from the first network device according to the second time slot.
  • The transmitting and receiving ends re-determine time slots according to the same time slot allocation strategy and the same required bandwidth, so the new time slots determined at both ends are identical. After the time slot migration, the time slot arrangement at both ends remains consistent, so both ends can transmit the service flow normally. This realizes the protection switching function between different PHY links in the FlexE group: the service flow on the failed PHY link is switched to a normal PHY link, avoiding interruption of service flow transmission.
  • The second network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: if the idle time slots meet the required bandwidth, the second network device determines, according to the time slot allocation strategy and the required bandwidth, a second time slot that satisfies the required bandwidth from the idle time slots.
  • The second network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: if the idle time slots do not meet the required bandwidth, the second network device determines, according to the time slot allocation strategy and the activation bandwidth, a second time slot that satisfies the activation bandwidth from the idle time slots, where the activation bandwidth is less than the required bandwidth and is the minimum bandwidth needed for the second network device to start transmitting the first service flow.
  • The second network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: if the idle time slots do not meet the required bandwidth, the second network device determines, according to the time slot allocation strategy and the priority of the first service flow, the second time slot from the time slots occupied by the second service flow, where the priority of the second service flow is lower than the priority of the first service flow.
  • The second network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: the second network device determines, according to the time slot allocation strategy, the first PHY link with the smallest physical interface number from the available PHY links of the FlexE group; the second network device determines, according to the required bandwidth, the second time slot with the smallest time slot number from the idle time slots of the first PHY link.
  • The second network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: the second network device determines, according to the time slot allocation strategy, the second PHY link with the smallest load from the available PHY links of the FlexE group; the second network device determines, according to the required bandwidth, the second time slot with the smallest time slot number from the idle time slots of the second PHY link.
  • The second network device determining the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow includes: the second network device determines, according to the time slot allocation strategy and the required bandwidth, the second time slot from the idle time slots of multiple PHY links, and the second time slot is evenly distributed among the different PHY links of the multiple PHY links.
  • The second network device obtaining the time slot allocation strategy includes: the second network device receives the time slot allocation strategy from the first network device.
  • In other words, the time slot allocation strategy is obtained by being pushed from the peer.
  • This ensures policy consistency between the RX side and the TX side, so that when a PHY link fails, a PHY link is added or deleted, or the required bandwidth is updated, the RX side and the TX side use the same time slot allocation strategy, and the time slots redeployed on the RX side are consistent with the time slots redeployed on the TX side, which helps traffic recover quickly.
  • It also eliminates the need for the user to configure the time slot allocation strategy on the RX side, thereby reducing configuration complexity and improving the efficiency of deploying the time slot allocation strategy.
  • The second network device receiving the time slot allocation strategy from the first network device includes: the second network device receives a negotiation request from the first network device, where the negotiation request is used to indicate the time slot allocation strategy; the second network device determines the time slot allocation strategy according to the negotiation request.
  • The method further includes: when the required bandwidth of the first service flow is updated, the second network device determines a third time slot according to the time slot allocation strategy and the updated required bandwidth of the first service flow, where the third time slot is different from the first time slot; the second network device receives the first service flow from the first network device according to the third time slot.
  • The method further includes: when a PHY link is added to the FlexE group where the first time slot is located, the second network device determines, according to the time slot allocation strategy and the required bandwidth of the first service flow, a fourth time slot from the time slots of the FlexE group to which the PHY link has been added, where the fourth time slot is different from the first time slot; the second network device receives the first service flow from the first network device according to the fourth time slot.
  • The method further includes: when a PHY link is deleted from the FlexE group where the first time slot is located, the second network device determines, according to the time slot allocation strategy and the required bandwidth of the first service flow, a fifth time slot from the time slots of the FlexE group from which the PHY link has been deleted, where the fifth time slot is different from the first time slot; the second network device receives the first service flow from the first network device according to the fifth time slot.
  • The method further includes: when a third service flow to be transmitted is added or an original fourth service flow to be transmitted is deleted, the second network device determines a sixth time slot according to the time slot allocation strategy and the required bandwidth of the first service flow, where the sixth time slot is different from the first time slot; the second network device receives the first service flow from the first network device according to the sixth time slot.
  • In a third aspect, a first network device is provided, and the first network device has the function of implementing the FlexE-based service flow transmission in the first aspect or any of the optional manners of the first aspect.
  • The first network device includes at least one module, and the at least one module is configured to implement the FlexE-based service flow transmission method provided in the first aspect or any of the optional manners of the first aspect.
  • In a fourth aspect, a second network device is provided, and the second network device has the function of implementing the FlexE-based service flow transmission in the second aspect or any of the optional manners of the second aspect.
  • The second network device includes at least one module, and the at least one module is configured to implement the FlexE-based service flow transmission method provided in the second aspect or any of the optional manners of the second aspect.
  • In a fifth aspect, a first network device is provided, including a processor and a physical interface. The processor is configured to execute instructions so that the first network device performs the method provided in the first aspect or any of the optional manners of the first aspect.
  • The physical interface is configured to send the service flow.
  • In a sixth aspect, a second network device is provided, including a processor and a physical interface. The processor is configured to execute instructions so that the second network device performs the method provided in the second aspect or any of the optional manners of the second aspect.
  • The physical interface is configured to receive the service flow.
  • In a seventh aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, and the instruction is read by a processor to enable a first network device to perform the FlexE-based service flow transmission method provided in the first aspect or any of the optional manners of the first aspect.
  • In an eighth aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, and the instruction is read by a processor to enable a second network device to perform the FlexE-based service flow transmission method provided in the second aspect or any of the optional manners of the second aspect.
  • In a ninth aspect, a computer program product is provided. When the computer program product runs on a first network device, the first network device performs the FlexE-based service flow transmission method provided in the first aspect or any of the optional manners of the first aspect.
  • In a tenth aspect, a computer program product is provided. When the computer program product runs on a second network device, the second network device performs the FlexE-based service flow transmission method provided in the second aspect or any of the optional manners of the second aspect.
  • In an eleventh aspect, a chip is provided. The chip enables the first network device to perform the FlexE-based service flow transmission method provided in the first aspect or any of the optional manners of the first aspect.
  • In a twelfth aspect, a chip is provided. The chip enables the second network device to perform the FlexE-based service flow transmission method provided in the second aspect or any of the optional manners of the second aspect.
  • In a thirteenth aspect, a network system is provided, including a first network device and a second network device.
  • The first network device is configured to perform the method described in the first aspect or any of the optional manners of the first aspect, and the second network device is configured to perform the method described in the second aspect or any of the optional manners of the second aspect.
  • In a fourteenth aspect, a first network device is provided, including a central processing unit, a network processor, and a physical interface.
  • The central processing unit is configured to obtain the time slot allocation strategy and to determine the first time slot according to the time slot allocation strategy and the required bandwidth.
  • The network processor is configured to trigger the physical interface to send the first service flow to the second network device according to the first time slot.
  • The first network device includes a main control board and an interface board; the central processing unit is arranged on the main control board, the network processor and the physical interface are arranged on the interface board, and the main control board is coupled with the interface board.
  • An inter-process communication (IPC) channel is established between the main control board and the interface board, and the main control board and the interface board communicate through the IPC channel.
  • In a fifteenth aspect, a second network device is provided, including a central processing unit, a network processor, and a physical interface.
  • The central processing unit is configured to obtain the time slot allocation strategy and to determine the first time slot according to the time slot allocation strategy and the required bandwidth.
  • The network processor is configured to trigger the physical interface to receive the first service flow from the first network device according to the first time slot.
  • The second network device includes a main control board and an interface board; the central processing unit is arranged on the main control board, the network processor and the physical interface are arranged on the interface board, and the main control board is coupled with the interface board.
  • An IPC channel is established between the main control board and the interface board, and the main control board and the interface board communicate through the IPC channel.
  • FIG. 1 is a schematic structural diagram of a FlexE group provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a data structure in FlexE provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the structure of an overhead frame and an overhead multiframe provided by an embodiment of the present application
  • FIG. 4 is a schematic diagram of the docking between the transceiver ends of the FlexE provided by the embodiment of the present application;
  • FIG. 5 is a schematic diagram of a time slot configuration provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a system architecture 100 provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a system architecture 200 provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a resource management layer provided by an embodiment of the present application.
  • FIG. 9 is a flowchart of a method 300 for transmitting a service flow based on FlexE according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of LLDPDU in an LLDP frame provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of protection switching between different PHY links in a FlexE group according to an embodiment of the present application.
  • FIG. 12 is a flowchart of a method 400 for transmitting a service flow based on FlexE according to an embodiment of the present application
  • FIG. 13 is a schematic structural diagram of a network device 500 provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a network device 600 provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a network device 700 provided by an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of a network device 800 provided by an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a network system 900 provided by an embodiment of the present application.
  • In this application, the terms "first", "second", and similar words are used to distinguish items that are identical or similar and have basically the same functions. It should be understood that there is no logical or temporal dependence between "first" and "second", and they do not limit the number or the execution order. It should also be understood that although the following description uses the terms first, second, and so on to describe various elements, these elements should not be limited by the terms, which are only used to distinguish one element from another.
  • For example, the first network device may be referred to as the second network device, and similarly, the second network device may be referred to as the first network device. Both the first network device and the second network device may be network devices and, in some cases, may be separate and different network devices.
  • "First" and "second" are used to distinguish different "time slots" or different "service flows", and do not limit the protection scope of the embodiments of the present application.
  • Because Ethernet interface standards and products are developed in a stepwise manner, and current Ethernet interface standards all define fixed rates, there is a gap between transmission requirements and the actual interface capabilities of devices, and it is often necessary to go beyond the current Ethernet interface rate levels to meet the demand for higher bandwidth.
  • FlexE was defined by the Optical Internetworking Forum (OIF) to address this.
  • FlexE inserts an adaptation layer between the media access control (MAC) layer and the physical coding sublayer (PCS), so that the Ethernet interface rate can flexibly match a variety of service scenarios; when higher-bandwidth network processors (NPs) or forwarding devices appear, there is no need to wait for a new fixed-rate Ethernet standard to be released in order to make full use of the equipment's performance.
  • This adaptation layer is called the FlexE shim.
  • The basic function of FlexE is to map M FlexE service flows (clients), according to the time division multiplexing (TDM) mechanism of the FlexE shim, onto a flexible Ethernet group composed of N physical layer (PHY) links.
  • M and N are both positive integers, and the basic structure of FlexE can be shown in Figure 1.
  • In the example of Figure 1, M is 6 and N is 4: the service flows of 6 FlexE clients are mapped, according to the FlexE shim TDM mechanism, onto a FlexE group consisting of 4 PHY links.
  • Each 100G PHY corresponds to 20 time slots (time slot, TS) composed of 64-bit/66-bit (64B/66B) code blocks, and each time slot corresponds to a payload rate of 5 Gbps (switching bandwidth).
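  • A small worked computation under the 5G-per-slot figure above (helper names are illustrative):

```python
import math

SLOT_PAYLOAD_GBPS = 5          # one time slot on a 100G PHY carries 5 Gbps
SLOTS_PER_100G_PHY = 20        # 20 x 5G = 100G

def slots_for(client_bw_gbps: float) -> int:
    """Whole 5G time slots needed to carry a client of the given bandwidth."""
    return math.ceil(client_bw_gbps / SLOT_PAYLOAD_GBPS)

# e.g. a 25G client needs 5 slots; a 4-PHY (400G) FlexE group offers 80 slots.
print(slots_for(25), 4 * SLOTS_PER_100G_PHY)   # -> 5 80
```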
  • The current FlexE standard supports FlexE on 100GE, 200GE, 400GE, and 50GE interfaces.
  • Taking a 100GE PHY as an example, the format of a piece of 100GE PHY data is shown in Figure 2.
  • Each block is a 64B/66B block encoded according to IEEE 802.3 Clause 82; every 20 blocks form a time slot table (calendar), and each block corresponds to one time slot in the TDM mapping mechanism.
  • The shim slices the bandwidth resources of the Ethernet port into time slots based on 64B/66B blocks, and numbers the sliced time slots uniformly to obtain the time slot number corresponding to each time slot.
  • The shim at the transmitting (TX) end slices the service data, encapsulates the sliced service data into the pre-divided time slots, and uses the calendar in the overhead frame to convey the mapping relationship between the local service flow and the time slot numbers to the receiving (RX) end.
  • The RX end extracts the mapping relationship between the service flow and the time slot numbers from the overhead frame, and reassembles the service flow from the corresponding time slots according to the mapping relationship. The shim can correspond to a network device.
  • For the overhead frame, refer to the schematic diagram of the frame structure shown in FIG. 3.
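  • A minimal sketch of the client-to-slot mapping conveyed by the calendar, assuming a simple dictionary representation rather than the actual on-wire layout:

```python
# Sketch of the mapping that the TX shim conveys to the RX shim via the
# calendar in the overhead frame. Field names are illustrative only.
calendar = {
    # slot number -> client id carried in that slot (None = unused)
    0: "client1", 1: "client1", 2: "client2", 3: None,
}

def clients_to_slots(cal: dict) -> dict:
    """Invert the calendar: which slots must the RX shim read to reassemble
    each client's 64B/66B blocks?"""
    out = {}
    for ts, client in cal.items():
        if client is not None:
            out.setdefault(client, []).append(ts)
    return out

print(clients_to_slots(calendar))   # -> {'client1': [0, 1], 'client2': [2]}
```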
  • FlexE group is also called FlexE bundle group or bundle group.
  • the FlexE group includes one or more PHYs.
  • The FlexE group may consist of 1 to 254 PHYs supporting 100GE rates, where the numbers 0 and 255 are reserved.
  • The bandwidth resource corresponding to a FlexE group is the sum of the bandwidth resources of the PHYs in the group, so FlexE can support a higher transmission rate and a larger transmission bandwidth based on the FlexE group. FlexE can transmit multiple service flows in parallel through the FlexE group, and the service data of one service flow can be carried by a single PHY in the FlexE group or by different PHYs in the group.
  • In other words, the service data of the same service flow can be transmitted to the opposite end through one PHY in the FlexE group, or through multiple PHYs in the FlexE group.
  • Where this introduces no difficulty of understanding, the following embodiments of the present application use the form "GRP + number" to represent a specific FlexE group; for example, a FlexE group is abbreviated as "GRP1".
  • The number in "GRP + number" is the group ID of the FlexE group.
  • The group ID (Group Number, also known as GRP_Number, GRP_ID, group number or group identifier) is used to identify a group of physical interfaces (a FlexE group).
  • The GRP_ID parameter is carried in a fixed field of the overhead frame of each physical interface belonging to the FlexE group; the GRP_ID can be regarded as the identifier of the large physical pipe.
  • the group IDs at both ends of the FlexE group connection can be the same.
  • The PHY can be defined as providing the mechanical, electrical, functional, and procedural characteristics required to establish, maintain, and remove the physical links used for data transmission.
  • The PHY mentioned in this application can include the physical layer working devices at both the transmitting and receiving ends and the transmission medium (such as optical fiber) located between them.
  • The physical layer working devices can include, for example, Ethernet physical layer interface devices and the like. Therefore, in this application, a PHY link can be understood as a physical layer channel, which includes the port of the RX-end device, the port of the TX-end device, and the communication link between the two ports.
  • The physical interface number (PHY Number, also called physical port number or physical port identifier) identifies the physical interface. FlexE organizes multiframes according to the physical interface number, and uniformly numbers the time slots on multiple PHY links based on the physical interface number. Generally, the physical interface numbers of a PHY link at both the transmitting and receiving ends can be the same; alternatively, a PHY link can have different physical interface numbers at the two ends, with a one-to-one correspondence between them.
  • a time slot refers to a time slice in the time division multiplexing mode.
  • For example, a FlexE group with a bandwidth of 100G has 20 time slots, each with a bandwidth of 5G.
  • Each 5G time slot can further be divided into 5 sub-time-slots, each with a bandwidth of 1G.
  • Where this introduces no difficulty of understanding, the following embodiments of the present application use the form "TS + number" to represent a time slot; for example, a time slot is abbreviated as "TS1".
  • The number in "TS + number" is the time slot number.
  • The time slot number (TS Number or TS_NUM, also called ts_no, TS ID or TS identifier) is used to identify the corresponding time slot.
  • a FlexE group usually has multiple time slots, and these time slots are uniformly numbered, and each time slot corresponds to a time slot number.
  • the service flow (client) corresponds to various service interfaces of the network, which is consistent with the traditional service interface in the IP/Ethernet network.
  • A FlexE client can be flexibly configured according to bandwidth requirements, supports Ethernet MAC data streams of various rates (such as 10G, 40G, and n*25G data streams, and even non-standard-rate data streams), and passes the data stream to the FlexE shim layer through 64B/66B encoding.
  • Where this introduces no difficulty of understanding, the following embodiments of the present application use the form "client + number" to represent a service flow; for example, a service flow is abbreviated as "client1".
  • The number in "client + number" is the service flow identifier.
  • The service flow identifier (client_ID) is used to identify the service flow. Based on a given FlexE group, one or more service flows can be created, and different service flows are distinguished by different service flow identifiers.
  • the formats of overhead frames (also called management frames) and overhead multi-frames are shown in Figure 3.
  • the client id is reflected in the multi-frame overhead calendar.
  • the FlexE overhead (overhead, OH) includes the time slot table configuration information of all FlexE Clients in the FlexE group.
  • two time slot tables can be used: Calendar A and Calendar B. These two time slot tables have the following characteristics.
  • Feature 1: only one time slot table is working at any time; that is, at any moment either Calendar A or Calendar B is in use.
  • Feature 2: the TX end and the RX end of the FlexE group are docked, and the consistency of the working time slot tables at TX and RX is guaranteed through the time slot negotiation mechanism of the FlexE OH overhead.
  • For example, if Calendar A is in the working state, Calendar B holds the standby time slot configuration.
  • When the TX end needs to switch to the standby configuration, it initiates time slot negotiation and sends a time slot negotiation request (calendar switch request, CSR) to the RX end; after the TX end receives the response from the RX end, it triggers both the TX end and the RX end to switch the working table to Calendar B.
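  • A toy model of this A/B calendar switch handshake (method names are illustrative, and the standby calendar contents carried to the RX end in the overhead frame are omitted):

```python
from dataclasses import dataclass, field

@dataclass
class CalendarSwitch:
    """Toy model of the A/B calendar switch described above: TX programs the
    standby calendar and sends a switch request (CR/CSR), RX acknowledges
    (CA/CSA), then both ends flip the working calendar."""
    working: str = "A"
    standby_config: dict = field(default_factory=dict)

    def request_switch(self, new_config: dict) -> str:
        self.standby_config = new_config      # program the standby calendar
        return "CR"                           # calendar switch request

    def acknowledge(self, msg: str) -> str:
        return "CA" if msg == "CR" else "NACK"

    def apply(self, ack: str):
        if ack == "CA":
            self.working = "B" if self.working == "A" else "A"

tx, rx = CalendarSwitch(), CalendarSwitch()
ack = rx.acknowledge(tx.request_switch({0: "client1"}))
tx.apply(ack); rx.apply(ack)
print(tx.working, rx.working)   # -> B B
```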
  • Figure 3 also includes the following information.
  • The bit field numbered 8 in the first block, the bit field numbered 0 in the second block, and the bit field numbered 0 in the third block all carry the C bit (the indication of the calendar configuration in use).
  • Overhead multiframe indicator (OMFI), called OMF in standards such as IA OIF-FlexE-01.0/01.1/02.2/02.1, is used to indicate the boundary of the multiframe.
  • the bit field numbered 9 in the first block of the overhead frame as shown in FIG. 3 carries the OMF.
  • The OMF value is 0 for the first 16 single frames and 1 for the next 16 single frames, so the boundary of the multiframe can be determined from the transition between 0 and 1.
  • Remote PHY fault (RPF): used to indicate a fault on the remote PHY.
  • Synchronization control (SC): used for synchronization control.
  • the bit field numbered 11 in the first block of the overhead frame as shown in FIG. 3 carries the SC.
  • Flexible Ethernet Map (FlexE Map): used to control which FlexE instances are members of this group.
  • the bit fields numbered 1 to 8 in the second block of the overhead frame as shown in FIG. 3 carry the FlexE Map.
  • The FlexE Map includes the PHY link information of the FlexE group.
  • Each bit of the FlexE Map corresponds to a PHY link, and the value of each bit indicates whether the PHY link corresponding to that bit is in this FlexE group. For example, if the value of the bit is a first value (for example, 1), the PHY link corresponding to the bit is considered to be in the FlexE group; if the value of the bit is a second value (for example, 0), the PHY link corresponding to the bit is considered not to be in the FlexE group.
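  • A sketch of reading the FlexE Map under the assumption of a single byte and an illustrative bit-to-PHY numbering:

```python
def phys_in_group(flexe_map_byte: int) -> list:
    """Sketch of interpreting the FlexE Map: each bit corresponds to one PHY
    link, and a bit value of 1 means that PHY link is a member of this FlexE
    group (the bit-to-PHY numbering here is illustrative)."""
    return [bit for bit in range(8) if (flexe_map_byte >> bit) & 1]

# 0b00000011: the PHY links mapped to bits 0 and 1 are in the group.
print(phys_in_group(0b00000011))   # -> [0, 1]
```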
  • Flexible Ethernet instance number (FlexE instance Number): Represents the identity of this FlexE instance within the group (Identity of this FlexE instance within the group).
  • the bit field numbered from 9 to 16 in the second block of the overhead frame as shown in FIG. 3 carries the FlexE instance Number.
  • the bit field numbered 12 to 31 in the first block of the overhead frame carries Group Number.
  • Time slot table switch acknowledgement (calendar switch acknowledgement, CSA): It is called CA in the implementation agreement (implementation agreements, IA) OIF-FlexE-01.0/01.1/02.2/02.1 and other standards, where 01.0/01.1/02.2/02.1 is Several versions of the IA OIF-FlexE standard.
  • the bit field numbered 34 in the third block of the overhead frame as shown in FIG. 3 carries the CA.
  • Time slot table switch request (calendar switch request, CSR): It is called CR in standards such as IA OIF-FlexE-01.0/01.1/02.2/02.1.
  • the bit field numbered 33 in the third block of the overhead frame shown in FIG. 3 carries the CR.
  • Synchronization head (SH): the frame header of the overhead frame as shown in FIG. 3.
• S (valid sync header bits): the fields under SH in the fourth block to the eighth block of the overhead frame as shown in FIG. 3 carry the S.
  • Management Channel (Management Channel): The fourth block to the eighth block of the overhead frame as shown in FIG. 3 carries the management channel.
• CRC-16: used to perform cyclic redundancy check (CRC) protection on the content of the overhead block.
  • FIG. 3 also includes a reserved (reserved) field.
  • Bit fields numbered 35 to 47 are reserved fields.
• FlexE technology is currently in the commercial promotion stage. The difficulty lies in the configuration resources and definitions presented to applications at the protocol level, as well as the differences in resource configuration at different rate levels.
• Users need to intervene in depth in the networking of the FlexE group, the PHY links, the PHY link rates, the time slots, and the sub-time slot bundling strategies and restrictions. The following briefly describes the process of establishing a service flow at both ends.
• FIG. 4 shows the docking model of the receiving and transmitting ends.
  • the user has formed a 200G FlexE group.
  • the FlexE group is composed of two 100G PHY links, and each PHY link has 20 5G time slots.
  • the user needs to configure many parameters such as group identification, physical interface number, time slot number, and flow identification.
  • client1 is the service flow transmitted from network device A to network device B.
  • the bandwidth required by client1 is 5G.
  • the user creates a service flow on network device A, and the user configures the flow identifier of the service flow as client1.
  • the user creates a service flow on network device B, and the user configures the flow identifier of the service flow as client1.
  • the flow identification configured on network device B should be consistent with the flow identification configured on network device A.
• the user specifies on network device A that client1 is sent from time slot 2 of the physical interface whose PHY number is 1 in the FlexE group.
  • the configuration information of the user in S3 (that is, the correspondence between client1 and time slot No. 2) is transmitted to network device B through the FlexE overhead frame.
• network device B extracts the configuration information from the FlexE overhead frame (that is, the overhead frame shown in FIG. 3), and learns that client1 occupies time slot 2 of the physical interface whose PHY number is 1 in the FlexE group.
• network device B recovers client1 from time slot 2, thereby establishing the service flow.
• the above S1 to S6 are described by taking the transmission direction from network device A to network device B as an example.
• if the transmission direction of the service flow is from network device B to network device A, the user needs to perform a configuration operation on network device B to configure the mapping relationship between the time slot and the service flow.
  • the time slot specified on the network device B does not need to be consistent with the time slot specified on the network device A, that is, the receiving and sending time slots of a client may be inconsistent.
• as the port types are extended with 200G and 400G, the PHY number defined in 1.0 is modified to the instance number, and the hierarchical level of resources is increased by one level, which makes it more difficult for the user to manage and configure time slots.
  • the user should at least understand the following (1) to (5).
  • the user deploys a FlexE group between network device A and network device B.
  • the FlexE group includes two 200G FlexE physical interfaces, and the total bandwidth of the FlexE group is 400G.
• the user creates 3 service flows, namely client1, client2 and client3.
  • the bandwidth of client1 is 1G.
  • a 5G timeslot can be split into 5 1G subslots.
• the user must first check all 5G time slots in the FlexE group to confirm whether there is currently a 5G time slot that has already been split and still has a free 1G sub-slot. If so, the user assigns one free 1G sub-slot to client1. If not, the user selects an idle main time slot, splits it into five 1G sub-slots, and selects one 1G sub-slot to allocate to client1.
  • the bandwidth of client2 is 5G.
  • the user selects any free 5G time slot to allocate to client2.
  • the bandwidth of client3 is 15G.
  • the user selects any three free 5G time slots to allocate to client3.
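• For illustration only, the following Python sketch reproduces the arithmetic the user has to perform by hand in this example, assuming 5G main time slots that can each be split into five 1G sub-slots; the helper name is hypothetical.

    SLOT_BW_G = 5        # bandwidth of a main time slot
    SUB_SLOT_BW_G = 1    # bandwidth of a sub-slot

    def slots_needed(required_bw_g):
        """Return (main_slots, sub_slots) needed for a required bandwidth in Gbit/s."""
        main_slots = required_bw_g // SLOT_BW_G
        sub_slots = (required_bw_g % SLOT_BW_G) // SUB_SLOT_BW_G
        return main_slots, sub_slots

    for client, bw in (("client1", 1), ("client2", 5), ("client3", 15)):
        print(client, slots_needed(bw))
    # client1 (0, 1): one 1G sub-slot; client2 (1, 0): one 5G slot; client3 (3, 0): three 5G slots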
  • the physical interfaces that make up the FlexE group are at risk of failure. Conditions such as fiber damage and fiber aging may cause physical interface failures.
• the impact of a physical interface failure on the service flow is uncontrollable, and whether it affects the service flow depends on the user's configuration of the time slot. Specifically, after a user deploys a time slot on a certain PHY link for a service flow, if the physical interface corresponding to the PHY link fails, the physical interface cannot transmit the service flow and the transmission of the service flow is interrupted. In other words, when the time slot deployed by the user happens to be provided by the failed physical interface, the service flow is affected.
• the failure recovery of the service flow depends on the user redeploying the service flow to available time slots. In other words, as long as the user has not reassigned a time slot for the service flow, the time slot allocated to the service flow remains the time slot on the faulty PHY link, so the service flow stays in an interrupted state and it is difficult to recover from the fault in time.
• the current service protection capabilities of the FlexE group are insufficient: protection can only be based on different FlexE groups, and 1:1 or N:1 protection based on physical interfaces within a FlexE group cannot be achieved.
  • this embodiment of the application provides a solution based on FlexE transmission service flow.
• in this solution, the receiving and sending ends of the service flow automatically allocate time slots for the service flow based on the time slot allocation strategy and the bandwidth required by the service flow, and the service flow is transmitted according to the allocated time slots.
• from the perspective of configuration difficulty, since the user does not need to perceive how the time slots are arranged, the complicated operation of configuring time slots is eliminated, thus greatly reducing the configuration difficulty.
• from the perspective of failure recovery, the receiving and sending ends of the service flow can automatically reallocate time slots based on the original time slot allocation strategy and the bandwidth required by the service flow, and the service flow is automatically switched from the originally used time slots to the newly allocated time slots, thereby realizing dynamic time slot migration so that the service flow can quickly recover from the failure.
  • the system architecture 100 is an example of the hardware environment on which the method 300 is based.
  • the system architecture 100 includes a network device 101 and a network device 102.
  • the network device 101 and the network device 102 are, for example, a router or a switch.
  • the network device 101 and the network device 102 establish one or more FlexE groups, and each FlexE group includes multiple PHY links in a bundle relationship.
  • each FlexE group includes multiple PHY links in a bundle relationship.
  • the FlexE group includes two bundled PHY links, and the two PHY links are PHY1 and PHY2, respectively.
  • the bundling between different PHY links refers to a logical bundling relationship, and there is not necessarily a physical connection relationship.
  • multiple PHY links in a FlexE link group can be physically independent of each other.
  • the PHY link includes, for example, optical fiber.
  • Each PHY link in the FlexE group can provide at least one time slot, and each time slot corresponds to a certain size of bandwidth.
  • the total bandwidth of the FlexE group is, for example, the sum of the bandwidths corresponding to each time slot on each PHY link.
  • the FlexE group is configured with a total bandwidth of 100G, and the FlexE group has a total of 20 time slots, and each time slot corresponds to a 5G bandwidth.
  • PHY1 provides 10 time slots
  • PHY2 provides another 10 time slots.
  • One or more service streams are transmitted between the network device 101 and the network device 102 through the FlexE group, and each service stream occupies one or more time slots on one or more PHY links in the FlexE group.
• the time slots occupied by the same service flow are distributed on the same PHY link, or the time slots occupied by the same service flow are distributed on each of multiple PHY links, for example, evenly distributed on different PHY links in the FlexE group.
  • three service flows are created between the network device 101 and the network device 102, and the three service flows are client1, client2, and client3, respectively.
  • client1 uses 5G bandwidth
  • client1 occupies TS1 of PHY1.
  • client2 occupies TS1 of PHY2
  • client3 uses 40G bandwidth
  • client3 occupies TS2 to TS9 of PHY1.
  • the protection relationship includes, but is not limited to, a 1:1 protection relationship or an N:1 protection relationship.
  • the 1:1 protection relationship refers to the use of one PHY link to protect another PHY link.
  • the N:1 protection relationship refers to the use of one PHY link to protect N PHY links.
  • the protection relationship includes, but is not limited to, the primary-standby protection relationship and the peer-to-peer protection relationship.
  • different PHY links that have established protection relationships are the master and backup relationships.
  • PHY1 is the master PHY link
  • PHY2 is the backup PHY link
  • PHY2 is used to protect PHY1.
• when PHY1 fails, the service flow carried on PHY1 is switched to PHY2.
  • different PHY links with established protection relationships are peer-to-peer relationships.
• PHY1 and PHY2 protect each other: when PHY1 fails, the service flow on PHY1 is switched to PHY2, and when PHY2 fails, the service flow on PHY2 is switched to PHY1.
• the physical interfaces of the network device 101 and the network device 102 are divided into working ports and protection ports. The working port of the network device 101 and the working port of the network device 102 establish a main PHY link, and the protection port of the network device 101 and the protection port of the network device 102 establish a backup PHY link. One backup PHY link protects one main PHY link to form a 1:1 protection relationship, or one backup PHY link protects N main PHY links to form an N:1 protection relationship.
  • each service flow transmitted between the network device 101 and the network device 102 corresponds to a priority.
  • the priority of different business flows is the same or different.
  • client1, client2, and client3 have priorities respectively.
  • client1 has the highest priority
  • client2 has the second priority
  • client3 has the lowest priority.
  • the priority of the service flow transmitted on the backup PHY link is lower than the priority of the service flow transmitted on the main PHY link.
  • the scenario of establishing a FlexE group shown in FIG. 6 is only an example, and the scenario where the FlexE group includes two PHY links is also only an example.
  • the number of FlexE groups established in the system architecture 100 can be more or less, and the number of PHY links included in a FlexE group can be more or less.
• for example, the system architecture 100 also includes other FlexE groups in addition to GRP1.
  • the system architecture 100 also includes other PHY links other than PHY1 and PHY2.
  • the embodiment of the present application does not limit the number of FlexE groups and the number of PHY links established in the system architecture 100.
  • the network device 101 and the network device 102 may also establish four PHY links of PHY1, PHY2, PHY3, and PHY4.
  • the above system architecture 100 focuses on describing the overall network architecture.
  • the following uses the system architecture 200 to describe the logical function architecture inside the device.
  • the system architecture 200 is an example of the logical function architecture of the network device.
  • the system architecture 200 includes a user configuration layer 201, a resource management layer (also called Resource Management Layer, RS MNG Layer, resource management sublayer or RS MNG) 202, a shim layer 203, and a FlexE physical interface 204.
• from the perspective of the FlexE service architecture, the resource management layer 202 is located between the user configuration layer 201 and the shim layer 203.
  • the user configuration layer 201 is used to receive and store user configuration information, for example, to store the time slot allocation strategy and the required bandwidth of the service flow.
• the time slot allocation strategy includes the time slot allocation strategy used when the PHY link is normal (also called the bandwidth allocation strategy) and the time slot allocation strategy used in the case of a PHY link failure (also called the time slot migration strategy); the user configuration layer 201 saves both the bandwidth allocation strategy and the time slot migration strategy.
  • the user configuration layer 201 saves the correspondence between the service flow identifier and the required bandwidth. For example, referring to Figure 7, the required bandwidth (Band Width, BW) of client1 is BW1, the required bandwidth of client2 is BW2, and the required bandwidth of client3 is BW3.
• that is, the user configuration layer 201 saves the correspondence between client1 and BW1, the correspondence between client2 and BW2, and the correspondence between client3 and BW3.
  • the resource management layer 202 is used to manage time slots.
  • the functions of the resource management layer 202 include the following functions (1) to (5).
• Function (1): the user directly plans and configures the required bandwidth of the client without needing to perceive the time slot arrangement, shielding the user from the details of time slot management.
• Functions (2) to (4) relate to the time slot allocation strategy exchanged between the two ends through the Link Layer Discovery Protocol (LLDP): the pushed time slot allocation strategy is used by the local end to allocate time slots in the TX direction during time slot migration, and the received time slot allocation strategy is used by the local end to allocate time slots in the RX direction during time slot migration.
• Function (5): monitors the status of the FlexE physical interface 204, quickly responds to fault states of the FlexE physical interface 204, and executes time slot migration in the TX direction or the RX direction according to the predetermined time slot allocation strategy.
  • the above system architecture 200 introduces the overall logical function architecture, and the resource management layer 202 in the system architecture 200 is introduced in detail below.
  • the resource management layer 202 includes at least one functional module, and each functional module is implemented by software.
  • the functional module is generated after the processor of the network device reads the program code stored in the memory.
  • the functional modules of the resource management layer 202 include a TX policy module 2021, an RX policy module 2022, a bandwidth allocation module 2023, a time slot migration module 2024, and a time slot resource pool 2025.
  • the TX strategy module 2021 is used to store the time slot allocation strategy according to the user's definition.
  • the TX policy module 2021 is also used to push the time slot allocation policy to the opposite end through LLDP.
  • the TX strategy module 2021 is further configured to allocate time slots in the TX direction according to the time slot allocation strategy.
  • the RX strategy module 2022 is used to receive the time slot allocation strategy pushed by the peer and save the time slot allocation strategy.
  • the RX strategy module 2022 is also used to allocate time slots in the RX direction according to the time slot allocation strategy.
  • the bandwidth allocation module 2023 is configured to allocate time slots according to the required bandwidth of the service flow and the time slot allocation strategy saved by the TX policy module 2021 when the user adds or deletes service streams or configures the required bandwidth.
  • the time slot migration module 2024 is used to allocate time slots according to the required bandwidth of the service flow and the time slot allocation strategy saved by the RX strategy module 2022 when the PHY link is in a fault state.
  • the time slot resource pool 2025 is used to store and maintain idle time slots of the PHY link.
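• For illustration only, the following Python sketch outlines how the modules of the resource management layer 202 could fit together; the class, attribute, and method names are assumptions of the sketch and are not taken from the embodiments.

    class ResourceManagementLayer:
        def __init__(self):
            self.tx_policy = None      # TX policy module 2021: user-defined time slot allocation strategy
            self.rx_policy = None      # RX policy module 2022: strategy pushed by the peer (e.g. via LLDP)
            self.slot_pool = set()     # time slot resource pool 2025: idle (phy, slot) pairs
            self.assignments = {}      # service flow identifier -> allocated (phy, slot) pairs

        def save_tx_policy(self, policy):
            self.tx_policy = policy    # pushing the policy to the peer is not shown here

        def save_rx_policy(self, policy):
            self.rx_policy = policy

        def allocate(self, client_id, required_bw_g, slot_bw_g=5):
            # bandwidth allocation module 2023: pick enough idle slots for the required bandwidth
            needed = -(-required_bw_g // slot_bw_g)          # ceiling division
            chosen = sorted(self.slot_pool)[:needed]
            if len(chosen) < needed:
                return None                                  # not enough idle time slots
            self.slot_pool -= set(chosen)
            self.assignments[client_id] = chosen
            return chosen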
  • the system architecture 100 and the system architecture 200 have been introduced above, and the method 300 is used to exemplarily introduce the process of the method for transmitting service flows based on the system architecture 100 and the system architecture 200.
  • FIG. 9 is a flowchart of a method 300 for transmitting a service flow based on FlexE according to an embodiment of the present application.
  • the method 300 includes the following S301 to S311.
  • the method 300 is described by taking an example in which the transmission direction of the service flow is from the first network device to the second network device.
  • the first network device is an upstream network element
  • the second network device is a downstream network element.
• the service flow transmission process in the reverse direction is similar: if the transmission direction of the service flow is instead from the second network device to the first network device, the method 300 can also be used to transmit the service flow, which will not be repeated here.
  • the first network device is the network device 101 in the system architecture 100
  • the second network device is the network device 102 in the system architecture 100.
  • both the first network device and the second network device have a logical function architecture shown in the system architecture 200.
  • the first network device and the second network device execute the method 300 through the functional modules included in the system architecture 200.
• data such as the time slot allocation strategy and the required bandwidth in the method 300 are received, stored, and maintained by the user configuration layer 201, and the steps related to time slot allocation in the method 300 (such as S306, S307, S309, and S310) are executed by the resource management layer 202.
  • the step of transmitting the service flow in the method 300 is executed through the shim layer 203 and the FlexE physical interface 204.
  • the method 300 is processed by a central processing unit (CPU), or may be processed by the CPU and the NP together.
• for example, the CPU executes the processing actions corresponding to S302 to S307 and S309 to S310, and the NP executes the processing actions corresponding to S308 and S311.
• alternatively, other processors suitable for packet forwarding may be used instead of the NP to execute the processing actions corresponding to S308 and S311, which is not limited in this application.
  • the method 300 focuses on describing how to allocate time slots, and for the technical details of how to transmit service streams, please refer to the introduction of FIG. 1 to FIG. 3 above.
  • the first network device establishes a PHY link with the second network device.
  • the first network device and the second network device may create a FlexE group, and the FlexE group includes multiple PHY links between the first network device and the second network device.
  • the FlexE group can be created based on the user's plan.
  • the FlexE group is used to transmit service flows.
  • the FlexE group can be understood as a large pipe, which includes one or more physical interfaces of the first network device and the second network device that are capable of bundling.
  • the network device 101 and the network device 102 form a FlexE group with a bandwidth of 100G and a group ID of GRP1, and two physical interfaces with a bandwidth of 50G are added to the FlexE group.
  • the first network device and the second network device create a FlexE group includes multiple implementation methods.
  • the user performs configuration operations on the first network device and the second network device based on the docking parameters of the FlexE group, thereby completing the configuration of the FlexE group.
  • the properties of the FlexE group include the configured bandwidth of the FlexE group and the available bandwidth of the FlexE group.
  • the configured bandwidth of the FlexE group refers to the bandwidth of the FlexE group planned by the user, and the configured bandwidth of the FlexE group is the sum of the bandwidths of the physical interfaces bound in the FlexE group.
  • the available bandwidth of the FlexE group refers to the sum of the bandwidth of the physical interfaces currently active in the FlexE group.
  • the active state is also called the link state.
• the active state is a concept relative to the deactivated state. If some physical interfaces in the FlexE group are in the non-link state, the bandwidth resources corresponding to those physical interfaces are unavailable, and those physical interfaces are in the deactivated state.
  • the process of configuring the FlexE group includes the following steps A and B.
  • Step A The user performs the creation operation of the FlexE group on the first network device and the second network device respectively, the user specifies the group ID of the FlexE group, and enters the group ID on the first network device and the second network device respectively.
  • the first network device and the second network device create a FlexE group, and configure the group identifier of the FlexE group as a group identifier specified by the user.
  • Step B The user specifies the physical interface number of the PHY link and other parameters required for the docking of the FlexE group, and enters the specified parameters on the first network device and the second network device respectively.
• based on the FlexE group created in step A, the first network device and the second network device add the FlexE physical interfaces and configure the specified parameters.
  • the first network device obtains configuration information of the first service flow.
  • the configuration information of the first service flow includes at least one of the required bandwidth of the first service flow or the priority of the first service flow.
  • the configuration information of the first service flow is obtained through a configuration operation of the user. In other words, the required bandwidth of the first service flow and the priority of the first service flow are specified by the user.
  • the second network device obtains configuration information of the first service flow.
  • the configuration information of the first service flow obtained by the second network device is the same as the configuration information of the first service flow obtained by the first network device.
  • the configuration information of the first service flow is the same on the RX end and the TX end.
  • S302 and S303 can be executed sequentially.
  • S302 can be executed first, and then S303; or S303 can be executed first, and then S302.
  • S302 and S303 can also be executed in parallel, that is, S302 and S303 can be executed simultaneously.
  • the first network device obtains a time slot allocation strategy.
  • One or more service streams can be transmitted between the first network device and the second network device.
• the transmission of the first service flow is taken as an example below for an exemplary description of how the time slot allocation strategy is implemented.
• the time slot allocation strategy is used to allocate time slots according to the required bandwidth of the first service flow. Triggered by various scenarios such as adding or deleting service flows, PHY link failure, required bandwidth update, and adding or deleting PHY links, the first network device or the second network device automatically allocates time slots for the first service flow according to the time slot allocation strategy.
• by providing a time slot allocation strategy, the user only needs to configure the bandwidth without knowing the implementation details of the FlexE protocol, and does not need to carefully plan time slots and sub-time slots, which greatly reduces the configuration complexity.
  • the required bandwidth refers to the bandwidth required to transmit the first service stream.
  • the required bandwidth is the bandwidth specified by the user for the first service flow, and the required bandwidth is also referred to as the configured bandwidth.
  • the first service flow is client1
  • the terminal sends a bandwidth request to the first network device
  • the bandwidth request is used to apply for the allocation of required bandwidth for client1
  • the bandwidth request carries BW1
  • BW1 is the required bandwidth corresponding to client1.
  • the first network device obtains BW1 from the bandwidth request, thereby determining that client1 needs to pass the required bandwidth of the size of BW1.
  • S302 and S304 can be executed sequentially. For example, S302 may be executed first, and then S304; or S304 may be executed first, and then S302 may be executed. In other embodiments, S302 and S304 can also be executed in parallel, that is, S302 and S304 can be executed simultaneously.
  • the second network device obtains a time slot allocation strategy.
  • the time slot allocation strategy obtained by the second network device is the same as the time slot allocation strategy obtained by the first network device.
• since the RX end and the TX end are based on the same time slot allocation strategy and the same required bandwidth, they determine the same time slot for the service flow. Because the time slot determined by the RX end is the same as the time slot determined by the TX end, the communication overhead caused by double-ended time slot negotiation is eliminated, and the delay of transmitting the service flow is reduced.
• Implementation method (1): the TX end pushes the time slot allocation strategy to the RX end.
• after the first network device (TX end) obtains the time slot allocation strategy, it sends the time slot allocation strategy to the second network device (RX end), and the second network device receives the time slot allocation strategy from the first network device.
• the effect of implementation method (1) includes: by obtaining the time slot allocation strategy through pushing, on the one hand, the consistency of the strategy at the RX end and the TX end is ensured, so that in various time slot migration scenarios such as PHY link failure, PHY link addition or deletion, and required bandwidth update, the RX end and the TX end use the same time slot allocation strategy and therefore redeploy consistent time slots, which helps the traffic recover quickly; on the other hand, the process of configuring the time slot allocation strategy for the user on the RX end is eliminated, thus reducing the configuration complexity and improving the efficiency of deploying the time slot allocation strategy.
  • the frequency of pushing the time slot allocation strategy includes many situations.
  • the first network device pushes the time slot allocation strategy to the second network device every other time period.
  • the TX side regularly pushes the time slot allocation strategy to the RX side.
  • the first network device may also push the time slot allocation strategy in real time, or push the time slot allocation strategy under the trigger of an instruction.
  • the process of pushing the time slot allocation strategy is carried out through negotiation.
  • the first network device generates a negotiation request, sends the negotiation request to the second network device, the second network device receives the negotiation request from the first network device, and the second network device determines the time slot allocation strategy according to the negotiation request.
  • the negotiation request is used to indicate the time slot allocation strategy.
  • the negotiation request includes the identification of the time slot allocation strategy.
  • How to negotiate a time slot allocation strategy includes multiple implementation methods.
  • the second network device and the first network device negotiate the time slot allocation strategy based on the LLDP protocol, and accordingly, the above-mentioned negotiation request is an LLDP frame.
• the structure of the LLDP frame is extended so that the LLDP frame includes a strategy field, and the value of the strategy field is used to indicate the time slot allocation strategy adopted for transmitting the first service flow. For example, if the value of the strategy field is 0, it indicates that a time slot allocation strategy based on the required bandwidth is adopted for transmitting the first service flow; if the value of the strategy field is 1, it indicates that a time slot allocation strategy based on the activation bandwidth is adopted; if the value of the strategy field is 2, it indicates that a time slot allocation strategy based on priority preemption is adopted. In this way, the first network device uses the LLDP negotiation method to push the adopted time slot allocation strategy to the second network device.
  • the policy field is carried by the Type-Length-Value (TLV) of the LLDP frame.
  • the LLDP frame includes the policy TLV
  • the value of the policy TLV includes the policy field.
  • the policy TLV specifically includes a variety of situations.
• the policy TLV is a newly added top TLV, and the value of the type field of the policy TLV is an unused top-TLV type.
• or, the policy TLV is a newly added sub-TLV of a top TLV, and the value of the type field of the policy TLV is an unused sub-TLV type.
• or, the policy TLV is a newly added sub-sub-TLV of a top TLV, and the type of the policy TLV is an unused sub-sub-TLV type. This embodiment does not limit whether the policy TLV is a top TLV, a sub-TLV, or a sub-sub-TLV.
  • the LLDP payload (LLDPDU) in the LLDP frame includes Chassis ID TLV (Chassis ID TLV), Port ID TLV (Port ID TLV), Time to Live TLV (Time to Live TLV, TTL TLV), optional TLV (Optional TLV), LLDP Load End TLV (End of LLDPDU TLV).
  • the policy TLV includes a subType field and a policy field.
  • the value of the subType field is a newly added value, which is used to indicate the policy TLV. Using this implementation method, by extending a sub-TLV, the time slot allocation strategy is specified.
  • the Chassis ID TLV is used to advertise the chassis ID (chassis ID) of the sender of the LLDPDU
  • the Port ID TLV is used to identify the port of the device that sends the LLDPDU.
• the Time to Live TLV is used to notify the receiving end of the validity period of the received information.
  • End Of LLDPDU TLV is used to mark the end of LLDPDU.
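• For illustration only, the following Python sketch packs a policy field into an LLDP TLV; it assumes the policy TLV is carried as an organizationally specific TLV (type 127) with a placeholder OUI and a hypothetical subType value, since the embodiments only state that a subType field identifies the policy TLV and a policy field carries the strategy (0, 1 or 2).

    import struct

    TLV_TYPE_ORG_SPECIFIC = 127
    EXAMPLE_OUI = b"\x00\x00\x00"   # placeholder OUI, not a real assignment
    POLICY_SUBTYPE = 0x01           # hypothetical subType value indicating the policy TLV

    def build_policy_tlv(policy):
        """Encode the strategy: 0 = required bandwidth, 1 = activation bandwidth, 2 = priority preemption."""
        assert policy in (0, 1, 2)
        value = EXAMPLE_OUI + bytes([POLICY_SUBTYPE, policy])
        header = (TLV_TYPE_ORG_SPECIFIC << 9) | len(value)   # 7-bit type, 9-bit length
        return struct.pack("!H", header) + value

    print(build_policy_tlv(1).hex())  # policy field = 1: activation-bandwidth strategy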
  • the process of negotiating the time slot allocation strategy is executed by the resource management layer.
  • the first network device corresponds to the TX policy module 2021 in the resource management layer 202
  • the second network device corresponds to the RX policy module 2022 in the resource management layer 202.
  • the TX strategy module 2021 on the TX side will push the time slot allocation strategy to the RX side based on the LLDP protocol.
  • the RX strategy module 2022 on the RX side is based on the LLDP protocol, receives the time slot allocation strategy pushed by the TX side, and saves the time slot allocation strategy.
  • Implementation mode (2) A consistent time slot allocation strategy is statically configured on the TX side and the RX side.
  • the user triggers a configuration operation on the second network device, and the second network device determines the time slot allocation strategy according to the configuration operation of the user.
• if S304 and S305 are implemented by means of static configuration, this embodiment does not limit the timing of S304 and S305.
  • S304 and S305 can be executed sequentially. For example, S304 can be executed first, and then S305; or S305 can be executed first, and then S304. In other embodiments, S304 and S305 can also be executed in parallel, that is, S304 and S305 can be executed simultaneously.
  • S303 and S305 can be executed sequentially. For example, S303 can be executed first, and then S305; or S305 can be executed first, and then S303. In other embodiments, S303 and S305 can also be executed in parallel, that is, S303 and S305 can be executed simultaneously.
  • the first network device determines the first time slot according to the time slot allocation strategy and the required bandwidth.
  • the first network device and the second network device use the time slot allocation strategy obtained in S304 to determine the time slot for the first service flow, and allocate the determined time slot to the first service flow.
  • the resource management layer 202 saves the priority of the first service flow, the bandwidth requirement of the first service flow, the time slot resource pool and the bandwidth allocation strategy customized by the user.
  • the above is the input data of S306, the resource management layer 202 executes the step of allocating time slots according to the input data, and the resource management layer 202 outputs the time slots corresponding to the service flow.
• compared with the related technology in which the user specifies time slots through commands, this is improved so that the user only specifies the bandwidth requirement.
• the network device manages the time slots according to the user-customized time slot allocation strategy and the required bandwidth, so that the right of time slot allocation is taken back from the user to the network device. Therefore, the complicated operation of configuring time slots by the user in the related technology is eliminated, and the user configuration is simplified.
  • the network device can reallocate time slots according to the time slot allocation strategy, so that the service flow migrates from the original time slots to the re-allocated time slots, so it has the ability to dynamically migrate the time slots.
  • the time slot determined for the first service flow in S306 is called the first time slot as an example for description.
  • the first time slot is a time slot of the PHY link between the first network device and the second network device.
  • the first time slot is one time slot, or the first time slot is a set including multiple time slots, and the number of time slots included in the first time slot is not limited in this embodiment.
  • the network device 101 determines the TS1 of the PHY1 for the client1, and the network device 101 determines the TS2 to TS9 of the PHY1 for the client3.
  • the first time slot is TS1 of PHY1.
  • the first time slot is TS2 to TS9 of PHY1.
  • the first time slot is a time slot on the same PHY link.
  • the first time slot is a time slot on PHY1, or the first time slot is a time slot on PHY2.
  • the first time slot includes time slots respectively located on multiple PHY links.
  • the first time slot includes a time slot on PHY1 and a time slot on PHY2.
  • the first time slot includes TS1 on PHY1 and TS2 on PHY2.
  • the number of time slots on each of the multiple PHY links that the first time slot is distributed on is the same.
  • the first time slot includes N time slots on PHY1 and N time slots on PHY2, and N is a positive integer.
  • the first time slot is distributed in a different number of time slots on each PHY link among the multiple PHY links.
  • the first time slot includes p time slots on PHY1 and q time slots on PHY2, and p and q are positive integers.
• whether the first time slot is distributed on each of the multiple PHY links, and whether the numbers of time slots on the respective PHY links are the same or approximately the same, can be determined according to the adopted time slot allocation strategy. For example, when the time slot allocation strategy based on service flow load sharing in the following optional method 6 is adopted, the first time slot is distributed on each of the multiple PHY links, with the same or approximately the same number of time slots on each PHY link.
• Optional method 1: a time slot allocation strategy based on the required bandwidth of the service flow.
  • the first network device determines the first time slot that meets the required bandwidth from the free time slots of the FlexE group according to the time slot allocation strategy and the required bandwidth.
  • the identifier of the time slot allocation strategy in the first alternative may be 000.
• for example, the first network device obtains the available bandwidth of the FlexE group according to the number of free time slots in the FlexE group and the bandwidth corresponding to one time slot, and determines whether the available bandwidth of the FlexE group is greater than or equal to the required bandwidth. If the available bandwidth of the FlexE group is greater than or equal to the required bandwidth, it is determined that the idle time slots meet the required bandwidth.
  • the available bandwidth of the FlexE group is the product of the number of idle time slots and the bandwidth corresponding to one time slot.
  • the bandwidth corresponding to one time slot is 5G
  • the FlexE group currently has 3 free time slots
• the first network device determines that the 15G available bandwidth of the FlexE group is greater than the 10G required bandwidth, and, according to the time slot allocation strategy and the 10G required bandwidth, determines 2 idle time slots from the 3 free time slots; the determined 2 idle time slots are the first time slot.
• because the time slot allocation strategy is used to automatically determine a time slot that meets the required bandwidth and that time slot is allocated to the service flow, in the process of transmitting the service flow, the service flow is transmitted through a time slot that meets the required bandwidth, thereby guaranteeing the bandwidth of the service flow. Since the bandwidth of the service flow is guaranteed, this helps the service meet its SLA requirements. In particular, when the required bandwidth is specified by the user, allocating the time slot through optional method 1 makes the bandwidth of the service flow meet the user's expectation of the bandwidth.
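• For illustration only, the following Python sketch implements the required-bandwidth check of optional method 1, assuming 5G time slots and representing idle time slots as (PHY, slot) pairs; these data structures and the function name are assumptions of the sketch.

    SLOT_BW_G = 5

    def allocate_by_required_bw(idle_slots, required_bw_g):
        """Return the first time slot (a list of idle slots) if the free bandwidth suffices, else None."""
        if len(idle_slots) * SLOT_BW_G < required_bw_g:
            return None                             # idle slots do not meet the required bandwidth
        needed = -(-required_bw_g // SLOT_BW_G)     # ceiling division
        return idle_slots[:needed]

    # Example from the text: 3 idle slots (15G available) and a 10G requirement -> 2 slots.
    print(allocate_by_required_bw([("PHY1", 3), ("PHY1", 4), ("PHY2", 0)], 10))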
• Optional method 2: a time slot allocation strategy based on the activation bandwidth of the service flow.
• this time slot allocation strategy considers not only the required bandwidth but also the activation bandwidth. Specifically, the first network device determines whether the free time slots of the FlexE group meet the required bandwidth. If the free time slots do not meet the required bandwidth, the first network device determines, from the free time slots, the first time slot that satisfies the activation bandwidth according to the time slot allocation strategy and the activation bandwidth.
  • the identifier of the time slot allocation strategy in Option 2 may be 001.
  • the activation bandwidth is the minimum required bandwidth for the first network device to start transmitting the first service flow.
• with the activation bandwidth, the physical interface of the first network device (such as the FlexE physical interface) can be in the up state, and the transmission of the service flow can be started.
• the activation bandwidth is smaller than the required bandwidth. For example, the required bandwidth of client1 is 10G, the activation bandwidth is 5G, and the bandwidth corresponding to one time slot is 5G.
• if, for example, the FlexE group currently has only 1 free time slot, the first network device determines this 1 free time slot according to the time slot allocation strategy, so as to start transmitting client1 with the 5G activation bandwidth. In this example, the determined 1 free time slot is the first time slot.
  • the network device may not be able to find free time slots that meet the required bandwidth.
• in this case, a time slot that meets the activation bandwidth is automatically determined, and the time slot that meets the activation bandwidth is allocated to the service flow.
• the network device can then start transmitting the service flow in the time slot corresponding to the activation bandwidth, thereby ensuring the connectivity of the service flow, enabling the service flow to be transmitted, avoiding interruption of the service flow, and ensuring, on a best-effort basis, that as many service flows as possible are started.
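• For illustration only, the following Python sketch adds the activation-bandwidth fallback of optional method 2 on top of a simple slot picker; allocate_slots is a hypothetical helper, and the 5G slot size is an assumption of the example.

    SLOT_BW_G = 5

    def allocate_slots(idle_slots, bandwidth_g):
        needed = -(-bandwidth_g // SLOT_BW_G)                 # ceiling division
        return idle_slots[:needed] if len(idle_slots) >= needed else None

    def allocate_with_activation_bw(idle_slots, required_bw_g, activation_bw_g):
        slots = allocate_slots(idle_slots, required_bw_g)
        if slots is not None:
            return slots, "required bandwidth satisfied"
        slots = allocate_slots(idle_slots, activation_bw_g)   # start with the activation bandwidth only
        if slots is not None:
            return slots, "started with activation bandwidth"
        return None, "service flow cannot be started"

    # Example from the text: required 10G, activation 5G, only one 5G time slot free.
    print(allocate_with_activation_bw([("PHY1", 7)], 10, 5))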
• Optional method 3: a time slot allocation strategy based on service flow priority preemption.
• this time slot allocation strategy considers not only the required bandwidth but also the priority of the service flow. Specifically, taking the first service flow as a high-priority service flow as an example, the first network device determines whether the free time slots of the FlexE group meet the required bandwidth. If the free time slots do not meet the required bandwidth, the first network device determines the first time slot from the time slots occupied by the second service flow according to the time slot allocation strategy and the priority of the first service flow.
• the identifier of the time slot allocation strategy in optional method 3 may be 002. The priority of the second service flow is lower than the priority of the first service flow: between the first service flow and the second service flow, the first service flow is the high-priority service flow and the second service flow is the low-priority service flow.
• how to determine the priority of a service flow includes multiple implementation methods. The following uses method A and method B as examples.
  • Method A the priority of the service flow is specified by the user.
  • the configuration information of the first service flow obtained by performing S302 includes the priority of the first service flow.
  • the network device obtains the priority of the first service flow from the configuration information of the first service flow.
  • Method B Determine the priority of the service flow according to the ID of the service flow.
  • the first network device obtains the priority of the first service flow according to the ID of the first service flow.
  • the priority of the service flow is negatively related to the ID of the service flow, that is, the smaller the ID of the service flow, the higher the priority of the service flow. For example, if the ID of the first service flow is smaller than the ID of the second service flow, the priority of the first service flow is higher than the priority of the second service flow.
  • the first network device judges whether the configuration information of the first service flow includes the priority of the first service flow, and if the configuration information of the first service flow includes the priority of the first service flow, the above method A is selected. If the configuration information of the first service flow does not include the priority of the first service flow, the above method B is selected.
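• For illustration only, the following Python sketch combines method A and method B: it returns the configured priority when present and otherwise falls back to the service flow ID, with a smaller value meaning a higher priority; the dictionary field names are assumptions of the sketch.

    def flow_priority(config):
        """Return a comparable priority value; a smaller value means a higher priority."""
        if "priority" in config:           # method A: the priority is specified by the user
            return config["priority"]
        return config["client_id"]         # method B: a smaller flow ID means a higher priority

    flows = [{"client_id": 3}, {"client_id": 2}, {"client_id": 1}]
    print(sorted(flows, key=flow_priority))  # client1 ranks first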
• Optional method 4: a time slot allocation strategy based on sequential allocation. This time slot allocation strategy considers not only the required bandwidth but also the order of physical interface numbers and the order of time slot numbers.
  • the PHY link with a smaller physical interface number has a higher priority for resource allocation.
  • a time slot with a smaller time slot number has a higher priority for resource allocation.
• for example, the first network device determines the first PHY link with the smallest physical interface number from the available PHY links in the FlexE group according to the time slot allocation strategy; according to the required bandwidth, the first network device determines the first time slot with the smallest time slot number from the idle time slots of the first PHY link.
  • the first PHY link is the available PHY link with the smallest physical interface number among the available PHY links in the FlexE group. For example, if there are 3 PHY links in the FlexE group, the 3 PHY links are respectively PHY1, PHY2, and PHY3, where PHY1 is currently unavailable, and PHY2 and PHY3 are currently available, then the first PHY link is PHY2.
  • the first time slot is the idle time slot with the smallest time slot number among the idle time slots of the first PHY link.
• for example, the first PHY link includes 10 time slots, namely TS1 to TS10, where TS1 and TS2 are unavailable and the 8 time slots from TS3 to TS10 are idle time slots; then the first time slot is TS3.
• in other words, on the PHY link with the smallest physical interface number among all available PHY links, the first network device searches for an idle time slot starting from the time slot corresponding to time slot number 0, in ascending order of time slot number, until a free time slot is found; the found free time slot is the first time slot.
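• For illustration only, the following Python sketch implements the sequential rule of optional method 4: the available PHY link with the smallest physical interface number is chosen first, then its idle time slot with the smallest time slot number; the data structures are assumptions of the sketch.

    def allocate_sequential(available_phys, idle_slots_per_phy):
        """available_phys: PHY numbers; idle_slots_per_phy: PHY number -> list of idle slot numbers."""
        for phy in sorted(available_phys):          # smallest physical interface number first
            idle = idle_slots_per_phy.get(phy, [])
            if idle:
                return phy, min(idle)               # smallest time slot number on that link
        return None

    # Example from the text: PHY1 is unavailable; on PHY2 the first two slots are busy.
    print(allocate_sequential([2, 3], {2: [3, 4, 5], 3: [0, 1]}))  # -> (2, 3)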
• Optional method 5: a time slot allocation strategy based on PHY link load sharing.
  • the time slot allocation strategy not only considers the required bandwidth, but also considers the load of the PHY link.
  • the load of the different PHY links in the FlexE group is equal or approximately equal, so that the load of the different PHY links in the FlexE group is as balanced as possible.
• for example, the first network device determines the second PHY link with the smallest load from the available PHY links in the FlexE group according to the time slot allocation strategy; according to the required bandwidth, the first network device determines the first time slot with the smallest time slot number from the idle time slots of the second PHY link.
  • the second PHY link is an available PHY link with the smallest load among the available PHY links of the FlexE group. For example, if there are 2 PHY links in the FlexE group, the 2 PHY links are PHY1 and PHY2, and PHY1 and PHY2 are both available PHY links. If the load of PHY1 is less than the load of PHY2, the second PHY link is PHY1.
• optionally, the load of a PHY link is determined according to the number of time slots on the PHY link that already carry service flows. In this case, the process of determining the second PHY link is, for example, that the first network device obtains the number of time slots carrying service flows on each PHY link in the FlexE group and determines the PHY link with the smallest number of such time slots; the PHY link with the smallest number of time slots carrying service flows is the second PHY link.
  • the load of the PHY link is determined according to the number of idle time slots in the PHY link.
• in this case, the process of determining the second PHY link is, for example, that the first network device obtains the number of free time slots of each PHY link in the FlexE group and determines the PHY link with the largest number of free time slots; the PHY link with the largest number of free time slots is the second PHY link.
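• For illustration only, the following Python sketch picks the least-loaded PHY link of optional method 5, measuring load as the number of time slots already carrying service flows, and then takes the lowest-numbered idle slot on that link; the data structures are assumptions of the sketch.

    def allocate_least_loaded(idle_slots_per_phy, busy_slots_per_phy):
        candidates = [phy for phy, idle in idle_slots_per_phy.items() if idle]
        if not candidates:
            return None
        phy = min(candidates, key=lambda p: len(busy_slots_per_phy.get(p, [])))
        return phy, min(idle_slots_per_phy[phy])    # smallest time slot number on the least-loaded link

    idle = {1: [5, 6, 7, 8, 9], 2: [2, 3, 4, 5, 6, 7, 8, 9]}
    busy = {1: [0, 1, 2, 3, 4], 2: [0, 1]}
    print(allocate_least_loaded(idle, busy))  # PHY2 carries fewer service flows -> (2, 2)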
• Optional method 6: a time slot allocation strategy based on service flow load sharing. This strategy considers not only the required bandwidth but also how to share the required bandwidth of the same service flow across as many PHY links as possible, using as many PHY links as possible to transmit the same service flow.
• for example, the first network device determines the first time slot from the idle time slots of multiple PHY links according to the time slot allocation strategy and the required bandwidth, where the first time slot is evenly distributed among the multiple PHY links. Specifically, the first time slot includes multiple time slots respectively located on the multiple PHY links, the numbers of time slots determined on different PHY links are equal or approximately equal, and the multiple PHY links jointly carry the first service flow in a load-sharing manner. For example, if there are currently 4 available PHY links, the first network device determines 1 time slot from each of the 4 available PHY links, and the determined 4 time slots are the first time slot.
• the determined 4 time slots are evenly distributed among the 4 available PHY links, so that the service flow is shared among the 4 PHY links. If there are currently only 2 available PHY links, the first network device determines 2 time slots from each of the 2 available PHY links, and the determined 4 time slots are the first time slot. In this way, the same service flow is transmitted on as many different PHY links as possible.
• for example, the first network device obtains the bandwidth that needs to be allocated on each available PHY link according to the required bandwidth of the service flow and the number of available PHY links in the FlexE group.
• according to the bandwidth that needs to be allocated on each available PHY link, time slots are determined from each available PHY link, thereby determining the first time slot.
• the bandwidth that needs to be allocated on each available PHY link is, for example, the ratio between the required bandwidth and the number of available PHY links.
  • the required bandwidth of client1 is 40G
  • the available PHY links in the FlexE group are PHY1, PHY2, PHY3, and PHY4, and the bandwidth corresponding to one time slot is 5G.
• in this example, 10G needs to be allocated on each available PHY link, that is, 2 time slots on each of PHY1 to PHY4. On the one hand, if one of the N PHY links carrying the first service flow fails, the remaining (N-1) PHY links still transmit the first service flow, thereby ensuring that the first service flow retains (N-1)/N of its available bandwidth, so it can quickly recover from the failure without human intervention. On the other hand, this reduces the pressure on a single PHY link and realizes the load sharing function.
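• For illustration only, the following Python sketch spreads the required bandwidth of one service flow evenly over the available PHY links as in optional method 6, assuming 5G time slots; the data structures and the fallback behaviour are assumptions of the sketch.

    SLOT_BW_G = 5

    def allocate_load_shared(idle_slots_per_phy, required_bw_g):
        phys = sorted(idle_slots_per_phy)
        # same number of slots on every available PHY link (ceiling division)
        per_phy_slots = -(-required_bw_g // (SLOT_BW_G * len(phys)))
        chosen = []
        for phy in phys:
            idle = sorted(idle_slots_per_phy[phy])
            if len(idle) < per_phy_slots:
                return None                         # not enough idle slots on this link
            chosen += [(phy, ts) for ts in idle[:per_phy_slots]]
        return chosen

    # Example from the text: client1 needs 40G over PHY1..PHY4 -> 2 slots on each PHY link.
    idle = {1: list(range(10)), 2: list(range(10)), 3: list(range(10)), 4: list(range(10))}
    print(allocate_load_shared(idle, 40))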
  • the specific type of time slot allocation strategy is customized by the user.
  • which of the above-mentioned optional methods 1 to 6 is used by the network device to allocate time slots is customized by the user.
  • the above-mentioned optional method 1 to optional method 6 are mapped to multiple options, and each option corresponds to one or more optional methods.
  • the foregoing optional method 4 is mapped to the "sequential allocation" option
  • the foregoing optional method 5 is mapped to the "PHY link-based load sharing” option
  • the foregoing optional method 6 is mapped to the "service flow based load sharing” option.
• the options mapped from the above optional method 1 to optional method 6 are presented to the user through an interface.
• when the user triggers a selection operation on the option corresponding to the desired optional method, the first network device allocates time slots according to the optional method corresponding to that option.
  • users are provided with various selectable and specific types of time slot allocation strategies, and the first network device will allocate time slots according to the time slot allocation strategy customized by the user, thereby meeting the user's customized requirements.
  • optional manner 1 to optional manner 6 may be combined in any manner.
  • only one optional manner among the six optional manners may be executed, or two or more optional manners among the six optional manners may be executed.
  • the logical relationship between the different optional methods can be an AND relationship or an OR relationship. The following is an example of how to combine the different options.
• for example, the first network device determines, from the time slots already occupied by the second service flow, the first time slot that satisfies the required bandwidth according to the time slot allocation strategy and the priority of the first service flow.
• for another example, the first network device determines, from the time slots already occupied by the second service flow, the first time slot that satisfies the activation bandwidth according to the time slot allocation strategy and the priority of the first service flow.
• for example, the first network device determines the first PHY link with the smallest physical interface number from the available PHY links in the FlexE group according to the time slot allocation strategy; according to the required bandwidth, the first network device determines time slots from the idle time slots of the first PHY link in ascending order of time slot number until the determined time slots meet the required bandwidth.
• for another example, the first network device determines the first PHY link with the smallest physical interface number from the available PHY links in the FlexE group according to the time slot allocation strategy; according to the activation bandwidth, the first network device determines time slots from the idle time slots of the first PHY link in ascending order of time slot number until the determined time slots meet the activation bandwidth.
  • optional manner 1 to optional manner 6 are only exemplary descriptions, and do not represent a mandatory implementation manner for allocating time slots according to the time slot allocation strategy and the required bandwidth.
• other implementation methods can also be used to realize the function of allocating time slots according to the time slot allocation strategy and the required bandwidth. These other methods are specific cases covered by S306 and should also fall within the protection scope of the embodiments of this application.
  • the second network device determines the first time slot according to the time slot allocation strategy and the required bandwidth.
  • the time slot determined by the second network device according to the time slot allocation strategy and the required bandwidth is the same as the time slot determined by the first network device according to the time slot allocation strategy and the required bandwidth, and is the first time slot.
  • the optional manner adopted by the second network device is the same as the optional manner adopted by the first network device.
  • S307 also includes the following optional manner 1 to optional manner 6. For the technical details of S307, refer to S306.
  • Option 1 If the idle time slot meets the required bandwidth, the second network device determines the first time slot that meets the required bandwidth from the idle time slot according to the time slot allocation strategy and the required bandwidth.
  • Option 2 If the idle time slot does not meet the required bandwidth, the second network device determines the first time slot that meets the activated bandwidth from the idle time slot according to the time slot allocation strategy and the activation bandwidth.
  • the second network device determines the first time slot from the time slots already occupied by the second service flow according to the time slot allocation strategy and the priority of the first service flow.
  • the priority of the second service flow is lower than the priority of the first service flow.
  • the second network device determines the first PHY link with the smallest physical interface number from the available PHY links in the FlexE group according to the time slot allocation strategy; according to the required bandwidth, the second network device determines the first time slot with the smallest time slot number from the idle time slots of the first PHY link.
  • the second network device determines the second PHY link with the smallest load from the available PHY links in the FlexE group according to the time slot allocation strategy; according to the required bandwidth, the second network device determines the first time slot with the smallest time slot number from the idle time slots of the second PHY link.
  • the second network device determines the first time slot from the idle time slots of multiple PHY links according to the time slot allocation strategy and required bandwidth.
  • the first time slots are evenly distributed among the multiple PHY links.
  • S308 The first network device and the second network device transmit the first service stream according to the first time slot.
  • S308 includes the following S308A and S308B.
  • S308A: The first network device sends the first service flow to the second network device according to the first time slot.
  • S308B: The second network device receives the first service flow from the first network device according to the first time slot.
  • the first service flow is client1, and the first time slot is TS2 on PHY1 and TS1 on PHY2.
  • The process of transmitting the first service flow according to the first time slot includes the following: client1 first undergoes service processing at the TX end (the first network device).
  • Specifically, client1 first undergoes quality of service (QoS) control in the traffic management (TM) module of the first network device and is then encapsulated by the MAC layer module of the first network device; the resulting service data is sent to the shim of the first network device. The shim of the first network device then slices and encapsulates the received service data, that is, encapsulates the service data into TS2 on PHY1 and TS1 on PHY2.
  • PHY1 and PHY2 in the FlexE group can transmit the service data of client1 to the second network device through the optical module connected to the RX end (the second network device).
  • The second network device reassembles the service data of client1 transmitted on PHY1 and PHY2 back into client1 by performing the inverse of the processing performed by the first network device.
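  • As an illustration of the slicing step only, the sketch below distributes client1's service data round-robin over its assigned slots TS2 on PHY1 and TS1 on PHY2; 66B block framing, overhead, and the real calendar mechanism are omitted, and all names are assumptions rather than the embodiment's implementation.

```python
from collections import defaultdict
from itertools import cycle

def slice_into_slots(payload_blocks, assigned_slots):
    """Round-robin the client's payload blocks over its calendar slots.
    assigned_slots is a list of (phy, slot) pairs, e.g. [("PHY1", 2), ("PHY2", 1)]."""
    per_slot = defaultdict(list)
    for block, slot in zip(payload_blocks, cycle(assigned_slots)):
        per_slot[slot].append(block)
    return per_slot

# client1's service data as numbered placeholder blocks
blocks = [f"blk{i}" for i in range(6)]
calendar = slice_into_slots(blocks, [("PHY1", 2), ("PHY2", 1)])
for slot, carried in calendar.items():
    # blocks 0, 2, 4 ride TS2 of PHY1; blocks 1, 3, 5 ride TS1 of PHY2,
    # and the RX shim reassembles them in the same order to recover client1.
    print(slot, carried)
```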
  • the first network device determines the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow.
  • The resource management layer 202 maintains the user-customized time slot allocation strategy, the service flow priorities, the available physical interfaces in the FlexE group (that is, the physical interfaces in the active state), and the TX time slot resource pool of the available physical interfaces in the FlexE group, and, based on the time slot allocation strategy, re-arranges the available time slots in the TX direction on the FlexE group.
  • The first network device may determine that the PHY link where the first time slot is located has failed. There are multiple implementations for determining that a PHY link has failed.
  • the first network device actively detects that the PHY link fails. For example, the first network device detects the state of the physical interface, and if the physical interface is in a down state, the first network device determines that the PHY link has failed.
  • When the PHY link where the first time slot is located fails, the first network device first removes the failed PHY link from the FlexE group, and then re-allocates time slots according to the FlexE group from which the PHY link has been removed and the time slot allocation strategy. Specifically, taking the FlexE groups before and after the removal as the first FlexE group and the second FlexE group respectively as an example, the first network device and the second network device originally transmitted the service flow through the first FlexE group. When the PHY link where the first time slot is located fails, the first network device deletes that PHY link from the first FlexE group to obtain the second FlexE group, which does not include the PHY link where the first time slot is located. In S309, the first network device determines the second time slot from the second FlexE group according to the time slot allocation strategy and the required bandwidth of the first service flow.
  • The process of deleting the failed PHY link from the FlexE group is started quickly, so that the failed PHY link is automatically removed from the FlexE group and the remaining PHY links in the FlexE group are all in the active state. This ensures that the FlexE group remains available and avoids the entire FlexE group becoming unavailable after a PHY link fails; a minimal sketch of this group handling is given below.
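  • The group handling in S309 can be sketched as follows, under the same assumptions as the allocation sketch above (5G slots, illustrative names): the failed PHY is deleted from the group and every client is re-arranged over the remaining PHYs only.

```python
def on_phy_failure(flexe_group, failed_phy_no, clients):
    """flexe_group: dict phy_number -> {slot_number: client or None}.
    clients: dict client -> number of slots it needs for its required bandwidth.
    Deletes the failed PHY and replays the allocation over what remains."""
    # First FlexE group -> second FlexE group: drop the failed PHY link.
    second_group = {no: dict(slots) for no, slots in flexe_group.items()
                    if no != failed_phy_no}
    # Clear the surviving PHYs and re-arrange every client from scratch,
    # smallest PHY number first, ascending slot number.
    for slots in second_group.values():
        for s in slots:
            slots[s] = None
    free = [(no, s) for no in sorted(second_group) for s in sorted(second_group[no])]
    layout = {}
    for client, need in clients.items():
        if len(free) < need:
            layout[client] = None        # not enough bandwidth left for this client
            continue
        take, free = free[:need], free[need:]
        for no, s in take:
            second_group[no][s] = client
        layout[client] = take
    return second_group, layout
```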
  • In this way, the first network device can dynamically migrate the first service flow from the original time slot to the newly determined time slot according to the time slot allocation strategy, thereby redeploying time slots for the first service flow and realizing time slot rearrangement.
  • the time slot allocation strategy also serves as a dynamic migration strategy.
  • the resource management layer 202 dynamically migrates user services according to the time slot allocation strategy.
  • the method 300 refers to the re-determined time slot as the second time slot.
  • the second time slot is different from the first time slot.
  • the second time slot and the first time slot are located on different PHY links.
  • the second time slot is one time slot, or the second time slot is a set including multiple time slots.
  • the second time slot is a time slot on the same PHY link.
  • the second time slot includes time slots respectively located on multiple PHY links.
  • the number of time slots distributed on each PHY link of the multiple PHY links is the same.
  • the second time slot is distributed in a different number of time slots on each PHY link in the multiple PHY links.
  • Whether the second time slot is distributed across multiple PHY links, and whether the number of time slots on each of those PHY links is the same or approximately the same, can be determined according to the adopted time slot allocation strategy.
  • S309 uses optional manner 1 to optional manner 6 as examples. It should be understood that optional manner 1 to optional manner 6 in S309 correspond to optional manner 1 to optional manner 6 in S306, and for the technical details of an optional manner in S309, refer to the corresponding optional manner in S306.
  • Option 1 If the idle time slot meets the required bandwidth, the first network device determines the second time slot that meets the required bandwidth from the idle time slot according to the time slot allocation strategy and the required bandwidth.
  • The effects achieved include: since a time slot that meets the required bandwidth is re-determined and used to transmit the service flow, the bandwidth of the service flow still meets the required bandwidth after the service flow is migrated from its original time slot to the re-determined time slot. This does its best to keep the maximum number of service flows running normally: after the PHY link fails, the bandwidth of the service flow continues to be guaranteed, which helps to guarantee the service-level agreement (SLA) of the service.
  • the time slot can be re-allocated through the optional method 1, so that the bandwidth of the service stream after the PHY fails can still meet the user's expectation of the bandwidth.
  • Option 2 If the idle time slot does not meet the required bandwidth, the first network device determines a second time slot that meets the activated bandwidth from the idle time slot according to the time slot allocation strategy and the activation bandwidth.
  • The effects achieved include: when the PHY link fails and there are insufficient idle time slots, a time slot that satisfies the activation bandwidth is re-determined and used to transmit the service flow, so that the service flow remains in a connected state and can still be delivered to the opposite end. This avoids interruption of the service flow after the PHY link fails and tries to ensure that the maximum number of service flows continue to be transmitted after the failure.
  • the first network device determines the second time slot from the time slots already occupied by the second service flow according to the time slot allocation strategy and the priority of the first service flow.
  • The effects achieved include: when the PHY link fails and there are insufficient idle time slots, the time slots are re-allocated according to the priority of the service flows, so that time slots originally occupied by lower-priority service flows are re-allocated to higher-priority service flows. The higher-priority service flow therefore has the right to compete for time slots first and can be transmitted on the time slots originally occupied by the lower-priority service flow, thereby avoiding disconnection of high-priority service flows and ensuring their rapid recovery.
  • the first network device determines the first PHY link with the smallest physical interface number from the available PHY links in the FlexE group according to the time slot allocation strategy; according to the required bandwidth, the first network device determines the second time slot with the smallest time slot number from the idle time slots of the first PHY link.
  • Option 4 can be combined with option 1 and option 3 above; that is, in the case of a PHY link failure, during the time slot migration process, the corresponding time slots are allocated sequentially on the available PHY links based on the priority of the service flow and the required bandwidth of the service flow.
  • Option 4 can also be combined with option 2 and option 3 above; that is, in the case of a PHY link failure, during the time slot migration process, the corresponding time slots are allocated sequentially on the available PHY links based on the priority of the service flow and the activation bandwidth of the service flow.
  • the first network device determines the second PHY link with the smallest load from the available PHY links in the FlexE group according to the time slot allocation strategy; according to the required bandwidth, the first network device determines the second time slot with the smallest time slot number from the idle time slots of the second PHY link.
  • the first network device determines the second time slot from the idle time slots of multiple PHY links according to the time slot allocation strategy and the required bandwidth, and the second time slots are evenly distributed among the multiple PHY links, as sketched below.
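  • A minimal sketch of the even-distribution manner, assuming 5 Gbps slots and an illustrative helper name: the needed slots are taken one PHY at a time so that the slot counts per PHY differ by at most one.

```python
import math

SLOT_GBPS = 5  # assumed slot granularity

def allocate_evenly(idle_slots_per_phy, required_gbps):
    """idle_slots_per_phy: dict phy_number -> sorted list of idle slot numbers.
    Picks ceil(required/5G) slots round-robin across the PHYs."""
    need = math.ceil(required_gbps / SLOT_GBPS)
    pools = {phy: list(slots) for phy, slots in idle_slots_per_phy.items()}
    if sum(len(s) for s in pools.values()) < need:
        return None                      # cannot satisfy the required bandwidth
    chosen = []
    while len(chosen) < need:
        for phy in sorted(pools):        # one slot per PHY per round
            if pools[phy] and len(chosen) < need:
                chosen.append((phy, pools[phy].pop(0)))
    return chosen

# 20G over two PHYs -> two slots on each PHY
print(allocate_evenly({1: [0, 1, 2, 3], 2: [0, 1, 2, 3]}, 20))
```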
  • the time slot allocation strategy used when the PHY link fails and the time slot allocation strategy used when the PHY link is normal may be completely the same, or there may be slight differences.
  • The optional method selected by the first network device from optional method 1 to optional method 6 when performing S306 may be the same as, or different from, the optional method it selects from optional method 1 to optional method 6 in S309.
  • For example, optional method 1 is used in S306, and optional method 2 is used in S309.
  • Likewise, the optional method selected by the second network device from optional method 1 to optional method 6 in S307 may be the same as, or different from, the optional method it selects from optional method 1 to optional method 6 in S310.
  • In other words, it is ensured that the optional method selected by the first network device in S306 is consistent with the optional method selected by the second network device in S307, and that the optional method selected by the first network device in S309 is consistent with the optional method selected by the second network device in S310. However, the optional method selected by the first network device in S306 is not required to be the same as the optional method it re-selects in S309, and the optional method selected by the second network device in S307 is not required to be the same as the optional method it re-selects in S310.
  • the method of obtaining the time slot allocation strategy used when the PHY link fails and the method of obtaining the time slot allocation strategy used when the PHY link is normal may be completely the same, or there may be slight differences.
  • the time slot allocation strategy used when the PHY link fails is pushed from the TX side to the RX side, and the time slot allocation strategy used when the PHY link is normal is statically defined by the user.
  • the specific time slot allocation strategy to be executed when the PHY link is normal and when the PHY link fails is customized by the user.
  • the time slot allocation strategy used when the PHY link is normal is called a bandwidth allocation strategy
  • the time slot allocation strategy used in the case of a PHY link failure is called a dynamic migration strategy.
  • the bandwidth allocation strategy includes at least one of optional manner 1 to optional manner 6 in S306, and the specific optional manner used by the bandwidth allocation strategy is determined by the configuration operation of the user.
  • the dynamic migration strategy includes at least one of the optional manner 1 to the optional manner 6 in S309, and the specific optional manner used by the dynamic migration strategy is determined by the configuration operation of the user.
  • Time slot migration strategy customization is provided to users, which addresses the FlexE operation and maintenance problem: with different migration strategies, fast time slot migration is performed according to user expectations. The capability of dynamic FlexE time slot migration is thus introduced, and users can customize the migration strategy.
  • After the first network device determines the second time slot, it can forcibly refresh the current customer schedule (client calendar) according to the second time slot, that is, update the time slot corresponding to the first service flow in the customer schedule from the first time slot to the second time slot.
  • Forced refresh refers to a refresh method that does not go through a negotiation process of request and response.
  • the customer schedule is used to store the mapping relationship between the service flow and the time slot.
  • the first network device is the TX end of the service flow, and the customer schedule of the first network device is also referred to as the TX current table.
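  • A forced refresh can be pictured as a plain in-place update of the flow-to-slot mapping, with no request or response exchanged; the data structure and example slot values below are assumptions for illustration, not the embodiment's data model.

```python
# TX current table: mapping from service flow to its calendar slots (phy, slot)
tx_current_table = {"first service flow": [("PHY1", 2), ("PHY2", 1)]}  # first time slot

def force_refresh(current_table, flow, new_slots):
    """Overwrite the flow's slots directly; no negotiation request or response."""
    current_table[flow] = new_slots
    return current_table

# After the failure, the first network device has re-determined the second time slot:
force_refresh(tx_current_table, "first service flow", [("PHY2", 3)])
```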
  • the second network device determines the second time slot according to the time slot allocation strategy and the required bandwidth of the first service flow.
  • the second network device After the second network device determines that the PHY link fails, it will also re-determine a new time slot.
  • The time slot allocation strategy used by the second network device is the same as that used by the first network device, and the required bandwidth used by the second network device is consistent with the required bandwidth used by the first network device. Therefore, the new time slot determined by the second network device is the same as the new time slot determined by the first network device, both being the second time slot. For example, please refer to Figure 8.
  • The RX policy module 2022 of the resource management layer maintains the time slot migration strategy pushed by the peer network element, the priorities of the service flows, the available physical interfaces of the FlexE group (that is, the physical interfaces in the active state), and the RX time slot resource pool of the available physical interfaces of the FlexE group, and, based on the time slot allocation strategy, re-arranges the available time slots in the RX direction on the FlexE group.
  • the achieved effects include at least:
  • the negotiation process includes the process in which the TX end sends a negotiation request, the RX end receives the negotiation request and returns a negotiation response to the TX end, and the TX end receives the negotiation response.
  • If, after a failure, the transmitting and receiving ends first negotiate the time slot to which the service flow is to be migrated and then switch the service flow to the negotiated time slot, the switching time is on the order of hundreds of milliseconds.
  • In this embodiment, by contrast, the transmitting and receiving ends each re-determine the time slot independently and migrate the service flow to the re-determined time slot.
  • The delay caused by the negotiation process is therefore eliminated, the interruption time can be controlled within 50 milliseconds, and the service flow can be quickly restored within 50 milliseconds, thus greatly improving the speed of recovery from failure.
  • The transmitting and receiving ends re-determine time slots according to the same time slot allocation strategy and the same required bandwidth, so the new time slots they determine are the same. After the time slot migration, the time slot arrangement at the two ends is consistent, and they can transmit the service flow normally according to that consistent arrangement. This realizes the protection switching function among different PHY links in the FlexE group: the service flow on the failed PHY link is switched to a normal PHY link, avoiding interruption of service flow transmission.
  • The second network device also removes the failed PHY link from the FlexE group. Specifically, taking the FlexE groups before and after the removal as the first FlexE group and the second FlexE group respectively as an example, the first network device and the second network device originally transmitted the service flow through the first FlexE group. When the PHY link where the first time slot is located fails, the second network device deletes that PHY link from the first FlexE group to obtain the second FlexE group, which does not include the PHY link where the first time slot is located. In S310, the second network device determines the second time slot from the second FlexE group according to the time slot allocation strategy and the required bandwidth of the first service flow.
  • The RX end (the second network device) and the TX end (the first network device) execute the PHY-link deletion process synchronously, which ensures the consistency of the re-determined time slots and prevents the entire FlexE group to which the failed PHY link belongs from becoming unavailable after the failure.
  • the second network device can forcibly refresh the current customer schedule according to the second time slot, and update the time slot corresponding to the first service flow in the customer schedule from the first time slot to the second time slot.
  • the customer schedule is used to store the mapping relationship between the service flow and the time slot.
  • the second network device is the RX end of the service flow, and the customer schedule of the second network device is also referred to as the RX current table.
  • the first network device and the second network device transmit the first service flow according to the second time slot.
  • S311 includes the following S311A and S311B.
  • S311A: The first network device sends the first service flow to the second network device according to the second time slot.
  • S311B: The second network device receives the first service flow from the first network device according to the second time slot.
  • the service flow can be switched from the failed PHY link to other PHY links in the FlexE group, thereby realizing mutual protection between different PHY links in the FlexE group .
  • a 1:1 service switching can be realized by using the time slot allocation strategy.
  • the PHY link is deployed as 1:1 redundancy
  • the FlexE group is configured with a bandwidth of 100G
  • all clients have the same priority.
  • the required bandwidth of client1 is 5G
  • the required bandwidth of client2 is 5G
  • the required bandwidth of client3 is 40G.
  • time slot 1 of PHY1 will be allocated to client1
  • time slot 1 of PHY2 will be allocated to client2
  • time slot 2 to time slot 9 of PHY1 will be allocated to client3.
  • After PHY1 fails, time slot 1 of PHY2 will be re-allocated to client1, and time slot 2 of PHY2 will be re-allocated to client2.
  • Time slot 2 to time slot 9 of PHY1 are migrated to time slot 3 to time slot 10 of PHY2. The service flows on the faulty PHY1 are therefore switched to PHY2, realizing 1:1 service switching and 1:1 protection between different PHYs in the FlexE group, as reproduced in the sketch below.
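  • The 1:1 example can be reproduced with a toy re-arrangement routine; 5 Gbps slots, 20 slots per PHY, and the function name are assumptions made only for illustration.

```python
import math

SLOT_GBPS = 5               # assumed slot granularity
SLOTS_PER_PHY = 20          # assumed: each PHY carved into 20 x 5G slots

def rearrange(available_phys, demands):
    """Replay the allocation strategy on the PHYs that survive the failure:
    ascending PHY order, ascending slot number, until each client's
    required bandwidth is met."""
    free = [(phy, ts) for phy in available_phys for ts in range(1, SLOTS_PER_PHY + 1)]
    layout = {}
    for client, gbps in demands:
        need = math.ceil(gbps / SLOT_GBPS)
        layout[client], free = free[:need], free[need:]
    return layout

# Before the failure (from the example): client1 -> TS1 of PHY1,
# client2 -> TS1 of PHY2, client3 -> TS2..TS9 of PHY1.
# After PHY1 fails, only PHY2 remains and both ends replay the strategy:
print(rearrange(["PHY2"], [("client1", 5), ("client2", 5), ("client3", 40)]))
# client1 -> TS1 of PHY2, client2 -> TS2 of PHY2, client3 -> TS3..TS10 of PHY2
```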
  • the use of a time slot allocation strategy can achieve N:1 protection between different PHY links in the FlexE group.
  • the FlexE group includes N primary PHY links and 1 backup PHY link, and the priority of each service flow is configured.
  • When the N primary PHY links are in a normal state, low-priority service flows are transmitted on the standby PHY link, and high-priority service flows are transmitted on the primary PHY links.
  • When a primary PHY link fails, the RX side and the TX side execute the time slot allocation strategy, determine time slots on the standby PHY link according to the priority of the service flows, and switch the service flow from the failed primary PHY link to the standby PHY link, thereby preempting the time slots originally occupied by low-priority service flows and completing N:1 protection of the PHY links; a compact sketch follows.
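  • Under the same assumptions (20 slots on the standby PHY, illustrative names), the N:1 behaviour can be sketched as the failed link's high-priority flows preempting the lowest-priority occupants of the standby PHY.

```python
def n_to_1_switch(standby_slots, failed_flows):
    """standby_slots: dict slot -> (flow, priority) currently on the standby PHY.
    failed_flows: (flow, priority, slots_needed) tuples carried by the failed
    primary PHY. Higher-priority flows are placed first and may preempt
    strictly lower-priority occupants of the standby PHY."""
    for flow, prio, need in sorted(failed_flows, key=lambda f: -f[1]):
        free = [s for s in range(20) if s not in standby_slots]
        victims = sorted((s for s, (_, p) in standby_slots.items() if p < prio),
                         key=lambda s: standby_slots[s][1])
        targets = (free + victims)[:need]
        if len(targets) < need:
            continue                     # this flow cannot be protected
        for s in targets:
            standby_slots[s] = (flow, prio)
    return standby_slots
```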
  • This embodiment provides a method for efficiently allocating time slots in FlexE.
  • The network device automatically allocates time slots on the PHY links for the service flow by using the time slot allocation strategy and the required bandwidth of the service flow, and uses the allocated time slots to transmit the service flow. Since the user does not need to manually specify the corresponding time slots for the service flow, the learning cost of the user having to understand how to arrange time slots is eliminated, and the tedious operation of configuring time slots for each service flow is avoided.
  • the configuration complexity is greatly simplified, and the efficiency of time slot allocation is improved.
  • the method 300 is illustrated below by using the method 400 as an example.
  • the action of time slot allocation is performed by the resource management layer 202 in the system architecture 200.
  • In the method 400, the time slot allocation strategy used when the PHY link is normal is called the bandwidth allocation strategy, and the time slot allocation strategy used when the PHY link fails is called the dynamic migration strategy.
  • The method 400 describes how the resource management layer uses the bandwidth allocation strategy to allocate time slots when the PHY link is normal, and how it uses the dynamic migration strategy to re-allocate time slots when the PHY link fails. For the steps that the method 400 shares with the method 300, refer to the method 300; they are not repeated here.
  • FIG. 12 is a flowchart of a method 400 for transmitting a service flow based on FlexE according to an embodiment of the present application.
  • RS MNG represents the resource management layer 202.
  • the method 400 includes three stages, the first stage is to build a pipeline, the second stage is to configure services, and the third stage is to implement dynamic migration of services due to PHY failures.
  • Phase one includes SP1000 to SP1004.
  • Phase two includes SP2001 to SP2006.
  • Phase three includes SP3001 to SP3007.
  • the user performs the creation operation of the FlexE group on the first network device and the second network device respectively.
  • the first network device creates the FlexE group in response to the creation operation.
  • the second network device creates the FlexE group in response to the creation operation.
  • the first network device and the second network device add and delete PHY links from the FlexE group.
  • the user customizes the FlexE group bandwidth allocation strategy, and the resource management layer of the first network device saves the user customized bandwidth allocation strategy to the database (database, DB).
  • the resource management layer of the first network device saves the user-customized dynamic migration strategy in DB.
  • the resource management layer of the first network device pushes the dynamic migration strategy to the opposite end based on LLDP.
  • the resource management layer of the second network device receives the pushed dynamic migration strategy, and saves the dynamic migration strategy in the DB.
  • the user executes the creation operation of the service flow on the first network device and the second network device respectively.
  • the first network device creates a service flow in response to the creation operation.
  • the second network device creates a service flow in response to the creation operation.
  • SP2002 the user specifies the priority of the service flow on the first network device and the second network device respectively.
  • the user configures the required bandwidth of the service flow on the first network device and the second network device respectively.
  • the resource management layer of the first network device allocates time slots in the TX direction based on the bandwidth allocation strategy customized by the user.
  • the first network device executes the action of the TX configuration backup table, and sends a request (Request, REQ) to the second network device.
  • the second network device acts as the RX end, responds to the request, and returns an Acknowledge (ACK) message to the first network device.
  • the first network device executes the TX table cut-over action
  • the second network device executes the RX table cut-over action
  • the first network device quickly senses the PHY link failure, and starts the process of adding and deleting PHY links in the FlexE group.
  • the second network device quickly senses the PHY link failure, and starts the process of adding and deleting the PHY link in the FlexE group.
  • the resource management layer of the first network device obtains the user-customized dynamic migration strategy of the local TX.
  • the resource management layer of the second network device obtains the RX dynamic migration strategy pushed by the opposite end to the local end.
  • the resource management layer of the first network device obtains the priority of the service flow.
  • the resource management layer of the second network device obtains the priority of the service flow.
  • the resource management layer of the first network device executes the time slot rearrangement in the TX direction based on the dynamic migration strategy.
  • the resource management layer of the second network device performs the RX direction time slot rearrangement based on the dynamic migration strategy.
  • The phase three process does not rely on negotiation between the two ends, so it is guaranteed to be completed within 50 milliseconds.
  • The technical means of re-determining the time slot according to the time slot allocation strategy and the required bandwidth can also be applied in scenarios other than PHY link failure.
  • the following examples illustrate some extended application scenarios.
  • this technical means is applied in scenarios where the required bandwidth of the service stream is updated.
  • the first network device determines the third time slot according to the time slot allocation strategy and the updated required bandwidth of the first service flow, and the third time slot is different from the first time slot ;
  • the first network device sends the first service flow to the second network device according to the third time slot.
  • the second network device determines the third time slot according to the time slot allocation strategy and the updated required bandwidth of the first service flow; the second network device receives the first service flow from the first network device according to the third time slot.
  • For the manner of determining the third time slot in this scenario, refer to the method 300 or the method 400, for example, any one or more of optional manner 1 to optional manner 6.
  • the user configures the required bandwidth of the service flow to 5G
  • the first network device uses the 5G time slot to transmit the service flow.
  • Later, the 5G bandwidth becomes insufficient, and the user reconfigures the required bandwidth of the service flow to 10G.
  • the first network device and the second network device re-determine the 10G time slot according to the time slot allocation strategy and the 10G required bandwidth.
  • the first network device and the second network device use the 10G time slots to transmit the service flow.
  • Since both ends use the same time slot allocation strategy and the same updated required bandwidth, the transmitting and receiving ends automatically re-allocate the same time slots. The communication overhead of negotiating the time slot configuration is therefore eliminated, which helps to realize a lossless update of the required bandwidth; see the toy illustration below.
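  • A toy illustration of the lossless bandwidth update, assuming 5 Gbps slots and an illustrative helper: both ends rerun the same deterministic routine with the updated required bandwidth and land on identical slots without exchanging any negotiation messages.

```python
import math

SLOT_GBPS = 5  # assumed slot granularity

def slots_for(required_gbps, idle_slots):
    """Pick the lowest-numbered idle slots that satisfy the required bandwidth."""
    need = math.ceil(required_gbps / SLOT_GBPS)
    return sorted(idle_slots)[:need] if len(idle_slots) >= need else None

idle = list(range(1, 21))
tx_view = slots_for(10, idle)   # first network device, updated bandwidth 10G
rx_view = slots_for(10, idle)   # second network device, same strategy and input
assert tx_view == rx_view == [1, 2]   # both ends land on the same third time slot
```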
  • this technical means is applied to the scenario of adding or deleting PHY links in the FlexE group.
  • If the bandwidth provided by the PHYs in the FlexE group is insufficient, one or more PHYs need to be added to the current FlexE group to support more service flows.
  • In this case, the first network device determines the fourth time slot from the time slots of the FlexE group to which the PHY link has been added, according to the time slot allocation strategy and the required bandwidth of the first service flow; the fourth time slot is different from the first time slot. The first network device sends the first service flow to the second network device according to the fourth time slot.
  • Correspondingly, the second network device determines the fourth time slot from the time slots of the FlexE group to which the PHY link has been added, according to the time slot allocation strategy and the required bandwidth of the first service flow; the fourth time slot is different from the first time slot. The second network device receives the first service flow from the first network device according to the fourth time slot.
  • If a PHY link is deleted from the FlexE group where the first time slot is located, the first network device determines the fifth time slot from the time slots of the FlexE group from which the PHY link has been deleted, according to the time slot allocation strategy and the required bandwidth of the first service flow; the fifth time slot is different from the first time slot. The first network device sends the first service flow to the second network device according to the fifth time slot.
  • Correspondingly, the second network device determines the fifth time slot from the FlexE group from which the PHY link has been deleted, according to the time slot allocation strategy and the required bandwidth of the first service flow; the fifth time slot is different from the first time slot. The second network device receives the first service flow from the first network device according to the fifth time slot.
  • In these scenarios, the time slot allocation strategy is used to re-allocate time slots. Since the time slot allocation strategy used by the two ends and the FlexE group after the addition or deletion of the PHY link are the same, the transmitting and receiving ends automatically re-allocate consistent time slots. The communication overhead of negotiating the time slot configuration is therefore eliminated, which helps to realize lossless addition and deletion of PHY links.
  • this technical means is applied to the scenario of adding and deleting business flows.
  • The first network device determines the sixth time slot according to the time slot allocation strategy and the required bandwidth of the first service flow; the sixth time slot is different from the first time slot. The first network device sends the first service flow to the second network device according to the sixth time slot.
  • The second network device determines the sixth time slot according to the time slot allocation strategy and the required bandwidth of the first service flow; the sixth time slot is different from the first time slot. The second network device receives the first service flow from the first network device according to the sixth time slot.
  • For example, the first network device allocates the time slots that meet the required bandwidth of service flow 1 (the first service flow) to service flow 1, and then a new service flow 2 (a third service flow) needs to be transmitted; the priority of service flow 2 is higher than that of service flow 1, but the current idle time slots are insufficient.
  • In this case, the transmitting and receiving ends can re-allocate time slots that meet the activation bandwidth for service flow 1 according to the time slot allocation strategy. Since the required bandwidth is greater than the activation bandwidth, idle time slots with a certain amount of bandwidth are freed up, and these idle time slots are allocated to service flow 2 so as to satisfy the bandwidth requirement of service flow 2, as sketched below.
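  • A sketch of this make-room behaviour, with 5 Gbps slots, 20 slots in total, and all names assumed for illustration: service flow 1 is shrunk from its required bandwidth down to its activation bandwidth, and the released slots (plus any idle ones) are handed to the higher-priority service flow 2.

```python
import math

SLOT_GBPS = 5  # assumed slot granularity

def make_room(occupied, flow1, flow1_activation_gbps, flow2, flow2_required_gbps):
    """occupied: dict slot -> flow (20 slots assumed). Shrink flow1 to its
    activation bandwidth and give the freed slots to flow2."""
    flow1_slots = sorted(s for s, f in occupied.items() if f == flow1)
    keep = math.ceil(flow1_activation_gbps / SLOT_GBPS)
    for s in flow1_slots[keep:]:
        del occupied[s]                  # released by flow1
    idle = sorted(s for s in range(20) if s not in occupied)
    need2 = math.ceil(flow2_required_gbps / SLOT_GBPS)
    if len(idle) < need2:
        return None                      # still not enough bandwidth for flow2
    for s in idle[:need2]:
        occupied[s] = flow2
    return occupied
```

  • When a service flow is later deleted and its slots are released, the same kind of routine can run in the opposite direction to restore the shrunk flow to its full required bandwidth, as in the example that follows.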
  • As another example, the required bandwidth of service flow A (the first service flow) cannot be met, and the two ends have allocated to service flow A only time slots satisfying its activation bandwidth. Service flow B (a fourth service flow), which was originally transmitted by the two ends, is then deleted because the service stops or for other reasons, and the time slots occupied by service flow B are released. The idle time slots of the FlexE group therefore increase, and the bandwidth resources of the FlexE group change from insufficient to sufficient to meet the required bandwidth of service flow A.
  • the transceiver can re-allocate time slots that meet the required bandwidth for service stream A according to the time slot allocation strategy, so as to use the newly released time slots to meet the bandwidth requirement of service stream A.
  • Since the time slot allocation strategy used by the two ends is the same, the transmitting and receiving ends automatically re-allocate the same time slots. The communication overhead of negotiating the time slot configuration is therefore eliminated, which helps to realize lossless addition and deletion of service flows.
  • The method 300 and the method 400 of the embodiments of the present application are described above, and the network device of the embodiments of the present application is described below. It should be understood that the network device described below has any function of the first network device or the second network device in the method 300 or the method 400.
  • FIG. 13 is a schematic structural diagram of a network device 500 provided by an embodiment of the present application.
  • the network device 500 includes: an acquisition module 501, configured to perform S304, SP1002, or SP1003; and a determining module 502, configured to perform S306 Or SP2004; sending module 503, used to execute S308A.
  • the determining module 502 is also used to perform S309, and the sending module 503 is also used to perform S311A.
  • The network device 500 corresponds to the first network device in the foregoing method embodiments, and the modules in the network device 500 and the foregoing other operations and/or functions are used to implement the steps and methods implemented by the first network device in the method embodiments.
  • the specific steps and methods please refer to the method 300 or the method 400 mentioned above.
  • the details are not repeated here.
  • When the network device 500 transmits service flows based on FlexE, the division into the above functional modules is only used as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as required; that is, the internal structure of the network device 500 is divided into different functional modules to complete all or part of the functions described above.
  • the network device 500 provided in the foregoing embodiment belongs to the same concept as the foregoing method 300. For the specific implementation process, refer to the method 300, which will not be repeated here.
  • The acquiring module 501 in the network device 500 is equivalent to the user configuration layer 201 in the system architecture 200; the determining module 502 in the network device 500 is equivalent to the resource management layer 202 in the system architecture 200; the sending module 503 in the network device 500 is equivalent to the FlexE physical interface 204 in the system architecture 200.
  • FIG. 14 is a schematic structural diagram of a network device 600 provided by an embodiment of the present application.
  • the network device 600 includes: an acquisition module 601, configured to perform S305 or SP1004; a determining module 602, configured to perform S307; and a receiving module 603, configured to perform S308B.
  • the determining module 602 is further used to perform S310, and the receiving module 603 is further used to perform S311B.
  • The network device 600 corresponds to the second network device in the foregoing method embodiments, and the modules in the network device 600 and the foregoing other operations and/or functions are used to implement the steps and methods implemented by the second network device in the method embodiments.
  • the specific steps and methods please refer to the method 300 or the method 400 mentioned above.
  • the details are not repeated here.
  • When the network device 600 transmits service flows based on FlexE, the division into the above functional modules is only used as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as required; that is, the internal structure of the network device 600 is divided into different functional modules to complete all or part of the functions described above.
  • the network device 600 provided in the foregoing embodiment belongs to the same concept as the foregoing method 300. For the specific implementation process, refer to the method 300, which will not be repeated here.
  • The acquiring module 601 in the network device 600 is equivalent to the user configuration layer 201 in the system architecture 200; the determining module 602 in the network device 600 is equivalent to the resource management layer 202 in the system architecture 200; the receiving module 603 in the network device 600 is equivalent to the FlexE physical interface 204 in the system architecture 200.
  • the embodiments of the present application also provide a network device.
  • the hardware structure of the network device is introduced below.
  • The network device 700 or the network device 800 corresponds to the first network device or the second network device in the foregoing method embodiments, and the hardware and modules in the network device 700 or the network device 800 and the foregoing other operations and/or functions are used to implement the steps and methods implemented by the first network device or the second network device in the method embodiments. For the detailed process of how the network device 700 or the network device 800 allocates time slots, refer to the foregoing method embodiments; for brevity, details are not repeated here.
  • the steps of the above method 300 or method 400 are completed by hardware integrated logic circuits in the processor of the network device 700 or the network device 800 or instructions in the form of software.
  • the steps of the method disclosed in combination with the embodiments of the present application may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor.
  • The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.
  • the network device 700 or the network device 800 corresponds to the network device 500 or the network device 600 in the foregoing virtual device embodiment, and each functional module in the network device 500 or the network device 600 is implemented by the software of the network device 700 or the network device 800.
  • the functional modules included in the network device 500 or the network device 600 are generated after the processor of the network device 700 or the network device 800 reads the program code stored in the memory.
  • FIG. 15 is a schematic structural diagram of a network device 700 provided by an embodiment of the present application.
  • the network device 700 may be configured as a first network device or a second network device.
  • the network device 700 includes at least one processor 701, a communication bus 702, a memory 703, and at least one physical interface 704.
  • The processor 701 may be a general-purpose CPU, an NP, or a microprocessor, or may be one or more integrated circuits used to implement the solution of the present application, for example, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the above-mentioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL), or any combination thereof.
  • the communication bus 702 is used to transfer information between the above-mentioned components.
  • the communication bus 702 can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • The memory 703 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, or an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including a compact disc, a laser disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 703 may exist independently and is connected to the processor 701 through the communication bus 702.
  • the memory 703 may also be integrated with the processor 701.
  • the physical interface 704 uses any device such as a transceiver for communicating with other devices or communication networks.
  • the physical interface 704 includes a wired communication interface, and may also include a wireless communication interface.
  • the wired communication interface may be, for example, an Ethernet interface.
  • the Ethernet interface can be an optical interface, an electrical interface, or a combination thereof.
  • the wireless communication interface may be a wireless local area network (WLAN) interface, a cellular network communication interface, or a combination thereof.
  • the physical interface 704 is also called a physical port, and the physical interface 704 corresponds to the FlexE physical interface 204 in the system architecture 200.
  • the processor 701 may include one or more CPUs, such as CPU0 and CPU1 as shown in FIG. 15.
  • the network device 700 may include multiple processors, such as the processor 701 and the processor 705 as shown in FIG. 15. Each of these processors can be a single-core processor (single-CPU) or a multi-core processor (multi-CPU).
  • the processor here may refer to one or more devices, circuits, and/or processing cores for processing data (such as computer program instructions).
  • the network device 700 may further include an output device 706 and an input device 707.
  • the output device 706 communicates with the processor 701 and can display information in a variety of ways.
  • the output device 706 may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector, etc.
  • the input device 707 communicates with the processor 701, and can receive user input in a variety of ways.
  • the input device 707 may be a mouse, a keyboard, a touch screen device, a sensor device, or the like.
  • the memory 703 is used to store the program code 710 for executing the solution of the present application, and the processor 701 may execute the program code 710 stored in the memory 703. That is, the network device 700 may implement the method 300 or the method 400 provided in the method embodiment through the processor 701 and the program code 710 in the memory 703.
  • the network device 700 in the embodiment of the present application may correspond to the first network device or the second network device in the foregoing method embodiments, and the processor 701, the physical interface 704, etc. in the network device 700 may implement the foregoing methods.
  • the sending module 503 in the network device 500 is equivalent to the physical interface 704 in the network device 700; the acquiring module 501 and the determining module 502 in the network device 500 may be equivalent to the processor 701 in the network device 700.
  • the receiving module 603 in the network device 600 is equivalent to the physical interface 704 in the network device 700; the acquiring module 601 and the determining module 602 in the network device 600 may be equivalent to the processor 701 in the network device 700.
  • FIG. 16 is a schematic structural diagram of a network device 800 provided by an embodiment of the present application.
  • the network device 800 may be configured as a first network device or a second network device.
  • the network device 800 includes: a main control board 810 and an interface board 830.
  • the main control board 810 is also called a main processing unit (MPU) or a route processor card (route processor card).
  • The main control board 810 controls and manages the components in the network device 800, including routing calculation, device management, device maintenance, and protocol processing functions.
  • the main control board 810 includes: a central processing unit 811 and a memory 812.
  • the interface board 830 is also called a line processing unit (LPU), a line card (line card), or a business board.
  • the interface board 830 is used to provide various service interfaces and implement data packet forwarding.
  • Service interfaces include, but are not limited to, Ethernet interfaces, POS (Packet over SONET/SDH) interfaces, etc.
  • the Ethernet interfaces are, for example, Flexible Ethernet Clients (Flexible Ethernet Clients, FlexE Clients).
  • the interface board 830 includes: a central processor 831, a network processor 832, a forwarding entry memory 834, and a physical interface card (PIC) 833.
  • the central processing unit 831 on the interface board 830 is used to control and manage the interface board 830 and to communicate with the central processing unit 811 on the main control board 810.
  • the network processor 832 is used to implement message forwarding processing.
  • the form of the network processor 832 may be a forwarding chip.
  • Upstream packet processing includes processing at the packet ingress interface and forwarding table lookup; downstream packet processing includes forwarding table lookup, and so on.
  • The physical interface card 833 is used to implement the interconnection function of the physical layer; original traffic enters the interface board 830 through it, and processed packets are sent out from the physical interface card 833.
  • the physical interface card 833 includes at least one physical interface, which is also called a physical port, and the physical interface card 833 corresponds to the FlexE physical interface 204 in the system architecture 200.
  • the physical interface card 833 is also called a daughter card, which can be installed on the interface board 830, and is responsible for converting the photoelectric signal into a message, and then forwarding the message to the network processor 832 for processing after checking the validity of the message.
  • the central processor 831 of the interface board 830 can also perform the functions of the network processor 832, such as realizing software forwarding based on a general-purpose CPU, so that the network processor 832 is not required in the physical interface card 833.
  • the network device 800 includes multiple interface boards.
  • the network device 800 further includes an interface board 840.
  • the interface board 840 includes: a central processing unit 841, a network processor 842, a forwarding entry memory 844, and a physical interface card 843.
  • the network device 800 further includes a switching network board 820.
  • the switching network board 820 may also be referred to as a switch fabric unit (SFU).
  • the switching network board 820 is used to complete data exchange between the interface boards.
  • the interface board 830 and the interface board 840 may communicate with each other through the switching network board 820.
  • the main control board 810 and the interface board 830 are coupled.
  • the main control board 810, the interface board 830, the interface board 840, and the switching network board 820 are connected to the system backplane through the system bus to achieve intercommunication.
  • an inter-process communication protocol (IPC) channel is established between the main control board 810 and the interface board 830, and the main control board 810 and the interface board 830 communicate through the IPC channel.
  • the network device 800 includes a control plane and a forwarding plane.
  • the control plane includes a main control board 810 and a central processing unit 831.
  • The forwarding plane includes various components that perform forwarding, such as the forwarding entry memory 834, the physical interface card 833, and the network processor 832.
  • The control plane performs functions such as routing, generating forwarding tables, processing signaling and protocol packets, and configuring and maintaining the status of the device.
  • the control plane issues the generated forwarding tables to the forwarding plane.
  • The network processor 832 looks up the forwarding table issued by the control plane and forwards the packets received by the physical interface card 833.
  • the forwarding table issued by the control plane can be stored in the forwarding entry storage 834. In some embodiments, the control plane and the forwarding plane can be completely separated and not on the same device.
  • the central processor 811 obtains the time slot allocation strategy; and determines the first time slot according to the time slot allocation strategy and the required bandwidth.
  • the network processor 832 triggers the physical interface card 833 to send the first service flow to the second network device according to the first time slot.
  • the central processor 811 obtains the time slot allocation strategy; and determines the first time slot according to the time slot allocation strategy and the required bandwidth.
  • the network processor 832 triggers the physical interface card 833 to receive the first service flow from the first network device according to the first time slot.
  • The sending module 503 in the network device 500 is equivalent to the physical interface card 833 or the physical interface card 843 in the network device 800; the acquiring module 501 and the determining module 502 in the network device 500 may be equivalent to the central processing unit 811 or the central processing unit 831 in the network device 800.
  • The receiving module 603 in the network device 600 is equivalent to the physical interface card 833 or the physical interface card 843 in the network device 800; the acquiring module 601 and the determining module 602 in the network device 600 may be equivalent to the central processing unit 811 or the central processing unit 831 in the network device 800.
  • the operations on the interface board 840 in the embodiment of the present application are consistent with the operations on the interface board 830, and will not be repeated for the sake of brevity.
  • The network device 800 of this embodiment may correspond to the first network device or the second network device in each of the foregoing method embodiments, and the main control board 810, the interface board 830, and/or the interface board 840 in the network device 800 can implement the functions and/or the various steps implemented by the first network device or the second network device in each of the foregoing method embodiments; for brevity, details are not described herein again.
  • There may be one or more main control boards; when there are more than one, they may include an active main control board and a standby main control board.
  • There may be no switching network board, or there may be one or more; when there are more than one, they can jointly realize load sharing and redundant backup. Under a centralized forwarding architecture, the network device may not need a switching network board, and the interface board undertakes the processing of the service data of the entire system.
  • the network device can have at least one switching network board, and data exchange between multiple interface boards is realized through the switching network board, providing large-capacity data exchange and processing capabilities. Therefore, the data access and processing capabilities of network equipment with a distributed architecture are greater than those with a centralized architecture.
  • The form of the network device may also have only one board; that is, there is no switching network board, and the functions of the interface board and the main control board are integrated on that one board. In this case, the central processing unit on the interface board and the central processing unit on the main control board can be combined into one central processing unit on that board, which performs the combined functions of the two. The data exchange and processing capability of a device in this form is relatively low (for example, network devices such as low-end switches or routers).
  • the specific architecture used depends on the specific networking deployment scenario, and there is no restriction here.
  • the foregoing first network device or second network device may be implemented as a virtualized device.
  • the virtualization device may be a virtual machine (English: Virtual Machine, VM) running a program for sending messages, and the virtual machine is deployed on a hardware device (for example, a physical server).
  • a virtual machine refers to a complete computer system with complete hardware system functions that is simulated by software and runs in a completely isolated environment.
  • the virtual machine can be configured as the first network device or the second network device.
  • the first network device or the second network device can be implemented based on a general physical server combined with network function virtualization (Network Functions Virtualization, NFV) technology.
  • the first network device or the second network device is a virtual host, a virtual router, or a virtual switch.
  • by reading this application, those skilled in the art can use NFV technology to virtualize, on a general-purpose physical server, a first network device or a second network device having the above-mentioned functions. Details are not repeated here.
  • the network devices of the various product forms described above each have any function of the first network device or the second network device in the foregoing method embodiments, and details are not repeated here.
  • the embodiment of the present application provides a computer program product, which when the computer program product runs on a network device, causes the network device to execute the method executed by the first network device in the method 300 or the method 400 described above.
  • the embodiment of the present application provides a computer program product, which when the computer program product runs on a network device, causes the network device to execute the method executed by the second network device in the above method 300 or method 400.
  • the system 900 includes a first network device 901 and a second network device 902.
  • the first network device 901 is a network device 500, a network device 700, or a network device 800
  • the second network device 902 is a network device 600, a network device 700, or a network device 800.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may also be electrical, mechanical or other forms of connection.
  • a unit described as a separate component may or may not be physically separate, and a component displayed as a unit may or may not be a physical unit; that is, it may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of this application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.
  • the computer program product includes one or more computer program instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • for example, the computer program instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), or a semiconductor medium (for example, a solid-state drive).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This application provides a method and device for transmitting a service flow based on FlexE, and belongs to the field of communication technologies. This application provides a method for efficiently allocating time slots in FlexE: a network device uses a time slot allocation strategy and the required bandwidth of a service flow to automatically allocate, to the service flow, a time slot on a PHY link, and transmits the service flow using the allocated time slot. Because a user does not need to manually specify a corresponding time slot for the service flow, the learning cost of understanding how to arrange time slots is eliminated, and the tedious operation of configuring time slots for service flows is avoided, which greatly simplifies configuration complexity and improves the efficiency of time slot allocation.

Description

基于FlexE传输业务流的方法及设备
本申请要求于2020年03月26日提交的申请号为202010225075.5、发明名称为“基于FlexE传输业务流的方法及设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及通信技术领域,特别涉及一种基于FlexE传输业务流的方法及设备。
背景技术
灵活以太网(Flexible Ethernet,Flex Eth或FlexE)是在传统以太网基础上发展出来的一种更加先进的以太网技术。FlexE主要提供三种功能,分别是捆绑、通道化和子速率。其中,捆绑是指将多个PHY捆绑为一个FlexE组,同一个FlexE组中的多个PHY能够一起传输业务流(client),从而支持更高速率。
以FlexE中业务流的传输方向为从网络设备A到网络设备B为例,用户要在网络设备A上人工进行配置操作,配置用于传输业务流的PHY的物理接口编号(PHY number),并且还要配置业务流在PHY上占用的时隙的时隙编号。网络设备A根据用户的配置操作,获取时隙配置表,将时隙配置表携带在FlexE开销帧中,向网络设备B发送该FlexE开销帧。网络设备B从FlexE开销帧中提取时隙配置表,解析时隙配置表,得到物理接口编号和时隙编号。网络设备B根据物理接口编号找到对应的PHY,根据时隙编号找到对应的时隙,从对应的PHY的对应的时隙上重建业务流。
采用上述方法时,用户需要通过执行配置操作,人工指定将哪个时隙分配给哪个业务流,配置操作十分复杂,导致费时费力,效率低下。
发明内容
本申请实施例提供了一种基于FlexE传输业务流的方法及设备,能够提高FlexE中分配时隙的效率。所述技术方案如下:
第一方面,提供了一种基于FlexE传输业务流的方法,在该方法中,第一网络设备获取时隙分配策略,所述时隙分配策略用于根据第一业务流的需求带宽分配时隙;所述第一网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙,所述第一时隙是所述第一网络设备与第二网络设备之间的物理层PHY链路的时隙;所述第一网络设备根据所述第一时隙向所述第二网络设备发送所述第一业务流。
以上提供了一种在FlexE中高效分配时隙的方法,网络设备通过利用时隙分配策略和业务流的需求带宽,为业务流自动分配PHY链路上的时隙,并使用分配的时隙传输业务流,由于无需用户为业务流人工指定对应的时隙,因此免去了用户感知如何编排时隙带来的学习成本,并免去了用户为业务流配置时隙的繁琐操作,因此大大简化了配置复杂度,提高了时隙分配的效率。
可选地,所述第一网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙,包 括:如果空闲时隙满足所述需求带宽,所述第一网络设备根据所述时隙分配策略和所述需求带宽,从所述空闲时隙中确定满足所述需求带宽的第一时隙。
通过这种可选方式,由于利用时隙分配策略,自动地确定出了满足需求带宽的时隙,将满足需求带宽的时隙分配给业务流,因此在传输业务流的过程中,业务流会通过满足需求带宽的时隙传输,从而保证了业务流的带宽。由于业务流的带宽得到了保障,有助于业务保障SLA的要求。尤其是,在需求带宽由用户指定的情况下,通过这种可选方式分配时隙,使得业务流的带宽符合用户对带宽的期望。
可选地,所述第一网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙,包括:如果空闲时隙不满足所述需求带宽,所述第一网络设备根据所述时隙分配策略和激活带宽,从所述空闲时隙中确定满足所述激活带宽的第一时隙,所述激活带宽小于所述需求带宽,所述激活带宽是所述第一网络设备能够启动传输所述第一业务流的最小需求带宽。
通过这种可选方式,在空闲时隙不足的情况下,网络设备可能无法找到满足需求带宽的空闲时隙,而由于利用时隙分配策略,自动地确定出了满足激活带宽的时隙,将满足激活带宽的时隙分配给业务流,因此即使空闲时隙不足,网络设备也能够利用激活带宽对应的时隙启动传输业务流,因此保证了业务流的连通性,使得业务流得到传输,避免业务流断开,从而尽力而为保证最大数量的业务流被启动传输。
可选地,所述第一网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙,包括:如果空闲时隙不满足所述需求带宽,所述第一网络设备根据所述时隙分配策略和所述第一业务流的优先级,从已被第二业务流占用的时隙中确定所述第一时隙,所述第二业务流的优先级低于所述第一业务流的优先级。
通过这种可选方式,在空闲时隙不足的情况下,不同业务流之间存在资源竞争的关系,业务流竞争的资源即为空闲时隙。由于利用时隙分配策略,自动地将低优先级的业务流原本占用的时隙分配给了高优先级的业务流,因此即使空闲时隙不足,高优先级的业务流能够抢占到低优先级的业务流的时隙,高优先级的业务流可利用低优先级业务流原本占用的时隙传输,从而保障高优先级的业务流的带宽或保障高优先级的业务流的连通性。
可选地,所述第一网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙,包括:所述第一网络设备根据所述时隙分配策略,从FlexE组的可用PHY链路中,确定物理接口编号最小的第一PHY链路;所述第一网络设备根据所述需求带宽,从所述第一PHY链路的空闲时隙中确定时隙编号最小的第一时隙。
通过这种可选方式,由于利用时隙分配策略,自动地确定出了当前物理接口编号最小的可用PHY链路上时隙编号最小的时隙,提供了一种简单的自动分配时隙的方式,方便管理FlexE组的空闲时隙。
可选地,所述第一网络设备根据所述时隙分配策略,从所述空闲时隙中确定满足所述需求带宽的第一时隙,包括:所述第一网络设备根据所述时隙分配策略,从FlexE组的可用PHY链路中,确定负载最小的第二PHY链路;所述第一网络设备根据所述需求带宽,从所述第二PHY链路的空闲时隙中确定时隙编号最小的第一时隙。
通过这种可选方式,由于基于PHY链路实现了负载分担,当一个PHY链路发生故障时,通过故障PHY链路之外的其他PHY链路传输的业务流不会受到影响,能够保持正常传输,因此避免一个PHY链路发生故障导致所有业务流中断的情况。例如,如果FlexE组包括PHY1 和PHY2这2个PHY链路,需要传输2N个业务流,采用这种可选方式,PHY1上会传输N个业务流,PHY2上会传输另外N个业务流,那么即使PHY1发生故障,而且没有执行S309对应的时隙动态迁移功能,PHY2上的N个业务流也会正常传输,因此确保有50%的业务流在没有人为干预的情况下仍可以快速恢复。此外,在不考虑PHY链路的负载的情况下,可能导致所有业务流集中分布在一个或多个PHY链路上,造成部分PHY链路是满载的,而部分PHY链路是空载的,而通过这种可选方式,能够将所有业务流均匀分担至不同PHY链路上,减轻了单个PHY链路的压力,实现了负载分担的功能。
可选地,所述第一网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙,包括:所述第一网络设备根据所述时隙分配策略和所述需求带宽,从多个PHY链路的空闲时隙中确定所述第一时隙,所述第一时隙平均分布在所述多个PHY链路中的不同PHY链路。
通过这种可选方式,将同一个业务流的需求带宽均衡地分担至尽可能多的可用PHY链路,一方面,能够极大地减少单个PHY链路故障后对业务流造成的影响,即使没有进行时隙迁移的步骤,由于业务流能够利用其他PHY链路上的时隙传输,保证业务流具有可用的带宽,而不至于传输中断。例如,通过这种可选方式,可以将第一业务流的需求带宽平均分担至N个PHY链路上,每个PHY链路上占用1/N份需求带宽对应的时隙。那么,即使这N个PHY链路上的一个PHY链路发生故障,剩余的(N-1)个PHY链路仍会传输第一业务流,从而保证第一业务流具有(N-1)/N份可用的带宽,因此在没有人为干预的情况下可以快速从故障恢复。另一方面,减轻了单个PHY链路的压力,实现了负载分担的功能。
可选地,所述第一网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙之后,所述方法还包括:当所述第一时隙所在的PHY链路发生故障,所述第一网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,所述第二时隙与所述第一时隙不同;所述第一网络设备根据所述第二时隙向所述第二网络设备发送所述第一业务流。
通过这种可选方式,在第一业务流所在的PHY链路故障的情况下,第一网络设备依据时隙分配策略,将第一业务流能够从原来所在的时隙动态迁移到重新确定的时隙,从而为第一业务流重新部署了时隙,实现了时隙的重排布。此外,通过根据时隙分配策略重新确定时隙,由于收发两端无需执行协商流程,因此免去了协商流程带来的时延,能够将断流时间控制在50毫秒范围内,确保业务流在50毫秒内快速完成恢复,因此极大地提高业务从故障恢复的速度。另一方面,由于在PHY链路故障后,收发两端根据相同的时隙分配策略和相同的需求带宽重新确定时隙,因此收发两端确定出的新的时隙会是相同的,使得时隙迁移后,收发两端的时隙排布具有一致性,那么收发两端根据一致的时隙排布,能够正常传输业务流,从而实现了FlexE组中不同PHY链路保护倒换的功能,将故障的PHY链路上的业务流倒换至正常的PHY链路上,避免业务流传输中断。
可选地,所述第一网络设备获取时隙分配策略之后,所述方法还包括:所述第一网络设备将所述时隙分配策略推送至所述第二网络设备。
通过推送时隙分配策略,一方面,保证了RX端和TX端的策略一致性,从而保证在PHY链路发生故障、PHY链路增删、需求带宽更新等各种时隙迁移的场景下,由于RX端和TX端利用一致的时隙分配策略,RX端重新部署的时隙和TX端重新部署的时隙具有一致性,有助于流量快速恢复。另一方面,免去了用户对RX端配置时隙分配策略的流程,因此降低了配置的复杂度,提高了部署时隙分配策略的效率。
可选地,所述第一网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,包括:如果空闲时隙满足所述需求带宽,所述第一网络设备根据所述时隙分配策略和所述需求带宽,从所述空闲时隙中确定满足所述需求带宽的第二时隙。
通过在PHY链路故障的情况下执行这种可选方式,由于重新确定出了满足需求带宽的时隙,利用重新确定出的时隙传输业务流,使得业务流从原来所在的时隙迁移至重新确定的时隙后,业务流的带宽仍能满足需求带宽,从而尽力而为保证最大数量的业务流正常工作。由于PHY链路发生故障后业务流的带宽继续得到了保障,有助于保障业务的服务等级协议(Service-Level Agreement,SLA)。尤其是,在需求带宽由用户指定的情况下,通过这种可选方式重新分配时隙,使得PHY发生故障后业务流的带宽仍能符合用户对带宽的期望。
可选地,所述第一网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,包括:如果空闲时隙不满足所述需求带宽,所述第一网络设备根据所述时隙分配策略和激活带宽,从所述空闲时隙中确定满足所述激活带宽的第二时隙,所述激活带宽小于所述需求带宽,所述激活带宽是所述第一网络设备能够启动传输所述第一业务流的最小需求带宽。
通过在PHY链路故障的情况下执行这种可选方式,在PHY链路发生故障而空闲时隙不足的情况下,由于重新确定出了满足激活带宽的时隙,利用确定出的时隙传输业务流,使得业务流能够处于连通状态,业务流能被传输至对端,避免PHY链路发生故障后业务流断流,从而尽力而为保证最大数量的业务流在PHY链路发生故障后仍被启动传输。
可选地,所述第一网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,包括:如果空闲时隙不满足所述需求带宽,所述第一网络设备根据所述时隙分配策略和所述第一业务流的优先级,从已被第二业务流占用的时隙中确定所述第二时隙,所述第二业务流的优先级低于所述第一业务流的优先级。
通过在PHY链路故障的情况下执行这种可选方式,在PHY链路发生故障而空闲时隙不足的情况下,由于根据业务流的优先级重新分配时隙,将低优先级的业务流原本占用的时隙重新分配给了高优先级的业务流,使得高优先级的业务流具有优先竞争到时隙的权利,高优先级的业务流能够通过低优先级业务流原本占用的时隙传输,从而避免高优先级的业务流断开,保证高优先级的业务流快速恢复。
可选地,所述第一网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,包括:所述第一网络设备根据所述时隙分配策略,从FlexE组的可用PHY链路中,确定物理接口编号最小的第一PHY链路;所述第一网络设备根据所述需求带宽,从所述第一PHY链路的空闲时隙中确定时隙编号最小的第二时隙。
可选地,所述第一网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,包括:所述第一网络设备根据所述时隙分配策略,从FlexE组的可用PHY链路中,确定负载最小的第二PHY链路;所述第一网络设备根据所述需求带宽,从所述第二PHY链路的空闲时隙中确定时隙编号最小的第二时隙。
可选地,所述第一网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,包括:所述第一网络设备根据所述时隙分配策略和所述需求带宽,从多个PHY链路的空闲时隙中确定所述第二时隙,所述第二时隙平均分布在所述多个PHY链路中的不同PHY链路。
可选地,所述第一网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙之后,所述方法还包括:当所述第一业务流的需求带宽发生更新,所述第一网络设备根据所述时隙分配策略和所述第一业务流更新后的需求带宽,确定第三时隙,所述第三时隙与所述第一时隙不同;所述第一网络设备根据所述第三时隙向所述第二网络设备发送所述第一业务流。
通过在业务流的需求带宽发生更新的场景下利用时隙分配策略重新分配时隙,由于收发两端利用的时隙分配策略和更新后的需求带宽一致,因此收发两端能自动地重新分配一致的时隙,因此免去了为配置时隙进行协商带来的通信开销,有助于实现需求带宽的无损更新。
可选地,所述第一网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙之后,所述方法还包括:当所述第一时隙所在的FlexE组增加PHY链路,所述第一网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,从增加了PHY链路的FlexE组的时隙中确定第四时隙,所述第四时隙与所述第一时隙不同;所述第一网络设备根据所述第四时隙向所述第二网络设备发送所述第一业务流。当所述第一时隙所在的FlexE组删除PHY链路,所述第一网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,从删除了PHY链路的FlexE组的时隙中确定第五时隙,所述第五时隙与所述第一时隙不同;所述第一网络设备根据所述第五时隙向所述第二网络设备发送所述第一业务流。
通过在FlexE组中增删PHY链路下利用时隙分配策略重新分配时隙,由于收发两端利用的时隙分配策略和增删PHY链路后的FlexE组一致,因此收发两端能自动地重新分配一致的时隙,因此免去了为配置时隙进行协商带来的通信开销,有助于实现PHY链路的无损增删。
可选地,所述第一网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙之后,所述方法还包括:当增加了待传输的第三业务流或删除了原本传输的第四业务流,所述第一网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第六时隙,所述第六时隙与所述第一时隙不同;所述第一网络设备根据所述第六时隙向所述第二网络设备发送所述第一业务流。
通过在增删业务流的场景下利用时隙分配策略重新分配时隙,由于收发两端利用的时隙分配策略一致,因此收发两端能自动地重新分配一致的时隙,因此免去了为配置时隙进行协商带来的通信开销,有助于实现业务流的无损增删。
可选地,所述第一网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,包括:所述第一网络设备从第一FlexE组中删除第一时隙所在的PHY链路,得到第二FlexE组,第二FlexE组不包括第一时隙所在的PHY链路。所述第一网络设备根据时隙分配策略和第一业务流的需求带宽,从第二FlexE组中确定第二时隙。
通过这种可选方式,由于在PHY链路故障后,快速启动了从FlexE组中删除故障的PHY链路的流程,从而自动地将故障的PHY链路剔除出FlexE组,使得FlexE组中剩余的PHY链路处于激活状态,因此保证了FlexE组是可用的,避免了PHY链路故障后导致整个FlexE组不可用。
第二方面,提供了一种基于FlexE传输业务流的方法,在该方法中,第二网络设备获取时隙分配策略,所述时隙分配策略用于根据第一业务流的需求带宽分配时隙;所述第二网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙,所述第一时隙是所述第二网络设备与第一网络设备之间的物理层PHY链路的时隙;所述第二网络设备根据所述第一时隙从 所述第一网络设备接收所述第一业务流。
可选地,所述第二网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙,包括:如果空闲时隙满足所述需求带宽,所述第二网络设备根据所述时隙分配策略和所述需求带宽,从所述空闲时隙中确定满足所述需求带宽的第一时隙。
可选地,所述第二网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙,包括:如果空闲时隙不满足所述需求带宽,所述第二网络设备根据所述时隙分配策略和激活带宽,从所述空闲时隙中确定满足所述激活带宽的第一时隙,所述激活带宽小于所述需求带宽,所述激活带宽是所述第二网络设备能够启动传输所述第一业务流的最小需求带宽。
可选地,所述第二网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙,包括:如果空闲时隙不满足所述需求带宽,所述第二网络设备根据所述时隙分配策略和所述第一业务流的优先级,从已被第二业务流占用的时隙中确定所述第一时隙,所述第二业务流的优先级低于所述第一业务流的优先级。
可选地,所述第二网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙,包括:所述第二网络设备根据所述时隙分配策略,从FlexE组的可用PHY链路中,确定物理接口编号最小的第一PHY链路;所述第二网络设备根据所述需求带宽,从所述第一PHY链路的空闲时隙中确定时隙编号最小的第一时隙。
可选地,所述第二网络设备根据所述时隙分配策略,从所述空闲时隙中确定满足所述需求带宽的第一时隙,包括:所述第二网络设备根据所述时隙分配策略,从FlexE组的可用PHY链路中,确定负载最小的第二PHY链路;所述第二网络设备根据所述需求带宽,从所述第二PHY链路的空闲时隙中确定时隙编号最小的第一时隙。
可选地,所述第二网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙,包括:所述第二网络设备根据所述时隙分配策略和所述需求带宽,从多个PHY链路的空闲时隙中确定所述第一时隙,所述第一时隙平均分布在所述多个PHY链路中的不同PHY链路。
可选地,所述第二网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙之后,所述方法还包括:当所述第一时隙所在的PHY链路发生故障,所述第二网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,所述第二时隙与所述第一时隙不同;所述第二网络设备根据所述第二时隙从所述第一网络设备接收所述第一业务流。
通过在PHY链路发生故障的情况下,根据时隙分配策略重新确定时隙,由于收发两端无需执行协商流程,因此免去了协商流程带来的时延,能够将断流时间控制在50毫秒范围内,确保业务流在50毫秒内快速完成恢复,因此极大地提高业务从故障恢复的速度。另一方面,由于在PHY链路故障后,收发两端根据相同的时隙分配策略和相同的需求带宽重新确定时隙,因此收发两端确定出的新的时隙会是相同的,使得时隙迁移后,收发两端的时隙排布具有一致性,那么收发两端根据一致的时隙排布,能够正常传输业务流,从而实现了FlexE组中不同PHY链路保护倒换的功能,将故障的PHY链路上的业务流倒换至正常的PHY链路上,避免业务流传输中断。
可选地,所述第二网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,包括:如果空闲时隙满足所述需求带宽,所述第二网络设备根据所述时隙分配策略和所述需求带宽,从所述空闲时隙中确定满足所述需求带宽的第二时隙。
可选地,所述第二网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定 第二时隙,包括:如果空闲时隙不满足所述需求带宽,所述第二网络设备根据所述时隙分配策略和激活带宽,从所述空闲时隙中确定满足所述激活带宽的第二时隙,所述激活带宽小于所述需求带宽,所述激活带宽是所述第二网络设备能够启动传输所述第一业务流的最小需求带宽。
可选地,所述第二网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,包括:如果空闲时隙不满足所述需求带宽,所述第二网络设备根据所述时隙分配策略和所述第一业务流的优先级,从已被第二业务流占用的时隙中确定所述第二时隙,所述第二业务流的优先级低于所述第一业务流的优先级。
可选地,所述第二网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,包括:所述第二网络设备根据所述时隙分配策略,从FlexE组的可用PHY链路中,确定物理接口编号最小的第一PHY链路;所述第二网络设备根据所述需求带宽,从所述第一PHY链路的空闲时隙中确定时隙编号最小的第二时隙。
可选地,所述第二网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,包括:所述第二网络设备根据所述时隙分配策略,从FlexE组的可用PHY链路中,确定负载最小的第二PHY链路;所述第二网络设备根据所述需求带宽,从所述第二PHY链路的空闲时隙中确定时隙编号最小的第二时隙。
可选地,所述第二网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第二时隙,包括:所述第二网络设备根据所述时隙分配策略和所述需求带宽,从多个PHY链路的空闲时隙中确定所述第二时隙,所述第二时隙平均分布在所述多个PHY链路中的不同PHY链路。
可选地,所述第二网络设备获取时隙分配策略,包括:所述第二网络设备从所述第一网络设备接收时隙分配策略。
通过推送的方式得到时隙分配策略,一方面,保证了RX端和TX端的策略一致性,从而保证在PHY链路发生故障、PHY链路增删、需求带宽更新等各种时隙迁移的场景下,由于RX端和TX端利用一致的时隙分配策略,RX端重新部署的时隙和TX端重新部署的时隙具有一致性,有助于流量快速恢复。另一方面,免去了用户对RX端配置时隙分配策略的流程,因此降低了配置的复杂度,提高了部署时隙分配策略的效率。
可选地,所述第二网络设备从所述第一网络设备接收时隙分配策略,包括:所述第二网络设备接收所述第一网络设备的协商请求,所述协商请求用于指示所述时隙分配策略;所述第二网络设备根据所述协商请求,确定所述时隙分配策略。
可选地,所述第二网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙之后,所述方法还包括:当所述第一业务流的需求带宽发生更新,所述第二网络设备根据所述时隙分配策略和所述第一业务流更新后的需求带宽,确定第三时隙,所述第三时隙与所述第一时隙不同;所述第二网络设备根据所述第三时隙从所述第一网络设备接收所述第一业务流。
可选地,所述第二网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙之后,所述方法还包括:当所述第一时隙所在的FlexE组增加PHY链路,所述第二网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,从增加了PHY链路的FlexE组的时隙中确定第四时隙,所述第四时隙与所述第一时隙不同;所述第二网络设备根据所述第四时隙从所述第一网络设备接收所述第一业务流。
可选地,所述第二网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙之后,所述方法还包括:当所述第一时隙所在的FlexE组删除PHY链路,所述第二网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,从删除了PHY链路的FlexE组的时隙中确定第五时隙,所述第五时隙与所述第一时隙不同;所述第二网络设备根据所述第五时隙从所述第一网络设备接收所述第一业务流。
可选地,所述第二网络设备根据所述时隙分配策略和所述需求带宽,确定第一时隙之后,所述方法还包括:当增加了待传输的第三业务流或删除了原本传输的第四业务流,所述第二网络设备根据所述时隙分配策略和所述第一业务流的需求带宽,确定第六时隙,所述第六时隙与所述第一时隙不同;所述第二网络设备根据所述第六时隙从所述第一网络设备接收所述第一业务流。
第三方面,提供了一种第一网络设备,该第一网络设备具有实现上述第一方面或第一方面任一种可选方式中基于FlexE传输业务流的功能。该第一网络设备包括至少一个模块,至少一个模块用于实现上述第一方面或第一方面任一种可选方式所提供的基于FlexE传输业务流的方法。第三方面提供的第一网络设备的具体细节可参见上述第一方面或第一方面任一种可选方式,此处不再赘述。
第四方面,提供了一种第二网络设备,该第二网络设备具有实现上述第二方面或第二方面任一种可选方式中基于FlexE传输业务流的功能。该第二网络设备包括至少一个模块,至少一个模块用于实现上述第二方面或第二方面任一种可选方式所提供的基于FlexE传输业务流的方法。第四方面提供的第二网络设备的具体细节可参见上述第二方面或第二方面任一种可选方式,此处不再赘述。
第五方面,提供了一种第一网络设备,该第一网络设备包括处理器和物理接口,该处理器用于执行指令,使得该第一网络设备执行上述第一方面或第一方面任一种可选方式所提供的基于FlexE传输业务流的方法,所述物理接口用于发送业务流。第五方面提供的第一网络设备的具体细节可参见上述第一方面或第一方面任一种可选方式,此处不再赘述。
第六方面,提供了第二网络设备,该第二网络设备包括处理器和物理接口,该处理器用于执行指令,使得该第二网络设备执行上述第二方面或第二方面任一种可选方式所提供的基于FlexE传输业务流的方法,所述物理接口用于接收业务流。第六方面提供的第二网络设备的具体细节可参见上述第二方面或第二方面任一种可选方式,此处不再赘述。
第七方面,提供了一种计算机可读存储介质,该存储介质中存储有至少一条指令,该指令由处理器读取以使第一网络设备执行上述第一方面或第一方面任一种可选方式所提供的基于FlexE传输业务流的方法。
第八方面,提供了一种计算机可读存储介质,该存储介质中存储有至少一条指令,该指令由处理器读取以使第二网络设备执行上述第二方面或第二方面任一种可选方式所提供的基 于FlexE传输业务流的方法。
第九方面,提供了一种计算机程序产品,当该计算机程序产品在第一网络设备上运行时,使得第一网络设备执行上述第一方面或第一方面任一种可选方式所提供的基于FlexE传输业务流的方法。
第十方面,提供了一种计算机程序产品,当该计算机程序产品在第二网络设备上运行时,使得第二网络设备执行上述第二方面或第二方面任一种可选方式所提供的基于FlexE传输业务流的方法。
第十一方面,提供了一种芯片,当该芯片在第一网络设备上运行时,使得第一网络设备执行上述第一方面或第一方面任一种可选方式所提供的基于FlexE传输业务流的方法。
第十二方面,提供了一种芯片,当该芯片在第二网络设备上运行时,使得第二网络设备执行上述第二方面或第二方面任一种可选方式所提供的基于FlexE传输业务流的方法。
第十三方面,提供了一种网络系统,该网络系统包括第一网络设备以及第二网络设备,该第一网络设备用于执行上述第一方面或第一方面任一种可选方式所述的方法,该第二网络设备用于执行上述第二方面或第二方面任一种可选方式所述的方法。
第十四方面,提供了一种第一网络设备,该第一网络设备包括:中央处理器、网络处理器和物理接口。中央处理器用于获取时隙分配策略;根据所述时隙分配策略和所述需求带宽,确定第一时隙。网络处理器用于触发物理接口根据所述第一时隙向第二网络设备发送第一业务流。
可选地,所述第一网络设备包括主控板和接口板,所述中央处理器设置在所述主控板上,所述网络处理器和所述物理接口设置在接口板上,所述主控板和所述接口板耦合。
在一种可能的实现方式中,主控板和接口板之间建立进程间通信协议(inter-process communication,IPC)通道,主控板和接口板之间通过IPC通道进行通信。
第十五方面,提供了一种第二网络设备,该第二网络设备包括:中央处理器、网络处理器和物理接口。
中央处理器用于获取时隙分配策略;根据所述时隙分配策略和所述需求带宽,确定第一时隙。网络处理器用于触发物理接口根据所述第一时隙从第一网络设备接收第一业务流。
可选地,所述第二网络设备包括主控板和接口板,所述中央处理器设置在所述主控板上,所述网络处理器和所述物理接口设置在接口板上,所述主控板和所述接口板耦合。
在一种可能的实现方式中,主控板和接口板之间建立IPC通道,主控板和接口板之间通过IPC通道进行通信。
附图说明
图1是本申请实施例提供的一种FlexE Group的结构示意图;
图2是本申请实施例提供的一种FlexE中数据结构示意图;
图3是本申请实施例提供的一种开销帧和开销复帧的结构示意图;
图4是本申请实施例提供的一种FlexE收发两端的对接示意图;
图5是本申请实施例提供的一种时隙配置示意图;
图6是本申请实施例提供的一种系统架构100的示意图;
图7是本申请实施例提供的一种系统架构200的示意图;
图8是本申请实施例提供的一种资源管理层的示意图;
图9是本申请实施例提供的一种基于FlexE传输业务流的方法300的流程图;
图10是本申请实施例提供的一种LLDP帧中的LLDPDU的示意图;
图11是本申请实施例提供的一种FlexE组中不同PHY链路之间保护倒换的示意图;
图12是本申请实施例提供的一种基于FlexE传输业务流的方法400的流程图;
图13是本申请实施例提供的一种网络设备500的结构示意图;
图14是本申请实施例提供的一种网络设备600的结构示意图;
图15是本申请实施例提供的一种网络设备700的结构示意图;
图16是本申请实施例提供的一种网络设备800的结构示意图;
图17是本申请实施例提供的一种网络系统900的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
本申请中术语“第一”“第二”等字样用于对作用和功能基本相同的相同项或相似项进行区分,应理解,“第一”、“第二”之间不具有逻辑或时序上的依赖关系,也不对数量和执行顺序进行限定。还应理解,尽管以下描述使用术语第一、第二等来描述各种元素,但这些元素不应受术语的限制。这些术语只是用于将一元素与另一元素区别分开。例如,在不脱离各种示例的范围的情况下,第一网络设备可以被称为第二网络设备,并且类似地,第二网络设备可以被称为第一网络设备。第一网络设备和第二网络设备都可以是网络设备,并且在某些情况下,可以是单独且不同的网络设备。类似地,“第一”、“第二”用于区分不同的“时隙”,或者区分不同的“业务流”,并不对本申请实施例的保护范围构成限定。
还应理解,术语“如果”可被解释为意指“当...时”(“when”或“upon”)或“响应于确定”或“响应于检测到”。类似地,根据上下文,短语“如果确定...”或“如果检测到[所陈述的条件或事件]”可被解释为意指“在确定...时”或“响应于确定...”或“在检测到[所陈述的条件或事件]时”或“响应于检测到[所陈述的条件或事件]”。
由于本申请实施例涉及灵活以太网(Flexible Ethernet,FlexEth或FlexE)技术的应用,为了便于理解,下面先对FlexE技术以及本申请实施例涉及的FlexE技术中的术语相关概念进行介绍。
(1)FlexE
随着互联网协议(internet protocol,IP)网络应用和业务的多样化,网络流量增加的趋势越来越明显。由于以太网接口标准制定和产品开发中是阶梯型的,当前以太网接口标准都是固定速率,因而会存在传送需求和实际设备接口能力之间的差距,经常需要解决在当前以太 网接口速率等级下,满足更高带宽的需求。对此,光互联网论坛(optical internetworking forum,OIF)FlexE技术创建了一个媒体访问控制(media access control,MAC)层和物理编码子层(physical coding sub Layer,PCS)之间的适配层,使得以太网接口速率可以灵活匹配多种业务场景,并且在更高带宽的网络处理器(network processor,NP)/转发设备出现时,不必等待新的固定速率以太网标准出台,即可发挥设备的最大性能。其中,该适配层称为FlexE夹层(shim)。
FlexE的基本功能是将M个FlexE的业务流(client)按照FlexE Shim的时分复用(time division multiplexing,TDM)机制映射到一个由N条物理层(Physical Layer,PHY)链路组成的灵活以太网组FlexE组上,M和N均为正整数,FlexE的基本架构可如图1所示。其中,M为6,N为4,即图1所示的FlexE是将6个FlexE clients的业务流按照FlexE Shim的TDM机制映射到一个由4条PHY链路组成的FlexE组上。
以100千兆以太网(Gigabit Ethernet,GE)PHY为例,FlexE的映射机制中,每条100G PHY对应着20个64比特(bit,B)/66B码块(block)对应的时隙(time slot,TS),每个码块对应5Gbps(交换带宽)速率的净荷速率(payload rate)。当前FlexE标准支持100GE、200GE、400GE和50GE接口上的FlexE。经过一条100GE PHY的数据的格式如图2所示。图2中,每个块为一个根据IEEE 802.3Clause 82编码(encoded)的64B/66B块,每20个blocks组成一个时隙表(calendar),每个块即TDM映射机制中的一个时隙。每个calendar重复1023次之后,插入1个64B/66B encoded开销块(overhead block)。然后,每8个开销块组成一个开销帧,每32个开销帧组成一个开销复帧。整个FlexE的流量时隙映射(client-slot mapping)和各种管理,都在开销复帧内完成。
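To make the 100GE slot arithmetic above concrete, the following Python sketch (not part of the original disclosure; constant and function names are illustrative) reproduces the figures quoted in the text: 20 slots of roughly 5 Gbps per 100G PHY, 1023 calendar repetitions between overhead blocks, 8 overhead blocks per overhead frame, and 32 overhead frames per overhead multiframe.

```python
# Minimal sketch of the FlexE calendar arithmetic described above.
# Constants mirror the text (100GE PHY, 20 slots, 5 Gbps per slot);
# helper names are illustrative, not taken from any standard API.

SLOTS_PER_100G_PHY = 20          # 64B/66B block positions per calendar
PAYLOAD_PER_SLOT_GBPS = 5        # payload rate carried by one slot
CALENDAR_REPEATS = 1023          # calendars between two overhead blocks
OVERHEAD_BLOCKS_PER_FRAME = 8    # overhead blocks forming one overhead frame
FRAMES_PER_MULTIFRAME = 32       # overhead frames forming one multiframe

def phy_payload_gbps(slots: int = SLOTS_PER_100G_PHY) -> int:
    """Total payload bandwidth offered by one 100G PHY."""
    return slots * PAYLOAD_PER_SLOT_GBPS

def data_blocks_between_overhead() -> int:
    """64B/66B data blocks sent between two consecutive overhead blocks."""
    return SLOTS_PER_100G_PHY * CALENDAR_REPEATS

def blocks_per_multiframe() -> int:
    """Data plus overhead blocks covered by one overhead multiframe."""
    per_overhead_block = data_blocks_between_overhead() + 1
    return per_overhead_block * OVERHEAD_BLOCKS_PER_FRAME * FRAMES_PER_MULTIFRAME

if __name__ == "__main__":
    print(phy_payload_gbps())              # 100
    print(data_blocks_between_overhead())  # 20460
    print(blocks_per_multiframe())         # 5238016
```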
(2)shim
shim基于64/66B之后的块(以66B为基础单位的块)将以太口的带宽资源进行时隙切片,并将切片后的时隙进行统一编号,得到每个时隙对应的时隙编号。发送(transport,TX)端的shim对业务数据进行切片,并将切片后的业务数据封装至预先划分的时隙中,并通过开销帧开销中的calendar,将本端的业务流与时隙编号之间的映射关系传递到接收(receive,RX)端。RX端从开销帧开销中提取业务流与时隙编号之间的映射关系,根据该映射关系从特定的时隙中重组业务流。shim可以对应于网络设备。其中,开销帧开销请参见图3所示的帧结构示意图。
(3)FlexE组
FlexE组也称FlexE捆绑组或捆绑组。FlexE组包括一个或多个PHY,例如,FlexE组可以由1~254个支持100GE速率的PHY组成,其中,0和255是预留位。一个FlexE组对应的带宽资源为该FlexE组中的PHY对应的带宽资源之和。因此,基于FlexE组,FlexE能够满足更大的传输速率和传输带宽。FlexE通过FlexE组可以并行地传输多个业务流,同一业务流的业务数据可以承载于FlexE组中的一个PHY,也可以承载于FlexE组中的不同PHY。换句话说,同一业务流的业务数据可以通过FlexE组中的一个PHY传输至对端,也可以通过FlexE组中的多个PHY传输至对端。为了简明起见,本申请实施例后续在不至于引入理解困难的情况下用“GRP+数字”的形式来简化表示一个具体的FlexE组,如将一个FlexE组简化表示为“GRP1”的形式。可选地,“GRP+数字”中的数字为FlexE组的组标识。
(4)组标识
组标识(Group Number,也称GRP_Number、GRP_ID、组号或组ID)用于标识一组物理接口(FlexE组)。参见图3所示的帧开销,GRP_ID参数体现在归属于FlexE组下的每一个物理接口的开销帧固定的字段,可以认为,GRP_ID是大物理管道的标识。FlexE组对接的两端的组标识可以是一致的。
(5)PHY链路
PHY可以定义为：为传输数据所需要的物理链路建立、维持、拆除而提供具有机械的、电子的、功能的和规范的特性。本文中提到的PHY可以包括收发两端的物理层工作器件，以及位于收发两端之间的传输介质(比如光纤)，物理层工作器件例如可以包括以太网的物理层接口设备(physical layer interface devices)等。因此，在本文中，一个PHY链路可以理解为一个物理层通道，该物理层通道包括RX端设备的端口、TX端设备的端口和两个端口之间的通信链路。为了简明起见，本申请实施例后续在不至于引入理解困难的情况下用“PHY+数字”的形式来简化表示一条具体的PHY链路，如将一条PHY链路简化表示为“PHY1”的形式。可选地，“PHY+数字”中的数字为PHY链路的物理接口编号。
(6)物理接口编号
物理接口编号(PHY Number,也称物理口号、物理口标识或物理口ID)为物理接口的标识,FlexE根据物理接口编号来组织复帧,基于物理接口编号对多个PHY链路上的时隙进行统一编号。一般来讲,一个PHY链路在收发两端的物理接口编号可以是相同的。或者,一个PHY链路在收发两端的物理接口编号不同,但是收发两端的物理接口编号存在一一对应的关系。
(7)时隙
时隙是指时分复用模式中的一个时间片。例如,100G带宽的FlexE组具有20个带宽为5G的时隙。此外,在支持带宽为1G粒度的场景下,每个带宽为5G的时隙可以划分为5个带宽为1G的子时隙。为了简明起见,本申请实施例后续在不至于引入理解困难的情况下用“TS+数字”的形式来简化表示一个时隙,如将一个时隙简化表示为“TS1”的形式。可选地,“TS+数字”中的数字为时隙编号。
(8)时隙编号
时隙编号(TS Number,TS_NUM,也称ts_no、TS标识或TS ID)用于标识对应的时隙。一个FlexE组通常具有多个时隙,这些时隙会被统一编号,每个时隙对应1个时隙编号。
(9)业务流
业务流(client)对应于网络的各种业务接口,与IP/以太网(Ethernet)网络中的传统业务接口一致。FlexE Client可根据带宽需求灵活配置,支持各种速率的以太网MAC数据流(如10G、40G、n*25G数据流,甚至非标准速率数据流),并通过64B/66B的编码的方式将数据流传递至FlexE Shim层。为了简明起见,本申请实施例后续在不至于引入理解困难的情况下用“client+数字”的形式来简化表示一条业务流,如将一条业务流简化表示为“client1”的形式。可选地,“client+数字”中的数字为业务流标识。
(10)业务流标识
业务流标识(client_ID)用于标识业务流。基于某一个FlexE组,可以创建一条或多条业务流,不同的业务流可通过不同的业务流标识区分。
参见图3,开销帧(也称管理帧)和开销复帧的格式如图3所示。client id在复帧开销的 calendar中体现。如图3所示,FlexE开销(overhead,OH)中包括FlexE组中所有的FlexE Client的时隙表配置信息。相关技术中,为了使FlexE Client在改变时隙带宽配置的时候,不出现流量损失,可采用两张时隙表:Calendar A和Calendar B,这两张时隙表具有如下特点。
特点1:任意时间只有一张时隙表处于工作状态,也就是说,任意时间,要么Calendar A处于工作状态,要么Calendar B处于工作状态。
特点2:对接FlexE组的TX端和RX端,通过FlexE OH开销的时隙协商机制保障TX与RX的工作时隙表的一致性。
例如,Calendar A处于工作状态,那么Calendar B则处于相应时隙配置的备用状态。
特点3:时隙协商的发起端是TX,而RX则处于被动接收状态。假设Calendar A处于工作状态,那么TX会将变化的Calendar B通过FlexE OH开销刷新给RX。随后TX会发起时隙表切换请求(calendar switch request,CSR)时隙协商请求,要求将工作表切换到Calendar B上,TX收到RX的回应后,TX触发TX和RX均将工作表切换到Calendar B。
需要说明的是,在对接FlexE组的TX和RX两端,首次建立连接后,也会触发一次FlexE OH开销的时隙协商,以保证两端处于工作的时隙表是一致的。
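The Calendar A/B switching procedure above (TX refreshes the standby calendar, raises the switch request, and both ends switch once RX acknowledges) can be summarised by the following hedged Python sketch; the class and the overhead dictionaries are illustrative abstractions of the CR/CA handshake, not the on-wire encoding.

```python
# Hedged sketch of the TX-driven Calendar A/B switch handshake.
# "send_overhead" stands for whatever mechanism writes overhead fields
# towards the peer; all names are illustrative.

class CalendarSwitcher:
    def __init__(self):
        self.active = "A"                      # calendar currently in use
        self.standby = {"A": "B", "B": "A"}    # the other calendar

    def tx_update_standby(self, new_slot_table, send_overhead):
        """TX refreshes the standby calendar towards RX via overhead blocks."""
        send_overhead({"calendar": self.standby[self.active],
                       "slots": new_slot_table})

    def tx_request_switch(self, send_overhead):
        """TX raises the calendar switch request (CR/CSR)."""
        send_overhead({"CR": 1, "target": self.standby[self.active]})

    def tx_on_ack(self, overhead):
        """When RX acknowledges (CA), TX switches its working calendar."""
        if overhead.get("CA") == 1:
            self.active = self.standby[self.active]

    def rx_on_request(self, overhead, send_overhead):
        """RX answers the request and switches to the same calendar."""
        if overhead.get("CR") == 1:
            send_overhead({"CA": 1})
            self.active = overhead["target"]
```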
图3中除了包括上述Calendar A和Calendar B之外,还包括如下信息。
C:用于指示正在使用的时隙配置(calendar configuration in use)。如图3所示的开销帧的第一个块中编号为8的比特位字段、第二个块中编号为0的比特位字段和第三个块中编号为0的比特位字段均携带C。
开销多帧指示器(overhead multiframe indicator,OMFI),在IA OIF-FlexE-01.0/01.1/02.2/02.1等标准中称为OMF:用于指示复帧的边界。如图3所示的开销帧的第一个块中编号为9的比特位字段携带该OMF。其中,在一个复帧里,前16个单帧的OMF的值为0,后16个单帧的OMF的值为1,通过0和1之间的转换,能够确定复帧的边界。
远程物理故障(remote PHY fault,RPF):用于指示远程物理故障。如图3所示的开销帧的第一个块中编号为10的比特位字段携带该RPF。
同步控制(synchronization control,SC):用于同步控制。如图3所示的开销帧的第一个块中编号为11的比特位字段携带该SC。
灵活以太图(FlexE Map):用于控制哪些FlexE实例是此组的成员(Control of which FlexE Instances are members of this group)。如图3所示的开销帧的第2个块中编号为1至编号为8的比特位字段携带该FlexE Map。示例性地,该FlexE Map包括FlexE组内的PHY链路信息,FlexE Map的每个比特位对应一个PHY链路,FlexE Map的每个比特位的值用于表示该比特位对应的PHY链路是否在该FlexE组中。例如,如果比特位的值为第一值,例如该第一值为1,则认为该比特位对应的PHY链路在该FlexE组中。如果比特位的值为第二值,例如该第二值为0,则认为该比特位对应的PHY链路不在该FlexE组中。
灵活以太实例号(FlexE instance Number):表示组中此FlexE实例的标识(Identity of this FlexE instance within the group)。如图3所示的开销帧的第2个块中编号为9至编号为16的比特位字段携带该FlexE instance Number。
灵活以太网组标识。如图3所示的开销帧的第1个块中编号为12至编号为31的比特位字段携带Group Number。
时隙表切换确认(calendar switch acknowledgement,CSA):在执行协议(implementation agreements,IA)OIF-FlexE-01.0/01.1/02.2/02.1等标准中称为CA,其中,01.0/01.1/02.2/02.1是IA OIF-FlexE标准的几个版本。如图3所示的开销帧的第3个块中编号为34的比特位字段携带该CA。
时隙表切换请求(calendar switch request,CSR):在IA OIF-FlexE-01.0/01.1/02.2/02.1等标准中称为CR。如图3所示的开销帧的第3个块中编号为33的比特位字段携带该CR。
同步头(synchronization head,SH):如图3所示的开销帧的帧头。
S:有效同步头位(valid sync header bits):如图3所示的开销帧的第4个块至第8个块中的SH下的字段携带该S。
管理通道(Management Channel):如图3所示的开销帧的第4个块至第8个块携带该管理通道。
CRC-16:用于对开销块的内容进行循环冗余校验(cyclic redundancy check,CRC)保护。如图3所示的开销帧的第3个块中编号为48至编号为63的比特位字段携带该CRC-16。
除包括上述信息的字段外,图3中还包括预留(reserved)字段,如图3所示的开销帧的第二个块中编号为17至编号为63比特位字段、第三个块中编号为35至编号为47比特位字段均为预留字段。
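As a reading aid for the field list above, the following illustrative Python table records which overhead block and bit positions each field occupies according to the text; it is descriptive only and not a normative frame definition.

```python
# Illustrative map of overhead-frame fields: (block index, first bit, last bit),
# inclusive bit ranges as listed in the description of Figure 3.

OVERHEAD_FIELDS = {
    "C (calendar in use)":      [(1, 8, 8), (2, 0, 0), (3, 0, 0)],
    "OMF":                      [(1, 9, 9)],
    "RPF":                      [(1, 10, 10)],
    "SC":                       [(1, 11, 11)],
    "Group Number":             [(1, 12, 31)],
    "FlexE Map":                [(2, 1, 8)],
    "FlexE Instance Number":    [(2, 9, 16)],
    "CR (calendar switch req)": [(3, 33, 33)],
    "CA (calendar switch ack)": [(3, 34, 34)],
    "CRC-16":                   [(3, 48, 63)],
    "Reserved":                 [(2, 17, 63), (3, 35, 47)],
}

def fields_in_block(block_no: int):
    """Return the fields carried (at least partly) by a given overhead block."""
    return [name for name, spans in OVERHEAD_FIELDS.items()
            if any(block == block_no for block, _, _ in spans)]
```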
以上介绍了FlexE技术以及相关的术语概念,以下对FlexE技术在具体应用中的情况举例说明。
FlexE技术当前已经处于商用推广阶段,介于协议层面本身对应用呈现的配置资源及定义,以及不同速率等级下的资源配置差异,在两端业务流建立过程,用户需要对FlexE组的组网、PHY链路、PHY链路速率、时隙、子时隙捆绑策略及限制进行深度干预,以下简述两端业务流建立过程。
请参考图4,其示出了收发两端的对接模型,用户组建了一个200G的FlexE组,该FlexE组由2个100G的PHY链路组成,每个PHY链路有20个5G的时隙。在建立FlexE组以及两端业务流的对接的过程中,用户要配置组标识、物理接口编号、时隙编号、流标识等诸多参数。
例如,为了建立client1,需要执行以下S1至S6。其中,client1为从网络设备A至网络设备B传输的业务流。client1所需的带宽为5G。
S1、用户在网络设备A上创建业务流,用户将业务流的流标识配置为client1。
S2、用户在网络设备B上创建业务流,用户将业务流的流标识配置为client1。换句话说,在网络设备B上配置的流标识要与在网络设备A上配置的流标识一致。
S3、用户在网络设备A上指定client1从FlexE组的phy number1的物理接口的2号时隙发送。
S4、用户在S3中的配置信息(即client1与2号时隙之间的对应关系)通过FlexE开销帧传递到网络设备B。
S5、网络设备B从FlexE开销帧(即图3所示的开销帧)中提取配置信息,并获取到client1来自group的phy number1的物理接口的2号时隙。
S6、网络设备B从2号时隙重建client,从而建立流量。
其中,上述S1至S6为传输方向为从网络设备A向网络设备B方向为例进行说明。当业务流的传输方向为从网络设备B向网络设备A时,用户要在网络设备B上执行配置操作,配置时隙与业务流之间的映射关系。其中,在网络设备B上指定的时隙不要求与在网络设备A上指定的时隙一致,即,一条client的收发时隙可以不一致。
然而,在实施上述方法时,会面临诸多问题。
从配置难度的角度来看,随着IA OIF-FlexE-02.0对协议做了扩展,端口类型增加200G及400G。协议开销中将1.0定义的PHY number修改为instance number,资源的分层增加了一级,导致用户配置时隙变得更加难以管理。用户至少要了解以下(1)至(5)。
(1)FlexE组下绑定了哪些物理接口。
(2)物理接口下对应的instance number。
(3)特定的instance中哪些时隙是空闲可用的。
(4)特定的时隙中哪些子时隙是空闲可用的。
(5)多个空闲的子时隙间可能存在的捆绑限制。
用户只有完全理解如上协议层面的配置信息及限制,并需要强感知分片使用情况,才能合理的规划时隙,从而对运维人员要求较高。
请参考图5,下面结合一个业务部署的场景介绍时隙配置的复杂性。
用户在网络设备A与网络设备B之间部署一个FlexE组,该FlexE组包括2个200G的FlexE物理接口,该FlexE组总带宽400G。在运行初期,用户创建了3条业务流,分别为client1、client2和client3。
client1的带宽1G。在为client1配置时隙的过程中,由于1G时隙是基于5G时隙时分复用,一个5G时隙可以拆分出5个1G子时隙,用户先要确认FlexE组的所有5G时隙中,是否当前已经有5G时隙被拆分,并且仍然存在1G的空闲子时隙。如果有,则用户从空闲子时隙中指定1个空闲子时隙给client1。如果没有,则用户选择一个空闲的主时隙,基于该主时隙拆分出5个1G子时隙,用户选择1个1G子时隙分配给client1。
client2的带宽5G。在为client2配置时隙的过程中,用户选择任意一个空闲的5G时隙分配给client2。
client3的带宽15G。在为client3配置时隙的过程中,用户选择任意三个空闲的5G时隙分配给client3。
总结来看,用户配置时隙的过程相对复杂。
从网络运维的角度来看,组成FlexE组的物理接口存在故障的风险,如光纤损坏、光纤老化等情况都有可能导致物理接口发生故障。时下,物理接口的故障对业务流的影响范围是不可控的,是否影响到业务流依赖于用户对时隙的配置。具体地,用户为业务流部署了某个PHY链路上的时隙后,如果该PHY链路对应的物理接口发生故障,该物理接口无法传输业务流,导致业务流传输中断,换句话说,用户部署的时隙刚好由故障的物理接口提供时,就会影响到业务流。并且,业务流的故障恢复依赖于用户对业务流在可用时隙的重新部署。换句话说,只要用户还没有为业务流重新指定对应的时隙,由于业务流所分配的时隙一直是故障PHY链路上的时隙,业务流会一直处于中断状态,难以从故障状态下及时恢复。此外,当前FlexE组的业务保护能力不足,只能基于不同的FlexE组之间保护,无法实现基于FlexE 组内物理接口间的1:1或者N:1保护。
有鉴于上述描述的FlexE技术的应用情况,本申请实施例提供了一种基于FlexE传输业务流的方案,由业务流的收发两端结合对业务流的需求带宽,基于时隙分配策略自动为业务流分配时隙,根据分配的时隙传输业务流。从配置难度的角度来看,由于用户无需感知时隙如何编排,免去了配置时隙的复杂操作,因此极大地降低了配置难度。从网络运维的角度来看,当物理接口或PHY链路发生故障时,业务流的收发两端能结合原来的时隙分配策略和业务流需求带宽自动重新分配时隙,将业务流从不可用的时隙自动倒换至新分配的时隙上,从而实现时隙的动态迁移,使得业务流快速从故障中恢复。
下面,将从系统架构、方法、虚拟装置、实体装置、介质等多个角度,对本申请实施例提供的技术方案进行描述。
下面介绍本申请实施例提供的系统架构。
参见附图6,本申请实施例提供了一种系统架构100。系统架构100是对方法300所基于的硬件环境的举例说明。系统架构100包括网络设备101和网络设备102。网络设备101和网络设备102例如是路由器或交换机等。
网络设备101和网络设备102建立一个或多个FlexE组,每个FlexE组包括具有捆绑关系的多条PHY链路。例如,请参考图6,网络设备101和网络设备102建立了一个组标识为GRP1的FlexE组,该FlexE组包括两条捆绑的PHY链路,这两条PHY链路分别是PHY1和PHY2。应理解,不同PHY链路之间的捆绑是指逻辑上的捆绑关系,而不一定存在物理连接关系。也就是说,FlexE链路组中的多条PHY链路在物理上可以是互相独立的。PHY链路例如包括光纤。
FlexE组中的每个PHY链路可提供至少一个时隙,每个时隙对应一定大小的带宽,FlexE组总共具有的带宽例如为每个PHY链路上每个时隙对应的带宽之和。例如,请参考图6,FlexE组总共被配置了100G大小的带宽,FlexE组总共具有20个时隙,每个时隙对应于5G的带宽。其中,PHY1提供10个时隙,PHY2提供另外10个时隙。
网络设备101和网络设备102之间通过FlexE组传输一个或多个业务流,每个业务流占用FlexE组中一个或多个PHY链路上的一个或多个时隙。可选地,同一个业务流占用的时隙分布在同一个PHY链路上,或,同一个业务流占用的时隙分布在多个PHY链路中的每个PHY链路上,例如平均分布在FlexE组中的不同PHY链路上。例如,请参考图6,网络设备101和网络设备102之间创建了3条业务流,这3条业务流分别为client1、client2和client3。其中,client1使用了5G的带宽,client1占用了PHY1的TS1。client2使用了5G的带宽,client2占用了PHY2的TS1,client3使用了40G的带宽,client3占用了PHY1的TS2至TS9。
可选地,网络设备101和网络设备102中同一个FlexE组内的不同PHY链路之间具有保护关系。保护关系包括而不限于1:1保护关系或N:1保护关系。1:1保护关系是指使用一条PHY链路保护另一条PHY链路。N:1保护关系是指使用一条PHY链路保护N条PHY链路。
保护关系包括而不限于主备保护关系和对等保护关系。例如,建立了保护关系的不同PHY链路为主备关系,例如,请参考图1,PHY1为主PHY链路,PHY2为备PHY链路,PHY2用于保护PHY1,当PHY1故障后,PHY1上的业务流倒换至PHY2。可选地,建立了保护关 系的不同PHY链路为对等关系,例如,请参考图1,PHY1和PH2之间互相保护,当PHY1故障后,PHY1上的业务流倒换至PHY2,当PHY2故障后,PHY2上的业务流倒换至PHY1。可选地,网络设备101和网络设备102的物理接口分为工作口和保护口,网络设备101的工作口和网络设备102的工作口建立主PHY链路,网络设备101的保护口和网络设备102的保护口建立备PHY链路,一条备PHY链路保护一条主PHY链路,形成1:1保护关系,或者一条备PHY链路保护N条主PHY链路,形成N:1保护关系。
可选地,网络设备101和网络设备102之间传输的每个业务流对应一个优先级。不同业务流的优先级相同或不同。例如,请参考图6,client1、client2和client3分别具有优先级。例如,这3条业务流中,client1的优先级最高,client2的优先级其次,client3的优先级最低。可选地,备PHY链路上传输的业务流的优先级低于主PHY链路传输的业务流的优先级,当主PHY链路发生故障时,主PHY链路上的业务流会抢占备PHY链路的时隙,主PHY链路上的业务流会被切换至备PHY链路上。
应理解,图6所示的建立一个FlexE组的场景仅是举例,另外,FlexE组包括两个PHY链路的场景也仅是举例。系统架构100内建立的FlexE组的数量可以更多或更少,一个FlexE组包括的PHY链路的数量可以更多或更少,此时虽然图6未示出,系统架构100还包括GRP1之外的其他FlexE组,系统架构100还包括PHY1、PHY2之外的其他PHY链路,本申请实施例对系统架构100内建立的FlexE组的数量以及PHY链路的数量不加以限定。例如,请参考图1,网络设备101和网络设备102也可以建立PHY1、PHY2、PHY3和PHY4这4条PHY链路。
以上系统架构100侧重于描述整体的网络架构,以下通过系统架构200,对设备内部的逻辑功能架构进行描述。
请参考附图7,本实施例提供了另一种系统架构200。系统架构200是对网络设备的逻辑功能架构的举例说明。
系统架构200包括用户配置层201、资源管理层(也称Resource Management Layer、RS MNG Layer、资源管理子层或RS MNG)202、shim层203和FlexE物理接口204。在FlexE业务架构的视图中,资源管理层202位于用户配置层201与shim层203之间。
用户配置层201用于接收和保存用户的配置信息,例如保存时隙分配策略和业务流的需求带宽。可选地,时隙分配策略包括PHY链路正常的情况下使用的时隙分配策略(也称带宽分配策略)和PHY链路故障的情况下使用的时隙分配策略(也称时隙迁移策略),用户配置层201保存带宽分配策略和时隙迁移策略。可选地,用户配置层201保存业务流标识和需求带宽之间的对应关系。例如,参见图7,client1的需求带宽(Band Width,BW)为BW1,client2的需求带宽为BW2,client3的需求带宽为BW3,用户配置层201保存client1和BW1之间的对应关系、client2和BW2之间的对应关系、client3和BW3之间的对应关系。
shim层203的功能请参考上文术语介绍(2)的描述。
资源管理层202用于管理时隙。资源管理层202的功能包括以下功能(1)至功能(5),每个功能分别如何实现还请参考下述方法300或方法400。
功能(1)用户直接规划和配置client的需求带宽,而无需感知时隙编排,对用户屏蔽管理时隙的细节。
功能(2)接收用户定制的时隙分配策略,基于时隙分配策略自主完成时隙的管理、维护和分配。
功能(3)在PHY链路发生故障的场景,基于用户定制的时隙分配策略,自主完成时隙的迁移。
功能(4)基于链路层发现协议(Link Layer Discovery Protocol,LLDP)实现双端网元间协商机制,实现本端对时隙分配策略的推送和从对端接收时隙分配策略。
其中,推送的时隙分配策略用于供本端在时隙迁移的过程中在TX方向分配时隙。接收的时隙分配策略用于供本端在时隙迁移的过程中在RX方向分配时隙。
功能(5)监控FlexE物理接口204的状态,快速响应FlexE物理接口204的故障状态,按照预定的时隙分配策略,执行TX方向或RX方向的时隙迁移。
以上系统架构200介绍了整体的逻辑功能架构,以下对系统架构200中的资源管理层202进行详细介绍。
资源管理层202包括至少一个功能模块,每个功能模块采用软件实现,换句话说,功能模块为网络设备的处理器读取存储器中存储的程序代码后生成的。例如,请参考附图8,资源管理层202的功能模块包括TX策略模块2021、RX策略模块2022、带宽分配模块2023、时隙迁移模块2024、时隙资源池2025。
TX策略模块2021用于根据用户的定义,保存时隙分配策略。TX策略模块2021还用于通过LLDP,将时隙分配策略推送到对端。TX策略模块2021还用于根据时隙分配策略在TX方向分配时隙。
RX策略模块2022用于接收对端推送的时隙分配策略,保存时隙分配策略。RX策略模块2022还用于根据时隙分配策略在RX方向分配时隙。
带宽分配模块2023用于在用户增删业务流或配置需求带宽等情况下,根据业务流的需求带宽和TX策略模块2021保存的时隙分配策略分配时隙。
时隙迁移模块2024用于在PHY链路处于故障状态的情况下,根据业务流的需求带宽和RX策略模块2022保存的时隙分配策略分配时隙。
时隙资源池2025用于保存和维护PHY链路的空闲时隙。
以上介绍了系统架构100和系统架构200,以下通过方法300,示例性介绍基于系统架构100和系统架构200传输业务流的方法流程。
参见图9,图9是本申请实施例提供的一种基于FlexE传输业务流的方法300的流程图。方法300包括以下S301至S311。
方法300以业务流的传输方向为从第一网络设备至第二网络设备为例进行说明。换句话说，第一网络设备为上游网元，第二网络设备为下游网元。应理解，第一网络设备至第二网络设备的业务流传输流程与第二网络设备至第一网络设备的业务流传输流程原理一致，如果将业务流的传输方向替换为从第二网络设备至第一网络设备，也可以利用方法300传输业务流，在此不做赘述。
可选地,第一网络设备为系统架构100中的网络设备101,第二网络设备为系统架构100中的网络设备102。
可选地,第一网络设备和第二网络设备均具有系统架构200所示的逻辑功能架构。第一网络设备和第二网络设备通过系统架构200包括的功能模块执行方法300。例如,方法300中的时隙分配策略、需求带宽等数据通过用户配置层201接收、保存和维护,方法300中时隙分配相关的步骤(如S306、S307、S309和S310)通过资源管理层202执行,方法300中传输业务流的步骤通过shim层203和FlexE物理接口204执行。
可选的，方法300由通用中央处理器(central processing unit，CPU)处理，也可以由CPU和NP共同处理，例如，CPU执行S302至S307、S309至S310对应的处理动作，NP执行S308和S311对应的处理动作。当然，也可以不用NP，而使用其他适合用于报文转发的处理器执行S308和S311对应的处理动作，本申请不做限制。
应理解,方法300侧重描述如何分配时隙,如何传输业务流的技术细节还请参考上文图1至图3的介绍。
S301、第一网络设备与第二网络设备建立PHY链路。
第一网络设备与第二网络设备可以创建FlexE组，该FlexE组包括第一网络设备与第二网络设备之间的多个PHY链路。其中，FlexE组可以基于用户的规划创建。FlexE组用于传输业务流。FlexE组可以理解为一个大管道，该管道包括第一网络设备和第二网络设备的具备捆绑能力的一个或多个物理接口。例如，请参考图6所示的系统架构100，网络设备101和网络设备102组建了带宽为100G、组标识为GRP1的FlexE组，并将2个带宽为50G的物理接口加入到FlexE组。
第一网络设备与第二网络设备如何创建FlexE组包括多种实现方式。在一种可能的实现中,用户基于FlexE组的对接参数,对第一网络设备与第二网络设备进行配置操作,从而完成FlexE组的配置。FlexE组的属性包括FlexE组的配置带宽和FlexE组的可用带宽。FlexE组的配置带宽是指用户规划的FlexE组的带宽,FlexE组的配置带宽为FlexE组内绑定的物理接口带宽数量之和。FlexE组的可用带宽是指FlexE组内当前处于激活状态的物理接口的带宽数量之和。其中,激活状态也称连接(link)状态,激活状态是去激活状态相对的概念,若FlexE组内存在部分物理接口处于非link状态,则非link状态的物理接口对应的带宽资源是不可用的,该物理接口处于去激活态。
在一些实施例中,配置FlexE组的流程包括以下步骤A和步骤B。
步骤A、用户在第一网络设备和第二网络设备上分别执行FlexE组的创建操作,用户指定FlexE组的组标识,在第一网络设备和第二网络设备上分别输入组标识。第一网络设备和第二网络设备响应于用户的操作,创建FlexE组,将FlexE组的组标识配置为用户指定的组标识。
步骤B、用户指定PHY链路的物理接口编号等其他FlexE组对接所需的参数,在第一网络设备和第二网络设备上分别输入指定的参数,第一网络设备和第二网络设备基于步骤A创建的FlexE组,添加FlexE的物理接口,并配置指定的参数。
S302、第一网络设备获取第一业务流的配置信息。
第一业务流的配置信息包括第一业务流的需求带宽或第一业务流的优先级中的至少一项。可选地,第一业务流的配置信息通过用户的配置操作得到。换句话说,第一业务流的需求带宽和第一业务流的优先级由用户指定。
S303、第二网络设备获取第一业务流的配置信息。
第二网络设备获取的第一业务流的配置信息与第一网络设备获取的第一业务流的配置信息相同。换句话说,第一业务流的配置信息在RX端和TX端是一致的。
应理解,本实施例对S302与S303的时序不做限定。在一些实施例中,S302与S303可以顺序执行。例如,可以先执行S302,再执行S303;也可以先执行S303,再执行S302。在另一些实施例中,S302与S303也可以并行执行,即,可以同时执行S302以及S303。
S304、第一网络设备获取时隙分配策略。
本实施例中,针对PHY链路正常以及PHY链路故障下的时隙分配场景定义了若干策略,在此将这种策略称为时隙分配策略。第一网络设备与第二网络设备之间可以传输一条或多条业务流,在下面的实施例中,将以传输第一业务流为例,对如何实施第一时隙分配策略的流程进行示例性说明。
时隙分配策略用于根据第一业务流的需求带宽分配时隙。在增删业务流、PHY链路发生故障、需求带宽发生更新、增删PHY链路等各种场景的触发下,第一网络设备或第二网络设备会按照时隙分配策略自动为第一业务流分配时隙。通过提供了时隙分配策略,用户只需配置带宽即可,无需了解FlexE协议实现细节,用户无需对时隙、子时隙进行精心规划,因而大大简化了配置复杂度。
需求带宽是指传输第一业务流所需满足的带宽。可选地,需求带宽是用户给第一业务流指定的带宽,需求带宽也称为配置带宽。例如,第一业务流为client1,终端向第一网络设备发送带宽请求,带宽请求用于为client1申请分配需求带宽,带宽请求携带BW1,BW1为client1对应的需求带宽。第一网络设备从带宽请求中获取BW1,从而确定client1需要通过BW1大小的需求带宽。
应理解,本实施例对S302与S304的时序不做限定。在一些实施例中,S302与S304可以顺序执行。例如,可以先执行S302,再执行S304;也可以先执行S304,再执行S302。在另一些实施例中,S302与S304也可以并行执行,即,可以同时执行S302以及S304。
S305、第二网络设备获取时隙分配策略。
第二网络设备获取的时隙分配策略和第一网络设备获取的时隙分配策略相同。通过这种方式,由于RX端(第二网络设备)和TX端(第一网络设备)的时隙分配策略是一致的,因此RX端和TX端根据同样的时隙分配策略和同样的需求带宽,会为业务流确定相同的时隙。由于RX端确定的时隙和TX端确定的时隙相同,因此免去了双端协商时隙带来的通信开销,降低了传输业务流的时延。
如何保证RX端和TX端的策略一致性包括多种实现方式,以下通过实现方式(1)至实现方式(2)举例说明。
实现方式(1)TX端向RX端推送时隙分配策略。
第一网络设备(TX端)得到时隙分配策略后,会向第二网络设备(RX端)发送时隙分配策略,第二网络设备可以从第一网络设备接收时隙分配策略。
实现方式(1)的效果包括:通过推送的方式得到时隙分配策略,一方面,保证了RX端和TX端的策略一致性,从而保证在PHY链路发生故障、PHY链路增删、需求带宽更新等各种时隙迁移的场景下,由于RX端和TX端利用一致的时隙分配策略,RX端重新部署的时隙和TX端重新部署的时隙具有一致性,有助于流量快速恢复。另一方面,免去了用户对RX端配置时隙分配策略的流程,因此降低了配置的复杂度,提高了部署时隙分配策略的效率。
推送时隙分配策略的频率包括多种情况。可选地,第一网络设备每隔一个时间周期,向第二网络设备推送一次时隙分配策略。如此,TX端定时将时隙分配策略推送给RX端。当然,第一网络设备也可以实时推送时隙分配策略,或在指令的触发下推送时隙分配策略。
如何推送时隙分配策略包括多种实现方式。可选地,推送时隙分配策略的流程通过协商的方式进行。具体地,第一网络设备生成协商请求,向第二网络设备发送协商请求,第二网络设备从第一网络设备接收协商请求,第二网络设备根据协商请求,确定时隙分配策略。其中,该协商请求用于指示时隙分配策略。例如,协商请求包括时隙分配策略的标识。
如何协商时隙分配策略包括多种实现方式。可选地,第二网络设备与第一网络设备基于LLDP协议对时隙分配策略进行协商,相应地,上述协商请求为LLDP帧。
如何利用LLDP协商时隙分配策略包括多种实现方式。在一种可能的实现中,对LLDP帧的结构进行扩展,使得LLDP帧包括策略字段,策略字段的值用于指示传输第一业务流采用的时隙分配策略。例如,若策略字段的值为0,指示传输第一业务流采用基于需求带宽的时隙分配策略。若策略字段的值为1,指示传输第一业务流采用基于激活带宽的时隙分配策略。若策略字段的值为2,指示为传输第一业务流采用基于优先级抢占的时隙分配策略。通过这种方式,第一网络设备采用LLDP协商的方式,将采用的时隙分配策略推送至第二网络设备。
如何通过LLDP帧携带策略字段包括多种实现方式。在一种可能的实现中,通过LLDP帧的类型-长度-值(Type-Length-Value,TLV)携带策略字段。以携带策略字段的TLV称为策略TLV为例,LLDP帧包括策略TLV,策略TLV的值包括策略字段。
策略TLV具体包括多种情况。可选地,策略TLV是新的顶级(top)TLV,该策略TLV的类型(type)字段的值表示未使用的top TLV的类型。可选地,该策略TLV是top TLV的新的子TLV,该策略TLV的type字段的值表示未使用的子TLV的类型。可选地,该策略TLV是top TLV的新的子子TLV(sub-sub-TLV),该策略TLV的type是未使用的sub-sub-TLV的类型。本实施例对策略TLV是top TLV、sub-TLV还是sub-sub-TLV不做限定。
可选地,策略TLV为T=127的TLV的子TLV。例如,请参考图10,LLDP帧中的LLDP载荷(LLDPDU)包括机箱ID TLV(Chassis ID TLV)、端口ID TLV(Port ID TLV)、生存时间TLV(Time to Live TLV,TTL TLV)、可选TLV(Optional TLV)、LLDP载荷结束TLV(End of LLDPDU TLV)。OptionalTLV包括T=127的TLV,T=127的TLV的子TLV包括策略TLV。
其中,T=127的TLV是TLV Type字段包括127的TLV。T=127的TLV为厂商预留的TLV。T=127的TLV的TLVLength字段可以包括9。T=127的TLV可以包括组织机构的ID(Organizationally unique identifier,OUI)。策略TLV包括子类型(sub Type)字段和策略字段。sub Type字段的值为新增的值,用于表示策略TLV。利用这种实现方式,通过扩展一个子TLV,指明了时隙分配策略。
其中,Chassis ID TLV用于通告LLDPDU发送者的机箱ID(chassis ID),Port ID TLV用于标识发送该LLDPDU的设备的端口。Time to Live TLV用于同时通知接收端接收到的信息的有效期。End Of LLDPDU TLV用于标识LLDPDU的结束。
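A hedged Python sketch of the policy TLV described above, assuming the standard organizationally specific LLDP TLV layout (7-bit type 127, 9-bit length, 3-byte OUI, 1-byte subtype) and carrying only a single policy byte. The OUI and subtype values are placeholders; the text mentions a TLV Length of 9, so the real TLV may carry additional fields not modelled here.

```python
# Hedged sketch of an organizationally specific LLDP TLV (type 127) that
# carries the slot allocation policy field. Policy codes follow the text:
# 0 = demand bandwidth, 1 = activation bandwidth, 2 = priority preemption.
import struct

POLICY_DEMAND_BW = 0
POLICY_ACTIVATION_BW = 1
POLICY_PRIORITY_PREEMPT = 2

def build_policy_tlv(policy: int, oui: bytes = b"\x00\x00\x00",
                     subtype: int = 0x01) -> bytes:
    value = oui + bytes([subtype, policy])      # OUI + subtype + policy byte
    tlv_type = 127
    header = (tlv_type << 9) | len(value)       # 7-bit type, 9-bit length
    return struct.pack("!H", header) + value

def parse_policy_tlv(tlv: bytes) -> int:
    header, = struct.unpack("!H", tlv[:2])
    assert header >> 9 == 127, "not an organizationally specific TLV"
    length = header & 0x1FF
    value = tlv[2:2 + length]
    return value[4]                             # policy byte after OUI + subtype
```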
可选地,协商时隙分配策略的流程通过资源管理层执行。例如,请参考图8,第一网络设备对应于资源管理层202中的TX策略模块2021,第二网络设备对应于资源管理层202中的RX策略模块2022。TX端的TX策略模块2021会基于LLDP协议,向RX端推送时隙分 配策略。RX端的RX策略模块2022基于LLDP协议,接收TX端推送的时隙分配策略,保存时隙分配策略。
实现方式(2)在TX端和RX端静态配置一致的时隙分配策略。
可选地,用户对第二网络设备触发配置操作,第二网络设备根据用户的配置操作确定时隙分配策略。
应理解，在采用静态配置的手段实现S304和S305时，本实施例对S304与S305的时序不做限定。在一些实施例中，S304与S305可以顺序执行。例如，可以先执行S304，再执行S305；也可以先执行S305，再执行S304。在另一些实施例中，S304与S305也可以并行执行，即，可以同时执行S304以及S305。
应理解,本实施例对S303与S305的时序不做限定。在一些实施例中,S303与S305可以顺序执行。例如,可以先执行S303,再执行S305;也可以先执行S305,再执行S303。在另一些实施例中,S303与S305也可以并行执行,即,可以同时执行S303以及S305。
S306、第一网络设备根据时隙分配策略和需求带宽,确定第一时隙。
第一网络设备和第二网络设备基于创建的FlexE组,利用S304得到的时隙分配策略,为第一业务流确定时隙,将确定的时隙分配给第一业务流。例如,请参考图8,资源管理层202保存第一业务流的优先级、第一业务流的带宽需求、时隙资源池和用户定制的带宽分配策略,以上作为S306的输入数据,资源管理层202根据输入数据,执行分配时隙的步骤,资源管理层202输出业务流对应的时隙。
通过这种方式,由用户通过指令指定时隙,改进为用户指定带宽需求,由网络设备根据用户定制的时隙分配策略和需求带宽管理时隙,从而将时隙分配的权利从用户收回给网络设备,因此免去了相关技术中用户配置时隙的繁琐操作,简化了用户配置。此外,在很多场景下,网络设备可以根据时隙分配策略重新分配时隙,使得业务流从原来的时隙迁移至重新分配的时隙,因此具备时隙动态迁移的能力。
方法300中,以S306中为第一业务流确定的时隙称为第一时隙为例进行说明。第一时隙是第一网络设备与第二网络设备之间的PHY链路的时隙。
可选地,第一时隙是一个时隙,或者,第一时隙是包括多个时隙的集合,本实施例对第一时隙包括的时隙数量不做限定。例如,请参考图6,网络设备101为client1确定了PHY1的TS1,网络设备101为client3确定了PHY1的TS2至TS9。在这个例子中,如果第一业务流为client1,则第一时隙为PHY1的TS1。如果第一业务流为client3,则第一时隙为PHY1的TS2至TS9。
可选地,第一时隙是同一个PHY链路上的时隙。例如,请参考图6,第一时隙是PHY1上的时隙,或者,第一时隙是PHY2上的时隙。或者,第一时隙包括分别位于多个PHY链路上的时隙。例如,请参考图6,第一时隙包括PHY1上的时隙和PHY2上的时隙,比如说,第一时隙包括PHY1上的TS1和PHY2上的TS2。
可选地,在第一时隙包括多个PHY链路上的时隙的情况下,第一时隙分布在多个PHY链路中每个PHY链路上的时隙的数量是相同的。例如,请参考图6,第一时隙包括PHY1上的N个时隙和PHY2上的N个时隙,N为正整数。或者,第一时隙分布在多个PHY链路中每个PHY链路上的时隙的数量是不同的。例如,请参考图6,第一时隙包括PHY1上的p个时隙和PHY2上的q个时隙,p和q为正整数。此外,第一时隙分布在多个PHY链路中每个 PHY链路上的时隙是否相同或近似相同,可以根据采用的时隙分配策略确定。例如,在采用下述可选方式六中的基于业务流负载分担的时隙分配策略时,第一时隙分布在多个PHY链路中每个PHY链路上的时隙相同或近似相同。
如何根据时隙分配策略和需求带宽确定出第一时隙包括多种实现方式,以下通过可选方式一至可选方式六举例说明。
可选方式一、基于业务流配置带宽的时隙分配策略。
如果空闲时隙满足需求带宽,第一网络设备根据时隙分配策略和需求带宽,从FlexE组的空闲时隙中确定满足需求带宽的第一时隙。可选方式一中的时隙分配策略的标识可以为000。
如何确定空闲时隙满足需求带宽包括多种方式。在一种可能的实现中,第一网络设备根据FlexE组空闲时隙的数量以及一个时隙对应的带宽,获取FlexE组的可用带宽,第一网络设备判断FlexE组的可用带宽是否大于或等于该需求带宽,如果FlexE组的可用带宽大于或等于该需求带宽,确定空闲时隙满足需求带宽。其中,FlexE组的可用带宽为空闲时隙的数量以及一个时隙对应的带宽之间的乘积。例如,1个时隙对应的带宽为5G,FlexE组当前存在3个空闲时隙,FlexE组的可用带宽为5G*3=15G。如果业务流的需求带宽为10G,那么第一网络设备会判断FlexE组的可用带宽15G大于需求带宽10G,则第一网络设备根据时隙分配策略和需求带宽10G,会从3个空闲时隙中确定2个空闲时隙,确定出的2个空闲时隙为第一时隙。
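A minimal Python sketch of optional mode 1, assuming a per-slot bandwidth of 5G as in the text; the function and slot names are illustrative. It reproduces the example where 3 free slots (15G available) satisfy a 10G demand with 2 slots.

```python
# Optional mode 1 (demand-bandwidth based): available bandwidth is
# (number of free slots x per-slot bandwidth); allocate just enough
# free slots when the demand can be met.
import math

SLOT_BW_G = 5  # bandwidth of one time slot, in Gbps

def allocate_by_demand(free_slots: list, demand_bw_g: int):
    """Return the slots assigned to the flow, or None if the demand cannot be met."""
    available_bw = len(free_slots) * SLOT_BW_G
    if available_bw < demand_bw_g:
        return None                        # fall through to another policy
    needed = math.ceil(demand_bw_g / SLOT_BW_G)
    return free_slots[:needed]

# Example from the text: 3 free slots (15G) and a 10G demand -> 2 slots.
print(allocate_by_demand(["PHY1/TS3", "PHY1/TS4", "PHY2/TS1"], 10))
```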
本段对可选方式一达到的效果进行论述。由于利用时隙分配策略,自动地确定出了满足需求带宽的时隙,将满足需求带宽的时隙分配给业务流,因此在传输业务流的过程中,业务流会通过满足需求带宽的时隙传输,从而保证了业务流的带宽。由于业务流的带宽得到了保障,有助于业务保障SLA的要求。尤其是,在需求带宽由用户指定的情况下,通过可选方式一分配时隙,使得业务流的带宽符合用户对带宽的期望。
可选方式二、基于业务流激活带宽的时隙分配策略。
时隙分配策略不仅考虑需求带宽,还考虑激活带宽。具体地,第一网络设备判断FlexE组的空闲时隙是否满足需求带宽,如果空闲时隙不满足需求带宽,第一网络设备根据时隙分配策略和激活带宽,从空闲时隙中确定满足激活带宽的第一时隙。可选方式二中的时隙分配策略的标识可以为001。
其中,激活带宽是第一网络设备能够启动传输第一业务流的最小需求带宽。当业务流被分配到的时隙满足激活带宽时,第一网络设备的物理接口(如FlexE物理接口)能处于up状态,启动传输业务流。激活带宽小于需求带宽,例如,client1的需求带宽为10G,激活带宽为5G,1个时隙对应的带宽为5G。如果FlexE组当前仅存在1个空闲时隙,FlexE组的可用带宽为5G*1=5G,FlexE组的可用带宽不足以满足10G的需求带宽,那么第一网络设备根据时隙分配策略,会确定这1个空闲时隙,以便利用5G大小的激活带宽启动传输client1。在这个例子中,确定出的1个空闲时隙即为第一时隙。
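A hedged sketch of optional mode 2, extending the previous check with the activation-bandwidth fallback; the names and the simple "take the first N free slots" rule are illustrative assumptions.

```python
# Optional mode 2 (activation-bandwidth based): when free slots cannot
# satisfy the demand bandwidth, fall back to the smaller activation
# bandwidth so the flow can at least be started.
import math

SLOT_BW_G = 5

def allocate_with_activation(free_slots, demand_bw_g, activation_bw_g):
    available = len(free_slots) * SLOT_BW_G
    if available >= demand_bw_g:
        needed = math.ceil(demand_bw_g / SLOT_BW_G)       # normal case
    elif available >= activation_bw_g:
        needed = math.ceil(activation_bw_g / SLOT_BW_G)   # degraded but connected
    else:
        return None                                       # cannot even activate
    return free_slots[:needed]

# Example from the text: demand 10G, activation 5G, only one 5G slot free.
print(allocate_with_activation(["PHY2/TS7"], 10, 5))      # ['PHY2/TS7']
```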
本段对可选方式二达到的效果进行论述。在空闲时隙不足的情况下,网络设备可能无法找到满足需求带宽的空闲时隙,而由于利用时隙分配策略,自动地确定出了满足激活带宽的时隙,将满足激活带宽的时隙分配给业务流,因此即使空闲时隙不足,网络设备也能够利用激活带宽对应的时隙启动传输业务流,因此保证了业务流的连通性,使得业务流得到传输,避免业务流断开,从而尽力而为保证最大数量的业务流被启动传输。
可选方式三、基于业务流优先级抢占的时隙分配策略。
时隙分配策略不仅考虑需求带宽,还考虑业务流的优先级。具体地,以第一业务流为高优先级的业务流为例,第一网络设备判断FlexE组的空闲时隙是否满足需求带宽,如果空闲时隙不满足需求带宽,第一网络设备根据时隙分配策略和第一业务流的优先级,从已被第二业务流占用的时隙中确定第一时隙。可选方式三中的时隙分配策略的标识可以为002。其中,第二业务流的优先级低于第一业务流的优先级。在第一业务流和第二业务流之间,第一业务流为高优先级的业务流,第二业务流为低优先级的业务流。
如何确定业务流的优先级包括多种实现方式。以下通过方式A和方式B举例说明。
方式A、由用户指定业务流的优先级。
具体地,用户在针对第一业务流执行配置操作时,输入了第一业务流的优先级,相应地,执行S302得到的第一业务流的配置信息包括第一业务流的优先级,第一网络设备从第一业务流的配置信息中获取第一业务流的优先级。
方式B、根据业务流的ID确定业务流的优先级。
第一网络设备根据第一业务流的ID获取第一业务流的优先级。可选地,业务流的优先级与业务流的ID负相关,即,业务流的ID越小,则业务流的优先级越高。例如,第一业务流的ID小于第二业务流的ID,则第一业务流的优先级高于第二业务流的优先级。
选择采用上述方式A还是方式B可以包括多种情况。例如,第一网络设备判断第一业务流的配置信息是否包括第一业务流的优先级,如果第一业务流的配置信息包括第一业务流的优先级,则选择采用上述方式A。如果第一业务流的配置信息不包括第一业务流的优先级,则选择采用上述方式B。
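A sketch of optional mode 3, assuming the two priority sources described above (an explicit user-configured value in mode A, otherwise the flow ID in mode B, a smaller ID meaning a higher priority) and returning the slots of lower-priority flows as preemption candidates; all structure and field names are hypothetical.

```python
# Optional mode 3 (priority preemption): resolve a flow's priority, then
# collect the slots held by strictly lower-priority flows so that a
# high-priority flow can preempt them when free slots are insufficient.

def resolve_priority(flow_cfg: dict) -> int:
    """Lower return value means higher priority."""
    if "priority" in flow_cfg:             # mode A: user-specified priority
        return flow_cfg["priority"]
    return flow_cfg["client_id"]           # mode B: derived from the flow ID

def find_preemptible_slots(new_flow: dict, existing_flows: list):
    """Slots held by flows whose priority is lower than that of new_flow."""
    new_prio = resolve_priority(new_flow)
    victims = []
    for flow in existing_flows:
        if resolve_priority(flow) > new_prio:   # lower priority than new_flow
            victims.extend(flow["slots"])
    return victims
```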
本段对可选方式三达到的效果进行论述。在空闲时隙不足的情况下,不同业务流之间存在资源竞争的关系,业务流竞争的资源即为空闲时隙。由于利用时隙分配策略,自动地将低优先级的业务流原本占用的时隙分配给了高优先级的业务流,因此即使空闲时隙不足,高优先级的业务流能够抢占到低优先级的业务流的时隙,高优先级的业务流可利用低优先级业务流原本占用的时隙传输,从而保障高优先级的业务流的带宽或保障高优先级的业务流的连通性。
可选方式四、基于顺序的时隙分配策略。
时隙分配策略不仅考虑需求带宽,还考虑物理接口编号的顺序以及时隙编号的顺序。在FlexE组内不同的PHY链路间,物理接口编号较小的PHY链路具有较高的资源分配优先级。在PHY链路内不同时隙之间,时隙编号较小的时隙具有较高的资源分配优先级。具体地,第一网络设备根据时隙分配策略,从FlexE组的可用PHY链路中,确定物理接口编号最小的第一PHY链路;第一网络设备根据需求带宽,从第一PHY链路的空闲时隙中确定时隙编号最小的第一时隙。
第一PHY链路是FlexE组的可用PHY链路中物理接口编号最小的可用PHY链路。例如,如果FlexE组中共有3条PHY链路,这3条PHY链路分别为PHY1、PHY2和PHY3,其中PHY1当前不可用,PHY2和PHY3当前可用,则第一PHY链路是PHY2。
第一时隙是第一PHY链路的空闲时隙中时隙编号最小的空闲时隙。例如，第一PHY链路包括10个时隙，这10个时隙分别为TS1、TS2、TS3至TS10，其中，TS1和TS2不可用，TS3至TS10这8个时隙为空闲时隙，则第一时隙是TS3。
可选地,第一网络设备从所有可用PHY链路中物理接口编号最小的PHY链路上,从时隙编号0对应的时隙开始,按照时隙编号从小到大的顺序,依次寻找空闲时隙,直至找到空闲时隙为止,找到的空闲时隙即为第一时隙。
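A sketch of optional mode 4 under one possible reading of the text: walk the available PHYs in ascending physical-interface-number order and, within each PHY, the free slots in ascending slot-number order, collecting slots until the demand bandwidth is covered; names are illustrative.

```python
# Optional mode 4 (sequential): smallest PHY number first, then smallest
# slot number first, until enough slots have been collected.
import math

SLOT_BW_G = 5

def allocate_sequential(group: dict, demand_bw_g: int):
    """group maps phy_number -> list of free slot numbers."""
    needed = math.ceil(demand_bw_g / SLOT_BW_G)
    picked = []
    for phy in sorted(group):                 # smallest PHY number first
        for ts in sorted(group[phy]):         # smallest slot number first
            picked.append((phy, ts))
            if len(picked) == needed:
                return picked
    return None

# Example from the text: PHY1 unusable, PHY2 has TS3..TS10 free -> (2,3),(2,4).
print(allocate_sequential({2: list(range(3, 11)), 3: list(range(20))}, 10))
```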
本段对可选方式四达到的效果进行论述。由于利用时隙分配策略,自动地确定出了当前物理接口编号最小的可用PHY链路上时隙编号最小的时隙,提供了一种简单的自动分配时隙的方式,方便管理FlexE组的空闲时隙。
可选方式五、基于PHY链路负载分担的时隙分配策略。
具体地,时隙分配策略不仅考虑需求带宽,还考虑PHY链路的负载。FlexE组中不同PHY链路的负载相等或近似相等,使得FlexE组中不同PHY链路的负载尽量维持均衡。
在一种可能的实现中,在为第一业务流分配时隙的过程中,第一网络设备根据时隙分配策略,从FlexE组的可用PHY链路中,确定负载最小的第二PHY链路;第一网络设备根据需求带宽,从第二PHY链路的空闲时隙中确定时隙编号最小的第一时隙。其中,第二PHY链路是FlexE组的可用PHY链路中负载最小的可用PHY链路。例如,如果FlexE组中共有2条PHY链路,这2条PHY链路分别为PHY1和PHY2,PHY1和PHY2均为可用PHY链路。如果PHY1的负载小于PHY2的负载,则第二PHY链路为PHY1。
可选地，PHY链路的负载根据PHY链路中已承载业务流的时隙的数量确定，确定第二PHY链路的过程例如是第一网络设备获取FlexE组中每个PHY链路已承载业务流的时隙的数量，确定已承载业务流的时隙数量最少的PHY链路，该已承载业务流的时隙数量最少的PHY链路为第二PHY链路。通过这种可选方式，FlexE组中不同PHY链路上承载业务流的时隙的数量会尽量维持均衡。例如，如果FlexE组中共有N条PHY链路，PHY1上有m1个时隙承载业务流，PHY2上有m2个时隙承载业务流，依次类推，PHYN上有mN个时隙承载业务流，采用可选方式五后，m1、m2至mN相等或近似相等。
可选地，PHY链路的负载根据PHY链路中空闲时隙的数量确定。在这种方式下，确定第二PHY链路的过程例如是第一网络设备获取FlexE组中每个PHY链路空闲时隙的数量，确定空闲时隙数量最多的PHY链路，该空闲时隙数量最多的PHY链路为第二PHY链路。
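A sketch of optional mode 5, assuming load is measured by free-slot count (equivalent to occupied-slot count when all PHYs offer the same number of slots): the least-loaded available PHY is chosen first and its lowest-numbered free slots are taken. Names are illustrative.

```python
# Optional mode 5 (PHY load sharing): pick the least-loaded available PHY,
# then its lowest-numbered free slots.
import math

SLOT_BW_G = 5

def allocate_load_balanced(group: dict, demand_bw_g: int):
    """group maps phy_number -> list of free slot numbers."""
    needed = math.ceil(demand_bw_g / SLOT_BW_G)
    # Least-loaded PHY = most free slots; ties broken by smallest PHY number.
    phy = max(sorted(group), key=lambda p: len(group[p]))
    free = sorted(group[phy])
    if len(free) < needed:
        return None
    return [(phy, ts) for ts in free[:needed]]
```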
本段对可选方式五的效果进行论述。如果为所有业务流分配同一个PHY链路上的时隙,当该PHY链路发生故障时,会导致所有业务流中断。而通过可选方式五,由于基于PHY链路实现了负载分担,当一个PHY链路发生故障时,通过故障PHY链路之外的其他PHY链路传输的业务流不会受到影响,能够保持正常传输,因此避免一个PHY链路发生故障导致所有业务流中断的情况。例如,如果FlexE组包括PHY1和PHY2这2个PHY链路,需要传输2N个业务流,采用可选方式五,PHY1上会传输N个业务流,PHY2上会传输另外N个业务流,那么即使PHY1发生故障,而且没有执行S309对应的时隙动态迁移功能,PHY2上的N个业务流也会正常传输,因此确保有50%的业务流在没有人为干预的情况下仍可以快速恢复。此外,在不考虑PHY链路的负载的情况下,可能导致所有业务流集中分布在一个或多个PHY链路上,造成部分PHY链路是满载的,而部分PHY链路是空载的,而通过可选方式五,能够将所有业务流均匀分担至不同PHY链路上,减轻了单个PHY链路的压力,实现了负载分担的功能。
可选方式六、基于业务流负载分担的时隙分配策略。
基于业务流负载分担的时隙分配策略不仅考虑需求带宽,还考虑如何将同一个业务流的 需求带宽分担至尽量多的PHY链路上,利用尽量多的PHY链路传输同一个业务流。
在一种可能的实现中,第一网络设备根据时隙分配策略和需求带宽,从多个PHY链路的空闲时隙中确定第一时隙。其中,第一时隙平均分布在多个PHY链路中的不同PHY链路。具体的,第一时隙包括多个时隙,多个时隙分别位于多个PHY链路上,不同PHY链路上被确定的时隙数量相等或近似相等,这多个PHY链路会以负载分担的方式共同承载第一业务流。例如,如果当前存在的可用PHY链路在4个或4个以上,则第一网络设备从4个可用PHY链路上分别确定1个时隙,总共确定出的4个时隙为第一时隙,确定出的4个时隙平均分布在4个可用PHY链路,使得业务流分担至4个PHY链路。如果当前存在的可用PHY链路为2个,则第一网络设备从2个可用PHY链路上分别确定2个时隙,确定出的4个时隙为第一时隙。如此,使得同一个业务流尽可能传输在不同的PHY链路上。
在一种可能的实现中,第一网络设备根据业务流的需求带宽以及FlexE组中可用PHY链路的数量,获取每个可用PHY链路上所需分配的带宽,第一网络设备根据每个可用PHY链路上所需分配的带宽,从每个可用PHY链路上分别确定时隙,以确定出第一时隙。其中,每个可用PHY链路上所需分配的带宽例如是需求带宽与可用PHY链路的数量之间的比值。例如,client1的需求带宽为40G,FlexE组中可用PHY链路为PHY1、PHY2、PHY3和PHY4,1个时隙对应的带宽为5G。第一网络设备根据40G的需求带宽以及4个可用PHY链路,确定每个PHY链路要分担40G/4=10G的带宽,要在每个PHY链路上确定10G/2=2个时隙。则第一网络设备从PHY1上确定2个时隙,从PHY2上确定2个时隙,从PHY3上确定2个时隙,从PHY4上确定2个时隙,确定出的分布在4个PHY链路上的8个时隙为第一时隙。
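A sketch of optional mode 6, splitting one flow's demand bandwidth evenly across all available PHYs as in the 40G-over-four-PHYs example above; remainder handling is deliberately omitted and all names are illustrative.

```python
# Optional mode 6 (per-flow load sharing): per-PHY share = demand / number
# of available PHYs, converted to slots, taken from each PHY.
import math

SLOT_BW_G = 5

def allocate_spread(group: dict, demand_bw_g: int):
    """group maps phy_number -> list of free slot numbers."""
    phys = sorted(group)
    per_phy_bw = demand_bw_g / len(phys)
    per_phy_slots = math.ceil(per_phy_bw / SLOT_BW_G)
    picked = []
    for phy in phys:
        free = sorted(group[phy])
        if len(free) < per_phy_slots:
            return None          # this simple sketch requires equal shares
        picked.extend((phy, ts) for ts in free[:per_phy_slots])
    return picked

# 40G over PHY1..PHY4 -> 2 slots on each PHY, 8 slots in total.
print(len(allocate_spread({1: list(range(20)), 2: list(range(20)),
                           3: list(range(20)), 4: list(range(20))}, 40)))
```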
本段对可选方式六达到的效果进行论述。通过将同一个业务流的需求带宽均衡地分担至尽可能多的可用PHY链路,一方面,能够极大地减少单个PHY链路故障后对业务流造成的影响,即使没有进行时隙迁移的步骤,由于业务流能够利用其他PHY链路上的时隙传输,保证业务流具有可用的带宽,而不至于传输中断。例如,通过可选方式六,可以将第一业务流的需求带宽平均分担至N个PHY链路上,每个PHY链路上占用1/N份需求带宽对应的时隙。那么,即使这N个PHY链路上的一个PHY链路发生故障,剩余的(N-1)个PHY链路仍会传输第一业务流,从而保证第一业务流具有(N-1)/N份可用的带宽,因此在没有人为干预的情况下可以快速从故障恢复。另一方面,减轻了单个PHY链路的压力,实现了负载分担的功能。
可选地,时隙分配策略的具体类型由用户定制。换句话说,网络设备按照上述可选方式一至可选方式六中的哪种可选方式来分配时隙是由用户自定义的。例如,上述可选方式一至可选方式六映射为多个选项,每个选项对应一种或多种可选方式。例如,上述可选方式四映射为“顺序分配”选项,上述可选方式五映射为“基于PHY链路负载分担”选项,上述可选方式六映射为“基于业务流负载分担”选项。上述可选方式一至可选方式六映射的选项通过界面呈现给用户,用户期望网络设备按照某种可选方式分配时隙时,对期望的可选方式对应的选项触发选择操作,第一网络设备会按照该选项对应的可选方式分配时隙。通过这种方式,为用户提供了各种可选择的具体类型的时隙分配策略,第一网络设备会按照用户定制的时隙分配策略分配时隙,从而满足了用户的自定义需求。
应理解,上述可选方式一至可选方式六可以采用任意方式结合。例如,可以仅执行这六种可选方式中的一种可选方式,或者,执行这六种可选方式中的两种或两种以上的可选方式。 其中,如果将不同可选方式结合起来,不同可选方式之间的逻辑关系可以是且的关系,也可以是或的关系。以下对不同可选方式如何结合进行举例说明。
以可选方式一和可选方式三结合为例,如果空闲时隙不满足需求带宽,第一网络设备根据时隙分配策略和第一业务流的优先级,从已被第二业务流占用的时隙中确定满足需求带宽的第一时隙。
以可选方式二和可选方式三结合为例,如果空闲时隙不满足需求带宽,第一网络设备根据时隙分配策略和第一业务流的优先级,从已被第二业务流占用的时隙中确定满足激活带宽的第一时隙。
以可选方式一和可选方式四结合为例,如果空闲时隙满足需求带宽,第一网络设备根据时隙分配策略,从FlexE组的可用PHY链路中,确定物理接口编号最小的第一PHY链路;第一网络设备根据需求带宽,从第一PHY链路的空闲时隙中按照时隙编号从小到大的顺序确定时隙,直至确定的时隙满足需求带宽。
以可选方式二和可选方式四结合为例,如果空闲时隙不满足激活带宽,第一网络设备根据时隙分配策略,从FlexE组的可用PHY链路中,确定物理接口编号最小的第一PHY链路;第一网络设备根据激活带宽,从第一PHY链路的空闲时隙中按照时隙编号从小到大的顺序确定时隙,直至确定的时隙满足激活带宽。
还应理解,上述可选方式一至可选方式六仅是示例性说明,并不代表是根据时隙分配策略和需求带宽分配时隙的必选实现方式。在另一些实施例中,也可以采用其他实现方式来实现根据时隙分配策略和需求带宽分配时隙的功能,而这些其他方式作为S306覆盖的具体情况,也应涵盖在本申请实施例的保护范围之内。
S307、第二网络设备根据时隙分配策略和需求带宽,确定第一时隙。
第二网络设备根据时隙分配策略和需求带宽确定的时隙,与第一网络设备根据时隙分配策略和需求带宽确定的时隙相同,均为第一时隙。此外,在时隙分配策略细分为可选方式一至可选方式六的情况下,第二网络设备采用的可选方式与第一网络设备采用的可选方式相同。具体地,S307同样包括下述可选方式一至可选方式六,S307的技术细节可参考S306。
可选方式一、如果空闲时隙满足需求带宽,第二网络设备根据时隙分配策略和需求带宽,从空闲时隙中确定满足需求带宽的第一时隙。
可选方式二、如果空闲时隙不满足需求带宽,第二网络设备根据时隙分配策略和激活带宽,从空闲时隙中确定满足激活带宽的第一时隙。
可选方式三、如果空闲时隙不满足需求带宽,第二网络设备根据时隙分配策略和第一业务流的优先级,从已被第二业务流占用的时隙中确定第一时隙,第二业务流的优先级低于第一业务流的优先级。
可选方式四、第二网络设备根据时隙分配策略,从FlexE组的可用PHY链路中,确定物理接口编号最小的第一PHY链路;第二网络设备根据需求带宽,从第一PHY链路的空闲时隙中确定时隙编号最小的第一时隙。
可选方式五、第二网络设备根据时隙分配策略,从FlexE组的可用PHY链路中,确定负载最小的第二PHY链路;第二网络设备根据需求带宽,从第二PHY链路的空闲时隙中确定时隙编号最小的第一时隙。
可选方式六、第二网络设备根据时隙分配策略和需求带宽,从多个PHY链路的空闲时隙 中确定第一时隙,第一时隙平均分布在多个PHY链路中的不同PHY链路。
S308、第一网络设备和第二网络设备根据第一时隙传输第一业务流。
S308包括以下S308A和S308B。
S308A、第一网络设备根据第一时隙向第二网络设备发送第一业务流。
S308B、第二网络设备根据第一时隙从第一网络设备接收第一业务流。
例如,第一业务流为client1,第一时隙为PHY1上的TS2和PHY2上的TS1,根据第一时隙传输第一业务流的过程包括:client1先被TX端(第一网络设备)进行业务处理。例如,client1先通过第一网络设备的流量管理(traffic management,TM)模块进行服务质量(quality of service,QoS)控制,然后通过第一网络设备的MAC层模块进行物理层信息的封装,将处理后得到的业务数据发送至第一网络设备的shim。然后,第一网络设备的shim可以对接收到的业务数据进行切片以及时隙封装,即将业务数据封装至PHY1上的TS2和PHY2上的TS1中。然后,FlexE组中PHY1和PHY2可以通过与RX端(第二网络设备)相连的光模块,将client1的业务数据传输至第二网络设备。第二网络设备会按照第一网络设备处理过程的逆过程,将PHY1和PHY2上传输的client1的业务数据重新拼装成client1。
S309、当第一时隙所在的PHY链路发生故障,第一网络设备根据时隙分配策略和第一业务流的需求带宽,确定第二时隙。
如果第一时隙所在的PHY链路发生故障,会导致PHY链路上原本分配的第一时隙不可用。第一网络设备响应于PHY链路的状态变化,根据原有的时隙分配策略和第一业务流的需求带宽,重新确定新的时隙,将重新确定的时隙分配给第一业务流,以便用重新确定的时隙发送第一业务流。例如,请参考图8,资源管理层202维护用户定制的时隙分配策略,业务流优先级、FlexE组可用物理接口(即处于激活状态的物理接口)、FlexE组可用物理接口TX时隙资源池,基于时隙分配策略重新在FlexE组上排布TX方向可用时隙。
其中,第一网络设备可以确定第一时隙所在的PHY链路发生故障。如何确定PHY链路发生故障包括多种实现方式。可选地,第一网络设备主动检测到PHY链路发生故障。例如,第一网络设备检测物理接口的状态,若物理接口处于关闭(down)状态,第一网络设备会确定PHY链路发生故障。
可选地,当第一时隙所在的PHY链路发生故障,第一网络设备先将故障的PHY链路剔除出FlexE组,再根据已剔除PHY链路的FlexE组和时隙分配策略重新分配时隙。具体地,以剔除PHY链路前后的FlexE组分别称为第一FlexE组和第二FlexE组为例,第一网络设备和第二网络设备原本通过第一FlexE组传输业务流,当第一时隙所在的PHY链路发生故障,第一网络设备从第一FlexE组中删除第一时隙所在的PHY链路,得到第二FlexE组,第二FlexE组不包括第一时隙所在的PHY链路。在S309中,第一网络设备根据时隙分配策略和第一业务流的需求带宽,从第二FlexE组中确定第二时隙。
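A hedged sketch of the failure handling just described: remove the failed PHY from the group, then rerun the unchanged allocation policy with the unchanged demand bandwidth, so that the TX end and the RX end derive the same new slots independently, without negotiation. The `allocate` callable stands for whichever policy from S306/S309 is in use; every name is illustrative.

```python
# PHY-failure handling: prune the failed PHY from the FlexE group, then
# re-determine the slots with the same policy and the same demand bandwidth.

def on_phy_failure(group: dict, failed_phy: int, flow: dict, allocate):
    """group maps phy_number -> free slot numbers; returns the new slots."""
    # Step 1: remove the failed PHY so the remaining group stays active/usable.
    remaining = {phy: slots for phy, slots in group.items() if phy != failed_phy}
    # Step 2: re-run the unchanged policy; both ends do this without negotiating,
    # so they arrive at the same second time slot.
    return allocate(remaining, flow["demand_bw_g"])
```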
本段对剔除故障的PHY链路的效果进行论述。按照协议的规定,当FlexE组处于可用状态时,要求FlexE组中的所有PHY链路均处于激活状态。而相关技术中,当一个PHY链路发生故障后,故障的PHY链路处于去激活状态,导致该PHY链路所属的整个FlexE组不可用。而本实施例中,通过在PHY链路故障后,快速启动了从FlexE组中删除故障的PHY链路的流程,从而自动地将故障的PHY链路剔除出FlexE组,使得FlexE组中剩余的PHY链路处于激活状态,因此保证了FlexE组是可用的,避免了PHY链路故障后导致整个FlexE组 不可用。
通过这种方式,在第一业务流所在的PHY链路故障的情况下,第一网络设备依据时隙分配策略,将第一业务流能够从原来所在的时隙动态迁移到重新确定的时隙,从而为第一业务流重新部署了时隙,实现了时隙的重排布。换句话说,时隙分配策略同时充当了动态迁移策略。例如,请参考图8,在PHY链路故障场景下,资源管理层202按照时隙分配策略,对用户业务进行动态迁移。
为了区分描述,方法300将重新确定出的时隙称为第二时隙。第二时隙与第一时隙不同。例如,第二时隙和第一时隙位于不同的PHY链路上。可选地,第二时隙是一个时隙,或者,第二时隙是包括多个时隙的集合。可选地,第二时隙是同一个PHY链路上的时隙。或者,第二时隙包括分别位于多个PHY链路上的时隙。可选地,在第二时隙包括多个PHY链路上的时隙的情况下,第二时隙分布在多个PHY链路中每个PHY链路上的时隙的数量是相同的。或者,第二时隙分布在多个PHY链路中每个PHY链路上的时隙的数量是不同的。此外,第二时隙分布在多个PHY链路中每个PHY链路上的时隙是否相同或近似相同,可以根据采用的时隙分配策略确定。
如何根据时隙分配策略和需求带宽确定出第二时隙包括多种实现方式,以下通过可选方式一至可选方式六举例说明。应理解,S309中的可选方式一至可选方式六与S306中的可选方式一至可选方式六对应,S309中的可选方式的技术细节可以参考前述S306中的对应可选方式。
可选方式一、如果空闲时隙满足需求带宽,第一网络设备根据时隙分配策略和需求带宽,从空闲时隙中确定满足需求带宽的第二时隙。
通过在PHY链路故障的情况下执行可选方式一,达到的效果包括:由于重新确定出了满足需求带宽的时隙,利用重新确定出的时隙传输业务流,使得业务流从原来所在的时隙迁移至重新确定的时隙后,业务流的带宽仍能满足需求带宽,从而尽力而为保证最大数量的业务流正常工作。由于PHY链路发生故障后业务流的带宽继续得到了保障,有助于保障业务的服务等级协议(Service-Level Agreement,SLA)。尤其是,在需求带宽由用户指定的情况下,通过可选方式一重新分配时隙,使得PHY发生故障后业务流的带宽仍能符合用户对带宽的期望。
可选方式二、如果空闲时隙不满足需求带宽,第一网络设备根据时隙分配策略和激活带宽,从空闲时隙中确定满足激活带宽的第二时隙。
通过在PHY链路故障的情况下执行可选方式二,达到的效果包括:在PHY链路发生故障而空闲时隙不足的情况下,由于重新确定出了满足激活带宽的时隙,利用确定出的时隙传输业务流,使得业务流能够处于连通状态,业务流能被传输至对端,避免PHY链路发生故障后业务流断流,从而尽力而为保证最大数量的业务流在PHY链路发生故障后仍被启动传输。
可选方式三、如果空闲时隙不满足需求带宽,第一网络设备根据时隙分配策略和第一业务流的优先级,从已被第二业务流占用的时隙中确定第二时隙。
通过在PHY链路故障的情况下执行可选方式三,达到的效果包括:在PHY链路发生故障而空闲时隙不足的情况下,由于根据业务流的优先级重新分配时隙,将低优先级的业务流原本占用的时隙重新分配给了高优先级的业务流,使得高优先级的业务流具有优先竞争到时隙的权利,高优先级的业务流能够通过低优先级业务流原本占用的时隙传输,从而避免高优先级的业务流断开,保证高优先级的业务流快速恢复。
可选方式四、第一网络设备根据时隙分配策略,从FlexE组的可用PHY链路中,确定物理接口编号最小的第一PHY链路;第一网络设备根据需求带宽,从第一PHY链路的空闲时隙中确定时隙编号最小的第二时隙。
可选方式四可以和上述可选方式一、可选方式三结合,即,在PHY链路发生故障的情况下,在时隙迁移过程中,基于业务流的优先级及业务流的需求带宽,在可用的PHY链路上顺序分配对应的时隙。
可选方式四可以和上述可选方式二、可选方式三结合,即,在PHY链路发生故障的情况下,在时隙迁移过程中,基于业务流的大小及业务流激活带宽,在可用的PHY链路上顺序分配对应的时隙。
可选方式五、第一网络设备根据时隙分配策略,从FlexE组的可用PHY链路中,确定负载最小的第二PHY链路;第一网络设备根据需求带宽,从第二PHY链路的空闲时隙中确定时隙编号最小的第二时隙。
可选方式六、第一网络设备根据时隙分配策略和需求带宽,从多个PHY链路的空闲时隙中确定第二时隙,第二时隙平均分布在多个PHY链路中的不同PHY链路。
应理解,PHY链路发生故障的情况下使用的时隙分配策略和PHY链路正常的情况下使用的时隙分配策略可以是完全相同的,也可以存在细微的区别。换句话说,第一网络设备在执行S306中从可选方式一至可选方式六中所选择的可选方式与S309中从可选方式一和可选方式六中所选择的可选方式可以相同,也可以不同。例如,S306中实施可选方式一,S309中实施可选方式二。同理地,第二网络设备在执行S307中从可选方式一至可选方式六中所选择的可选方式与S310中从可选方式一和可选方式六中所选择的可选方式可以相同,也可以不同。在一些可选的实施例中,会保证第一网络设备在S306选择的可选方式和第二网络设备在S307选择的可选方式一致,保证第一网络设备在S309选择的可选方式和第二网络设备在S310选择的可选方式一致,而不限定第一网络设备在S306选择的可选方式和在S309重新选择的可选方式一致,不限定第二网络设备在S307选择的可选方式和在S310重新选择的可选方式一致。
应理解,PHY链路发生故障的情况下使用的时隙分配策略的获得方式和PHY链路正常的情况下使用的时隙分配策略的获得方式可以是完全相同的,也可以存在细微的区别。例如,PHY链路发生故障的情况下使用的时隙分配策略由TX端推送至RX端,PHY链路正常的情况下使用的时隙分配策略由用户静态限定。
可选地，PHY链路正常的情况以及PHY链路故障的情况下分别执行哪种具体的时隙分配策略由用户定制。在一种可能的实现中，将PHY链路正常的情况下使用的时隙分配策略称之为带宽分配策略，将PHY链路故障的情况下使用的时隙分配策略称之为动态迁移策略，带宽分配策略包括S306中的可选方式一至可选方式六中的至少一项，带宽分配策略具体使用哪种可选方式由用户的配置操作确定。动态迁移策略包括S309中的可选方式一至可选方式六中的至少一项，动态迁移策略具体使用哪种可选方式由用户的配置操作确定。通过上述可选方式一至可选方式六，面向用户提供时隙迁移策略定制，解决了FlexE运维问题，配合不同的迁移策略，按照用户的预期执行快速时隙迁移，引入FlexE时隙动态迁移能力，用户可以定制迁移策略。
第一网络设备确定第二时隙后,可以根据第二时隙强刷当前的客户日程表,将客户日程 表中第一业务流对应的时隙从第一时隙更新为第二时隙。其中,强刷是指未经过包括请求和应答的协商流程的刷新方式。客户日程表用于保存业务流与时隙之间的映射关系,本实施例中,第一网络设备为业务流的TX端,第一网络设备的客户日程表也称为TX当前表。
S310、当第一时隙所在的PHY链路发生故障,第二网络设备根据时隙分配策略和第一业务流的需求带宽,确定第二时隙。
第二网络设备确定PHY链路发生故障后,也会重新确定新的时隙,在第二网络设备重新确定时隙的过程中,由于第二网络设备使用的时隙分配策略与第一网络设备使用的时隙分配策略一致,第二网络设备使用的需求带宽与第一网络设备使用的需求带宽一致,因此第二网络设备确定出的新的时隙和第一网络设备确定出的新的时隙会是相同的,都是第二时隙。例如,请参考图8,资源管理层的RX策略模块2022维护对端网元推送的时隙迁移策略,业务流优先级、FlexE组可用物理接口(即处于激活状态的物理接口)、FlexE组可用物理接口RX时隙资源池,基于时隙分配策略重新在FlexE组上排布RX方向可用时隙。
通过在PHY链路发生故障的情况下,根据时隙分配策略重新确定时隙,达到的效果至少包括:在相关技术中,当PHY链路发生故障时,业务流的TX端与业务流的RX端要先执行协商流程,再切换业务流所在的时隙。其中,协商流程也称请求应答方案,协商流程包括TX端发送协商请求、RX端接收协商请求并向TX端返回协商响应、TX端接收协商响应的过程,通过协商流程,收发两端协商好业务流要迁移至哪个时隙后,再将业务流切换至协商的时隙。然而,由于在PHY链路故障后要执行协商流程,会造成很大的时延,切换时间在百毫秒量级。而通过上述方式,在PHY链路故障后,收发两端分别重新确定时隙,将业务流迁移至重新确定出的时隙。一方面,由于收发两端无需执行协商流程,因此免去了协商流程带来的时延,能够将断流时间控制在50毫秒范围内,确保业务流在50毫秒内快速完成恢复,因此极大地提高业务从故障恢复的速度。另一方面,由于在PHY链路故障后,收发两端根据相同的时隙分配策略和相同的需求带宽重新确定时隙,因此收发两端确定出的新的时隙会是相同的,使得时隙迁移后,收发两端的时隙排布具有一致性,那么收发两端根据一致的时隙排布,能够正常传输业务流,从而实现了FlexE组中不同PHY链路保护倒换的功能,将故障的PHY链路上的业务流倒换至正常的PHY链路上,避免业务流传输中断。
可选地,当第一时隙所在的PHY链路发生故障,第二网络设备将故障的PHY链路剔除出FlexE组。具体地,以剔除PHY链路前后的FlexE组分别称为第一FlexE组和第二FlexE组为例,第二网络设备和第二网络设备原本通过第一FlexE组传输业务流,当第一时隙所在的PHY链路发生故障,第二网络设备从第一FlexE组中删除第一时隙所在的PHY链路,得到第二FlexE组,第二FlexE组不包括第一时隙所在的PHY链路。在S310中,第二网络设备根据时隙分配策略和第一业务流的需求带宽,从第二FlexE组中确定第二时隙。通过这种方式,RX端(第二网络设备)和TX端(第一网络设备)同步执行了删除PHY链路的流程,保证了重新确定出的时隙的一致性,另外避免PHY链路故障后导致PHY链路所属的整个FlexE组不可用。
此外,第二网络设备确定第二时隙后,可以根据第二时隙强刷当前的客户日程表,将客户日程表中第一业务流对应的时隙从第一时隙更新为第二时隙。其中,客户日程表用于保存业务流与时隙之间的映射关系,本实施例中,第二网络设备为业务流的RX端,第二网络设备的客户日程表也称为RX当前表。
S311、第一网络设备和第二网络设备根据第二时隙传输第一业务流。
S311包括以下S311A和S311B。
S311A、第一网络设备根据第二时隙向第二网络设备发送第一业务流。
S311B、第二网络设备根据第二时隙从第一网络设备接收第一业务流。
通过根据重新确定的时隙传输第一业务流,能够将业务流从故障状态的PHY链路倒换至FlexE组内其他的PHY链路,从而实现了FlexE组中不同PHY链路之间的互相保护。
在PHY链路故障场景下，利用时隙分配策略能够实现1:1业务倒换。例如，请参考图11，将PHY链路部署为1:1冗余，FlexE组配置带宽为100G，所有client配置的优先级相同。其中，client1的需求带宽为5G，client2的需求带宽为5G，client3的需求带宽为40G。在PHY1和PHY2处于正常状态下，利用本实施例提供的时隙分配策略，会将PHY1的时隙1分配给client1，将PHY2的时隙1分配给client2，将PHY1的时隙2至时隙9分配给client3。在PHY1处于故障状态下，PHY1的每个时隙不可用，利用本实施例提供的时隙分配策略，会重新将PHY2的时隙1分配给client1，重新将PHY2的时隙2分配给client2，重新将PHY2的时隙3至时隙10分配给client3，使得client1从PHY1的时隙1迁移至PHY2的时隙1，client2从PHY2的时隙1迁移至PHY2的时隙2，client3从PHY1的时隙2至时隙9迁移至PHY2的时隙3至时隙10，因此，故障的PHY1的业务流被倒换至PHY2上，实现1:1业务倒换，实现FlexE组内不同PHY间的1:1保护。
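The 1:1 switchover of Figure 11 can be reproduced with the small sketch below, which lays the three clients out again on the surviving PHY in client-ID order (client1 to slot 1, client2 to slot 2, client3 to slots 3–10); the slot numbering and names follow the example in the text and are otherwise illustrative.

```python
# Worked re-arrangement after PHY1 fails: all clients are re-allocated on
# PHY2 in client-ID order, matching the Figure 11 example.
import math

SLOT_BW_G = 5

def rearrange_on_surviving_phy(flows, phy="PHY2", first_ts=1):
    """flows: list of (client_id, demand_bw_g); re-allocated in ID order."""
    layout, ts = {}, first_ts
    for client_id, bw in sorted(flows):
        count = math.ceil(bw / SLOT_BW_G)
        layout[client_id] = [(phy, ts + i) for i in range(count)]
        ts += count
    return layout

print(rearrange_on_surviving_phy([("client1", 5), ("client2", 5), ("client3", 40)]))
# client1 -> PHY2 slot 1, client2 -> PHY2 slot 2, client3 -> PHY2 slots 3..10
```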
此外,在PHY链路故障场景下,利用时隙分配策略能够实现FlexE组内不同PHY链路间的N:1保护。例如,FlexE组包括N个主PHY链路和1个备PHY链路,配置了每个业务流的优先级。在N个主PHY链路处于正常状态下,备PHY链路上传输低优先级的业务流,主PHY链路上传输高优先级的业务流。当N个主PHY链路中的一个主PHY链路发生故障时,RX端和TX端通过实施时隙分配策略,会根据业务流的优先级确定出备PHY链路上的时隙,将业务流从故障的主PHY链路倒换至备PHY链路,从而抢占低优先级的业务流原本占用的时隙,完成PHY链路的N:1保护。
本实施例提供了一种在FlexE中高效分配时隙的方法,网络设备通过利用时隙分配策略和业务流的需求带宽,为业务流自动分配PHY链路上的时隙,并使用分配的时隙传输业务流,由于无需用户为业务流人工指定对应的时隙,因此免去了用户感知如何编排时隙带来的学习成本,并免去了用户为业务流配置时隙的繁琐操作,因此大大简化了配置复杂度,提高了时隙分配的效率。
以下通过方法400,对方法300进行举例说明。在方法400中,时隙分配的动作通过系统架构200中的资源管理层202执行,PHY链路正常的情况下使用的时隙分配策略称之为带宽分配策略,PHY链路故障的情况下使用的时隙分配策略称之为动态迁移策略。
换句话说,方法400描述的方法流程关于资源管理层如何在PHY链路正常时利用带宽分配策略分配时隙,以及资源管理层如何在PHY链路故障时利用动态迁移策略重新分配时隙。应理解,方法400与方法300同理的步骤还请参见方法300,在方法400中不做赘述。
参见图12,图12是本申请实施例提供的一种基于FlexE传输业务流的方法400的流程图。示例性地,方法400中以RS MNG表示资源管理层202。方法400包括三个阶段,阶段一为组建管道,阶段二为配置业务,阶段三为PHY故障实施业务动态迁移。阶段一包括SP1000 至SP1004。阶段二包括SP2001至SP2006。阶段三包括SP3001至SP3007。
SP1000、用户在第一网络设备和第二网络设备上分别执行FlexE组的创建操作。第一网络设备响应于创建操作,创建FlexE组。第二网络设备响应于创建操作,创建FlexE组。
SP1001、第一网络设备和第二网络设备从FlexE组增删PHY链路。
SP1002、用户定制FlexE组带宽分配策略,第一网络设备的资源管理层将用户定制的带宽分配策略保存到数据库(database,DB)。
SP1003、用户定制动态迁移策略,第一网络设备的资源管理层将用户定制的动态迁移策略保存到DB。
SP1004、第一网络设备的资源管理层基于LLDP,将动态迁移策略推送到对端。第二网络设备的资源管理层接收推送的动态迁移策略,将动态迁移策略保存到DB。
SP2001、用户在第一网络设备和第二网络设备上分别执行业务流的创建操作。第一网络设备响应于创建操作,创建业务流。第二网络设备响应于创建操作,创建业务流。
SP2002、用户在第一网络设备和第二网络设备上分别指定业务流优先级。
SP2003、用户在第一网络设备和第二网络设备上分别配置业务流的需求带宽。
SP2004、第一网络设备的资源管理层基于用户定制的带宽分配策略,分配TX方向时隙。
SP2005、第一网络设备执行TX配置备份表的动作,向第二网络设备发送请求(Request,REQ)。第二网络设备作为RX端,对请求进行应答,向第一网络设备返回确认(Acknowledge,ACK)消息。
SP2006、第一网络设备执行TX切表的动作,第二网络设备执行RX切表的动作。
SP3001、第一网络设备快速感知PHY链路故障,启动FlexE组增删PHY链路流程。第二网络设备快速感知PHY链路故障,启动FlexE组增删PHY链路流程。
SP3002、第一网络设备的资源管理层获取本端TX的用户定制的动态迁移策略。
SP3003、第二网络设备的资源管理层获取对端推送至本端的RX动态迁移策略。
SP3004、第一网络设备的资源管理层获取业务流优先级。
SP3005、第二网络设备的资源管理层获取业务流优先级。
SP3006、第一网络设备的资源管理层基于动态迁移策略执行TX方向时隙重排布。
SP3007、第二网络设备的资源管理层基于动态迁移策略执行RX方向时隙重排布。
从图12可以看出,阶段三的流程不依赖双端协商,因此保证在50毫秒内完成。
可选地,根据时隙分配策略和需求带宽重新确定时隙的技术手段应用在PHY链路发生故障之外的其他场景下。以下对一些扩展的应用场景举例说明。
可选地,该技术手段应用在业务流的需求带宽发生更新的场景。
具体地,当第一业务流的需求带宽发生更新,第一网络设备根据时隙分配策略和第一业务流更新后的需求带宽,确定第三时隙,第三时隙与第一时隙不同;第一网络设备根据第三时隙向第二网络设备发送第一业务流。相应地,当第一业务流的需求带宽发生更新,第二网络设备根据时隙分配策略和第一业务流更新后的需求带宽,确定第三时隙;第二网络设备根据第三时隙从第一网络设备接收第一业务流。这种场景下如何确定第三时隙请参见方法300或方法400,例如通过可选方式一至可选方式六中的任一项或多项实现。
例如,在组建网络时,用户将业务流的需求带宽配置为5G,第一网络设备使用5G的时 隙传输业务流。而随着业务量的增加,5G大小的带宽不足,用户将业务流的需求带宽重新配置为10G。第一网络设备和第二网络设备响应于需求带宽的增加,根据时隙分配策略和10G的需求带宽,重新确定10G的时隙,第一网络设备和第二网络设备使用10G的时隙传输业务流。
通过在业务流的需求带宽发生更新的场景下利用时隙分配策略重新分配时隙,由于收发两端利用的时隙分配策略和更新后的需求带宽一致,因此收发两端能自动地重新分配一致的时隙,因此免去了为配置时隙进行协商带来的通信开销,有助于实现需求带宽的无损更新。
可选地,该技术手段应用在FlexE组中增删PHY链路的场景。例如,当FlexE组内PHY所提供带宽不足时,需要增加1个或多个PHY进入当前FlexE组,支持更多的业务流。在这一场景下,当第一时隙所在的FlexE组增加PHY链路,第一网络设备根据时隙分配策略和第一业务流的需求带宽,从增加了PHY链路的FlexE组的时隙中确定第四时隙,第四时隙与第一时隙不同;第一网络设备根据第四时隙向第二网络设备发送第一业务流。相应地,当第一时隙所在的FlexE组增加PHY链路,第二网络设备根据时隙分配策略和第一业务流的需求带宽,从增加了PHY链路的FlexE组的时隙中确定第四时隙,第四时隙与第一时隙不同;第二网络设备根据第四时隙从第一网络设备接收第一业务流。
又如,当前FlexE组内有大量带宽闲置,可以移除其中一个或者多个PHY链路,释放网络资源给其他业务使用。在这一场景下,当第一时隙所在的FlexE组删除PHY链路,第一网络设备根据时隙分配策略和第一业务流的需求带宽,从删除了PHY链路的FlexE组的时隙中确定第五时隙,第五时隙与第一时隙不同;第一网络设备根据第五时隙向第二网络设备发送第一业务流。相应地,当第一时隙所在的FlexE组删除PHY链路,第二网络设备根据时隙分配策略和第一业务流的需求带宽,从删除了PHY链路的FlexE组的时隙中确定第五时隙,第五时隙与第一时隙不同;第二网络设备根据第五时隙从第一网络设备接收第一业务流。
通过在FlexE组中增删PHY链路下利用时隙分配策略重新分配时隙,由于收发两端利用的时隙分配策略和增删PHY链路后的FlexE组一致,因此收发两端能自动地重新分配一致的时隙,因此免去了为配置时隙进行协商带来的通信开销,有助于实现PHY链路的无损增删。
例如,该技术手段应用在增删业务流的场景。具体地,当增加了待传输的第三业务流或删除了原本传输的第四业务流,第一网络设备根据时隙分配策略和第一业务流的需求带宽,确定第六时隙,第六时隙与第一时隙不同;第一网络设备根据第六时隙向第二网络设备发送第一业务流。相应地,当增加了待传输的第三业务流或删除了原本传输的第四业务流,第二网络设备根据时隙分配策略和第一业务流的需求带宽,确定第六时隙,第六时隙与第一时隙不同;第二网络设备根据第六时隙从第一网络设备接收第一业务流。
例如,起初空闲时隙足够,第一网络设备将满足业务流1(第一业务流)的需求带宽的时隙分配给了业务流1,之后,有新的业务流2(第三业务流)需要传输,业务流2的优先级高于业务流1的优先级,然而当前的空闲时隙不足。在这一场景下,收发两端可以根据时隙分配策略,为业务流1重新分配满足激活带宽的时隙,由于需求带宽大于激活带宽,能够腾出一定带宽的空闲时隙,可以将腾出的空闲时隙分配给业务流2,以便优先满足业务流2对带宽的需求。
例如,起初由于空闲时隙不足,业务流A(第一业务流)的需求带宽无法满足,则收发两端将满足激活带宽的时隙分配给业务流A。之后,收发两端原本传输的业务流B(第四业 务流)由于业务停止或其他原因被删除,业务流B占用的时隙被释放,使得FlexE组的空闲时隙增加,FlexE组当前的带宽资源从不足变为足够满足业务流A的需求带宽。在这一场景下,收发两端可以根据时隙分配策略,为业务流A重新分配满足需求带宽的时隙,从而利用新释放的时隙满足业务流A对带宽的需求。
并且,通过在增删业务流的场景下利用时隙分配策略重新分配时隙,由于收发两端利用的时隙分配策略一致,因此收发两端能自动地重新分配一致的时隙,因此免去了为配置时隙进行协商带来的通信开销,有助于实现业务流的无损增删。
以上介绍了本申请实施例的方法300和方法400,以下介绍本申请实施例的网络设备,应理解,以下介绍的网络设备具有上述方法300或方法400中第一网络设备或第二网络设备的任意功能。
图13是本申请实施例提供的一种网络设备500的结构示意图,如图13所示,网络设备500包括:获取模块501,用于执行S304、SP1002或SP1003;确定模块502,用于执行S306或SP2004;发送模块503,用于执行S308A。
可选地,确定模块502,还用于执行S309,发送模块503,还用于执行S311A。
应理解,网络设备500对应于上述方法实施例中的第一网络设备,网络设备500中的各模块和上述其他操作和/或功能分别为了实现方法实施例中的第一网络设备所实施的各种步骤和方法,具体细节可参见上述方法300或方法400,为了简洁,在此不再赘述。
应理解,网络设备500在基于FlexE传输业务流时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将网络设备500的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的网络设备500与上述方法300属于同一构思,其具体实现过程详见方法300,这里不再赘述。
应理解,网络设备500中的获取模块501相当于系统架构200中的用户配置层201;网络设备500中的确定模块502相当于系统架构200中的资源管理层202;网络设备500中的发送模块503相当于系统架构200中的FlexE物理接口204。
图14是本申请实施例提供的一种网络设备600的结构示意图,如图14所示,网络设备600包括:获取模块601,用于执行S305或SP1004;确定模块602,用于执行S307;接收模块603,用于执行S308B。
可选地,确定模块602,还用于执行S310,接收模块603,还用于执行S311B。
应理解,网络设备600对应于上述方法实施例中的第二网络设备,网络设备600中的各模块和上述其他操作和/或功能分别为了实现方法实施例中的第二网络设备所实施的各种步骤和方法,具体细节可参见上述方法300或方法400,为了简洁,在此不再赘述。
应理解,网络设备600在基于FlexE传输业务流时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将网络设备600的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的网络设备600与上述方法300属于同一构思,其具体实现过程详见方法300,这里不再赘述。
应理解,网络设备600中的获取模块601相当于系统架构200中的用户配置层201;网络设备600中的确定模块602相当于系统架构200中的资源管理层202;网络设备600中的接收模块603相当于系统架构200中的FlexE物理接口204。
与本申请提供的方法实施例以及虚拟装置实施例相对应,本申请实施例还提供了一种网络设备,下面对网络设备的硬件结构进行介绍。
网络设备700或网络设备800对应于上述方法实施例中的第一网络设备或第二网络设备,网络设备700或网络设备800中的各硬件、模块和上述其他操作和/或功能分别为了实现方法实施例中的第一网络设备或第二网络设备所实施的各种步骤和方法,关于网络设备700或网络设备800如何分配时隙的详细流程,具体细节可参见上述方法实施例,为了简洁,在此不再赘述。其中,上文方法300或方法400的各步骤通过网络设备700或网络设备800处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
网络设备700或网络设备800对应于上述虚拟装置实施例中的网络设备500或网络设备600,网络设备500或网络设备600中的每个功能模块采用网络设备700或网络设备800的软件实现。换句话说,网络设备500或网络设备600包括的功能模块为网络设备700或网络设备800的处理器读取存储器中存储的程序代码后生成的。
Referring to FIG. 15, FIG. 15 is a schematic structural diagram of a network device 700 according to an embodiment of this application. The network device 700 may be configured as the first network device or the second network device.
The network device 700 includes at least one processor 701, a communication bus 702, a memory 703, and at least one physical interface 704.
The processor 701 may be a general-purpose CPU, an NP, a microprocessor, or one or more integrated circuits configured to implement the solutions of this application, for example, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The communication bus 702 is configured to transfer information between the foregoing components. The communication bus 702 may be classified into an address bus, a data bus, a control bus, and the like. For ease of representation, only one bold line is used in the figure, but this does not mean that there is only one bus or only one type of bus.
The memory 703 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store expected program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 703 may exist independently and be connected to the processor 701 through the communication bus 702, or the memory 703 may be integrated with the processor 701.
The physical interface 704 uses any transceiver-type apparatus to communicate with another device or a communication network. The physical interface 704 includes a wired communication interface and may further include a wireless communication interface. The wired communication interface may be, for example, an Ethernet interface, and the Ethernet interface may be an optical interface, an electrical interface, or a combination thereof. The wireless communication interface may be a wireless local area network (WLAN) interface, a cellular network communication interface, a combination thereof, or the like. The physical interface 704 is also referred to as a physical port, and the physical interface 704 corresponds to the FlexE physical interface 204 in the system architecture 200.
In a specific implementation, in an embodiment, the processor 701 may include one or more CPUs, such as CPU0 and CPU1 shown in FIG. 15.
In a specific implementation, in an embodiment, the network device 700 may include a plurality of processors, such as the processor 701 and the processor 705 shown in FIG. 15. Each of these processors may be a single-core processor (single-CPU) or a multi-core processor (multi-CPU). A processor herein may refer to one or more devices, circuits, and/or processing cores configured to process data (such as computer program instructions).
In a specific implementation, in an embodiment, the network device 700 may further include an output device 706 and an input device 707. The output device 706 communicates with the processor 701 and may display information in a plurality of ways. For example, the output device 706 may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, a projector, or the like. The input device 707 communicates with the processor 701 and may receive a user input in a plurality of ways. For example, the input device 707 may be a mouse, a keyboard, a touchscreen device, a sensing device, or the like.
In some embodiments, the memory 703 is configured to store program code 710 for executing the solutions of this application, and the processor 701 may execute the program code 710 stored in the memory 703. In other words, the network device 700 may implement the method 300 or the method 400 provided in the method embodiments by using the processor 701 and the program code 710 in the memory 703.
The network device 700 in this embodiment of this application may correspond to the first network device or the second network device in the foregoing method embodiments, and the processor 701, the physical interface 704, and the like in the network device 700 may implement the functions of, and/or the steps and methods performed by, the first network device or the second network device in the foregoing method embodiments. For brevity, details are not described herein again.
It should be understood that the sending module 503 in the network device 500 is equivalent to the physical interface 704 in the network device 700, and the obtaining module 501 and the determining module 502 in the network device 500 may be equivalent to the processor 701 in the network device 700.
It should be understood that the receiving module 603 in the network device 600 is equivalent to the physical interface 704 in the network device 700, and the obtaining module 601 and the determining module 602 in the network device 600 may be equivalent to the processor 701 in the network device 700.
Referring to FIG. 16, FIG. 16 is a schematic structural diagram of a network device 800 according to an embodiment of this application. The network device 800 may be configured as the first network device or the second network device.
The network device 800 includes a main control board 810 and an interface board 830.
The main control board 810 is also referred to as a main processing unit (MPU) or a route processor card. The main control board 810 controls and manages the components in the network device 800, including route computation, device management, device maintenance, and protocol processing functions. The main control board 810 includes a central processing unit 811 and a memory 812.
The interface board 830 is also referred to as a line processing unit (LPU), a line card, or a service board. The interface board 830 is configured to provide various service interfaces and implement packet forwarding. The service interfaces include but are not limited to an Ethernet interface, a POS (Packet over SONET/SDH) interface, and the like. The Ethernet interface is, for example, a flexible Ethernet service interface (Flexible Ethernet Clients, FlexE Clients). The interface board 830 includes a central processing unit 831, a network processor 832, a forwarding entry memory 834, and a physical interface card (PIC) 833.
The central processing unit 831 on the interface board 830 is configured to control and manage the interface board 830 and to communicate with the central processing unit 811 on the main control board 810.
The network processor 832 is configured to implement packet forwarding processing and may take the form of a forwarding chip. Specifically, processing of an uplink packet includes processing at the packet ingress interface and forwarding table lookup, and processing of a downlink packet includes forwarding table lookup and the like.
The physical interface card 833 is configured to implement the interconnection function of the physical layer: original traffic enters the interface board 830 through it, and processed packets are sent out from it. The physical interface card 833 includes at least one physical interface, also referred to as a physical port, and corresponds to the FlexE physical interface 204 in the system architecture 200. The physical interface card 833, also referred to as a daughter card, may be installed on the interface board 830 and is responsible for converting optical/electrical signals into packets, checking the validity of the packets, and forwarding them to the network processor 832 for processing. In some embodiments, the central processing unit 831 of the interface board 830 may also perform the functions of the network processor 832, for example, implement software forwarding based on a general-purpose CPU, so that the interface board 830 does not require the network processor 832.
Optionally, the network device 800 includes a plurality of interface boards. For example, the network device 800 further includes an interface board 840, and the interface board 840 includes a central processing unit 841, a network processor 842, a forwarding entry memory 844, and a physical interface card 843.
Optionally, the network device 800 further includes a switch fabric board 820. The switch fabric board 820 may also be referred to as a switch fabric unit (SFU). When the network device has a plurality of interface boards 830, the switch fabric board 820 is configured to complete data exchange between the interface boards. For example, the interface board 830 and the interface board 840 may communicate with each other through the switch fabric board 820.
The main control board 810 is coupled to the interface board 830. For example, the main control board 810, the interface board 830, the interface board 840, and the switch fabric board 820 are connected to the system backplane through the system bus to implement interworking. In a possible implementation, an inter-process communication (IPC) channel is established between the main control board 810 and the interface board 830, and the main control board 810 and the interface board 830 communicate with each other through the IPC channel.
Logically, the network device 800 includes a control plane and a forwarding plane. The control plane includes the main control board 810 and the central processing unit 831, and the forwarding plane includes the components that perform forwarding, such as the forwarding entry memory 834, the physical interface card 833, and the network processor 832. The control plane performs functions such as running routing protocols, generating forwarding tables, processing signaling and protocol packets, and configuring and maintaining the state of the device, and it delivers the generated forwarding tables to the forwarding plane. On the forwarding plane, the network processor 832 looks up the forwarding tables delivered by the control plane and forwards the packets received by the physical interface card 833. The forwarding tables delivered by the control plane may be stored in the forwarding entry memory 834. In some embodiments, the control plane and the forwarding plane may be completely separated and not located on the same device.
If the network device 800 is configured as the first network device, the central processing unit 811 obtains the slot allocation strategy and determines the first time slot based on the slot allocation strategy and the required bandwidth. The network processor 832 triggers the physical interface card 833 to send the first service flow to the second network device based on the first time slot.
If the network device 800 is configured as the second network device, the central processing unit 811 obtains the slot allocation strategy and determines the first time slot based on the slot allocation strategy and the required bandwidth. The network processor 832 triggers the physical interface card 833 to receive the first service flow from the first network device based on the first time slot.
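The division of labor just described (the central processing unit decides the slots, while the network processor and the physical interface card move the traffic) can be pictured with the short Python sketch below. ControlPlane and ForwardingPlane are illustrative stand-ins, not components of the patent or of any real router software, and the 5 Gbps slot granularity is an assumption; the sketch only shows that when both ends make the same deterministic decision from the same strategy and bandwidth, the sender transmits and the receiver listens in the same slots without per-slot negotiation.

```python
# A minimal sketch of the control-plane / forwarding-plane split, under
# assumed names and a hypothetical 5 Gbps slot granularity.

SLOT_BW_GBPS = 5


class ControlPlane:
    """Stands in for the main control board CPU: holds the slot allocation
    strategy and decides which calendar slots carry the flow."""
    def __init__(self, strategy):
        self.strategy = strategy  # e.g. "lowest-numbered PHY, lowest slots"

    def determine_first_slot(self, required_gbps, free_slots):
        count = -(-required_gbps // SLOT_BW_GBPS)  # ceiling division
        return sorted(free_slots)[:count]          # deterministic: lowest first


class ForwardingPlane:
    """Stands in for the network processor plus physical interface card:
    applies the control-plane decision to the traffic."""
    def apply(self, role, flow_name, slots):
        verb = "send" if role == "sender" else "receive"
        print(f"{verb} {flow_name} in slots {slots}")


# Both ends run the same decision on the same inputs, so they agree.
free = list(range(20))
for role in ("sender", "receiver"):
    cp = ControlPlane(strategy="lowest-numbered PHY, lowest slot numbers")
    slots = cp.determine_first_slot(required_gbps=40, free_slots=free)
    ForwardingPlane().apply(role, "first service flow", slots)
```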
It should be understood that the sending module 503 in the network device 500 is equivalent to the physical interface card 833 or the physical interface card 843 in the network device 800, and the obtaining module 501 and the determining module 502 in the network device 500 may be equivalent to the central processing unit 811 or the central processing unit 831 in the network device 800.
It should be understood that the receiving module 603 in the network device 600 is equivalent to the physical interface card 833 or the physical interface card 843 in the network device 800, and the obtaining module 601 and the determining module 602 in the network device 600 may be equivalent to the central processing unit 811 or the central processing unit 831 in the network device 800.
It should be understood that the operations on the interface board 840 in this embodiment of this application are consistent with the operations on the interface board 830, and are not described again for brevity. It should be understood that the network device 800 in this embodiment may correspond to the first network device or the second network device in the foregoing method embodiments, and the main control board 810, the interface board 830, and/or the interface board 840 in the network device 800 may implement the functions of, and/or the steps performed by, the first network device or the second network device in the foregoing method embodiments. For brevity, details are not described herein again.
It should be noted that there may be one or more main control boards; when there are a plurality of main control boards, they may include an active main control board and a standby main control board. There may be one or more interface boards: the stronger the data processing capability of the network device, the more interface boards it provides. There may also be one or more physical interface cards on an interface board. There may be no switch fabric board, or there may be one or more switch fabric boards; when there are a plurality of switch fabric boards, they may jointly implement load sharing and redundancy backup. In a centralized forwarding architecture, the network device may need no switch fabric board, and an interface board undertakes the processing of the service data of the entire system. In a distributed forwarding architecture, the network device may have at least one switch fabric board, and data exchange between multiple interface boards is implemented through the switch fabric board, providing large-capacity data exchange and processing capability; therefore, the data access and processing capability of a network device with the distributed architecture is greater than that of a device with the centralized architecture. Optionally, the network device may also take the form of a single board, that is, there is no switch fabric board and the functions of the interface board and the main control board are integrated on that single board; in this case, the central processing unit on the interface board and the central processing unit on the main control board may be combined into one central processing unit on the single board that performs the functions of both. The data exchange and processing capability of a device in this form is relatively low (for example, a network device such as a low-end switch or router). Which architecture is used depends on the specific networking deployment scenario, and is not limited herein.
In some possible embodiments, the first network device or the second network device may be implemented as a virtualized device. For example, the virtualized device may be a virtual machine (VM) running a program having the packet sending function, and the virtual machine is deployed on a hardware device (for example, a physical server). A virtual machine is a complete computer system that is simulated by software, has complete hardware system functions, and runs in a completely isolated environment. The virtual machine may be configured as the first network device or the second network device. For example, the first network device or the second network device may be implemented on a general-purpose physical server in combination with the network functions virtualization (NFV) technology. The first network device or the second network device is a virtual host, a virtual router, or a virtual switch. After reading this application, a person skilled in the art may, in combination with the NFV technology, virtualize on a general-purpose physical server a first network device or a second network device having the foregoing functions. Details are not described herein again.
It should be understood that the network devices in the foregoing product forms each have any function of the first network device or the second network device in the foregoing method embodiments, and details are not described herein again.
An embodiment of this application provides a computer program product. When the computer program product runs on a network device, the network device is enabled to perform the method performed by the first network device in the foregoing method 300 or method 400.
An embodiment of this application provides a computer program product. When the computer program product runs on a network device, the network device is enabled to perform the method performed by the second network device in the foregoing method 300 or method 400.
Referring to FIG. 17, an embodiment of this application provides a network system 900. The system 900 includes a first network device 901 and a second network device 902. Optionally, the first network device 901 is the network device 500, the network device 700, or the network device 800, and the second network device 902 is the network device 600, the network device 700, or the network device 800.
A person of ordinary skill in the art may be aware that the method steps and units described with reference to the embodiments disclosed in this specification can be implemented by electronic hardware, computer software, or a combination thereof. To clearly describe the interchangeability between hardware and software, the foregoing has generally described the steps and compositions of each embodiment based on functions. Whether the functions are performed by hardware or software depends on the particular application and design constraint of the technical solution. A person of ordinary skill in the art may use different methods to implement the described functions for each particular application, but such an implementation should not be considered to go beyond the scope of this application.
It may be clearly understood by a person skilled in the art that, for convenient and brief description, for the specific working processes of the system, apparatus, and unit described above, refer to the corresponding processes in the foregoing method embodiments. Details are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely examples. For example, the division into units is merely a logical function division; in actual implementation, there may be another division manner. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces, and indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, they may be located in one place or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments of this application.
In addition, functional units in the embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or some of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any person skilled in the art can readily figure out various equivalent modifications or replacements within the technical scope disclosed in this application, and these modifications or replacements shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used for implementation, all or some of the embodiments may be implemented in the form of a computer program product. The computer program product includes one or more computer program instructions. When the computer program instructions are loaded and executed on a computer, all or some of the procedures or functions according to the embodiments of this application are generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or may be transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer program instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner. The computer-readable storage medium may be any usable medium accessible by the computer, or a data storage device such as a server or a data center that integrates one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), a semiconductor medium (for example, a solid-state drive), or the like.
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or may be implemented by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely optional embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, or the like made within the spirit and principle of this application shall fall within the protection scope of this application.

Claims (32)

  1. A method for transmitting a service flow based on flexible Ethernet (FlexE), wherein the method comprises:
    obtaining, by a first network device, a slot allocation strategy, wherein the slot allocation strategy is used to allocate time slots based on a required bandwidth of a first service flow;
    determining, by the first network device, a first time slot based on the slot allocation strategy and the required bandwidth, wherein the first time slot is a time slot of a physical layer PHY link between the first network device and a second network device; and
    sending, by the first network device, the first service flow to the second network device based on the first time slot.
  2. The method according to claim 1, wherein the determining, by the first network device, a first time slot based on the slot allocation strategy and the required bandwidth comprises:
    if idle time slots satisfy the required bandwidth, determining, by the first network device from the idle time slots based on the slot allocation strategy and the required bandwidth, the first time slot that satisfies the required bandwidth.
  3. The method according to claim 1, wherein the determining, by the first network device, a first time slot based on the slot allocation strategy and the required bandwidth comprises:
    if idle time slots do not satisfy the required bandwidth, determining, by the first network device from the idle time slots based on the slot allocation strategy and an activation bandwidth, the first time slot that satisfies the activation bandwidth, wherein the activation bandwidth is less than the required bandwidth, and the activation bandwidth is a minimum required bandwidth at which the first network device is able to start transmission of the first service flow.
  4. The method according to claim 1, wherein the determining, by the first network device, a first time slot based on the slot allocation strategy and the required bandwidth comprises:
    if idle time slots do not satisfy the required bandwidth, determining, by the first network device based on the slot allocation strategy and a priority of the first service flow, the first time slot from time slots occupied by a second service flow, wherein the priority of the second service flow is lower than the priority of the first service flow.
  5. The method according to claim 1, wherein the determining, by the first network device, a first time slot based on the slot allocation strategy and the required bandwidth comprises:
    determining, by the first network device based on the slot allocation strategy, a first PHY link with a smallest physical interface number from available PHY links of a FlexE group; and
    determining, by the first network device based on the required bandwidth, the first time slot with smallest slot numbers from idle time slots of the first PHY link.
  6. The method according to claim 1, wherein the determining, by the first network device, a first time slot based on the slot allocation strategy and the required bandwidth comprises:
    determining, by the first network device based on the slot allocation strategy, a second PHY link with a smallest load from available PHY links of a FlexE group; and
    determining, by the first network device based on the required bandwidth, the first time slot with smallest slot numbers from idle time slots of the second PHY link.
  7. The method according to claim 1, wherein the determining, by the first network device, a first time slot based on the slot allocation strategy and the required bandwidth comprises:
    determining, by the first network device, the first time slot from idle time slots of a plurality of PHY links based on the slot allocation strategy and the required bandwidth, wherein the first time slot is evenly distributed across different PHY links of the plurality of PHY links.
  8. The method according to any one of claims 1 to 7, wherein after the determining, by the first network device, a first time slot based on the slot allocation strategy and the required bandwidth, the method further comprises:
    when the PHY link on which the first time slot is located fails, determining, by the first network device, a second time slot based on the slot allocation strategy and the required bandwidth of the first service flow, wherein the second time slot is different from the first time slot; and
    sending, by the first network device, the first service flow to the second network device based on the second time slot.
  9. A method for transmitting a service flow based on flexible Ethernet (FlexE), wherein the method comprises:
    obtaining, by a second network device, a slot allocation strategy, wherein the slot allocation strategy is used to allocate time slots based on a required bandwidth of a first service flow;
    determining, by the second network device, a first time slot based on the slot allocation strategy and the required bandwidth, wherein the first time slot is a time slot of a physical layer PHY link between the second network device and a first network device; and
    receiving, by the second network device, the first service flow from the first network device based on the first time slot.
  10. The method according to claim 9, wherein the determining, by the second network device, a first time slot based on the slot allocation strategy and the required bandwidth comprises:
    if idle time slots satisfy the required bandwidth, determining, by the second network device from the idle time slots based on the slot allocation strategy and the required bandwidth, the first time slot that satisfies the required bandwidth.
  11. The method according to claim 9, wherein the determining, by the second network device, a first time slot based on the slot allocation strategy and the required bandwidth comprises:
    if idle time slots do not satisfy the required bandwidth, determining, by the second network device from the idle time slots based on the slot allocation strategy and an activation bandwidth, the first time slot that satisfies the activation bandwidth, wherein the activation bandwidth is less than the required bandwidth, and the activation bandwidth is a minimum required bandwidth at which the second network device is able to start transmission of the first service flow.
  12. The method according to claim 9, wherein the determining, by the second network device, a first time slot based on the slot allocation strategy and the required bandwidth comprises:
    if idle time slots do not satisfy the required bandwidth, determining, by the second network device based on the slot allocation strategy and a priority of the first service flow, the first time slot from time slots occupied by a second service flow, wherein the priority of the second service flow is lower than the priority of the first service flow.
  13. The method according to claim 9, wherein the determining, by the second network device, a first time slot based on the slot allocation strategy and the required bandwidth comprises:
    determining, by the second network device based on the slot allocation strategy, a first PHY link with a smallest physical interface number from available PHY links of a FlexE group; and
    determining, by the second network device based on the required bandwidth, the first time slot with smallest slot numbers from idle time slots of the first PHY link.
  14. The method according to claim 9, wherein the determining, by the second network device, a first time slot based on the slot allocation strategy and the required bandwidth comprises:
    determining, by the second network device based on the slot allocation strategy, a second PHY link with a smallest load from available PHY links of a FlexE group; and
    determining, by the second network device based on the required bandwidth, the first time slot with smallest slot numbers from idle time slots of the second PHY link.
  15. The method according to claim 9, wherein the determining, by the second network device, a first time slot based on the slot allocation strategy and the required bandwidth comprises:
    determining, by the second network device, the first time slot from idle time slots of a plurality of PHY links based on the slot allocation strategy and the required bandwidth, wherein the first time slot is evenly distributed across different PHY links of the plurality of PHY links.
  16. The method according to any one of claims 9 to 15, wherein after the determining, by the second network device, a first time slot based on the slot allocation strategy and the required bandwidth, the method further comprises:
    when the PHY link on which the first time slot is located fails, determining, by the second network device, a second time slot based on the slot allocation strategy and the required bandwidth of the first service flow, wherein the second time slot is different from the first time slot; and
    receiving, by the second network device, the first service flow from the first network device based on the second time slot.
  17. A network device, wherein the network device comprises:
    an obtaining module, configured to obtain a slot allocation strategy, wherein the slot allocation strategy is used to allocate time slots based on a required bandwidth of a first service flow;
    a determining module, configured to determine a first time slot based on the slot allocation strategy and the required bandwidth, wherein the first time slot is a time slot of a physical layer PHY link; and
    a sending module, configured to send the first service flow based on the first time slot.
  18. The network device according to claim 17, wherein the determining module is configured to: if idle time slots satisfy the required bandwidth, determine, from the idle time slots based on the slot allocation strategy and the required bandwidth, the first time slot that satisfies the required bandwidth.
  19. The network device according to claim 17, wherein the determining module is configured to: if idle time slots do not satisfy the required bandwidth, determine, from the idle time slots based on the slot allocation strategy and an activation bandwidth, the first time slot that satisfies the activation bandwidth, wherein the activation bandwidth is less than the required bandwidth, and the activation bandwidth is a minimum required bandwidth at which transmission of the first service flow can be started.
  20. The network device according to claim 17, wherein the determining module is configured to: if idle time slots do not satisfy the required bandwidth, determine, based on the slot allocation strategy and a priority of the first service flow, the first time slot from time slots occupied by a second service flow, wherein the priority of the second service flow is lower than the priority of the first service flow.
  21. The network device according to claim 17, wherein the determining module is configured to: determine, based on the slot allocation strategy, a first PHY link with a smallest physical interface number from available PHY links of a FlexE group; and determine, based on the required bandwidth, the first time slot with smallest slot numbers from idle time slots of the first PHY link.
  22. The network device according to claim 17, wherein the determining module is configured to: determine, based on the slot allocation strategy, a second PHY link with a smallest load from available PHY links of a FlexE group; and determine, based on the required bandwidth, the first time slot with smallest slot numbers from idle time slots of the second PHY link.
  23. The network device according to claim 17, wherein the determining module is configured to determine the first time slot from idle time slots of a plurality of PHY links based on the slot allocation strategy and the required bandwidth, wherein the first time slot is evenly distributed across different PHY links of the plurality of PHY links.
  24. The network device according to any one of claims 17 to 23, wherein
    the determining module is further configured to: when the PHY link on which the first time slot is located fails, determine a second time slot based on the slot allocation strategy and the required bandwidth of the first service flow, wherein the second time slot is different from the first time slot; and
    the sending module is further configured to send the first service flow based on the second time slot.
  25. A network device, wherein the network device comprises:
    an obtaining module, configured to obtain a slot allocation strategy, wherein the slot allocation strategy is used to allocate time slots based on a required bandwidth of a first service flow;
    a determining module, configured to determine a first time slot based on the slot allocation strategy and the required bandwidth, wherein the first time slot is a time slot of a physical layer PHY link; and
    a receiving module, configured to receive the first service flow based on the first time slot.
  26. The network device according to claim 25, wherein the determining module is configured to: if idle time slots satisfy the required bandwidth, determine, from the idle time slots based on the slot allocation strategy and the required bandwidth, the first time slot that satisfies the required bandwidth.
  27. The network device according to claim 25, wherein the determining module is configured to: if idle time slots do not satisfy the required bandwidth, determine, from the idle time slots based on the slot allocation strategy and an activation bandwidth, the first time slot that satisfies the activation bandwidth, wherein the activation bandwidth is less than the required bandwidth, and the activation bandwidth is a minimum required bandwidth at which transmission of the first service flow can be started.
  28. The network device according to claim 25, wherein the determining module is configured to: if idle time slots do not satisfy the required bandwidth, determine, based on the slot allocation strategy and a priority of the first service flow, the first time slot from time slots occupied by a second service flow, wherein the priority of the second service flow is lower than the priority of the first service flow.
  29. The network device according to claim 25, wherein the determining module is configured to: determine, based on the slot allocation strategy, a first PHY link with a smallest physical interface number from available PHY links of a FlexE group; and determine, based on the required bandwidth, the first time slot with smallest slot numbers from idle time slots of the first PHY link.
  30. The network device according to claim 25, wherein the determining module is configured to: determine, based on the slot allocation strategy, a second PHY link with a smallest load from available PHY links of a FlexE group; and determine, based on the required bandwidth, the first time slot with smallest slot numbers from idle time slots of the second PHY link.
  31. The network device according to claim 25, wherein the determining module is configured to determine the first time slot from idle time slots of a plurality of PHY links based on the slot allocation strategy and the required bandwidth, wherein the first time slot is evenly distributed across different PHY links of the plurality of PHY links.
  32. The network device according to any one of claims 25 to 31, wherein
    the determining module is further configured to: when the PHY link on which the first time slot is located fails, determine a second time slot based on the slot allocation strategy and the required bandwidth of the first service flow, wherein the second time slot is different from the first time slot; and
    the receiving module is further configured to receive the first service flow based on the second time slot.
PCT/CN2020/137333 2020-03-26 2020-12-17 基于FlexE传输业务流的方法及设备 WO2021189994A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20927878.7A EP4106284A4 (en) 2020-03-26 2020-12-17 SERVICE FLOW TRANSFER METHOD AND DEVICE BASED ON FLEXE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010225075.5 2020-03-26
CN202010225075.5A CN113452623B (zh) 2020-03-26 2020-03-26 基于FlexE传输业务流的方法及设备

Publications (1)

Publication Number Publication Date
WO2021189994A1 true WO2021189994A1 (zh) 2021-09-30

Family

ID=77807608

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/137333 WO2021189994A1 (zh) 2020-03-26 2020-12-17 基于FlexE传输业务流的方法及设备

Country Status (3)

Country Link
EP (1) EP4106284A4 (zh)
CN (1) CN113452623B (zh)
WO (1) WO2021189994A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116055021A (zh) * 2023-03-31 2023-05-02 之江实验室 一种多用户灵活以太网小颗粒时隙分配方法及装置

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113885307A (zh) * 2021-10-12 2022-01-04 广东安朴电力技术有限公司 Svg并机冗余控制方法、svg控制方法及控制系统
CN113973083B (zh) * 2021-10-26 2023-09-19 新华三信息安全技术有限公司 一种数据流传输方法及第一设备
CN116112452A (zh) * 2021-11-11 2023-05-12 华为技术有限公司 报文传输方法及通信装置
CN113890827B (zh) * 2021-12-03 2022-04-15 国网江苏省电力有限公司信息通信分公司 电力通信资源分配方法、装置、存储介质以及电子设备
WO2023138390A1 (zh) * 2022-01-24 2023-07-27 华为技术有限公司 一种时隙分配的方法、网络设备和系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803814A (zh) * 2015-11-26 2017-06-06 中兴通讯股份有限公司 一种灵活以太网路径的建立方法、装置及系统
CN108075903A (zh) * 2016-11-15 2018-05-25 华为技术有限公司 用于建立灵活以太网群组的方法和设备
CN108322367A (zh) * 2017-01-16 2018-07-24 中兴通讯股份有限公司 一种业务传递的方法、设备和系统
CN110691034A (zh) * 2015-07-17 2020-01-14 华为技术有限公司 传输灵活以太网的业务流的方法和装置
CN110856052A (zh) * 2019-11-13 2020-02-28 Ut斯达康通讯有限公司 支持多种粒度的FlexE实现方法、装置及电子设备

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104093009B (zh) * 2014-07-17 2018-09-11 重庆邮电大学 无线自组织网络中基于网络效用的视频传输方法
CN107204941A (zh) * 2016-03-18 2017-09-26 中兴通讯股份有限公司 一种灵活以太网路径建立的方法和装置
CN108243120B (zh) * 2016-12-26 2021-06-22 北京华为数字技术有限公司 基于灵活以太网的业务流传输方法、装置和通信系统
CN108632886B (zh) * 2017-03-21 2020-11-06 华为技术有限公司 一种业务处理方法及装置
CN109728853B (zh) * 2017-10-30 2020-09-11 深圳市中兴微电子技术有限公司 一种数据处理的方法、设备及存储介质
CN109729588B (zh) * 2017-10-31 2020-12-15 华为技术有限公司 业务数据传输方法及装置
CN111585778B (zh) * 2019-02-19 2022-02-25 华为技术有限公司 一种灵活以太网通信方法及网络设备
CN110166382B (zh) * 2019-05-31 2021-04-27 新华三技术有限公司 一种报文转发方法及装置
CN110912736B (zh) * 2019-11-13 2022-04-15 中国联合网络通信集团有限公司 一种资源配置方法及装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110691034A (zh) * 2015-07-17 2020-01-14 华为技术有限公司 传输灵活以太网的业务流的方法和装置
CN106803814A (zh) * 2015-11-26 2017-06-06 中兴通讯股份有限公司 一种灵活以太网路径的建立方法、装置及系统
CN108075903A (zh) * 2016-11-15 2018-05-25 华为技术有限公司 用于建立灵活以太网群组的方法和设备
US20190280797A1 (en) * 2016-11-15 2019-09-12 Huawei Technologies Co., Ltd. Flexible ethernet group establishment method and device
CN108322367A (zh) * 2017-01-16 2018-07-24 中兴通讯股份有限公司 一种业务传递的方法、设备和系统
CN110856052A (zh) * 2019-11-13 2020-02-28 Ut斯达康通讯有限公司 支持多种粒度的FlexE实现方法、装置及电子设备

Also Published As

Publication number Publication date
CN113452623A (zh) 2021-09-28
EP4106284A4 (en) 2023-08-09
CN113452623B (zh) 2023-11-14
EP4106284A1 (en) 2022-12-21

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20927878

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020927878

Country of ref document: EP

Effective date: 20220915

NENP Non-entry into the national phase

Ref country code: DE