WO2020168897A1 - A flexible Ethernet communication method and network device - Google Patents

A flexible Ethernet communication method and network device

Info

Publication number
WO2020168897A1
Authority
WO
WIPO (PCT)
Prior art keywords
network device
overhead
flexe
phys
phy
Prior art date
Application number
PCT/CN2020/073619
Other languages
English (en)
French (fr)
Inventor
李春荣
陈井凤
孙洪亮
胡俊
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP20759402.9A priority Critical patent/EP3905593A4/en
Priority to JP2021548641A priority patent/JP7163508B2/ja
Priority to MX2021009929A priority patent/MX2021009929A/es
Priority to KR1020217026739A priority patent/KR102509386B1/ko
Publication of WO2020168897A1 publication Critical patent/WO2020168897A1/zh
Priority to US17/405,452 priority patent/US20210385127A1/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J3/00Time-division multiplex systems
    • H04J3/16Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H04J3/1605Fixed allocated frame structures
    • H04J3/1652Optical Transport Network [OTN]
    • H04J3/1658Optical Transport Network [OTN] carrying packets or ATM cells
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06Management of faults, events, alarms or notifications
    • H04L41/0654Management of faults, events, alarms or notifications using network fault recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J3/00Time-division multiplex systems
    • H04J3/02Details
    • H04J3/14Monitoring arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803Home automation networks
    • H04L12/2823Reporting information sensed by appliance or service execution status of appliance services in a home automation network
    • H04L12/2827Reporting to a device within the home network; wherein the reception of the information reported automatically triggers the execution of a home appliance functionality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/40Bus networks
    • H04L12/407Bus networks with decentralised control
    • H04L12/413Bus networks with decentralised control with random access, e.g. carrier-sense multiple-access with collision detection [CSMA-CD]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06Management of faults, events, alarms or notifications
    • H04L41/0604Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
    • H04L41/0613Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time based on the type or category of the network elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06Management of faults, events, alarms or notifications
    • H04L41/0631Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06Management of faults, events, alarms or notifications
    • H04L41/0654Management of faults, events, alarms or notifications using network fault recovery
    • H04L41/0659Management of faults, events, alarms or notifications using network fault recovery by isolating or reconfiguring faulty entities
    • H04L41/0661Management of faults, events, alarms or notifications using network fault recovery by isolating or reconfiguring faulty entities by reconfiguring faulty entities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/28Routing or path finding of packets in data switching networks using route fault recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J2203/00Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J2203/0001Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J2203/0057Operations, administration and maintenance [OAM]
    • H04J2203/006Fault tolerance and recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J2203/00Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J2203/0001Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J2203/0073Services, e.g. multimedia, GOS, QOS
    • H04J2203/0082Interaction of SDH with non-ATM protocols
    • H04J2203/0085Support of Ethernet

Definitions

  • This application relates to the field of communication technologies, and in particular, to a flexible Ethernet (Flexible Ethernet, FlexE) communication method, network device, and system.
  • FlexE technology is a low-cost, highly reliable carrier-class interface technology built on high-speed Ethernet interfaces; it works by decoupling the Ethernet media access control (MAC) layer from the physical layer. FlexE realizes this decoupling of the MAC layer and the physical layer by introducing a flexible Ethernet shim (FlexE shim) layer on the basis of IEEE 802.3, thereby realizing flexible rate matching.
  • FlexE technology meets the requirements of flexible-bandwidth port applications by bonding multiple Ethernet physical layer devices (hereinafter PHYs) into a flexible Ethernet group (FlexE group) and by channelizing the physical layer. Therefore, the MAC rate provided by FlexE can be greater than the rate of a single PHY (through bonding) or less than the rate of a single PHY (through channelization).
  • Embodiments of the present application provide a FlexE communication method, which can reduce the impact of a PHY in a fault state on the client services carried by PHYs in the normal state in the same FlexE group.
  • In a first aspect, this application provides a flexible Ethernet (FlexE) communication method. The method includes:
  • the first network device receives p first overhead blocks sent by the second network device through p physical layer devices (PHYs) in a flexible Ethernet group (FlexE group), where the p first overhead blocks correspond one-to-one to p FlexE overhead frames;
  • the p FlexE overhead frames correspond one-to-one to the p PHYs, and the FlexE group is composed of n PHYs, where n ≥ 2 and n is an integer;
  • the first network device saves the p first overhead blocks into p memories of n memories, the p first overhead blocks corresponding one-to-one to the p memories; and the first network device simultaneously reads the p first overhead blocks from the p memories.
  • the method further includes:
  • the first network device sends continuous Ethernet Local Fault Ordered Sets (LF) on the time slots mapped to the clients carried by the m PHYs.
  • the first network device sending continuous LF on the time slots mapped to the clients carried by the m PHYs includes:
  • the first network device writes the continuous LF into the m memories corresponding to the m PHYs.
  • Before the first network device stores the p first overhead blocks in p memories of the n memories, the method further includes:
  • the first network device determines that the first PHY is in a fault state, and the first PHY is one of the m PHYs;
  • the first network device issues an alarm, and the alarm indicates that the FlexE group has failed
  • the first network device determines that the failure type of the first PHY belongs to the first failure type, and stops the alarm.
  • When any PHY in the FlexE group is in a fault state, the network device issues an alarm to indicate that the FlexE group has failed, and it does not stop the alarm until all PHYs in the FlexE group are in the normal state.
  • The alarm issued by the first network device can also be understood as the first network device entering the FlexE group alarm state. In the alarm state, the services of the entire FlexE group are affected and cannot work normally.
  • In the method of the present application, after the first network device issues an alarm, it determines whether to stop the alarm by judging the failure type of the PHY, thereby avoiding interruption of the client services carried by the normal PHYs.
  • Before the first network device saves the p first overhead blocks into p memories of the n memories, the method further includes:
  • the first network device determines that the first PHY is in a fault state, and the first PHY is one of the m PHYs;
  • the first network device determines that the failure type of the first PHY belongs to the first failure type, and avoids issuing an alarm indicating that the FlexE group has failed.
  • In this design, the first overhead block of a failed PHY is not used as a condition for judging PHY alignment. That is, the PHYs of the FlexE group are considered aligned once the first overhead blocks of all PHYs currently in the normal state in the FlexE group have been stored in their corresponding memories.
  • In a second aspect, this application provides a flexible Ethernet (FlexE) communication method.
  • the method includes:
  • the first network device receives n first overhead blocks sent by the second network device through a flexible Ethernet group (FlexE group), where the FlexE group is composed of n physical layer devices (PHYs), the n first overhead blocks correspond one-to-one to n FlexE overhead frames, and the n FlexE overhead frames correspond one-to-one to the n PHYs, where n ≥ 2 and n is an integer.
  • the first network device stores the n first overhead blocks in n memories, and the n first overhead blocks correspond one-to-one to the n memories.
  • the first network device reads the n first overhead blocks from the n memories at the same time, where the n first overhead blocks are read after a preset time period T has elapsed since the specific first overhead block was saved in its corresponding memory.
  • the specific first overhead block is the last of the n first overhead blocks to be saved.
  • the duration T is greater than or equal to 1 clock cycle, and the clock cycle is the duration required for the first network device to perform a read operation on a memory.
  • The greater the value of T, the greater the delay deviation that can be tolerated.
  • those skilled in the art can configure the value of T according to actual network scenarios.
  • the method further includes:
  • the first network device receives p first overhead blocks sent by the second network device through p PHYs in the FlexE group.
  • the p first overhead blocks correspond one-to-one to p FlexE overhead frames.
  • the first network device saves the p first overhead blocks into p memories of the n memories, and the p first overhead blocks correspond one-to-one to the p memories.
  • the first network device simultaneously reads the p first overhead blocks from the p memories.
  • the method further includes:
  • the first network device sends continuous Ethernet Local Fault Ordered Sets (LF) on the time slots mapped to the clients carried by the m PHYs.
  • the first network device sending continuous LF on the time slots mapped to the clients carried by the m PHYs includes:
  • the first network device writes the continuous LF into the m memories corresponding to the m PHYs.
  • the method further includes:
  • the first network device determines that a first PHY has failed, and the first PHY is one of the m PHYs;
  • the first network device issues an alarm, and the alarm indicates that the FlexE group has failed
  • the first network device determines that the failure type of the first PHY belongs to the first failure type, and stops the alarm.
  • the method further includes:
  • the first network device determines that a first PHY has failed, and the first PHY is one of the m PHYs;
  • the first network device determines that the failure type of the first PHY belongs to the first failure type, so as to avoid triggering an alarm indicating that the FlexE group has failed.
  • In this way, the buffer duration T can absorb the delay changes that may occur when a failed PHY recovers, and the resulting PHY realignment is avoided. Service interruption is thereby avoided, and lossless recovery of a failed PHY can be achieved.
  • In a third aspect, this application provides a network device for implementing the method of the first aspect, the second aspect, any possible design of the first aspect, or any possible design of the second aspect.
  • the network device includes a receiver, a processor, and a memory.
  • In a fourth aspect, the present application provides a computer-readable storage medium that stores instructions which, when run on a computer, cause the computer to execute the method of the first aspect, the second aspect, any possible design of the first aspect, or any possible design of the second aspect.
  • In a fifth aspect, the present application provides a computer-readable storage medium including a program for implementing the method of the first aspect, the second aspect, any possible design of the first aspect, or any possible design of the second aspect.
  • In a sixth aspect, this application provides a communication system, including the network device provided in the third aspect, which is used to implement the method of the first aspect, the second aspect, any possible design of the first aspect, or any possible design of the second aspect.
  • FIG. 1A is a schematic diagram of code type definition of 64B/66B encoding in an embodiment of this application;
  • FIG. 1B is a schematic diagram of the code pattern definition of the idle block in an embodiment of the application.
  • Figure 2 is a schematic diagram of the FlexE standard architecture
  • Figure 3 is a schematic diagram of a network scenario in an embodiment of the application.
  • FIG. 4 is a schematic diagram of an architecture for transmitting information using FlexE technology in an embodiment of the application
  • FIG. 5 is a schematic diagram of the code pattern definition of the first overhead block in an embodiment of the application.
  • FIG. 6 is a schematic flowchart of a communication method for fault isolation provided by an embodiment of this application.
  • FIG. 7 is a schematic flowchart of a communication method for failure recovery provided by an embodiment of the application.
  • FIG. 8 is a schematic flowchart of another communication method for fault isolation provided by an embodiment of this application.
  • FIG. 9 is a schematic flowchart of another communication method for failure recovery provided by an embodiment of this application.
  • FIG. 10 is a schematic structural diagram of a network device provided by an embodiment of this application.
  • Ethernet ports usually appear as data-oriented logical concepts, called logical ports or simply ports, and Ethernet physical interfaces appear as hardware concepts, called physical interfaces or simply interfaces.
  • an Ethernet port is marked with a MAC address.
  • the speed of the Ethernet port is determined on the basis of the speed of the Ethernet physical interface.
  • the maximum bandwidth of an Ethernet port corresponds to the bandwidth of an Ethernet physical interface, such as 10 megabits per second (Mbps), 100Mbps, 1000Mbps (1Gbps), 10Gbps, 40Gbps, 100Gbps and 400Gbps Ethernet Physical interface.
  • Ethernet has been widely used and has developed by leaps and bounds over a considerable period of time.
  • The Ethernet port rate has typically grown in steps of 10×, evolving continuously from 10 Mbps to 100 Mbps, 1000 Mbps (1 Gbps), 10 Gbps, 40 Gbps, 100 Gbps, and 400 Gbps.
  • The more the technology develops, the larger the gaps between these bandwidth granularities become, and the more easily they deviate from what actual applications require.
  • The bandwidth growth required by mainstream applications does not follow such 10-fold steps; rates such as 50 Gbps, 75 Gbps, and 200 Gbps are required instead.
  • the industry hopes to provide support for Ethernet ports (virtual connections) with bandwidths of 50Gbps, 60Gbps, 75Gbps, 200Gbps, and 150Gbps.
  • ports with flexible bandwidth can be provided.
  • These ports can use one or several Ethernet physical interfaces together.
  • two 40GE ports and two 10GE ports share a 100G physical interface;
  • Flexible rate adjustments are made in response to changes in demand, such as adjusting from 200Gbps to 330Gbps, or 50Gbps to 20Gbps, to improve port efficiency or extend its life cycle.
  • they can be cascaded and bundled to support the stacking increase of logical port rates (for example, two 100GE physical interfaces are cascaded and bundled to support 200GE logical ports).
  • FlexE supports functions such as sub-rate, channelization, and reverse multiplexing for Ethernet services.
  • FlexE can support 250G Ethernet services (MAC code stream) to be transmitted using 3 existing 100GE physical interfaces.
  • FlexE can support the transmission of 200G Ethernet services using two existing 100GE physical medium dependent (PMD) sublayers.
  • FlexE can support several logical ports to use one or more physical interfaces together, and can support multiplexing multiple low-rate Ethernet services into high-rate flexible Ethernet.
  • Based on the service flow aggregation function of Ethernet technology, FlexE can realize seamless interconnection with the Ethernet interfaces of the underlying service network.
  • the introduction of these FlexE sub-rate, channelization and reverse multiplexing functions greatly expands the application of Ethernet, enhances the flexibility of Ethernet applications, and makes Ethernet technology gradually penetrate the field of transport networks.
  • FlexE provides a feasible evolution direction for the virtualization of Ethernet physical links.
  • Flexible Ethernet needs to support several virtual Ethernet data connections on a group of cascaded physical interfaces. For example, four 100GE physical interfaces are cascaded and bundled to support several logical ports. If the bandwidth of some of these logical ports decreases while the bandwidth of others increases, and the total reduced bandwidth equals the total increased bandwidth, the bandwidths of the logical ports can be flexibly adjusted while they continue to share the four 100GE physical interfaces.
  • FlexE draws on Synchronous Digital Hierarchy (SDH)/Optical Transport Network (OTN) technology, constructs a fixed frame format for physical interface transmission, and divides it into TDM time slots.
  • FlexE's TDM slot division granularity is 66 bits, which can correspond to a 64B/66B bit block.
  • A FlexE frame contains 8 rows. The first 64B/66B bit block in each row is a FlexE overhead block, and the area following the overhead block is the payload area used for time slot division. With a granularity of 66 bits, this corresponds to 20×1023 66-bit carrier spaces.
  • the bandwidth of the 100GE interface is divided into 20 time slots, and the bandwidth of each time slot is about 5Gbps.
  • FlexE implements multiple transmission channels on a single physical interface through interleaving and multiplexing, that is, multiple time slots.
  • Several physical interfaces can be bundled, and all the time slots of those physical interfaces can be combined to carry an Ethernet logical port. For example, 10GE requires 2 time slots, 25GE requires 5 time slots, and so on. The 64B/66B bit blocks visible on the logical port are still transmitted sequentially. Each logical port corresponds to a MAC and transmits the corresponding Ethernet packets; the start and end of a packet and the identification of idle padding are the same as in traditional Ethernet. FlexE is only an interface technology; the related switching technology can be based on existing Ethernet packets or can be cross-connected based on FlexE, which is not repeated here.
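  • The following is an illustrative, non-normative sketch (not taken from this application) of the slot arithmetic described above: with a 100GE PHY divided into 20 time slots of roughly 5 Gbps each, the number of calendar slots a client needs can be estimated as follows.

      import math

      SLOTS_PER_100GE = 20                      # each 100GE PHY is divided into 20 slots
      SLOT_RATE_GBPS = 100 / SLOTS_PER_100GE    # about 5 Gbps per slot

      def slots_needed(client_rate_gbps: float) -> int:
          """Estimate how many ~5 Gbps calendar slots a client of the given rate needs."""
          return math.ceil(client_rate_gbps / SLOT_RATE_GBPS)

      for rate in (10, 25, 50, 75):
          print(f"{rate}G client -> {slots_needed(rate)} slots")
      # 10G -> 2 slots and 25G -> 5 slots, matching the example above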
  • bit blocks mentioned in this application can be M1/M2 bit blocks, or M1B/M2B bit blocks.
  • M1/M2 represents a coding scheme, where M1 represents the number of payload bits in each bit block, M2 represents the total number of bits in each bit block, M1 and M2 are positive integers, and M2 > M1.
  • This kind of M1/M2 bit block stream is transmitted on the Ethernet physical layer link.
  • 1G Ethernet uses 8/10 bit encoding, and a 1GE physical layer link transmits an 8/10 bit block stream;
  • 10GE/40GE/100GE uses 64/66 bit encoding, and a 10GE/40GE/100GE physical layer link transmits a 64/66 bit block stream.
  • other encoding methods will also appear, such as 128/130Bit encoding, 256/258Bit encoding, etc.
  • For the M1/M2 bit block stream there are different types of bit blocks and they are clearly specified in the standard.
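  • As a brief illustration of the M1/M2 notation (a sketch, not part of the application), the payload efficiency of such a block code is simply M1/M2:

      def coding_efficiency(m1: int, m2: int) -> float:
          """Payload efficiency of an M1/M2 block code (M1 payload bits per M2-bit block)."""
          assert 0 < m1 < m2
          return m1 / m2

      for m1, m2 in ((8, 10), (64, 66), (128, 130), (256, 258)):
          print(f"{m1}B/{m2}B efficiency = {coding_efficiency(m1, m2):.4f}")
      # 64B/66B carries 64 payload bits in every 66-bit block (~96.97% efficiency)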
  • FIG. 1A includes 16 code type definitions, and each row represents the code type definition of one bit block. Among them, D0–D7 represent data bytes, C0–C7 represent control bytes, S0 represents the start byte, and T0–T7 represent end bytes. The second row corresponds to the code pattern definition of the idle bit block (idle block); the idle bit block can be represented by /I/, as shown in FIG. 1B.
  • Line 7 corresponds to the code definition of the start block, which can be represented by /S/.
  • Line 8 corresponds to the code type definition of the O code (for example, OAM code block) code block, and the O code code block can be represented by /O/.
  • Lines 9 ⁇ 16 correspond to the code definitions of the 8 end blocks, and the 8 end blocks can be represented by /T/ uniformly.
  • FlexE technology realizes the decoupling of the MAC layer and the physical layer by introducing the FlexE shim layer on the basis of IEEE 802.3, thereby realizing flexible rate matching; its implementation is shown in FIG. 2.
  • part of the FlexE architecture includes the MAC sublayer, FlexE shim layer, and physical layer.
  • the MAC sublayer belongs to a sublayer of the data link layer, and is connected to the logical link control sublayer.
  • the physical layer can be further divided into a physical coding sublayer (English: physical coding sublayer, PCS), a physical medium attachment (physical medium attachment, PMA) sublayer, and a PMD sublayer.
  • the functions of the above-mentioned layers are all realized by corresponding chips or modules.
  • In the process of sending signals, the PCS is used to encode data, scramble it, insert overhead (OH), and insert alignment markers (AM); in the process of receiving signals, the PCS performs the reverse of these operations.
  • Sending and receiving signals can be realized by different functional modules of the PCS.
  • the main functions of the PMA sublayer are link monitoring, carrier monitoring, encoding and decoding, sending clock synthesis, and receiving clock recovery.
  • the main functions of the PMD sublayer are scrambling/descrambling of the data stream, encoding and decoding, and DC recovery and adaptive equalization of the received signal.
  • the FlexE architecture applicable to this application is not limited to this.
  • RS: reconciliation sublayer
  • FEC: forward error correction
  • the FlexE communication system 100 includes a network device 1, a network device 2, a user device 1, and a user device 2.
  • the network device 1 may be an intermediate node. At this time, the network device 1 is connected to the user equipment 1 through other network devices.
  • the network device 1 may be an edge node. In this case, the network device 1 is directly connected to the user equipment 1.
  • the network device 2 may be an intermediate node.
  • the network device 2 is connected to the user equipment 2 through other network devices.
  • the network device 2 may also be an edge node.
  • the network device 2 is directly connected to the user equipment 2.
  • the network device 1 includes a FlexE interface 1, and the network device 2 includes a FlexE interface 2. FlexE interface 1 is adjacent to FlexE interface 2.
  • Each FlexE interface includes a sending port and a receiving port.
  • the difference from a traditional Ethernet interface is that one FlexE interface can carry multiple clients, and the FlexE interface as a logical interface can be composed of multiple physical interfaces.
  • the flow of business data in the forward channel shown in FIG. 3 is shown by the solid arrow in FIG. 3, and the flow of business data in the reverse channel is shown by the dotted arrow in FIG. 3.
  • the transmission channel in the embodiment of the present invention takes the forward channel as an example, and the flow direction of the service data in the transmission channel is user equipment 2 ⁇ network equipment 2 ⁇ network equipment 1 ⁇ user equipment 1.
  • FIG. 3 only exemplarily shows two network devices and two user equipments, and the network may include any other number of network devices and user equipment, which is not limited in the embodiment of the present application.
  • the FlexE communication system shown in FIG. 3 is only an example, and the application scenario of the FlexE communication system provided in this application is not limited to the scenario shown in FIG. 3.
  • the technical solution provided in this application is applicable to all network scenarios where FlexE technology is used for data transmission.
  • a FlexE group interface is a logical interface bound by a group of physical interfaces.
  • the FlexE group interface carries a total of 6 clients, client1 to client6.
  • the data mapping of client1 and client2 are transmitted on PHY1; the data mapping of client3 is transmitted on PHY2 and PHY3; the data mapping of client4 is transmitted on PHY3; the data mapping of client5 and client6 is transmitted on PHY4. It can be seen that different FlexE clients are mapped and transmitted on the FlexE group to realize the bundling function. among them:
  • FlexE group: A FlexE group can also be called a bundle group.
  • the multiple PHYs included in each FlexE group have a logical binding relationship.
  • the so-called logical bundling relationship means that different PHYs may not have a physical connection relationship. Therefore, multiple PHYs in the FlexE group may be physically independent.
  • The network devices in FlexE can identify which PHYs are included in a FlexE group through the numbers of the PHYs, so as to realize the logical bundling of multiple PHYs.
  • the number of each PHY can be identified by a number between 1 ⁇ 254, and 0 and 255 are reserved numbers.
  • the number of a PHY can correspond to an interface on a network device. Two adjacent network devices need to use the same number to identify the same PHY.
  • the number of each PHY included in a FlexE group need not be consecutive. Generally, there is a FlexE group between two network devices, but this application does not limit that there is only one FlexE group between two network devices, that is, there may be multiple FlexE groups between two network devices.
  • One PHY can be used to carry at least one client, and one client can transmit on at least one PHY.
  • the PHY includes the physical layer device of the transmitting device and the physical layer device of the receiving device.
  • the PHY in FlexE also includes devices for performing FlexE shim layer functions.
  • the physical layer device of the sending device may also be called the sending PHY or the PHY in the sending direction, and the physical layer device of the receiving device may also be called the receiving PHY or the PHY in the receiving direction.
  • FlexE client: Corresponds to the various user interfaces of the network, consistent with the traditional service interfaces in existing IP/Ethernet networks. A FlexE client can be flexibly configured according to bandwidth requirements and supports Ethernet MAC data streams of various rates (such as 10G, 40G, and n×25G data streams, and even non-standard-rate data streams); for example, the data stream can be 64B/66B encoded and passed to the FlexE shim layer. A FlexE client can be interpreted as an Ethernet stream based on a physical address. Clients sent through the same FlexE group need to share the same clock, and these clients need to adapt to their allocated time slot rates.
  • FlexE shim: An additional logical layer inserted between the MAC and the PHY (PCS sublayer) of the traditional Ethernet architecture. The core of FlexE technology is realized through a calendar-based time slot distribution mechanism.
  • the main function of FlexE shim is to slice data according to the same clock and encapsulate the sliced data into pre-divided slots. Then, according to the pre-configured time slot configuration table, each divided time slot is mapped to the PHY in the FlexE group for transmission. Among them, each time slot is mapped to a PHY in the FlexE group.
  • the FlexE shim layer reflects the time slot mapping relationship between the client and the FlexE group and the calendar working mechanism by defining overhead frames (English: overhead frame)/overhead multiframe (English: overhead Multiframe). It should be noted that the above overhead frame may also be called a flexible Ethernet overhead frame (English: FlexE overhead frame), and the above overhead multiframe may also be called a flexible Ethernet overhead multiframe (English: FlexE overhead Multiframe).
  • the FlexE shim layer provides an in-band management channel through overhead, supports the transfer of configuration and management information between the two FlexE interfaces that are connected, and realizes the establishment of automatic link negotiation.
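  • The following sketch illustrates, with hypothetical data structures and names, the calendar-style slot distribution just described: each slot of each PHY is assigned to a client, and a client's 66-bit blocks are spread round-robin over its allocated slots. It is an illustration only, not the shim implementation defined by this application.

      from collections import defaultdict

      # calendar[phy_number][slot_index] = client_id, or None for an unused slot
      calendar = {
          1: [1, 1, 2, 2] + [None] * 16,   # PHY1 carries client1 and client2
          2: [3] * 20,                     # PHY2 carries client3
      }

      def slots_of_client(client_id):
          """Return the (phy, slot_index) pairs allocated to a client, in calendar order."""
          return [(phy, idx)
                  for phy, slots in sorted(calendar.items())
                  for idx, owner in enumerate(slots)
                  if owner == client_id]

      def distribute(client_id, blocks):
          """Map each 66-bit block of a client onto its slots, round-robin."""
          slots = slots_of_client(client_id)
          assignment = defaultdict(list)
          for i, block in enumerate(blocks):
              assignment[slots[i % len(slots)]].append(block)
          return dict(assignment)

      print(distribute(1, ["blk0", "blk1", "blk2", "blk3"]))
      # client1's blocks alternate over its two slots on PHY1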
  • an overhead multiframe is composed of 32 overhead frames, and an overhead frame has 8 overhead blocks (English: overhead block), and the above overhead block may also be called an overhead slot (English: overhead slot).
  • The overhead block may be, for example, a 64B/66B code block, which appears once every 1023×20 blocks, but the fields contained in each overhead block are different.
  • The first overhead block of each overhead frame (hereinafter referred to as the first overhead block) contains information such as the control character "0x4B" and the "O code" character "0x5".
  • The two bits of the synchronization header of the first overhead block are 10;
  • the control block type is 0x4B;
  • the "O code" character of the first overhead block is 0x5.
  • The control character "0x4B" and the "O code" character "0x5" are matched between the two interconnected FlexE interfaces, so that each PHY can lock onto and determine the first overhead block of the overhead frame it transmits.
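  • As an illustration of the matching rule above, the sketch below checks a 66-bit block for the "10" synchronization header, the 0x4B control block type, and the 0x5 "O code". The field offsets follow one common reading of the 64B/66B ordered-set layout and are simplified for illustration; they are not asserted to be the normative bit positions.

      def is_first_overhead_block(block: int) -> bool:
          """block is a 66-bit value with the 2-bit sync header in the top bits."""
          sync_header = (block >> 64) & 0b11    # 2-bit synchronization header, expected 0b10
          block_type = (block >> 56) & 0xFF     # 8-bit control block type, expected 0x4B
          o_code = (block >> 28) & 0xF          # 4-bit "O code" field, expected 0x5 (assumed offset)
          return sync_header == 0b10 and block_type == 0x4B and o_code == 0x5

      # Example: build a block with sync=10, type=0x4B, O code=0x5 and test it.
      candidate = (0b10 << 64) | (0x4B << 56) | (0x5 << 28)
      assert is_first_overhead_block(candidate)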
  • the first overhead block transmitted on each PHY serves as an identifier (English: marker), which is used to align the PHYs bound to the FlexE group in the receiving direction. Aligning the PHYs of the FlexE group can achieve data synchronization and locking, and subsequently, the data carried by each PHY can be read from the memory synchronously.
  • the first code block of each overhead frame can also be called the frame header of the overhead frame. Aligning each PHY of the FlexE group essentially refers to aligning the first overhead block of the overhead frame of each PHY. The following describes the PHY alignment process with an example in conjunction with the scenario of FIG. 4.
  • the network device 2 simultaneously sends the overhead frame 1 to the overhead frame 4 through PHY1, PHY2, PHY3, and PHY4.
  • the overhead frame 1 to the overhead frame 4 include the first overhead block 1 to the first overhead block 4, respectively.
  • the first overhead block 1, the first overhead block 2, the first overhead block 3, and the first overhead block 4 correspond to PHY1, PHY2, PHY3, and PHY4 respectively.
  • network device 2 sends overhead frame 1 to overhead frame 4 at the same time, but the length of the different fibers corresponding to PHY1, PHY2, PHY3, and PHY4 may be different. Therefore, the first overhead block 1 to the first overhead Block 4 may not be received by network device 1 at the same time. For example, the network device 1 receives the first overhead block 1 to the first overhead block 4 in the order of the first overhead block 1 -> the first overhead block 2 -> the first overhead block 3 -> the first overhead block 4. After the network device 1 receives the first overhead block 1, it stores the first overhead block 1 in the memory 1 corresponding to the PHY1.
  • the network device 1 stores the subsequently received first overhead block 2 in the memory 2 corresponding to PHY2, and stores the received first overhead block 3 in the memory 3 corresponding to PHY3.
  • After the network device 1 receives the first overhead block 4 transmitted on PHY4 and stores it in the memory 4 corresponding to PHY4, it immediately starts to read each first overhead block and the other cached data from each memory at the same time.
  • "Start immediately" means that once the last first overhead block 4 has been cached in its memory, the read operations on memory 1 to memory 4 are started at the same time.
  • In other words, the time that the last first overhead block 4 waits in the buffer is zero. That is, for the last-arriving first overhead block 4, the interval between the write operation in which the network device 1 buffers the first overhead block 4 into the memory 4 and the read operation in which it reads the first overhead block 4 from the memory 4 is 0.
  • PHY alignment can also be called FlexE group deskew.
  • Through PHY alignment, the delay deviation between the PHYs is eliminated, thereby realizing time slot alignment among all PHYs in the FlexE group.
  • the aforementioned delay deviation is caused by different fiber lengths, for example.
  • After alignment, the network device 1 can simultaneously receive the subsequently sent first overhead block of each PHY, simultaneously cache each first overhead block in its corresponding memory, and simultaneously read the stored data from each memory.
  • the data of each client can be recovered according to the time slot.
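  • The alignment procedure described above can be summarized with the following sketch (hypothetical class and method names): blocks from each PHY are buffered in a per-PHY memory, and synchronous reads start only after the first overhead block of every PHY in the group has been buffered.

      from collections import deque

      class FlexEDeskew:
          """Toy model of per-PHY memories and the deskew (alignment) condition."""

          def __init__(self, phy_numbers):
              self.mem = {phy: deque() for phy in phy_numbers}         # one memory per PHY
              self.seen_marker = {phy: False for phy in phy_numbers}   # first overhead block seen?
              self.aligned = False

          def receive(self, phy, block, is_first_overhead_block):
              """Buffer a block received on `phy` and track its alignment marker."""
              self.mem[phy].append(block)
              if is_first_overhead_block:
                  self.seen_marker[phy] = True
              if all(self.seen_marker.values()):
                  self.aligned = True   # group deskewed: synchronous reads may start

          def read_cycle(self):
              """One clock cycle: read one block from every PHY memory at the same time."""
              if not self.aligned:
                  return None
              return {phy: q.popleft() for phy, q in self.mem.items() if q}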
  • Solution 1: Currently, the FlexE standard specified by the OIF defines that when one or more PHYs in a FlexE group fail, continuous Ethernet Local Fault Ordered Sets (hereinafter LF) are sent to all FlexE clients in the FlexE group; that is, the network device in the receiving direction writes continuous LF into the memories corresponding to all PHYs in the FlexE group. These operations cause all client services of the FlexE group to be interrupted.
  • Solution 2: Use automatic protection switching (APS) or another protection mechanism to switch from the working FlexE group to a protection FlexE group, which then carries the client services. However, these operations also cause the services of all clients in the FlexE group to be interrupted during the switching process, and the interruption may last as long as 50 ms, for example.
  • Solution 3: When PHY4 fails, the network device removes the failed PHY4 from the FlexE group, creates a new FlexE group that does not include PHY4, and uses the new FlexE group to continue to carry client services. However, these operations also cause all client services in the FlexE group to be interrupted while the group is being rebuilt.
  • this application proposes a method 100 for fault isolation.
  • the method 100 provided by the embodiment of the present application will be described in detail below with reference to FIG. 6.
  • the network architecture of the application method 100 includes a network device 1 and a network device 2.
  • the network device 1 may be the network device 1 shown in FIG. 3 or FIG. 4
  • The network device 2 may be the network device 2 shown in FIG. 3 or FIG. 4. Network device 1 and network device 2 are connected through the FlexE group.
  • the network architecture may be the network architecture shown in FIG. 3 or FIG. 4.
  • the following uses the architecture shown in FIG. 4 as an example to introduce the method 100.
  • the method 100 includes: in time period 1, the following operations S101 to S104 are performed.
  • the network device 2 simultaneously sends three FlexE overhead frames to the network device 1 through PHY1, PHY2 and PHY3 in the FlexE group.
  • the network device 2 sends the FlexE overhead frame 1 to the network device 1 through the PHY1, and the FlexE overhead frame 1 includes the first overhead block 1.
  • the network device 2 sends the FlexE overhead frame 2 to the network device 1 through the PHY2, and the FlexE overhead frame 2 includes the first overhead block 2.
  • The network device 2 sends the FlexE overhead frame 3 to the network device 1 through the PHY3, and the FlexE overhead frame 3 includes the first overhead block 3.
  • PHY4 in the FlexE group is in a fault state, and PHY1, PHY2 and PHY3 are all in a normal working state.
  • Although network device 2 may still send the corresponding FlexE overhead frame through PHY4, network device 1 cannot receive that FlexE overhead frame.
  • the network device 1 receives the first overhead block 1, the first overhead block 2 and the first overhead block 3 through PHY1, PHY2 and PHY3.
  • The network device 1 saves the three received first overhead blocks into three memories, and the three first overhead blocks correspond one-to-one to the three memories.
  • each PHY has a corresponding memory for storing PHY-related data.
  • The network device 1 simultaneously reads the three first overhead blocks from the three memories.
  • In this method, the first overhead block of a failed PHY is not used as a condition for judging PHY alignment. That is, the PHYs of the FlexE group are considered aligned once the first overhead blocks of all PHYs currently in the normal state in the FlexE group have been stored in their corresponding memories.
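  • A minimal sketch of this relaxed alignment condition (hypothetical inputs; not the implementation of this application) only requires markers from PHYs that are currently in the normal state:

      def group_aligned(seen_marker: dict, phy_state: dict) -> bool:
          """Group is aligned once every normal-state PHY's first overhead block is buffered."""
          normal_phys = [phy for phy, state in phy_state.items() if state == "normal"]
          return all(seen_marker[phy] for phy in normal_phys)

      # Example: PHY4 is faulty, so alignment is declared from PHY1-PHY3 alone.
      print(group_aligned({1: True, 2: True, 3: True, 4: False},
                          {1: "normal", 2: "normal", 3: "normal", 4: "fault"}))   # True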
  • the method 100 further includes:
  • The network device 1 sends continuous LF on the time slots mapped to the clients carried by PHY4, which is in the fault state.
  • The network device 1 may send continuous LF on the time slots mapped to the clients carried by the faulty PHY4 in, but not limited to, the following manners.
  • Method 1: The network device 1 writes continuous LF into the memory corresponding to PHY4, which is in the fault state.
  • When network device 1 uses FlexE cross-connect technology to transmit data, it writes LF into the memory corresponding to the faulty PHY, so that when the client services carried by the faulty PHY are forwarded to the downstream device, LF is inserted into those clients and continues to be forwarded to the downstream device.
  • Based on the LF, the sink device can recognize that the client services carried by PHY4 contain errors. In this way, the erroneous data can be discarded in time, which avoids providing wrong data to the user.
  • Method 2: When PHY4 fails, network device 1 does not write LF into the memory corresponding to PHY4. In this case, it may write the actually received data, write idle blocks, or write no data.
  • When network device 1 recovers the clients carried by PHY4, it writes LF into the time slots mapped to those clients. In a specific implementation, the network device 1 reads the cached data from the PHY memories, recovers the client data, and stores the client data into the memory corresponding to each client; for a client carried by the faulty PHY4, it writes continuous LF into the memory corresponding to that client.
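  • The sketch below illustrates Method 1 above in simplified form (the names and the LF placeholder are hypothetical): continuous LF is written only into the memories of clients that ride the faulty PHY, while clients on normal PHYs are left untouched.

      LF = "LOCAL_FAULT_ORDERED_SET"   # placeholder for an LF ordered-set block

      def fill_faulty_clients_with_lf(client_memories, client_to_phys, faulty_phys, n_blocks):
          """Write n_blocks of LF into the memory of every client mapped onto a faulty PHY."""
          for client, phys in client_to_phys.items():
              if any(phy in faulty_phys for phy in phys):
                  client_memories[client].extend([LF] * n_blocks)

      mems = {"client1": [], "client5": [], "client6": []}
      fill_faulty_clients_with_lf(
          mems,
          {"client1": [1], "client5": [4], "client6": [4]},   # client5/client6 ride PHY4
          faulty_phys={4},
          n_blocks=3,
      )
      print(mems)   # only client5 and client6 receive LF; client1 is untouched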
  • the method 100 further includes:
  • After determining that PHY4 is in a fault state, the network device 1 issues an alarm, which indicates that the FlexE group is faulty.
  • the network device 1 determines that the failure type of the PHY4 belongs to the first failure type, and stops the alarm.
  • In this way, effective compatibility with the prior art is maintained.
  • A group-level alarm indication is triggered; once the group-level alarm is triggered, service processing is interrupted until the alarm stops.
  • When the network device determines that the PHY failure belongs to a predetermined failure type, it stops the alarm.
  • the subsequent processing of the data received by the normal PHY can be continued without interrupting the service.
  • the method 100 further includes:
  • the first network device determines that the first PHY is in a fault state, and the first PHY is one of the m PHYs;
  • the first network device determines that the failure type of the first PHY belongs to the first failure type, and avoids issuing an alarm indicating that the FlexE group has failed.
  • the failure type of the PHY is first determined. Then, according to the failure type of the PHY, it is determined whether to issue an alarm indicating that the FlexE group has failed. Therefore, when the PHY failure belongs to a specific failure type, no alarm will be issued, and subsequent processing of the data received by the normal PHY can be continued without interrupting the service.
  • the network device 1 recognizes the failure type of the PHY, and can implement corresponding processing for different failure types.
  • the fault types can be divided into two categories, namely the first fault type and the second fault type mentioned above.
  • For the first fault type, the network device 1 can use the fault isolation method provided in this application to isolate the faulty PHY. Clients that are unrelated to the faulty PHY can still work normally without being affected by the faulty PHY. Throughout the entire process, LF is not written to the clients carried by the normal PHYs, and the group is not rebuilt.
  • the above-mentioned first failure type includes but is not limited to fiber failure, high bit error rate, and optical module damage.
  • When the PHY failure belongs to the second fault type, for example, the shim-layer deskew fails, the group number (Group Number) is configured incorrectly, or the instance number (Instance Number) is configured incorrectly, a group-level alarm is issued, and after the group-level alarm is issued, continuous LF is inserted into all clients carried by the FlexE group.
  • the method provided in this application can effectively isolate the faulty PHY, reduce the impact on the client carried in the normal PHY, and improve the reliability of service transmission.
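  • A schematic view of the two-category handling above is sketched below; the fault-cause labels are hypothetical strings chosen for illustration, not field values defined by this application or any standard.

      FIRST_TYPE = {"fiber_failure", "high_bit_error_rate", "optical_module_damage"}
      SECOND_TYPE = {"shim_deskew_failure", "group_number_mismatch", "instance_number_mismatch"}

      def handle_phy_fault(cause: str) -> str:
          """Choose the handling for a PHY fault according to its (illustrative) cause label."""
          if cause in FIRST_TYPE:
              # isolate the faulty PHY; clients on normal PHYs keep working,
              # no LF for those clients and no group rebuild
              return "isolate_faulty_phy"
          if cause in SECOND_TYPE:
              # raise the group-level alarm, then insert LF into all clients of the group
              return "group_alarm_and_lf_all_clients"
          return "unknown_cause"

      print(handle_phy_fault("fiber_failure"))           # -> isolate_faulty_phy
      print(handle_phy_fault("group_number_mismatch"))   # -> group_alarm_and_lf_all_clients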
  • For example, the cause of the PHY4 failure may be a FlexE shim layer failure, for example, a shim-layer failure that causes data errors in the transmit direction.
  • In this case, the previously failed PHY can automatically recover and rejoin the FlexE group, and can carry clients normally; there is no need to re-create the group.
  • the network device 2 sends data synchronously, and the network device 1 receives data synchronously, and the received data can be processed according to the method in the prior art.
  • the present application provides a processing method 200 for fault recovery.
  • the method 200 for processing fault recovery provided by the present application will be specifically introduced below in conjunction with FIG. 7.
  • The method 200 includes the following operations S201-S204. It should be noted that the operations of the method 200 are performed before the method 100, so that when a failed PHY recovers, it can rejoin the group without loss.
  • network device 2 sends 4 FlexE overhead frames to network device 1 through FlexE group.
  • the four FlexE overhead frames are FlexE overhead frame A, FlexE overhead frame B, FlexE overhead frame C, and FlexE overhead frame D.
  • the 4 FlexE overhead frames include 4 first overhead blocks.
  • the network device 2 sends the FlexE overhead frame A to the network device 1 through the PHY1, and the FlexE overhead frame A includes the first overhead block A.
  • the network device 2 sends the FlexE overhead frame B to the network device 1 through the PHY2, and the FlexE overhead frame B includes the first overhead block B.
  • the network device 2 sends the FlexE overhead frame C to the network device 1 through the PHY3, and the FlexE overhead frame C includes the first overhead block C.
  • the network device 2 sends the FlexE overhead frame D to the network device 1 through the PHY4, and the FlexE overhead frame D includes the first overhead block D.
  • The network device 1 receives the 4 first overhead blocks sent by the network device 2 through the FlexE group.
  • The network device 1 saves the 4 first overhead blocks into 4 memories, and the 4 first overhead blocks correspond one-to-one to the 4 memories.
  • The network device 1 reads the 4 first overhead blocks from the 4 memories at the same time, where the 4 first overhead blocks are read after a preset time period T has elapsed since the specific first overhead block was saved in its corresponding memory; the specific first overhead block is the last of the 4 first overhead blocks to be saved.
  • the duration T is greater than or equal to 1 clock cycle
  • the clock cycle is the duration required for the network device 1 to perform a read operation on a memory.
  • the network device 1 can read at least one data block from a memory.
  • the duration T is greater than or equal to 2 clock cycles.
  • The operations S201 to S204 of the above method 200 are performed.
  • By using the memory's delayed-read mechanism, that is, the mechanism of temporarily postponing reads from the memory, the buffer duration T can absorb the delay differences that may arise between different PHYs when a failed PHY recovers, and PHY realignment caused by those delay differences is avoided. In this way, service interruption is avoided, and the failed PHY can be recovered without loss.
  • The first overhead block that stays in the memory of the network device 1 for the shortest time is the specific first overhead block.
  • The other three first overhead blocks stay in the memories of the network device 1 for longer than the duration T.
  • T can be configured adaptively according to the specific design scheme in the actual network.
  • T can take w clock cycles.
  • w can take any integer value in [2, 1000].
  • w can be 2, can be 5, can be 10, can be 50, 100, 200, 300, 400, or 500.
  • T can also be greater than 1000 clock cycles.
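  • The delayed-read behaviour can be sketched as a small variant of the earlier deskew model (hypothetical names; a sketch, not the implementation): instead of reading immediately when the last first overhead block arrives, the receiver waits T clock cycles, so small delay changes after a PHY recovers are absorbed without re-running alignment.

      from collections import deque

      class DelayedReadDeskew:
          """Toy model of the delayed-read mechanism with a preset duration T."""

          def __init__(self, phy_numbers, t_cycles=2):
              self.mem = {phy: deque() for phy in phy_numbers}
              self.seen_marker = {phy: False for phy in phy_numbers}
              self.t_cycles = t_cycles    # preset duration T, in clock cycles (>= 1)
              self.countdown = None       # cycles remaining before synchronous reads start

          def receive(self, phy, block, is_first_overhead_block):
              self.mem[phy].append(block)
              if is_first_overhead_block:
                  self.seen_marker[phy] = True
                  if all(self.seen_marker.values()) and self.countdown is None:
                      self.countdown = self.t_cycles   # last marker buffered: start the timer

          def clock_tick(self):
              """Called once per clock cycle; returns one block per PHY once T has elapsed."""
              if self.countdown is None:
                  return None
              if self.countdown > 0:
                  self.countdown -= 1
                  return None
              return {phy: q.popleft() for phy, q in self.mem.items() if q}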
  • FIG. 8 is a schematic flowchart of a communication method 300 provided by an embodiment of the present application.
  • the network architecture of the application method 300 includes at least a first network device and a second network device.
  • The first network device may be the network device 1 shown in FIG. 3 or FIG. 4.
  • The second network device may be the network device 2 shown in FIG. 3 or FIG. 4.
  • the network architecture may be the network architecture shown in FIG. 3 or FIG. 4.
  • the method shown in FIG. 8 can specifically implement the method shown in FIG. 6.
  • the first network device and the second network device in FIG. 8 may be the network device 1 and the network device 2 in the method 100 shown in FIG. 6, respectively.
  • the method 300 includes the following operations S301-S304.
  • The second network device simultaneously sends p FlexE overhead frames to the first network device through the p PHYs currently available in the FlexE group.
  • the p FlexE overhead frames include p first overhead blocks, the p first overhead blocks are in one-to-one correspondence with p FlexE overhead frames, and the p FlexE overhead frames are in one-to-one correspondence with the p PHYs.
  • the first network device receives p first overhead blocks sent by the second network device through p physical layer devices PHY in the flexible Ethernet group FlexE group.
  • the first network device saves the p first overhead blocks to p memories in the n memories, and the p first overhead blocks correspond to the p memories in a one-to-one correspondence.
  • The first network device simultaneously reads the p first overhead blocks from the p memories.
  • the method 300 further includes:
  • The first network device sends continuous Ethernet Local Fault Ordered Sets (LF) on the time slots mapped to the clients carried by the m PHYs.
  • the first network device may send continuous LFs to the timeslots mapped by the clients carried by the m PHYs in but not limited to the following manner.
  • Manner 1: The first network device writes continuous LF into the m memories corresponding to the m PHYs.
  • Manner 2: When the m PHYs fail, the first network device does not write LF into the m memories corresponding to the m PHYs. In this case, it may write the actually received data into the m memories, write idle blocks, or write no data.
  • When the first network device recovers the clients carried by the m PHYs in the fault state, it writes LF into the time slots mapped to each of those clients. In a specific implementation, when the first network device recovers client data from the m memories, it writes the client data into the memory corresponding to each client; for a client carried by a faulty PHY, it writes continuous LF into the memory corresponding to that client.
  • the method further includes:
  • the first network device determines that the first PHY is in a fault state, and the first PHY is one of the m PHYs;
  • the first network device issues an alarm, and the alarm indicates that the FlexE group has failed
  • the first network device determines that the failure type of the first PHY belongs to the first failure type, and stops the alarm.
  • Before the first network device saves the p first overhead blocks into p memories of the n memories in the first time period, the method further includes:
  • the first network device determines that the first PHY is in a fault state, and the first PHY is one of the m PHYs;
  • the first network device determines that the failure type of the first PHY belongs to the first failure type, and avoids issuing an alarm indicating that the FlexE group has failed.
  • the first time period is, for example, time period 1 in the method 100.
  • the p available PHYs are PHY1, PHY2 and PHY3.
  • the m PHYs in the failed state are, for example, PHY4.
  • FIG. 9 is a schematic flowchart of a communication method 400 provided by an embodiment of the present application.
  • the network architecture of the application method 400 includes at least a first network device and a second network device.
  • The first network device may be the network device 1 shown in FIG. 3 or FIG. 4.
  • The second network device may be the network device 2 shown in FIG. 3 or FIG. 4.
  • the network architecture may be the network architecture shown in FIG. 3 or FIG. 4.
  • the method 400 shown in FIG. 9 may specifically implement the method 200 shown in FIG. 7.
  • the first network device and the second network device in FIG. 9 may be the network device 1 and the network device 2 in the method 200 shown in FIG. 7, respectively.
  • the method 400 includes the following operations S401-S404.
  • the second network device sends n FlexE overhead frames to the first network device through the FlexE group.
  • the FlexE group is composed of the n physical layer devices PHY.
  • the n FlexE overhead frames include n first overhead blocks.
  • the n first overhead blocks have a one-to-one correspondence with the n FlexE overhead frames.
  • the n FlexE overhead frames have a one-to-one correspondence with the n PHYs. n ⁇ 2, n is an integer.
  • The first network device receives the n first overhead blocks sent by the second network device through the flexible Ethernet group (FlexE group).
  • the first network device saves the n head overhead blocks in n memories.
  • the n head overhead blocks have a one-to-one correspondence with the n memories.
  • The first network device reads the n first overhead blocks from the n memories at the same time, where the n first overhead blocks are read after a preset time period T has elapsed since the specific first overhead block was stored in its corresponding memory.
  • The specific first overhead block is the last of the n first overhead blocks to be saved.
  • the duration T is greater than or equal to 2 clock cycles, and the clock cycle is the duration required for the first network device to perform a read operation on a memory.
  • the second time period is, for example, the time period 2 in the method 200.
  • PHY1, PHY2, PHY3 and PHY4 are available.
  • FIG. 10 is a schematic diagram of a network device 500 provided by this application.
  • The network device 500 may be applied to the network architecture shown in FIG. 3 or FIG. 4, to perform the operations performed by the network device 1 in the method 100 or the method 200, or to perform the operations performed by the first network device in the method 300 or the method 400.
  • the network device 500 may be, for example, the network device 1 in the network architecture shown in FIG. 3 or FIG. 4, or may be a line card or chip that implements related functions.
  • the network device 500 includes a receiver 501, a processor 502 coupled to the receiver, and n memories 503.
  • the receiver 501 is specifically configured to perform the operation of receiving information performed by the network device 1 in the foregoing method 100 or method 200; the processor 502 is configured to perform other processing performed by the network device 1 in the foregoing method 100 or method 200 except for receiving information .
  • the n memories 503 are used to store the FlexE data received by the network device 1 through the FlexE group in the above method 100 or method 200.
  • the receiver 501 is also configured to perform the operation of receiving information performed by the first network device in the foregoing method 300 or method 400; the processor 502 is configured to perform the processing, other than information reception, performed by the first network device in the foregoing method 300 or method 400.
  • the n memories 503 are used to store the FlexE data received by the first network device through the FlexE group in the above method 300 or method 400.
  • the receiver can refer to one interface or multiple logically bundled interfaces.
  • the interface may be, for example, an interface between the PHY layer and the transmission medium layer, such as a medium dependent interface (MDI).
  • the interface may also refer to a physical interface of the network device.
  • the processor 502 may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the above-mentioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
  • the processor 502 may also be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
  • the processor 502 may refer to one processor, or may include multiple processors.
  • the memory 503 may include a volatile memory, such as a random-access memory (RAM); the memory may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 503 may also include a combination of the foregoing types of memory.
  • the n memories 503 described in this application may be n independent memories; they may also be integrated into one or more physical memories, in which case each of the n memories can be understood as a different storage area of the corresponding physical memory.
  • the receiver 501, the processor 502, and the n memories 503 may be independent physical units.
  • the processor 502 and n memories 503 can be integrated together and implemented by hardware.
  • the receiver 501 may also be integrated with the processor 502 and n memories 503, and implemented by hardware.
  • the aforementioned hardware may be, for example, ASIC, PLD, or a combination thereof.
  • the above-mentioned PLD can be CPLD, FPGA, general array logic GAL or any combination thereof.
  • the steps of the method or algorithm described in the embodiments of the present application can be directly embedded in hardware, a software unit executed by a processor, or a combination of the two.
  • the software unit may be stored in a RAM, a flash memory, a ROM, an EPROM, an EEPROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
  • the storage medium may be connected to the processor, so that the processor can read information from the storage medium, and can store and write information to the storage medium.
  • the storage medium may also be integrated into the processor.
  • the processor and the storage medium can be arranged in the ASIC.
  • the size of the sequence number of each process does not imply an order of execution.
  • the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation processes of the embodiments of this application.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner.
  • the computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
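The first of the two sketches referenced in the list above covers the fault-type check and isolation decision of method 300: when a PHY of the FlexE group fails, the fault type is examined, and only for the first failure type is the group-level alarm suppressed (or stopped) while alignment continues over the currently available PHYs. This is a minimal sketch and not the patent's implementation; the class, method and fault-type names are hypothetical, and the isolatable fault set only mirrors the examples given in the description (fiber fault, high bit error rate, damaged optical module).

```python
# Minimal sketch (hypothetical names): fault-type-based isolation of a failed PHY
# in a FlexE group. The fault-type sets below are illustrative only.
ISOLATABLE_FAULTS = {"fiber_fault", "high_ber", "optical_module_damaged"}

class FlexEGroupMonitor:
    def __init__(self, phy_ids):
        self.phys = {phy: {"state": "ok", "fault": None} for phy in phy_ids}
        self.group_alarm = False

    def available_phys(self):
        # PHY alignment is judged only over the PHYs that are currently available.
        return [p for p, s in self.phys.items() if s["state"] == "ok"]

    def on_phy_fault(self, phy_id, fault_type):
        self.phys[phy_id] = {"state": "failed", "fault": fault_type}
        if fault_type in ISOLATABLE_FAULTS:
            # First failure type: no group-level alarm is kept; the remaining PHYs
            # stay in service and their client services are not interrupted.
            self.group_alarm = False
            return "isolate_failed_phy"
        # Other failure types (e.g. shim deskew failure, group/instance number
        # mismatch): raise the group alarm; LF is then inserted for all clients.
        self.group_alarm = True
        return "group_alarm"

monitor = FlexEGroupMonitor(["PHY1", "PHY2", "PHY3", "PHY4"])
print(monitor.on_phy_fault("PHY4", "fiber_fault"))  # isolate_failed_phy
print(monitor.available_phys())                     # ['PHY1', 'PHY2', 'PHY3']
```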
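The second sketch covers the delayed-read deskew mechanism of methods 200 and 400: each PHY's blocks are buffered in that PHY's memory, and the simultaneous read of all memories starts only after a preset duration T, counted in read-clock cycles, has elapsed since the last first overhead block was saved. The code below is a simplified software illustration under these assumptions; all names are hypothetical and Python deques stand in for the n memories.

```python
# Minimal sketch (hypothetical names, simplified timing): per-PHY buffering with a
# delayed simultaneous read that starts T clock cycles after the last first overhead
# block has been saved.
from collections import deque

class DeskewBuffers:
    def __init__(self, phy_ids, t_cycles=2):               # T >= 2 read-clock cycles
        self.buffers = {phy: deque() for phy in phy_ids}   # one memory per PHY
        self.seen_first_block = set()
        self.t_cycles = t_cycles
        self.read_countdown = None                          # set when last marker arrives

    def write(self, phy_id, block, is_first_overhead_block=False):
        self.buffers[phy_id].append(block)
        if is_first_overhead_block:
            self.seen_first_block.add(phy_id)
            if self.seen_first_block == set(self.buffers):
                # Last first overhead block saved: wait T cycles before reading.
                self.read_countdown = self.t_cycles

    def tick(self):
        """One read-clock cycle; returns one block per PHY once reading has started."""
        if self.read_countdown is None:
            return None
        if self.read_countdown > 0:
            self.read_countdown -= 1
            return None
        return {phy: buf.popleft() for phy, buf in self.buffers.items() if buf}

bufs = DeskewBuffers(["PHY1", "PHY2", "PHY3", "PHY4"], t_cycles=2)
for phy in ["PHY1", "PHY2", "PHY3", "PHY4"]:                # markers arrive in any order
    bufs.write(phy, f"OH-head-{phy}", is_first_overhead_block=True)
print(bufs.tick(), bufs.tick())                              # None None (waiting T cycles)
print(bufs.tick())                                           # aligned read from all buffers
```

Because every buffer other than the last-arriving one holds its marker for at least T extra cycles, a moderate change in one PHY's propagation delay after fault recovery can be absorbed without re-running PHY alignment.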

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Communication Control (AREA)
  • Small-Scale Networks (AREA)

Abstract

This application provides a method and a network device for isolating, and recovering from, a failure of one or more physical layer devices (PHYs) in a flexible Ethernet group (FlexE group). The method includes: when the network device determines that the first overhead block corresponding to every currently available PHY has been saved into the corresponding memory, it determines that the FlexE group satisfies the PHY alignment condition and starts reading the buffered data from all the memories simultaneously. There is therefore no need to insert local fault (LF) code blocks for all clients, nor to re-establish the group, which effectively reduces the impact of a failed PHY on the client services carried by normal PHYs. In addition, by setting up a delayed-read mechanism for the memories, the delay variation introduced by a failed PHY is effectively absorbed, so the PHY alignment operation does not need to be performed again, achieving lossless fault recovery of the PHY.

Description

一种灵活以太网通信方法及网络设备
本申请要求于2019年02月19日提交中国国家知识产权局、申请号为2019110121587.4、申请名称为“一种灵活以太网通信方法及网络设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及通信技术领域,特别涉及一种灵活以太网(英文:Flexible Ethernet,FlexE)通信方法,网络设备及系统。
背景技术
FlexE技术是基于高速以太网(英文:Ethernet)接口,通过Ethernet媒体接入控制(英文:Media Access Control,MAC)层与物理层解耦而实现的低成本,高可靠的电信级接口技术。FlexE技术通过在IEEE802.3基础上引入灵活以太网垫片(英文:FlexE shim)层实现了MAC层与物理层解耦,从而实现了灵活的速率匹配。
FlexE技术通过将多个以太网物理层装置(以下将物理层装置简称PHY)绑定成一个灵活以太网组(英文:FlexE group)以及物理层通道化等功能,满足灵活带宽的端口应用需求。因此,FlexE提供的MAC速率可以大于单个PHY的速率(通过捆绑实现),也可以小于单个PHY的速率(通过通道化实现)。
按照当前FlexE标准以及相关现有技术的方案,如果FlexE group中一个或多个PHY处于故障状态时,则整个FlexE group所承载的所有灵活以太网客户(英文:FlexE cliet)业务均会受损,即正常工作的PHY上所承载的client业务也会受损,中断时长可能达到几十毫秒。因此,如何能够减少处于故障状态下的PHY对FlexE group中正常状态的PHY所承载的Client业务的影响,成为目前亟待解决的问题。
发明内容
本申请实施例提供了一种FlexE的通信方法,能够减少故障状态下的PHY对FlexE group中正常状态的PHY所承载的Client业务的影响。
第一方面,本申请提供了一种灵活以太网FlexE的通信方法,所述方法包括:
所述第一网络设备通过灵活以太网组FlexE group中的p个物理层装置PHY接收第二网络设备发送的p个首开销块,所述p个首开销块与p个FlexE开销帧一一对应,所述p个FlexE开销帧与所述p个PHY一一对应,所述FlexE group由n个PHY组成,n≥2,n为整数;其中,
在所述第一时间段,所述FlexE group中的m个PHY处于故障状态,并且,所述p个PHY处于正常状态,p+m=n,1≤m<n,m和p均为整数;
所述第一网络设备将所述p个首开销块保存到所述n个存储器中的p个存储器,所述p个首开销块与所述p个存储器一一对应;
所述第一网络设备同时从所述p个存储器读取所述p个首开销块。
一种可能的设计中,所述方法还包括:
所述第一网络设备在所述m个PHY所承载的client所映射的时隙上发送连续的以太网本地故障顺序集Ethernet Local Fault Ordered Set。
一种可能的设计中,所述第一网络设备在所述m个PHY所承载的client所映射的时隙上发送连续的以太网本地故障顺序集Ethernet Local Fault Ordered Set,包括:
所述第一网络设备向所述m个PHY所对应的m个存储器中写入所述连续的Ethernet Local Fault Ordered Set。
一种可能的设计中,所述第一网络设备将所述p个首开销块保存到所述n个存储器的 p个存储器之前,所述方法还包括:
所述第一网络设备确定第一PHY处于故障状态,所述第一PHY是所述m个PHY中的一个PHY;
所述第一网络设备发出告警,所述告警指示所述FlexE group发生故障;
所述第一网络设备确定所述第一PHY的故障类型属于第一故障类型,停止所述告警。
在现有技术中,网络设备在FlexE group中任一PHY处于故障状态时,网络设备将发出用于指示FlexE group发生故障的告警,直到FlexE group所有PHY都处于正常状态,此时,才会停止所述告警。所述第一网络设备发出告警也可以理解为第一网络设备切换为FlexE group告警状态。在告警状态下,整个FlexE group的业务都将受到影响,无法正常工作。通过本申请的方法,在第一网络设备发出告警后,通过判断PHY的故障类型,来决定停止告警,从而避免正常的PHY所承载的客户业务受到中断。
一种可能的设计中,所述第一网络设备将所述p个首开销块保存到所述n个存储器的p个存储器之前,所述方法还包括:
所述第一网络设备确定第一PHY处于故障状态,所述第一PHY是所述m个PHY中的一个PHY;
所述第一网络设备确定所述第一PHY的故障类型属于第一故障类型,避免发出指示所述FlexE group发生故障的告警。
在本申请中,在当前FlexE group中一个或多个PHY出现故障时,不使用出现故障的PHY的首开销块作为PHY对齐的判断条件。即只需要在该FlexE group中当前处于正常状态的PHY的首开销块都存储到对应的存储器后,即认为FlexE group的PHY已经对齐。通过本申请提供的技术方案,无需对client插入LF,无需启动group级别的保护倒换,更不用重新建FlexE group,有效隔离故障PHY对正常PHY的影响,保证正常工作的PHY所承载的client业务不受影响,提高了业务传输可靠性。
第二方面,本申请提供了一种灵活以太网FlexE的通信方法,在第一时间段,所述方法包括:
所述第一网络设备通过灵活以太网组FlexE group接收所述第二网络设备发送的n个首开销块,所述FlexE group由所述n个物理层装置PHY组成,所述n个首开销块与n个FlexE开销帧一一对应,所述n个FlexE开销帧与所述n个PHY一一对应,n≥2,n为整数。所述第一网络设备将所述n个首开销块保存到n个存储器中,所述n个首开销块与所述n个存储器一一对应。所述第一网络设备同时从所述n个存储器读取所述n个首开销块,其中,所述n个首开销块在特定首开销块被保存到对应的存储器之后经过预设的时长T被读取。所述特定首开销块是所述n个首开销块中最后被保存的首开销块。其中,所述时长T大于等于1个时钟周期,所述时钟周期为所述第一网络设备对一个存储器执行一次读操作所需的时长。
T的值越大,能够容忍的时延偏差越大,在实际设计中,本领域技术人员可以根据实际网络场景来配置T的值。
一个可能的设计中,在第二时间段,所述方法还包括:
所述第一网络设备通过所述FlexE group中的p个PHY接收所述第二网络设备发送的p个首开销块。所述p个首开销块与p个FlexE开销帧一一对应,所述p个FlexE开销帧与所述p个PHY一一对应,其中,在所述第二时间段内,所述FlexE group中的m个PHY处于故障状态,并且,所述p个PHY处于正常状态,n=p+m,1≤m<n,m和p均为整数。
所述第一网络设备将所述p个首开销块保存到所述n个存储器中的p个存储器,所述p个首开销块与所述p个存储器一一对应。所述第一网络设备同时从所述p个存储器读取所述p个首开销块。
在一个可能的设计中,在所述第二时间段,所述方法还包括:
所述第一网络设备在所述m个PHY所承载的client所映射的时隙上发送连续的以太网本地故障顺序集Ethernet Local Fault Ordered Set。
在一个可能的设计中,所述第一网络设备在所述m个PHY所承载的client所映射的时隙上发送连续的以太网本地故障顺序集Ethernet Local Fault Ordered Set,包括:
所述第一网络设备向所述m个PHY所对应的m个存储器中写入所述连续的Ethernet Local Fault Ordered Set。
在一个可能的设计中,在所述第二时间段,所述第一网络设备将所述p个首开销块保存到所述n个存储器的p个存储器之前,所述方法还包括:
所述第一网络设备确定第一PHY发生故障,所述第一PHY是所述m个PHY中的一个PHY;
所述第一网络设备发出告警,所述告警指示所述FlexE group发生故障;
所述第一网络设备确定所述第一PHY的故障类型属于第一故障类型,停止所述告警。
在一个可能的设计中,在所述第二时间段,所述第一网络设备将所述p个首开销块保存到所述n个存储器的p个存储器之前,所述方法还包括:
所述第一网络设备确定第一PHY发生故障,所述第一PHY是所述m个PHY中的一个PHY;
所述第一网络设备确定所述第一PHY的故障类型属于第一故障类型,避免触发指示所述FlexE group发生故障的告警。
上述方法中,通过设置存储器的缓读机制,使得当FlexE group中最晚到达的PHY上的首开销块被存储到存储器中以后,等待一段缓存时长T,再启动从各存储器中同时读取缓存数据,即同时开始读取各存储器中存储的各个PHY对应的首开销块,由此,该段缓存时长T能够吸收PHY故障恢复时可能带来的时延变化,避免因此而导致的PHY重新对齐。由此,避免业务中断,能够实现故障PHY无损恢复。
第三方面,本申请提供了一种网络设备,用于实现上述第一方面、第二方面、第一方面任意一种可能的设计或者第二方面任意一种可能的设计中的方法。在一种可能的设计中,该网络设备包括接收器,处理器和存储器。
第四方面,本申请提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述第一方面、第二方面、第一方面任意一种可能的设计或者第二方面任意一种可能的设计中的方法。
第五方面,本申请提供了一种计算机可读存储介质,包括了用于实现上述第一方面、第二方面、第一方面任意一种可能的设计或者第二方面任意一种可能的设计中的方法的程序。
第六方面,本申请提供了一种通信系统,包括第三方面提供的网络设备,用于执行第一方面、第二方面、第一方面任意一种可能的设计或者第二方面任意一种可能的设计中的方法。
附图说明
图1A为本申请实施例中64B/66B编码的码型定义示意图;
图1B为本申请实施例中空闲块的码型定义示意图;
图2为FlexE标准架构示意图;
图3为本申请实施例中网络场景示意图;
图4为本申请实施例中使用FlexE技术传输信息的架构示意图;
图5为本申请实施例中首开销块的码型定义示意图;
图6为本申请实施例提供的一种故障隔离的通信方法流程示意图;
图7为本申请实施例提供的一种故障恢复的通信方法流程示意图;
图8为本申请实施例提供的另一种故障隔离的通信方法流程示意图;
图9为本申请实施例提供的另一种故障恢复的通信方法流程示意图;
图10为本申请实施例提供的一种网络设备的结构示意图。
具体实施方式
下面将结合附图,对本申请实施例中的技术方案进行描述。本申请实施例描述的网络架构以及业务场景是为了更加清楚的说明本申请实施例的技术方案,并不构成对于本申请实施例提供的技术方案的限定,本领域普通技术人员可知,随着网络架构的演变和新业务场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
本申请中的“1”、“2”、“3”、“4”、“第一”、“第二”、“第三”以及“第四”等序数词用于对多个对象进行区分,不用于限定多个对象的顺序。
本申请所涉及的相关FlexE的现有技术可以参见光互联网论坛(英文:Optical Internetworking Forum,OIF)所制定的FlexE标准IA OIF-FLEXE-01.0或者IA OIF-FLEXE-02.0的相关说明,上述标准以全文引用的方式并入本申请中。
在以太网中,以太网端口通常作为面向数据的逻辑上的概念出现,称为逻辑端口或简称为端口,以太网物理接口则为硬件上的概念出现,称为物理接口或简称为接口。通常,用一个MAC地址标记一个以太网端口。传统地,以太网端口的速率的确定以以太网物理接口的速率为基础。一般情况下,一个以太网端口最大带宽对应一个以太网物理接口的带宽,例如10兆比特每秒(megabit per second,Mbps)、100Mbps、1000Mbps(1Gbps)、10Gbps、40Gbps、100Gbps以及400Gbps等以太网物理接口。
以太网在过去的相当一段时间内获得了广泛的应用和长足的发展。以太网端口速率以10倍提升,从10Mbps向100Mbps、1000Mbps(1Gbps)、10Gbps、40Gbps、100Gbps、400Gbps不断演进发展。技术越发展,带宽颗粒差异越大,越容易出现与实际应用需求期望的偏差。主流应用需求的带宽增长并不呈现这样的10倍增长特征,例如50Gbps、75Gbps、200Gbps等。业界希望提供对50Gbps、60Gbps、75Gbps、200Gbps和150Gbps等带宽的以太网端口(虚拟连接)的支持。
一方面,更进一步地,希望能够提供一些灵活带宽的端口,这些端口可以共同使用一个或者若干个以太网物理接口,例如2个40GE端口和2个10GE端口共同使用一个100G物理接口;并能够随着需求的变化做出灵活的速率调整,例如从200Gbps调整为330Gbps,或者50Gbps调整为20Gbps,以提高端口使用效率或者延长其使用生命周期。对于固定速率的物理链路,可以将其级联捆绑,以支持逻辑端口速率的堆叠增加(例如,将2个100GE物理接口堆叠级联捆绑以支持200GE逻辑端口)。另一方面,能够将物理接口灵活堆叠所得到的带宽资源池化,将其带宽按照颗粒(例如,5G为一个颗粒)分配给特定的以太网逻辑端口,实现若干以太网虚拟连接对堆叠级联的物理链路组的高效共享。
由此,FlexE的概念应运而生,灵活以太网又称为灵活虚拟以太网。FlexE支持针对以太网业务的子速率、通道化、反向复用等功能。例如,针对以太网业务的子速率应用场景,FlexE能够支持将250G的以太网业务(MAC码流)采用3路现有的100GE的物理接口进行传送。针对以太网业务的反向复用场景,FlexE能够支持将200G的以太网业务采用2路现有的100GE的物理媒质相关(英文:Physical Medium Dependent,PMD)子层进行传送。针对以太网业务的通道化场景,FlexE能够支持若干个逻辑端口共同使用一个或者多个物理接口,能够支持将多路低速率的以太网业务复用到高速率的灵活以太网的中。
由于接入网和城域网中大量采用以太网作为业务接口,这种基于以太网技术的业务流量汇聚功能的FlexE技术能够实现和底层业务网络的以太网接口的无缝连接。这些FlexE的子速率、通道化和反向复用功能的引入,极大的扩展了以太网的应用场合,增强了以太网应用的灵活性,并使得以太网技术逐渐向传送网领域渗透。
FlexE为以太网物理链路的虚拟化,提供了一个可行的演进方向。灵活以太网需要在 级联的一组物理接口上支持若干个虚拟的以太网数据连接。例如,4个100GE物理接口级联捆绑,支持若干逻辑端口。若干逻辑端口中一部分逻辑端口的带宽减小,则另外一部分逻辑端口的带宽增大,并且带宽减小的总量和带宽增大的总量相等,若干逻辑端口的带宽块速弹性调整,共同使用4个100GE物理接口。
FlexE借鉴同步数字体系(Synchronous digital hierarchy,SDH)/光传输网络(Optical transfer network,OTN)技术,对物理接口传输构建固定帧格式,并进行TDM的时隙划分。下面以现有的FlexE帧格式举例说明。FlexE的TDM时隙划分粒度是66比特,正好可以对应承载一个64B/66B比特块。一个FlexE帧包含8行,每行第一个64B/66B比特块位置为FlexE开销块,开销块后为进行时隙划分的净荷区域,以66比特为粒度,对应20x1023个66比特承载空间,100GE接口的带宽划分20个时隙,每个时隙的带宽约为5Gbps。FlexE通过交织复用的方式在单个物理接口上实现了多个传输通道,即实现了多个时隙。
若干个物理接口可以捆绑,该若干个物理接口的全部的时隙可以组合承载一个以太网逻辑端口。例如10GE需要两个时隙,25GE需要5个时隙等。逻辑端口上可见的仍为顺序传输的64B/66B比特块,每个逻辑端口对应一个MAC,传输相应的以太网报文,对报文的起始结束和对空闲填充的识别与传统以太网相同。FlexE只是一种接口技术,相关的交换技术可以基于现有的以太网包进行,也可以基于FlexE交叉进行,此处不再赘述。
本申请中提到的比特块可以为M1/M2比特块,或者叫做M1B/M2B比特块,M1/M2代表一种编码方式,其中,M1表示每个比特块中的净荷比特数,M2表示每个比特块的总比特数,M1、M2为正整数,M2>M1。
在Ethernet物理层链路传递的就是这种M1/M2比特块流,比如1G Ethernet采用8/10Bit编码,1GE物理层链路传递的就是8/10比特块流;10GE/40GE/100GE采用64/66Bit编码,10GE/40GE/100GE物理层链路传递的就是64/66比特块流。未来随着Ethernet技术发展,还也会出现其他编码方式,比如可能出现128/130Bit编码、256/258Bit编码等。对于M1/M2比特块流,存在不同类型的比特块并且在标准中明确规范,下面以64/66Bit编码的码型定义为例进行说明,如图1A所示,其中首部的2个Bit“10”或“01”是64/66比特块同步头比特,后64Bit用于承载净荷数据或协议。图1A中包括16种码型定义,每一行代表一种比特块的码型定义,其中,D0ˉD7代表数据字节,C0ˉC7代表控制字节,S0代表开始字节,T0ˉT7代表结束字节,第2行对应空闲比特块(空闲Block)的码型定义,空闲比特块可以用/I/来表示,具体如图1B所示。第7行对应开始块的码型定义,开始块可以用/S/来表示。第8行对应O码(例如OAM码块)码块的码型定义,O码码块可以用/O/来表示。第9ˉ16行分别对应8种结束块的码型定义,8种结束块可以统一用/T/来表示。
FlexE技术通过在IEEE802.3基础上引入FlexE shim层实现了MAC层与物理层解耦,其实现如图2所示,实现灵活的速率匹配。如图2所示,FlexE的部分架构包括MAC子层、FlexE shim层和物理层。其中,MAC子层属于数据链路层的一个子层,上接逻辑链路控制子层。物理层又可分为物理编码子层(英文:physical coding sublayer,PCS)、物理介质接入(physical medium attachment,PMA)子层和PMD子层。上述各个层的功能均由相应的芯片或模块实现。
在发送信号的过程中,PCS用于对数据进行编码、扰码(scrambled)、插入开销(overhead,OH)以及插入对齐标签(alignment marker,AM)等操作;在接收信号的过程中,PCS则会进行上述步骤的逆处理过程。发送和接收信号可以由PCS的不同功能模块实现。
PMA子层的主要功能是链路监测、载波监测、编译码、发送时钟合成以及接收时钟恢复。PMD子层的主要功能是数据流的加扰/解扰、编译码以及对接收信号进行直流恢复和自适应均衡。
应理解,上述架构仅是举例说明,适用于本申请的FlexE的架构不限于此,例如,在MAC子层和FlexE shim层之间还可以存在一个调和子层(reconciliation sublayer,RS),用于提供MII与MAC子层之间的信号映射机制;PCS与PMA子层之间还可以存在一个前向纠错(forward error correction,FEC)子层,用于增强发送的数据的可靠性。
图3示出了本申请涉及的FlexE通信系统的应用场景示意图。如图3所示,FlexE通信系统100包括网络设备1、网络设备2、用户设备1和用户设备2。网络设备1可以是中间节点,此时网络设备1通过其他网络设备与用户设备1连接。网络设备1可以是边缘节点,此时网络设备1直接与用户设备1连接。网络设备1可以是中间节点,此时网络设备1通过其他网络设备与用户设备1连接。网络设备1也可以是边缘节点,此时网络设备1直接与用户设备1连接。网络设备2可以是中间节点,此时网络设备2通过其他网络设备与用户设备2连接。网络设备2也可以是边缘节点,此时网络设备2直接与用户设备2连接。网络设备1包括FlexE接口1,网络设备2包括FlexE接口2。FlexE接口1与FlexE接口2相邻。每个FlexE接口均包括发送端口和接收端口,与传统以太网接口的区别在于一个FlexE接口可以承载多个Client,且作为逻辑接口的FlexE接口可以由多个物理接口组合而成。图3中所示的正向通道中业务数据的流向如图3中实线箭头所示,反向通道中业务数据的流向如图3中虚线箭头所示。本发明实施例的传输通道以正向通道为例,传输通道中业务数据的流向为用户设备2→网络设备2→网络设备1→用户设备1。
应理解,图3中仅示例性的示出了2个网络设备和2个用户设备,该网络可以包括任意其它数量的网络设备和用户设备,本申请实施例对此不做限定。图3中所示的FlexE通信系统仅是举例说明,本申请提供的FlexE通信系统的应用场景不限于图3所示的场景。本申请提供的技术方案适用于所有应用采用FlexE技术进行数据传输的网络场景。
下面结合图4进一步描述图3中所示网络设备1和网络设备2采用FlexE技术传输数据的过程。
如图4所示,PHY1、PHY2、PHY3和PHY4绑定成为一个FlexE group。网络设备1和网络设备2之间通过FlexE group接口连接,即通过FlexE接口1与FlexE接口2连接。需要说明的是,上述FlexE group接口也可以被称之为FlexE接口。FlexE group接口是由一组物理接口绑定而成的逻辑接口。该FlexE group接口共承载有6个client,分别为client1至client6。其中,client1和client2的数据映射在PHY1上传输;client3的数据映射在PHY2和PHY3上传输;client4的数据映射在PHY3上传输;client5和client6的数据映射在PHY4上传输。可见,不同FlexEclient在FlexE group上进行映射和传输,实现捆绑功能。其中:
FlexE group:也可称之为捆绑组。每个FlexE group包括的多个PHY具有逻辑上的捆绑关系。所谓的逻辑上捆绑关系,指的是不同的PHY之间可以不存在物理连接关系,因此,FlexE group中的多个PHY在物理上可以是独立的。FlexE中的网络设备可以通过PHY的编号来标识一个FlexE group中包含哪些PHY,来实现多个PHY的逻辑捆绑。例如,每个PHY的编号可用1ˉ254之间的一个数字来标识,0和255为保留数字。一个PHY的编号可对应网络设备上的一个接口。相邻的两个网络设备之间需采用相同的编号来标识同一个PHY。一个FlexE group中包括的各个PHY的编号不必是连续的。通常情况下,两个网络设备之间具有一个FlexE group,但本申请并不限定两个网络设备之间仅存在一个FlexE group,即两个网络设备之间也可以具有多个FlexE group。一个PHY可用于承载至少一个client,一个client可在至少一个PHY上传输。PHY包括发送设备的物理层装置(device)以及接收设备的物理层装置。FlexE中的PHY除了包括IEEE802.3中所定义PHY层装置,还包括用于执行FlexE shim层功能的装置。发送设备的物理层装置也可以被称之为发送PHY或发送方向的PHY,接收设备的物理层装置也可以被称之为接收PHY或接收方向的PHY。
FlexE client:对应于网络的各种用户接口,与现有的IP/Ethernet网络中的传统业 务接口一致。FlexE client可根据带宽需求灵活配置,支持各种速率的以太网MAC数据流(如10G、40G、n*25G数据流,甚至非标准速率数据流),例如可以通过64B/66B的编码的方式将数据流传递至FlexE shim层。FlexE client可以被解释为基于一个物理地址的以太网流。通过同一FlexE group发送的客户需要共用同一时钟,且这些客户需要按照分配的时隙速率进行适配。
FlexE shim:作为插入传统以太架构的MAC与PHY(PCS子层)中间的一个额外逻辑层,通过基于日常表(英文:calendar)的时隙(英文:time slot)分发机制实现FlexE技术的核心架构。FlexE shim的主要作用是根据相同的时钟对数据进行切片,并将切片后的数据封装至预先划分的时隙(slot)中。然后,根据预先配置的时隙配置表,将划分好的各时隙映射至FlexE group中的PHY上进行传输。其中,每个时隙映射于FlexE group中的一个PHY。
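The paragraph above describes how the FlexE shim slices client data into time slots and maps the slots onto the PHYs of the group according to a preconfigured calendar. The snippet below is a much-simplified, hypothetical illustration of such calendar-based distribution; the calendar contents loosely follow the FIG. 4 example, while a real 100G PHY carries 20 slots of roughly 5 Gbps each.

```python
# Minimal sketch (hypothetical calendar and names): FlexE-shim-style distribution of a
# client's 66b blocks over the (PHY, slot) positions assigned to it in the calendar.
calendar = {
    ("PHY1", 0): "client1", ("PHY1", 1): "client2",
    ("PHY2", 0): "client3", ("PHY3", 0): "client3", ("PHY3", 1): "client4",
    ("PHY4", 0): "client5", ("PHY4", 1): "client6",
}

def distribute(client, blocks):
    """Round-robin the client's 66b blocks over the (PHY, slot) positions it owns."""
    positions = sorted(k for k, c in calendar.items() if c == client)
    if not positions:
        raise ValueError(f"{client} has no slots in the calendar")
    return [(positions[i % len(positions)], blk) for i, blk in enumerate(blocks)]

print(distribute("client3", ["blk0", "blk1", "blk2", "blk3"]))
# [(('PHY2', 0), 'blk0'), (('PHY3', 0), 'blk1'), (('PHY2', 0), 'blk2'), (('PHY3', 0), 'blk3')]
```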
FlexE shim层通过定义开销帧(英文:overhead frame)/开销复帧(英文:overhead Multiframe)的方式体现client与FlexE group中的时隙映射关系以及calendar工作机制。需要说明的是,上述的开销帧,也可以称之为灵活以太开销帧(英文:FlexE overhead frame),上述的开销复帧也可以称之为灵活以太开销复帧(英文:FlexE overhead Multiframe)。FlexE shim层通过开销提供带内管理通道,支持在对接的两个FlexE接口之间传递配置、管理信息,实现链路的自动协商建立。
具体而言,一个开销复帧由32个开销帧组成,一个开销帧则有8个开销块(英文:overhead block),上述开销块也可以称之为开销时隙(英文:overhead slot)。开销块例如可以是一个64B/66B编码的码块,每间隔1023*20blokcs出现一次,但每个开销块所包含的字段是不同的。开销帧中,第一个开销块(下文中称之为首开销块)中包含“0x4B”的控制字符与“0x5”的“O”码字符等信息。如图5所示,该首开销块的首部的两个Bit是10,控制块类型为0x4B,首开销块的“O码”字符为0x5。在信息传送过程中,对接的两个FlexE接口之间通过控制字符“0x4B”和“O码”字符“0x5”的匹配确定每个PHY上锁传输的开销帧的首开销块。每个PHY上所传输的首开销块作为一个标识(英文:marker),在接收方向用于对齐FlexE group绑定的各个PHY。对齐FlexE group的各个PHY,可以实现数据的同步锁定,后续可以同步从存储器中读取各个PHY所承载的数据。每个开销帧的首码块也可以被称之为开销帧的帧头。对齐FlexE group的各个PHY实质上是指对齐各个PHY的开销帧的首开销块,下面结合图4的场景,举例说明PHY对齐的过程。
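The paragraph above states that the first overhead block is recognized by the combination of the 0b10 sync header, the 0x4B control block type and the 0x5 "O" code, and that this block serves as the per-PHY alignment marker. Below is a minimal sketch of that match; the field extraction is assumed to have been done elsewhere, and the data structure is hypothetical.

```python
# Minimal sketch (hypothetical structure): recognizing a FlexE first overhead block.
from dataclasses import dataclass

@dataclass
class Block66:
    sync_header: int   # 2-bit sync header of the 64B/66B block
    block_type: int    # control block type byte (meaningful for control blocks only)
    o_code: int        # 4-bit "O" code carried by the overhead / ordered-set block

def is_first_overhead_block(blk: Block66) -> bool:
    return blk.sync_header == 0b10 and blk.block_type == 0x4B and blk.o_code == 0x5

print(is_first_overhead_block(Block66(0b10, 0x4B, 0x5)))  # True
print(is_first_overhead_block(Block66(0b10, 0x4B, 0x0)))  # False (another ordered set)
```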
在图4所示的场景中,当FlexE group中所有的PHY都正常工作时,网络设备2通过PHY1,PHY2,PHY3和PHY4同时发送开销帧1至开销帧4。其中,开销帧1至开销帧4分别包括首开销块1至首开销块4。首开销块1,首开销块2,首开销块3和首开销块4分别与PHY1,PHY2,PHY3以及PHY4一一对应。
在实际传输过程中,网络设备2同时发送开销帧1至开销帧4,但是由于PHY1,PHY2,PHY3和PHY4所对应的不同的光纤的长度可能会有差异,因此,首开销块1至首开销块4可能无法同时被网络设备1接收。举例来说,网络设备1按照首开销块1-〉首开销块2-〉首开销块3-〉首开销块4的顺序先后接收到首开销块1至首开销块4。当网络设备1接收到首开销块1后,将首开销块1存储到PHY1所对应的存储器1中。依次,网络设备1将后续接收到的首开销块2存储到PHY2对应的存储器2中,并将接收到的首开销块3存储到PHY3对应的存储器3中。直到网络设备1接收到PHY4上传输的首开销块4,并将首开销块4存储到PHY4对应的存储器4中,并立即启动从各个存储器中同时读取各首开销块以及其它缓存数据。上述“立即启动”是指在最后一个首开销块4缓存到存储器后,立即同时启动对存储器1至存储器4的读操作。其中,最后一个首开销块4在缓存器中等待的时长为0。即对于最后一个到达的首开销块4而言,网络设备1将首开销块4缓存至存储器4的写操作与从存储器4中读取该首开销块4的读操作之间的间隔时长为0。
PHY对齐也可以被称之为FlexE group去偏移(deskew)。通过PHY对齐,消除了各个PHY之间的时延偏差,从而实现了FlexE group中所有PHY之间时隙对齐。上述的时延偏差例如是由不同的光纤长度引起的。现有技术中,执行了上述的PHY对齐操作后,当FlexE group中所有PHY都处于正常工作状态时,所有PHY发送的数据都能够实现时隙对齐。因此,网络设备1得以同时接收到后续发送的每个PHY的首开销块,将各个首开销块同时缓存到各自对应的存储器中,并从各个存储器中同时读取各自存储的数据。从而根据时隙恢复出各个client的数据。
但是,当FlexE group中的一条或多条PHY发生故障后,例如,当PHY4发生故障,按照当前标准或者现有技术的方案都将导致正常工作的PHY上所承载的client业务受损。需要说明的是,在本申请中,PHY处于故障状态或者说PHY发生故障,参照当前OIF FlexE标准中的定义,即该PHY发生了例如信号丢失,定帧失败,对齐失败,高误码率或其他情况导致PCS_status=FALSE。
下面简单介绍一下当前对PHY故障进行处理的几种方案。
方案一:当前,由OIF所指定的FlexE标准中定义:当FlexE group中一个或者多个PHY故障时,该FlexE group中所有的FlexE cliet)会被发送连续的以太网本地故障顺序集(英文:Ethernet Local Fault Ordered Set),以下简称LF,即接收方向的网络设备会在与该FlexE group中所有的PHY所对应的存储器中写入连续LF。上述操作会导致FlexE group的所有client业务发生中断。
方案二:通过自动保护倒换(英文:automatic protection switching,APS)等保护机制将工作FlexE group倒换到保护FlexE group,通过保护FlexE group来承载client业务,但是上述操作也会导致FlexE group中的所有client业务在倒换过程中发生中断,中断时长例如可以长达50ms。
方案三:当PHY4故障后,网络设备在FlexE group中将故障PHY4移除,重新创建一个不包括PHY4的新的FlexE group,并用该新的FlexE group来继续承载client业务。但是,上述操作同样会导致FlexE group中的所有client业务在重新建组的过程中发生业务中断。
由上可知,当FlexE group中一条或者多条PHY发生故障时,如何有效降低FlexE group中处于正常工作状态的PHY所承载的client业务受到的影响,成为需要解决的问题。为了解决上述问题,本申请提出了一种故障隔离的方法100。
下面结合图6对本申请实施例提供的方法100进行详细说明。应用方法100的网络架构包括网络设备1和网络设备2。举例来说,网络设备1可以是图3或图4中所示的网络设备1,网络设备2可以是图3或图4所示的网络设备2。其中,网络设备1和网络设备2通过FlexE group连接。该网络架构可以是图3或图4所示的网络架构。下面以图4所示的架构为例,对方法100进行介绍。方法100包括:在时间段1,执行以下操作S101至S104。
S101、网络设备2通过FlexE group中PHY1,PHY2和PHY3同时向网络设备1发送3个FlexE开销帧。
具体来说,网络设备2通过PHY1向网络设备1发送FlexE开销帧1,FlexE开销帧1包括首开销块1。网络设备2通过PHY2向网络设备1发送FlexE开销帧2,FlexE开销帧2包括首开销块2。网络设备3通过PHY3向网络设备1发送FlexE开销帧3,FlexE开销帧3包括首开销块3。
在时间段段A,FlexE group中PHY4处于故障状态,PHY1,PHY2和PHY3均处于正常工作状态。当PHY4处于故障状态时,网络设备2可以通过PHY4发送对应的FlexE开销帧,此时,例如如果PHY4对应的光纤发生中断,此时即使网络设备2发送了FlexE开销帧,网络设备1也无法接收到该FlexE开销帧。再例如,如果PHY4对应的光纤接触不良,导 致链路高误码率,此时,即使网络设备2发送了FlexE开销帧,网络设备1根据接收到的数据,确定PHY4发生了高误码率故障,也同样会丢弃PHY4传输的数据。当然,网络设备2也可以不发送FlexE开销帧,待PHY4故障恢复时,再同步发送FlexE开销帧。本申请不做具体限定。
网络设备2发送FlexE开销帧的具体过程,参照现有技术的方法,此处不再赘述。
S102、网络设备1通过PHY1,PHY2和PHY3接收首开销块1,首开销块2和首开销块3。
S103、网络设备1将接收到的3个首开销块保存到3个存储器中,所述3个首开销块与所述3个存储器一一对应。在网络设备1中,每个PHY都有一个对应的存储器,用于存储PHY相关的数据。
S104、所述第一网络设备同时从所述3个存储器读取所述3个首开销块。
在本申请中,在当前FlexE group中一个或多个PHY出现故障时,不使用出现故障的PHY的首开销块作为PHY对齐的判断条件。即只需要在该FlexE group中当前处于正常状态的PHY的首开销块都存储到对应的存储器后,即认为FlexE group的PHY已经对齐。通过本申请提供的技术方案,无需对client插入LF,无需启动group级别的保护倒换,更不用重新建FlexE group,有效隔离故障PHY对正常PHY的影响,保证正常工作的PHY所承载的client业务不受影响,提高了业务传输可靠性。
在一个具体的实施方式中,在时间段1,方法100还包括:
网络设备1向处于故障状态的PHY4所承载的client所映射的时隙上发送连续的LF。
网络设备1可以但不限于通过以下方式向处于故障状态的PHY4所承载的client所映射的时隙上发送连续的LF。
方式一:网络设备1向处于故障状态的PHY4所对应的存储器中写入连续的以太网本地故障顺序集Ethernet Local Fault Ordered Set。
在故障PHY对应的存储器中写入LF,使得网络设备在恢复client业务时,根据LF可以确定对应的client发生错误,从而避免向用户提供错误的数据。
举例来说,网络设备1采用flexE交叉技术传输数据,通过在故障PHY对应的存储器中写入LF,使得向下游设备转发故障PHY承载的client业务时,该client被插入LF,继续转发至下游设备,最终宿端设备可以根据LF识别出PHY4所承载的CLIENT业务发生错误。从而可以及时的丢弃错误的数据,避免像用户提供错误的数据。
方式二:在PHY4故障时,网络设备1在PHY4对应的存储器中不写入LF。此时,可以写入实际接收到的数据,或者写入Idle块,或者不写入数据。网络设备1恢复PHY4承载的client时,在client所映射的时隙上写入LF。一种具体的实施方式中,网络设备1从PHY存储器中读取缓存数据,恢复client的数据,将client的数据存储到每个client对应的存储器中。此时,在向client对应的存储器中写入连续的LF。
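The paragraphs above describe two ways of handling the clients carried by a failed PHY; in the first (方式一), the receiver writes a continuous Ethernet Local Fault Ordered Set into the memory that corresponds to the failed PHY, so that clients mapped onto that PHY carry LF when the buffered data is later recovered and forwarded. The sketch below only illustrates that write path; the constant LF is a placeholder for the actual 64B/66B encoding defined in IEEE 802.3, and all other names are hypothetical.

```python
# Minimal sketch (hypothetical names): keep writing a continuous Local Fault Ordered
# Set into the memories of the failed PHYs so that downstream/sink nodes can identify
# and discard the erroneous client data.
LF = "LOCAL_FAULT_ORDERED_SET"   # placeholder for the real IEEE 802.3 encoding

def fill_failed_phy_memory(memories, failed_phys, blocks_per_cycle=1):
    """Write LF blocks into the memories of the failed PHYs for one write cycle."""
    for phy in failed_phys:
        memories[phy].extend([LF] * blocks_per_cycle)

memories = {"PHY1": [], "PHY2": [], "PHY3": [], "PHY4": []}
fill_failed_phy_memory(memories, failed_phys=["PHY4"], blocks_per_cycle=2)
print(memories["PHY4"])  # ['LOCAL_FAULT_ORDERED_SET', 'LOCAL_FAULT_ORDERED_SET']
```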
在一个具体的实施方式中,在网络设备1将所述3个首开销块保存到所述3个存储器之前,所述方法100还包括:
网络设备1确定PHY4处于故障状态后,发出告警,所述告警指示所述FlexE group发生故障。
网络设备1确定PHY4的故障类型属于第一故障类型,停止所述告警。
在该实施方式中,可以有效兼容现有技术,现有技术中,当一个PHY出现故障时,则会触发group级别的告警指示。一旦触发group级别的告警,则会中断业务处理,直至告警停止。而通过本申请提供的方法,当网络设备确定PHY的故障属于预定的故障类型后,则会停止告警。从而可以继续对正常PHY所接收的数据进行后续处理,不会中断业务。
在另一个具体的实施方式中,在网络设备1将所述3个首开销块保存到所述3个存储器之前,所述方法100还包括:
所述第一网络设备确定第一PHY处于故障状态,所述第一PHY是所述m个PHY中的一个PHY;
所述第一网络设备确定所述第一PHY的故障类型属于第一故障类型,避免发出指示所述FlexE group发生故障的告警。
在该实施方式中,当PHY发生故障后,首先判断PHY的故障类型。然后,根据PHY的故障类型,来决定是否发出指示所述FlexE group发生故障的告警。由此,当PHY故障属于特定的故障类型,则不会发出告警,进而可以继续对正常PHY所接收的数据进行后续处理,不会中断业务。
在本申请中,网络设备1识别PHY的故障类型,针对不同的故障类型可以实施相应的处理。可以将故障类型分为两类,即上述的第一故障类型和第二故障类型。第一故障类型下,网络设备1可以使用如本申请提供的故障隔离方法,将故障PHY隔离,跟故障PHY不相关的client仍然可以正常工作,不受故障PHY的影响,整个过程,不会对正常PHY所承载的CLIENT写入LF,也不会重新建group。上述第一故障类型包括但不限于光纤故障、高误码率、光模块损坏等。
如果PHY故障属于第二种故障类型,例如,shim层deskew失败,组编号Group Number配置错误,实例编号Instance Number配置错误等,则发出group级别的告警后,对于上述故障类型,发出group级别的告警后,向该FlexE group所承载的所有client中插入连续的LF。
具体第一故障类型和第二故障类型分别包括哪些类型故障,根据本领域技术人员具体实现,可以灵活的设置,本申请不再赘述。
综上可知,通过本申请提供的方法,能够有效的实现故障PHY的隔离,同时减少对正常PHY中承载的client所带来的影响,提高了业务传输的可靠性。
对于PHY4来说,如果引起PHY4故障的原因是flexE shim层故障,例如,shim层故障导致发送方向数据错误,则当引起故障的原因消失后,之前故障PHY可以自动恢复并加入到FlexE group中,并能正常承载client,无需重新创建group。待故障恢复后,网络设备2同步发送数据,网络设备1同步接收数据,按照现有技术中的方法对接收到的数据进行处理即可。
但是,还有一些情况,例如,当光纤断裂引起PHY故障,为了消除故障,重新更换光纤。更换光纤,可能引起该PHY的传输时延相对于故障前发生变化。例如,更换后的光纤可能变长,在数据接收方向来说,PHY4上的首开销块要落后于其他PHY上的首开销块到达网络设备1。从而导致网络设备1无法对所有的PHY进行对齐,此时,则需要重新执行一次PHY对齐的操作。但是,如果重新执行PHY对齐的操作,则会对正在传输的client造成业务中断。为了解决故障PHY无损恢复,本申请提供了一种故障恢复的处理方法200。
下面结合图7对本申请提供的故障恢复的处理方法200进行具体介绍。在时间段2,该方法200包括以下操作S201-S204。需要说明的是,方法200中的操作要在方法100之前执行,从而能够使得PHY故障又恢复时,可以无损的从新加入group中。
S201、在时间段2,网络设备2通过FlexE group向网络设备1发送4个FlexE开销帧。该4个FlexE开销帧分别为FlexE开销帧A,FlexE开销帧B,FlexE开销帧C以及FlexE开销帧D。该4个FlexE开销帧包括4个首开销块。具体来说,网络设备2通过PHY1向网络设备1发送FlexE开销帧A,FlexE开销帧A包括首开销块A。网络设备2通过PHY2向网络设备1发送FlexE开销帧B,FlexE开销帧B包括首开销块B。网络设备2通过PHY3向网络设备1发送FlexE开销帧C,FlexE开销帧C包括首开销块C。网络设备2通过PHY4向网络设备1发送FlexE开销帧D,FlexE开销帧D包括首开销块D。
S202、网络设备1通过所述FlexE group接收所述网络设备2发送的4个首开销块。
S203、网络设备1将所述4个首开销块保存到4个存储器中,所述4个首开销块与所 述4个存储器一一对应。
S204、网络设备1同时从所述4个存储器读取所述4个首开销块,其中,所述4个首开销块在特定首开销块被保存到对应的存储器之后经过预设的时长T被读取,所述特定首开销块是所述4个首开销块中最后被保存的首开销块。其中,所述时长T大于等于1个时钟周期,所述时钟周期为网络设备1对一个存储器执行一次读操作所需的时长。在一次读操作中,网络设备1可以从一个存储器中读取至少一个数据块。在一个具体的实施方式中,时长T大于等于2个时钟周期。
在一个具体的实施方式中,在设备上电后执行PHY对齐操作时,执行上述方法200中上述操作S201-S204。本申请通过设置存储器的缓读机制,即暂缓读取存储器的机制,使得当FlexE group中最晚到达的网络设备1的首开销块被存储到存储器中以后,等待一段预设的时长T,再启动从各存储器中同时读取缓存数据,即同时开始读取各存储器中存储的各个PHY对应的首开销块。由此,该段缓存时长T能够吸收故障PHY恢复时不同的PHY可能带来的时延的差异,避免不同PHY之间的时延差异而导致的PHY重新对齐。由此,避免业务中断,能够使故障PHY无损恢复。
可以理解,4个首开销块中,在网络设备1的存储器中停留的时长最短的首开销块为特定首开销块。其他3个首开销块在网络设备1的存储器中停留的时长都大于时长T。
上述时长T按照实际网络中具体的设计方案,可以适应性的进行配置。T可以取w个时钟周期。W可以取值[2,1000]中任意一个整数。例如,w可以为2,可以为5,可以为10,可以为50,100,200,300,400或500。当然T也可以取值为大于1000个时钟周期。
图8是本申请实施例提供的一种通信方法300的流程示意图,应用方法300的网络架构至少包括第一网络设备和第二网络设备,举例来说,第一网络设备可以是图3或图4所示的网络设备1,第二网络设备可以是图3或图4所示的网络设备2。该网络架构可以是图3或图4所示的网络架构。另外,图8所示的方法可以具体实现图6所示的方法。例如,图8中第一网络设备和第二网络设备可以分别是图6所示方法100中的网络设备1和网络设备2。在第一时间段,方法300包括以下操作S301-S304。
S301、第二网络设备通过FlexE group中当前可用在p个PHY同时向网络设备1发送p个FlexE开销帧。
所述p个FlexE开销帧包括p个首开销块,所述p个首开销块与p个FlexE开销帧一一对应,所述p个FlexE开销帧与所述p个PHY一一对应,所述FlexE group由n个PHY组成,n≥2,n为整数;其中,在所述第一时间段,所述FlexE group中的m个PHY处于故障状态,并且,所述p个PHY处于正常状态,p+m=n,n≥2,1≤m<n,m和p均为整。
S302、所述第一网络设备通过灵活以太网组FlexE group中的p个物理层装置PHY接收第二网络设备发送的p个首开销块。
S303、所述第一网络设备将所述p个首开销块保存到所述n个存储器中的p个存储器,所述p个首开销块与所述p个存储器一一对应。
S304、所述第一网络设备同时从所述p个存储器读取所述p个首开销块。
在一个具体的实施方式中,在所述第一时间段,所述方法300还包括:
所述第一网络设备在所述m个PHY所承载的client所映射的时隙上发送连续的以太网本地故障顺序集Ethernet Local Fault Ordered Set。
第一网络设备可以通过但不限于通过以下方式向所述m个PHY所承载的client所映射的时隙上发送连续的LF。
方式一:所述第一网络设备向所述m个PHY所对应的m个存储器中写入所述连续的Ethernet Local Fault Ordered Set。
在故障PHY对应的存储器中写入LF,使得网络设备在恢复client业务时,根据LF可以确定对应的client发生错误,从而避免向用户提供错误的数据。
方式二:在所述m个PHY故障时,第一网络设备在m个PHY4对应的m个存储器中不写入LF。此时,可以在上述m个存储器中写入实际接收到的数据,或者写入Idle块,或者不写入数据。第一网络设备在恢复由故障状态的m个PHY4所承载的client时,在各client所映射的时隙上写入LF。一种具体的实施方式中,第一网络设备从m个存储器中恢复client的数据时,将client的数据分别写人每个client对应的存储器中。此时,在向client对应的存储器中写入连续的LF。
在一个具体的实施方式中,所述第一网络设备将所述p个首开销块保存到所述n个存储器的p个存储器之前,所述方法还包括:
所述第一网络设备确定第一PHY处于故障状态,所述第一PHY是所述m个PHY中的一个PHY;
所述第一网络设备发出告警,所述告警指示所述FlexE group发生故障;
所述第一网络设备确定所述第一PHY的故障类型属于第一故障类型,停止所述告警。
在一个具体的实施方式中,在所述第一时间段,所述第一网络设备将所述p个首开销块保存到所述n个存储器的p个存储器之前,所述方法还包括:
所述第一网络设备确定第一PHY处于故障状态,所述第一PHY是所述m个PHY中的一个PHY;
所述第一网络设备确定所述第一PHY的故障类型属于第一故障类型,避免发出指示所述FlexE group发生故障的告警。
当图8所示的方法用于实现图6所示的方法100时,该第一时间段例如是方法100中的时间段1。该p个可用的PHY是PHY1,PHY2和PHY3。处于故障状态的m个PHY例如是PHY4。方法300各操作的具体实现细节,参见方法100中的具体阐述,此处不再赘述。
图9是本申请实施例提供的一种通信方法400的流程示意图,应用方法400的网络架构至少包括第一网络设备和第二网络设备,举例来说,第一网络设备可以是图3或图4所示的网络设备1,第二网络设备可以是图3或图4所示的网络设备2。该网络架构可以是图3或图4所示的网络架构。另外,图9所示的方法400可以具体实现图7所示的方法200。例如,图9中第一网络设备和第二网络设备可以分别是图7所示方法200中的网络设备1和网络设备2。在第二时间段,方法400包括以下操作S401-S404。
S401、在第二时间段,第二网络设备通过FlexE group向第一网络设备发送n个FlexE开销帧。
所述FlexE group由所述n个物理层装置PHY组成。n个FlexE开销帧包括n个首开销块。所述n个首开销块与n个FlexE开销帧一一对应。所述n个FlexE开销帧与所述n个PHY一一对应。n≥2,n为整数。
S402、所述第一网络设备通过所述灵活以太网组FlexE group接收所述第二网络设备发送的所述n个首开销块。
S403、所述第一网络设备将所述n个首开销块保存到n个存储器中。所述n个首开销块与所述n个存储器一一对应。
S404、所述第一网络设备同时从所述n个存储器读取所述n个首开销块,其中,所述n个首开销块在特定首开销块被保存到对应的存储器之后经过时长T被读取。
所述特定首开销块是所述n个首开销块中最后被保存的首开销块。其中,所述时长T大于等于2个时钟周期,所述时钟周期为所述第一网络设备对一个存储器执行一次读操作所需的时长。
当图9所示的方法用于实现图7所示的方法200时,该第二时间段例如是方法200中的时间段2。该n个可用的PHY是PHY1,PHY2,PHY3和PHY4。方法400各操作的具体实现细节,参见方法200中的具体阐述,此处不再赘述。
图10是本申请提供的一种网络设备500的示意图。该网络设备500可以应用于图3 或图4所示的网络架构中,用于执行方法100或者方法200中网络设备1执行的操作,或者用于执行方法300或方法400中第一网络设备执行的操作。网络设备500例如可以是图3或图4所示的网络架构中的网络设备1,也可以是实现相关功能的线卡或者芯片。如图10所示,网络设备500包括接收器501,与所述接收器耦合连接的处理器502以及n个存储器503。接收器501具体用于执行上述方法100或者方法200中网络设备1执行的信息接收的操作;该处理器502用于执行上述方法100或者方法200中网络设备1执行的除了接收信息以外的其它处理。n个存储器503用于存储上述方法100或者方法200中网络设备1通过FlexE group所接收的FlexE数据。接收器501还用于执行上述方法300或者方法400中第一网络设备执行的信息接收的操作;该处理器502用于执行上述方法300或者方法400中第一网络设备执行的除了接收信息以外的其它处理。n个存储器503用于存储上述方法300或者方法400中第一网络设备通过FlexE group所接收的FlexE数据。
接收器可以是指一个接口,也可是指多个逻辑捆绑的接口。接口例如可以是PHY层与传输介质层之间的接口,例如:介质相关接口(medium dependent interface,MDI)。接口也可以指网络设备的物理接口。处理器502可以是专用集成电路(英文:application-specific integrated circuit,缩写:ASIC),可编程逻辑器件(英文:programmable logic device,缩写:PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(英文:complex programmable logic device,缩写:CPLD),现场可编程逻辑门阵列(英文:field-programmable gate array,缩写:FPGA),通用阵列逻辑(英文:generic array logic,缩写:GAL)或其任意组合。处理器502还可以是中央处理器(英文:central processing unit,缩写:CPU),网络处理器(英文:network processor,缩写:NP)或者CPU和NP的组合。处理器502可以是指一个处理器,也可以包括多个处理器。存储器503可以包括易失性存储器(英文:volatile memory),例如随机存取存储器(英文:random-access memory,缩写:RAM);存储器也可以包括非易失性存储器(英文:non-volatile memory),例如只读存储器(英文:read-only memory,缩写:ROM),快闪存储器(英文:flash memory),硬盘(英文:hard disk drive,缩写:HDD)或固态硬盘(英文:solid-state drive,缩写:SSD);存储器820还可以包括上述种类的存储器的组合。本申请中所述的n个存储器503可以是n个独立的存储器。n个存储器也可以集成在一个或者多个存储器中,此时,每个存储器可以理解为对应的存储器中不同的存储区域。
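As the paragraph above notes, the n memories 503 need not be n physically separate devices; they may also be n distinct storage areas of one (or a few) physical memories. The sketch below shows one straightforward way to carve a single buffer into per-PHY regions; the sizes and names are illustrative only.

```python
# Minimal sketch (illustrative only): the n per-PHY memories realized as n disjoint
# regions of a single physical buffer, each region addressed by a fixed base offset.
class RegionedMemory:
    def __init__(self, phy_ids, region_size):
        self.region_size = region_size
        self.base = {phy: i * region_size for i, phy in enumerate(phy_ids)}
        self.data = bytearray(region_size * len(phy_ids))

    def write(self, phy, offset, payload: bytes):
        assert offset + len(payload) <= self.region_size
        start = self.base[phy] + offset
        self.data[start:start + len(payload)] = payload

    def read(self, phy, offset, length) -> bytes:
        start = self.base[phy] + offset
        return bytes(self.data[start:start + length])

mem = RegionedMemory(["PHY1", "PHY2", "PHY3", "PHY4"], region_size=1024)
mem.write("PHY3", 0, b"\x4b\x05")
print(mem.read("PHY3", 0, 2))  # b'K\x05'
```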
接收器501,处理器502与n个存储器503可以分别是独立的物理单元。处理器502与n个存储器503可以集成在一起,通过硬件实现。收器501也可以与处理器502与n个存储器503集成在一起,通过硬件实现。上述硬件例如可以是ASIC,PLD或其组合。上述PLD可以是CPLD,FPGA,通用阵列逻辑GAL或其任意组合。
本申请实施例中所描述的方法或算法的步骤可以直接嵌入硬件、处理器执行的软件单元、或者这两者的结合。软件单元可以存储于RAM存储器、闪存、ROM存储器、EPROM存储器、EEPROM存储器、寄存器、硬盘、可移动磁盘、CD-ROM或本领域中其它任意形式的存储媒介中。示例性地,存储媒介可以与处理器连接,以使得处理器可以从存储媒介中读取信息,并可以向存储媒介存写信息。可选地,存储媒介还可以集成到处理器中。处理器和存储媒介可以设置于ASIC中。
应理解,在本申请的各种实施例中,各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部 分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
本说明书的各个部分均采用递进的方式进行描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点介绍的都是与其他实施例不同之处。尤其,对于装置和系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例部分的说明即可。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。

Claims (15)

  1. 一种灵活以太网FlexE的通信方法,其特征在于,在第一时间段,所述方法包括:
    所述第一网络设备通过灵活以太网组FlexE group接收所述第二网络设备发送的n个首开销块,所述FlexE group由所述n个物理层装置PHY组成,所述n个首开销块与n个FlexE开销帧一一对应,所述n个FlexE开销帧与所述n个PHY一一对应,n≥2,n为整数;
    所述第一网络设备将所述n个首开销块保存到n个存储器中,所述n个首开销块与所述n个存储器一一对应;
    所述第一网络设备同时从所述n个存储器读取所述n个首开销块,其中,所述n个首开销块在特定首开销块被保存到对应的存储器之后经过预设的时长T被读取,所述特定首开销块是所述n个首开销块中最后被保存的首开销块;其中,
    所述时长T大于等于2个时钟周期,所述时钟周期为所述第一网络设备对一个存储器执行一次读操作所需的时长。
  2. 根据权利要求1所述的方法,其特征在于,在第二时间段,所述方法还包括:
    所述第一网络设备通过所述FlexE group中的p个PHY接收所述第二网络设备发送的p个首开销块,所述p个首开销块与p个FlexE开销帧一一对应,所述p个FlexE开销帧与所述p个PHY一一对应,其中,在所述第二时间段内,所述FlexE group中的m个PHY处于故障状态,并且,所述p个PHY处于正常状态,n=p+m,1≤m<n,m和p均为整数;
    所述第一网络设备将所述p个首开销块保存到所述n个存储器中的p个存储器,所述p个首开销块与所述p个存储器一一对应;
    所述第一网络设备同时从所述p个存储器读取所述p个首开销块。
  3. 根据权利要求2所述的方法,其特征在于,在所述第二时间段,所述方法还包括:
    所述第一网络设备在所述m个PHY所承载的client所映射的时隙上发送连续的以太网本地故障顺序集Ethernet Local Fault Ordered Set。
  4. 根据权利要求3所述的方法,其特征在于,所述第一网络设备在所述m个PHY所承载的client所映射的时隙上发送连续的以太网本地故障顺序集Ethernet Local Fault Ordered Set,包括:
    所述第一网络设备向所述m个PHY所对应的m个存储器中写入所述连续的Ethernet Local Fault Ordered Set。
  5. 根据权利要求2-4任一项所述的方法,其特征在于,在所述第二时间段,所述第一网络设备将所述p个首开销块保存到所述n个存储器的p个存储器之前,所述方法还包括:
    所述第一网络设备确定第一PHY发生故障,所述第一PHY是所述m个PHY中的一个PHY;
    所述第一网络设备发出告警,所述告警指示所述FlexE group发生故障;
    所述第一网络设备确定所述第一PHY的故障类型属于第一故障类型,停止所述告警。
  6. 根据权利要求2-4任一项所述的方法,其特征在于,在所述第二时间段,所述第一 网络设备将所述p个首开销块保存到所述n个存储器的p个存储器之前,所述方法还包括:
    所述第一网络设备确定第一PHY发生故障,所述第一PHY是所述m个PHY中的一个PHY;
    所述第一网络设备确定所述第一PHY的故障类型属于第一故障类型,避免触发指示所述FlexE group发生故障的告警。
  7. 一种第一网络设备,其特征在于,包括:接收器,处理器和n个存储器;
    所述接收器用于:在第一时间段,通过灵活以太网组FlexE group接收第二网络设备发送的n个首开销块,所述FlexE group由n个PHY组成,所述n个首开销块与n个FlexE开销帧一一对应,所述n个FlexE开销帧与所述n个PHY一一对应,n≥2,n为整数;
    所述处理器用于:
    在所述第一时间段,将所述n个首开销块保存到所述n个存储器中,所述n个首开销块与所述n个存储器一一对应;
    在所述第一时间段,同时从所述n个存储器读取所述n个首开销块,其中,所述n个首开销块在特定首开销块被保存到对应的存储器之后经过时长T被读取,所述特定首开销块是所述n个首开销块中最后被保存的首开销块;其中,
    所述时长T大于等于1个时钟周期,所述时钟周期为所述第一网络设备对一个存储器执行一次读操作所需的时长。
  8. 根据权利要求7所述的第一网络设备,其特征在于,所述n个PHY中包括p个PHY,所述接收器还用于:在第二时间段,接收所述第二网络设备发送的p个首开销块,所述p个首开销块与p个FlexE开销帧一一对应,所述p个FlexE开销帧与所述p个PHY一一对应,其中,在所述第二时间段内,所述FlexE group中的m个PHY处于故障状态,并且,所述p个PHY处于正常状态,n=p+m,1≤m<n,m和p均为整数;
    所述处理器还用于:在所述第二时间段,将所述p个首开销块保存到所述n个存储器中的p个存储器,并同时从所述p个存储器读取所述p个首开销块,所述p个首开销块与所述p个存储器一一对应。
  9. 根据权利要求8所述的第一网络设备,其特征在于,所述处理器还用于:
    在所述第二时间段,在所述m个PHY所承载的client所映射的时隙上发送连续的以太网本地故障顺序集Ethernet Local Fault Ordered Set。
  10. 根据权利要求9所述的第一网络设备,其特征在于,所述处理器,还用于在所述m个PHY所承载的全部client中所映射的时隙上发送连续的以太网本地故障顺序集Ethernet Local Fault Ordered Set,包括:
    所述处理器,还用于向所述m个PHY所对应的m个存储器中写入所述连续的Ethernet Local Fault Ordered Set。
  11. 根据权利要求8-10任一项所述的第一网络设备,其特征在于,在所述第二时间段,在所述处理器将所述p个首开销块保存到所述n个存储器中的p个存储器之前,所述处理器还用于:
    确定第一PHY处于故障状态,所述第一PHY是所述m个PHY中的一个PHY;
    发出告警,所述告警用于指示所述FlexE group发生故障;
    确定所述第一PHY的故障类型属于第一故障类型,停止所述告警。
  12. 根据权利要求8-10任一项所述的第一网络设备,其特征在于,在所述第二时间段,在所述处理器将所述p个首开销块保存到所述n个存储器中的p个存储器之前,所述处理器还用于:
    确定第一PHY处于故障状态,所述第一PHY是所述m个PHY中的一个PHY;
    确定所述第一PHY的故障类型属于第一故障类型,避免发出指示所述FlexE group发生故障的告警。
  13. 一种第一网络设备,其特征在于,用于执行权利要求1-6任一项所述的灵活以太网FlexE的通信方法。
  14. 一种计算机可读存储介质,包括计算机程序,当所述程序被处理器运行时,使得所述处理器执行权利要求1-6任一项所述方法。
  15. 一种计算机程序产品,包括计算机程序,所述计算机程序被计算机运行时,使得所述计算机利要求1-6任一项所述方法。
PCT/CN2020/073619 2019-02-19 2020-01-21 一种灵活以太网通信方法及网络设备 WO2020168897A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP20759402.9A EP3905593A4 (en) 2019-02-19 2020-01-21 FLEXIBLE ETHERNET COMMUNICATION METHOD AND NETWORK DEVICE
JP2021548641A JP7163508B2 (ja) 2019-02-19 2020-01-21 フレキシブルイーサネット通信方法とネットワークデバイス
MX2021009929A MX2021009929A (es) 2019-02-19 2020-01-21 Metodo de comunicacion ethernet flexible y dispositivo de red.
KR1020217026739A KR102509386B1 (ko) 2019-02-19 2020-01-21 플렉서블 이더넷 통신 방법 및 네트워크 디바이스
US17/405,452 US20210385127A1 (en) 2019-02-19 2021-08-18 Flexible ethernet communication method and network device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910121587.4 2019-02-19
CN201910121587.4A CN111585779B (zh) 2019-02-19 2019-02-19 一种灵活以太网通信方法及网络设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/405,452 Continuation US20210385127A1 (en) 2019-02-19 2021-08-18 Flexible ethernet communication method and network device

Publications (1)

Publication Number Publication Date
WO2020168897A1 true WO2020168897A1 (zh) 2020-08-27

Family

ID=72125996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073619 WO2020168897A1 (zh) 2019-02-19 2020-01-21 一种灵活以太网通信方法及网络设备

Country Status (7)

Country Link
US (1) US20210385127A1 (zh)
EP (1) EP3905593A4 (zh)
JP (1) JP7163508B2 (zh)
KR (1) KR102509386B1 (zh)
CN (1) CN111585779B (zh)
MX (1) MX2021009929A (zh)
WO (1) WO2020168897A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116069568A (zh) * 2021-10-29 2023-05-05 华为技术有限公司 一种故障信息处理方法及装置
CN115865808B (zh) * 2022-12-01 2023-09-26 苏州异格技术有限公司 灵活以太网的数据块的处理方法、装置、存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170006360A1 (en) * 2015-06-30 2017-01-05 Ciena Corporation Flexible ethernet chip-to-chip inteface systems and methods
CN106330630A (zh) * 2015-07-03 2017-01-11 华为技术有限公司 传输灵活以太网的数据流的方法、发射机和接收机
US20180076932A1 (en) * 2016-09-13 2018-03-15 Fujitsu Limited Transmission device and transmission method
CN108809674A (zh) * 2017-04-28 2018-11-13 华为技术有限公司 配置链路组的方法和设备
CN108809901A (zh) * 2017-05-02 2018-11-13 华为技术有限公司 一种业务承载的方法、设备和系统
CN109218061A (zh) * 2017-07-07 2019-01-15 中兴通讯股份有限公司 灵活以太网之故障通知及获取方法、装置、通信设备

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7242736B2 (en) * 2003-05-15 2007-07-10 Sun Microsystems, Inc. Data transfer
US7760749B2 (en) * 2007-01-11 2010-07-20 Via Technologies, Inc. Apparatus and method for deskewing 1000 BASE-T Ethernet physical layer signals
JP5038207B2 (ja) * 2008-03-27 2012-10-03 日本オクラロ株式会社 伝送システム及びデータ伝送方法
US10097480B2 (en) * 2015-09-29 2018-10-09 Ciena Corporation Time transfer systems and methods over flexible ethernet
US9800361B2 (en) * 2015-06-30 2017-10-24 Ciena Corporation Flexible ethernet switching systems and methods
US10218823B2 (en) * 2015-06-30 2019-02-26 Ciena Corporation Flexible ethernet client multi-service and timing transparency systems and methods
US9900206B2 (en) * 2015-07-20 2018-02-20 Schweitzer Engineering Laboratories, Inc. Communication device with persistent configuration and verification
CN106612203A (zh) * 2015-10-27 2017-05-03 中兴通讯股份有限公司 一种处理灵活以太网客户端数据流的方法及装置
US10505655B2 (en) * 2016-07-07 2019-12-10 Infinera Corp. FlexE GMPLS signaling extensions
JP2018038017A (ja) * 2016-09-02 2018-03-08 富士通株式会社 伝送装置及び検出方法
CN107888516B (zh) * 2016-09-29 2021-05-11 中兴通讯股份有限公司 一种承载业务的方法、设备和系统
JP6612717B2 (ja) * 2016-11-24 2019-11-27 日本電信電話株式会社 光伝送システム、及び光伝送方法
JP6659530B2 (ja) * 2016-12-21 2020-03-04 日本電信電話株式会社 伝送異常検出方法、送信側装置、受信側装置及びコンピュータプログラム
CN108347317B (zh) * 2017-01-22 2020-11-10 华为技术有限公司 一种业务的传输方法、网络设备及网络系统
CN109150361B (zh) * 2017-06-16 2021-01-15 中国移动通信有限公司研究院 一种传输网络系统、数据交换和传输方法、装置及设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170006360A1 (en) * 2015-06-30 2017-01-05 Ciena Corporation Flexible ethernet chip-to-chip inteface systems and methods
CN106330630A (zh) * 2015-07-03 2017-01-11 华为技术有限公司 传输灵活以太网的数据流的方法、发射机和接收机
US20180076932A1 (en) * 2016-09-13 2018-03-15 Fujitsu Limited Transmission device and transmission method
CN108809674A (zh) * 2017-04-28 2018-11-13 华为技术有限公司 配置链路组的方法和设备
CN108809901A (zh) * 2017-05-02 2018-11-13 华为技术有限公司 一种业务承载的方法、设备和系统
CN109218061A (zh) * 2017-07-07 2019-01-15 中兴通讯股份有限公司 灵活以太网之故障通知及获取方法、装置、通信设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3905593A4

Also Published As

Publication number Publication date
CN111585779B (zh) 2021-10-15
US20210385127A1 (en) 2021-12-09
KR20210116625A (ko) 2021-09-27
EP3905593A4 (en) 2022-04-06
MX2021009929A (es) 2021-09-21
JP2022521586A (ja) 2022-04-11
JP7163508B2 (ja) 2022-10-31
CN111585779A (zh) 2020-08-25
KR102509386B1 (ko) 2023-03-14
EP3905593A1 (en) 2021-11-03

Similar Documents

Publication Publication Date Title
WO2020168898A1 (zh) 一种灵活以太网通信方法及网络设备
US10931554B2 (en) Flexible ethernet operations, administration, and maintenance systems and methods
CN109391494B (zh) 一种通信方法、设备及可读存储介质
US20210273826A1 (en) Communication method and communications apparatus
US20210385127A1 (en) Flexible ethernet communication method and network device
US20230035379A1 (en) Service flow adjustment method and communication apparatus
US11804982B2 (en) Communication method and apparatus
WO2023197770A1 (zh) 一种故障通告方法及装置
WO2024002188A1 (zh) 用于灵活以太网的方法、网络设备及存储介质
WO2023141777A1 (zh) 通信方法及网络设备
WO2024032191A1 (zh) 一种故障码块处理方法及装置
WO2023071249A1 (zh) 一种时隙协商方法及装置
WO2019023824A1 (zh) 一种比特块流处理、速率匹配、交换的方法和装置
EP3664371B1 (en) Switching method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20759402

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020759402

Country of ref document: EP

Effective date: 20210729

ENP Entry into the national phase

Ref document number: 2021548641

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20217026739

Country of ref document: KR

Kind code of ref document: A