WO2018120914A1 - DCN packet processing method, network device and network system - Google Patents

DCN packet processing method, network device and network system

Info

Publication number
WO2018120914A1
Authority
WO
WIPO (PCT)
Prior art keywords
network device
dcn
network
message
packet
Prior art date
Application number
PCT/CN2017/101337
Other languages
English (en)
French (fr)
Inventor
李孝弟
高川
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP17886402.1A (patent EP3554008B1)
Priority to KR1020197021396A (patent KR102342286B1)
Priority to ES17886402T (patent ES2863776T3)
Priority to JP2019534840A (patent JP6930801B2)
Priority to EP21154082.8A (patent EP3902234B1)
Publication of WO2018120914A1
Priority to US16/453,692 (patent US11894970B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J 3/00 Time-division multiplex systems
    • H04J 3/16 Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H04J 3/1605 Fixed allocated frame structures
    • H04J 3/1652 Optical Transport Network [OTN]
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2854 Wide area networks, e.g. public data networks
    • H04L 12/2856 Access arrangements, e.g. Internet access
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/02 Standardisation; Integration
    • H04L 41/0246 Exchanging or transporting network management information using the Internet; Embedding network management web servers in network elements; Web-services-based protocols
    • H04L 41/04 Network management architectures or arrangements
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L 41/0893 Assignment of logical groups to network elements
    • H04L 41/34 Signalling channels for network management communication
    • H04L 61/00 Network arrangements, protocols or services for addressing or naming
    • H04L 61/09 Mapping addresses
    • H04L 61/10 Mapping addresses of different types
    • H04L 61/50 Address allocation
    • H04L 61/5007 Internet protocol [IP] addresses

Definitions

  • the present application relates to the field of flexible Ethernet communication technologies, and more particularly to a data communication network (English: Data Communication Network, DCN) message processing method, a network device, and a network system.
  • the DCN is a network that transfers operation, administration and maintenance (OAM) information between a network management system (NMS) and a network element (NE).
  • the NE directly connected to the NMS functions as a gateway network element (Gateway Network Element, GNE).
  • the NMS exchanges DCN packets with the other NEs through the GNE to manage the NEs.
  • at present, in the process of constructing a network, NEs can use flexible Ethernet (Flex Ethernet, Flex Eth) technology to form a network.
  • the physical interface of an NE supports switching between standard Ethernet mode and Flex Eth mode.
  • for each NE, when switching to Flex Eth mode, the interconnected NEs must have the same Flex Eth configuration before the Flex Eth channel between them can carry DCN packets. Therefore, when a new NE is added to the network, in order to ensure that the NMS can manage the newly added NE and exchange DCN packets with it, the newly added NE must be given a Flex Eth configuration identical to that of the NE to which it is directly connected.
  • the present application provides a DCN packet processing method, a network device, and a network system, with the aim of establishing the channel between a network device and the NMS without relying on manual configuration, thereby reducing the cost of constructing a network and improving the access efficiency of network devices during network construction.
  • a first aspect of the embodiment of the present application provides a DCN packet processing method, where the method includes:
  • the first network device generates a first DCN packet, where the destination address of the first DCN packet is an IP address of the network management system NMS, and the next hop of the destination address of the first DCN packet is the second network device.
  • the first network device and the second network device are connected by a physical link;
  • the first network device loads the first DCN packet into a flexible Ethernet (Flex Ethernet) overhead multiframe;
  • the first network device sends the Flex Ethernet overhead multiframe to the second network device through the physical link, so that the second network device extracts the first DCN packet from the overhead multiframe and forwards it to the NMS.
  • in the above solution, the first network device loads the generated first DCN packet into the Flex Eth overhead multiframe, sends the Flex Eth overhead multiframe to the second network device through the physical link, and the packet is then sent to the NMS via the second network device.
  • a communication connection with the NMS is thereby established, so that the NMS can sense that a new network device has accessed the network.
  • the process does not require the technician to go to the site to manually configure and operate the network equipment of the newly accessed network, which can save manpower, material resources and operation and maintenance costs.
  • because the process of sending the first DCN packet through the physical link involves no human operation, it is less error-prone, which further improves the efficiency with which network devices access the network.
  • the first network device loads the first DCN message in a flexible Ethernet Flex Ethernet overhead multiframe, including:
  • the first network device loads the first DCN packet into the section management channel of the Flex Ethernet overhead multiframe;
  • or, the first network device loads the first DCN packet into the shim-to-shim management channel of the Flex Ethernet overhead multiframe;
  • or, the first network device splits the first DCN packet and loads the parts into the section management channel and the shim-to-shim management channel of the Flex Ethernet overhead multiframe.
  • in the above solution, the first network device can load the first DCN packet into the Flex Ethernet overhead multiframe in multiple manners, offering diverse and flexible choices.
  • the method further includes:
  • the first network device generates a second DCN packet, where the destination address of the second DCN packet is the IP address of the NMS, and the next hop toward the destination address of the second DCN packet is the second network device;
  • the first network device monitors a status of the Flex Ethernet interface, and determines that the status of the Flex Ethernet interface is a conductive state
  • the first network device sends the second DCN message to the second network device by using the Flex Ethernet interface.
  • in the above solution, after determining that the Flex Eth client is in the conductive state, the first network device automatically switches the channel used to send the second DCN packet, that is, switches from the physical link to the Flex Eth channel.
  • the Flex Eth channel can be used by the first network device to perform DCN packet interaction with other network devices, thereby improving the transmission efficiency of DCN messages.
  • the method further includes: the first network device buffering the first DCN message; or the first network device buffering the second DCN message.
  • the first DCN packet and the second DCN packet are cached to avoid packet loss.
  • a second aspect of the embodiments of the present application provides a network device, used as a first network device, where the first network device includes:
  • a generating unit configured to generate a first DCN packet, where the destination address of the first DCN packet is the IP address of the network management system NMS, the next hop toward the destination address of the first DCN packet is a second network device, and the first network device and the second network device are connected by a physical link;
  • a loading unit configured to load the first DCN message in a flexible Ethernet Flex Ethernet overhead multiframe
  • a sending unit configured to send the Flex Ethernet overhead multiframe to the second network device through the physical link, so that the second network device extracts the first DCN packet from the Flex Ethernet overhead multiframe and forwards the first DCN packet to the NMS.
  • in a possible design, the loading unit is configured to load the first DCN packet into the section management channel of the Flex Ethernet overhead multiframe; or load the first DCN packet into the shim-to-shim management channel of the Flex Ethernet overhead multiframe; or load the first DCN packet into both the section management channel and the shim-to-shim management channel of the Flex Ethernet overhead multiframe.
  • the first network device further includes: a switching unit;
  • the generating unit is further configured to generate a second DCN packet, where the destination address of the second DCN packet is the IP address of the NMS, and the next hop toward the destination address of the second DCN packet is the second network device;
  • the switching unit is configured to monitor a status of the Flex Ethernet interface, determine that the status of the Flex Ethernet interface is in an on state, and send the second DCN message to the second network device by using the Flex Ethernet interface.
  • the first network device further includes: a buffering unit, configured to cache the first DCN message; or cache the second DCN message.
  • a third aspect of the embodiments of the present application provides a network device, where the first network device is connected to a second network device by using a physical link, where the first network device includes: a memory, and a processor in communication with the memory;
  • the memory is configured to store a program code for processing a DCN message
  • the processor is configured to execute the program code saved by the memory to implement the operations in the first aspect and various possible designs described above.
  • a fourth aspect of the embodiments of the present application provides a DCN packet processing method, where the method includes:
  • the second network device receives a flexible Ethernet (Flex Ethernet) overhead multiframe sent by the first network device through a physical link, and extracts a first DCN packet from the Flex Ethernet overhead multiframe, where the destination address of the first DCN packet is the IP address of the network management system NMS;
  • the second network device is the next hop toward the destination address of the first DCN packet;
  • the second network device and the first network device are connected by the physical link;
  • the second network device sends the first DCN packet to the NMS based on the destination address.
  • in the above solution, the second network device receives the Flex Ethernet overhead multiframe sent by the first network device, extracts the first DCN packet from it, and forwards the packet to the NMS. This does not require a technician to go to the site for manual configuration and operation and maintenance, which saves manpower, material resources, and operation and maintenance costs; and because no human operation is involved, the process is less error-prone, further improving the efficiency with which network devices access the network.
  • the second network device extracts the first DCN message from the Flex Ethernet overhead multiframe, including:
  • the second network device extracts the first DCN packet from the section management channel of the Flex Ethernet overhead multiframe;
  • or, the second network device extracts the first DCN packet from the shim-to-shim management channel of the Flex Ethernet overhead multiframe;
  • or, the second network device extracts the first DCN packet from both the section management channel and the shim-to-shim management channel of the Flex Ethernet overhead multiframe.
  • the second network device caches the first DCN message.
  • the foregoing solution caches the first DCN packet to avoid packet loss.
  • a fifth aspect of the present application provides a network device, which is used as a second network device, where the second network device includes:
  • an extracting unit configured to receive a flexible Ethernet (Flex Ethernet) overhead multiframe sent by the first network device through a physical link, and extract a first DCN packet from the Flex Ethernet overhead multiframe, where the destination address of the first DCN packet is the IP address of the network management system NMS, the second network device is the next hop toward the destination address of the first DCN packet, and the second network device and the first network device are connected by the physical link;
  • a sending unit configured to send the first DCN message to the NMS based on the destination address.
  • in a possible design, the extracting unit is configured to extract the first DCN packet from the section management channel of the Flex Ethernet overhead multiframe; or extract the first DCN packet from the shim-to-shim management channel of the Flex Ethernet overhead multiframe; or extract the first DCN packet from both the section management channel and the shim-to-shim management channel of the Flex Ethernet overhead multiframe.
  • the second network device further includes: a buffering unit, configured to cache the first DCN message.
  • a sixth aspect of the embodiments of the present application provides a network device, where the second network device is connected to a first network device by using a physical link, where the second network device includes: a memory, and a processor in communication with the memory;
  • the memory is configured to store a program code for processing a data communication network DCN message
  • the processor is configured to execute the program code saved by the memory to implement operations of various possible designs in the fourth aspect and the fourth aspect.
  • a seventh aspect of the embodiments of the present application provides a network system, including: a network management system NMS, and a first network device and a second network device connected by a physical link; the first network device may be the network device according to the second aspect or the third aspect, and the second network device is the network device according to the fifth aspect or the sixth aspect.
  • an eighth aspect of the embodiments of the present application provides a computer readable storage medium for storing a computer program, where the computer program includes instructions for performing the method in the first aspect, the fourth aspect, any possible design of the first aspect, or any possible design of the fourth aspect.
  • FIG. 1 is a schematic diagram of an application scenario of a network structure according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of a method for processing a DCN packet according to an embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of another DCN packet processing method according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of a first network device according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of another first network device according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a second network device according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of another second network device according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of a network system according to an embodiment of the present application.
  • the embodiments of the present application provide a DCN packet processing method, a network device, and a system, which are used to establish the channel between a network device and the NMS without relying on manual configuration, reduce the cost of constructing a network, and improve the access efficiency of network devices when constructing a network.
  • Flex Eth technology is a technology supporting flexible, variable-rate Ethernet proposed by the Optical Internetworking Forum (OIF). A flexible Ethernet shim sublayer (Flex Eth Shim) is added between the Ethernet media access control (MAC) sublayer, that is, the link layer, and the physical layer (PHY), to implement physical channels with flexible bandwidth.
  • Flex Eth is based on the 802.3 100GBASE-R standard definition, which divides the 100GE PHY into 20 time slots, each with 5G bandwidth.
  • through the Flex Eth Shim, the MAC flexibly selects and binds one or more time slots from one or more PHYs as a variable-bandwidth interface to carry services, supporting a variable-rate Ethernet interface.
  • the flexible Ethernet group between Flex Eth Shim (English: Flex Eth Group) consists of 1-254 100GBASE-R Ethernet PHYs, and the Flex Eth Group IDs at both ends of the Flex Eth Group need to be consistent.
  • the Flex Eth Shim added between the MAC layer and the PHY layer can support 8 flexible Ethernet ports/Flex Eth clients.
  • each Flex Eth client has its own independent MAC and reconciliation sublayer (RS).
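  • To make the slot/client relationship above concrete, the following Python sketch models a Flex Eth group that binds 5G calendar slots from its PHYs to clients. It is an illustration only; the class and function names are invented here and the slot-allocation policy is an assumption, not something defined by the patent or the OIF specification.

```python
# Illustrative sketch (not from the patent or the OIF spec): binding Flex Eth
# clients to 5 Gbps calendar slots of the PHYs in a Flex Eth group.

SLOT_RATE_GBPS = 5          # each 100GBASE-R PHY is divided into 20 x 5G slots
SLOTS_PER_PHY = 20

class FlexEGroup:
    def __init__(self, group_id, phy_ids):
        # 1-254 PHYs may form a group; both ends must use the same group ID.
        self.group_id = group_id
        # free calendar slots, recorded as (phy_id, slot_index)
        self.free_slots = [(p, s) for p in phy_ids for s in range(SLOTS_PER_PHY)]
        self.clients = {}   # client_id -> list of (phy_id, slot_index)

    def add_client(self, client_id, rate_gbps):
        """Bind enough 5G slots to carry a client of the requested rate."""
        needed = -(-rate_gbps // SLOT_RATE_GBPS)   # ceiling division
        if needed > len(self.free_slots):
            raise ValueError("not enough free calendar slots")
        self.clients[client_id] = [self.free_slots.pop(0) for _ in range(needed)]
        return self.clients[client_id]

group = FlexEGroup(group_id=1, phy_ids=[0, 1])
print(group.add_client(client_id=1, rate_gbps=25))   # occupies 5 slots
```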
  • FIG. 1 is a schematic diagram of an application scenario of a network structure disclosed in an embodiment of the present application.
  • the application scenario includes NE1, NE2, NE3, NE4, NE5, NE6, NE7, NMS, and DCN.
  • the NEs are connected by physical links and use Flex Eth technology for Flex Eth networking.
  • the NMS is connected to the NE through the DCN.
  • the DCN is a network shared with the service to support communication between the NMS and the NE.
  • NE6, the NE directly connected to the NMS, serves as the GNE.
  • the NMS manages NE1, NE2, NE3, NE4, NE5, and NE7 through the GNE.
  • NE1 and NE7 in FIG. 1 are NEs newly accessing the network structure.
  • NE1 is connected to NE2 through a physical link and uses Flex Eth technology to form a Flex Eth network with NE2 before accessing the network.
  • NE7 is connected to the GNE through a physical link and uses Flex Eth technology to form a Flex Eth network with the GNE before accessing the network.
  • to prevent the NMS from being unable to sense an NE that newly accesses the network, in the existing process of constructing a network, after a hardware technician connects the new NE to the network through a physical link, a commissioning technician still needs to perform Flex Eth configuration and operation and maintenance for the newly accessed NE on site. This existing approach adds considerable manpower, material, and operation and maintenance costs. Moreover, many parameters need to be configured and the configuration process is complicated; if a configuration error occurs, the configuration must be redone, which reduces the access efficiency of NEs when the network is built.
  • the embodiment of the present application provides a method for processing a DCN packet, and takes a Flex Eth of 100GE as an example.
  • based on the definition of Flex Eth in the 802.3 100GBASE-R standard, for a 100GBASE-R port there is one overhead block (66 bits per overhead block) every 13.1 microseconds, and every eight overhead blocks form an overhead frame.
  • Each 32 overhead frames form an overhead multiframe.
  • the overhead frame is transmitted through the PHY between the two network devices, part of the information is transmitted through each overhead frame, and a part of the information is transmitted through an overhead multiframe.
  • as an example, in one overhead frame the section management channel occupies two overhead blocks, with a bandwidth of 1.222 Mbps, and the shim-to-shim management channel occupies three overhead blocks, with a bandwidth of 1.890 Mbps.
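  • As a quick plausibility check of the figures quoted above, the following sketch recomputes the management-channel bandwidths from the stated assumptions (one 66-bit overhead block roughly every 13.1 microseconds, 8 overhead blocks per overhead frame, 32 overhead frames per multiframe). Counting 64 payload bits per block for the section management channel is an assumption made here so that the result matches the quoted 1.222 Mbps; the patent text does not spell this out.

```python
# Rough bandwidth check for the management channels quoted above.

BLOCK_INTERVAL_US = 13.1
FRAME_PERIOD_US = 8 * BLOCK_INTERVAL_US          # one overhead frame
MULTIFRAME_PERIOD_US = 32 * FRAME_PERIOD_US      # one overhead multiframe

# Section management channel: 2 blocks per overhead frame; counting 64 payload
# bits per block reproduces the 1.222 Mbps figure in the text.
section_mbps = 2 * 64 / FRAME_PERIOD_US
# Shim-to-shim management channel: 3 blocks per overhead frame; counting the
# full 66 bits per block reproduces the 1.890 Mbps figure.
shim_mbps = 3 * 66 / FRAME_PERIOD_US

print(f"overhead multiframe period ~= {MULTIFRAME_PERIOD_US/1000:.2f} ms")
print(f"section management channel ~= {section_mbps:.3f} Mbps")   # ~1.221
print(f"shim-to-shim channel       ~= {shim_mbps:.3f} Mbps")      # ~1.889
```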
  • in the DCN packet processing method disclosed in the embodiments of the present application, after the network device newly accessing the network establishes a physical connection with a network device that has already accessed the network, the newly accessing network device generates a DCN packet when it starts up.
  • the DCN packet is encapsulated in an overhead multiframe and sent over the physical link to the connected network device that has already accessed the network.
  • the network device that has accessed the network then sends the DCN packet to the NMS, so that the channel between the newly accessing network device and the NMS is established and the NMS senses that a network device has accessed the network.
  • the NMS can then manage the newly accessed network device.
  • compared with the prior art, in the DCN packet processing method disclosed in the embodiments of the present application, the network device newly accessing the network sends a DCN packet loaded in a Flex Eth overhead multiframe over a physical link, thereby establishing a communication connection with the NMS and enabling the NMS to sense that a new network device has accessed the network.
  • the process does not need to rely on manual configuration and operation of the Flex Eth network device for the newly accessed network, which can save manpower, material resources and operation and maintenance costs.
  • the process of sending DCN messages through the physical link does not involve human operations and is not easy to make mistakes, which further improves the efficiency of network devices accessing the network.
  • the network device disclosed in the embodiment of the present application includes a hardware device and software running on the hardware device.
  • the network device may be a switch or a router.
  • S201: The first network device generates a first DCN packet.
  • as an example, the first network device may be NE1 in FIG. 1 or NE7 in FIG. 1.
  • in a specific implementation, after the physical connection is established, the first network device may, for example, generate the first DCN packet at startup.
  • the PPPoE encapsulation payload in the first DCN packet carries a destination address, and the destination address is an IP address of the NMS. That is to say, the first DCN message finally needs to be sent to the NMS.
  • the format of the DCN packet is shown in Table 1. The DCN packet includes a 6-byte destination MAC address (Destination Address, DA) and a 6-byte source MAC address (Source Address, SA).
  • the format of the Point-to-Point Protocol over Ethernet header (PPPoE header) is shown in Table 2.
  • the PPPoE encapsulation payload is the bearer packet.
  • VER refers to the version number of the PPPoE protocol.
  • TYPE refers to the PPPoE protocol type.
  • LENGTH refers to the payload length in PPPoE, which generally occupies 2 bytes.
  • NEID refers to the source network element identifier, which generally occupies 4 bytes. In the embodiments of the present application, it refers to the ID of the first network device.
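  • The following minimal sketch assembles a DCN packet with the fields named above (DA, SA, VER, TYPE, LENGTH, NEID, PPPoE encapsulation payload). The exact field widths and byte layout are assumptions for illustration; Table 1 and Table 2 of the patent, which are not reproduced here, define the authoritative format.

```python
# Illustrative DCN packet builder; the header layout below is an assumption.

import struct

def build_dcn_packet(dst_mac: bytes, src_mac: bytes, ne_id: int, payload: bytes) -> bytes:
    assert len(dst_mac) == 6 and len(src_mac) == 6
    ver, ptype = 1, 1                      # PPPoE VER and TYPE nibbles
    header = struct.pack(
        "!BHI",
        (ver << 4) | ptype,                # VER (4 bits) + TYPE (4 bits)
        len(payload),                      # LENGTH: payload length
        ne_id,                             # NEID: source network element ID
    )
    return dst_mac + src_mac + header + payload

pkt = build_dcn_packet(
    dst_mac=bytes.fromhex("ffffffffffff"),
    src_mac=bytes.fromhex("00259e000001"),
    ne_id=0x01020304,
    payload=b"IP packet destined to the NMS IP address",
)
print(len(pkt), pkt[:12].hex())
```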
  • S202: The first network device loads the first DCN packet into a Flex Eth overhead multiframe.
  • the Flex Eth overhead multiframe is a frame that is sent out at a fixed time interval by using the High-Level Data Link Control (HDLC) protocol after the network device is started.
  • the frame format of HDLC is shown in Table 3.
  • the HDLC frame consists of a flag field, an address field (Address, A), a control field (Control, C), an information field (Information, I), a frame check sequence (FCS) field, and a closing flag field.
  • the flag field occupies 8 bits and has the bit pattern "01111110".
  • the address field occupies 8 bits.
  • the control field occupies 8 bits and is used to form various commands and responses.
  • the information field occupies 8n bits, that is, it can be any binary bit string, and its length is not limited.
  • the closing flag field occupies 8 bits and has the bit pattern "01111110".
  • the first DCN message is loaded in the Flex Eth overhead multiframe, and the complete Flex Eth overhead multiframe is encapsulated and sent in the information field of the HDLC.
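  • A minimal sketch of the HDLC-style encapsulation described above, with the overhead multiframe carried in the information field. The FCS here uses binascii.crc_hqx (CRC-CCITT) as a stand-in and bit stuffing is omitted, so this only approximates real HDLC framing.

```python
# Illustrative HDLC-style wrapper for a Flex Eth overhead multiframe.

import binascii

FLAG = b"\x7e"                      # the 01111110 flag pattern

def hdlc_wrap(info: bytes, address: int = 0xFF, control: int = 0x03) -> bytes:
    body = bytes([address, control]) + info
    # Stand-in FCS: CRC-CCITT via crc_hqx; real HDLC uses CRC-16/X.25.
    fcs = binascii.crc_hqx(body, 0xFFFF).to_bytes(2, "big")
    return FLAG + body + fcs + FLAG   # (bit/byte stuffing omitted for brevity)

overhead_multiframe = bytes(64)       # placeholder multiframe contents
frame = hdlc_wrap(overhead_multiframe)
print(frame.hex())
```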
  • the first network device may load the first DCN message in a section management channel of the Flex Eth overhead multiframe.
  • the first network device may also load the first DCN message in a shim to shim management channel of the Flex Eth overhead multiframe.
  • the first network device may also divide the first DCN packet into two parts, and respectively load the section management channel and the shim to shim management channel of the Flex Eth overhead multiframe.
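  • A sketch of the third option, splitting the first DCN packet across the two management channels and recombining it at the receiver (see also the extraction steps later in this description). The proportional-to-bandwidth split is an assumed policy; the patent only states that the packet is split and loaded into both channels.

```python
# Illustrative split of a DCN message across the two management channels,
# in proportion to the channel bandwidths quoted earlier.

SECTION_MBPS = 1.222
SHIM_MBPS = 1.890

def split_for_channels(dcn_message: bytes):
    total = SECTION_MBPS + SHIM_MBPS
    cut = round(len(dcn_message) * SECTION_MBPS / total)
    return dcn_message[:cut], dcn_message[cut:]   # (section part, shim part)

def recombine(section_part: bytes, shim_part: bytes) -> bytes:
    # The receiving device extracts both parts and recombines the original message.
    return section_part + shim_part

msg = b"X" * 1000
sec, shim = split_for_channels(msg)
assert recombine(sec, shim) == msg
print(len(sec), len(shim))   # roughly 393 and 607 bytes
```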
  • S203: The first network device sends the Flex Eth overhead multiframe to the second network device through the physical link.
  • the Flex Eth overhead multiframe is sent to the second network device through the PHY.
  • if the first network device is NE1 in FIG. 1, the second network device is NE2 in FIG. 1; if the first network device is NE7 in FIG. 1, the second network device is the GNE in FIG. 1.
  • the first network device is connected to the second network device by a physical link. Because the physical link is established between the first network device and the second network device, the first network device can send the data stream to the second network device through the PHY. During sending, the management channel of each PHY is transmitted independently and is not aggregated over the Flex Eth Group, which prevents an abnormality on one PHY from making the management channel unable to transmit DCN packets.
  • the first network device determines, according to the local routing table, that the next hop network node that arrives at the NMS is the second network device, and sends the encapsulated Flex Eth overhead multiframe to the second network device by using the physical link.
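  • A minimal sketch of the routing-table lookup implied here: the device finds the next hop toward the NMS IP address by longest-prefix match over its local routing table. The table contents and the NMS address are hypothetical.

```python
# Illustrative next-hop lookup toward the NMS over a local routing table.

import ipaddress

routing_table = [
    # (destination prefix, next-hop device)
    (ipaddress.ip_network("10.0.0.0/8"), "NE2"),
    (ipaddress.ip_network("0.0.0.0/0"), "NE2"),      # default route
]

def next_hop(dst_ip: str) -> str:
    ip = ipaddress.ip_address(dst_ip)
    # Longest-prefix match over the local routing table.
    matches = [(net.prefixlen, hop) for net, hop in routing_table if ip in net]
    return max(matches)[1]

print(next_hop("10.1.1.100"))   # assumed NMS IP address; next hop is NE2
```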
  • S204: The second network device receives the Flex Eth overhead multiframe sent by the first network device through the physical link, and extracts the first DCN packet from the Flex Eth overhead multiframe.
  • the section management channel and the shim to shim management channel of the Flex Eth overhead multiframe are checked. If it is checked that the first DCN message is loaded in the section management channel, the first DCN message is extracted from the section management channel, and the first DCN message is presented in the PPPoE format.
  • if it is checked that the first DCN packet is loaded in the shim-to-shim management channel, the first DCN packet is extracted from the shim-to-shim management channel, and the first DCN packet is presented in the PPPoE format.
  • if it is checked that the first DCN packet is loaded in both channels, the loaded parts are separately extracted from the section management channel and the shim-to-shim management channel and combined to obtain the original first DCN packet, which is presented in the PPPoE format.
  • S205: The second network device sends the first DCN packet to the NMS based on the destination address.
  • the second network device sends the first DCN packet to the NMS based on the destination address in the first DCN packet, where the destination address is the IP address of the NMS.
  • as an example, the first network device is NE1 in FIG. 1 and the second network device is NE2 in FIG. 1, and NE2 is not directly connected to the NMS.
  • based on the local routing table and the destination address carried in the first DCN packet, NE2 determines that the first DCN packet is to be sent to the NMS and that the next network node to pass through is NE3, and NE2 sends the first DCN packet to NE3.
  • alternatively, NE2 may determine, based on the local routing table and the destination address carried in the first DCN packet, that the next network node to pass through is NE4, and NE2 sends the first DCN packet to NE4.
  • after receiving the first DCN packet sent by NE2, NE3 determines, based on the local routing table and the destination address carried in the first DCN packet, that the first DCN packet is to be sent to the NMS and that the next network node to pass through is the GNE, and NE3 sends the first DCN packet to the GNE.
  • the GNE receives the first DCN packet forwarded by NE3. Based on the local routing table and the destination address carried in the first DCN packet, the GNE can send the first DCN packet directly to the NMS without forwarding it to any other network device.
  • in the DCN packet processing method disclosed in this embodiment of the present application, when the first network device accesses the network, the first DCN packet that it generates is loaded into a Flex Eth overhead multiframe, the Flex Eth overhead multiframe is sent over the physical link to a network device that has already accessed the network, and the packet is then sent to the NMS via the network devices that have accessed the network.
  • a communication connection with the NMS is thereby established, so that the NMS can sense that a new network device has accessed the network.
  • the process does not require the technician to go to the site to manually configure and operate the network equipment of the newly accessed network, which can save manpower, material resources and operation and maintenance costs.
  • because the process of sending the first DCN packet through the physical link involves no human operation, it is less error-prone, which further improves the efficiency with which network devices access the network.
  • the NMS can sense that a new network device accesses the network, and then the newly accessed network device can be managed.
  • before the first network device loads the generated first DCN packet into the Flex Eth overhead multiframe for sending, in order to prevent the first DCN packet from being lost, the first network device provides a buffer space and buffers the generated first DCN packet in that buffer space. Because of the bandwidth limitation of the Flex Eth overhead multiframe, the first network device needs to set the size of the buffer space based on the bandwidth of the Flex Eth overhead multiframe, and to control the traffic when the buffered first DCN packet is sent.
  • the bandwidth of the section management channel in the Flex Eth overhead multiframe is 1.222 Mbps
  • the bandwidth of the shim to shim management channel in the Flex Eth overhead multiframe is 1.890 Mbps.
  • the size of the buffer space that the first network device can provide is as shown in formula (1) or formula (2) or formula (3).
  • the duration of the buffer is the length of time required to store the first DCN packet in the storage space.
  • the first network device can also provide a larger storage space, or the size of the storage space can be set by a technician according to requirements.
  • the bandwidth of the section management channel in the Flex Eth overhead multiframe is 1.222 Mbps
  • the bandwidth of the shim to shim management channel in the Flex Eth overhead multiframe is 1.890 Mbps.
  • the sending traffic when sent to the buffer space is as shown in formula (4) or formula (5) or formula (6).
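  • Formulas (1) through (6) are not reproduced in this text, so the following sketch only illustrates the obvious bandwidth-times-duration sizing implied by the surrounding description, using the two channel bandwidths quoted above; it is not the patent's actual formula.

```python
# Illustrative buffer sizing from channel bandwidth and buffering duration.

SECTION_MBPS = 1.222
SHIM_MBPS = 1.890

def buffer_bytes(channel_mbps: float, buffer_duration_s: float) -> int:
    """Bytes needed to hold DCN traffic sent at the channel rate for the given time."""
    return int(channel_mbps * 1_000_000 * buffer_duration_s / 8)

print(buffer_bytes(SECTION_MBPS, 0.5))                 # section channel, 0.5 s
print(buffer_bytes(SECTION_MBPS + SHIM_MBPS, 0.5))     # both channels, 0.5 s
```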
  • the first network device may first perform caching on the first DCN message, and then perform loading.
  • alternatively, the first DCN packet can be buffered and loaded at the same time.
  • optionally, the second network device may also buffer the first DCN packet before processing it further.
  • the manner in which the second network device buffers the first DCN packet is the same as the manner in which the first network device buffers the first DCN packet.
  • optionally, the other network devices through which the first DCN packet passes while being forwarded may also buffer the first DCN packet.
  • the manner in which they buffer the first DCN packet is the same as the manner in which the first network device buffers the first DCN packet.
  • for example, if the first network device is NE1 in FIG. 1 and the second network device is NE2 in FIG. 1, the first DCN packet needs to pass through NE3 when being forwarded to the NMS, and NE3 also buffers the first DCN packet in the same manner as NE1 and NE2.
  • the method for buffering the first DCN packet is used to avoid packet loss of the first DCN packet.
  • compared with sending DCN packets through the Flex Eth client, the bandwidth of the Flex Eth overhead multiframe is relatively small, so if DCN packets are transmitted through the Flex Eth overhead multiframe, the transmission efficiency is relatively low. Therefore, after the first network device newly accessing the network establishes a communication connection with the NMS by using the DCN packet processing method disclosed in the foregoing embodiment of the present application, the first network device can freely choose to send DCN packets through the Flex Eth client, or to continue sending DCN packets through the Flex Eth overhead multiframe.
  • optionally, after determining that it has established a communication connection with the NMS, the first network device automatically switches to the Flex Eth client to send DCN packets.
  • as shown in FIG. 3, a schematic flowchart of another DCN packet processing method disclosed in an embodiment of the present application includes:
  • S301 The first network device generates a second DCN message.
  • as an example, the first network device may be NE1 in FIG. 1 or NE7 in FIG. 1.
  • the destination address of the second DCN packet is the IP address of the NMS, and the next hop of the destination address of the second DCN packet is the second network device.
  • S302: The first network device monitors the status of the Flex Eth client. If it detects that the Flex Eth client is in the conductive state, the process proceeds to S303; otherwise, the second DCN packet is loaded into the Flex Eth overhead multiframe and sent to the NMS by steps similar to S202-S205 in the embodiment corresponding to FIG. 2 of the present application.
  • the status of the Flex Eth client can be monitored in real time, and the status of the Flex Eth client can be monitored according to a preset time or time interval.
  • the time interval for monitoring the status of the Flex Eth client can be set by the technician.
  • S303: The first network device determines that the Flex Eth client is in the conductive state, and sends the second DCN packet to the second network device through the Flex Eth client.
  • after determining that the status of the Flex Eth client is the conductive state, the first network device automatically switches the channel used to send the second DCN packet, that is, switches from the physical link to the Flex Eth channel.
  • the first network device can subsequently use this Flex Eth channel to exchange DCN packets with other network devices, thereby improving the transmission efficiency of DCN packets.
  • optionally, the NMS may also send management packets to the first network device through the Flex Eth channel, to implement management of the first network device.
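  • A minimal sketch of the switching behaviour in S302/S303: poll the Flex Eth client state and send the second DCN packet through the client once it is conductive, otherwise fall back to the overhead-multiframe path. All function names are placeholders; a real device would hook these to its interface drivers.

```python
# Illustrative channel selection for the second DCN packet.

import time

def send_dcn(packet: bytes, flexe_client_is_up, send_via_client, send_via_overhead,
             poll_interval_s: float = 1.0, max_polls: int = 10) -> str:
    """Send a DCN packet, preferring the Flex Eth client channel when it is up."""
    for _ in range(max_polls):
        if flexe_client_is_up():
            send_via_client(packet)
            return "sent via Flex Eth client"
        time.sleep(poll_interval_s)
    # Client never came up: fall back to the overhead multiframe over the PHY.
    send_via_overhead(packet)
    return "sent via overhead multiframe"

# Example wiring with stub transport functions:
print(send_dcn(b"second DCN packet",
               flexe_client_is_up=lambda: True,
               send_via_client=lambda p: None,
               send_via_overhead=lambda p: None,
               poll_interval_s=0.0))
```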
  • as shown in FIG. 4, an embodiment of the present application further discloses a first network device that performs the DCN packet processing method.
  • the first network device 400 includes:
  • the generating unit 401 is configured to generate a first DCN packet, where the destination address of the first DCN packet is an IP address of the NMS, and the next hop of the destination address of the first DCN packet is a second network device, where the A network device is connected to the second network device through a physical link.
  • the generating unit 401 can execute S201 shown in FIG. 2 of the embodiment of the present application, and details are not described herein.
  • the loading unit 402 is configured to load the first DCN message generated by the generating unit 401 into the Flex Ethernet overhead multiframe.
  • the loading unit 402 is configured to load the first DCN message in a section management channel of the Flex Ethernet overhead multiframe; or load the first DCN message in the Flex Ethernet overhead multiframe. In the shim to shim management channel; or, the first DCN message is loaded into the section management channel and the shim to shim management channel of the Flex Ethernet overhead multiframe.
  • the loading unit 402 can execute S202 shown in FIG. 2 of the embodiment of the present application, and details are not described herein.
  • the sending unit 403 is configured to send the Flex Ethernet overhead multiframe to the second network device by using the physical link.
  • the sending unit 403 can perform S203 shown in FIG. 2 of the embodiment of the present application, and details are not described herein.
  • the first network device 400 further includes a switching unit 404.
  • the generating unit 401 is further configured to generate a second DCN packet, where the destination address of the second DCN packet is the IP address of the NMS, and the next hop toward the destination address of the second DCN packet is the second network device.
  • the generating unit 401 can execute S301 shown in FIG. 3 of the embodiment of the present application, and details are not described herein.
  • the switching unit 404 is configured to monitor the status of the Flex Ethernet interface, determine the status of the Flex Ethernet interface to be in the on state, and send the second DCN message to the second network device through the Flex Ethernet interface.
  • the switching unit 404 can perform S302 and S303 shown in FIG. 3 of the embodiment of the present application, and details are not described herein.
  • the first network device 400 further includes a cache unit 405.
  • the buffer unit 405 is configured to cache the first DCN message and/or the second DCN message generated by the generating unit 401.
  • the buffer unit 405 caches the first DCN message and/or the second DCN message generated by the generating unit 401 in a preset cache space.
  • the size of the preset cache space can be set based on the bandwidth of the Flex Eth overhead multiframe or the cache requirement. For details, refer to the description about the cache in the embodiment of the present application.
  • in combination with the DCN packet processing method disclosed in the embodiments of the present application, the first network device disclosed in the embodiments of the present application may also be implemented directly by hardware, by a processor executing program code stored in a memory, or by a combination of the two.
  • the first network device 500 includes a processor 501 and a memory 502.
  • the network device 500 further includes a network interface 503.
  • the processor 501 is coupled to the memory 502 via a bus.
  • the processor 501 is coupled to the network interface 503 via a bus.
  • the processor 501 may be a central processing unit (English: Central Processing Unit, CPU for short), a network processor (English: Network Processor, NP for short), and an application specific integrated circuit (English: Application-Specific Integrated Circuit, ASIC for short). ) or programmable logic device (English: Programmable Logic Device, abbreviation: PLD).
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), or generic array logic (GAL).
  • the memory 502 may specifically be a content-addressable memory (English: Content-Addressable Memory, CAM for short) or a random access memory (English: Random-Access Memory, RAM for short).
  • the CAM can be a three-state content addressed memory (English: Ternary CAM, abbreviation: TCAM).
  • the network interface 503 can be a wired interface, such as a fiber distributed data interface (FDDI) interface or an Ethernet interface.
  • the memory 502 can also be integrated in the processor 501. If memory 502 and processor 501 are mutually independent devices, memory 502 is coupled to processor 501, for example, memory 502 and processor 501 can communicate over a bus.
  • the network interface 503 and the processor 501 can communicate via a bus, and the network interface 503 can also be directly connected to the processor 501.
  • the memory 502 is configured to store an operation program, code or instruction for processing the DCN message.
  • the memory 502 includes an operating system and an application for storing an operating program, code or instruction for processing the DCN message.
  • the process of the first network device involved in FIG. 2 and FIG. 3 can be completed by calling and executing the operation program, code or instruction stored in the memory 502.
  • the specific process refer to the corresponding parts of the foregoing embodiment of the present application, and details are not described herein again.
  • Figure 5 only shows a simplified design of the network device.
  • the network device may include any number of interfaces, processors, memories, etc., and all the network devices that can implement the embodiments of the present application are within the protection scope of the embodiments of the present application.
  • the embodiment of the present application further discloses a second network device that performs a DCN packet processing method.
  • the second network device is connected to the first network device 400 shown in FIG. 4 of the embodiment of the present application by a physical link.
  • FIG. 6 is a schematic structural diagram of a second network device 600 according to an embodiment of the present application.
  • the second network device 600 includes:
  • the extracting unit 601 is configured to receive a Flex Ethernet overhead multiframe sent by the first network device by using the physical link, and extract the first DCN packet from the Flex Ethernet overhead multiframe, where the destination address of the first DCN packet is NMS The IP address, the second network device is the next hop of the destination address of the first DCN message.
  • the extracting unit 601 is configured to extract the first DCN packet from the section management channel of the Flex Ethernet overhead multiframe; or extract the first DCN packet from the shim-to-shim management channel of the Flex Ethernet overhead multiframe; or extract the first DCN packet from both the section management channel and the shim-to-shim management channel of the Flex Ethernet overhead multiframe.
  • the extraction unit 601 can execute S204 shown in FIG. 2 of the embodiment of the present application, and details are not described herein.
  • the sending unit 602 is configured to send the first DCN message to the NMS based on the destination address in the first DCN message extracted by the extracting unit 601.
  • the sending unit 602 can execute S205 shown in FIG. 2 of the embodiment of the present application, and details are not described herein.
  • the second network device 600 further includes a cache unit 603.
  • the buffer unit 603 is configured to buffer the first DCN packet extracted by the extracting unit 601, or cache the first DCN packet before the sending unit 602 sends the first DCN packet.
  • the buffer unit 603 caches the first DCN packet in a preset cache space.
  • the size of the preset cache space can be set based on the needs of the cache. For details, refer to the description about the cache in the embodiment of the present application.
  • in combination with the DCN packet processing method disclosed in the embodiments of the present application, the second network device disclosed in the embodiments of the present application may also be implemented directly by hardware, by a processor executing program code stored in a memory, or by a combination of the two.
  • the second network device is connected to the first network device 500 shown in FIG. 5 of the embodiment of the present application by a physical link.
  • the second network device 700 includes a processor 701 and a memory 702.
  • the network device 700 further includes a network interface 703.
  • the processor 701 is coupled to the memory 702 via a bus.
  • the processor 701 is coupled to the network interface 703 via a bus.
  • the processor 701 may specifically be a CPU, NP, ASIC or PLD.
  • the above PLD can be a CPLD, an FPGA or a GAL.
  • the memory 702 may specifically be a CAM or a RAM.
  • the CAM can be a TCAM.
  • Network interface 703 can be a wired interface, such as an FDDI or Ethernet interface.
  • the memory 702 can also be integrated in the processor 701. If memory 702 and processor 701 are mutually independent devices, memory 702 is coupled to processor 701, for example, memory 702 and processor 701 can communicate over a bus.
  • the network interface 703 and the processor 701 can communicate via a bus, and the network interface 703 can also be directly connected to the processor 701.
  • the memory 702 is configured to store an operation program, code or instruction for processing the DCN message.
  • the memory 702 includes an operating system and an application for storing an operating program, code or instruction for processing the DCN message.
  • the process of the second network device involved in FIG. 2 and FIG. 3 can be completed by calling and executing the operation program, code or instruction stored in the memory 702.
  • the specific process refer to the corresponding parts of the foregoing embodiment of the present application, and details are not described herein again.
  • Figure 7 only shows a simplified design of the network device.
  • the network device may include any number of interfaces, processors, memories, etc., and all the network devices that can implement the embodiments of the present application are within the protection scope of the embodiments of the present application.
  • the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • Each of the above functional units may be implemented in the form of hardware or in the form of a software functional unit.
  • FIG. 8 is a network system 800 according to an embodiment of the present disclosure, including: an NMS, a first network device 801 and a second network device 802 connected by a physical link.
  • the first network device 801 is configured to generate a first DCN packet, where the destination address of the first DCN packet is the IP address of the NMS, load the first DCN packet into a Flex Ethernet overhead multiframe, and send the Flex Ethernet overhead multiframe to the second network device 802 over the physical link.
  • the second network device 802 is configured to receive a Flex Ethernet overhead multiframe sent by the first network device 801 through the physical link, and extract the first DCN packet from the Flex Ethernet overhead multiframe.
  • the second network device 802 is further configured to forward the first DCN message to the NMS according to the destination address of the first DCN message.
  • the first network device 801 may specifically be the network device disclosed in FIG. 4 or FIG. 5, and is configured to perform the corresponding operations performed by the first network device in FIG. 2 and FIG. 3 of the embodiments of the present application.
  • the second network device 802 may be specifically the network device disclosed in FIG. 6 and FIG. 7 for performing the corresponding operations performed by the second network device in FIG. 2 and FIG. 3 of the embodiment of the present application.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present application provides a DCN packet processing method, a network device, and a network system. The DCN packet processing method includes: a network device generates a first DCN packet whose destination address is the IP address of the NMS, loads the first DCN packet into a Flex Eth overhead multiframe, and sends the Flex Eth overhead multiframe over a physical link to a network device that has already accessed the network; the network device that has accessed the network extracts the first DCN packet and sends it to the NMS based on the destination address, so that the NMS senses that a new network device has accessed the network. This process does not require a technician to go to the site to manually configure and maintain the network device that newly accesses the network, which saves manpower, material resources, and operation and maintenance costs. Moreover, because the process of sending the first DCN packet over the physical link involves no human operation, it is less error-prone, further improving the efficiency with which network devices access the network.

Description

DCN packet processing method, network device and network system
This application claims priority to Chinese Patent Application No. 201611218007.6, filed with the China Patent Office on December 26, 2016 and entitled "DCN packet processing method, network device and network system", which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
The present application relates to the field of flexible Ethernet communication technologies, and more particularly, to a data communication network (Data Communication Network, DCN) packet processing method, a network device, and a network system.
BACKGROUND
A DCN is a network that transfers operation, administration and maintenance (OAM) information between a network management system (Network Management System, NMS) and network elements (Network Element, NE). The NE that is directly connected to the NMS serves as a gateway network element (Gateway Network Element, GNE); through the GNE, the NMS exchanges DCN packets with the other NEs to manage them.
At present, in the process of constructing a network, NEs can form a network by using flexible Ethernet (Flex Ethernet, Flex Eth) technology, and the physical interface of an NE supports switching between standard Ethernet mode and Flex Eth mode. For each NE, when switching to Flex Eth mode, the interconnected NEs must have the same Flex Eth configuration before the Flex Eth channel between them can be established for exchanging DCN packets. Therefore, when a new NE is added to the network, in order to ensure that the NMS can manage the newly added NE and exchange DCN packets with it, the newly added NE must be given a Flex Eth configuration identical to that of the NE to which it is directly connected.
In the prior art, when a new NE is added to the network, a technician needs to go to the site to complete the Flex Eth configuration of the newly added NE. This incurs considerable manpower, material, and operation and maintenance costs. In addition, many parameters need to be configured and the configuration process is complicated; if a configuration error occurs, the configuration must be redone, which reduces the access efficiency of NEs when the network is built.
SUMMARY
In view of this, the present application provides a DCN packet processing method, a network device, and a network system, with the aim of establishing the channel between a network device and the NMS without relying on manual configuration, thereby reducing the cost of constructing a network and improving the access efficiency of network devices during network construction.
The embodiments of the present application provide the following technical solutions:
A first aspect of the embodiments of the present application provides a DCN packet processing method, where the method includes:
a first network device generates a first DCN packet, where the destination address of the first DCN packet is the IP address of a network management system NMS, the next hop toward the destination address of the first DCN packet is a second network device, and the first network device and the second network device are connected by a physical link;
the first network device loads the first DCN packet into a flexible Ethernet (Flex Ethernet) overhead multiframe;
the first network device sends the Flex Ethernet overhead multiframe to the second network device over the physical link, so that the second network device extracts the first DCN packet from the Flex Ethernet overhead multiframe and forwards the first DCN packet to the NMS.
In the above solution, the first network device loads the generated first DCN packet into a Flex Eth overhead multiframe, sends the Flex Eth overhead multiframe to the second network device over the physical link, and the packet is then sent to the NMS via the second network device. A communication connection with the NMS is thereby established, so that the NMS can sense that a new network device has accessed the network. This process does not require a technician to go to the site to manually configure and maintain the network device that newly accesses the network, which saves manpower, material resources, and operation and maintenance costs. Moreover, because the process of sending the first DCN packet over the physical link involves no human operation, it is less error-prone, further improving the efficiency with which network devices access the network.
In a possible design, that the first network device loads the first DCN packet into a flexible Ethernet (Flex Ethernet) overhead multiframe includes:
the first network device loads the first DCN packet into the section management channel of the Flex Ethernet overhead multiframe;
or, the first network device loads the first DCN packet into the shim-to-shim management channel of the Flex Ethernet overhead multiframe;
or, the first network device splits the first DCN packet and loads the parts into the section management channel and the shim-to-shim management channel of the Flex Ethernet overhead multiframe.
In the above solution, the first network device can load the first DCN packet into the Flex Ethernet overhead multiframe in multiple manners, offering diverse and flexible choices.
In a possible design, the method further includes:
the first network device generates a second DCN packet, where the destination address of the second DCN packet is the IP address of the NMS, and the next hop toward the destination address of the second DCN packet is the second network device;
the first network device monitors the status of the Flex Ethernet interface and determines that the status of the Flex Ethernet interface is the conductive state;
the first network device sends the second DCN packet to the second network device through the Flex Ethernet interface.
In the above solution, after determining that the Flex Eth client is in the conductive state, the first network device automatically switches the channel used to send the second DCN packet, that is, from the physical link to the Flex Eth channel. The first network device can subsequently use this Flex Eth channel when exchanging DCN packets with other network devices, thereby improving the transmission efficiency of DCN packets.
In a possible design, the method further includes: the first network device buffers the first DCN packet; or, the first network device buffers the second DCN packet.
In the above solution, buffering the first DCN packet and the second DCN packet can avoid packet loss.
A second aspect of the embodiments of the present application provides a network device, used as a first network device, where the first network device includes:
a generating unit, configured to generate a first DCN packet, where the destination address of the first DCN packet is the IP address of a network management system NMS, the next hop toward the destination address of the first DCN packet is a second network device, and the first network device and the second network device are connected by a physical link;
a loading unit, configured to load the first DCN packet into a flexible Ethernet (Flex Ethernet) overhead multiframe;
a sending unit, configured to send the Flex Ethernet overhead multiframe to the second network device over the physical link, so that the second network device extracts the first DCN packet from the Flex Ethernet overhead multiframe and forwards the first DCN packet to the NMS.
In a possible design, the loading unit is configured to load the first DCN packet into the section management channel of the Flex Ethernet overhead multiframe; or load the first DCN packet into the shim-to-shim management channel of the Flex Ethernet overhead multiframe; or load the first DCN packet into the section management channel and the shim-to-shim management channel of the Flex Ethernet overhead multiframe.
In a possible design, the first network device further includes: a switching unit;
the generating unit is further configured to generate a second DCN packet, where the destination address of the second DCN packet is the IP address of the NMS, and the next hop toward the destination address of the second DCN packet is the second network device;
the switching unit is configured to monitor the status of the Flex Ethernet interface, determine that the status of the Flex Ethernet interface is the conductive state, and send the second DCN packet to the second network device through the Flex Ethernet interface.
In a possible design, the first network device further includes: a buffering unit, configured to buffer the first DCN packet; or buffer the second DCN packet.
A third aspect of the embodiments of the present application provides a network device, used as a first network device, where the first network device is connected to a second network device by a physical link, and the first network device includes: a memory, and a processor in communication with the memory;
the memory is configured to store program code for processing DCN packets;
the processor is configured to execute the program code stored in the memory, to implement the operations of the first aspect and its various possible designs described above.
A fourth aspect of the embodiments of the present application provides a DCN packet processing method, where the method includes:
a second network device receives a flexible Ethernet (Flex Ethernet) overhead multiframe sent by a first network device over a physical link, and extracts a first DCN packet from the Flex Ethernet overhead multiframe, where the destination address of the first DCN packet is the IP address of a network management system NMS, the second network device is the next hop toward the destination address of the first DCN packet, and the second network device and the first network device are connected by the physical link;
the second network device sends the first DCN packet to the NMS based on the destination address.
In the above solution, the second network device receives the Flex Ethernet overhead multiframe sent by the first network device, extracts the first DCN packet from it, and forwards the packet to the NMS. This does not require a technician to go to the site for manual configuration and operation and maintenance, which saves manpower, material resources, and operation and maintenance costs; and because no human operation is involved, the process is less error-prone, further improving the efficiency with which network devices access the network.
In a possible design, that the second network device extracts the first DCN packet from the Flex Ethernet overhead multiframe includes:
the second network device extracts the first DCN packet from the section management channel of the Flex Ethernet overhead multiframe;
or, the second network device extracts the first DCN packet from the shim-to-shim management channel of the Flex Ethernet overhead multiframe;
or, the second network device extracts the first DCN packet from the section management channel and the shim-to-shim management channel of the Flex Ethernet overhead multiframe.
In a possible design, the second network device buffers the first DCN packet.
In the above solution, buffering the first DCN packet can avoid packet loss.
A fifth aspect of the embodiments of the present application provides a network device, used as a second network device, where the second network device includes:
an extracting unit, configured to receive a flexible Ethernet (Flex Ethernet) overhead multiframe sent by a first network device over a physical link, and extract a first DCN packet from the Flex Ethernet overhead multiframe, where the destination address of the first DCN packet is the IP address of a network management system NMS, the second network device is the next hop toward the destination address of the first DCN packet, and the second network device and the first network device are connected by the physical link;
a sending unit, configured to send the first DCN packet to the NMS based on the destination address.
In a possible design, the extracting unit is configured to extract the first DCN packet from the section management channel of the Flex Ethernet overhead multiframe; or extract the first DCN packet from the shim-to-shim management channel of the Flex Ethernet overhead multiframe; or extract the first DCN packet from the section management channel and the shim-to-shim management channel of the Flex Ethernet overhead multiframe.
In a possible design, the second network device further includes: a buffering unit, configured to buffer the first DCN packet.
A sixth aspect of the embodiments of the present application provides a network device, used as a second network device, where the second network device is connected to a first network device by a physical link, and the second network device includes: a memory, and a processor in communication with the memory;
the memory is configured to store program code for processing data communication network DCN packets;
the processor is configured to execute the program code stored in the memory, to implement the operations of the fourth aspect and the various possible designs of the fourth aspect.
A seventh aspect of the embodiments of the present application provides a network system, including: a network management system NMS, and a first network device and a second network device connected by a physical link; the first network device may be the network device according to the second aspect or the third aspect, and the second network device is the network device according to the fifth aspect or the sixth aspect.
An eighth aspect of the embodiments of the present application provides a computer readable storage medium for storing a computer program, where the computer program includes instructions for performing the method in the first aspect, the fourth aspect, any possible design of the first aspect, or any possible design of the fourth aspect.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a schematic diagram of an application scenario of a network structure disclosed in an embodiment of this application;
FIG. 2 is a schematic flowchart of a DCN packet processing method disclosed in an embodiment of this application;
FIG. 3 is a schematic flowchart of another DCN packet processing method disclosed in an embodiment of this application;
FIG. 4 is a schematic structural diagram of a first network device disclosed in an embodiment of this application;
FIG. 5 is a schematic structural diagram of another first network device disclosed in an embodiment of this application;
FIG. 6 is a schematic structural diagram of a second network device disclosed in an embodiment of this application;
FIG. 7 is a schematic structural diagram of another second network device disclosed in an embodiment of this application;
FIG. 8 is a schematic structural diagram of a network system disclosed in an embodiment of this application.
DESCRIPTION OF EMBODIMENTS
The embodiments of this application provide a DCN packet processing method, a network device, and a system, which are used to establish a channel between a network device and an NMS without relying on manual configuration, reduce the cost of building a network, and improve the access efficiency of network devices when the network is built.
The terms "first" and "second" in the embodiments, claims, and accompanying drawings of this application are used to distinguish different objects rather than to describe a specific order. In addition, the terms "include" and "have" are not exclusive. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, and may also include steps or units that are not listed.
Flex Eth is a technology proposed by the Optical Internetworking Forum (OIF) that supports flexible, variable-rate Ethernet. By adding a flexible Ethernet shim sublayer (Flex Eth Shim) between the Ethernet Media Access Control (MAC) sublayer, that is, the link layer, and the physical layer (PHY), physical channels with flexible bandwidth are implemented.
As defined in the Flex Eth standard OIF-FLEXE-01, in the prior art the PHY is divided into time slots when Flex Eth networking is performed between NEs. For example, based on the 802.3 100GBASE-R definition, a 100GE PHY is divided into 20 slots, each with 5G of bandwidth. Through the FlexE Shim, the MAC flexibly selects and bonds one or more slots on one or more PHYs to serve as a variable-bandwidth interface that carries services, supporting variable-rate Ethernet interfaces. A flexible Ethernet group (Flex Eth Group) between Flex Eth Shims consists of 1 to 254 100GBASE-R Ethernet PHYs, and the Flex Eth Group IDs at both ends of the group must be identical. The Flex Eth Shim added between the MAC layer and the PHY layer can support 8 flexible Ethernet ports/flexible Ethernet interfaces (Flex Eth clients), and each Flex Eth client has its own independent MAC and reconciliation sublayer (RS).
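The following Python sketch is provided for illustration only and is not part of the disclosed embodiments; it merely restates the slot-bonding arithmetic above (20 slots of 5G per 100GE PHY, groups of 1 to 254 PHYs). The function names are hypothetical.

```python
# Illustrative sketch of Flex Eth slot bonding, using the figures quoted above.
SLOTS_PER_100G_PHY = 20       # a 100GE PHY is divided into 20 time slots
SLOT_BANDWIDTH_GBPS = 5       # each slot provides 5G of bandwidth

def client_bandwidth_gbps(bonded_slots: int) -> int:
    """Bandwidth of a Flex Eth client built by bonding `bonded_slots` 5G slots."""
    return bonded_slots * SLOT_BANDWIDTH_GBPS

def validate_group(phy_count: int) -> None:
    """A Flex Eth Group consists of 1 to 254 100GBASE-R PHYs."""
    if not 1 <= phy_count <= 254:
        raise ValueError("a Flex Eth Group must contain 1 to 254 PHYs")

validate_group(2)                    # e.g. a group of two bonded 100G PHYs
print(client_bandwidth_gbps(10))     # 10 bonded slots -> a 50G client
```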
FIG. 1 is a schematic diagram of an application scenario of a network structure disclosed in an embodiment of this application. The application scenario includes NE1, NE2, NE3, NE4, NE5, NE6, NE7, an NMS, and a DCN. The NEs are connected through physical links and form a Flex Eth network using Flex Eth technology. The NMS is connected to the NEs through the DCN, which is a network shared with services and used to support communication between the NMS and the NEs. NE6 is the NE directly connected to the NMS and serves as the gateway network element (GNE). The NMS manages NE1, NE2, NE3, NE4, NE5, and NE7 through the GNE. As an example, NE1 and NE7 in FIG. 1 are NEs newly accessing the network structure. NE1 is connected to NE2 through a physical link, forms a Flex Eth network with NE2 using Flex Eth technology, and then accesses the network. NE7 is connected to the GNE through a physical link, forms a Flex Eth network with the GNE using Flex Eth technology, and then accesses the network.
As defined in the Flex Eth standard OIF-FLEXE-01, in the prior art the PHY is divided into time slots when NE1 and NE2, or NE7 and the GNE, perform Flex Eth networking. Because the OIF-FLEXE-01 standard does not define how the DCN is transported, in the prior art DCN packets have to be carried over a Flex Eth client together with service packets.
To prevent the NMS from being unable to perceive an NE newly accessing the network, in the current process of building a network based on the prior art, after hardware technicians connect the NE to the network through a physical link, software commissioning technicians still need to configure and maintain the Flex Eth of the newly accessed NE on site. This existing approach adds considerable labor, material, and operation and maintenance costs. Many parameters need to be configured and the configuration process is complex; if a configuration error occurs, the configuration has to be redone, which reduces NE access efficiency when the network is built.
An embodiment of this application provides a method for processing DCN packets, using 100GE Flex Eth as an example. Based on the 802.3 100GBASE-R definition of Flex Eth, for a 100GBASE-R port there is one overhead block (66 bits per overhead block) every 13.1 microseconds, every 8 overhead blocks form an overhead frame, and every 32 overhead frames form an overhead multiframe. During information transfer, overhead frames are transported over the PHY between two network devices; some information is carried in each overhead frame, and some information is carried over one overhead multiframe. As an example, in one overhead frame, the section management channel occupies two overhead blocks with a bandwidth of 1.222 Mbps, and the shim to shim management channel occupies three overhead blocks with a bandwidth of 1.890 Mbps.
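A minimal sketch, for illustration only and not part of the disclosed embodiments, restating the timing figures quoted above (one 66-bit overhead block every 13.1 microseconds, 8 blocks per overhead frame, 32 frames per overhead multiframe) and the quoted management-channel bandwidths:

```python
# Overhead-structure timing derived purely from the numbers quoted in the text.
BLOCK_INTERVAL_US = 13.1        # one overhead block every 13.1 microseconds
BLOCKS_PER_FRAME = 8            # 8 overhead blocks per overhead frame
FRAMES_PER_MULTIFRAME = 32      # 32 overhead frames per overhead multiframe

frame_period_us = BLOCK_INTERVAL_US * BLOCKS_PER_FRAME                    # ~104.8 us
multiframe_period_ms = frame_period_us * FRAMES_PER_MULTIFRAME / 1000.0   # ~3.35 ms

# Management-channel bandwidths as quoted in this embodiment (Mbps).
SECTION_MGMT_BW_MBPS = 1.222       # section management channel (2 blocks per frame)
SHIM_TO_SHIM_MGMT_BW_MBPS = 1.890  # shim to shim management channel (3 blocks per frame)

print(frame_period_us, multiframe_period_ms)
```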
In the DCN packet processing method disclosed in the embodiments of this application, after a network device newly accessing the network establishes a physical connection with a network device that has already accessed the network, the newly accessing network device generates a DCN packet when it starts, encapsulates the DCN packet in an overhead multiframe, and sends it over the physical link to the connected network device that has already accessed the network. That network device sends the DCN packet to the NMS, so that the channel between the newly accessing network device and the NMS is established and the NMS perceives that a network device has accessed the network. The NMS can then further manage the newly accessed network device.
Compared with the prior art, in the DCN packet processing method disclosed in the embodiments of this application, the newly accessing network device sends the DCN packet loaded in the Flex Eth overhead multiframe over the physical link, thereby establishing a communication connection with the NMS so that the NMS can perceive that a new network device has accessed the network. This process does not rely on manual Flex Eth configuration and maintenance of the newly accessed network device, which saves labor, material, and operation and maintenance costs. Because sending the DCN packet over the physical link involves no manual operations, it is less error-prone, which further improves the efficiency with which network devices access the network.
The network device disclosed in the embodiments of this application includes hardware and software running on that hardware. Optionally, the network device may be a switch or a router.
The specific implementation of the technical solutions disclosed in the embodiments of this application is described in detail in the following embodiments.
Based on the application scenario of the network structure shown in FIG. 1, FIG. 2 is a schematic flowchart of a DCN packet processing method disclosed in an embodiment of this application, which includes the following steps.
S201: The first network device generates a first DCN packet.
As an example, the first network device may be NE1 in FIG. 1, or NE7 in FIG. 1.
In a specific implementation, after the physical connection is established, the first network device may, for example, generate the first DCN packet when it starts. The PPPoE payload of the first DCN packet carries a destination address, which is the IP address of the NMS. That is, the first DCN packet ultimately needs to be sent to the NMS.
The format of the DCN packet is shown in Table 1.
Table 1: DCN packet format
[Table 1 is rendered as an image in the original (Figure PCTCN2017101337-appb-000001); it shows the DCN packet fields described below.]
The DCN packet includes a 6-byte destination MAC address (Destination Address, DA) and a 6-byte source MAC address (Source Address, SA). The format of the Point-to-Point Protocol over Ethernet header (PPPoE header) is shown in Table 2. The PPPoE payload is the carried packet.
[Table 2 is rendered as an image in the original (Figure PCTCN2017101337-appb-000002); it shows the PPPoE header fields VER, TYPE, LENGTH, and NEID described below.]
In the PPPoE header format, VER indicates the version number of the PPPoE protocol.
TYPE indicates the PPPoE protocol type.
LENGTH indicates the payload length in PPPoE, and generally occupies 2 bytes.
NEID indicates the source network element identifier, and generally occupies 4 bytes. In this embodiment of this application, it refers to the ID of the first network device.
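The following Python sketch is illustrative only and not part of the disclosed embodiments. It assembles a packet with the fields described above (6-byte DA, 6-byte SA, a header carrying VER, TYPE, LENGTH, and NEID, then the payload). The exact bit packing, and the VER/TYPE values, are assumptions made for illustration rather than the layout of Table 2.

```python
import struct

def build_dcn_packet(da: bytes, sa: bytes, ne_id: int, payload: bytes) -> bytes:
    """Assemble a DCN packet: DA | SA | header(VER, TYPE, LENGTH, NEID) | payload."""
    assert len(da) == 6 and len(sa) == 6          # 6-byte MAC addresses
    ver, typ = 1, 1                               # assumed PPPoE version and type
    ver_type = (ver << 4) | typ                   # packed into a single byte (assumption)
    header = struct.pack("!BHI", ver_type, len(payload), ne_id)  # 2-byte LENGTH, 4-byte NEID
    return da + sa + header + payload

pkt = build_dcn_packet(b"\xff" * 6, b"\x00\x11\x22\x33\x44\x55",
                       ne_id=0x01020304, payload=b"hello NMS")
```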
S202: The first network device loads the first DCN packet into a Flex Eth overhead multiframe.
The Flex Eth overhead multiframe is a frame that, after the network device starts, is encapsulated using the High-Level Data Link Control (HDLC) protocol and sent out at fixed time intervals.
The HDLC frame format is shown in Table 3. An HDLC frame consists of a flag field, an address field (Address, A), a control field (Control, C), an information field (Information, I), a frame check sequence field (Frame Check Sequence, FCS), and a closing flag field. The opening flag field occupies 8 bits and has the bit pattern "01111110". The address field occupies 8 bits. The control field occupies 8 bits and is used to form various commands and responses. The information field occupies 8n bits and may be an arbitrary binary bit string of unrestricted length. The closing flag field occupies 8 bits and has the bit pattern "01111110".
In this embodiment of this application, the first DCN packet is loaded into the Flex Eth overhead multiframe, and the complete Flex Eth overhead multiframe is then encapsulated in the information field of an HDLC frame for transmission.
Table 3: HDLC frame format
[Table 3 is rendered as an image in the original (Figure PCTCN2017101337-appb-000003): Flag (8 bits) | Address A (8 bits) | Control C (8 bits) | Information I (8n bits) | FCS | Flag (8 bits).]
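A minimal sketch, illustrative only and not part of the disclosed embodiments, of placing the complete overhead multiframe in the HDLC information field between the 0x7E flags. The address and control values, the CRC-32 placeholder used instead of the real HDLC FCS, and the omission of bit stuffing are all simplifying assumptions.

```python
import binascii

HDLC_FLAG = b"\x7e"   # the 01111110 bit pattern

def hdlc_encapsulate(multiframe: bytes, address: int = 0xFF, control: int = 0x03) -> bytes:
    """Wrap an overhead multiframe in an HDLC frame: flag | A | C | I | FCS | flag."""
    body = bytes([address, control]) + multiframe
    fcs = binascii.crc32(body).to_bytes(4, "big")   # placeholder check, not the HDLC FCS
    return HDLC_FLAG + body + fcs + HDLC_FLAG

frame = hdlc_encapsulate(b"\x00" * 16)              # a dummy 16-byte multiframe payload
```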
The Flex Eth standard OIF-FLEXE-01 defines two management channels on each PHY of the Flex Eth overhead multiframe: the section management channel and the shim to shim management channel. The position at which the DCN packet is loaded in the Flex Eth overhead multiframe can be set in advance.
Optionally, the first network device may load the first DCN packet into the section management channel of the Flex Eth overhead multiframe.
Optionally, the first network device may load the first DCN packet into the shim to shim management channel of the Flex Eth overhead multiframe.
Optionally, the first network device may also split the first DCN packet into two parts and load them into the section management channel and the shim to shim management channel of the Flex Eth overhead multiframe, respectively.
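A minimal sketch of the three loading options just described, provided for illustration only. Representing the two management channels as a dictionary, and the byte budget used for splitting, are assumptions rather than anything defined by OIF-FLEXE-01.

```python
def load_dcn_packet(packet: bytes, mode: str, section_budget: int = 64) -> dict:
    """Place a DCN packet into the section channel, the shim channel, or both."""
    if mode == "section":
        return {"section": packet, "shim": b""}
    if mode == "shim":
        return {"section": b"", "shim": packet}
    if mode == "split":                         # split across both management channels
        return {"section": packet[:section_budget], "shim": packet[section_budget:]}
    raise ValueError("mode must be 'section', 'shim' or 'split'")

channels = load_dcn_packet(b"\x01" * 100, mode="split")
```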
S203: The first network device sends the Flex Eth overhead multiframe to the second network device over the physical link.
In a specific implementation, the Flex Eth overhead multiframe is sent to the second network device over the PHY.
As an example, if the first network device is NE1 in FIG. 1, the second network device is NE2 in FIG. 1. If the first network device is NE7 in FIG. 1, the second network device is the GNE in FIG. 1.
The first network device is connected to the second network device through a physical link. Because the physical link between the first network device and the second network device has already been established, the first network device can send the data stream to the second network device over the PHY. During transmission, the management channel of each PHY is transported independently and is not aggregated over the Flex Eth Group, which prevents the management channel from being interrupted, and DCN packet transmission from failing, when a PHY becomes abnormal.
The first network device determines from its local routing table that the next-hop network node toward the NMS is the second network device, and therefore sends the encapsulated Flex Eth overhead multiframe to the second network device over the physical link.
S204: The second network device receives the Flex Eth overhead multiframe sent by the first network device over the physical link and extracts the first DCN packet from the Flex Eth overhead multiframe.
In a specific implementation, optionally, after decapsulating the encapsulated Flex Eth overhead multiframe, the second network device checks the section management channel and the shim to shim management channel of the Flex Eth overhead multiframe. If it finds the first DCN packet loaded in the section management channel, it extracts the first DCN packet from the section management channel and presents the first DCN packet in the PPPoE format.
If it finds the first DCN packet loaded in the shim to shim management channel, it extracts the first DCN packet from the shim to shim management channel and presents the first DCN packet in the PPPoE format.
If it finds the first DCN packet loaded in both the section management channel and the shim to shim management channel, it extracts the loaded parts of the first DCN packet from the section management channel and the shim to shim management channel respectively, combines them to obtain the original DCN packet, and presents the first DCN packet in the PPPoE format.
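The receive-side counterpart of the earlier loading sketch, again illustrative only and not part of the disclosed embodiments: check both management channels and, if the packet was split, recombine the two parts into the original DCN packet. The dictionary layout mirrors the loading sketch above and is an assumption.

```python
def extract_dcn_packet(channels: dict) -> bytes:
    """Recover the DCN packet from the section and/or shim to shim management channels."""
    section = channels.get("section", b"")
    shim = channels.get("shim", b"")
    if section and shim:            # the packet was split across both channels
        return section + shim       # recombine the parts into the original packet
    return section or shim          # the packet was carried in a single channel

original = extract_dcn_packet({"section": b"\x01" * 64, "shim": b"\x01" * 36})
```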
S205: The second network device sends the first DCN packet to the NMS based on the destination address.
In a specific implementation, the second network device sends the first DCN packet to the NMS based on the destination address in the first DCN packet, which is the IP address of the NMS.
As an example, if the first network device is NE1 in FIG. 1, the second network device is NE2 in FIG. 1, and NE2 is not directly connected to the NMS. If NE2 determines, based on its local routing table and the destination address carried in the first DCN packet, that the next network node on the path to the NMS is NE3, NE2 sends the first DCN packet to NE3. If NE2 determines, based on its local routing table and the destination address carried in the first DCN packet, that the next network node on the path to the NMS is NE4, NE2 sends the first DCN packet to NE4.
Taking the case where NE2 sends the first DCN packet to NE3 as an example, after receiving the first DCN packet sent by NE2, NE3 determines, based on its local routing table and the destination address carried in the first DCN packet, that the next network node on the path to the NMS is the GNE, and NE3 sends the first DCN packet to the GNE. After receiving the first DCN packet forwarded by NE3, the GNE sends the first DCN packet to the NMS.
As an example, if the first network device is NE7 in FIG. 1, the second network device is the GNE in FIG. 1, which is directly connected to the NMS. Therefore, after extracting the first DCN packet, the GNE directly sends the first DCN packet to the NMS based on its local routing table and the destination address carried in the first DCN packet, without forwarding the first DCN packet to any other network device.
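A small illustrative sketch, not part of the disclosed embodiments, of the hop-by-hop next-hop lookup in the NE2 to NE3 to GNE to NMS example above. The routing tables are hypothetical and only show how each node picks the next hop toward the NMS.

```python
# Hypothetical per-node routing tables: node -> {destination: next hop}.
ROUTES = {
    "NE2": {"NMS": "NE3"},
    "NE3": {"NMS": "GNE"},
    "GNE": {"NMS": "NMS"},   # the GNE delivers directly to the NMS
}

def next_hop(node: str, destination: str = "NMS") -> str:
    """Return the node this hop forwards the first DCN packet to."""
    return ROUTES[node][destination]

hop = "NE2"
while hop != "NMS":
    hop = next_hop(hop)      # NE2 -> NE3 -> GNE -> NMS
```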
By performing S201 to S205, after the first network device accesses the network it loads the generated first DCN packet into the Flex Eth overhead multiframe, sends the Flex Eth overhead multiframe over the physical link to a network device that has already accessed the network, and the packet reaches the NMS via that network device. This establishes a communication connection with the NMS, so that the NMS can perceive that a new network device has accessed the network. The process does not require technicians to configure and maintain the newly accessed network device on site, which saves labor, material, and operation and maintenance costs. Because sending the first DCN packet over the physical link involves no manual operations, it is less error-prone, which further improves the efficiency with which network devices access the network.
Further, after the DCN packet processing method provided in this embodiment of this application is performed, the NMS can perceive that a new network device has accessed the network and can then manage the newly accessed network device.
Optionally, in this embodiment of this application, the first network device loads the generated first DCN packet into the Flex Eth overhead multiframe for sending. To prevent the first DCN packet from being lost, the first network device provides a buffer space and buffers the generated first DCN packet in that buffer space. Because the bandwidth of the Flex Eth overhead multiframe is limited, the first network device needs to set the size of the buffer space based on the bandwidth of the Flex Eth overhead multiframe, and to control the buffering traffic when buffering the first DCN packet.
As an example, suppose the bandwidth of the section management channel in the Flex Eth overhead multiframe is 1.222 Mbps and the bandwidth of the shim to shim management channel in the Flex Eth overhead multiframe is 1.890 Mbps.
The size of the buffer space that the first network device can provide is given by formula (1), formula (2), or formula (3):
Buffer size (bytes) = 1.222 x buffering duration / 8            (1)
Buffer size (bytes) = 1.890 x buffering duration / 8            (2)
Buffer size (bytes) = (1.222 + 1.890) x buffering duration / 8  (3)
where the buffering duration is the time required to store the first DCN packet in the buffer space.
Optionally, the first network device may also provide a larger storage space, or a technician may set the size of the storage space as required.
As an example, suppose the bandwidth of the section management channel in the Flex Eth overhead multiframe is 1.222 Mbps and the bandwidth of the shim to shim management channel in the Flex Eth overhead multiframe is 1.890 Mbps.
When the first network device buffers the first DCN packet, the sending traffic toward the buffer space satisfies formula (4), formula (5), or formula (6):
Sending traffic = first DCN packet length x 8 x number of first DCN packets sent per second < 1.222            (4)
Sending traffic = first DCN packet length x 8 x number of first DCN packets sent per second < 1.890            (5)
Sending traffic = first DCN packet length x 8 x number of first DCN packets sent per second < (1.222 + 1.890)  (6)
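An illustrative Python sketch of formulas (1) to (6), not part of the disclosed embodiments. It uses the quoted channel bandwidths; the only added assumption is that the bandwidth is converted from Mbps to bit/s so that the traffic comparison in formulas (4) to (6) is made in consistent units.

```python
SECTION_BW_MBPS = 1.222
SHIM_BW_MBPS = 1.890

def buffer_size(bandwidth_mbps: float, duration_s: float) -> float:
    """Formulas (1)-(3): buffer size = bandwidth x buffering duration / 8."""
    return bandwidth_mbps * duration_s / 8

def traffic_within_limit(pkt_len_bytes: int, pkts_per_s: int, bandwidth_mbps: float) -> bool:
    """Formulas (4)-(6): packet length x 8 x packets per second must stay below
    the channel bandwidth (converted here from Mbps to bit/s for comparison)."""
    return pkt_len_bytes * 8 * pkts_per_s < bandwidth_mbps * 1_000_000

print(buffer_size(SECTION_BW_MBPS + SHIM_BW_MBPS, duration_s=2))
print(traffic_within_limit(64, 2, SECTION_BW_MBPS))
```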
Optionally, in this embodiment of this application, the first network device may buffer the first DCN packet first and then load it, or may buffer and load the first DCN packet at the same time.
Optionally, in this embodiment of this application, after receiving the first DCN packet sent by the first network device, the second network device may also buffer the first DCN packet before processing it. The specific way of buffering the first DCN packet is the same as the way in which the first network device buffers the first DCN packet; see the foregoing description, which is not repeated here.
Further, if the second network device is not the GNE, each network device along the path over which it forwards the first DCN packet to the NMS may buffer the first DCN packet before forwarding it, in the same way in which the first network device buffers the first DCN packet; see the foregoing description, which is not repeated here. As an example, if the first network device is NE1 in FIG. 1, the second network device is NE2 in FIG. 1, and NE3 is on the path when NE2 forwards the first DCN packet to the NMS, then NE3 also buffers the first DCN packet in the same way as NE1 and NE2.
In the foregoing embodiments of this application, buffering the first DCN packet can avoid loss of the first DCN packet.
Optionally, in this embodiment of this application, compared with sending DCN packets over the Flex Eth client, the bandwidth of the Flex Eth overhead multiframe is small; if DCN packets were always transported over the Flex Eth overhead multiframe, the transmission efficiency would be relatively low. Therefore, based on the DCN packet processing method disclosed in the foregoing embodiment of this application, after the communication connection between the newly accessing first network device and the NMS is established, the first network device is free to send DCN packets over the Flex Eth client or to continue sending DCN packets over the Flex Eth overhead multiframe.
Optionally, in this embodiment of this application, the first network device may also automatically switch to sending DCN packets over the Flex Eth client after determining that the communication connection between the first network device and the NMS has been established.
FIG. 3 is a schematic flowchart of another DCN packet processing method disclosed in an embodiment of this application, which includes the following steps.
S301: The first network device generates a second DCN packet.
As an example, the first network device may be NE1 in FIG. 1, or NE7 in FIG. 1. The destination address of the second DCN packet is the IP address of the NMS, and the next hop to the destination address of the second DCN packet is the second network device.
S302: The first network device monitors the state of the Flex Eth client. If it detects that the Flex Eth client is in the up state, S303 is performed; otherwise, the second DCN packet is loaded into the Flex Eth overhead multiframe and sent to the NMS through steps similar to S202 to S205 in the embodiment corresponding to FIG. 2 of this application.
In a specific implementation, after starting, the first network device may monitor the state of the Flex Eth client in real time, or may monitor the state of the Flex Eth client at a preset time or at preset time intervals. The specific interval at which the state of the Flex Eth client is monitored can be set by a technician.
S303: The first network device determines that the Flex Eth client is in the up state and sends the second DCN packet to the second network device over the Flex Eth client.
After determining that the Flex Eth client is in the up state, the first network device automatically switches the channel used to send the second DCN packet, that is, from the physical link to the Flex Eth channel. The first network device can then use this Flex Eth channel when exchanging DCN packets with other network devices, which improves DCN packet transmission efficiency.
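A minimal sketch of the S302/S303 polling-and-switching behavior, for illustration only and not part of the disclosed embodiments. The state check and the two send paths are injected as callables because the patent does not define such an API; the polling interval is the operator-configurable interval mentioned above.

```python
import time
from typing import Callable

def send_second_dcn_packet(packet: bytes,
                           client_is_up: Callable[[], bool],
                           send_over_multiframe: Callable[[bytes], None],
                           send_over_client: Callable[[bytes], None],
                           poll_interval_s: float = 1.0) -> None:
    """Poll the Flex Eth client state (S302); switch to the client once it is up (S303)."""
    while not client_is_up():
        send_over_multiframe(packet)   # keep using the overhead multiframe meanwhile
        time.sleep(poll_interval_s)    # monitoring interval set by the technician
    send_over_client(packet)           # Flex Eth client is up: switch channels
```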
Further, the NMS can also send management packets to the first network device over this Flex Eth channel, thereby managing the first network device.
Based on the DCN packet processing method disclosed in the embodiments of this application, an embodiment of this application further discloses a first network device that performs the DCN packet processing method.
FIG. 4 is a schematic structural diagram of a first network device 400 disclosed in an embodiment of this application. The first network device 400 includes:
a generation unit 401, configured to generate a first DCN packet, where the destination address of the first DCN packet is the IP address of the NMS, the next hop to the destination address of the first DCN packet is a second network device, and the first network device is connected to the second network device through a physical link.
The generation unit 401 can perform S201 shown in FIG. 2 of this application, which is not described again here.
A loading unit 402 is configured to load the first DCN packet generated by the generation unit 401 into a Flex Ethernet overhead multiframe.
In a specific implementation, optionally, the loading unit 402 is configured to load the first DCN packet into the section management channel of the Flex Ethernet overhead multiframe; or load the first DCN packet into the shim to shim management channel of the Flex Ethernet overhead multiframe; or load the first DCN packet into both the section management channel and the shim to shim management channel of the Flex Ethernet overhead multiframe.
The loading unit 402 can perform S202 shown in FIG. 2 of this application, which is not described again here.
A sending unit 403 is configured to send the Flex Ethernet overhead multiframe to the second network device over the physical link.
The sending unit 403 can perform S203 shown in FIG. 2 of this application, which is not described again here.
Optionally, the first network device 400 further includes a switching unit 404.
In a specific implementation, the generation unit 401 is further configured to generate a second DCN packet, where the destination address of the second DCN packet is the IP address of the NMS, and the next hop to the destination address of the second DCN packet is the second network device.
The generation unit 401 can perform S301 shown in FIG. 3 of this application, which is not described again here.
The switching unit 404 is configured to monitor the Flex Ethernet interface state, determine that the Flex Ethernet interface state is the up state, and send the second DCN packet to the second network device through the Flex Ethernet interface.
The switching unit 404 can perform S302 and S303 shown in FIG. 3 of this application, which are not described again here.
Optionally, the first network device 400 further includes a buffer unit 405.
The buffer unit 405 is configured to buffer the first DCN packet and/or the second DCN packet generated by the generation unit 401.
The buffer unit 405 buffers the first DCN packet and/or the second DCN packet generated by the generation unit 401 in a preset buffer space. The size of the preset buffer space can be set based on the bandwidth of the Flex Eth overhead multiframe or on buffering requirements. For details, see the description of buffering in the embodiments of this application.
In combination with the DCN packet processing method disclosed in the embodiments of this application, the first network device disclosed in the embodiments of this application may also be implemented directly in hardware, in a memory executed by a processor, or in a combination of the two.
As shown in FIG. 5, the first network device 500 includes a processor 501 and a memory 502. Optionally, the network device 500 further includes a network interface 503. The processor 501 is coupled to the memory 502 through a bus, and the processor 501 is coupled to the network interface 503 through the bus.
The processor 501 may specifically be a central processing unit (CPU), a network processor (NP), an application-specific integrated circuit (ASIC), or a programmable logic device (PLD). The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), or generic array logic (GAL).
The memory 502 may specifically be a content-addressable memory (CAM) or a random-access memory (RAM). The CAM may be a ternary content-addressable memory (TCAM).
The network interface 503 may be a wired interface, for example, a Fiber Distributed Data Interface (FDDI) or an Ethernet interface.
The memory 502 may also be integrated in the processor 501. If the memory 502 and the processor 501 are separate devices, the memory 502 is connected to the processor 501; for example, the memory 502 and the processor 501 may communicate through a bus. The network interface 503 and the processor 501 may communicate through the bus, or the network interface 503 may be directly connected to the processor 501.
The memory 502 is configured to store an operating program, code, or instructions for processing DCN packets. Optionally, the memory 502 includes an operating system and an application program, and is configured to store the operating program, code, or instructions for processing DCN packets.
When the processor 501 or a hardware device needs to process a DCN packet, the processing procedure of the first network device in FIG. 2 and FIG. 3 can be completed by invoking and executing the operating program, code, or instructions stored in the memory 502. For the specific procedure, see the corresponding parts of the foregoing embodiments of this application, which are not described again here.
It can be understood that FIG. 5 shows only a simplified design of the network device. In actual applications, the network device may include any number of interfaces, processors, memories, and so on, and all network devices that can implement the embodiments of this application fall within the protection scope of the embodiments of this application.
Based on the DCN packet processing method disclosed in the embodiments of this application, an embodiment of this application further discloses a second network device that performs the DCN packet processing method. The second network device is connected through a physical link to the first network device 400 shown in FIG. 4 of this application.
FIG. 6 is a schematic structural diagram of a second network device 600 disclosed in an embodiment of this application. The second network device 600 includes:
an extraction unit 601, configured to receive a Flex Ethernet overhead multiframe sent by the first network device over a physical link and extract a first DCN packet from the Flex Ethernet overhead multiframe, where the destination address of the first DCN packet is the IP address of the NMS, and the second network device is the next hop to the destination address of the first DCN packet.
In a specific implementation, optionally, the extraction unit 601 is configured to extract the first DCN packet from the section management channel of the Flex Ethernet overhead multiframe; or extract the first DCN packet from the shim to shim management channel of the Flex Ethernet overhead multiframe; or extract the first DCN packet from both the section management channel and the shim to shim management channel of the Flex Ethernet overhead multiframe.
The extraction unit 601 can perform S204 shown in FIG. 2 of this application, which is not described again here.
A sending unit 602 is configured to send the first DCN packet to the NMS based on the destination address in the first DCN packet extracted by the extraction unit 601.
The sending unit 602 can perform S205 shown in FIG. 2 of this application, which is not described again here.
Optionally, the second network device 600 further includes a buffer unit 603.
The buffer unit 603 is configured to buffer the first DCN packet extracted by the extraction unit 601, or to buffer the first DCN packet before the sending unit 602 sends the first DCN packet.
The buffer unit 603 buffers the first DCN packet in a preset buffer space. The size of the preset buffer space can be set based on buffering requirements. For details, see the description of buffering in the embodiments of this application.
In combination with the DCN packet processing method disclosed in the embodiments of this application, the second network device disclosed in the embodiments of this application may also be implemented directly in hardware, in a memory executed by a processor, or in a combination of the two. The second network device is connected through a physical link to the first network device 500 shown in FIG. 5 of this application.
As shown in FIG. 7, the second network device 700 includes a processor 701 and a memory 702. Optionally, the network device 700 further includes a network interface 703. The processor 701 is coupled to the memory 702 through a bus, and the processor 701 is coupled to the network interface 703 through the bus.
The processor 701 may specifically be a CPU, an NP, an ASIC, or a PLD. The PLD may be a CPLD, an FPGA, or a GAL.
The memory 702 may specifically be a CAM or a RAM. The CAM may be a TCAM.
The network interface 703 may be a wired interface, for example, an FDDI or Ethernet interface.
The memory 702 may also be integrated in the processor 701. If the memory 702 and the processor 701 are separate devices, the memory 702 is connected to the processor 701; for example, the memory 702 and the processor 701 may communicate through a bus. The network interface 703 and the processor 701 may communicate through the bus, or the network interface 703 may be directly connected to the processor 701.
The memory 702 is configured to store an operating program, code, or instructions for processing DCN packets. Optionally, the memory 702 includes an operating system and an application program, and is configured to store the operating program, code, or instructions for processing DCN packets.
When the processor 701 or a hardware device needs to process a DCN packet, the processing procedure of the second network device in FIG. 2 and FIG. 3 can be completed by invoking and executing the operating program, code, or instructions stored in the memory 702. For the specific procedure, see the corresponding parts of the foregoing embodiments of this application, which are not described again here.
It can be understood that FIG. 7 shows only a simplified design of the network device. In actual applications, the network device may include any number of interfaces, processors, memories, and so on, and all network devices that can implement the embodiments of this application fall within the protection scope of the embodiments of this application.
The functional units in the embodiments of this application may be integrated in one processor, or each unit may exist physically on its own, or two or more circuits may be integrated in one circuit. The functional units may be implemented in the form of hardware or in the form of software functional units.
FIG. 8 shows a network system 800 disclosed in an embodiment of this application, including an NMS and a first network device 801 and a second network device 802 connected through a physical link.
The first network device 801 is configured to generate a first DCN packet, where the destination address of the first DCN packet is the IP address of the NMS, load the first DCN packet into a Flex Ethernet overhead multiframe, and send the Flex Ethernet overhead multiframe to the second network device 802 over the physical link.
The second network device 802 is configured to receive the Flex Ethernet overhead multiframe sent by the first network device 801 over the physical link and extract the first DCN packet from the Flex Ethernet overhead multiframe.
The second network device 802 is further configured to forward the first DCN packet to the NMS based on the destination address of the first DCN packet.
In the network system disclosed in the foregoing embodiment of this application, the first network device 801 may specifically be the network device disclosed in FIG. 4 and FIG. 5, configured to perform the corresponding operations performed by the first network device in FIG. 2 and FIG. 3 of this application. The second network device 802 may specifically be the network device disclosed in FIG. 6 and FIG. 7, configured to perform the corresponding operations performed by the second network device in FIG. 2 and FIG. 3 of this application. For the specific procedure and execution principles, refer to the foregoing description, which is not repeated here.
A person skilled in the art should be aware that, in one or more of the foregoing examples, the functions described in this application may be implemented by hardware, software, firmware, or any combination thereof. When implemented by software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium accessible to a general-purpose or special-purpose computer.
The parts of this specification are described in a progressive manner; for identical or similar parts of the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus and system embodiments are described relatively briefly because they are basically similar to the method embodiments; for relevant details, see the description of the method embodiments.
The foregoing descriptions are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

  1. A data communication network DCN packet processing method, wherein the method comprises:
    generating, by a first network device, a first DCN packet, wherein a destination address of the first DCN packet is an IP address of a network management system NMS, a next hop to the destination address of the first DCN packet is a second network device, and the first network device is connected to the second network device through a physical link;
    loading, by the first network device, the first DCN packet into a flexible Ethernet Flex Ethernet overhead multiframe; and
    sending, by the first network device, the Flex Ethernet overhead multiframe to the second network device through the physical link, so that the second network device extracts the first DCN packet from the Flex Ethernet overhead multiframe and forwards the first DCN packet to the NMS.
  2. The data communication network DCN packet processing method according to claim 1, wherein the loading, by the first network device, the first DCN packet into the flexible Ethernet Flex Ethernet overhead multiframe comprises:
    loading, by the first network device, the first DCN packet into a section management channel of the Flex Ethernet overhead multiframe;
    or,
    loading, by the first network device, the first DCN packet into a shim to shim management channel of the Flex Ethernet overhead multiframe;
    or,
    splitting, by the first network device, the first DCN packet and loading it into the section management channel and the shim to shim management channel of the flexible Ethernet Flex Ethernet overhead multiframe.
  3. The data communication network DCN packet processing method according to claim 1 or 2, further comprising:
    generating, by the first network device, a second DCN packet, wherein a destination address of the second DCN packet is the IP address of the NMS, and a next hop to the destination address of the second DCN packet is the second network device;
    monitoring, by the first network device, a Flex Ethernet interface state, and determining that the Flex Ethernet interface state is an up state; and
    sending, by the first network device, the second DCN packet to the second network device through the Flex Ethernet interface.
  4. A network device used as a first network device, wherein the first network device comprises:
    a generation unit, configured to generate a first DCN packet, wherein a destination address of the first DCN packet is an IP address of a network management system NMS, a next hop to the destination address of the first DCN packet is a second network device, and the first network device is connected to the second network device through a physical link;
    a loading unit, configured to load the first DCN packet into a flexible Ethernet Flex Ethernet overhead multiframe; and
    a sending unit, configured to send the Flex Ethernet overhead multiframe to the second network device through the physical link, so that the second network device extracts the first DCN packet from the Flex Ethernet overhead multiframe and forwards the first DCN packet to the NMS.
  5. The network device according to claim 4, wherein the loading unit is configured to load the first DCN packet into a section management channel of the flexible Ethernet Flex Ethernet overhead multiframe; or load the first DCN packet into a shim to shim management channel of the Flex Ethernet overhead multiframe; or load the first DCN packet into the section management channel and the shim to shim management channel of the flexible Ethernet Flex Ethernet overhead multiframe.
  6. The network device according to claim 4 or 5, further comprising a switching unit, wherein:
    the generation unit is further configured to generate a second DCN packet, wherein a destination address of the second DCN packet is the IP address of the NMS, and a next hop to the destination address of the second DCN packet is the second network device; and
    the switching unit is configured to monitor a Flex Ethernet interface state, determine that the Flex Ethernet interface state is an up state, and send the second DCN packet to the second network device through the Flex Ethernet interface.
  7. A data communication network DCN packet processing method, wherein the method comprises:
    receiving, by a second network device, a flexible Ethernet Flex Ethernet overhead multiframe sent by a first network device through a physical link, and extracting a first DCN packet from the Flex Ethernet overhead multiframe, wherein a destination address of the first DCN packet is an IP address of a network management system NMS, the second network device is a next hop to the destination address of the first DCN packet, and the second network device is connected to the first network device through the physical link; and
    sending, by the second network device, the first DCN packet to the NMS based on the destination address.
  8. The data communication network DCN packet processing method according to claim 7, wherein the extracting, by the second network device, the first DCN packet from the Flex Ethernet overhead multiframe comprises:
    extracting, by the second network device, the first DCN packet from a section management channel of the Flex Ethernet overhead multiframe;
    or,
    extracting, by the second network device, the first DCN packet from a shim to shim management channel of the Flex Ethernet overhead multiframe;
    or,
    extracting, by the second network device, the first DCN packet from the section management channel and the shim to shim management channel of the Flex Ethernet overhead multiframe.
  9. A network device used as a second network device, wherein the second network device comprises:
    an extraction unit, configured to receive a flexible Ethernet Flex Ethernet overhead multiframe sent by a first network device through a physical link, and extract a first DCN packet from the Flex Ethernet overhead multiframe, wherein a destination address of the first DCN packet is an IP address of a network management system NMS, the second network device is a next hop to the destination address of the first DCN packet, and the second network device is connected to the first network device through the physical link; and
    a sending unit, configured to send the first DCN packet to the NMS based on the destination address.
  10. The network device according to claim 9, wherein the extraction unit is configured to extract the first DCN packet from a section management channel of the Flex Ethernet overhead multiframe; or extract the first DCN packet from a shim to shim management channel of the Flex Ethernet overhead multiframe; or extract the first DCN packet from the section management channel and the shim to shim management channel of the Flex Ethernet overhead multiframe.
  11. A network system, comprising: a network management system NMS, and a first network device and a second network device connected through a physical link, wherein:
    the first network device is configured to generate a first DCN packet, wherein a destination address of the first DCN packet is an IP address of the NMS, load the first DCN packet into a flexible Ethernet Flex Ethernet overhead multiframe, and send the Flex Ethernet overhead multiframe to the second network device through the physical link;
    the second network device is configured to receive the Flex Ethernet overhead multiframe sent by the first network device through the physical link, and extract the first DCN packet from the Flex Ethernet overhead multiframe; and
    the second network device is further configured to forward the first DCN packet to the NMS based on the destination address of the first DCN packet.
PCT/CN2017/101337 2016-12-26 2017-09-12 Dcn报文处理方法、网络设备和网络系统 WO2018120914A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP17886402.1A EP3554008B1 (en) 2016-12-26 2017-09-12 Method, network device, and network system for processing dcn message
KR1020197021396A KR102342286B1 (ko) 2016-12-26 2017-09-12 Dcn 메시지 처리 방법, 네트워크 디바이스, 및 네트워크 시스템
ES17886402T ES2863776T3 (es) 2016-12-26 2017-09-12 Método, dispositivo de red y sistema de red para procesar mensajes de DCN
JP2019534840A JP6930801B2 (ja) 2016-12-26 2017-09-12 Dcnパケット処理方法、およびネットワークデバイス
EP21154082.8A EP3902234B1 (en) 2016-12-26 2017-09-12 Dcn packet processing method, network device, and network system
US16/453,692 US11894970B2 (en) 2016-12-26 2019-06-26 DCN packet processing method, network device, and network system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611218007.6 2016-12-26
CN201611218007.6A CN108243035B (zh) 2016-12-26 2016-12-26 Dcn报文处理方法、网络设备和网络系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/453,692 Continuation US11894970B2 (en) 2016-12-26 2019-06-26 DCN packet processing method, network device, and network system

Publications (1)

Publication Number Publication Date
WO2018120914A1 true WO2018120914A1 (zh) 2018-07-05

Family

ID=62701346

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/101337 WO2018120914A1 (zh) 2016-12-26 2017-09-12 Dcn报文处理方法、网络设备和网络系统

Country Status (7)

Country Link
US (1) US11894970B2 (zh)
EP (2) EP3554008B1 (zh)
JP (2) JP6930801B2 (zh)
KR (1) KR102342286B1 (zh)
CN (2) CN113300876B (zh)
ES (1) ES2863776T3 (zh)
WO (1) WO2018120914A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022507745A (ja) * 2018-11-21 2022-01-18 ホアウェイ・テクノロジーズ・カンパニー・リミテッド 通信方法および通信装置
JP2022512470A (ja) * 2018-12-10 2022-02-04 華為技術有限公司 通信方法および装置

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110875862B (zh) * 2018-08-31 2022-07-19 中兴通讯股份有限公司 一种报文传输方法及装置、计算机存储介质
CN111082957B (zh) * 2018-10-22 2023-04-07 中兴通讯股份有限公司 端口配置检测方法、终端和计算机可读存储介质
EP3935788A4 (en) 2019-03-22 2022-03-16 Huawei Technologies Co., Ltd. NETWORK NODE AND DEVICE FOR DATA COMMUNICATION NETWORKS
CN111817986B (zh) * 2019-04-11 2023-05-09 中国移动通信有限公司研究院 一种报文处理方法、装置及计算机可读存储介质
CN111917621B (zh) * 2019-05-10 2021-09-07 烽火通信科技股份有限公司 通信设备的网管服务器与网元的通信方法及系统
CN112911420A (zh) * 2019-12-03 2021-06-04 中兴通讯股份有限公司 基于FlexE网络的重路由方法、电子设备和可读存储介质
CN111368330B (zh) * 2020-03-03 2022-08-05 泰华智慧产业集团股份有限公司 一种基于区块链的以太坊智能合约审计系统及方法
CN115334042A (zh) * 2021-04-25 2022-11-11 中国移动通信有限公司研究院 一种数据传输方法、装置、系统和通信设备
CN114040036B (zh) * 2021-10-26 2023-04-28 中国联合网络通信集团有限公司 一种数据处理方法、装置及存储介质
CN114615136B (zh) * 2022-03-04 2023-10-27 浙江国盾量子电力科技有限公司 一种5G智能电网切片的FlexE接口管理方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101064565A (zh) * 2006-04-30 2007-10-31 上海贝尔阿尔卡特股份有限公司 一种自动交换光网络中控制信息传送/接收方法及其系统
CN102308523A (zh) * 2011-07-27 2012-01-04 华为技术有限公司 数据通信网络配置方法、网关网元及数据通信系统
US20120250695A1 (en) * 2011-03-31 2012-10-04 Nokia Siemens Networks Ethernet Solutions Ltd. Hitless node insertion for ethernet networks
CN102780569A (zh) * 2011-05-09 2012-11-14 中兴通讯股份有限公司 远程管理方法及网元设备
CN104639360A (zh) * 2013-11-14 2015-05-20 中兴通讯股份有限公司 一种控制网元设备加入网络的方法及网元设备

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06261362A (ja) * 1993-03-08 1994-09-16 Hitachi Ltd 無線基地局増設制御方式
JP2871469B2 (ja) 1994-07-19 1999-03-17 日本電気株式会社 Atm網構成管理方法
EP1197045B1 (en) * 1999-07-21 2007-12-12 Broadcom Corporation Unified table for L2, L3, L4 switching and filtering
CN1984028A (zh) * 2005-11-28 2007-06-20 华为技术有限公司 一种数据包传输方法
CN101141284B (zh) * 2007-01-31 2011-01-19 中兴通讯股份有限公司 业务带宽配置方法和网管系统
JP4106074B2 (ja) * 2007-09-10 2008-06-25 株式会社エヌ・ティ・ティ・ドコモ ネットワーク機器管理システム及びその方法並びにネットワーク機器設定制御装置、ネットワーク機器
CN101184098B (zh) * 2007-12-11 2011-11-02 华为技术有限公司 数据传输方法和传输装置
CN102136959B (zh) * 2010-01-22 2014-01-22 华为技术有限公司 以太网链路管理方法、装置及系统
JP6050720B2 (ja) 2013-05-15 2016-12-21 Kddi株式会社 コアネットワークにおけるゲートウェイのセッション情報を移行させるシステム及び方法
US9609400B2 (en) * 2013-08-22 2017-03-28 Nec Corporation Reconfigurable and variable-rate shared multi-transponder architecture for flexible ethernet-based optical networks
CN105264778B (zh) * 2013-12-31 2019-04-19 华为技术有限公司 一种crc计算方法及装置
CN107682217B (zh) * 2014-01-14 2021-07-27 广东经纬天地科技有限公司 以太网信号调度方法、装置和系统
US9344323B2 (en) * 2014-01-23 2016-05-17 Ciena Corporation G.8032 ethernet multiple fault recovery mechanisms
US10225037B2 (en) * 2014-10-24 2019-03-05 Ciena Corporation Channelized ODUflex systems and methods
US10637604B2 (en) * 2014-10-24 2020-04-28 Ciena Corporation Flexible ethernet and multi link gearbox mapping procedure to optical transport network
US9998434B2 (en) * 2015-01-26 2018-06-12 Listat Ltd. Secure dynamic communication network and protocol
US9800361B2 (en) 2015-06-30 2017-10-24 Ciena Corporation Flexible ethernet switching systems and methods
US10218823B2 (en) * 2015-06-30 2019-02-26 Ciena Corporation Flexible ethernet client multi-service and timing transparency systems and methods
US10200248B1 (en) * 2016-06-30 2019-02-05 Juniper Networks, Inc. Translating high-level configuration instructions to low-level device configuration
US9985724B2 (en) * 2016-09-09 2018-05-29 Ciena Corporation Horizontal synchronization extensions for service resizing in optical networks
CN113285781B (zh) 2016-12-08 2022-08-19 中兴通讯股份有限公司 复帧发送、接收方法、装置、通讯设备及通讯网络系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101064565A (zh) * 2006-04-30 2007-10-31 上海贝尔阿尔卡特股份有限公司 一种自动交换光网络中控制信息传送/接收方法及其系统
US20120250695A1 (en) * 2011-03-31 2012-10-04 Nokia Siemens Networks Ethernet Solutions Ltd. Hitless node insertion for ethernet networks
CN102780569A (zh) * 2011-05-09 2012-11-14 中兴通讯股份有限公司 远程管理方法及网元设备
CN102308523A (zh) * 2011-07-27 2012-01-04 华为技术有限公司 数据通信网络配置方法、网关网元及数据通信系统
CN104639360A (zh) * 2013-11-14 2015-05-20 中兴通讯股份有限公司 一种控制网元设备加入网络的方法及网元设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3554008A4

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022507745A (ja) * 2018-11-21 2022-01-18 ホアウェイ・テクノロジーズ・カンパニー・リミテッド 通信方法および通信装置
JP7168286B2 (ja) 2018-11-21 2022-11-09 ホアウェイ・テクノロジーズ・カンパニー・リミテッド 通信方法および通信装置
JP2022512470A (ja) * 2018-12-10 2022-02-04 華為技術有限公司 通信方法および装置
JP7191228B2 (ja) 2018-12-10 2022-12-16 華為技術有限公司 通信方法および装置
US11804982B2 (en) 2018-12-10 2023-10-31 Huawei Technologies Co., Ltd. Communication method and apparatus

Also Published As

Publication number Publication date
EP3554008A4 (en) 2019-11-13
US11894970B2 (en) 2024-02-06
EP3902234B1 (en) 2024-03-20
CN113300876B (zh) 2022-09-02
JP6930801B2 (ja) 2021-09-01
EP3554008B1 (en) 2021-03-10
KR102342286B1 (ko) 2021-12-22
CN108243035A (zh) 2018-07-03
EP3902234A1 (en) 2021-10-27
CN113300876A (zh) 2021-08-24
JP2021168480A (ja) 2021-10-21
JP2020503765A (ja) 2020-01-30
CN108243035B (zh) 2021-04-09
EP3554008A1 (en) 2019-10-16
US20190319829A1 (en) 2019-10-17
ES2863776T3 (es) 2021-10-11
KR20190094463A (ko) 2019-08-13
JP7235397B2 (ja) 2023-03-08

Similar Documents

Publication Publication Date Title
WO2018120914A1 (zh) Dcn报文处理方法、网络设备和网络系统
US10616091B2 (en) Exploratory linktrace operations in a computer network
CA2493383C (en) Apparatus and method for a virtual hierarchial local area network
US8634308B2 (en) Path detection in trill networks
WO2019019906A1 (zh) 一种通信方法、设备及存储介质
US11563680B2 (en) Pseudo wire load sharing method and device
WO2004095158A2 (en) Embedded management channel for sonet path terminating equipment connectivity
US7945656B1 (en) Method for determining round trip times for devices with ICMP echo disable
WO2012062106A1 (zh) 线性保护组隧道复用方法和隧道尾节点
Cisco Configuring Frame Relay
Cisco Configuring Frame Relay
Cisco Configuring Frame Relay
Cisco Configuring Frame Relay
Cisco Configuring Frame Relay
Cisco Configuring Frame Relay
Cisco Configuring Frame Relay
Cisco Configuring Frame Relay
Cisco Configuring Frame Relay
WO2023046006A1 (zh) 网络传输方法和设备
CN115633279B (zh) Osu交叉设备及基于osu交叉设备的数据传输方法
JP2010136008A (ja) ネットワーク構成装置、管理装置および通信ネットワーク
Yongjun et al. Service Adaptation and Label Forwarding Mechanism for MPLS-TP

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17886402; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2019534840; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2017886402; Country of ref document: EP; Effective date: 20190709)
ENP Entry into the national phase (Ref document number: 20197021396; Country of ref document: KR; Kind code of ref document: A)