WO2020010875A1 - Link capacity adjustment method and device - Google Patents

Link capacity adjustment method and device

Info

Publication number
WO2020010875A1
WO2020010875A1 (PCT/CN2019/079127)
Authority
WO
WIPO (PCT)
Prior art keywords
delay
physical interface
flexible ethernet
ethernet group
physical
Prior art date
Application number
PCT/CN2019/079127
Other languages
English (en)
French (fr)
Inventor
占治国
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Priority to EP19835048.0A (EP3823212A4)
Priority to US17/257,639 (US11546221B2)
Publication of WO2020010875A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J3/00 Time-division multiplex systems
    • H04J3/16 Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H04J3/1605 Fixed allocated frame structures
    • H04J3/1652 Optical Transport Network [OTN]
    • H04J3/1658 Optical Transport Network [OTN] carrying packets or ATM cells
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0893 Assignment of logical groups to network elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0805 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability
    • H04L43/0817 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0852 Delays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0852 Delays
    • H04L43/0864 Round trip delays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876 Network utilisation, e.g. volume of load or congestion level
    • H04L43/0882 Utilisation of link capacity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L7/00 Arrangements for synchronising receiver with transmitter
    • H04L7/02 Speed or phase control by the received code signals, the signals containing no special synchronisation information
    • H04L7/033 Speed or phase control by the received code signals, the signals containing no special synchronisation information using the transitions of the received signal to control the phase of the synchronising-signal-generating means, e.g. using a phase-locked loop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J2203/00 Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J2203/0001 Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J2203/0073 Services, e.g. multimedia, GOS, QOS
    • H04J2203/0082 Interaction of SDH with non-ATM protocols
    • H04J2203/0085 Support of Ethernet
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J2203/00 Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J2203/0001 Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J2203/0089 Multiplexing, e.g. coding, scrambling, SONET

Definitions

  • Embodiments of the present disclosure relate to the field of communications technologies, and in particular, to a method and device for adjusting link capacity.
  • Flexible Ethernet (Flex Ethernet, FlexE) is an interface technology by which a bearer network implements isolated service bearing and network slicing.
  • A flexible Ethernet group is a group obtained by bonding one or more Ethernet physical interfaces together.
  • The embodiments of the present disclosure provide a method and device for adjusting link capacity, which can re-align the clock offsets of all physical interfaces in a flexible Ethernet group and prevent data loss on the physical interfaces caused by clock skew during link capacity adjustment.
  • an embodiment of the present disclosure provides a method for adjusting link capacity, including:
  • An embodiment of the present disclosure further provides a node device, including:
  • an acquisition module, configured to obtain the delay of a physical interface involved in the link capacity adjustment of a flexible Ethernet group; and
  • a processing module, configured to align the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay.
  • Because the delay of the physical interface involved in the link capacity adjustment of the flexible Ethernet group is obtained and the clock offsets of all physical interfaces in the group are aligned according to the obtained delay, data loss on the physical interfaces in the Ethernet group caused by clock skew during link capacity adjustment is prevented.
  • FIG. 1 is a schematic flowchart of a link capacity adjustment method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of a node device according to an embodiment of the present disclosure.
  • Flexible Ethernet technology provides a general mechanism for carrying a series of services with different Media Access Control (MAC) rates: a single service with a relatively large MAC rate, or an aggregate of multiple services with relatively small MAC rates, no longer being limited to services with a single MAC rate.
  • Flexible Ethernet has a shim layer between the MAC layer and the Physical Coding Sublayer (PCS).
  • The function of the shim layer is to construct a calendar.
  • The calendar consists of 20*n data blocks; each data block is 66 bits in size and represents a 5G time slot, and n is the number of bonded physical interfaces (PHYs).
  • The main content of FlexE 2.0 targets 200G/400G PHYs, which can be bonded similarly to the 100G PHYs of the 1.0 standard.
  • 200G/400G introduces a logical concept, the instance: a 200G PHY is decomposed into 2 instances and a 400G PHY into 4 instances, each instance being essentially equivalent to a 100G PHY.
  • In the FlexE 1.0 overhead fields, PHY Map is changed to FlexE Map, PHY Number is changed to Instance Number, and an indication of clock synchronization is added.
  • An embodiment of the present disclosure provides a method for adjusting link capacity. As shown in FIG. 1, the method includes:
  • Step 101: the node device obtains the delay of a physical interface involved in the link capacity adjustment of the flexible Ethernet group.
  • Step 101 occurs when the node device is about to adjust the link capacity but the adjustment has not yet actually started; the clock offsets of the physical interfaces need to be aligned before the link capacity adjustment, as the preparatory work before the adjustment actually begins.
  • Step 102: align the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay.
  • A node device obtains the delay of a physical interface involved in the link capacity adjustment of a flexible Ethernet group, and aligns the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay. It can be seen from the embodiments of the present disclosure that, because the delay of the physical interface involved in the link capacity adjustment is obtained and the clock offsets of all physical interfaces in the group are aligned according to it, data loss on the physical interfaces in the Ethernet group caused by clock skew during link capacity adjustment is prevented.
  • Optionally, the physical interface involved in the link capacity adjustment of the flexible Ethernet group is an added physical interface. After aligning the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay, the method further includes:
  • Step 103: if the alignment succeeds, add the physical interface involved in the link capacity adjustment of the flexible Ethernet group (if alignment cannot be achieved, the physical interface cannot be added).
  • Step 104: send regular overhead to the peer node device in a new overhead period.
  • Optionally, before the node device obtains the delay of the physical interface involved in the link capacity adjustment of the flexible Ethernet group, the method further includes:
  • Step 105: send overhead carrying a calendar request to the peer node device, so that the peer node device adjusts the link capacity of the flexible Ethernet group and aligns the clock offsets.
  • Before sending the regular overhead to the peer node device in the new overhead period, the method further includes:
  • Step 106: receive overhead carrying a calendar acknowledgement from the peer node device, to confirm that the peer node has aligned the clock offsets.
  • When the node device receives the overhead carrying the calendar acknowledgement from the peer node device, it confirms that the peer node has aligned the clock offsets and is therefore ready to perform the link adjustment; the node device then sends regular overhead to the peer node device in the new overhead period to complete the final link adjustment.
  • Optionally, the physical interface involved in the link capacity adjustment of the flexible Ethernet group is a deleted physical interface. After aligning the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay, the method further includes:
  • Step 107: if the alignment succeeds, delete the physical interface involved in the link capacity adjustment of the flexible Ethernet group.
  • Step 108: stop the services on the physical interface involved in the link capacity adjustment, and delete that physical interface in a new overhead period.
  • Optionally, the number of physical interfaces involved in the link capacity adjustment of the flexible Ethernet group is one and it is an added physical interface, and aligning the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay includes:
  • Step 102a: when the obtained delay is greater than the delay of any original physical interface in the flexible Ethernet group, pause the read clocks of the original physical interfaces in the group for N1 beats according to the requirements of the service to be processed.
  • N1 is equal to the difference between the delay of the newly added physical interface and the shortest delay among the original physical interfaces in the flexible Ethernet group.
  • Step 102b: buffer the data streams transmitted by the original physical interfaces in the flexible Ethernet group, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of all physical interfaces in the flexible Ethernet group are aligned.
  • When the obtained delay is less than the delay of any original physical interface in the flexible Ethernet group, the method further includes:
  • Step 102c: pause, according to the requirements of the service to be processed, the read clock of the newly added physical interface and the read clocks of the original physical interfaces other than the one with the longest delay for N2 beats.
  • N2 is equal to the difference between the longest delay among the original physical interfaces in the flexible Ethernet group and the delay of the newly added physical interface.
  • Step 102d: buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N2 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of all physical interfaces in the flexible Ethernet group are aligned.
  • When the obtained delay is between the delays of any two original physical interfaces in the flexible Ethernet group, the method further includes:
  • Step 102e: pause, according to the requirements of the service to be processed, the read clock of the newly added physical interface and the read clocks of the original physical interfaces other than the one with the longest delay for N3 beats.
  • N3 is equal to the difference between the longest delay and the shortest delay among the original physical interfaces in the flexible Ethernet group.
  • Step 102f: buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N3 beats, and insert idle blocks into the buffered data streams so that the clock offsets of all physical interfaces in the flexible Ethernet group are aligned.
  • Inserting idle blocks into the buffered data stream includes: inserting idle blocks between the start data block and the end data block in the buffered data stream; or inserting idle blocks between the end data block and the next start data block in the buffered data stream.
  • The data streams buffered in step 102b refer to the data streams transmitted by all physical interfaces in the flexible Ethernet group.
  • The data streams buffered in step 102d refer to the data streams transmitted by the physical interfaces whose read clocks are paused for N2 beats, that is, the newly added physical interface and the original physical interfaces other than the one with the longest delay.
  • The data streams buffered in step 102f refer to the data streams transmitted by the physical interfaces whose read clocks are paused for N3 beats (that is, the newly added physical interface and the original physical interfaces other than the one with the longest delay).
  • Optionally, the number of physical interfaces involved in the link capacity adjustment of the flexible Ethernet group is one and it is a deleted physical interface, and aligning the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay includes:
  • Step 102f: when the obtained delay is greater than the delay of any original physical interface in the flexible Ethernet group, pause the read clocks of the physical interfaces in the group other than the physical interface with the second-longest delay and the deleted physical interface for N4 beats.
  • N4 is equal to the difference between the delay of the physical interface with the second-longest delay and the delay of the physical interface with the shortest delay in the flexible Ethernet group.
  • Step 102g: buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N4 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
  • When the obtained delay is less than the delay of any original physical interface in the flexible Ethernet group, the method further includes:
  • Step 102h: pause the read clocks of the physical interfaces in the flexible Ethernet group other than the physical interface with the longest delay and the deleted physical interface for N5 beats.
  • N5 is equal to the difference between the delay of the physical interface with the longest delay and the delay of the physical interface with the second-shortest delay in the flexible Ethernet group.
  • Step 102i: buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N5 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
  • When the obtained delay is between the delays of any two original physical interfaces in the flexible Ethernet group, the method further includes:
  • Step 102j: pause the read clocks of the physical interfaces in the flexible Ethernet group other than the physical interface with the longest delay and the deleted physical interface for N6 beats, where N6 is equal to the difference between the delay of the physical interface with the longest delay and the delay of the physical interface with the shortest delay in the flexible Ethernet group.
  • Step 102k: buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N6 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
  • Optionally, the number of physical interfaces involved in the link capacity adjustment of the flexible Ethernet group is N, where N is an integer greater than 1, and aligning the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delays includes:
  • Step 102l: align the clock offsets of all physical interfaces in the flexible Ethernet group according to the i-th delay among the N delays corresponding to the N physical interfaces involved in the link capacity adjustment.
  • Here, i = 1, 2, ..., N.
  • Assume the physical interfaces involved in the link capacity adjustment of the flexible Ethernet group are added physical interfaces, the original flexible Ethernet group includes physical interface 1, physical interface 2 and physical interface 3, and the obtained delays are the delay of physical interface 4 and the delay of physical interface 5. Aligning the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delays may then be: first aligning the clock offsets of all physical interfaces in the flexible Ethernet group consisting of physical interface 1, physical interface 2 and physical interface 3 according to the obtained delay of physical interface 4, and then aligning the clock offsets of all physical interfaces in the flexible Ethernet group consisting of physical interface 1, physical interface 2, physical interface 3 and physical interface 4 according to the obtained delay of physical interface 5.
  • Assume the physical interfaces involved in the link capacity adjustment of the flexible Ethernet group are deleted physical interfaces, the original flexible Ethernet group includes physical interface 1, physical interface 2, physical interface 3, physical interface 4 and physical interface 5, and the obtained delays are the delay of physical interface 4 and the delay of physical interface 5. Aligning the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delays may then be: first aligning the clock offsets of all physical interfaces in the flexible Ethernet group consisting of physical interface 1, physical interface 2, physical interface 3 and physical interface 5 according to the obtained delay of physical interface 4, and then aligning the clock offsets of all physical interfaces in the flexible Ethernet group consisting of physical interface 1, physical interface 2 and physical interface 3 according to the obtained delay of physical interface 5.
  • An embodiment of the present disclosure further provides a computer-readable storage medium that stores computer-executable instructions, and the computer-executable instructions are used to perform any one of the foregoing link capacity adjustment methods.
  • The present disclosure also provides a method for adjusting physical interface link capacity. When a physical interface is added, the method includes:
  • Step 1: when the calendar is being configured, both the local end and the peer end start the new PHY.
  • Step 2: when sending overhead on the original PHYs, the local end simultaneously sends an overhead frame on the new PHY as a calendar request (Calendar Request, CR).
  • The overhead frame of the new PHY is defined as follows: a FlexE overhead start flag of 0x4B, an O code of 0x5, an OMF multiframe indication, the FlexE Group Number set to the corresponding FlexE Group number, the CR field set to 1, and the other fields set to 0.
  • Step 3: after receiving the overhead frame (CR) on the new PHY, the peer node needs to align the clock offsets of the multiple PHYs involved:
  • If the new PHY has the longest delay, the read clocks of all the original PHYs are paused for N1 beats, where N1 is equal to the difference between the delay of the newly added PHY and the shortest delay among the original PHYs.
  • The data streams transmitted by the PHYs whose read clocks are paused for N1 beats are buffered, idle blocks are inserted into the buffered data streams, and the rate is adjusted so that the clock offsets of all PHYs are aligned.
  • If the new PHY has the shortest delay, the read clock of the new PHY and the read clocks of the original PHYs other than the one with the longest delay are paused for N2 beats, where N2 is equal to the difference between the longest delay among the original PHYs and the delay of the newly added PHY.
  • N has a maximum threshold; if this threshold is exceeded, an alarm is raised to indicate that alignment cannot be achieved.
  • The data streams transmitted by the PHYs whose read clocks are paused for N2 beats are buffered, idle blocks are inserted into the buffered data streams, and the rate is adjusted so that the clock offsets of all PHYs are aligned.
  • If the delay of the new PHY is between the delays of the original PHYs, the read clock of the new PHY is paused for N3 beats; at the same time, the read clocks of the original PHYs other than the one with the longest delay are also paused for N3 beats, where N3 is equal to the difference between the longest delay and the shortest delay among the original PHYs; the buffered data streams are handled in the same way so that the clock offsets of all PHYs are aligned.
  • Option 1: insert idle blocks between the S block and the T block. The advantage of this option is a smaller delay; the disadvantage is that both the sending and receiving nodes must be able to handle data in this form.
  • Option 2: insert idle blocks between the T block and the S block.
  • The advantage of this option is that the data format is a standard format supported by both the sending and receiving nodes; the disadvantage is that the delay is slightly larger than with option 1.
  • Step 4: if the peer node can switch, it sends an overhead frame on the new PHY as a calendar acknowledgement (Calendar Acknowledge, CA) while sending overhead frames on the original PHYs.
  • Step 5: after receiving the overhead frame (CA) on the new PHY, the local node also performs alignment processing because multiple PHYs are involved; the processing is the same as that of the peer node and is not repeated here.
  • The CA overhead frame of the new PHY is defined in the same way as the CR overhead frame, except that the CA field is set to 1.
  • Step 6: the local node and the peer node formally switch at the beginning of a new overhead period and carry service data on the new PHY.
  • The overhead frames of all PHYs are filled according to the existing standard.
  • The present disclosure also provides a link capacity adjustment method. When a physical interface is deleted, the method includes:
  • Step 1: the local node identifies the PHY it intends to delete and calculates how the peer node would perform alignment if this PHY were deleted.
  • If the deleted PHY has the longest delay, the read clocks of the physical interfaces in the flexible Ethernet group are paused for N4 beats, where N4 is equal to the difference between the delay of the physical interface with the second-longest delay and the delay of the physical interface with the shortest delay in the flexible Ethernet group;
  • the data streams transmitted by the physical interfaces whose read clocks are paused for N4 beats are buffered, idle blocks are inserted into the buffered data streams, and the rate is adjusted so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
  • If the deleted PHY has the shortest delay, the read clocks of the physical interfaces in the flexible Ethernet group are paused for N5 beats, where N5 is equal to the difference between the delay of the physical interface with the longest delay and the delay of the physical interface with the second-shortest delay in the flexible Ethernet group;
  • the data streams transmitted by the physical interfaces whose read clocks are paused for N5 beats are buffered, idle blocks are inserted into the buffered data streams, and the rate is adjusted so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
  • If the delay of the deleted PHY is between the delays of the other PHYs, the read clocks of the physical interfaces of the flexible Ethernet group are paused for N6 beats, where N6 is equal to the difference between the delay of the physical interface with the longest delay and the delay of the physical interface with the shortest delay in the flexible Ethernet group;
  • the data streams transmitted by the physical interfaces whose read clocks are paused for N6 beats are buffered, idle blocks are inserted into the buffered data streams, and the rate is adjusted so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
  • Step 2: the peer node identifies the PHY it intends to delete and calculates how the local node would perform alignment if this PHY were deleted; the specific process is the same as the alignment processing of the peer node calculated by the local node and is not repeated here.
  • Step 3: if the local node and the peer node can switch, the deletion of the PHY formally starts; otherwise, the operation of deleting the PHY is terminated.
  • Step 4: to formally start deleting the PHY, stop the services on the PHY to be deleted, formally switch at the beginning of a new overhead period, use the new buffering scheme to handle the alignment of each PHY, insert idle blocks into the normal code stream, adjust the rate, and fill the overhead frames of all PHYs according to the existing standard.
  • Step 1: both the local end and the peer end start the new PHY d.
  • Step 2: when sending overhead frames on PHYs a, b and c, the local end simultaneously sends an overhead frame on PHY d as the CR.
  • The overhead frame of PHY d is defined as follows: a FlexE overhead start flag of 0x4B, an O code of 0x5, an OMF multiframe indication, the FlexE Group Number set to the corresponding FlexE Group number, the CR field set to 1, and the other fields set to 0.
  • Step 3: after receiving the overhead frame (CR) of PHY d, the peer node performs alignment processing because multiple PHYs are involved. It is determined by calculation that PHY d has the longest delay. The read clocks of PHYs a, b and c are therefore all paused for N1 beats (N has a maximum threshold; if this threshold is exceeded, an alarm is raised to indicate that alignment cannot be achieved), where N1 is equal to the difference between the delay of the newly added PHY d and the shortest delay among the original PHYs; the data of PHYs a, b and c is buffered, idle blocks are inserted into the normal code stream and the rate is adjusted, finally achieving alignment of all PHYs. There are two options for inserting idle blocks:
  • Option 1: insert idle blocks between the S block and the T block.
  • Option 2: insert idle blocks between the T block and the S block.
  • Step 4: if the peer node can switch, it sends an overhead frame on PHY d as the CA while sending overhead frames on PHYs a, b and c.
  • The overhead frame of PHY d is defined as follows: a FlexE overhead start flag of 0x4B, an O code of 0x5, an OMF multiframe indication, the FlexE Group Number set to the corresponding FlexE Group number, the CA field set to 1, and the other fields set to 0.
  • Step 5: after receiving the overhead frame (CA) of PHY d, the local node also performs alignment processing because multiple PHYs are involved; the processing is the same as that of the peer node and is not repeated here.
  • Step 6: the local node and the peer node formally switch at the beginning of a new overhead period.
  • Service data is carried on PHY d, and the overhead frames of PHYs a, b, c and d are filled according to the existing standard.
  • Step 1: both the local end and the peer end start the new PHY d.
  • Step 2: when sending overhead frames on PHYs a, b and c, the local end simultaneously sends an overhead frame on PHY d as the CR (calendar request).
  • The overhead frame of PHY d is defined as follows: a FlexE overhead start flag of 0x4B, an O code of 0x5, an OMF multiframe indication, the FlexE Group Number set to the corresponding FlexE Group number, the CR field set to 1, and the other fields set to 0.
  • Step 3: after receiving the overhead frame (CR) of PHY d, the peer node performs deskew processing because multiple PHYs are involved. It is determined by calculation that PHY d has the shortest delay. The read clock of PHY d is paused for N2 beats (N has a maximum threshold; if this threshold is exceeded, an alarm is raised to indicate that alignment cannot be achieved); at the same time, the read clocks of the original PHYs other than the one with the longest delay are also paused for N2 beats, where N2 is equal to the difference between the longest delay among the original PHYs and the delay of the newly added PHY. The data streams transmitted by the PHYs whose read clocks are paused for N2 beats are buffered, idle blocks are inserted into the buffered data streams, and the rate is adjusted so that the clock offsets of all PHYs are aligned.
  • Step 4: if the peer node can switch, it sends an overhead frame on PHY d as the CA while sending overhead frames on PHYs a, b and c.
  • The overhead of PHY d is defined as follows: a FlexE overhead start flag of 0x4B, an O code of 0x5, an OMF multiframe indication, the FlexE Group Number set to the corresponding FlexE Group number, the CA field set to 1, and the other fields set to 0.
  • Step 5: after receiving the overhead frame (CA) of PHY d, the local node performs alignment processing because multiple PHYs are involved. It is determined by calculation that PHY d has the shortest delay; the read clock of PHY d and the read clocks of the original PHYs other than the one with the longest delay are paused for N2 beats, where N2 is equal to the difference between the longest delay among the original PHYs and the delay of the newly added PHY; the data streams transmitted by these PHYs are buffered, idle blocks are inserted into the buffered data streams, and the rate is adjusted so that the clock offsets of all PHYs are aligned.
  • Step 6: the local node and the peer node formally switch at the beginning of a new overhead period.
  • Service data is carried on PHY d, and the overhead frames of PHYs a, b, c and d are filled according to the existing standard.
  • Step 1: both the local end and the peer end start the new PHY d.
  • Step 2: when sending overhead frames on PHYs a, b and c, the local end simultaneously sends an overhead frame on PHY d as the CR.
  • The overhead frame of PHY d is defined as follows: a FlexE overhead start flag of 0x4B, an O code of 0x5, an OMF multiframe indication, the FlexE Group Number set to the corresponding FlexE Group number, the CR field set to 1, and the other fields set to 0.
  • Step 3: after receiving the overhead frame (CR) of PHY d, the peer node performs alignment processing because multiple PHYs are involved. It is determined by calculation that the delay of PHY d is between the delays of PHYs a, b and c. A buffer is therefore provided for PHY d, and the read clock of PHY d is paused for N3 beats; at the same time, the read clocks of the original PHYs other than the one with the longest delay are also paused for N3 beats, where N3 is equal to the difference between the longest delay and the shortest delay among the original PHYs. The data streams transmitted by the PHYs whose read clocks are paused for N3 beats are buffered, idle blocks are inserted into the buffered data streams, and the rate is adjusted so that the clock offsets of all PHYs are aligned.
  • Step 4: if the peer node can switch, it sends an overhead frame on PHY d as the CA while sending overhead frames on PHYs a, b and c.
  • The overhead frame of PHY d is defined as follows: a FlexE overhead start flag of 0x4B, an O code of 0x5, an OMF multiframe indication, the FlexE Group Number set to the corresponding FlexE Group number, the CA field set to 1, and the other fields set to 0.
  • Step 5: after receiving the overhead frame (CA) of PHY d, the local node performs alignment processing because multiple PHYs are involved. It is determined by calculation that the delay of PHY d is between the delays of PHYs a, b and c; a buffer for PHY d is provided, the read clock of PHY d is paused for N3 beats, and at the same time the read clocks of the original PHYs other than the one with the longest delay are also paused for N3 beats, where N3 is equal to the difference between the longest delay and the shortest delay among the original PHYs. The data streams transmitted by these PHYs are buffered, idle blocks are inserted into the buffered data streams, and the rate is adjusted so that the clock offsets of all PHYs are aligned.
  • Step 6: the local node and the peer node formally switch at the beginning of a new overhead period.
  • Service data is carried on PHY d, and the overhead frames of PHYs a, b, c and d are filled according to the existing standard.
  • Step 1: the central controller notifies the local end and the peer end of the intention to delete PHY d.
  • Step 2: after the local node learns that PHY d is to be deleted, it calculates how the peer node would perform alignment if this PHY were deleted. It is determined by calculation that PHY d has the shortest delay; with the new buffering scheme, the alignment of all PHYs (excluding the PHY to be deleted) is completed after the data of the PHY with the longest delay arrives.
  • Step 3: after the peer node learns that PHY d is to be deleted, it calculates how the local node would perform alignment if this PHY were deleted. It is determined by calculation that PHY d has the shortest delay; with the new buffering scheme, the alignment of all PHYs (excluding the PHY to be deleted) is completed after the data of the PHY with the longest delay arrives.
  • Step 5: the central controller receives a reply indicating that switching is possible and notifies the local end and the peer end to formally delete the PHY; otherwise, it does not send the deletion notification to the local end and the peer end.
  • Step 6: after receiving the notification to delete the PHY, the local and peer nodes stop the services on that PHY, formally switch at the beginning of the new overhead period, and adopt the new buffering scheme to handle the alignment of PHYs a, b and c;
  • idle blocks are inserted into the normal code stream, the rate is adjusted, and the overhead frames of PHYs a, b and c are filled according to the existing standard.
  • An embodiment of the present disclosure further provides a node device.
  • the node device 2 includes:
  • The acquisition module 21 is configured to obtain the delay of a physical interface involved in the link capacity adjustment of a flexible Ethernet group.
  • The processing module 22 is configured to align the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay.
  • Optionally, the physical interface involved in the link capacity adjustment of the flexible Ethernet group is an added physical interface;
  • the processing module 22 is further configured to add the physical interface involved in the link capacity adjustment of the flexible Ethernet group if the alignment succeeds;
  • the processing module 22 is further configured to send regular overhead to the peer node device in a new overhead period.
  • Optionally, the node device further includes:
  • a sending module 23, configured to send overhead carrying a calendar request to the peer node device, so that the peer node device adjusts the link capacity of the flexible Ethernet group and aligns the clock offsets; and
  • a receiving module 24, configured to receive overhead carrying a calendar acknowledgement from the peer node device, to confirm that the peer node has aligned the clock offsets.
  • Optionally, the physical interface involved in the link capacity adjustment of the flexible Ethernet group is a deleted physical interface;
  • the processing module 22 is further configured to delete the physical interface involved in the link capacity adjustment of the flexible Ethernet group if the alignment succeeds;
  • the processing module 22 is further configured to stop the services on the physical interface involved in the link capacity adjustment and to delete that physical interface in a new overhead period.
  • Optionally, the number of physical interfaces involved in the link capacity adjustment of the flexible Ethernet group is one and it is an added physical interface, and the processing module 22 is specifically configured to:
  • pause the read clocks of the original physical interfaces in the flexible Ethernet group for N1 beats according to the requirements of the service to be processed, where N1 is equal to the difference between the delay of the newly added physical interface and the shortest delay among the original physical interfaces in the flexible Ethernet group; and
  • buffer the data streams transmitted by the original physical interfaces in the flexible Ethernet group, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of all physical interfaces in the flexible Ethernet group are aligned.
  • Optionally, the processing module 22 is specifically configured to:
  • pause the read clock of the newly added physical interface and the read clocks of the original physical interfaces other than the one with the longest delay for N2 beats, where N2 is equal to the difference between the longest delay among the original physical interfaces in the flexible Ethernet group and the delay of the newly added physical interface; and
  • buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N2 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of all physical interfaces in the flexible Ethernet group are aligned.
  • Optionally, the processing module 22 is specifically configured to:
  • pause the read clock of the newly added physical interface and the read clocks of the original physical interfaces other than the one with the longest delay for N3 beats, where N3 is equal to the difference between the longest delay and the shortest delay among the original physical interfaces in the flexible Ethernet group; and
  • buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N3 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of all physical interfaces in the flexible Ethernet group are aligned.
  • Optionally, the number of physical interfaces involved in the link capacity adjustment of the flexible Ethernet group is one and it is a deleted physical interface, and the processing module 22 is specifically configured to:
  • pause the read clocks of the physical interfaces in the flexible Ethernet group other than the physical interface with the second-longest delay and the deleted physical interface for N4 beats, where N4 is equal to the difference between the delay of the physical interface with the second-longest delay and the delay of the physical interface with the shortest delay in the flexible Ethernet group; and
  • buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N4 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
  • Optionally, the processing module 22 is further specifically configured to:
  • pause the read clocks of the physical interfaces in the flexible Ethernet group other than the physical interface with the longest delay and the deleted physical interface for N5 beats, where N5 is equal to the difference between the delay of the physical interface with the longest delay and the delay of the physical interface with the second-shortest delay in the flexible Ethernet group; and
  • buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N5 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
  • Optionally, the processing module 22 is further specifically configured to:
  • pause the read clocks of the physical interfaces in the flexible Ethernet group other than the physical interface with the longest delay and the deleted physical interface for N6 beats, where N6 is equal to the difference between the delay of the physical interface with the longest delay and the delay of the physical interface with the shortest delay in the flexible Ethernet group; and
  • buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N6 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
  • Optionally, the number of physical interfaces involved in the link capacity adjustment of the flexible Ethernet group is N, where N is an integer greater than 1, and the processing module 22 is specifically configured to align the clock offsets of all physical interfaces in the flexible Ethernet group according to the i-th delay among the N delays corresponding to the N physical interfaces, where i = 1, 2, ..., N.
  • The node device provided in the embodiments of the present disclosure obtains the delay of a physical interface involved in the link capacity adjustment of a flexible Ethernet group and aligns the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay. It can be seen from the embodiments of the present disclosure that, because the delay of the physical interface involved in the link capacity adjustment is obtained and the clock offsets of all physical interfaces in the group are aligned according to it, data loss on the physical interfaces in the Ethernet group caused by clock skew during link capacity adjustment is prevented.
  • The acquisition module 21, the processing module 22, the sending module 23 and the receiving module 24 may each be implemented by a central processing unit (CPU), a microprocessor (Micro Processor Unit, MPU), a digital signal processor (DSP) or a field-programmable gate array (FPGA) located in the node device.
  • An embodiment of the present disclosure further provides an apparatus for adjusting link capacity, including a memory and a processor, where the memory stores the following instructions that can be executed by the processor:
  • Optionally, the physical interface involved in the link capacity adjustment of the flexible Ethernet group is an added physical interface, and the following instructions that can be executed by the processor are also stored in the memory:
  • The memory further stores the following instructions that can be executed by the processor:
  • send overhead carrying a calendar request to the peer node device, so that the peer node device adjusts the link capacity of the flexible Ethernet group and aligns the clock offsets.
  • receive overhead carrying a calendar acknowledgement from the peer node device, to confirm that the peer node has aligned the clock offsets.
  • Optionally, the physical interface involved in the link capacity adjustment of the flexible Ethernet group is a deleted physical interface, and the following instructions that can be executed by the processor are also stored in the memory:
  • Optionally, the number of physical interfaces involved in the link capacity adjustment of the flexible Ethernet group is one and it is an added physical interface, and the following instructions that can be executed by the processor are specifically stored in the memory:
  • pause the read clocks of the original physical interfaces in the flexible Ethernet group for N1 beats according to the requirements of the service to be processed, where N1 is equal to the difference between the delay of the newly added physical interface and the shortest delay among the original physical interfaces in the flexible Ethernet group; and
  • buffer the data streams transmitted by the original physical interfaces in the flexible Ethernet group, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of all physical interfaces in the flexible Ethernet group are aligned.
  • The memory further stores the following instructions that can be executed by the processor:
  • pause the read clock of the newly added physical interface and the read clocks of the original physical interfaces other than the one with the longest delay for N2 beats, where N2 is equal to the difference between the longest delay among the original physical interfaces in the flexible Ethernet group and the delay of the newly added physical interface; and
  • buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N2 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of all physical interfaces in the flexible Ethernet group are aligned.
  • The memory further stores the following instructions that can be executed by the processor:
  • pause the read clock of the newly added physical interface and the read clocks of the original physical interfaces other than the one with the longest delay for N3 beats, where N3 is equal to the difference between the longest delay and the shortest delay among the original physical interfaces in the flexible Ethernet group; and
  • buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N3 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of all physical interfaces in the flexible Ethernet group are aligned.
  • The memory further stores the following instructions that can be executed by the processor:
  • Optionally, the number of physical interfaces involved in the link capacity adjustment of the flexible Ethernet group is one and it is a deleted physical interface, and the following instructions that can be executed by the processor are specifically stored in the memory:
  • pause the read clocks of the physical interfaces in the flexible Ethernet group other than the physical interface with the second-longest delay and the deleted physical interface for N4 beats, where N4 is equal to the difference between the delay of the physical interface with the second-longest delay and the delay of the physical interface with the shortest delay in the flexible Ethernet group; and
  • buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N4 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
  • The memory further stores the following instructions that can be executed by the processor:
  • pause the read clocks of the physical interfaces in the flexible Ethernet group other than the physical interface with the longest delay and the deleted physical interface for N5 beats, where N5 is equal to the difference between the delay of the physical interface with the longest delay and the delay of the physical interface with the second-shortest delay in the flexible Ethernet group; and
  • buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N5 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
  • The memory further stores the following instructions that can be executed by the processor:
  • pause the read clocks of the physical interfaces in the flexible Ethernet group other than the physical interface with the longest delay and the deleted physical interface for N6 beats, where N6 is equal to the difference between the delay of the physical interface with the longest delay and the delay of the physical interface with the shortest delay in the flexible Ethernet group; and
  • buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N6 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
  • Optionally, the number of physical interfaces involved in the link capacity adjustment of the flexible Ethernet group is N, where N is an integer greater than 1, and the following instructions that can be executed by the processor are also specifically stored in the memory: align the clock offsets of all physical interfaces in the flexible Ethernet group according to the i-th delay among the N delays corresponding to the N physical interfaces, where i = 1, 2, ..., N.
  • the present disclosure is applicable to the field of communication technology, and is used to prevent data loss due to clock skew of physical interfaces in an Ethernet group when link capacity is adjusted.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Disclosed herein are a method and device for adjusting link capacity, the method including: a node device obtains the delay of a physical interface involved in the link capacity adjustment of a flexible Ethernet group, and aligns the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay. It can be seen from the embodiments of the present disclosure that, because the clock offsets of all physical interfaces in the flexible Ethernet group are aligned, data loss on the physical interfaces in the Ethernet group caused by clock skew during link capacity adjustment is prevented. (FIG. 1)

Description

Link capacity adjustment method and device
Technical Field
Embodiments of the present disclosure relate to the field of communications technologies, and in particular to a method and device for adjusting link capacity.
Background
Flexible Ethernet (Flex Ethernet, FlexE) is an interface technology by which a bearer network implements isolated service bearing and network slicing. A flexible Ethernet group is a group obtained by bonding one or more Ethernet physical interfaces together.
When the link capacity of a flexible Ethernet group is adjusted, calendar configuration is unavoidable. During calendar configuration, problems such as a wrong service flow direction, a wrong node switching order, node time-slot configuration that does not match the actual application, and clock skew among the multiple physical interfaces of the flexible Ethernet group whose link capacity is adjusted will all cause loss of the data transmitted on those physical interfaces.
Therefore, to achieve lossless calendar configuration, these causes of data loss need to be resolved. For the wrong service flow direction, the wrong node switching order and the mismatched node time-slot configuration, corresponding solutions exist in the related art. For the clock skew among the multiple physical interfaces of the flexible Ethernet group whose link capacity is adjusted, solving the problem means re-aligning the clock offsets of all physical interfaces in the group, that is, re-adjusting the arrival order of the data streams transmitted by all physical interfaces to meet the service requirements; the related art lacks a corresponding technical means for this.
Summary
To solve the above technical problem, embodiments of the present disclosure provide a method and device for adjusting link capacity, which can re-align the clock offsets of all physical interfaces in a flexible Ethernet group and prevent data loss on the physical interfaces caused by clock skew during link capacity adjustment.
To achieve the purpose of the present disclosure, an embodiment of the present disclosure provides a link capacity adjustment method, including:
a node device obtains the delay of a physical interface involved in the link capacity adjustment of a flexible Ethernet group; and
the clock offsets of all physical interfaces in the flexible Ethernet group are aligned according to the obtained delay.
An embodiment of the present disclosure further provides a node device, including:
an acquisition module, configured to obtain the delay of a physical interface involved in the link capacity adjustment of a flexible Ethernet group; and
a processing module, configured to align the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay.
Compared with the related art, because the delay of the physical interface involved in the link capacity adjustment of the flexible Ethernet group is obtained and the clock offsets of all physical interfaces in the flexible Ethernet group are aligned according to the obtained delay, data loss on the physical interfaces in the Ethernet group caused by clock skew during link capacity adjustment is prevented.
Other features and advantages of the present disclosure will be set forth in the following description and will in part become apparent from the description or be understood by implementing the present disclosure. The purposes and other advantages of the present disclosure can be realized and obtained through the structures particularly pointed out in the description, the claims and the drawings.
Brief Description of the Drawings
The drawings are used to provide a further understanding of the technical solutions of the present disclosure and constitute a part of the description. Together with the embodiments of the present application, they are used to explain the technical solutions of the present disclosure and do not constitute a limitation on them.
FIG. 1 is a schematic flowchart of a link capacity adjustment method according to an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a node device according to an embodiment of the present disclosure.
Detailed Description
To make the purposes, technical solutions and advantages of the present disclosure clearer, the embodiments of the present disclosure are described in detail below with reference to the drawings. It should be noted that, where there is no conflict, the embodiments of the present application and the features in the embodiments may be arbitrarily combined with each other.
The steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one given here.
Before describing the data processing method provided by the embodiments of the present disclosure, some related technologies are explained first:
Flexible Ethernet technology provides a general mechanism for carrying a series of services with different Media Access Control (MAC) rates: a single service with a relatively large MAC rate, or an aggregate of multiple services with relatively small MAC rates, no longer being limited to services with a single MAC rate.
The structural difference between flexible Ethernet and traditional Ethernet is that flexible Ethernet has an additional shim layer between the MAC layer and the Physical Coding Sublayer (PCS). The function of the shim layer is to construct a calendar consisting of 20*n data blocks, where each data block is 66 bits in size and represents a 5G time slot, and n is the number of bonded physical interfaces (Physical Layer, PHY).
The first draft of FlexE 2.0 was released at the Q3 2017 meeting of the Optical Internetworking Forum (OIF); its main content targets 200G/400G PHYs, which can be bonded similarly to the 100G PHYs of the 1.0 standard. To reuse the original 100G PHY specification as much as possible, 200G/400G introduces a logical concept, the instance: a 200G PHY is decomposed into 2 instances and a 400G PHY into 4 instances, each instance being essentially equivalent to a 100G PHY. In the FlexE 1.0 overhead fields, PHY Map is changed to FlexE Map and PHY Number to Instance Number, and an indication of clock synchronization is added.
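For illustration only, the shim-layer arithmetic above can be expressed as the following minimal Python sketch; the helper names, the list-based group model and the example rates are assumptions, not part of the disclosure.

```
# Minimal sketch of the shim-layer calendar arithmetic described above. The helper
# names, the list-based group model and the example rates are illustrative only.

SLOTS_PER_100G_INSTANCE = 20   # 20 x 5G time slots per 100G-equivalent instance
SLOT_RATE_GBPS = 5             # each 66-bit calendar block represents a 5G slot

def instances_for_phy(phy_rate_gbps):
    """A 100G PHY is one instance; a 200G/400G PHY decomposes into 2/4 instances."""
    if phy_rate_gbps % 100 != 0:
        raise ValueError("PHY rate must be a multiple of 100G")
    return phy_rate_gbps // 100

def calendar_slots(phy_rates_gbps):
    """Calendar size is 20 * n, where n counts the bonded 100G-equivalent instances."""
    n = sum(instances_for_phy(rate) for rate in phy_rates_gbps)
    return SLOTS_PER_100G_INSTANCE * n

group = [100, 100, 400]                  # e.g. two 100G PHYs and one 400G PHY
slots = calendar_slots(group)
print(slots, slots * SLOT_RATE_GBPS)     # 120 slots, 600 Gbit/s of 5G time slots
```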
An embodiment of the present disclosure provides a link capacity adjustment method. As shown in FIG. 1, the method includes:
Step 101: the node device obtains the delay of a physical interface involved in the link capacity adjustment of a flexible Ethernet group.
Specifically, step 101 occurs when the node device is about to adjust the link capacity but the adjustment has not actually started yet; the clock offsets of the physical interfaces need to be aligned before the link capacity adjustment, as the preparatory work before the adjustment actually begins.
Step 102: align the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay.
In the link capacity adjustment method provided by this embodiment of the present disclosure, the node device obtains the delay of the physical interface involved in the link capacity adjustment of the flexible Ethernet group and aligns the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay. It can be seen from the embodiments of the present disclosure that, because the delay of the physical interface involved in the link capacity adjustment is obtained and the clock offsets of all physical interfaces in the group are aligned according to the obtained delay, data loss on the physical interfaces in the Ethernet group caused by clock skew during link capacity adjustment is prevented.
Optionally, the physical interface involved in the link capacity adjustment of the flexible Ethernet group is an added physical interface, and after the clock offsets of all physical interfaces in the flexible Ethernet group are aligned according to the obtained delay, the method further includes:
Step 103: if the alignment succeeds, add the physical interface involved in the link capacity adjustment of the flexible Ethernet group.
It should be noted that if alignment cannot be achieved, the physical interface involved in the link capacity adjustment cannot be added.
Step 104: send regular overhead to the peer node device in a new overhead period.
Optionally, before the node device obtains the delay of the physical interface involved in the link capacity adjustment of the flexible Ethernet group, the method further includes:
Step 105: send overhead carrying a calendar request to the peer node device, so that the peer node device adjusts the link capacity of the flexible Ethernet group and aligns the clock offsets.
Before sending the regular overhead to the peer node device in the new overhead period, the method further includes:
Step 106: receive overhead carrying a calendar acknowledgement from the peer node device, to confirm that the peer node has aligned the clock offsets.
It should be noted that, once the node device receives the overhead carrying the calendar acknowledgement from the peer node device, it has confirmed that the peer node has aligned the clock offsets and is therefore ready to perform the link adjustment; the node device then sends regular overhead to the peer node device in the new overhead period to perform the final link adjustment.
Optionally, the physical interface involved in the link capacity adjustment of the flexible Ethernet group is a deleted physical interface, and after the clock offsets of all physical interfaces in the flexible Ethernet group are aligned according to the obtained delay, the method further includes:
Step 107: if the alignment succeeds, delete the physical interface involved in the link capacity adjustment of the flexible Ethernet group.
Step 108: stop the services on the physical interface involved in the link capacity adjustment of the flexible Ethernet group, and delete that physical interface in a new overhead period.
Optionally, the number of physical interfaces involved in the link capacity adjustment of the flexible Ethernet group is one and it is an added physical interface, and aligning the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay includes:
Step 102a: when the obtained delay is greater than the delay of any original physical interface in the flexible Ethernet group, pause the read clocks of the original physical interfaces in the group for N1 beats according to the requirements of the service to be processed.
N1 is equal to the difference between the delay of the newly added physical interface and the shortest delay among the original physical interfaces in the flexible Ethernet group.
Step 102b: buffer the data streams transmitted by the original physical interfaces in the flexible Ethernet group, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of all physical interfaces in the flexible Ethernet group are aligned.
Optionally, when the obtained delay is less than the delay of any original physical interface in the flexible Ethernet group, the method further includes:
Step 102c: pause, according to the requirements of the service to be processed, the read clock of the newly added physical interface and the read clocks of the original physical interfaces other than the one with the longest delay for N2 beats.
N2 is equal to the difference between the longest delay among the original physical interfaces in the flexible Ethernet group and the delay of the newly added physical interface.
Step 102d: buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N2 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of all physical interfaces in the flexible Ethernet group are aligned.
Optionally, when the obtained delay is between the delays of any two original physical interfaces in the flexible Ethernet group, the method further includes:
Step 102e: pause, according to the requirements of the service to be processed, the read clock of the newly added physical interface and the read clocks of the original physical interfaces other than the one with the longest delay for N3 beats.
N3 is equal to the difference between the longest delay and the shortest delay among the original physical interfaces in the flexible Ethernet group.
Step 102f: buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N3 beats, and insert idle blocks into the buffered data streams so that the clock offsets of all physical interfaces in the flexible Ethernet group are aligned.
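The pause-beat selection of steps 102a to 102f can be summarised by the following sketch; the function name, the dictionary-based delay model and the explicit maximum-threshold check (the alarm threshold is described later in this text) are assumptions made for illustration only.

```
# Sketch of the pause-beat selection of steps 102a-102f above for adding one PHY.
# Delays and pauses are in read-clock beats; all names are illustrative.

def add_phy_pause_plan(original_delays, new_phy, new_delay, max_pause):
    """Return {phy: pause_beats} for the read clocks that must be paused."""
    longest = max(original_delays.values())
    shortest = min(original_delays.values())
    slowest = max(original_delays, key=original_delays.get)   # original PHY with the longest delay

    if new_delay > longest:
        # Steps 102a/102b: the new PHY is the slowest, pause every original PHY by N1.
        n1 = new_delay - shortest
        plan = {phy: n1 for phy in original_delays}
    elif new_delay < shortest:
        # Steps 102c/102d: the new PHY is the fastest, pause it and every original PHY
        # except the slowest one by N2.
        n2 = longest - new_delay
        plan = {phy: n2 for phy in original_delays if phy != slowest}
        plan[new_phy] = n2
    else:
        # Steps 102e/102f: the new PHY sits in between, pause it and every original PHY
        # except the slowest one by N3.
        n3 = longest - shortest
        plan = {phy: n3 for phy in original_delays if phy != slowest}
        plan[new_phy] = n3

    if any(beats > max_pause for beats in plan.values()):
        raise RuntimeError("alarm: pause exceeds the maximum threshold, alignment not achievable")
    return plan
```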
Optionally, inserting idle blocks into the buffered data stream includes:
inserting idle blocks between the start data block and the end data block in the buffered data stream;
or,
inserting idle blocks between the end data block and the next start data block in the buffered data stream.
It should be noted that the data streams buffered in step 102b refer to the data streams transmitted by all physical interfaces in the flexible Ethernet group; the data streams buffered in step 102d refer to the data streams transmitted by the physical interfaces whose read clocks are paused for N2 beats (that is, the newly added physical interface and the original physical interfaces other than the one with the longest delay); and the data streams buffered in step 102f refer to the data streams transmitted by the physical interfaces whose read clocks are paused for N3 beats (that is, the newly added physical interface and the original physical interfaces other than the one with the longest delay).
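As a purely illustrative sketch of the two insertion positions (it models the block stream abstractly and ignores the rate-adjustment bookkeeping), idle blocks can be placed either in the gap after an end block or inside a packet after the start block:

```
# Sketch of the two idle-block insertion positions described above, on an abstract
# 66-bit-block stream: "S" = start block, "D" = data block, "T" = end block.

def insert_idle_blocks(blocks, count, between_end_and_start=True):
    """Insert `count` idle blocks either between an end (T) block and the next start (S)
    block, or between a start (S) block and its end (T) block."""
    out = []
    remaining = count
    for i, blk in enumerate(blocks):
        out.append(blk)
        if remaining == 0:
            continue
        nxt = blocks[i + 1] if i + 1 < len(blocks) else None
        if between_end_and_start and blk == "T" and nxt == "S":
            out.extend(["IDLE"] * remaining)     # standard inter-packet position
            remaining = 0
        elif not between_end_and_start and blk == "S":
            out.extend(["IDLE"] * remaining)     # inside the packet, right after S
            remaining = 0
    return out

stream = ["S", "D", "D", "T", "S", "D", "T"]
print(insert_idle_blocks(stream, 2))         # idles land in the T -> S gap
print(insert_idle_blocks(stream, 2, False))  # idles land between S and T
```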
Optionally, the number of physical interfaces involved in the link capacity adjustment of the flexible Ethernet group is one and it is a deleted physical interface, and aligning the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delay includes:
Step 102f: when the obtained delay is greater than the delay of any original physical interface in the flexible Ethernet group, pause the read clocks of the physical interfaces in the group other than the physical interface with the second-longest delay and the deleted physical interface for N4 beats.
N4 is equal to the difference between the delay of the physical interface with the second-longest delay and the delay of the physical interface with the shortest delay in the flexible Ethernet group.
Step 102g: buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N4 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
Optionally, when the obtained delay is less than the delay of any original physical interface in the flexible Ethernet group, the method further includes:
Step 102h: pause the read clocks of the physical interfaces in the flexible Ethernet group other than the physical interface with the longest delay and the deleted physical interface for N5 beats.
N5 is equal to the difference between the delay of the physical interface with the longest delay and the delay of the physical interface with the second-shortest delay in the flexible Ethernet group.
Step 102i: buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N5 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
Optionally, when the obtained delay is between the delays of any two original physical interfaces in the flexible Ethernet group, the method further includes:
Step 102j: pause the read clocks of the physical interfaces in the flexible Ethernet group other than the physical interface with the longest delay and the deleted physical interface for N6 beats, where N6 is equal to the difference between the delay of the physical interface with the longest delay and the delay of the physical interface with the shortest delay in the flexible Ethernet group.
Step 102k: buffer the data streams transmitted by the physical interfaces whose read clocks are paused for N6 beats, insert idle blocks into the buffered data streams, and adjust the rate so that the clock offsets of the physical interfaces in the flexible Ethernet group other than the deleted physical interface are aligned.
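The corresponding pause-beat selection for the deletion case (steps 102f to 102k) can be sketched as follows; again, the function name and the dictionary-based delay model are illustrative assumptions only.

```
# Sketch of the pause-beat selection of steps 102f-102k above for deleting one PHY.
# Delays and pauses are in read-clock beats; all names are illustrative.

def delete_phy_pause_plan(delays, deleted):
    """Return {phy: pause_beats} for the remaining PHYs whose read clocks are paused."""
    ordered = sorted(delays.values())
    longest, second_longest = ordered[-1], ordered[-2]
    shortest, second_shortest = ordered[0], ordered[1]
    remaining = {p: d for p, d in delays.items() if p != deleted}
    slowest_left = max(remaining, key=remaining.get)   # longest delay among the kept PHYs

    if delays[deleted] >= longest:
        beats = second_longest - shortest              # N4
    elif delays[deleted] <= shortest:
        beats = longest - second_shortest              # N5
    else:
        beats = longest - shortest                     # N6

    # The kept PHY that is now the slowest is not paused; the others are, and their
    # streams are then buffered, idle blocks are inserted and the rate is adjusted.
    return {phy: beats for phy in remaining if phy != slowest_left}
```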
Optionally, the number of physical interfaces involved in the link capacity adjustment of the flexible Ethernet group is N, where N is an integer greater than 1, and aligning the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delays includes:
Step 102l: align the clock offsets of all physical interfaces in the flexible Ethernet group according to the i-th delay among the N delays corresponding to the N physical interfaces involved in the link capacity adjustment.
Here, i = 1, 2, ..., N.
Specifically, assume the physical interfaces involved in the link capacity adjustment of the flexible Ethernet group are added physical interfaces, the original flexible Ethernet group includes physical interface 1, physical interface 2 and physical interface 3, and the obtained delays are the delay of physical interface 4 and the delay of physical interface 5. Aligning the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delays may then be: first aligning the clock offsets of all physical interfaces in the flexible Ethernet group consisting of physical interface 1, physical interface 2 and physical interface 3 according to the obtained delay of physical interface 4, and then aligning the clock offsets of all physical interfaces in the flexible Ethernet group consisting of physical interface 1, physical interface 2, physical interface 3 and physical interface 4 according to the obtained delay of physical interface 5. Assume instead that the physical interfaces involved in the link capacity adjustment are deleted physical interfaces, the original flexible Ethernet group includes physical interface 1, physical interface 2, physical interface 3, physical interface 4 and physical interface 5, and the obtained delays are the delay of physical interface 4 and the delay of physical interface 5. Aligning the clock offsets of all physical interfaces in the flexible Ethernet group according to the obtained delays may then be: first aligning the clock offsets of all physical interfaces in the flexible Ethernet group consisting of physical interface 1, physical interface 2, physical interface 3 and physical interface 5 according to the obtained delay of physical interface 4, and then aligning the clock offsets of all physical interfaces in the flexible Ethernet group consisting of physical interface 1, physical interface 2 and physical interface 3 according to the obtained delay of physical interface 5.
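A minimal sketch of this iterative, per-delay alignment for the added-interface example (the interface names and the print placeholder merely stand in for the single-interface alignment described above):

```
# Sketch of the multi-interface case of step 102l above: the single-interface alignment
# is applied once per obtained delay, and the group grows after each pass (add case).
group = ["physical interface 1", "physical interface 2", "physical interface 3"]
added = ["physical interface 4", "physical interface 5"]

for phy in added:
    print(f"align the clock offsets of {group} using the obtained delay of {phy}")
    group.append(phy)    # the aligned interface joins the group before the next pass
```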
An embodiment of the present disclosure further provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to perform any one of the above link capacity adjustment methods.
The present disclosure further provides a method for adjusting physical interface link capacity. When a physical interface is added, the method includes:
Step 1: when the calendar is being configured, both the local end and the peer end start the new PHY.
Step 2: when sending overhead on the original PHYs, the local end simultaneously sends an overhead frame on the new PHY as a calendar request (Calendar Request, CR). The overhead frame of the new PHY is defined as follows: a FlexE overhead start flag of 0x4B, an O code of 0x5, an OMF multiframe indication, the FlexE Group Number set to the corresponding FlexE Group number, the CR field set to 1, and the other fields set to 0.
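The overhead frame fields named in step 2 can be sketched as follows; only the values stated in the text are modelled, while the field layout, widths and helper names are illustrative assumptions.

```
# Sketch of the CR/CA overhead frame fields named in step 2 above. Only the values that
# the text specifies are modelled; field widths and bit positions are not.

from dataclasses import dataclass

@dataclass
class AdjustOverheadFrame:
    start_flag: int = 0x4B     # FlexE overhead start flag
    o_code: int = 0x5          # O code
    omf: int = 0               # overhead multiframe indication
    group_number: int = 0      # corresponding FlexE Group number
    cr: int = 0                # calendar request
    ca: int = 0                # calendar acknowledge
    # all other fields are set to 0

def calendar_request(group_number, omf=0):
    """Overhead frame sent on the new PHY while the original PHYs keep sending overhead."""
    return AdjustOverheadFrame(omf=omf, group_number=group_number, cr=1)

def calendar_ack(group_number, omf=0):
    """Overhead frame returned on the new PHY by a peer that has aligned and can switch."""
    return AdjustOverheadFrame(omf=omf, group_number=group_number, ca=1)
```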
Step 3: after receiving the overhead frame (CR) on the new PHY, the peer node needs to align the clock offsets of the multiple PHYs involved:
If the new PHY has the longest delay, the read clocks of all the original PHYs are paused for N1 beats (N has a maximum threshold; if this threshold is exceeded, an alarm is raised to indicate that alignment cannot be achieved), where N1 is equal to the difference between the delay of the newly added PHY and the shortest delay among the original PHYs. The data streams transmitted by the PHYs whose read clocks are paused for N1 beats are buffered, idle blocks are inserted into the buffered data streams, and the rate is adjusted so that the clock offsets of all PHYs are aligned.
If the new PHY has the shortest delay, the read clock of the new PHY and the read clocks of the original PHYs other than the one with the longest delay are paused for N2 beats, where N2 is equal to the difference between the longest delay among the original PHYs and the delay of the newly added PHY (N has a maximum threshold; if this threshold is exceeded, an alarm is raised to indicate that alignment cannot be achieved). The data streams transmitted by the PHYs whose read clocks are paused for N2 beats are buffered, idle blocks are inserted into the buffered data streams, and the rate is adjusted so that the clock offsets of all PHYs are aligned.
If the delay of the new PHY is between the delays of the original PHYs, the read clock of the new PHY is paused for N3 beats; at the same time, the read clocks of the original PHYs other than the one with the longest delay are also paused for N3 beats, where N3 is equal to the difference between the longest delay and the shortest delay among the original PHYs. The data streams transmitted by the PHYs whose read clocks are paused for N3 beats are buffered, idle blocks are inserted into the buffered data streams, and the rate is adjusted so that the clock offsets of all PHYs are aligned.
插入空闲块有两种方案:
方案一、在S块和T块之间插入idle块。
需要说明的是，这种方案优点是时延较小，缺点是收发双方节点都要具备处理这种数据的能力。
方案二、在T块和S块之间插入idle块。
需要说明的是,这种方案优点是数据格式是标准格式,收发节点都可以支持,缺点是时延比方案一略大。
需要说明的是,对于新PHY带有FlexE的开销帧,认为是正常的,不报错。
步骤4、对端节点如果可以切换,在原来的PHY发送开销帧的时候,同时在新的PHY发送开销帧来作为日程表应答(Calendar Acknowledge,CA)。新PHY的开销帧定义:带有FlexE开销的开始标志0X4B,O码0X5,OMF复帧指示,FlexE Group Number为对应的FlexE Group编号,CA字段为1,其他字段设置为0。
步骤5、本端节点收到新PHY的开销帧(CA)之后,由于涉及到多个PHY,所以进行对齐处理,处理过程与对端节点的处理过程一致,在此不再赘述。
步骤6、本端和对端节点在新的开销周期开始的时候正式切换,在新的PHY里传业务数据,所有PHY的开销帧按照现有标准进行填充。
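上述新增PHY时本端与对端之间"CR—对齐—CA—对齐—新开销周期切换"的交互顺序，可以用下面的示意性Python函数串联起来。local、peer及其各方法（send_overhead、align_group、can_switch、switch_at_next_period）都是为说明流程而假设的接口，并非具体设备实现。

def add_phy_handshake(local, peer, new_phy, group_number):
    """新增物理接口时的握手与切换流程示意（对应上文步骤2～步骤6）。"""
    local.send_overhead(new_phy, group=group_number, cr=1)   # 步骤2：在新PHY发送CR开销帧
    peer.align_group(new_phy)                                # 步骤3：对端对各PHY时钟偏移做对齐
    if not peer.can_switch():                                # 对端无法切换则流程终止
        return False
    peer.send_overhead(new_phy, group=group_number, ca=1)    # 步骤4：在新PHY回送CA开销帧
    local.align_group(new_phy)                               # 步骤5：本端做同样的对齐处理
    local.switch_at_next_period(new_phy)                     # 步骤6：双方在新的开销周期开始时
    peer.switch_at_next_period(new_phy)                      #        正式切换，新PHY承载业务数据
    return True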
本公开还提供一种链路容量的调整方法,当删除物理接口时,该方法包括:
步骤1、本端节点识别出打算删除的PHY,计算一下如果删除了这个PHY,对端节点对齐的处理情况。
如果删除的PHY时延最长，将灵活以太网组中除时延次长的物理接口和删除的物理接口以外其他物理接口的读时钟停顿N 4个节拍；其中，N 4等于灵活以太网组中时延次长的物理接口的时延与时延最短的物理接口的时延的差；缓存读时钟停顿N 4个节拍的物理接口传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
如果删除的PHY时延最短，将灵活以太网组中除时延最长的物理接口和删除的物理接口以外其他物理接口的读时钟停顿N 5个节拍；其中，N 5等于灵活以太网组中时延最长的物理接口的时延与时延次短的物理接口的时延的差；缓存读时钟停顿N 5个节拍的物理接口传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
如果删除的PHY时延在所有PHY的时延之间，将灵活以太网组中除时延最长的物理接口和删除的物理接口以外其他物理接口的读时钟停顿N 6个节拍；其中，N 6等于灵活以太网组中时延最长的物理接口的时延与时延最短的物理接口的时延的差；缓存读时钟停顿N 6个节拍的这些物理接口传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
步骤2、对端节点识别出打算删除的PHY,计算一下如果删除了这个PHY,本端节点的对齐处理情况,具体过程与本端节点计算的对端节点对齐的处理情况一致,在此不再赘述。
步骤3、本端和对端节点如果可以切换,那么正式开始删除PHY;否则,删除PHY的操作终止。
步骤4、如果要正式开始删除PHY,停止要删除PHY上的业务,在新的开销周期开始的时候正式切换,采用新的缓存方案来处理各个PHY的对齐情况,并且在正常的码流里插入空闲块,调整速率,所有PHY的开销帧按照现有标准进行填充。
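删除物理接口时的整体流程可以用类似的示意性代码概括如下，其中各对象与方法名同样只是为便于说明而假设的。

def delete_phy(local, peer, phy):
    """删除物理接口的流程示意（对应上文步骤1～步骤4）。"""
    # 步骤1、2：两端分别推算删除该PHY后对端/本端的对齐处理情况
    if not (local.can_align_without(phy) and peer.can_align_without(phy)):
        return False                              # 步骤3：任一端无法切换，删除操作终止
    for node in (local, peer):
        node.stop_traffic(phy)                    # 步骤4：先停止要删除PHY上的业务
        node.apply_new_buffer_plan(exclude=phy)   # 采用新的缓存方案处理其余PHY的对齐并插入空闲块
        node.switch_at_next_period(remove=phy)    # 在新的开销周期开始时正式切换
    return True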
下面提供几个具体的实施例来说明本公开提供的链路容量的调整方法。
实施例一
假设100G PHY a、b、c绑定成一个FlexE group，group number=1，此时打算新增100G PHY d。
步骤1、本端和对端都启动新的PHY d。
步骤2、本端在PHY a、b、c发送开销帧的时候,同时在PHY d发送overhead开销帧来作为CR。PHY d的开销帧定义:带有FlexE开销的开始标志0X4B,O码0X5,OMF复帧指示,FlexE Group Number为对应的FlexE Group编号,CR字段为1,其他字段设置为0。
步骤3、对端节点收到PHY d的开销帧（CR）之后，由于涉及到多个PHY，所以进行对齐的处理。通过计算知道PHY d时延最长。那把PHY a、b、c的读时钟都停顿N 1个节拍（N有最大门限，超过这个最大门限则报警表示无法实现对齐），N 1等于新增加的PHY d的时延与原有PHY中时延最短的时延的差；缓存PHY a、b、c的数据，并且在正常的码流里插入空闲块，调整速率，最终实现所有PHY的对齐。插入空闲块有两种方案：
方案一、在S块和T块之间插入空闲块。
方案二、在T块和S块之间插入空闲块。
步骤4、对端节点如果可以切换,在PHY a、b、c发送开销帧的时候,同时在PHY d发送开销帧来作为CA。PHY d的开销帧定义:带有FlexE开销的开始标志0X4B,O码0X5,OMF复帧指示,FlexE Group Number为对应的FlexE Group编号,CA字段为1,其他字段设置为0。
步骤5、本端节点收到PHY d的开销帧（CA）之后，由于涉及到多个PHY，所以进行对齐的处理，处理过程与对端节点的处理过程一致，在此不再赘述。
步骤6、本端和对端节点在新的开销周期开始的时候正式切换,在PHY d里传业务数据,PHYa、b、c、d的开销帧按照现有标准进行填充。
实施例二
假设100G PHY a、b、c绑定成一个FlexE group，group number=1，此时打算新增100G PHY d。
步骤1、本端和对端都启动新的PHY d。
步骤2、本端在PHY a,b,c发送overhead开销帧的时候,同时在PHY d发送开销帧来作为CR(calendar request)。PHY d的开销帧定义:带有FlexE开销的开始标志0X4B,O码0X5,OMF复帧指示,FlexE Group Number为对应的FlexE Group编号,CR字段为1,其他字段设置为0。
步骤3、对端节点收到PHY d的overhead开销帧（CR）之后，由于涉及到多个PHY，所以进行deskew的处理。通过计算知道PHY d时延最短。那把PHY d的读时钟停顿N 2个节拍（N有最大门限，超过这个最大门限则报警表示无法实现对齐），同时，除开原有PHY中时延最长的那个PHY，其他的PHY的读时钟也停顿N 2个节拍；其中，N 2等于原有PHY中时延最长的时延与新增加的PHY的时延的差。缓存读时钟停顿N 2个节拍的这些PHY传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得所有PHY的时钟偏移对齐。
步骤4、对端节点如果可以切换,在PHY a、b、c发送开销帧的时候,同时在PHY d发送开销帧来作为CA。PHY d的开销定义:带有FlexE开销的开始标志0X4B,O码0X5,OMF复帧指示,FlexE Group Number为对应的FlexE Group编号,CA字段为1,其他字段设置为0。
步骤5、本端节点收到PHY d的开销帧（CA）之后，由于涉及到多个PHY，所以进行对齐处理。通过计算知道PHY d时延最短。那把PHY d的读时钟停顿N 2个节拍（N有最大门限，超过这个最大门限则报警表示无法实现对齐），同时，除开原有PHY中时延最长的那个PHY，其他的PHY的读时钟也停顿N 2个节拍；其中，N 2等于原有PHY中时延最长的时延与新增加的PHY的时延的差。缓存读时钟停顿N 2个节拍的这些PHY传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得所有PHY的时钟偏移对齐。
步骤6、本端和对端节点在新的开销周期开始的时候正式切换,在PHY d里传业务数据,PHYa、b、c、d的开销帧按照现有标准进行填充。
实施例三
假设100G PHY a、b、c绑定成一个FlexE group，group number=1，此时打算新增100G PHY d。
步骤1、本端和对端都启动新的PHY d。
步骤2、本端在PHY a、b、c发送开销帧的时候,同时在PHY d发送overhead开销帧来作为CR。PHY d的开销帧定义:带有FlexE开销的开始标志0X4B,O码0X5,OMF复帧指示,FlexE Group Number为对应的FlexE Group编号,CR字段为1,其他字段设置为0。
步骤3、对端节点收到PHY d的overhead开销帧(CR)之后,由于涉及到多个PHY,所以进行对齐的处理。通过计算知道PHY d的时延在PHY a、b、c的时延之间,则提供PHY d的缓存,将PHY d的读时钟停顿N 3个节拍;同时,除开原有PHY中时延最长的那个PHY, 其他的PHY的读时钟也停顿N 3个节拍;其中,N 3等于原有PHY中时延最长的时延与最短的时延的差。缓存读时钟停顿N 3个节拍的这些PHY传输的数据流,并在缓存的数据流中插入空闲块,调整速率使得所有PHY的时钟偏移对齐。
步骤4、对端节点如果可以切换,在PHY a、b、c发送开销的时候,同时在PHY d发送开销帧来作为CA。PHY d的开销帧定义:带有FlexE开销的开始标志0X4B,O码0X5,OMF复帧指示,FlexE Group Number为对应的FlexE Group编号,CA字段为1,其他字段设置为0。
步骤5、本端节点收到PHY d的开销帧（CA）之后，由于涉及到多个PHY，所以进行对齐处理。通过计算知道PHY d的时延在PHY a、b、c的时延之间，则提供PHY d的缓存，将PHY d的读时钟停顿N 3个节拍；同时，除开原有PHY中时延最长的那个PHY，其他的PHY的读时钟也停顿N 3个节拍；其中，N 3等于原有PHY中时延最长的时延与最短的时延的差。缓存读时钟停顿N 3个节拍的这些PHY传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得所有PHY的时钟偏移对齐。
步骤6、本端和对端节点在新的开销周期开始的时候正式切换,在PHY d里传业务数据,PHY a、b、c、d的开销帧按照现有标准进行填充。
实施例四
假设100G PHY a、b、c、d绑定成一个FlexE group，group number=1，此时打算删除100G PHY d。
步骤1、由中心控制器通知本端和对端打算删除PHY d。
步骤2、本端节点知道打算删除PHY d之后，计算一下如果删除了PHY d，对端节点对齐的处理情况。通过计算知道PHY d时延最短，新的缓存方案将在时延最长PHY的数据到达后完成所有PHY（不包含要删除的PHY）的对齐，读时钟停顿的节拍数等于时延最长的PHY的时延与时延次短的PHY的时延之差。
步骤3、对端节点知道打算删除PHY d之后，计算一下如果删除了PHY d，本端节点对齐的处理情况。通过计算知道PHY d时延最短，新的缓存方案将在时延最长PHY的数据到达后完成所有PHY（不包含要删除的PHY）的对齐，读时钟停顿的节拍数等于时延最长的PHY的时延与时延次短的PHY的时延之差。
步骤4、本端和对端节点如果可以切换，通知中心控制器。
步骤5、中心控制器收到可以切换的回复,通知本端和对端正式开始删除PHY;否则,不会发送删除PHY的通知给本端和对端。
步骤6、本端和对端节点收到删除PHY的通知后,停止PHY d上的业务,在新的开销周期开始的时候正式切换,采用新的缓存方案来处理PHY a、b、c的对齐情况,并且在正常的码流里插入空闲块,调整速率,PHY a、b、c的开销帧按照现有标准进行填充。
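实施例四中由中心控制器协调两端删除PHY d的过程，可以抽象为下面的示意性代码。controller、local、peer及其方法均为本文为说明而假设的接口，并非具体实现。

def controller_coordinated_delete(controller, local, peer, phy):
    """由中心控制器协调的删除流程示意（对应实施例四步骤1～步骤6）。"""
    controller.notify_intent(local, peer, phy)         # 步骤1：通知本端和对端打算删除PHY
    local_ok = local.evaluate_alignment_without(phy)   # 步骤2：本端推算删除后对端的对齐情况
    peer_ok = peer.evaluate_alignment_without(phy)     # 步骤3：对端推算删除后本端的对齐情况
    if not (local_ok and peer_ok):
        return False                                   # 步骤5：控制器不会下发删除通知
    controller.confirm_switch(local, peer)             # 步骤4：两端回复可以切换
    for node in (local, peer):                         # 步骤6：收到删除通知后停止该PHY上的业务，
        node.stop_traffic(phy)                         #        在新开销周期按新的缓存方案切换
        node.switch_at_next_period(remove=phy)
    return True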
本公开实施例还提供一种节点设备,如图2所示,该节点设备2包括:
获取模块21,用于获取灵活以太网组的链路容量调整的物理接口的时延。
处理模块22,用于根据获得的时延对灵活以太网组中所有物理接口的时钟偏移进行对齐。
可选地,灵活以太网组链路容量调整的物理接口为增加的物理接口,处理模块22,还用于如果成功对齐,增加灵活以太网组链路容量调整的物理接口。
处理模块22,还用于在新的开销周期向对端节点设备发送常规开销。
可选地,还包括:
发送模块23,用于向对端节点设备发送携带有日程表请求的开销,使得对端节点设备对灵活以太网组的链路容量进行调整并对时钟偏移进行对齐。
接收模块24,用于接收来自对端节点设备的携带有日程表确认的开销,以确认对端节点对时钟偏移进行了对齐。
可选地,灵活以太网组链路容量调整的物理接口为删除的物理接口,处理模块22,还用于如果成功对齐,删除灵活以太网组链路容量调整的物理接口。
处理模块22,还用于停止灵活以太网组链路容量调整的物理接口上的业务,并在新的开销周期删除灵活以太网组链路容量调整的物理接口。
可选地,灵活以太网组链路容量调整的物理接口的数量为一个,且为增加的物理接口,处理模块22具体用于:
当获得的时延大于灵活以太网组中原有任意一个物理接口的时延时,根据待处理业务的要求将灵活以太网组中原有物理接口的读时钟停顿N 1个节拍;其中,N 1等于灵活以太网组中新增加的物理接口的时延与原有物理接口的时延中最短时延的差。
缓存灵活以太网组中原有物理接口传输的数据流,并在缓存的数据流中插入空闲块,调整速率使得灵活以太网组中所有物理接口的时钟偏移对齐。
可选地,当获得的时延小于灵活以太网组中原有任意一个物理接口的时延时,处理模块22具体用于:
根据待处理业务的要求将新增加的物理接口的读时钟,以及原有物理接口中除时延最长的物理接口以外其他物理接口的读时钟停顿N 2个节拍;其中,N 2等于灵活以太网组中原有物理接口的时延中最长时延与新增加的物理接口的时延的差。
缓存读时钟停顿N 2个节拍的物理接口传输的数据流,并在缓存的数据流中插入空闲块,调整速率使得灵活以太网组中所有物理接口的时钟偏移对齐。
可选地,当获得的时延在灵活以太网组中原有任意两个物理接口的时延之间时,处理模块22具体用于:
根据待处理业务的要求将新增加的物理接口的读时钟,以及原有物理接口中除时延最长的物理接口以外其他物理接口的读时钟停顿N 3个节拍;其中,N 3等于灵活以太网组中原有物理接口的时延中最长时延与最短时延的差。
缓存读时钟停顿N 3个节拍的物理接口传输的数据流,并在缓存的数据流中插入空闲块,调整速率使得灵活以太网组中所有物理接口的时钟偏移对齐。
可选地,处理模块22具体用于:
在缓存的数据流中的起始数据块和结束数据块之间插入空闲块。
或者,
在缓存的数据流中的结束数据块和下一个起始数据块之间插入空闲块。
可选地,灵活以太网组链路容量调整的物理接口的数量为一个,且为删除的物理接口,处理模块22具体用于:
当获得的时延大于灵活以太网组中原有任意一个物理接口的时延时,将灵活以太网组中除时延次长的物理接口和所述删除的物理接口以外其他物理接口的读时钟停顿N 4个节拍;其中,N 4等于灵活以太网组中时延次长的物理接口的时延与时延最短的物理接口的时延的差。
缓存读时钟停顿N 4个节拍的物理接口传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
可选地,当获得的时延小于灵活以太网组中原有任意一个物理接口的时延时,处理模块22具体还用于:
将灵活以太网组中除时延最长的物理接口和删除的物理接口以外其他物理接口的读时钟停顿N 5个节拍；其中，N 5等于灵活以太网组中时延最长的物理接口的时延与时延次短的物理接口的时延的差。
缓存读时钟停顿N 5个节拍的物理接口传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
可选地,当获得的时延在灵活以太网组中任意两个物理接口的时延之间时,处理模块22具体还用于:
将灵活以太网组中除时延最长的物理接口和删除的物理接口以外其他物理接口的读时钟停顿N 6个节拍;其中,N 6等于灵活以太网组中时延最长的物理接口的时延与时延最短的物理接口的时延的差。
缓存读时钟停顿N 6个节拍的这些物理接口传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
可选地,灵活以太网组链路容量调整的物理接口的数量为N个,其中,N为大于1的整数,处理模块22具体用于:
根据链路容量调整的N个物理接口对应的N个时延中第i个时延对灵活以太网组中所有物理接口的时钟偏移进行对齐;其中,i=1、2…N。
本公开实施例提供的节点设备,获取灵活以太网组的链路容量调整的物理接口的时延;根据获得的时延对灵活以太网组中所有物理接口的时钟偏移进行对齐。从本公开实施例可见,由于获取了灵活以太网组链路容量调整的物理接口的时延,并根据获得的时延对灵活以太网组中所有物理接口的时钟偏移进行了对齐,从而防止了在链路容量调整时以太网组中的物理接口由于时钟偏移而造成数据丢失的情况出现。
在实际应用中,获取模块21、处理模块22、发送模块23和接收模块24均可由位于节点设备中的中央处理器(Central Processing Unit,CPU)、微处理器(Micro Processor Unit,MPU)、数字信号处理器(Digital Signal Processor,DSP)或现场可编程门阵列(Field Programmable Gate Array,FPGA)等实现。
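上述获取模块21、处理模块22、发送模块23与接收模块24的职责划分，可以对应到如下示意性的Python类。NodeDevice及其方法只是帮助理解模块分工的假设性抽象，transport代表假设的底层收发与测量接口，与图2所示装置的具体实现无关。

class NodeDevice:
    """节点设备的模块划分示意。"""
    def __init__(self, transport):
        self.transport = transport

    # 获取模块21：获取链路容量调整的物理接口的时延
    def acquire_delay(self, phy):
        return self.transport.measure_delay(phy)

    # 处理模块22：根据获得的时延对组内所有物理接口的时钟偏移进行对齐
    def align_clock_offsets(self, plan_fn, delay, group_delays):
        stalled, beats = plan_fn(delay, group_delays)   # plan_fn可取前文示例中的情形判断函数
        for phy in stalled:
            self.transport.stall_read_clock(phy, beats)
        self.transport.insert_idle_blocks(stalled)

    # 发送模块23：向对端节点设备发送携带日程表请求的开销
    def send_calendar_request(self, phy, group_number):
        self.transport.send_overhead(phy, group=group_number, cr=1)

    # 接收模块24：收到携带日程表确认的开销，即确认对端已完成时钟偏移对齐
    def on_overhead_received(self, frame):
        return frame.get("ca") == 1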
本公开实施例还提供一种链路容量的调整装置,包括存储器和处理器,其中,存储器中存储有以下可被处理器执行的指令:
获取灵活以太网组链路容量调整的物理接口的时延。
根据获得的时延对灵活以太网组中所有物理接口的时钟偏移进行对齐。
可选地,灵活以太网组链路容量调整的物理接口为增加的物理接口,存储器中还存储有以下可被处理器执行的指令:
如果成功对齐,增加灵活以太网组链路容量调整的物理接口。
在新的开销周期向对端节点设备发送常规开销。
可选地,存储器中还存储有以下可被处理器执行的指令:
向对端节点设备发送携带有日程表请求的开销,使得对端节点设备对灵活以太网组的链路容量进行调整并对时钟偏移进行对齐。
接收来自对端节点设备的携带有日程表确认的开销,以确认对端节点对时钟偏移进行了对齐。
可选地,灵活以太网组链路容量调整的物理接口为删除的物理接口,存储器中还存储有以下可被处理器执行的指令:
如果成功对齐,删除灵活以太网组链路容量调整的物理接口。
停止灵活以太网组链路容量调整的物理接口上的业务,并在新的开销周期删除灵活以太网组链路容量调整的物理接口。
可选地,灵活以太网组链路容量调整的物理接口的数量为一个,且为增加的物理接口,存储器中具体存储有以下可被处理器执行的指令:
当获得的时延大于灵活以太网组中原有任意一个物理接口的时延时,根据待处理业务的要求将灵活以太网组中原有物理接口的读时钟停顿N 1个节拍;其中,N 1等于灵活以太网组中新增加的物理接口的时延与原有物理接口的时延中最短时延的差。
缓存灵活以太网组中原有物理接口传输的数据流,并在缓存的数据流中插入空闲块,调整速率使得灵活以太网组中所有物理接口的时钟偏移对齐。
可选地,当获得的时延小于灵活以太网组中原有任意一个物理接口的时延时,存储器中还存储有以下可被处理器执行的指令:
根据待处理业务的要求将新增加的物理接口的读时钟,以及原有物理接口中除时延最长的物理接口以外其他物理接口的读时钟停顿N 2个节拍;其中,N 2等于灵活以太网组中原有物理接口的时延中最长时延与新增加的物理接口的时延的差。
缓存读时钟停顿N 2个节拍的物理接口传输的数据流,并在缓存的数据流中插入空闲块,调整速率使得灵活以太网组中所有物理接口的时钟偏移对齐。
可选地,当获得的时延在灵活以太网组中原有任意两个物理接口的时延之间时,存储器中还具体存储有以下可被处理器执行的指令:
根据待处理业务的要求将新增加的物理接口的读时钟，以及原有物理接口中除时延最长的物理接口以外其他物理接口的读时钟停顿N 3个节拍；其中，N 3等于灵活以太网组中原有物理接口的时延中最长时延与最短时延的差。
缓存读时钟停顿N 3个节拍的物理接口传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得灵活以太网组中所有物理接口的时钟偏移对齐。
可选地,存储器中还具体存储有以下可被处理器执行的指令:
在缓存的数据流中的起始数据块和结束数据块之间插入空闲块;
或者,
在缓存的数据流中的结束数据块和下一个起始数据块之间插入空闲块。
可选地,灵活以太网组链路容量调整的物理接口的数量为一个,且为删除的物理接口,存储器中还具体存储有以下可被处理器执行的指令:
当获得的时延大于灵活以太网组中原有任意一个物理接口的时延时,将灵活以太网组中除时延次长的物理接口和删除的物理接口以外其他物理接口的读时钟停顿N 4个节拍;其中,N 4等于灵活以太网组中时延次长的物理接口的时延与时延最短的物理接口的时延的差。
缓存读时钟停顿N 4个节拍的物理接口传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
可选地,当获得的时延小于灵活以太网组中原有任意一个物理接口的时延时,存储器中还具体存储有以下可被处理器执行的指令:
将灵活以太网组中除时延最长的物理接口和删除的物理接口以外其他物理接口的读时钟停顿N 5个节拍;其中,N 5等于灵活以太网组中时延最长的物理接口的时延与时延次短的物理接口的时延的差。
缓存读时钟停顿N 5个节拍的物理接口传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
可选地,当获得的时延在灵活以太网组中原有任意两个物理接口的时延之间时,存储器中还具体存储有以下可被处理器执行的指令:
将灵活以太网组中除时延最长的物理接口和删除的物理接口以外其他物理接口的读时钟停顿N 6个节拍;其中,N 6等于灵活以太网组中时延最长的物理接口的时延与时延最短的物理接口的时延的差。
缓存读时钟停顿N 6个节拍的这些物理接口传输的数据流，并在缓存的数据流中插入空闲块，调整速率使得灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
可选地,灵活以太网组链路容量调整的物理接口的数量为N个,其中,N为大于1的整数,存储器中还具体存储有以下可被处理器执行的指令:
根据链路容量调整的N个物理接口对应的N个时延中第i个时延对所述灵活以太网组中所有物理接口的时钟偏移进行对齐;其中,i=1、2…N。
虽然本公开所揭露的实施方式如上，但上述内容仅为便于理解本公开而采用的实施方式，并非用以限定本公开。任何本公开所属领域内的技术人员，在不脱离本公开所揭露的精神和范围的前提下，可以在实施的形式及细节上进行任何的修改与变化，但本公开的专利保护范围，仍须以所附的权利要求书所界定的范围为准。
工业实用性
本公开适用于通信技术领域,用以防止在链路容量调整时以太网组中的物理接口由于时钟偏移而造成数据丢失的情况出现。

Claims (24)

  1. 一种链路容量的调整方法,包括:
    节点设备获取灵活以太网组的链路容量调整的物理接口的时延;
    根据获得的时延对灵活以太网组中所有物理接口的时钟偏移进行对齐。
  2. 根据权利要求1所述的调整方法,其中,所述灵活以太网组链路容量调整的物理接口为增加的物理接口,所述根据获得的时延对灵活以太网组中所有物理接口的时钟偏移进行对齐之后,还包括:
    如果成功对齐,增加所述灵活以太网组链路容量调整的物理接口;
    在新的开销周期向对端节点设备发送常规开销。
  3. 根据权利要求2所述的调整方法,其中,所述节点设备获取灵活以太网组的链路容量调整的物理接口的时延之前,还包括:
    向对端节点设备发送携带有日程表请求的开销,使得所述对端节点设备对灵活以太网组的链路容量进行调整并对时钟偏移进行对齐;
    所述在新的开销周期向对端节点设备发送常规开销之前,还包括:
    接收来自所述对端节点设备的携带有日程表确认的开销,以确认所述对端节点对所述时钟偏移进行了对齐。
  4. 根据权利要求1所述的调整方法,其中,所述灵活以太网组链路容量调整的物理接口为删除的物理接口,所述根据获得的时延对灵活以太网组中所有物理接口的时钟偏移进行对齐之后,还包括:
    如果成功对齐,删除所述灵活以太网组链路容量调整的物理接口;
    停止所述灵活以太网组链路容量调整的物理接口上的业务,并在新的开销周期删除所述灵活以太网组链路容量调整的物理接口。
  5. 根据权利要求1所述的调整方法,其中,所述灵活以太网组链路容量调整的物理接口的数量为一个,且为增加的物理接口,所述根据获得的时延对灵活以太网组中所有物理接口的时钟偏移进行对齐,包括:
    当所述获得的时延大于所述灵活以太网组中原有任意一个物理接口的时延时,根据待处理业务的要求将所述灵活以太网组中原有物理接口的读时钟停顿N 1个节拍;其中,N 1等于所述灵活以太网组中新增加的物理接口的时延与原有物理接口的时延中最短时延的差;
    缓存所述灵活以太网组中原有物理接口传输的数据流,并在缓存的数据流中插入空闲块,调整速率使得所述灵活以太网组中所有物理接口的时钟偏移对齐。
  6. 根据权利要求5所述的调整方法,其中,当所述获得的时延小于所述灵活以太网组中原有任意一个物理接口的时延时,还包括:
    根据待处理业务的要求将新增加的物理接口的读时钟，以及原有物理接口中除时延最长的物理接口以外其他物理接口的读时钟停顿N 2个节拍；其中，N 2等于所述灵活以太网组中原有物理接口的时延中最长时延与新增加的物理接口的时延的差；
    缓存所述读时钟停顿N 2个节拍的物理接口传输的数据流,并在缓存的数据流中插入所述空闲块,调整速率使得所述灵活以太网组中所有物理接口的时钟偏移对齐。
  7. 根据权利要求5所述的调整方法,其中,当所述获得的时延在所述灵活以太网组中原有任意两个物理接口的时延之间时,还包括:
    根据待处理业务的要求将新增加的物理接口的读时钟,以及原有物理接口中除时延最长的物理接口以外其他物理接口的读时钟停顿N 3个节拍;其中,N 3等于所述灵活以太网组中原有物理接口的时延中最长时延与最短时延的差;
    缓存所述读时钟停顿N 3个节拍的物理接口传输的数据流,并在缓存的数据流中插入所述空闲块,调整速率使得所述灵活以太网组中所有物理接口的时钟偏移对齐。
  8. 根据权利要求5-7任一项所述的调整方法,其中,所述在缓存的数据流中插入空闲块,包括:
    在所述缓存的数据流中的起始数据块和结束数据块之间插入所述空闲块;
    或者,
    在所述缓存的数据流中的结束数据块和下一个起始数据块之间插入所述空闲块。
  9. 根据权利要求1所述的调整方法,其中,所述灵活以太网组链路容量调整的物理接口的数量为一个,且为删除的物理接口,所述根据获得的时延对灵活以太网组中所有物理接口的时钟偏移进行对齐,包括:
    当所述获得的时延大于所述灵活以太网组中原有任意一个物理接口的时延时,将所述灵活以太网组中除时延次长的物理接口和所述删除的物理接口以外其他物理接口的读时钟停顿N 4个节拍;其中,N 4等于所述灵活以太网组中时延次长的物理接口的时延与时延最短的物理接口的时延的差;
    缓存所述读时钟停顿N 4个节拍的物理接口传输的数据流，并在缓存的数据流中插入所述空闲块，调整速率使得所述灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
  10. 根据权利要求9所述的调整方法,其中,当所述获得的时延小于所述灵活以太网组中原有任意一个物理接口的时延时,还包括:
    将所述灵活以太网组中除时延最长的物理接口和所述删除的物理接口以外其他物理接口的读时钟停顿N 5个节拍;其中,N 5等于所述灵活以太网组中时延最长的物理接口的时延与时延次短的物理接口的时延的差;
    缓存所述读时钟停顿N 5个节拍的物理接口传输的数据流，并在缓存的数据流中插入所述空闲块，调整速率使得所述灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
  11. 根据权利要求9或10所述的调整方法,其中,当所述获得的时延在所述灵活以太网组中原有任意两个物理接口的时延之间时,还包括:
    将所述灵活以太网组中除时延最长的物理接口和所述删除的物理接口以外其他物理接口的读时钟停顿N 6个节拍;其中,N 6等于所述灵活以太网组中时延最长的物理接口的时延与时延最短的物理接口的时延的差;
    缓存所述读时钟停顿N 6个节拍的这些物理接口传输的数据流，并在缓存的数据流中插入所述空闲块，调整速率使得所述灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
  12. 根据权利要求1所述的调整方法,其中,所述灵活以太网组链路容量调整的物理接口的数量为N个,其中,N为大于1的整数,所述根据获得的时延对灵活以太网组中所有物理接口的时钟偏移进行对齐,包括:
    根据链路容量调整的N个物理接口对应的N个时延中第i个时延对所述灵活以太网组中所有物理接口的时钟偏移进行对齐;其中,i=1、2…N。
  13. 一种节点设备,包括:
    获取模块,设置为获取灵活以太网组的链路容量调整的物理接口的时延;
    处理模块,设置为根据获得的时延对灵活以太网组中所有物理接口的时钟偏移进行对齐。
  14. 根据权利要求13所述的节点设备,其中,所述灵活以太网组链路容量调整的物理接口为增加的物理接口,所述处理模块,还设置为如果成功对齐,增加所述灵活以太网组链路容量调整的物理接口;
    所述处理模块,还设置为在新的开销周期向对端节点设备发送常规开销。
  15. 根据权利要求14所述的节点设备，还包括：
    发送模块,设置为向对端节点设备发送携带有日程表请求的开销,使得所述对端节点设备对灵活以太网组的链路容量进行调整并对时钟偏移进行对齐;
    接收模块,设置为接收来自所述对端节点设备的携带有日程表确认的开销,以确认所述对端节点对所述时钟偏移进行了对齐。
  16. 根据权利要求13所述的节点设备，其中，所述灵活以太网组链路容量调整的物理接口为删除的物理接口，所述处理模块，还设置为如果成功对齐，删除所述灵活以太网组链路容量调整的物理接口；
    所述处理模块,还设置为停止所述灵活以太网组链路容量调整的物理接口上的业务,并在新的开销周期删除所述灵活以太网组链路容量调整的物理接口。
  17. 根据权利要求13所述的节点设备,其中,所述灵活以太网组链路容量调整的物理接口的数量为一个,且为增加的物理接口,所述处理模块设置为:
    当所述获得的时延大于所述灵活以太网组中原有任意一个物理接口的时延时,根据待处理业务的要求将所述灵活以太网组中原有物理接口的读时钟停顿N 1个节拍;其中,N 1等于所述灵活以太网组中新增加的物理接口的时延与原有物理接口的时延中最短时延的差;
    缓存所述灵活以太网组中原有物理接口传输的数据流,并在缓存的数据流中插入空闲块,调整速率使得所述灵活以太网组中所有物理接口的时钟偏移对齐。
  18. 根据权利要求17所述的节点设备,其中,当所述获得的时延小于所述灵活以太网组中原有任意一个物理接口的时延时,所述处理模块设置为:
    根据待处理业务的要求将新增加的物理接口的读时钟,以及原有物理接口中除时延最长的物理接口以外其他物理接口的读时钟停顿N 2个节拍;其中,N 2等于所述灵活以太网组中原有物理接口的时延中最长时延与新增加的物理接口的时延的差;
    缓存所述读时钟停顿N 2个节拍的物理接口传输的数据流,并在缓存的数据流中插入所述空闲块,调整速率使得所述灵活以太网组中所有物理接口的时钟偏移对齐。
  19. 根据权利要求17所述的节点设备,其中,当所述获得的时延在所述灵活以太网组中原有任意两个物理接口的时延之间时,所述处理模块设置为:
    根据待处理业务的要求将新增加的物理接口的读时钟,以及原有物理接口中除时延最长的物理接口以外其他物理接口的读时钟停顿N 3个节拍;其中,N 3等于所述灵活以太网组中原有物理接口的时延中最长时延与最短时延的差;
    缓存所述读时钟停顿N 3个节拍的物理接口传输的数据流,并在缓存的数据流中插入所述空闲块,调整速率使得所述灵活以太网组中所有物理接口的时钟偏移对齐。
  20. 根据权利要求17-19任一项所述的节点设备,其中,所述处理模块设置为:
    在所述缓存的数据流中的起始数据块和结束数据块之间插入所述空闲块;
    或者,
    在所述缓存的数据流中的结束数据块和下一个起始数据块之间插入所述空闲块。
  21. 根据权利要求13所述的节点设备,其中,所述灵活以太网组链路容量调整的物理接口的数量为一个,且为删除的物理接口,所述处理模块设置为:
    当所述获得的时延大于所述灵活以太网组中原有任意一个物理接口的时延时,将所述灵活以太网组中除时延次长的物理接口和所述删除的物理接口以外其他物理接口的读时钟停顿N 4个节拍;其中,N 4等于所述灵活以太网组中时延次长的物理接口的时延与时延最短的物理接口的时延的差;
    缓存所述读时钟停顿N 4个节拍的物理接口传输的数据流，并在缓存的数据流中插入所述空闲块，调整速率使得所述灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
  22. 根据权利要求21所述的节点设备,其中,当所述获得的时延小于所述灵活以太网组中原有任意一个物理接口的时延时,所述处理模块还设置为:
    将所述灵活以太网组中除时延最长的物理接口和所述删除的物理接口以外其他物理接口的读时钟停顿N 5个节拍;其中,N 5等于所述灵活以太网组中时延最长的物理接口的时延与时延次短的物理接口的时延的差;
    缓存所述读时钟停顿N 5个节拍的物理接口传输的数据流，并在缓存的数据流中插入所述空闲块，调整速率使得所述灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
  23. 根据权利要求21或22所述的节点设备,其中,当所述获得的时延在所述灵活以太网组中原有任意两个物理接口的时延之间时,所述处理模块还设置为:
    将所述灵活以太网组中除时延最长的物理接口和所述删除的物理接口以外其他物理接口的读时钟停顿N 6个节拍;其中,N 6等于所述灵活以太网组中时延最长的物理接口的时延与时延最短的物理接口的时延的差;
    缓存所述读时钟停顿N 6个节拍的这些物理接口传输的数据流，并在缓存的数据流中插入所述空闲块，调整速率使得所述灵活以太网组中除删除的物理接口以外其他物理接口的时钟偏移对齐。
  24. 根据权利要求13所述的节点设备,其中,所述灵活以太网组链路容量调整的物理接口的数量为N个,其中,N为大于1的整数,所述处理模块设置为:
    根据链路容量调整的N个物理接口对应的N个时延中第i个时延对所述灵活以太网组中所有物理接口的时钟偏移进行对齐;其中,i=1、2…N。
PCT/CN2019/079127 2018-07-12 2019-03-21 一种链路容量的调整方法及装置 WO2020010875A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19835048.0A EP3823212A4 (en) 2018-07-12 2019-03-21 LINK CAPACITY ADJUSTMENT METHOD AND DEVICE
US17/257,639 US11546221B2 (en) 2018-07-12 2019-03-21 Link capacity adjustment method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810765154.8 2018-07-12
CN201810765154.8A CN110719182B (zh) 2018-07-12 2018-07-12 一种链路容量的调整方法及装置

Publications (1)

Publication Number Publication Date
WO2020010875A1 true WO2020010875A1 (zh) 2020-01-16

Family

ID=69143303

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/079127 WO2020010875A1 (zh) 2018-07-12 2019-03-21 一种链路容量的调整方法及装置

Country Status (4)

Country Link
US (1) US11546221B2 (zh)
EP (1) EP3823212A4 (zh)
CN (1) CN110719182B (zh)
WO (1) WO2020010875A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112929199A (zh) * 2019-12-06 2021-06-08 华为技术有限公司 灵活以太网组的管理方法、设备及计算机可读存储介质
WO2022021409A1 (zh) * 2020-07-31 2022-02-03 华为技术有限公司 灵活以太网组中物理接口调整方法和设备

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104113517A (zh) * 2013-04-22 2014-10-22 华为技术有限公司 时间戳生成方法、装置及系统
EP3713158B1 (en) * 2015-06-30 2022-02-09 Ciena Corporation Time transfer systems and methods over a stream of ethernet blocks
US10097480B2 (en) * 2015-09-29 2018-10-09 Ciena Corporation Time transfer systems and methods over flexible ethernet
CN106612203A (zh) * 2015-10-27 2017-05-03 中兴通讯股份有限公司 一种处理灵活以太网客户端数据流的方法及装置
CN107800528B (zh) 2016-08-31 2021-04-06 中兴通讯股份有限公司 一种传输同步信息的方法、装置和系统
CN108075903B (zh) * 2016-11-15 2020-04-21 华为技术有限公司 用于建立灵活以太网群组的方法和设备
CN111106964A (zh) * 2017-04-28 2020-05-05 华为技术有限公司 配置链路组的方法和设备
CN110650002B (zh) * 2018-06-26 2021-01-29 华为技术有限公司 一种FlexE组中PHY的调整方法、相关设备及存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101141826A (zh) * 2007-08-08 2008-03-12 中兴通讯股份有限公司 链路容量调整方案协议的实现方法和装置
US20170171163A1 (en) * 2015-12-11 2017-06-15 Ciena Corporation Flexible ethernet encryption systems and methods
CN107438029A (zh) * 2016-05-27 2017-12-05 华为技术有限公司 转发数据的方法和设备
CN106918730A (zh) * 2017-02-09 2017-07-04 深圳市鼎阳科技有限公司 一种数字示波器及其多通道信号同步方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3823212A4 *

Also Published As

Publication number Publication date
EP3823212A1 (en) 2021-05-19
US20210273854A1 (en) 2021-09-02
US11546221B2 (en) 2023-01-03
CN110719182A (zh) 2020-01-21
EP3823212A4 (en) 2021-12-15
CN110719182B (zh) 2021-11-16

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19835048

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE