WO2018113329A1 - Transmission rate adjustment method and network device - Google Patents

Transmission rate adjustment method and network device

Info

Publication number
WO2018113329A1
WO2018113329A1 (PCT/CN2017/097999)
Authority
WO
WIPO (PCT)
Prior art keywords
bandwidth
rate
code block
network device
idle
Prior art date
Application number
PCT/CN2017/097999
Other languages
English (en)
French (fr)
Inventor
钟其文 (Zhong Qiwen)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to KR1020197016328A (KR102240140B1)
Priority to EP17883698.7A (EP3531594B1)
Priority to EP21162885.4A (EP3905556A3)
Priority to JP2019532786A (JP6946433B2)
Publication of WO2018113329A1
Priority to US16/428,246 (US11082142B2)
Priority to US17/391,421 (US11750312B2)

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 - Arrangements for detecting or preventing errors in the information received
    • H04L 1/0001 - Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L 1/0002 - Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the transmission rate
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04J - MULTIPLEX COMMUNICATION
    • H04J 3/00 - Time-division multiplex systems
    • H04J 3/02 - Details
    • H04J 3/06 - Synchronising arrangements
    • H04J 3/07 - Synchronising arrangements using pulse stuffing for systems with different or fluctuating information rates or bit rates
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04J - MULTIPLEX COMMUNICATION
    • H04J 3/00 - Time-division multiplex systems
    • H04J 3/16 - Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H04J 3/1605 - Fixed allocated frame structures
    • H04J 3/1652 - Optical Transport Network [OTN]
    • H04J 3/1658 - Optical Transport Network [OTN] carrying packets or ATM cells
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 - Arrangements for detecting or preventing errors in the information received
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 - Arrangements for detecting or preventing errors in the information received
    • H04L 1/0001 - Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L 1/0006 - Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the transmission format
    • H04L 1/0007 - Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the transmission format by modifying the frame length
    • H04L 1/0008 - Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the transmission format by modifying the frame length by supplementing frame payload, e.g. with padding bits
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 - Arrangements for detecting or preventing errors in the information received
    • H04L 1/004 - Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0056 - Systems characterized by the type of code used
    • H04L 1/0071 - Use of interleaving
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 - Interconnection of networks
    • H04L 12/4633 - Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 27/00 - Modulated-carrier systems
    • H04L 27/26 - Systems using multi-frequency codes
    • H04L 27/2601 - Multicarrier modulation systems
    • H04L 27/2647 - Arrangements specific to the receiver only
    • H04L 27/2655 - Synchronisation arrangements
    • H04L 27/2666 - Acquisition of further OFDM parameters, e.g. bandwidth, subcarrier spacing, or guard interval length
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 - Configuration management of networks or network elements
    • H04L 41/0896 - Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/70 - Admission control; Resource allocation
    • H04L 47/74 - Admission control; Resource allocation measures in reaction to resource unavailability
    • H04L 47/748 - Negotiation of resources, e.g. modification of a request
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 - Network traffic management; Network resource management
    • H04W 28/16 - Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W 28/18 - Negotiating wireless communication parameters
    • H04W 28/20 - Negotiating bandwidth
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04J - MULTIPLEX COMMUNICATION
    • H04J 2203/00 - Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J 2203/0001 - Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J 2203/0073 - Services, e.g. multimedia, GOS, QOS
    • H04J 2203/0082 - Interaction of SDH with non-ATM protocols
    • H04J 2203/0085 - Support of Ethernet

Definitions

  • the present application relates to the field of communications technologies, and in particular, to a method for adjusting a transmission rate and a network device.
  • Flexible Ethernet (FlexE) draws on Synchronous Digital Hierarchy / Optical Transport Network (SDH/OTN) technology to construct a FlexE frame format for information transmission on each physical interface in a flexible Ethernet link group (FlexE Group), with time-division multiplexing (TDM) slot division. Unlike SDH/OTN byte interleaving, FlexE's TDM slot division granularity is 66 bits, and slots are interleaved in units of 66 bits.
  • One FlexE frame contains 8 rows; the first 66b block position of each row is a FlexE overhead area, and the overhead area is followed by a payload area that is divided into time slots with a 64/66b bit block as the granularity. The payload area corresponds to a bearer space of 20 × 1023 64/66b bit blocks, divided into 20 time slots; the bandwidth of each time slot is approximately the 100GE interface bandwidth divided by 20, about 5 Gbps, and the nominal rate is slightly less than 5 Gbps.
  • the actual rate of the FlexE traffic slot is constrained by the physical interface rate characteristics.
  • The bit rate of the 100G FlexE frame is (16383/16384) × (66/64) × 100 Gbps ± 100 ppm.
  • The total rate of the 100G FlexE frame payload area is ((1023 × 20)/(1023 × 20 + 1)) × (16383/16384) × (66/64) × 100 Gbps ± 100 ppm.
  • The rate of each time slot is ((1023 × 1)/(1023 × 20 + 1)) × (16383/16384) × (66/64) × 100 Gbps ± 100 ppm, which differs slightly from (66/64) × 5 Gbps ± 100 ppm.
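  • The three rate formulas above can be checked numerically. The sketch below (plain Python; the only inputs are the published formulas, and the ±100 ppm clock tolerance is ignored) computes the 100G FlexE frame, payload-area, and per-slot rates, and the shortfall of the slot rate relative to the nominal (66/64) × 5 Gbps:

```python
# FlexE 100G rate arithmetic from the formulas above (ignoring the +/-100 ppm tolerance).
PHY_RATE = 100e9                      # 100GE physical interface, bits per second

frame_rate = (16383 / 16384) * (66 / 64) * PHY_RATE          # FlexE frame bit rate
payload_rate = (1023 * 20) / (1023 * 20 + 1) * frame_rate    # total payload area rate
slot_rate = (1023 * 1) / (1023 * 20 + 1) * frame_rate        # one of the 20 time slots
nominal_slot = (66 / 64) * 5e9                               # nominal 66b-coded 5 Gbps slot

print(f"frame   : {frame_rate / 1e9:.6f} Gbps")
print(f"payload : {payload_rate / 1e9:.6f} Gbps")
print(f"slot    : {slot_rate / 1e9:.6f} Gbps")
print(f"nominal : {nominal_slot / 1e9:.6f} Gbps")
print(f"slot shortfall vs nominal: {(nominal_slot - slot_rate) / 1e6:.3f} Mbps")
```

The slot rate comes out roughly half a Mbps below the nominal 66b-coded 5 Gbps, which is the small difference the bullet above refers to.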
  • FlexE allows a number of physical interfaces to be cascaded to form a flexible Ethernet link group (FlexE Group), all of whose time slots can be combined into several transmission channels to carry several Ethernet services. For example, two time slots are combined into one transmission channel carrying one 10GE service, five time slots into one transmission channel carrying one 25GE service, and 30 time slots into one transmission channel carrying one 150GE service.
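  • As a rough check of the slot-combination examples above (a sketch only; actual FlexE calendar configuration involves more than a slot count), the number of nominal 5 Gbps slots needed for a client rate is a ceiling division:

```python
import math

SLOT_GBPS = 5  # nominal FlexE slot bandwidth for a 100GE PHY (100 / 20)

def slots_needed(client_gbps: float) -> int:
    """Minimum number of nominal 5 Gbps FlexE slots needed to carry a client rate."""
    return math.ceil(client_gbps / SLOT_GBPS)

print(slots_needed(10))    # 10GE  -> 2 slots
print(slots_needed(25))    # 25GE  -> 5 slots
print(slots_needed(150))   # 150GE -> 30 slots
```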
  • The traffic carried by the transmission channel is a sequence of 66b coded blocks transmitted in order, consistent with the native 64/66b code block stream formed by encoding the Ethernet MAC data stream.
  • For example, a 50GE service.
  • Idle addition and deletion here refers mainly to adjusting the number of idle (Idle) bytes between Ethernet packets and, correspondingly, the number of their 64/66b coded blocks (one idle code block corresponds to 8 idle bytes).
  • Buffering increases the complexity of the device and the delay of the transmission.
  • The idle addition and deletion rate adjustment mechanism is also applied, in OTN, to the mapping adaptation of a service to ODUflex.
  • OTN also directly adopts the IEEE 802.3 idle adjustment mechanism to implement service rate adjustment and adaptation and to map the service to ODUflex, but it must adjust the ODUflex that serves as the bearer transmission channel bandwidth slowly, to keep the service lossless.
  • An embodiment of the present invention provides a method for adjusting a transmission rate, configured to perform effective idle addition and deletion rate adjustment at a network node according to the difference between the transmission channel rates upstream and downstream of a service data flow, so as to adapt to that rate difference. Especially when the end-to-end transmission bandwidth of a service is rapidly adjusted and there is a large rate difference between the upstream and downstream transmission channels, the network node's data buffering, the network node's processing delay, and the end-to-end service transmission delay are reduced.
  • a first aspect of the embodiments of the present invention provides a method for adjusting a transmission rate, where the network device acquires, from an upstream device, a target data stream that includes a first data packet, where the first data packet includes at least two non-idle units.
  • When bandwidth adjustment is required, a padding unit is inserted or deleted between any two non-idle units according to the bandwidth to be adjusted; the padding unit is used to adapt the difference between the bandwidth of the upstream transmission channel of the network device and the bandwidth of the downstream transmission channel, and that difference is the bandwidth that needs to be adjusted.
  • Embodiments of the present invention achieve a fast stepwise adjustment of the transmission rate by inserting a padding unit between non-idle units.
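  • As a toy illustration of this first aspect (the single-character labels below are hypothetical stand-ins; the patent's units are real 66b code blocks typed by a code block type field), padding may be inserted or deleted between any two non-idle units, even inside a packet, and stripped again at egress:

```python
# Toy 66b-block stream model: 'S' start, 'D' data, 'T' terminate (non-idle);
# 'I' idle between packets; 'P' the padding unit of this method. Labels are
# illustrative only -- real blocks are 66-bit words typed by a code block field.

def insert_padding(stream, count, pos=1):
    """Insert `count` padding units at `pos`, which may fall between two
    non-idle units (i.e., inside a packet) -- the key difference from the
    IEEE 802.3 idle mechanism, which only adjusts between packets."""
    assert 0 < pos < len(stream)           # between two existing units
    return stream[:pos] + ['P'] * count + stream[pos:]

def delete_padding(stream):
    """Egress behaviour: strip padding units, leaving the client stream."""
    return [b for b in stream if b != 'P']

packet = ['S', 'D', 'D', 'D', 'T']
widened = insert_padding(packet, 3, pos=2)   # pad inside the packet
print(widened)                                # ['S', 'D', 'P', 'P', 'P', 'D', 'D', 'T']
print(delete_padding(widened) == packet)      # True
```

Because padding can go between any two non-idle units, the node never needs to buffer a whole packet before adjusting the rate.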
  • The method further includes: when bandwidth adjustment is required, inserting or deleting a padding unit, according to the bandwidth to be adjusted, between the first data packet and the data packet adjacent to the first data packet; the padding unit is configured to adapt the difference between the bandwidth of the network device's upstream transmission channel and the bandwidth of its downstream transmission channel, and that difference is the bandwidth to be adjusted.
  • a fast stepwise adjustment of the transmission rate is achieved by inserting a padding unit between data messages.
  • Inserting or deleting the padding unit between any two non-idle units according to the bandwidth that needs to be adjusted includes: inserting or deleting, between any two non-idle units, a preset pad code block indicated by a code block type field; the preset pad code block is used to adapt the difference between the bandwidth of the network device's upstream transmission channel and the bandwidth of its downstream transmission channel, and the difference between the bandwidths is the bandwidth that needs to be adjusted.
  • The embodiment of the present invention specifically describes that the padding unit inserted or deleted is a preset pad code block, which increases the implementability of the embodiment of the present invention.
  • Inserting the padding unit between any two data units according to the bandwidth to be adjusted includes: inserting or deleting, between any two non-idle units, a typical idle code block indicated by a code block type field; the typical idle code block is used to adapt the difference between the bandwidth of the network device's upstream transmission channel and the bandwidth of its downstream transmission channel, and the difference between the bandwidths is the bandwidth that needs to be adjusted.
  • The embodiment of the present invention specifically describes that the padding unit inserted or deleted is a typical idle code block, which increases the implementability of the embodiment of the present invention.
  • The method further includes: when the rate-adaptation rate difference is smaller than the bandwidth difference, inserting or deleting a padding unit in the target data stream according to the required rate-adaptation rate difference; the padding unit is inserted or deleted for rate adaptation.
  • The embodiment of the invention describes the insertion or deletion of the padding unit when the transmission rate is finely adjusted, which enriches the rate adjustment modes of the embodiment of the invention.
  • After the network device acquires the target data stream, the method further includes: deleting the padding units and transmitting the remaining data units to the next network device or user equipment.
  • The embodiment of the present invention describes deleting all the padding units and idle units and transmitting only the data units to the next device, which increases the implementability and operability of the embodiment of the present invention.
  • A second aspect of the embodiments of the present invention provides a network device, including: an acquiring unit configured to acquire a target data stream, where the target data stream includes a first data packet and the first data packet includes at least two non-idle units; and a first adjusting unit configured to, when bandwidth adjustment is required, insert or delete a padding unit between any two non-idle units according to the bandwidth to be adjusted, for adapting the bandwidths of the network device's upstream and downstream transmission channels.
  • Embodiments of the present invention achieve a fast stepwise adjustment of the transmission rate by inserting a padding unit between non-idle units.
  • The network device further includes: a second adjusting unit configured to insert or delete a padding unit, according to the bandwidth to be adjusted, between the first data packet and the data packet adjacent to the first data packet.
  • a fast stepwise adjustment of the transmission rate is achieved by inserting a padding unit between data messages.
  • The first adjusting unit includes: a first adjusting module configured to insert or delete, according to the bandwidth to be adjusted, a preset pad code block between two non-idle units; the preset pad code block is indicated by a code block type field and is used to adapt the difference between the bandwidth of the network device's upstream transmission channel and the bandwidth of its downstream transmission channel.
  • The embodiment of the present invention specifically describes that the padding unit inserted or deleted is a preset pad code block, which increases the implementability of the embodiment of the present invention.
  • The first adjusting unit includes: a second adjusting module configured to insert or delete, according to the bandwidth to be adjusted, a typical idle code block between two non-idle units; the typical idle code block is indicated by a code block type field and is used to adapt the difference between the bandwidth of the network device's upstream transmission channel and the bandwidth of its downstream transmission channel.
  • The embodiment of the present invention specifically describes that the padding unit inserted or deleted is a typical idle code block, which increases the implementability of the embodiment of the present invention.
  • The network device further includes: a third adjusting unit configured to insert or delete a padding unit in the target data stream according to the required rate-adaptation rate difference, where the inserted or deleted padding unit is used for rate adaptation, and the rate-adaptation rate difference is less than the bandwidth difference.
  • The embodiment of the invention describes inserting or deleting a padding unit when the transmission rate is finely adjusted, which enriches the rate adjustment modes of the embodiment of the invention.
  • The network device further includes: a processing unit configured to delete the padding units and send the remaining data units to the next network device or user equipment.
  • The embodiment of the present invention describes deleting all the padding units and idle units and transmitting only the data units to the next device, which increases the implementability and operability of the embodiment of the present invention.
  • A third aspect of the embodiments of the present invention provides a network device, including: an input interface, an output interface, a processor, a memory, and a bus, where the input interface, the output interface, the processor, and the memory are connected through the bus. The input interface is for connecting an upstream device and obtaining input; the output interface is for connecting a downstream device and outputting results; the memory is for storing the received data stream and a rate-adjustment program; the processor calls the rate-adjustment program instructions from the memory and executes them, such that the network device performs the method for adjusting a transmission rate described in any implementation of the first aspect through the fifth implementation of the first aspect.
  • The network device acquires the target data stream, where the target data stream includes the first data packet, and the first data packet includes at least two non-idle units; when bandwidth adjustment is needed, a padding unit is inserted or deleted between any two non-idle units according to the bandwidth to be adjusted, and the padding unit is used to adapt the difference between the bandwidth of the upstream transmission channel of the network device and the bandwidth of the downstream transmission channel.
  • The embodiment of the invention can support fast adjustment of the transmission rates of the transmission channels on the upstream and downstream interfaces of a network node, so that the service can adapt to the difference between the node's upstream and downstream transmission channel rates, reducing the network node's data buffering, the network node's processing delay, and the service transmission delay.
  • FIG. 1 is a schematic diagram of an IEEE 802.3 idle addition and deletion mechanism
  • FIG. 2 is a schematic diagram of a network architecture according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an embodiment of a method for adjusting a transmission rate according to an embodiment of the present invention
  • FIG. 4 is a schematic structural diagram of a code block of a 64/66b code according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a typical idle code block according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a preset pad code block according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of inserting a preset pad code into a data packet according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of inserting a typical idle code block into a data packet according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a control code block according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of a typical process of adjusting end-to-end service bandwidth according to an embodiment of the present invention.
  • FIG. 11 is a specific application scenario of adjusting bandwidth in an embodiment of the present invention.
  • FIG. 12 is another specific application scenario of adjusting bandwidth in an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of various physical interface options of a CPRI according to an embodiment of the present invention.
  • FIG. 14 is a schematic diagram of a separate networking of an ODUflex according to an embodiment of the present invention.
  • FIG. 15 is a schematic diagram of a hybrid networking of FlexE and OTN according to an embodiment of the present invention.
  • FIG. 16 is a schematic diagram of an embodiment of a network device according to an embodiment of the present invention.
  • FIG. 17 is a schematic diagram of another embodiment of a network device according to an embodiment of the present invention.
  • FIG. 18 is a schematic diagram of another embodiment of a network device according to an embodiment of the present invention.
  • An embodiment of the present invention provides a method for adjusting a transmission rate, configured to perform effective idle addition and deletion rate adjustment at a network node according to the difference between the transmission channel rates upstream and downstream of a service data flow, so as to adapt to that rate difference. Especially when the end-to-end transmission bandwidth of a service is rapidly adjusted and there is a large rate difference between the upstream and downstream transmission channels, the network node's data buffering, the network node's processing delay, and the end-to-end service transmission delay are reduced.
  • the network node (i.e., the network device)
  • Within a data packet, the data stream contains data code blocks between which idle units are not allowed to exist, and there is no padding unit.
  • The network node needs to buffer the data stream containing the data code blocks, and can insert or delete idle padding units only between the data code block groups belonging to different data packets, which requires buffering almost a complete data packet.
  • The IEEE 802.3-based idle addition and deletion mechanism is only applicable to adjusting the transmission rate when a 64/66b-encoded Ethernet service data stream is adapted for transmission on an interface; it is not suitable for TDM services such as the Common Public Radio Interface (CPRI), SDH, and OTN, or for Ethernet services whose data streams use 64/66b encoding that is not 100GE-compatible (for example, GE services using 8b/10b encoding), when those streams are adapted for transmission.
  • CPRI typically has data streams in both 64/66b and 8b/10b encoding formats.
  • A CPRI service data stream in the 64/66b encoding format has only three 64/66b code block types: the superframe structure of length 16 × 256 × n bytes is encoded into start code blocks, data code blocks, and end code blocks.
  • For example, CPRI Option 9 at 12165.12 Mbit/s uses 64/66b encoding.
  • Insertion and deletion of idle code blocks is performed between superframes (analogous to between data packets), that is, between an end code block and the next start code block, to adjust the transmission rate.
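  • Under the existing mechanism, insertion is therefore legal only at a superframe boundary, i.e., where an end code block is followed by a start code block. A toy sketch (hypothetical single-character labels: 'S' start, 'D' data, 'T' end; real blocks are 66-bit words):

```python
def legal_idle_positions(stream):
    """Positions where IEEE 802.3-style idle insertion is allowed: only
    between an end ('T') block and the next start ('S') block."""
    return [i + 1 for i in range(len(stream) - 1)
            if stream[i] == 'T' and stream[i + 1] == 'S']

# Two back-to-back CPRI superframes: only one legal insertion point.
frames = ['S', 'D', 'D', 'T', 'S', 'D', 'D', 'T']
print(legal_idle_positions(frames))   # [4]
```

A stream consisting only of data code blocks, as with unencoded SDH/OTN TDM services, has no such boundary at all, which is why the existing mechanism cannot be applied to it.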
  • the CPRI data stream of the 8b/10b encoding format needs to be converted into 64/66b encoding.
  • When a TDM service data stream in SDH, OTN, or another unencoded format is converted into 64/66b encoding, there may be only one code block type, the data code block; that is, there is no start code block or end code block, so it is impossible to use the existing IEEE 802.3 idle add/delete mechanism to insert or delete idle code blocks between an end code block and a start code block, or to perform idle addition and deletion on idle bytes, to implement rate adjustment.
  • FIG. 1 is a schematic diagram of an idle addition and deletion mechanism of IEEE 802.3.
  • In FIG. 1, the output rate (OR) of the transmission channel downstream of the node is 100 Gbps, and the input rate (IR) of the transmission channel upstream of the node is 5 Gbps.
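  • For these FIG. 1 numbers, the scale of the mismatch is easy to quantify (a back-of-the-envelope sketch ignoring the ±100 ppm tolerance): carrying a 5 Gbps input on a 100 Gbps output means 95% of the output capacity must be filled with idle, i.e., 19 idle blocks per data block on average:

```python
IR = 5e9     # input rate, upstream transmission channel (bps)
OR = 100e9   # output rate, downstream transmission channel (bps)

idle_fraction = 1 - IR / OR          # share of output capacity filled with idle
idles_per_data_block = OR / IR - 1   # idle blocks emitted per data block, on average

print(f"idle fraction of output: {idle_fraction:.0%}")            # 95%
print(f"idle blocks per data block: {idles_per_data_block:.0f}")  # 19
```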
  • The ODUflex bandwidth adjustment of OTN emphasizes lossless adjustment of the service. Therefore, when the buffering capacity of a network node is fixed, the speed of adjusting the ODUflex bandwidth capacity is limited, and the bandwidth or rate of the ODUflex serving as the upstream and downstream bearer transmission channel of the service is increased or decreased in a very slow manner, which is complicated and time-consuming and is not conducive to rapid adjustment of the service bandwidth.
  • The embodiment of the present invention can be applied to FlexE of the Optical Internetworking Forum (OIF) and to OTN (G.709) of the ITU Telecommunication Standardization Sector (ITU-T), and can also be applied to byte-oriented services such as SDH/OTN; whatever the service type, the adjustment of the service's transmission rate onto the interface is implemented.
  • The former two currently use the IEEE 802.3 idle addition and deletion mechanism, which in OTN is also called the IMP mechanism.
  • the IMP includes a description of the service transmission rate adjustment and the mapping processing from the service to the ODUflex.
  • The lossless bandwidth adjustment of ODUflex is currently a slow, small-slope adjustment.
  • ODUflex can thus support large-slope, stepwise, fast lossless bandwidth adjustment, including both increases and decreases.
  • OIF's FlexE currently supports stepwise bandwidth adjustment but lacks lossless adjustment capability; with limited node buffering capability, this may cause service damage, including packet loss and service interruption.
  • There is no large data buffering requirement on the node, which greatly reduces the node's complexity requirements, and lossless step adjustment of the service bandwidth can be supported.
  • The embodiment of the present invention can be applied to the network architecture shown in FIG. 2, in which the source customer equipment (CE, Customer Equipment) sends a FlexE Client service including data packets to the source network device (PE, Provider).
  • the PE is the network node.
  • The network node processes and forwards the data stream containing the data packets of the FlexE Client service according to the method for adjusting the transmission rate in the embodiment of the present invention, so that the last CE obtains the FlexE Client service from the last PE.
  • The reverse direction of the FlexE Client service, from the last CE to the source CE, follows the same principle as the forward direction and is not described again.
  • The data structure of the minimum unit in the embodiment of the present invention may be an encoded block, such as a 64/66b code block, a 128/130b code block, or an 8b/10b code block, or an unencoded structure of bytes carrying data or non-data information, or a combination thereof.
  • A 64/66b code block explicitly indicates, by its synchronization header type and code block type, the type combination of the 8 bytes it encodes, that is, whether they are all data bytes or not all data bytes.
  • The byte combination structure can be a combination of n bytes; when n equals 1, it is a single byte.
  • For non-data bytes, the actual content of the byte distinguishes idle bytes from other types of non-data bytes, which are also known as control bytes.
  • For example, on the XGMII, accompanying control information indicates the type of each of the four bytes transferred.
  • An eight-byte unit is used, for example, on the 100GE Ethernet CGMII interface.
  • the present invention does not limit the data structure of the above minimum unit. For ease of understanding, in the following embodiments, the data structure unit having the 64/66b code block as the minimum unit will be described.
  • The embodiment of the present invention provides a method for adjusting a transmission rate and a network device based on the method. The method includes: a network node acquires a target data stream, where the target data stream includes a first data packet and the first data packet includes at least two non-idle units; when bandwidth adjustment is required, according to the rate difference between the node's upstream and downstream transmission channels for the service, a padding unit is inserted or deleted as needed between any two non-idle units in the service data unit sequence stream, so that the network node matches the transmission rate of the traffic passing through its upstream transmission channel to the downstream transmission channel.
  • a bearer network includes multiple network nodes, such as a head end network node, an intermediate network node, and an end network node.
  • When end-to-end service bandwidth is increased, the bandwidth of the downstream transmission channel of the end network node in the bearer network is increased first, and then the bandwidth of the upstream transmission channel of the end network node is increased. After the end network node completes its adjustment, the bearer-capacity bandwidths of the upstream and downstream transmission channels of the service are adjusted node by node toward the head end on the remaining network nodes in the bearer network, until the bandwidths of the service's upstream and downstream transmission channels on each node are almost the same, allowing a difference of +/- 100 ppm.
  • When bandwidth is reduced, the bearer capacity of the upstream transmission channel at the head-end network node of the bearer network is reduced first, and then the bearer-capacity bandwidth of the downstream transmission channel of the service is reduced. After the head-end network node completes its adjustment, the bearer-capacity bandwidths of the upstream and downstream transmission channels of the service are adjusted node by node on the remaining network nodes in the bearer network, until the bandwidths of the upstream and downstream transmission pipelines on each node are almost consistent, again allowing a difference of +/- 100 ppm. Both cases temporarily leave the network node's downstream pipeline bandwidth for the service larger than its upstream pipeline bandwidth.
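  • The ordering described above can be summarized as a procedure over the nodes of the service path (the helper below is a hypothetical sketch, not an interface defined by the application): increases propagate from the end node toward the head end, widening each node's downstream channel before its upstream channel; decreases propagate from the head end toward the end node, narrowing upstream before downstream. In both cases a node's downstream bandwidth temporarily exceeds its upstream bandwidth:

```python
def adjustment_order(nodes, increase):
    """Return (node, channel) adjustment steps for a bandwidth change.

    For an increase, start at the end node and move toward the head end,
    widening each node's downstream channel before its upstream channel.
    For a decrease, start at the head-end node and move toward the end,
    narrowing upstream before downstream. Either way, downstream bandwidth
    is temporarily >= upstream bandwidth at every node.
    """
    steps = []
    ordered = reversed(nodes) if increase else nodes
    for node in ordered:
        first, second = ("downstream", "upstream") if increase else ("upstream", "downstream")
        steps.append((node, first))
        steps.append((node, second))
    return steps

path = ["PE1", "P2", "PE3"]
print(adjustment_order(path, increase=True)[:2])   # start at PE3, downstream first
print(adjustment_order(path, increase=False)[:2])  # start at PE1, upstream first
```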
  • The bandwidth of the transmission pipeline on each interface inside the bearer network can also be adjusted through centralized control. Because network nodes receive the control signal at different times, the bandwidth of the service's upstream pipeline may temporarily be larger than that of the downstream pipeline, or temporarily smaller, and this bandwidth difference can be relatively large.
• the embodiment of the present invention can adjust the bandwidth difference between the upstream and downstream transmission channels of each node from the perspective of the entire bearer network, that is, from the end-to-end service, and can also be described from the perspective of a single network node in the bearer network adjusting the bandwidth of the transmission channels on its upstream and downstream interfaces.
• the upstream and downstream transmission channels of the network node may be an ODUflex in an OTN interface, a time-slot combination of a FlexE flexible Ethernet interface, a native CPRI interface serving as a single transmission channel, a native Ethernet interface serving as a single transmission channel, and so on; the possibilities are not exhaustively listed here.
  • the transmission channel/network interface may also be an ODUk/OTN interface, a VC container/SDH interface, a traditional Ethernet Eth, an IB, an FC interface, and the like as a single transmission channel, which is not limited herein.
• the following describes, from the network side, the bandwidth increase and decrease adjustment of the end-to-end transmission channel connection of the service; the idle insertion and deletion performed on the service by each node; the adjustment of the transmission rate; and the adaptation to the rate and bandwidth differences between the upstream and downstream transmission channels of the service.
  • an embodiment of the method for adjusting the transmission rate in the embodiment of the present invention includes:
  • the source node acquires a target data stream, where the target data stream includes a first data packet, where the first data packet includes at least two non-idle code blocks.
• the source node obtains a target data stream, where the target data stream includes a first data packet carrying useful information.
• the transmission channel for the target data stream of the service may be, for example, a time-slot combination on the FlexE interface.
• the useful information includes data code blocks, a start code block, an end code block, and so on; the data code blocks, the start code block, and the end code block are non-idle code blocks and cannot be inserted or deleted arbitrarily.
  • the first data message may include a start code block for indicating the start of the data message and an end code block for indicating the end of the data message.
  • the data message may also include a plurality of data code blocks carrying service information, the data code blocks being located between the start code block and the end code block. As shown in FIG.
  • the type of the start code block may be 0x78
  • the type of the end code block may include eight types
  • the eight types of code block indicating the end of the message include 0x87, 0x99, 0xAA, 0xB4, 0xCC, 0xD2, 0xE1, and 0xFF.
  • the data message can include a start code block of type 0x78, and an end code block of any of type 0x87, 0x99, 0xAA, 0xB4, 0xCC, 0xD2, 0xE1, and 0xFF.
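As a minimal sketch (the helper names and categories are illustrative, not from the patent; the block-type values match those listed above and IEEE 802.3 Clause 82), the code blocks of the scheme can be classified as follows:

```python
# Sketch: classifying 64/66b code blocks as used in this scheme.
# Block-type field values are those listed in the text above.

START_TYPE = 0x78                       # start-of-packet control block
END_TYPES = {0x87, 0x99, 0xAA, 0xB4,    # the eight terminate control block
             0xCC, 0xD2, 0xE1, 0xFF}    # types that end a data message
IDLE_TYPE = 0x1E                        # all-idle control block (C0..C7)

def classify(block_type_field, is_data_block):
    """Return a coarse category for a 64/66b code block."""
    if is_data_block:                   # sync header 0b01: pure data block
        return "data"
    if block_type_field == START_TYPE:
        return "start"
    if block_type_field in END_TYPES:
        return "end"
    if block_type_field == IDLE_TYPE:
        return "idle"
    return "other-control"

# Start, end and data blocks are all non-idle and must never be added
# or removed; only idle-type padding may be inserted or deleted.
assert classify(0x78, False) == "start"
assert classify(0xD2, False) == "end"
assert classify(None, True) == "data"
assert classify(0x1E, False) == "idle"
```

A data message is thus one `start` block, some `data` blocks, and one `end` block of any of the eight terminate types.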
• the source node inserts or deletes padding units in the target data stream according to the bandwidth rate difference between the transmission channels on the upstream and downstream interfaces of the node, and sends the resulting data stream to the intermediate node through the transmission channel of the downstream interface.
  • the bandwidth rate difference between the upstream and downstream transmission channels of the source node can be large.
• when the downstream network interface rate of the network node is greater than the upstream network interface rate and the rate difference between the upstream and downstream interfaces is greater than a certain threshold, for example when the rate of the downstream transmission channel is greater than the rate of the upstream transmission channel, the network node receives the target data stream from the upstream interface and, as required for rate adaptation (the difference between the downstream interface rate and the upstream interface rate), stuffs padding units into the stream and sends it downstream.
  • the padding unit may be located between data code blocks in the data message or between code blocks between the data message and the data message, and the padding unit is adapted to adjust the rate difference between the upstream and downstream interfaces of the network node. That is, after the padding unit is inserted, the rate difference between the traffic channel on the upstream and downstream interfaces of the service and the network node is respectively eliminated, and the service can respectively match the rate of the transmission channel on the upstream and downstream interfaces of the service. For example, when the rate of the downstream interface is 5 Gbps greater than the rate of the upstream interface, the padding unit is inserted so that the inserted padding unit can adapt the rate difference of 5 Gbps between the upstream and downstream transmission channels of the service on the node.
• when the downstream network interface rate of the network node is lower than the upstream network interface rate and the rate difference between the upstream and downstream interfaces is greater than a certain threshold, the network node receives the target data stream from the upstream interface, deletes a certain number of padding units to adapt to the difference between the upstream and downstream rates, and sends the stream downstream. For example, when the rate of the downstream interface is 5 Gbps lower than the rate of the upstream interface, padding units amounting to 5 Gbps are deleted, so that the resulting stream matches the rate of the downstream interface.
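The insert-or-delete decision above follows directly from the rate difference. A hedged sketch (the function name and the 66-bit-per-block assumption are illustrative, not from the patent):

```python
# Sketch: how many padding units per second a node must insert (positive
# result) or delete (negative result) so the stream matches the downstream
# channel rate. A 64/66b code block occupies 66 bits on the line.

BLOCK_BITS = 66

def padding_blocks_per_second(upstream_bps, downstream_bps):
    """>0: blocks to insert per second; <0: blocks to delete per second."""
    return (downstream_bps - upstream_bps) / BLOCK_BITS

# Downstream 5 Gbps faster than upstream: padding must be inserted.
delta = padding_blocks_per_second(10e9, 15e9)
assert delta > 0
# Downstream 5 Gbps slower: the same amount of padding must be deleted.
assert padding_blocks_per_second(15e9, 10e9) == -delta
```

In practice a node would apply this continuously via buffer fill level rather than a precomputed rate, but the sign convention is the same.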
  • the padding unit inserted by the source node may be a preset padding block or a typical idle code block.
• a typical idle code block is an idle data structure unit known from IEEE 802.3, that is, a known idle code block or a known combination of idle characters, while a preset padding block is a code block containing an identification field that distinguishes it from the known idle structures.
• the preset padding block and the typical idle code block play the same role and can both be used for rate adjustment. In practical applications, the processing measures taken may differ according to the situation. In this embodiment, the known idle structure, that is, the typical idle code block, is taken as an example for description.
• a preset padding block is inserted between two non-idle code blocks within a data packet, and a typical idle code block is inserted between two data packets. A node can then directly delete any preset padding block, and can delete inserted typical idle code blocks according to the surrounding code blocks, so as to ensure that no inserted padding remains between the data code blocks.
  • the code block structure of a typical idle code block is as shown in FIG. 5.
• the code block type of a typical idle code block may be at least one of the first four 0x1E code block types in FIG. 5, or a repetition of the fifth type in FIG. 5, the command-sequence (ordered-set) code block of type 0x4B. The preset padding block may be any one of the three code block types shown in FIG. 6; other structural forms are not listed one by one.
  • the three code block types of Figure 6 contain preset identification fields that are different from the fields in the typical idle code block of Figure 5.
• a typical idle code block can be inserted at any position, including between two non-idle code blocks in a data message or between two data messages.
  • the inserted typical idle code block may be the first four known idle code blocks shown in FIG. 5 or the repeated fifth known idle code blocks.
• correspondingly, when padding units are deleted, a typical idle code block at any position, including a typical idle code block between two data messages, can be arbitrarily deleted.
• the transmission channel bandwidth rate on the downstream interface of the source node may be greater than that on the upstream interface; the source node then inserts a certain number of padding units into the target data stream to compensate for the bandwidth rate difference between the transmission channels of the service on the upstream and downstream interfaces, so that the service matches the bandwidth rate of the transmission channel on each of the node's interfaces.
• the transmission channel bandwidth rate on the downstream interface of the source node may also be smaller than that on the upstream interface; the source node then deletes a certain number of padding units from the target data stream to compensate for the bandwidth rate difference between the transmission channels on the upstream and downstream interfaces, so that the service matches the bandwidth rate of the transmission channel on each of the node's interfaces.
• the intermediate node inserts or deletes padding units in the target data stream according to the bandwidth rate difference between the transmission channels on its upstream and downstream interfaces, and sends the resulting stream to the next intermediate node or the last node through the downstream transmission channel.
  • the bandwidth of the transmission channel on the upstream and downstream interfaces of the intermediate node varies greatly.
  • the process of the intermediate node is similar to the step 302, and details are not described herein again.
• when the padding unit is a preset padding block, the process of the intermediate node inserting the preset padding block between the data code blocks is as shown in FIG. After receiving the data stream containing the data packet, the intermediate node inserts a preset padding block between the non-idle code blocks in the data packet.
• the process of the intermediate node inserting a typical idle code block between the non-idle code blocks is as shown in FIG. 8: after the intermediate node receives the data stream containing the data message, a typical idle code block is inserted between non-idle code blocks in the data message.
• when the downstream interface rate of the intermediate node is greater than the upstream interface rate, the intermediate node determines the code block type of the padding unit to be inserted according to the type of the last transmitted code block. For example, when the last transmitted code block is a data code block, the intermediate node inserts a preset padding block, whose specific structure is as shown in FIG. 6; when the last transmitted code block is a non-data code block, the intermediate node inserts a command-word code block consistent with that last transmitted code block type. When the downstream interface rate of the intermediate node is lower than the upstream interface rate, the intermediate node deletes deletable units matching the specific difference between the upstream and downstream rates; a deletable unit may be a previously inserted padding unit or an original known idle code block in the target data stream.
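The selection rule above — a preset padding block after a data code block, a matching command-word block after a non-data code block — can be sketched as follows (a hedged illustration; the function and category names are invented for this sketch, not taken from the patent):

```python
# Sketch of the padding-type selection rule: what an intermediate node
# inserts depends on the kind of the last code block it transmitted.

def padding_after(last_block_kind):
    """Return the kind of padding unit to insert next."""
    if last_block_kind == "data":
        # Inside a run of data blocks: a preset padding block (FIG. 6
        # style), recognizable by its identifier field, deletable anywhere.
        return "preset-padding-block"
    # After a control/command code block: repeat a command-word block
    # consistent with that block type.
    return "matching-command-block"

assert padding_after("data") == "preset-padding-block"
assert padding_after("ordered-set") == "matching-command-block"
```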
  • the last node inserts or deletes the padding unit into the target data stream according to the difference between the upstream and downstream interface rates, and sends the service to the target client node according to a data format acceptable to the downstream target client node.
  • the last node receives the target data stream from the intermediate node.
• when the transmission rate of the service is adjusted by the idle insertion and deletion operations involved in the present invention, the data format constraints acceptable to the downstream target client node must also be taken into account. For example, for a 64/66b-encoded CPRI service, all padding units inserted in the target data stream need to be deleted, including preset padding blocks and known typical idle code blocks. As another example, if the target client node can only receive the service data stream over a traditional Ethernet interface used as a single transmission channel, only existing typical idle code blocks are allowed as padding units and they must be located between data messages; any padding unit located between non-idle code blocks within a data message must be removed.
• this ensures the target data stream meets the requirements under which the target client device can receive the service data.
• if the target client device has no special data format requirement and can receive the output format of the intermediate nodes involved in the present invention, the last node can likewise simply insert or delete padding units in the target data stream as needed to adapt to the bandwidth rate difference of the transmission channel on its downstream interface, with no special handling required.
• after a network node receives the target data stream, it inserts or deletes padding units in the target data stream, adjusting the transmission rate of the service to adapt to the bandwidth rate difference between the transmission channels on its upstream and downstream interfaces, while reducing the data caching of the network node, the processing delay of the network node, and the end-to-end transmission delay of the service.
• FIG. 9 shows the byte-form control characters involved in the 802.3 40GE/100GE service, in carrying an Ethernet service over FlexE, and in carrying an Ethernet service over OTN.
  • the characters of other FC and IB services are similar.
  • the IDLE/LPI character byte corresponds to C0-C7 in the code block of type 0x1E, that is, 8 consecutive idle bytes.
  • the addition and deletion object is an existing typical idle byte or a preset idle byte, and other granular data units are not described again.
• the last node can directly delete preset padding blocks, ensuring that no preset padding block inserted between non-idle code blocks remains. If an inserted padding unit is not a preset padding block, the last node deletes inserted typical idle code blocks according to the surrounding code blocks: for example, if a typical idle code block appears between a preceding start code block and the following end code block, it is determined to be inside a data message, which is idle that does not meet the requirements of the existing protocol, and it should be deleted.
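A minimal sketch of this context-based cleanup, assuming the stream has already been classified into coarse block kinds (the marker names are illustrative, not from the patent):

```python
# Sketch: context-based deletion at the last node. A typical idle block
# between a start block and its end block is inside a data packet and must
# be removed; preset padding blocks are recognizable directly and are
# always deletable.

def strip_in_packet_idles(blocks):
    """blocks: list of 'start'/'data'/'end'/'idle'/'pad' markers."""
    out, in_packet = [], False
    for b in blocks:
        if b == "start":
            in_packet = True
        elif b == "end":
            in_packet = False
        if b == "pad":
            continue                  # preset padding: delete directly
        if b == "idle" and in_packet:
            continue                  # idle inside a packet: delete
        out.append(b)
    return out

stream = ["idle", "start", "data", "pad", "idle", "data", "end", "idle"]
assert strip_in_packet_idles(stream) == \
    ["idle", "start", "data", "data", "end", "idle"]
```

Note that idles between packets are kept, matching the traditional Ethernet constraint described above.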
• although the service sent from the source CE to the last CE is described here as passing through multiple PEs, it may also pass through only one source PE and one end PE, with no intermediate node.
  • the processing steps are similar to the foregoing embodiments, and details are not described herein again.
• the following is an embodiment of adjusting the bandwidth of the end-to-end transmission channel connection provided for the service by the bearer network, that is, adjusting the bandwidth of the service's end-to-end transmission channel connection as needed; during such an adjustment, the bandwidth of the downstream pipeline of the service is temporarily larger than the bandwidth of the upstream pipeline.
  • the embodiment of the present invention can also be applied to a typical process scenario of bandwidth adjustment of the end-to-end service transmission channel connection as shown in FIG.
  • the end-to-end service bandwidth adjustment may include two scenarios of bandwidth increase and bandwidth reduction.
• when bandwidth adjustment is required, all network nodes from the source node to the last node need to perform bandwidth adjustment negotiation to implement end-to-end service bandwidth adjustment. For example, the source node initiates a bandwidth adjustment request and forwards it downstream hop by hop until the end node receives the request and replies with a bandwidth adjustment response toward the upstream; once the source node receives the bandwidth adjustment response, it is determined that the bandwidth adjustment action can be performed.
  • the bandwidth of the transmission channel is sequentially increased from the end node to the source node, that is, from the downstream to the upstream.
• when the bandwidth is reduced, the bandwidth of the transmission channel is decreased sequentially from the source node to the end node, that is, in the upstream-to-downstream direction. Therefore, during bandwidth adjustment the bandwidth of the upstream transmission channel may be smaller than that of the downstream transmission channel, or equivalently the downstream bandwidth temporarily larger than the upstream bandwidth, which does not cause problems such as accumulation or loss of valid message data on the intermediate nodes, so the service is not damaged.
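The ordering rule above can be sketched as follows (a hedged illustration; the hop labels are invented for the example):

```python
# Sketch: the order in which per-hop pipes are adjusted. When increasing,
# pipes are enlarged from the end node back toward the source; when
# decreasing, from the source toward the end. Either way, a service's
# upstream pipe is never wider than a not-yet-adjusted downstream pipe,
# so no node accumulates packet data.

def adjustment_order(hops, increase):
    """hops: source-to-end list of per-hop pipes; returns adjust order."""
    return list(reversed(hops)) if increase else list(hops)

hops = ["CE-PE1", "PE1-PE2", "PE2-PE3", "PE3-PE4", "PE4-CE"]
assert adjustment_order(hops, increase=True)[0] == "PE4-CE"   # last hop first
assert adjustment_order(hops, increase=False)[0] == "CE-PE1"  # first hop first
```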
  • the specific application scenario is as shown in Figure 11.
• the Ethernet service increases its service bandwidth from 10 Gbps to 50 Gbps.
  • the current Ethernet service flow bandwidth is 10 Gbps, which is equivalent to the traditional 10GE.
• from the source CE to PE1, the pipe bandwidth is the sum of two 5 Gbps time-slot bandwidths of the flexible Ethernet, and the pipe bandwidth from PE1 to PE2 is likewise that of two 5 Gbps time slots of the flexible Ethernet.
• the pipe bandwidths from PE2 to PE3, PE3 to PE4, and PE4 to the end CE are also those of two 5 Gbps time slots of the flexible Ethernet.
  • the upstream and downstream pipeline bandwidth may have a small difference of +/-100ppm.
• the source CE first requests the network, through its first-hop interface, to increase the bandwidth of the first-hop transmission channel.
• the management (overhead processing, OHPU) unit of the source CE can send the request through the signaling channel of the first-hop interface; the PE1 device receives the request and delivers it downstream to PE2.
• the request is forwarded hop by hop until it reaches the egress PE4, and the egress PE4 forwards the service bandwidth increase request to the corresponding CE.
• after the last CE agrees to the increase and returns an acknowledgment response, the egress PE4 directly adjusts the pipe bandwidth of the last hop, from PE4 to the last CE, from 2 to 10 of the 5 Gbps slots, so the last-hop pipeline from the egress PE4 to the last CE implements a stepped bandwidth adjustment.
• at this time the upstream pipeline bandwidth of the service is 10 Gbps and the downstream pipeline bandwidth is 50 Gbps, a considerable difference.
  • the egress PE4 needs to adjust the rate of the service according to the embodiment of FIG. 3.
• alternatively, the egress PE4 can perform rate adjustment according to the IEEE 802.3 idle addition and deletion mechanism, but that mechanism is constrained by rules on where idles may be added or deleted, which requires the egress PE4 to be designed to support a larger data caching capability.
• at this point the egress PE4 knows that the final pipe adjustment has completed successfully. If there is no other factor, the egress PE4 can respond to the bandwidth increase request of the upstream node PE3. After the acknowledgment arrives at PE3, PE3 can adjust the service pipeline from PE3 to PE4: PE3 changes the number of downstream pipeline slots of the service from 2 to 10, and the pipeline bandwidth changes from 10 Gbps to 50 Gbps. The egress PE4 then receives data from an upstream pipeline of 10 slots and 50 Gbps. At this time, the upstream and downstream service pipelines of the egress PE4 both have 10 slots and 50 Gbps of bandwidth, although the upstream and downstream pipeline bandwidths can still have small differences of +/- 100 ppm.
  • the PE4 can still perform rate adjustment according to the embodiment of FIG. 3, and can also perform rate adjustment according to the IEEE 802.3 idle addition and deletion mechanism.
• now the upstream pipeline of PE3 is 10 Gbps and the downstream is 50 Gbps, a large difference, so the rate adjustment of the service needs to be performed according to the embodiment of FIG. 3.
• PE3 can then make an acknowledgment response to the bandwidth increase request of the upstream node PE2.
  • PE2 can adjust the service pipe of PE2 to PE3.
  • PE2 changes the number of downstream pipe slots of the service from 2 to 10, and the pipe bandwidth changes from 10 Gbps to 50 Gbps.
  • PE3 collects data from 10 time slots of 50 Gbps pipes.
• at this time, the upstream and downstream service pipelines of PE3 both have 10 slots and 50 Gbps of bandwidth, though the upstream and downstream pipeline bandwidths may still differ slightly, by +/- 100 ppm.
  • the PE3 can still perform rate adjustment according to the embodiment of FIG. 3, and can also perform rate adjustment according to the IEEE 802.3 idle addition and deletion mechanism.
• likewise, the bandwidth of the upstream pipeline of PE2 is 10 Gbps while the bandwidth of its downstream pipeline is 50 Gbps, so the rate adjustment of the service needs to be performed according to the embodiment of FIG. 3.
• in this way, the pipeline between each pair of devices is adjusted hop by hop from downstream to upstream to achieve the bandwidth increase.
• after the ingress PE1 learns that the bandwidth of the downstream pipelines has been properly adjusted, there is no longer any bottleneck downstream; that is, the bottleneck is only from the source CE to the ingress PE1. In this case, the ingress PE1 can respond to the bandwidth increase request of the source CE. After receiving the response, the source CE can immediately adjust the bandwidth of its downstream service pipe toward the egress PE4 to 50 Gbps, completing the on-demand adjustment of the end-to-end service pipe bandwidth.
• when the source CE sends data packets, it automatically sends the packet data at the downstream pipeline bandwidth rate; when there are no packets to send, it sends known idle structure units. Thus even when the bandwidth of the downstream port changes, the sending of packets is unaffected and no additional rate adjustment is needed.
  • the specific application scenario is as shown in Figure 11.
• the Ethernet service reduces its service bandwidth from 50 Gbps to 10 Gbps.
  • the current Ethernet service flow bandwidth is 50 Gbps, which is equivalent to the traditional 50GE.
• from the source CE to PE1, the pipe bandwidth is the sum of ten 5 Gbps time-slot bandwidths of the flexible Ethernet, and the pipe bandwidth from PE1 to PE2 is likewise that of ten 5 Gbps time slots of the flexible Ethernet.
• the pipe bandwidths from PE2 to PE3, PE3 to PE4, and PE4 to the end CE are also those of ten 5 Gbps time slots of the flexible Ethernet.
  • the upstream and downstream pipeline bandwidth may have a small difference of +/-100ppm.
• the source CE first requests the network, through its first-hop interface, to reduce the bandwidth of the first-hop transmission channel.
• the management unit of the source CE can send the request through the signaling channel of the first-hop interface, and the ingress PE1 device receives the request; if there is no special case, the ingress PE1 can return an acknowledgment response.
• the source CE then directly adjusts the pipe bandwidth of the first hop from 10 time slots to 2 time slots, the bandwidth changing from 50 Gbps to 10 Gbps, so that the first-hop pipe from the source CE to the ingress PE1 implements a stepped bandwidth adjustment.
• when the source CE sends data packets, it automatically sends the packet data at the downstream pipe bandwidth rate; when there are no packets to send, it sends known idle structure units, so no additional rate adjustment is needed even when the bandwidth of the downstream port changes.
  • the upstream pipeline bandwidth of the service is 10 Gbps, and the downstream pipeline bandwidth is 50 Gbps, which has a large difference.
  • the ingress PE1 needs to adjust the rate of the service according to the embodiment of FIG. 3.
• the ingress PE1 learns that the pipeline from the source CE to the ingress PE1 has been successfully adjusted. If there is no other factor, PE1 can send a service bandwidth reduction request to the next node PE2. After the request arrives at PE2, if there is no special case, PE2 can return an acknowledgment to PE1.
  • PE1 adjusts the downstream service pipeline of PE1.
• PE1 changes the number of its downstream pipeline slots from 10 to 2, and the downstream pipeline bandwidth from 50 Gbps to 10 Gbps.
• PE2 then receives data from an upstream pipeline of 2 slots and 10 Gbps.
• at this time, the upstream and downstream service pipelines of PE1 both have 2 slots and 10 Gbps of bandwidth, though the upstream and downstream pipeline bandwidths may still differ slightly, by +/- 100 ppm.
• PE1 can still perform rate adjustment according to the embodiment of FIG. 3, or according to the IEEE 802.3 idle addition and deletion mechanism.
• now the upstream pipeline of PE2 is 10 Gbps and the downstream is 50 Gbps, a large difference, so the rate adjustment of the service needs to be performed according to the embodiment of FIG. 3.
• in this way, the pipeline between each pair of devices is adjusted hop by hop from upstream to downstream to achieve the bandwidth reduction.
• after the adjustment reaches the egress PE4, the downstream last CE can be requested to adjust.
• during this process each node needs to perform rate adjustment according to the embodiment of FIG. 3; alternatively, the egress PE4 can perform rate adjustment according to the IEEE 802.3 idle addition and deletion mechanism, which is constrained by rules and therefore requires the egress PE4 to be designed to support a larger data caching capability.
  • a certain network node cannot recognize the inserted padding unit, such as a conventional network device.
• it is assumed here that the last CE cannot identify the padding units of the embodiment of the present invention.
• the last-hop network device of the last CE, that is, the last PE, then needs to delete all the padding units in the data stream.
• all the padding units inserted in the target data stream need to be deleted, including preset padding blocks and known typical idle code blocks.
• if the target client node can only receive the service data stream over a traditional Ethernet interface used as a single transmission channel, only existing typical idle code blocks are allowed as padding units and they must be located between data packets.
• previously known idle code blocks inside data packets may also have to be deleted: every padding unit between non-idle code blocks in a data packet, whether a preset padding block or a known typical idle code block, must be removed.
  • the last PE can only send data packets to the last CE.
• the source CE may send padding units when no data packet is being sent, and a padding unit may be a preset padding block or a typical idle code block.
  • the data stream received by the last PE may include a data packet, and may also include a padding unit and the like.
  • each network device PE may also perform an IEEE 802.3 idle addition and deletion mechanism to implement rate adaptation.
  • this embodiment describes an example in which the bandwidth of the upstream pipeline of the service is temporarily larger than the bandwidth of the downstream pipeline.
  • the pipeline bandwidth adjustment inside the bearer network is performed through centralized control.
  • the bandwidth request is sent from the source CE to the ingress PE port, and the ingress PE forwards the request to the centralized controller.
• the centralized controller can control the adjustment of the pipes between all PEs simultaneously, both for increasing and for reducing bandwidth. Because the control signals are delivered to and arrive at the PEs at different times, the bandwidth adjustments of the pipes between the PEs may complete in an entirely arbitrary order.
• for example, the ingress CE requests the network to increase the bandwidth through the ingress PE; the ingress PE reports the request to the centralized controller of the network, which withholds any response to the source CE until it has ensured that the pipeline bandwidth between all downstream nodes has been increased, thereby ensuring that the bandwidth of valid data packets sent by the source CE never exceeds the bandwidth of any pipe whose adjustment has not yet completed.
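The gating rule just described reduces to a simple predicate: the controller answers the source CE only once every internal pipe reports completion. A hedged sketch (illustrative, not the patent's control protocol):

```python
# Sketch: under centralized control, the increase response to the source CE
# is granted only after every internal pipe has confirmed its adjustment,
# so the CE's effective traffic never exceeds the narrowest remaining pipe.

def may_grant_ce_increase(pipe_done_flags):
    """pipe_done_flags: one completion flag per internal pipe."""
    return all(pipe_done_flags)

# PE3's pipe not yet enlarged: the CE must keep waiting.
assert not may_grant_ce_increase([True, True, False, True, True])
# All pipes enlarged: the CE may now raise its sending rate.
assert may_grant_ce_increase([True] * 5)
```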
  • the centralized controller of the network sends instructions to all PEs.
• PE1, PE2, PE3, PE4, and PE5 each adjust the bandwidth of their downstream pipes, increasing the bandwidth. This mode allows all downstream pipes to be adjusted in parallel at the same time, so the time spent waiting for the other pipeline adjustments to succeed is relatively short and the overall adjustment time is shorter.
  • the bearer network bandwidth is increased from 10 Gbps to 50 Gbps.
  • the order of receiving instructions is: PE1, PE5, PE3, PE2, PE4.
• the entire adjustment process has seven phases; in some of them, for some network nodes, the upstream pipeline bandwidth is temporarily larger than the downstream pipeline bandwidth.
• the CE-PE1 pipeline is enlarged last; this effectively constrains the valid traffic entering the bearer network and ensures that no packet data accumulates in any downstream intermediate node.
  • the interface rates between different devices are shown in the following table:
  • the upstream pipeline bandwidth of PE2 is 50 Gbps, and for PE2, the upstream pipeline bandwidth is larger than the downstream pipeline bandwidth.
  • the effective traffic of the source CE at the network entrance must be less than 10 Gbps.
• PE1 performs rate adjustment on the service according to the embodiment of FIG. 3, inserting a large number of idle padding units to fill the pipe up to the 50 Gbps rate, and delivers the service to PE2.
• with an upstream pipeline bandwidth of 50 Gbps and a downstream pipeline bandwidth of 10 Gbps, PE2 performs rate adjustment by the method of the embodiment of FIG. 3 and deletes the excess inserted padding units. As long as the ingress bandwidth of the source CE has not increased, the effective service data flowing into the network is limited, and no accumulation of valid data occurs at any node in the bearer network.
• when bandwidth is reduced under centralized control, the bandwidth of the ingress pipeline of the source CE is reduced first, limiting the effective traffic, and the bandwidths of the downstream pipelines are then reduced simultaneously in parallel.
  • the embodiment of the present invention can also be applied to a service type in which no idle bytes exist in a service flow, including a CPRI service, an OTN service, and an SDH service.
• the uncoded rate is 20 x 491.52 Mbit/s = 9830.4 Mbit/s, so the service bandwidth is slightly less than that of 10GE; the ratio between the two rates is about 491.52:500. On the network interface, two 5 Gbps time slots provide approximately 10GE of bandwidth.
  • the code block stream of the CPRI rate option is as follows, and does not contain any idle code blocks.
• the end-to-end carrier pipe bandwidth is about 10 Gbps; even with a +/-100 ppm frequency difference, the overall pipe bandwidth is larger than the CPRI bandwidth. Therefore, at the ingress PE1, rate adjustment needs to be performed according to the embodiment of FIG. 3, mainly by inserting an appropriate amount of idle padding blocks; PE2, PE3, and so on also need to perform rate adjustment according to the embodiment of FIG. 3, reasonably adding and deleting padding code blocks to accommodate the +/-100 ppm rate differences between different pipes.
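The figures above can be checked arithmetically (the variable names are illustrative; the rates come from the text):

```python
# Arithmetic check: 64/66b CPRI with an uncoded rate of 20 x 491.52 Mbit/s
# carried over a two-slot 5 Gbps FlexE pipe of roughly 10GE bandwidth.

cpri_bps = 20 * 491.52e6      # uncoded CPRI payload rate: 9830.4 Mbit/s
pipe_bps = 2 * 5e9            # two 5 Gbps slots, about 10GE of bandwidth

assert cpri_bps == 9830.4e6
# The service-to-pipe rate ratio is 491.52:500, so the pipe is faster...
assert abs(cpri_bps / pipe_bps - 491.52 / 500) < 1e-12
# ...even at the worst-case -100 ppm pipe frequency offset, which is why
# idle padding blocks must be inserted at the ingress PE.
assert pipe_bps * (1 - 100e-6) > cpri_bps
```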
  • When the CPRI service uses 8b/10b encoding, for example CPRI line bit rate option 6,
  • it can first be converted into the same 64/66b code blocks as in FIG. 13 and then transmitted end to end through a pipe of 5 Gbps time slots.
  • The adjustment process uses the adjustment method described above for the CPRI service in the 64/66b encoding format, and details are not described herein again.
  • The data of OTN and SDH services is generally byte-oriented. By adding a sync header every 8 bytes and converting them into 64/66b data blocks, an OTN/SDH service can be converted into the following form, which contains only data code blocks and has no start code blocks or end code blocks. For a service flow containing only data code blocks, idle adjustment can be performed only by using the adjustment method of the embodiments of the present invention; the existing IEEE 802.3 idle addition and deletion mechanism is not applicable.
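The byte-to-block conversion described above can be sketched as follows: a minimal sketch assuming the data-block sync header value 0b01 from IEEE 802.3 64b/66b conventions; the function name and zero-padding of the tail are illustrative choices, not from the patent.

```python
# Sketch: wrapping a raw byte-oriented stream (OTN/SDH style) into 64/66b
# data blocks by prefixing a 2-bit sync header to every 8 payload bytes.
DATA_SYNC = 0b01   # sync header of an all-data 64/66b block (IEEE 802.3 convention)

def to_66b_data_blocks(payload: bytes):
    """Yield (sync_header, 8-byte word) pairs; pads the final word with zeros."""
    for i in range(0, len(payload), 8):
        word = payload[i:i + 8].ljust(8, b"\x00")
        yield DATA_SYNC, word

blocks = list(to_66b_data_blocks(bytes(range(20))))
assert len(blocks) == 3                       # 20 bytes -> 3 blocks
assert all(h == DATA_SYNC for h, _ in blocks)
```

Because every block produced this way is a data block, there is no start/end code block boundary at which the legacy idle mechanism could operate — which is exactly why the padding-unit mechanism is needed for such streams.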
  • the code block flow of the OTN/SDH service is as follows:
  • The network node may add or delete padding units according to the method of the embodiments of the present invention, based on the bandwidth of the downstream bearer pipe.
  • Alternatively, the SDH/OTN service may be converted, in combination with its frame format, into a data stream of another frame format similar to the 64/66b encoding format, and processed in the manner described above for the CPRI service; details are not repeated here.
  • FIG. 14 shows the case where the upstream and downstream pipes of the network nodes are ODUflex, and time slot groups of ODUflex and FlexE carry the service.
  • The upstream and downstream pipes of a service on a network node described in the embodiments of the present invention are pipes used to carry services between interfaces.
  • FIG. 14 shows ODUflex pipes in which the end-to-end service is carried entirely over OTN.
  • The network node switches the service between the pipes.
  • Rate adjustment can be performed by the method in the embodiments of the present invention, or by the IEEE 802.3 idle addition and deletion mechanism.
  • ODUflex can achieve step adjustment while ensuring that the service is not damaged.
  • The ODUflex can also be a common ODUk.
  • FIG. 15 shows hybrid networking of FlexE and OTN, where one network interface uses ODUflex, another is a group of flexible Ethernet interface time slots, and another is a native CPRI interface.
  • The network interface may also be an ODUk/OTN interface, an SDH interface, a traditional Ethernet interface, or the like, which is not limited herein.
  • FIG. 16 is a schematic structural diagram of a network device according to an embodiment of the present invention.
  • An embodiment of the network device in the embodiments of the present invention includes the following.
  • The network device 1600 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 1610 (for example, one or more processors), memory 1620, and one or more storage media 1630 (for example, one or more mass storage devices) storing an application 1633 or data 1632.
  • The memory 1620 and the storage medium 1630 may be short-term storage or persistent storage.
  • The program stored in the storage medium 1630 may include one or more modules (not shown), and each module may include a series of instruction operations on the server.
  • Network device 1600 may also include one or more power supplies 1640, one or more wired or wireless network interfaces 1650, one or more input/output interfaces 1660, and/or one or more operating systems 1631, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on. Those skilled in the art will understand that the network device structure shown in FIG. 16 does not constitute a limitation on the network device, which may include more or fewer components than shown, combine some components, or use a different component arrangement.
  • The memory 1620 can be used to store software programs and modules; the processor 1610 performs the various functional applications and data processing of the network device by running the software programs and modules stored in the memory 1620.
  • The memory 1620 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the network device (such as audio data or a phone book).
  • The memory 1620 can include high-speed random access memory and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • The program of the rate adjustment mechanism provided in the embodiments of the present invention and the received data stream are stored in the memory 1620 and are called from the memory 1620 when needed.
  • The processor 1610 is the control center of the network device and can adjust the transmission rate according to the configured adjustment mechanism.
  • The processor 1610 connects the various parts of the entire network device using various interfaces and lines, and performs the various functions of the network device and processes data by running or executing the software programs and/or modules stored in the memory 1620 and recalling the data stored in the memory 1620, thereby adjusting the transmission rate.
  • The processor 1610 can include one or more processing units.
  • The processor 1610 is configured to perform steps 301 to 304 in FIG. 3, and details are not described herein again.
  • The input interface 1650 and the output interface 1660 are used to control the external devices for input and output.
  • The processor 1610 is connected to the input interface 1650 and the output interface 1660 through the internal bus of the network device; the input interface 1650 and the output interface 1660 are connected to the upstream and downstream external devices respectively, finally realizing data transmission between the processor 1610 and the upstream and downstream external devices.
  • Data can be transmitted between different devices through the input interface 1650 and the output interface 1660, and the output rate of the service is quickly adjusted to adapt to bandwidth change demands as needed, reducing node data buffering, node processing delay, and end-to-end transmission delay.
  • FIG. 16 describes the network device in the embodiments of the present invention in detail from the perspective of hardware processing.
  • The network device in the embodiments of the present invention is described in detail below from the perspective of modular functional entities.
  • Referring to FIG. 17, in the embodiments of the present invention,
  • one embodiment of a network device includes:
  • the acquiring unit 1701, configured to acquire a target data stream, where the target data stream contains a first data packet, and the first data packet includes at least two non-idle units;
  • the first adjusting unit 1702, configured to, when bandwidth adjustment is needed, insert or delete padding units between any two non-idle units according to the bandwidth to be adjusted, where the padding units are used to adapt to the difference between the bandwidth of the upstream transmission channel and the bandwidth of the downstream transmission channel of the network device.
  • The target data stream is acquired from the upstream device through the input interface 1650 and stored.
  • The processor 1610 calls the rate adjustment program stored in the memory 1620 and sends the processed target data stream to the downstream device through the output interface 1660, so as to quickly adjust the upstream and downstream rates on the network interfaces to adapt to the upstream and downstream transmission channels of the service.
  • the network device may further include:
  • the second adjusting unit 1703, configured to, when bandwidth adjustment is needed, insert or delete padding units between the first data packet and a data packet adjacent to the first data packet.
  • the first adjusting unit 1702 may further include:
  • the first adjusting module 17021, configured to insert or delete preset filler code blocks between any two non-idle units according to the bandwidth to be adjusted.
  • The preset filler code block is indicated by a code block type field
  • and is used to adapt to the difference between the bandwidth of the upstream transmission channel of the network device and the bandwidth of the downstream transmission channel.
  • the first adjusting unit 1702 may further include:
  • the second adjusting module 17022, configured to insert or delete typical idle code blocks between any two non-idle units according to the bandwidth to be adjusted.
  • A typical idle code block is indicated by a code block type field and is used to adapt to the difference between the bandwidth of the upstream transmission channel of the network device and the bandwidth of the downstream transmission channel.
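The insertion the adjusting modules perform can be sketched as follows: a minimal sketch in which 'S'/'D'/'T' mark start/data/terminate (non-idle) blocks and 'P' a padding unit; the markers, the function, and the spreading strategy are illustrative assumptions, not the patent's exact procedure.

```python
# Sketch: inserting padding units between the non-idle units of a packet,
# as the first adjusting unit does, without reordering the non-idle units.
def insert_padding(blocks, n_pad):
    """Spread n_pad padding units ('P') between consecutive blocks."""
    out = []
    for i, b in enumerate(blocks):
        out.append(b)
        if n_pad > 0 and i + 1 < len(blocks):   # between any two units
            out.append('P')
            n_pad -= 1
    out.extend('P' * n_pad)                      # any leftover goes after the tail
    return out

stream = ['S', 'D', 'D', 'D', 'T']
padded = insert_padding(stream, 3)
assert padded.count('P') == 3
assert [b for b in padded if b != 'P'] == stream  # non-idle order preserved
```

Deletion is the mirror operation: padding units can be stripped from any position because, unlike data, start, or end blocks, they carry no service information.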
  • After the network node receives the target data stream, the network node performs efficient idle addition/deletion rate adjustment according to the difference between the upstream and downstream transmission channel rates of the service data flow, so as to adapt to different rate differences between the upstream and downstream transmission channels of the service,
  • especially when the end-to-end transmission bandwidth of the service is adjusted rapidly and a large rate difference exists between the upstream and downstream transmission channels, while reducing the data buffering of the network node, the processing delay of the network node, and the end-to-end service transmission delay.
  • another embodiment of the network device in the embodiment of the present invention includes:
  • the acquiring unit 1701, configured to acquire a target data stream, where the target data stream contains a first data packet, and the first data packet includes at least two data units;
  • the first adjusting unit 1702, configured to, when bandwidth adjustment is needed, insert padding units between any two non-idle units according to the bandwidth to be adjusted, where the padding units are used to adapt to the difference between the bandwidth of the upstream transmission channel and the bandwidth of the downstream transmission channel of the network device.
  • the network device may further include:
  • the third adjusting unit 1704, configured to insert or delete padding units in the target data stream according to the rate difference required for rate adaptation, where the inserted or deleted padding units are used for rate adaptation, and the rate difference for rate adaptation is smaller than the bandwidth difference.
  • the network device may further include:
  • the processing unit 1705, configured to delete the padding units and send the remaining data units to the next network device or the user equipment.
  • After the network device receives the target data stream, the network device inserts or deletes padding units according to the difference between the upstream and downstream transmission channel rates of the service data stream, quickly adjusting the upstream and downstream rates at the network node to adapt to the rate difference between the upstream and downstream transmission channels of the service, and reducing the data buffering of the network node, the processing delay of the network node, and the end-to-end service transmission delay.
  • The disclosed system, apparatus, and method may be implemented in other manners.
  • The device embodiments described above are merely illustrative.
  • The division into units is only a logical function division;
  • in actual implementation there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • Each functional unit in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • The integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solutions of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium.
  • The software product includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

Embodiments of the present invention disclose a transmission rate adjustment method and a network device, used to support fast adjustment of the transmission rate of the transmission channels on the upstream and downstream interfaces of a service at a network node, so that the service can adapt to the transmission rate difference between its upstream and downstream transmission channels at the node, while reducing the data buffering of the network node, the processing delay of the network node, and the service transmission delay. The method in the embodiments of the present invention includes: a network device obtains a target data stream, where the target data stream contains a first data packet and the first data packet includes at least two non-idle units; according to the difference between the bandwidths of the transmission channels on the upstream and downstream interfaces of the network device for the service, the network device inserts or deletes padding units between any two non-idle units, where the padding units are used to adapt to the difference between the bandwidth of the upstream transmission channel and the bandwidth of the downstream transmission channel of the network device.

Description

Transmission rate adjustment method and network device
This application claims priority to Chinese Patent Application No. 201611205246.8, filed with the Chinese Patent Office on December 23, 2016 and entitled "Transmission rate adjustment method and network device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of communications technologies, and in particular, to a transmission rate adjustment method and a network device.
Background
Flexible Ethernet (FlexE) borrows from synchronous digital hierarchy (SDH) and optical transport network (OTN) technology: it builds a FlexE frame format for the information transmitted over each physical interface in a FlexE link group (FlexE Group) and performs time-division-multiplexed (TDM) time slot division. Unlike the byte interleaving of SDH/OTN, the TDM slot granularity of FlexE is 66 bits, and the slots are interleaved in units of 66 bits. For example, for a 100GE physical interface, a FlexE frame contains 8 rows; the first 66b block position of each row is the FlexE overhead area, and after the overhead area comes the payload area that is divided into slots, with a 64/66b block as the granularity, corresponding to a carrying space of 20 × 1023 64/66b blocks, divided into 20 slots. The bandwidth of each slot is approximately the bandwidth of the 100GE interface divided by 20, about 5 Gbps, with a nominal rate slightly less than 5 Gbps.
Because FlexE frames are transmitted using the payload capability of the physical interface, the actual rate of a FlexE service slot is constrained by the rate characteristics of the physical interface. For example, the encoded rate of a 100GE physical interface is 66/64 × 100 Gbps = 103.125 Gbps, with a physical interface frequency deviation of +/-100 ppm, so the actual rate is 103.125 Gbps +/-100 ppm. On the current multi-lane parallel 100GE Ethernet interface, the 1/16384 of the bandwidth occupied by the multi-lane alignment code blocks must also be subtracted, so the bit rate of a 100G FlexE frame is (16383/16384) × (66/64) × 100 Gbps +/-100 ppm. The total rate of the payload area of a 100G FlexE frame is ((1023×20)/(1023×20+1)) × (16383/16384) × (66/64) × 100 Gbps +/-100 ppm. The rate of each slot is ((1023×1)/(1023×20+1)) × (16383/16384) × (66/64) × 100 Gbps +/-100 ppm, which differs from (66/64) × 5 Gbps +/-100 ppm.
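The slot-rate arithmetic above can be verified numerically: a minimal sketch reproducing the formulas quoted in this paragraph; the variable names are illustrative.

```python
# Sketch: verifying the per-slot rate arithmetic for a 100GE FlexE instance
# (calendar of 20 slots x 1023 blocks plus one overhead block position,
# 16383/16384 multi-lane alignment overhead, 66/64 encoding overhead).
encoded = 66 / 64 * 100e9                      # 103.125 Gbps encoded interface rate
alignment = 16383 / 16384                      # share left after alignment markers
per_slot = (1023 / (1023 * 20 + 1)) * alignment * (66 / 64) * 100e9

assert abs(encoded - 103.125e9) < 1
# each slot runs slightly below the nominal (66/64) * 5 Gbps
assert per_slot < (66 / 64) * 5e9
```

The small shortfall of each slot relative to (66/64) × 5 Gbps is exactly the kind of rate difference that the idle/padding adjustment mechanisms described later must absorb.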
FlexE allows several physical interfaces to be cascaded and bonded into a FlexE link group (FlexE Group), all of whose slots can be combined into several transmission channels carrying several Ethernet services. For example, two slots can be combined into one transmission channel carrying a 10GE service, five slots into one channel carrying a 25GE service, thirty slots into one channel carrying a 150GE service, and so on. The service carried in a transmission channel is a visible sequence of 66b code blocks, consistent with the native 64/66b code block stream formed by encoding an Ethernet MAC data stream. It should be noted that a service such as a 50GE service, whose typical encoded rate is (66/64) × 50 Gbps +/-100 ppm, requires idle addition and deletion to adapt its transmission rate to a transmission channel formed by combining 10 slots of a FlexE Group. Idle addition and deletion here mainly means adjusting the number of idle bytes between Ethernet packets and of their corresponding 64/66b code blocks (each encoding 8 idle bytes).
In the existing IEEE 802.3 idle adjustment mechanism, no idle bytes are allowed between the data bytes of a data packet, and no idle code blocks are allowed between the encoded data code blocks. Consequently, the system must buffer a data packet, or its corresponding number of 64/66b code blocks, to accommodate a large rate difference between the upstream bearer transmission channel and the downstream bearer transmission channel of a service at a node. FlexE directly reuses the IEEE 802.3 idle adjustment mechanism for service rate adaptation. For example, when the number of slots and the bandwidth of the transmission channel carrying a service on the upstream and downstream FlexE interfaces of a network node are increased, and the upstream data input rate differs greatly from the downstream data output rate, a considerable data buffer is required, increasing device complexity and transmission delay. This idle addition/deletion rate adjustment mechanism is currently also applied in OTN to map services into ODUflex; OTN likewise directly reuses the IEEE 802.3 idle adjustment mechanism for service rate adaptation and maps the service into ODUflex, but it can only adjust the bandwidth of the ODUflex, as the bearer transmission channel, very slowly to keep the service lossless.
Summary
Embodiments of the present invention provide a transmission rate adjustment method for performing, at a network node, efficient idle addition/deletion rate adjustment according to the rate difference between the upstream and downstream transmission channels of a service data stream, so as to adapt to different rate differences between the upstream and downstream transmission channels of the service, especially the case where a large rate difference appears between the upstream and downstream transmission channels when the end-to-end transmission bandwidth of the service is adjusted rapidly, while reducing the data buffering of the network node, the processing delay of the network node, and the end-to-end service transmission delay.
A first aspect of the embodiments of the present invention provides a transmission rate adjustment method, including: a network device obtains, from an upstream device, a target data stream containing a first data packet, where the first data packet includes at least two non-idle units; when bandwidth adjustment is needed, padding units are inserted or deleted between any two non-idle units according to the bandwidth to be adjusted, where the padding units are used to adapt to the difference between the bandwidth of the upstream transmission channel and the bandwidth of the downstream transmission channel of the network device, that difference being the bandwidth to be adjusted. By inserting padding units between non-idle units, the embodiments of the present invention achieve fast, step-wise adjustment of the transmission rate.
With reference to the first aspect of the embodiments of the present invention, in a first implementation of the first aspect, after the network device obtains the target data stream, the method further includes: when bandwidth adjustment is needed, inserting or deleting padding units between the first data packet and a data packet adjacent to the first data packet according to the bandwidth to be adjusted, where the padding units are used to adapt to the difference between the bandwidth of the upstream transmission channel and the bandwidth of the downstream transmission channel of the network device, that difference being the bandwidth to be adjusted. By inserting padding units between data packets, the embodiments of the present invention achieve fast, step-wise adjustment of the transmission rate.
With reference to the first aspect of the embodiments of the present invention, in a second implementation of the first aspect, inserting or deleting padding units between any two non-idle units according to the bandwidth to be adjusted includes: inserting or deleting, between any two non-idle units according to the bandwidth to be adjusted, preset filler code blocks indicated by a code block type field, where the preset filler code blocks are used to adapt to the difference between the bandwidth of the upstream transmission channel and the bandwidth of the downstream transmission channel of the network device, that difference being the bandwidth to be adjusted. This implementation specifies that the inserted or deleted padding units are preset filler code blocks, increasing the implementability of the embodiments of the present invention.
With reference to the first aspect of the embodiments of the present invention, in a third implementation of the first aspect, inserting padding units between any two data units according to the bandwidth to be adjusted includes: inserting or deleting, between any two non-idle units according to the bandwidth to be adjusted, typical idle code blocks indicated by a code block type field, where the typical idle code blocks are used to adapt to the difference between the bandwidth of the upstream transmission channel and the bandwidth of the downstream transmission channel of the network device, that difference being the bandwidth to be adjusted. This implementation specifies that the inserted or deleted padding units are typical idle code blocks, increasing the implementability of the embodiments of the present invention.
With reference to any one of the first aspect to the third implementation of the first aspect of the embodiments of the present invention, in a fourth implementation of the first aspect, after the network device obtains the target data stream, the method further includes: when the rate difference for rate adaptation is smaller than the bandwidth difference, inserting or deleting padding units in the target data stream according to the rate difference required for rate adaptation, where the inserted or deleted padding units are used for rate adaptation. This describes inserting or deleting padding units when making fine adjustments to the transmission rate, enriching the rate adjustment methods of the embodiments of the present invention.
With reference to any one of the first aspect to the third implementation of the first aspect of the embodiments of the present invention, in a fifth implementation of the first aspect, after the network device obtains the target data stream, the method further includes: deleting the padding units and sending the remaining data units to the next network device or the user equipment. This describes deleting all padding units and idle units and sending only the data units to the next device, increasing the implementability and operability of the embodiments of the present invention.
A second aspect of the embodiments of the present invention provides a network device, including: an acquiring unit, configured to acquire a target data stream, where the target data stream contains a first data packet, and the first data packet includes at least two non-idle units; and a first adjusting unit, configured to, when bandwidth adjustment is needed, insert or delete padding units between any two non-idle units according to the bandwidth to be adjusted, where the padding units are used to adapt to the difference between the bandwidth of the upstream transmission channel and the bandwidth of the downstream transmission channel of the network device. By inserting padding units between non-idle units, the embodiments of the present invention achieve fast, step-wise adjustment of the transmission rate.
With reference to the second aspect of the embodiments of the present invention, in a first implementation of the second aspect, the network device further includes: a second adjusting unit, configured to, when bandwidth adjustment is needed, insert or delete the padding units between the first data packet and a data packet adjacent to the first data packet according to the bandwidth to be adjusted. By inserting padding units between data packets, the embodiments of the present invention achieve fast, step-wise adjustment of the transmission rate.
With reference to the second aspect of the embodiments of the present invention, in a second implementation of the second aspect, the first adjusting unit includes: a first adjusting module, configured to insert or delete preset filler code blocks between any two non-idle units according to the bandwidth to be adjusted, where the preset filler code blocks are indicated by a code block type field and are used to adapt to the difference between the bandwidth of the upstream transmission channel and the bandwidth of the downstream transmission channel of the network device. This implementation specifies that the inserted or deleted padding units are preset filler code blocks, increasing the implementability of the embodiments of the present invention.
With reference to the second aspect of the embodiments of the present invention, in a third implementation of the second aspect, the first adjusting unit includes: a second adjusting module, configured to insert or delete typical idle code blocks between any two non-idle units according to the bandwidth to be adjusted, where the typical idle code blocks are indicated by a code block type field and are used to adapt to the difference between the bandwidth of the upstream transmission channel and the bandwidth of the downstream transmission channel of the network device. This implementation specifies that the inserted or deleted padding units are typical idle code blocks, increasing the implementability of the embodiments of the present invention.
With reference to any one of the second aspect to the third implementation of the second aspect of the embodiments of the present invention, in a fourth implementation of the second aspect, the network device further includes: a third adjusting unit, configured to insert or delete padding units in the target data stream according to the rate difference required for rate adaptation, where the inserted or deleted padding units are used for rate adaptation, and the rate difference for rate adaptation is smaller than the bandwidth difference. This describes inserting or deleting padding units when making fine adjustments to the transmission rate, enriching the rate adjustment methods of the embodiments of the present invention.
With reference to any one of the second aspect to the third implementation of the second aspect of the embodiments of the present invention, in a fifth implementation of the second aspect, the network device further includes: a processing unit, configured to delete the padding units and send the remaining data units to the next network device or the user equipment. This describes deleting all padding units and idle units and sending only the data units to the next device, increasing the implementability and operability of the embodiments of the present invention.
A third aspect of the embodiments of the present invention provides a network device, including: an input interface, an output interface, a processor, a memory, and a bus, where the input interface, the output interface, the processor, and the memory are connected through the bus; the input interface is configured to connect to an upstream device and obtain input; the output interface is configured to connect to a downstream device and output results; the processor is configured to call the rate adjustment program from the memory and execute it; and the memory is configured to store the received data stream and the rate adjustment program. The processor calls the program instructions in the memory so that the network device performs the transmission rate adjustment method described in any one of the first aspect to the fifth implementation of the first aspect.
It can be seen from the foregoing technical solutions that the embodiments of the present invention have the following advantages:
In the technical solutions provided by the embodiments of the present invention, a network device obtains a target data stream, where the target data stream contains a first data packet and the first data packet includes at least two non-idle units; when bandwidth adjustment is needed, padding units are inserted or deleted between any two non-idle units according to the bandwidth to be adjusted, where the padding units are used to adapt to the difference between the bandwidth of the upstream transmission channel and the bandwidth of the downstream transmission channel of the network device. The embodiments of the present invention can support fast adjustment of the transmission rate of the transmission channels on the upstream and downstream interfaces of a service at a network node, so that the service can adapt to the transmission rate difference between its upstream and downstream transmission channels at the node, while reducing the data buffering of the network node, the processing delay of the network node, and the service transmission delay.
Brief Description of Drawings
FIG. 1 is a schematic diagram of the IEEE 802.3 idle addition and deletion mechanism;
FIG. 2 is a schematic diagram of the network architecture of an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of the transmission rate adjustment method in the embodiments of the present invention;
FIG. 4 is a schematic diagram of the 64/66b-encoded code block structure in the embodiments of the present invention;
FIG. 5 is a schematic structural diagram of typical idle code blocks in the embodiments of the present invention;
FIG. 6 is a schematic structural diagram of preset filler code blocks in the embodiments of the present invention;
FIG. 7 is a schematic diagram of inserting preset filler code blocks into a data packet in the embodiments of the present invention;
FIG. 8 is a schematic diagram of inserting typical idle code blocks into a data packet in the embodiments of the present invention;
FIG. 9 is a schematic structural diagram of control code blocks in the embodiments of the present invention;
FIG. 10 is a schematic diagram of a typical process of end-to-end service bandwidth adjustment in the embodiments of the present invention;
FIG. 11 shows a specific application scenario of bandwidth adjustment in the embodiments of the present invention;
FIG. 12 shows another specific application scenario of bandwidth adjustment in the embodiments of the present invention;
FIG. 13 is a schematic diagram of multiple CPRI physical interface options in the embodiments of the present invention;
FIG. 14 is a schematic diagram of standalone ODUflex networking in the embodiments of the present invention;
FIG. 15 is a schematic diagram of FlexE and OTN hybrid networking in the embodiments of the present invention;
FIG. 16 is a schematic diagram of an embodiment of a network device in the embodiments of the present invention;
FIG. 17 is a schematic diagram of another embodiment of a network device in the embodiments of the present invention;
FIG. 18 is a schematic diagram of another embodiment of a network device in the embodiments of the present invention.
Description of Embodiments
Embodiments of the present invention provide a transmission rate adjustment method for performing, at a network node, efficient idle addition/deletion rate adjustment according to the rate difference between the upstream and downstream transmission channels of a service data stream, so as to adapt to different rate differences between the upstream and downstream transmission channels of the service, especially the case where a large rate difference appears between the upstream and downstream transmission channels when the end-to-end transmission bandwidth of the service is adjusted rapidly, while reducing the data buffering of the network node, the processing delay of the network node, and the end-to-end service transmission delay.
To help persons skilled in the art better understand the solutions of the present invention, the embodiments of the present invention are described below with reference to the accompanying drawings in the embodiments of the present invention.
The terms "first", "second", "third", "fourth", and so on (if any) in the specification, claims, and accompanying drawings of the present invention are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data termed in such a way are interchangeable in appropriate circumstances, so that the embodiments described herein can be implemented in orders other than those illustrated or described herein. Moreover, the terms "include" and "have", and any variants thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a list of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product, or device.
At present, when a network node (that is, a network device) performs rate adjustment on a data stream containing data code blocks to adapt to the transmission rate difference between the bandwidths of its upstream and downstream transmission channels, it reuses the IEEE 802.3 idle addition and deletion mechanism. When the upstream data input rate is far lower than the downstream data output rate, no padding units are allowed, nor exist, between the data code blocks of the data stream. The network node must buffer the data stream containing the data code blocks and add or delete idle padding units between the groups of data code blocks belonging to different data packets in the data stream — that is, insert or delete padding units — which requires buffering almost an entire data packet. In addition, this IEEE 802.3-based idle addition and deletion mechanism is only suitable for adjusting the transmission rate of 64/66b-encoded Ethernet service data streams during transmission adaptation on an interface; it is not suitable for adjusting the transmission rate of TDM services such as the Common Public Radio Interface (CPRI), SDH, and OTN, or of Ethernet services using 64/66b encoding that is not 100GE-compatible (for example, GE services using 8b/10b encoding).
CPRI typically has data streams in two encoding formats, 64/66b and 8b/10b. A CPRI service data stream in the 64/66b encoding format has only three 64/66b code block types: its hyperframe structure of 16 × 256 × n bytes is encoded into start code blocks, data code blocks, and end code blocks. For example, for 12165.12 Mbit/s CPRI option 9 using 64/66b encoding, n = 24 and the hyperframe length is 16 × 256 × 24 = 98304 bytes; almost an entire hyperframe of data would have to be buffered before idle code blocks could be inserted and deleted between hyperframes (analogous to between data packets), that is, between an end code block and a start code block, according to the IEEE 802.3 idle addition and deletion mechanism, to adjust the transmission rate. CPRI data streams in the 8b/10b encoding format need to be converted into 64/66b encoding. TDM service data streams without encoding, including SDH and OTN, may contain only one code block type when converted into 64/66b encoding, namely the data code block — that is, no start or end code blocks exist — so the existing IEEE 802.3 idle addition and deletion mechanism, which adds and deletes idle bytes between an end code block and a start code block, cannot be used to adjust the rate.
Consider the existing IEEE 802.3 idle addition and deletion mechanism applied in a FlexE system. In the FlexE system, this idle mechanism is as shown in FIG. 1, which is a schematic diagram of the IEEE 802.3 idle addition and deletion mechanism. As shown in FIG. 1, for example, when the output rate (OR) of the downstream transmission channel of a service at a node is 100 Gbps and the input rate (IR) of the upstream transmission channel of the service at the node is 5 Gbps, OR - IR = 100 Gbps - 5 Gbps = 95 Gbps. With Min Buffer Size = MaxPacketLength × (OR - IR)/OR, and a longest frame (jumbo frame) size of 9.6 kB = 9.6 × 8 × 1000 = 76800 bits — in practice, a buffer of at least one longest frame is generally provisioned — Min Buffer Size = 0.95 × 76800 = 72960 bits = 72.96 kb, and the transmission delay is Delay = Min Buffer Size/OR ≈ 768 ns to 1 µs. This requires the network node to provide such buffering capability for every service, placing very high demands on the device. In practice, small-buffer designs are often adopted; therefore, to support step-wise service bandwidth adjustment in FlexE without introducing a large buffer design, lossy service adjustment is allowed.
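The buffering bound quoted above can be checked directly: a minimal sketch of the Min Buffer Size formula with the jumbo-frame figures from the text; the function name is illustrative.

```python
# Sketch: the buffering bound for the legacy IEEE 802.3 idle mechanism,
# min_buffer = max_packet * (OR - IR) / OR, in bits.
def min_buffer_bits(max_packet_bits: float, out_rate: float, in_rate: float) -> float:
    return max_packet_bits * (out_rate - in_rate) / out_rate

jumbo = 9.6e3 * 8              # 9.6 kB jumbo frame = 76800 bits
buf = min_buffer_bits(jumbo, 100e9, 5e9)
assert buf == 72960.0          # 0.95 * 76800 bits, as in the text
delay_s = jumbo / 100e9        # draining one full frame at OR: 768 ns
assert abs(delay_s - 768e-9) < 1e-12
```

The point of the check: the buffer scales with the largest frame and the relative rate gap, which is why a scheme that may insert padding *inside* packets avoids this per-service buffering entirely.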
When the IEEE 802.3 idle addition and deletion mechanism is applied in an OTN system, ODUflex bandwidth adjustment in OTN emphasizes lossless service adjustment. With a fixed buffering capability at the network nodes, this limits the speed of ODUflex bandwidth capacity adjustment: the bandwidth or rate of the ODUflex serving as the upstream/downstream bearer transmission channel of the service can only be increased or decreased very slowly, which is complex and time-consuming and unfavorable to fast adjustment of service bandwidth.
Besides FlexE of the Optical Internetworking Forum (OIF) and OTN (G.709) of the ITU Telecommunication Standardization Sector (ITU-T), the embodiments of the present invention can also be applied to service types in which no idle bytes exist in the service flow, such as SDH/OTN, to adjust the transmission rate of the service onto the interface. The former two currently both reuse the IEEE 802.3 idle addition and deletion mechanism, called the IMP mechanism in OTN; IMP covers both service transmission rate adjustment and the mapping of services into ODUflex. Lossless ODUflex bandwidth adjustment is currently a slow adjustment with a tiny slope; with the embodiments of the present invention, large-slope, step-wise, fast, lossless ODUflex bandwidth adjustment can be supported, both increasing and decreasing. OIF FlexE currently performs step-wise bandwidth adjustment but lacks lossless adjustment capability, that is, service damage such as packet loss and transient service interruption may occur when node buffering capability is limited. With the embodiments of the present invention, no large data buffering is required at the nodes, greatly reducing node complexity requirements, and step-wise lossless adjustment of service bandwidth can be supported.
The embodiments of the present invention can be applied to the network architecture shown in FIG. 2. In this architecture, the source customer equipment (CE) sends a FlexE Client service containing data packets to the source provider equipment (PE); the PE is a network node. The network node processes and forwards the data stream of the FlexE Client service containing the data packets according to the transmission rate adjustment method provided in the embodiments of the present invention, so that the end CE obtains the FlexE Client service from the end PE. The reverse transmission direction of the FlexE Client service, from the end CE to the source CE, follows the same principle and is not described again.
The minimum-unit data structure involved in the embodiments of the present invention may be an encoded code block, such as a 64/66b, 128/130b, or 8/10b code block, or an unencoded byte, or combination of bytes, carrying data or non-data (non-data includes idle) indication information. For a code block structure such as a 64/66b code block, the sync header type and code block type explicitly indicate the type combination of the corresponding 8 pre-encoding data bytes, that is, all data bytes or not all data bytes, where one of the not-all-data-bytes cases is 8 bytes that are all pre-encoding idle filler bytes; in other words, the sync header of a 64/66b code block indicates whether the block is a data code block or a non-data code block. An unencoded byte combination structure may be a combination of n bytes; when n equals 1 it is a single byte — for example, on a 1 Gbps Ethernet GMII interface the unit is one byte, supplemented by control information indicating whether the byte is a data byte or a non-data byte, and a non-data byte can be distinguished by its actual content as an idle byte or another type of non-data byte, also called a control byte. As another example, on a 10 Gbps Ethernet XGMII interface the unit is four bytes, supplemented by 4 bits that respectively indicate the types of the four bytes on the XGMII; as a further example, the 100GE Ethernet CGMII interface uses an 8-byte combination structure, supplemented by one byte, that is, 8 bits, indicating the specific combination of the 8 bytes as data bytes or non-data bytes (also called control bytes). The present invention does not limit the minimum-unit data structure described above. For ease of understanding, the following embodiments are described using the 64/66b code block as the minimum-unit data structure.
In view of this, the embodiments of the present invention provide a transmission rate adjustment method and a network device based on the method. The transmission rate adjustment method includes: a network node obtains a target data stream, where the target data stream contains a first data packet and the first data packet includes at least two non-idle units; when bandwidth adjustment is needed, according to the rate difference between the upstream and downstream transmission channels of the service at the node, padding units are inserted or deleted as needed between any two non-idle units in the sequence of service data units, so that the network node matches the transmission rate of the service passing through its upstream transmission channel to the downstream transmission channel.
As mentioned above, when FlexE is used as the transmission interface of a bearer network, a combination of FlexE interface time slots is used as a transmission channel, and multiple transmission channel segments are concatenated through nodes to form an end-to-end transmission channel connection for the service. The bandwidth of the end-to-end transmission channel connection of a service is generally adjusted in steps, specifically including the bearer network increasing and decreasing the bandwidth of the end-to-end transmission channel connection of the service. The bearer network contains multiple network nodes, such as a head-end node, intermediate nodes, and a tail-end node. When increasing bandwidth, the bearer capability bandwidth of the downstream transmission channel of the tail-end node is increased step-wise first, and then that of the upstream transmission channel of the tail-end node; after the tail-end node completes the adjustment, the bearer capability bandwidths of the upstream and downstream transmission channels of the service on the remaining nodes of the bearer network are adjusted progressively toward the head end. In the end, the upstream and downstream transmission channel bandwidths for the service at each node of the bearer network are almost identical, though a +/-100 ppm difference is allowed. When decreasing bandwidth, the bearer capability bandwidth of the upstream transmission channel of the service at the head-end node is decreased step-wise first, and then that of the downstream transmission channel of the head-end node; after the head-end node completes the adjustment, the remaining nodes of the bearer network are adjusted progressively toward the tail end, and in the end the bandwidths of the upstream and downstream transmission pipes at each node are almost identical, again allowing a +/-100 ppm difference. Both cases can temporarily leave a network node with a downstream pipe bandwidth larger than its upstream pipe bandwidth for the service. When the bearer network is managed as a whole, the transmission pipe bandwidths on the interfaces inside the bearer network can be adjusted through centralized control; because the network nodes receive the control signals at different times, some nodes may temporarily have an upstream pipe bandwidth larger than the downstream pipe bandwidth, or an upstream pipe bandwidth smaller than the downstream pipe bandwidth, and this bandwidth difference can be large. The embodiments of the present invention describe these two situations separately.
The embodiments of the present invention can be described from the perspective of the entire bearer network adapting to the differences between the upstream and downstream transmission channel bandwidths of the service at each node, that is, from the end-to-end service perspective, and also from the perspective of a network node in the bearer network adjusting the bandwidths of the transmission channels on its upstream and downstream interfaces. The upstream and downstream transmission channels of a network node may be ODUflex in an OTN interface, a time slot combination of a FlexE flexible Ethernet interface, a native CPRI interface as a single transmission channel, a native Ethernet interface as a single transmission channel, and so on; this list is not exhaustive. In practice, the transmission channel/network interface may also be an ODUk/OTN interface, a VC container/SDH interface, a traditional Ethernet, InfiniBand (IB), or Fibre Channel (FC) interface as a single transmission channel, etc., which is not limited here.
The following description covers two aspects: the network increasing or decreasing the bandwidth of the end-to-end transmission channel connection of a service, and a node performing idle addition and deletion on the service to adjust its transmission rate so as to adapt to the rate/bandwidth difference between the upstream and downstream transmission channels of the service at that node.
Referring to FIG. 3, when a service sent from the source CE to the end CE is transmitted through multiple PEs, it may pass through one source PE, one or more intermediate PEs, and one end PE. Taking as an example the case where transmission channels between two nodes are built from FlexE interfaces and time slots, and multiple transmission channel segments are concatenated to form an end-to-end service transmission channel connection, an embodiment of the transmission rate adjustment method in the embodiments of the present invention includes the following steps.
301. The source node obtains a target data stream, where the target data stream contains a first data packet and the first data packet includes at least two non-idle code blocks.
The source node obtains a target data stream that contains a first data packet carrying useful information. In this embodiment, the target data stream of the service is carried and transmitted, as a FlexE Client signal, over a transmission channel formed by a combination of time slots on a FlexE interface. Its useful information includes data code blocks, start code blocks, end code blocks, and so on; data code blocks, start code blocks, and end code blocks are all non-idle code blocks and cannot be inserted or deleted at will. The first data packet may include a start code block indicating the beginning of the data packet and an end code block indicating the end of the data packet. The data packet may also include multiple data code blocks carrying service information, located between the start code block and the end code block. As shown in FIG. 4, the type of a start code block may be 0x78, and there are eight end code block types indicating the end of a packet: 0x87, 0x99, 0xAA, 0xB4, 0xCC, 0xD2, 0xE1, and 0xFF. It can be seen that a data packet may include one start code block of type 0x78 and one end code block of any of the types 0x87, 0x99, 0xAA, 0xB4, 0xCC, 0xD2, 0xE1, and 0xFF.
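The block types named above can be summarized in a small classifier: a minimal sketch using the type values listed in the text; the function name and the catch-all category are illustrative.

```python
# Sketch: classifying 64/66b blocks by the type values quoted above.
# 0x78 starts a packet; eight type values terminate it.
START = {0x78}
END = {0x87, 0x99, 0xAA, 0xB4, 0xCC, 0xD2, 0xE1, 0xFF}

def classify(block_type: int) -> str:
    if block_type in START:
        return 'start'
    if block_type in END:
        return 'end'
    return 'data-or-control'     # everything else in this sketch

assert classify(0x78) == 'start'
assert classify(0xB4) == 'end'
```

A node uses exactly this kind of classification to know which blocks are non-idle (and therefore untouchable) and where padding may legally be placed.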
302. The source node inserts or deletes padding units in the target data stream according to the bandwidth/rate difference between the transmission channels on its upstream and downstream interfaces for the service, and sends the stream to the intermediate node through the transmission channel of the downstream interface.
The bandwidth/rate difference between the upstream and downstream transmission channels of the source node can be large. When the downstream network interface rate of the node is greater than the upstream network interface rate and the rate difference between the two interfaces exceeds a certain threshold — for example, when the downstream transmission channel rate is greater than the upstream transmission channel rate — the network node receives the target data stream from the upstream interface, stuffs in padding units according to the rate adaptation requirement (the difference between the downstream and upstream interface rates, that is, the downstream interface rate minus the upstream interface rate), and sends the stream downstream. The padding units may be located between the data code blocks within a data packet or between the code blocks of adjacent data packets; they are used to adapt to and adjust for the rate difference between the upstream and downstream interfaces of the network node. That is, after the padding units are inserted, the rate differences between the service and the transmission channels on the node's upstream and downstream interfaces for the service are respectively eliminated, and the service can match the rates of the transmission channels on both interfaces. For example, when the downstream interface rate is 5 Gbps greater than the upstream interface rate, padding units are inserted such that they adapt to the 5 Gbps rate difference between the upstream and downstream transmission channels of the service at the node. When the downstream network interface rate of the node is less than the upstream network interface rate and the difference exceeds a certain threshold, the network node receives the target data stream from the upstream interface, deletes a certain number of padding units to accommodate the upstream/downstream rate difference, and sends the stream downstream. For example, when the downstream interface rate is 5 Gbps less than the upstream interface rate, 5 Gbps worth of padding units are deleted so that the resulting data stream can match the rate of the downstream interface.
It should be noted that the padding units inserted by the source node may be preset filler code blocks or typical idle code blocks. A typical idle code block is the idle data structure unit known from IEEE 802.3, that is, a known idle code block or a known combination of idle characters; a preset filler code block is a code block containing an identification field distinct from the known idle structures. The preset filler code block and the typical idle code block can serve the same purpose: both can be used for rate adjustment. In practice, different measures can be taken in different situations; this embodiment is described taking the known idle structure to be the typical idle code block. For example, when the inserted padding units include both preset filler code blocks and typical idle code blocks, preset filler code blocks are inserted between two non-idle code blocks within a data packet and typical idle code blocks are inserted between two data packets; a node can directly delete the preset filler code blocks and delete the inserted typical idle code blocks based on the context code blocks, ensuring that no inserted preset filler code blocks remain between data code blocks. The structure of a typical idle code block is shown in FIG. 5: its code block type may be any one of the first four 0x1E code block types in FIG. 5, or a repeated command sequence word code block of the fifth type, 0x4B, in FIG. 5. Illustratively, the structure of a preset filler code block may be any one of the three code block types shown in FIG. 6; other forms are not enumerated one by one. The three code block types of FIG. 6 contain a preset identification field that differs from the fields of the typical idle code blocks in FIG. 5. When the inserted padding units include only typical idle code blocks, typical idle code blocks can be inserted at any position, including between two non-idle code blocks within a data packet or between two data packets. The inserted typical idle code blocks may be the first four known idle code blocks shown in FIG. 5 or a repetition of the fifth known idle code block. When the deleted padding units include only typical idle code blocks, typical idle code blocks at any position can be deleted, including those within data packets and those between data packets.
It can be understood that the transmission channel bandwidth rate on the downstream interface of the source node may be greater than that on the upstream interface, in which case the source node inserts a certain number of padding units into the target data stream to make up for the bandwidth rate difference of the service between the transmission channels of the upstream and downstream interfaces, so that the service can match the transmission channel bandwidth rates on both interfaces at the node. Conversely, the transmission channel bandwidth rate on the downstream interface of the source node may be less than that on the upstream interface, in which case the source node deletes a certain number of padding units from the target data stream to adapt to the bandwidth rate difference, so that the service can match the transmission channel bandwidth rates on both interfaces at the node.
303. The intermediate node inserts or deletes padding units in the target data stream according to the bandwidth rate difference of the transmission channels on its upstream and downstream interfaces, and sends the stream through the downstream transmission channel to the next intermediate node or the end node.
When the network node is an intermediate node, after the intermediate node obtains the target data stream from the source node or the previous intermediate node, the bandwidth rates of the transmission channels on its upstream and downstream interfaces may differ greatly. The processing at the intermediate node is similar to step 302 and is not described again here. When the padding units are preset filler code blocks and the downstream pipe bandwidth rate is far greater than the upstream pipe bandwidth rate, the process by which the intermediate node inserts preset filler code blocks between data code blocks is shown in FIG. 7: after receiving the data stream containing data packets, the intermediate node inserts preset filler code blocks between the non-idle code blocks in the data packets. When the padding units are typical idle code blocks, the process by which the intermediate node inserts typical idle code blocks between non-idle code blocks is shown in FIG. 8: after receiving the data stream containing data packets, the intermediate node inserts typical idle code blocks between the non-idle code blocks in the data packets.
It can be understood that, when the downstream interface rate of the intermediate node is greater than its upstream interface rate, the intermediate node determines the code block type of the padding units to insert according to the type of the code block already sent. For example, when the previously sent code block is a data code block, the intermediate node inserts and sends a preset filler code block, whose specific structure is shown in FIG. 6. When the downstream interface rate of the intermediate node is less than its upstream interface rate, the intermediate node deletes, according to the specific upstream/downstream rate difference, deletable units in the target data stream equal in size to that difference; a deletable unit may be a previously inserted padding unit or an originally known idle code block in the target data stream. When the previously sent code block is a non-data code block, the intermediate node inserts a command word code block consistent with the type of the previously sent code block.
304. The end node inserts or deletes padding units in the target data stream according to the rate difference of its upstream and downstream interfaces, and sends the service to the target customer node in a data format acceptable to the downstream target customer node.
When the network node is the end node, it receives the target data stream from an intermediate node; when adjusting the transmission rate of the service through the idle addition and deletion operations of the present invention, it must also take into account the data format constraints and restrictions acceptable to the downstream target customer node. For example, for a 64/66b-encoded CPRI service, all padding units inserted into the target data stream must be deleted, including preset filler code blocks and known typical idle code blocks. As another example, when the target customer node can only receive the service data stream over a traditional Ethernet interface as a single transmission channel, only existing typical idle code blocks are allowed as padding units, and they must be located between data packets; padding units located between the non-idle code blocks within a data packet, whether preset filler code blocks or known typical idle code blocks, should all be removed. By adjusting the padding units located between data packets, the target data stream is made to meet the requirements under which the target customer equipment can receive service data. When the target customer equipment has no special data format requirements and can receive the output format of the intermediate nodes involved in the present invention, the end node can also freely insert or delete corresponding padding units in the target data stream as actually needed, to adapt to the bandwidth rate difference of the transmission channels on the end node's upstream and downstream interfaces, without special processing.
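The egress-side stripping described above can be sketched as follows: a minimal sketch reusing illustrative one-letter markers ('P' for a padding unit, 'I' for an idle block between packets, other letters for non-idle blocks); the markers and function are assumptions for illustration.

```python
# Sketch: the end node strips every padding unit (and, for native formats
# such as CPRI, every idle unit) before handing the stream to the customer.
def strip_for_native(blocks):
    """Return only the non-idle blocks of the stream."""
    return [b for b in blocks if b not in ('P', 'I')]

assert strip_for_native(['S', 'D', 'P', 'D', 'T', 'I', 'P']) == ['S', 'D', 'D', 'T']
```

For a legacy Ethernet customer, the filter would instead remove only in-packet padding and keep (or re-insert) idle blocks between packets, since that receiver expects standard inter-packet idles.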
The embodiments of the present invention describe how, after receiving the target data stream, a network node inserts or deletes padding units in the target data stream to adjust the transmission rate of the service, so as to adapt to the bandwidth rate difference of the transmission channels on the node's upstream and downstream interfaces for the service, while reducing the data buffering of the network node, the processing delay of the network node, and the end-to-end service transmission delay.
It should be noted that, for the embodiments of the present invention, FIG. 9 shows the byte-form control characters involved when 802.3 40GE/100GE services or Ethernet services are carried by FlexE or by OTN; the characters of other FC and IB services are similar. The IDLE/LPI character bytes correspond to C0 to C7 in a code block of type 0x1E, that is, eight consecutive idle bytes. A sequence ordered set has code block type 0x4B with O0 = 0x0; a signaling command set has code block type 0x4B with O0 = 0xF (specific to FC services, not applicable to Ethernet services). For Ethernet services with data units in 64/66b code block form, a 0x1E-type code block is deletable, and a repeated sequence ordered set code block of type 0x4B with O0 = 0x0 can also be deleted. For other types of data units, for example one character byte as the minimum data unit, the objects of addition and deletion are the existing typical idle bytes or preset idle bytes; data units of other granularities are not described again.
It can be understood that, when the inserted padding units include both preset filler code blocks and typical idle code blocks, since the preset filler code blocks contain an identification field distinct from the known idle structures, the end node can directly delete the preset filler code blocks, ensuring that no inserted preset filler code blocks remain between non-idle code blocks. If the inserted padding units contain no preset filler code blocks, the end node deletes the inserted typical idle code blocks based on the context code blocks: for example, if the current typical idle code block appears between a preceding start code block and a following end code block, it is judged to be inside a data packet, which is idle that does not comply with existing protocol requirements and should be deleted.
It should be noted that, when the service sent from the source CE to the end CE is transmitted through multiple PEs, it may also pass through only one source PE and one end PE, without intermediate nodes. The processing steps are similar to those of the above embodiment and are not described again here.
The following describes the embodiment in which the bearer network adjusts the bandwidth of the end-to-end transmission channel connection provided for a service, that is, adjusting the bandwidth of the service's end-to-end transmission channel connection on demand from end to end.
When applied to an OTN service, the downstream pipe bandwidth of the service may temporarily be larger than its upstream pipe bandwidth. The embodiments of the present invention can also be applied to the typical process scenario of bandwidth adjustment of an end-to-end service transmission channel connection shown in FIG. 10. End-to-end service bandwidth adjustment can include two scenarios: bandwidth increase and bandwidth decrease. When bandwidth adjustment is needed, all network nodes between the source node and the end node must negotiate the bandwidth adjustment pairwise, achieving the end-to-end service bandwidth adjustment segment by segment. For example, the source node initiates a bandwidth adjustment request and forwards it downstream hop by hop, until the end node receives the bandwidth adjustment request and replies upstream with a bandwidth adjustment acknowledgement, until the source node receives the bandwidth adjustment acknowledgement and determines that the bandwidth adjustment action can be performed.
Generally, when increasing bandwidth, the transmission channel bandwidths are increased in sequence from the end node toward the source node, that is, from downstream to upstream. When decreasing bandwidth, the transmission channel bandwidths are decreased in sequence from the source node toward the end node, that is, from upstream to downstream. Therefore, during bandwidth adjustment, the bandwidth of the upstream transmission channel can be smaller than that of the downstream transmission channel — in other words, the downstream bandwidth larger than the upstream. This does not cause problems such as cumulative loss of valid packet data at intermediate nodes, so the service remains lossless.
When increasing bandwidth, the specific application scenario is shown in FIG. 11. Suppose an Ethernet service increases its bandwidth from 10 Gbps to 50 Gbps, and the current Ethernet service flow bandwidth is 10 Gbps, equivalent to traditional 10GE. To support this service, the pipe bandwidth from the source CE to PE1 is the sum of two 5 Gbps FlexE slot bandwidths, and the pipe bandwidth from PE1 to PE2 is likewise two 5 Gbps FlexE slots. The pipe bandwidths from PE2 to PE3, PE3 to PE4, and PE4 to the end CE are in turn also two 5 Gbps FlexE slots each. However, the upstream and downstream pipe bandwidths may have tiny +/-100 ppm differences.
The source CE first requests the network, over the first-hop interface, to increase the bandwidth of the first-hop transmission channel. Specifically, at present the management (overhead processing, OHPU) unit of the source CE can send the request through the signaling channel of the first-hop interface of the first-hop transmission channel; PE1 receives the request and passes it downstream to PE2; the request is delivered in turn to egress PE4, and egress PE4 relays the service bandwidth increase request to the corresponding end CE.
After the end CE agrees to the increase and acknowledges, egress PE4 directly adjusts the pipe bandwidth of the last hop from PE4 to the end CE from two 5 Gbps slots to ten 5 Gbps slots, so the last-hop pipe from egress PE4 to the end CE achieves a step-wise bandwidth adjustment. At this moment, for egress PE4, the upstream pipe bandwidth of the service is 10 Gbps and the downstream pipe bandwidth is 50 Gbps, a large difference. Egress PE4 therefore needs to perform rate adjustment on the service according to the method of the embodiment of FIG. 3. However, if the end CE is not compatible with the idle adjustment mechanism of the embodiments of the present invention, egress PE4 must perform rate adjustment according to the IEEE 802.3 idle addition and deletion mechanism, that is, with rule constraints on the available adjustment options. This requires egress PE4 to be designed with a large data buffering capability.
Once egress PE4 learns that the last-segment pipe adjustment has succeeded, and absent other limiting factors, egress PE4 can acknowledge the bandwidth increase request toward its upstream neighbor PE3. After the acknowledgement reaches PE3, PE3 can adjust the service pipe from PE3 to PE4: PE3 changes the number of downstream pipe slots of the service from 2 to 10, and the pipe bandwidth from 10 Gbps to 50 Gbps. Egress PE4 then receives data from a 10-slot 50 Gbps upstream pipe. At this moment, the upstream and downstream service pipe bandwidths of egress PE4 are both ten slots of 50 Gbps, though they may differ slightly by +/-100 ppm. PE4 can still perform rate adjustment according to the method of the embodiment of FIG. 3, or according to the IEEE 802.3 idle addition and deletion mechanism. Meanwhile, PE3's upstream pipe is now 10 Gbps and its downstream pipe 50 Gbps, a large difference, so PE3 needs to perform bandwidth adjustment on the service according to the method of the embodiment of FIG. 3.
After the pipe bandwidth adjustment of the penultimate hop succeeds, and absent other limiting factors, PE3 can acknowledge the bandwidth increase request toward its upstream neighbor PE2. After the acknowledgement reaches PE2, PE2 can adjust the service pipe from PE2 to PE3: PE2 changes the number of downstream pipe slots of the service from 2 to 10, and the pipe bandwidth from 10 Gbps to 50 Gbps. PE3 then receives data from a 10-slot 50 Gbps pipe. At this moment, the upstream and downstream service pipe bandwidths of PE3 are both ten slots of 50 Gbps, though they may differ slightly by +/-100 ppm. PE3 can still perform rate adjustment according to the method of the embodiment of FIG. 3, or according to the IEEE 802.3 idle addition and deletion mechanism. Meanwhile, PE2's upstream pipe bandwidth is now 10 Gbps and its downstream pipe bandwidth 50 Gbps, a large difference, so PE2 needs to perform bandwidth adjustment on the service according to the method of the embodiment of FIG. 3. By analogy, for the end-to-end service, the pipes between each pair of devices achieve the bandwidth increase adjustment hop by hop from downstream to upstream.
After ingress PE1 learns that the downstream pipe bandwidth adjustments are in place, no bottleneck remains downstream; the only bottleneck is from the source CE to ingress PE1. Ingress PE1 can then acknowledge the bandwidth increase request of the source CE. Upon receiving the acknowledgement, the source CE can immediately adjust the bandwidth of its service pipe down toward egress PE4 to 50 Gbps, thereby achieving on-demand end-to-end service pipe bandwidth adjustment.
It should be pointed out that the source CE generally sends data packets automatically at the bandwidth rate of its downstream pipe: when there are packets, it sends them continuously; when there are none, it sends known idle structure units. Even if the bandwidth of its downstream port changes while packets are being sent, no rate adjustment is needed.
When decreasing bandwidth, the specific application scenario is shown in FIG. 11. Suppose an Ethernet service decreases its bandwidth from 50 Gbps to 10 Gbps, and the current Ethernet service flow bandwidth is 50 Gbps, equivalent to traditional 50GE. To support this service, the pipe bandwidth from the source CE to PE1 is the sum of ten 5 Gbps FlexE slot bandwidths, and the pipe bandwidth from PE1 to PE2 is likewise ten 5 Gbps FlexE slots. The pipe bandwidths from PE2 to PE3, PE3 to PE4, and PE4 to the end CE are in turn also ten 5 Gbps FlexE slots each. However, the upstream and downstream pipe bandwidths may have tiny +/-100 ppm differences.
The source CE first requests the network, through the first-hop interface, to decrease the bandwidth of the first-hop transmission channel. Specifically, at present the management unit of the source CE can send the request through the signaling channel of the first-hop interface; ingress PE1 receives the request and, absent special circumstances, can acknowledge it immediately.
After PE1 agrees to the decrease and acknowledges, the source CE directly adjusts the first-hop pipe bandwidth from ten slots to two slots, from 50 Gbps to 10 Gbps; the first-hop pipe from the source CE to ingress PE1 achieves a step-wise bandwidth adjustment. It should be pointed out that the source CE generally sends data packets automatically at the bandwidth rate of its downstream pipe: when there are packets it sends them continuously, otherwise it sends known idle structures; even if the bandwidth of its downstream port changes while packets are being sent, no rate adjustment is needed. At this moment, for ingress PE1, the upstream pipe bandwidth of the service is 10 Gbps and the downstream pipe bandwidth is 50 Gbps, a large difference. Ingress PE1 therefore needs to perform rate adjustment on the service according to the method of the embodiment of FIG. 3.
Once ingress PE1 learns that the adjustment of the pipe from the source CE to ingress PE1 has succeeded, and absent other limiting factors, PE1 can send a service bandwidth decrease request to its next node PE2; after the request reaches PE2, absent special circumstances, PE2 can acknowledge it to PE1.
After receiving the acknowledgement, PE1 adjusts its downstream service pipe: PE1 changes the number of downstream pipe slots of the service from 10 to 2, and the downstream pipe bandwidth thus goes from 50 Gbps to 10 Gbps. PE2 then receives data from a 2-slot 10 Gbps upstream pipe. At this moment, the upstream and downstream service pipe bandwidths of PE1 are both two slots of 10 Gbps, though they may differ slightly by +/-100 ppm. PE1 can still perform rate adjustment according to the method of the embodiment of FIG. 3, or according to the IEEE 802.3 idle addition and deletion mechanism. Meanwhile, PE2's upstream pipe is now 10 Gbps and its downstream pipe 50 Gbps, a large difference, so PE2 needs to perform bandwidth adjustment on the service according to the method of the embodiment of FIG. 3. By analogy, for the end-to-end service, the pipes between each pair of devices achieve the bandwidth decrease adjustment hop by hop from upstream to downstream.
After the upstream pipe bandwidth of egress PE4 has been adjusted, egress PE4 can continue to request the adjustment toward the downstream end CE. Before the last pipe segment is adjusted, however, the upstream and downstream pipe sizes of egress PE4 differ significantly, and rate adjustment needs to be performed according to the method of the embodiment of FIG. 3; but if the end CE is not compatible with the adjustment mechanism of the present invention, egress PE4 must perform rate adjustment according to the IEEE 802.3 idle addition and deletion mechanism, that is, under rule constraints. This requires egress PE4 to be designed with a large data buffering capability.
Optionally, a network node may be unable to recognize the inserted padding units, for example a traditional network device. Taking the end CE as an example, suppose the end CE cannot recognize the padding units in the embodiments of the present invention. The network device one hop before the end CE, that is, the end PE, must delete all padding units from the data stream. For example, for a 64/66b-encoded CPRI service, all padding units inserted into the target data stream must be deleted, including preset filler code blocks and known typical idle code blocks. As another example, when the target customer node can only receive the service data stream over a traditional Ethernet interface as a single transmission channel, only existing typical idle code blocks are allowed as padding units, and they must be located between data packets.
Optionally, the originally known idle code blocks may also all be deleted; padding units located between non-idle code blocks within a data packet, including preset filler code blocks and known typical idle code blocks, should all be removed. The end PE may send only the data packets to the end CE.
源CE在没有数据报文发送时,可以发送填充单元,填充单元可以为预设的填充码块或典型空闲码块。末PE接收的数据流可以包含数据报文,还可以包含填充单元等。
可选地,在进行调整的过程中,各个网络设备PE还可以执行IEEE 802.3的空闲增删机制,以实现速率适配。
本实施例同样可应用于OTN中,上文以业务的上游管道带宽临时比下游管道带宽大的情况为例进行了描述。在实际情况中,如图12所示,当源CE与末CE之间的承载网络是一个整体时,通过集中控制进行承载网络内部的管道带宽调整。带宽请求从源CE发送至入口PE,入口PE将请求向集中控制器转达,集中控制器可以同时控制所有PE之间的管道分别进行调整,包括增加带宽和减少带宽的调整。由于控制信号的下发和到达各PE的时间不一,PE之间的管道的带宽调整完全乱序,此时,对任何PE而言,都可能存在下游管道比上游管道大的情况,也可能存在上游管道比下游管道大的情况。
下面以增加带宽为例进行说明:源CE通过入口PE向网络请求增加带宽;入口PE将请求向网络的集中控制器报告,在没有确保下游所有节点两两之间的管道带宽均已经增加妥当之前,暂缓对源CE的任何应答,即保证从源CE发送的有效数据报文的带宽不会超过完成调整前的任何一段管道的带宽的大小。
如无特殊情况,网络的集中控制器向所有PE发送指令,PE1、PE2、PE3、PE4、PE5分别增加其下游管道的带宽。这种方式允许下游的所有管道同时并行进行调整,没有对其他管道调整成功的等待依赖,相对而言,总体耗费的时间要短一些。例如承载网络带宽从10Gbps增加到50Gbps,假设收到指令的顺序是:PE1、PE5、PE3、PE2、PE4,则整个调整过程有7个阶段。对一些网络节点而言,可能存在上游管道带宽比下游管道带宽大的情况,但CE-PE1的管道最后增加,实际进入承载网络的有效流量是受约束的,可以保证在下游中间节点不存在报文数据的累积。不同设备之间的接口速率如下表所示:
  阶段 CE-PE1 PE1-PE2 PE2-PE3 PE3-PE4 PE4-PE5 PE5-CE
第一阶段 10Gbps 10Gbps 10Gbps 10Gbps 10Gbps 10Gbps
第二阶段 10Gbps 50Gbps 10Gbps 10Gbps 10Gbps 10Gbps
第三阶段 10Gbps 50Gbps 10Gbps 10Gbps 10Gbps 50Gbps
第四阶段 10Gbps 50Gbps 10Gbps 50Gbps 10Gbps 50Gbps
第五阶段 10Gbps 50Gbps 50Gbps 50Gbps 10Gbps 50Gbps
第六阶段 10Gbps 50Gbps 50Gbps 50Gbps 50Gbps 50Gbps
第七阶段 50Gbps 50Gbps 50Gbps 50Gbps 50Gbps 50Gbps
例如在第二阶段,仅PE1-PE2的管道带宽为50Gbps,则对PE2而言,其上游管道带宽比下游管道带宽大。但源CE在网络入口的业务有效流量必然小于10Gbps。此时PE1按照图3的实施例方法对业务进行速率调整,增加了大量空闲的填充单元,填充后的速率为50Gbps。业务被送达PE2,PE2的上游管道带宽为50Gbps,下游管道带宽为10Gbps,需要用图3的实施例的方法进行速率调整,删除过量插入的填充单元。只要源CE的入口带宽未增加,流入网络的有效业务数据就是受限的,不会在承载网络中任何节点造成有效数据的累积。
相反的过程,如果是减小带宽,则先减小源CE的入口管道带宽,限制有效业务流量,下游的各个管道带宽再同时并行进行缩小。
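集中控制下"各段管道乱序扩容、入口管道最后扩容"不会造成数据累积这一点,可以用下面的示意性 Python 草图验证(非规范实现,指令到达顺序与带宽数值沿用上表的假设):

```python
# 示意性草图:验证集中控制下各段管道乱序扩容时,
# 只要 CE-PE1 入口管道最后调整,每个阶段任一管道带宽
# 都不小于入口管道带宽,网络内不会发生有效数据累积。

pipes = ["CE-PE1", "PE1-PE2", "PE2-PE3", "PE3-PE4", "PE4-PE5", "PE5-CE"]
bw = {p: 10 for p in pipes}  # 初始各段均为 10Gbps

# 假设的扩容指令生效顺序(对应收到指令顺序 PE1、PE5、PE3、PE2、PE4,
# 各 PE 调整其下游管道;入口 CE-PE1 最后调整):
order = ["PE1-PE2", "PE5-CE", "PE3-PE4", "PE2-PE3", "PE4-PE5", "CE-PE1"]

for p in order:
    bw[p] = 50
    ingress = bw["CE-PE1"]  # 入口管道限制了进入网络的有效流量
    # 每个阶段:任一管道带宽 >= 入口带宽,中间节点不会积压有效数据
    assert all(bw[q] >= ingress for q in pipes)

print(sorted(set(bw.values())))  # 第七阶段全部调整为 50Gbps
```

减小带宽的情形与之对称:先缩小入口管道以限制有效流量,下游各段再并行缩小。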
需要说明的是,本发明实施例还可以运用于业务流中不存在空闲字节的业务类型,包括CPRI业务,OTN业务和SDH业务。
对CPRI业务而言,有如图13所示的多种物理接口选项。
当CPRI业务为64/66b编码格式时,例如CPRI line bit rate option 8,未编码前速率为20x 491.52Mbit/s,小于10GE的业务带宽,两者的速率差异大约为491.52:500,对于灵活以太网接口,2个5Gbps的时隙大约为10GE的带宽。该CPRI速率选项的码块流如下,不包含任何的空闲码块。
D T S D D D D T S D D D D T S D D
假设端到端的各段承载管道带宽均大约为10Gbps,即使有+/-100ppm频率差异,整体上管道的带宽大于CPRI的带宽,因此在入口PE1上,需要按照图3的实施例方法进行速率调整,主要为插入适量的空闲的填充码块,PE2、PE3等也需要按照图3的实施例方法进行速率调整,通过对塞入的填充码块进行合理增删以适配不同管道之间的+/-100ppm的速率差异。
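为了对"+/-100ppm 频差所需的填充码块增删量"有一个数量级概念,下面给出一个示意性的 Python 估算草图(假设性计算,非本发明的规范实现):

```python
# 示意性草图:估算为补偿上下游管道 +/-100ppm 频差,
# 每秒大约需要增删的 64/66b 填充码块数量(数量级估算)。

def ppm_fill_blocks_per_sec(line_rate_bps, ppm):
    block_bits = 66.0                        # 一个 64/66b 码块为 66 bit
    delta_bps = line_rate_bps * ppm * 1e-6   # 上下游速率偏差(bit/s)
    return delta_bps / block_bits            # 折算为码块数/秒

n = ppm_fill_blocks_per_sec(10e9, 100)       # 10Gbps 管道,100ppm 频差
print(round(n))                              # 约 1.5 万个码块/秒
```

可见 ppm 级频差对应的增删量远小于带宽阶跃调整时的增删量,二者可用同一机制处理。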
需要说明的是,在出口PE到末CE的链路上,当要求输出原生的CPRI信号时,需要完全删除所有的填充单元,按照原生数据格式要求,仅发送有效CPRI数据,即发送不包含插入的填充单元的原CPRI数据流。
当CPRI业务为8/10b编码格式时,例如CPRI line bit rate option 6,采用8/10b编码,可以先转化为图13中相同的64/66b码块的形式,再通过一个5Gbps的时隙管道进行端到端传输。调整过程采用上述CPRI业务为64/66b编码格式时的调整方式,具体此处不再赘述。
对OTN/SDH业务而言,OTN、SDH的数据一般以字节为单位,只要将每8个字节增加一个同步头,转化为一个64/66b的数据码块,则OTN/SDH业务均可以转化为下面的形式,即只包含数据码块,没有开始码块和结束码块。对于只包含数据码块的业务流,只能使用本发明实施例的调整方式进行空闲调整。现有IEEE 802.3的空闲增删机制不适用。OTN/SDH业务的码块流如下:
D D D D D D D D D D D D D D D D D D
上述实现了OTN/SDH业务的码块流的转化,转化完成的码块流可以在FlexE的时隙组或者OTN的ODUflex等管道中进行承载传输。当任何一段承载管道的带宽大于等于OTN/SDH业务的码块流的净带宽,则网络节点可以根据下游承载管道的带宽,对业务的数据流按照本发明实施例的方法增删填充单元。
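"每8个字节增加一个同步头,转化为64/66b数据码块"的过程可以用下面的示意性 Python 草图表示(假设数据码块同步头沿用 IEEE 802.3 64b/66b 的 '01' 约定,非规范实现):

```python
# 示意性草图:将 OTN/SDH 的字节流每 8 字节加一个同步头,
# 转化为只含数据码块(D)、不含开始/结束码块的 64/66b 码块流。

DATA_SYNC = "01"  # 假设:数据码块的 2bit 同步头

def bytes_to_66b_blocks(payload: bytes):
    assert len(payload) % 8 == 0, "OTN/SDH 字节流按 8 字节对齐"
    return [(DATA_SYNC, payload[i:i + 8])
            for i in range(0, len(payload), 8)]

blocks = bytes_to_66b_blocks(bytes(range(16)))
print(len(blocks))   # 2 个数据码块
print(blocks[0][0])  # '01':全部为数据码块,无 S/T 码块
```

由于该码块流中不存在空闲码块,速率适配只能依靠本发明实施例的填充单元增删方式。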
也可以将SDH/OTN业务,结合SDH/OTN的帧格式,转换成类似64/66b编码格式的其他帧格式数据流,并参考上述CPRI业务的处理方式进行处理,具体此处不再赘述。
对于OTN中业务从上游ODUflex到下游ODUflex的转传、速率调整适配和映射而言,图14为网络节点的上游管道和下游管道均为ODUflex的情形,ODUflex与FlexE的承载业务的时隙组类似,都是本发明实施例描述的网络节点上业务的上游管道和下游管道,作为接口间承载业务的管道使用。图14中端到端业务全部为OTN的ODUflex管道,网络节点对管道内部的业务进行交换,可以采用本发明实施例的方法进行速率调整,也可以采用IEEE 802.3的空闲增删机制进行速率调整。采用图3的实施例方法后,ODUflex的带宽调整可以实现阶跃性调整并保证业务无损。具体地,ODUflex也可以为常见的ODUk。图15是FlexE和OTN混合组网的情况,其中一条网络接口使用ODUflex,其他为灵活以太网接口的时隙、以及原生的CPRI接口。实际使用中,网络接口还可以是ODUk/OTN接口、SDH接口、传统以太网接口等,具体此处不做限定。
上面对本发明实施例中传输速率的调整方法进行了描述,下面对本发明实施例中的网络设备进行描述,请参阅图16,本发明实施例中网络设备的一个实施例包括:
图16是本发明实施例提供的一种网络设备结构示意图,该网络设备1600可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上处理器(central processing units,CPU)1610和存储器1620,以及一个或一个以上存储应用程序1633或数据1632的存储介质1630(例如一个或一个以上海量存储设备)。其中,存储器1620和存储介质1630可以是短暂存储或持久存储。存储在存储介质1630的程序可以包括一个或一个以上模块(图中未示出),每个模块可以包括对服务器中的一系列指令操作。更进一步地,处理器1610可以设置为与存储介质1630通信,在网络设备1600上执行存储介质1630中的一系列指令操作。网络设备1600还可以包括一个或一个以上电源1640,一个或一个以上有线或无线网络接口1650,一个或一个以上输入输出接口1660,和/或,一个或一个以上操作系统1631,例如Windows ServerTM,Mac OS XTM,UnixTM,LinuxTM,FreeBSDTM等等。本领域技术人员可以理解,图16中示出的网络设备结构并不构成对网络设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图16对网络设备的各个构成部件进行具体的介绍:
存储器1620可用于存储软件程序以及模块,处理器1610通过运行存储在存储器1620的软件程序以及模块,从而执行网络设备的各种功能应用以及数据处理。存储器1620可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据网络设备的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器1620可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在本发明实施例中提供的速率传输调整机制的程序和接收到的数据流存储在存储器1620中,当需要使用时,处理器1610从存储器1620中调用。
处理器1610是网络设备的控制中心,可以按照设置的调整机制调整传输速率。处理器1610利用各种接口和线路连接整个网络设备的各个部分,通过运行或执行存储在存储器1620内的软件程序和/或模块,以及调用存储在存储器1620内的数据,执行网络设备的各种功能和处理数据,从而实现对传输速率的调整。
可选的,处理器1610可包括一个或多个处理单元。
在本发明的实施例中,处理器1610用于执行图3中的步骤301至步骤304,此处不再赘述。
本发明实施例中,输入接口1650和输出接口1660用于控制输入输出的外部设备。处理器1610通过网络设备的内部总线与输入接口1650及输出接口1660连接,输入接口1650与输出接口1660分别与上下游的外部设备连接,最终实现处理器1610与上下游外部设备之间的数据传输,从而快速调整业务的输出速率以适应带宽按需改变的需求,并减少节点数据缓存、节点处理延迟和端到端传输延迟。
图16从硬件处理的角度分别对本发明实施例中的网络设备进行详细描述,下面从模块化功能实体的角度对本发明实施例中的网络设备进行详细描述,请参阅图17,本发明实施例中网络设备的一个实施例包括:
获取单元1701,用于获取目标数据流,目标数据流包含第一数据报文,第一数据报文包括至少两个非空闲单元;
第一调整单元1702,当需要进行带宽调整时,用于根据需要调整的带宽在任意两个非空闲单元之间插入或删除填充单元,填充单元用于适配网络设备的上游传输通道的带宽和下游传输通道的带宽的差值。
本发明实施例中,通过输入接口1650从上游设备获取目标数据流,将该目标数据流存储在存储器1620中,处理器1610调用存储于存储器1620的速率调整程序,再通过输出接口1660将处理后的目标数据流发送至下游设备,快速调整网络接口上下游的速率,以适应业务上下游带宽的差异,同时减少网络节点的数据缓存、网络节点处理延迟和端到端业务传输延迟。
可选的,网络设备可进一步包括:
第二调整单元1703,当需要进行带宽调整时,用于根据需要调整的带宽在第一数据报文和第一数据报文相邻的数据报文之间插入或删除填充单元。
可选的,第一调整单元1702可进一步包括:
第一调整模块17021,用于根据需要调整的带宽在任意两个非空闲单元之间插入或删除预置的填充码块,预置的填充码块通过码块类型字段指示,预置的填充码块用于适配网络设备的上游传输通道的带宽和下游传输通道的带宽的差值。
可选的,第一调整单元1702可进一步包括:
第二调整模块17022,用于根据需要调整的带宽在任意两个非空闲单元之间插入或删除典型空闲码块,典型空闲码块通过码块类型字段指示,典型空闲码块用于适配网络设备的上游传输通道的带宽和下游传输通道的带宽的差值。
本发明实施例中,描述了网络节点接收到目标数据流后,在网络节点上根据业务数据流的上下游的传输通道速率差异,进行高效的空闲增删速率调整,以适应业务上下游传输通道的不同速率差异情形,尤其是进行业务端到端传输带宽快速调整时出现上下游传输通道存在较大速率差异的情形,同时减少网络节点的数据缓存、网络节点处理延迟和端到端业务传输延迟。
请参阅图18,本发明实施例中网络设备的另一个实施例包括:
获取单元1701,用于获取目标数据流,目标数据流包含第一数据报文,第一数据报文包括至少两个数据单元;
第一调整单元1702,当需要进行带宽调整时,用于根据需要调整的带宽在任意两个非空闲单元之间插入填充单元,填充单元用于适配网络设备的上游传输通道的带宽和下游传输通道的带宽的差值。
可选的,网络设备可进一步包括:
第三调整单元1704,用于根据需要速率适配的速率差值在目标数据流中插入或删除填充单元,插入或删除的填充单元用于进行速率适配,速率适配的速率差值小于带宽的差值。
可选的,网络设备可进一步包括:
处理单元1705,用于删除填充单元并将删除后剩余的数据单元发送给下一个网络设备或用户设备。
本发明实施例中,描述了网络设备接收到目标数据流后,根据业务数据流的上下游的传输通道速率差异,对目标数据流插入或删除填充单元,快速调整网络节点上下游的速率,以适应业务上下游传输通道的速率差异,同时减少网络节点的数据缓存、网络节点处理延迟和端到端业务传输延迟。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (13)

  1. 一种传输速率的调整方法,其特征在于,包括:
    网络设备获取目标数据流,所述目标数据流包含第一数据报文,所述第一数据报文包括至少两个非空闲单元;
    当需要进行带宽调整时,根据所述需要调整的带宽在任意两个非空闲单元之间插入或删除填充单元,所述填充单元用于适配所述网络设备的上游传输通道的带宽和下游传输通道的带宽的差值。
  2. 根据权利要求1所述的调整方法,其特征在于,网络设备获取目标数据流之后,所述方法还包括:
    当需要进行带宽调整时,根据需要调整的带宽在所述第一数据报文和所述第一数据报文相邻的数据报文之间插入或删除所述填充单元。
  3. 根据权利要求1所述的调整方法,其特征在于,所述根据所述需要调整的带宽在任意两个非空闲单元之间插入或删除填充单元包括:
    根据所述需要调整的带宽在任意两个非空闲单元之间插入或删除预置的填充码块,所述预置的填充码块通过码块类型字段指示,所述预置的填充码块用于适配所述网络设备的上游传输通道的带宽和下游传输通道的带宽的差值。
  4. 根据权利要求1所述的调整方法,其特征在于,所述根据所述需要调整的带宽在任意两个非空闲单元之间插入或删除填充单元包括:
    根据所述需要调整的带宽在任意两个非空闲单元之间插入或删除典型空闲码块,所述典型空闲码块通过码块类型字段指示,所述典型空闲码块用于适配所述网络设备的上游传输通道的带宽和下游传输通道的带宽的差值。
  5. 根据权利要求1至4中任一项所述的调整方法,其特征在于,所述网络设备获取目标数据流之后,所述方法还包括:
    根据需要速率适配的速率差值在所述目标数据流中插入或删除填充单元,所述插入或删除的填充单元用于进行速率适配,所述速率适配的速率差值小于所述带宽的差值。
  6. 根据权利要求1至4中任一项所述的调整方法,其特征在于,所述网络设备获取目标数据流之后,所述方法还包括:
    删除所述填充单元并将删除后剩余的数据单元发送给下一个网络设备或用户设备。
  7. 一种网络设备,其特征在于,包括:
    获取单元,用于获取目标数据流,所述目标数据流包含第一数据报文,所述第一数据报文包括至少两个非空闲单元;
    第一调整单元,当需要进行带宽调整时,用于根据所述需要调整的带宽在任意两个非空闲单元之间插入或删除填充单元,所述填充单元用于适配所述网络设备的上游传输通道的带宽和下游传输通道的带宽的差值。
  8. 根据权利要求7所述的网络设备,其特征在于,所述网络设备还包括:
    第二调整单元,当需要进行带宽调整时,用于根据需要调整的带宽在所述第一数据报文和所述第一数据报文相邻的数据报文之间插入或删除所述填充单元。
  9. 根据权利要求7所述的网络设备,其特征在于,所述第一调整单元包括:
    第一调整模块,用于根据所述需要调整的带宽在任意两个非空闲单元之间插入或删除预置的填充码块,所述预置的填充码块通过码块类型字段指示,所述预置的填充码块用于适配所述网络设备的上游传输通道的带宽和下游传输通道的带宽的差值。
  10. 根据权利要求7所述的网络设备,其特征在于,所述第一调整单元包括:
    第二调整模块,用于根据所述需要调整的带宽在任意两个非空闲单元之间插入或删除典型空闲码块,所述典型空闲码块通过码块类型字段指示,所述典型空闲码块用于适配所述网络设备的上游传输通道的带宽和下游传输通道的带宽的差值。
  11. 根据权利要求7至10中任一项所述的网络设备,其特征在于,所述网络设备还包括:
    第三调整单元,用于根据需要速率适配的速率差值在所述目标数据流中插入或删除填充单元,所述插入或删除的填充单元用于进行速率适配,所述速率适配的速率差值小于所述带宽的差值。
  12. 根据权利要求7至10中任一项所述的网络设备,其特征在于,所述网络设备还包括:
    处理单元,用于删除所述填充单元并将删除后剩余的数据单元发送给下一个网络设备或用户设备。
  13. 一种网络设备,其特征在于,所述网络设备包括:
    输入接口、输出接口、处理器、存储器和总线;
    所述输入接口、所述输出接口、所述处理器、所述存储器通过所述总线连接;
    所述输入接口用于连接上游设备并获取输入结果;
    所述输出接口用于连接下游设备并输出结果;
    所述处理器用于从存储器中调用调整速率的程序并执行该程序;
    所述存储器用于存储接收到的数据流及调整速率的程序;
    所述处理器调用所述存储器中的程序指令,使得网络设备执行如权利要求1至6中任一项所述的传输速率的调整方法。
PCT/CN2017/097999 2016-12-23 2017-08-18 一种传输速率的调整方法及网络设备 WO2018113329A1 (zh)



