WO2018195728A1 - Client service transmission method and apparatus - Google Patents

Client service transmission method and apparatus

Info

Publication number
WO2018195728A1
Authority
WO
WIPO (PCT)
Prior art keywords
counter
transmission
customer service
preset threshold
bandwidth
Prior art date
Application number
PCT/CN2017/081729
Other languages
English (en)
French (fr)
Inventor
陈玉杰
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to JP2019556977A (JP6962599B2)
Priority to KR1020227006386A (KR102408176B1)
Priority to KR1020197033779A (KR102369305B1)
Priority to EP17907069.3A (EP3605975B1)
Priority to PCT/CN2017/081729 (WO2018195728A1)
Priority to CN201780038563.1A (CN109314673B)
Priority to TW106145618A (TWI680663B)
Publication of WO2018195728A1
Priority to US16/661,559 (US11785113B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 1/0001 Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L 1/0002 Systems modifying transmission characteristics according to link quality by adapting the transmission rate
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/50 Routing or path finding of packets using label swapping, e.g. multi-protocol label switch [MPLS]
    • H04L 45/502 Frame based
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/20 Traffic policing
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2408 Traffic characterised by specific attributes for supporting different services, e.g. a differentiated services [DiffServ] type of service
    • H04L 47/25 Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
    • H04L 47/28 Flow control; Congestion control in relation to timing considerations
    • H04L 47/50 Queue scheduling
    • H04L 47/52 Queue scheduling by attributing bandwidth to queues
    • H04L 47/527 Quantum based scheduling, e.g. credit or deficit based scheduling or token bank
    • H04L 47/56 Queue scheduling implementing delay-aware scheduling
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/62 Establishing a time schedule for servicing the requests

Definitions

  • the present application relates to the field of data transmission technologies, and in particular, to a client service transmission method and apparatus.
  • an existing transport node transmits customer services based on a best-effort delivery mechanism. For example, if client device A sends a customer service to client device B through a transit node, and the output line interface bandwidth of the transit node is 100 Gbps (gigabits per second), the rate at which the transit node outputs the customer service (i.e., the output rate of the customer service) may approach 100 Gbps.
  • in this case, if the customer service input to the transmission node within one second is 10 Mbit, the transmission time of the customer service in the transmission node may theoretically be 10M/100Gbps; if the customer service input to the transmission node within one second is 100 Mbit, the transmission time of the customer service in the transmission node may theoretically be 100M/100Gbps.
  • the embodiment of the present application provides a client service transmission method and device, which are used to reduce the probability of congestion on an output line interface, and even avoid congestion.
  • the application provides a method for transmitting a customer service, including: receiving a customer service; the customer service includes a plurality of data blocks, and the customer service corresponds to a counter, and the counter is used to control an output rate of the customer service. Then, a plurality of data blocks are transmitted in a plurality of transmission periods; wherein, in each transmission period, when the count value of the counter reaches a preset threshold, at least one of the plurality of data blocks is transmitted.
  • the execution body of the method may be a transport node.
  • based on this, the output rate of the data blocks can be controlled, that is, the output rate of the customer service is controlled. If the output rate of every customer service input to the transmission node is controlled according to this technical solution, it helps to reduce the probability of congestion on the output line interface, and congestion may even be avoided.
  • the method may further include: increasing the count value of the counter by C in each counting period of the counter, wherein C is less than or equal to the preset threshold.
  • C may be determined according to the bandwidth of the customer service, and the preset threshold may be determined according to the output line interface bandwidth.
  • the specific implementation is not limited to this.
  • C may be the ratio of the bandwidth of the customer service to the unit bandwidth.
  • the preset threshold may be a ratio of the output line interface bandwidth to the adjustment value of the unit bandwidth, where the adjustment value of the unit bandwidth is greater than or equal to the unit bandwidth.
  • the counter counts from the initial value during each transmission cycle.
  • the initial value of the counter may be the same for different transmission periods, for example, the initial value is 0.
  • the initial value of the counter may be different for different transmission periods.
  • in the (i+1)th transmission period, the initial value of the counter is the value obtained by subtracting the preset threshold from the count value of the counter at the end of the ith transmission period, where i is an integer greater than or equal to 1.
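  • restated as a formula (with P denoting the preset threshold and count_i the count value of the counter at the end of the ith transmission period), this carry-over can be written as initial_(i+1) = count_i - P.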
  • the specific implementation is not limited to this.
  • transmitting at least one of the plurality of data blocks may include: in each transmission period, when the count value of the counter reaches the preset threshold, if the customer service is cached, transmitting at least one of the plurality of data blocks.
  • the method may further include: in each transmission period, when the count value of the counter reaches the preset threshold, if the customer service is not cached, stopping the counter. Based on this, the method may further include: transmitting at least one of the plurality of data blocks once the customer service is cached, after which the counter starts counting from the initial value again. Alternatively, after the customer service is received, at least one data block may be sent directly without buffering it.
  • the method may further include: storing the customer service into the cache queue; and acquiring at least one data block from the cache queue when the counter value reaches a preset threshold.
  • Each client service can correspond to a cache queue.
  • the packet loss rate can be reduced during the control of the output rate of the data block.
  • optionally, each of the plurality of data blocks has a fixed length. This has the advantage of simple implementation. Of course, the lengths of different data blocks may also differ in actual implementation.
  • sending at least one of the plurality of data blocks may include: transmitting at least one of the plurality of data blocks according to the priority of the customer service, wherein the smaller the expected transmission delay, the higher the priority.
  • the implementation method considers the expected transmission delay when outputting different customer services, so as to better meet the requirements of different customer services for transmission delay, thereby improving the user experience.
  • the present application provides a client service transmission device having the function of implementing the steps in the foregoing method embodiments.
  • this function can be implemented by hardware, or by hardware executing the corresponding software.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • the apparatus can include: a processor, a memory, a bus, and a communication interface; the memory is configured to store computer-executable instructions, and the processor, the memory, and the communication interface are connected through the bus. When the apparatus is in operation, the processor executes the computer-executable instructions stored in the memory to cause the apparatus to perform any of the client service transmission methods provided by the first aspect above.
  • the present application provides a computer readable storage medium for storing computer program instructions for use by the above apparatus, which, when executed on a computer, enable the computer to execute any of the client service transmission methods provided by the first aspect above.
  • the present application provides a computer program product comprising instructions that, when executed on a computer, cause the computer to perform any of the customer service transmission methods provided by the first aspect above.
  • FIG. 1 is a schematic diagram of a system architecture applicable to a technical solution provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of a client service transmission apparatus according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an upper and lower node according to an embodiment of the present disclosure.
  • FIG. 3a is a schematic structural diagram of another upper and lower node according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of a transmission node according to an embodiment of the present application.
  • FIG. 4a is a schematic structural diagram of a line processing unit according to an embodiment of the present application.
  • FIG. 4b is a schematic structural diagram of another transmission node according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of another system architecture applicable to the technical solution provided by the embodiment of the present application.
  • FIG. 6 is a schematic diagram of interaction of a client service transmission method according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a data block according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a process for replacing a label according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a processing procedure of an upper and lower node according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a processing procedure of a transit node according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic flowchart diagram of a rate monitoring method according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of a process of a rate monitoring method according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic flowchart of a policy scheduling method according to an embodiment of the present disclosure.
  • FIG. 14 is a schematic structural diagram of another client service transmission apparatus according to an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a system architecture to which the technical solution provided by the present application is applicable.
  • the system architecture can include: a bearer network device and a plurality of client devices 100.
  • one or more client devices transmit customer services to another one or more client devices via the bearer network device.
  • the client device that sends the client service is hereinafter referred to as the sender client device
  • the client device that receives the client service is referred to as the receiver client device. It can be understood that the same client device can be used as both a sender client device and a receiver client device.
  • the bearer network device may include: upper and lower nodes 200 connected to the sender client device or the receiver client device, and one or more transit nodes 300 disposed between the upper and lower nodes 200. Any one or more of the transit nodes 300 may be integrated with any one or more of the upper and lower nodes 200, or may be provided independently. In the present application, the upper and lower nodes 200 and the transmission nodes 300 are described as being provided independently, as an example.
  • Customer services may include, but are not limited to, Ethernet client services, synchronous digital hierarchy (SDH) client services, storage services, video services, and the like.
  • a sender client device can transmit one or more client services to one or more recipient client devices. Multiple sender client devices can transmit one or more client services to the same receiver client device.
  • the system shown in FIG. 1 may be an optical network, and may specifically be an access network, such as a passive optical network (PON); or a transport network, such as an optical transport network (OTN); or a packet network, such as a packet switching network, etc.
  • the client device 100 may include, but is not limited to, any of the following: a switch, a router, a computer, a data center, a base station, and the like.
  • Both the upper and lower nodes 200 and the transit node 300 may include, but are not limited to, any of the following: an OTN device, a router, and the like.
  • the upper and lower nodes 200 may be configured to receive a customer service, in the form of a data packet or a continuous data stream, sent by the sender client device, divide the customer service into multiple data blocks (slices), and exchange the plurality of data blocks to the corresponding transmission node 300 according to the routing information. Alternatively, they receive the data blocks sent by the transmission node 300, restore the plurality of data blocks belonging to the same customer service into the form of a data packet or a continuous data stream, and then send it to the corresponding receiver client device.
  • each of the upper and lower nodes 200 can include one or more input/output ends. Each input/output end can be connected to one or more client devices 100 (including a sender client device or a receiver client device) or to one or more transit nodes 300.
  • the transmitting node 300 can be configured to forward the data block to the other transit node 300 or the upper and lower nodes 200 according to the routing information.
  • Each of the transfer nodes 300 can include one or more input/output terminals.
  • each input/output end can be connected to one or more of the transit nodes 300 or to one or more of the upper and lower nodes 200. It should be noted that the output line interface below may be considered as the output end of the transmission node 300.
  • the input of any of the upper and lower nodes 200 and the transfer node 300 may serve as an output of the device in other scenarios.
  • the input end or the output end is related to the path in the current transmission service process, and the path may be determined according to the routing information.
  • the routing information of the client service may be configured by the control plane and sent to the routing nodes (including the upper and lower nodes 200 and the transit node 300) on the path.
  • the specific implementation process may refer to the prior art.
  • the control plane may be a function module integrated in any of the upper and lower nodes 200 and the transmission nodes 300, or may be a device independent of the upper and lower nodes 200 and the transmission nodes 300, which is not limited in this application.
  • Each output corresponds to a line interface bandwidth, which is used to characterize the carrying capacity of the output.
  • FIG. 1 is only an example of a system architecture to which the present application is applied.
  • the number and connection relationship of the client device 100, the upper and lower nodes 200, and the transit node 300 included in the system architecture are not limited.
  • the network layout can be performed according to the actual application scenario.
  • a schematic structural diagram of any one or more of the upper and lower nodes 200 and the transmission nodes 300 in FIG. 1 is shown in FIG. 2.
  • the apparatus shown in FIG. 2 may include at least one processor 21, and a memory 22, a communication interface 23, and a communication bus 24.
  • the processor 21 is a control center of the device, and may be a processing component or a collective name of a plurality of processing components.
  • the processor 21 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the technical solution provided by the embodiments of the present application, such as one or more digital signal processors (DSPs) or one or more field programmable gate arrays (FPGAs).
  • the processor 21 can perform various functions of the device by running or executing a software program stored in the memory 22 and calling data stored in the memory 22.
  • the processor 21 may include one or more CPUs, such as CPU0 and CPU1 shown in FIG. 2.
  • the device can include multiple processors, such as the processor 21 and the processor 25 shown in FIG. 2.
  • processors can be a single core processor (CPU) or a multi-core processor (multi-CPU).
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data, such as computer program instructions.
  • the memory 22 can be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • Memory 22 may be present independently and coupled to processor 21 via communication bus 24.
  • the memory 22 can also be integrated with the processor 21.
  • the memory 22 is configured to store a software program executed by the device in the technical solution provided by the embodiment of the present application, and is controlled by the processor 21.
  • the communication interface 23 may be any transceiver-type device (for example, an optical receiver or an optical module) for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), a wireless local area network (WLAN), and the like.
  • the communication interface 23 may include a receiving unit that implements a receiving function, and a transmitting unit that implements a transmitting function.
  • the communication bus 24 may be an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, or an extended industry standard architecture (EISA) bus.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 2, but it does not mean that there is only one bus or one type of bus.
  • a schematic structural diagram of the upper and lower nodes 200 in FIG. 1 may be as shown in FIG. 3.
  • the upper and lower nodes 200 shown in FIG. 3 may include one or more branch processing units 31, a service switching unit 32, and one or more line processing units 33.
  • the branch processing unit 31 can be configured to receive the client service sent by the sender client device through the input end of the upper and lower nodes 200, and divide the received client service into data blocks, for example, divide the client service into fixed-length data blocks. The data block is then exchanged to the corresponding line processing unit 33 via the service switching unit 32.
  • how the service switching unit 32 exchanges the customer service transmitted from one branch processing unit 31 to which line processing unit 33 is not limited in the present application.
  • the specific implementation process may refer to the prior art.
  • the line processing unit 33 can be configured to output the received data blocks from the upper and lower node 200 via the output end of the upper and lower node 200.
  • the branch processing unit 31 may be a tributary board 31a, the service switching unit 32 may be a cross board 32a, and the line processing unit 33 may be a circuit board 33a. The tributary board 31a, the cross board 32a, and the circuit board 33a can all be connected to the main control board 34a, as shown in FIG. 3a.
  • the main control board 34a is a control center of the upper and lower nodes 200 for controlling the tributary board 31a, the cross board 32a, and the circuit board 33a to perform the corresponding steps in the method provided by the present application.
  • a tributary board 31a and a circuit board 33a are connected to the main control board 34a in Fig. 3a.
  • each tributary board 31a and each circuit board 33a can be connected to the main control board 34a.
  • the upper and lower nodes 200 may further include a supplemental light processing unit 35a, where the supplemental light processing unit 35a may include: an optical amplification unit (OA), an optical multiplexing unit (OM), an optical de-multiplexing unit (OD), an optical supervisory channel unit (SCI), a fiber interface unit (FIU), and the like.
  • a schematic structural diagram of the transmission node 300 in FIG. 1 may be as shown in FIG. 4.
  • the transmission node 300 shown in FIG. 4 may include one or more source line processing units 41, a service switching unit 42, and one or more destination line processing units 43.
  • the source line processing unit 41 may be configured to receive, via the input end of the transmission node 300, the data blocks sent by the upper and lower node 200 or by another transmission node 300; the data blocks are then exchanged to the corresponding destination line processing unit 43 by the service switching unit 42. How the service switching unit 42 exchanges the customer service transmitted from one source line processing unit 41 to which destination line processing unit 43 is not limited in this application, and the specific implementation process may refer to the prior art.
  • the destination line processing unit 43 can be configured to output the received customer service from the transmission node 300 via the output end of the transmission node 300.
  • the destination line processing unit 43 may include a queue buffer module 431 and a rate supervision module 432; optionally, a policy scheduling module 433 and a rate adaptation module 434 may also be included.
  • An example of the connection relationship between the modules is shown in FIG. 4, and the functions of each module can be referred to below.
  • the source line processing unit 41 may be a source line board 41a, the service switching unit 42 may be a cross board 42a, and the destination line processing unit 43 may be a destination line board 43a. The source line board 41a, the cross board 42a, and the destination line board 43a can be connected to the main control board 44a, as shown in FIG. 4b.
  • the main control board 44a is the control center of the transmission node 300 and controls the source line board 41a, the cross board 42a, and the destination line board 43a to perform the corresponding steps in the method provided by the present application.
  • only one source line board 41a and one destination line board 43a are shown connected to the main control board 44a in FIG. 4b; in actual implementation, each source line board 41a and each destination line board 43a can be connected to the main control board 44a.
  • the transfer node 300 may further include: a supplemental light processing unit 45a, wherein a specific implementation of the supplemental light processing unit 45a may refer to the supplemental light processing unit 35a.
  • “first”, “second”, and the like are used herein to distinguish different objects and do not limit the order.
  • for example, the first upper and lower node and the second upper and lower node are only for distinguishing different upper and lower nodes, and do not limit the order of the first upper and lower node and the second upper and lower node.
  • the system architecture shown in FIG. 5 is a specific implementation of the system architecture shown in FIG. 1.
  • the client device 1 transmits the client service 1 to the client device 4 via the upper and lower nodes 1, the transfer node 1, and the upper and lower nodes 2.
  • the client device 2 transmits the client service 2 to the client device 5 via the upper and lower nodes 1, the transfer node 1, and the upper and lower nodes 2.
  • the client device 3 transmits the client service 3 to the client device 5 via the upper and lower nodes 1, the transfer node 2, and the upper and lower nodes 3.
  • the technical solution provided by the present application can also be applied to a scenario in which a transmitting client device sends multiple client services to an upper node.
  • the basic principles can be referred to the following description.
  • in the following, the operations of buffering, rate supervision, policy scheduling, rate adaptation, and the like on the data blocks of the customer service are described as being applied to the transmission node as an example. In actual implementation, one or more of these operations may also be applied to the upper and lower nodes (including the first upper and lower node and/or the second upper and lower node); the specific implementation process may refer to the description applied to the transmission node, and is not described herein again.
  • FIG. 6 is a schematic diagram of interaction of a client service transmission method according to an embodiment of the present application.
  • the method shown in FIG. 6 may include the following steps S101 to S104:
  • S101 A plurality of sender client devices send client services to the first upper and lower nodes. Customer services are transmitted in the form of data packets or continuous data streams.
  • the first upper and lower nodes may be the upper and lower nodes 1 in FIG. 5, which will be described below as an example.
  • the plurality of sender client devices may be any plurality of sender client devices connected to the upper and lower node 1. Any one or more of the plurality of sender client devices may continuously or intermittently send the customer service to the upper and lower node 1.
  • S101 may include: client device 1 sends customer service 1 to the upper and lower node 1, client device 2 sends customer service 2 to the upper and lower node 1, and client device 3 sends customer service 3 to the upper and lower node 1.
  • optionally, each sender client device applies to the control plane for the bandwidth of the customer service (for example, customer services 1, 2, and 3), so that the control plane controls the upper and lower node 1 to reserve a certain amount of bandwidth for the sender client device to transmit the customer service.
  • the bandwidth of the customer service may be determined by the sending client device according to requirements, such as the size of the client service to be transmitted, the expected transmission delay requirement, and the like. When the same client device sends different client services, the bandwidth of the client service may be the same or different. When different client devices send the same client service, the bandwidth of the client service may be the same or different. This application does not limit this.
  • a unit bandwidth (i.e., a minimum bandwidth granularity) may be defined, and each sender client device can set the bandwidth of the customer service to an integer multiple of the unit bandwidth. For example, assuming a unit bandwidth of 2 Mbps, the bandwidth of the customer service may be n*2 Mbps, where n may be an integer greater than or equal to one.
  • the first upper and lower node receives the customer services sent by the multiple client devices, divides the received customer services into fixed-length data blocks, generates one slice for each data block, and then, according to the routing information, outputs each slice to the corresponding transmission node.
  • the step S102 can be understood as: the upper and lower node 1 maps the received customer services into bearer containers, and each bearer container is used to carry one data block. It can be understood that the "bearer container" is a logical concept proposed to describe the process of dividing the data blocks more vividly, and may not actually exist.
  • S102 may include: the upper and lower node 1 receives customer service 1 sent by client device 1, customer service 2 sent by client device 2, and customer service 3 sent by client device 3, divides customer services 1, 2, and 3 into fixed-length data blocks, generates one slice for each data block, and then outputs each slice of customer service 1 to the transmission node 1, each slice of customer service 2 to the transmission node 1, and each slice of customer service 3 to the transmission node 2.
  • the upper and lower nodes 1 may divide the received data packet or the continuous data stream into data blocks in accordance with the order of reception time in the case of receiving one or more data packets or continuous data streams.
  • Each data block can be a fixed length, that is, the upper and lower nodes 1 divide any received client service sent by any of the transmitting client devices into fixed-length data blocks.
  • the data block is fixed length as an example for description. In actual implementation, the lengths of different data blocks may be unequal.
  • the “data block” herein includes the customer service itself and, optionally, may also include some accompanying information of the customer service.
  • for example, if the bandwidth of the customer service is 2 Mbps and each data block carries a 123-byte payload, the average rate of the data blocks input to the upper and lower node 1 is (2 Mbps)/(123 Bytes) data blocks per second; that is, when the branch processing unit of the upper and lower node 1 forms bearer containers, an average of (2 Mbps)/(123 Bytes) data blocks are generated per second. If the bandwidth of the customer service is 4 Mbps, the average rate of the data blocks input to the upper and lower node 1 is (4 Mbps)/(123 Bytes) data blocks per second.
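  • as a rough arithmetic illustration only (not taken from the patent text), the block rates implied above can be computed as follows:

        # average number of 123-byte payloads produced per second for a given service bandwidth
        def blocks_per_second(bandwidth_bps: float, payload_bytes: int = 123) -> float:
            return bandwidth_bps / 8 / payload_bytes

        blocks_per_second(2_000_000)   # 2 Mbps -> about 2032 data blocks per second
        blocks_per_second(4_000_000)   # 4 Mbps -> about 4065 data blocks per second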
  • in order to identify the data blocks, in an embodiment of the present application, after dividing the received data packet or continuous data stream into fixed-length data blocks, the upper and lower node 1 can add a label to each data block.
  • the transmission format of different types of client services sent by the client device to the upper and lower nodes 1 may be different.
  • the present application provides a format of a data block, as shown in FIG. 7.
  • the format of any type of customer service transmitted to the upper and lower node 1 can be converted into the format shown in FIG. 7.
  • the format of the data block may include: a label and a payload area.
  • a cyclic redundancy check (CRC) area may also be included in the format of the data block.
  • the tag can be a global tag or a line interface local tag.
  • the global label can be a label that is identifiable by each device in the system.
  • the line interface local tag can be a tag that can be recognized by two devices that communicate directly. Compared with the global label, the number of bits occupied by the local label of the line interface is small.
  • the label is a local label of the line interface as an example.
  • the tags of the data block can also be used to distinguish different customer services.
  • the label of the data block can be configured at the control plane.
  • the payload area is used to carry the customer service itself, and optionally can also be used to carry some of the accompanying information of the customer service.
  • the CRC area is used to carry check bits, and the check bits can be used to verify the integrity of the information carried in the payload area.
  • the integrity check can also be implemented in other ways, and is not limited to the CRC.
  • the size of the label, the payload area, and the CRC area is not limited in this application.
  • the description is made by taking an example in which the tag occupies 4 bytes (Byte), the payload area occupies 123 Bytes, and the CRC occupies 1 Byte.
  • in the present application, a data block with its label added (as shown in FIG. 7) is called a “slice”.
  • each data block corresponds to one slice, and each data block can be considered to be the information carried in the payload area of its corresponding slice.
  • for example, the processing of one data block by the upper and lower node 1 may include: the upper and lower node 1 adds a label a to the data block, then replaces label a with label b according to the routing information, and recalculates the CRC.
  • the replacement label may be performed by any one of the branch processing unit, the service switching unit, or the line processing unit in the upper and lower nodes 1.
  • the specific implementation process may be as shown in FIG. 8.
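  • as an illustrative sketch only, the 128-byte slice described above (a 4-byte label, a 123-byte payload area, and a 1-byte check area) might be assembled and relabeled as follows; the CRC-8 polynomial, the helper names, and the assumption that the check bits cover only the payload area are assumptions for illustration, not details taken from the patent:

        LABEL_LEN, PAYLOAD_LEN, CRC_LEN = 4, 123, 1       # sizes from the example above

        def crc8(data: bytes) -> int:
            # simple CRC-8 (polynomial 0x07), used here only as a stand-in check
            crc = 0
            for byte in data:
                crc ^= byte
                for _ in range(8):
                    crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
            return crc

        def build_slice(label: int, payload: bytes) -> bytes:
            assert len(payload) == PAYLOAD_LEN
            return label.to_bytes(LABEL_LEN, "big") + payload + bytes([crc8(payload)])

        def replace_label(slice_: bytes, new_label: int) -> bytes:
            # e.g. replace label a with label b according to the routing information,
            # recalculating the check byte as described above
            payload = slice_[LABEL_LEN:LABEL_LEN + PAYLOAD_LEN]
            return build_slice(new_label, payload)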
  • FIG. 9 is a schematic diagram of an implementation process of S102, taking as an example that a branch processing unit 31 of the upper and lower node 1 divides a received data packet or continuous data stream into data blocks and generates a slice for each data block.
  • the transmission node receives the slices of the customer services sent by the first upper and lower node, and performs an exchange operation on the received slices according to the routing information. The exchanged slices are then cached according to the customer service to which they belong, wherein each customer service corresponds to one cache queue. Rate supervision is then performed independently on the slices of each customer service, operations such as policy scheduling and rate adaptation are performed on the rate-supervised customer services, and the slices after the above operations are transmitted to the next routing node.
  • the next routing node may be the next transit node or the second upper and lower nodes. If the next routing node is the second upper and lower nodes, then S104 is performed. If the next routing node is a transit node, the transmitting node continues to execute S103, ... until the next routing node is the second upper and lower nodes, and then S104 is performed.
  • the cache space is shared by all cache queues.
  • a cache queue can be allocated for the client service.
  • the second upper and lower nodes may include upper and lower nodes 2 and upper and lower nodes 3.
  • S103 may include: the transmission node 1 receives the slices of customer services 1 and 2 sent by the upper and lower node 1 and transmits the slices of customer services 1 and 2 to the upper and lower node 2; and the transmission node 2 receives the slices of customer service 3 sent by the upper and lower node 1 and transmits the slices of customer service 3 to the upper and lower node 3.
  • the transmitting node performs the switching operation on the received slice according to the routing information, which may include: the transmitting node determines the next routing node of the slice according to the routing information, and then performs the switching operation.
  • if the label of the data block is a line interface local label, the transmission node needs to replace the label when performing the exchange operation. For example, the transmission node 2 can replace label b with label c during the exchange operation and, optionally, recalculate the CRC of the data block.
  • the tag b is a tag identifiable by the upper and lower nodes 1 and the transit node 2
  • the tag c is a tag identifiable by the transit node 2 and the upper and lower nodes 3.
  • optionally, the transit node may perform an integrity check according to the CRC included in the slice, and perform rate supervision, policy scheduling, and rate adaptation if the verification succeeds. If the verification fails, the slice is discarded.
  • the switching operation can be performed by the service switching unit 42
  • the buffering operation can be performed by the queue buffering module 431
  • the rate monitoring operation can be performed by the rate monitoring module 432
  • the policy scheduling operation can be performed by the policy scheduling module 433, and the rate adaptation operation can be performed by the rate adaptation module 434.
  • FIG. 10 is a schematic diagram of an implementation process of S103, using as an example a plurality of customer services that are input through two input ends of a transmission node and transmitted to one output end, wherein each small rectangular square represents a slice, each shading pattern represents a different customer service, and each blank square in a cache queue indicates a position in which no slice has yet been stored. Queue caching and rate supervision are performed independently for each customer service; after the multiple customer services are uniformly subjected to policy scheduling and rate adaptation, they are output from the output end.
  • the second upper and lower node receives the slices of the customer service sent by the transit node, obtains the data block in each slice, restores the data blocks of the same customer service into the form of a data packet or a continuous data stream according to the order of receiving time, and then sends the data packet or continuous data stream to the corresponding receiver client device.
  • the receiving client device receives the client service sent by the second upper and lower nodes.
  • S104 may include: the upper and lower node 2 receives the slices of customer service 1 sent by the transmission node 1, restores the slices of customer service 1 into the form of a data packet or a continuous data stream, and sends it to the client device 4.
  • the client device 4 receives the client service 1 sent by the upper and lower nodes 2.
  • the upper and lower nodes 2 receive the slice of the customer service 2 transmitted by the transmission node 1, and then restore the slice of the customer service 2 into the form of a data packet or a continuous data stream, and transmit it to the client device 5.
  • the upper and lower nodes 3 receive the slice of the customer service 3 transmitted by the transmission node 2, and then restore the slice of the customer service 3 into the form of a data packet or a continuous data stream, and transmit it to the client device 5.
  • the client device 5 receives customer service 2 sent by the upper and lower node 2 and customer service 3 sent by the upper and lower node 3.
  • optionally, the second upper and lower node may perform an integrity check according to the CRC included in the slice, and if the verification succeeds, delete the label in the slice to obtain the data block in the slice.
  • a schematic diagram of a specific implementation process of S104 may be the inverse of the process shown in FIG. 9.
  • rate supervision, policy scheduling, and rate adaptation in S103 are described below.
  • Rate regulation is a technology provided by the present application to control the output rate of a customer service, which helps reduce the probability of congestion and even eliminates congestion.
  • multiple counters are set in the transit node, and each client service can correspond to one counter, and each counter is used to control the output rate of the client service.
  • a plurality of data blocks are transmitted in a plurality of transmission periods. Wherein, in each transmission period, when the count value of the counter reaches a preset threshold, at least one of the plurality of data blocks is transmitted.
  • the counter can be implemented by software or hardware, which is not limited in this application.
  • the transit node can independently rate-regulate each customer service.
  • the rate regulation of the output rate of a client service is taken as an example.
  • the transmission node (specifically, the rate supervision module 432 in the transmission node) can perform the following steps S201 to S205:
  • S201 Increase the counter value of the counter by C in each counting period of the counter; wherein C is less than or equal to a preset threshold.
  • S203 Determine whether the cache queue of the customer service corresponding to the counter caches data blocks of the customer service.
  • the transmission node may continuously or intermittently receive the customer service sent by the first upper and lower node, or regularly receive the customer service sent by other transmission nodes. Therefore, if the cache queue does not cache the customer service when S203 is executed, then when S203 is executed again after S205, the cache queue may by then have cached the customer service, so that S204 can be performed.
  • optionally, a duration may also be set. Thus, if, after this duration has elapsed from when the counter stops counting, the cache queue still does not cache the customer service when S203 is executed, the transmission of the customer service may be considered to have ended.
  • the specific value of the duration is not limited in the present application.
  • the above S201 to S205 describe a rate supervision process of a transmission period.
  • the counter counts from the initial value during each transmission cycle.
  • the initial values of different transmission periods may be the same or different.
  • optionally, the initial value of the counter in the (i+1)th transmission period is the value obtained by subtracting the preset threshold from the count value of the counter at the end of the ith transmission period, where i is an integer greater than or equal to 1.
  • the initial value of the counter is a fixed value less than a preset threshold for each transmission cycle, and may be, for example, zero.
  • the transmission period refers to the time interval between two adjacent transmissions of data blocks, where one or more data blocks can be sent in each transmission.
  • the transmission period may not be a preset value; it is related to the count value of the counter and, further, to whether or not the customer service is cached when the count value reaches the preset threshold.
  • for example, if the cache queue caches the customer service whenever the count value reaches the preset threshold, the transmission periods are equal. If the cache queue does not cache the customer service when the count value reaches the preset threshold, and the counter stops counting for the same length of time in two transmission periods, the two transmission periods are equal; if the counter stops counting for different lengths of time, the two transmission periods are not equal.
  • the rate supervision process provided by the present application can be understood as follows: when the count value of the counter reaches the preset threshold, the customer service has a transmission opportunity, that is, the transmission node has one opportunity to send data blocks of the customer service.
  • Each transmission opportunity can transmit at least one data block, based on which, the output rate of the data block is controlled, that is, the output rate of the customer service is controlled.
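  • the counter-driven behaviour described in S201 to S205 might be sketched as follows; this is an illustrative sketch under the stated assumptions, and the class name, the per-pulse callback, the one-block-per-opportunity default, and the carry-over of the excess count are assumptions based on this description rather than the actual implementation:

        from collections import deque

        class RateSupervisor:
            """Per-customer-service counter that releases data blocks only when
            the count value reaches the preset threshold (a transmission opportunity)."""

            def __init__(self, c: int, preset_threshold: int, blocks_per_opportunity: int = 1):
                assert c <= preset_threshold
                self.c = c                              # increment per counting period (S201)
                self.p = preset_threshold               # preset threshold P
                self.count = 0                          # initial value of the counter
                self.blocks_per_opportunity = blocks_per_opportunity
                self.queue = deque()                    # cache queue of this customer service

            def enqueue(self, block: bytes) -> None:
                self.queue.append(block)                # slices of this service are buffered here

            def on_counting_period(self) -> list:
                """Called once per counting period (e.g. per pulse); returns the data
                blocks to transmit in this period, possibly none."""
                if self.count < self.p:
                    self.count += self.c                # S201: count value increases by C
                if self.count < self.p:
                    return []                           # threshold not yet reached
                if not self.queue:
                    return []                           # service not cached: counter stops counting
                sent = [self.queue.popleft()            # assumed S204: send at least one data block
                        for _ in range(min(self.blocks_per_opportunity, len(self.queue)))]
                self.count -= self.p                    # next period starts from count - P (carry-over)
                return sent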
  • the number of data blocks sent in each transmission cycle may or may not be equal.
  • the concept of "data block transmission period" is introduced in the present application, wherein the data block transmission period may include one or more transmission periods, and data blocks transmitted in each data block transmission period. The number is equal.
  • each data block transmission period includes two transmission periods, one data block is transmitted in one transmission period of the two transmission periods, and two data blocks are transmitted in another transmission period.
  • for example, the number of data blocks transmitted in successive transmission periods may be: 1, 2, 1, 2, 1, 2, ..., or 1, 2, 2, 1, 1, 2, 1, 2, 2, 1, 1, .... It can be seen from this example that, by controlling the number of data blocks sent in each transmission period, the number of data blocks transmitted in each data block transmission period can be made the same, thereby ensuring that the output rate of the data blocks is constant at the granularity of the data block transmission period. It can be understood that the actual output rate of the data blocks is smaller than this constant output rate. Therefore, the constant output rate can be controlled by controlling the duration of each transmission period and the number of data blocks transmitted in each transmission period, which helps to reduce the probability of congestion on the output line interface.
  • in another example, each data block transmission period includes one transmission period, and two data blocks are transmitted in this transmission period. In this case, the number of data blocks transmitted in a plurality of transmission periods may be: 2, 2, 2, .... It can be understood that in this example the data block transmission period is equal to the transmission period. Therefore, by controlling the length of each transmission period and the number of data blocks transmitted per transmission period, the constant output rate can be controlled, thereby contributing to reducing the probability of congestion on the output line interface.
  • the counting period is the time required for each counter to be updated.
  • the counting period can be realized by a pulse signal.
  • the pulse period is equal to the counting period, the counter value of the counter is increased by C every pulse period.
  • the counting period can also be greater than the data block transmission time of the line interface.
  • Mode 1: C is a number of counts. In this case, the count value of the counter is incremented by one in each counting period.
  • the value of the preset threshold can be obtained according to the value in the following mode 2, and details are not described herein again. Of course, it can also be obtained by other means, which is not limited in this application.
  • Mode 2: C is determined based on the bandwidth of the customer service.
  • the preset threshold is determined according to the output line interface bandwidth.
  • C is a ratio of bandwidth of the customer service to the unit bandwidth.
  • the preset threshold is a ratio of the output line interface bandwidth to the adjustment value of the unit bandwidth, where the adjustment value of the unit bandwidth is greater than or equal to the unit bandwidth. It should be noted that, for ease of implementation, the preset threshold may be set to an integer. In this case, if the ratio of the output line interface bandwidth to the adjustment value of the unit bandwidth is a non-integer, the preset threshold may be taken as an integer obtained by rounding that non-integer. Certainly, the preset threshold may also be set to a non-integer, which is not limited in this application.
  • in other words, the preset threshold may be less than or equal to the ratio of the output line interface bandwidth (i.e., the line physical rate) to the unit bandwidth, where the adjustment value of the unit bandwidth is the unit bandwidth multiplied by an acceleration factor. For example, if the unit bandwidth is 2 Mbps and the bandwidth of the customer service is 10 Mbps, then C can be 10 Mbps / 2 Mbps = 5.
  • assuming the output line interface bandwidth is 100 Gbps and the acceleration factor is 1001/1000, the preset threshold (denoted P) can be 100 Gbps / (2 Mbps * 1001/1000), rounded down to the integer 49950.
  • in this way, the real-time output rate of the data blocks can approach, while remaining smaller than, the bandwidth of the customer service.
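  • as a quick numeric restatement of the example above (illustrative only), C and P could be computed as:

        UNIT_BW = 2_000_000           # unit bandwidth: 2 Mbps
        SERVICE_BW = 10_000_000       # customer service bandwidth: 10 Mbps
        LINE_BW = 100_000_000_000     # output line interface bandwidth: 100 Gbps
        ACCEL = 1001 / 1000           # acceleration factor applied to the unit bandwidth

        C = SERVICE_BW // UNIT_BW                 # 5
        P = int(LINE_BW / (UNIT_BW * ACCEL))      # 49950, the rounded preset threshold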
  • since the control plane can control the sum of the bandwidths of the customer services transmitted on an output line interface to be less than or equal to the output line interface bandwidth, if each customer service is controlled according to the above method, it helps to reduce the probability of congestion on the output line interface.
  • the counter value of the counter is incremented by C during each count cycle of any one of the transmission cycles, and the client service has a transmission opportunity when the count value of the counter reaches P.
  • if the cache queue corresponding to the customer service caches the customer service, at least one data block of the customer service is sent. At this point, the present transmission period ends.
  • if the cache queue does not cache the customer service, the counter stops counting; when the customer service is subsequently cached in the cache queue, at least one data block is sent, and the present transmission period ends.
  • the counter value of the counter is set to the initial value, and the next transmission period begins.
  • the process may be implemented by using the process shown in FIG. 12.
  • the triggered saturated leaky bucket set in the rate supervision module 432 may be regarded as a control module, and in each pulse period the control module increases the count value of the counter by C.
  • when the count value of the counter reaches P, if a non-empty indication sent by the queue buffer module 431 is detected, the control module controls the queue buffer to output at least one data block to the policy scheduling module 433; optionally, a scheduling request may first be sent to the policy scheduling module 433 to request scheduling resources.
  • policy scheduling is a scheduling technology that determines the priorities of different customer services from their delay requirements, so that the transmission node outputs them according to priority. It is specifically designed for the case in which different customer services are simultaneously output from the rate supervision module 432 and destined for the same output end. Optionally, policy scheduling can include the following:
  • if a plurality of slices are sequentially input to the policy scheduling module 433, they are output in the order of their input time. If a plurality of slices are simultaneously input to the policy scheduling module 433, they are output according to the strict priority (SP) of the customer services to which the slices belong, wherein the slice of the customer service with the higher priority is output first and the slice of the customer service with the lower priority is output later. If any of the multiple slices belong to customer services with the same priority, they are output in round robin (RR) mode.
  • for example, FIG. 13 shows a first priority, a second priority, and a third priority, wherein the first priority is higher than the second priority, and the second priority is higher than the third priority.
  • “H” in FIG. 13 indicates a high priority customer service among two client services performing SP, and "L” indicates a low priority client service among two client services performing SP.
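  • the strict-priority-plus-round-robin behaviour described above might look like the following sketch (illustrative only; the data structures, the numeric priority encoding, and the function name are assumptions, not the patent's implementation):

        from collections import deque
        from itertools import cycle

        def policy_schedule(ready):
            """ready maps a service name to (priority, deque of slices); a smaller
            priority number means a higher priority. Slices of services with different
            priorities are output in strict priority (SP) order; slices of services
            sharing a priority are output in round robin (RR) order."""
            output = []
            by_priority = {}
            for service, (priority, queue) in ready.items():
                by_priority.setdefault(priority, []).append((service, queue))
            for priority in sorted(by_priority):          # strict priority: highest first
                services = by_priority[priority]
                turn = cycle(range(len(services)))        # round robin among equal priorities
                while any(queue for _, queue in services):
                    service, queue = services[next(turn)]
                    if queue:
                        output.append((service, queue.popleft()))
            return output

        # e.g. policy_schedule({"A": (1, deque([b"a1"])), "B": (1, deque([b"b1"])), "C": (2, deque([b"c1"]))})
        # outputs the slices of A and B alternately (RR) before the slice of C (lower priority)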
  • before the transmission node performs policy scheduling, in any preceding step, the method may further include: the transmission node determines the priority of the customer service according to the expected transmission delay of the customer service.
  • the expected transmission delay is an expected value set according to actual needs, which may be a preset value.
  • the expected transmission delays of different customer services may be the same or different.
  • the expected transmission delays of the same client service in different scenarios may be the same or different.
  • each priority level may correspond to a desired transmission delay range. For example, a priority of a client service that expects a transmission delay to be less than or equal to 5 us (microseconds) is the first priority.
  • the priority of the customer service whose expected transmission delay is less than or equal to 20 us is the second priority, and the priority of the customer service whose expected transmission delay is less than or equal to 50 us is the third priority.
  • the first priority is higher than the second priority, and the second priority is higher than the third priority.
  • the expected transmission delay of a client service is 5 us, only the first priority can satisfy the expected transmission delay at any time; other priorities, such as the second priority, etc., although the desired transmission delay can sometimes be satisfied
  • the expected delay cannot be met at all times, and therefore, the priority of the customer service is the first priority.
  • Other examples are not listed one by one.
  • the control plane may allocate corresponding transmission resources according to the delay requirement of the customer service and the support capability of the system, so as to implement the delay requirement of different customer services. For example, assuming that the output line interface bandwidth is 100 Gbps, the time at which the transmitting node schedules one data block is 10.24 ns, and the time at which the data block is switched from the input end of the transmitting node to the corresponding cache queue through the service switching module is less than 3 us. Then, if the transmission delay is less than 5 us, the priority of the customer service is the first priority. Since the customer service after the strict rate supervision can achieve the effect of no congestion, the transmission delay is only introduced and the first priority is introduced.
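As a quick check of the figures in the preceding example, the short C program below recomputes them from the stated assumptions (10.24 ns to schedule one data block, less than 3 us of switching delay, 5 us and 20 us delay budgets); no parameters beyond those in the example are introduced.

    #include <stdio.h>

    int main(void)
    {
        const double block_time_ns = 10.24;  /* time to schedule one data block at 100 Gbps */
        const double switch_us     = 3.0;    /* worst-case switching time to the cache queue */

        /* Pipes supportable within a given expected transmission delay budget (in us). */
        double p1 = (5.0  - switch_us) * 1000.0 / block_time_ns;   /* first-priority budget  */
        double p2 = (20.0 - switch_us) * 1000.0 / block_time_ns;   /* second-priority budget */

        printf("first-priority pipes  : %d\n", (int)p1);             /* 195                */
        printf("second-priority pipes : %d\n", (int)p2 - (int)p1);   /* 1660 - 195 = 1465  */
        return 0;
    }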
Owing to factors such as the acceleration described above, the output line interface bandwidth may be greater than the sum of the bandwidths of all customer services transmitted on the output line interface, so rate adaptation is required. Specifically, if at some moment the sum of the real-time output rates of all customer services is less than the output line interface bandwidth, invalid data blocks are filled in at that moment and output together with all the customer services. The invalid data blocks filled in during rate adaptation may use the same format as the data blocks and be identified by a special label, or may be data blocks of another length or another format, which is not limited in this application.
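Purely as an illustration of the idea, the C fragment below pads a fixed per-interval slot budget with invalid (idle) blocks whenever fewer valid slices are available; the slot budget, the emit hooks, and the interval driver are hypothetical names invented for this sketch, not part of the described rate adaptation module.

    /* Hypothetical per-interval slot budget of the output line interface. */
    #define SLOTS_PER_INTERVAL 8

    /* Illustrative emit hooks; a real line card would drive the serializer here. */
    static void emit_valid_slice(int n)  { (void)n; /* output the n-th queued valid slice    */ }
    static void emit_invalid_block(void) {          /* output a specially labelled idle block */ }

    /* Fill one scheduling interval: valid slices first, idle blocks for the rest. */
    static void rate_adapt_interval(int valid_slices_ready)
    {
        int sent = 0;
        while (sent < SLOTS_PER_INTERVAL && sent < valid_slices_ready)
            emit_valid_slice(sent++);
        while (sent++ < SLOTS_PER_INTERVAL)
            emit_invalid_block();
    }

    int main(void)
    {
        rate_adapt_interval(5);   /* 5 valid slices, 3 idle blocks in this interval */
        return 0;
    }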
The foregoing mainly describes the solutions provided in the embodiments of this application from the perspective of interaction between network elements. It may be understood that, to implement the foregoing functions, each network element, such as an upper and lower node, a transmission node, or a client device, includes corresponding hardware structures and/or software modules for performing each function. A person skilled in the art should easily be aware that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of this application.

In the embodiments of this application, the upper and lower node, the transmission node, or the client device may be divided into function modules according to the foregoing method examples. For example, each function module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware, or may be implemented in the form of a software function module. It should be noted that the division of modules in the embodiments of this application is schematic and is merely a logical function division; there may be another division manner in actual implementation.
For example, when each function module is divided corresponding to each function, FIG. 14 shows a possible schematic structural diagram of the customer service transmission apparatus (which may specifically be the transmission node or the upper and lower node) involved in the foregoing embodiments. The customer service transmission apparatus may include a receiving unit 501 and a sending unit 502. The receiving unit 501 may be configured to receive a customer service, where the customer service includes a plurality of data blocks, the customer service corresponds to a counter, and the counter is used to control the output rate of the customer service. The sending unit 502 may be configured to send the plurality of data blocks in a plurality of transmission periods, where in each transmission period, when the count value of the counter reaches a preset threshold, at least one of the plurality of data blocks is sent.

Optionally, the customer service transmission apparatus may further include a control unit 503, configured to increase the count value of the counter by C in each count cycle of the counter, before the count value of the counter reaches the preset threshold in each transmission period, where C is less than or equal to the preset threshold. Optionally, C is determined according to the bandwidth of the customer service, and the preset threshold is determined according to the output line interface bandwidth. Optionally, C is the ratio of the bandwidth of the customer service to a unit bandwidth, and the preset threshold is the ratio of the output line interface bandwidth to an adjusted value of the unit bandwidth, where the adjusted value of the unit bandwidth is greater than or equal to the unit bandwidth.

Optionally, in each transmission period, the counter starts counting from an initial value. Optionally, in the (i+1)-th transmission period, the initial value of the counter is the value obtained by subtracting the preset threshold from the count value of the counter at the end of the i-th transmission period, where i is an integer greater than or equal to 1. Optionally, the control unit 503 is further configured to stop counting the counter if, in a transmission period, no customer service is cached when the count value of the counter reaches the preset threshold. Optionally, the customer service transmission apparatus may further include a storage unit 504, configured to store the customer service into a cache queue, and an obtaining unit 505, configured to obtain the at least one data block from the cache queue when the count value of the counter reaches the preset threshold. Optionally, each of the plurality of data blocks has a fixed length. Optionally, the sending unit 502 is specifically configured to send the at least one of the plurality of data blocks according to the priority of the customer service, where a smaller expected transmission delay corresponds to a higher priority.
In the embodiments of this application, the customer service transmission apparatus is presented in a form in which each function module is divided corresponding to each function, or in a form in which the function modules are divided in an integrated manner. A "unit" herein may refer to an application-specific integrated circuit (ASIC), a circuit, a processor and a memory that execute one or more software or firmware programs, an integrated logic circuit, and/or another device that can provide the foregoing functions.

In a simple embodiment, a person skilled in the art may figure out that the customer service transmission apparatus described above may be implemented in the form shown in FIG. 2. For example, the receiving unit 501 and the sending unit 502 in FIG. 14 may be implemented by the communication interface 23 in FIG. 2, the storage unit 504 in FIG. 14 may be implemented by the memory 22 in FIG. 2, and the control unit 503 and the obtaining unit 505 in FIG. 14 may be executed by the processor 21 in FIG. 2 invoking the application program code stored in the memory 22, which is not limited in the embodiments of this application.

It may be understood that the customer service transmission apparatus shown in FIG. 4 or FIG. 5 and the customer service transmission apparatus shown in FIG. 14 divide the customer service transmission apparatus into function modules from different angles. For example, the control unit 503, the storage unit 504, and the obtaining unit 505 in FIG. 14 may be implemented by the destination line processing unit 43 in FIG. 4; specifically, the control unit 503 may be implemented by the rate monitoring module 432, and the storage unit 504 and the obtaining unit 505 may be implemented by the queue buffer module 431.
All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When a software program is used to implement the embodiments, they may be implemented completely or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of this application are generated completely or partially. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, that integrates one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This application discloses a customer service transmission method and apparatus, which are used to reduce the probability of congestion occurring on an output line interface and can even avoid congestion. The method may include: receiving a customer service, where the customer service includes a plurality of data blocks, the customer service corresponds to a counter, and the counter is used to control the output rate of the customer service; and then sending the plurality of data blocks in a plurality of transmission periods, where in each transmission period, when the count value of the counter reaches a preset threshold, at least one of the plurality of data blocks is sent. The technique can be applied to scenarios in which a transmission node transmits customer services.

Description

一种客户业务传输方法和装置 技术领域
本申请涉及数据传输技术领域,尤其涉及一种客户业务传输方法和装置。
背景技术
在分组业务系统中,现有传输节点基于尽力传送机制传输客户业务。例如,若客户设备A通过传输节点向客户设备B发送客户业务,传输节点的输出线路接口带宽是100Gbps(吉比特每秒),则传输节点输出客户业务的速率(即客户业务的输出速率)可尽力接近100Gbps。基于此,若某秒内输入传输节点的客户业务的大小是10M(兆),则客户业务在传输节点中的传输时间理论上可以是10M/100Gbps;若某秒内输入传输节点的客户业务是100M,则客户业务在传输节点中的传输时间理论上可以是100M/100Gbps等。
这样,若多个客户设备同时向一个传输节点发送客户业务,则由于对于每个客户设备发送的客户业务来说,传输节点均采用尽力传送机制传输客户业务,因此,不可避免地会发生拥塞。
发明内容
本申请实施例提供一种客户业务传输方法和装置,用以减少输出线路接口发生拥塞的概率,甚至可以避免拥塞的发生。
为了达到上述目的,本申请提供了如下技术方案:
第一方面,本申请提供了一种客户业务传输方法,包括:接收客户业务;客户业务包括多个数据块,客户业务对应一计数器,计数器用于控制客户业务的输出速率。然后,在多个发送周期发送多个数据块;其中,在每个发送周期,当计数器的计数值达到预设阈值时,发送多个数据块中的至少一个数据块。该方法的执行主体可以是传输节点。该技术方案中,通过在传输节点中设置计数器,并在每个发送周期,基于计数器的计数值控制输出的数据块的个数,这样,能够控制数据块的输出速率,即控制客户业务的输出速率,若按照该技术方案控制每个输入传输节点的客户业务的输出速率,则有助于减少输出线路接口发生拥塞的概率,甚至可以避免拥塞的发生。
在一种可能的设计中,在每个发送周期,在计数器的计数值达到预设阈值之前,该方法还可以包括:在计数器的每个计数周期,将计数器的计数值增加C;其中,C小于或等于预设阈值。本申请对C的物理含义不进行限定,例如,C可以是计数器的计数次数,该情况下,C=1。又如,C可以是根据客户业务的带宽确定的,预设阈值可以是根据输出线路接口带宽确定的。当然,具体实现时不限于此。
在一种可能的设计中,为了便于控制,本申请中引入了单位带宽的概念。基于此,C可以是客户业务的带宽与单位带宽的比值,预设阈值可以是输出线路接口带宽与单位带宽的调整值的比值,其中,单位带宽的调整值大于或等于单位带宽。
在一种可能的设计中,在每个发送周期,计数器从初始值开始计数。可选的,不同发送周期,计数器的初始值可以相同,例如初始值为0。或者,不同发送周期,计数器的初始值可以不同,例如,在第i+1个发送周期,计数器的初始值为第i个发送周期结束时计数器的计数值减去预设阈值之后得到的值;其中,i是大于或等于1的整数。当然,具体实现时不限于此。
在一种可能的设计中,在每个发送周期,当计数器的计数值达到预设阈值时,发送多个数据块中的至少一个数据块,可以包括:在每个发送周期,当计数器的计数值达到预设阈值时,若缓存有客户业务,则发送多个数据块中的至少一个数据块。
在一种可能的设计中,该方法还可以包括:在每个发送周期,当计数器的计数值达到预设阈值时,若没有缓存客户业务,则停止对计数器计数。基于此,该方法还可以包括:在缓存有客户业务时,发送多个数据块中的至少一个数据块,然后计数器从初始值开始计数。或者,在接收到客户业务之后,直接发送至少一个数据块,而不对该至少一个数据块进行缓存。
在一种可能的设计中,在接收客户业务之后,该方法还可以包括:将客户业务存储至缓存队列中;当计数器的计数值达到预设阈值时,从缓存队列中获取至少一个数据块。其中,每个客户业务可以对应一个缓存队列。本申请中通过对客户业务进行缓存,然后从缓存队列中获取至少一个数据块,并发送出去,能够在对数据块的输出速率的控制过程中,降低丢包率。
在一种可能的设计中,多个数据块中的每个数据块具有固定长度。这样,具有实现简单的有益效果。当然,具体实现时,不同数据块的长度可以不同。
在一种可能的设计中,发送多个数据块中的至少一个数据块,可以包括:根据客户业务的优先级,发送多个数据块中的至少一个数据块;其中,期望传输时延越小优先级越高。该实现方式,在输出不同客户业务时,考虑了期望传输时延,从而能够更好地满足不同客户业务对传输时延的要求,从而提高用户体验。
第二方面,本申请提供了一种客户业务传输装置,该装置具有实现上述方法实施例中各步骤的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块。
在一种可能的设计中,该装置可以包括:处理器、存储器、总线和通信接口;该存储器用于存储计算机执行指令,该处理器、该存储器和该通信接口通过该总线连接,当该装置运行时,该处理器执行该存储器存储的该计算机执行指令,以使该装置执行如上述第一方面提供的任一种客户业务传输方法。
第三方面,本申请提供了一种计算机可读存储介质,用于储存为上述装置所用的计算机程序指令,当其在计算机上运行时,使得计算机可以执行上述第一方面提供的任一种客户业务传输方法。
第四方面,本申请提供了一种计算机程序产品,该计算机程序产品包含指令,当该指令在计算机上运行时,使得计算机可以执行上述第一方面提供的任一种客户业务传输方法。
上述提供的任一种装置,计算机可读介质或计算机程序产品的技术效果均可参见对应的方法所带来的技术效果,此处不再赘述。
附图说明
图1为本申请实施例提供的技术方案所适用的一种系统架构的示意图;
图2为本申请实施例提供的一种客户业务传输装置的结构示意图;
图3为本申请实施例提供的一种上下节点的结构示意图;
图3a为本申请实施例提供的另一种上下节点的结构示意图;
图4为本申请实施例提供的一种传输节点的结构示意图;
图4a为本申请实施例提供的一种线路处理单元的结构示意图;
图4b为本申请实施例提供的另一种传输节点的结构示意图;
图5为本申请实施例提供的技术方案所适用的另一种系统架构的示意图;
图6为本申请实施例提供的一种客户业务传输方法的交互示意图;
图7为本申请实施例提供的一种数据块的结构示意图;
图8为本申请实施例提供的一种替换标签的过程示意图;
图9为本申请实施例提供的一种上下节点的处理过程示意图;
图10为本申请实施例提供的一种传输节点的处理过程示意图;
图11为本申请实施例提供的一种速率监管方法的流程示意图;
图12为本申请实施例提供的一种速率监管方法的过程示意图;
图13为本申请实施例提供的一种策略调度方法的过程示意图;
图14为本申请实施例提供的另一种客户业务传输装置的结构示意图。
具体实施方式
如图1所示,为本申请提供的技术方案所适用的一种系统架构的示意图。该系统架构可以包括:承载网络设备以及多个客户设备100。该系统中,一个或多个客户设备经承载网络设备,将客户业务发送给另外的一个或多个客户设备。其中,为了方便理解,下文中将发送客户业务的客户设备称为发送端客户设备,将接收客户业务的客户设备称为接收端客户设备。可以理解的,同一个客户设备既可以作为发送端客户设备,也可以作为接收端客户设备。承载网络设备可以包括:与发送端客户设备或接收端客户设备连接的上下节点200,以及设置在上下节点200之间的一个或多个传输节点300。其中,任意一个或多个传输节点300可以与任意一个或多个上下节点200集成在一起,也可以是独立设置的。本申请中均是以上下节点200与传输节点300是独立设置为例进行说明的。
客户业务可以包括但不限于:以太网客户业务、同步数字体系(synchronous digital hierarchy,SDH)客户业务、存储业务、视频业务等。一个发送端客户设备可以将一个或多个客户业务传输至一个或多个接收端客户设备。多个发送端客户设备可以将一个或多个客户业务传输至同一个接收端客户设备。
图1所示的系统可以是光网络,具体可以是接入网,例如无源光网络(passive optical network,PON;也可以传送网,例如光传送网(optical transport network,OTN)、分组网络、包交换网络等。
客户设备100(包括发送端客户设备和/或接收端客户设备),可以包括但不限于以下任一种:交换机、路由器、计算机、数据中心、基站等。上下节点200和传输节点300均可以包括但不限于以下任一种:OTN设备、路由器等。
上下节点200,可以用于接收发送端客户设备发送的数据包的形式或连续数据流的形式的客户业务,并将数据包的形式或连续数据流的形式的客户业务分成多个数据块(slice),并根据路由信息将多个数据块交换至对应的传输节点300。或者,接收传输节点300发送的数据块,并将属于同一客户业务的多个数据块恢复成数据包的形式或连续数据流的形式,然后发送给对应的接收端客户设备。每个上下节点200可以包 括一个或多个输入/输出端。每个输入/输出端可以与一个或多个客户设备100(包括发送端客户设备或接收端客户设备)连接,或者与一个或多个传输节点300连接。
传输节点300,可以用于根据路由信息将数据块转发至其他传输节点300或上下节点200。每个传输节点300可以包括一个或多个输入/输出端。每个输入/输出端可以与一个或多个传输节点300连接,或者与一个或多个上下节点200连接。需要说明的是,下文中的输出线路接口可以认为是输出节点300的输出端。
可以理解的,在一些场景中,上下节点200和传输节点300中的任一设备的输入端,在另外一些场景中,可以作为该设备的输出端。其中,具体作为输入端还是输出端与本次传输客户业务过程中的路径有关,该路径可以根据路由信息确定。本次传输客户业务的路由信息可以是控制层面配置,并发送给路径上的各路由节点(包括上下节点200和传输节点300)的,其具体实现过程可以参考现有技术。其中,控制层面可以是集成在任一上下节点200和传输节点300中的一个功能模块,也可以是独立于上下节点200和传输节点300的一个设备,本申请对此不进行限定。每个输出端对应一个线路接口带宽,该线路接口带宽用于表征该输出端的承载能力。
可以理解的,图1仅为本申请所适用的系统架构的一种示例,本申请对系统架构中所包含的客户设备100、上下节点200和传输节点300的个数和连接关系均不进行限定,具体实现时,可以根据实际应用场景进行网络布局。
在本申请的一个示例中,图1中的上下节点200和传输节点300中的任一种或多种的结构示意图如图2所示。图2所示的设备可以包括:至少一个处理器21,以及存储器22、通信接口23和通信总线24。
处理器21是该设备的控制中心,具体可以是一个处理元件,也可以是多个处理元件的统称。例如,处理器21可以是一个中央处理器(central processing unit,CPU),也可以是特定集成电路(application specific integrated circuit,ASIC),或者是被配置成实施本申请实施例提供的技术方案的一个或多个集成电路,例如:一个或多个微处理器(digital signal processor,DSP),或,一个或者多个现场可编程门阵列(field programmable gate array,FPGA)。其中,处理器21可以通过运行或执行存储在存储器22内的软件程序,以及调用存储在存储器22内的数据,执行该设备的各种功能。
在具体实现中,作为一种实施例,处理器21可以包括一个或多个CPU,例如图2中所示的CPU0和CPU1。
在具体实现中,作为一种实施例,该设备可以包括多个处理器,例如图2中所示的处理器21和处理器25。这些处理器中的每一个可以是一个单核处理器(single-CPU),也可以是一个多核处理器(multi-CPU)。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据(例如计算机程序指令)的处理核。
存储器22可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者 能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器22可以是独立存在,通过通信总线24与处理器21相连接。存储器22也可以和处理器21集成在一起。其中,存储器22用于存储执行本申请实施例提供的技术方案中该设备所执行的软件程序,并由处理器21来控制执行。
通信接口23,可以是使用任何收发器(例如,光接收器、光模块)一类的装置,用于与其他设备或通信网络通信,如以太网,无线接入网(radio access network,RAN),无线局域网(wireless local area networks,WLAN)等。通信接口23可以包括接收单元实现接收功能,以及发送单元实现发送功能。
通信总线24,可以是工业标准体系结构(industry standard architecture,ISA)总线、外部设备互连(peripheral component interconnect,PCI)总线或扩展工业标准体系结构(extended industry standard architecture,EISA)总线等。该总线可以分为地址总线、数据总线、控制总线等。为便于表示,图2中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
在本申请的一个示例中,图1中的上下节点200的结构示意图可以如图3所示。图3所示的上下节点200可以包括:一个或多个支路处理单元31、业务交换单元32,以及一个或多个线路处理单元33。其中,支路处理单元31可以用于经上下节点200的输入端接收发送端客户设备发送的客户业务,并将接收到的客户业务分成数据块,例如,将客户业务分成固定长度的数据块。然后,经业务交换单元32将该数据块交换至对应的线路处理单元33。其中,关于业务交换单元32将从一个支路处理单元31传输过来的客户业务交换至哪个线路处理单元33,本申请不进行限定,其具体实现过程可以参考现有技术。线路处理单元33可以用于将接收到的数据块经上下节点200的输出端输出该上下节点200。
在硬件实现上,支路处理单元31可以是支路板31a,业务交换单元32可以是交换板32a,线路处理单元33可以是线路板33a,支路板31a、交叉板32a、以及线路板33a均可以与主控板34a连接,如图3a所示。其中,主控板34a是上下节点200的控制中心,用于控制支路板31a、交叉板32a、以及线路板33a执行本申请提供的方法中相应的步骤。为了简便,图3a中示出了一个支路板31a和一个线路板33a与主控板34a连接,实际实现时,每个支路板31a和每个线路板33a均可以与主控板34a连接。可选的,上下节点200还可以包括:补充光处理单元35a,其中,补充光处理单元35a可以包括:光放大单元(optical amplification,OA)、光合波单元(optical multiplexing,OM)、光分波单元(pptical de-multiplexing,OD)、单路光监控信道单元optical supervisory channel,SCI)、线路接口单元(fiber interface unit,FIU)等。
在本申请的一个示例中,图1中的传输节点300的结构示意图可以如图4所示。图4所示的传输节点300可以包括:一个或多个源线路处理单元41、业务交换单元42,以及一个或多个目的线路处理单元43。其中,源线路处理单元41可以用于经传输节点300的输入端接收上下节点200或另一个传输节点300发送的数据块;然后,经业务交换单元42将该数据块交换至对应的目的线路处理单元43。其中,关于业务交换单元42将从一个源线路处理单元41传输过来的客户业务交换至哪个目的线路处 理单元43,本申请不进行限定,其具体实现过程可以参考现有技术。目的线路处理单元43可以用于将接收到的客户业务经传输节点300的输出端输出该传输节点300。
如图4a所示,线路处理单元(包括:线路处理单元33和/或目的线路处理单元43)可以包括:队列缓存模块431和速率监管模块432,可选的,还可以包括策略调度模块433。可选的,还可以包括速率适配模块434。其中,各模块之间的连接关系的一种示例如图4所示,各模块的功能可以参考下文。
在硬件实现上,源线路处理单元41可以是源线路板41a,业务交换单元42可以是交叉板42a,目的线路处理单元43可以是目的线路板43a。源线路板41a、交叉板42a、以及目的线路板43a均可以与主控板44a连接,如图4b所示。其中,主控板44a是传输节点300的控制中心,用于控制源线路板41a、交叉板42a、以及目的线路板43a执行本申请提供的方法中相应的步骤。为了简便,图4b中示出了一个源线路板41a和一个目的线路板43a与主控板44a连接,实际实现时,每个源线路板41a和每个目的线路板43a均可以与主控板44a连接。可选的,传输节点300还可以包括:补充光处理单元45a,其中,补充光处理单元45a的具体实现可以参考补充光处理单元35a。
需要说明的是,本文中的术语“多个”是指两个或两个以上。
本文中的术语“第一”“第二”等仅是为了区分不同的对象,并不对其顺序进行限定。例如,第一上下节点和第二上下节点仅仅是为了区分不同的上下节点,并不对第一上下节点和第二上下节点的先后顺序进行限定。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。可以理解的,在公式中,“/”,一般表示前后关联对象是一种“相除”的关系。
下面对本申请实施例中的技术方案进行详细地描述。
需要说明的是,下文中的具体示例是以本申请提供的技术方案应用于图5所示的系统架构为例进行说明的,其中,图5所示的系统架构是图1所示的系统架构的一种具体实现。在图5中,客户设备1将客户业务1经上下节点1、传输节点1、上下节点2传输至客户设备4。客户设备2将客户业务2经上下节点1、传输节点1、上下节点2传输至客户设备5。客户设备3将客户业务3经上下节点1、传输节点2、上下节点3传输至客户设备5。实际上,本申请提供的技术方案也可以适用于一个发送端客户设备向上下节点发送多个客户业务的场景中,其基本原理可参考下文中的描述。
另外需要说明的是,下文中是以对客户业务的数据块进行缓存、速率监管、策略调度、速率适配等操作是应用于传输节点为例进行说明的,实际实现时,这些操作中的一个或多个也可以应用于上下节点(包括第一上下节点和/或第二上下节点)中,其具体实现过程可参考应用于传输节点的描述,本申请对此不再赘述。
如图6所示,为本申请实施例提供的一种客户业务传输方法的交互示意图。图6所示的方法可以包括以下步骤S101~S104:
S101:多个发送端客户设备向第一上下节点发送客户业务。客户业务以数据包的形式或连续数据流的形式传输。第一上下节点可以是图5中的上下节点1,下文中均以此为例进行说明。
该多个发送端客户设备可以是与上下节点1连接的任意的多个发送端客户设备。该多个发送端客户设备中的任意一个或多个发送端客户设备可以连续不断地,或者间断地向上下节点1发送客户业务。
基于图5所示的系统架构,S101可以包括:客户设备1向上下节点1发送客户业务1,客户设备2向上下节点1发送客户业务2,客户设备3向上下节点1发送客户业务3。
在S101之前,该方法还可以包括:每个发送端客户设备向控制层面申请客户业务(例如客户业务1、2、3)的带宽,以使得控制层面控制上下节点1为该发送端客户设备预留一定的带宽,以传输该客户业务。其中,客户业务的带宽可以是发送端客户设备根据需求(例如待传输客户业务的大小、期望传输时延需求等)确定的。同一发送端客户设备发送不同客户业务时,客户业务的带宽可以相同,也可以不同。不同发送端客户设备发送同一客户业务时,客户业务的带宽可以相同,也可以不同。本申请对此不进行限定。
在本申请的一个示例中,为了方便控制,系统中设置了单位带宽(即:最小带宽粒度),每个发送端客户设备可以将客户业务的带宽设置为单位带宽的整数倍。例如,假设单位带宽为2Mbps,则客户业务的带宽可以是n*2Mbps,其中,n可以是大于或等于1的整数。
S102:第一上下节点接收多个发送端客户设备发送的客户业务,并将接收到的客户业务分成固定长度的数据块,并将每个数据块生成一个切片,然后,根据路由信息将每个切片输出至对应的传输节点。
该步骤S102可以理解为:上下节点1将接收到的客户业务映射到承载容器中,每个承载容器用于承载一个数据块。可以理解的,“承载容器”是为了更形象地描述划分数据块的过程而提出的一个逻辑概念,可以不真实存在。
基于图5所示的系统架构,S102可以包括:上下节点1接收客户设备1发送的客户业务1、客户设备2发送的客户业务2、客户设备3发送的客户业务3,将客户业务1、2、3均分为固定长度的数据块,将每个数据块生成一个切片,接着,将客户业务1的每个切片输出至传输节点1,将客户业务2的每个切片输出至传输节点1,将客户业务3的每个切片输出至传输节点2。
上下节点1可以在接收到一个或多个数据包或连续数据流的情况下,按照接收时间先后顺序,将所接收到的数据包或连续数据流分成数据块。每个数据块可以是固定长度,即:上下节点1将接收到的任一发送端客户设备发送的任一客户业务,均分成固定长度的数据块。为了便于描述,本申请中均是以数据块是固定长度为例进行说明的。实际实现时,不同数据块的长度可以是不相等的。可以理解的,这里的“数据块”包括客户业务本身,可选的,还可以包括客户业务的一些随路信息等。
示例的,假设固定长度的数据块的长度为123Bytes,那么,当客户业务的带宽是2Mbps时,输入上下节点1的数据块的平均速率是2Mbps/123Bytes个数据块/秒,即上下节点1的支路处理单元形成容器时,平均每秒产生2Mbps/123Bytes个数据块。当客户业务的带宽是4Mbps时,输入上下节点1的数据块的平均速率是4Mbps/123Bytes个数据块/秒。
为了使承载网络设备(包括第一上下节点、第二上下节点和/或传输节点)对数据块的处理(例如交换、传输等)过程中,可以识别该数据块,在本申请的一个实施例中,上下节点1在将所接收到的数据包或连续数据流分成固定长度的数据块之后,可以为每个数据块添加一个标签(label)。由于发送端客户设备发送至上下节点1的不同类型的客户业务的传输格式可能不同,为了实现方便,本申请提供了一种数据块的格式,如图7所示。传输至上下节点1的任一类型的客户业务的格式均可以被转换成如图7所示的格式。图7中,数据块的格式可以包括:标签和净荷区。可选的,数据块的格式中还可以包括循环冗余校验(cyclic redundancy check,CRC)区。
其中,标签可以是全局标签,也可以是线路接口局部标签。全局标签可以是系统中的各设备均可识别的标签。线路接口局部标签可以是直接通信的两个设备可以识别的标签。相比全局标签,线路接口局部标签占用的比特数较少,下文中均是以标签是线路接口局部标签为例进行说明的。可以理解的,数据块的标签还可以用于区分不同的客户业务。数据块的标签可以是控制层面配置的。净荷区用于承载客户业务本身,可选的还可以用于承载客户业务的一些随路信息等。CRC区用于承载校验比特,校验比特可用于对净荷区承载的信息的完整性进行校验。当然,具体实现时,也可以通过其他方式实现完整性校验,不限定于CRC。
本申请对标签、净荷区和CRC区中的任一部分所占的大小不进行限定。图7中是以标签占4字节(Byte),净荷区占123Bytes,CRC占1Byte为例进行说明的。需要说明的是,为了区分上下节点1将接收到的客户业务区分成的“数据块”,以及如图7所示的“数据块”,本申请中将包含标签的数据块(如图7所示)称为“切片”。每个数据块对应一个切片,每个数据块可以认为承载在该数据块对应的切片的净荷区中的信息。
可选的,若上下节点1的输入端与输出端是一对多的关系,例如一个客户业务通过不同路径传输至不同的接收端客户设备,那么,上下节点1对一个数据块的处理流程可以包括:上下节点1为该数据块添加标签a,然后根据路由信息将标签a替换为标签b,并重新计算CRC。其中,替换标签可以是上下节点1中的支路处理单元,业务交换单元或线路处理单元中的任一单元执行,其具体实现过程可以如图8所示。
如图9所示,为S102的一种实现过程的示意图。图9中是以上下节点1的一个支路处理单元31对接收到的数据包或连续数据流进行分数据块,并将每个数据块生成一个切片为例进行说明的。
S103:传输节点接收第一上下节点发送的客户业务的切片,并根据路由信息对接收到的切片执行交换操作。然后,将交换后的切片按照所属的客户业务进行缓存,其中,每个客户业务对应一个缓存队列。接着,对每个客户业务的切片独立执行速率监管操作,并对速率监管后的客户业务进行策略调度和速率适配等操作,并将执行上述操作后的切片传输至下一个路由节点。其中,下一个路由节点可以是下一个传输节点或第二上下节点,若下一个路由节点是第二上下节点,则执行S104。若下一个路由节点是传输节点,则该传输节点继续执行S103,……直至下一个路由节点是第二上下节点,则执行S104。
可以理解的,一般的,缓存空间是所有缓存队列共享的,在传输节点的承载能力范围内,传输节点每接收一个客户业务,即可为该客户业务分配一个缓存队列。
基于图5所示的系统架构,第二上下节点可以包括上下节点2和上下节点3。S103可以包括:传输节点1接收上下节点1发送的客户业务1、2的切片,将客户业务1、2的切片传输至上下节点2;传输节点2接收上下节点1发送的客户业务3的切片,将客户业务3的切片传输至上下节点3。
可以理解的,传输节点根据路由信息对接收到的切片执行交换操作,可以包括:传输节点根据路由信息,确定切片的下一个路由节点,然后再执行交换操作。可选的,若数据块的标签是线路接口局部标签,则传输节点在执行交换操作的过程中,还需要执行替换标签的动作。例如,基于图5所示的系统架构,假设上下节点1向传输节点2发送的客户业务3的数据块标签为标签b,则传输节点2在执行交换操作的过程中,可以将标签b替换为标签c,可选的,可以重新计算数据块的CRC,该过程的具体实现方式可以参考图8。其中,标签b是上下节点1和传输节点2可识别的标签,标签c是传输节点2和上下节点3可识别的标签。
在本申请的一个示例中,传输节点在接收到客户业务的切片之后,可以根据切片中包含的CRC进行完整性校验,并在校验成功的情况下,执行速率监管、策略调度和速率适配。可选的,在校验不成功的情况下,丢弃该切片。
可以理解的,若传输节点的结构示意图如图4所示,则交换操作可以由业务交换单元42执行,缓存操作可以由队列缓存模块431执行,速率监管操作可以由速率监管模块432执行,策略调度操作可以由策略调度模块433执行,速率适配操作可以由速率适配模块434执行。其中,关于速率监管、策略调度和速率适配的操作的相关解释及具体实现方式可参见下文。
如图10所示,为S103的一种实现过程的示意图。其中,图10中是以传输节点的两个输入端输入的多个客户业务传输至一个输出端为例进行说明的,其中,每个矩形小方格表示一个切片,其中,每种阴影小方格表示一个客户业务,缓存队列中的每个空白小方格表示缓存队列中还未存储切片。每个客户业务独立进行队列缓存和速率监管。多个客户业务统一进行策略调度和速率适配之后,从该输出端输出。
S104:第二上下节点接收传输节点发送的客户业务的切片,并获取每个切片中的数据块,然后,将同一客户业务的数据块按照接收时间先后顺序恢复成数据包的形式或连续数据流的形式或连续数据流,然后将该客户业务发送给对应的接收端客户设备。接收端客户设备接收第二上下节点发送的客户业务。
基于图5所示的系统架构,S104可以包括:上下节点2接收传输节点1发送的客户业务1的切片,然后,将客户业务1的切片恢复成数据包的形式或连续数据流的形式,发送给客户设备4。客户设备4接收上下节点2发送的客户业务1。上下节点2接收传输节点1发送的客户业务2的切片,然后,将客户业务2的切片恢复成数据包的形式或连续数据流的形式,发送给客户设备5。上下节点3接收传输节点2发送的客户业务3的切片,然后,将客户业务3的切片恢复成数据包的形式或连续数据流的形式,发送给客户设备5。客户设备5接收上下节点3发送的客户业务2、3。
在本申请的一个示例中,第二上下节点在接收到客户业务的切片之后,可以根据该切片中包含的CRC进行完整性校验,并在校验成功的情况下,删除切片中的标签,得到该切片中的数据块。
S104的具体实现过程的示意图可以是图9的逆过程。
下面说明S103中的速率监管、策略调度和速率适配等操作。
一、速率监管
速率监管,是本申请提供的一种控制客户业务的输出速率的技术,有助于减少拥塞发生的概率,甚至消除拥塞的发生。具体的:在传输节点中设置多个计数器,每个客户业务可以对应一个计数器,每个计数器用于控制该客户业务的输出速率。然后,在多个发送周期发送多个数据块。其中,在每个发送周期,当计数器的计数值达到预设阈值时,发送该多个数据块中的至少一个数据块。其中,计数器可以通过软件或硬件实现,本申请对此不进行限定。传输节点可以对每个客户业务独立进行速率监管。
在本申请的一个示例中,以对一个客户业务的输出速率进行速率监管为例,如图11所示,在每个发送周期,传输节点(具体可以是传输节点中的速率监管模块432),可以执行如下步骤S201~S205:
S201:在计数器的每个计数周期,将计数器的计数值增加C;其中,C小于或等于预设阈值。
S202:判断计数器的计数值是否达到预设阈值。
若是,则执行S203;若否,则返回S201。
S203:判断客户业务的缓存队列是否缓存有该计数器对应的客户业务数据块。
若是,则执行S204。若否,则执行S205。
S204:从缓存队列中获取至少一个数据块,并发送出去。
执行S204之后,本发送周期结束。
S205:停止对计数器计数。
执行S205之后,执行S203。
可以理解的,在执行速率监管的过程中,传输节点可以连续不断地或者间断地接收第一上下节点发送的客户业务,或规律地接收其他传输节点发送的客户业务。因此,执行S203时,若缓存队列没有缓存客户业务,则执行S205的若干时间之后再执行S203,缓存队列可能已缓存有客户业务,从而可以执行S204。另外,具体实现时,还可以设置该时间的时长,这样,当从计数器停止计数开始的该时长之后,若执行S203时,缓存队列仍没有缓存客户业务,可以认为该客户业务传输结束。其中,本申请对该时长的具体取值不进行限定。
上述S201~S205描述了一个发送周期的速率监管过程。对于多个发送周期来说,可选的,在每个发送周期,计数器从初始值开始计数。其中,不同发送周期的初始值可以相同,也可以不相同。在本申请的一个实施例中,在第i+1个发送周期,计数器的初始值为第i个发送周期结束时计数器的计数值减去预设阈值之后得到的值;其中,i是大于或等于1的整数。在本申请的另一个实施例中,在每个发送周期,计数器的初始值均为小于预设阈值的一个固定的一个值,例如可以是0。
发送周期,是指相邻两次发送数据块的时间间隔,其中,每次可以发送一个或多个数据块。发送周期可以不是预先设置的值,其与计数器的计数值相关,进一步地,与计数值达到预设阈值时,是否缓存有客户业务相关。在任意两个发送周期,若计数值达到预设阈值时,缓存队列均缓存有客户业务,则这两个发送周期相等。或者,虽然计数值达到预设阈值时,缓存队列没有缓存客户业务,但是,计数器停止计数的时间相等,则这两个发送周期相等。若计数值达到预设阈值时,缓存队列没有缓存客户业务,且计数器停止计数的时间不相等,则这两个发送周期不相等。
可以理解的,若计数值达到预设阈值时,缓存队列缓存有客户业务,则发送周期等于预设数量个计数周期。若计数值达到预设阈值时,缓存队列没有缓存客户业务,则发送周期等于预设数量个计数周期与停止计数的时长。基于此,本申请提供的速率监管过程可以理解为,在计数器的技术值达到预设阈值时,客户业务拥有一次发送机会,即传输节点拥有一次发送该客户业务的数据块的机会。每次发送机会可发送至少一个数据块,基于此,来控制数据块的输出速率,即控制客户业务的输出速率。
每个发送周期发送的数据块的个数可以相等,也可以不相等。为了保证输出速率在一定范围内恒定,本申请中引入了“数据块传输周期”的概念,其中,数据块传输周期可以包括一个或多个发送周期,每个数据块传输周期内发送的数据块的个数相等。
例如,每个数据块传输周期包括2个发送周期,在这2个发送周期的一个发送周期发送1个数据块,在另一个发送周期发送2个数据块。该情况下,在多个发送周期内发送的数据块的个数可以为:1,2,1,2,1,2……,或者,1,2,2,1,1,2,1,2,2,1……。由该示例可知,通过控制每个发送周期发送的数据块个数,即可使得每个数据块传输周期发送的数据块的个数相同,从而可以保证数据块的以数据块传输周期为粒度的基础上的输出速率恒定。可以理解的,数据块的实际输出速率小于该恒定的输出速率,因此,只要通过控制每个发送周期的时长,以及每个发送周期发送的数据块个数,即可控制该恒定的输出速率,从而有助于减小输出线路接口发生拥塞的概率。
又如,每个数据块传输周期包括1个发送周期,在这个发送周期内2个数据块。该情况下,在多个发送周期内发送的数据块的个数可以为:2,2,2……。可以理解的,该示例中,数据块传输周期与发送周期相等。因此,只要通过控制每个发送周期的时长,以及每个发送周期发送的数据块个数,即可控制该恒定的输出速率,从而有助于减小输出线路接口发生拥塞的概率。
计数周期,是指计数器的计数值每更新一次所需的时间。实际实现时,可以通过脉冲信号实现计数周期,例如,若脉冲周期等于计数周期,则在每个脉冲周期计数器的计数值增加C。可选的,计数周期可以等于线路接口的数据块传输时间,例如,假设输出线路接口的带宽是100Gbps,数据块所在的切片的长度是128Bytes,则线路接口的数据块传输时间为:128Bytes/100Gbps=10.24ns(纳秒)。当然,计数周期也可以大于线路接口的数据块传输时间。
本申请对计数器的计数值的物理含义不进行限定。下面列举几种实现方式:
方式1:C是计数次数。该情况下,每个计数周期,计数器的计数值增加1。该情况下,预设阈值的取值可以根据下文方式2中的取值换算得到,此处不再赘述。当然,也可以通过其他方式得到,本申请对此不进行限定。
方式2:C是根据客户业务的带宽确定的。该情况下,可选的,预设阈值是根据输出线路接口带宽确定的。
可选的,C是客户业务的带宽与单位带宽的比值,预设阈值是输出线路接口带宽与单位带宽的调整值的比值,其中,单位带宽的调整值大于或等于单位带宽。需要说明的是,为了实现方便,预设阈值可以设置为整数,该情况下,若输出线路接口带宽与单位带宽的调整值的比值为非整数,则预设阈值可以取该非整数向下取整得到的整数。当然,具体实现时,预设阈值也可以设置为非整数,本申请对此不进行限定。
可以理解的,单位带宽的调整值越大,所确定的预设阈值就越小,这样,计数器的计数值更容易达到预设阈值,因此,发送周期较小,发送机会更多,进而输出速率更快。加速是为了消除客户业务输出速率的微小突变,从而避免拥塞的发生。
在本申请的一个示例中,假设C是客户业务的带宽与单位带宽的比值,则预设阈值可以是小于或等于输出线路带宽(即线路物理速率),且是单位带宽乘以一个加速系数后的整数倍的一个值。例如,单位带宽是2Mbps,客户业务的带宽是10Mbps,则C可以是10Mbps/2Mbps=5。若输出线路接口带宽是100Gbps,单位带宽的调整值为单位带宽的1001/1000,即加速系数是1001/1000,则预设阈值(标记为P)可以是100Gbps/(2Mbps*1001/1000)后向下取整得到的值,即49950。也就是说,本申请中通过速率监管,可以将数据块的实时输出速率近似且小于为客户业务的带宽。基于此,由于控制层面可以控制传输节点的输出线路带宽的总和小于或等于该输出接口线路带宽上传输的客户业务的带宽总和,因此,若每个客户业务均按照上述方法进行控制,则有助于减小输出线路带宽发生拥塞的概率。
在本申请的一个示例中,在任一个发送周期的每个计数周期,计数器的计数值增加C,当计数器的计数值达到P时,客户业务拥有一次发送机会。当该客户业务对应的缓存队列缓存有客户业务时,发送该数据业务的至少一个数据块,至此,本发送周期结束。当客户业务对应的缓存队列没有缓存客户业务时,计数器停止计数,并在缓存队列中缓存有客户业务时,至此,本发送周期结束。其中,在本发送周期结束时,将计数器的计数值设置为初始值,至此,下一个发送周期开始。
其中,该过程可以采用如图12所示的过程实现,图12中,在速率监管模块432中设置的触发式饱和漏桶,可相当于一个控制模块,该控制模块可以在每个脉冲周期,控制计数器的计数值增加C,当计数器的计数值达到P时,若检测到队列缓存模块431发送的非空指示,则控制队列缓存中的至少一个数据块输出至策略调度模块433,可选的,可以向策略调度模块433发送一个调度请求,以请求调度资源。
二、策略调度
由于传输节点对每个客户业务独立执行速率监管,且输入传输节点的不同客户业务可以被交换至同一个输出端,因此,可能出现不同客户业务的数据块同时输出速率监管模块432,并输入同一个输出端的情况,这可能造成用拥塞,从而导致部分数据块丢包。基于此,本申请提供了在速率监管操作之后,输出之前,增加了策略调度操 作。其中,策略调度,是通过不同客户业务的时延需求,确定不同客户业务的优先级,从而按照优先级输出传输节点的一种调度技术。其具体实针对同时从速率监管模块432输出,并输入同一个输出端的不同客户业务设计的一种调度技术。可扩展地,策略调度可以包括如下内容:
对于任意多个客户业务的切片来说,若该多个切片先后被输入策略调度模块433,则按照输入时间的先后顺序被输出。若该多个切片同时被输入策略调度模块433,则按照切片所属客户业务的严格优先级(strict priority,SP)被输出,其中,优先级高的客户业务的切片先输出,优先级低的客户业务的切片后输出。若任意多个切片所属客户业务的优先级相同,则按照轮询(round robin,RR)方式被输出。该过程的示意图如图13所示,其中,图13中是示出了第一优先级、第二优先级和第三优先级,其中,第一优先级高于第二优先级,第二优先级高于第三优先级。图13中的“H”表示执行SP的两个客户业务中高优先级的客户业务,“L”表示执行SP的两个客户业务中低优先级的客户业务。
具体实现时,传输节点在执行策略调度之前的任一步骤,该方法还可以包括:传输节点按照客户业务的期望传输时延,确定客户业务的优先级。其中,期望传输时延,是根据实际需求设置的一个期望值,其可以是预设值。不同客户业务的期望传输时延可以相同也可以不同。同一客户业务在不同场景中的期望传输时延可以相同也可以不同。可选的,每个优先级级别可以对应一个期望传输时延范围。例如,期望传输时延小于或等于5us(微秒)的客户业务的优先级是第一优先级。期望传输时延小于或等于20us的客户业务的优先级是第二优先级。期望传输时延小于或等于50us的客户业务的优先级是第三优先级。其中,第一优先级高于第二优先级,第二优先级高于第三优先级。示例的,若一个客户业务的期望传输时延是5us,则只有第一优先级能够时刻满足该期望传输时延;其他优先级,例如第二优先级等,虽然有时可以满足该期望传输时延,但是,不能时刻满足该期望时延,因此,该客户业务的优先级为第一优先级。其他示例不再一一列举。
基于本申请提供的速率监管方法,在本申请的一个示例中,控制层面可以根据客户业务的时延要求和本系统的支持能力来分配对应的传输资源,实现不同客户业务的时延需求。示例的,假设输出线路接口带宽是100Gbps,传输节点调度一个数据块的时间是10.24ns,数据块从传输节点的输入端经业务交换模块交换到对应的缓存队列的时间小于3us。那么,如果期望传输时延小于5us,则该客户业务的优先级是第一优先级,由于经过严格速率监管后的客户业务可以达到没有拥塞的效果,因此引入传输时延仅和第一优先级的客户业务的个数客户业务管道数(即第一优先级的客户业务的个数)相关,因此,系统最多可支持(5us-3us)/10.24ns=195个传输时延小于5us的客户业务,即分配的所有第一优先级的客户业务管道数不超过195个时,都可以保证传输时延在5us以内。对于第二优先级客户业务,如果期望传输时延小于20us,则按照同样的方法,系统可以支持(20us-3us)/10.24ns=1660个小于20us的客户业务管道,考虑到可能有195个第一优先级队列,因此,系统可以支持1660-195=1465个传输时延小于20us的客户业务,以此类推,可以得到系统可支持的第三优先级客户业务、第 四优先级客户业务的个数。这样,可以根据客户业务的传输时延要求和本系统的支持能力来分配对应的传输资源,实现不同客户业务的时延需求。
三、速率适配
由于有加速等因素的影响,会使得输出线路接口带宽大于输出线路接口传输的所有客户业务的带宽总和,因此,需要进行速率适配。具体的:若某一时刻所有客户业务的实时输出速率之和小于输出线路接口带宽,则该时刻通过填充无效的数据块,该所有客户业务一起输出。其中,速率适配过程中所填充的无效的数据块可以采用和数据块相同的格式,使用一个特殊的标签来识别,也可以采用其他长度或其他格式的数据块,本申请对此不进行限定。
上述主要从各个网元之间交互的角度对本申请实施例提供的方案进行了介绍。可以理解的是,各个网元,例如上下节点、传输节点或者客户设备,为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对上下节点、传输节点或者客户设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
比如,在采用对应各个功能划分各个功能模块的情况下,图14示出了上述实施例中所涉及的客户业务传输装置(具体可以是传输节点,也可以是上下节点)的一种可能的结构示意图。该客户业务传输装置可以包括:接收单元501和发送单元502。其中:
接收单元501,可以用于接收客户业务;其中,客户业务包括多个数据块,客户业务对应一计数器,计数器用于控制客户业务的输出速率。
发送单元502,可以用于在多个发送周期发送多个数据块;其中,在每个发送周期,当计数器的计数值达到预设阈值时,发送多个数据块中的至少一个数据块。
可选的,该客户业务传输装置还可以包括:控制单元503,用于在每个发送周期,在计数器的计数值达到预设阈值之前,在计数器的每个计数周期,将计数器的计数值增加C;其中,C小于或等于预设阈值。
可选的,C是根据客户业务的带宽确定的,预设阈值是根据输出线路接口带宽确定的。可选的,C是客户业务的带宽与单位带宽的比值,预设阈值是输出线路接口带宽与单位带宽的调整值的比值,其中,单位带宽的调整值大于或等于单位带宽。
可选的,在每个发送周期,计数器从初始值开始计数。
可选的,在第i+1个发送周期,计数器的初始值为第i个发送周期结束时计数器的计数值减去预设阈值之后得到的值;其中,i是大于或等于1的整数。
可选的,控制单元503,还可以用于在每个发送周期,当计数器的计数值达到预设阈值时,若没有缓存客户业务,则停止对计数器计数。
可选的,该客户业务传输装置还可以包括:存储单元504,用于将客户业务存储至缓存队列中。获取单元505,用于当计数器的计数值达到预设阈值时,从缓存队列中获取至少一个数据块。
可选的,多个数据块中的每个数据块具有固定长度。
可选的,发送单元502,具体可以用于:根据客户业务的优先级,发送多个数据块中的至少一个数据块;其中,期望传输时延越小优先级越高。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
在本申请实施例中,该客户业务传输装置以对应各个功能划分各个功能模块的形式来呈现,或者,该客户业务传输装置以采用集成的方式划分各个功能模块的形式来呈现。这里的“单元”可以指ASIC,电路,执行一个或多个软件或固件程序的处理器和存储器,集成逻辑电路,和/或其他可以提供上述功能的器件。
在一个简单的实施例中,本领域的技术人员可以想到上述客户业务传输装置可以采用图2所示的形式实现。比如,图14中的接收单元501和发送单元502可以通过图2中的通信接口23实现。图14中的存储单元504可以通过图2中的存储器22实现。图14中的控制单元503和获取单元505可以由图2中的处理器21来调用存储器22中存储的应用程序代码来执行,本申请实施例对此不作任何限制。
可以理解的,图4或图5所示的客户业务传输装置与图14所示的客户业务传输装置从不同角度对客户业务传输装置进行了功能模块划分。例如,图14中的控制单元503、存储单元504和获取单元505可以通过图4中的目的线路处理单元43实现,具体的,控制单元503可以通过图5中的速率监管模块432实现,存储单元504和获取单元505可以通过图5中的队列缓存模块431实现。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件程序实现时,可以全部或部分地以计算机程序产品的形式来实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或者数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可以用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带),光介质(例如,DVD)、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。

Claims (21)

  1. 一种客户业务传输方法,其特征在于,包括:
    接收客户业务;其中,所述客户业务包括多个数据块,所述客户业务对应一计数器,所述计数器用于控制所述客户业务的输出速率;
    在多个发送周期发送所述多个数据块;其中,在每个发送周期,当所述计数器的计数值达到预设阈值时,发送所述多个数据块中的至少一个数据块。
  2. 根据权利要求1所述的方法,其特征在于,在每个发送周期,在所述计数器的计数值达到预设阈值之前,所述方法还包括:
    在所述计数器的每个计数周期,将所述计数器的计数值增加C;其中,所述C小于或等于所述预设阈值。
  3. 根据权利要求2所述的方法,其特征在于,所述C是根据所述客户业务的带宽确定的,所述预设阈值是根据输出线路接口带宽确定的。
  4. 根据权利要求3所述的方法,其特征在于,所述C是所述客户业务的带宽与单位带宽的比值,所述预设阈值是所述输出线路接口带宽与所述单位带宽的调整值的比值,其中,所述单位带宽的调整值大于或等于所述单位带宽。
  5. 根据权利要求1至4任一项所述的方法,其特征在于,在每个发送周期,所述计数器从初始值开始计数。
  6. 根据权利要求5所述的方法,其特征在于,在第i+1个发送周期,所述计数器的初始值为第i个发送周期结束时所述计数器的计数值减去所述预设阈值之后得到的值;其中,i是大于或等于1的整数。
  7. 根据权利要求1至6任一项所述的方法,其特征在于,所述方法还包括:
    在每个发送周期,当所述计数器的计数值达到所述预设阈值时,若没有缓存所述客户业务,则停止对所述计数器计数。
  8. 根据权利要求1至7任一项所述的方法,其特征在于,在接收客户业务之后,所述方法还包括:
    将所述客户业务存储至缓存队列中;
    当所述计数器的计数值达到预设阈值时,从所述缓存队列中获取所述至少一个数据块。
  9. 根据权利要求1至8任一项所述的方法,其特征在于,所述多个数据块中的每个数据块具有固定长度。
  10. 根据权利要求1至9任一项所述的方法,其特征在于,所述发送所述多个数据块中的至少一个数据块,包括:
    根据所述客户业务的优先级,发送所述多个数据块中的至少一个数据块;其中,期望传输时延越小优先级越高。
  11. 一种客户业务传输装置,其特征在于,包括:
    接收单元,用于接收客户业务;其中,所述客户业务包括多个数据块,所述客户业务对应一计数器,所述计数器用于控制所述客户业务的输出速率;
    发送单元,用于在多个发送周期发送所述多个数据块;其中,在每个发送周期,当所述计数器的计数值达到预设阈值时,发送所述多个数据块中的至少一个数据块。
  12. 根据权利要求11所述的装置,其特征在于,所述装置还包括:
    控制单元,用于在每个发送周期,在所述计数器的计数值达到预设阈值之前,在所述计数器的每个计数周期,将所述计数器的计数值增加C;其中,所述C小于或等于所述预设阈值。
  13. 根据权利要求12所述的装置,其特征在于,所述C是根据所述客户业务的带宽确定的,所述预设阈值是根据输出线路接口带宽确定的。
  14. 根据权利要求13所述的装置,其特征在于,所述C是所述客户业务的带宽与单位带宽的比值,所述预设阈值是所述输出线路接口带宽与所述单位带宽的调整值的比值,其中,所述单位带宽的调整值大于或等于所述单位带宽。
  15. 根据权利要求11至14任一项所述的装置,其特征在于,在每个发送周期,所述计数器从初始值开始计数。
  16. 根据权利要求15所述的装置,其特征在于,在第i+1个发送周期,所述计数器的初始值为第i个发送周期结束时所述计数器的计数值减去所述预设阈值之后得到的值;其中,i是大于或等于1的整数。
  17. 根据权利要求11至16任一项所述的装置,其特征在于,所述装置还包括:
    控制单元,用于在每个发送周期,当所述计数器的计数值达到所述预设阈值时,若没有缓存所述客户业务,则停止对所述计数器计数。
  18. 根据权利要求11至17任一项所述的装置,其特征在于,所述装置还包括:
    存储单元,用于将所述客户业务存储至缓存队列中;
    获取单元,用于当所述计数器的计数值达到预设阈值时,从所述缓存队列中获取所述至少一个数据块。
  19. 根据权利要求11至18任一项所述的装置,其特征在于,所述多个数据块中的每个数据块具有固定长度。
  20. 根据权利要求11至19任一项所述的装置,其特征在于,
    所述发送单元,具体用于:根据所述客户业务的优先级,发送所述多个数据块中的至少一个数据块;其中,期望传输时延越小优先级越高。
  21. 一种客户业务传输装置,其特征在于,包括:处理器、存储器、总线和通信接口;该存储器用于存储计算机执行指令,所述处理器、所述存储器和所述通信接口通过所述总线连接,当所述装置运行时,所述处理器执行所述存储器存储的计算机执行指令,以使所述装置执行如权利要求1~10任一项所述的客户业务传输方法。
PCT/CN2017/081729 2017-04-24 2017-04-24 一种客户业务传输方法和装置 WO2018195728A1 (zh)

Priority Applications (8)

Application Number Priority Date Filing Date Title
JP2019556977A JP6962599B2 (ja) 2017-04-24 2017-04-24 クライアントサービス送信方法および装置
KR1020227006386A KR102408176B1 (ko) 2017-04-24 2017-04-24 클라이언트 전송 방법 및 디바이스
KR1020197033779A KR102369305B1 (ko) 2017-04-24 2017-04-24 클라이언트 전송 방법 및 디바이스
EP17907069.3A EP3605975B1 (en) 2017-04-24 2017-04-24 Client service transmission method and device
PCT/CN2017/081729 WO2018195728A1 (zh) 2017-04-24 2017-04-24 一种客户业务传输方法和装置
CN201780038563.1A CN109314673B (zh) 2017-04-24 2017-04-24 一种客户业务传输方法和装置
TW106145618A TWI680663B (zh) 2017-04-24 2017-12-26 客戶業務傳輸方法和裝置
US16/661,559 US11785113B2 (en) 2017-04-24 2019-10-23 Client service transmission method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/081729 WO2018195728A1 (zh) 2017-04-24 2017-04-24 一种客户业务传输方法和装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/661,559 Continuation US11785113B2 (en) 2017-04-24 2019-10-23 Client service transmission method and apparatus

Publications (1)

Publication Number Publication Date
WO2018195728A1 true WO2018195728A1 (zh) 2018-11-01

Family

ID=63917918

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/081729 WO2018195728A1 (zh) 2017-04-24 2017-04-24 一种客户业务传输方法和装置

Country Status (7)

Country Link
US (1) US11785113B2 (zh)
EP (1) EP3605975B1 (zh)
JP (1) JP6962599B2 (zh)
KR (2) KR102369305B1 (zh)
CN (1) CN109314673B (zh)
TW (1) TWI680663B (zh)
WO (1) WO2018195728A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112217733A (zh) * 2019-07-09 2021-01-12 中兴通讯股份有限公司 一种报文处理方法及相关装置
CN112437017A (zh) * 2020-11-17 2021-03-02 锐捷网络股份有限公司 一种数据流控系统、方法、装置、设备及介质
CN113225241A (zh) * 2021-04-19 2021-08-06 中国科学院计算技术研究所 面向环形数据报文网络的数据传输拥塞控制方法及系统

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827014A (zh) * 2018-05-28 2022-07-29 华为技术有限公司 一种报文处理方法和装置
CN110688208A (zh) * 2019-09-09 2020-01-14 平安普惠企业管理有限公司 线性递增的任务处理方法、装置、计算机设备和存储介质
US20220014884A1 (en) * 2020-07-07 2022-01-13 Metrolla Inc. Method For Wireless Event-Driven Everything-to-Everything (X2X) Payload Delivery
CN114124831A (zh) * 2020-08-28 2022-03-01 中国移动通信集团终端有限公司 数据发送方法、装置、设备及存储介质
CN112540724A (zh) * 2020-11-20 2021-03-23 普联技术有限公司 一种数据发送方法、装置及设备
CN116709066A (zh) * 2020-11-20 2023-09-05 华为技术有限公司 Pon中的数据传输方法、装置和系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1787483A (zh) * 2004-12-10 2006-06-14 华为技术有限公司 一种流量控制方法
US20140064081A1 (en) * 2012-08-31 2014-03-06 Guglielmo Marco Morandin Multicast replication skip
CN105915468A (zh) * 2016-06-17 2016-08-31 北京邮电大学 一种业务的调度方法及装置
CN106506119A (zh) * 2016-11-16 2017-03-15 南京金水尚阳信息技术有限公司 一种窄带非对称信道的rtu数据可靠传输控制方法

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4429415A (en) * 1981-11-30 1984-01-31 Rca Corporation Signal-seeking tuning system with signal loss protection for a television receiver
JPH04257145A (ja) * 1991-02-12 1992-09-11 Hitachi Ltd パケット流量制御方法およびパケット交換システム
JP3478100B2 (ja) * 1997-12-09 2003-12-10 三菱電機株式会社 無線回線割当装置及び無線回線割当方法
JP4006169B2 (ja) * 2000-05-30 2007-11-14 株式会社日立製作所 ラベルスイッチング型パケット転送装置
US7114009B2 (en) * 2001-03-16 2006-09-26 San Valley Systems Encapsulating Fibre Channel signals for transmission over non-Fibre Channel networks
JP3644404B2 (ja) * 2001-04-27 2005-04-27 三菱電機株式会社 光加入者線端局装置及びaponシステム及びセル遅延ゆらぎ抑制方法
JP3994774B2 (ja) * 2002-03-28 2007-10-24 三菱電機株式会社 光加入者線終端装置及びユーザトラヒック収容方法
EP1592182A4 (en) * 2003-02-03 2010-05-12 Nippon Telegraph & Telephone DATA TRANSMISSION DEVICE AND DATA TRANSMISSION SYSTEM
ES2327626T3 (es) * 2003-08-11 2009-11-02 Alcatel Lucent Un metodo de suministro de un servicio multimedia en una red de comunicacion inalambrica digital.
WO2005122493A1 (fr) * 2004-06-07 2005-12-22 Huawei Technologies Co., Ltd. Procede de realisation d'une transmission d'acheminement dans un reseau
US20060242319A1 (en) * 2005-04-25 2006-10-26 Nec Laboratories America, Inc. Service Differentiated Downlink Scheduling in Wireless Packet Data Systems
JP4648833B2 (ja) * 2005-12-28 2011-03-09 富士通株式会社 帯域管理装置
US7649910B1 (en) * 2006-07-13 2010-01-19 Atrica Israel Ltd. Clock synchronization and distribution over a legacy optical Ethernet network
JP4839266B2 (ja) * 2007-06-07 2011-12-21 株式会社日立製作所 光通信システム
CN102164067B (zh) * 2010-02-20 2013-11-06 华为技术有限公司 交换网流控实现方法、交换设备及系统
CN101860481A (zh) * 2010-05-25 2010-10-13 北京邮电大学 一种MPLS-TP over OTN多层网络中区分优先级的业务传送方法及其装置
US20120328288A1 (en) * 2011-06-23 2012-12-27 Exar Corporation Method for aggregating multiple client signals into a generic framing procedure (gfp) path
US9025467B2 (en) * 2011-09-29 2015-05-05 Nec Laboratories America, Inc. Hitless protection for traffic received from 1+1 protecting line cards in high-speed switching systems
EP2887590B1 (en) * 2012-09-25 2017-09-20 Huawei Technologies Co., Ltd. Flow control method, device and network
EP2916496B1 (en) * 2012-12-05 2017-08-02 Huawei Technologies Co., Ltd. Data processing method, communication single board and device
US10455301B2 (en) * 2013-01-17 2019-10-22 Infinera Corporation Method to re-provision bandwidth in P-OTN network based on current traffic demand
US9584429B2 (en) * 2014-07-21 2017-02-28 Mellanox Technologies Ltd. Credit based flow control for long-haul links
US9538264B2 (en) * 2014-08-07 2017-01-03 Ciena Corporation ODUflex resizing systems and methods
WO2016041580A1 (en) * 2014-09-16 2016-03-24 Huawei Technologies Co.,Ltd Scheduler, sender, receiver, network node and methods thereof
WO2016078026A1 (zh) * 2014-11-19 2016-05-26 华为技术有限公司 处理被叫业务的方法、移动管理实体和归属用户服务器
US9807024B2 (en) * 2015-06-04 2017-10-31 Mellanox Technologies, Ltd. Management of data transmission limits for congestion control
WO2017059550A1 (en) * 2015-10-07 2017-04-13 Szymanski Ted H A reduced-complexity integrated guaranteed-rate optical packet switch
US10027594B1 (en) * 2016-03-30 2018-07-17 Amazon Technologies, Inc. Congestion control for label switching traffic

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1787483A (zh) * 2004-12-10 2006-06-14 华为技术有限公司 一种流量控制方法
US20140064081A1 (en) * 2012-08-31 2014-03-06 Guglielmo Marco Morandin Multicast replication skip
CN105915468A (zh) * 2016-06-17 2016-08-31 北京邮电大学 一种业务的调度方法及装置
CN106506119A (zh) * 2016-11-16 2017-03-15 南京金水尚阳信息技术有限公司 一种窄带非对称信道的rtu数据可靠传输控制方法

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112217733A (zh) * 2019-07-09 2021-01-12 中兴通讯股份有限公司 一种报文处理方法及相关装置
KR20220006606A (ko) * 2019-07-09 2022-01-17 지티이 코포레이션 메시지 처리 방법 및 관련 장치
EP3968586A4 (en) * 2019-07-09 2022-06-29 ZTE Corporation Packet processing method and related apparatus
JP7329627B2 (ja) 2019-07-09 2023-08-18 中興通訊股▲ふん▼有限公司 パケット処理方法及び関連装置
KR102633193B1 (ko) * 2019-07-09 2024-02-01 지티이 코포레이션 메시지 처리 방법 및 관련 장치
CN112217733B (zh) * 2019-07-09 2024-02-02 中兴通讯股份有限公司 一种报文处理方法及相关装置
CN112437017A (zh) * 2020-11-17 2021-03-02 锐捷网络股份有限公司 一种数据流控系统、方法、装置、设备及介质
CN113225241A (zh) * 2021-04-19 2021-08-06 中国科学院计算技术研究所 面向环形数据报文网络的数据传输拥塞控制方法及系统

Also Published As

Publication number Publication date
KR20190138861A (ko) 2019-12-16
JP6962599B2 (ja) 2021-11-05
JP2020518172A (ja) 2020-06-18
EP3605975A1 (en) 2020-02-05
EP3605975A4 (en) 2020-04-08
TW201840169A (zh) 2018-11-01
EP3605975B1 (en) 2024-02-14
US20200059436A1 (en) 2020-02-20
US11785113B2 (en) 2023-10-10
CN109314673B (zh) 2022-04-05
KR20220025306A (ko) 2022-03-03
KR102369305B1 (ko) 2022-02-28
KR102408176B1 (ko) 2022-06-10
TWI680663B (zh) 2019-12-21
CN109314673A (zh) 2019-02-05

Similar Documents

Publication Publication Date Title
WO2018195728A1 (zh) 一种客户业务传输方法和装置
JP7231749B2 (ja) パケットスケジューリング方法、スケジューラ、ネットワーク装置及びネットワークシステム
US10193831B2 (en) Device and method for packet processing with memories having different latencies
US8553708B2 (en) Bandwith allocation method and routing device
JP2014522202A (ja) パケットを再構築し再順序付けするための方法、装置、およびシステム
US20150058485A1 (en) Flow scheduling device and method
US11252099B2 (en) Data stream sending method and system, and device
JP2023511889A (ja) サービスレベル構成方法および装置
JP2020072336A (ja) パケット転送装置、方法、及びプログラム
WO2016082603A1 (zh) 一种调度器及调度器的动态复用方法
Klymash et al. Data Buffering Multilevel Model at a Multiservice Traffic Service Node
WO2019200568A1 (zh) 一种数据通信方法及装置
JP2024519555A (ja) パケット伝送方法及びネットワークデバイス
Liu et al. Queue management algorithm for multi-terminal and multi-service models of priority
JP7193787B2 (ja) 通信システム、ブリッジ装置、通信方法、及びプログラム
CN114070776B (zh) 一种改进的时间敏感网络数据传输方法、装置及设备
JP2003152751A (ja) 通信システム、通信端末、サーバ、及びフレーム送出制御プログラム
JP5938939B2 (ja) パケット交換装置、パケット交換方法及び帯域制御プログラム
WO2022246710A1 (zh) 一种控制数据流传输的方法及通信装置
Radivojević et al. Single-Channel EPON
CN117897936A (zh) 一种报文转发方法及装置
CN113746746A (zh) 数据处理方法及设备
CN117768417A (zh) 数据传输方法、装置、计算机设备和存储介质
CN117793583A (zh) 报文转发方法、装置、电子设备及计算机可读存储介质
CN117749726A (zh) Tsn交换机输出端口优先级队列混合调度方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17907069

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019556977

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017907069

Country of ref document: EP

Effective date: 20191028

ENP Entry into the national phase

Ref document number: 20197033779

Country of ref document: KR

Kind code of ref document: A