WO2022135202A1 - Method, device and system for scheduling service flows


Info

Publication number
WO2022135202A1
WO2022135202A1 (PCT/CN2021/137364)
Authority
WO
WIPO (PCT)
Prior art keywords
service flow
scheduler
network device
service
transmission rate
Prior art date
Application number
PCT/CN2021/137364
Other languages
English (en)
French (fr)
Inventor
宋健
赵喜全
张永平
王震
李广
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP21909199.8A priority Critical patent/EP4262313A4/en
Publication of WO2022135202A1 publication Critical patent/WO2022135202A1/zh
Priority to US18/339,273 priority patent/US20230336486A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/22 Traffic shaping
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2425 Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
    • H04L47/2433 Allocation of priorities to traffic types
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/10 Flow control between communication endpoints
    • H04W72/00 Local resource management
    • H04W72/12 Wireless traffic scheduling
    • H04W72/1263 Mapping of traffic onto schedule, e.g. scheduled allocation or multiplexing of flows
    • H04W72/1273 Mapping of traffic onto schedule, e.g. scheduled allocation or multiplexing of flows of downlink data flows
    • H04W72/50 Allocation or scheduling criteria for wireless resources
    • H04W72/535 Allocation or scheduling criteria for wireless resources based on resource usage policies
    • H04W72/54 Allocation or scheduling criteria for wireless resources based on quality criteria
    • H04W72/542 Allocation or scheduling criteria for wireless resources based on quality criteria using measured or perceived quality
    • H04W72/56 Allocation or scheduling criteria for wireless resources based on priority criteria
    • H04W72/566 Allocation or scheduling criteria for wireless resources based on priority criteria of the information or information source or recipient
    • H04W72/569 Allocation or scheduling criteria for wireless resources based on priority criteria of the information or information source or recipient of the traffic information

Definitions

  • the present application relates to the field of communication technologies, and in particular, to a method, device, and system for scheduling service flows.
  • The service level requirements corresponding to different service flows may differ.
  • For example, the service flows of latency-sensitive services usually require low latency, high bandwidth, and a low packet loss rate, while the service flows of non-latency-sensitive services (such as file download and video on demand) require high bandwidth but impose no strict requirements on latency or packet loss rate.
  • The service flow sent by a server needs to be forwarded to the user's terminal through multi-level network equipment.
  • The multi-level network equipment generally includes a backbone router, a service router (SR), a local area network switch (LSW), an optical line terminal (OLT), an optical network terminal (ONT), and so on. When such a network device receives different service flows, it usually mixes the different service flows in one queue for scheduling, and this scheduling method cannot meet the service level requirements of the different service flows.
  • the present application provides a scheduling method, device and system for a service flow to solve the technical problem that the scheduling method in the related art cannot meet the service level requirements of different service flows.
  • A method for scheduling a service flow is provided, comprising: a first network device separately scheduling a first service flow and a second service flow based on a hierarchical quality of service (HQoS) model, where the priority of the first service flow is higher than the priority of the second service flow.
  • The first network device can adjust the HQoS model's transmission rate threshold for the second service flow to a first threshold, where the first threshold is smaller than the current data transmission rate of the second service flow.
  • the service level requirement may be a requirement defined by a service level agreement (SLA) or other agreed requirement.
  • the bandwidth resource of the downlink port of the first network device can be assigned to the first service flow with higher priority, so as to ensure that the service level requirement of the first service flow with higher priority can be preferentially satisfied.
  • the first threshold is greater than or equal to the average data transmission rate of the second service flow, so as to prevent traffic shaping from seriously affecting the transmission quality of the second service flow.
  • The process in which the first network device adjusts the transmission rate threshold of the second service flow in the HQoS model to the first threshold may include: when the transmission quality of the first service flow does not meet the service level requirement corresponding to the first service flow, and the current data transmission rate of the second service flow is greater than the peak threshold of the data transmission rate of the second service flow, adjusting the transmission rate threshold of the second service flow in the HQoS model to the first threshold.
  • the first network device may determine that there is currently a traffic burst in the second service flow. Since the traffic burst will seriously occupy the bandwidth resources of other service flows, the first network device performs traffic shaping on the second service flow with the traffic burst based on this, which can effectively improve the transmission quality of the first service flow.
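The shaping trigger described above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the function names, the latency-based quality check, and the 0.8 reduction factor are all assumptions; the patent only requires that the first threshold be below the second flow's current rate and (optionally) at or above its average rate.

```python
# Hypothetical sketch of the shaping trigger: lower-priority traffic is shaped
# only when (a) the high-priority flow misses its service level requirement and
# (b) the low-priority flow is bursting past its peak-rate threshold.

def should_shape(hp_latency_ms: float, hp_latency_sla_ms: float,
                 lp_current_rate: float, lp_peak_rate: float) -> bool:
    """Return True when the second (lower-priority) flow should be shaped."""
    sla_violated = hp_latency_ms > hp_latency_sla_ms   # condition (a)
    bursting = lp_current_rate > lp_peak_rate          # condition (b)
    return sla_violated and bursting

def new_threshold(lp_current_rate: float, lp_avg_rate: float) -> float:
    """First threshold: below the current rate but not below the average rate."""
    # Keeping the threshold >= the average rate prevents shaping from severely
    # degrading the low-priority flow, as the application suggests.
    return max(lp_avg_rate, lp_current_rate * 0.8)  # 0.8 is an assumed factor

print(should_shape(35.0, 20.0, 120.0, 100.0))  # True: SLA missed and bursting
print(new_threshold(120.0, 90.0))              # 96.0 (still >= the 90.0 average)
```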
  • The transmission rate threshold of the second service flow includes one or more of a peak information rate (PIR), a committed access rate (CAR), a committed information rate (CIR), and an excess information rate (EIR).
  • Alternatively, the transmission rate threshold of the second service flow includes any one of the PIR, CAR, CIR, and EIR.
  • When the transmission rate threshold includes multiple rates, the first network device needs to adjust each of these rates.
  • For example, the first network device may adjust the multiple rates in the transmission rate threshold to the same first threshold, that is, the adjusted rates are equal in value.
  • Alternatively, the first network device may adjust the multiple rates in the transmission rate threshold to respective corresponding thresholds, that is, the values of the adjusted rates may be unequal.
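The two adjustment options above (one uniform threshold versus per-rate thresholds) can be illustrated with a small container for the four named rates. The field layout and method names are assumptions for the sketch, not the patent's data model.

```python
from dataclasses import dataclass

# Illustrative container for the four rate thresholds named above.
@dataclass
class RateThresholds:
    pir: float  # peak information rate
    car: float  # committed access rate
    cir: float  # committed information rate
    eir: float  # excess information rate

    def adjust_uniform(self, first_threshold: float) -> None:
        # Option 1: set every rate to the same first threshold.
        self.pir = self.car = self.cir = self.eir = first_threshold

    def adjust_individual(self, **per_rate: float) -> None:
        # Option 2: set each rate to its own (possibly unequal) threshold.
        for name, value in per_rate.items():
            setattr(self, name, value)

t = RateThresholds(pir=100.0, car=80.0, cir=60.0, eir=40.0)
t.adjust_uniform(50.0)
print(t.pir, t.cir)  # 50.0 50.0
t.adjust_individual(pir=70.0, cir=45.0)
print(t.pir, t.cir)  # 70.0 45.0
```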
  • The first network device is connected to the terminal through a second network device. The HQoS model includes a multi-level scheduler: a first-level scheduler corresponding to the downlink port of the first network device, a second-level scheduler corresponding to the downlink port of the second network device, a first bottom-level scheduler for transmitting the first service flow through the downlink port of the second network device, and a second bottom-level scheduler for transmitting the second service flow through the downlink port of the second network device.
  • the first bottom layer scheduler corresponds to the first service flow transmitted by the downlink port of the second network device
  • the second bottom layer scheduler corresponds to the second service flow transmitted by the downlink port of the second network device.
  • the first network device can implement the scheduling of the first service flow and the second service flow respectively through the two underlying schedulers.
  • One implementation in which the first network device adjusts the transmission rate threshold of the second service flow in the HQoS model to the first threshold includes: the first network device adjusting the transmission rate threshold for the second service flow of at least one of the first-level scheduler, the second-level scheduler, and the second bottom-level scheduler to the first threshold.
  • For example, the first network device may adjust only the second bottom-level scheduler's transmission rate threshold for the second service flow to the first threshold.
  • Another implementation in which the first network device adjusts the transmission rate threshold of the second service flow to the first threshold includes: determining a target scheduler that transmits the first service flow and on which network congestion occurs, where the target scheduler may be the first-level scheduler or the second-level scheduler; and adjusting the target scheduler's transmission rate threshold for the second service flow to the first threshold.
  • the degree of congestion when the target scheduler transmits the first service flow can be effectively reduced, thereby improving the transmission quality of the first service flow.
  • The sum of the first-level scheduler's transmission rate thresholds for the first service flow and the second service flow may be less than or equal to the maximum bandwidth of the downlink port of the first network device; the sum of the second-level scheduler's transmission rate thresholds for the first service flow and the second service flow may be less than or equal to the maximum bandwidth of the downlink port of the second network device.
  • By making the sum of a scheduler's transmission rate thresholds for the service flows less than or equal to the maximum bandwidth of the corresponding network device's downlink port, it can be ensured that the bandwidth of that downlink port meets the bandwidth needs of the service flows scheduled by the scheduler.
  • The first bottom-level scheduler's transmission rate threshold for the first service flow is less than or equal to the maximum bandwidth of the downlink port of the second network device.
  • the transmission rate threshold of the second underlying scheduler for the second service flow is less than or equal to the maximum bandwidth of the downlink port of the second network device.
  • the first underlying scheduler includes a first queue for buffering packets of the first service flow
  • the second underlying scheduler includes a second queue for buffering packets of the second service flow.
  • the sum of the maximum queue buffer of the first queue and the maximum queue buffer of the second queue is less than or equal to the maximum port buffer of the downlink port of the second network device.
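The capacity constraints above (per-port rate sums bounded by the port's maximum bandwidth, and per-queue buffers bounded by the port's maximum buffer) reduce to simple sum checks. All concrete values below are illustrative assumptions.

```python
# Minimal checks of the two capacity constraints described above.

def rates_fit_port(rate_thresholds, max_port_bandwidth):
    """Sum of per-flow transmission rate thresholds must not exceed the port."""
    return sum(rate_thresholds) <= max_port_bandwidth

def buffers_fit_port(queue_buffers, max_port_buffer):
    """Sum of the queues' maximum buffers must not exceed the port buffer."""
    return sum(queue_buffers) <= max_port_buffer

# First-level scheduler: two flows sharing an assumed 1000 Mbit/s downlink port.
print(rates_fit_port([600, 400], 1000))   # True
# Second device's port: two bottom-level queues sharing an assumed 64 MB buffer.
print(buffers_fit_port([32, 24], 64))     # True
print(buffers_fit_port([40, 32], 64))     # False
```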
  • The upper limit of the delay in the service level requirement of the first service flow may be smaller than the upper limit of the delay in the service level requirement of the second service flow. That is, service flows with stricter latency requirements (i.e., higher real-time requirements) can have a higher priority, and service flows with looser latency requirements (i.e., lower real-time requirements) can have a lower priority.
  • the method may further include: the first network device schedules a third service flow based on the HQoS model, where the priority of the third service flow is higher than the priority of the second service flow and lower than the priority of the first service flow.
  • When the transmission rate threshold of the second service flow is less than or equal to the average data transmission rate of the second service flow, or when the current data transmission rate of the second service flow is less than or equal to the transmission rate threshold of the second service flow, the first network device adjusts the transmission rate threshold of the third service flow to a second threshold, where the second threshold is smaller than the current data transmission rate of the third service flow, thereby performing traffic shaping on the third service flow.
  • That is, traffic shaping may also be performed on other lower-priority service flows, such as the third service flow, to ensure that the service level requirement of the first service flow is met.
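The escalation described above can be sketched as a priority walk: shape the lowest-priority flow first, and only move to the next priority when the lower flow can no longer be shaped (its threshold is already at or below its average rate) or is no longer exceeding its threshold. The flow-record fields are assumptions for illustration.

```python
# Hypothetical escalation sketch across priorities, lowest priority first.

def pick_flow_to_shape(flows):
    """flows: list of dicts ordered from lowest to highest priority."""
    for f in flows:
        exhausted = f["threshold"] <= f["avg_rate"]          # cannot shape further
        already_conforming = f["current_rate"] <= f["threshold"]
        if not exhausted and not already_conforming:
            return f["name"]
    return None  # nothing left to shape below the top priority

flows = [
    {"name": "second", "threshold": 50, "avg_rate": 60, "current_rate": 45},
    {"name": "third",  "threshold": 90, "avg_rate": 70, "current_rate": 110},
]
print(pick_flow_to_shape(flows))  # third: the second flow is already exhausted
```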
  • In a second aspect, a service flow scheduling apparatus is provided. The scheduling apparatus is applied to a first network device and includes at least one module, and the at least one module can be used to implement the service flow scheduling method provided by the first aspect or any optional solution of the first aspect.
  • In a third aspect, a service flow scheduling apparatus is provided, including a memory and a processor, where the memory is used to store a computer program or code, and the processor is used to execute the computer program or code to implement the service flow scheduling method provided by the first aspect or any optional solution of the first aspect.
  • In a fourth aspect, a computer-readable storage medium is provided, including instructions or code; when the instructions or code are executed on a computer, the computer is made to execute the service flow scheduling method provided by the first aspect or any optional solution of the first aspect.
  • In a fifth aspect, a chip is provided, including a programmable logic circuit and/or program instructions, and the chip is configured to execute the service flow scheduling method provided by the first aspect or any optional solution of the first aspect.
  • In a sixth aspect, a computer program product is provided; the computer program product includes a program or code, and when the program or code is run on a computer, the computer is made to execute the service flow scheduling method provided by the first aspect or any optional solution of the first aspect.
  • In a seventh aspect, a traffic scheduling system is provided; the traffic scheduling system includes a terminal and a first network device, the first network device is used to schedule a first service flow and a second service flow of the terminal, and the first network device includes the service flow scheduling apparatus provided in the second aspect or the third aspect.
  • the first network device includes the chip provided in the fifth aspect.
  • the traffic scheduling system may further include a second network device, and the first network device may be connected to the terminal through the second network device.
  • the embodiments of the present application provide a method, device, and system for scheduling a service flow.
  • The first network device adjusts the HQoS model's transmission rate threshold for the lower-priority service flow to a first threshold, where the first threshold is smaller than the current data transmission rate of that service flow.
  • In this way, traffic shaping of the lower-priority service flow can be realized.
  • the bandwidth of the downlink port of the first network device can be assigned to the service flow with the higher priority, so as to ensure that the service level requirement of the service flow with the higher priority can be preferentially satisfied.
  • FIG. 1 is a schematic diagram of a network scenario of a traffic scheduling system provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an HQoS model provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of another traffic scheduling system provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of another HQoS model provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of a method for scheduling a service flow provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a first network device provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a traffic scheduling system provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a traffic shaping provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an apparatus for scheduling a service flow provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of another service flow scheduling apparatus provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of another apparatus for scheduling a service flow provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a network scenario of a traffic scheduling system provided by an embodiment of the present application.
  • A server for providing the service stream can be set in a city or county close to the terminal, so that the service stream (e.g., a video stream) acquired by the terminal mainly comes from a server in a nearby data center (DC) or content delivery network (CDN), thereby effectively improving the end-user experience.
  • the terminal may also be referred to as user equipment, which may be a mobile phone, a computer, a wearable device, or a smart home device.
  • the servers in the DC and CDN may forward the service flow to the terminal through a multi-level network device.
  • the multi-level network device may include: backbone routers 10, SR 20, LSW 30, OLT 40, ONT 50, etc. cascaded in sequence.
  • the uplink port of the backbone router 10 can be connected to a server in the DC and/or CDN to access the Internet (Internet), and the ONT 50 is connected to one or more terminals.
  • the traffic scheduling system may further include a splitter (splitter) 60, and the OLT 40 may be connected to a plurality of ONTs 50 through the splitter 60.
  • The backbone router 10 and the LSW 30, or the backbone router 10 and the OLT 40, may also be connected through a broadband access server (BAS).
  • For example, the BAS may be a broadband remote access server (BRAS).
  • the nodes between the backbone router 10 and the OLT 40 are collectively referred to as SR/BRAS 20 below.
  • the SR/BRAS 20 shown in FIG. 1 may be directly connected to the OLT 40, that is, the traffic scheduling system may also not include the LSW 30.
  • the SR/BRAS 20 and the OLT 40 shown in FIG. 1 may be connected through a plurality of cascaded LSWs 30.
  • Alternatively, the LSW 30 and the ONT 50 shown in FIG. 1 may be connected without the OLT 40, or may be connected through multiple cascaded OLTs 40.
  • Likewise, the ONT 50 may not be included between the OLT 40 shown in FIG. 1 and the terminal.
  • the cascading of network devices may refer to: the downlink port of one network device is connected to the ingress port of another network device.
  • the service level requirement corresponding to the service flow is simply referred to as the service level requirement of the service flow below. It can be understood that, in this embodiment of the present application, the service level requirement may be a requirement defined in the SLA.
  • backbone routers are usually not the bottleneck of network congestion because backbone routers have large throughput and processing capabilities, and service flows can be load-balanced among backbone routers.
  • The traffic pressure of the traffic scheduling system is mainly concentrated in the metropolitan area network; that is, as shown in FIG. 1, network congestion usually occurs on the links between SR/BRAS 20 and LSW 30, between LSW 30 and OLT 40, and between OLT 40 and ONT 50.
  • the embodiment of the present application provides a service flow scheduling method, which enables the traffic scheduling system to preferentially meet the service level requirements of higher priority service flows, thereby effectively improving the guarantee capability of the service level requirements of the traffic scheduling system.
  • the scheduling method can be applied to the first network device in the traffic scheduling system, and the first network device can be the SR/BRAS 20, the LSW 30, the OLT 40 or the ONT 50 in the system shown in FIG. 1 .
  • an HQoS model is deployed in the first network device.
  • the HQoS model can divide the scheduling queue into multiple scheduling levels, and each level can use different traffic characteristics for traffic management, so as to achieve multi-user and multi-service service management.
  • the first network device divides the received multiple service flows into different priorities, and can perform different scheduling on the service flows with different priorities based on the HQoS model. For example, when the transmission quality of a service flow with a higher priority does not meet the service level requirements of the service flow, traffic shaping can be performed on the service flow with a lower priority.
  • traffic shaping is a way to adjust the data transmission rate of the service flow, which can limit the burst of the service flow, so that the service flow is sent out at a relatively uniform rate.
  • By performing traffic shaping on the lower-priority service flow, the bandwidth of the downlink port of the first network device can be yielded to the higher-priority service flow, so as to ensure that the service level requirement of the higher-priority service flow is preferentially satisfied.
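Traffic shaping as described above (limiting bursts so a flow is sent at a relatively uniform rate) is commonly realized with a token bucket; this generic sketch is one way the shaping unit could work, not the patent's specific mechanism.

```python
# Generic token-bucket shaper: tokens accumulate at the shaping rate, and a
# packet is released only when enough tokens are available, smoothing bursts.

class TokenBucketShaper:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s   # shaping rate (the rate threshold)
        self.burst = burst_bytes       # bucket depth: tolerated burst size
        self.tokens = burst_bytes      # bucket starts full
        self.last = 0.0                # last update time in seconds

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True       # send now
        return False          # hold in queue (or drop) until tokens refill

shaper = TokenBucketShaper(rate_bytes_per_s=1000.0, burst_bytes=1500.0)
print(shaper.allow(1500, now=0.0))   # True: bucket starts full
print(shaper.allow(1500, now=0.5))   # False: only ~500 tokens refilled
print(shaper.allow(1500, now=1.5))   # True: a full second of tokens refilled
```

Lowering the HQoS model's rate threshold for a flow, as in the method above, corresponds to lowering `rate_bytes_per_s` for that flow's shaper.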
  • the first network device may be connected to the terminal through the second network device.
  • The HQoS model may include: a first-level scheduler 21 corresponding to the downlink port of the first network device, a second-level scheduler 22 corresponding to the downlink port of the second network device, and N bottom-level schedulers for transmitting L service flows of different priorities through the downlink port of the second network device.
  • L and N are integers greater than 1, and N is greater than or equal to L.
  • Each bottom-level scheduler corresponds to a service flow of one priority transmitted by the downlink port of the second network device, and is used to schedule the service flow of the corresponding priority. Since the bottom-level scheduler corresponds to a service flow, it can also be called a flow queue (FQ)-level scheduler.
  • a service flow with a priority may correspond to an underlying scheduler, that is, a service flow with a priority may be scheduled by a corresponding underlying scheduler.
  • When N is greater than L, the N bottom-level schedulers may at least include: a first bottom-level scheduler 23 for transmitting the first service flow through the downlink port of the second network device, and a second bottom-level scheduler 24 for transmitting the second service flow through the downlink port of the second network device.
  • the N bottom-level schedulers at least include a first bottom-level scheduler 23 corresponding to the first service flow, and a second bottom-level scheduler 24 corresponding to the second service flow.
  • The correspondence between a scheduler and the downlink port of a network device may mean that the scheduler establishes a mapping relationship with the downlink port, and schedules service flows based on the port parameters of that downlink port (for example, the maximum bandwidth and/or the maximum port buffer).
  • The first network device may be connected to the terminal through multiple cascaded second network devices; correspondingly, the HQoS model may include multiple cascaded second-level schedulers.
  • For example, the HQoS model may include: the first-level scheduler 21 corresponding to the SR/BRAS 20, second-level schedulers 22 corresponding one-to-one to the second network devices, and four bottom-level schedulers corresponding one-to-one to four service flows of different priorities.
  • both the first-level scheduler 21 and the second-level scheduler 22 in the HQoS model may include a scheduling unit and a shaping unit, and the underlying scheduler may include a shaping unit.
  • the shaping unit is used to perform traffic shaping on the service flow.
  • the scheduling unit is configured to select, according to a pre-configured scheduling policy, a message in a certain scheduler from a plurality of schedulers connected to it for scheduling.
  • the scheduling policy may include strict priority (strict priority, SP) scheduling or weighted fair queue (weighted fair queue, WFQ) scheduling, or the like.
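The two scheduling policies named above can be sketched minimally: SP always serves the highest-priority non-empty queue, while WFQ divides bandwidth among queues in proportion to their weights. The queue layout and weight values are illustrative assumptions.

```python
# Minimal sketches of the SP and WFQ policies mentioned above.

def sp_pick(queues):
    """queues: list of (priority, backlog); lower number = higher priority.
    Strict priority: serve the highest-priority queue that has a backlog."""
    backlogged = [q for q in queues if q[1] > 0]
    return min(backlogged)[0] if backlogged else None

def wfq_shares(weights, link_rate):
    """Bandwidth each queue receives under WFQ with the given weights."""
    total = sum(weights)
    return [link_rate * w / total for w in weights]

print(sp_pick([(0, 0), (1, 3), (2, 5)]))   # 1: queue 0 is empty
print(wfq_shares([3, 1], 100.0))           # [75.0, 25.0]
```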
  • the structure of multiple schedulers in the HQoS model may be the same as the topology structure of multiple network devices corresponding to the multiple schedulers. That is, the first-level scheduler can be connected to N bottom-level schedulers through one second-level scheduler or multiple cascaded second-level schedulers.
  • The first-level scheduler 21 in the HQoS model can be a dummy port (DP)-level scheduler.
  • Of the three second-level schedulers 22 included in the HQoS model, the second-level scheduler 22 corresponding to the downlink egress port of the LSW 30 may be a virtual interface (VI)-level scheduler, the second-level scheduler 22 corresponding to the downlink egress port of the OLT 40 may be a user group queue (GQ)-level scheduler, and the second-level scheduler 22 corresponding to the downlink egress port of the ONT 50 may be a subscriber queue (SQ)-level scheduler.
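The five scheduler levels named above (DP, VI, GQ, SQ, FQ) can be chained into a tree mirroring the SR/BRAS to LSW to OLT to ONT to per-priority-flow topology. The node class and labels below are assumptions for illustration, not the patent's structures.

```python
# Illustrative scheduler tree mirroring the device topology described above.

class Scheduler:
    def __init__(self, level: str, name: str):
        self.level = level
        self.name = name
        self.children = []

    def attach(self, child: "Scheduler") -> "Scheduler":
        self.children.append(child)
        return child

dp = Scheduler("DP", "sr-bras-downlink")           # first-level scheduler
vi = dp.attach(Scheduler("VI", "lsw-downlink"))    # per-LSW
gq = vi.attach(Scheduler("GQ", "olt-downlink"))    # per-OLT
sq = gq.attach(Scheduler("SQ", "ont-user"))        # per-ONT / per-user
sq.attach(Scheduler("FQ", "high-priority-flow"))   # bottom-level schedulers
sq.attach(Scheduler("FQ", "low-priority-flow"))

def depth(node):
    """Number of scheduler levels on the longest path from this node down."""
    return 1 + max((depth(c) for c in node.children), default=0)

print(depth(dp))                       # 5: DP -> VI -> GQ -> SQ -> FQ
print([c.level for c in sq.children])  # ['FQ', 'FQ']
```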
  • FIG. 3 is a schematic structural diagram of another traffic scheduling system provided by an embodiment of the present application
  • FIG. 4 is a schematic structural diagram of another HQoS model provided by an embodiment of the present application.
  • The SR/BRAS 20 can be connected to multiple LSWs 30, each LSW 30 can be connected to multiple OLTs 40, and each OLT 40 can be connected to multiple ONTs 50.
  • the OLT 40 is connected to two ONTs 50, one of which is connected to the first terminal 03 and the second terminal 04 respectively, and the other ONT 50 is connected to the third terminal 05.
  • The HQoS model in the SR/BRAS 20 may include a DP-level scheduler (i.e., a first-level scheduler 21), and may also include: multiple VI-level schedulers corresponding to the multiple LSWs 30, multiple GQ-level schedulers corresponding to the multiple OLTs 40, and multiple SQ-level schedulers corresponding to the multiple ONTs 50. One SQ-level scheduler can be connected to multiple bottom-level schedulers.
  • each ONT 50 in the traffic scheduling system may correspond to a user, and is used to access one or more terminals of the user to the network.
  • multiple SQ-level schedulers in the HQoS model can be used to distinguish service flows of different users, that is, each SQ-level scheduler can be used to schedule service flows of one user.
  • the user may refer to a virtual local area network (virtual local area network, VLAN), a virtual private network (virtual private network, VPN), or a home broadband user.
  • Multiple underlying schedulers connected to each SQ-level scheduler can be used to distinguish service flows of different priorities of the same user, wherein the service flows of each priority level can include one or more types of service flows.
  • a user's service flow includes four different types of service flows: a voice service flow, a game service flow, a video-on-demand service flow, and a file download service flow.
  • the voice service flow and the game service flow belong to the high-priority service flow
  • the video-on-demand service flow and the file download service flow both belong to the low-priority service flow.
  • the SQ-level scheduler corresponding to the user can be connected to at least two bottom-level schedulers, where one bottom-level scheduler is used to schedule the high-priority service flow, and the other bottom-level scheduler is used to schedule the low-priority service flow.
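The grouping in the example above (four service types folded into two priority classes, one bottom-level scheduler per class) amounts to a simple lookup. The type names and grouping follow the example; the dictionary layout and scheduler names are assumptions.

```python
# Illustrative mapping from service type to priority class to bottom-level
# scheduler, following the voice/game/video-on-demand/file-download example.

PRIORITY_OF = {
    "voice": "high",
    "game": "high",
    "video_on_demand": "low",
    "file_download": "low",
}

def bottom_scheduler_for(service_type: str) -> str:
    # One bottom-level scheduler per priority class for this user.
    return f"fq-{PRIORITY_OF[service_type]}"

print(bottom_scheduler_for("voice"))          # fq-high
print(bottom_scheduler_for("file_download"))  # fq-low
```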
  • each SQ-level scheduler can be connected to N bottom-level schedulers.
  • For example, the SQ-level scheduler corresponding to a first user can be connected to N1 bottom-level schedulers, and the SQ-level scheduler corresponding to a second user can be connected to N2 bottom-level schedulers, where N1 and N2 are integers greater than 1 and N1 is not equal to N2.
  • the number of schedulers at any level included in the HQoS model may be greater than the number of network devices at the corresponding level included in the traffic scheduling system.
  • the number of VI-level schedulers in the HQoS model can be greater than the number of LSWs 30 connected to SR/BRAS 20
  • the number of GQ-level schedulers can be greater than the number of OLTs 40
  • the number of SQ-level schedulers can be greater than the number of ONTs 50 .
  • the service flow of a user corresponding to a certain SQ-level scheduler can be divided into N different priorities
  • the number of underlying schedulers connected to the SQ-level scheduler may be greater than N.
  • the first network device (eg, the SR/BRAS 20) may include multiple downlink ports, and then multiple HQoS models corresponding to the multiple downlink ports may be deployed in the first network device. That is, one HQoS model may be deployed for each downlink port of the first network device.
  • all downlink ports of the network device may be physical ports or virtual ports.
  • the virtual port may be a trunk port composed of multiple physical ports.
  • the first network device may first determine the user to which the service flow belongs, and determine the target SQ-level scheduler corresponding to the user from the multiple SQ-level schedulers included in the HQoS model. For example, the first network device may determine the user to which the service flow belongs based on an access control list (ACL).
  • the first network device may determine the priority of the service flow based on the type of the service flow, and determine a target bottom-level scheduler corresponding to the priority of the service flow from the multiple bottom-level schedulers connected to the target SQ-level scheduler. After that, the first network device can add the packets of the service flow to the queue in the target bottom-level scheduler. Further, the first network device may schedule the packets in the target bottom-level scheduler through the target SQ-level scheduler, GQ-level scheduler, VI-level scheduler, and DP-level scheduler respectively, so that the packets of the service flow are transmitted to the DP-level scheduler through the target SQ-level scheduler, GQ-level scheduler, and VI-level scheduler in turn. The DP-level scheduler can then send the packets to the second network device at the next level through the downlink egress port of the first network device, for example, to the LSW 30.
  • the ONT 50 in the scheduling system can be connected to multiple terminals of the same user, and the types of service flows transmitted by the ONT 50 to different terminals can be the same, that is, the priorities of the service flows transmitted by the ONT 50 to different terminals can be the same.
  • the first network device may use one bottom-level scheduler to schedule multiple service flows that are transmitted to different terminals but have the same priority. For example, assuming that an ONT 50 is connected to a user's mobile phone and computer, and the mobile phone and computer are each downloading files, the HQoS model can schedule the file download service flow transmitted to the mobile phone and the file download service flow transmitted to the computer in the same bottom-level scheduler.
  • the first network device can distinguish not only service flows of different users, but also different types of service flows of the same user based on the HQoS model, so the purpose of fine-grained traffic scheduling can be achieved.
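  • The two-step lookup described above (user to SQ-level scheduler, then priority to bottom-level scheduler) can be sketched as follows; the ACL lookup and the type-to-priority mapping here are stand-ins, not the patent's actual mechanisms:

```python
# Illustrative enqueue path: resolve the user's SQ-level scheduler, then the
# bottom-level scheduler for the flow's priority, then enqueue the packet.
# The mapping table is invented for illustration.

PRIORITY_BY_TYPE = {"voice": "high", "game": "high",
                    "vod": "low", "download": "low"}

def enqueue(hqos, user, flow_type, packet):
    sq = hqos[user]                          # target SQ-level scheduler (per user)
    fq = sq[PRIORITY_BY_TYPE[flow_type]]     # target bottom-level scheduler (per priority)
    fq.append(packet)

hqos = {"user1": {"high": [], "low": []}}
enqueue(hqos, "user1", "game", "pkt-1")
enqueue(hqos, "user1", "vod", "pkt-2")
assert hqos["user1"]["high"] == ["pkt-1"]
assert hqos["user1"]["low"] == ["pkt-2"]
```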
  • FIG. 5 is a flowchart of a method for scheduling a service flow provided by an embodiment of the present application, and the method can be applied to a first network device in a traffic scheduling system.
  • the first network device may be the SR/BRAS 20, the LSW 30, the OLT 40 or the ONT 50 shown in any one of Figures 1 to 4 .
  • the method includes:
  • Step 101: Configure the device mapping relationship between the multi-level schedulers included in the HQoS model and the network devices in the traffic scheduling system.
  • the multi-level schedulers in the HQoS model in the first network device correspond to port parameters of downlink ports of network devices of different levels respectively.
  • the first network device may record the mapping relationship between each scheduler and the port parameters of the corresponding network device in the HQoS model, thereby obtaining a device mapping model.
  • the port parameter may at least include: maximum bandwidth.
  • the port parameter may also include: a maximum port buffer.
  • the network device may include a plurality of queues with different priorities, wherein each queue is used to buffer the packets of a service flow of one priority, and the downlink port of the network device may schedule the packets in these multiple queues according to a certain scheduling ratio.
  • the port parameters of the downlink port of the network device may further include: scheduling ratios for queues with different priorities.
  • the device mapping model can also record the relationship between the underlying scheduler and the second network device.
  • FIG. 6 is a schematic structural diagram of a first network device provided by an embodiment of the present application.
  • the first network device includes a model configuration module 201
  • the model configuration module 201 includes a mapping model establishment unit 2011 .
  • the mapping model establishing unit 2011 can establish a device mapping model based on the port parameters configured in the first network device.
  • Referring to Fig. 2 and Fig. 3, the following mapping relationships can be recorded in the device mapping model: the DP-level scheduler and the port parameters of the downlink port of its corresponding SR/BRAS 20, the VI-level scheduler and the port parameters of the downlink port of its corresponding LSW 30, the GQ-level scheduler and the port parameters of the downlink port of its corresponding OLT 40, the SQ-level scheduler and the port parameters of the downlink port of its corresponding ONT 50, and the bottom-level schedulers and the port parameters of the downlink ports of their corresponding ONTs 50. It can be understood that the device mapping model may include the mapping relationships of one or more of the VI-level scheduler, GQ-level scheduler and SQ-level scheduler.
  • the above port parameters can be configured as static parameters and will not change with the operation of the network device, so the above port parameters can be configured in the first network device in advance.
  • the above port parameters can be used as constraints for subsequent determination of the initial values of the scheduling parameters of each scheduler. That is, when determining the initial value of the scheduling parameter of the scheduler, it is necessary to ensure that the initial value can satisfy the constraint of the port parameter of the downlink port of the network device corresponding to the scheduler.
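  • A minimal sketch of the device mapping model and the constraint it imposes on initial scheduling parameters follows; the level names mirror the hierarchy above, but the bandwidth values and field names are invented for illustration:

```python
# Sketch of the device mapping model: each scheduler level is mapped to the
# port parameters of its corresponding network device's downlink port.
# Bandwidth figures are illustrative assumptions, not from the patent.

DEVICE_MAPPING = {
    "DP": {"device": "SR/BRAS", "max_bandwidth_gbps": 100.0},
    "VI": {"device": "LSW",     "max_bandwidth_gbps": 40.0},
    "GQ": {"device": "OLT",     "max_bandwidth_gbps": 10.0},
    "SQ": {"device": "ONT",     "max_bandwidth_gbps": 1.0},
}

def initial_value_ok(level, proposed_rate_gbps):
    """An initial transmission-rate threshold must satisfy the constraint of
    the corresponding device's downlink-port maximum bandwidth."""
    return proposed_rate_gbps <= DEVICE_MAPPING[level]["max_bandwidth_gbps"]

assert initial_value_ok("SQ", 0.8)        # within the ONT's 1 Gbps port
assert not initial_value_ok("SQ", 1.5)    # would exceed the port bandwidth
```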
  • Step 102: Configure the service level requirement model of the service flows.
  • the first network device may save the mapping relationship between the service flow and its service level requirements, so as to obtain a service level requirement model.
  • the service level requirement of the service flow may include a limitation on at least one of the following parameters: delay, packet loss rate, data transmission rate, and the like. It can be understood that each parameter in the service level requirement may refer to an end-to-end parameter, and end-to-end refers to the first network device to the terminal.
  • the model configuration module 201 of the first network device further includes a demand model establishment unit 2012 .
  • the demand model establishing unit 2012 can establish a service level requirement model of the service flows based on the service level requirement of each service flow configured in the first network device. Assuming that the service level requirements of M service flows are configured in the first network device, and the service level requirements are the requirements defined in the SLA, then in the service level requirement model created by the demand model establishing unit 2012, the service level requirement of the i-th service flow can be expressed by the thresholds Xi, Yi and Zi.
  • Xi, Yi and Zi can respectively represent the defined threshold of one parameter in the service level requirement.
  • Xi can represent the upper limit of the delay
  • Yi can represent the upper limit of the packet loss rate
  • Zi can represent the lower limit of the data transmission rate.
  • the end-to-end delay of the ith service flow is not greater than Xi
  • the end-to-end packet loss rate is not greater than Yi
  • the end-to-end data transmission rate is not less than Zi.
  • the i-th service flow is a cloud virtual reality (cloud VR) service flow, which requires the delay from the first network device to the terminal to be less than 20 milliseconds (ms)
  • Xi in the service level requirement can be 20 ms.
  • the defined thresholds of each parameter in the service level requirement may be empirical values, or may also be derived based on modeling theory.
  • the first network device may model the traffic distribution of the service flow and the processing capability of the network device, so as to estimate the defined thresholds of each parameter in the service level requirement.
  • the modeling theory may include: Poisson distribution modeling, queuing theory modeling, network calculus, artificial intelligence (AI) modeling, and the like.
  • the model obtained by the queuing theory modeling can be the M/D/1 model.
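  • The service level requirement model described above (per-flow thresholds Xi, Yi, Zi bounding end-to-end delay, packet loss rate, and data transmission rate) can be sketched as a small lookup-and-check; the flow names and threshold values below are invented for illustration:

```python
# Sketch of the service level requirement model: per flow, the SLA-derived
# thresholds (Xi = max delay, Yi = max loss rate, Zi = min rate).
# Entries and values are illustrative assumptions.

SLR_MODEL = {
    "cloud_vr": {"max_delay_ms": 20,  "max_loss": 0.001, "min_rate_mbps": 50},
    "file_dl":  {"max_delay_ms": 200, "max_loss": 0.01,  "min_rate_mbps": 10},
}

def meets_slr(flow, delay_ms, loss, rate_mbps):
    """Check measured end-to-end quality against the flow's requirement."""
    req = SLR_MODEL[flow]
    return (delay_ms <= req["max_delay_ms"]
            and loss <= req["max_loss"]
            and rate_mbps >= req["min_rate_mbps"])

assert meets_slr("cloud_vr", 15, 0.0005, 60)
assert not meets_slr("cloud_vr", 25, 0.0005, 60)  # delay exceeds Xi = 20 ms
```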
  • Step 103: Based on the device mapping model and the service level requirement model, determine initial values of the scheduling parameters of each scheduler in the HQoS model.
  • the first network device can determine the initial value of the scheduling parameter of each scheduler based on the port parameters of the network device corresponding to the scheduler recorded in the device mapping model and the service level requirements of the service flows recorded in the service level requirement model.
  • the scheduling parameter of the scheduler is used to indicate the scheduling policy for the service flow.
  • the scheduling parameter may include: a transmission rate threshold of the scheduler for service flows of different priorities, where the transmission rate threshold is used to limit the rate at which the scheduler transmits the service flow.
  • the transmission rate thresholds of the scheduler for service flows with different priorities may be the same or different.
  • the transmission rate threshold of the scheduler for each priority service flow is less than or equal to the maximum bandwidth of the downlink port of the network device corresponding to the scheduler.
  • the service flow received by the first network device includes a first service flow and a second service flow, wherein the priority of the first service flow is higher than the priority of the second service flow.
  • the sum of the transmission rate thresholds of the first-level scheduler 21 for the first service flow and the second service flow is less than or equal to the maximum bandwidth of the downlink port of the first network device (for example, the SR/BRAS 20), where the downlink port refers to the downlink port corresponding to the HQoS model.
  • the sum of the transmission rate thresholds of the second-level scheduler 22 for the first service flow and the second service flow is less than or equal to the maximum bandwidth of the downlink port of the second network device (the downlink port refers to the port used to connect with the terminal) .
  • the sum of the transmission rate thresholds of the SQ-level scheduler for the first service flow and the second service flow is less than or equal to the maximum bandwidth of the downlink port of the ONT 50; the sum of the transmission rate thresholds of the GQ-level scheduler for the first service flow and the second service flow is less than or equal to the maximum bandwidth of the downlink port of the OLT 40.
  • By making the sum of the transmission rate thresholds of the scheduler for the service flows less than or equal to the maximum bandwidth of the downlink port of the corresponding network device, it can be ensured that the bandwidth of the downlink port of the network device can meet the bandwidth needs of the service flows scheduled by the scheduler.
  • the first underlying scheduler 23 is used to transmit the first service flow through the downlink port of the second network device
  • the second underlying scheduler 24 is used to transmit the second service flow through the downlink port of the second network device. Then the transmission rate threshold of the first underlying scheduler 23 for the first service flow is less than or equal to the maximum bandwidth of the downlink port of the second network device, and the transmission rate threshold of the second underlying scheduler 24 for the second service flow is less than or equal to the maximum bandwidth of the downlink port of the second network device.
  • each underlying scheduler may include a queue, and the queue is used for buffering packets of a service flow of a priority corresponding to the underlying scheduler.
  • the first underlying scheduler 23 may include a first queue for buffering packets of the first service flow
  • the second underlying scheduler 24 may include a second queue for buffering packets of the second service flow.
  • the upstream scheduler may include multiple queues with different priorities.
  • the number of queues included in the upstream scheduler may be equal to the number of queues included in the network device corresponding to the upstream scheduler, and the upstream scheduler can schedule the multiple queues with different priorities that it includes. If the upstream scheduler does not include queues, the upstream scheduler can schedule the queues included in each of its connected schedulers.
  • the number of transmission rate thresholds included in the scheduling parameters of each scheduler in the HQoS model may be equal to the number of queues that the scheduler needs to schedule, wherein each transmission rate threshold is used to limit the transmission rate of the packets in one queue. This can also be understood as: the transmission rate threshold of the scheduler for the queue to which the service flow of that priority belongs.
  • the scheduling parameter of each underlying scheduler may include a transmission rate threshold.
  • the scheduling parameters of the SQ-level scheduler may include: N transmission rate thresholds corresponding to the N queues included in the N bottom-level schedulers.
  • the number of queues included in the upstream scheduler and the number of underlying schedulers connected to the SQ-level scheduler may or may not be equal. If the number of queues included in an upstream scheduler is less than the number of underlying schedulers connected to the SQ-level scheduler, the upstream scheduler can schedule packets from multiple underlying schedulers in one queue.
  • each upstream scheduler may also include 4 queues with different priorities.
  • the upstream scheduler may also include only two queues, and each of the two queues may correspond to the queues in two underlying schedulers. That is, the upstream scheduler can mix the packets from two underlying schedulers into one queue for scheduling.
  • the transmission rate threshold may include one or more of PIR, CAR, CIR, and EIR.
  • the initial value of each of the above rates may be less than or equal to the maximum bandwidth of the downlink port of the network device corresponding to the scheduler.
  • the initial value of the scheduler's transmission rate threshold for each queue may be equal to the maximum bandwidth of the downlink port.
  • alternatively, the initial value of the scheduler's transmission rate threshold for each queue may be equal to the maximum bandwidth of the downlink port divided by the number of queues to be scheduled by the scheduler.
  • if the port parameters recorded in the device mapping model further include the scheduling ratios of the downlink port of the network device for queues with different priorities, the first network device may also allocate the maximum bandwidth of the downlink port based on the scheduling ratio, so as to obtain the transmission rate threshold for each queue. For example, the ratio of the initial values of the transmission rate thresholds of the respective queues may be equal to the scheduling ratio.
  • for example, assuming that the maximum bandwidth of the downlink port of the OLT 40 is 1 Gbps and the GQ-level scheduler corresponding to the OLT 40 includes 4 queues with different priorities, the initial value of the PIR of the GQ-level scheduler for each of the 4 queues can be configured to be 1/4 Gbps.
  • if the scheduling ratio of the OLT 40 for the four queues with different priorities is 1:2:3:4, the initial values of the PIR of the GQ-level scheduler for the four queues can be respectively configured as: 0.1 Gbps, 0.2 Gbps, 0.3 Gbps, and 0.4 Gbps. It can be seen that the ratio of the initial values of the PIRs of the GQ-level scheduler for the four queues is equal to the scheduling ratio.
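  • The ratio-based allocation just described can be sketched as a one-line split of the port bandwidth, reproducing the 1 Gbps, 1:2:3:4 example (the function name is an assumption):

```python
# Splitting a downlink port's maximum bandwidth into initial PIRs according
# to the configured scheduling ratio.

def initial_pirs(max_bandwidth_gbps, ratio):
    total = sum(ratio)
    return [max_bandwidth_gbps * r / total for r in ratio]

pirs = initial_pirs(1.0, [1, 2, 3, 4])
assert pirs == [0.1, 0.2, 0.3, 0.4]
assert abs(sum(pirs) - 1.0) < 1e-9  # allocation never exceeds the port bandwidth
```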
  • the device mapping model may also record the maximum bandwidth of the network package subscribed to by the user corresponding to each SQ-level scheduler. If the maximum bandwidth of the network package is less than the maximum bandwidth of the downlink egress port of the ONT 50, the first network device may determine the transmission rate thresholds of the SQ-level scheduler for queues with different priorities based on the maximum bandwidth of the network package.
  • for example, assuming that the maximum bandwidth of the network package is 100 Mbps, the sum of the initial values of the PIRs of the SQ-level scheduler corresponding to the ONT 50 for queues with different priorities may be less than or equal to 100 Mbps.
  • the scheduling parameters of the scheduler may further include: the maximum queue buffer of each queue in the scheduler.
  • the scheduling parameters of the scheduler may further include: the maximum buffer of the scheduler.
  • the sum of the maximum queue buffers of each queue in the scheduler may be less than or equal to the maximum port buffer, and the maximum buffer of the scheduler is also less than or equal to the maximum port buffer. For example, assuming that the SQ-level scheduler includes 4 queues with different priorities, the sum of the maximum queue buffers of the 4 queues may be less than or equal to the maximum port buffer of the downlink port of the second network device (eg ONT 50).
  • the maximum queue buffer of a queue refers to the maximum buffer that the queue can occupy, that is, the upper limit of the total data amount of the packets that can be buffered by the queue.
  • the sum of the maximum queue buffers of the N queues included in the N bottom-level schedulers should be less than or equal to the maximum port buffer of the downlink port of the second network device corresponding to the SQ-level scheduler.
  • For example, it is assumed that the bottom-level schedulers connected to the SQ-level scheduler include the first bottom-level scheduler 23 and the second bottom-level scheduler 24, wherein the queue used for buffering the packets of the first service flow in the first bottom-level scheduler 23 is the first queue, and the queue used for buffering the packets of the second service flow in the second bottom-level scheduler 24 is the second queue. Then, the sum of the maximum queue buffer of the first queue and the maximum queue buffer of the second queue may be less than or equal to the maximum port buffer of the downlink port of the second network device (for example, the ONT 50).
  • the first network device may further establish a traffic model of the traffic scheduling system based on the service level requirement model, and then determine the maximum queue buffer of each queue in the scheduler based on the traffic model.
  • the first network device can calculate the delay of the GQ-level scheduler corresponding to the OLT 40 based on the M/D/1 model of queuing theory, and then determine the size of the buffer required by the GQ-level scheduler based on the delay.
  • the formula for calculating the delay is as follows:

    W(t) = (1 − ρ) · Σ_{k=0}^{⌊μt⌋} [λ(k/μ − t)]^k / k! · e^{−λ(k/μ − t)}

  • W(t) represents the delay distribution of the GQ-level scheduler at time t, that is, the probability that the queuing delay does not exceed t; ρ = λ/μ is the load rate; λ is the traffic arrival rate, which obeys the Poisson distribution; μ is the service rate of the OLT 40; k is an integer greater than or equal to 0 and less than or equal to ⌊μt⌋, where ⌊μt⌋ indicates that μt is rounded down.
  • the first network device may estimate the traffic arrival rate λ in the above formula based on the proportion of different types of service flows in the scenario where the OLT 40 is located and the number of service flows of each type.
  • the first network device may also determine the load rate ρ of the OLT 40 based on the average port load rate of each OLT in the region where the OLT 40 is located. Assuming that the load rate ρ is 50% and the service rate μ is 1 Gbps, the first network device can calculate the size of the buffer required by the GQ-level scheduler corresponding to the OLT 40 based on the above formula, so as to configure the maximum queue buffer for each queue in the GQ-level scheduler.
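  • The M/D/1 delay calculation described above can be sketched numerically; the formula below is the standard Crommelin waiting-time distribution for M/D/1 queues (consistent with the symbols above), and the buffer-sizing heuristic is an illustrative assumption, not the patent's method:

```python
import math

def mdl_delay_cdf(t, lam, mu):
    """Crommelin's M/D/1 formula: probability that the queuing delay W does
    not exceed t, with Poisson arrival rate lam and service rate mu."""
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    total = 0.0
    for k in range(math.floor(mu * t) + 1):
        x = lam * (k / mu - t)  # x <= 0 for every k in the sum
        total += (x ** k) / math.factorial(k) * math.exp(-x)
    return (1 - rho) * total

def buffer_for_quantile(lam, mu, q=0.99, step=0.01):
    """Illustrative heuristic: find the q-quantile of the delay, then size the
    buffer to hold the packets that arrive during that delay."""
    t = 0.0
    while mdl_delay_cdf(t, lam, mu) < q:
        t += step
    return lam * t  # packets to buffer

# load rate rho = 50%; service rate normalized to 1 packet per time unit
assert abs(mdl_delay_cdf(0.0, 0.5, 1.0) - 0.5) < 1e-9  # P(W = 0) = 1 - rho
assert mdl_delay_cdf(10.0, 0.5, 1.0) > 0.99
```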
  • the model configuration module 201 of the first network device further includes a parameter configuration unit 2013 .
  • the parameter configuration unit 2013 can configure the initial values of the scheduling parameters of each scheduler in the HQoS model based on the device mapping model and the service level requirement model.
  • Step 104: Schedule each received service flow by using the HQoS model.
  • after the first network device receives the service flows from the server through its upstream port, it can distinguish different service flows based on the user to which each service flow belongs and the type of the service flow. For example, the first network device may first determine the user to which a received service flow belongs, and then determine the priority of the service flow based on the type of the service flow.
  • after the first network device completes the identification of the service flow, it can determine the target second-level scheduler corresponding to the user to which the service flow belongs from the second-level schedulers 22 (for example, the SQ-level schedulers) included in the HQoS model, and determine a target bottom-level scheduler corresponding to the priority of the service flow from the multiple bottom-level schedulers connected to the target second-level scheduler. After that, the first network device can add the packets of the service flow to the target bottom-level scheduler for queuing, and schedule the packets based on the first-level scheduler 21 and the target second-level scheduler of the HQoS model.
  • the first network device receives the first service flow from the server 01 and the second service flow from the server 02 .
  • the receiver of the first service flow is the first terminal 03
  • the receiver of the second service flow is the second terminal 04
  • the first service flow and the second service flow belong to the same user, and the priority of the first service flow is higher than the priority of the second service flow.
  • the first network device can add the packets of the first service flow to the first queue in the first bottom-level scheduler 23, and add the packets of the second service flow to the second queue in the second bottom-level scheduler 24. Since the two service flows belong to the same user, as shown in FIG. 2, FIG. 4 and FIG. 7, the first bottom-level scheduler 23 and the second bottom-level scheduler 24 are connected to the same second-level scheduler 22 (for example, the SQ-level scheduler).
  • server 01 and the server 02 may be deployed on the same server, or may be deployed on different servers.
  • the first terminal 03 and the second terminal 04 may be the same terminal, or may be different terminals, which are not limited in this embodiment of the present application.
  • the first network device can schedule the packets in the bottom-level scheduler through the second-level scheduler 22 and the first-level scheduler 21 in sequence.
  • the second-level scheduler 22 may first schedule the packets in the bottom-level scheduler to the second-level scheduler 22 according to its configured scheduling policy (for example, SP scheduling or WFQ scheduling, etc.). Then, the first-level scheduler 21 schedules the packets in the second-level scheduler 22 to the first-level scheduler 21 according to the configured scheduling policy.
  • the first network device may schedule the packets in the bottom-level scheduler through the first-level scheduler 21 and the second-level scheduler 22 in sequence.
  • the first-level scheduler 21 may first allocate scheduling resources (eg, bandwidth resources) to the second-level scheduler 22 according to the scheduling policy configured by the first-level scheduler 21 .
  • the second-level scheduler 22 can then allocate scheduling resources to each of its connected bottom-level schedulers based on the scheduling resources allocated by the first-level scheduler 21 .
  • the bottom-level scheduler can transmit the message to the second-level scheduler 22 based on the allocated scheduling resources.
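  • The strict-priority (SP) scheduling policy mentioned above can be sketched as a dequeue that always drains higher-priority queues first; this is one common SP formulation, not the patent's specific implementation:

```python
# Sketch of strict-priority (SP) dequeue across an SQ-level scheduler's
# bottom-level queues: lower priority number means higher priority.

def sp_dequeue(queues):
    """queues: list of (priority, packet_list); returns the next packet
    from the highest-priority non-empty queue, or None if all are empty."""
    for _, q in sorted(queues, key=lambda pq: pq[0]):
        if q:
            return q.pop(0)
    return None

high, low = ["h1", "h2"], ["l1"]
order = [sp_dequeue([(0, high), (1, low)]) for _ in range(4)]
assert order == ["h1", "h2", "l1", None]  # high-priority queue drained first
```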
  • the transmission sequence of any service flow in the schedulers at all levels included in the HQoS model is: the bottom-level scheduler, the second-level scheduler 22 and the first-level scheduler 21 .
  • the transmission sequence of packets in the multiple cascaded second-level schedulers 22 is: the packets are transmitted sequentially from the scheduler close to the bottom-level scheduler toward the scheduler far from the bottom-level scheduler.
  • the HQoS model includes SQ-level schedulers, GQ-level schedulers, and VI-level schedulers that are cascaded in sequence
  • the transmission sequence of packets through the schedulers at all levels in the HQoS model is: FQ-level scheduler → SQ-level scheduler → GQ-level scheduler → VI-level scheduler → DP-level scheduler.
  • the PIR of the first bottom layer scheduler 23 for the first service flow is 1000 Mbps
  • the PIR of the second bottom layer scheduler 24 for the second service flow is 800 Mbps.
  • when transmitting packets to the SQ-level scheduler, the first bottom-level scheduler 23 can limit the data transmission rate of the first service flow to less than 1000 Mbps, and the second bottom-level scheduler 24 can limit the data transmission rate of the second service flow to less than 800 Mbps.
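  • PIR-style rate limiting like that described above is commonly enforced with a token bucket; the sketch below shows that mechanism as one possible realization (the patent does not prescribe a specific shaping algorithm, and all names here are assumptions):

```python
# Sketch of a token-bucket rate limiter, one common way for a bottom-level
# scheduler to enforce a transmission-rate threshold (PIR).

class TokenBucket:
    def __init__(self, rate_bits_per_s, burst_bits):
        self.rate, self.capacity = rate_bits_per_s, burst_bits
        self.tokens, self.last = burst_bits, 0.0

    def allow(self, now, packet_bits):
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

tb = TokenBucket(rate_bits_per_s=1000.0, burst_bits=1500.0)
assert tb.allow(0.0, 1500)      # initial burst allowance is available
assert not tb.allow(0.0, 1500)  # bucket drained, no time has passed
assert tb.allow(1.5, 1500)      # 1.5 s at 1000 bit/s refills 1500 bits
```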
  • the first network device further includes a processing module 202 , a network interface 203 and a power supply 204 .
  • the processing module 202 includes a service identification unit 2021, and the network interface 203 is connected to an upstream device (eg, the backbone router 10), for receiving service flows from a server (eg, a server in a DC or CDN), and transmitting the service flow to the service identification unit 2021.
  • the service identification unit 2021 can identify each received service flow based on a preconfigured service flow identification policy, and determine the priority of each service flow. Afterwards, the service identification unit 2021 can add the message of the service flow to the corresponding underlying scheduler based on the user to which the service flow belongs and the priority of the service flow.
  • the service flow identification strategy may include at least one of the following methods: a technology for defining QoS attributes based on differentiated services code points (DSCP), deep packet inspection (DPI) technology, an identification technology based on a traffic identification model, an identification technology based on traffic characteristics, and the like.
  • the traffic identification model may be obtained by training based on an AI algorithm.
  • the service identification unit 2021 can add the packet D1 of the first service flow to the corresponding first bottom-level scheduler 23, add the packet D2 of the second service flow to the corresponding second bottom-level scheduler 24, and add the packet D3 of the third service flow to the corresponding third bottom-level scheduler 25.
  • the third underlying scheduler 25 is also an FQ-level scheduler.
  • Step 105: Monitor the transmission quality of each service flow between the first network device and the terminal respectively.
  • the first network device may monitor, in real time, the transmission quality of each service flow between the first network device and the terminal during the service flow scheduling process.
  • the service flow scheduled by the first network device includes: a first service flow and a second service flow
  • the first network device can monitor the transmission quality of the first service flow between the first network device and the first terminal 03, and monitor the transmission quality of the second service flow between the first network device and the second terminal 04.
  • the measurement parameter of the transmission quality may include one or more of delay, packet loss rate, data transmission rate and burst size (burst size, BS).
  • the processing module 202 in the first network device further includes a data statistics unit 2022 and a calculation unit 2023 .
  • the process of monitoring the transmission quality of the service flow by the first network device is described below by taking the first service flow as an example.
  • the data statistics unit 2022 may collect statistics on the transmission status data of the first service flow in at least one scheduler.
  • the calculation unit 2023 may determine the transmission quality of the first service flow between the first bottom-level scheduler 23 and the first-level scheduler 21 based on the transmission state data obtained by the data statistics unit 2022. Since there is a mapping relationship between the schedulers at all levels in the HQoS model and the network devices at all levels in the traffic scheduling system, the transmission quality of the first service flow between the first bottom-level scheduler 23 and the first-level scheduler 21 reflects the transmission quality of the first service flow between the first network device and the first terminal 03.
  • the transmission status data may include at least one of the following data: the number of newly added packets and the number of sent packets of the queue to which the first service flow belongs, the queue length of the queue to which the first service flow belongs, The buffer occupied by the queue to which the first service flow belongs, and the number of dropped packets of the queue to which the first service flow belongs.
  • P_in, P_out, and P_buffer are respectively the number of newly added packets, the number of sent packets, and the number of buffered packets in the queue to which the first service flow belongs within the statistical period.
  • the data statistics unit 2022 may perform statistics on the transmission status data of the queue to which the first service flow belongs. For example, packet counts, queue length statistics, cache occupancy statistics, and packet loss statistics may be performed on the queue to which the first service flow belongs.
  • the packet count refers to: counting the number of newly added packets in the queue to which the first service flow belongs.
  • the calculation unit 2023 may calculate the transmission status data of the first service flow for each scheduler, thereby obtaining the transmission quality of the first service flow on the first network device.
  • the calculating unit 2023 may add up the queue lengths of the queues to which the first service flow belongs in each scheduler, and then determine, based on the total queue length, the delay of the first service flow transmitted between the first network device and the first terminal 03; the delay is positively correlated with the total queue length.
  • the calculation unit 2023 may add up the numbers of lost packets in the queues to which the first service flow belongs in each scheduler within the statistical period, and then divide the total number of lost packets by the number of packets newly added within the statistical period to the queue to which the first service flow belongs in the first bottom-level scheduler 23, thereby obtaining the packet loss rate of the first service flow in the statistical period.
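The per-queue statistics and the two aggregations above (total queue length as a delay proxy, and total drops divided by bottom-level arrivals as the packet loss rate) can be sketched as follows. This is an illustrative sketch only; `QueueStats` and the function names are assumptions, not part of the patent.

```python
# Illustrative sketch of the aggregation described above; all names
# (QueueStats, total_queue_length, aggregate_loss_rate) are assumptions.
from dataclasses import dataclass

@dataclass
class QueueStats:
    new_packets: int      # packets newly added within the statistical period
    sent_packets: int     # packets sent within the statistical period
    dropped_packets: int  # packets dropped within the statistical period
    queue_length: int     # current queue length in packets

def total_queue_length(per_scheduler: list) -> int:
    """Sum of queue lengths across schedulers; the transmission delay is
    positively correlated with this total."""
    return sum(s.queue_length for s in per_scheduler)

def aggregate_loss_rate(per_scheduler: list, bottom: QueueStats) -> float:
    """Total dropped packets across all schedulers divided by the packets
    newly added to the bottom-level queue within the statistical period."""
    dropped = sum(s.dropped_packets for s in per_scheduler)
    return dropped / bottom.new_packets if bottom.new_packets else 0.0

bottom = QueueStats(new_packets=1000, sent_packets=980, dropped_packets=5, queue_length=15)
mid = QueueStats(new_packets=980, sent_packets=970, dropped_packets=10, queue_length=10)
print(total_queue_length([bottom, mid]))          # 25
print(aggregate_loss_rate([bottom, mid], bottom)) # 0.015
```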
  • the statistics duration may be equal to the transmission duration of the first service flow, that is, the first network device may continue to perform statistics on the transmission status data of the first service flow after receiving the packets of the first service flow.
  • the statistics duration may also be a preconfigured fixed duration, that is, the first network device may perform statistics on the transmission state data of the first service flow once every statistics duration.
  • the calculating unit 2023 can divide the total data volume of the packets sent within a unit time by the queue to which the first service flow belongs in the first bottom-level scheduler 23 by that unit time, so as to obtain the data transmission rate of the first service flow.
  • the unit of the data transmission rate may be bps.
  • the unit duration may be in the order of seconds, for example, the unit duration may be 1 second.
  • the magnitude of the unit duration may also be in the order of milliseconds, for example, may be 10 milliseconds.
  • the calculation unit 2023 may accumulate the data volume of the continuously newly added packets in the queue to which the first service flow belongs in the first bottom-level scheduler 23 within the statistical period to obtain the burst flow size.
  • the continuously newly added packets refer to the packets whose arrival interval from the previous packet is less than a time threshold (for example, 1 microsecond).
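The burst-size accumulation described above can be sketched as follows: packets whose inter-arrival gap is below a threshold (the text gives 1 microsecond as an example) are grouped into one burst and their sizes summed. The function name and data layout are illustrative assumptions.

```python
# Illustrative sketch (names are assumptions): accumulate the size of
# consecutively arriving packets whose inter-arrival gap is below a
# threshold to estimate the burst flow size.
def burst_sizes(arrivals, gap_threshold_us=1.0):
    """arrivals: list of (timestamp_us, size_bytes), sorted by time.
    Returns the size in bytes of each burst of consecutive packets."""
    bursts = []
    current = 0
    prev_t = None
    for t, size in arrivals:
        if prev_t is not None and (t - prev_t) >= gap_threshold_us:
            bursts.append(current)   # gap too large: close the current burst
            current = 0
        current += size
        prev_t = t
    if current:
        bursts.append(current)
    return bursts

# three back-to-back packets, then one after a long gap
pkts = [(0.0, 1500), (0.4, 1500), (0.9, 1500), (10.0, 1500)]
print(burst_sizes(pkts))  # [4500, 1500]
```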
  • the data statistics unit 2022 may also record the identification (ID) of each scheduler, so as to distinguish the statistics of different schedulers.
  • in a possible implementation, only the bottom-level schedulers in the HQoS model include queues, and the first-level scheduler 21 and/or the second-level scheduler 22 are not provided with queues of different priorities.
  • the data statistics unit 2022 may perform statistics on the transmission status data of the queue to which the first service flow belongs in the scheduler. For example, if only the underlying scheduler includes queues in the HQoS model, the data statistics unit 2022 may perform statistics on transmission status data only for the queues in the underlying scheduler.
  • since the schedulers at all levels in the HQoS model schedule the packets in the bottom-level schedulers according to a certain scheduling order, in this implementation the statistics of the transmission status data of each queue in the schedulers that have queues can accurately reflect the overall scheduling of the first service flow by the HQoS model.
  • the computing unit 2023 may determine, based on the queue length of the queue (ie, the first queue) to which the first service flow belongs in the first underlying scheduler 23, that the first service flow is in Delay between the first network device and the first terminal 03 .
  • the computing unit 2023 can divide the number of packets lost in the first queue in the first bottom-level scheduler 23 by the number of packets newly added to the first queue, so as to obtain the packet loss rate of the first service flow between the first network device and the first terminal 03.
  • the calculation unit 2023 can divide the number of packets sent by the first queue within the statistical duration by the statistical duration, so as to obtain the data transmission rate of the first service flow between the first network device and the first terminal 03.
  • assume that the maximum bandwidth of the downlink port of a certain ONT 50 with a wireless (WiFi) function is 1 Gbps, and the user of the ONT 50 has purchased a 100 Mbps network bandwidth package. When the user watches video on demand on the second terminal 04, the traffic scheduling system needs to schedule the video-on-demand service stream from the server 02 to the second terminal 04.
  • the following description takes as an example a case in which the second terminal 04 is far away from the ONT 50, and the data transmission rate of the video-on-demand service stream transmitted by the ONT 50 to the second terminal 04 reaches a maximum of 20 Mbps.
  • the service identification unit 2021 in the first network device can add the packets of the video-on-demand service flow to the queue in the corresponding bottom-level scheduler (for example, the second queue in the second bottom-level scheduler 24) for queuing.
  • the data statistics unit 2022 may perform statistics on the number of packets sent by the second queue in the second bottom-level scheduler 24 within a unit time, and divide the counted value by For the unit time, the data transmission rate of the video-on-demand service stream between the first network device and the second terminal 04 can be obtained.
  • if statistics are performed at a granularity of seconds, the calculation unit 2023 can calculate that the data transmission rate of the video stream is 4 Mbps; if statistics are performed at a granularity of milliseconds, the calculation unit 2023 can calculate that the data transmission rate of the video stream can reach a maximum of 20 Mbps.
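The granularity effect described above can be illustrated numerically: the same traffic averages a few Mbps over a one-second window but peaks far higher inside a 10 ms window. The byte counts below are assumptions chosen only to reproduce the 4 Mbps / 20 Mbps figures in the text.

```python
# Hypothetical illustration of the measurement-granularity effect;
# the byte counts are assumptions, not data from the patent.
def rate_mbps(bytes_sent: int, window_s: float) -> float:
    """Data transmission rate in Mbps for a given byte count and window."""
    return bytes_sent * 8 / window_s / 1e6

# 500 KB sent over one second -> 4 Mbps average at second granularity
second_avg = rate_mbps(500_000, 1.0)
# the same 500 KB arriving as 20 bursts of 25 KB, each within a 10 ms
# window -> 20 Mbps measured at millisecond granularity
ms_peak = rate_mbps(25_000, 0.010)
print(second_avg, ms_peak)  # 4.0 20.0
```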
  • the first network device can also determine, based on the transmission status data of a service flow in each scheduler, the transmission quality of the service flow in the network device corresponding to that scheduler.
  • the transmission quality of each service flow monitored by the first network device may also be used for visual display.
  • the first network device may send measurement parameters used to measure the transmission quality of each service flow to the controller for display.
  • the first network device may be connected to a display device, and the first network device may display the measurement parameters of the transmission quality of each service flow through the display device.
  • Step 106 Detect whether the transmission quality of the first service flow satisfies the service level requirement corresponding to the first service flow.
  • the transmission quality can be compared with the service level requirement of the first service flow to determine whether the transmission quality of the first service flow meets the service level requirement. If the first network device determines that the transmission quality of the first service flow does not meet the service level requirement, the first network device may perform step 107. If the first network device determines that the transmission quality of the first service flow meets the service level requirement, the first network device may continue to perform step 105, that is, continue to monitor the transmission quality of the first service flow.
  • if the service level requirement of the first service flow includes an upper limit of the delay, the first network device may determine, when detecting that the end-to-end delay of the first service flow is greater than the upper limit of the delay, that the transmission quality of the first service flow does not meet its service level requirement. If the service level requirement of the first service flow includes an upper limit of the packet loss rate, the first network device may determine, when detecting that the end-to-end packet loss rate of the first service flow is greater than the upper limit of the packet loss rate, that the transmission quality of the first service flow does not meet its service level requirement.
  • if the service level requirement of the first service flow includes a lower limit of the data transmission rate, the first network device may determine, when detecting that the end-to-end data transmission rate of the first service flow is lower than the lower limit of the data transmission rate, that the transmission quality of the first service flow does not meet its service level requirement.
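The three checks above (delay upper limit, packet loss upper limit, rate lower limit) can be sketched as a single predicate. The dictionary keys and function name are illustrative assumptions.

```python
# Minimal sketch of the detection in step 106; field names are
# illustrative assumptions, not from the patent.
def meets_sla(quality: dict, sla: dict) -> bool:
    """quality: measured delay_ms, loss_rate, rate_mbps.
    sla: any subset of max_delay_ms, max_loss_rate, min_rate_mbps."""
    if 'max_delay_ms' in sla and quality['delay_ms'] > sla['max_delay_ms']:
        return False
    if 'max_loss_rate' in sla and quality['loss_rate'] > sla['max_loss_rate']:
        return False
    if 'min_rate_mbps' in sla and quality['rate_mbps'] < sla['min_rate_mbps']:
        return False
    return True

# game/video-conference flow with a 20 ms delay upper limit, measured 30 ms
print(meets_sla({'delay_ms': 30, 'loss_rate': 0.0, 'rate_mbps': 5},
                {'max_delay_ms': 20}))  # False
```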
  • the processing module 202 may further include a transmission quality monitoring unit 2024, which may be configured to detect whether the transmission quality of the first service flow meets the service level requirement of the first service flow. Assuming that the first service flow is a game or video conference service flow, the upper limit of the delay in its service level requirement is 20 ms. If the delay of the first service flow between the first network device and the first terminal 03, as determined by the first network device, is 30 ms, then since the delay is greater than the 20 ms upper limit, the first network device can execute step 107.
  • Step 107 Detect whether the second service flow satisfies the traffic shaping condition.
  • the first network device may detect whether a second service flow with a lower priority meets the conditions for traffic shaping. If the first network device determines that the second service flow satisfies the conditions for traffic shaping, step 108 may be performed, that is, traffic shaping is performed on the second service flow; if the first network device determines that the second service flow does not meet the conditions for traffic shaping, step 109 may be performed.
  • the traffic shaping conditions may include at least one of the following: the transmission rate threshold of the HQoS model for the second service flow is greater than the average data transmission rate of the second service flow, and the current data transmission rate of the second service flow is greater than the peak threshold of the data transmission rate of the second service flow.
  • the first network device can monitor the data transmission rate of the second service flow in real time within the statistical time period. Therefore, it can be understood that the average data transmission rate of the second service flow may refer to the average value of the data transmission rate of the second service flow within the statistical time period.
  • if the transmission rate threshold of the second service flow is already less than or equal to its average data transmission rate, further decreasing the transmission rate threshold would seriously affect the service experience of the second service flow. Therefore, the first network device may take the transmission rate threshold being greater than the average data transmission rate as one of the conditions for traffic shaping.
  • if the current data transmission rate of the second service flow is greater than the peak threshold of its data transmission rate, the first network device may determine that there is currently a traffic burst in the second service flow.
  • the characteristics of a traffic burst include: sending data at a relatively high data transmission rate in a short period of time (for example, 10 milliseconds), and then stopping sending data for a long period of time or sending data at a lower data transmission rate.
  • taking the video-on-demand service stream as the second service flow as an example, if 10 milliseconds is used as the unit time for calculating the data transmission rate, the real-time data transmission rate of the video-on-demand service stream within a certain statistical period can reach a maximum of 350 Mbps.
  • the average data transmission rate of the video-on-demand service flow in the statistical period is only about 3 Mbps to 5 Mbps.
  • the order of magnitude of the statistical duration may be in the order of seconds.
  • the first network device may also use the current data transmission rate greater than the peak threshold of the data transmission rate of the second service flow as one of the conditions for traffic shaping.
  • the magnitude of the unit duration used to calculate the data transmission rate of the second service flow in the foregoing step 105 may be in the order of milliseconds.
  • the peak threshold of the data transmission rate of the second service flow may be determined based on the type of the second service flow. Also, the peak thresholds of data transmission rates for different types of traffic flows may be different. It can be understood that the peak threshold of the data transmission rate of each service flow can also be determined based on the maximum port buffer of downlink ports of network devices at all levels, and the larger the maximum port buffer, the higher the peak threshold of the data transmission rate of the service flow.
  • if the first network device detects that the PIR of the HQoS model for the second service flow is less than or equal to the average data transmission rate of the second service flow, it may determine that the second service flow does not meet the traffic shaping conditions, and step 109 may be executed. Alternatively, if the first network device detects that the peak value of the current data transmission rate of the second service flow is smaller than the peak threshold of the data transmission rate of the second service flow, it can determine that there is currently no traffic burst in the second service flow, so it may also determine that the second service flow does not meet the conditions for traffic shaping, and step 109 may be performed.
  • if the first network device detects that the PIR for the second service flow is greater than the average data transmission rate of the second service flow, and the peak value of the data transmission rate of the second service flow is greater than its peak threshold, it can determine that the second service flow satisfies the conditions for traffic shaping, and step 108 may be performed.
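The two-part shaping decision in step 107 can be sketched as a single boolean test. Parameter names are illustrative assumptions; the logic follows the text: shape only when the PIR still exceeds the average rate (so lowering it will not hurt the service experience) and the current rate exceeds the peak threshold (a burst is under way).

```python
# Sketch of the two traffic shaping conditions of step 107; names and
# example values are illustrative assumptions.
def should_shape(pir_mbps: float, avg_rate_mbps: float,
                 current_rate_mbps: float, peak_threshold_mbps: float) -> bool:
    """Condition 1: the transmission rate threshold (PIR) still exceeds
    the flow's average data transmission rate.
    Condition 2: the current rate exceeds the peak threshold, i.e. a
    traffic burst is currently occurring."""
    return pir_mbps > avg_rate_mbps and current_rate_mbps > peak_threshold_mbps

print(should_shape(pir_mbps=100, avg_rate_mbps=4,
                   current_rate_mbps=20, peak_threshold_mbps=10))  # True
print(should_shape(pir_mbps=3, avg_rate_mbps=4,
                   current_rate_mbps=20, peak_threshold_mbps=10))  # False
```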
  • the priorities of different service flows may be determined based on the delay requirement, that is, the priority of a service flow with a higher delay requirement is higher.
  • the upper limit of the delay in the service level requirement of the first service flow may be smaller than the upper limit of the delay in the service level requirement of the second service flow.
  • a service flow with a high latency requirement (that is, a high real-time requirement) is a delay-sensitive service flow, and a service flow with a low latency requirement (that is, a low real-time requirement) is a non-delay-sensitive service flow. Therefore, in the embodiment of the present application, the priority of the service flow of a delay-sensitive service may be higher, and the priority of the service flow of a non-delay-sensitive service may be lower.
  • Table 1 shows the traffic size and real-time requirements of some types of service flows in the traffic scheduling system.
  • the service level requirements generally include high bandwidth, low packet loss rate, and low latency.
  • the service level requirements generally include high bandwidth, but there are no strict requirements for packet loss rate and delay.
  • the service level requirements generally include low packet loss rate and low delay, but there are no strict requirements for bandwidth.
  • the service level requirements generally include low packet loss rate and low delay, but there are no strict requirements for bandwidth.
  • For business flows with small traffic such as social chat and email, there are no strict requirements on bandwidth, packet loss rate and delay.
  • Table 2 shows the distribution characteristics of traffic output from a 10G port of a backbone router in a time period of 48 seconds.
  • the 10G port refers to a downstream outbound port with a bandwidth of 10 gigabits per second (Gbps).
  • the video-on-demand service flow and the file download service flow are typical large flows, which have the characteristics of high data transmission rate, long duration, large message interval, and large burst mode.
  • such large flows send traffic in a burst mode (that is, traffic bursts occur), and traffic bursts seriously preempt the bandwidth resources of other service flows. Therefore, large flows in the traffic scheduling system are the main cause of network congestion and degradation of network service quality.
  • in the related art, after a network device receives different types of service flows, it does not differentiate among them; that is, the network device mixes delay-sensitive service flows and non-delay-sensitive service flows in the same queue for scheduling. As a result, not only can the delay-sensitive service flow not be scheduled preferentially and its service level requirements not be guaranteed, but its transmission quality may also be deteriorated by the influence of the non-delay-sensitive service flows.
  • the traffic of online games is small. If it is mixed with large streams such as video-on-demand service streams or file download service streams for scheduling in the same queue, the transmission quality of online games will be seriously affected. For example, it may cause problems such as high latency and high packet loss rate in the traffic of online games, which will seriously affect the user's gaming experience.
  • in the embodiment of the present application, the priority of a service flow can be determined based on its delay requirement, and when the service level requirement of a high-priority service flow is not satisfied, traffic shaping can be performed on a low-priority service flow. Therefore, the service level requirements of delay-sensitive service flows can be guaranteed preferentially.
  • the priority of the service flow can also be determined based on other parameters in the service level requirement, for example, based on the requirement for the packet loss rate, which is not limited in this embodiment of the present application.
  • Step 108 Adjust the transmission rate threshold of the HQoS model for the second service flow to the first threshold.
  • the first network device can adjust the transmission rate threshold of the HQoS model for the second service flow to a first threshold, where the first threshold is smaller than the current data transmission rate of the second service flow, thereby enabling traffic shaping of the second service flow.
  • the first threshold may be greater than or equal to the average data transmission rate of the second service flow.
  • the first threshold may be 1.5 times the average data transmission rate of the second traffic flow.
  • the transmission rate threshold of the second service flow may include one or more of PIR, CAR, CIR and EIR. If the transmission rate threshold includes multiple rates in the PIR, CAR, CIR, and EIR, the first network device needs to adjust each rate in the transmission rate threshold respectively. In a possible implementation, the first network device may adjust multiple rates in the transmission rate threshold to the same first threshold, that is, the adjusted rates are equal. In another possible implementation, the first network device may adjust multiple rates in the transmission rate thresholds to respective corresponding first thresholds, that is, the adjusted rates may be unequal.
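Step 108 can be sketched as lowering every configured rate in the transmission rate threshold to the first threshold. The function name is an assumption, and the 1.5× factor is taken from the example above; the text also allows different per-rate first thresholds.

```python
# Sketch of step 108 (names are assumptions): lower every configured
# rate (PIR/CAR/CIR/EIR) to the same first threshold, here chosen as
# 1.5x the average data transmission rate as in the example above.
def shape(thresholds: dict, avg_rate_mbps: float, factor: float = 1.5) -> dict:
    """thresholds may hold any of PIR/CAR/CIR/EIR in Mbps; each rate is
    capped at the first threshold."""
    first_threshold = factor * avg_rate_mbps
    return {name: min(rate, first_threshold) for name, rate in thresholds.items()}

print(shape({'PIR': 100.0, 'CIR': 50.0}, avg_rate_mbps=4.0))
# {'PIR': 6.0, 'CIR': 6.0}
```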
  • FIG. 8 is a schematic diagram of data transmission rates of a first service flow and a second service flow provided by an embodiment of the present application.
  • the horizontal axis in FIG. 8 represents the time t
  • the vertical axis represents the data transfer rate v.
  • the first network device can adjust the transmission rate threshold for the second service flow of at least one of the first-level scheduler 21, the second-level scheduler 22, and the second bottom-level scheduler 24 to the first threshold.
  • the first network device may adjust the transmission rate threshold of the second bottom-level scheduler 24 for the second service flow to the first threshold. The second bottom-level scheduler 24 only needs to schedule the packets in one of the queues it includes, whereas the first-level scheduler 21 and the second-level scheduler 22 both need to schedule the packets of the multiple schedulers connected to them. Therefore, adjusting only the transmission rate threshold of the second bottom-level scheduler 24 for the second service flow can effectively reduce the impact on other service flows.
  • the traffic scheduling system needs to schedule the video stream (that is, the first service stream) of the video conference from the server 01 to the first terminal 03, and needs to schedule the video-on-demand service stream (that is, the second service stream) from the server 02. Scheduled to the second terminal 04 .
  • the transmission quality monitoring unit 2024 in the first network device may monitor that the delay of the video stream of the video conference cannot satisfy the service level requirement of the video conference stream.
  • the traffic shaping unit 2025 in the processing module 202 can perform traffic shaping on the video-on-demand service flow. For example, if the calculation unit 2023 calculates that the average data transmission rate of the video-on-demand service stream is 4 Mbps, the traffic shaping unit 2025 can adjust the PIR of the second bottom-level scheduler 24 for the video-on-demand service stream to 4 Mbps. As a result, smooth processing of the video-on-demand stream is realized, so that network bandwidth can be yielded to the video stream of the delay-sensitive video conference.
  • after the adjustment, the data transmission rate of the video-on-demand stream sent by the first network device stays stable at or below 4 Mbps. Therefore, even if the downstream LSW 30, OLT 40, and ONT 50 do not have the capability of service flow identification and QoS differentiated scheduling, it can be ensured that the traffic-shaped video-on-demand stream always yields network bandwidth to the video conference stream as it passes through the downstream network devices at all levels, thus safeguarding the end-to-end service level requirements of the video conference stream. At the same time, the video-on-demand stream sent at 4 Mbps still satisfies the bit rate of the video stream and will not degrade the experience of the video-on-demand service itself.
  • the first network device may determine a target scheduler for which network congestion occurs when transmitting the first service flow, and adjust the transmission rate threshold of the target scheduler for the second service flow to a first threshold value .
  • the target scheduler is the first-level scheduler 21 or the second-level scheduler 22 .
  • the first network device can compare the transmission status data of the queues to which the first service flow belongs in each scheduler, and determine, based on the transmission status data, the target scheduler where network congestion occurs when transmitting the first service flow.
  • the first network device may compare the queue lengths of the queues to which the first service flow belongs in each scheduler, and determine the scheduler with the longest queue length as the target scheduler where network congestion occurs.
  • the first network device may compare the packet loss rates of the queues to which the first service flow belongs in each scheduler, and determine the scheduler with the highest packet loss rate as the target scheduler where network congestion occurs.
  • the first network device may determine, based on the topology of the schedulers and the transmission status data of each queue in the schedulers that have queues, the target scheduler where network congestion occurs when transmitting the first service flow.
  • for example, if the sum of the data volumes of the packets buffered by the queues in the underlying schedulers exceeds the maximum buffer of the SQ-level scheduler, the first network device may determine that the SQ-level scheduler is the target scheduler where network congestion occurs.
  • assume that the maximum queue buffers of the queues in the 4 underlying schedulers are 100 bytes, 200 bytes, 300 bytes, and 400 bytes, respectively, and the maximum buffer of the SQ-level scheduler is 800 bytes. If the data volumes of the packets actually buffered by the four queues at a certain moment are 99 bytes, 199 bytes, 299 bytes, and 399 bytes respectively, then because the sum of the data volumes of the packets buffered by the four queues (996 bytes) is greater than the maximum buffer of the SQ-level scheduler, the first network device can determine that the SQ-level scheduler is the target scheduler where network congestion occurs when transmitting the first service flow.
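The buffer-sum check in the example above can be sketched directly (the function name is an assumption): the SQ-level scheduler is congested when its child queues together buffer more data than its own maximum buffer.

```python
# Sketch of the buffer-sum congestion check from the example above;
# values come from the text, the function name is an assumption.
def sq_congested(child_buffered_bytes: list, sq_max_buffer: int) -> bool:
    """The SQ-level scheduler is the congestion point when its child
    queues together buffer more than its maximum buffer."""
    return sum(child_buffered_bytes) > sq_max_buffer

# 99 + 199 + 299 + 399 = 996 bytes > 800-byte SQ buffer -> congested
print(sq_congested([99, 199, 299, 399], sq_max_buffer=800))  # True
```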
  • Step 109 Adjust the transmission rate threshold of the HQoS model for the third service flow to the second threshold.
  • the service flow carried in the traffic scheduling system may further include a third service flow, and the priority of the third service flow is higher than that of the second service flow and lower than that of the first service flow.
  • the first network device may further schedule the third service flow based on the HQoS model.
  • if the first network device detects that the second service flow does not meet the traffic shaping conditions, it can adjust the transmission rate threshold of the HQoS model for the third service flow to a second threshold, where the second threshold is smaller than the current data transmission rate of the third service flow, thereby implementing traffic shaping for the third service flow.
  • the first network device can detect, in order of priority from low to high, whether each lower-priority service flow satisfies the traffic shaping conditions. If any lower-priority service flow is detected to meet the conditions for traffic shaping, the transmission rate threshold of the HQoS model for that lower-priority service flow can be lowered to implement traffic shaping for it. That is, the first network device may perform the foregoing step 109 after determining that the third service flow satisfies the traffic shaping conditions.
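The low-to-high priority sweep described above can be sketched as follows; the function and flow names are illustrative assumptions.

```python
# Sketch of the low-to-high priority sweep (names are assumptions):
# pick the first lower-priority flow that satisfies the shaping
# conditions, scanning from lowest priority upward.
def pick_flow_to_shape(flows, meets_shaping_conditions):
    """flows: list of (priority, flow_id), where a lower number means a
    lower priority. Returns the first flow that satisfies the shaping
    conditions, or None if no flow does."""
    for priority, flow_id in sorted(flows):
        if meets_shaping_conditions(flow_id):
            return flow_id
    return None

flows = [(2, 'third'), (1, 'second')]
# assume only the mid-priority 'third' flow meets the conditions
print(pick_flow_to_shape(flows, lambda f: f == 'third'))  # third
```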
  • the first network device can determine, based on the method shown in the above step 106, whether the transmission quality of a service flow meets the service level requirement corresponding to that service flow. In addition, when the first network device detects that the transmission quality of any service flow does not meet the service level requirement corresponding to that service flow, it can perform traffic shaping on a lower-priority service flow with reference to the methods shown in the above steps 107 to 109.
  • the sequence of steps of the method for scheduling service flows provided by the embodiments of the present application may be adjusted appropriately, and the steps may also be correspondingly increased or decreased according to the situation.
  • the above-mentioned step 102 may be performed before the step 101 .
  • the above steps 101 and 102 may be deleted according to the situation, and correspondingly, the initial values of the scheduling parameters of each scheduler in the HQoS model may be directly configured.
  • the above steps 107 and 109 may be deleted according to the situation, that is, the first network device may directly perform traffic shaping on the second service flow.
  • an embodiment of the present application provides a method for scheduling a service flow.
  • the first network device detects that the transmission quality of a service flow with a higher priority does not meet the service level requirement corresponding to the service flow
  • the first network device can adjust the transmission rate threshold of the lower-priority service flow in the HQoS model to the first threshold. Since the first threshold is smaller than the current data transmission rate of the lower-priority service flow, traffic shaping of the lower-priority service flow can be implemented.
  • the bandwidth of the downlink port of the first network device can be assigned to the service flow with the higher priority, so as to ensure that the service level requirement of the service flow with the higher priority can be preferentially satisfied.
  • the lower-priority service flow can be transmitted to the downstream second network device at a stable data transmission rate, that is, the lower-priority service flow will not have a traffic burst at the downstream second network device. Therefore, even if the downstream second network device does not have the functions of service flow identification and QoS differentiated scheduling, the lower-priority service flow can be prevented from preempting the bandwidth resources of the higher-priority service flow due to traffic bursts.
  • the solution provided by the embodiment of the present application can be implemented without updating the second network device that does not have the above functions in the existing network.
  • the solutions provided by the embodiments of the present application have high application flexibility and compatibility.
  • since the first network device can ensure, when performing traffic shaping on the lower-priority service flow, that the reduced transmission rate threshold is greater than or equal to the average data transmission rate of the lower-priority service flow, the impact on the service experience of the lower-priority service flow can be avoided.
  • FIG. 9 is a schematic structural diagram of an apparatus for scheduling a service flow provided by an embodiment of the present application.
  • the scheduling apparatus may be applied to the first network device provided in the foregoing method embodiment, and may be used to implement the service flow provided in the foregoing embodiment. scheduling method.
  • the scheduling apparatus can implement the function of the first device in FIG. 5 and execute the method shown in FIG. 5 .
  • the device may also be the SR/BRAS in Figures 1-4.
  • the scheduling device for the service flow includes:
  • the scheduling module 301 is configured to schedule the first service flow and the second service flow respectively based on the HQoS model, wherein the priority of the first service flow is higher than the priority of the second service flow.
  • for the function implementation of the scheduling module 301, reference may be made to the relevant description of step 104 in the foregoing method embodiments.
  • the adjustment module 302 is configured to adjust the transmission rate threshold of the HQoS model for the second service flow to a first threshold when the transmission quality of the first service flow does not meet the service level requirement corresponding to the first service flow, and the first threshold is less than The current data transmission rate of the second service flow. That is, the adjustment module 302 can be used to perform traffic shaping on the second service flow.
  • the adjustment module 302 can be used to implement the functions of the transmission quality monitoring unit 2024 and the traffic shaping unit 2025 in the embodiment shown in FIG. 6 .
  • the first threshold may be greater than or equal to the average data transmission rate of the second traffic flow.
  • the adjustment module 302 may be configured to: when the transmission quality of the first service flow does not meet the service level requirement corresponding to the first service flow, and the current data transmission rate of the second service flow is greater than the peak threshold of the data transmission rate of the second service flow, adjust the transmission rate threshold of the HQoS model for the second service flow to the first threshold.
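  • the trigger condition above can be sketched as a small decision function. This is a minimal illustration, not the patent's implementation: the `FlowStats` structure and the 20% reduction factor are assumptions, and the result is clamped to the flow's average rate per the constraint that the first threshold stays at or above it.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FlowStats:
    current_rate: float    # current data transmission rate (e.g. Mbit/s)
    average_rate: float    # long-term average data transmission rate
    peak_threshold: float  # peak threshold of the data transmission rate


def shaping_threshold(quality_met: bool, low_prio: FlowStats,
                      reduction: float = 0.8) -> Optional[float]:
    """Return the first threshold to apply to the lower-priority flow,
    or None if no shaping is triggered.

    Shaping triggers only when the high-priority flow's service level
    requirement is not met AND the lower-priority flow is bursting
    (its current rate exceeds its peak threshold).  The reduced
    threshold never drops below the flow's average rate.
    """
    if quality_met:
        return None
    if low_prio.current_rate <= low_prio.peak_threshold:
        return None
    candidate = low_prio.current_rate * reduction
    return max(candidate, low_prio.average_rate)
```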
  • the adjustment module 302 can also be used to implement the functions of the data statistics unit 2022 and the calculation unit 2023 in the embodiment shown in FIG. 6 .
  • the transmission rate threshold of the second service flow may include one or more of PIR, CAR, CIR, and EIR.
  • the first network device may be connected to the terminal through the second network device; correspondingly, the HQoS model may include: a first-level scheduler corresponding to the downlink port of the first network device, a second-level scheduler corresponding to the downlink port of the second network device, a first bottom-level scheduler for transmitting the first service flow through the downlink port of the second network device, and a second bottom-level scheduler for transmitting the second service flow through the downlink port of the second network device.
  • the adjustment module 302 may be configured to: adjust the transmission rate threshold for the second service flow of at least one scheduler among the first-level scheduler, the second-level scheduler, and the second bottom-level scheduler to the first threshold.
  • the adjustment module 302 may be configured to determine a target scheduler for which network congestion occurs when transmitting the first service flow, and adjust the transmission rate threshold of the target scheduler for the second service flow to a first threshold.
  • the target scheduler may be a first-level scheduler or a second-level scheduler.
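  • how the target scheduler is located is not fixed by the text; one simplistic stand-in is to scan the scheduler path from the first level downward and flag the first scheduler whose offered load exceeds its mapped port capacity (a real device would more likely use queue depth or measured delay):

```python
from typing import Iterable, Optional, Tuple


def find_congested_scheduler(
        schedulers: Iterable[Tuple[str, float, float]]) -> Optional[str]:
    """Return the name of the first scheduler along the path whose
    offered load exceeds the bandwidth of the downlink port it is
    mapped to, or None when no congestion is detected.

    Each element is (name, offered_load, port_capacity); the iteration
    order is assumed to be first-level scheduler first.
    """
    for name, offered_load, capacity in schedulers:
        if offered_load > capacity:
            return name
    return None
```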
  • the sum of the transmission rate thresholds of the first-level scheduler for the first service flow and the second service flow is less than or equal to the maximum bandwidth of the downlink port of the first network device; the sum of the transmission rate thresholds of the second-level scheduler for the first service flow and the second service flow is less than or equal to the maximum bandwidth of the downlink port of the second network device.
  • the transmission rate threshold of the first bottom-level scheduler for the first service flow is less than or equal to the maximum bandwidth of the downlink port of the second network device; the transmission rate threshold of the second bottom-level scheduler for the second service flow is less than or equal to the maximum bandwidth of the downlink port of the second network device.
  • the first bottom-level scheduler may include a first queue for buffering packets of the first service flow, and the second bottom-level scheduler may include a second queue for buffering packets of the second service flow; the sum of the maximum queue buffer of the first queue and the maximum queue buffer of the second queue is less than or equal to the maximum port buffer of the downlink port of the second network device.
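  • the two dimensioning constraints above (rate thresholds bounded by port bandwidth, queue buffers bounded by port buffer) can be checked with a short validation sketch; the parameter names and units here are illustrative assumptions, not terms defined by the patent:

```python
from typing import Sequence


def hqos_config_valid(rate_thresholds: Sequence[float],
                      port_bandwidth: float,
                      queue_buffers: Sequence[int],
                      port_buffer: int) -> bool:
    """Validate that a scheduler's per-flow transmission rate thresholds
    sum to no more than the mapped downlink port's maximum bandwidth,
    and that the bottom-level queues' maximum buffers sum to no more
    than the port's maximum buffer."""
    return (sum(rate_thresholds) <= port_bandwidth
            and sum(queue_buffers) <= port_buffer)
```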
  • the upper limit of delay in the service level requirement of the first service flow may be smaller than the upper limit of delay in the service level requirement of the second service flow. That is, the priorities of service flows may be divided based on their delay requirements, and a service flow with a stricter delay requirement may have a higher priority.
  • the scheduling module 301 may also be configured to schedule a third service flow based on the HQoS model, where the priority of the third service flow is higher than the priority of the second service flow and lower than the priority of the first service flow.
  • the adjustment module 302 can also be used to: when the transmission rate threshold of the second service flow is less than or equal to the average data transmission rate of the second service flow, or when the current data transmission rate of the second service flow is less than or equal to the peak threshold of the data transmission rate of the second service flow, adjust the transmission rate threshold of the HQoS model for the third service flow to a second threshold, where the second threshold is smaller than the current data transmission rate of the third service flow.
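  • selecting which flow to shape next can be sketched as walking the candidates from the lowest priority upward and skipping flows that no longer satisfy the shaping conditions; the dictionary keys below are hypothetical, not fields defined by the patent:

```python
from typing import Optional


def pick_shaping_victim(flows: list) -> Optional[dict]:
    """Each flow is a dict with 'priority' (higher number = higher
    priority), 'threshold', 'average_rate', 'current_rate', and
    'peak_threshold'.

    Walk candidates from lowest priority upward and return the first
    flow that can still be shaped: its threshold is still above its
    average rate AND it is currently exceeding its peak threshold.
    Returns None if no flow is eligible.
    """
    for f in sorted(flows, key=lambda f: f["priority"]):
        if (f["threshold"] > f["average_rate"]
                and f["current_rate"] > f["peak_threshold"]):
            return f
    return None
```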
  • the adjustment module 302 may perform traffic shaping on the third service flow with the second lowest priority.
  • the embodiment of the present application provides a service flow scheduling device, which can adjust the transmission rate threshold of the HQoS model for a lower-priority service flow to a first threshold when the transmission quality of a higher-priority service flow does not meet the service level requirement corresponding to that flow.
  • since the first threshold is smaller than the current data transmission rate of the lower-priority service flow, traffic shaping of the lower-priority service flow can be implemented.
  • the bandwidth of the downlink port of the first network device can thus be yielded to the higher-priority service flow, so as to ensure that the service level requirement of the higher-priority service flow is preferentially satisfied.
  • the apparatus for scheduling service flows provided in the embodiments of the present application may also be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), and the above-mentioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
  • the method for scheduling service flows provided by the foregoing method embodiments may also be implemented through software.
  • correspondingly, each module in the apparatus for scheduling service flows provided in the embodiments of the present application may also be a software module.
  • FIG. 10 is a schematic structural diagram of another service flow scheduling apparatus provided by an embodiment of the present application.
  • the apparatus may be the SR/BRAS in FIG. 1 to FIG. 4 .
  • the apparatus for scheduling a service flow may be applied to the first network device provided in the foregoing embodiment, for example, to execute the method and functions performed by the first network device shown in FIG. 5 .
  • the scheduling apparatus for the service flow may include: a processor 401 , a memory 402 , a network interface 403 and a bus 404 .
  • the bus 404 is used for connecting the processor 401 , the memory 402 and the network interface 403 .
  • the communication connection with other devices can be realized through the network interface 403 .
  • a computer program for realizing various application functions is stored in the memory 402 .
  • the processor 401 may be a CPU, and the processor 401 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a GPU or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • Memory 402 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory.
  • the non-volatile memory may be ROM, programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), EEPROM or flash memory.
  • Volatile memory can be RAM, which acts as an external cache.
  • by way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
  • bus 404 may also include a power bus, a control bus, a status signal bus, and the like. However, for clarity of illustration, the various buses are labeled as bus 404 in the figure.
  • the processor 401 is configured to execute the computer program 4021 stored in the memory 402; by executing the computer program 4021, the processor 401 implements the service flow scheduling method provided by the above method embodiments, for example, executing the method performed by the first network device shown in FIG. 5 .
  • the processor 401 is configured to schedule the first service flow and the second service flow respectively based on the HQoS model, wherein the priority of the first service flow is higher than the priority of the second service flow; When the transmission quality of the first service flow does not meet the service level requirement corresponding to the first service flow, adjust the HQoS model for the transmission rate threshold of the second service flow to a first threshold, and the first threshold is smaller than the second service flow the current data transfer rate.
  • FIG. 11 is a schematic structural diagram of another apparatus for scheduling a service flow provided by an embodiment of the present application, which may be, for example, the SR/BRAS in FIGS. 1 to 4 .
  • the apparatus for scheduling a service flow may be applied to the first network device provided in the foregoing embodiment, for example, to execute the method performed by the first network device shown in FIG. 5 .
  • the scheduling apparatus 500 may include: a main control board 501 , an interface board 502 and an interface board 503 .
  • a switching network board (not shown in the figure) may be included, and the switching network board is used to complete data exchange between each interface board (the interface board is also called a line card or a service board).
  • the main control board 501 is used to complete functions such as system management, equipment maintenance, and protocol processing.
  • the interface boards 502 and 503 are used to provide various service interfaces, for example, a packet over SONET/SDH (POS) interface, a Gigabit Ethernet (GE) interface, and an asynchronous transfer mode (ATM) interface, and to implement packet forwarding, where SONET refers to synchronous optical network and SDH refers to synchronous digital hierarchy.
  • the main control board 501 , the interface board 502 and the interface board 503 are connected to the system backplane through the system bus to realize intercommunication.
  • One or more processors 5021 are included on the interface board 502 .
  • the processor 5021 is used to control and manage the interface board, communicate with the central processing unit 5011 on the main control board 501, and perform packet forwarding processing.
  • the memory 5022 on the interface board 502 is used for storing forwarding entries, and the processor 5021 forwards the message by searching for the forwarding entries stored in the memory 5022 .
  • the interface board 502 includes one or more network interfaces 5023 for receiving the packets sent by the previous hop node, and sending the processed packets to the next hop node according to the instructions of the processor 5021 .
  • the specific implementation process will not be repeated here.
  • the specific functions of the processor 5021 are also not repeated here.
  • this embodiment includes multiple interface boards and adopts a distributed forwarding mechanism.
  • the structure and operation of the interface board 503 are basically similar to those of the interface board 502, and are not repeated for brevity.
  • the processors 5021 and/or 5031 of the interface boards in FIG. 11 may be dedicated hardware or chips, such as a network processor or an application-specific integrated circuit, to implement the above-mentioned functions; in this implementation, the forwarding plane is said to be processed by dedicated hardware or a chip.
  • the processor 5021 and/or 5031 may also use a general-purpose processor, such as a general-purpose CPU, to implement the functions described above.
  • there may be one or more main control boards; when there are multiple main control boards, they may include an active main control board and a standby main control board.
  • the multiple interface boards can communicate with each other through one or more switching network boards.
  • they can jointly implement load sharing and redundant backup.
  • the first network device may not need a switching network board, and the interface board is responsible for processing the service data of the entire system.
  • the first network device includes a plurality of interface boards, and data exchange between the plurality of interface boards can be realized through a switching network board, thereby providing large-capacity data exchange and processing capabilities. Therefore, the data access and processing capabilities of network devices in a distributed architecture are greater than those in a centralized architecture.
  • the specific architecture used depends on the specific networking deployment scenario, and there is no restriction here.
  • the memory 5022 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk or another magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 5022 may exist independently and be connected to the processor 5021 through a communication bus.
  • the memory 5022 may also be integrated with the processor 5021.
  • the memory 5022 is used for storing program codes, and is controlled and executed by the processor 5021, so as to execute the method for scheduling service flows provided by the above embodiments.
  • the processor 5021 is used to execute program codes stored in the memory 5022 .
  • One or more software modules may be included in the program code.
  • the one or more software modules may be functional modules in the above-mentioned embodiment shown in FIG. 9 .
  • the network interface 5023 may be any device using a network interface for communicating with another device or communication network, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
  • embodiments of the present application further provide a computer-readable storage medium, where instructions or code are stored in the computer-readable storage medium; when the instructions or code are executed on a computer, the computer is caused to perform the service flow scheduling method provided by the foregoing method embodiments, for example, to perform the method performed by the first network device shown in FIG. 5 .
  • the embodiments of the present application also provide a computer program product containing instructions; when the computer program product runs on a computer, the computer is caused to execute the service flow scheduling method provided by the above method embodiments, for example, to execute the method performed by the first network device shown in FIG. 5 .
  • embodiments of the present application further provide a chip, where the chip includes a programmable logic circuit and/or program instructions, and the chip can be used to execute the service flow scheduling method provided by the above method embodiments, for example, to execute the method performed by the first network device shown in FIG. 5 .
  • the chip may be a traffic management (traffic management, TM) chip.
  • An embodiment of the present application further provides a network device, where the network device may be the first network device in the foregoing embodiment, and may be used to implement the service flow scheduling method provided in the foregoing embodiment.
  • the network device may include the service flow scheduling apparatus provided in the foregoing embodiment.
  • the network device may include a service flow scheduling apparatus as shown in FIG. 9 , FIG. 10 or FIG. 11 .
  • the network device may include the chip provided in the foregoing embodiment.
  • An embodiment of the present application further provides a traffic scheduling system, where the traffic scheduling system includes a terminal and a first network device, where the first network device is configured to schedule a service flow to the terminal.
  • the first network device may include the service flow scheduling apparatus provided in the foregoing embodiment.
  • the first network device may include a service flow scheduling apparatus as shown in FIG. 9 , FIG. 10 or FIG. 11 .
  • the first network device may include the chip provided in the foregoing embodiment.
  • the first network device may be the SR/BRAS 20, the LSW 30, the OLT 40 or the ONT 50 in the traffic scheduling system.
  • the traffic scheduling system may further include one or more cascaded second network devices, and the first network device may be connected to the terminal through the one or more cascaded second network devices.
  • for example, when the first network device is the SR/BRAS 20, the traffic scheduling system may further include one second network device, which is any one of the LSW 30, the OLT 40, and the ONT 50.
  • when the first network device is the SR/BRAS 20, the traffic scheduling system may further include two second network devices, which are any two of the LSW 30, the OLT 40, and the ONT 50.
  • when the first network device is the SR/BRAS 20, the traffic scheduling system may further include three second network devices cascaded in sequence, which are the LSW 30, the OLT 40, and the ONT 50 respectively. That is, the SR/BRAS 20, as the first network device, can be connected to the terminal through the LSW 30, the OLT 40, and the ONT 50 that are cascaded in sequence.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the available media may be magnetic media (e.g., a floppy disk, a hard disk, a magnetic tape), optical media (e.g., a digital versatile disc (DVD)), or semiconductor media (e.g., a solid state disk (SSD)), etc.


Abstract

The present application provides a method, an apparatus, and a system for scheduling service flows. In this solution, when the transmission quality of a higher-priority service flow does not meet the service level requirement corresponding to that flow, the first network device adjusts the transmission rate threshold of the HQoS model for a lower-priority service flow to a first threshold, where the first threshold is smaller than the current data transmission rate of the lower-priority service flow. By shaping the traffic of the lower-priority service flow, the bandwidth resources of the downlink port of the first network device can be yielded to the higher-priority service flow, so as to ensure that the higher-priority service flow meets its service level requirement.

Description

Method, Apparatus, and System for Scheduling Service Flows
This application claims priority to Chinese Patent Application No. 202011550634.6, filed on December 24, 2020 and entitled "Method, Device, and System for Sending Packets", and to Chinese Patent Application No. 202110272534.X, filed on March 12, 2021 and entitled "Method, Apparatus, and System for Scheduling Service Flows", both of which are incorporated herein by reference in their entireties.
Technical Field
The present application relates to the field of communication technologies, and in particular to a method, an apparatus, and a system for scheduling service flows.
Background
With the development of network technologies, the number of service flows in networks keeps increasing, and different service flows may correspond to different service level requirements. For example, the service level requirements of delay-sensitive services (such as interactive virtual reality and video conferencing) usually demand low delay, high bandwidth, and a low packet loss rate, whereas the service level requirements of non-delay-sensitive services (such as file download and video on demand) demand high bandwidth but impose no strict requirements on delay or packet loss rate.
A service flow sent by a server needs to be forwarded to a user's terminal through multiple levels of network devices, which generally include a backbone router, a service router (SR), a local area network switch (LSW), an optical line terminal (OLT), an optical network terminal (ONT), and the like. When receiving different service flows, these network devices usually mix the different service flows in one queue for scheduling, and this scheduling method cannot meet the service level requirements of the different service flows.
发明内容
The present application provides a method, an apparatus, and a system for scheduling service flows, to solve the technical problem that scheduling methods in the related art cannot meet the service level requirements of different service flows.
According to a first aspect, a method for scheduling service flows is provided. The method includes: a first network device schedules a first service flow and a second service flow respectively based on a hierarchical quality of service (HQoS) model, where the priority of the first service flow is higher than the priority of the second service flow. When the transmission quality of the first service flow (for example, one or more of transmission delay, packet loss rate, data transmission rate, and burst traffic size) does not meet the service level requirement corresponding to the first service flow, the first network device can adjust the transmission rate threshold of the HQoS model for the second service flow to a first threshold, where the first threshold is smaller than the current data transmission rate of the second service flow. For example, the service level requirement may be a requirement defined in a service level agreement (SLA) or another agreed requirement.
Since the first threshold is smaller than the current data transmission rate of the lower-priority second service flow, traffic shaping of the lower-priority second service flow can be implemented. Further, the bandwidth resources of the downlink port of the first network device can be yielded to the higher-priority first service flow, so as to ensure that the service level requirement of the higher-priority first service flow is preferentially satisfied.
Optionally, the first threshold is greater than or equal to the average data transmission rate of the second service flow, so as to prevent traffic shaping from severely affecting the transmission quality of the second service flow.
Optionally, the process in which the first network device adjusts the transmission rate threshold of the HQoS model for the second service flow to the first threshold may include: when the transmission quality of the first service flow does not meet the service level requirement corresponding to the first service flow, and the current data transmission rate of the second service flow is greater than the peak threshold of the data transmission rate of the second service flow, adjusting the transmission rate threshold of the HQoS model for the second service flow to the first threshold.
When the current data transmission rate of the second service flow is greater than the peak threshold of the data transmission rate of the second service flow, the first network device can determine that the second service flow currently has a traffic burst. Since a traffic burst severely preempts the bandwidth resources of other service flows, the first network device performs traffic shaping on the bursting second service flow accordingly, which can effectively improve the transmission quality of the first service flow.
Optionally, the transmission rate threshold of the second service flow includes one or more of a peak information rate (PIR), a committed access rate (CAR), a committed information rate (CIR), and an excess information rate (EIR). It can be understood that the transmission rate threshold of the second service flow may include any one of the PIR, CAR, CIR, and EIR. If the transmission rate threshold of the second service flow includes multiple of the PIR, CAR, CIR, and EIR, the first network device needs to adjust each rate in the transmission rate threshold separately. In one possible implementation, the first network device may adjust the multiple rates in the transmission rate threshold to the same first threshold, that is, the adjusted rates are equal in value. In another possible implementation, the first network device may adjust the multiple rates in the transmission rate threshold to respective corresponding thresholds, that is, the adjusted rates may differ in value.
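Rate thresholds of this kind (PIR, CAR, CIR, EIR) are commonly enforced with token buckets. The sketch below is a deliberately simplified single-rate bucket, assuming byte-denominated tokens; in this toy model, lowering `rate` is what "adjusting the transmission rate threshold" amounts to. It is an illustration, not the mechanism defined by this application.

```python
class TokenBucket:
    """Minimal single-rate token bucket shaper.

    `rate` plays the role of one of the thresholds mentioned above
    (e.g. the PIR): packets are admitted only while enough tokens have
    accumulated, which caps the long-term rate at `rate` and the burst
    size at `burst`.
    """

    def __init__(self, rate: float, burst: float):
        self.rate = rate      # tokens (bytes) added per second
        self.burst = burst    # bucket depth in bytes
        self.tokens = burst   # bucket starts full
        self.t = 0.0          # time of last update

    def allow(self, size: float, now: float) -> bool:
        """Admit a packet of `size` bytes at time `now` if possible."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.t) * self.rate)
        self.t = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False  # the shaper would queue or drop this packet
```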
Optionally, in one network scenario, the first network device is connected to the terminal through a second network device, and the HQoS model includes multiple levels of schedulers, such as a first-level scheduler corresponding to the downlink port of the first network device, a second-level scheduler corresponding to the downlink port of the second network device, a first bottom-level scheduler for transmitting the first service flow through the downlink port of the second network device, and a second bottom-level scheduler for transmitting the second service flow through the downlink port of the second network device.
Among these schedulers, the first bottom-level scheduler corresponds to the first service flow transmitted through the downlink port of the second network device, and the second bottom-level scheduler corresponds to the second service flow transmitted through that port. The first network device can schedule the first service flow and the second service flow respectively through these two bottom-level schedulers.
Optionally, an implementation in which the first network device adjusts the transmission rate threshold of the HQoS model for the second service flow to the first threshold includes: the first network device adjusts the transmission rate threshold for the second service flow of at least one of the first-level scheduler, the second-level scheduler, and the second bottom-level scheduler to the first threshold.
For example, to avoid affecting the transmission quality of other service flows, the first network device may adjust only the transmission rate threshold of the second bottom-level scheduler for the second service flow to the first threshold.
Optionally, another implementation in which the first network device adjusts the transmission rate threshold of the HQoS model for the second service flow to the first threshold includes: determining a target scheduler at which network congestion occurs when transmitting the first service flow, where the target scheduler may be the first-level scheduler or the second-level scheduler; and adjusting the transmission rate threshold of the target scheduler for the second service flow to the first threshold.
By adjusting the transmission rate threshold of the target scheduler for the second service flow, the congestion level of the target scheduler when transmitting the first service flow can be effectively reduced, thereby improving the transmission quality of the first service flow.
Optionally, the sum of the transmission rate thresholds of the first-level scheduler for the first service flow and the second service flow may be less than or equal to the maximum bandwidth of the downlink port of the first network device; the sum of the transmission rate thresholds of the second-level scheduler for the first service flow and the second service flow may be less than or equal to the maximum bandwidth of the downlink port of the second network device.
By making the sum of the transmission rate thresholds of a scheduler for the service flows less than or equal to the maximum bandwidth of the downlink port of the corresponding network device, it can be ensured that the bandwidth of the downlink port of the network device can satisfy the bandwidth requirements of the service flows scheduled by that scheduler.
Optionally, the transmission rate threshold of the first bottom-level scheduler for the first service flow is less than or equal to the maximum bandwidth of the downlink port of the second network device, and the transmission rate threshold of the second bottom-level scheduler for the second service flow is less than or equal to the maximum bandwidth of the downlink port of the second network device.
Optionally, the first bottom-level scheduler includes a first queue for buffering packets of the first service flow, and the second bottom-level scheduler includes a second queue for buffering packets of the second service flow. The sum of the maximum queue buffer of the first queue and the maximum queue buffer of the second queue is less than or equal to the maximum port buffer of the downlink port of the second network device.
By making the sum of the maximum queue buffers of the queues in the bottom-level schedulers less than or equal to the maximum port buffer of the downlink port of the second network device, it can be ensured that the port buffer of the downlink port of the second network device can satisfy the buffering requirements of the service flows scheduled by the bottom-level schedulers.
Optionally, the upper limit of delay in the service level requirement of the first service flow may be smaller than the upper limit of delay in the service level requirement of the second service flow. That is, a service flow with a stricter delay requirement (i.e., a higher real-time requirement) may have a higher priority, and a service flow with a looser delay requirement (i.e., a lower real-time requirement) may have a lower priority.
Optionally, the method may further include: the first network device schedules a third service flow based on the HQoS model, where the priority of the third service flow is higher than that of the second service flow and lower than that of the first service flow; when the transmission rate threshold of the second service flow is less than or equal to the average data transmission rate of the second service flow, or when the current data transmission rate of the second service flow is less than or equal to the peak threshold of the data transmission rate of the second service flow, the first network device adjusts the transmission rate threshold of the HQoS model for the third service flow to a second threshold, where the second threshold is smaller than the current data transmission rate of the third service flow, so that traffic shaping of the third service flow can be implemented.
In the solution provided by this application, when the second service flow no longer satisfies the conditions for traffic shaping, traffic shaping can be performed on another lower-priority third service flow, so as to ensure that the service level requirement of the first service flow is met.
According to a second aspect, an apparatus for scheduling service flows is provided. The scheduling apparatus is applied to a first network device and includes at least one module, which may be used to implement the method for scheduling service flows provided by the first aspect or the optional solutions of the first aspect.
According to a third aspect, an apparatus for scheduling service flows is provided, including a memory and a processor. The memory is configured to store a computer program or code, and the processor is configured to execute the computer program or code to implement the method for scheduling service flows provided by the first aspect or the optional solutions of the first aspect.
According to a fourth aspect, a computer-readable storage medium is provided. The computer-readable storage medium includes instructions or code, and when the instructions or code are executed on a computer, the computer is caused to perform the method for scheduling service flows provided by the first aspect or the optional solutions of the first aspect.
According to a fifth aspect, a chip is provided. The chip includes a programmable logic circuit and/or program instructions, and the chip is configured to perform the method for scheduling service flows provided by the first aspect or the optional solutions of the first aspect.
According to a sixth aspect, a computer program product is provided. The computer program product includes a program or code, and when the program or code runs on a computer, the computer is caused to perform the method for scheduling service flows provided by the first aspect or the optional solutions of the first aspect.
According to a seventh aspect, a traffic scheduling system is provided. The traffic scheduling system includes a terminal and a first network device. The first network device is configured to schedule a first service flow and a second service flow of the terminal, and includes the apparatus for scheduling service flows provided by the second aspect or the third aspect, or the chip provided by the fifth aspect.
Optionally, the traffic scheduling system may further include a second network device, and the first network device may be connected to the terminal through the second network device.
In summary, embodiments of the present application provide a method, an apparatus, and a system for scheduling service flows. When the transmission quality of a higher-priority service flow does not meet the service level requirement corresponding to that flow, the first network device adjusts the transmission rate threshold of the HQoS model for a lower-priority service flow to a first threshold, where the first threshold is smaller than the current data transmission rate of the lower-priority service flow. With this solution, traffic shaping of the lower-priority service flow can be implemented. Further, the bandwidth of the downlink port of the first network device can be yielded to the higher-priority service flow, so as to ensure that the service level requirement of the higher-priority service flow is preferentially satisfied.
附图说明
FIG. 1 is a schematic diagram of a network scenario of a traffic scheduling system provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an HQoS model provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of another traffic scheduling system provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of another HQoS model provided by an embodiment of the present application;
FIG. 5 is a flowchart of a method for scheduling service flows provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a first network device provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a traffic scheduling system provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of traffic shaping provided by an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an apparatus for scheduling service flows provided by an embodiment of the present application;
FIG. 10 is a schematic structural diagram of another apparatus for scheduling service flows provided by an embodiment of the present application;
FIG. 11 is a schematic structural diagram of yet another apparatus for scheduling service flows provided by an embodiment of the present application.
Detailed Description
The method, apparatus, and system for scheduling service flows provided by embodiments of the present application are described in detail below with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of a network scenario of a traffic scheduling system provided by an embodiment of the present application. As shown in FIG. 1, to avoid the service experience problems and network leasing costs caused by long-distance data transmission, a service provider (for example, a video content service provider) can build its own data centers (DCs) in key regions or lease a content delivery network (CDN) provided by an operator. In this way, the servers that provide service flows can be deployed in areas close to terminals, such as cities and counties, so that the service flows (for example, video streams) obtained by a terminal mainly come from servers in a nearby DC or CDN, which effectively improves the experience of end users. A terminal may also be referred to as user equipment, and may be a mobile phone, a computer, a wearable device, a smart home device, or the like.
In this embodiment of the present application, the servers in the DC and the CDN can forward service flows to terminals through multiple levels of network devices. As shown in FIG. 1, the multiple levels of network devices may include a backbone router 10, an SR 20, an LSW 30, an OLT 40, and an ONT 50 that are cascaded in sequence. The upstream port of the backbone router 10 can be connected to servers in the DC and/or the CDN to access the Internet, and the ONT 50 is connected to one or more terminals. Referring to FIG. 1, the traffic scheduling system may further include a splitter 60, through which the OLT 40 can be connected to multiple ONTs 50. In addition, the backbone router 10 and the LSW 30, or the backbone router 10 and the OLT 40, may also be connected through a broadband access server (BAS). For example, the BAS may be a broadband remote access server (BRAS). For ease of description, the node between the backbone router 10 and the OLT 40 is collectively referred to as the SR/BRAS 20 below.
It can be understood that, in one possible implementation, the SR/BRAS 20 shown in FIG. 1 may be directly connected to the OLT 40, that is, the traffic scheduling system may not include the LSW 30. In another possible implementation, the SR/BRAS 20 shown in FIG. 1 may be connected to the OLT 40 through multiple cascaded LSWs 30. In yet another possible implementation, the OLT 40 may be absent between the LSW 30 and the ONT 50 shown in FIG. 1; in still another possible implementation, the LSW 30 and the ONT 50 shown in FIG. 1 may be connected through multiple cascaded OLTs 40. Likewise, the ONT 50 may be absent between the OLT 40 and the terminal shown in FIG. 1, or the OLT 40 and the terminal may be connected through multiple cascaded ONTs 50. In the foregoing description, cascading of network devices may mean that the downlink port of one network device is connected to the ingress port of another network device.
With the development of the fifth generation of mobile communication technologies (5G) and its new services, not only does network traffic keep growing, but the types of service flows in the network are also becoming increasingly diverse. Since the traffic volumes of different types of service flows differ greatly and their service level requirements also vary, higher requirements are imposed on the ability of the traffic scheduling system to guarantee service level requirements. When there are many service flows in the traffic scheduling system, network congestion may occur, causing the transmission quality of some service flows to fail to meet their corresponding service level requirements. For ease of description, the service level requirement corresponding to a service flow is hereinafter referred to simply as the service level requirement of the service flow. It can be understood that, in embodiments of the present application, the service level requirement may be a requirement defined in an SLA.
In the traffic scheduling system, because backbone routers have large throughput and strong processing capabilities, and service flows can be load-balanced among backbone routers, backbone routers are usually not the bottleneck of network congestion. The traffic pressure of the traffic scheduling system is mainly concentrated in the metropolitan area network. That is, as shown in FIG. 1, network congestion usually occurs on the links between the SR/BRAS 20 and the LSW 30, between the LSW 30 and the OLT 40, and between the OLT 40 and the ONT 50.
Embodiments of the present application provide a method for scheduling service flows. The scheduling method enables the traffic scheduling system to preferentially satisfy the service level requirements of higher-priority service flows, thereby effectively improving the system's ability to guarantee service level requirements. The scheduling method can be applied to a first network device in the traffic scheduling system, and the first network device may be the SR/BRAS 20, the LSW 30, the OLT 40, or the ONT 50 in the system shown in FIG. 1.
In this embodiment of the present application, an HQoS model is deployed in the first network device. The HQoS model can divide scheduling queues into multiple scheduling levels, and each level can use different traffic characteristics for traffic management, thereby implementing multi-user, multi-service management. The first network device classifies the received service flows into different priorities and can differentially schedule service flows of different priorities based on the HQoS model. For example, when the transmission quality of a higher-priority service flow does not meet its service level requirement, traffic shaping can be performed on a lower-priority service flow. Traffic shaping is a way of adjusting the data transmission rate of a service flow; it can limit bursts of the service flow so that the flow is sent out at a relatively uniform rate. By shaping the lower-priority service flow, the bandwidth of the downlink port of the first network device can be yielded to the higher-priority service flow, so as to ensure that the service level requirement of the higher-priority service flow is preferentially satisfied.
In one possible implementation, the first network device can be connected to the terminal through a second network device. The HQoS model may include: a first-level scheduler 21 corresponding to the downlink port of the first network device, a second-level scheduler 22 corresponding to the downlink port of the second network device, and N bottom-level schedulers for transmitting service flows of L different priorities through the downlink port of the second network device, where L and N are both integers greater than 1, and N is greater than or equal to L. Each bottom-level scheduler corresponds to one priority of service flow transmitted through the downlink port of the second network device and is used to schedule the service flows of that priority. Since a bottom-level scheduler corresponds to service flows, it may also be called a flow queue (FQ) level scheduler.
It can be understood that, if N = L, service flows of one priority correspond to one bottom-level scheduler, that is, service flows of one priority are scheduled by one corresponding bottom-level scheduler. If N is greater than L, service flows of one priority may correspond to multiple bottom-level schedulers, that is, service flows of one priority may be scheduled by multiple corresponding bottom-level schedulers. For example, if L = 4 and N = 8, service flows of each priority may be scheduled by two bottom-level schedulers. As another example, assuming the service flows transmitted through the downlink port of the second network device include a higher-priority first service flow and a lower-priority second service flow, the N bottom-level schedulers may at least include: a first bottom-level scheduler 23 for transmitting the first service flow through the downlink port of the second network device, and a second bottom-level scheduler 24 for transmitting the second service flow through the downlink port of the second network device. This can also be understood as: the N bottom-level schedulers include at least the first bottom-level scheduler 23 corresponding to the first service flow and the second bottom-level scheduler 24 corresponding to the second service flow.
In embodiments of the present application, it can be understood that a scheduler corresponding to a downlink port of a network device may mean that the scheduler establishes a mapping relationship with the downlink port of the network device and schedules service flows based on the port parameters of that downlink port (for example, the maximum bandwidth and/or the maximum port buffer).
It can also be understood that the first network device may be connected to the terminal through multiple cascaded second network devices. Correspondingly, the HQoS model may include multiple second-level schedulers 22 in one-to-one correspondence with the downlink ports of the multiple cascaded second network devices. For example, referring to FIG. 2, assuming the first network device is the SR/BRAS 20, the SR/BRAS 20 may be connected to a first terminal 03 through the LSW 30, the OLT 40, or the ONT 50, that is, through the LSW 30, the OLT 40, or the ONT 50 serving as second network devices. If the downlink port of the ONT 50 can transmit service flows of four different priorities (that is, N = 4), the HQoS model may include: a first-level scheduler 21 corresponding to the SR/BRAS 20, second-level schedulers 22 in one-to-one correspondence with the second network devices, and four bottom-level schedulers in one-to-one correspondence with the four priorities of service flows.
Optionally, as shown in FIG. 2, the first-level scheduler 21 and the second-level schedulers 22 in the HQoS model may each include a scheduling unit and a shaping unit, and the bottom-level schedulers may include shaping units. The shaping unit is used to perform traffic shaping on service flows. The scheduling unit is used to select, according to a pre-configured scheduling policy, packets from one of the multiple schedulers connected to it for scheduling. Optionally, the scheduling policy may include strict priority (SP) scheduling, weighted fair queuing (WFQ) scheduling, or the like.
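The SP and WFQ policies mentioned for the scheduling unit can be illustrated with two toy selection functions. These are textbook simplifications, not the device's scheduler: the WFQ variant here charges a queue by bytes sent divided by its weight, and all queue fields are assumptions for illustration.

```python
def sp_select(queues: list):
    """Strict priority: serve the head packet of the highest-priority
    non-empty queue (higher 'priority' number wins)."""
    for q in sorted(queues, key=lambda q: q["priority"], reverse=True):
        if q["packets"]:
            return q["packets"].pop(0)
    return None


def wfq_select(queues: list):
    """Very small WFQ approximation: serve the non-empty queue with the
    smallest serviced-bytes-to-weight ratio, then charge it for the
    packet it sent, so bandwidth converges toward the weight ratio."""
    busy = [q for q in queues if q["packets"]]
    if not busy:
        return None
    q = min(busy, key=lambda q: q["served"] / q["weight"])
    pkt = q["packets"].pop(0)
    q["served"] += pkt["size"]
    return pkt
```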
It can also be seen from FIG. 2 that the structure of the multiple schedulers in the HQoS model may be the same as the topology of the multiple network devices to which they correspond. That is, the first-level scheduler can be connected to the N bottom-level schedulers through one second-level scheduler or through multiple cascaded second-level schedulers.
For the network scenario in which the first network device is the SR/BRAS 20 and the SR/BRAS 20 is connected to the terminal through the LSW 30, the OLT 40, and the ONT 50 cascaded in sequence, the first-level scheduler 21 in the HQoS model may be a dummy port (DP) level scheduler. The HQoS model includes three second-level schedulers 22: the second-level scheduler 22 corresponding to the downlink egress port of the LSW 30 may be a virtual interface (VI) level scheduler; the second-level scheduler 22 corresponding to the downlink egress port of the OLT 40 may be a group queue (GQ) level scheduler; and the second-level scheduler 22 corresponding to the downlink egress port of the ONT 50 may be a subscriber queue (SQ) level scheduler.
FIG. 3 is a schematic structural diagram of another traffic scheduling system provided by an embodiment of the present application, and FIG. 4 is a schematic structural diagram of another HQoS model provided by an embodiment of the present application. As can be seen from FIG. 1, FIG. 3, and FIG. 4, in the traffic scheduling system, the SR/BRAS 20 can be connected to multiple LSWs 30, each LSW 30 can be connected to multiple OLTs 40, and each OLT 40 can in turn be connected to multiple ONTs 50. For example, in the network scenario shown in FIG. 4, the OLT 40 is connected to two ONTs 50, one of which is connected to the first terminal 03 and the second terminal 04, and the other to the third terminal 05. As shown in FIG. 4, the HQoS model in the SR/BRAS 20 may include one DP-level scheduler (that is, one first-level scheduler 21), and may further include: multiple VI-level schedulers corresponding to the multiple LSWs 30, multiple GQ-level schedulers corresponding to the multiple OLTs 40, and multiple SQ-level schedulers corresponding to the multiple ONTs 50, where one SQ-level scheduler can be connected to multiple bottom-level schedulers.
It can be understood that each ONT 50 in the scheduling system may correspond to one subscriber and be used to connect one or more terminals of that subscriber to the network. Correspondingly, the multiple SQ-level schedulers in the HQoS model can be used to distinguish the service flows of different subscribers, that is, each SQ-level scheduler can be used to schedule the service flows of one subscriber. A subscriber may refer to a virtual local area network (VLAN), a virtual private network (VPN), a home broadband user, or the like. The multiple bottom-level schedulers connected to each SQ-level scheduler can be used to distinguish service flows of different priorities of the same subscriber, where each priority may include one or more types of service flows.
For example, assume a subscriber's service flows include four different types: a voice service flow, a gaming service flow, a video-on-demand service flow, and a file download service flow, and that among these the voice and gaming service flows are high-priority while the video-on-demand and file download service flows are low-priority. Correspondingly, the SQ-level scheduler corresponding to the subscriber can be connected to at least two bottom-level schedulers, one of which schedules the high-priority service flows and the other the low-priority service flows.
It can also be understood that the numbers of bottom-level schedulers connected to different SQ-level schedulers may be the same or different. For example, if each subscriber's service flows can be divided into N different priorities, each SQ-level scheduler can be connected to N bottom-level schedulers. If the service flows of a first subscriber can be divided into N1 different priorities and those of a second subscriber into N2 different priorities, the SQ-level scheduler corresponding to the first subscriber can be connected to N1 bottom-level schedulers and that of the second subscriber to N2 bottom-level schedulers, where N1 and N2 are both integers greater than 1 and N1 is not equal to N2.
It can also be understood that the number of schedulers at any level in the HQoS model may be greater than the number of network devices at the corresponding level in the traffic scheduling system. For example, the number of VI-level schedulers in the HQoS model may be greater than the number of LSWs 30 connected to the SR/BRAS 20, the number of GQ-level schedulers may be greater than the number of OLTs 40, and the number of SQ-level schedulers may be greater than the number of ONTs 50. Moreover, if the service flows of the subscriber corresponding to an SQ-level scheduler can be divided into N different priorities, the number of bottom-level schedulers connected to that SQ-level scheduler may be greater than N. By designing a larger number of schedulers in the HQoS model, it can be ensured that when the traffic scheduling system is later expanded or upgraded, there are enough schedulers in the HQoS model to establish mapping relationships with newly added network devices, which effectively improves the application flexibility of the HQoS model.
It can also be understood that the first network device (for example, the SR/BRAS 20) may include multiple downlink ports, in which case multiple HQoS models in one-to-one correspondence with the multiple downlink ports may be deployed in the first network device. That is, the HQoS model may be deployed for each downlink port of the first network device.
It can also be understood that, in embodiments of the present application, the downlink port of a network device may be a physical port or a virtual port, where a virtual port may be a trunk port composed of multiple physical ports.
With reference to the network scenario shown in FIG. 4, the process in which the first network device schedules a service flow based on the HQoS model is described below. After receiving a service flow sent by an upstream device (for example, the backbone router 10), the first network device can first determine the subscriber to which the service flow belongs, and determine the target SQ-level scheduler corresponding to that subscriber from the multiple SQ-level schedulers in the HQoS model. For example, the first network device can determine the subscriber to which a service flow belongs based on access control lists (ACLs). Then, the first network device can determine the priority of the service flow based on its type, and determine, from the multiple bottom-level schedulers connected to the target SQ-level scheduler, the target bottom-level scheduler corresponding to that priority. After that, the first network device can add the packets of the service flow to the queue in the target bottom-level scheduler. Further, the first network device can schedule the packets in the target bottom-level scheduler through the target SQ-level scheduler, the GQ-level scheduler, the VI-level scheduler, and the DP-level scheduler, so that the packets of the service flow are transmitted to the DP-level scheduler through the target SQ-level scheduler, the GQ-level scheduler, and the VI-level scheduler in sequence. The DP-level scheduler can then send the packets through the downlink egress port of the first network device to the second network device at the next level, for example, to the LSW 30.
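The enqueue path just described (classify subscriber, find the SQ-level scheduler, classify priority, find the bottom-level scheduler, append to its queue) can be sketched as follows. `classify_user` and `classify_priority` stand in for the device's real classification (for example, an ACL lookup) and are assumptions, not an API defined by this application.

```python
def dispatch(packet: dict, hqos: dict) -> dict:
    """Enqueue a packet into the HQoS hierarchy.

    Maps the packet to a subscriber's SQ-level scheduler, then to the
    bottom-level (FQ) scheduler matching the flow's priority, and
    appends the packet to that scheduler's queue.  Returns the chosen
    bottom-level scheduler.
    """
    user = hqos["classify_user"](packet)          # e.g. ACL lookup
    prio = hqos["classify_priority"](packet)      # priority from flow type
    fq = hqos["sq_schedulers"][user]["bottom"][prio]
    fq["queue"].append(packet)
    return fq
```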
It can be understood that an ONT 50 in the scheduling system can be connected to multiple terminals of the same subscriber, and the types of the service flows transmitted by the ONT 50 to different terminals may be the same, that is, the priorities of the service flows transmitted to different terminals may be the same. In this scenario, the first network device can schedule, through one bottom-level scheduler, multiple service flows of the same priority that are transmitted to different terminals. For example, assuming an ONT 50 is connected to a subscriber's mobile phone and computer, and both are downloading files, the HQoS model can schedule the file download service flow transmitted to the phone and the one transmitted to the computer in the same bottom-level scheduler.
Based on the scheduling principle of the HQoS model described above, the first network device can distinguish not only the service flows of different subscribers but also different types of service flows of the same subscriber, thereby achieving fine-grained traffic scheduling.
图5是本申请实施例提供的一种业务流的调度方法的流程图,该方法可以应用于流量调度系统中的第一网络设备。该第一网络设备可以为图1至图4中任一附图所示的SR/BRAS 20、LSW 30、OLT 40或者ONT 50。参考图5,该方法包括:
步骤101、配置HQoS模型包括的多级调度器与流量调度系统中各个网络设备的设备映射关系。
在本申请实施例中,第一网络设备中的HQoS模型中多级调度器分别对应不同级别的网络设备的下行端口的端口参数。该第一网络设备可以记录HQoS模型中每个调度器与其所对应的网络设备的端口参数的映射关系,从而得到设备映射模型。其中,该端口参数至少可以包括:最大带宽。除了该最大带宽之外,该端口参数还可以包括:最大端口缓存。
可选地,网络设备中可以包括多个不同优先级的队列,其中每个队列用于缓存一种优先级的业务流的报文,网络设备的下行端口可以按照一定的调度比例对该多个队列中的报文进行调度。相应的,该网络设备的下行端口的端口参数还可以包括:对不同优先级的队列的调度比例。
可以理解的是,由于HQoS模型中的底层调度器与第二网络设备的下行端口所传输的不同优先级的业务流对应,因此该设备映射模型中还可以记录底层调度器与第二网络设备的下行端口的端口参数的映射关系,以及底层调度器与不同优先级的业务流的映射关系。
示例的，图6是本申请实施例提供的一种第一网络设备的结构示意图，如图6所示，该第一网络设备包括模型配置模块201，该模型配置模块201包括映射模型建立单元2011。该映射模型建立单元2011能够基于该第一网络设备中配置的端口参数，建立设备映射模型。参考图2和图4，该设备映射模型中可以记录如下映射关系：DP级调度器及其对应的SR/BRAS 20的下行端口的端口参数，VI级调度器及其对应的LSW 30的下行端口的端口参数，GQ级调度器及其对应的OLT 40的下行端口的端口参数，SQ级调度器及其对应的ONT 50的下行端口的端口参数，底层调度器及其对应的ONT 50的下行端口的端口参数，以及底层调度器及其对应的一种业务流的优先级。可以理解的是，设备映射模型包括VI级调度器、GQ级调度器和SQ级调度器中的一个或多个调度器。
上述端口参数可以配置为静态参数,不会随着网络设备的运行而改变,因此可以提前将上述端口参数配置在第一网络设备中。并且,上述端口参数可以作为后续确定各个调度器的调度参数的初始值的约束。也即是,在确定调度器的调度参数的初始值时,需确保该初始值能够满足该调度器对应的网络设备的下行端口的端口参数的约束。
步骤102、配置业务流的服务等级需求模型。
在本申请实施例中,第一网络设备可以保存业务流及其服务等级需求的映射关系,从而得到服务等级需求模型。其中,业务流的服务等级需求可以包括对下述至少一种参数的限定:时延、丢包率和数据传输速率等。可以理解的是,服务等级需求中的每个参数可以是指端到端的参数,端到端是指第一网络设备到终端。
如图6所示，该第一网络设备的模型配置模块201还包括需求模型建立单元2012。该需求模型建立单元2012能够基于该第一网络设备中配置的各个业务流的服务等级需求，建立业务流的服务等级需求模型。假设在第一网络设备中配置了M个业务流的服务等级需求，且该服务等级需求为SLA中定义的需求，则该需求模型建立单元2012创建的服务等级需求模型中，第i个业务流的服务等级需求可以表示为：SLA_i={Xi,Yi,Zi,…}。其中，M为大于1的整数，i为不大于M的正整数。Xi,Yi和Zi可以分别表示服务等级需求中一种参数的限定阈值。
例如,Xi可以表示时延上限,Yi可以表示丢包率上限,Zi可以表示数据传输速率下限。相应的,若要满足第i个业务流的服务等级需求,则需确保该第i个业务流的端到端时延不大于Xi,端到端丢包率不大于Yi,且端到端的数据传输速率不小于Zi。假设第i个业务流为云VR(cloud VR)业务流,其要求从第一网络设备到终端的时延在20毫秒(ms)以下,则该服务等级需求中的Xi可以为20ms。
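按上述表示方式，服务等级需求模型及其判定逻辑可以草拟为如下Python示例（其中的业务流名称与阈值数值均为假设的示例值）：

```python
# 示意: SLA_i = {Xi, Yi, Zi}, 分别对应时延上限、丢包率上限与数据传输速率下限
sla_model = {
    "cloud_vr": {"delay_ms_max": 20, "loss_rate_max": 0.001, "rate_bps_min": 50e6},
}

def meets_sla(sla, delay_ms, loss_rate, rate_bps):
    # 端到端时延不大于Xi, 丢包率不大于Yi, 且数据传输速率不小于Zi时, 视为满足服务等级需求
    return (delay_ms <= sla["delay_ms_max"]
            and loss_rate <= sla["loss_rate_max"]
            and rate_bps >= sla["rate_bps_min"])
```

例如对云VR业务流，实测时延15ms、丢包率0.05%、速率60Mbps时满足需求；时延升至30ms则不满足。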
可选地,该服务等级需求中各个参数的限定阈值可以是经验值,或者,也可以基于建模理论推导得到。例如,第一网络设备可以对业务流的流量分布以及网络设备的处理能力进行建模,从而估算得到服务等级需求中各个参数的限定阈值。其中,该建模理论可以包括:泊松分布建模、排队论建模、网络演算和人工智能(artificial intelligence,AI)建模等。该排队论建模得到的模型可以为M/D/1模型。
步骤103、基于该设备映射模型和该服务等级需求模型,确定HQoS模型中各个调度器的调度参数的初始值。
对于HQoS模型中的调度器,第一网络设备可以基于设备映射模型中记录的该调度器对应的网络设备的端口参数,以及该服务等级需求模型中记录的各个业务流的服务等级需求,确定该调度器的调度参数的初始值。其中,调度器的调度参数用于指示对业务流的调度策略。
可选地,该调度参数可以包括:调度器对不同优先级的业务流的传输速率阈值,该传输速率阈值用于限制调度器传输业务流时的速率。其中,调度器对不同优先级的业务流的传输速率阈值可以相同,也可以不同。并且,调度器对每种优先级的业务流的传输速率阈值小于或等于该调度器对应的网络设备的下行端口的最大带宽。
示例的,假设第一网络设备接收到的业务流包括第一业务流和第二业务流,其中第一业务流的优先级高于第二业务流的优先级。则HQoS模型中第一级调度器21对该第一业务流和第二业务流的传输速率阈值之和小于或等于第一网络设备(例如SR/BRAS 20)的下行端口(该下行端口是指HQoS模型所对应的下行端口)的最大带宽。第二级调度器22对该第一业务流和第二业务流的传输速率阈值之和小于或等于第二网络设备的下行端口(该下行端口是指用于与终端连接的端口)的最大带宽。例如,SQ级调度器对第一业务流和第二业务流的传输速率阈值之和小于或等于ONT 50的下行端口的最大带宽;GQ级调度器对第一业务流和第二业务流的传输速率阈值之和小于或等于OLT 40的下行端口的最大带宽。
通过使调度器对各个业务流的传输速率阈值之和小于或等于对应的网络设备的下行端口的最大带宽,可以确保网络设备的下行端口的带宽能够满足经过该调度器调度后的业务流的带宽需求。
若HQoS模型包括的多个底层调度器中,第一底层调度器23用于通过该第二网络设备的下行端口传输第一业务流,第二底层调度器24用于通过该第二网络设备的下行端口传输第二业务流。则第一底层调度器23对该第一业务流的传输速率阈值小于或等于第二网络设备的下行端口的最大带宽,第二底层调度器24对该第二业务流的传输速率阈值小于或等于第二网络设备的下行端口的最大带宽。
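上述"传输速率阈值之和不超过下行端口最大带宽"的约束，可以草拟为一个简单的校验函数（带宽数值为假设示例）：

```python
def thresholds_valid(thresholds_bps, port_max_bw_bps):
    # 调度器对各业务流的传输速率阈值之和不得超过对应下行端口的最大带宽
    return sum(thresholds_bps) <= port_max_bw_bps

# SQ级调度器: 第一/第二业务流的阈值之和 <= ONT下行端口最大带宽(假设1Gbps)
ok = thresholds_valid([600e6, 300e6], 1e9)
bad = thresholds_valid([800e6, 300e6], 1e9)
```

确定各调度器调度参数的初始值时，可对每一级调度器逐一做此类校验。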
在本申请实施例中,参考图2和图4,每个底层调度器可以包括一个队列,该队列用于缓存该底层调度器对应的一种优先级的业务流的报文。例如,第一底层调度器23可以包括用于缓存第一业务流的报文的第一队列,第二底层调度器24可以包括用于缓存第二业务流的报文的第二队列。对于第一级调度器和第二级调度器(例如SQ级调度器、GQ级调度器、VI级调度器和DP级调度器)中的每个调度器(为便于与底层调度器区分,下文简称为上游调度器),该上游调度器可以包括多个不同优先级的队列。其中,若上游调度器中包括多个不同优先级的队列,则该上游调度器包括的队列的个数可以等于与该上游调度器对应的网络设备中包括的队列的个数,且该上游调度器可以对其所包括的多个不同优先级的队列进行调度。若上游调度器中不包括队列,则该上游调度器可以对其所连接的各个调度器包括的队列进行调度。
可选地,HQoS模型中每个调度器的调度参数所包括的传输速率阈值的个数可以等于该调度器所需调度的队列的个数,其中每个传输速率阈值用于限制一个队列中的报文的传输速率。因此,调度器对某个优先级的业务流的传输速率阈值也可以理解为:调度器对该优先级的业务流所属队列的传输速率阈值。
示例的,由于每个底层调度器包括一个队列,因此每个底层调度器的调度参数可以包括一个传输速率阈值。假设SQ级调度器与N个底层调度器连接,且SQ级调度器中不包括队列,则SQ级调度器的调度参数可以包括:与该N个底层调度器包括的N个队列对应的N个传输速率阈值。
可以理解的是,上游调度器包括的队列的个数与SQ级调度器连接的底层调度器的个数可以相等,也可以不等。若某个上游调度器包括的队列个数小于SQ级调度器所连接的底层调度器的个数,则该上游调度器可以将多个底层调度器中的报文在一个队列中调度。
例如,假设每个SQ级调度器均与4个底层调度器连接,则每个上游调度器也可以包括4个不同优先级的队列。或者,若某个上游调度器对应的网络设备中仅包括2个不同优先级的队列,则该上游调度器也可以仅包括2个队列,并且该2个队列中的每个队列可以与2个底层调度器中的队列对应。也即是,上游调度器可以将2个底层调度器中的报文混合至一个队列中调度。
可选地,该传输速率阈值可以包括PIR、CAR、CIR和EIR中的一个或多个。并且,上述每种速率的初始值均可以小于或等于调度器所对应的网络设备的下行端口的最大带宽。
作为一种可能的实现方式,调度器对各个队列的传输速率阈值的初始值可以等于该下行端口的最大带宽。作为另一种可能的实现方式,调度器对各个队列的传输速率阈值的初始值可以等于该下行端口的最大带宽除以该调度器所需调度的队列的个数。作为又一种可能的实现方式,若设备映射模型中记录的端口参数还包括:网络设备的下行端口对不同优先级的队列的调度比例,则第一网络设备还可以基于该调度比例分配该下行端口的最大带宽,从而得到对各个队列的传输速率阈值。例如,各个队列的传输速率阈值的初始值的比例可以等于该调度比例。
示例的,假设流量调度系统中,某个OLT 40的下行出端口的最大带宽为1Gbps,且与该OLT 40对应的GQ级调度器中包括4个不同优先级的队列,则该GQ级调度器对该4个队列的PIR的初始值均可以被配置为1/4Gbps。或者,假设该OLT 40对4个不同优先级的队列的调度比例为1:2:3:4,则该GQ级调度器对该4个队列的PIR的初始值可以分别被配置为:0.1Gbps、0.2Gbps、0.3Gbps和0.4Gbps。可以看出,GQ级调度器对该4个队列的PIR的初始值的比例等于该调度比例。
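该段按调度比例分配最大带宽、得到各队列PIR初始值的计算，可以草拟为：

```python
def pir_by_ratio(max_bw_bps, ratios):
    # 按下行端口对不同优先级队列的调度比例分配最大带宽, 得到各队列PIR的初始值
    total = sum(ratios)
    return [max_bw_bps * r / total for r in ratios]

# OLT下行出端口最大带宽1Gbps, 对4个优先级队列的调度比例为1:2:3:4
pirs = pir_by_ratio(1e9, [1, 2, 3, 4])
```

计算结果即正文示例中的0.1Gbps、0.2Gbps、0.3Gbps和0.4Gbps。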
可选地，对于该HQoS模型中的SQ级调度器，该设备映射模型中还可以记录有SQ级调度器对应的用户所办理的网络套餐的最大带宽。若该网络套餐的最大带宽小于该ONT 50的下行出端口的最大带宽，则第一网络设备可以基于该网络套餐的最大带宽确定SQ级调度器对不同优先级的队列的传输速率阈值。例如，假设某ONT 50的下行出端口的最大带宽为1Gbps，但该ONT 50的用户购买的是最大带宽为100兆比特每秒(megabits per second，Mbps)的网络套餐，则与该ONT 50对应的SQ级调度器对不同优先级的队列的PIR的初始值之和可以小于等于100Mbps。
若设备映射模型中记录的网络设备的端口参数还包括:最大端口缓存,则对于包括队列的调度器,该调度器的调度参数还可以包括:调度器中每个队列的最大队列缓存。对于不包括队列的调度器,该调度器的调度参数还可以包括:该调度器的最大缓存。其中,调度器中各个队列的最大队列缓存之和可以小于或等于该最大端口缓存,调度器的最大缓存也小于或等于该最大端口缓存。例如,假设SQ级调度器包括4个不同优先级的队列,则该4个队列的最大队列缓存之和可以小于或等于第二网络设备(例如ONT 50)的下行端口的最大端口缓存。
可以理解的是,队列的最大队列缓存是指队列所能够占用的最大缓存,即该队列所能够缓存的报文的总数据量的上限。
对于SQ级调度器连接的N个底层调度器,该N个底层调度器包括的N个队列的最大队列缓存之和应小于或等于与SQ级调度器对应的第二网络设备的下行端口的最大端口缓存。例如,假设SQ级调度器所连接的底层调度器包括第一底层调度器23和第二底层调度器24,其中第一底层调度器23中用于缓存第一业务流的报文的队列为第一队列,第二底层调度器24中用于缓存第二业务流的报文的队列为第二队列。则该第一队列的最大队列缓存与第二队列的最大队列缓存之和可以小于或等于该第二网络设备(例如ONT 50)的下行端口的最大端口缓存。
通过使各个底层调度器中的队列的最大队列缓存之和小于或等于第二网络设备的下行端口的最大端口缓存,可以确保第二网络设备的下行端口的端口缓存能够满足经过该底层调度器调度后的业务流的缓存需求。
在本申请实施例中,该第一网络设备还可以基于该服务等级需求模型建立该流量调度系统的流量模型,进而基于该流量模型确定调度器中各个队列的最大队列缓存。以流量调度系统中的某个OLT 40为例,假设流量调度系统中的业务流主要为视频流(例如,视频流的占比达到80%以上),则第一网络设备可以基于排队论的M/D/1模型计算该OLT 40对应的GQ级调度器的时延,进而基于该时延确定GQ级调度器所需的缓存的大小。该时延的计算公式如下:
W(t) = (1-ρ)·∑_{k=0}^{⌊μt⌋} [(-λ(t-k/μ))^k / k!]·e^{λ(t-k/μ)}
其中，W(t)表示GQ级调度器在t时刻的时延。λ为流量到达速率，其服从泊松分布；μ为OLT 40的服务速率；ρ为OLT 40的负载率，且该负载率ρ满足：ρ=λ/μ；k为大于等于0且小于等于⌊μt⌋的整数，⌊μt⌋表示对μt进行向下取整。
第一网络设备可以基于该OLT 40所处场景中不同类型的业务流的占比以及每种类型的业务流的条数,对上述公式中的流量到达速率λ进行预估。并且,该第一网络设备还可以基于该OLT 40所在地区各个OLT的平均端口负载率确定该OLT 40的负载率ρ。假设该负载率ρ为50%,服务速率μ为1Gbps,则第一网络设备能够基于上述公式计算出该OLT 40对应的GQ级调度器所需的缓存的大小,从而配置该GQ级调度器中各个队列的最大队列缓存。
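上述基于M/D/1模型的时延计算可以草拟为如下Python示例（此处按标准的Crommelin型公式实现，它与专利附图中的公式是否完全一致属于假设，参数取值亦仅为示例）：

```python
import math

def md1_delay_cdf(t, lam, mu):
    """示意: M/D/1排队模型中 P(W <= t) 的Crommelin型计算(假设的实现口径)。"""
    rho = lam / mu                          # 负载率 rho = lambda / mu
    s = 0.0
    for k in range(int(math.floor(mu * t)) + 1):
        x = t - k / mu                      # 对 k <= floor(mu*t), 有 x >= 0
        s += (-lam * x) ** k / math.factorial(k) * math.exp(lam * x)
    return (1 - rho) * s

# 负载率50%(lam=0.5, mu=1): t=0时概率为1-rho, t增大时趋近于1
p0 = md1_delay_cdf(0.0, lam=0.5, mu=1.0)
p5 = md1_delay_cdf(5.0, lam=0.5, mu=1.0)
```

得到时延分布后，即可据此估算GQ级调度器所需的缓存大小。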
可选地,如图6所示,该第一网络设备的模型配置模块201还包括参数配置单元2013。该参数配置单元2013能够基于该设备映射模型和该服务等级需求模型,配置HQoS模型中各个调度 器的配置参数的初始值。
步骤104、采用HQoS模型分别调度接收到的每个业务流。
在本申请实施例中,第一网络设备通过其上行端口接收到来自服务端的业务流后,能够以业务流所属的用户和业务流的类型这两个特征对不同的业务流进行区分。例如,第一网络设备可以先确定接收到的业务流所属的用户,然后再基于业务流的类型确定该业务流的优先级。第一网络设备完成对业务流的识别之后,可以从HQoS模型包括的第二级调度器22(例如SQ级调度器)中确定出与该业务流所属用户对应的目标第二级调度器,并从该目标第二级调度器所连接的多个底层调度器中确定出与该业务流的优先级对应的目标底层调度器。之后,第一网络设备即可以将该业务流的报文添加至目标底层调度器中进行排队,并基于HQoS模型的第一级调度器21和目标第二级调度器对该目标底层调度器中的报文进行调度。
示例的,参考图2至图4,假设该第一网络设备接收到了来自服务端01的第一业务流,以及来自服务端02的第二业务流。其中,该第一业务流的接收方为第一终端03,第二业务流的接收方为第二终端04,该第一业务流和第二业务流所属的用户相同,且该第一业务流的优先级高于该第二业务流的优先级。则该第一网络设备可以将第一业务流的报文添加至第一底层调度器23中的第一队列,并将第二业务流的报文添加至第二底层调度器24中的第二队列。由于该两个业务流所属的用户相同,因此如图2、图4和图7所示,该第一底层调度器23和第二底层调度器24与同一个第二级调度器22(例如SQ级调度器)连接。
可以理解的是,该服务端01和服务端02可以部署于同一个服务器,也可以部署于不同的服务器。该第一终端03和第二终端04可以为同一终端,也可以为不同的终端,本申请实施例对此不做限定。
第一网络设备在将业务流的报文添加至对应的底层调度器后,作为一种可能的实现方式,第一网络设备可以依次通过第二级调度器22和第一级调度器21调度底层调度器中的报文。例如,第二级调度器22可以先按照其配置的调度策略(例如SP调度或WFQ调度等)将底层调度器中的报文调度至第二级调度器22中。然后,第一级调度器21再按照其配置的调度策略将第二级调度器22中的报文调度至第一级调度器21中。
作为另一种可能的实现方式,第一网络设备可以依次通过第一级调度器21和第二级调度器22调度底层调度器中的报文。例如,第一级调度器21可以先按照其配置的调度策略为第二级调度器22分配调度资源(例如带宽资源)。第二级调度器22进而可以基于第一级调度器21分配的调度资源为其所连接的各个底层调度器分配调度资源。最后,底层调度器即可基于分配的调度资源向第二级调度器22传输报文。
还可以理解的是,任一业务流在HQoS模型包括的各级调度器中的传输顺序为:底层调度器、第二级调度器22和第一级调度器21。对于HQoS模型包括多个级联的第二级调度器22的场景,报文在该多个级联的第二级调度器22的传输顺序为:从靠近底层调度器的方向到远离该底层调度器的方向依次传输。例如,假设如图2、图4和图7所示,HQoS模型包括依次级联的SQ级调度器、GQ级调度器和VI级调度器,则报文在该HQoS模型中各级调度器的传输顺序为:FQ级调度器→SQ级调度器→GQ级调度器→VI级调度器→DP级调度器。
假设该HQoS模型中,第一底层调度器23对第一业务流的PIR为1000Mbps,第二底层调度器24对第二业务流的PIR为800Mbps。则该第一底层调度器23在向SQ级调度器传输第一业务流时,可以将该第一业务流的数据传输速率限制在1000Mbps以下,第二底层调度器24在向SQ级调度器传输第二业务流时,可以将该第二业务流的数据传输速率限制在800Mbps以下。
参考图6,该第一网络设备还包括处理模块202、网络接口203和电源204。该处理模块202包括业务识别单元2021,该网络接口203与上游设备(例如骨干路由器10)连接,用于接收来自服务端(例如DC或CDN中的服务端)的业务流,并将业务流传输至业务识别单元2021。
该业务识别单元2021可以基于预先配置的业务流识别策略,识别接收到的各个业务流,并确定每个业务流的优先级。之后,该业务识别单元2021即可基于业务流所属的用户和业务流的优先级,将该业务流的报文添加至对应的底层调度器中。其中,该业务流识别策略可以包括下述方式中的至少一种:基于差分服务代码点(differentiated services code point,DSCP)定义QoS属性的技术,深度包检测(deep packet inspection,DPI)技术,基于流量识别模型的识别技术,以及基于流量特征的识别技术等。其中,该流量识别模型可以是基于AI算法训练得到的。
示例的,假设第一网络设备接收到的第一业务流、第二业务流和第三业务流所属的用户相同,但优先级互不相同。则如图7所示,该业务识别单元2021可以将第一业务流的报文D1添加至对应的第一底层调度器23中,将第二业务流的报文D2添加至对应的第二底层调度器24中,并将第三业务流的报文D3添加至对应的第三底层调度器25中。该第三底层调度器25也为FQ级调度器。
步骤105、分别监测每个业务流在第一网络设备与终端之间的传输质量。
在本申请实施例中，第一网络设备可以在业务流调度的过程中，实时监测每个业务流在第一网络设备与终端之间的传输质量。例如，假设该第一网络设备调度的业务流包括：第一业务流和第二业务流，则该第一网络设备可以监测该第一业务流在该第一网络设备与第一终端03之间的传输质量，并监测该第二业务流在第一网络设备与第二终端04之间的传输质量。其中，该传输质量的衡量参数可以包括时延、丢包率、数据传输速率和突发流量大小(burst size，BS)中的一个或多个。
如图6所示,该第一网络设备中的处理模块202还包括数据统计单元2022和计算单元2023。下文以第一业务流为例,对第一网络设备监测业务流的传输质量的过程进行说明。
首先，该数据统计单元2022可以统计第一业务流在至少一个调度器中的传输状态数据。计算单元2023可基于数据统计单元2022统计得到的传输状态数据，确定该第一业务流在该第一底层调度器23与该第一级调度器21之间的传输质量。由于该HQoS模型中各级调度器与流量调度系统中的各级网络设备之间存在映射关系，因此该第一业务流在该第一底层调度器23与该第一级调度器21之间的传输质量，即反映了该第一业务流在第一网络设备与第一终端03之间的传输质量。
可选地,该传输状态数据可以包括下述数据中的至少一种:该第一业务流所属队列新增的报文数量和发出的报文数量,该第一业务流所属队列的队列长度,该第一业务流所属队列占用的缓存,以及该第一业务流所属队列的丢包数。其中,第一业务流所属队列的丢包数PL可以满足:PL=P_in-P_out-P_buffer。P_in、P_out和P_buffer分别为第一业务流所属队列在统计时长内新增的报文数量,发出的报文数量,以及队列中缓存的报文数量。
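其中丢包数的统计口径可以直接写成如下草图（数值为假设示例）：

```python
def packet_loss(p_in, p_out, p_buffer):
    # PL = P_in - P_out - P_buffer:
    # 统计时长内新增报文数 - 发出报文数 - 队列中仍缓存的报文数
    return p_in - p_out - p_buffer

pl = packet_loss(p_in=1000, p_out=980, p_buffer=15)
```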
作为一种可选的实现方式，对于HQoS模型中的每个调度器均包括多个不同优先级的队列的场景，如图2所示，数据统计单元2022可以对每个调度器中第一业务流所属队列进行传输状态数据的统计。例如，可以对第一业务流所属队列进行报文计数、队列长度统计、缓存占用统计以及丢包数统计。其中，报文计数是指：统计该第一业务流所属队列新增的报文数量。计算单元2023可对各个调度器对该第一业务流的传输状态数据进行计算，从而得到该第一业务流在该第一网络设备的传输质量。
例如,对于时延,该计算单元2023可以将各个调度器中第一业务流所属队列的队列长度相加,然后再基于总的队列长度确定该第一业务流在第一网络设备和第一终端03之间传输的时延,该时延与该总的队列长度正相关。
对于丢包率,该计算单元2023可以将各个调度器中第一业务流所属队列在统计时长内的丢包数相加,然后再将总的丢包数与第一底层调度器23中该第一业务流所属队列在该统计时长内新增的报文数量相除,从而得到该第一业务流在该统计时长内的丢包率。其中,该统计时长可以等于第一业务流的传输时长,即该第一网络设备在接收到第一业务流的报文后可以持续对该第一业务流的传输状态数据进行统计。或者,该统计时长也可以为预先配置的固定时长,即第一网络设备可以每隔统计时长对该第一业务流的传输状态数据进行一次统计。
对于数据传输速率,该计算单元2023可以将第一底层调度器23中第一业务流所属队列在单位时长内发出的报文的总数据量与该单位时长相除,从而得到该第一业务流的数据传输速率。其中,该数据传输速率的单位可以为bps。该单位时长的数量级可以为秒级,例如,该单位时长可以为1秒。或者,为了确保统计得到的数据传输速率的精度,该单位时长的数量级还可以为毫秒级,例如可以为10毫秒。
对于突发流量大小,计算单元2023可以对第一底层调度器23中第一业务流所属队列在统计时长内连续新增的报文的数据量进行累加,从而得到该突发流量大小。其中,连续新增的报文是指:与前一个报文的到达间隔小于时间阈值(例如1微秒)的报文。
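上述几种传输质量衡量参数的计算方式可以草拟为如下Python示例（到达间隔阈值与各数值均为假设示例）：

```python
def loss_rate(drops, new_pkts):
    # 丢包率 = 统计时长内的丢包数 / 底层调度器队列新增的报文数
    return drops / new_pkts

def data_rate_bps(bytes_sent, unit_time_s):
    # 数据传输速率(bps) = 单位时长内发出报文的总字节数 * 8 / 单位时长
    # 单位时长可取毫秒级粒度, 以便捕捉瞬时的流量突发
    return bytes_sent * 8 / unit_time_s

def burst_size(arrivals, gap_threshold_us=1.0):
    # arrivals: [(到达时刻us, 报文字节数), ...]
    # 对到达间隔小于阈值(示例取1微秒)的"连续新增报文"的数据量累加, 取最大一段
    best = cur = 0
    last_t = None
    for t_us, size in arrivals:
        cur = cur + size if (last_t is not None and t_us - last_t < gap_threshold_us) else size
        best = max(best, cur)
        last_t = t_us
    return best
```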
可以理解的是,由于该HQoS模型中每一层级的调度器均包括多个,因此参考图2,数据统计单元2022还可以记录每个调度器的标识(identification,ID),以便区分不同调度器的统计数据。
作为另一种可选的实现方式,HQoS模型中的底层调度器包括队列,第一级调度器21和/或第二级调度器22中未设置不同优先级的队列。在该实现方式中,对于包括队列的每个调度器,该数据统计单元2022可以对该调度器中第一业务流所属的队列进行传输状态数据的统计。例如,若HQoS模型中仅底层调度器中包括队列,则数据统计单元2022可以仅对该底层调度器中的队列进行传输状态数据的统计。
可以理解的是,由于HQoS模型中的各级调度器能够按照一定的调度顺序对底层调度器中的报文进行调度,因此在该实现方式中,具有队列的调度器中各个队列的传输状态数据的统计结果即可准确反映HQoS模型对第一业务流的整体调度情况。
例如,假设HQoS模型中仅底层调度器中包括队列,则计算单元2023可以基于第一底层调度器23中第一业务流所属队列(即第一队列)的队列长度,确定该第一业务流在第一网络设备和第一终端03之间的时延。该计算单元2023可以将第一底层调度器23中第一队列的丢包数与该第一队列新增的报文数量相除,从而得到该第一业务流在第一网络设备和第一终端03之间的丢包率。并且,该计算单元2023可以将该第一队列在统计时长内发出的报文数量与该统计时长相除,从而得到该第一业务流在第一网络设备和第一终端03之间的数据传输速率。
示例的,假设流量调度系统中,某个具备无线WIFI功能的ONT 50的下行端口的最大带宽为1Gbps,该ONT 50的用户购买的是100Mbps的网络带宽套餐。若该用户正在使用第二终端04观看点播视频,则流量调度系统需要将来自服务端02的视频点播业务流调度至该第二终端04。下文以第二终端04距离ONT 50较远,且ONT 50向第二终端04传输视频点播业务流的数据传输速率最大达到20Mbps为例进行说明。
第一网络设备中的业务识别单元2021在通过网络接口203接收到该视频点播业务流后,可以将该视频点播业务流的报文添加至对应的底层调度器中的队列(例如第二底层调度器24中的第二队列)进行排队。假设该HQoS模型中仅底层调度器包括队列,则数据统计单元2022可以对该第二底层调度器24中第二队列在单位时长内发出的报文数量进行统计,并将统计得到的数值除以该单位时长,即可得到该视频点播业务流在该第一网络设备与该第二终端04之间的数据传输速率。例如,若以秒级粒度的单位时长进行统计,则该计算单元2023可以计算得出该视频流的数据传输速率为4Mbps。若以毫秒级粒度的单位时长进行统计,则该计算单元2023可以计算得出该视频流的数据传输速率最大能够达到20Mbps。
可以理解的是,第一网络设备除了可以监测业务流的端到端的传输质量,还可以基于业务流在每个调度器中的传输状态数据,确定业务流在该调度器对应的网络设备中的传输质量。
还可以理解的是,在本申请实施例中,该第一网络设备监测到的各个业务流的传输质量还可以用于进行可视化显示。例如,该第一网络设备可以将用于衡量各个业务流的传输质量的衡量参数发送至控制器以进行显示。或者,该第一网络设备可以与显示设备连接,第一网络设备可以通过该显示设备显示各个业务流的传输质量的衡量参数。
步骤106、检测第一业务流的传输质量是否满足与该第一业务流对应的服务等级需求。
在本申请实施例中,第一网络设备监测到第一业务流在该第一网络设备与该第一终端之间的传输质量后,即可将该传输质量与该第一业务流的服务等级需求进行对比,以判断该第一业务流的传输质量是否满足其服务等级需求。若第一网络设备确定第一业务流的传输质量不满足其服务等级需求,则第一网络设备可以执行步骤107。若第一网络设备确定第一业务流的传输质量满足其服务等级需求,则第一网络设备可以继续执行步骤105,即继续监测该第一业务流的传输质量。
若第一业务流的服务等级需求包括时延上限,则第一网络设备检测到第一业务流的端到端时延大于该时延上限时,可以确定该第一业务流的传输质量不满足其服务等级需求。若第一业务流的服务等级需求包括丢包率上限,则第一网络设备检测到第一业务流的端到端丢包率大于该丢包率上限时,可以确定该第一业务流的传输质量不满足其服务等级需求。若第一业务流的服务等级需求包括数据传输速率下限,则第一网络设备检测到第一业务流的端到端的数据传输速率小于该数据传输速率下限时,可以确定该第一业务流的传输质量不满足其服务等级需求。
示例的,参考图6,该处理模块202还可以包括传输质量监测单元2024,该传输质量监测单元2024可以用于检测该第一业务流的传输质量是否满足该第一业务流的服务等级需求。假设该第一业务流为游戏或视频会议的业务流,其服务等级需求中的时延上限为20ms。若第一网络设备确定出的该第一业务流在第一网络设备与第一终端03之间的时延为30ms,则由于该时延大于该时延上限20ms,因此该第一网络设备可以执行步骤107。
步骤107、检测该第二业务流是否满足流量整形的条件。
在本申请实施例中,第一网络设备在检测到优先级较高的第一业务流的传输质量不满足与该第一业务流对应的服务等级需求后,可以检测优先级较低的第二业务流是否满足流量整形的条件。若第一网络设备确定第二业务流满足流量整形的条件,则可以执行步骤108,即对该第二业务流进行流量整形;若第一网络设备确定第二业务流不满足流量整形的条件,则可以执行步骤109。
可选地，该流量整形的条件可以包括下述条件中的至少一种：HQoS模型对该第二业务流的传输速率阈值大于第二业务流的平均数据传输速率，第二业务流的当前数据传输速率大于第二业务流的数据传输速率的峰值阈值。基于上述步骤105中的描述可知，该第二业务流的当前数据传输速率是第一网络设备测得的。例如，第一网络设备可以基于第二业务流在HQoS模型中的传输状态数据计算得到该第二业务流的当前数据传输速率。
在本申请实施例中,第一网络设备能够在统计时长内实时监测第二业务流的数据传输速率。因此可以理解,该第二业务流的平均数据传输速率可以是指第二业务流的数据传输速率在该统计时长内的平均值。
若HQoS模型对第二业务流的传输速率阈值小于或等于该第二业务流的平均数据传输速率,则第一网络设备可以确定若继续降低第二业务流的传输速率阈值将会严重影响第二业务流的业务体验。因此,第一网络设备可以将传输速率阈值大于平均数据传输速率作为流量整形的条件之一。
若第二业务流的当前数据传输速率大于第二业务流的数据传输速率的峰值阈值,则第一网络设备可以确定该第二业务流当前存在流量突发。其中,流量突发具备的特征包括:在较短的时间段内(例如10毫秒)以较高的数据传输速率发送数据,然后再在较长的时间段内停止发送数据,或以较低的数据传输速率发送数据。以第二业务流为视频点播业务流为例,若以10毫秒作为计算数据传输速率的单位时长,则某个统计时长内该视频点播业务流的实时数据传输速率最大可以达到350Mbps。但是,该视频点播业务流在该统计时长内的平均数据传输速率仅为约3Mbps至5Mbps。其中,该统计时长的数量级可以为秒级。
由于流量突发会严重抢占其他业务流的带宽资源,因此对存在流量突发的业务流进行流量整形可以有效改善其他业务流的传输质量。相应的,第一网络设备还可以将当前数据传输速率大于第二业务流的数据传输速率的峰值阈值作为流量整形的条件之一。并且,为了便于第一网络设备准确检测第二业务流当前是否存在流量突发,上述步骤105中用于计算该第二业务流的数据传输速率的单位时长的数量级可以为毫秒级。
可选地,第二业务流的数据传输速率的峰值阈值可以是基于第二业务流的类型确定的。并且,不同类型的业务流的数据传输速率的峰值阈值可以不同。可以理解的是,各个业务流的数据传输速率的峰值阈值还可以基于各级网络设备的下行端口的最大端口缓存确定,且最大端口缓存越大,业务流的数据传输速率的峰值阈值越高。
以传输速率阈值包括PIR为例。若该第一网络设备检测到HQoS模型对第二业务流的PIR已小于或等于该第二业务流的平均数据传输速率,则可以确定该第二业务流已经不满足流量整形的条件,并可以执行步骤109。或者,若第一网络设备检测到第二业务流的当前数据传输速率的峰值小于第二业务流的数据传输速率的峰值阈值,则可以确定该第二业务流当前并不存在流量突发,因此也可以确定该第二业务流不满足流量整形的条件,并可以执行步骤109。
若该第一网络设备检测到第二业务流的PIR大于该第二业务流的平均数据传输速率,且第二业务流的数据传输速率的峰值大于第二业务流的峰值阈值,则可以确定该第二业务流满足流量整形的条件,并可以执行步骤108。
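步骤107的判定逻辑可以草拟为如下Python示例（两个条件的组合方式与各速率数值均为假设示例）：

```python
def should_shape(pir_bps, avg_rate_bps, cur_rate_bps, peak_threshold_bps):
    # 条件一: HQoS模型对该业务流的PIR仍大于其平均数据传输速率(尚有下调空间)
    # 条件二: 当前数据传输速率大于峰值阈值(当前存在流量突发)
    return pir_bps > avg_rate_bps and cur_rate_bps > peak_threshold_bps

# 视频点播流: PIR=800Mbps, 平均速率4Mbps, 毫秒级实测速率350Mbps, 峰值阈值100Mbps
shape_vod = should_shape(800e6, 4e6, 350e6, 100e6)
no_shape = should_shape(3e6, 4e6, 350e6, 100e6)   # PIR已不大于平均速率, 转而执行步骤109
```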
在本申请实施例中，不同业务流的优先级可以是基于时延需求确定的，即对时延要求越高的业务流的优先级越高。相应的，该第一业务流的服务等级需求中的时延上限可以小于该第二业务流的服务等级需求中的时延上限。其中，对于时延要求较高(即对实时性要求高)的业务流也可以称为时延敏感型业务流，对于时延要求较低(即对实时性要求低)的业务流也可以称为非时延敏感型业务流。因此在本申请实施例中，时延敏感型业务的业务流的优先级可以较高，非时延敏感型业务的业务流的优先级可以较低。
表1示出了流量调度系统中部分类型的业务流的流量大小以及对实时性的要求。参考表1,对于视频会议和交互式虚拟现实(virtual reality,VR)等交互型且流量较大的业务流,其服务等级需求一般包括高带宽、低丢包率和低时延。而对于视频点播和文件下载等非交互型且流量较大的业务流,其服务等级需求一般包括高带宽,但对丢包率和时延没有严格要求。对于语音和游戏等交互型且流量较小的业务流,其服务等级需求一般包括低丢包率和低时延,但对带宽没有严格要求。而对于社交聊天和电子邮件等流量较小的业务流,其对于带宽、丢包率和时延均没有严格要求。
表1
业务流 流量大小 实时性要求高低
视频会议、交互式VR
语音、游戏
视频点播、文件下载
社交聊天、电子邮件
研究表明，网络中流量较大的业务流(例如视频会议和交互式VR等)的数量较少，但占用了大多数的网络带宽。例如，表2示出了某骨干路由器的10G端口在时长为48秒的时间段内输出的流量的分布特征。其中，10G端口是指带宽为10吉比特每秒(Gbps)的下行出端口。假设该10G端口在该时间段内输出了1.5×10⁶条业务流，则参考表2可以看出，该1.5×10⁶条业务流中，48s内的总流量大于0.1兆字节(Mbyte，MB)的业务流(下文简称为大流)的数量占比仅约为1.46%。但是，这些大流占用的网络带宽的带宽占比达到了89.21%。其余数量占比约为98.5%的业务流所占用的网络带宽的带宽占比仅约为10%。
表2
总流量(MB) 数量 数量占比 带宽占比
>0.1 22000 1.46% 89.21%
在表1所示的各类型的业务流中,视频点播业务流和文件下载业务流是典型的大流,其具有数据传输速率高,持续时间长,报文间隔大,以及采用大突发模式发送流量(即存在流量突发)等特点。由于流量突发会严重抢占其他业务流的带宽资源,因此流量调度系统中的大流是导致网络拥塞和网络服务质量下降的主要原因。
在相关技术的流量调度系统中,网络设备在接收到不同类型的业务流后,并不会进行区分处理,即网络设备会将时延敏感型业务流与非时延敏感型业务流混合在同一队列中进行调度。由此,不仅使得时延敏感型业务流得不到优先处理,无法保障其服务等级需求,而且还可能导致其传输质量受非时延敏感型业务流的影响而恶化。比如在家庭宽带场景下,网络游戏的流量较小,如果将其与视频点播业务流或文件下载业务流这类大流混合在同一队列中进行调度,则网络游戏的传输质量会受到严重影响。例如,可能导致网络游戏的流量出现高时延和高丢包率等问题,进而严重影响用户的游戏体验。
而在本申请实施例中,由于可以基于业务流的时延需求确定业务流的优先级,并可以在高优先级的业务流的服务等级需求不满足时,对低优先级的业务流进行流量整形,因此能够优先保障时延敏感型业务流的服务等级需求。
可以理解的是,除了时延需求之外,业务流的优先级还可以基于服务等级需求中的其他参数确定,例如还可以基于对丢包率的需求确定,本申请实施例对此不做限定。
步骤108、调整HQoS模型对第二业务流的传输速率阈值为第一阈值。
第一网络设备在确定该第二业务流满足流量整形的条件后,即可将HQoS模型对第二业务流的传输速率阈值调节为第一阈值,且该第一阈值小于该第二业务流的当前数据传输速率,由此可以实现对第二业务流的流量整形。
可选地,为了避免流量整形对第二业务流的业务体验造成影响,该第一阈值可以大于或等于该第二业务流的平均数据传输速率。例如,该第一阈值可以是该第二业务流的平均数据传输速率的1.5倍。
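第一阈值的取值可以草拟为如下示例（1.5倍系数即正文中的示例系数，并非固定取值）：

```python
def first_threshold(avg_rate_bps, cur_rate_bps, factor=1.5):
    # 第一阈值 >= 平均数据传输速率(避免影响业务体验), 且须小于当前数据传输速率
    t = avg_rate_bps * factor
    assert avg_rate_bps <= t < cur_rate_bps
    return t

# 视频点播流: 平均速率4Mbps, 当前(毫秒级)速率350Mbps -> 第一阈值6Mbps
th = first_threshold(avg_rate_bps=4e6, cur_rate_bps=350e6)
```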
如前文所述,该第二业务流的传输速率阈值可以包括PIR、CAR、CIR和EIR中的一个或多个。若该传输速率阈值包括PIR、CAR、CIR和EIR中的多个速率,则第一网络设备需分别调整传输速率阈值中的每个速率。在一种可能的实现中,第一网络设备可以将传输速率阈值中的多个速率均调整为同一个第一阈值,即调整后的各个速率相等。在另一种可能的实现中,第一网络设备可以将传输速率阈值中的多个速率分别调整为各自对应的第一阈值,即调整后的各个速率可以不等。
图8是本申请实施例提供的一种第一业务流和第二业务流的数据传输速率的示意图。图8中的横轴表示时间t,纵轴表示数据传输速率v。参考图8可以看出,第一网络设备未对第二业务流进行流量整形时,该第二业务流存在流量突发的情况。并且,该第二业务流的流量突发的时段与该第一业务流的流量突发的时段有重叠,由此会严重影响该第一业务流的业务体验。该第一网络设备在对第二业务流进行流量整形后,可以实现对该第二业务流的数据传输速率的平滑处理,从而能够将网络带宽让位于优先级更高的第一业务流。
由于HQoS模型中的第一级调度器21、第二级调度器22和第二底层调度器24可以限制第二业务流的数据传输速率,因此在本申请实施例中,第一网络设备可以调整第一级调度器21、第二级调度器22和第二底层调度器24中至少一个调度器对该第二业务流的传输速率阈值为第一阈值。
作为一种可选的实现方式,该第一网络设备可以调整第二底层调度器24对该第二业务流的传输速率阈值为第一阈值。由于该第二底层调度器24仅需对其所包括的一个队列中的报文进行调度,而第一级调度器21和第二级调度器22均需对其所连接的多个调度器中的报文进行调度,因此仅调节第二底层调度器24对第二业务流的传输速率阈值,可以有效减少对其他业务流的影响。
示例的，假设流量调度系统中，某个具备无线WIFI功能的ONT 50的下行链路速率为1Gbps，与该ONT 50无线连接的第一终端03正在召开视频会议，与该ONT 50无线连接的第二终端04正在播放点播视频。则流量调度系统需要将来自服务端01的视频会议的视频流(即第一业务流)调度至该第一终端03，并需要将来自服务端02的视频点播业务流(即第二业务流)调度至该第二终端04。若由于视频点播业务流的流量突发，导致视频会议的视频流瞬时积压较为严重，第一网络设备中的传输质量监测单元2024可以监测到该视频会议的视频流的时延无法满足该视频会议流的服务等级需求。相应的，该处理模块202中的流量整形单元2025即可对该视频点播业务流进行流量整形。例如，若计算单元2023计算得到该视频点播业务流的平均数据传输速率为4Mbps，则流量整形单元2025可以将第二底层调度器24对该视频点播业务流的PIR调节为4Mbps。由此，实现了对该视频点播流的平滑处理，从而能够将网络带宽让位于对时延敏感的视频会议的视频流。
通过上述方式，还可以确保该第一网络设备发出的视频点播流的数据传输速率始终稳定在4Mbps以下。由此，即使下游的LSW 30、OLT 40和ONT 50均不具备业务流识别和QoS差异化调度能力，也可以确保该流量整形后的视频点播流在经过下游的各级网络设备时，能够始终将网络带宽让位于视频会议流，从而保障视频会议流的端到端服务等级需求。同时，视频点播流以4Mbps发出也满足该视频流的码率，并不会恶化该视频业务自身体验。
作为另一种可选的实现方式,该第一网络设备可以确定传输该第一业务流发生网络拥塞的目标调度器,并调整该目标调度器对第二业务流的传输速率阈值为第一阈值。其中,该目标调度器为第一级调度器21或第二级调度器22。通过调整该目标调度器对第二业务流的传输速率阈值,可以有效降低目标调度器传输第一业务流时的拥塞程度,进而改善第一业务流的传输质量。
对于HQoS模型中第一级调度器21和第二级调度器22均包括队列的场景,该第一网络设备可以对比各个调度器中第一业务流所属队列的传输状态数据,并基于该传输状态数据确定传输第一业务流发生网络拥塞的目标调度器。
例如,第一网络设备可以对比各个调度器中第一业务流所属队列的队列长度,并将队列长度最长的调度器确定为发生网络拥塞的目标调度器。或者,第一网络设备可以对比各个调度器中第一业务流所属队列的丢包率,并将丢包率最高的调度器确定为发生网络拥塞的目标调度器。
对于第一级调度器21和第二级调度器22中仅部分调度器包括队列的场景,该第一网络设备可以基于各个调度器的拓扑结构,以及具有队列的调度器中各个队列的传输状态数据,确定传输第一业务流发生网络拥塞的目标调度器。
例如,假设某个SQ级调度器中不包括队列,但该SQ级调度器连接的N个底层调度器中每个队列中缓存的报文的数据量均大于该队列的最大队列缓存,或各个队列缓存的报文的数据量之和大于该SQ级调度器的最大缓存,又或每个队列的丢包率均大于丢包率阈值,则第一网络设备可以确定该SQ级调度器为发生网络拥塞的目标调度器。
示例的,假设某SQ级调度器连接有4个底层调度器,该4个底层调度器中的队列的最大队列缓存分别为100字节、200字节、300字节和400字节,且该SQ级调度器的最大缓存为800字节。若某个时刻该4个队列实际缓存的报文的数据量分别为99字节、199字节、299字节和399字节,则由于该4个队列缓存的报文的数据量之和大于该SQ级调度器的最大缓存,因此第一网络设备可以确定该SQ级调度器为传输第一业务流发生网络拥塞的目标调度器。
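该示例中的拥塞判定可以草拟为如下Python示例（判定口径仅覆盖正文提到的缓存占用条件，丢包率条件此处省略）：

```python
def sq_congested(queue_buf_bytes, queue_buf_max_bytes, sq_buf_max_bytes):
    # 每个队列的缓存占用均超过其最大队列缓存, 或各队列占用之和超过SQ级调度器的最大缓存,
    # 即可将该SQ级调度器确定为传输第一业务流发生网络拥塞的目标调度器
    per_queue_over = all(b > m for b, m in zip(queue_buf_bytes, queue_buf_max_bytes))
    total_over = sum(queue_buf_bytes) > sq_buf_max_bytes
    return per_queue_over or total_over

# 正文示例: 4个队列分别缓存99/199/299/399字节, 总和996 > SQ级最大缓存800
congested = sq_congested([99, 199, 299, 399], [100, 200, 300, 400], 800)
```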
步骤109、调整HQoS模型对第三业务流的传输速率阈值为第二阈值。
在本申请实施例中,流量调度系统中承载的业务流还可以包括第三业务流,该第三业务流的优先级高于该第二业务流,且低于该第一业务流。相应的,在上述步骤104中,该第一网络设备还可以基于HQoS模型调度该第三业务流。
在上述步骤107中，第一网络设备若检测到第二业务流不满足流量整形的条件，则可以将HQoS模型对第三业务流的传输速率阈值调整为第二阈值，该第二阈值小于该第三业务流的当前数据传输速率，由此可以实现对该第三业务流的流量整形。该步骤109的实现过程可以参考上述步骤108，此处不再赘述。
可以理解的是，第一网络设备在检测到高优先级的第一业务流的传输质量不满足该第一业务流的服务等级需求之后，可以按照优先级由低到高的顺序，依次检测每个低优先级的业务流是否满足流量整形的条件。若检测到任一低优先级的业务流满足流量整形的条件，则可以将HQoS模型对该低优先级的业务流的传输速率阈值调低，以实现对该低优先级的业务流的流量整形。也即是，第一网络设备可以在确定该第三业务流满足流量整形的条件后再执行上述步骤109。
还可以理解的是,在本申请实施例中,对于该第一网络设备接收到的每个业务流,该第一网络设备均可以基于上述步骤106所示的方式,判断该业务流的传输质量是否满足与该业务流对应的服务等级需求。并且,该第一网络设备在检测到任一业务流的传输质量不满足与该业务流对应的服务等级需求时,均可以参考上述步骤107至步骤109所示的方法对优先级更低的业务流进行流量整形。
应当理解的是,本申请实施例提供的业务流的调度方法的步骤先后顺序可以进行适当调整,步骤也可以根据情况进行相应增减。例如,上述步骤102可以在步骤101之前执行。或者,上述步骤101和步骤102可以根据情况删除,相应的,可以直接配置该HQoS模型中各个调度器的调度参数的初始值。又或者,上述步骤107和步骤109可以根据情况删除,即第一网络设备可以直接对第二业务流进行流量整形。
综上所述,本申请实施例提供了一种业务流的调度方法,第一网络设备在检测到优先级较高的业务流的传输质量不满足与该业务流对应的服务等级需求时,可以调整HQoS模型对优先级较低的业务流的传输速率阈值为第一阈值。由于该第一阈值小于该优先级较低的业务流的当前数据传输速率,因此可以实现对该优先级较低的业务流的流量整形。进而,可以将该第一网络设备的下行端口的带宽让位于该优先级较高的业务流,以确保能够优先满足该优先级较高的业务流的服务等级需求。
由于第一网络设备还通过第二网络设备与终端连接,因此该第一网络设备对优先级较低的业务流进行流量整形后,该优先级较低的业务流能够以平稳的数据传输速率传输至下游的第二网络设备,即该优先级较低的业务流在下游的第二网络设备中也不会出现流量突发。由此,即使该下游的第二网络设备不具备业务流识别和QoS差异化调度的功能,也可以避免该优先级较低的业务流因流量突发而抢占优先级较高的业务流的带宽资源。也即是,通过第一网络设备对优先级较低的业务流进行流量整形,可以确保该优先级较高的业务流经过下游的第二网络设备时均能够获得较大的带宽,进而能够有效保障该优先级较高的业务流的服务等级需求。
并且,由于可以避免优先级较低的业务流在下游的第二网络设备中出现流量突发,因此可以降低对第二网络设备的缓存需求,进而降低第二网络设备的设备成本。又由于无需下游的第二网络设备具备业务流识别和QoS差异化调度的功能,因此无需对现网中不具备上述功能的第二网络设备进行更新即可实现本申请实施例提供的方案,即本申请实施例提供的方案具有较高的应用灵活性和兼容性。
此外,由于第一网络设备在对优先级较低的业务流进行流量整形时,可以确保降低后的传输速率阈值大于或等于该优先级较低的业务流的平均数据传输速率,因此可以避免影响该优先级较低的业务流的业务体验。
图9是本申请实施例提供的一种业务流的调度装置的结构示意图,该调度装置可以应用于上述方法实施例提供的第一网络设备中,且可以用于实现上述实施例提供的业务流的调度方法。例如,该调度装置可以实现图5中第一设备的功能以及执行图5所示的方法。该装置还可以是图1至图4中的SR/BRAS。如图9所示,该业务流的调度装置包括:
调度模块301用于基于HQoS模型分别调度第一业务流和第二业务流,其中,该第一业务流的优先级高于该第二业务流的优先级。该调度模块301的功能实现可以参考上述方法实施例中步骤104的相关描述。
调整模块302用于当第一业务流的传输质量不满足与该第一业务流对应的服务等级需求时,调整HQoS模型对第二业务流的传输速率阈值为第一阈值,该第一阈值小于该第二业务流的当前数据传输速率。也即是,该调整模块302可以用于对第二业务流进行流量整形。
该调整模块302的功能实现可以参考上述方法实施例中步骤108的相关描述。并且,该调整模块302可以用于实现图6所示实施例中传输质量监测单元2024和流量整形单元2025的功能。
在一种实现中,该第一阈值可以大于或等于该第二业务流的平均数据传输速率。
在一种实现中,该调整模块302可以用于:当该第一业务流的传输质量不满足与该第一业务流对应的服务等级需求,且该第二业务流的当前数据传输速率大于第二业务流的数据传输速率的峰值阈值时,调整该HQoS模型对该第二业务流的传输速率阈值为该第一阈值。
该调整模块302的功能实现还可以参考上述方法实施例中步骤107的相关描述。并且,该调整模块302还可以用于实现图6所示实施例中数据统计单元2022和计算单元2023的功能。
在一种实现中,该第二业务流的传输速率阈值可以包括PIR、CAR、CIR和EIR中的一个或多个。
在一种实现中,第一网络设备可以通过第二网络设备与终端连接;相应的,该HQoS模型可以包括:与该第一网络设备的下行端口对应的第一级调度器,与该第二网络设备的下行端口对应的第二级调度器,用于通过该第二网络设备的下行端口传输第一业务流的第一底层调度器,以及用于通过该第二网络设备的下行端口传输第二业务流的第二底层调度器。
在一种实现中,该调整模块302可以用于:调整该第一级调度器、该第二级调度器和该第二底层调度器中至少一个调度器对该第二业务流的传输速率阈值为第一阈值。
在一种实现中,该调整模块302可以用于确定传输该第一业务流发生网络拥塞的目标调度器,并调整该目标调度器对该第二业务流的传输速率阈值为第一阈值。其中,该目标调度器可以为第一级调度器或第二级调度器。
在一种实现中,该第一级调度器对第一业务流和第二业务流的传输速率阈值之和小于或等于该第一网络设备的下行端口的最大带宽;该第二级调度器对第一业务流和第二业务流的传输速率阈值之和小于或等于该第二网络设备的下行端口的最大带宽。
在一种实现中,该第一底层调度器对第一业务流的传输速率阈值小于或等于该第二网络设备的下行端口的最大带宽;该第二底层调度器对第二业务流的传输速率阈值小于或等于该第二网络设备的下行端口的最大带宽。
在一种实现中,该第一底层调度器可以包括用于缓存第一业务流的报文的第一队列,该第二底层调度器可以包括用于缓存第二业务流的报文的第二队列;该第一队列的最大队列缓存与第二队列的最大队列缓存之和小于或等于第二网络设备的下行端口的最大端口缓存。
在一种实现中,该第一业务流的服务等级需求中的时延上限可以小于该第二业务流的服务等级需求中的时延上限。也即是,业务流的优先级可以是基于业务流的时延需求划分的,且对时延要求越高的业务流的优先级可以越高。
在一种实现中,该调度模块301还可以用于基于HQoS模型调度第三业务流,该第三业务流的优先级高于第二业务流的优先级,且低于第一业务流的优先级。
该调整模块302还可以用于：当第二业务流的传输速率阈值小于或等于该第二业务流的平均数据传输速率，或者，当第二业务流的当前数据传输速率小于或等于第二业务流的数据传输速率的峰值阈值时，调整HQoS模型对第三业务流的传输速率阈值为第二阈值，该第二阈值小于该第三业务流的当前数据传输速率。
也即是,该调整模块302若确定第二业务流不满足流量整形的条件,则可以对优先级次低的第三业务流进行流量整形。该调整模块302的功能实现还可以参考上述方法实施例中步骤109的相关描述。
综上所述,本申请实施例提供了一种业务流的调度装置,该装置在优先级较高的业务流的传输质量不满足与该业务流对应的服务等级需求时,可以调整HQoS模型对优先级较低的业务流的传输速率阈值为第一阈值。由于该第一阈值小于该优先级较低的业务流的当前数据传输速率,因此可以实现对该优先级较低的业务流的流量整形。进而,可以将第一网络设备的下行端口的带宽让位于该优先级较高的业务流,以确保能够优先满足该优先级较高的业务流的服务等级需求。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的业务流的调度装置以及各模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
应理解的是,本申请实施例提供的业务流的调度装置还可以用专用集成电路(application-specific integrated circuit,ASIC)实现,或可编程逻辑器件(programmable logic device,PLD)实现,上述PLD可以是复杂程序逻辑器件(complex programmable logical device,CPLD),现场可编程门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。也可以通过软件实现上述方法实施例提供的业务流的调度方法,当通过软件实现上述方法实施例提供的业务流的调度方法时,本申请实施例提供的业务流的调度装置中的各个模块也可以为软件模块。
图10是本申请实施例提供的另一种业务流的调度装置的结构示意图,例如该装置可以是图1至图4中的SR/BRAS。该业务流的调度装置可以应用于上述实施例提供的第一网络设备中,例如图5所示的第一设备执行的方法和具备的功能。参考图10,该业务流的调度装置可以包括:处理器401、存储器402、网络接口403和总线404。其中,总线404用于连接处理器401、存储器402和网络接口403。通过网络接口403可以实现与其他设备之间的通信连接。存储器402中存储有计算机程序,该计算机程序用于实现各种应用功能。
应理解,在本申请实施例中,处理器401可以是CPU,该处理器401还可以是其他通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)、GPU或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者是任何常规的处理器等。
存储器402可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是ROM、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、EEPROM或闪存。易失性存储器可以是RAM,其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data date SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。
总线404除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都标为总线404。
该处理器401被配置为执行存储器402中存储的计算机程序,处理器401通过执行该计算机程序4021来实现上述方法实施例提供的业务流的调度方法,例如执行图5所示的第一网络设备执行的方法。在一种实现中,处理器401用于基于HQoS模型分别调度第一业务流和第二业务流,其中,该第一业务流的优先级高于该第二业务流的优先级;还用于当第一业务流的传输质量不满足与该第一业务流对应的服务等级需求时,调整HQoS模型对第二业务流的传输速率阈值为第一阈值,该第一阈值小于该第二业务流的当前数据传输速率。
图11是本申请实施例提供的又一种业务流的调度装置的结构示意图,例如可以是图1至图4中的SR/BRAS。该业务流的调度装置可以应用于上述实施例提供的第一网络设备中,例如,执行图5所示的第一网络设备执行的方法。如图11所示,该调度装置500可以包括:主控板501、接口板502和接口板503。多个接口板的情况下可以包括交换网板(图中未示出),该交换网板用于完成各接口板(接口板也称为线卡或业务板)之间的数据交换。
主控板501用于完成系统管理、设备维护、协议处理等功能。接口板502和503用于提供各种业务接口,例如,基于SONET/SDH的数据包(packet over SONET/SDH,POS)接口、千兆以太网(Gigabit Ethernet,GE)接口、异步传输模式(asynchronous transfer mode,ATM)接口等,并实现报文的转发。其中,SONET是指同步光纤网络(synchronous optical network),SDH是指同步数字体系(synchronous digital hierarchy)。主控板501上主要有3类功能单元:系统管理控制单元、系统时钟单元和系统维护单元。主控板501、接口板502以及接口板503之间通过系统总线与系统背板相连实现互通。接口板502上包括一个或多个处理器5021。处理器5021用于对接口板进行控制管理并与主控板501上的中央处理器5011进行通信,以及用于报文的转发处理。接口板502上的存储器5022用于存储转发表项,处理器5021通过查找存储器5022中存储的转发表项进行报文的转发。
该接口板502包括一个或多个网络接口5023用于接收上一跳节点发送的报文,并根据处理器5021的指示向下一跳节点发送处理后的报文。具体实现过程这里不再逐一赘述。该处理器5021的具体功能这里同样不再逐一赘述。
可以理解,如图11所示,本实施例中包括多个接口板,采用分布式的转发机制,这种机制下,接口板503的结构与该接口板502的结构基本相似,接口板503上的操作与该接口板502的操作基本相似,为了简洁,不再赘述。此外,可以理解的是,图11中的接口板中的处理器5021和/或5031可以是专用硬件或芯片,如网络处理器或者专用集成电路来实现上述功能,这种实现方式即为通常所说的转发面采用专用硬件或芯片处理的方式。在另外的实施方式中,该处理器5021和/或5031也可以采用通用的处理器,如通用的CPU来实现以上描述的功能。
此外应理解的是,主控板可能有一块或多块,有多块的时候可以包括主用主控板和备用主控板。接口板可能有一块或多块,该第一网络设备的数据处理能力越强,提供的接口板越多。多块接口板的情况下,该多块接口板之间可以通过一块或多块交换网板通信,有多块的时候可以共同实现负荷分担冗余备份。在集中式转发架构下,该第一网络设备可以不需要交换网板,接口板承担整个系统的业务数据的处理功能。在分布式转发架构下,该第一网络设备包括多块接口板,可以通过交换网板实现多块接口板之间的数据交换,提供大容量的数据交换和处理能力。所以,分布式架构的网络设备的数据接入和处理能力要大于集中式架构的网络设备。具体采用哪种架构,取决于具体的组网部署场景,此处不做任何限定。
具体的实施例中,存储器5022可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其它类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其它类型的动态存储设备,也可以是电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only Memory,CD-ROM)或其它光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘或者其它磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其它介质,但不限于此。存储器5022可以是独立存在,通过通信总线与处理器5021相连接。存储器5022也可以和处理器5021集成在一起。
存储器5022用于存储程序代码,并由处理器5021来控制执行,以执行上述实施例提供的业务流的调度方法。处理器5021用于执行存储器5022中存储的程序代码。程序代码中可以包括一个或多个软件模块。这一个或多个软件模块可以为上述图9所示实施例中的功能模块。
具体实施例中,该网络接口5023可以是使用任何网络接口一类的装置,用于与其它设备或通信网络通信,如以太网,无线接入网(radio access network,RAN),无线局域网(wireless local area networks,WLAN)等。
本申请实施例还提供了一种计算机可读存储介质，该计算机可读存储介质中存储有指令，当所述指令或代码在计算机上执行时，使得该计算机执行上述方法实施例提供的业务流的调度方法，例如执行图5所示的第一网络设备执行的方法。
本申请实施例还提供了一种包含指令的计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述方法实施例提供的业务流的调度方法,例如执行图5所示的第一网络设备执行的方法。
本申请实施例还提供了一种芯片,该芯片包括可编程逻辑电路和/或程序指令,该芯片可以用于执行上述方法实施例提供的业务流的调度方法,例如执行图5所示的第一网络设备执行的方法。可选地,该芯片可以为流量管理(traffic management,TM)芯片。
本申请实施例还提供了一种网络设备,该网络设备可以为上述实施例中的第一网络设备,且可以用于实现上述实施例提供的业务流的调度方法。
在一种可能的实现方式中,该网络设备可以包括上述实施例提供的业务流的调度装置。例如,该网络设备可以包括如图9、图10或图11所示的业务流的调度装置。在另一种可能的实现方式中,该网络设备可以包括上述实施例提供的芯片。
本申请实施例还提供了一种流量调度系统,该流量调度系统包括终端和第一网络设备,该第一网络设备用于向终端调度业务流。
在一种可能的实现方式中,该第一网络设备可以包括上述实施例提供的业务流的调度装置。例如,该第一网络设备可以包括如图9、图10或图11所示的业务流的调度装置。在另一种可能的实现方式中,该第一网络设备可以包括上述实施例提供的芯片。
可选地，参考图1至图4，该第一网络设备可以为流量调度系统中的SR/BRAS 20、LSW 30、OLT 40或者ONT 50。在本申请实施例中，该流量调度系统还可以包括一个或多个级联的第二网络设备，该第一网络设备可以通过该一个或多个级联的第二网络设备与终端连接。举例来说，参考图1至图4，在一种场景中，该第一网络设备为SR/BRAS 20，该流量调度系统还可以包括一个第二网络设备，该一个第二网络设备为LSW 30、OLT 40和ONT 50中任意一个设备。在另一种场景中，该第一网络设备为SR/BRAS 20，该流量调度系统还可以包括两个第二网络设备，该两个第二网络设备为LSW 30、OLT 40和ONT 50中任意两个设备。在又一种场景中，该第一网络设备为SR/BRAS 20，该流量调度系统还可以包括依次级联的三个第二网络设备，该三个第二网络设备分别为LSW 30、OLT 40和ONT 50。也即是，SR/BRAS 20作为第一网络设备可以通过依次级联的LSW 30、OLT 40和ONT 50与终端连接。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意结合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如:同轴电缆、光纤、数据用户线(digital subscriber line,DSL))或无线(例如:红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如:软盘、硬盘、磁带)、光介质(例如:数字通用光盘(digital versatile disc,DVD))、或者半导体介质(例如:固态硬盘(solid state disk,SSD))等。
应当理解的是,本文提及的术语“至少一个”的含义是指一个或多个,“多个”是指两个或两个以上。在本申请的描述中,为了便于清楚描述本申请实施例的技术方案,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定。本文中术语“系统”和“网络”可互换使用。
还应当理解的是,在本文中提及的“和/或”,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
以上所述,仅为本申请的可选实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (27)

  1. 一种业务流的调度方法,其特征在于,应用于第一网络设备,所述方法包括:
    基于层次化服务质量HQoS模型分别调度第一业务流和第二业务流,其中,所述第一业务流的优先级高于所述第二业务流的优先级;
    当所述第一业务流的传输质量不满足与所述第一业务流对应的服务等级需求时,调整所述HQoS模型对所述第二业务流的传输速率阈值为第一阈值,所述第一阈值小于所述第二业务流的当前数据传输速率。
  2. 根据权利要求1所述的方法,其特征在于,所述第一阈值大于或等于所述第二业务流的平均数据传输速率。
  3. 根据权利要求1或2所述的方法,其特征在于,所述调整所述HQoS模型对所述第二业务流的传输速率阈值为第一阈值,包括:
    当所述第一业务流的传输质量不满足与所述第一业务流对应的服务等级需求,且所述第二业务流的当前数据传输速率大于所述第二业务流的数据传输速率的峰值阈值时,调整所述HQoS模型对所述第二业务流的传输速率阈值为所述第一阈值。
  4. 根据权利要求1至3任一所述的方法,其特征在于,所述第二业务流的传输速率阈值包括峰值信息速率PIR、承诺访问速率CAR、承诺信息速率CIR和额外信息速率EIR中的一个或多个。
  5. 根据权利要求1至4任一所述的方法,其特征在于,所述第一网络设备通过第二网络设备与终端连接;
    所述HQoS模型包括:与所述第一网络设备的下行端口对应的第一级调度器,与所述第二网络设备的下行端口对应的第二级调度器,用于通过所述第二网络设备的下行端口传输所述第一业务流的第一底层调度器,以及用于通过所述第二网络设备的下行端口传输所述第二业务流的第二底层调度器。
  6. 根据权利要求5所述的方法,其特征在于,所述调整所述HQoS模型对所述第二业务流的传输速率阈值为第一阈值,包括:
    调整所述第一级调度器、所述第二级调度器和所述第二底层调度器中至少一个调度器对所述第二业务流的传输速率阈值为所述第一阈值。
  7. 根据权利要求5所述的方法,其特征在于,所述调整所述HQoS模型对所述第二业务流的传输速率阈值为第一阈值,包括:
    确定传输所述第一业务流发生网络拥塞的目标调度器,所述目标调度器为所述第一级调度器或所述第二级调度器;
    调整所述目标调度器对所述第二业务流的传输速率阈值为所述第一阈值。
  8. 根据权利要求5至7任一所述的方法,其特征在于,所述第一级调度器对所述第一业务流和所述第二业务流的传输速率阈值之和小于或等于所述第一网络设备的下行端口的最大带宽;
    所述第二级调度器对所述第一业务流和所述第二业务流的传输速率阈值之和小于或等于所述第二网络设备的下行端口的最大带宽。
  9. 根据权利要求5至8任一所述的方法,其特征在于,所述第一底层调度器包括用于缓存所述第一业务流的报文的第一队列,所述第二底层调度器包括用于缓存所述第二业务流的报文的第二队列;
    所述第一队列的最大队列缓存与所述第二队列的最大队列缓存之和小于或等于所述第二网络设备的下行端口的最大端口缓存。
  10. 根据权利要求1至9任一所述的方法,其特征在于,所述第一业务流的服务等级需求中的时延上限小于所述第二业务流的服务等级需求中的时延上限。
  11. 根据权利要求1至10任一所述的方法,其特征在于,所述方法还包括:
    基于所述HQoS模型调度第三业务流,所述第三业务流的优先级高于所述第二业务流的优先级,且低于所述第一业务流的优先级;
    当所述第二业务流的传输速率阈值小于或等于所述第二业务流的平均数据传输速率,或者,当所述第二业务流的当前数据传输速率小于或等于所述第二业务流的数据传输速率的峰值阈值,调整所述HQoS模型对所述第三业务流的传输速率阈值为第二阈值,所述第二阈值小于所述第三业务流的当前数据传输速率。
  12. 一种业务流的调度装置,其特征在于,应用于第一网络设备,所述调度装置包括:
    调度模块,用于基于层次化服务质量HQoS模型分别调度第一业务流和第二业务流,其中,所述第一业务流的优先级高于所述第二业务流的优先级;
    调整模块,用于当所述第一业务流的传输质量不满足与所述第一业务流对应的服务等级需求时,调整所述HQoS模型对所述第二业务流的传输速率阈值为第一阈值,所述第一阈值小于所述第二业务流的当前数据传输速率。
  13. 根据权利要求12所述的调度装置,其特征在于,所述第一阈值大于或等于所述第二业务流的平均数据传输速率。
  14. 根据权利要求12或13所述的调度装置,其特征在于,所述调整模块,用于:
    当所述第一业务流的传输质量不满足与所述第一业务流对应的服务等级需求,且所述第二业务流的当前数据传输速率大于所述第二业务流的数据传输速率的峰值阈值时,调整所述HQoS模型对所述第二业务流的传输速率阈值为所述第一阈值。
  15. 根据权利要求12至14任一所述的调度装置,其特征在于,所述第二业务流的传输速率阈值包括峰值信息速率PIR、承诺访问速率CAR、承诺信息速率CIR和额外信息速率EIR中的一个或多个。
  16. 根据权利要求12至15任一所述的调度装置,其特征在于,所述第一网络设备通过第二网络设备与终端连接;
    所述HQoS模型包括:与所述第一网络设备的下行端口对应的第一级调度器,与所述第二网络设备的下行端口对应的第二级调度器,用于通过所述第二网络设备的下行端口传输所述第一业务流的第一底层调度器,以及用于通过所述第二网络设备的下行端口传输所述第二业务流的第二底层调度器。
  17. 根据权利要求16所述的调度装置,其特征在于,所述调整模块,用于:
    调整所述第一级调度器、所述第二级调度器和所述第二底层调度器中至少一个调度器对所述第二业务流的传输速率阈值为所述第一阈值。
  18. 根据权利要求17所述的调度装置,其特征在于,所述调整模块,用于:
    确定传输所述第一业务流发生网络拥塞的目标调度器,所述目标调度器为所述第一级调度器或所述第二级调度器;
    调整所述目标调度器对所述第二业务流的传输速率阈值为所述第一阈值。
  19. 根据权利要求16至18任一所述的调度装置,其特征在于,所述第一级调度器对所述第 一业务流和所述第二业务流的传输速率阈值之和小于或等于所述第一网络设备的下行端口的最大带宽;
    所述第二级调度器对所述第一业务流和所述第二业务流的传输速率阈值之和小于或等于所述第二网络设备的下行端口的最大带宽。
  20. 根据权利要求16至19任一所述的调度装置,其特征在于,所述第一底层调度器包括用于缓存所述第一业务流的报文的第一队列,所述第二底层调度器包括用于缓存所述第二业务流的报文的第二队列;
    所述第一队列的最大队列缓存与所述第二队列的最大队列缓存之和小于或等于所述第二网络设备的下行端口的最大端口缓存。
  21. 根据权利要求12至20任一所述的调度装置,其特征在于,所述第一业务流的服务等级需求中的时延上限小于所述第二业务流的服务等级需求中的时延上限。
  22. 根据权利要求12至21任一所述的调度装置,其特征在于,
    所述调度模块,还用于基于所述HQoS模型调度第三业务流,所述第三业务流的优先级高于所述第二业务流的优先级,且低于所述第一业务流的优先级;
    所述调整模块,还用于当所述第二业务流的传输速率阈值小于或等于所述第二业务流的平均数据传输速率,或者,当所述第二业务流的当前数据传输速率的峰值小于或等于所述第二业务流的数据传输速率的峰值阈值,调整所述HQoS模型对所述第三业务流的传输速率阈值为第二阈值,所述第二阈值小于所述第三业务流的当前数据传输速率。
  23. 一种业务流的调度装置,其特征在于,所述业务流的调度装置包括存储器和处理器,所述存储器用于存储计算机程序或代码,所述处理器用于执行所述计算机程序或代码以实现如权利要求1至11任一所述的业务流的调度方法。
  24. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质包括指令或代码,当所述指令或代码在计算机上执行时,使得所述计算机执行如权利要求1至11任一所述的业务流的调度方法。
  25. 一种芯片,其特征在于,所述芯片包括可编程逻辑电路和/或程序指令,所述芯片用于执行如权利要求1至11任一所述的业务流的调度方法。
  26. 一种流量调度系统,其特征在于,所述流量调度系统包括终端和第一网络设备,所述第一网络设备用于调度所述终端的第一业务流和第二业务流,所述第一网络设备包括如权利要求12至23任一所述的调度装置,或者如权利要求25所述的芯片。
  27. 根据权利要求26所述的流量调度系统,其特征在于,所述流量调度系统还包括第二网络设备,所述第一网络设备通过所述第二网络设备与所述终端连接。
PCT/CN2021/137364 2020-12-24 2021-12-13 业务流的调度方法、装置及系统 WO2022135202A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21909199.8A EP4262313A4 (en) 2020-12-24 2021-12-13 METHOD, DEVICE AND SYSTEM FOR PLANNING A SERVICE FLOW
US18/339,273 US20230336486A1 (en) 2020-12-24 2023-06-22 Service flow scheduling method and apparatus, and system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202011550634.6 2020-12-24
CN202011550634 2020-12-24
CN202110272534.XA CN114679792A (zh) 2020-12-24 2021-03-12 业务流的调度方法、装置及系统
CN202110272534.X 2021-03-12

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/339,273 Continuation US20230336486A1 (en) 2020-12-24 2023-06-22 Service flow scheduling method and apparatus, and system

Publications (1)

Publication Number Publication Date
WO2022135202A1 true WO2022135202A1 (zh) 2022-06-30

Family

ID=82070319

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/137364 WO2022135202A1 (zh) 2020-12-24 2021-12-13 业务流的调度方法、装置及系统

Country Status (4)

Country Link
US (1) US20230336486A1 (zh)
EP (1) EP4262313A4 (zh)
CN (1) CN114679792A (zh)
WO (1) WO2022135202A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116112256A (zh) * 2023-02-08 2023-05-12 电子科技大学 一种面向应用加密流量识别的数据处理方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101938403A (zh) * 2009-06-30 2011-01-05 中国电信股份有限公司 多用户多业务的服务质量的保证方法和业务接入控制点
US20130074087A1 (en) * 2011-09-15 2013-03-21 International Business Machines Corporation Methods, systems, and physical computer storage media for processing a plurality of input/output request jobs
CN104079501A (zh) * 2014-06-05 2014-10-01 Shenzhen Bangyan Information Technology Co., Ltd. Queue scheduling method based on multiple priorities
CN110266604A (zh) * 2019-07-09 2019-09-20 Comba Telecom Systems (China) Ltd. Air interface bandwidth adaptive control method and apparatus, and communication device
US20200112483A1 (en) * 2018-10-04 2020-04-09 Sandvine Corporation System and method for intent based traffic management

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7675926B2 (en) * 2004-05-05 2010-03-09 Cisco Technology, Inc. Hierarchical QoS behavioral model
CN103634223B (zh) * 2013-12-03 2016-11-23 Beijing Dongtu Technology Co., Ltd. Dynamic transmission control method and apparatus based on network service flows
EP3442180B1 (en) * 2016-04-28 2020-11-11 Huawei Technologies Co., Ltd. Congestion processing method, host, and system
CN110177054B (zh) * 2019-05-22 2022-08-19 New H3C Technologies Co., Ltd. Port queue scheduling method and apparatus, network controller, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4262313A4

Also Published As

Publication number Publication date
EP4262313A1 (en) 2023-10-18
EP4262313A4 (en) 2024-05-01
US20230336486A1 (en) 2023-10-19
CN114679792A (zh) 2022-06-28

Similar Documents

Publication Publication Date Title
US11316795B2 Network flow control method and network device
CN108259383B Data transmission method and network device
US8638664B2 Shared weighted fair queuing (WFQ) shaper
US9185047B2 Hierarchical profiled scheduling and shaping
WO2017024824A1 Traffic management method and apparatus based on link aggregation
US8542586B2 Proportional bandwidth sharing of the excess part in a MEF traffic profile
CN107454015B QoS control method and system based on the OF-DiffServ model
US8139485B2 Logical transport resource traffic management
WO2021057447A1 Method, device, and system for determining the required bandwidth for data stream transmission
US8547846B1 Method and apparatus providing precedence drop quality of service (PDQoS) with class-based latency differentiation
US9197570B2 Congestion control in packet switches
Irazabal et al. Dynamic buffer sizing and pacing as enablers of 5G low-latency services
EP3395023B1 Dynamically optimized queue in data routing
JP2020072336A Packet transfer apparatus, method, and program
US20230336486A1 Service flow scheduling method and apparatus, and system
Zorić et al. Fairness of scheduling algorithms for real-time traffic in DiffServ based networks
CN109995608B Network rate calculation method and apparatus
CN112751776A Congestion control method and related apparatus
CN114501544A Data transmission method, apparatus, and storage medium
US12028265B2 Software-defined guaranteed-latency networking
Liu et al. Queue management algorithm for multi-terminal and multi-service models of priority
Laidig et al. Dynamic Deterministic Quality of Service Model with Behavior-Adaptive Latency Bounds
EP2667554B1 Hierarchal maximum information rate enforcement
CN116260769A Time-deterministic traffic burst shaping method for distributed networks
Carvalho et al. PACE your network: Fair and controllable multi-tenant data center networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21909199

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021909199

Country of ref document: EP

Effective date: 20230712

NENP Non-entry into the national phase

Ref country code: DE