WO2021254202A1 - Data scheduling method, device and storage medium (数据调度方法、设备和存储介质) - Google Patents

Data scheduling method, device and storage medium

Info

Publication number
WO2021254202A1
WO2021254202A1 (PCT/CN2021/098666, CN2021098666W)
Authority
WO
WIPO (PCT)
Prior art keywords
target
scheduling
tunnel
scheduling node
pseudowire
Prior art date
Application number
PCT/CN2021/098666
Other languages
English (en)
French (fr)
Inventor
张勇 (ZHANG Yong)
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Priority to BR112022020001A2 (Brazil)
Publication of WO2021254202A1

Classifications

    • H — ELECTRICITY; H04 — ELECTRIC COMMUNICATION TECHNIQUE; H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 — Traffic control in data switching networks
    • H04L47/50 — Queue scheduling
    • H04L47/62 — Queue scheduling characterised by scheduling criteria
    • H04L47/6215 — Individual queue per QoS, rate or priority
    • H04L47/625 — Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6275 — based on priority
    • H04L47/6295 — using multiple queues, one for each individual QoS, connection, flow or priority
    • H04L47/70 — Admission control; Resource allocation
    • H04L47/82 — Miscellaneous aspects
    • H04L47/825 — Involving tunnels, e.g. MPLS

Definitions

  • the embodiments of the present application relate to the technical field of hierarchical flow control, and in particular, to a data scheduling method, device, and storage medium.
  • Quality of Service (QoS) technology is a mechanism by which a network solves problems such as network delay and congestion.
  • However, current QoS technology can only map a message to different sending queues according to the priority field in the message, and then implement sending-queue scheduling and bandwidth allocation through different scheduling algorithms among the sending queues.
  • The above QoS technology schedules only on the basis of message priority; this single criterion limits the application scenarios of current QoS technology.
  • the embodiments of the present application propose a data scheduling method, device, and storage medium.
  • the embodiment of the present application provides a data scheduling method.
  • The method includes the following steps: determining a target tunnel corresponding to data to be scheduled according to user indication information of the data to be scheduled; putting the data to be scheduled into a target priority sub-queue corresponding to the priority field of the data to be scheduled; scheduling, through a target tunnel scheduling node corresponding to the target tunnel, the target priority queue to which the target priority sub-queue belongs; and scheduling, through a target port scheduling node corresponding to a target port, the data in the target tunnel scheduling node; where the target port is the port corresponding to the target tunnel.
  • the embodiment of the present application also proposes a data scheduling device.
  • The device includes a memory, a processor, a program stored in the memory and runnable on the processor, and a data bus for connection and communication between the processor and the memory.
  • When the program is executed by the processor, the steps of the foregoing method are implemented.
  • the embodiment of the present application provides a storage medium for computer-readable storage.
  • The storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the steps of the foregoing method.
  • FIG. 1 is a flowchart of a data scheduling method provided by an embodiment
  • FIG. 2 is a flowchart of a data scheduling method provided by another embodiment
  • FIG. 3 is a schematic structural diagram of a data scheduling device provided by an embodiment
  • FIG. 4 is a schematic structural diagram of a service access unit in a data scheduling device provided by an embodiment
  • FIG. 5 is a schematic structural diagram of a hierarchical scheduling unit in a data scheduling device provided by an embodiment
  • FIG. 6 is a schematic diagram of the principle of bandwidth allocation when the tunnel scheduling node is an empty node according to an embodiment
  • FIG. 7 is a schematic diagram of the principle of bandwidth allocation when the pseudowire scheduling node is an empty node according to an embodiment
  • FIG. 8 is a flowchart of a data scheduling method provided by another embodiment
  • FIG. 9 is a schematic diagram corresponding to FIG. 8
  • FIG. 10 is a flowchart of a data scheduling method provided by still another embodiment
  • FIG. 11 is a schematic diagram corresponding to FIG. 10
  • FIG. 12 is a flowchart of a data scheduling method provided by another embodiment
  • FIG. 13 is a schematic diagram corresponding to FIG. 12
  • FIG. 14 is a flowchart of a data scheduling method provided by another embodiment
  • FIG. 15 is a schematic diagram corresponding to FIG. 14
  • FIG. 16 is a schematic structural diagram of a data scheduling device provided by an embodiment
  • FIG. 17 is a schematic structural diagram of a data scheduling device provided by an embodiment.
  • This embodiment provides a data scheduling method, which includes: determining the target tunnel corresponding to the data to be scheduled according to user indication information of the data to be scheduled; putting the data to be scheduled into the target priority sub-queue corresponding to the priority field of the data to be scheduled; scheduling, through the target tunnel scheduling node corresponding to the target tunnel, the target priority queue to which the target priority sub-queue belongs; and scheduling, through the target port scheduling node corresponding to the target port, the data in the target tunnel scheduling node, where the target port is the port corresponding to the target tunnel.
  • Because the target tunnel is determined according to the user indication information of the data to be scheduled, the method can combine the priority field of the data to be scheduled with the user indication information when scheduling the data; that is, different flow control methods are provided for different users, meeting users' needs, improving the scheduling flexibility of QoS technology, and at the same time reducing the maintenance cost of operators.
  • FIG. 1 is a flowchart of a data scheduling method provided by an embodiment. This embodiment is applicable to scenarios where data is scheduled. The method can be executed by a data scheduling apparatus, which can be implemented by software and/or hardware and integrated into a communication device with a packet switching function; the communication device with a packet switching function can be set in a packet switching system. As shown in FIG. 1, the data scheduling method provided in this embodiment includes the following steps:
  • Step 101 Determine the target tunnel corresponding to the data to be scheduled according to the user indication information of the data to be scheduled.
  • the communication device with the packet switching function may be an optical transport network (Optical Transport Network, OTN) device.
  • the OTN device can be connected to the switch through a Packet Transport Network (PTN) device.
  • the data to be scheduled in this embodiment may be data in a multi-protocol label switching (Multi-Protocol Label Switching, MPLS) system.
  • The user indication information may include: the device port that receives the data to be scheduled and/or the number of the Virtual Local Area Network (VLAN) where the data to be scheduled is located.
  • the device port that receives the to-be-scheduled data refers to the port through which a communication device with a packet switching function receives the to-be-scheduled data. It should be noted that the user indication information may or may not be encapsulated in the to-be-scheduled data, and this embodiment is not limited thereto.
  • the to-be-scheduled data of different users is transmitted on different tunnels. That is, the tunnel corresponds to the user.
  • the data scheduling apparatus may determine the target tunnel corresponding to the to-be-scheduled data according to the user indication information in the to-be-scheduled data. In this embodiment, there may be multiple data to be scheduled. In step 101, the data scheduling apparatus may determine the target tunnel corresponding to the data to be scheduled according to the user indication information of each data to be scheduled.
  • the user indication information may be an identifier of a device port, for example, "port 1".
  • Step 102 Put the to-be-scheduled data into the target priority sub-queue corresponding to the priority field of the to-be-scheduled data.
  • the to-be-scheduled data itself is encapsulated with a priority field.
  • the length of the priority field may be 3 bits.
  • According to the priority field, the corresponding priority can be determined, and each priority corresponds to a priority sub-queue. Therefore, the target priority sub-queue corresponding to the data to be scheduled can be determined according to the priority field of the data to be scheduled.
  • the target priority sub-queue belongs to the target priority queue.
  • the target priority queue in this embodiment includes multiple priority sub-queues including the target priority sub-queue.
  • The priority queue may have a mapping relationship with the tunnel. After the target tunnel corresponding to the data to be scheduled is determined in step 101, in step 102 the target priority queue corresponding to the target tunnel can be determined according to the mapping relationship between priority queues and tunnels, and then the corresponding target priority sub-queue is determined according to the priority field of the data to be scheduled.
  • the target priority queue includes 8 priority sub-queues: sub-queue 0, sub-queue 1, sub-queue 2, sub-queue 3, ..., sub-queue 7, and the priority field of the data to be scheduled is 110.
  • From the priority field, it is determined that the priority of the data to be scheduled is 6 and that the corresponding sub-queue is sub-queue 6. Therefore, in step 102, the data to be scheduled is put into sub-queue 6.
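  • The mapping in the example above can be sketched in a few lines. The following is a minimal illustration assuming a 3-bit priority field and 8 sub-queues; the queue structure and names are illustrative, not taken from the application:

```python
from collections import deque

NUM_SUB_QUEUES = 8  # one sub-queue per 3-bit priority value (0..7)

# The target priority queue is a collection of 8 priority sub-queues.
target_priority_queue = [deque() for _ in range(NUM_SUB_QUEUES)]

def enqueue(data: bytes, priority_field: int) -> int:
    """Put the data to be scheduled into the sub-queue matching its priority field."""
    priority = priority_field & 0b111      # 3-bit field, e.g. 0b110 -> priority 6
    target_priority_queue[priority].append(data)
    return priority

assert enqueue(b"voip-frame", 0b110) == 6  # lands in sub-queue 6, as in the example above
```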
  • the data to be scheduled is put into the target priority sub-queue and waits for scheduling.
  • Step 103 Scheduling the target priority queue to which the target priority sub-queue belongs through the target tunnel scheduling node corresponding to the target tunnel.
  • the target tunnel corresponds to the target tunnel scheduling node.
  • the target tunnel scheduling node is used to schedule data in the target priority queue.
  • the target priority sub-queue corresponds to the third target minimum required bandwidth.
  • the target priority queue is a collection of at least two priority sub-queues, and the at least two priority sub-queues include the target priority sub-queue. That is, the target priority queue is a set of multiple priority sub-queues including the target priority sub-queue, and each priority sub-queue corresponds to the third minimum required bandwidth.
  • The target tunnel scheduling node corresponds to the first target maximum required bandwidth. When the priority sub-queue is the target priority sub-queue, the third target minimum required bandwidth and the third minimum required bandwidth refer to the same concept.
  • When the sum of the traffic of all priority sub-queues included in the target priority queue is greater than the first target maximum required bandwidth, it indicates that the target tunnel is congested. In this implementation, it is necessary to determine the third actual bandwidth corresponding to the target priority sub-queue according to the third target minimum required bandwidth corresponding to the target priority sub-queue, and to send the data in the target priority sub-queue according to the third actual bandwidth.
  • The process of determining the third actual bandwidth may be: determining the sum of the third minimum required bandwidths of all priority sub-queues in the target priority queue connected to the target tunnel scheduling node; allocating, according to a preset third bandwidth allocation rule, the bandwidth remaining after subtracting that sum from the first target maximum required bandwidth, thereby determining the third target allocated bandwidth corresponding to the target priority sub-queue; and determining the sum of the third target minimum required bandwidth and the third target allocated bandwidth corresponding to the target priority sub-queue as the third actual bandwidth corresponding to the data in the target priority sub-queue.
  • This allocation method can ensure that when the target tunnel is congested, the minimum required bandwidth of each priority sub-queue is guaranteed first.
  • The preset third bandwidth allocation rule here may be to allocate all the remaining bandwidth to several priority sub-queues with higher priority, or to allocate it to each priority sub-queue according to a certain preset ratio; this embodiment is not limited thereto.
  • the sum of the traffic of all priority sub-queues included in the target priority queue refers to the sum of the traffic of data in all priority sub-queues included in the target priority queue.
  • the traffic or bandwidth in this embodiment all refer to the data transmission rate.
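  • A minimal numeric sketch of the allocation described above, assuming the preset third bandwidth allocation rule splits the remaining bandwidth in proportion to the minimum required bandwidths (the application also allows giving the remainder to higher-priority sub-queues); the same computation pattern recurs later at the pseudowire, tunnel, and port levels:

```python
def actual_bandwidths(max_bw: float, min_bws: list[float]) -> list[float]:
    """Guarantee each sub-queue its minimum required bandwidth, then share the
    leftover (max_bw minus the sum of minimums) in proportion to those minimums.
    The proportional split is only one possible preset allocation rule."""
    total_min = sum(min_bws)
    remaining = max_bw - total_min
    return [m + remaining * (m / total_min) for m in min_bws]

# Tunnel maximum required bandwidth of 80 Mbps; three congested sub-queues with
# minimum required bandwidths of 10, 20 and 30 Mbps (illustrative numbers).
print(actual_bandwidths(80.0, [10.0, 20.0, 30.0]))  # [13.33..., 26.66..., 40.0]
```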
  • After the target tunnel scheduling node schedules the data in the target priority queue, the data flows into the target tunnel. Since the target tunnel corresponds to the target tunnel scheduling node, it can also be said that the data flows into the target tunnel scheduling node.
  • Step 104 Schedule data in the target tunnel scheduling node through the target port scheduling node corresponding to the target port.
  • the target port is the port corresponding to the target tunnel.
  • enterprise users' VoIP calls need to ensure low latency and high real-time performance.
  • individual users browsing the web are not sensitive to the browsing network speed. Therefore, separate bandwidth control is required for data of different users.
  • a target port scheduling node corresponding to the target port is set.
  • the target port scheduling node may be connected to at least two tunnel scheduling nodes, and the at least two tunnel scheduling nodes include the target tunnel scheduling node. That is, the target port scheduling node is connected to multiple tunnel scheduling nodes including the target tunnel scheduling node.
  • Since each tunnel corresponds to the data of one user, the target port scheduling node can schedule multiple tunnels, that is, the data of multiple users, so as to provide different flow control methods for different users, meet users' QoS demands, and improve the scheduling flexibility of QoS technology.
  • the target tunnel scheduling node corresponds to the first target minimum required bandwidth.
  • the target port scheduling node corresponds to the target total bandwidth.
  • the data in the multiple tunnel scheduling nodes can be sent from the target port at the same time.
  • When the sum of all data traffic in the at least two tunnel scheduling nodes connected to the target port scheduling node is greater than the target total bandwidth, it is determined that the target port is congested. When the target port is congested, the target port scheduling node determines the first actual bandwidth corresponding to the data in the target tunnel scheduling node according to the first target minimum required bandwidth, and sends the data in the target tunnel scheduling node according to the first actual bandwidth.
  • each of the at least two tunnel scheduling nodes corresponds to the first minimum required bandwidth.
  • The process of determining the first actual bandwidth may be: determining the sum of the first minimum required bandwidths of all tunnel scheduling nodes connected to the target port scheduling node; allocating, according to a preset first bandwidth allocation rule, the bandwidth remaining after subtracting that sum from the target total bandwidth, thereby determining the first target allocated bandwidth corresponding to the target tunnel scheduling node; and determining the sum of the first target minimum required bandwidth and the first target allocated bandwidth corresponding to the target tunnel scheduling node as the first actual bandwidth corresponding to the data in the target tunnel scheduling node.
  • The preset first bandwidth allocation rule here may be to allocate all the remaining bandwidth in the target port scheduling node to several tunnel scheduling nodes with a higher first minimum required bandwidth, or to allocate it to each tunnel scheduling node according to a certain preset ratio; this embodiment is not limited thereto.
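  • As a sketch of the port-level check just described, with illustrative values; the proportional split again stands in for whatever preset first bandwidth allocation rule is configured:

```python
def port_congested(tunnel_traffic: dict[str, float], target_total_bw: float) -> bool:
    """The target port is congested when the combined traffic of the connected
    tunnel scheduling nodes exceeds the port's target total bandwidth."""
    return sum(tunnel_traffic.values()) > target_total_bw

tunnel_min_bw = {"tunnel1": 20.0, "tunnel2": 30.0}   # first minimum required bandwidths
target_total_bw = 100.0                              # target total bandwidth of the port

if port_congested({"tunnel1": 60.0, "tunnel2": 70.0}, target_total_bw):
    remaining = target_total_bw - sum(tunnel_min_bw.values())           # 50 Mbps left over
    share = remaining * tunnel_min_bw["tunnel1"] / sum(tunnel_min_bw.values())
    first_actual_bw = tunnel_min_bw["tunnel1"] + share                  # 20 + 20 = 40 Mbps
    print(first_actual_bw)
```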
  • The third target minimum required bandwidth, third minimum required bandwidth, first target maximum required bandwidth, first target minimum required bandwidth, first minimum required bandwidth, and target total bandwidth may all be pre-configured.
  • the user may configure the data scheduling apparatus in advance, or other equipment may configure the data scheduling apparatus.
  • This embodiment provides a data scheduling method, which includes: determining the target tunnel corresponding to the data to be scheduled according to user indication information of the data to be scheduled; putting the data to be scheduled into the target priority sub-queue corresponding to the priority field of the data to be scheduled; scheduling, through the target tunnel scheduling node corresponding to the target tunnel, the target priority queue to which the target priority sub-queue belongs; and scheduling, through the target port scheduling node corresponding to the target port, the data in the target tunnel scheduling node, where the target port is the port corresponding to the target tunnel.
  • Because the target tunnel is determined according to the user indication information of the data to be scheduled, the method can combine the priority field of the data to be scheduled with the user indication information when scheduling the data; that is, different flow control methods are provided for different users, meeting users' needs, improving the scheduling flexibility of QoS technology, and at the same time reducing the maintenance cost of operators.
  • Fig. 2 is a flowchart of a data scheduling method provided by another embodiment.
  • the data scheduling method provided in this embodiment includes the following steps:
  • Step 201 Determine the target tunnel corresponding to the data to be scheduled according to the user indication information of the data to be scheduled.
  • Step 202 Put the to-be-scheduled data into the target priority sub-queue corresponding to the priority field of the to-be-scheduled data.
  • Step 203 Determine the target pseudowire corresponding to the data to be scheduled according to the service indication information of the data to be scheduled.
  • the target pseudowire is connected to the target tunnel.
  • the target tunnel may connect at least two pseudo wires, the at least two pseudo wires including the target pseudo wires.
  • the at least two pseudowires may be pseudowire 1 and pseudowire 2, where pseudowire 1 may be a target pseudowire.
  • The service indication information in this embodiment may be the device port that receives the data to be scheduled and/or the number of the VLAN where the data to be scheduled is located.
  • the target tunnel corresponding to the data to be scheduled can be determined according to the mapping relationship between the pseudowire and the tunnel and the target pseudowire.
  • the user indication information may be the mapping relationship between the pseudo wire and the tunnel.
  • this embodiment can not only realize the scheduling of data of different users, but also realize the scheduling of data of different services of the same user, which further improves the flexibility of scheduling.
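  • A small sketch of the lookup chain described above; the table contents and the (port, VLAN) key format are assumptions for illustration:

```python
# Service indication information (port, VLAN) -> pseudowire, then pseudowire -> tunnel.
pw_by_service = {("port1", 100): "PW11", ("port1", 200): "PW12"}
tunnel_by_pw = {"PW11": "tunnel1", "PW12": "tunnel1"}

def resolve(port: str, vlan: int) -> tuple[str, str]:
    """Determine the target pseudowire from the service indication information,
    then the target tunnel from the configured pseudowire-to-tunnel mapping."""
    target_pw = pw_by_service[(port, vlan)]
    return target_pw, tunnel_by_pw[target_pw]

print(resolve("port1", 100))  # ('PW11', 'tunnel1')
```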
  • the network speeds of different service requirements of the same user are different.
  • For example, the VoIP service of an individual user needs to ensure low latency and high real-time performance, while the same user is not sensitive to network speed when browsing the web.
  • separate bandwidth control can be performed for different services of different users, and the maintenance cost of the operator can be reduced.
  • Fig. 3 is a schematic structural diagram of a data scheduling device provided by an embodiment.
  • the device includes a service access unit 31 and a service hierarchical scheduling unit 32.
  • the tunnel corresponds to the tunnel scheduling node
  • the pseudo wire corresponds to the pseudo wire scheduling node
  • the port corresponds to the port scheduling node.
  • Each message is mapped to a different priority sub-queue according to its priority.
  • Bandwidth allocation is performed on each scheduling node, that is, the maximum required bandwidth and the minimum required bandwidth are set, which can realize the flow control of different services of different users.
  • Fig. 4 is a schematic structural diagram of a service access unit in a data scheduling device provided by an embodiment.
  • As shown in FIG. 4, users passing through the same port are divided onto different tunnels, and the different services of each user are divided onto different pseudowires.
  • Enterprise user 1 is on tunnel 1: the VoIP service of enterprise user 1 is on pseudowire PW11, the network transmission service is on pseudowire PW12, ..., and service n of enterprise user 1 is on pseudowire PW1n.
  • Individual user 2 is on tunnel 2: the calls of individual user 2 are on pseudowire PW23, and the network transmission service is on pseudowire PW24.
  • ... User n is on tunnel n: service 1 of user n is on pseudowire PWn1, service 2 is on pseudowire PWn2, ..., and service n is on pseudowire PWnn.
  • Tunnel 1, tunnel 2, ..., and tunnel n connect to the port.
  • the port here refers to the port that sends out data in the data scheduling device.
  • FIG. 5 is a schematic structural diagram of a hierarchical scheduling unit in a data scheduling device provided by an embodiment. As shown in FIG. 5, scheduling under one port is divided into four levels.
  • The first level of scheduling is port-level scheduling, that is, the port scheduling node; all scheduling converges at the port scheduling node, where the port can be shaped, allocated, and flow controlled.
  • The second level of scheduling is the tunnel scheduling node. The service access unit assigns different users to different tunnels, so different users are reflected on the tunnel scheduling nodes, and the maximum and minimum bandwidths can be configured according to the needs of the users for distributed flow control.
  • The third level of scheduling is the PW scheduling node. Different services of the same user are mapped to different PWs, so different services are reflected on this scheduling node, and the maximum and minimum bandwidths can be configured according to the needs of the service for distributed flow control.
  • The last level of scheduling is the queue scheduling node. This level is divided into multiple priority scheduling sub-queues according to the priority of the message; illustratively, 8 priority scheduling sub-queues can be assigned, the bandwidth of each queue can be controlled, and messages are scheduled according to priority.
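  • The four-level hierarchy of FIG. 5 can be sketched as nested scheduling nodes; the class and field names below are illustrative only, not part of the application:

```python
from dataclasses import dataclass, field

@dataclass
class PseudowireNode:                       # level 3: one per service of a user
    name: str
    cir_mbps: float                         # minimum required bandwidth (CIR)
    pir_mbps: float                         # maximum required bandwidth (PIR)
    sub_queues: list = field(default_factory=lambda: [[] for _ in range(8)])  # level 4

@dataclass
class TunnelNode:                           # level 2: one per user
    name: str
    cir_mbps: float
    pir_mbps: float
    pseudowires: list = field(default_factory=list)

@dataclass
class PortNode:                             # level 1: all scheduling converges here
    name: str
    total_bw_mbps: float
    tunnels: list = field(default_factory=list)

port = PortNode("port1", total_bw_mbps=100.0, tunnels=[
    TunnelNode("tunnel1", cir_mbps=10.0, pir_mbps=20.0, pseudowires=[
        PseudowireNode("PW11", cir_mbps=5.0, pir_mbps=10.0),   # e.g. VoIP service
        PseudowireNode("PW12", cir_mbps=2.0, pir_mbps=10.0),   # e.g. web browsing
    ]),
])
```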
  • Step 204 Scheduling the target priority queue to which the target priority sub-queue belongs through the target pseudowire scheduling node corresponding to the target pseudowire.
  • the target priority queue, the target pseudowire scheduling node, the target tunnel scheduling node, and the target port scheduling node are connected in sequence.
  • the pseudo wire scheduling node schedules data in the priority queue
  • the tunnel scheduling node schedules data in its corresponding pseudo wire scheduling node
  • the port scheduling node schedules data in its corresponding tunnel scheduling node.
  • Step 201 determines the target tunnel corresponding to the data to be scheduled, step 203 determines the target pseudowire corresponding to the data to be scheduled, and step 202 puts the data to be scheduled into the target priority sub-queue corresponding to its priority field. It should be noted that there is no fixed timing relationship between step 201 and step 203.
  • step 204 is executed to schedule the target priority queue to which the target priority sub-queue belongs through the target pseudowire scheduling node corresponding to the target pseudowire.
  • the target priority sub-queue corresponds to the third target minimum required bandwidth.
  • The target pseudowire scheduling node corresponds to the second target maximum required bandwidth.
  • the target priority queue is a collection of multiple priority sub-queues including the target priority sub-queue, and each priority sub-queue corresponds to the third minimum required bandwidth.
  • The specific scheduling process of step 204 may be: when the sum of the traffic of all priority sub-queues included in the target priority queue is less than or equal to the second target maximum required bandwidth, the data in the multiple priority sub-queues can pass through the target pseudowire at the same time.
  • The specific scheduling process of step 204 may also be: when the sum of the traffic of all priority sub-queues included in the target priority queue is greater than the second target maximum required bandwidth, determining that the target pseudowire is congested; when the target pseudowire is congested, the target pseudowire scheduling node determines the third actual bandwidth corresponding to the target priority sub-queue according to the third target minimum required bandwidth corresponding to the target priority sub-queue, and sends the data in the target priority sub-queue according to the third actual bandwidth.
  • The process of determining the third actual bandwidth may be: determining the sum of the third minimum required bandwidths of all priority sub-queues in the target priority queue connected to the target pseudowire scheduling node; allocating, according to the preset third bandwidth allocation rule, the bandwidth remaining after subtracting that sum from the second target maximum required bandwidth, thereby determining the third target allocated bandwidth corresponding to the target priority sub-queue; and determining the sum of the third target minimum required bandwidth and the third target allocated bandwidth corresponding to the target priority sub-queue as the third actual bandwidth corresponding to the data in the target priority sub-queue.
  • This allocation method can ensure that when the target pseudowire is congested, the minimum required bandwidth of each priority sub-queue is guaranteed first.
  • After the target pseudowire scheduling node schedules the data in the target priority queue, the data flows into the target pseudowire. Since the target pseudowire corresponds to the target pseudowire scheduling node, it can also be said that the data flows into the target pseudowire scheduling node.
  • Step 205 Schedule data in the target pseudowire scheduling node through the target tunnel scheduling node corresponding to the target tunnel.
  • the data in the target pseudowire scheduling node is scheduled through the target tunnel scheduling node corresponding to the target tunnel.
  • the target pseudowire scheduling node corresponds to the second target minimum required bandwidth.
  • The target tunnel scheduling node corresponds to the first target maximum required bandwidth.
  • the at least two pseudowire scheduling nodes connected by the target tunnel scheduling node include the target pseudowire scheduling node. That is, the target tunnel scheduling node is connected to at least two pseudowire scheduling nodes including the target pseudowire scheduling node.
  • The specific scheduling process of step 205 may be: when the sum of all data traffic in the at least two pseudowire scheduling nodes connected to the target tunnel scheduling node is less than or equal to the first target maximum required bandwidth, the data in the multiple pseudowire scheduling nodes can pass through the target tunnel at the same time.
  • The specific scheduling process of step 205 may also be: when the sum of all data traffic in the at least two pseudowire scheduling nodes connected to the target tunnel scheduling node is greater than the first target maximum required bandwidth, determining that the target tunnel is congested; when the target tunnel is congested, the target tunnel scheduling node determines the second actual bandwidth corresponding to the data in the target pseudowire scheduling node according to the second target minimum required bandwidth, and sends the data in the target pseudowire scheduling node according to the second actual bandwidth.
  • each of the at least two pseudowire scheduling nodes corresponds to the second minimum required bandwidth.
  • The process of determining the second actual bandwidth may be: determining the sum of the second minimum required bandwidths of all pseudowire scheduling nodes connected to the target tunnel scheduling node; allocating, according to a preset second bandwidth allocation rule, the bandwidth remaining after subtracting that sum from the first target maximum required bandwidth, thereby determining the second target allocated bandwidth corresponding to the target pseudowire scheduling node; and determining the sum of the second target minimum required bandwidth and the second target allocated bandwidth corresponding to the target pseudowire scheduling node as the second actual bandwidth corresponding to the data in the target pseudowire scheduling node.
  • After the target tunnel scheduling node schedules the data in the target pseudowire, the data flows into the target tunnel. Since the target tunnel corresponds to the target tunnel scheduling node, it can also be said that the data flows into the target tunnel scheduling node.
  • Step 206 Schedule data in the target tunnel scheduling node through the target port scheduling node corresponding to the target port.
  • the target port is the port corresponding to the target tunnel.
  • the target tunnel scheduling node corresponds to the first target minimum required bandwidth.
  • the target port scheduling node corresponds to the target total bandwidth.
  • the data in the at least two tunnel scheduling nodes can be sent from the target port at the same time .
  • When the sum of all data traffic in the at least two tunnel scheduling nodes connected to the target port scheduling node is greater than the target total bandwidth, it is determined that the target port is congested; when the target port is congested, the target port scheduling node determines the first actual bandwidth corresponding to the data in the target tunnel scheduling node according to the first target minimum required bandwidth, and sends the data in the target tunnel scheduling node according to the first actual bandwidth.
  • each of the at least two tunnel scheduling nodes corresponds to the first minimum required bandwidth.
  • The process of determining the first actual bandwidth may be: determining the sum of the first minimum required bandwidths of all tunnel scheduling nodes connected to the target port scheduling node; allocating, according to the preset first bandwidth allocation rule, the bandwidth remaining after subtracting that sum from the target total bandwidth, thereby determining the first target allocated bandwidth corresponding to the target tunnel scheduling node; and determining the sum of the first target minimum required bandwidth and the first target allocated bandwidth corresponding to the target tunnel scheduling node as the first actual bandwidth corresponding to the data in the target tunnel scheduling node.
  • The foregoing processes of determining the first actual bandwidth, the second actual bandwidth, and the third actual bandwidth when a scheduling node is congested can ensure the minimum required bandwidth of the scheduling nodes at the previous level, without affecting the normal operation of services.
  • the target tunnel corresponds to the target tunnel scheduling node.
  • The specific determination process may be: if, according to a preset second mapping relationship between tunnels and tunnel scheduling nodes and the target tunnel, it is determined that the target tunnel has a corresponding tunnel scheduling node in the second mapping relationship, the tunnel scheduling node corresponding to the target tunnel in the second mapping relationship is determined as the target tunnel scheduling node; if it is determined that the target tunnel has no corresponding tunnel scheduling node in the second mapping relationship, the empty tunnel scheduling node connected under the target port scheduling node is determined as the target tunnel scheduling node.
  • the first maximum required bandwidth of the empty tunnel scheduling node is the bandwidth remaining after subtracting the first maximum required bandwidth of other connected tunnel scheduling nodes from the total target bandwidth of the target port scheduling node.
  • the tunnel scheduling node corresponding to the target tunnel in the second mapping relationship is determined to be the target tunnel scheduling node.
  • the tunnel scheduling node in the second mapping relationship has been pre-configured with the maximum and minimum bandwidth. If there is no tunnel scheduling node corresponding to the target tunnel in the preset second mapping relationship, the empty tunnel scheduling node connected to the target port scheduling node needs to be determined as the target tunnel scheduling node.
  • The empty tunnel scheduling node does not have a pre-configured maximum and minimum bandwidth. Therefore, the first maximum required bandwidths of the other connected tunnel scheduling nodes need to be subtracted from the target total bandwidth of the target port scheduling node, and the remaining bandwidth is determined as the first maximum required bandwidth of the empty tunnel scheduling node.
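  • A sketch of the fallback just described, using illustrative numbers that match the FIG. 6 example further below; the names and table layout are assumptions:

```python
def target_tunnel_node(tunnel: str,
                       second_mapping: dict[str, str],
                       port_total_bw: float,
                       configured_max_bw: dict[str, float]) -> tuple[str, float]:
    """Use the configured tunnel scheduling node if the second mapping has one;
    otherwise fall back to the port's empty tunnel scheduling node, whose maximum
    required bandwidth is whatever the configured nodes have not claimed."""
    if tunnel in second_mapping:
        node = second_mapping[tunnel]
        return node, configured_max_bw[node]
    empty_max_bw = port_total_bw - sum(configured_max_bw.values())
    return "empty-tunnel-node", empty_max_bw

mapping = {"tunnel1": "tunnel-node-1", "tunnel2": "tunnel-node-2"}
max_bw = {"tunnel-node-1": 20.0, "tunnel-node-2": 30.0}
print(target_tunnel_node("tunnel3", mapping, 100.0, max_bw))  # ('empty-tunnel-node', 50.0)
```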
  • the target pseudowire corresponds to the target pseudowire scheduling node. After the target pseudowire is determined in step 203, before step 204, in one embodiment, it is also necessary to determine the target pseudowire scheduling node corresponding to the target pseudowire.
  • The specific determination process may be: if, according to a preset first mapping relationship between pseudowires and pseudowire scheduling nodes and the target pseudowire, it is determined that the target pseudowire has a corresponding pseudowire scheduling node in the first mapping relationship, the pseudowire scheduling node corresponding to the target pseudowire in the first mapping relationship is determined as the target pseudowire scheduling node; if it is determined that the target pseudowire has no corresponding pseudowire scheduling node in the first mapping relationship, the empty pseudowire scheduling node connected under the target tunnel scheduling node is determined as the target pseudowire scheduling node.
  • The second maximum required bandwidth of the empty pseudowire scheduling node is the bandwidth remaining after the second maximum required bandwidths of the other connected pseudowire scheduling nodes are subtracted from the total bandwidth of the target tunnel scheduling node.
  • the pseudowire scheduling node corresponding to the target pseudowire in the first mapping relationship is determined as the target pseudowire scheduling node.
  • the pseudowire scheduling node in the first mapping relationship has been pre-configured with the maximum and minimum bandwidth. If there is no pseudowire scheduling node corresponding to the target pseudowire in the preset first mapping relationship, the empty pseudowire scheduling node connected under the target tunnel scheduling node needs to be determined as the target pseudowire scheduling node.
  • The empty pseudowire scheduling node does not have a pre-configured maximum and minimum bandwidth. Therefore, the second maximum required bandwidths of the other connected pseudowire scheduling nodes need to be subtracted from the total bandwidth of the target tunnel scheduling node, and the remaining bandwidth is determined as the second maximum required bandwidth of the empty pseudowire scheduling node.
  • the empty pseudowire scheduling node refers to the pseudowire scheduling node that is not bound to the pseudowire.
  • the empty tunnel scheduling node refers to a tunnel scheduling node that is not bound to the tunnel.
  • Fig. 6 is a schematic diagram of the principle of bandwidth allocation when the tunnel scheduling node is an empty node according to an embodiment.
  • As shown in FIG. 6, the second-layer tunnel scheduling level has an empty node.
  • the total bandwidth allocated by the port is 100Mbps
  • the bandwidth allocated by the tunnel scheduling node 1 is 20Mbps
  • the bandwidth allocated by the tunnel scheduling node 2 is 30Mbps. All other services go to the empty tunnel scheduling node.
  • 20 Mbps refers to the maximum required bandwidth of tunnel scheduling node 1
  • 30 Mbps refers to the maximum required bandwidth of tunnel scheduling node 2.
  • FIG. 7 is a schematic diagram of the principle of bandwidth allocation when the pseudowire scheduling node is an empty node according to an embodiment.
  • the allocated bandwidth of the tunnel is 80Mbps
  • the allocated bandwidth of PW1 is 40Mbps
  • The allocated bandwidth of PW2 is 10 Mbps, and all other PW services under this tunnel go to the empty pseudowire scheduling node.
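  • Using the numbers above, the bandwidth left for the empty pseudowire scheduling node works out as follows (a trivial sketch of the subtraction):

```python
tunnel_bw = 80.0                 # Mbps allocated to the tunnel
pw1_bw, pw2_bw = 40.0, 10.0      # Mbps allocated to PW1 and PW2
empty_pw_bw = tunnel_bw - (pw1_bw + pw2_bw)
print(empty_pw_bw)               # 30.0 Mbps remain for the empty pseudowire node
```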
  • The above method of allocating bandwidth to empty scheduling nodes can ensure that services without configured bandwidth control still obtain appropriate bandwidth while the bandwidth requirements of other services are met, and at the same time ensure that the bandwidth of lower-level nodes does not exceed the bandwidth of higher-level nodes, guaranteeing the rationality of bandwidth allocation.
  • In addition to determining the target tunnel corresponding to the data to be scheduled based on the user indication information of the data to be scheduled, the target tunnel can also be determined based on the service indication information of the data to be scheduled.
  • By scheduling the data in the target tunnel scheduling node through the corresponding target port scheduling node, different flow control methods can be provided not only for different users but also for different services of different users, which further improves the flexibility of scheduling.
  • the first scenario is a scenario where the target pseudowire scheduling node is an empty pseudowire scheduling node.
  • the maximum and minimum bandwidth can be configured on the tunnel scheduling node to achieve the purpose of controlling traffic.
  • Fig. 8 is a flowchart of a data scheduling method provided by another embodiment. As shown in Figure 8, the data scheduling method includes the following steps:
  • Step 801 Obtain the correspondence between pseudowires and tunnels configured by the user, configure the Committed Information Rate (CIR) and Peak Information Rate (PIR) of the tunnel scheduling node, and create a scheduling hierarchy tree according to requirements.
  • The scheduling hierarchy tree refers to the levels of scheduling: four levels of scheduling are allocated, where the first level is port scheduling, the second level is tunnel scheduling node scheduling, the third level is the allocated empty pseudowire scheduling node scheduling, and the fourth level is queue scheduling.
  • Step 802 Determine the target pseudowire corresponding to the data to be scheduled according to the port or port+vlan of the data to be scheduled.
  • Step 803 According to the configured corresponding relationship between the pseudo wire and the tunnel, determine the target tunnel corresponding to the multiple pseudo wires including the target pseudo wire.
  • Step 804 Bind the target tunnel to the assigned target tunnel scheduling node.
  • Step 805 The data to be scheduled enters the target priority queue corresponding to the target tunnel scheduling node according to different priorities.
  • Step 806 In the target priority queue, scheduling is performed according to the priority, and it is imported into the upper-level scheduling node.
  • Step 807 Pass through the empty pseudowire scheduling node, and merge into the upper-level scheduling node.
  • Step 808 The target tunnel scheduling node controls the outgoing rate of the data to be scheduled according to the allocated flow, and merges it into the target port scheduling node.
  • Step 809 The target port scheduling node aggregates the traffic and sends the data to be scheduled.
  • Fig. 9 is a schematic diagram corresponding to Fig. 8. As shown in Figure 9, the priority queue includes 8 priority sub-queues, and the target pseudowire scheduling node is an empty pseudowire scheduling node.
  • the second scenario is a scenario where the target tunnel scheduling node is an empty tunnel scheduling node.
  • the maximum and minimum bandwidth can be configured on the pseudowire scheduling node to achieve the purpose of controlling traffic.
  • Fig. 10 is a flowchart of a data scheduling method provided by still another embodiment. As shown in Figure 10, the data scheduling method includes the following steps:
  • Step 1001 Obtain the corresponding relationship between the pseudowire and the tunnel configured by the user, configure the CIR and PIR of the pseudowire scheduling node, and create a scheduling hierarchy tree according to requirements.
  • The scheduling hierarchy tree refers to the levels of scheduling: four levels of scheduling are allocated, where the first level is port scheduling, the second level is the empty tunnel scheduling node scheduling, the third level is the allocated pseudowire scheduling node scheduling, and the fourth level is queue scheduling.
  • Step 1002 Determine the target pseudowire corresponding to the data to be scheduled according to the port or port+vlan of the data to be scheduled.
  • Step 1003 According to the configured correspondence between the pseudowire and the tunnel, determine the target tunnel corresponding to the multiple pseudowires including the target pseudowire.
  • Step 1004 Bind the target pseudowire to the allocated target pseudowire scheduling node.
  • Step 1005 The data to be scheduled enters the target priority queue corresponding to the target pseudowire scheduling node according to different priorities.
  • Step 1006 In the target priority queue, perform scheduling according to the priority, and import it to the upper-level scheduling node.
  • Step 1007 The target pseudowire scheduling node controls the outgoing rate of the data to be scheduled according to the allocated flow, and merges it into the upper-level scheduling node.
  • Step 1008 Pass through the empty tunnel scheduling node and import the target port scheduling node.
  • Step 1009 The target port scheduling node aggregates the traffic and sends the data to be scheduled.
  • Fig. 11 is a schematic diagram corresponding to Fig. 10. As shown in Figure 11, the priority queue includes 8 priority sub-queues, and the target tunnel scheduling node is an empty tunnel scheduling node.
  • The third scenario is a scenario where neither the target tunnel scheduling node nor the target pseudowire scheduling node is an empty node.
  • the maximum and minimum bandwidth can be configured on the tunnel scheduling node, and the maximum and minimum bandwidth can be configured on the pseudowire scheduling node to achieve the purpose of controlling traffic.
  • FIG. 12 is a flowchart of a data scheduling method provided by another embodiment. As shown in Figure 12, the data scheduling method includes the following steps:
  • Step 1201 Obtain the correspondence between the pseudowire and the tunnel configured by the user, configure the CIR and PIR of the tunnel scheduling node, configure the CIR and PIR of the pseudowire scheduling node, and create a scheduling hierarchy tree according to requirements.
  • The scheduling hierarchy tree refers to the levels of scheduling: four levels of scheduling are allocated, where the first level is port scheduling, the second level is tunnel scheduling node scheduling, the third level is the allocated pseudowire scheduling node scheduling, and the fourth level is queue scheduling.
  • Step 1202 Determine the target pseudowire corresponding to the data to be scheduled according to the port or port+vlan of the data to be scheduled.
  • Step 1203 According to the configured correspondence between the pseudowire and the tunnel, determine the target tunnel corresponding to the multiple pseudowires including the target pseudowire.
  • Step 1204 Bind the target pseudowire to the allocated target pseudowire scheduling node.
  • Step 1205 The data to be scheduled enters the target priority queue corresponding to the target pseudowire scheduling node according to different priorities.
  • Step 1206 On the target priority queue, if the bandwidth of the upper scheduling node (that is, the target pseudowire scheduling node) is congested, the messages will be scheduled according to the priority queue to ensure that high priority messages are scheduled first.
  • Step 1207 The messages after queue scheduling are imported into the target pseudowire scheduling node, and the target pseudowire scheduling node controls the message export rate according to the assigned flow rate, and then is imported into the upper-level scheduling node (that is, the target tunnel scheduling node).
  • Step 1208 The messages dispatched by the target pseudowire scheduling node are imported into the target tunnel scheduling node, and the target tunnel scheduling node controls the message export rate according to the assigned flow rate, and then is imported into the upper-level scheduling node (ie, the target port scheduling node).
  • Step 1209 The target port scheduling node aggregates the traffic and sends the data to be scheduled.
  • FIG. 13 is a schematic diagram corresponding to FIG. 12. As shown in FIG. 13, the priority queue includes 8 priority sub-queues, and neither the target pseudowire scheduling node nor the target tunnel scheduling node is an empty node.
  • The fourth scenario is a scenario where the target tunnel scheduling node and/or the target pseudowire scheduling node is an empty node. This scenario is a combination of the situations above.
  • FIG. 14 is a flowchart of a data scheduling method provided by another embodiment.
  • Fig. 15 is a schematic diagram corresponding to Fig. 14. As shown in Figure 14, the data scheduling method includes the following steps:
  • Step 1401 Obtain the correspondence between the pseudowire and the tunnel configured by the user, configure the CIR and PIR of the tunnel scheduling node, configure the CIR and PIR of the pseudowire scheduling node, and create a scheduling hierarchy tree according to requirements.
  • the scheduling in the figure is divided into four layers.
  • the first layer is a port scheduling node
  • the second layer is a tunnel scheduling node
  • the third layer is a pseudowire scheduling node
  • the fourth layer is a queue scheduling node.
  • In the first part, pseudowire scheduling node 1 and pseudowire scheduling node 2 share tunnel scheduling node 1.
  • The user configures bandwidth control for tunnel scheduling node 1 and, at the same time, configures bandwidth control for pseudowire scheduling node 1, while pseudowire scheduling node 2 is not configured with bandwidth control.
  • Pseudowire scheduling node 2 is therefore assigned to the empty pseudowire scheduling node and allocated the remaining bandwidth, which is the bandwidth of tunnel scheduling node 1 minus the bandwidth allocated to pseudowire scheduling node 1.
  • In the second part, the user configures bandwidth control for tunnel scheduling node 2, and the pseudowire scheduling nodes under it are not configured with bandwidth control. An empty pseudowire scheduling node is assigned at the third layer, and the data of all pseudowires under tunnel scheduling node 2 go to this scheduling node.
  • In the third part, pseudowire scheduling node 3 belongs to tunnel scheduling node 3, and the user only configures bandwidth control for pseudowire scheduling node 3.
  • Tunnel scheduling node 3 is therefore assigned to the empty tunnel scheduling node, and its bandwidth is the remaining bandwidth after the bandwidths allocated to tunnel scheduling node 1 and tunnel scheduling node 2 are subtracted from the port bandwidth.
  • The fourth part is the four-level scheduling assigned by the port by default; all other services under this port that are not assigned to a scheduling node go to this scheduling hierarchy.
  • Step 1402 Determine the target pseudowire corresponding to the data to be scheduled according to the port or port+vlan of the data to be scheduled.
  • Step 1403 According to the configured correspondence between the pseudowire and the tunnel, determine the target tunnel corresponding to the multiple pseudowires including the target pseudowire.
  • Step 1404 The data in the pseudowire scheduling node 1 is processed according to the processing flow of the third scenario.
  • Step 1405 The traffic in the pseudowire scheduling node 2 is not limited, and the traffic flows into the tunnel scheduling node 1.
  • the tunnel scheduling node 1 uniformly controls the traffic of the pseudowire scheduling node 1 and the pseudowire scheduling node 2 according to CIR and PIR.
  • Step 1406 The data in the tunnel scheduling node 2 is processed according to the processing flow of the first scenario.
  • Step 1407 The data in the pseudowire scheduling node 3 is processed according to the processing flow of the second scenario.
  • Step 1408 Other services of the local port use the four-layer scheduling corresponding to the port and are processed according to the remaining bandwidth.
  • the data scheduling methods provided in the above four scenarios can not only provide different flow control methods for different users, but also provide different flow control methods for different services of different users, which further improves the flexibility of scheduling.
  • FIG. 16 is a schematic structural diagram of a data scheduling device provided by an embodiment. As shown in FIG. 16, the data scheduling device provided in this embodiment includes the following modules: a first determining module 161, a putting module 162, a first scheduling module 163, and a second scheduling module 164.
  • the first determining module 161 is configured to determine the target tunnel corresponding to the data to be scheduled according to the user indication information of the data to be scheduled.
  • the putting module 162 is configured to put the to-be-scheduled data into the target priority sub-queue corresponding to the priority field of the to-be-scheduled data.
  • the first scheduling module 163 is configured to schedule the target priority queue to which the target priority sub-queue belongs through the target tunnel scheduling node corresponding to the target tunnel.
  • The second scheduling module 164 is configured to schedule the data in the target tunnel scheduling node through the target port scheduling node corresponding to the target port.
  • the target port is the port corresponding to the target tunnel.
  • the device further includes: a second determining module and a third scheduling module.
  • the second determining module is configured to determine the target pseudowire corresponding to the data to be scheduled according to the service indication information of the data to be scheduled.
  • the third scheduling module is configured to schedule the target priority queue to which the target priority sub-queue belongs through the target pseudowire scheduling node corresponding to the target pseudowire.
  • the target priority queue, the target pseudowire scheduling node, the target tunnel scheduling node, and the target port scheduling node are connected in sequence.
  • the first scheduling module 163 is specifically configured to schedule data in the target pseudowire scheduling node through the target tunnel scheduling node corresponding to the target tunnel.
  • the target tunnel scheduling node corresponds to the first target minimum required bandwidth.
  • the target port scheduling node corresponds to the target total bandwidth.
  • the at least two tunnel scheduling nodes connected by the target port scheduling node include the target tunnel scheduling node. That is, the target port scheduling node is connected to multiple tunnel scheduling nodes including the target tunnel scheduling node.
  • The second scheduling module 164 is specifically configured to: determine that the target port is congested when the sum of all data traffic in the at least two tunnel scheduling nodes connected to the target port scheduling node is greater than the target total bandwidth; when the target port is congested, determine, through the target port scheduling node and according to the first target minimum required bandwidth, the first actual bandwidth corresponding to the data in the target tunnel scheduling node; and send, through the target port scheduling node, the data in the target tunnel scheduling node according to the first actual bandwidth.
  • each of the at least two tunnel scheduling nodes corresponds to the first minimum required bandwidth.
  • In terms of determining, through the target port scheduling node and according to the first target minimum required bandwidth, the first actual bandwidth corresponding to the data in the target tunnel scheduling node, the second scheduling module 164 is specifically configured to: determine the sum of the first minimum required bandwidths of all tunnel scheduling nodes connected to the target port scheduling node; allocate, according to a preset first bandwidth allocation rule, the bandwidth remaining after subtracting that sum from the target total bandwidth, so as to determine the first target allocation bandwidth corresponding to the target tunnel scheduling node; and determine the sum of the first target minimum required bandwidth corresponding to the target tunnel scheduling node and the first target allocation bandwidth as the first actual bandwidth corresponding to the data in the target tunnel scheduling node.
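  • The minimum-guaranteed allocation performed by the second scheduling module 164 can be sketched as follows (Python; the function name is an assumption, and sharing the remainder in proportion to the minimums is only one possible preset first bandwidth allocation rule — the application also allows, for example, giving the remainder entirely to certain nodes). Each tunnel scheduling node is first guaranteed its first minimum required bandwidth, and only the surplus above the sum of the minimums is shared.

    def allocate_actual_bandwidth(total_bw, min_bws):
        # total_bw: target total bandwidth of the parent scheduling node (Mbps).
        # min_bws: dict mapping child node name -> minimum required bandwidth (Mbps).
        # Returns a dict mapping child node name -> actual bandwidth under congestion.
        sum_min = sum(min_bws.values())
        remaining = max(total_bw - sum_min, 0)
        actual = {}
        for name, min_bw in min_bws.items():
            # Assumed rule: share the remainder in proportion to the minimums.
            share = remaining * min_bw / sum_min if sum_min else 0
            actual[name] = min_bw + share   # minimum plus the target allocation bandwidth
        return actual

    # Port with a target total bandwidth of 100 Mbps and two congested tunnel nodes
    # whose first minimum required bandwidths are 20 and 30 Mbps:
    print(allocate_actual_bandwidth(100, {"tunnel node 1": 20, "tunnel node 2": 30}))
    # -> {'tunnel node 1': 40.0, 'tunnel node 2': 60.0}

  • Because the second and third actual bandwidths are defined by the same minimum-plus-remainder computation one level down, the same helper can be reused at the tunnel and pseudowire levels.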
  • the target pseudowire scheduling node corresponds to the second target minimum required bandwidth.
  • The target tunnel scheduling node corresponds to the first target maximum required bandwidth.
  • the at least two pseudowire scheduling nodes connected by the target tunnel scheduling node include the target pseudowire scheduling node. That is, the target tunnel scheduling node is connected to multiple pseudowire scheduling nodes including the target pseudowire scheduling node.
  • The first scheduling module 163 is specifically configured to: determine that the target tunnel is congested when the sum of all data traffic in the at least two pseudowire scheduling nodes connected to the target tunnel scheduling node is greater than the first target maximum required bandwidth; when the target tunnel is congested, determine, through the target tunnel scheduling node and according to the second target minimum required bandwidth, the second actual bandwidth corresponding to the data in the target pseudowire scheduling node; and send, through the target tunnel scheduling node, the data in the target pseudowire scheduling node according to the second actual bandwidth.
  • each of the at least two pseudowire scheduling nodes corresponds to the second minimum required bandwidth.
  • In terms of determining, through the target tunnel scheduling node and according to the second target minimum required bandwidth, the second actual bandwidth corresponding to the data in the target pseudowire scheduling node, the first scheduling module 163 is specifically configured to: determine the sum of the second minimum required bandwidths of all pseudowire scheduling nodes connected to the target tunnel scheduling node; allocate, according to a preset second bandwidth allocation rule, the bandwidth remaining after subtracting that sum from the first target maximum required bandwidth, so as to determine the second target allocation bandwidth corresponding to the target pseudowire scheduling node; and determine the sum of the second target minimum required bandwidth corresponding to the target pseudowire scheduling node and the second target allocation bandwidth as the second actual bandwidth corresponding to the data in the target pseudowire scheduling node.
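  • Reusing the allocate_actual_bandwidth sketch above at the tunnel level gives the second actual bandwidths directly (the numbers below are assumed purely for illustration):

    # Tunnel capped at a first target maximum required bandwidth of 80 Mbps, with two
    # pseudowire nodes whose second minimum required bandwidths are 40 and 10 Mbps:
    print(allocate_actual_bandwidth(80, {"PW1": 40, "PW2": 10}))
    # -> {'PW1': 64.0, 'PW2': 16.0}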
  • The target priority sub-queue corresponds to the third target minimum required bandwidth, and the target pseudowire scheduling node corresponds to the second target maximum required bandwidth. The target priority queue is a set of at least two priority sub-queues, and the at least two priority sub-queues include the target priority sub-queue. That is, the target priority queue is a set of multiple priority sub-queues including the target priority sub-queue, and each priority sub-queue corresponds to a third minimum required bandwidth.
  • The third scheduling module is specifically configured to: determine that the target pseudowire is congested when the sum of the traffic of all priority sub-queues included in the target priority queue is greater than the second target maximum required bandwidth; when the target pseudowire is congested, determine, through the target pseudowire scheduling node and according to the third target minimum required bandwidth corresponding to the target priority sub-queue, the third actual bandwidth corresponding to the target priority sub-queue; and send, through the target pseudowire scheduling node, the data in the target priority sub-queue according to the third actual bandwidth.
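  • For the queue level, the application mentions, as one option for the preset bandwidth allocation rules, giving all of the remaining bandwidth to the higher-priority sub-queues. A hedged sketch of that variant follows (Python; it assumes the list index encodes priority, with the last entry being the highest, and collapses "several higher-priority sub-queues" to the single highest one for brevity):

    def allocate_queue_bandwidth(pw_max_bw, queue_min_bws):
        # pw_max_bw: second target maximum required bandwidth of the pseudowire node (Mbps).
        # queue_min_bws: third minimum required bandwidth of each priority sub-queue (Mbps).
        remaining = max(pw_max_bw - sum(queue_min_bws), 0)
        actual = list(queue_min_bws)
        if actual:
            actual[-1] += remaining   # assumed rule: remainder to the highest priority
        return actual

    # Pseudowire capped at 50 Mbps, four sub-queues each guaranteed 10 Mbps:
    print(allocate_queue_bandwidth(50, [10, 10, 10, 10]))   # -> [10, 10, 10, 20]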
  • the device further includes: a third determining module and a fourth determining module.
  • The third determining module is configured to, if it is determined, according to a preset first mapping relationship between pseudowires and pseudowire scheduling nodes and the target pseudowire, that the target pseudowire has a corresponding pseudowire scheduling node in the first mapping relationship, determine the pseudowire scheduling node corresponding to the target pseudowire in the first mapping relationship as the target pseudowire scheduling node.
  • The fourth determining module is configured to, if it is determined, according to the preset first mapping relationship between pseudowires and pseudowire scheduling nodes and the target pseudowire, that the target pseudowire does not have a corresponding pseudowire scheduling node in the first mapping relationship, determine the empty pseudowire scheduling node connected under the target tunnel scheduling node as the target pseudowire scheduling node.
  • The second maximum required bandwidth of the empty pseudowire scheduling node is the bandwidth remaining after subtracting the second maximum required bandwidths of the other connected pseudowire scheduling nodes from the first target maximum required bandwidth of the target tunnel scheduling node.
  • the device further includes: a fifth determining module and a sixth determining module.
  • The fifth determining module is configured to, if it is determined, according to a preset second mapping relationship between tunnels and tunnel scheduling nodes and the target tunnel, that the target tunnel has a corresponding tunnel scheduling node in the second mapping relationship, determine the tunnel scheduling node corresponding to the target tunnel in the second mapping relationship as the target tunnel scheduling node.
  • The sixth determining module is configured to, if it is determined, according to the preset second mapping relationship between tunnels and tunnel scheduling nodes and the target tunnel, that the target tunnel does not have a corresponding tunnel scheduling node in the second mapping relationship, determine the empty tunnel scheduling node connected under the target port scheduling node as the target tunnel scheduling node.
  • The first maximum required bandwidth of the empty tunnel scheduling node is the bandwidth remaining after subtracting the first maximum required bandwidths of the other connected tunnel scheduling nodes from the target total bandwidth of the target port scheduling node.
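  • Taken together, the third to sixth determining modules amount to a mapping lookup with a fall-back to the empty node under the parent. A minimal sketch follows (Python; the helper name and the use of a plain dictionary for the first or second mapping relationship are assumptions), reusing the SchedNode class sketched earlier:

    def resolve_scheduling_node(key, mapping, parent):
        # key: the target pseudowire (or target tunnel).
        # mapping: the preset first (or second) mapping relationship, as a dict.
        # parent: the tunnel (or port) scheduling node that owns the empty child.
        node = mapping.get(key)
        if node is not None:
            return node   # a configured scheduling node with its own CIR/PIR
        # No configured node: fall back to the empty node, whose maximum required
        # bandwidth is the parent's bandwidth minus the siblings' maximums.
        return parent.attach_empty_child("empty node for " + str(key))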
  • the data scheduling device provided in this embodiment is configured to execute the data scheduling method of any of the foregoing embodiments.
  • The implementation principles and technical effects of the data scheduling device provided in this embodiment are similar to those of the foregoing method embodiments and will not be repeated here.
  • Fig. 17 is a schematic structural diagram of a data scheduling device provided by an embodiment.
  • As shown in FIG. 17, the data scheduling device includes a processor 171 and a memory 172. The number of processors 171 in the data scheduling device may be one or more; in FIG. 17, one processor 171 is taken as an example. The processor 171 and the memory 172 may be connected by a bus or in other ways; in FIG. 17, connection by a bus is taken as an example.
  • As a computer-readable storage medium, the memory 172 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the data scheduling method in the embodiments of the present application (for example, the first determining module 161, the putting module 162, the first scheduling module 163, and the second scheduling module 164 in the data scheduling device).
  • the processor 171 runs the software programs, instructions, and modules stored in the memory 172, thereby implementing various functional applications and data processing of the data scheduling device, that is, realizing the aforementioned data scheduling method.
  • the memory 172 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the data scheduling device.
  • the memory 172 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the embodiment of the present application also provides a storage medium containing computer-executable instructions.
  • The computer-executable instructions, when executed by a computer processor, are used to execute a data scheduling method, and the method includes: determining the target tunnel corresponding to the data to be scheduled according to the user indication information of the data to be scheduled; putting the data to be scheduled into the target priority sub-queue corresponding to the priority field of the data to be scheduled; scheduling, through the target tunnel scheduling node corresponding to the target tunnel, the target priority queue to which the target priority sub-queue belongs; and scheduling, through the target port scheduling node corresponding to the target port, the data in the target tunnel scheduling node, where the target port is the port corresponding to the target tunnel.
  • The storage medium containing computer-executable instructions provided by this application is not limited to the method operations described above, and can also perform related operations in the data scheduling method provided by any embodiment of this application.
  • The data scheduling method, device, and storage medium proposed in this application include: determining the target tunnel corresponding to the data to be scheduled according to the user indication information of the data to be scheduled; putting the data to be scheduled into the target priority sub-queue corresponding to the priority field of the data to be scheduled; scheduling, through the target tunnel scheduling node corresponding to the target tunnel, the target priority queue to which the target priority sub-queue belongs; and scheduling, through the target port scheduling node corresponding to the target port, the data in the target tunnel scheduling node, where the target port is the port corresponding to the target tunnel. The data scheduling method can determine the target tunnel corresponding to the data to be scheduled according to the user indication information of the data to be scheduled, schedule the target priority queue in which the data to be scheduled is located through the target tunnel scheduling node corresponding to the target tunnel, and then schedule the data in the target tunnel scheduling node through the target port scheduling node corresponding to the target port. Because the target tunnel is determined according to the user indication information of the data to be scheduled, the data to be scheduled can be scheduled by combining its priority field with the user indication information; that is, different flow control methods are provided for different users, which meets user needs, improves the scheduling flexibility of the QoS technology, and at the same time reduces the maintenance cost of operators.
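  • Read end to end, the method is a classification step followed by a walk up the four-level hierarchy. A compact per-packet sketch is given below (Python; the dictionary-based maps and field names are assumptions for illustration only):

    def enqueue_packet(packet, pw_map, tunnel_map, queues):
        # packet: dict with 'port', optional 'vlan', and a 3-bit 'priority' field.
        # pw_map: (port, vlan) -> target pseudowire; tunnel_map: pseudowire -> target tunnel.
        # queues: (tunnel, pseudowire, priority) -> list acting as the priority sub-queue.
        pw = pw_map[(packet["port"], packet.get("vlan"))]   # service indication -> target PW
        tunnel = tunnel_map[pw]                              # user indication -> target tunnel
        queues[(tunnel, pw, packet["priority"])].append(packet)
        # The hierarchy then drains the data level by level: priority queue -> pseudowire
        # scheduling node -> tunnel scheduling node -> port scheduling node, each level
        # applying its CIR/PIR limits as described in the embodiments above.
        return tunnel, pw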
  • the various embodiments of the present application can be implemented in hardware or dedicated circuits, software, logic or any combination thereof.
  • some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software that may be executed by a controller, microprocessor, or other computing device, although the application is not limited thereto.
  • The division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components.
  • Some or all physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor, or a microprocessor, or implemented as hardware, or implemented as an integrated circuit, such as an application-specific integrated circuit.
  • Such software may be distributed on a computer-readable medium, and the computer-readable medium may include a computer storage medium (or a non-transitory medium) and a communication medium (or a transitory medium).
  • The term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data).
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tapes, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • Communication media usually contain computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery media.

Abstract

A data scheduling method, device, and storage medium. The method includes: determining a target tunnel corresponding to data to be scheduled according to user indication information of the data to be scheduled (step 101); putting the data to be scheduled into a target priority sub-queue corresponding to a priority field of the data to be scheduled (step 102); scheduling, through a target tunnel scheduling node corresponding to the target tunnel, a target priority queue to which the target priority sub-queue belongs (step 103); and scheduling, through a target port scheduling node corresponding to a target port, data in the target tunnel scheduling node (step 104), where the target port is a port corresponding to the target tunnel.

Description

数据调度方法、设备和存储介质
相关申请的交叉引用
本申请基于申请号为202010544387.2、申请日为2020年06月15日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本申请实施例涉及分层流量控制技术领域,尤其涉及一种数据调度方法、设备和存储介质。
背景技术
服务质量(Quality of Service,QoS)技术是网络的一种安全机制,用来解决网络延迟和阻塞等问题。
目前的QoS技术只能根据报文中的优先级字段,将报文映射到不同的发送队列,再通过发送队列之间不同的调度算法实现发送队列调度和带宽分配。
但是,上述的QoS技术的调度依据是报文的不同优先级,其依据的因素较为单一,这导致目前的QoS技术的应用场景有限。
发明内容
本申请实施例提出一种数据调度方法、设备和存储介质。
本申请实施例提供了一种数据调度方法,所述方法包括以下步骤:根据待调度数据的用户指示信息,确定所述待调度数据对应的目标隧道;将所述待调度数据放入所述待调度数据的优先级字段对应的目标优先级子队列中;通过所述目标隧道对应的目标隧道调度节点,调度所述目标优先级子队列所属的目标优先级队列;通过目标端口对应的目标端口调度节点,调度所述目标隧道调度节点中的数据;其中,所述目标端口为所述目标隧道对应的端口。
本申请实施例还提出了一种数据调度设备,所述设备包括存储器、处理器、存储在所述存储器上并可在所述处理器上运行的程序以及用于实现所述处理器和所述存储器之间的连接通信的数据总线,所述程序被所述处理器执行时实现前述方法的步骤。
本申请实施例提供了一种存储介质,用于计算机可读存储,所述存储介质存储有一个或者多个程序,所述一个或者多个程序可被一个或者多个处理器执行,以实现前述方法的步骤。
附图说明
图1为一实施例提供的数据调度方法的流程图;
图2为另一实施例提供的数据调度方法的流程图;
图3为一实施例提供的数据调度装置的结构示意图;
图4为一实施例提供的数据调度装置中的业务接入单元的结构示意图;
图5为一实施例提供的数据调度装置中的分层调度单元的结构示意图;
图6为一实施例提供的隧道调度节点为空节点时的带宽分配的原理示意图;
图7为一实施例提供的伪线调度节点为空节点时的带宽分配的原理示意图;
图8为又一实施例提供的数据调度方法的流程图;
图9为图8对应的原理示意图;
图10为再一实施例提供的数据调度方法的流程图;
图11为图10对应的原理示意图;
图12为另一实施例提供的数据调度方法的流程图;
图13为图12对应的原理示意图;
图14为另一实施例提供的数据调度方法的流程图;
图15为图14对应的原理示意图;
图16为一实施例提供的数据调度装置的结构示意图;
图17为一实施例提供的数据调度设备的结构示意图。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本申请实施例,并不用于限定本申请实施例。
在后续的描述中,使用用于表示元件的诸如“模块”、“部件”或“单元”的后缀仅为了有利于本申请实施例的说明,其本身没有特有的意义。因此,“模块”、“部件”或“单元”可以混合地使用。需要注意,本申请中提及的“第一”、“第二”等概念仅用于对不同的参数、装置、模块或单元进行区分,并非用于限定这些参数、装置、模块或单元所执行的功能的顺序或者相互依存关系。
随着网络技术发展,企业用户越来越多地依赖网络提供的服务,并希望运营商能够提供安全、可靠的专线,例如,网络电话(Voice over Internet Protocol,VoIP)以及视频电话会议等服务,从而降低企业的运营成本。个人用户也已经不再满足于进行网上冲浪或文件下载等简单业务,而是希望通过网络获取更好的体验,比如高质量的视频聊天,实时游戏等服务。而随着第五代移动通信技术(5th Generation Mobile Networks,5G)技术的建设,也要求运营商为用户提供高速率,低时延的服务体验。
如何为不同的用户提供不同的服务质量,这就需要用到QoS技术。但是目前的QoS技术只能依据报文的优先级字段进行调度,依据的因素较为单一,无法满足不同用户的需求。
本实施例提供了一种数据调度方法,包括:根据待调度数据的用户指示信息,确定待调度数据对应的目标隧道,将待调度数据放入待调度数据的优先级字段对应的目标优先级子队列中,通过目标隧道对应的目标隧道调度节点,调度目标优先级子队列所属的目标优先级队列,通过目标端口对应的目标端口调度节点,调度目标隧道调度节点中的数据,其中,目标端口为目标隧道对应的端口。该数据调度方法可以根据待调度数据的用户指示信息,确定出待调度数据对应的目标隧道,通过目标隧道对应的目标隧道调度节点调度待调度数据所在的目标优先级队列,再通过目标端口对应的目标端口调度节点,调度目标隧道调度节点中的数据,由于目标隧道是根据待调度数据的用户指示信息确定的,从而,可以实现结合待调度数据的优先级字段以及用户指示信息调度该待调度数据,即,针对不同用户提供不同的流量控制方法,满足用户的需求,提高了QoS技术的调度灵活性,同时,降低运营商的维护成本。
图1为一实施例提供的数据调度方法的流程图。本实施例适用于对数据进行调度的场景。本实施例可以由数据调度装置来执行,该数据调度装置可以由软件和/或硬件的方式实现,该数据调度装置可以集成于具有分组交换功能的通信设备中,该具有分组交换功能的通信设备可以设置在分组交换系统中。如图1所示,本实施例提供的数据调度方法包括如下步骤:
步骤101:根据待调度数据的用户指示信息,确定待调度数据对应的目标隧道。
一实施例中,具有分组交换功能的通信设备可以为光传送网(Optical Transport Network,OTN)设备。该OTN设备可以通过分组传送网(Packet Transport Network,PTN)设备与交换机连接。从上游交换机处接收数据,对接收到的数据进行调度,再传送给下游交换机。
在一些实例中,本实施例中的待调度数据可以为多协议标签交换(Multi-Protocol Label Switching,MPLS)系统中的数据。
用户指示信息可以包括:接收该待调度数据的设备端口(port)和/或该调度数据所在的虚拟局域网(Virtual Local Area Network,VLAN)的编号。接收该待调度数据的设备端口指的是具有分组交换功能的通信设备接收该待调度数据的端口。需要说明的是,用户指示信息可以封装在该待调度数据中,也可以不封装在该待调度数据中,本实施例并不以此为限。
一实施例中,不同的隧道上传送不同用户的待调度数据。即,隧道与用户对应。
数据调度装置可以根据待调度数据中的用户指示信息,确定该待调度数据对应的目标隧道。本实施例中的待调度数据可以为多个,在步骤101中,数据调度装置可以根据每个待调度数据的用户指示信息,确定待调度数据对应的目标隧道。示例性地,用户指示信息可以为设备端口的标识,例如,“端口1”。
步骤102:将待调度数据放入待调度数据的优先级字段对应的目标优先级子队列中。
一实施例中,待调度数据本身封装有优先级字段,在一些实例中,该优先级字段的长度可以为3比特。根据待调度数据的优先级字段,可以确定出其对应的优先级,优先级与优先级子队列对应,因此,可以根据待调度数据的优先级字段确定该待调度数据对应的目标优先级子队列。
目标优先级子队列属于目标优先级队列。本实施例中的目标优先级队列包括目标优先级子队列在内的多个优先级子队列。在一些实例中,优先级队列可以与隧道具有映射关系。在步骤101中确定出了待调度数据对应的目标隧道之后,在步骤102中,可以先根据优先级队列与隧道的映射关系以及目标隧道,确定出该目标隧道对应的目标优先级队列,再根据待调度数据的优先级字段,确定对应的目标优先级子队列。
举例来说,假设目标优先级队列包括8个优先级子队列:子队列0、子队列1、子队列2、子队列3、……、子队列7,待调度数据的优先级字段为110,根据优先级字段确定该待调度数据的优先级为6,对应的子队列为子队列6。因此,在步骤102中,将该待调度数据放入子队列6中。
在确定出待调度数据对应的目标优先级子队列之后,将该待调度数据放入该目标优先级子队列中,等待调度。
步骤103:通过目标隧道对应的目标隧道调度节点,调度目标优先级子队列所属的目标优先级队列。
一实施例中,目标隧道对应目标隧道调度节点。目标隧道调度节点用于调度目标优先级队列中的数据。
一实施例中,目标优先级子队列对应第三目标最小需求带宽。目标优先级队列为至少两个优先级子队列的集合,至少两个优先级子队列包括目标优先级子队列。即,目标优先级队列为包括目标优先级子队列在内的多个优先级子队列的集合,每个优先级子队列均对应第三最小需求带宽。目标隧道调度节点对应第一目标最大需求带宽。在优先级子队列为目标优先级子队列时,第三目标最小需求带宽与第三最小需求带宽表示相同的概念。
一实现方式中,在目标优先级队列中包括的所有优先级子队列的流量之和小于或者等于该第一目标最大需求带宽时,多个优先级子队列中的数据可以同时通过该目标隧道。
另一实现方式中,在目标优先级队列中包括的所有优先级子队列的流量之和大于该第一目标最大需求带宽时,说明目标隧道出现拥塞。在该实现方式中,需要根据目标优先级子队列对应的第三目标最小需求带宽,确定目标优先级子队列对应的第三实际带宽,根据该第三实际带宽发送目标优先级子队列中的数据。
在一些实例中,确定第三实际带宽的过程可以为:确定目标隧道调度节点连接的目标优先级队列中所有优先级子队列的第三最小需求带宽之和,将第一目标最大需求带宽减去所有优先级子队列的第三最小需求带宽之和后剩余的带宽,按照预设的第三带宽分配规则进行分配,确定目标优先级子队列对应的第三目标分配带宽,将目标优先级子队列对应的第三目标最小需求带宽以及第三目标分配带宽的和,确定为目标优先级子队列中的数据对应的第三实际带宽。该分配方式可以保证在目标隧道出现拥塞时,优先保证每个优先级子队列的最小需求带宽。
这里的预设的第三带宽分配规则可以为将剩余的带宽全部分配给优先级较高的若干个优先级子队列,或者按照某个预设比例分配各优先级子队列。本实施例并不以此为限。
需要说明的是,目标优先级队列中包括的所有优先级子队列的流量之和指的是目标优先级队列中包括的所有优先级子队列中的数据的流量之和。本实施例中的流量或者带宽均是指的数据的传输速率。
目标隧道调度节点调度目标优先级队列中的数据后,数据会流入到目标隧道中。由于目标隧道与目标隧道调度节点对应,因而,也可以说数据会流入到目标隧道调度节点中。
步骤104:通过目标端口对应的目标端口调度节点,调度目标隧道调度节点中的数据。
其中,目标端口为目标隧道对应的端口。
在某些场景中,企业用户的VoIP通话需要保证低时延、高实时性。而个人用户浏览网页对浏览网络速率并不敏感。因此,需要对不同用户的数据进行单独的带宽控制。
一实施例中,为了实现针对不同的用户进行带宽控制,设置有目标端口对应的目标端口调度节点。目标端口调度节点可以连接至少两个隧道调度节点,该至少两个隧道调度节点包括目标隧道调度节点。即,目标端口调度节点连接包括目标隧道调度节点在内的多个隧道调度节点。基于前面的描述,可知,每个隧道对应一个用户的数据,因而,目标端口调度节点可以对多个隧道,即多个用户的数据进行调度,实现针对不同用户提供不同的流量控制方法,满足用户的需求,提高了QoS技术的调度灵活性。
一实施例中,目标隧道调度节点对应第一目标最小需求带宽。目标端口调度节点对应目标总带宽。
一实现方式中,当目标端口调度节点连接的多个隧道调度节点中所有数据的流量的总和小于或者等于目标总带宽时,多个隧道调度节点中数据可以同时从该目标端口发出。
另一实现方式中,当目标端口调度节点连接的至少两个隧道调度节点中所有数据流量的总和大于目标总带宽时,确定目标端口拥塞;在目标端口拥塞时,通过目标端口调度节点,并根据第一目标最小需求带宽,确定目标隧道调度节点中的数据对应的第一实际带宽;通过目标端口调度节点,根据第一实际带宽发送目标隧道调度节点中的数据。
更具体地,至少两个隧道调度节点中的每个隧道调度节点均对应第一最小需求带宽。确定第一实际带宽 的过程可以为:确定目标端口调度节点连接的所有隧道调度节点的第一最小需求带宽的和;将目标总带宽减去第一最小需求带宽的和后剩余的带宽,按照预设的第一带宽分配规则进行分配,确定目标隧道调度节点对应的第一目标分配带宽;将目标隧道调度节点对应的第一目标最小需求带宽以及第一目标分配带宽的和,确定为目标隧道调度节点中的数据对应的第一实际带宽。
这里的预设的第一带宽分配规则可以为将目标端口调度节点中剩余的带宽全部分配给第一最小需求带宽较高的若干个隧道调度节点,或者按照某个预设比例分配给各隧道调度节点。本实施例并不以此为限。
需要说明的是,上述的第三目标最小需求带宽、第三最小需求带宽、第一目标最大需求带宽、第一目标最小需求带宽、第一最小需求带宽以及目标总带宽均可以是预先配置的。示例性地,可以是用户预先为该数据调度装置配置的,也可以是其他设备为该数据调度装置配置的。
本实施例提供了一种数据调度方法,包括:根据待调度数据的用户指示信息,确定待调度数据对应的目标隧道,将待调度数据放入待调度数据的优先级字段对应的目标优先级子队列中,通过目标隧道对应的目标隧道调度节点,调度目标优先级子队列所属的目标优先级队列,通过目标端口对应的目标端口调度节点,调度目标隧道调度节点中的数据,其中,目标端口为目标隧道对应的端口。该数据调度方法可以根据待调度数据的用户指示信息,确定出待调度数据对应的目标隧道,通过目标隧道对应的目标隧道调度节点调度待调度数据所在的目标优先级队列,再通过目标端口对应的目标端口调度节点,调度目标隧道调度节点中的数据,由于目标隧道是根据待调度数据的用户指示信息确定的,从而,可以实现结合待调度数据的优先级字段以及用户指示信息调度该待调度数据,即,针对不同用户提供不同的流量控制方法,满足用户的需求,提高了QoS技术的调度灵活性,同时,降低了运营商的维护成本。
图2为另一实施例提供的数据调度方法的流程图。本实施例在图1所示实施例及各种其他的方案的基础上,对数据调度方法包括的其他步骤进行详细说明。如图2所示,本实施例提供的数据调度方法包括如下步骤:
步骤201:根据待调度数据的用户指示信息,确定待调度数据对应的目标隧道。
步骤202:将待调度数据放入待调度数据的优先级字段对应的目标优先级子队列中。
步骤203:根据待调度数据的业务指示信息,确定待调度数据对应的目标伪线。
在本实施例中,除了基于待调度数据的用户指示信息,确定待调度数据对应的目标隧道之外,还可以基于待调度数据的业务指示信息,确定待调度数据对应的目标伪线(Pseudo Wire,PW)。
目标伪线与目标隧道连接。目标隧道可以连接至少两个伪线,该至少两个伪线包括目标伪线。示例性地,该至少两个伪线可以为伪线1及伪线2,其中,伪线1可以为目标伪线。
一种实现方式中,本实施例中的业务指示信息可以为接收该待调度数据的设备端口和/或该调度数据所在的VLAN的编号。在确定出待调度数据对应的目标伪线之后,可以根据伪线与隧道的映射关系以及该目标伪线,确定出待调度数据对应的目标隧道。可以看出,在该实现方式中,用户指示信息可以为伪线与隧道的映射关系。
换句话说,在该实现方式中,多个伪线与目标隧道连接,目标隧道与用户对应,不同的伪线对应该用户的不同业务的数据。因此,本实施例不仅可以实现对不同用户的数据进行调度,还可以对同一用户的不同业务的数据实现调度,进一步提高了调度的灵活性。
在某些场景中,同一用户的不同业务需求的网络速率不一样,例如,个人用户的VoIP业务需要保证低时延、高实时性,而浏览网页时对网络速率并不敏感。本实施例可以针对不同的用户的不同业务进行单独的带宽控制,降低运营商的维护成本。
图3为一实施例提供的数据调度装置的结构示意图。该装置包括业务接入单元31和业务分层调度单元32。
用户业务通过业务接入单元31封装到不同的隧道和伪线上。在业务分层调度单元32,隧道对应隧道调度节点,伪线对应伪线调度节点,端口对应端口调度节点。每个报文又根据不同的优先级对应到不同的优先级子队列中。对每个调度节点节点进行带宽分配,即,设置最大需求带宽和最小需求带宽,可以实现对不同用户的不同业务的流量控制。
图4为一实施例提供的数据调度装置中的业务接入单元的结构示意图。如图4所示,将经过同一个端口下的用户划分到不同的隧道上,用户的不同业务划分到不同的伪线上。比如企业用户1在隧道1上,企业用 户1的voip业务在伪线PW11上,网络传输业务在伪线PW12上,……,企业用户1的业务n在伪线PW1n上;个人用户2在隧道2上,个人用户2的通话在伪线PW23上,网络传输业务在伪线PW24上;……;用户n在隧道n上,用户n的业务1在伪线PWn1上,业务2在伪线PWn2上,……,业务n在伪线PWnn上。这样通过归类方法,可以把不同的用户,不同的业务区分开来。之后,再将这些归类划分后的数据对应到分层调度单元的不同调度节点上进入分层流量控制。图4中的隧道1、隧道2、……以及隧道n连接端口。这里的端口指的是数据调度装置中发出数据的端口。
图5为一实施例提供的数据调度装置中的分层调度单元的结构示意图。如图5所示,在一个端口下分成四个调度层次:第一级调度是端口级别的调度,即,端口调度节点,所有调度会汇总到端口调度节点,可以对端口进行整形分配流量控制;第二级调度是隧道调度节点,业务接入单元将不同的用户分配到不同的隧道下,这在隧道调度节点上就反映出不同的用户,可以针对用户的需求,通过配置最大最小带宽的方式分配流量控制;第三级调度是PW调度节点,同一用户的不同业务映射到不同的PW上,在这个调度节点上就反映出不同的业务,可以针对业务的需求,通过配置最大最小带宽的方式分配流量控制;最后一级调度是队列调度节点,这一级按照报文的优先级分为多个优先级调度子队列,示例性地,可以为8个优先级调度子队列,可以对每一个队列的带宽进行控制,同时按照优先级调度报文。
步骤204:通过目标伪线对应的目标伪线调度节点,调度目标优先级子队列所属的目标优先级队列。
其中,目标优先级队列、目标伪线调度节点、目标隧道调度节点以及目标端口调度节点依次连接。
本实施例中,伪线调度节点调度优先级队列中的数据,隧道调度节点调度其对应的伪线调度节点中的数据,端口调度节点调度其对应的隧道调度节点中的数据。
在数据调度装置接收到待调度数据后,通过步骤201确定出待调度数据对应的目标隧道,通过步骤203确定出待调度数据对应的目标伪线,通过步骤202将待调度数据放入待调度数据的优先级字段对应的目标优先级子队列中。需要说明的是,步骤201与步骤203之间没有时序关系。
之后,执行步骤204,通过目标伪线对应的目标伪线调度节点,调度目标优先级子队列所属的目标优先级队列。
目标优先级子队列对应第三目标最小需求带宽。目标伪线调度节点对应第二目标最大需求带宽。目标优先级队列为包括目标优先级子队列在内的多个优先级子队列的集合,每个优先级子队列均对应第三最小需求带宽。
一种实现方式中,步骤204具体的调度过程可以为:当目标优先级队列中包括的所有优先级子队列的流量的总和,小于或者等于第二目标最大需求带宽时,多个优先级子队列中的数据可以同时通过该目标伪线。
另一种实现方式中,步骤204具体的调度过程可以为:当目标优先级队列中包括的所有优先级子队列的流量的总和,大于第二目标最大需求带宽时,确定目标伪线拥塞;在目标伪线拥塞时,通过目标伪线调度节点,根据目标优先级子队列对应的第三目标最小需求带宽,确定目标优先级子队列对应的第三实际带宽;通过目标伪线调度节点,根据第三实际带宽发送目标优先级子队列中的数据。
在一些实例中,确定第三实际带宽的过程可以为:确定目标伪线调度节点连接的目标优先级队列中所有优先级子队列的第三最小需求带宽之和,将第二目标最大需求带宽减去所有优先级子队列的第三最小需求带宽之和后剩余的带宽,按照预设的第三带宽分配规则进行分配,确定目标优先级子队列对应的第三目标分配带宽,将目标优先级子队列对应的第三目标最小需求带宽以及第三目标分配带宽的和,确定为目标优先级子队列中的数据对应的第三实际带宽。该分配方式可以保证在目标伪线出现拥塞时,优先保证每个优先级子队列的最小需求带宽。
目标伪线调度节点调度目标优先级队列中的数据后,数据会流入到目标伪线中。由于目标伪线与目标伪线调度节点对应,因而,也可以说数据会流入到目标伪线调度节点中。
步骤205:通过目标隧道对应的目标隧道调度节点,调度目标伪线调度节点中的数据。
数据流入到目标伪线调度节点之后,通过目标隧道对应的目标隧道调度节点,调度目标伪线调度节点中的数据。
在一些实例中,目标伪线调度节点对应第二目标最小需求带宽。目标隧道调度节点对应第一目标最大需求带宽。目标隧道调度节点连接的至少两个伪线调度节点包括目标伪线调度节点。即,目标隧道调度节点连接包括目标伪线调度节点在内的至少两个伪线调度节点。
一种实现方式中,步骤205具体的调度过程可以为:当目标隧道调度节点连接的至少两个伪线调度节点中所有数据的流量的总和,小于或者等于第一目标最大需求带宽时,多个伪线调度节点中的数据可以同时通过该目标隧道。
另一种实现方式中,步骤205具体的调度过程可以为:当目标隧道调度节点连接的至少两个伪线调度节点中所有数据的流量的总和,大于第一目标最大需求带宽时,确定目标隧道拥塞;在目标隧道拥塞时,通过目标隧道调度节点,并根据第二目标最小需求带宽,确定目标伪线调度节点中的数据对应的第二实际带宽;通过目标隧道调度节点,根据第二实际带宽发送目标伪线调度节点中的数据。
在一些实例中,至少两个伪线调度节点中的每个伪线调度节点对应第二最小需求带宽。确定第二实际带宽的过程可以为:确定目标隧道调度节点连接的所有伪线调度节点的第二最小需求带宽的和;将第一目标最大需求带宽减去第二最小需求带宽的和后剩余的带宽,按照预设的第二带宽分配规则进行分配,确定目标伪线调度节点对应的第二目标分配带宽;将目标伪线调度节点对应的第二目标最小需求带宽以及第二目标分配带宽的和,确定为目标伪线调度节点中的数据对应的第二实际带宽。
目标隧道调度节点调度目标伪线中的数据后,数据会流入到目标隧道中。由于目标隧道与目标隧道调度节点对应,因而,也可以说数据会流入到目标隧道调度节点中。
步骤206:通过目标端口对应的目标端口调度节点,调度目标隧道调度节点中的数据。
其中,目标端口为目标隧道对应的端口。
一实施例中,目标隧道调度节点对应第一目标最小需求带宽。目标端口调度节点对应目标总带宽。
一实现方式中,当目标端口调度节点连接的至少两个隧道调度节点中所有数据的流量的总和,小于或者等于目标总带宽时,该至少两个隧道调度节点中数据可以同时从该目标端口发出。
另一实现方式中,当目标端口调度节点连接的至少两个隧道调度节点中所有数据流量的总和,大于目标总带宽时,确定目标端口拥塞;在目标端口拥塞时,通过目标端口调度节点,并根据第一目标最小需求带宽,确定目标隧道调度节点中的数据对应的第一实际带宽;通过目标端口调度节点,根据第一实际带宽发送目标隧道调度节点中的数据。
更具体地,至少两个隧道调度节点中的每个隧道调度节点均对应第一最小需求带宽。确定第一实际带宽的过程可以为:确定目标端口调度节点连接的所有隧道调度节点的第一最小需求带宽的和;将目标总带宽减去第一最小需求带宽的和后剩余的带宽,按照预设的第一带宽分配规则进行分配,确定目标隧道调度节点对应的第一目标分配带宽;将目标隧道调度节点对应的第一目标最小需求带宽以及第一目标分配带宽的和,确定为目标隧道调度节点中的数据对应的第一实际带宽。
需要说明的是,第一带宽分配规则、第二带宽分配规则以及第四带宽分配规则的具体内容已在图1所示实施例中进行了描述,此处不再赘述。
上述在调度节点拥塞时,确定第一实际带宽、确定第二实际带宽以及确定第三实际带宽的带宽的实现过程,可以保证前一级调度节点的最小需求带宽,不会影响业务的正常运行。
目标隧道对应目标隧道调度节点,在步骤201中确定出目标隧道之后,在步骤205之前,一实施例中,还需要确定目标隧道对应的目标隧道调度节点。具体的确定过程可以为:若根据预设的隧道及隧道调度节点的第二映射关系以及目标隧道,确定目标隧道在第二映射关系中存在对应的隧道调度节点,则将第二映射关系中,目标隧道对应的隧道调度节点,确定为目标隧道调度节点;若根据预设的隧道及隧道调度节点的第二映射关系以及目标隧道,确定目标隧道在第二映射关系中不存在对应的隧道调度节点,则将目标端口调度节点下连接的空隧道调度节点,确定为目标隧道调度节点。其中,空隧道调度节点的第一最大需求带宽为目标端口调度节点的目标总带宽减去连接的其他隧道调度节点的第一最大需求带宽后剩余的带宽。
上述过程中,如果在预先设置的第二映射关系中存在目标隧道对应的隧道调度节点,则将该第二映射关系中,目标隧道对应的隧道调度节点,确定为目标隧道调度节点。第二映射关系中的隧道调度节点已经预先配置了最大最小带宽。如果在预先设置的第二映射关系中不存在目标隧道对应的隧道调度节点,则需要将目标端口调度节点下连接的空隧道调度节点,确定为目标隧道调度节点。空隧道调度节点没有预先配置最大最小带宽,因此,需要将目标端口节点的目标总带宽减去连接的其他隧道调度节点的第一最大需求带宽后剩余的带宽,确定为该空隧道调度节点的第一最大需求带宽。
目标伪线对应目标伪线调度节点,在步骤203中确定出目标伪线之后,在步骤204之前,一实施例中, 还需要确定目标伪线对应的目标伪线调度节点。具体的确定过程可以为:若根据预设的伪线及伪线调度节点的第一映射关系以及目标伪线,确定目标伪线在第一映射关系中存在对应的伪线调度节点,则将第一映射关系中,目标伪线对应的伪线调度节点,确定为目标伪线调度节点;若根据预设的伪线及伪线调度节点的第一映射关系以及目标伪线,确定目标伪线在第一映射关系中不存在对应的伪线调度节点,则将目标隧道调度节点下连接的空伪线调度节点,确定为目标伪线调度节点。其中,空伪线调度节点的第二最大需求带宽为目标隧道调度节点的第一目标最大需求带宽减去连接的其他伪线调度节点的第二最大需求带宽后剩余的带宽。
上述过程中,如果在预先设置的第一映射关系中存在目标伪线对应的伪线调度节点,则将该第一映射关系中,目标伪线对应的伪线调度节点,确定为目标伪线调度节点。第一映射关系中的伪线调度节点已经预先配置了最大最小带宽。如果在预先设置的第一映射关系中不存在目标伪线对应的伪线调度节点,则需要将目标隧道调度节点下连接的空伪线调度节点,确定为目标伪线调度节点。空伪线调度节点没有预先配置最大最小带宽,因此,需要将目标隧道节点的目标总带宽减去连接的其他伪线调度节点的第二最大需求带宽后剩余的带宽,确定为该空伪线调度节点的第二最大需求带宽。
需要说明的是,空伪线调度节点指的是没有与伪线绑定的伪线调度节点。空隧道调度节点指的是没有与隧道绑定的隧道调度节点。
图6为一实施例提供的隧道调度节点为空节点时的带宽分配的原理示意图。如图6所示,第二层隧道调度节点有空节点的情况,这里端口分配总带宽是100Mbps,隧道调度节点1分配带宽为20Mbps,隧道调度节点2分配带宽为30Mbps,那么在此端口下的其他业务都走空隧道调度节点。空隧道调度节点分配的带宽是端口带宽减去分配给隧道调度节点1和隧道调度节点2的节点带宽的剩余值,即100-20-30=50Mbps。这里的20Mbps指的是隧道调度节点1的最大需求带宽,30Mbps指的是隧道调度节点2的最大需求带宽。
图7为一实施例提供的伪线调度节点为空节点时的带宽分配的原理示意图。如图7所示,为第三层PW调度节点有空节点的情况,这里隧道分配带宽是80Mbps,PW1分配带宽为40Mbps,PW2分配带宽为10Mbps,那么在此隧道下的其他PW业务都走空伪线调度节点。空伪线调度节点分配的带宽是隧道带宽减去分配给PW1和PW2的节点带宽的剩余值,即80-40-10=30Mbps。
上述为空调度节点分配带宽的方式,能够保证没有分配带宽控制的业务也有适当的带宽可以使用,满足其他业务的带宽需求,同时保证低层次节点的带宽不会超过高层次节点,保证带宽分配的合理性。
本实施例提供的数据调度方法,通过除了基于待调度数据的用户指示信息,确定待调度数据对应的目标隧道之外,还可以基于待调度数据的业务指示信息,确定待调度数据对应的目标伪线,通过目标伪线对应的目标伪线调度节点,调度目标优先级子队列所属的目标优先级队列,通过目标隧道对应的目标隧道调度节点,调度目标伪线调度节点中的数据,通过目标端口对应的目标端口调度节点,调度目标隧道调度节点中的数据,从而,不仅可以实现针对不同用户提供不同的流量控制方法,还可以实现针对不同用户的不同业务提供不同的流量控制方法,进一步提高了调度的灵活性。
以下以几个具体的场景对本实施例提供的数据调度方法的过程进行描述。
第一个场景为目标伪线调度节点为空伪线调度节点的场景。该场景中,可以在隧道调度节点上配置最大、最小带宽来达到控制流量的目的。
图8为又一实施例提供的数据调度方法的流程图。如图8所示,该数据调度方法包括如下步骤:
步骤801:获取用户配置的伪线和隧道的对应关系,配置隧道调度节点的承诺信息率(Committed Information Rate,CIR)和最高信息率(Peak Information Rate,PIR),根据需求创建调度层次树。
其中,调度层次树指的是调度的层级:分配四层调度层次,第一层是端口调度,第二层是隧道调度节点调度,第三层分配空伪线调度节点调度,第四层是队列调度。
步骤802:根据待调度的数据的port或port+vlan,确定待调度数据对应的目标伪线。
步骤803:根据配置的伪线和隧道的对应关系,确定包括目标伪线在内的多个伪线对应的目标隧道。
步骤804:将目标隧道绑定到分配的目标隧道调度节点上。
步骤805:待调度数据根据不同的优先级进入目标隧道调度节点对应的目标优先级队列中。
步骤806:在目标优先级队列中,按照优先级进行调度,汇入上级调度节点。
步骤807:经过空伪线调度节点,汇入上级调度节点。
步骤808:目标隧道调度节点根据分配的流量控制待调度数据出口速率,汇入目标端口调度节点。
步骤809:目标端口调度节点将流量汇总后发送待调度数据。
图9为图8对应的原理示意图。如图9所示,优先级队列包括8个优先级子队列,目标伪线调度节点为空伪线调度节点。
第二个场景为目标隧道调度节点为空隧道调度节点的场景。该场景中,可以在伪线调度节点上配置最大、最小带宽来达到控制流量的目的。
图10为再一实施例提供的数据调度方法的流程图。如图10所示,该数据调度方法包括如下步骤:
步骤1001:获取用户配置的伪线和隧道的对应关系,配置伪线调度节点的CIR和PIR,根据需求创建调度层次树。
其中,调度层次树指的是调度的层级:分配四层调度层次,第一层是端口调度,第二层是空隧道调度节点调度,第三层分配伪线调度节点调度,第四层是队列调度。
步骤1002:根据待调度的数据的port或port+vlan,确定待调度数据对应的目标伪线。
步骤1003:根据配置的伪线和隧道的对应关系,确定包括目标伪线在内的多个伪线对应的目标隧道。
步骤1004:将目标伪线绑定到分配的目标伪线调度节点上。
步骤1005:待调度数据根据不同的优先级进入目标伪线调度节点对应的目标优先级队列中。
步骤1006:在目标优先级队列中,按照优先级进行调度,汇入上级调度节点。
步骤1007:目标伪线调度节点根据分配的流量控制待调度数据出口速率,汇入上级调度节点。
步骤1008:经过空隧道调度节点,汇入目标端口调度节点。
步骤1009:目标端口调度节点将流量汇总后发送待调度数据。
图11为图10对应的原理示意图。如图11所示,优先级队列包括8个优先级子队列,目标隧道调度节点为空隧道调度节点。
第三种场景为目标隧道调度节点及目标伪线调度节点均不为空调度节点的场景。该场景中,可以在隧道调度节点上配置最大、最小带宽,在伪线调度节点上配置最大、最小带宽来达到控制流量的目的。
图12为另一实施例提供的数据调度方法的流程图。如图12所示,该数据调度方法包括如下步骤:
步骤1201:获取用户配置的伪线和隧道的对应关系,配置隧道调度节点的CIR和PIR,配置伪线调度节点的CIR和PIR,根据需求创建调度层次树。
其中,调度层次树指的是调度的层级:分配四层调度层次,第一层是端口调度,第二层是隧道调度节点调度,第三层分配伪线调度节点调度,第四层是队列调度。
步骤1202:根据待调度的数据的port或port+vlan,确定待调度数据对应的目标伪线。
步骤1203:根据配置的伪线和隧道的对应关系,确定包括目标伪线在内的多个伪线对应的目标隧道。
步骤1204:将目标伪线绑定到分配的目标伪线调度节点上。
步骤1205:待调度数据根据不同的优先级进入目标伪线调度节点对应的目标优先级队列中。
步骤1206:在目标优先级队列上,如果上级调度节点(即目标伪线调度节点)的带宽拥塞,报文会按照优先级队列调度,保证高优先级报文优先调度。
步骤1207:队列调度后的报文汇入目标伪线调度节点,目标伪线调度节点按照分配的流量控制报文出口速率后,汇入上级调度节点(即目标隧道调度节点)。
步骤1208:目标伪线调度节点调度后的报文汇入目标隧道调度节点,目标隧道调度节点按照分配的流量控制报文出口速率后,汇入上级调度节点(即目标端口调度节点)。
步骤1209:目标端口调度节点将流量汇总后发送待调度数据。
图13为图12对应的原理示意图。如图13所示,优先级队列包括8个优先级子队列。目标伪线调度节点及目标隧道调度节点均不为空调度节点。
第四种场景为目标隧道调度节点和/或目标伪线调度节点为空调度节点的场景。在该场景中,为各种情况的组合。
图14为另一实施例提供的数据调度方法的流程图。图15为图14对应的原理示意图。如图14所示,该数据调度方法包括如下步骤:
步骤1401:获取用户配置的伪线和隧道的对应关系,配置隧道调度节点的CIR和PIR,配置伪线调度节点的CIR和PIR,根据需求创建调度层次树。
如图15所示,图中调度一共分为四层,第一层是端口调度节点,第二层是隧道调度节点,第三层是伪线调度节点,第四层是队列调度节点。第一部分,伪线调度节点1和伪线调度节点2公用隧道调度节点1,用户配置隧道调度节点1的带宽控制,同时配置伪线调度节点1的带宽控制,伪线调度节点2不配置带宽控制。伪线调度节点2分配在空伪线调度节点上,为其分配剩余带宽,剩余带宽是隧道调度节点1的带宽减去伪线调度节点1分配的带宽。第二部分,对于隧道调度节点2,用户配置隧道调度节点2的带宽控制,其下的伪线调度节点不配置带宽控制,给第三层分配空伪线调度节点,隧道调度节点2下所有伪线中的数据都走该调度节点。第三部分,伪线调度节点3属于隧道调度节点3,用户只配置伪线调度节点3的带宽控制,隧道调度节点3分配空隧道调度节点,带宽为端口带宽减去隧道调度节点1和隧道调度节点2分配的带宽后的剩余带宽。第四部分,属于端口默认分配的4层调度,在这个端口下其他没有分配调度节点的业务全部走该调度层次。
步骤1402:根据待调度的数据的port或port+vlan,确定待调度数据对应的目标伪线。
步骤1403:根据配置的伪线和隧道的对应关系,确定包括目标伪线在内的多个伪线对应的目标隧道。
步骤1404:伪线调度节点1中的数据按照第三种场景的处理流程处理。
步骤1405:伪线调度节点2中的业务不限速,流量汇入隧道调度节点1,隧道调度节点1将伪线调度节点1和伪线调度节点2的流量统一按照CIR和PIR进行流量控制。
步骤1406:隧道调度节点2中的数据按照第一种场景的处理流程处理。
步骤1407:伪线调度节点3中的数据按照第二种场景的处理流程处理。
步骤1408:其他本端口业务走端口对应的四层调度,按照剩余带宽处理。
上述四种场景提供的数据调度方法,不仅可以实现针对不同用户提供不同的流量控制方法,还可以实现针对不同用户的不同业务提供不同的流量控制方法,进一步提高了调度的灵活性。
图16为一实施例提供的数据调度装置的结构示意图。如图16所示,本实施例提供的数据调度装置包括如下模块:第一确定模块161、放入模块162、第一调度模块163以及第二调度模块164。
第一确定模块161,被配置为根据待调度数据的用户指示信息,确定待调度数据对应的目标隧道。
放入模块162,被配置为将待调度数据放入待调度数据的优先级字段对应的目标优先级子队列中。
第一调度模块163,被配置为通过目标隧道对应的目标隧道调度节点,调度目标优先级子队列所属的目标优先级队列。
第二调度模块164,被配置为通过目标端口对应的目标端口调度节点,调度目标隧道调度节点中的数据。
其中,目标端口为目标隧道对应的端口。
在一些实例中,该装置还包括:第二确定模块以及第三调度模块。
第二确定模块,被配置为根据待调度数据的业务指示信息,确定待调度数据对应的目标伪线。
第三调度模块,被配置为通过目标伪线对应的目标伪线调度节点,调度目标优先级子队列所属的目标优先级队列。
其中,目标优先级队列、目标伪线调度节点、目标隧道调度节点以及目标端口调度节点依次连接。
第一调度模块163具体是配置为:通过目标隧道对应的目标隧道调度节点,调度目标伪线调度节点中的数据。
在一些实例中,目标隧道调度节点对应第一目标最小需求带宽。目标端口调度节点对应目标总带宽。目标端口调度节点连接的至少两个隧道调度节点包括目标隧道调度节点。即,目标端口调度节点连接包括目标隧道调度节点在内的多个隧道调度节点。
在一些实例中,第二调度模块164具体被设置成:当目标端口调度节点连接的至少两个隧道调度节点中所有数据的流量的总和,大于目标总带宽时,确定目标端口拥塞;在目标端口拥塞时,通过目标端口调度节点,并根据第一目标最小需求带宽,确定目标隧道调度节点中的数据对应的第一实际带宽;通过目标端口调度节点,根据第一实际带宽发送目标隧道调度节点中的数据。
更具体地,至少两个隧道调度节点中的每个隧道调度节点对应第一最小需求带宽。在通过目标端口调度节点,并根据第一目标最小需求带宽,确定目标隧道调度节点中的数据对应的第一实际带宽的方面,第二调度模块164具体被设置成:确定目标端口调度节点连接的所有隧道调度节点的第一最小需求带宽的和;将目标总带宽减去第一最小需求带宽的和后剩余的带宽,按照预设的第一带宽分配规则进行分配,确定目标隧道 调度节点对应的第一目标分配带宽;将目标隧道调度节点对应的第一目标最小需求带宽以及第一目标分配带宽的和,确定为目标隧道调度节点中的数据对应的第一实际带宽。
在一些实例中,目标伪线调度节点对应第二目标最小需求带宽。目标隧道调度节点对应第一目标最大需求带宽。目标隧道调度节点连接的至少两个伪线调度节点包括目标伪线调度节点。即,目标隧道调度节点连接包括目标伪线调度节点在内的多个伪线调度节点。第一调度模块163具体被设置成:当目标隧道调度节点连接的至少两个伪线调度节点中所有数据的流量的总和,大于第一目标最大需求带宽时,确定目标隧道拥塞;在目标隧道拥塞时,通过目标隧道调度节点,并根据第二目标最小需求带宽,确定目标伪线调度节点中的数据对应的第二实际带宽;通过目标隧道调度节点,根据第二实际带宽发送目标伪线调度节点中的数据。
更具体地,至少两个伪线调度节点中的每个伪线调度节点对应第二最小需求带宽。在通过目标隧道调度节点,并根据第二目标最小需求带宽,确定目标伪线调度节点中的数据对应的第二实际带宽的方面,第一调度模块163具体被设置成:确定目标隧道调度节点连接的所有伪线调度节点的第二最小需求带宽的和;将第一目标最大需求带宽减去第二最小需求带宽的和后剩余的带宽,按照预设的第二带宽分配规则进行分配,确定目标伪线调度节点对应的第二目标分配带宽;将目标伪线调度节点对应的第二目标最小需求带宽以及第二目标分配带宽的和,确定为目标伪线调度节点中的数据对应的第二实际带宽。
在一些实例中,目标优先级子队列对应第三目标最小需求带宽,目标伪线调度节点对应第二目标最大需求带宽,目标优先级队列为至少两个优先级子队列的集合,至少两个优先级子队列包括目标优先级子队列。即,目标优先级队列为包括目标优先级子队列在内的多个优先级子队列的集合,每个优先级子队列均对应第三最小需求带宽。第三调度模块具体被设置成:当目标优先级队列中包括的所有优先级子队列的流量的总和,大于第二目标最大需求带宽时,确定目标伪线拥塞;在目标伪线拥塞时,通过目标伪线调度节点,根据目标优先级子队列对应的第三目标最小需求带宽,确定目标优先级子队列对应的第三实际带宽;通过目标伪线调度节点,根据第三实际带宽发送目标优先级子队列中的数据。
在一些实例中,装置还包括:第三确定模块以及第四确定模块。
第三确定模块,被配置为若根据预设的伪线及伪线调度节点的第一映射关系以及目标伪线,确定目标伪线在第一映射关系中存在对应的伪线调度节点,则将第一映射关系中,目标伪线对应的伪线调度节点,确定为目标伪线调度节点。
第四确定模块,被配置为若根据预设的伪线及伪线调度节点的第一映射关系以及目标伪线,确定目标伪线在第一映射关系中不存在对应的伪线调度节点,则将目标隧道调度节点下连接的空伪线调度节点,确定为目标伪线调度节点。
其中,空伪线调度节点的第二最大需求带宽为目标隧道调度节点的第一目标最大需求带宽减去连接的其他伪线调度节点的第二最大需求带宽后剩余的带宽。
在一些实例中,装置还包括:第五确定模块以及第六确定模块。
第五确定模块,被配置为若根据预设的隧道及隧道调度节点的第二映射关系以及目标隧道,确定目标隧道在第二映射关系中存在对应的隧道调度节点,则将第二映射关系中,目标隧道对应的隧道调度节点,确定为目标隧道调度节点。
第六确定模块,被配置为若根据预设的隧道及隧道调度节点的第二映射关系以及目标隧道,确定目标隧道在第二映射关系中不存在对应的隧道调度节点,则将目标端口调度节点下连接的空隧道调度节点,确定为目标隧道调度节点。
其中,空隧道调度节点的第一最大需求带宽为目标端口调度节点的目标总带宽减去连接的其他隧道调度节点的第一最大需求带宽后剩余的带宽。
本实施例提供的数据调度装置被设置成执行上述任意实施例的数据调度方法,本实施例提供的数据调度装置实现原理和技术效果类似,此处不再赘述。
图17为一实施例提供的数据调度设备的结构示意图。如图17所示,该数据调度设备包括处理器171和存储器172;数据调度设备中处理器171的数量可以是一个或多个,图17中以一个处理器171为例;数据调度设备中的处理器171和存储器172;可以通过总线或其他方式连接,图17中以通过总线连接为例。
存储器172作为一种计算机可读存储介质,可用于存储软件程序、计算机可执行程序以及模块,如本申请实施例中的数据调度方法对应的程序指令/模块(例如,数据调度装置中的第一确定模块161、放入模块162、 第一调度模块163以及第二调度模块164)。处理器171通过运行存储在存储器172中的软件程序、指令以及模块,从而数据调度设备的各种功能应用以及数据处理,即实现上述的数据调度方法。
存储器172可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据数据调度设备的使用所创建的数据等。此外,存储器172可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。
本申请实施例还提供一种包含计算机可执行指令的存储介质,计算机可执行指令在由计算机处理器执行时用于执行一种数据调度方法,该方法包括:
根据待调度数据的用户指示信息,确定所述待调度数据对应的目标隧道;
将所述待调度数据放入所述待调度数据的优先级字段对应的目标优先级子队列中;
通过所述目标隧道对应的目标隧道调度节点,调度所述目标优先级子队列所属的目标优先级队列;
通过目标端口对应的目标端口调度节点,调度所述目标隧道调度节点中的数据;其中,所述目标端口为所述目标隧道对应的端口。
当然,本申请所提供的一种包含计算机可执行指令的存储介质,其计算机可执行指令不限于如上所述的方法操作,还可以执行本申请任意实施例所提供的数据调度方法中的相关操作。
本申请提出的数据调度方法、设备和存储介质,包括:根据待调度数据的用户指示信息,确定待调度数据对应的目标隧道,将待调度数据放入待调度数据的优先级字段对应的目标优先级子队列中,通过目标隧道对应的目标隧道调度节点,调度目标优先级子队列所属的目标优先级队列,通过目标端口对应的目标端口调度节点,调度对目标隧道调度节点中的数据,其中,目标端口为目标隧道对应的端口。该数据调度方法可以根据待调度数据的用户指示信息,确定出待调度数据对应的目标隧道,通过目标隧道对应的目标隧道调度节点调度待调度数据所在的目标优先级队列,再通过目标端口对应的目标端口调度节点,调度目标隧道调度节点中的数据,由于目标隧道是根据待调度数据的用户指示信息确定的,从而,可以实现结合待调度数据的优先级字段以及用户指示信息调度该待调度数据,即,针对不同用户提供不同的流量控制方法,满足用户的需求,提高了QoS技术的调度灵活性,同时,降低了运营商的维护成本。
以上所述,仅为本申请的示例性实施例而已,并非用于限定本申请的保护范围。
一般来说,本申请的多种实施例可以在硬件或专用电路、软件、逻辑或其任何组合中实现。例如,一些方面可以被实现在硬件中,而其它方面可以被实现在可以被控制器、微处理器或其它计算装置执行的固件或软件中,尽管本申请不限于此。
本领域普通技术人员可以理解,上文中所公开方法中的全部或某些步骤、系统、设备中的功能模块/单元可以被实施为软件、固件、硬件及其适当的组合。
在硬件实施方式中,在以上描述中提及的功能模块/单元之间的划分不一定对应于物理组件的划分;例如,一个物理组件可以具有多个功能,或者一个功能或步骤可以由若干物理组件合作执行。某些物理组件或所有物理组件可以被实施为由处理器,如中央处理器、数字信号处理器或微处理器执行的软件,或者被实施为硬件,或者被实施为集成电路,如专用集成电路。这样的软件可以分布在计算机可读介质上,计算机可读介质可以包括计算机存储介质(或非暂时性介质)和通信介质(或暂时性介质)。如本领域普通技术人员公知的,术语计算机存储介质包括在用于存储信息(诸如计算机可读指令、数据结构、程序模块或其他数据)的任何方法或技术中实施的易失性和非易失性、可移除和不可移除介质。计算机存储介质包括但不限于RAM、ROM、EEPROM、闪存或其他存储器技术、CD-ROM、数字多功能盘(DVD)或其他光盘存储、磁盒、磁带、磁盘存储或其他磁存储装置、或者可以用于存储期望的信息并且可以被计算机访问的任何其他的介质。此外,本领域普通技术人员公知的是,通信介质通常包含计算机可读指令、数据结构、程序模块或者诸如载波或其他传输机制之类的调制数据信号中的其他数据,并且可包括任何信息递送介质。
以上参照附图说明了本申请实施例的一些实施例,并非因此局限本申请实施例的权利范围。本领域技术人员不脱离本申请实施例的范围和实质内所作的任何修改、等同替换和改进,均应在本申请实施例的权利范围之内。

Claims (12)

  1. 一种数据调度方法,包括:
    根据待调度数据的用户指示信息,确定所述待调度数据对应的目标隧道;
    将所述待调度数据放入所述待调度数据的优先级字段对应的目标优先级子队列中;
    通过所述目标隧道对应的目标隧道调度节点,调度所述目标优先级子队列所属的目标优先级队列;
    通过目标端口对应的目标端口调度节点,调度所述目标隧道调度节点中的数据;其中,所述目标端口为所述目标隧道对应的端口。
  2. 根据权利要求1所述的方法,其中,所述通过所述目标隧道对应的目标隧道调度节点,调度所述目标优先级子队列所属的目标优先级队列之前,所述方法还包括:
    根据所述待调度数据的业务指示信息,确定所述待调度数据对应的目标伪线;
    通过所述目标伪线对应的目标伪线调度节点,调度所述目标优先级子队列所属的目标优先级队列;其中,所述目标优先级队列、所述目标伪线调度节点、所述目标隧道调度节点以及所述目标端口调度节点依次连接。
  3. 根据权利要求2所述的方法,其中,所述通过所述目标隧道对应的目标隧道调度节点,调度所述目标优先级子队列所属的目标优先级队列,包括:
    通过所述目标隧道对应的目标隧道调度节点,调度所述目标伪线调度节点中的数据。
  4. 根据权利要求3所述的方法,其中,所述目标隧道调度节点对应第一目标最小需求带宽;所述目标端口调度节点对应目标总带宽;所述目标端口调度节点连接的至少两个隧道调度节点包括所述目标隧道调度节点;
    所述通过目标端口对应的目标端口调度节点,调度所述目标隧道调度节点中的数据,包括:
    当所述目标端口调度节点连接的所述至少两个隧道调度节点中所有数据流量的总和,大于所述目标总带宽时,确定所述目标端口拥塞;
    在所述目标端口拥塞时,通过所述目标端口调度节点,并根据所述第一目标最小需求带宽,确定所述目标隧道调度节点中的数据对应的第一实际带宽;
    通过所述目标端口调度节点,根据所述第一实际带宽发送所述目标隧道调度节点中的数据。
  5. 根据权利要求4所述的方法,其中,所述至少两个隧道调度节点中的每个隧道调度节点对应第一最小需求带宽;
    所述通过所述目标端口调度节点,并根据所述第一目标最小需求带宽,确定所述目标隧道调度节点中的数据对应的第一实际带宽,包括:
    确定所述目标端口调度节点连接的所有隧道调度节点的第一最小需求带宽的和;
    将所述目标总带宽减去所述第一最小需求带宽的和后剩余的带宽,按照预设的第一带宽分配规则进行分配,确定所述目标隧道调度节点对应的第一目标分配带宽;
    将所述目标隧道调度节点对应的第一目标最小需求带宽以及所述第一目标分配带宽的和,确定为所述目标隧道调度节点中的数据对应的第一实际带宽。
  6. 根据权利要求3所述的方法,其中,所述目标伪线调度节点对应第二目标最小需求带宽;所述目标隧道调度节点对应第一目标最大需求带宽;所述目标隧道调度节点连接的至少两个伪线调度节点包括所述目标伪线调度节点;
    所述通过所述目标隧道对应的目标隧道调度节点,调度所述目标伪线调度节点中的数据,包括:
    当所述目标隧道调度节点连接的所述至少两个伪线调度节点中所有数据的流量的总和,大于所述第一目标最大需求带宽时,确定所述目标隧道拥塞;
    在所述目标隧道拥塞时,通过所述目标隧道调度节点,并根据所述第二目标最小需求带宽,确定所述目标伪线调度节点中的数据对应的第二实际带宽;
    通过所述目标隧道调度节点,根据所述第二实际带宽发送所述目标伪线调度节点中的数据。
  7. 根据权利要求6所述的方法,其中于,所述至少两个伪线调度节点中的每个伪线调度节点对应第二最小需求带宽;
    所述通过所述目标隧道调度节点,并根据所述第二目标最小需求带宽,确定所述目标伪线调度节点中的数据对应的第二实际带宽,包括:
    确定所述目标隧道调度节点连接的所有伪线调度节点的第二最小需求带宽的和;
    将所述第一目标最大需求带宽减去所述第二最小需求带宽的和后剩余的带宽,按照预设的第二带宽分配规则进行分配,确定所述目标伪线调度节点对应的第二目标分配带宽;
    将所述目标伪线调度节点对应的第二目标最小需求带宽以及所述第二目标分配带宽的和,确定为所述目标伪线调度节点中的数据对应的第二实际带宽。
  8. 根据权利要求3所述的方法,其中,所述目标优先级子队列对应第三目标最小需求带宽,所述目标伪线调度节点对应第二目标最大需求带宽,所述目标优先级队列为至少两个优先级子队列的集合,所述至少两个优先级子队列包括所述目标优先级子队列,每个优先级子队列均对应第三最小需求带宽;
    所述通过所述目标伪线对应的目标伪线调度节点,调度所述目标优先级子队列所属的目标优先级队列,包括:
    当所述目标优先级队列中包括的所有优先级子队列的流量的总和,大于所述第二目标最大需求带宽时,确定所述目标伪线拥塞;
    在所述目标伪线拥塞时,通过所述目标伪线调度节点,根据所述目标优先级子队列对应的第三目标最小需求带宽,确定所述目标优先级子队列对应的第三实际带宽;
    通过所述目标伪线调度节点,根据所述第三实际带宽发送所述目标优先级子队列中的数据。
  9. 根据权利要求3所述的方法,其中,所述通过所述目标伪线对应的目标伪线调度节点,调度所述目标优先级子队列所属的目标优先级队列之前,所述方法还包括:
    若根据预设的伪线及伪线调度节点的第一映射关系以及所述目标伪线,确定所述目标伪线在所述第一映射关系中存在对应的伪线调度节点,则将所述第一映射关系中所述目标伪线对应的伪线调度节点确定为所述目标伪线调度节点;
    若根据预设的伪线及伪线调度节点的第一映射关系以及所述目标伪线,确定所述目标伪线在所述第一映射关系中不存在对应的伪线调度节点,则将所述目标隧道调度节点下连接的空伪线调度节点,确定为所述目标伪线调度节点;其中,所述空伪线调度节点的第二最大需求带宽为所述目标隧道调度节点的第一目标最大需求带宽减去连接的其他伪线调度节点的第二最大需求带宽后剩余的带宽。
  10. 根据权利要求3所述的方法,其中,所述通过所述目标隧道对应的目标隧道调度节点,调度所述目标伪线调度节点中的数据之前,所述方法还包括:
    若根据预设的隧道及隧道调度节点的第二映射关系以及所述目标隧道,确定所述目标隧道在所述第二映射关系中存在对应的隧道调度节点,则将所述第二映射关系中,所述目标隧道对应的隧道调度节点,确定为所述目标隧道调度节点;
    若根据预设的隧道及隧道调度节点的第二映射关系以及所述目标隧道,确定所述目标隧道在所述第二映射关系中不存在对应的隧道调度节点,则将所述目标端口调度节点下连接的空隧道调度节点,确定为所述目标隧道调度节点;其中,所述空隧道调度节点的第一最大需求带宽为所述目标端口调度节点的目标总带宽减去连接的其他隧道调度节点的第一最大需求带宽后剩余的带宽。
  11. 一种数据调度设备,包括存储器、处理器、存储在所述存储器上并可在所述处理器上运行的程序以及用于实现所述处理器和所述存储器之间的连接通信的数据总线,其中,所述程序被所述处理器执行时实现如权利要求1至10中任一项所述的数据调度方法的步骤。
  12. 一种存储介质,用于计算机可读存储,其中,所述存储介质存储有一个或者多个程序,所述一个或者多个程序可被一个或者多个处理器执行,以实现权利要求1至10中任一项所述的数据调度方法的步骤。
PCT/CN2021/098666 2020-06-15 2021-06-07 数据调度方法、设备和存储介质 WO2021254202A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
BR112022020001A BR112022020001A2 (pt) 2020-06-15 2021-06-07 Método para agendamento de dados e meio de armazenamento legível por computador

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010544387.2A CN113810314A (zh) 2020-06-15 2020-06-15 数据调度方法、设备和存储介质
CN202010544387.2 2020-06-15

Publications (1)

Publication Number Publication Date
WO2021254202A1 true WO2021254202A1 (zh) 2021-12-23

Family

ID=78944173

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/098666 WO2021254202A1 (zh) 2020-06-15 2021-06-07 数据调度方法、设备和存储介质

Country Status (3)

Country Link
CN (1) CN113810314A (zh)
BR (1) BR112022020001A2 (zh)
WO (1) WO2021254202A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070206602A1 (en) * 2006-03-01 2007-09-06 Tellabs San Jose, Inc. Methods, systems and apparatus for managing differentiated service classes
CN102546395A (zh) * 2011-12-14 2012-07-04 中兴通讯股份有限公司 基于l2vpn网络的业务调度方法和装置
CN106330710A (zh) * 2015-07-01 2017-01-11 中兴通讯股份有限公司 数据流调度方法及装置

Also Published As

Publication number Publication date
CN113810314A (zh) 2021-12-17
BR112022020001A2 (pt) 2022-12-27

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21826197

Country of ref document: EP

Kind code of ref document: A1

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112022020001

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 112022020001

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20221003

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/04/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21826197

Country of ref document: EP

Kind code of ref document: A1