CN117675711A - Link congestion scheduling method, device, equipment, medium and program - Google Patents

Publication number: CN117675711A
Application number: CN202211020941.2A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Prior art keywords: congestion, slice, flow, priority, adjusted
Inventors: 黄宗和, 沈硕, 陈慧光, 孙琼
Assignee: China Telecom Corp Ltd
Other languages: Chinese (zh)
Classification: Data Exchanges In Wide-Area Networks
Abstract

The embodiments of the disclosure provide a link congestion scheduling method, apparatus, computer device, readable storage medium, and program, relating to the field of computer technology. The method comprises the following steps: obtaining a first congested relay link among all links traversed by a first-priority slice; obtaining the first traffic that must be moved away to relieve the congestion of the first congested relay link; determining a slice to be adjusted according to the first traffic; and adjusting traffic corresponding to the first traffic in the slice to be adjusted so as to relieve the congestion of the first congested relay link. The method enables fine-grained relief of link congestion.

Description

Link congestion scheduling method, device, equipment, medium and program
Technical Field
The disclosure relates to the field of computer technology, and in particular to a link congestion scheduling method, apparatus, computer device, readable storage medium, and program.
Background
At present, when hybrid slices — low-latency high-guarantee, elastic large-bandwidth, and Internet — are deployed, no upper bandwidth limit is configured for each slice, so link congestion easily occurs when one link carries several slices at the same time. To guarantee the transmission quality of the high-priority slice, the duration of traffic congestion on the high-priority slice must be kept as short as possible. Existing schemes generally re-optimize the tunnel of the congested slice directly, which can produce a "quality inversion" in which the path quality of the high-priority slice ends up worse than that of lower-priority slices.
Disclosure of Invention
The embodiments of the disclosure provide a link congestion scheduling method, apparatus, computer device, readable storage medium, and program, relating to the field of computer technology; by adjusting slices according to their different priorities, fine-grained relief of link congestion can be achieved.
An embodiment of the disclosure provides a link congestion scheduling method, comprising the following steps: obtaining a first congested relay link among all links traversed by a first-priority slice; obtaining the first traffic that must be moved away to relieve the congestion of the first congested relay link; determining a slice to be adjusted according to the first traffic; and adjusting traffic corresponding to the first traffic in the slice to be adjusted so as to relieve the congestion of the first congested relay link.
In one embodiment, obtaining the first congested relay link among all links traversed by the first-priority slice comprises: monitoring the delay, packet loss rate, and jitter of the first-priority slice to identify the first congested relay link.
In one embodiment, obtaining the first traffic that must be moved away to relieve the congestion of the first congested relay link comprises: computing it from the total bandwidth, current bandwidth utilization, utilization threshold, and bandwidth reservation of the first congested relay link.
In one embodiment, determining a slice to be adjusted based on the first traffic comprises: sorting the slices passing through the first congested relay link by priority, and determining, in order of priority from low to high, the slices to be adjusted that can give up the first traffic.
In one embodiment, determining the slice to be adjusted that can give up the first traffic in order of priority from low to high comprises: when several slices of the same priority level can give up the first traffic, preferentially selecting a slice that already has sub-slices (split portions) as the slice to be adjusted.
In one embodiment, determining the slice to be adjusted that can give up the first traffic in order of priority from low to high comprises: when several slices of the same priority level can give up the first traffic, preferentially selecting the slice whose traffic is closest to the first traffic as the slice to be adjusted.
In one embodiment, determining the slice to be adjusted that can give up the first traffic in order of priority from low to high comprises: when the total traffic of the third-priority slices is smaller than the first traffic, determining, in order of priority from low to high, all third-priority slices together with the second-priority slices needed to make up the first traffic as the slices to be adjusted.
In one embodiment, adjusting the traffic corresponding to the first traffic in the slice to be adjusted to relieve the congestion of the first congested relay link comprises: moving that traffic onto a first receiving link, where the utilization of the first receiving link after receiving the first traffic is less than or equal to the utilization threshold of the first receiving link.
An embodiment of the disclosure provides a link congestion scheduling apparatus, comprising: an acquisition module configured to obtain a first congested relay link among all links traversed by a first-priority slice, to obtain the first traffic that must be moved away to relieve the congestion of the first congested relay link, and to determine a slice to be adjusted according to the first traffic; and an adjustment module configured to adjust traffic corresponding to the first traffic in the slice to be adjusted so as to relieve the congestion of the first congested relay link.
The embodiment of the disclosure provides a computer device, which comprises a processor, a memory and an input-output interface; the processor is connected to the memory and the input-output interface, respectively, wherein the input-output interface is used for receiving data and outputting data, the memory is used for storing a computer program, and the processor is used for calling the computer program to enable the computer device to execute the method according to any one of the above embodiments.
The disclosed embodiments provide a computer readable storage medium storing a computer program adapted to be loaded and executed by a processor to cause a computer device having a processor to perform the method of any of the above embodiments.
The disclosed embodiments provide a computer program product comprising a computer program which, when executed by a processor, implements the method of any of the above embodiments.
According to the link congestion scheduling method above, the first congested relay link among all links traversed by the first-priority slice is obtained; the first traffic that must be moved away to relieve its congestion is obtained; a slice to be adjusted is determined according to the first traffic; and traffic corresponding to the first traffic in that slice is adjusted to relieve the congestion of the first congested relay link. Fine-grained relief of link congestion is thereby achieved, and the path quality of the high-priority slice is preserved to the greatest extent.
Drawings
To illustrate the embodiments of the present disclosure and the prior art more clearly, the drawings required by the embodiments are briefly described below. The drawings described below show only some embodiments of the present disclosure; a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
FIG. 1 illustrates a typical low-latency high-guarantee, elastic large-bandwidth, Internet hybrid slice link scenario;
fig. 2 is a flowchart of a link congestion scheduling method provided by an embodiment of the present disclosure;
FIG. 3 illustrates a low-latency high-guarantee, elastic large-bandwidth, Internet hybrid slice link scenario according to an embodiment of the present disclosure;
FIG. 4 illustrates the adjusted low-latency high-guarantee, elastic large-bandwidth, Internet hybrid slice link scenario corresponding to the embodiment of FIG. 3;
FIG. 5 illustrates a low-latency high-guarantee, elastic large-bandwidth, Internet hybrid slice link scenario according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a link congestion scheduling apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure, fall within the scope of this disclosure.
In the embodiments of the present disclosure, for SRv6 (Segment Routing over the IPv6 forwarding plane, where IPv6 stands for Internet Protocol Version 6) Policy hybrid slices, a first congested relay link among all links traversed by a first-priority slice may be obtained; the first traffic that must be moved away to relieve its congestion is obtained; a slice to be adjusted is determined according to the first traffic; and traffic corresponding to the first traffic in that slice is adjusted to relieve the congestion. Fine-grained relief of link congestion is thereby achieved: the path quality of the high-priority slice is preserved to the greatest extent while low-priority slices can still use optimal paths.
The following first describes some terms of the present disclosure:
Slicing is an on-demand networking mode that allows operators to partition multiple virtual end-to-end networks on a unified infrastructure. Services are carried by SRv6 Policy tunnels of different priorities, and one link can carry several slices. The slices include at least a low-latency high-guarantee slice, an elastic large-bandwidth slice, and an Internet slice, corresponding to high, medium, and low priority respectively. High-priority slices generally carry delay-sensitive, high-value key services of VIP (very important person) large customers with stringent guarantee requirements; medium-priority slices generally carry large-bandwidth services, such as autonomous-driving data cloud services; low-priority slices carry Internet traffic.
Link: a point-to-point physical connection. In wired communication, a link is a physical line, such as a cable or fiber, between two nodes. In radio communication, a link is the spatial path in which electromagnetic waves propagate between a base station and a terminal. One slice may pass through multiple links.
Tunnel: in this application, a channel carrying a slice. One link may contain multiple tunnels carrying different slices.
Fig. 1 shows a typical low-latency high-guarantee, elastic large-bandwidth, Internet hybrid slice link scenario.
Referring to fig. 1, a P (Provider) router is a core-layer device: a backbone router of the service provider that is not directly connected to any CE (customer edge) router, corresponding to a Label Switching Router (LSR). A PE (Provider Edge) router is an edge router of the service provider's backbone, corresponding to a Label Edge Router (LER); it connects CE routers and P routers and is the most important network node. Traffic of users 105 flows through the PE routers between the user network and the backbone destinations (e.g., the Internet 106, the cloud 107, and the third-party cloud 108). A CE (Customer Edge) router is the customer-side router to which the service provider connects; it provides service access for users by connecting to one or more PE routers and is typically an IP router that establishes an adjacency with the connected PE router. GW (Gateway) is a gateway. SDN (Software Defined Network) is a software-defined network. RR (Route Reflector) is a route reflector; the SDN controller 104 manages and adjusts the slices through the RR 109. R (Router) is a router. Telemetry refers to network telemetry. BGP-LS (Border Gateway Protocol - Link State) is the link-state extension of the Border Gateway Protocol. VLAN (Virtual Local Area Network) is a virtual local area network; QinQ is a stacked (double) VLAN. EVPN (Ethernet Virtual Private Network) is an Ethernet virtual private network. L3VPN (Layer 3 Virtual Private Network) is a three-layer virtual private network.
In fig. 1, 101 is the high-priority low-latency high-guarantee slice, 102 is the medium-priority elastic large-bandwidth slice, and 103 is the low-priority Internet slice. The high-priority slice 101, medium-priority slice 102, and low-priority slice 103 all traverse the P3-PE3 relay link simultaneously, so congestion occurs relatively easily there, and once it occurs, transmission of the high-priority slice 101's traffic is affected. In the prior art, once a relay link of the high-priority slice becomes congested, the path of the high-priority slice is re-optimized; but the SLA (Service Level Agreement) indicators of the re-optimized path — delay, packet loss rate, metric, hop count, and the like — may not be optimal, producing a "quality inversion" in which link quality indicators such as the delay of the low-latency high-guarantee slice fall below those of the medium-priority slice and the path quality of the high-priority slice becomes worse.
Fig. 2 is a flowchart of a link congestion scheduling method provided in an embodiment of the present disclosure. The method provided by the embodiments of the present disclosure may be performed by SDN controller 104 in the embodiment of fig. 1, or by any other terminal and/or server having computing capabilities, which is not limited to this disclosure.
As shown in fig. 2, the method provided by the embodiment of the present disclosure may include the following steps.
In step S210, a first congested relay link among all links through which a first priority slice passes is acquired.
In this step, the SDN controller obtains a first congested relay link from among all links through which a first-priority slice passes. The first-priority slice is, for example, a high-priority slice; the second-priority slice is, for example, a medium-priority slice; and the third-priority slice is, for example, a low-priority slice. For example, in fig. 1, when the P3-PE3 relay link is congested, the first congested relay link P3-PE3 is obtained from all links PE1-P1-P3-PE3 of the high-priority slice 101. The first congested relay link may be any relay link on which congestion arises. In one embodiment, the SDN controller monitors the delay, packet loss rate, and jitter of the first-priority slice by means of technologies such as Telemetry, TWAMP (Two-Way Active Measurement Protocol) detection, and in-band flow detection to identify the first congested relay link. In other embodiments, the first congested relay link may also be identified by monitoring the real-time traffic of each relay link. If the high-priority slice's links are not congested, the medium-priority slice's links are checked.
In step S220, a first flow that needs to be adjusted to relieve congestion of the first congested relay link is acquired.
In this step, the SDN controller obtains the first traffic that must be moved away to relieve the congestion of the first congested relay link. It may be computed from the total bandwidth, current bandwidth utilization, utilization threshold, and bandwidth reservation of the first congested relay link, according to the following equation (1):
first traffic = total bandwidth × (current bandwidth utilization − utilization threshold + bandwidth reservation)    (1)
The total bandwidth, current bandwidth utilization, utilization threshold, and bandwidth reservation all refer to those of the first congested relay link.
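Equation (1) can be sketched as a small function (a minimal sketch; the function and argument names are illustrative, not from the patent):

```python
def first_traffic(total_bw_g: float, current_util: float,
                  util_threshold: float, bw_reserve: float) -> float:
    """Traffic (in Gbit/s) that must be moved off the congested relay link.

    Utilization arguments are fractions, e.g. 0.83 for 83%.
    Implements: total bandwidth * (current utilization - threshold + reservation).
    """
    return total_bw_g * (current_util - util_threshold + bw_reserve)
```

With the fig. 3 numbers (100G link, 83% utilization, 80% threshold, 4% reservation) this yields 7G, matching the worked example below.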
Fig. 3 illustrates a scenario of a low latency high guarantee, flexible large bandwidth, internet hybrid slice link according to one embodiment of the present disclosure.
Referring to fig. 3, the physical bandwidth of the P3-PE3 relay link is 100G, the utilization threshold is 80%, and the bandwidth reservation is 4%. The high-priority slice 301 carries 50G, the medium-priority slice 302 carries 15G, and the low-priority slices 303 and 304 together carry 18G; the sum of the slice traffic is 83G (83% utilization), which exceeds the 80G traffic threshold. According to equation (1), the first traffic to be moved away to relieve the congestion of the P3-PE3 relay link is 100G × (83% − 80% + 4%) = 7G.
In step S230, a slice to be adjusted is acquired according to the first flow.
In this step, the SDN controller obtains a slice to be adjusted according to the first flow. In one embodiment, slices through the first congested relay link may be prioritized; the slice to be adjusted that can call the first flow is then determined in order of priority from low to high.
For example, referring to fig. 3, the high-priority slice 301, medium-priority slice 302, and low-priority slices 303 and 304 are ordered by priority from low to high as: low-priority slice 303 (10G), low-priority slice 304 (8G), medium-priority slice 302 (15G), high-priority slice 301 (50G). The slice to be adjusted that can give up the first traffic of 7G is, in order of priority from low to high, low-priority slice 303 (10G) or low-priority slice 304 (8G). In one embodiment, when several slices of the same priority level can give up the first traffic, the slice whose traffic is closest to the first traffic is preferentially selected as the slice to be adjusted. In the embodiment of fig. 3, the low-priority slice 304 (8G) would then be selected to give up the first traffic of 7G.
In one embodiment, when several slices of the same priority level can give up the first traffic, a slice that already has sub-slices is preferentially selected as the slice to be adjusted. For example, if low-priority slice 303 (10G) already has split traffic on P4-PE3, it may be preferentially selected as the slice to be adjusted, to avoid splitting low-priority slice 304 (8G) onto other relay links.
In one embodiment, when the total traffic of the third-priority slices is smaller than the first traffic, all third-priority slices together with the second-priority slices needed to make up the first traffic are determined, in order of priority from low to high, as the slices to be adjusted. For example, if the first traffic is 19G, the combined traffic of low-priority slice 303 (10G) and low-priority slice 304 (8G) is 18G < 19G, so low-priority slice 303 (10G), low-priority slice 304 (8G), and medium-priority slice 302 (15G) are selected as the slices to be adjusted.
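The selection logic described above — sort by priority, prefer the slice closest in size to what is still needed, and spill over into the next priority level when the lower level is insufficient — can be sketched as follows (an illustrative simplification: the sub-slice preference tie-breaker is omitted; all names are assumptions, not from the patent):

```python
def pick_slices_to_adjust(slices, needed_g):
    """slices: list of (name, priority, traffic_g); a smaller priority
    number means a lower (more readily adjustable) priority.

    Returns the selected slices, lowest priority first. Within one
    priority level, the slice whose traffic is closest to the amount
    still needed is preferred."""
    remaining = needed_g
    chosen = []
    pool = sorted(slices, key=lambda s: s[1])  # low priority first
    while remaining > 0 and pool:
        level = pool[0][1]
        same_level = [s for s in pool if s[1] == level]
        # prefer the candidate whose traffic is closest to what is still needed
        best = min(same_level, key=lambda s: abs(s[2] - remaining))
        chosen.append(best)
        remaining -= best[2]
        pool.remove(best)
    return chosen
```

On the fig. 3 data this selects slice 304 for a 7G requirement, and slices 303, 304, and 302 for a 19G requirement, matching the two embodiments above.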
In step S240, the flow corresponding to the first flow in the slice to be adjusted is adjusted to relieve the congestion of the first congested relay link.
In this step, the SDN controller adjusts the traffic corresponding to the first traffic in the slice to be adjusted so as to relieve the congestion of the first congested relay link. For example, referring to fig. 3, the 7G corresponding to the first traffic in the slice to be adjusted, low-priority slice 304 (8G), is moved away to relieve the congestion of relay link P3-PE3.
In one embodiment, the traffic corresponding to the first traffic in the slice to be adjusted is moved to a first receiving link to relieve the congestion of the first congested relay link; the utilization of the first receiving link after receiving the first traffic must be less than or equal to the utilization threshold of the first receiving link.
Fig. 4 shows a scenario of an adjusted low latency high guarantee, flexible large bandwidth, internet hybrid slice link corresponding to the embodiment of fig. 3.
Referring to fig. 4, the slice to be adjusted, low-priority slice 304 (8G), is split into low-priority slice 3041 (1G) and low-priority slice 3042 (7G). The 7G corresponding to the first traffic is moved to relay link P4-PE3 to relieve the congestion of relay link P3-PE3. The utilization of relay link P4-PE3 after receiving the first traffic of 7G must be less than or equal to its utilization threshold.
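The acceptance condition on the receiving link can be sketched as a simple predicate (an illustrative sketch; names and the sample figures in the test are assumptions, not values from the patent):

```python
def can_receive(total_bw_g: float, current_traffic_g: float,
                util_threshold: float, moved_g: float) -> bool:
    """True if the receiving link's utilization after absorbing the moved
    traffic stays at or below its utilization threshold."""
    return (current_traffic_g + moved_g) / total_bw_g <= util_threshold
```

A candidate receiving link such as P4-PE3 would only be used when this predicate holds for the traffic being moved onto it.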
According to the link congestion scheduling method of this embodiment, the first congested relay link among all links traversed by the first-priority slice is obtained; the first traffic that must be moved away to relieve its congestion is obtained; a slice to be adjusted is determined according to the first traffic; and traffic corresponding to the first traffic in that slice is adjusted to relieve the congestion. Fine-grained relief of link congestion is thereby achieved, the path quality of the high-priority slice is preserved to the greatest extent, and low-priority slices can still use optimal paths.
The link congestion scheduling method of the present disclosure is described below with reference to specific embodiments.
Fig. 5 illustrates a scenario of a low latency high guarantee, flexible large bandwidth, internet hybrid slice link according to one embodiment of the present disclosure.
The embodiment of fig. 5 presents a scenario in which users obtain differentiated service bearers based on SRv6 Policy over an operator's backbone private line. As shown in fig. 5, assume the physical bandwidth of each link between backbone devices is 50G. The backbone carries slices of different priorities: low-latency high-guarantee slice 501 (high priority), elastic large-bandwidth slice 502 (medium priority), and Internet slices 5031, 5032, 5041, and 5042 (low priority). Internet slices 5031 and 5032 are sub-slices of one slice; Internet slices 5041 and 5042 are sub-slices of another. The traffic is: slice 501, 10G; slice 502, 5G; slice 5031, 18G; slice 5032, 2G; slice 5041, 12G; slice 5042, 3G.
The bandwidth utilization threshold of each link is 80%. On the P3-PE3 link the total traffic reaches 10 + 5 + 18 + 12 = 45G, and 45G / 50G = 90%, which exceeds the 80% threshold. Link congestion scheduling for SRv6 Policy is therefore triggered. The specific process is as follows:
The SDN controller 104 tuning module continuously monitors all links PE1-P1-P3-PE3 through which the high-priority slice's path passes, finds that the delay, packet loss rate, and jitter of the high-priority slice are abnormal, and on further inspection finds that the current bandwidth utilization of the P3-PE3 relay link has reached 90%, exceeding the 80% threshold; link congestion scheduling is therefore triggered. The tuning module looks up all SRv6 Policy tunnels passing through the congested relay link P3-PE3 and finds four: high-priority slice 501, medium-priority slice 502, low-priority slice 5031, and low-priority slice 5041. Ordered by priority from low to high: low-priority slice 5031 (18G), low-priority slice 5041 (12G), medium-priority slice 502 (5G), high-priority slice 501 (10G). The tuning module then calculates the first traffic that must be moved away to relieve the congestion (assuming a bandwidth reservation of 4%):
first traffic required to relieve congestion = link total bandwidth x (link current bandwidth utilization-utilization threshold + bandwidth reservation)
Then: first flow=50g (90% -80% +4%) =7g
Following the principle of preferring the slice whose traffic is closest to the first traffic, the SDN controller 104 tuning module selects, among the low-priority slices, the one closest to 7G: low-priority slice 5041 (12G) is closest, so 5041 is selected as the slice to be adjusted, with 7G of its traffic to be moved away. The tuning module invokes the SDN controller 104 computation module to perform link congestion scheduling on low-priority slice 5041. Scheduling follows the order: weight adjustment > traffic splitting > traffic offloading. Weight adjustment applies when the slice giving up the first traffic already has a sub-slice: for example, low-priority slice 5041 has the sub-slice 5042, so moving traffic from 5041 to 5042 changes their relative proportions, which is called weight adjustment. Traffic splitting applies when the slice has no sub-slice: the slice is split into at least two sub-slices to achieve the adjustment. Traffic offloading means moving the whole slice off the link directly.
In one embodiment, when traffic (a path) is split, the preferred target link is one for which the ratio of the remaining physical bandwidth of the link where the pre-split slice resides to the remaining physical bandwidth of the target link is less than or equal to the weight ratio of the split slice. For example, suppose the total traffic of the pre-split slice is 8G and the split weight ratio is 2G : 6G = 1 : 3, with the 6G portion split to the target link; if the remaining physical bandwidth of the source link is 5G, a target link with remaining physical bandwidth of at least 15G is selected.
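The target-link condition above reduces to a simple lower bound on the target's remaining bandwidth (a minimal sketch; the function name and its arguments are illustrative, not from the patent):

```python
def min_target_residual_g(source_residual_g: float,
                          keep_weight: float, move_weight: float) -> float:
    """Smallest remaining physical bandwidth a target link needs so that
    residual(source) : residual(target) <= keep_weight : move_weight,
    i.e. residual(target) >= residual(source) * move_weight / keep_weight."""
    return source_residual_g * move_weight / keep_weight
```

With the example figures (source residual 5G, split ratio 2 : 6), the bound is 5G × 6 / 2 = 15G, matching the text.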
In the embodiment of fig. 5, weight adjustment can be used. The weight ratio of low-priority slices 5041 and 5042 is adjusted as follows:
Before adjustment: 5041 : 5042 = 12 : 3 = 4 : 1
After adjustment: 5041 : 5042 = (12 − 7) : (3 + 7) = 5 : 10 = 1 : 2
By adjusting this weight ratio, the traffic of low-priority slice 504 changes from 5041 : 5042 = 12G : 3G to 5041 : 5042 = 5G : 10G. After the adjustment, the traffic on the P3-PE3 link drops to 10 + 5 + 18 + 5 = 38G, so the current bandwidth utilization becomes 38G / 50G = 76% < 80%. The congestion of link P3-PE3 is thereby relieved.
In the embodiment of fig. 5, the congested relay link among all links traversed by the high-priority slice is obtained; the first traffic that must be moved away to relieve its congestion is obtained; a slice to be adjusted is determined according to the first traffic; and the traffic corresponding to the first traffic in that slice is moved by weight adjustment to relieve the congestion. Fine-grained relief of link congestion is thereby achieved, the path quality of the high-priority slice is preserved to the greatest extent, and low-priority slices can still use optimal paths.
In other embodiments, if moving away all the low- and medium-priority slice traffic that the link can shed still cannot relieve the congestion, the links of the high-priority slice are re-optimized. The high-priority slice has active/standby protection, and its re-optimization does not involve weight adjustment or path splitting.
Fig. 6 is a schematic structural diagram of a link congestion scheduling apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, a link congestion scheduling apparatus 600 provided by an embodiment of the present disclosure may include:
an obtaining module 610, configured to obtain a first congested relay link from all links through which a first priority slice passes;
the obtaining module 610 is further configured to obtain a first flow that needs to be adjusted to relieve congestion of the first congested relay link;
the acquiring module 610 is further configured to acquire a slice to be adjusted according to the first flow;
and an adjustment module 620, configured to adjust a flow corresponding to the first flow in the slice to be adjusted to relieve the congestion of the first congested relay link.
In the link congestion scheduling device of fig. 6, the obtaining module acquires the first congested relay link among all links through which the first-priority slice passes, acquires the first traffic that must be shifted away to relieve the congestion of the first congested relay link, and obtains the slice to be adjusted according to the first traffic; the adjustment module then shifts the traffic corresponding to the first traffic in the slice to be adjusted to relieve the congestion. Link congestion can thus be adjusted at a fine granularity: the path quality of the high-priority slice is guaranteed to the greatest extent, while the low-priority slices can still use optimal paths.
In one embodiment, the obtaining module 610 is further configured to monitor the delay, the packet loss rate, and the jitter of the first priority slice to obtain the first congested relay link.
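The monitoring step could look like the following sketch, which flags a relay link on the first-priority slice's path once any monitored KPI crosses a threshold. The KPI threshold values and the link-tuple layout are assumptions for illustration, not values from the disclosure.

```python
def is_congested(delay_ms, loss_rate, jitter_ms,
                 max_delay_ms=20.0, max_loss=0.001, max_jitter_ms=5.0):
    """True if any monitored KPI (delay, packet loss rate, jitter) exceeds its threshold."""
    return delay_ms > max_delay_ms or loss_rate > max_loss or jitter_ms > max_jitter_ms

def first_congested_link(links):
    """links: list of (name, delay_ms, loss_rate, jitter_ms) along the slice's path.
    Returns the name of the first congested relay link, or None."""
    for name, delay, loss, jitter in links:
        if is_congested(delay, loss, jitter):
            return name
    return None
```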
In one embodiment, the obtaining module 610 is further configured to obtain the first traffic that needs to be adjusted to relieve the congestion of the first congested relay link according to the total bandwidth of the first congested relay link, the current bandwidth utilization of the first congested relay link, the utilization threshold of the first congested relay link, and the bandwidth reservation of the first congested relay link.
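One plausible reading of how these four quantities combine (an assumption for illustration, not the disclosure's exact formula) is: shift away enough traffic that utilization falls to the threshold while keeping the bandwidth reservation free.

```python
def first_traffic_g(total_bw_g, current_util, util_threshold, reservation_g):
    """Traffic (Gbps) to shift away so the link drops to its utilization
    threshold with `reservation_g` of headroom preserved."""
    current = total_bw_g * current_util                    # traffic on the link now
    allowed = total_bw_g * util_threshold - reservation_g  # traffic allowed to remain
    return max(0.0, current - allowed)

# With the Fig. 5 numbers (50G link at 90%, 80% threshold) and an assumed 2G
# reservation, 45 - (40 - 2) = 7G would have to be shifted away.
needed = first_traffic_g(50, 0.9, 0.8, 2)
```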
In one embodiment, the obtaining module 610 is further configured to sort the slices passing through the first congested relay link by priority, and to determine, in order of priority from low to high, the slices to be adjusted from which the first traffic can be shifted away.
In one embodiment, the obtaining module 610 is further configured to, when there are multiple slices of the same priority among the slices from which the first traffic can be shifted away, preferentially select a slice that already has sub-slices as the slice to be adjusted.
In one embodiment, the obtaining module 610 is further configured to, when there are multiple slices of the same priority among the slices from which the first traffic can be shifted away, preferentially select a slice whose traffic is closest to the first traffic as the slice to be adjusted.
In one embodiment, the obtaining module 610 is further configured to, when the total traffic of the third-priority slices is smaller than the first traffic, determine, in order of priority from low to high, all third-priority slices plus the second-priority slices from which the remainder of the first traffic can be shifted away as the slices to be adjusted.
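The selection rules in the preceding paragraphs can be sketched as one greedy pass: lowest priority first, and within a priority tie preferring slices that already have sub-slices, then the slice whose traffic is closest to what still needs moving. The dictionary field names are illustrative assumptions.

```python
def pick_slices(slices, first_traffic):
    """slices: list of dicts with 'name', 'priority' (3 = lowest), 'traffic' (Gbps),
    'has_subslices'. Returns the names of slices chosen to shift first_traffic away."""
    chosen, remaining = [], first_traffic
    pool = sorted(slices, key=lambda s: -s["priority"])  # priority 3 before 2 before 1
    while remaining > 0 and pool:
        # Candidates at the current (lowest remaining) priority level.
        tie = [s for s in pool if s["priority"] == pool[0]["priority"]]
        # Prefer slices with existing sub-slices, then traffic closest to the remainder.
        tie.sort(key=lambda s: (not s["has_subslices"], abs(s["traffic"] - remaining)))
        best = tie[0]
        pool.remove(best)
        chosen.append(best["name"])
        remaining -= best["traffic"]
    return chosen
```

For example, with two third-priority slices totalling 7G and a 9G first traffic, the pass exhausts the third priority and continues into the second, matching the rule above.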
In one embodiment, the adjustment module 620 is further configured to shift the traffic corresponding to the first traffic in the slice to be adjusted onto a first receiving link to relieve the congestion of the first congested relay link, where the utilization of the first receiving link after receiving the first traffic is smaller than or equal to the utilization threshold of the first receiving link.
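The receiving-link condition amounts to a simple admission check: the shifted traffic may only land on a link whose resulting utilization stays at or below that link's own threshold. A minimal sketch, with illustrative parameter names:

```python
def can_receive(link_bw_g, link_traffic_g, util_threshold, shifted_g):
    """True if the receiving link stays at or below its utilization
    threshold after absorbing shifted_g of redirected traffic."""
    return (link_traffic_g + shifted_g) / link_bw_g <= util_threshold
```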
Referring to fig. 7, fig. 7 is a schematic structural diagram of a computer device 700 according to an embodiment of the present disclosure. As shown in fig. 7, a computer device in an embodiment of the present disclosure may include: one or more processors 701, memory 702, and input-output interfaces 703. The processor 701, the memory 702, and the input-output interface 703 are connected via a bus 704. The memory 702 is used for storing a computer program including program instructions, and the input-output interface 703 is used for receiving data and outputting data, such as for data interaction between a host and a computer device, or for data interaction between virtual machines in a host; the processor 701 is configured to execute program instructions stored in the memory 702.
The processor 701 may perform the following operations, among others:
acquiring a first congestion relay link in all links through which a first priority slice passes; acquiring a first flow which is required to be adjusted for relieving the congestion of a first congestion relay link; obtaining a slice to be adjusted according to the first flow; and adjusting the flow corresponding to the first flow in the slice to be adjusted so as to relieve the congestion of the first congestion relay link.
In some possible implementations, the processor 701 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 702 may include read only memory and random access memory, and provides instructions and data to the processor 701 and input output interface 703. A portion of the memory 702 may also include non-volatile random access memory. For example, the memory 702 may also store information of device type.
In a specific implementation, the computer device may, through its built-in functional modules, execute the implementations provided by the steps in the foregoing embodiments; for details, refer to those implementations, which are not repeated here.
Embodiments of the present disclosure provide a computer device comprising a processor, an input/output interface, and a memory. The processor obtains the computer program from the memory and executes the steps of the method shown in the above embodiments to perform the scheduling operations.
The embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program adapted to be loaded by a processor to execute the method provided by each step in the foregoing embodiments; for details, refer to the implementations provided by those steps, which are not repeated here. Likewise, the description of the beneficial effects of the same method is omitted. For technical details not disclosed in the storage-medium embodiments of the present disclosure, refer to the description of the method embodiments of the present disclosure. As an example, the computer program may be deployed to be executed on one computer device, or on multiple computer devices located at one site or distributed across multiple sites and interconnected by a communication network.
The computer readable storage medium may be an apparatus provided in any of the foregoing embodiments or an internal storage unit of the computer device, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card (flash card) or the like, which are provided on the computer device. Further, the computer-readable storage medium may also include both internal storage units and external storage devices of the computer device. The computer-readable storage medium is used to store the computer program and other programs and data required by the computer device. The computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
The disclosed embodiments also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the various alternatives in the above embodiments.
The terms "first", "second" and the like in the description, claims, and drawings of the embodiments of the disclosure are used to distinguish between different objects, not to describe a particular sequential order. Furthermore, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, article, or device that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements not listed or inherent to such process, method, apparatus, article, or device.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in this description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The methods and related devices provided by the embodiments of the present disclosure are described with reference to the method flowcharts and/or structural diagrams provided by the embodiments of the present disclosure, and each flow and/or block of the flowcharts and/or structural diagrams, and combinations thereof, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, executed via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or structural diagram block or blocks.
The foregoing disclosure is merely illustrative of the presently preferred embodiments of the present disclosure, and it is not intended to limit the scope of the claims hereof, as defined by the appended claims.

Claims (12)

1. A method for scheduling link congestion, comprising:
acquiring a first congestion relay link in all links through which a first priority slice passes;
acquiring a first flow which is required to be adjusted for relieving the congestion of the first congestion relay link;
obtaining a slice to be adjusted according to the first flow;
and adjusting the flow corresponding to the first flow in the slice to be adjusted so as to relieve the congestion of the first congestion relay link.
2. The method of claim 1, wherein acquiring a first congested relay link among all links traversed by a first priority slice comprises:
and monitoring the time delay, the packet loss rate and the jitter of the first priority slice to acquire the first congestion relay link.
3. The method of claim 1, wherein acquiring the first traffic that needs to be adjusted away to relieve the congestion of the first congested relay link comprises:
acquiring, according to the total bandwidth of the first congested relay link, the current bandwidth utilization of the first congested relay link, the utilization threshold of the first congested relay link, and the bandwidth reservation of the first congested relay link, the first traffic that needs to be adjusted away to relieve the congestion of the first congested relay link.
4. The method of claim 1, wherein obtaining a slice to be adjusted based on the first flow comprises:
sorting the slices through the first congested relay link by priority;
determining the slices to be adjusted capable of adjusting the first traffic in order of priority from low to high.
5. The method of claim 4, wherein determining the slice to be adjusted that is capable of adjusting the first traffic in a low to high order of priority comprises:
when there are a plurality of slices of the same priority among the slices from which the first traffic can be shifted away, preferentially selecting a slice that already has sub-slices as the slice to be adjusted.
6. The method of claim 4, wherein determining the slice to be adjusted that is capable of adjusting the first traffic in a low to high order of priority comprises:
when there are a plurality of slices of the same priority among the slices from which the first traffic can be shifted away, preferentially selecting a slice whose traffic is closest to the first traffic as the slice to be adjusted.
7. The method of claim 4, wherein determining the slice to be adjusted that is capable of adjusting the first traffic in a low to high order of priority comprises:
when the total traffic of the slices of the third priority is smaller than the first traffic, determining, in order of priority from low to high, all slices of the third priority and the part of the slices of the second priority from which the remainder of the first traffic can be shifted away as the slices to be adjusted.
8. The method of claim 1, wherein adjusting the flow in the slice to be adjusted corresponding to the first flow to decongest the first congested relay link comprises:
adjusting the flow corresponding to the first flow in the slice to be adjusted to a first receiving link to relieve the congestion of the first congestion relay link;
the utilization rate of the first receiving link after receiving the first flow is smaller than or equal to the utilization rate threshold of the first receiving link.
9. A link congestion scheduling apparatus, comprising:
an acquisition module, configured to acquire a first congestion relay link in all links through which a first priority slice passes;
the acquisition module is further used for acquiring first flow which is required to be adjusted for relieving the congestion of the first congestion relay link;
the acquisition module is also used for acquiring the slice to be adjusted according to the first flow;
and the adjusting module is used for adjusting the flow corresponding to the first flow in the slice to be adjusted so as to relieve the congestion of the first congestion relay link.
10. A computer device, characterized by comprising a processor, a memory and an input/output interface;
the processor is respectively connected with the memory and the input/output interface, wherein the input/output interface is used for receiving data and outputting data, the memory is used for storing a computer program, and the processor is used for calling the computer program to enable the computer device to execute the method of any one of claims 1-8.
11. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program adapted to be loaded and executed by a processor to cause a computer device having the processor to perform the method of any of claims 1-8.
12. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1-8.
CN202211020941.2A 2022-08-24 2022-08-24 Link congestion scheduling method, device, equipment, medium and program Pending CN117675711A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211020941.2A CN117675711A (en) 2022-08-24 2022-08-24 Link congestion scheduling method, device, equipment, medium and program


Publications (1)

Publication Number Publication Date
CN117675711A true CN117675711A (en) 2024-03-08

Family

ID=90066685



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination