CN114928614B - Deterministic network load balancing method and system based on SDN - Google Patents

Deterministic network load balancing method and system based on SDN

Info

Publication number
CN114928614B
CN114928614B
Authority
CN
China
Prior art keywords
forwarding
network
sdn
traffic
load balancing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210527428.6A
Other languages
Chinese (zh)
Other versions
CN114928614A (en)
Inventor
荆山
张雪
赵川
魏亮
陈贞翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN202210527428.6A priority Critical patent/CN114928614B/en
Publication of CN114928614A publication Critical patent/CN114928614A/en
Application granted granted Critical
Publication of CN114928614B publication Critical patent/CN114928614B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/12 - Avoiding congestion; Recovering from congestion
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2408 - Traffic characterised by specific attributes, e.g. priority or QoS for supporting different services, e.g. a differentiated services [DiffServ] type of service
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2425 - Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
    • H04L 47/2433 - Allocation of priorities to traffic types
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 - Reducing energy consumption in communication networks
    • Y02D 30/50 - Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Abstract

The application provides an SDN-based deterministic network load balancing method and system in the technical field of computer networks. The method acquires the traffic states of all interfaces and queues in the current SDN network. On the edge devices, traffic of different service types is distinguished, its priority is determined, and it is forwarded according to a preset forwarding policy, so that traffic of each service type is guaranteed to reach its destination within the specified time. The forwarding state of the switches is checked periodically; when traffic congestion is detected, load balancing among controllers is performed according to the traffic states of all interfaces and queues in the SDN network, and a set neural network algorithm determines the optimal forwarding path in the current SDN network so that the congested traffic is transmitted on that path. This reduces the risk of network bottlenecks and breakdown and keeps the selected path up to date, thereby guaranteeing the transmission rate of delay-sensitive services in the network.

Description

Deterministic network load balancing method and system based on SDN
Technical Field
The application belongs to the technical field of computer networks, and particularly relates to a deterministic network load balancing method and system based on SDN.
Background
The statements in this section merely provide background information related to the present application and may not necessarily constitute prior art.
In a practical network environment, users often encounter network congestion and slow network speeds, which are caused by excessive traffic and insufficient bandwidth to meet demand. In many cases the network resources are not fully utilized; rather, they are allocated in a way that cannot meet users' needs. Improper allocation of network resources both wastes resources and causes individual links to become congested.
In traditional networks, load balancing of traffic is mostly implemented with dedicated load balancers. A load balancer can improve the utilization of network resources to some extent, but it is expensive and unsuitable for large-scale networks. Static load balancing methods fix the direction of traffic when the network is first built; they require adding redundant devices or links wherever traffic is expected to be heavy, which increases cost, and they are inflexible and cannot adapt to real changes in network traffic.
The advent of deterministic networking places higher demands on the transmission performance of the network. Deterministic networks mainly target low-latency, reliable and stable services. Whereas traditional networks rely on a "best effort" shared environment, a deterministic network divides traffic into different classes according to service demands and can provide differentiated services, so that traffic reaches its destination within a specified time.
Software-defined networking (Software Defined Network, SDN) is a new network architecture proposed by the Clean Slate research group at Stanford University in the United States and is one realization of network virtualization. Its advantages over traditional networks are that the network is programmable and can respond flexibly to changes in upper-layer applications, and that a centralized, unified control and management layer decouples the management plane, control plane and data plane of the network devices, enabling flexible control of traffic. The software-defined network architecture can be divided into three planes and two interfaces: the control plane connects to the application plane through the northbound interface and to the data plane through the southbound interface, and the forwarding rules of the data plane are issued by the controller through a unified interface. Software-defined networking breaks the limitations of traditional network devices and allows the network to better meet users' demands.
Software-defined networking brings both opportunities and challenges for network traffic load balancing. In a software-defined network, traffic is forwarded according to flow-table rules: when traffic reaches a switch, the flow table is searched first; if the switch has a matching forwarding rule, the traffic is forwarded according to that rule, and otherwise the switch sends a packet_in message to the controller to obtain one. The controller therefore plays an important role in traffic forwarding. The network devices of the data plane are managed uniformly by the control plane and periodically report their state information to the controller, so the control plane can easily obtain data-plane information, which provides convenient conditions for network load balancing. In a traditional software-defined network, the control plane has only one controller device, but in a large-scale backbone network the heavy traffic makes it easy for the controller to become overloaded, and the network may even break down. In addition, the links come under significant pressure, and unreasonable allocation of link resources can cause link failures and waste other link resources.
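The table-miss behaviour described above can be pictured with a short sketch. The following Python fragment is purely illustrative and not part of the claimed method; the class names, match fields and controller callback are hypothetical stand-ins for an OpenFlow switch and its southbound channel to the controller.

```python
# Minimal sketch (assumptions throughout): match the flow table first,
# and raise a packet_in to the controller on a table miss.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional, Tuple

FlowKey = Tuple[str, str, int]          # (src_ip, dst_ip, tos) -- assumed match fields


@dataclass
class FlowEntry:
    out_port: int
    priority: int = 0


@dataclass
class Switch:
    flow_table: Dict[FlowKey, FlowEntry] = field(default_factory=dict)
    # Callback standing in for the southbound channel to the SDN controller.
    packet_in_handler: Optional[Callable[[FlowKey], FlowEntry]] = None

    def forward(self, key: FlowKey) -> int:
        entry = self.flow_table.get(key)
        if entry is None and self.packet_in_handler is not None:
            # Table miss: ask the controller for a rule (packet_in / flow_mod).
            entry = self.packet_in_handler(key)
            self.flow_table[key] = entry
        return entry.out_port if entry else -1  # -1: drop when no rule exists


if __name__ == "__main__":
    controller = lambda key: FlowEntry(out_port=2)       # toy controller: always "port 2"
    sw = Switch(packet_in_handler=controller)
    print(sw.forward(("10.0.0.1", "10.0.0.2", 184)))     # miss -> packet_in -> 2
    print(sw.forward(("10.0.0.1", "10.0.0.2", 184)))     # hit -> 2
```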
Disclosure of Invention
In order to overcome the defects of the prior art, the application provides an SDN-based deterministic network load balancing method and system, with the aim of guaranteeing the transmission rate of delay-sensitive services in a backbone network under a software-defined network environment.
The technical scheme adopted by the application is as follows:
in a first aspect, an embodiment of the present application provides a deterministic network load balancing method based on SDN, which is used in an SDN network including a plurality of controllers, switches and edge devices, and includes:
acquiring flow states of all interfaces and queues in an SDN in real time;
setting a flow threshold of the switch according to the state information of the edge equipment, determining priorities corresponding to a plurality of flows when the flow state of the switch exceeds the set flow threshold, and forwarding the flows according to a preset forwarding strategy;
and detecting the forwarding state of the switches at regular intervals, and when it is determined that traffic congestion occurs, carrying out load balancing among controllers according to the traffic states of all interfaces and queues in the SDN network, determining all forwarding paths of the congested traffic between the switches, and determining the optimal forwarding path in the current SDN network by using a set neural network algorithm, so that the congested traffic is transmitted on the current optimal forwarding path.
In one possible implementation, the forwarding policy includes: setting QoS and a plurality of queues at an interface of the edge equipment; when a plurality of flows reach the edge equipment, distinguishing the flows of different service types, and determining the priority of the flows of different service types; and sending the data packets to corresponding queues for forwarding according to the priority.
In one possible implementation, traffic of different service types is distinguished by the ToS (type of service) field in the data packet; different ToS values represent different qualities of service, and the priorities of the traffic of different service types are determined according to the ToS value.
In one possible implementation, the queues include a high-speed forwarding queue, a medium-speed forwarding queue, and a low-speed forwarding queue; and sending the data packets into each forwarding queue in turn according to the order of the priority from high to low, and when the high-speed forwarding queue is congested, interrupting the sending of the data packets of other forwarding queues, and preferentially sending the data packets with high priority.
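As an illustrative aid, the mapping from ToS value to priority and then to one of the three forwarding queues can be sketched as follows. This Python sketch is an assumption-laden illustration, not the patented implementation; the ToS cut-off values, queue names and strict-priority service discipline shown here are hypothetical choices.

```python
# Sketch (assumed thresholds and names): classify packets by ToS into one of
# three queues and always serve the high-speed queue first.
import queue
from typing import Optional

HIGH, MEDIUM, LOW = "high_speed", "medium_speed", "low_speed"
queues = {name: queue.Queue() for name in (HIGH, MEDIUM, LOW)}


def priority_from_tos(tos: int) -> int:
    """Higher ToS values are treated as higher priority (cut-offs are assumed)."""
    if tos >= 0xB8:      # e.g. delay-sensitive traffic
        return 2
    if tos >= 0x28:      # e.g. medium-priority traffic
        return 1
    return 0             # best effort


def enqueue(packet: dict) -> str:
    prio = priority_from_tos(packet.get("tos", 0))
    target = {2: HIGH, 1: MEDIUM, 0: LOW}[prio]
    queues[target].put(packet)
    return target


def dequeue_next() -> Optional[dict]:
    """Strict priority service: the high-speed queue is always drained first."""
    for name in (HIGH, MEDIUM, LOW):
        if not queues[name].empty():
            return queues[name].get()
    return None


if __name__ == "__main__":
    enqueue({"src": "10.0.0.1", "dst": "10.0.0.9", "tos": 0xB8})
    enqueue({"src": "10.0.0.3", "dst": "10.0.0.9", "tos": 0x00})
    print(dequeue_next())   # the ToS 0xB8 packet is served first
```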
In one possible implementation, load balancing between controllers is performed in the following manner:
selecting one controller from the plurality of controllers as a root controller for synchronizing state information among other non-root controllers, and setting a threshold value for each controller;
when the load of the target non-root controller is monitored to exceed the threshold value, actively sending a load request to other non-root controllers except the target non-root controller, and migrating the load of the target non-root controller according to the load states and migration costs of the other non-root controllers.
In one possible implementation, if the migration is unsuccessful, it is determined that there is excessive traffic in the SDN network; a certain number of controllers are added within the budget cost, and the most heavily loaded switch managed by the overloaded target non-root controller is migrated to the newly added controller for management.
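A minimal sketch of the controller-level decision described above is given below. It is illustrative only; the load metric, threshold values and the way spare capacity is weighed against migration cost are assumptions rather than the patented algorithm.

```python
# Sketch under assumptions (hypothetical names and cost model): an overloaded
# non-root controller asks its peers for their load, migrates to the peer with
# the best spare capacity / migration cost, and falls back to adding a
# controller within the budget if no peer can absorb the load.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Controller:
    name: str
    load: float                  # e.g. packet_in rate handled per second
    threshold: float
    migration_cost: float = 1.0  # assumed relative cost of migrating to this peer


def pick_migration_target(peers: List[Controller]) -> Optional[Controller]:
    """Return the peer best able to absorb load, or None if nobody has headroom."""
    candidates = [p for p in peers if p.load < p.threshold]
    if not candidates:
        return None
    # Favour large spare capacity and low migration cost (weighting is assumed).
    return max(candidates, key=lambda p: (p.threshold - p.load) / p.migration_cost)


def rebalance(target: Controller, peers: List[Controller],
              budget_allows_new_controller: bool) -> str:
    if target.load <= target.threshold:
        return "no action"
    peer = pick_migration_target(peers)
    if peer is not None:
        return f"migrate heaviest switch from {target.name} to {peer.name}"
    if budget_allows_new_controller:
        return "add a controller and migrate the heaviest switch to it"
    return "migration unsuccessful and budget exhausted"


if __name__ == "__main__":
    c2 = Controller("c2", load=950, threshold=800)
    peers = [Controller("c3", 300, 800, migration_cost=1.0),
             Controller("c4", 700, 800, migration_cost=0.5)]
    print(rebalance(c2, peers, budget_allows_new_controller=True))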
In one possible implementation, the neural network algorithm includes a deep Q-network (DQN) from deep reinforcement learning. In reinforcement learning, the state is defined as the throughput and transmission delay of a path and serves as the input of the neural network; the action is defined as the selectable forwarding paths at the current node; and the reward is given according to the throughput and transmission delay of the link after an action is selected.
In a second aspect, an embodiment of the present application provides an SDN-based deterministic network load balancing system, for use in an SDN network including a plurality of controllers, switches, and edge devices, including:
the acquisition module is used for acquiring the flow states of all interfaces and queues in the SDN in real time;
the flow forwarding module is used for setting a flow threshold of the switch according to the state information of the edge equipment, determining priorities corresponding to a plurality of flows when the flow state of the switch exceeds the set flow threshold, and forwarding the flows according to a preset forwarding strategy;
the congestion detection module is used for detecting the forwarding state of the switch at regular time, when the occurrence of traffic congestion is judged, load balancing among controllers is carried out according to the traffic states of all interfaces and queues in the SDN network, all forwarding paths of the congestion traffic among the switches are determined, and the optimal forwarding paths in the current SDN network are determined by using a set neural network algorithm, so that the congestion traffic is transmitted on the current optimal forwarding paths.
In a third aspect, embodiments of the present application provide a computer device, comprising: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory in communication via the bus when the computer device is running, the machine readable instructions when executed by the processor performing the steps of an SDN based deterministic network load balancing method as set forth in any one of the possible implementations of the first aspect and the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of an SDN based deterministic network load balancing method as set forth in any one of the possible implementations of the first aspect and the first aspect above.
The beneficial effects of this application are:
the method and system are based on a software-defined network architecture. Taking advantage of the fact that SDN makes it easier to obtain the network view and traffic data, the traffic states of all interfaces and queues in the current SDN network can be acquired, which saves cost and prepares for the subsequent load balancing among multiple controllers and among multiple links. On the edge devices, traffic of different service types is distinguished and its priority determined, so that traffic with stricter delay requirements is forwarded first and traffic of each service type reaches its destination within the specified time. When traffic congestion is determined to occur, load balancing among controllers is performed according to the traffic states of all interfaces and queues in the SDN network, and a set neural network algorithm determines the optimal forwarding path in the current SDN network so that the congested traffic is transmitted on that path; this reduces the possibility of network bottlenecks or even breakdown and keeps the selected path up to date, thereby guaranteeing the transmission rate of delay-sensitive services in the network.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application.
FIG. 1 is a topology of a large scale backbone network provided by an embodiment of the present application;
fig. 2 is a flowchart of a deterministic network load balancing method based on SDN provided in an embodiment of the present application;
fig. 3 is a flowchart of a deterministic network load balancing method based on SDN provided in another embodiment of the present application;
fig. 4 is a schematic structural diagram of a deterministic network load balancing system based on SDN provided in an embodiment of the present application;
fig. 5 is a system overview diagram of an SDN based deterministic network load balancing system provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a computer device according to an embodiment of the present application.
Detailed Description
The present application is further described below with reference to the drawings and examples.
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the present application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments in accordance with the present application. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
In a traditional software-defined network, the control plane has only one controller device. In a large-scale backbone network, however, the relatively heavy traffic makes it easy for the controller to become overloaded and may even cause the network to break down; in addition, the links come under great pressure, and unreasonable allocation of link resources can cause link failures and waste other link resources. Based on this, the present embodiment provides an SDN-based deterministic network load balancing method and system for guaranteeing the transmission rate of delay-sensitive services in a backbone network under a software-defined network environment. Specifically, as shown in fig. 1, the SDN network includes a plurality of controllers, switches and edge devices, where S1 and S11 are edge devices, S2 to S10 are switches on the backbone links, and the controllers, switches and edge devices form a plurality of links.
Based on this, as shown in fig. 1 and fig. 2, the deterministic network load balancing method based on SDN provided in the embodiment of the present application includes the following steps:
s201: and acquiring the traffic states of all interfaces and queues in the SDN in real time.
In a specific implementation, the software-defined network architecture is divided into three planes and two interfaces. The control plane connects to the application plane through the northbound interface and to the data plane through the southbound interface, and the forwarding rules of the data plane are issued by the controller through a unified interface. After a switch at the network edge receives a data packet of a flow, it sends a packet_in message to the SDN controller if no matching rule for the flow is found in its flow table. This embodiment uses the facts that the SDN controller can issue flow tables and that the switch can request flow-table rules from the controller: when no matching flow entry exists in the switch, the packet_in packets are captured on the wire; when a matching flow entry exists in the switch, the data packets are captured every 30 seconds using tcpdump.
In this way, the traffic states of all interfaces and queues in the current SDN network are obtained, which saves cost and prepares for the subsequent load balancing among multiple controllers and among multiple links.
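For illustration, the periodic capture described above could be scripted roughly as follows. The sketch assumes a Linux host with tcpdump and the coreutils timeout command available; the interface name and capture window are hypothetical and would need adjusting for a real deployment.

```python
# Sketch (assumed interface name "s1-eth1" and 5-second capture window):
# capture traffic on a switch interface every 30 seconds so that per-interface
# and per-queue statistics can be derived from the pcap files.
import subprocess
import time


def capture_once(interface: str, out_file: str, duration_s: int = 5) -> None:
    """Run tcpdump for a short window and write the packets to a pcap file."""
    subprocess.run(
        ["timeout", str(duration_s), "tcpdump", "-i", interface, "-w", out_file],
        check=False,
    )


def periodic_capture(interface: str = "s1-eth1", period_s: int = 30) -> None:
    i = 0
    while True:
        capture_once(interface, f"capture_{i}.pcap")
        i += 1
        time.sleep(period_s)


if __name__ == "__main__":
    periodic_capture()
```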
S202: setting a flow threshold of the switch according to the state information of the edge equipment, determining priorities corresponding to a plurality of flows when the flow state of the switch exceeds the set flow threshold, and forwarding the flows according to a preset forwarding strategy.
The state information of the edge device includes information such as CPU usage and bandwidth, and the traffic state of the switch includes information such as resource usage and forwarding rate.
In this embodiment, as an optional embodiment, QoS and a plurality of queues are set at an interface of the edge device; when multiple flows reach the edge device, traffic of different service types is distinguished and its priority determined; and the data packets are sent to the corresponding queues for forwarding according to priority. Optionally, traffic of different service types is distinguished by the ToS field in the data packet, different ToS values represent different qualities of service, and the priorities of traffic of different service types are determined according to the ToS value. Optionally, the queues include a high-speed forwarding queue, a medium-speed forwarding queue and a low-speed forwarding queue; the data packets are sent into the forwarding queues in order of priority from high to low, and when the high-speed forwarding queue is congested, the sending of packets from the other forwarding queues is interrupted and high-priority packets are sent first.
In a specific implementation, the embodiment provides a method for realizing flow control shaping at an edge device, which specifically includes the following steps:
step a1: qoS and a plurality of queues are set at an interface of the edge equipment, each queue has different priority, when a plurality of traffic arrives at the edge equipment, the traffic with higher priority is forwarded preferentially, and the traffic is allocated with the queue with higher priority. A plurality of queues are arranged on an interface and used for forwarding traffic from different sources and purposes, so that the forwarding time of traffic sensitive to delay requirements in the edge equipment is shortened, and traffic with different delay requirements can be forwarded in different queues.
Step a2: the edge device flow priority forwarding algorithm is realized and can be specifically divided into the following steps:
and a21, under the condition of no congestion, all the messages can be forwarded normally according to the rule of the flow table.
Step a22: a traffic threshold is set on the switch according to information such as the CPU and bandwidth of the edge device; the resource usage and forwarding rate of the edge switch are checked every 30 seconds, and if they exceed the threshold, the traffic priority forwarding algorithm is started.
Step a23: the priority of a packet is determined according to the ToS value it carries, and the higher the priority, the earlier it is forwarded. Traffic of different service types is generated with the scapy traffic-generation tool and sent from different terminals (an illustrative generation sketch is given after step a24 below); the constructed data packets carry information such as the source IP address, destination IP address and ToS field. If the ToS value of the packets corresponding to a certain IP address is found to change, the priority of the corresponding flow entry is changed according to the new ToS value.
Step a24: packets with the highest priority are forwarded into the high-speed forwarding queue, packets with medium priority into the medium-speed forwarding queue, and packets with low priority into the low-speed forwarding queue. When the high-speed forwarding queue is congested, the sending of packets from the other forwarding queues is interrupted and high-priority packets are sent first.
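The test-traffic generation mentioned in step a23 can be sketched with scapy as follows; the destination addresses, ports and ToS markings below are hypothetical examples, and sending raw packets requires root privileges.

```python
# Sketch (assumed addresses, ports and ToS values): generate three flows with
# different ToS markings so the edge device can classify them into queues.
from scapy.all import IP, UDP, Raw, send

flows = [
    {"dst": "10.0.0.9", "dport": 5001, "tos": 0xB8},  # delay-sensitive (assumed)
    {"dst": "10.0.0.9", "dport": 5002, "tos": 0x28},  # medium priority (assumed)
    {"dst": "10.0.0.9", "dport": 5003, "tos": 0x00},  # best effort
]

for f in flows:
    pkt = IP(dst=f["dst"], tos=f["tos"]) / UDP(dport=f["dport"]) / Raw(b"x" * 64)
    send(pkt, count=10, inter=0.01, verbose=False)    # 10 packets per flow
```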
Therefore, by distinguishing the traffic of different service types on the edge equipment, determining the priority and forwarding the traffic according to the preset forwarding strategy, the traffic of different service types can be ensured to reach the destination within the specified time.
S203: detecting the forwarding state of the switches at regular intervals, and when it is determined that traffic congestion occurs, carrying out load balancing among controllers according to the traffic states of all interfaces and queues in the SDN network, determining all forwarding paths of the congested traffic between the switches, and determining the optimal forwarding path in the current SDN network by using a set neural network algorithm, so that the congested traffic is transmitted on the current optimal forwarding path.
In this embodiment, as an optional embodiment, load balancing between controllers is performed in the following manner: selecting one controller from the plurality of controllers as a root controller for synchronizing state information among other non-root controllers, and setting a threshold value for each controller; when the load of the target non-root controller is monitored to exceed the threshold value, actively sending a load request to other non-root controllers except the target non-root controller, and migrating the load of the target non-root controller according to the load states and migration costs of the other non-root controllers.
Optionally, if the migration is unsuccessful, it is determined that there is excessive traffic in the SDN network; a certain number of controllers are added within the budget cost, and the most heavily loaded switch managed by the overloaded target non-root controller is migrated to the newly added controller for management.
In an implementation, the load balancing process between controllers may be divided into the following parts:
the first part is statically configured, no traffic is forwarded in the initial situation, and the area and the equipment managed by the controller are statically set according to the topological structure. According to the quantity of network equipment and budget cost, the quantity of controllers is set, and when the quantity of actual flow cannot be predicted, a small quantity of controllers is set so as to reduce cost. The switches of the areas managed by the controllers are set according to the distribution condition of the network equipment, and the number of the switches managed by each controller is almost equal.
The second part is dynamic configuration: the load of a controller is defined as the traffic generated by the packet_in packets that the switches send to it. A large-scale backbone network carries a large amount of traffic and therefore generates many packet_in packets; when a controller's load becomes too high, the most heavily loaded switch in its managed area is migrated to the controller with the smallest load (a simplified sketch follows the third part below). The set of controllers is denoted C = {c1, c2, ..., cn}, where n >= 1 is the number of controllers; the set of switches is S = {s1, s2, s3, ..., sm}, where m is the number of switches; and the set of switches managed by controller j is denoted V_j = {v_i^j}, where j is the number of the controller and i is the number of the switch.
The third part is cost management: if all controllers are heavily loaded, controllers need to be added within the budget cost, and some switches are assigned to the newly added controllers for management.
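A simplified sketch of the first two parts, static partitioning of switches and the packet_in-based load measurement that identifies a migration candidate, is given below. All names and rate values are hypothetical; the sketch only illustrates the bookkeeping, not the patented procedure.

```python
# Sketch (hypothetical names and rates): assign switches to controllers roughly
# evenly, measure each controller's load as the packet_in rate of its switches,
# and pick the heaviest switch of the heaviest controller as migration candidate.
from typing import Dict, List, Tuple


def static_partition(switches: List[str], controllers: List[str]) -> Dict[str, List[str]]:
    """Round-robin assignment so each controller manages a nearly equal share."""
    mapping: Dict[str, List[str]] = {c: [] for c in controllers}
    for i, sw in enumerate(switches):
        mapping[controllers[i % len(controllers)]].append(sw)
    return mapping


def controller_load(mapping: Dict[str, List[str]],
                    packet_in_rate: Dict[str, float]) -> Dict[str, float]:
    return {c: sum(packet_in_rate.get(s, 0.0) for s in sws) for c, sws in mapping.items()}


def migration_candidate(mapping: Dict[str, List[str]],
                        packet_in_rate: Dict[str, float]) -> Tuple[str, str, str]:
    loads = controller_load(mapping, packet_in_rate)
    heaviest_ctrl = max(loads, key=loads.get)
    lightest_ctrl = min(loads, key=loads.get)
    heaviest_switch = max(mapping[heaviest_ctrl], key=lambda s: packet_in_rate.get(s, 0.0))
    return heaviest_switch, heaviest_ctrl, lightest_ctrl


if __name__ == "__main__":
    mapping = static_partition([f"s{i}" for i in range(2, 11)], ["c1", "c2", "c3"])
    rates = {"s2": 50.0, "s3": 400.0, "s4": 30.0, "s5": 20.0, "s6": 10.0,
             "s7": 15.0, "s8": 25.0, "s9": 5.0, "s10": 12.0}
    print(migration_candidate(mapping, rates))
```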
Specifically, the distributed controllers are involved in obtaining information about the underlying devices and in switch migration. Each distributed controller can obtain information about the switches it manages; one of the controllers is set as the root controller, and the other non-root controllers periodically send the information collected from their managed areas to the root controller. For switch migration, when the load in an area is found to be excessive, the controller of a switch can be changed to a less loaded controller with an Open vSwitch command.
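The reassignment itself can be issued with the standard Open vSwitch command-line tool, for example ovs-vsctl set-controller. The following Python wrapper is an illustrative sketch; the bridge name, controller address and port are hypothetical, and in practice the command must run on the host that owns the Open vSwitch instance.

```python
# Sketch (assumed bridge "s3" and controller address): point an OVS bridge at a
# less loaded controller using "ovs-vsctl set-controller <bridge> tcp:<ip>:<port>".
import subprocess


def migrate_switch(bridge: str, new_controller_ip: str, port: int = 6653) -> None:
    """Hand the given OVS bridge over to a new controller endpoint."""
    subprocess.run(
        ["ovs-vsctl", "set-controller", bridge, f"tcp:{new_controller_ip}:{port}"],
        check=True,
    )


if __name__ == "__main__":
    # Example: hand switch s3 over to the lightly loaded controller at 10.0.0.102.
    migrate_switch("s3", "10.0.0.102")
```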
In this way, load balancing among controllers is performed according to the flow states of all interfaces and queues in the SDN, and the optimal forwarding path in the current SDN is determined by using a set neural network algorithm, so that the congestion flow is transmitted on the current optimal forwarding path, and the possibility of bottleneck and even paralysis of the network can be reduced.
In this embodiment, as an optional embodiment, the neural network algorithm includes a deep Q-network (DQN) from deep reinforcement learning. In reinforcement learning, the state is defined as the throughput and transmission delay of a path and serves as the input of the neural network; the action is defined as the selectable forwarding paths at the current node; and the reward is given according to the throughput and transmission delay of the link after an action is selected.
In a specific implementation, the method comprises the following parts:
The first part is environment setup, which deploys an environment adapted to the SDN-based deterministic network.
The second part is path selection: the DQN algorithm from reinforcement learning is used to realize path selection. The DQN algorithm is trained with a neural network to obtain the selectable paths, and a reward is given according to the quality of the result, from which the optimal path is obtained; real-time traffic information in the SDN network can thus be obtained more conveniently.
Specifically, this embodiment involves environment setup and obtaining traffic information in real time. The state refers to the current state, represented by the throughput and transmission delay of a link, and is used as the input of the neural network. The action refers to which link is selected for forwarding; after the corresponding action is executed, a reward is given according to the throughput and transmission delay of the link. With throughput n before and n' after the action, and transmission delay d before and d' after the action, the reward is reward = (n' - n) / (d' - d); if the transmission rate is fast and the throughput is high, a high reward is given, and otherwise a low reward is given, so that the optimal path is obtained. In this way the real-time validity of the selected path is ensured and the transmission rate of delay-sensitive services in the network is improved.
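A compact sketch of the reward computation and a minimal deep Q-network is shown below. It implements the reward exactly as defined above (with a small epsilon added here to guard against division by zero) and uses a small PyTorch network whose layer sizes, hyper-parameters and example throughput and delay values are assumptions, not the model of the patent.

```python
# Sketch (hypothetical sizes and hyper-parameters): state = [throughput, delay],
# actions = candidate forwarding paths, reward = (n' - n) / (d' - d) as above.
import torch
import torch.nn as nn


def reward(n_before: float, n_after: float,
           d_before: float, d_after: float, eps: float = 1e-6) -> float:
    return (n_after - n_before) / (d_after - d_before + eps)


class QNetwork(nn.Module):
    def __init__(self, num_paths: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 32), nn.ReLU(),   # input: [throughput, delay]
            nn.Linear(32, num_paths),      # one Q-value per candidate path
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


if __name__ == "__main__":
    num_paths, gamma = 4, 0.9
    q = QNetwork(num_paths)
    opt = torch.optim.Adam(q.parameters(), lr=1e-3)

    state = torch.tensor([[80.0, 9.0]])       # throughput (Mbps), delay (ms) -- assumed units
    next_state = torch.tensor([[95.0, 12.0]])
    action = int(q(state).argmax())           # greedy path choice for the sketch
    r = reward(80.0, 95.0, 9.0, 12.0)         # reward as defined above

    # One temporal-difference update step (single transition, no replay buffer).
    target = r + gamma * q(next_state).max().detach()
    loss = nn.functional.mse_loss(q(state)[0, action], target)
    opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))
```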
Example two
Referring to fig. 4, fig. 4 is a schematic structural diagram of an SDN-based deterministic network load balancing system provided in an embodiment of the present application, where the SDN-based deterministic network load balancing system 400 is used in an SDN network including a plurality of controllers, switches and edge devices, and includes:
an obtaining module 410, configured to obtain traffic states of all interfaces and queues in the SDN network in real time;
the flow forwarding module 420 is configured to set a flow threshold of the switch according to state information of the edge device, determine priorities corresponding to a plurality of flows when the flow state of the switch exceeds the set flow threshold, and perform flow forwarding according to a preset forwarding policy;
the congestion detection module 430 is configured to detect forwarding states of the switches at regular time, and when it is determined that traffic congestion occurs, perform load balancing between controllers according to traffic states of all interfaces and queues in the SDN network, determine all forwarding paths of the congested traffic between the switches, and determine an optimal forwarding path in the SDN network by using a set neural network algorithm, so that the congested traffic is transmitted on the current optimal forwarding path.
In a specific implementation, the software-defined network architecture may be divided into three planes and two interfaces. The control plane connects to the application plane through the northbound interface and to the data plane through the southbound interface, and the forwarding rules of the data plane are issued by the controller through a unified interface. After a switch at the network edge receives a data packet of a flow, it sends a packet_in message to the SDN controller if no matching rule for the flow is found in its flow table. Fig. 5 is a system overview diagram of the SDN-based deterministic network load balancing system; as shown in fig. 5, the system includes a controller and a data plane, and the controller includes a traffic differentiation module, a gate shaping module, a controller load balancing module and a link load balancing module, which are configured to guarantee the transmission rate of delay-sensitive services in a backbone network under a software-defined network environment.
The embodiment of the application provides an SDN-based deterministic network load balancing system that acquires the traffic states of all interfaces and queues in the current SDN network; on the edge devices it distinguishes traffic of different service types, determines its priority and forwards it according to a preset forwarding policy, so that traffic of each service type reaches its destination within the specified time; and it periodically detects the forwarding state of the switches, and when traffic congestion is determined to occur, it performs load balancing among controllers according to the traffic states of all interfaces and queues in the SDN network and determines the optimal forwarding path in the current SDN network with a set neural network algorithm, so that the congested traffic is transmitted on the current optimal forwarding path. This reduces the possibility of network bottlenecks and breakdown and keeps the selected path up to date, thereby guaranteeing the transmission rate of delay-sensitive services in the network.
Example III
Referring to fig. 6, fig. 6 is a schematic diagram of a computer device according to an embodiment of the present application. As shown in fig. 6, the computer device 600 includes a processor 610, a memory 620, and a bus 630.
The memory 620 stores machine-readable instructions executable by the processor 610, when the computer device 600 is running, the processor 610 communicates with the memory 620 through the bus 630, and when the machine-readable instructions are executed by the processor 610, the steps of the deterministic network load balancing method based on SDN in the method embodiment shown in fig. 2 and 3 may be executed, and a specific implementation may refer to a method embodiment and will not be described herein.
Example IV
Based on the same application concept, the embodiments of the present application further provide a computer readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps of the deterministic network load balancing method based on SDN described in the foregoing method embodiments are executed.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (7)

1. A deterministic network load balancing method based on SDN, used in an SDN network comprising a plurality of controllers, switches and edge devices, comprising:
acquiring flow states of all interfaces and queues in an SDN in real time;
setting a flow threshold of the switch according to the state information of the edge equipment, determining priorities corresponding to a plurality of flows when the flow state of the switch exceeds the set flow threshold, and forwarding the flows according to a preset forwarding strategy;
the forwarding policy includes: setting QoS and a plurality of queues at an interface of the edge equipment; when a plurality of flows reach the edge equipment, distinguishing the flows of different service types, and determining the priority of the flows of different service types; sending the data packets into corresponding queues for forwarding according to the priority;
distinguishing traffic of different service types through the ToS field in the data packet, wherein different ToS values represent different qualities of service, and determining priorities of the traffic of different service types according to the ToS values;
the method comprises the steps of detecting forwarding states of switches at regular time, when traffic congestion is judged to occur, carrying out load balancing among controllers according to traffic states of all interfaces and queues in an SDN network, determining all forwarding paths of the congested traffic among the switches, and determining an optimal forwarding path in the current SDN network by using a set neural network algorithm so that the congested traffic is transmitted on the current optimal forwarding path;
load balancing among controllers is performed in the following manner: selecting one controller from the plurality of controllers as a root controller for synchronizing state information among other non-root controllers, and setting a threshold value for each controller; when the load of the target non-root controller is monitored to exceed the threshold value, actively sending a load request to other non-root controllers except the target non-root controller, and migrating the load of the target non-root controller according to the load states and migration costs of the other non-root controllers.
2. The SDN based deterministic network load balancing method of claim 1, wherein the queues include a high speed forwarding queue, a medium speed forwarding queue and a low speed forwarding queue; and sending the data packets into each forwarding queue in turn according to the order of the priority from high to low, and when the high-speed forwarding queue is congested, interrupting the sending of the data packets of other forwarding queues, and preferentially sending the data packets with high priority.
3. The deterministic network load balancing method based on SDN of claim 1, wherein if migration is unsuccessful, it is determined that there is excessive traffic in the SDN network, a number of controllers are added within a budget cost, and the most heavily loaded switch managed by the overloaded target non-root controller is migrated to the newly added controller for management.
4. The SDN based deterministic network load balancing method of claim 1, wherein the neural network algorithm comprises a deep reinforcement learning-deep Q network; in reinforcement learning, state represents state, is defined as throughput and transmission delay of a path, and is used as input of a neural network; action represents action, which is defined as the optional forwarding path of the current node; reward represents a reward, defined as a reward that is given based on the throughput and transmission delay of the link after a certain action is selected.
5. An SDN based deterministic network load balancing system for implementing an SDN based deterministic network load balancing method according to any of claims 1-4, for use in an SDN network comprising a plurality of controllers, switches and edge devices, characterized by comprising:
the acquisition module is used for acquiring the flow states of all interfaces and queues in the SDN in real time;
the flow forwarding module is used for setting a flow threshold of the switch according to the state information of the edge equipment, determining priorities corresponding to a plurality of flows when the flow state of the switch exceeds the set flow threshold, and forwarding the flows according to a preset forwarding strategy;
the congestion detection module is used for detecting the forwarding state of the switch at regular time, when the occurrence of traffic congestion is judged, load balancing among controllers is carried out according to the traffic states of all interfaces and queues in the SDN network, all forwarding paths of the congestion traffic among the switches are determined, and the optimal forwarding paths in the current SDN network are determined by using a set neural network algorithm, so that the congestion traffic is transmitted on the current optimal forwarding paths.
6. A computer device, comprising: processor, memory and bus, the memory storing machine readable instructions executable by the processor, the processor and the memory in communication over the bus when the computer device is running, the machine readable instructions when executed by the processor performing the steps of the SDN based deterministic network load balancing method as claimed in any of claims 1 to 4.
7. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of an SDN based deterministic network load balancing method according to any of claims 1 to 4.
CN202210527428.6A 2022-05-16 2022-05-16 Deterministic network load balancing method and system based on SDN Active CN114928614B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210527428.6A CN114928614B (en) 2022-05-16 2022-05-16 Deterministic network load balancing method and system based on SDN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210527428.6A CN114928614B (en) 2022-05-16 2022-05-16 Deterministic network load balancing method and system based on SDN

Publications (2)

Publication Number Publication Date
CN114928614A CN114928614A (en) 2022-08-19
CN114928614B true CN114928614B (en) 2023-05-23

Family

ID=82807929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210527428.6A Active CN114928614B (en) 2022-05-16 2022-05-16 Deterministic network load balancing method and system based on SDN

Country Status (1)

Country Link
CN (1) CN114928614B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115987814B * 2022-12-23 2023-10-31 众芯汉创(北京)科技有限公司 Deterministic network computing system for 5G URLLC scene delay

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106550049A (en) * 2016-12-02 2017-03-29 清华大学深圳研究生院 A kind of Middleware portion arranging method, apparatus and system
CN107370676A (en) * 2017-08-03 2017-11-21 中山大学 Fusion QoS and load balancing demand a kind of route selection method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9571384B2 (en) * 2013-08-30 2017-02-14 Futurewei Technologies, Inc. Dynamic priority queue mapping for QoS routing in software defined networks
CN104994033A (en) * 2015-05-13 2015-10-21 南京航空航天大学 Method for guaranteeing QoS (quality of service) of SDN (software defined network) by means of dynamic resource management
CN107896192B (en) * 2017-11-20 2020-09-25 电子科技大学 QoS control method for differentiating service priority in SDN network
CN109246031A (en) * 2018-11-01 2019-01-18 郑州云海信息技术有限公司 A kind of switch port queues traffic method and apparatus
CN110784366B (en) * 2019-11-11 2022-08-16 重庆邮电大学 Switch migration method based on IMMAC algorithm in SDN
CN113347108B (en) * 2021-05-20 2022-08-02 中国电子科技集团公司第七研究所 SDN load balancing method and system based on Q-learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106550049A (en) * 2016-12-02 2017-03-29 清华大学深圳研究生院 A kind of Middleware portion arranging method, apparatus and system
CN107370676A (en) * 2017-08-03 2017-11-21 中山大学 Fusion QoS and load balancing demand a kind of route selection method

Also Published As

Publication number Publication date
CN114928614A (en) 2022-08-19

Similar Documents

Publication Publication Date Title
CN113079218A (en) Service-oriented computing power network system, working method and storage medium
CN107579922B (en) Network load balancing device and method
US8537846B2 (en) Dynamic priority queue level assignment for a network flow
Long et al. LABERIO: Dynamic load-balanced routing in OpenFlow-enabled networks
JP4021017B2 (en) Automatic load balancing method and apparatus on electronic network
JP4490956B2 (en) Policy-based quality of service
Wang et al. Implementation of multipath network virtualization with SDN and NFV
Xie et al. Cutting long-tail latency of routing response in software defined networks
JP2013168934A (en) Load-balancing device and load-balancing method
Zhang et al. SDN-based load balancing strategy for server cluster
CN106936705B (en) Software defined network routing method
CN108476175B (en) Transfer SDN traffic engineering method and system using dual variables
CN109617810B (en) Data transmission method and device
JP2022532730A (en) Quality of service in virtual service networks
WO2021227947A1 (en) Network control method and device
Kim et al. Buffer management of virtualized network slices for quality-of-service satisfaction
CN114928614B (en) Deterministic network load balancing method and system based on SDN
Kang et al. Application of adaptive load balancing algorithm based on minimum traffic in cloud computing architecture
Wang et al. Control link load balancing and low delay route deployment for software defined networks
CN105357124A (en) MapReduce bandwidth optimization method
CN108768698B (en) SDN-based multi-controller dynamic deployment method and system
KR20170033179A (en) Method and apparatus for managing bandwidth of virtual networks on SDN
KR20160025926A (en) Apparatus and method for balancing load to virtual application server
CN111464453A (en) Message forwarding method and device
CN109547352B (en) Dynamic allocation method and device for message buffer queue

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant