CN116319565A - Load balancing system, method, equipment and storage medium based on online computing - Google Patents

Load balancing system, method, equipment and storage medium based on online computing

Info

Publication number
CN116319565A
Authority
CN
China
Prior art keywords
link
application
switching
flow
outlet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310197695.6A
Other languages
Chinese (zh)
Inventor
李海涛
林羽尘
邱燕茹
郑可铭
周天文
卢奇
周语城
周凌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Suluo Information Technology Co ltd
Original Assignee
Zhejiang Suluo Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Suluo Information Technology Co ltd filed Critical Zhejiang Suluo Information Technology Co ltd
Priority to CN202310197695.6A
Publication of CN116319565A
Legal status: Pending

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/12: Shortest path evaluation
    • H04L 45/123: Evaluation of link metrics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/24: Multipath
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/24: Traffic characterised by specific attributes, e.g. priority or QoS
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a load balancing system, method, device and storage medium based on in-network computing. The load balancing system is applicable to a computing network built on SRv6 and comprises a plurality of application servers, a switching device cluster and a computing network brain. The scheme realizes in-network computing by combining a link matching policy with on-path flow detection in the switching chip: fast local link selection is performed at the switching device through in-network computation, and the computing network brain then performs global regulation, balancing the egress link load globally, so that good global optimization is achieved alongside fast response. In addition, priorities are configured for the application identifiers and the egress links are managed by category, so that regulation can be personalized to customer demand; combined with priority-based smooth switching, customized load balancing is achieved.

Description

Load balancing system, method, device and storage medium based on in-network computing
Technical Field
The present disclosure relates to the field of computing networks, and in particular to a load balancing system, method, device and storage medium based on in-network computing.
Background
With the development of computing power networks, programmable network technology has attracted attention and been experimentally deployed in fields such as data centers and edge computing, and in-network computing (also called computing in the network) is gradually becoming a new direction of network research. Programmable network technology gives the link-level forwarding of packets a degree of programmability. Traditional link load balancing relies on hardware load balancing devices or on load balancing policies such as hash algorithms or cell-based schemes. Egress link load balancing works as follows: traffic accessing the ISP is distributed across different egress links, such as dedicated lines, cable or ADSL (Asymmetric Digital Subscriber Line); if one egress link fails, its traffic is automatically redistributed to the remaining healthy links. With multi-egress link load sharing, an egress link must be selected for each traffic flow according to some algorithm. Many current methods select the egress link at packet granularity, i.e. they forward packet by packet, computing the corresponding egress link before each packet is forwarded. The advantage of this approach is that traffic load can be balanced across egress links to the greatest possible extent, maximizing egress bandwidth utilization. However, because it requires per-packet computation and forwarding, it severely degrades system performance.
Disclosure of Invention
The main purpose of the present application is to provide a load balancing system, method, device and storage medium based on in-network computing, aiming at a system that can both respond quickly to link selection and balance and optimize link load globally in real time.
To achieve the above object, in a first aspect, a load balancing system based on in-network computing is provided. The system is applicable to a computing network built on SRv6 and comprises a plurality of application servers, a switching device cluster and a computing network brain. The switching device cluster comprises a plurality of switching device groups, each group comprising a plurality of switching devices. Switching devices in the cluster receive data streams from the servers; a data stream contains multiple types of application traffic. The application servers are connected to switching devices in the cluster, and the cluster is connected to the computing network brain. Based on SRv6 technology, each application server configures a corresponding application identifier for each type of application traffic in the data streams it sends. Each switching device is equipped with a switching chip, and each of its egress links is configured with a set of adapted application identifiers. The switching device obtains flow information of received data streams in real time through on-path flow detection. The switching device further comprises a switching control module and a link load monitoring module: the link load monitoring module monitors the load status of every link of the switching device, and the switching control module screens adapted egress links according to the application identifiers of received application traffic, the screening scope being the switching device group containing the device that received the traffic;
the computing network brain is configured with a link load adjustment module. The link load monitoring module of each switching device updates the collected link load status to the link load adjustment module in real time; the link load adjustment module also obtains application traffic information from on-path flow detection, adjusts traffic transmission paths according to the application traffic data and link load status, and issues the adjusted paths to the corresponding switching devices.
Preferably, priorities are set among the plurality of adapted application identifiers configured for each egress link of a switching device.
In a second aspect, a link load balancing method based on in-network computing is provided, the method comprising the following steps:
s1, a first switching device in a first switching device group in a switching device cluster receives a data stream from a server;
s2, the first switching equipment acquires flow information corresponding to application flow in the data flow in real time through flow detection; the flow information comprises the size and application identification of the flow; the application identifier is configured by a server;
s3, the first switching equipment acquires the corresponding load condition and the adaptive application identifier of each link in the first switching equipment group through a link load monitoring module configured by switching equipment in the first switching equipment group, and the switching control module of the first switching equipment selects an intra-group outlet link according to the acquired load condition and the adaptive application identifier of each link in the group and flow information acquired through flow following detection, so as to determine the corresponding intra-group outlet link for each application flow in the data flow;
s4, a link load adjustment module of the computing network brain acquires flow information of various application flows in the data flow based on flow-following detection, screens an adaptive outlet link in a switching device cluster according to application identifiers in the flow information, determines an alternative link capable of being loaded according to the load condition of the screened outlet link and the application flow, preferentially determines a preferred outlet link of various application flows in the data flow from the alternative link, and judges whether each preferred outlet link is consistent with a corresponding intra-group outlet link or not, and if not, sends the preferred outlet link and a corresponding forwarding path thereof to first switching equipment;
s5, the first switching equipment receives a preferred outlet link and a forwarding path sent by the computing network brain, and the preferred outlet link and the forwarding path are an application flow smooth switching path and an outlet link;
s6, if the load condition of a first outlet link of the first switching equipment exceeds a preset threshold, the first switching equipment acquires flow information of application flow in the first link based on flow detection, and a computing network brain carries out link adjustment according to the acquired flow information to determine an optimized outlet link and a corresponding forwarding path, and sends the optimized outlet link and the corresponding forwarding path to the first switching equipment;
and S7, the first switching equipment receives the optimized exit link and the corresponding forwarding path sent by the computing network brain, and smoothly switches the path and the exit link for the application flow in the first exit link.
Preferably, each egress link on the first switching device is configured with a plurality of adapted application identifiers, and priorities are set among them.
Preferably, in step S3, determining a corresponding intra-group egress link for each type of application traffic in the data stream specifically comprises: the first switching device determines matching egress links by comparing the application identifier of the received application traffic with the adapted application identifiers of its egress links, then determines which of those links can carry the load according to their load status and the traffic size of the corresponding application traffic; if several egress links can carry the same type of application traffic, the intra-group egress link is determined according to the priority of the application identifier among each link's adapted identifiers and the links' load usage;
if the load capacity of the first switching device's own egress links cannot meet the demand of the received application traffic, a link index S_n is computed within the first switching device group to determine the intra-group egress link. The formula for the link index S_n is:
S_n = N1*A%*W1*x1*(1 - 1/(N2*B%*W2*x2))
where W1 is the egress link weight and W2 the transmission link weight (both preset values), x1 is the priority coefficient of the application identifier of the application being evaluated on the current egress link, x2 is its priority coefficient on the current transmission link, A% is the remaining load capacity of the egress link, B% is the remaining load capacity of the transmission link, N1 is an egress link calculation constant and N2 a transmission link calculation constant. A transmission link here is an east-west link used for data transfer between switching devices, and the priority coefficients are determined from the corresponding priorities by a preset priority-to-coefficient conversion rule.
Preferably, determining by preference the preferred egress link in step S4 specifically comprises:
determining matching egress links according to the application identifiers, determining the set of egress links that can carry the load according to link load status and the size of the application traffic data stream, and determining the transmission links between the switching devices of the egress links in this set; a preferred link index S_y is then computed to determine the preferred egress link, where the preferred link index S_y takes the form:
S_y = N1*A%*W1*x1*(1 - (1/(N2*B%*W2*x2) + 1/(N3*C%*W3*x3)))
where N3, C%, W3 and x3 are, respectively, the calculation constant, remaining load capacity, weight and priority coefficient of the further transmission link on the path, and the other symbols are as defined for S_n.
Preferably, the specific policy for smoothly switching the egress link of application traffic comprises:
after the computing network brain issues the optimized egress link and path, the switching device determines the link to be switched and the optimized egress link path, obtains the switching preparation time window, and during that window migrates the application traffic of the path to be switched proportionally according to a switching curve, until the traffic on the link to be switched is fully moved to the optimized link path. When switching paths, the device reads a switching priority configuration table, which sets the switching priority corresponding to each application identifier, and the switching device switches the flows in order of the switching priority of each flow's application identifier.
In a third aspect, embodiments of the present application further provide a computer device comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method described in the second aspect above.
In a fourth aspect, embodiments of the present application further provide a storage medium storing a computer program which, when executed by a processor, implements the method described in the second aspect above.
The beneficial effects of the invention are as follows:
in-network computing is realized by combining a link matching policy with on-path flow monitoring in the switching chip; fast local link selection at the switching device is achieved through in-network computation, and the computing network brain then performs global regulation, balancing the egress link load globally, so that good global optimization is achieved alongside fast response;
priorities are configured for the application identifiers and the egress links are managed by category, so that regulation can be personalized to customer demand; combined with priority-based smooth switching, customized load balancing is achieved.
Drawings
FIG. 1 is a schematic structural diagram of a load balancing system based on in-network computing according to an embodiment of the present invention;
FIG. 2 is a flowchart of a load balancing method based on in-network computing according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
The realization, functional characteristics and advantages of the present application are further described below with reference to the embodiments and the accompanying drawings.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a load balancing system based on in-network computing according to an embodiment of the present invention. An embodiment of the present application provides a load balancing system based on in-network computing. The system is applicable to a computing network built on SRv6 and comprises a plurality of application servers, a switching device cluster and a computing network brain. The switching device cluster comprises a plurality of switching device groups, each group comprising a plurality of switching devices. Switching devices in the cluster receive data streams from the servers; a data stream contains multiple types of application traffic. The application servers are connected to switching devices in the cluster, and the cluster is connected to the computing network brain. Based on SRv6 technology, each application server configures a corresponding application identifier for each type of application traffic in the data streams it sends. Each switching device is equipped with a switching chip, and each of its egress links is configured with a set of adapted application identifiers. The switching device obtains flow information of received data streams in real time through on-path flow detection. The switching device further comprises a switching control module and a link load monitoring module: the link load monitoring module monitors the load status of every link of the switching device, and the switching control module screens adapted egress links according to the application identifiers of received application traffic, the screening scope being the switching device group containing the device that received the traffic;
the computing network brain is configured with a link load adjustment module. The link load monitoring module of each switching device updates the collected link load status to the link load adjustment module in real time; the link load adjustment module also obtains application traffic information from on-path flow detection, adjusts traffic transmission paths according to the application traffic data and link load status, and issues the adjusted paths to the corresponding switching devices.
Further, priorities are set among the plurality of adapted application identifiers configured for each egress link of a switching device.
In the above scheme, based on SRv6 technology the server configures the application traffic identifier in a field of the SRH extension header, marking traffic from different applications with specific codes, and an on-path detection architecture is built on SRv6. The IFIT architecture supported over SRv6 in the present application is as follows:
adopting EAM instruction format:
Figure SMS_1
The meaning of each EAM field is:
FlowMonID: length 20 bits; the monitored-flow ID, used to mark a specific flow within the IFIT domain.
L: length 1 bit; the packet-loss flag bit described in RFC 8321.
D: length 1 bit; the delay flag bit described in RFC 8321.
Reserved: length 10 bits; reserved field.
The L bit is set to 0 and 1 periodically and alternately (alternate coloring); the FlowMonID and the period number are reported hop by hop in Postcard mode, and the per-period counter values are sent to an analyzer to derive information such as the packet loss count and the loss location. The D bit is set to 1 on packets to be measured, and a timestamp is written into the packet to calculate the one-way delay of the marked packets;
SRv6 IFIT encapsulation is performed using an optional TLV carried in the SRH:
[SRH optional TLV encapsulation diagram omitted]
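The field layout and alternate-marking behaviour described above can be sketched in Python as follows. This is an illustrative model only, not the on-chip implementation; the helper names are invented for the example, while the field widths (FlowMonID 20 bits, L 1 bit, D 1 bit, Reserved 10 bits) follow the text.

```python
# Illustrative sketch of the EAM field layout and RFC 8321-style alternate
# marking. Field widths follow the patent text; function names are assumed.

def pack_eam(flow_mon_id: int, l_bit: int, d_bit: int, reserved: int = 0) -> int:
    """Pack the EAM fields into one 32-bit word: FlowMonID | L | D | Reserved."""
    assert 0 <= flow_mon_id < (1 << 20) and l_bit in (0, 1) and d_bit in (0, 1)
    return (flow_mon_id << 12) | (l_bit << 11) | (d_bit << 10) | (reserved & 0x3FF)

def color_for_period(period: int) -> int:
    """Alternate coloring: the L bit flips between 0 and 1 on successive periods."""
    return period % 2

def packet_loss(sent_counts: dict, received_counts: dict) -> dict:
    """Per-period loss = packets counted at the sender minus at the receiver."""
    return {p: sent_counts[p] - received_counts.get(p, 0) for p in sent_counts}

# Example: mark a flow in period 3 with delay measurement enabled, and
# compare two periods of sender/receiver counters at the analyzer.
word = pack_eam(flow_mon_id=0xABCDE, l_bit=color_for_period(3), d_bit=1)
loss = packet_loss({0: 100, 1: 100}, {0: 100, 1: 97})
```

Per-period counting is what makes alternate marking robust: because the color only flips at period boundaries, sender and receiver counters for the same color can be compared without per-packet synchronization.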
as shown in fig. 2, fig. 2 is a flowchart of a load balancing method based on network computing according to an embodiment of the present invention, and provides a link load balancing method based on network computing, where the method includes the following steps:
s1, a first switching device in a first switching device group in a switching device cluster receives a data stream from a server;
s2, the first switching equipment acquires flow information corresponding to application flow in the data flow in real time through flow detection; the flow information comprises the size and application identification of the flow; the application identifier is configured by a server;
s3, the first switching equipment acquires the corresponding load condition and the adaptive application identifier of each link in the first switching equipment group through a link load monitoring module configured by switching equipment in the first switching equipment group, and the switching control module of the first switching equipment selects an intra-group outlet link according to the acquired load condition and the adaptive application identifier of each link in the group and flow information acquired through flow following detection, so as to determine the corresponding intra-group outlet link for each application flow in the data flow;
s4, a link load adjustment module of the computing network brain acquires flow information of various application flows in the data flow based on flow-following detection, screens an adaptive outlet link in a switching device cluster according to application identifiers in the flow information, determines an alternative link capable of being loaded according to the load condition of the screened outlet link and the application flow, preferentially determines a preferred outlet link of various application flows in the data flow from the alternative link, and judges whether each preferred outlet link is consistent with a corresponding intra-group outlet link or not, and if not, sends the preferred outlet link and a corresponding forwarding path thereof to first switching equipment;
s5, the first switching equipment receives a preferred outlet link and a forwarding path sent by the computing network brain, and the preferred outlet link and the forwarding path are an application flow smooth switching path and an outlet link;
s6, if the load condition of a first outlet link of the first switching equipment exceeds a preset threshold, the first switching equipment acquires flow information of application flow in the first link based on flow detection, and a computing network brain carries out link adjustment according to the acquired flow information to determine an optimized outlet link and a corresponding forwarding path, and sends the optimized outlet link and the corresponding forwarding path to the first switching equipment;
and S7, the first switching equipment receives the optimized exit link and the corresponding forwarding path sent by the computing network brain, and smoothly switches the path and the exit link for the application flow in the first exit link.
Preferably, each egress link on the first switching device is configured with a plurality of adapted application identifiers, and priorities are set among them.
Preferably, in step S3, determining a corresponding intra-group egress link for each type of application traffic in the data stream specifically comprises: the first switching device determines matching egress links by comparing the application identifier of the received application traffic with the adapted application identifiers of its egress links, then determines which of those links can carry the load according to their load status and the traffic size of the corresponding application traffic; if several egress links can carry the same type of application traffic, the intra-group egress link is determined according to the priority of the application identifier among each link's adapted identifiers and the links' load usage;
if the load capacity of the first switching device's own egress links cannot meet the demand of the received application traffic, a link index S_n is computed within the first switching device group to determine the intra-group egress link. The formula for the link index S_n is:
S_n = N1*A%*W1*x1*(1 - 1/(N2*B%*W2*x2))
where W1 is the egress link weight and W2 the transmission link weight (both preset values), x1 is the priority coefficient of the application identifier of the application being evaluated on the current egress link, x2 is its priority coefficient on the current transmission link, A% is the remaining load capacity of the egress link, B% is the remaining load capacity of the transmission link, N1 is an egress link calculation constant and N2 a transmission link calculation constant. A transmission link here is an east-west link used for data transfer between switching devices, and the priority coefficients are determined from the corresponding priorities by a preset priority-to-coefficient conversion rule.
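As a worked illustration of the link index, the following sketch computes S_n for two candidate egress links and picks the larger. The weights, constants and the priority-to-coefficient table are assumed demonstration values; the patent leaves them as preset parameters.

```python
# Illustrative computation of the intra-group link index
# S_n = N1*A%*W1*x1 * (1 - 1/(N2*B%*W2*x2)) from the formula above.
# PRIORITY_COEFF is an assumed priority-to-coefficient conversion rule.

PRIORITY_COEFF = {1: 1.0, 2: 0.8, 3: 0.5}  # assumed: priority -> coefficient

def link_index_sn(a_pct, w1, x1, b_pct, w2, x2, n1=1.0, n2=1.0):
    """a_pct/b_pct: remaining load of the egress/transmission link (0..1),
    w1/w2: preset link weights, x1/x2: priority coefficients,
    n1/n2: calculation constants."""
    return n1 * a_pct * w1 * x1 * (1.0 - 1.0 / (n2 * b_pct * w2 * x2))

# Two candidate egress links in the group; pick the one with the larger index.
candidates = {
    "link_a": link_index_sn(0.6, 2.0, PRIORITY_COEFF[1], 0.9, 1.5, PRIORITY_COEFF[1]),
    "link_b": link_index_sn(0.3, 2.0, PRIORITY_COEFF[2], 0.9, 1.5, PRIORITY_COEFF[1]),
}
best = max(candidates, key=candidates.get)
```

Note the structure of the index: the first factor rewards an egress link with spare capacity, weight and priority, while the subtracted reciprocal term penalizes paths whose east-west transmission link is nearly saturated.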
Preferably, determining by preference the preferred egress link in step S4 specifically comprises:
determining matching egress links according to the application identifiers, determining the set of egress links that can carry the load according to link load status and the size of the application traffic data stream, and determining the transmission links between the switching devices of the egress links in this set; a preferred link index S_y is then computed to determine the preferred egress link, where the preferred link index S_y takes the form:
S_y = N1*A%*W1*x1*(1 - (1/(N2*B%*W2*x2) + 1/(N3*C%*W3*x3)))
where N3, C%, W3 and x3 are, respectively, the calculation constant, remaining load capacity, weight and priority coefficient of the further transmission link on the path, and the other symbols are as defined for S_n.
Preferably, the specific policy for smoothly switching the egress link of application traffic comprises:
after the computing network brain issues the optimized egress link and path, the switching device determines the link to be switched and the optimized egress link path, obtains the switching preparation time window, and during that window migrates the application traffic of the path to be switched proportionally according to a switching curve, until the traffic on the link to be switched is fully moved to the optimized link path. When switching paths, the device reads a switching priority configuration table, which sets the switching priority corresponding to each application identifier, and the switching device switches the flows in order of the switching priority of each flow's application identifier.
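The priority-ordered, curve-based migration described above can be sketched as follows. The contents of the switching priority configuration table and the choice of a linear switching curve are illustrative assumptions; the patent only requires some curve and some priority table.

```python
# Hypothetical sketch of priority-based smooth switching: flows are ordered
# by the switching priority configured for their application identifier, and
# each flow's share of traffic on the old path is ramped down along a
# switching curve within the preparation time window.

# Assumed switching-priority configuration table: app identifier -> priority
SWITCH_PRIORITY = {"video": 1, "voice": 2, "bulk": 3}  # 1 switches first

def switch_order(flows):
    """Sort flows by the switching priority of their application identifier."""
    return sorted(flows, key=lambda f: SWITCH_PRIORITY.get(f["app_id"], 99))

def old_path_share(t: float, t_total: float) -> float:
    """Linear switching curve: fraction of traffic still on the old path
    after t seconds of a t_total-second preparation window."""
    return max(0.0, 1.0 - t / t_total)

flows = [{"app_id": "bulk"}, {"app_id": "video"}, {"app_id": "voice"}]
ordered = switch_order(flows)             # video first, then voice, then bulk
share_midway = old_path_share(5.0, 10.0)  # halfway through the window
```

Ramping the traffic over the window rather than cutting over at once avoids transient congestion on the optimized link, and ordering by priority lets latency-sensitive applications reach the better path first.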
In the above scheme, selecting the egress link based on link load status specifically comprises:
the first switching device receives a first application traffic data stream from the server and obtains the application identifier and traffic size of the first application traffic; from its link load monitoring module it obtains the load status of each of its links. Based on the per-link load status and the adapted application identifiers of each link, it judges whether there are multiple egress links whose adapted application identifiers match the identifier of the first application traffic and whose link load can support its traffic size. If so, the first egress link is determined according to the priority of the matching application identifier on each egress link and the available load of the links that meet the requirement; if exactly one egress link meets the requirement, that link becomes the first egress link. If no egress link meets the requirement, the device judges whether a second egress link exists, namely an egress link of a second switching device in the same switching device group whose adapted application identifier matches the identifier of the first application traffic and whose link load can support its traffic size, while a third link between the second and the first switching device likewise satisfies the requirement that its adapted application identifier matches the identifier of the first application traffic and its link load can support the traffic size of the first application traffic.
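The screening and fallback logic above can be sketched as follows. The data layout and the tie-break rule (higher identifier priority first, then more remaining load) are illustrative assumptions drawn from the description.

```python
# Minimal sketch of egress-link screening: a link matches when one of its
# adapted application identifiers equals the traffic's identifier and its
# remaining load can carry the traffic size. Names are assumptions.

def eligible_links(links, app_id, traffic_size):
    """Links adapted to app_id that can still carry traffic_size."""
    return [l for l in links
            if app_id in l["adapted_ids"] and l["remaining_load"] >= traffic_size]

def pick_first_egress(links, app_id, traffic_size):
    """Return the selected egress link, or None when the device must fall
    back to a second switching device reachable over an east-west link."""
    matches = eligible_links(links, app_id, traffic_size)
    if not matches:
        return None
    # Tie-break: smaller priority number (higher priority) first,
    # then the link with more remaining load.
    return min(matches, key=lambda l: (l["adapted_ids"][app_id], -l["remaining_load"]))

links = [
    {"name": "e1", "adapted_ids": {"video": 1}, "remaining_load": 40},
    {"name": "e2", "adapted_ids": {"video": 2, "voice": 1}, "remaining_load": 80},
]
chosen = pick_first_egress(links, "video", 30)  # e1: both fit, e1 has priority 1
```

A `None` result corresponds to the fallback case in the text: the device then looks for a second egress link on a peer device in the group, subject to the third (east-west) link also matching and carrying the load.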
The method has the following advantages. On the one hand, in-network computing is realized by combining a link matching policy with on-path flow monitoring in the switching chip: fast local link selection at the switching device is achieved through in-network computation, and the computing network brain then performs global regulation, balancing the egress link load globally, so that good global optimization is achieved alongside fast response. On the other hand, priorities are configured via the application identifiers and the egress links are managed by category, so that regulation can be personalized to customer demand; combined with priority-based smooth switching, customized load balancing is achieved.
Fig. 3 is a schematic structural diagram of an apparatus according to an embodiment of the present invention. As shown in Fig. 3, as another aspect, the present application also provides a computer apparatus 100 comprising one or more central processing units (CPUs) 101 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 102 or a program loaded from a storage section 108 into a random access memory (RAM) 103. The RAM 103 also stores the various programs and data required for the operation of the apparatus 100. The CPU 101, the ROM 102, and the RAM 103 are connected to one another through a bus 104. An input/output (I/O) interface 105 is also connected to the bus 104.
The following components are connected to the I/O interface 105: an input section 106 including a keyboard, a mouse, and the like; an output section 107 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 108 including a hard disk or the like; and a communication section 109 including a network interface card such as a LAN card or a modem. The communication section 109 performs communication processing via a network such as the Internet. A drive 110 is also connected to the I/O interface 105 as necessary. A removable medium 111, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 110 as needed, so that a computer program read therefrom is installed into the storage section 108 as needed.
In particular, according to embodiments of the present disclosure, the method described in embodiment 1 above may be implemented as a computer software program. For example, embodiments disclosed herein include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method described in any of the embodiments above. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 109, and/or installed from the removable medium 111.
As yet another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium contained in the apparatus of the above-described embodiment, or may exist alone without being assembled into a device. The computer-readable storage medium stores one or more programs that are used by one or more processors to perform the methods described herein.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software, or may be implemented by hardware. The described units or modules may also be provided in a processor; for example, each of the units may be a software program provided in a computer or a mobile smart device, or may be a separately configured hardware device. The names of the units or modules do not, in some cases, constitute a limitation of the units or modules themselves.
The foregoing description covers only the preferred embodiments of the present application and is presented as an illustration of the principles of the technology employed. Persons skilled in the art will appreciate that the scope of the invention referred to in this application is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the application, for example embodiments in which the above features are replaced with (but not limited to) technical features having similar functions disclosed in the present application.

Claims (9)

1. A load balancing system based on on-network computing, characterized in that the system is suitable for a computing power network constructed based on SRv6 and comprises a plurality of application service ends, a switching device cluster, and a computing network brain; the switching device cluster comprises a plurality of switching device groups, each switching device group comprising a plurality of switching devices; the switching devices in the switching device cluster receive data streams from the service ends, each data stream comprising a plurality of types of application traffic; the application service ends are connected with the switching devices in the switching device cluster, and the switching device cluster is connected with the computing network brain; the application service ends configure corresponding application identifiers for the various application traffic in the sent data streams based on the SRv6 technology; each switching device is configured with a switching chip, each outlet link of the switching device is configured with a plurality of adapted application identifiers, and the switching device acquires flow information of received data streams in real time through flow-following detection; the switching device further comprises a switching control module and a link load monitoring module, the link load monitoring module being used for monitoring the load condition of each link of the switching device, and the switching control module screening adapted outlet links according to the application identifiers of the received application traffic, the screening range being the switching device group where the switching device receiving the application traffic is located;
the computing network brain is configured with a link load adjustment module; the link load monitoring module of each switching device updates the acquired link load conditions to the link load adjustment module in real time, the link load adjustment module acquires application traffic information based on flow-following detection, and the link load adjustment module is used for adjusting a traffic transmission path according to the application traffic information and the link load conditions and sending the adjusted transmission path to the corresponding switching device.
2. An on-network computing based load balancing system according to claim 1, wherein the adapted application identifiers corresponding to the outlet links of the switching devices are prioritized.
3. A link load balancing method based on network computing, the method comprising the steps of:
s1, a first switching device in a first switching device group in a switching device cluster receives a data stream from a server;
s2, the first switching equipment acquires flow information corresponding to application flow in the data flow in real time through flow detection; the flow information comprises the size and application identification of the flow; the application identifier is configured by a server;
s3, the first switching equipment acquires the corresponding load condition and the adaptive application identifier of each link in the first switching equipment group through a link load monitoring module configured by switching equipment in the first switching equipment group, and the switching control module of the first switching equipment selects an intra-group outlet link according to the acquired load condition and the adaptive application identifier of each link in the group and flow information acquired through flow following detection, so as to determine the corresponding intra-group outlet link for each application flow in the data flow;
s4, a link load adjustment module of the computing network brain acquires flow information of various application flows in the data flow based on flow-following detection, screens an adaptive outlet link in a switching device cluster according to application identifiers in the flow information, determines an alternative link capable of being loaded according to the load condition of the screened outlet link and the application flow, preferentially determines a preferred outlet link of various application flows in the data flow from the alternative link, and the computing network brain judges whether each preferred outlet link is consistent with a corresponding intra-group outlet link or not, and if not, sends the preferred outlet link and a corresponding forwarding path thereof to first switching equipment;
s5, the first switching equipment receives a preferred outlet link and a forwarding path sent by the computing network brain, and the preferred outlet link and the forwarding path are an application flow smooth switching path and an outlet link;
s6, if the load condition of a first outlet link of the first switching equipment exceeds a preset threshold, the first switching equipment acquires flow information of application flow in the first link based on flow detection, and a computing network brain carries out link adjustment according to the acquired flow information to determine an optimized outlet link and a corresponding forwarding path, and sends the optimized outlet link and the corresponding forwarding path to the first switching equipment;
and S7, the first switching equipment receives the optimized exit link and the corresponding forwarding path sent by the computing network brain, and smoothly switches the path and the exit link for the application flow in the first exit link.
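Steps S4 through S7 can be condensed into a small control sketch. The names (reconcile, needs_reoptimization) and the utilization threshold are assumptions for illustration, not values mandated by the claims:

```python
LOAD_THRESHOLD = 0.8  # assumed preset threshold on link utilization

def reconcile(local_link, preferred_link):
    """Steps S4/S5: adopt the brain's preferred outlet link only when it
    differs from the locally chosen intra-group link; the second element
    tells the switch whether a smooth switch is required."""
    if preferred_link is not None and preferred_link != local_link:
        return preferred_link, True
    return local_link, False

def needs_reoptimization(utilization):
    """Step S6: trigger link adjustment by the computing network brain
    when the outlet-link load exceeds the preset threshold."""
    return utilization > LOAD_THRESHOLD
```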
4. A method for balancing link load based on network computing according to claim 3, wherein each egress link on the first switching device is configured with a plurality of adapted application identifiers, and a priority is set among those adapted application identifiers.
5. The method for link load balancing based on network computing according to claim 4, wherein determining the corresponding intra-group egress link for each type of application traffic in the data stream in step S3 specifically includes:
the first switching equipment determines a matched outlet link according to the application identifier of the received application flow and the adaptive application identifier of the outlet link on the first switching equipment, determines an outlet link capable of being loaded according to the determined load condition of the outlet link and the flow size of the corresponding application flow, and determines an outlet link in a group according to the application identifier of the application flow and the adaptive application identifier priority of the outlet link and the load service condition if a plurality of outlet links capable of being loaded exist in the unified application flow type;
if the load condition of the outlet links of the first switching device cannot meet the demand of the received application traffic, a link index Sn is calculated within the first switching device group to determine an intra-group outlet link, the link index Sn being calculated as:
Sn = N1 * A% * W1 * x1 * (1 - 1/(N2 * B% * W2 * x2))
wherein W1 is the outlet link weight, W2 is the transmission link weight, x1 is the priority coefficient of the application identifier of the application to be calculated on the current outlet link, x2 is the priority coefficient of the application to be calculated on the current transmission link, A% is the residual load of the outlet link, and B% is the residual load of the transmission link, the transmission link being an east-west link used for data transmission between switching devices; the outlet link weight W1 and the transmission link weight W2 are both preset values, the priority coefficients are determined from the corresponding priority and a preset priority-to-priority-coefficient conversion rule, N1 is an outlet link calculation constant, and N2 is a transmission link calculation constant.
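Assuming the percentages are expressed as fractions (0.5 for 50%), the Sn formula of claim 5 can be evaluated directly; the function name and argument order are illustrative only:

```python
def link_index_sn(N1: float, A: float, W1: float, x1: float,
                  N2: float, B: float, W2: float, x2: float) -> float:
    """Sn = N1 * A% * W1 * x1 * (1 - 1/(N2 * B% * W2 * x2)), with A% and
    B% given as residual-load fractions. W1/W2 are the outlet and
    transmission link weights, x1/x2 the priority coefficients, and
    N1/N2 the calculation constants."""
    return N1 * A * W1 * x1 * (1.0 - 1.0 / (N2 * B * W2 * x2))
```

Note that a larger residual load B% on the transmission link shrinks the subtracted term and raises Sn, so lightly loaded east-west paths score higher.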
6. A method for balancing link loads based on network computing according to claim 3, wherein the preferentially determining the preferred exit link in step S4 specifically comprises:
determining matched outlet links according to application identifiers, determining an outlet link set capable of being loaded according to the link loading condition and the application flow data flow size, and determining transmission links corresponding to the switching equipment among all outlet links in the outlet link set;
calculating a preferred link index Sy to determine the preferred outlet link, wherein the preferred link index Sy is calculated as:
Sy = N1 * A% * W1 * x1 * (1 - (1/(N2 * B% * W2 * x2) + 1/(N3 * C% * W3 * x3) + ...))
wherein xn is the priority coefficient of the application identifier on the corresponding link, Nn is the calculation constant of the corresponding link, Wn is the weight of the corresponding link, and the percentage term (A%, B%, C%, ...) is the residual load of the corresponding link.
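The Sy formula of claim 6 generalizes Sn to a forwarding path crossing several transmission links. This sketch assumes the per-link penalty terms add inside the parenthesis, consistent with the claim-5 formula; the names are illustrative:

```python
from typing import List, Tuple

def preferred_link_index_sy(N1: float, A: float, W1: float, x1: float,
                            transmission_terms: List[Tuple[float, float, float, float]]
                            ) -> float:
    """Sy = N1 * A% * W1 * x1 * (1 - sum(1/(Nn * n% * Wn * xn))), where
    each element of transmission_terms is (Nn, residual_load, Wn, xn)
    for one transmission link on the candidate path."""
    penalty = sum(1.0 / (Nn * load * Wn * xn)
                  for Nn, load, Wn, xn in transmission_terms)
    return N1 * A * W1 * x1 * (1.0 - penalty)
```

With a single transmission link this reduces to the intra-group index Sn; each extra hop adds a penalty term, so shorter and less loaded paths are preferred.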
7. A method for link load balancing based on network computing according to claim 3, wherein applying traffic smoothing handover egress link specific policies comprises:
after the computing network brain issues the optimized outlet link and path, the switching device determines the link to be switched and the optimized outlet link path and obtains a switching-preparation time window; within that window, the application traffic of the path to be switched is transferred in proportion according to a switching curve until the path of the link to be switched is completely switched to the optimized link path; when switching paths, a switching priority configuration table is read, the switching priority configuration table setting the switching priority corresponding to each application identifier, and the switching device switches traffic in sequence according to the switching priority corresponding to the application identifier of each application flow.
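A minimal sketch of this smooth-switching policy, assuming a linear switching curve and a hypothetical priority table; neither the curve shape nor the table values come from the patent:

```python
def switched_fraction(t: float, t_total: float) -> float:
    """Linear switching curve: fraction of traffic already moved to the
    optimized link at time t within the preparation window t_total,
    clamped to [0, 1]."""
    return min(max(t / t_total, 0.0), 1.0)

def switching_order(flows, priority_table):
    """Order application flows by the switching priority configured for
    their application identifier (lower number = switched first);
    unknown identifiers fall to the back with an assumed default of 99."""
    return sorted(flows, key=lambda app_id: priority_table.get(app_id, 99))
```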
8. A computer device, characterized by one or more processors;
a memory for storing one or more programs,
the one or more programs, when executed by the one or more processors, causing the one or more processors to perform the method of any one of claims 3 to 7.
9. A storage medium storing a computer program, characterized in that the program, when executed by a processor, implements the method of any one of claims 3 to 7.
CN202310197695.6A 2023-03-03 2023-03-03 Load balancing system, method, equipment and storage medium based on online computing Pending CN116319565A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310197695.6A CN116319565A (en) 2023-03-03 2023-03-03 Load balancing system, method, equipment and storage medium based on online computing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310197695.6A CN116319565A (en) 2023-03-03 2023-03-03 Load balancing system, method, equipment and storage medium based on online computing

Publications (1)

Publication Number Publication Date
CN116319565A true CN116319565A (en) 2023-06-23

Family

ID=86823427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310197695.6A Pending CN116319565A (en) 2023-03-03 2023-03-03 Load balancing system, method, equipment and storage medium based on online computing

Country Status (1)

Country Link
CN (1) CN116319565A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116668359A (en) * 2023-07-31 2023-08-29 杭州网鼎科技有限公司 Intelligent non-inductive switching method, system and storage medium for network paths
CN116668359B (en) * 2023-07-31 2023-10-10 杭州网鼎科技有限公司 Intelligent non-inductive switching method, system and storage medium for network paths

Similar Documents

Publication Publication Date Title
US10129043B2 (en) Apparatus and method for network flow scheduling
EP1708441B1 (en) A method, network element and communication network for fairly adjusting bandwidth among distributed network elements
US20070041321A1 (en) Network switch apparatus that avoids congestion at link-aggregated physical port
CN110519783B (en) 5G network slice resource allocation method based on reinforcement learning
CN108476175B (en) Transfer SDN traffic engineering method and system using dual variables
CN107666448B (en) 5G virtual access network mapping method under time delay perception
CN112350949B (en) Rerouting congestion control method and system based on flow scheduling in software defined network
CN109787801A (en) A kind of network service management methods, devices and systems
CN111181873B (en) Data transmission method, data transmission device, storage medium and electronic equipment
CN107579925B (en) Message forwarding method and device
CN109274589B (en) Service transmission method and device
CN116319565A (en) Load balancing system, method, equipment and storage medium based on online computing
US9178826B2 (en) Method and apparatus for scheduling communication traffic in ATCA-based equipment
US20220053373A1 (en) Communication apparatus, communication method, and program
US20180013659A1 (en) Shaping outgoing traffic of network packets in a network management system
Cai et al. Optimal cloud network control with strict latency constraints
CN112005528B (en) Data exchange method, data exchange node and data center network
Wang et al. URBM: user-rank-based management of flows in data center networks through SDN
EP1489795A2 (en) Network swtich configured to weight traffic
US20030174651A1 (en) Control method and system composed of outer-loop steady-state controls and inner-loop feedback controls for networks
Yu et al. Energy-efficient, qos-aware packet scheduling in high-speed networks
CN108347378A (en) A kind of control dedicated network and dynamic routing method for bulk power grid
Bonald et al. Scheduling network traffic
JP2003511976A (en) Link capacity sharing for throughput blocking optimization
Li et al. PARS-SR: A scalable flow forwarding scheme based on Segment Routing for massive giant connections in 5G networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination