CN117880201A - Network load balancing method, system and device based on data processor - Google Patents

Network load balancing method, system and device based on data processor

Info

Publication number
CN117880201A
CN117880201A (application CN202311706188.7A)
Authority
CN
China
Prior art keywords
network
traffic
data processor
load balancing
routing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311706188.7A
Other languages
Chinese (zh)
Inventor
黄云鹏
黄明亮
鄢贵海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yusur Technology Co ltd
Original Assignee
Yusur Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yusur Technology Co ltd filed Critical Yusur Technology Co ltd
Priority to CN202311706188.7A
Publication of CN117880201A
Legal status: Pending (current)

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a network load balancing method, system and device based on a data processor. Network traffic of the host side is sunk to the data processor (DPU) for route forwarding, and node load parameters are monitored to adjust the routing strategy. This reduces the load on the host side, frees computing resources, significantly improves load balancing efficiency, reduces delay, improves the reliability and resilience of the network, and enhances the stability of traffic data forwarding in the network. Because network traffic forwarding, load monitoring, routing decisions and load balancing are functions of the data processor, the system can process network traffic more efficiently, with lower system complexity, lower maintenance cost, and improved flexibility and scalability.

Description

Network load balancing method, system and device based on data processor
Technical Field
The present invention relates to the field of data routing technologies, and in particular, to a method, a system, and an apparatus for balancing network load based on a data processor.
Background
With the continuous expansion of network scale and the rapid growth of data traffic, traditional host-software-based load balancing has difficulty meeting the requirements of modern high-performance computing and low-latency network applications.
First, software load balancing at the host side occupies a large amount of computing resources and therefore introduces processing delay. For high-performance computing and low-latency application scenarios, this delay can seriously affect normal service operation, and the low-latency requirements of high-traffic environments cannot be met.
Secondly, software load balancing at the host side is limited in its ability to cope with growing network scale. As the network expands, a traditional load balancing scheme must handle more nodes and more traffic, so synchronization delay and packet loss become more severe, affecting network performance and reliability.
Meanwhile, host-side software load balancing has limited ability to adapt quickly to changes in network conditions, such as link congestion and node failures. This may degrade network performance and affect stable service operation.
Therefore, a new load balancing solution is needed to improve network performance, reliability and flexibility.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, a system, and an apparatus for network load balancing based on a data processor, so as to eliminate or mitigate one or more drawbacks of the prior art and solve the problems of high delay, poor scalability, and instability caused by the high resource occupancy of host-side software load balancing.
One aspect of the present invention provides a network load balancing method based on a data processor (DPU, Data Processing Unit), executed on the data processor side, the data processor being loaded on the host side, the method comprising the following steps:
based on one or more virtual network cards installed on the host side through an SR-IOV (Single Root I/O Virtualization) driver and the virtual network ports configured for the virtual network cards, loading the SR-IOV driver on the data processor to establish a communication connection with the virtual network cards, wherein a plurality of data processors are provided, in one-to-one correspondence with the virtual network cards;
configuring a network address and a routing table so that the data processor can receive the packet traffic forwarded by the host-side virtual network cards;
initializing and configuring a load balancing table, establishing a routing strategy, and sinking the load balancing task to the data processor for local processing so as to release host-side computing resources.
In some embodiments, the method further comprises: detecting the load parameters of each node in the network in real time, and dynamically adjusting the routing strategy according to the load parameters based on a preset routing algorithm.
In some embodiments, the preset routing algorithm employs an adaptive routing algorithm, a source routing algorithm, a software-defined network (SDN), the Border Gateway Protocol (BGP), the Enhanced Interior Gateway Routing Protocol (EIGRP), or a link-state routing protocol.
In some embodiments, the load parameters include memory usage, network bandwidth usage, disk input/output usage, response time, message loss rate, and/or error rate.
In some embodiments, the method further comprises: and performing stability test on one or more types of network traffic, acquiring a plurality of performance parameters generated in the process of testing each type of traffic, and optimizing routing strategies of the network traffic of each type based on the performance parameters so as to improve the execution effect of the load balancing task.
In some embodiments, the types of network traffic include: data traffic, voice traffic, video traffic, image traffic, real-time traffic, control traffic, management traffic, broadcast traffic, multicast traffic, and/or virtual private network traffic.
In some embodiments, the method further comprises:
generating a test report according to the stability test result;
and/or presetting a performance parameter threshold for each type of network traffic, and, when a performance parameter generated during testing reaches the corresponding threshold, generating alarm information and forwarding it to a preset object through a preset path.
In some embodiments, the method further comprises:
predicting the load parameters at the next moment from the load parameters at the current moment, and configuring the weight of each routing path based on the change in the load parameters between the current moment and the next moment, so as to dynamically adjust the routing strategy.
In another aspect, the present invention further provides a network load balancing system based on a data processor, where the system includes:
at least one host end, wherein the host end adopts one or more virtual network cards installed by an SR-IOV driver and virtual network ports configured for the virtual network cards;
one or more data processors loaded on each host end, in one-to-one correspondence with the virtual network cards of that host end, the data processors executing the steps of the above method.
In another aspect, the present invention also provides a computer readable storage medium having stored thereon a computer program, characterized in that the program when executed by a processor implements the steps of the above method.
The invention has the advantages that:
According to the network load balancing method, system and device based on the data processor, the network traffic of the host side is sunk to the data processor for route forwarding, and the node load parameters are monitored to adjust the routing strategy. This reduces the load on the host side, frees computing resources, significantly improves load balancing efficiency, reduces delay, improves the reliability and resilience of the network, and enhances the stability of traffic data forwarding in the network.
Furthermore, because network traffic forwarding, load monitoring, routing decisions and load balancing are functions of the data processor, the system can process network traffic more efficiently, with lower system complexity, lower maintenance cost, and improved flexibility and scalability.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the above-described specific ones, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate and together with the description serve to explain the invention. In the drawings:
fig. 1 is a flow chart of a network load balancing method based on a data processor according to an embodiment of the invention.
Fig. 2 is a block diagram of a network traffic load balancing method and system based on a data processing unit according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments and the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent. The exemplary embodiments of the present invention and the descriptions thereof are used herein to explain the present invention, but are not intended to limit the invention.
It should be noted here that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, while other details not greatly related to the present invention are omitted.
It should be emphasized that the term "comprises/comprising" when used herein is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
It is also noted herein that the term "coupled" may refer to not only a direct connection, but also an indirect connection in which an intermediate is present, unless otherwise specified.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals represent the same or similar components, or the same or similar steps.
In the prior art, host-side software load balancing occupies a large amount of host computing and storage resources, resulting in high network delay, poor scalability and low stability. To address this, the present application provides a method, system and apparatus for network load balancing based on a data processor, in which the load balancing computation and the network traffic are sunk onto a data processor (DPU) for processing, overcoming the defects of the prior art.
It should be noted that the data processor (DPU, Data Processing Unit) is a data-centric dedicated processor. Following a software-defined technical approach, it virtualizes infrastructure-layer resources and supports infrastructure-layer services such as storage, security and quality-of-service management. The core problem the DPU solves is reducing cost and improving efficiency at the infrastructure level: workloads that the CPU handles inefficiently and that the GPU cannot handle are offloaded to the dedicated DPU, improving the efficiency of the overall computing system and reducing its total cost of ownership.
Specifically, the present invention provides a network load balancing method based on a data processor (DPU, Data Processing Unit), executed on the data processor side, the data processor being loaded on the host side. As shown in fig. 1, the method comprises the following steps S101 to S103:
Step S101: based on one or more virtual network cards installed on the host side through a Single Root I/O Virtualization (SR-IOV) driver and the virtual network ports configured for the virtual network cards, the data processor loads the SR-IOV driver to establish a communication connection with the virtual network cards, wherein a plurality of data processors are provided, in one-to-one correspondence with the virtual network cards.
Step S102: configure a network address and a routing table so that the data processor can receive the packet traffic forwarded by the host-side virtual network card.
Step S103: initialize and configure a load balancing table, establish a routing strategy, and sink the load balancing task to the data processor for local processing so as to release host-side computing resources.
In step S101, SR-IOV (Single Root I/O Virtualization) is a computer hardware virtualization technology that improves network performance and reduces virtualization overhead in virtualized environments. Its goal is to give virtual machines (VMs) direct access to physical hardware devices such as network adapters and storage adapters, thereby providing better performance and throughput. In this embodiment, deploying a plurality of virtual network cards (VFs) on the host side using SR-IOV improves host-side resource utilization. On this basis, a data processor DPU is paired with each virtual network card to carry network traffic forwarding and execute the load balancing tasks.
In step S102, to ensure that network traffic can be routed accurately to each data processor DPU, the DPUs must be configured with network addresses and routing tables; this may be implemented based on common routing protocols.
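For illustration only (a real DPU data plane would not be written in Python), the following minimal sketch shows the kind of longest-prefix-match routing table a DPU could be configured with. The class name, port names and addresses are hypothetical, not part of the claimed implementation.

```python
import ipaddress

class RoutingTable:
    """Minimal longest-prefix-match routing table (illustrative only)."""

    def __init__(self):
        self.entries = []  # list of (ip_network, next_hop)

    def add_route(self, prefix, next_hop):
        self.entries.append((ipaddress.ip_network(prefix), next_hop))

    def lookup(self, dst):
        # Longest-prefix match: among all prefixes containing dst, pick the most specific.
        addr = ipaddress.ip_address(dst)
        matches = [(net.prefixlen, hop) for net, hop in self.entries if addr in net]
        return max(matches)[1] if matches else None

# Example: send two subnets to two DPU-attached virtual ports (names are hypothetical).
table = RoutingTable()
table.add_route("10.0.1.0/24", "dpu-vf0")
table.add_route("10.0.0.0/16", "dpu-vf1")
print(table.lookup("10.0.1.42"))  # -> dpu-vf0 (the more specific prefix wins)
```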
In step S103, the load balancing table is a data structure that stores the mapping between client requests and the different servers that serve them. Based on it, the data processor DPU in the present application performs the network traffic routing and load balancing tasks, taking over the computation pressure from the host; the computing capability of the DPU is used to improve overall performance and reduce delay.
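A minimal sketch of such a load balancing table follows: it maps each client flow, identified by its 5-tuple, to a backend server. The hashing scheme and the names (`LoadBalancingTable`, `select`, the server names) are assumptions for illustration, not the patented data structure.

```python
import hashlib

class LoadBalancingTable:
    """Illustrative load balancing table: maps each client flow to a backend server."""

    def __init__(self, backends):
        self.backends = list(backends)  # candidate servers
        self.flow_map = {}              # flow 5-tuple -> chosen backend

    def select(self, src_ip, src_port, dst_ip, dst_port, proto="tcp"):
        key = (src_ip, src_port, dst_ip, dst_port, proto)
        if key not in self.flow_map:
            # Hash the 5-tuple so every packet of the same flow hits the same backend.
            digest = hashlib.sha256(repr(key).encode()).digest()
            self.flow_map[key] = self.backends[digest[0] % len(self.backends)]
        return self.flow_map[key]

lb = LoadBalancingTable(["server-1", "server-2", "server-3"])
print(lb.select("192.0.2.10", 40001, "203.0.113.5", 443))  # consistent for the whole flow
```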
In some embodiments, the method further comprises step S104: the load parameters of all nodes in the network are detected in real time, and the routing strategy is dynamically adjusted according to the load parameters based on a preset routing algorithm.
In some embodiments, the load parameters include memory usage, network bandwidth usage, disk input output usage, response time, message loss rate, and/or error rate.
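A minimal sketch of how these load parameters might be represented and combined into a single per-node load score follows; the field names and the weighting are assumptions for illustration, not values specified by the invention.

```python
from dataclasses import dataclass

@dataclass
class NodeLoad:
    """Per-node load parameters (field names are illustrative)."""
    memory_usage: float      # fraction in [0, 1]
    bandwidth_usage: float   # fraction in [0, 1]
    disk_io_usage: float     # fraction in [0, 1]
    response_time_ms: float
    loss_rate: float         # fraction in [0, 1]
    error_rate: float        # fraction in [0, 1]

def load_score(n: NodeLoad) -> float:
    """Combine the parameters into one score in [0, 1]; the weights are assumptions."""
    rt = min(n.response_time_ms / 1000.0, 1.0)  # normalize response time to [0, 1]
    return (0.25 * n.memory_usage + 0.25 * n.bandwidth_usage + 0.15 * n.disk_io_usage
            + 0.15 * rt + 0.10 * n.loss_rate + 0.10 * n.error_rate)

print(load_score(NodeLoad(0.7, 0.5, 0.2, 120.0, 0.01, 0.0)))  # -> roughly 0.35
```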
In some embodiments, the preset routing algorithm employs an adaptive routing algorithm, a source routing algorithm, a software-defined network (SDN), the Border Gateway Protocol (BGP), the Enhanced Interior Gateway Routing Protocol (EIGRP), or a link-state routing protocol.
The adaptive routing algorithm can sense the change of the network topology and dynamically adjust the routing policy. Such algorithms make real-time routing decisions by monitoring network conditions, including link loading, congestion conditions, latency, etc.
A Source Routing algorithm (Source Routing) allows the sender of the packet to specify the path of the packet. The sender realizes the perception of network topology by specifying complete route information in the data packet, and can select different paths according to the need.
Software-defined networking (SDN) manages network devices through a centralized controller and adjusts routing strategies according to real-time network conditions. SDN makes network management more flexible: the routing strategy can be adjusted on demand, enabling more intelligent traffic management.
The Border Gateway Protocol (BGP) is a dynamic routing protocol for internet routing. It is aware of routing information between different autonomous systems and makes routing decisions based on this information. BGP is widely used across the internet and has strong route awareness.
The Enhanced Interior Gateway Routing Protocol (EIGRP) is a dynamic routing protocol for interior networks with good routing awareness. It can monitor changes in the network, including link status and bandwidth, to adjust routes dynamically.
Link-state routing protocols, such as OSPF (Open Shortest Path First), sense the state of the network topology and compute shortest paths by updating a link-state database. These protocols accommodate changes in network topology and thereby enable dynamic route adjustment.
The routing algorithms and protocols can sense network conditions, make routing adjustments according to real-time information to optimize network performance, improve fault tolerance, and adapt to changing network conditions. In practice, the selection of an appropriate routing algorithm takes into account the characteristics, requirements and size of the network.
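For illustration, the sketch below shows one simple load-aware adjustment of this kind: each candidate path is weighted by its remaining capacity and the next hop is chosen in proportion to those weights. This is an assumed policy for demonstration, not the specific algorithm of the invention.

```python
import random

def route_weights(loads):
    """Weight each candidate path by its remaining capacity (1 - load)."""
    capacity = {node: max(1.0 - load, 0.0) for node, load in loads.items()}
    total = sum(capacity.values()) or 1.0
    return {node: c / total for node, c in capacity.items()}

def pick_next_hop(weights):
    # Weighted random choice so traffic spreads in proportion to the weights.
    nodes, w = zip(*weights.items())
    return random.choices(nodes, weights=w, k=1)[0]

weights = route_weights({"A": 0.7, "B": 0.2, "C": 0.1})
print(weights)                 # -> {'A': 0.15, 'B': 0.4, 'C': 0.45}
print(pick_next_hop(weights))  # mostly B or C while A stays busy
```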
In some embodiments, the method further comprises step S105: and performing stability test on one or more types of network traffic, acquiring a plurality of performance parameters generated in the process of testing each type of traffic, and optimizing routing strategies of each type of network traffic based on the performance parameters so as to improve the execution effect of load balancing tasks.
In some embodiments, the types of network traffic include: data traffic, voice traffic, video traffic, image traffic, real-time traffic, control traffic, management traffic, broadcast traffic, multicast traffic, and/or virtual private network traffic.
Data Traffic (Data Traffic) refers to general Traffic carrying user Data, including file transfer, email, web browsing, and the like. Data traffic is the most basic and common traffic type in a network.
Voice Traffic is primarily used for real-time voice communications, such as VoIP (Voice over IP) and other real-time telephony applications. Voice traffic is highly sensitive to delay and requires high-quality transmission.
Video Traffic (Video Traffic) encompasses various Video content transmitted over a network, including Video streaming, online Video services, video conferencing, and the like. Video traffic typically requires higher bandwidth and a stable connection.
Image Traffic refers to traffic that transmits image files or graphic data, for example images loaded through a web browser. Image traffic demands fast loading and high-quality display.
Real-time Traffic refers to traffic related to real-time communication and transmission, including real-time audio, real-time video, online games, etc. Real-time traffic requires low latency and high reliability.
Control Traffic (Control Traffic) refers to Traffic used for communication and Control between network devices, such as route update, link state notification, and the like. Control traffic is critical to the stable operation and topology management of the network.
Management Traffic refers to traffic used for network management purposes, such as device configuration, performance monitoring and fault detection. Management traffic is critical for maintaining and monitoring the health of the network.
Broadcast traffic (Broadcast Traffic) refers to broadcast information that propagates in a local area network, such as ARP (address resolution protocol) requests. Broadcast traffic may cause unnecessary congestion in large networks.
Multicast Traffic is traffic transmitted to a specified group of recipients rather than broadcast to all devices. Multicast traffic can be used to deliver the same data to multiple targets efficiently.
Virtual Private Network (VPN) traffic refers to traffic that is transported through encrypted tunnels for remote access, secure communications, and isolated networks. VPN traffic is very important to ensure the security of communications.
In some embodiments, the method further comprises steps S201 and/or S202:
step S201: and generating a test report according to the result of the stability test.
Step S202: and presetting a performance parameter threshold value for each type of network flow, and generating alarm information and forwarding the alarm information to a preset object through a preset path when each performance parameter generated in the test process reaches the corresponding performance parameter threshold value.
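The following sketch illustrates the threshold check of step S202; the traffic types, parameter names, threshold values and the `notify` delivery callback are all hypothetical.

```python
# Thresholds per traffic type (the values here are assumptions, not from the invention).
THRESHOLDS = {
    "voice": {"latency_ms": 50, "loss_rate": 0.01},
    "video": {"latency_ms": 100, "loss_rate": 0.02},
    "data":  {"latency_ms": 300, "loss_rate": 0.05},
}

def check_and_alarm(traffic_type, measured, notify):
    """Compare measured performance parameters with preset thresholds and forward alarms via `notify`."""
    alarms = []
    for param, limit in THRESHOLDS.get(traffic_type, {}).items():
        value = measured.get(param)
        if value is not None and value >= limit:
            msg = f"[ALARM] {traffic_type}: {param}={value} reached threshold {limit}"
            alarms.append(msg)
            notify(msg)  # e.g. deliver to an operator over a preset path
    return alarms

check_and_alarm("voice", {"latency_ms": 75, "loss_rate": 0.002}, notify=print)
```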
In some embodiments, the method further comprises step S301: predict the load parameters at the next moment from the load parameters at the current moment, and configure the weight of each routing path based on the change in the load parameters between the current moment and the next moment, so as to dynamically adjust the routing strategy.
In another aspect, the present invention further provides a network load balancing system based on a data processor, where the system includes:
at least one host end, wherein the host end adopts one or more virtual network cards installed by an SR-IOV driver and virtual network ports configured for the virtual network cards;
one or more data processors loaded on each host end, in one-to-one correspondence with the virtual network cards of that host end, the data processors executing the steps of the above method.
In another aspect, the present invention also provides a computer readable storage medium having stored thereon a computer program, characterized in that the program when executed by a processor implements the steps of the above method.
The invention is described below in connection with a specific embodiment:
In view of the continuous expansion of network scale and the rapid growth of data traffic, traditional software-based load balancing has difficulty meeting the requirements of modern high-performance computing and low-latency network applications. A method and system for network traffic load balancing based on a data processing unit (DPU) are therefore provided to achieve more efficient load balancing. Meanwhile, to cope with real-time changes in network conditions such as link congestion and node failures, the load of each node is monitored in real time and the routing strategy is adjusted automatically according to the load conditions, improving the reliability and resilience of the network.
The embodiment designs a system integrating the DPU, the load monitoring module, the routing strategy module and the load balancing algorithm module so as to realize network traffic load balancing.
By sinking the load balancing task onto the DPU and adopting the SR-IOV technology, use of a host network port is avoided and synchronous acceleration of load balancing is achieved. The host only needs to hand each packet to the DPU through a virtual network card (VF); the DPU queries the load balancing table and completes the forwarding. No network port or load balancing table information is needed on the host side, which frees host CPU computing resources and improves system processing efficiency.
An embodiment of a method and a system for balancing network traffic load based on a Data Processing Unit (DPU) in this embodiment is shown in fig. 2, and specifically includes the following steps:
1. DPU equipment is introduced to construct a network traffic load balancing system based on the DPU.
1.1 Introduce a DPU device on each host that requires network traffic load balancing, to process load balancing tasks and forward network traffic.
1.2 Configure the DPU devices: install and configure the DPU hardware, including the network interface card, graphics card, hard disk and the like, on each DPU node. Meanwhile, install the virtual network card (VF) driver and the SR-IOV driver on the host side so that the host can communicate with the DPU.
1.3 Network configuration: configure network addresses and routing tables to ensure that network traffic is correctly routed to each DPU node, achieving load balancing.
2. And realizing load balancing synchronous acceleration.
2.1 Using SR-IOV (Single Root I/O Virtualization), install the SR-IOV driver on the DPU node. Based on the virtual function (VF) mechanism of SR-IOV, packets are forwarded to the DPU through the VF, so no network port or load balancing table information is needed on the host side; this frees host CPU computing resources and improves system processing efficiency.
2.2 Configure the load balancing table: initialize and configure the load balancing table according to actual service requirements, so that the load balancing task is sunk onto the DPU for processing.
2.3 Achieve synchronous acceleration of load balancing: use the parallel computing capability of the DPU to process load balancing tasks and network traffic rapidly, achieving synchronous acceleration of load balancing.
3. And dynamically adjusting the network traffic forwarding strategy.
3.1 Monitor node load: the load of each node is monitored in real time, and the routing strategy is adjusted automatically according to the load conditions to keep the network load balanced. The following is an example of such a routing strategy:
Assume three nodes: A (leader), B (candidate) and C (follower), each with a different load percentage. Let A be 70% loaded, B be 20% loaded, and C be 10% loaded.
The routing strategy redirects some traffic from the busiest node (A) to the less busy nodes (B and C) to balance the load. If the traffic is divisible, for example into packets or API requests, the system can calculate the proportion of traffic to redirect based on the current load.
For example, if the goal is to balance each node's load to approximately 33%, the routing strategy may redirect a portion of the newly arriving traffic from node A to nodes B and C until the load is balanced, thereby dynamically adjusting the routing strategy for different load conditions. A minimal calculation of such redirection proportions is sketched below.
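The redirection proportions in this A/B/C example can be computed as follows; the deficit-based split is an assumption consistent with the numbers above, used only for illustration.

```python
def redirect_shares(loads, target=None):
    """Share of newly arriving traffic each node should receive so loads converge toward the target.
    Nodes above the target receive nothing; the rest share in proportion to their deficit."""
    if target is None:
        target = sum(loads.values()) / len(loads)  # ~33% for the A/B/C example
    deficit = {n: max(target - l, 0.0) for n, l in loads.items()}
    total = sum(deficit.values())
    if total == 0:  # already balanced: split evenly
        return {n: 1.0 / len(loads) for n in loads}
    return {n: d / total for n, d in deficit.items()}

print(redirect_shares({"A": 0.70, "B": 0.20, "C": 0.10}))
# -> A gets 0% of the new traffic, B about 36%, C about 64%, until the loads even out
```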
3.2 Dynamically adjust the network traffic forwarding strategy: the cache and parallel computing capability of the DPU are used to adjust the network traffic forwarding strategy dynamically, improving the reliability and resilience of the network. For example, the load parameters at the next moment are predicted from the load parameters at the current moment, and the weight of each routing path is configured based on the change in the load parameters between the current moment and the next moment, so as to dynamically adjust the routing strategy. The prediction process may be implemented with a pre-trained neural network module; a simple stand-in for this prediction-and-weighting step is sketched below.
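In the sketch, the pre-trained neural network is replaced with a naive linear extrapolation, and path weights are derived from the predicted loads; both the predictor and the weighting rule are assumptions for demonstration, not the invention's algorithm.

```python
def predict_next(current, previous):
    """Naive linear extrapolation of the next-moment load (stand-in for the pre-trained predictor)."""
    return max(min(current + (current - previous), 1.0), 0.0)

def path_weights(current, previous):
    """Weight each routing path by its predicted remaining capacity at the next moment."""
    predicted = {n: predict_next(current[n], previous[n]) for n in current}
    capacity = {n: 1.0 - p for n, p in predicted.items()}
    total = sum(capacity.values()) or 1.0
    return {n: c / total for n, c in capacity.items()}

prev = {"A": 0.60, "B": 0.25, "C": 0.10}
curr = {"A": 0.70, "B": 0.20, "C": 0.10}
print(path_weights(curr, prev))  # A is trending up, so it receives the smallest weight
```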
4. And a load balancing algorithm based on the DPU is realized.
4.1 designing a high-performance load balancing algorithm module, fully utilizing a high-performance processor and a cache of the DPU, improving throughput and performance and reducing delay.
4.2 realizing a load balancing algorithm module based on the DPU to dynamically adjust the forwarding strategy of the network traffic, thereby realizing load balancing.
5. Testing and optimizing.
5.1 Confirm system stability: test the stability of the system to ensure it operates stably and meets actual service requirements.
5.2 Test performance: test the performance of the system, including indicators such as load balancing efficiency, throughput and delay, and optimize the system to improve network performance and reliability.
Therefore, the network traffic load balancing method and system based on the DPU can fully utilize the high-performance processor and the cache of the DPU, and improve the network performance and reliability. By monitoring and adjusting the network flow forwarding strategy in real time, the system can adaptively realize load balancing under different scenes, improve network performance and reliability, simultaneously reduce the load of a host end, release computing resources and further improve overall performance and usability.
Specifically, the advantage of this embodiment is:
low latency load balancing decision: the DPU is provided with a high-performance processor and a high-speed cache, so that a load balancing task is sunk to the DPU, the data packet is directly processed, the load balancing decision is made, and the delay of the data packet transmission to a host side is avoided.
Efficient load balancing calculation: the DPU has higher parallel computing capability, and can rapidly process complex load balancing computation and decision. Sinking the load balancing task to the DPU can unload the load balancing calculation and decision task from the host end to the DPU, reduce the host load, release the calculation resources, and improve the performance and usability.
Load balancing synchronous acceleration: by sinking the load balancing task on the DPU and adopting the SR-IOV technology, the use of a network port of a host is avoided, and the synchronous acceleration of load balancing is realized. And the network port and the load balancing table information of the host are not needed, so that the CPU computing resource of the host is released, and the system processing efficiency is improved.
Embodiments of the present invention also provide a computer device that may include a processor, a memory, wherein the processor and the memory may be connected by a bus or other means.
The processor may be a central processing unit (Central Processing Unit, CPU). The processor may also be any other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs and modules, such as the program instructions/modules corresponding to the data processor-based network load balancing method in the embodiments of the present invention. By running the non-transitory software programs, instructions and modules stored in the memory, the processor executes various functional applications and performs data processing.
The memory may include a memory program area and a memory data area, wherein the memory program area may store an operating system, at least one application program required for a function; the storage data area may store data created by the processor, etc. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory may optionally include memory located remotely from the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory that, when executed by the processor, perform the methods described in the present embodiments.
The embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the data processor-based network load balancing method described above. The computer readable storage medium may be a tangible storage medium such as random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a floppy disk, a hard disk, a removable memory disk, a CD-ROM, or any other form of storage medium known in the art.
In summary, according to the network load balancing method, system and device based on the data processor, the network traffic of the host side is sunk to the data processor for route forwarding, and the node load parameters are monitored to adjust the routing strategy. This reduces the load on the host side, frees computing resources, significantly improves load balancing efficiency, reduces delay, improves the reliability and resilience of the network, and enhances the stability of traffic data forwarding in the network.
Furthermore, because network traffic forwarding, load monitoring, routing decisions and load balancing are functions of the data processor, the system can process network traffic more efficiently, with lower system complexity, lower maintenance cost, and improved flexibility and scalability.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein can be implemented as hardware, software, or a combination of both. Whether a particular implementation is hardware or software depends on the specific application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, it may be, for example, an electronic circuit, an application specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present invention.
In this disclosure, features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations can be made to the embodiments of the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for balancing network load based on a data processor, wherein the method is used for executing on a data processor side, and the data processor is loaded on a host side, and the method comprises the following steps:
based on one or more virtual network cards installed on the host end through an SR-IOV (Single Root I/O Virtualization) driver and the virtual network ports configured for the virtual network cards, loading the SR-IOV driver on the data processor to establish a communication connection with the virtual network cards, wherein a plurality of data processors are provided, in one-to-one correspondence with the virtual network cards;
configuring a network address and a routing table so that the data processor can receive the packet traffic forwarded by the host-side virtual network cards;
initializing and configuring a load balancing table, establishing a routing strategy, and sinking the load balancing task to the data processor for local processing so as to release the computing resources of the host side.
2. The data processor-based network load balancing method of claim 1, further comprising:
and detecting the load parameters of each node in the network in real time, and dynamically adjusting the routing strategy according to the load parameters based on a preset routing algorithm.
3. The method of claim 2, wherein the preset routing algorithm is an adaptive routing algorithm, a source routing algorithm, a software-defined network, the Border Gateway Protocol, the Enhanced Interior Gateway Routing Protocol, or a link-state routing protocol.
4. The method of claim 2, wherein the load parameters include memory usage, network bandwidth usage, disk input/output usage, response time, message loss rate, and/or error rate.
5. The data processor-based network load balancing method of claim 1, further comprising:
and performing stability test on one or more types of network traffic, acquiring a plurality of performance parameters generated in the process of testing each type of traffic, and optimizing routing strategies of the network traffic of each type based on the performance parameters so as to improve the execution effect of the load balancing task.
6. The data processor-based network load balancing method of claim 5, wherein the type of network traffic comprises: data traffic, voice traffic, video traffic, image traffic, real-time traffic, control traffic, management traffic, broadcast traffic, multicast traffic, and/or virtual private network traffic.
7. The data processor-based network load balancing method of claim 5, further comprising:
generating a test report according to the stability test result;
and/or presetting a performance parameter threshold for each type of network traffic, and, when a performance parameter generated during testing reaches the corresponding performance parameter threshold, generating alarm information and forwarding it to a preset object through a preset path.
8. The data processor-based network load balancing method of claim 2, further comprising:
and predicting the load parameters at the next moment from the load parameters at the current moment, and configuring the weight of each routing path based on the change in the load parameters between the current moment and the next moment, so as to dynamically adjust the routing strategy.
9. A data processor-based network load balancing system, the system comprising:
at least one host end, wherein the host end adopts one or more virtual network cards installed by an SR-IOV driver and virtual network ports configured for the virtual network cards;
one or more data processors loaded on each of said host sides and in one-to-one correspondence with each virtual network card of the loaded host side, said data processors performing the steps of the method according to any one of claims 1 to 8.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
CN202311706188.7A 2023-12-12 2023-12-12 Network load balancing method, system and device based on data processor Pending CN117880201A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311706188.7A CN117880201A (en) 2023-12-12 2023-12-12 Network load balancing method, system and device based on data processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311706188.7A CN117880201A (en) 2023-12-12 2023-12-12 Network load balancing method, system and device based on data processor

Publications (1)

Publication Number Publication Date
CN117880201A true CN117880201A (en) 2024-04-12

Family

ID=90587405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311706188.7A Pending CN117880201A (en) 2023-12-12 2023-12-12 Network load balancing method, system and device based on data processor

Country Status (1)

Country Link
CN (1) CN117880201A (en)

Similar Documents

Publication Publication Date Title
US11463511B2 (en) Model-based load balancing for network data plane
US9736278B1 (en) Method and apparatus for connecting a gateway router to a set of scalable virtual IP network appliances in overlay networks
US10320683B2 (en) Reliable load-balancer using segment routing and real-time application monitoring
US10484233B2 (en) Implementing provider edge with hybrid packet processing appliance
US8743894B2 (en) Bridge port between hardware LAN and virtual switch
US10110500B2 (en) Systems and methods for management of cloud exchanges
WO2014118938A1 (en) Communication path management method
US11444840B2 (en) Virtualized networking application and infrastructure
US8095661B2 (en) Method and system for scaling applications on a blade chassis
US10411742B2 (en) Link aggregation configuration for a node in a software-defined network
US7944923B2 (en) Method and system for classifying network traffic
US20170237649A1 (en) Adjusted spanning tree protocol path cost values in a software defined network
US20170293500A1 (en) Method for optimal vm selection for multi data center virtual network function deployment
US10091063B2 (en) Technologies for directed power and performance management
US20220045969A1 (en) Mapping nvme-over-fabric packets using virtual output queues
Iqbal et al. Minimize the delays in software defined network switch controller communication
US20140047260A1 (en) Network management system, network management computer and network management method
US11848989B2 (en) Separate routing of NVMe-over-fabric packets and non-NVMe packets
Moura et al. Resilience enhancement at edge cloud systems
US20190268269A1 (en) Migration from a legacy network appliance to a network function virtualization (nfv) appliance
CN117880201A (en) Network load balancing method, system and device based on data processor
WO2021173046A1 (en) Dynamic distributed local breakout determination
JP2015106865A (en) Communication device, communication system, communication method, and communication program
Jia et al. sRetor: a semi-centralized regular topology routing scheme for data center networking
US11477274B2 (en) Capability-aware service request distribution to load balancers

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination