CN115022336A - Server resource load balancing method, system, terminal and storage medium - Google Patents

Server resource load balancing method, system, terminal and storage medium

Info

Publication number
CN115022336A
CN115022336A
Authority
CN
China
Prior art keywords
cpu
link
target
network card
binding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210612475.0A
Other languages
Chinese (zh)
Inventor
白云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202210612475.0A priority Critical patent/CN115022336A/en
Publication of CN115022336A publication Critical patent/CN115022336A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/101 Server selection for load balancing based on network conditions
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to the technical field of servers, and in particular to a server resource load balancing method, system, terminal and storage medium. The method includes: establishing a first link connecting a first CPU to a service network card and a second link connecting a second CPU to the same service network card; acquiring the resource usage of the first CPU and the second CPU, and selecting a target CPU based on that usage; and invoking the target link that connects the target CPU to the service network card, and sending the service to the target CPU over that link. The invention can effectively balance the task load across the CPUs and make full use of multi-CPU resources.

Description

Server resource load balancing method, system, terminal and storage medium
Technical Field
The invention belongs to the technical field of servers, and particularly relates to a server resource load balancing method, a system, a terminal and a storage medium.
Background
At present, with the explosive growth of services such as cloud computing and big data, ever higher requirements are placed on servers: not only must overall performance and computing density improve, but load-response requirements also keep rising, which in turn imposes higher response requirements on server resource scheduling and load balancing.
Load balance refers to distributing requests and data evenly across multiple processing units. Its purpose is to spread load across network connections, CPUs, disks or other resources so as to optimize resource usage, maximize throughput and minimize response time. A typical server today is composed of multiple physical CPUs, memory, hard disks and network cards. Traditionally, internal server resources are allocated around the server's CPUs in a fixed order: network and storage resources are assigned to CPU1 first, and only after CPU1's allocation is complete are resources assigned to CPU2 (see fig. 1). The advantages of this scheme are a simple internal layout and fewer risers and cables, but the disadvantage is equally obvious: the scheduling of internal server resources is not reasonable enough. Under heavy load, CPU1 and CPU2 resources become unbalanced, with a large number of resource requests and responses concentrated on CPU1 while CPU2 idles because its resources are underutilized.
Disclosure of Invention
In view of the above-mentioned deficiencies in the prior art, the present invention provides a method, a system, a terminal and a storage medium for balancing server resource load, so as to solve the above-mentioned technical problems.
In a first aspect, the present invention provides a server resource load balancing method, including:
respectively establishing a first link for connecting a first CPU with a service network card and a second link for connecting a second CPU with the service network card;
acquiring resource use conditions of a first CPU and a second CPU, and selecting a target CPU based on the resource use conditions;
and calling a target link connecting the target CPU and the service network card, and sending the service to the target CPU through the target link.
Further, after a first link connecting the first CPU and the service network card and a second link connecting the second CPU and the service network card are respectively established, the method further includes:
and monitoring the states of the first link and the second link, and if an unavailable link exists, sending the service to the corresponding CPU through the normal link.
Further, acquiring resource usage of the first CPU and the second CPU, and selecting a target CPU based on the resource usage, includes:
establishing a communication link between the first CPU and the second CPU;
and setting the working modes of the first CPU and the second CPU as an election mode, and carrying out resource information interaction on the first CPU and the second CPU through a communication link in the election mode to elect a target CPU with the minimum resource utilization rate.
Further, establishing a communication link between the first CPU and the second CPU includes:
the binding driver intercepts and captures an address resolution response sent by a local machine and rewrites a source hardware address into a unique hardware address of a slave node in binding, so that different opposite ends use different hardware addresses for communication;
when the local machine sends an address resolution request, the binding driver copies and stores the IP information of the opposite end from the address resolution packet;
when the address resolution response arrives from the opposite end, the binding driver extracts its hardware address and initiates an address resolution response to the slave node in the binding.
In a second aspect, the present invention provides a server resource load balancing system, including:
the link establishing unit is used for respectively establishing a first link for connecting the first CPU with the service network card and a second link for connecting the second CPU with the service network card;
the target determining unit is used for acquiring the resource use conditions of the first CPU and the second CPU and selecting a target CPU based on the resource use conditions;
and the task sending unit is used for calling a target link connected with the target CPU and the service network card and sending the service to the target CPU through the target link.
Further, the system further comprises:
and the link monitoring unit is used for monitoring the states of the first link and the second link; if a disconnected link exists, the service is sent to the corresponding CPU through the normal link.
Further, the target determination unit includes:
the communication establishing module is used for establishing a communication link between the first CPU and the second CPU;
and the election execution module is used for setting the working modes of the first CPU and the second CPU into an election mode, and the first CPU and the second CPU perform resource information interaction through a communication link in the election mode to elect a target CPU with the minimum resource utilization rate.
Further, the communication establishing module is configured to:
the binding driver intercepts and captures an address resolution response sent by a local machine and rewrites a source hardware address into a unique hardware address of a slave node in binding, so that different opposite ends use different hardware addresses for communication;
when the local machine sends an address resolution request, the binding driver copies and stores the IP information of the opposite end from the address resolution packet;
when the address resolution response arrives from the opposite end, the binding driver extracts its hardware address and initiates an address resolution response to the slave node in the binding.
In a third aspect, a terminal is provided, including:
a processor, a memory, wherein,
the memory is used for storing a computer program which,
the processor is used for calling and running the computer program from the memory so as to make the terminal execute the method of the terminal.
In a fourth aspect, a computer storage medium is provided having stored therein instructions that, when executed on a computer, cause the computer to perform the method of the above aspects.
With the server resource load balancing method, system, terminal and storage medium provided by the invention, services are no longer allocated to CPU1 by default in the traditional way, with CPU1 requesting CPU2 for assistance. Instead, the two CPUs no longer operate in a master-slave mode but serve in a peer-to-peer mode: the network card and the hard disk are served as peers, and resources are neither confined to a single "lane" nor allowed to pile up on that lane or on a single CPU processing unit.
In addition, the invention has reliable design principle, simple structure and very wide application prospect.
Drawings
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present invention, the drawings used in the description of the embodiments or prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1 is a schematic flow diagram of a method of one embodiment of the invention.
FIG. 2 is a schematic block diagram of a system of one embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiment of the present invention will be clearly and completely described below with reference to the drawings in the embodiment of the present invention, and it is obvious that the described embodiment is only a part of the embodiment of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a schematic flow diagram of a method of one embodiment of the invention. The execution subject in fig. 1 may be a server resource load balancing system.
As shown in fig. 1, the method includes:
step 110, respectively establishing a first link between a first CPU and a service network card and a second link between a second CPU and the service network card;
step 120, acquiring resource use conditions of a first CPU and a second CPU, and selecting a target CPU based on the resource use conditions;
step 130, a target link connecting the target CPU and the service network card is called, and the service is sent to the target CPU through the target link.
In order to facilitate understanding of the present invention, the server resource load balancing method provided by the present invention is further described below with reference to the principle of the server resource load balancing method of the present invention and in combination with the process of performing load balancing on server resources in the embodiments.
Specifically, the server resource load balancing method includes:
s1, respectively establishing a first link between the first CPU and the service network card and a second link between the second CPU and the service network card.
The service network card is an OCP network card, and the two ports of the OCP card correspond to the two CPUs respectively.
Create and load the bonding configuration module under Linux:
# vim /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 miimon=100 mode=6
The miimon option enables link monitoring; for example, with miimon=100 the system checks the link connection state every 100 ms, and if one line becomes unavailable it switches to the other line.
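The failover behavior that miimon provides can be sketched in user space as follows. This is a hypothetical illustration of the selection logic only, not the kernel bonding driver itself; the interface names `ens1f0`/`ens1f1` and the state strings are assumptions for the example.

```python
def pick_active_link(links):
    """Return the name of the first link whose carrier state is 'up'.

    `links` maps an interface name to a carrier-state string, mimicking
    what the bonding driver learns from its periodic miimon checks.
    """
    for name, state in links.items():
        if state == "up":
            return name
    return None  # no usable link: traffic cannot be forwarded


# With both links up, the first (CPU1-side) link carries traffic;
# if it goes down, traffic fails over to the second (CPU2-side) link.
print(pick_active_link({"ens1f0": "up", "ens1f1": "up"}))    # ens1f0
print(pick_active_link({"ens1f0": "down", "ens1f1": "up"}))  # ens1f1
```

In the real bonding driver this check runs in the kernel every miimon milliseconds; the sketch only shows the selection rule applied to one snapshot of link states.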
S2, acquiring the resource use condition of the first CPU and the second CPU, and selecting the target CPU based on the resource use condition.
Establishing a communication link between the first CPU and the second CPU; and setting the working modes of the first CPU and the second CPU as an election mode, and carrying out resource information interaction on the first CPU and the second CPU through a communication link in the election mode to elect a target CPU with the minimum resource utilization rate.
The communication link between the first CPU and the second CPU is established as follows: the bonding driver intercepts the address resolution (ARP) response sent by the local machine and rewrites the source hardware address to the unique hardware address of a slave node in the bond, so that different peers communicate using different hardware addresses; when the local machine sends an ARP request, the bonding driver copies and stores the peer's IP information from the ARP packet; and when an ARP response arrives from the peer, the bonding driver extracts its hardware address and initiates an ARP response toward the slave node in the bond.
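The election step, choosing the CPU with the lowest resource utilization, can be sketched as follows. This is a minimal illustration under stated assumptions: the function names `utilization` and `elect_target` are hypothetical, and the jiffy counters are made-up values; a real implementation would sample them per socket (for example from /proc/stat) over the election interval.

```python
def utilization(prev, curr):
    """Fraction of jiffies spent busy between two (busy, total) samples."""
    busy = curr[0] - prev[0]
    total = curr[1] - prev[1]
    return busy / total if total else 0.0


def elect_target(samples):
    """samples: CPU name -> ((busy0, total0), (busy1, total1)).

    Returns the CPU with the minimum utilization over the interval,
    mirroring the election mode described above.
    """
    return min(samples, key=lambda cpu: utilization(*samples[cpu]))


# Hypothetical jiffy counters for the two sockets:
samples = {
    "cpu1": ((800, 1000), (1700, 2000)),  # 90% busy over the interval
    "cpu2": ((200, 1000), (500, 2000)),   # 30% busy over the interval
}
print(elect_target(samples))  # cpu2
```

Because cpu2 was only 30% busy over the sampled interval while cpu1 was 90% busy, the election selects cpu2 as the target CPU for the next service.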
In other embodiments of the invention, the traffic may also be distributed equally between the two CPUs; for example, if a customer requires 2 network cards and 2 hard disks, one of each can be assigned to each CPU.
And S3, calling a target link connecting the target CPU and the service network card, and sending the service to the target CPU through the target link.
Assuming the target CPU is the second CPU, the service newly received by the OCP network card is sent to the second CPU through the second link.
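The dispatch step can be sketched as follows. This is an illustrative stand-in only: the `dispatch` function and the per-CPU link callables are hypothetical names introduced for the example, not part of the patent's described implementation.

```python
def dispatch(service, target_cpu, links):
    """Send `service` over the link bound to the elected `target_cpu`.

    `links` maps a CPU name to a callable that transmits over the
    corresponding link (first link for cpu1, second link for cpu2).
    """
    send = links[target_cpu]
    send(service)


log = []
links = {
    "cpu1": lambda svc: log.append(("first link", svc)),
    "cpu2": lambda svc: log.append(("second link", svc)),
}

# The election picked the second CPU, so the new service travels
# over the second link:
dispatch("new OCP request", "cpu2", links)
print(log)  # [('second link', 'new OCP request')]
```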
There is also information interaction between the two CPUs. In actual operation, if CPU1's resources are insufficient, CPU2 may be scheduled to assist with processing. In a single-CPU system, multiple levels of hardware cache make scheduling on CPU1 straightforward; with multiple CPUs, caching is far more complex. A program running on CPU1 reads data from CPU1's own cache, so if CPU2 is asked to assist, it cannot directly and efficiently access CPU1's cache and therefore cannot effectively help CPU1. To address cache access and consistency, an HBM cache mode may be introduced on the CPU: data to be exchanged between the network card and the CPU is stored in memory, the hottest data in memory is synchronously backed up to CPU1's third-level shared cache, and the HBM memories of the two CPUs share the CPUs' third-level cache, so that CPU2 can efficiently obtain the data of the network card and CPU1 to assist with computation and scheduling tasks.
The OCP network card's connection mode is changed from single-host to multi-host: the two ports of the OCP card correspond to the 2 CPUs respectively, so requests to and services from the OCP card are no longer handled by a single CPU core. CPU resource usage is thereby balanced, and the two CPUs can respond to and process requests simultaneously with equal latency.
As shown in fig. 2, the system 200 includes:
a link establishing unit 210, configured to respectively establish a first link between a first CPU and a service network card and a second link between a second CPU and the service network card;
a target determining unit 220, configured to obtain resource usage of the first CPU and the second CPU, and select a target CPU based on the resource usage;
and a task sending unit 230, configured to invoke a target link where the target CPU is connected to the service network card, and send the service to the target CPU through the target link.
Optionally, as an embodiment of the present invention, the system further includes:
and the link monitoring unit is used for monitoring the states of the first link and the second link, and if the link which is not communicated exists, the service is sent to the corresponding CPU through the normal link.
Optionally, as an embodiment of the present invention, the target determining unit includes:
the communication establishing module is used for establishing a communication link between the first CPU and the second CPU;
and the election execution module is used for setting the working modes of the first CPU and the second CPU into an election mode, and the first CPU and the second CPU perform resource information interaction through a communication link in the election mode to elect a target CPU with the minimum resource utilization rate.
Optionally, as an embodiment of the present invention, the communication establishing module is configured to:
the binding driver intercepts and captures an address resolution response sent by a local machine and rewrites a source hardware address into a unique hardware address of a slave node in binding, so that different opposite ends use different hardware addresses for communication;
when the local machine sends an address resolution request, the binding driver copies and stores the IP information of the opposite end from the address resolution packet;
when the address resolution response arrives from the opposite end, the binding driver extracts its hardware address and initiates an address resolution response to the slave node in the binding.
Fig. 3 is a schematic structural diagram of a terminal 300 according to an embodiment of the present invention, where the terminal 300 may be configured to execute the method for balancing server resource load according to the embodiment of the present invention.
The terminal 300 may include: a processor 310, a memory 320, and a communication unit 330. These components communicate via one or more buses. Those skilled in the art will appreciate that the server architecture shown in the figure is not limiting: it may be a bus or star architecture, may include more or fewer components than shown, or may combine or arrange certain components differently.
The memory 320 may be used for storing instructions executed by the processor 310, and the memory 320 may be implemented by any type of volatile or non-volatile storage terminal or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk. The executable instructions in memory 320, when executed by processor 310, enable terminal 300 to perform some or all of the steps in the method embodiments described below.
The processor 310 is a control center of the storage terminal, connects various parts of the entire electronic terminal using various interfaces and lines, and performs various functions of the electronic terminal and/or processes data by operating or executing software programs and/or modules stored in the memory 320 and calling data stored in the memory. The processor may be composed of an Integrated Circuit (IC), for example, a single packaged IC, or a plurality of packaged ICs connected with the same or different functions. For example, the processor 310 may include only a Central Processing Unit (CPU). In the embodiment of the present invention, the CPU may be a single operation core, or may include multiple operation cores.
A communication unit 330, configured to establish a communication channel so that the storage terminal can communicate with other terminals. And receiving user data sent by other terminals or sending the user data to other terminals.
The present invention also provides a computer storage medium, wherein the computer storage medium may store a program, and the program may include some or all of the steps in the embodiments provided by the present invention when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
Therefore, with the present invention services are no longer allocated to CPU1 in the traditional way, with CPU1 requesting CPU2 for assistance; the two CPUs no longer operate in a master-slave mode but serve in a peer-to-peer mode, the network card and the hard disk are served as peers, and resources are neither confined to a single "track" nor left to jam on that track or on one of the CPU processing units. For the technical effects achievable by this embodiment, refer to the description above, which is not repeated here.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be embodied in the form of a software product, where the computer software product is stored in a storage medium, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like, and the storage medium can store program codes, and includes instructions for enabling a computer terminal (which may be a personal computer, a server, or a second terminal, a network terminal, and the like) to perform all or part of the steps of the method in the embodiments of the present invention.
The same and similar parts in the various embodiments in this specification may be referred to each other. Especially, for the terminal embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and the relevant points can be referred to the description in the method embodiment.
In the several embodiments provided in the present invention, it should be understood that the disclosed system and method may be implemented in other manners. For example, the above-described system embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, systems or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
Although the present invention has been described in detail with reference to the drawings and the preferred embodiments, the present invention is not limited thereto. Those skilled in the art may make various equivalent modifications or substitutions to the embodiments of the present invention without departing from its spirit and scope, and such modifications or substitutions fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for balancing server resource load is characterized by comprising the following steps:
respectively establishing a first link for connecting a first CPU with a service network card and a second link for connecting a second CPU with the service network card;
acquiring resource use conditions of a first CPU and a second CPU, and selecting a target CPU based on the resource use conditions;
and calling a target link connecting the target CPU and the service network card, and sending the service to the target CPU through the target link.
2. The method of claim 1, wherein after establishing the first link between the first CPU and the service network card and the second link between the second CPU and the service network card, respectively, the method further comprises:
and monitoring the states of the first link and the second link, and if an unavailable link exists, sending the service to the corresponding CPU through the normal link.
3. The method of claim 1, wherein obtaining resource usage of the first CPU and the second CPU, and selecting a target CPU based on the resource usage comprises:
establishing a communication link between the first CPU and the second CPU;
and setting the working modes of the first CPU and the second CPU as an election mode, and carrying out resource information interaction on the first CPU and the second CPU through a communication link in the election mode to elect a target CPU with the minimum resource utilization rate.
4. The method of claim 3, wherein establishing a communication link between the first CPU and the second CPU comprises:
the binding driver intercepts and captures an address resolution response sent by a local machine and rewrites a source hardware address into a unique hardware address of a slave node in binding, so that different opposite ends use different hardware addresses for communication;
when the local machine sends an address resolution request, the binding driver copies and stores the IP information of the opposite end from the address resolution packet;
when the address resolution response arrives from the opposite end, the binding driver extracts its hardware address and initiates an address resolution response to the slave node in the binding.
5. A server resource load balancing system, comprising:
the link establishing unit is used for respectively establishing a first link for connecting the first CPU with the service network card and a second link for connecting the second CPU with the service network card;
the target determining unit is used for acquiring the resource use conditions of the first CPU and the second CPU and selecting a target CPU based on the resource use conditions;
and the task sending unit is used for calling a target link connected with the target CPU and the service network card and sending the service to the target CPU through the target link.
6. The system of claim 5, further comprising:
and the link monitoring unit is used for monitoring the states of the first link and the second link; if a disconnected link exists, the service is sent to the corresponding CPU through the normal link.
7. The system of claim 5, wherein the goal determination unit comprises:
the communication establishing module is used for establishing a communication link between the first CPU and the second CPU;
and the election execution module is used for setting the working modes of the first CPU and the second CPU into an election mode, and the first CPU and the second CPU perform resource information interaction through a communication link in the election mode to elect a target CPU with the minimum resource utilization rate.
8. The system of claim 7, wherein the communication setup module is configured to:
the binding driver intercepts and captures an address resolution response sent by a local machine and rewrites a source hardware address into a unique hardware address of a slave node in binding, so that different opposite ends use different hardware addresses for communication;
when the local machine sends an address resolution request, the binding driver copies and stores the IP information of the opposite end from the address resolution packet;
when the address resolution response arrives from the opposite end, the binding driver extracts its hardware address and initiates an address resolution response to the slave node in the binding.
9. A terminal, comprising:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any one of claims 1-4.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of claims 1-4.
CN202210612475.0A 2022-05-31 2022-05-31 Server resource load balancing method, system, terminal and storage medium Pending CN115022336A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210612475.0A CN115022336A (en) 2022-05-31 2022-05-31 Server resource load balancing method, system, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN115022336A true CN115022336A (en) 2022-09-06

Family

ID=83071563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210612475.0A Pending CN115022336A (en) 2022-05-31 2022-05-31 Server resource load balancing method, system, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN115022336A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024113925A1 (en) * 2022-11-30 2024-06-06 Suzhou Metabrain Intelligent Technology Co Ltd Storage optimization method and system, device, and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105518620A * 2014-10-31 2016-04-20 Huawei Technologies Co Ltd Network card configuration method and resource management center
CN110515723A * 2019-08-09 2019-11-29 Suzhou Inspur Intelligent Technology Co Ltd Dual-path server and CPU load balancing system therefor
CN111884945A * 2020-06-10 2020-11-03 China Telecom Corp Ltd Chongqing Branch Network message processing method and network access device
CN114138354A * 2021-11-29 2022-03-04 Suzhou Inspur Intelligent Technology Co Ltd Onboard OCP network card system supporting multiple hosts, and server


Similar Documents

Publication Publication Date Title
CN102187315B (en) Methods and apparatus to get feedback information in virtual environment for server load balancing
CN110809760B (en) Resource pool management method and device, resource pool control unit and communication equipment
JP2001331333A (en) Computer system and method for controlling computer system
CN112003797B (en) Method, system, terminal and storage medium for improving performance of virtualized DPDK network
CN112235136B (en) Network file system backup method, system, terminal and storage medium
CN108933829A (en) Load balancing method and device
CN112181585A (en) Resource allocation method and device for virtual machine
JP5796722B2 (en) Computer server capable of supporting CPU virtualization
WO2013086861A1 (en) Method for accessing multi-path input/output (i/o) equipment, i/o multi-path manager and system
Truyen et al. Evaluation of container orchestration systems for deploying and managing NoSQL database clusters
CN115022336A (en) Server resource load balancing method, system, terminal and storage medium
CN110557432B (en) Cache pool balance optimization method, system, terminal and storage medium
CN114598746B (en) Method for optimizing load balancing performance between servers based on intelligent network card
CN109120680B (en) Control system, method and related equipment
CN111262753A (en) Method, system, terminal and storage medium for automatically configuring number of NUMA nodes
CN113760447A (en) Service management method, device, equipment, storage medium and program product
CN115904729A (en) Method, device, system, equipment and medium for connection allocation
CN115484129A (en) Multi-process data processing method and device, gateway and readable storage medium
US10481963B1 (en) Load-balancing for achieving transaction fault tolerance
CN115827148A (en) Resource management method and device, electronic equipment and storage medium
CN114448909A (en) Ovs-based network card queue polling method and device, computer equipment and medium
CN114356456A (en) Service processing method, device, storage medium and electronic equipment
CN110780992B (en) Cloud computing platform optimized deployment method, system, terminal and storage medium
CN114785745A (en) Method for configuring equipment resources and switch
CN111491039A (en) IP distribution method, system, terminal and storage medium for distributed file system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination