CN113641489A - Load balancing method based on power terminal CPU core sharing

Load balancing method based on power terminal CPU core sharing

Info

Publication number
CN113641489A
Authority
CN
China
Prior art keywords
user
instruction
load balancing
main server
flow distributor
Prior art date
Legal status
Pending
Application number
CN202110773701.9A
Other languages
Chinese (zh)
Inventor
匡晓云
黄开天
杨祎巍
Current Assignee
China South Power Grid International Co ltd
Zhejiang University ZJU
Original Assignee
China South Power Grid International Co ltd
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by China South Power Grid International Co ltd, Zhejiang University ZJU filed Critical China South Power Grid International Co ltd
Priority to CN202110773701.9A
Publication of CN113641489A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to the field of power terminal chips, and in particular to a load balancing method based on power terminal CPU core sharing. A main server is provided; a user-number detector is fixedly connected to the output port of the main server and is in signal interconnection with it; a traffic distributor is fixedly connected to the output port of the user-number detector and is in signal interconnection with it; and a communicator is fixedly connected to the output port of the traffic distributor. Advantageous effects: when the main server detects that only a single user is active, the traffic distributor allocates all of the traffic to that user, guaranteeing the single user's speed; when the user-number detector detects that several users are active at the same time, the traffic distributor divides the total traffic evenly, so that each user receives the same share and the users' access speed is preserved to the greatest possible extent.

Description

Load balancing method based on power terminal CPU core sharing
Technical Field
The invention relates to the field of power terminals, and in particular to a load balancing method based on power terminal CPU core sharing.
Background
The CPU core is the central chip of the CPU. It is made of monocrystalline silicon and performs all computation, receives and stores commands, and processes data; it is the digital processing heart of the processor.
As power systems become more informatized and intelligent, power system applications place ever higher demands on the performance of power terminal CPUs. Each CPU core has a fixed logic structure, with a carefully arranged layout of logic units such as the level-1 cache, level-2 cache, execution unit, instruction unit, and bus interface. With the sharp growth in the volume of power control information to be processed, a server cannot distribute traffic uniformly when many users are active: users must queue to get online, or one user enjoys a fast connection while another has no connectivity at all, which degrades the user experience.
Disclosure of Invention
The invention aims to provide a load balancing method based on power terminal CPU core sharing that solves the problems identified in the background art.
In order to achieve the purpose, the invention provides the following technical scheme:
a load balancing method based on power terminal CPU kernel sharing comprises a main server, wherein a user quantity detector is fixedly connected to an output port of the main server, the main server is in signal interconnection with the user quantity detector, a flow distributor is fixedly connected to an output port of the user quantity detector, the flow distributor is in signal interconnection with the user quantity detector, a communicating vessel is fixedly connected to an output port of the flow distributor, the communicating vessel is in signal interconnection with the flow distributor, a plurality of single users are fixedly connected to an output port of the communicating vessel, an expert module is arranged between the communicating vessel and the flow distributor, and the expert module is in signal connection with an external controller.
With this technical scheme, when the main server detects that only a single user is active, the traffic distributor allocates all of the traffic to that user, guaranteeing the single user's speed. When the user-number detector detects that several users are active at the same time, the traffic distributor divides the total traffic evenly, so that each user receives the same share and the users' access speed is preserved to the greatest possible extent. When a problem occurs in traffic detection, a signal is sent to the expert module immediately. The expert module is an external independent host; if a fault in traffic distribution causes the network to drop, the expert module reports the problem at once and external staff resolve it manually, keeping the network unobstructed.
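As an illustration of the allocation rule just described, the following minimal Python sketch gives the whole bandwidth to a lone user, splits it evenly among several users, and escalates a distribution fault to the expert module. The class names, the bandwidth figure, and the `report` interface are assumptions made for this sketch, not details taken from the patent.

```python
# Minimal sketch of the traffic-allocation rule (names are illustrative).

class ExpertModule:
    """Stand-in for the external independent host that receives fault reports."""
    def report(self, message: str) -> None:
        print(f"[expert module] fault reported: {message}")

class TrafficDistributor:
    def __init__(self, total_bandwidth_kbps: int, expert: ExpertModule):
        self.total = total_bandwidth_kbps
        self.expert = expert

    def allocate(self, active_users: list[str]) -> dict[str, int]:
        """All bandwidth to a single user, otherwise an equal share each."""
        if self.total <= 0:                      # distribution fault: hand the
            self.expert.report("no bandwidth available")  # problem to the expert module
            return {}
        if not active_users:
            return {}
        if len(active_users) == 1:               # a single user gets everything
            return {active_users[0]: self.total}
        share = self.total // len(active_users)  # even split among all users
        return {user: share for user in active_users}

# Usage: one user keeps the full 30 Mbit/s; three users get 10 Mbit/s each.
distributor = TrafficDistributor(30_000, ExpertModule())
print(distributor.allocate(["user-1"]))                      # {'user-1': 30000}
print(distributor.allocate(["user-1", "user-2", "user-3"]))  # 10000 each
```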
Preferably, each single user corresponds to a single CPU; the instruction register inside that CPU decodes the instruction sent by the user and transmits the resulting signal, and the instruction is forwarded through the communicator to the traffic distributor for identification and response.
With this technical scheme, the single CPU reliably delivers the user's instruction to the traffic distributor, the traffic distributor conveniently signals the main server, and under the control of the main server traffic is distributed evenly to the single CPUs through the communicator.
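For clarity only, the sketch below models this signalling chain (per-user CPU, communicator, traffic distributor, main server) as plain method calls; every class and method name, and the trivial one-token "decode", is an assumption of the illustration rather than part of the patented structure.

```python
# Hedged sketch of the instruction path from a single CPU to the main server.

class MainServer:
    def on_user_request(self, user_id: str, opcode: str) -> None:
        print(f"main server: schedule traffic for {user_id} (opcode {opcode})")

class Distributor:
    def __init__(self, server: MainServer):
        self.server = server
    def identify_and_respond(self, user_id: str, opcode: str) -> None:
        self.server.on_user_request(user_id, opcode)   # notify the main server

class Communicator:
    def __init__(self, distributor: Distributor):
        self.distributor = distributor
    def forward(self, user_id: str, opcode: str) -> None:
        self.distributor.identify_and_respond(user_id, opcode)

class UserCPU:
    def __init__(self, user_id: str, communicator: Communicator):
        self.user_id = user_id
        self.communicator = communicator
    def decode_and_send(self, raw_instruction: str) -> None:
        opcode = raw_instruction.split()[0]            # trivial stand-in for decoding
        self.communicator.forward(self.user_id, opcode)

UserCPU("user-1", Communicator(Distributor(MainServer()))).decode_and_send("FETCH /meter/7")
```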
Preferably, the main server places the signal in an instruction register, decodes the instruction, and then sends the decoded count to the user-number detector for detection.
With this technical scheme, the main server compiles and decodes the instructions and the user-number detector counts the decoded results, so the number of users can be detected quickly and conveniently.
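A minimal sketch of this counting step, assuming (purely for illustration) that each decoded entry in the instruction register carries a user identifier and a payload:

```python
# Count distinct users among the decoded instructions in the register.
from collections import deque

instruction_register = deque([
    ("user-1", "GET /meter/1"),
    ("user-2", "GET /meter/7"),
    ("user-1", "POST /report"),
])

def count_active_users(register: deque) -> int:
    """Return the number of distinct users that sent decoded instructions."""
    return len({user_id for user_id, _ in register})

print(count_active_users(instruction_register))  # 2
```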
Preferably, after the single CPU fetches an instruction for the instruction decoding stage, the instruction decoder splits and interprets the fetched instruction according to a predetermined instruction format, identifies and distinguishes the different instruction types to obtain the operands, and sends the operand information to the user-number detector; the user-number detector then feeds the result back to the traffic distributor.
With this technical scheme, user information is collected by the CPU and fed back to the user-number detector, so that the communicator can distribute traffic as required and uneven distribution is avoided. The instruction set codes permitted by the CPU are executed sequentially: one instruction is fetched first, and the next instruction is started only after execution of the current one has finished.
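The sequential fetch-decode-execute behaviour described here can be pictured with the toy loop below; the three-field instruction format and the two opcodes are assumptions made purely for illustration.

```python
# One instruction is fetched, decoded against a fixed format and executed
# before the next fetch begins.

program = ["ADD r1 r2", "MOV r3 r1", "ADD r3 r3"]
registers = {"r1": 2, "r2": 3, "r3": 0}

for raw in program:                  # fetch one instruction at a time
    opcode, dst, src = raw.split()   # decode: split per the fixed format
    if opcode == "ADD":              # identify the instruction type
        registers[dst] += registers[src]
    elif opcode == "MOV":
        registers[dst] = registers[src]
    # execution completes before the next instruction is fetched

print(registers)  # {'r1': 5, 'r2': 3, 'r3': 10}
```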
Preferably, the main server presents a single virtual IP to the outside, while the different CPUs in the cluster use distinct IP addresses; after receiving a request, the communicator returns it to the main server through that IP according to the chosen load balancing algorithm.
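As a sketch of "one virtual IP outside, distinct real IPs inside", the snippet below dispatches requests with simple round robin; the addresses and the choice of round robin (rather than any particular algorithm named in the patent) are assumptions.

```python
# Requests arrive at the virtual IP; a balancing algorithm picks a real backend.
import itertools

VIRTUAL_IP = "10.0.0.100"
BACKEND_IPS = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]  # one per single CPU

_round_robin = itertools.cycle(BACKEND_IPS)

def dispatch(request_id: int) -> str:
    backend = next(_round_robin)                 # round-robin backend selection
    print(f"request {request_id} via {VIRTUAL_IP} -> {backend}")
    return backend

for rid in range(4):
    dispatch(rid)   # .11, .12, .13, then back to .11
```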
Preferably, the communicator is polled through DNS: by configuring a plurality of DNS A records, requests can be assigned to different main servers.
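A minimal sketch of DNS round-robin over several A records, using made-up zone data:

```python
# Several A records for one name, rotated so successive queries are spread
# across different main servers.
from collections import deque

a_records = deque(["203.0.113.10", "203.0.113.11", "203.0.113.12"])

def resolve(name: str) -> list[str]:
    """Return the A records for `name`, rotating them after each query."""
    answer = list(a_records)
    a_records.rotate(-1)          # the next query sees a different first record
    return answer

print(resolve("lb.example.com"))  # ['203.0.113.10', '203.0.113.11', '203.0.113.12']
print(resolve("lb.example.com"))  # ['203.0.113.11', '203.0.113.12', '203.0.113.10']
```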
Preferably, the single CPU uses NAT to convert its private address into a legal (routable) IP address and maps that address to the communicator.
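The NAT step can be sketched as a translation table that rewrites a private source address to a routable one and remembers the mapping; the public address and port range below are assumptions.

```python
# Private (RFC 1918) address -> routable address, with the mapping retained
# so replies can be translated back.

nat_table: dict[tuple[str, int], tuple[str, int]] = {}
PUBLIC_IP = "198.51.100.5"
_next_port = 40000

def translate_outbound(private_ip: str, private_port: int) -> tuple[str, int]:
    global _next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = (PUBLIC_IP, _next_port)   # allocate a fresh public port
        _next_port += 1
    return nat_table[key]

print(translate_outbound("192.168.1.11", 5055))  # ('198.51.100.5', 40000)
print(translate_outbound("192.168.1.12", 5055))  # ('198.51.100.5', 40001)
```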
Compared with the prior art, the invention has the beneficial effects that:
1. In the load balancing method based on power terminal CPU core sharing, when the main server detects that only a single user is active, the traffic distributor allocates all of the traffic to that user to guarantee the single user's speed; when the user-number detector detects that several users are active at the same time, the traffic distributor divides the total traffic evenly, so that each user receives the same share and the users' access speed is preserved to the greatest possible extent.
2. In the load balancing method based on power terminal CPU core sharing, user information is collected by the CPU and fed back to the user-number detector, so that the communicator can distribute traffic as required and uneven distribution is avoided; the instruction set codes permitted by the CPU are executed sequentially, one instruction being fetched first and the next started only after the current one has finished.
3. In the load balancing method based on power terminal CPU core sharing, the main server compiles and decodes the instructions and the user-number detector counts the decoded results, so the number of users can be detected quickly and conveniently.
Drawings
Fig. 1 is a schematic view of the overall structure of the present invention.
In the figure: 1. main server; 2. user-number detector; 3. traffic distributor; 4. communicator.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a technical solution provided by the present invention is:
a load balancing method based on power terminal CPU core sharing comprises a main server 1. A user-number detector 2 is fixedly connected to an output port of the main server 1 and is in signal interconnection with it; a traffic distributor 3 is fixedly connected to an output port of the user-number detector 2 and is in signal interconnection with it; a communicator 4 is fixedly connected to an output port of the traffic distributor 3 and is in signal interconnection with it; and a plurality of single users are fixedly connected to output ports of the communicator 4. An expert module is arranged between the communicator 4 and the traffic distributor 3, and the expert module is in signal connection with an external controller.
With this technical scheme, when the main server 1 detects that only a single user is active, the traffic distributor 3 allocates all of the traffic to that user, guaranteeing the single user's speed; when the user-number detector 2 detects that several users are active at the same time, the traffic distributor 3 divides the total traffic evenly, so that each user receives the same share and the users' access speed is preserved to the greatest possible extent.
When a problem occurs in traffic detection, a signal is sent to the expert module immediately. The expert module is an external independent host and may be divided into several departments. When a fault in traffic distribution causes the network to drop, the expert module reports the problem at once: the problem fed back by the user is forwarded to the corresponding department and resolved manually by external staff, keeping the network unobstructed.
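Purely as an illustration of this fault path, the sketch below routes a reported fault to a handling department; the fault categories and department names are assumptions, not taken from the patent.

```python
# Route a user-reported fault to the department that handles it.

DEPARTMENTS = {"allocation": "traffic team", "link": "network team"}

def report_fault(kind: str, detail: str) -> str:
    team = DEPARTMENTS.get(kind, "duty operator")   # fall back to a default handler
    print(f"expert module -> {team}: {detail}")
    return team

report_fault("allocation", "uneven traffic split, user-3 offline")
```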
Preferably, each single user corresponds to a single CPU 5; the instruction register inside the CPU decodes the instruction sent by the user and transmits the resulting signal, and the instruction is forwarded through the communicator 4 to the traffic distributor 3 for identification and response.
With this technical scheme, the single CPU 5 reliably delivers the user's instruction to the traffic distributor 3, the traffic distributor 3 conveniently signals the main server 1, and under the control of the main server 1 traffic is distributed evenly to the single CPUs 5 through the communicator 4.
Preferably, the main server 1 places the signal in an instruction register, decodes the instruction, and then sends the decoded count to the user-number detector 2 for detection.
With this technical scheme, the main server 1 compiles and decodes the instructions and the user-number detector 2 counts the decoded results, so the number of users can be detected quickly and conveniently.
Preferably, after the single CPU 5 fetches an instruction for the instruction decoding stage, the instruction decoder splits and interprets the fetched instruction according to a predetermined instruction format, identifies and distinguishes the different instruction types to obtain the operands, and sends the operand information to the user-number detector 2; the user-number detector 2 then feeds the result back to the traffic distributor 3.
With this technical scheme, user information is collected by the CPU and fed back to the user-number detector 2, so that the communicator 4 can distribute traffic as required and uneven distribution is avoided; the instruction set codes permitted by the CPU are executed sequentially, one instruction being fetched first and the next started only after the current one has finished.
Preferably, the main server 1 presents a single virtual IP to the outside, while the different single CPUs 5 in the cluster use distinct IP addresses; after receiving a request, the communicator 4 returns it to the main server 1 through that IP according to the chosen load balancing algorithm.
Preferably, the communicator 4 is polled through DNS; by configuring a plurality of DNS A records, requests can be assigned to different main servers 1.
Preferably, the single CPU 5 uses NAT to convert its private address into a legal (routable) IP address and maps that address to the communicator 4.
When the load balancing method based on power terminal CPU core sharing is used, it operates as follows. When the main server 1 detects that only a single user is active, the traffic distributor 3 allocates all of the traffic to that user to guarantee the single user's speed; when the user-number detector 2 detects that several users are active at the same time, the traffic distributor 3 divides the total traffic evenly, so that each user receives the same share and the users' access speed is preserved to the greatest possible extent. The single CPU 5 delivers the user's instruction to the traffic distributor 3, the traffic distributor 3 signals the main server 1, and under the control of the main server 1 traffic is distributed evenly to the single CPUs 5 through the communicator 4. The main server 1 compiles and decodes the instructions and the user-number detector 2 counts the decoded results, so the number of users can be detected quickly and conveniently. User information is collected by the CPU and fed back to the user-number detector 2, so that the communicator 4 can distribute traffic as required and uneven distribution is avoided; the instruction set codes permitted by the CPU are executed sequentially, one instruction being fetched first and the next started only after the current one has finished.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and the description merely illustrate preferred forms of the invention and are not intended to limit it. The scope of the invention is defined by the appended claims and their equivalents.

Claims (7)

1. A load balancing method based on power terminal CPU core sharing, comprising a main server (1), characterized in that: a user-number detector (2) is fixedly connected to an output port of the main server (1), and the main server (1) is in signal interconnection with the user-number detector (2); a traffic distributor (3) is fixedly connected to an output port of the user-number detector (2), and the traffic distributor (3) is in signal interconnection with the user-number detector (2); a communicator (4) is fixedly connected to an output port of the traffic distributor (3), and the communicator (4) is in signal interconnection with the traffic distributor (3); a plurality of single users are fixedly connected to output ports of the communicator (4); an expert module is arranged between the communicator (4) and the traffic distributor (3), and the expert module is in signal connection with an external controller.
2. The load balancing method based on power terminal CPU core sharing according to claim 1, characterized in that: each single user corresponds to a single CPU (5); the instruction register inside the CPU decodes the instruction sent by the user and transmits the resulting signal, and the instruction is forwarded through the communicator (4) to the traffic distributor (3) for identification and response.
3. The load balancing method based on power terminal CPU core sharing according to claim 1, characterized in that: the main server (1) places the signal in an instruction register, decodes the instruction, and then sends the decoded count to the user-number detector (2) for detection.
4. The load balancing method based on power terminal CPU core sharing according to claim 1, characterized in that: after the single CPU (5) fetches an instruction for the instruction decoding stage, the instruction decoder splits and interprets the fetched instruction according to a predetermined instruction format, identifies the different instruction types to obtain the operands, and sends the operand information to the user-number detector (2); the user-number detector (2) feeds the obtained result back to the traffic distributor (3).
5. The load balancing method based on power terminal CPU core sharing according to claim 4, characterized in that: the main server (1) presents a single virtual IP to the outside, while different single CPUs (5) in the cluster use distinct IP addresses; after receiving a request, the communicator (4) returns it to the main server (1) through that IP according to the chosen load balancing algorithm.
6. The load balancing method based on power terminal CPU core sharing according to claim 1, characterized in that: the communicator (4) is polled through DNS; by configuring a plurality of DNS A records, requests can be assigned to different main servers (1).
7. The load balancing method based on power terminal CPU core sharing according to claim 4, characterized in that: the single CPU (5) uses NAT to convert the private address into a legal IP address and maps that legal IP address to the communicator (4).
CN202110773701.9A 2021-07-08 2021-07-08 Load balancing method based on power terminal CPU core sharing Pending CN113641489A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110773701.9A CN113641489A (en) 2021-07-08 2021-07-08 Load balancing method based on power terminal CPU core sharing


Publications (1)

Publication Number Publication Date
CN113641489A 2021-11-12

Family

ID=78416919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110773701.9A Pending CN113641489A (en) 2021-07-08 2021-07-08 Load balancing method based on power terminal CPU core sharing

Country Status (1)

Country Link
CN (1) CN113641489A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination