CN116755889B - Data acceleration method, device and equipment applied to server cluster data interaction - Google Patents


Publication number
CN116755889B
CN116755889B (application CN202311028717.2A)
Authority
CN
China
Prior art keywords
server cluster
data
server
average load
computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311028717.2A
Other languages
Chinese (zh)
Other versions
CN116755889A (en)
Inventor
李军
郎晓旭
范亚娜
王斯诺
郭敬林
陈瑞兴
李媛
孙实杰
翟斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Information and Telecommunication Co Ltd
Beijing Guodiantong Network Technology Co Ltd
Original Assignee
State Grid Information and Telecommunication Co Ltd
Beijing Guodiantong Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Information and Telecommunication Co Ltd, Beijing Guodiantong Network Technology Co Ltd filed Critical State Grid Information and Telecommunication Co Ltd
Priority to CN202311028717.2A priority Critical patent/CN116755889B/en
Publication of CN116755889A publication Critical patent/CN116755889A/en
Application granted granted Critical
Publication of CN116755889B publication Critical patent/CN116755889B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/505: Allocation of resources to service a request, the resource being a machine, considering the load

Abstract

Embodiments of the invention disclose a data acceleration method, apparatus and device applied to server cluster data interaction. One embodiment of the method comprises the following steps: acquiring first data and second data for a data operation; obtaining a first server cluster average load between a first server cluster where the first data is located and a second server cluster used to generate the second data; dynamically adjusting the first server cluster average load or the second server cluster average load based on the relation between the two averages; and performing load balancing on the server cluster according to the dynamically adjusted first or second server cluster average load. This implementation improves the data transmission speed among multiple servers during data interaction.

Description

Data acceleration method, device and equipment applied to server cluster data interaction
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a data acceleration method, a device and equipment applied to server cluster data interaction.
Background
A server is a type of computer that provides computing or application services over a network to other clients (e.g., terminals such as PCs, smartphones and ATMs, and even large installations such as train systems). A server offers high-speed CPU computing capability, long-term reliable operation, strong I/O data throughput and good scalability.
Large-scale data operations are often carried out by multiple servers working together, forming a cluster architecture built for highly concurrent responses and high-volume data access. Load balance among the servers must be maintained to keep the cluster running at high speed. Currently, load balancing is typically achieved as follows: one or more standby servers are deployed specifically to implement load balancing.
However, the above method generally suffers from the following technical problems:
First, deploying one or more standby servers dedicated to load balancing easily wastes server resources.
Second, load balancing must take the performance of each server into account; server performance directly affects the data interaction capability of the server cluster, and the efficiency of data interaction is reduced.
The information disclosed in this Background section is only for enhancement of understanding of the background of the inventive concept and therefore may contain information that does not form prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
This summary introduces concepts in a simplified form that are further described in the detailed description below. It is not intended to identify key or essential features of the claimed subject matter, nor to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a data acceleration method, apparatus, electronic device, and computer readable medium applied to server cluster data interaction to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a data acceleration method applied to server cluster data interaction, the method including: responding to the detection of the server cluster to execute data operation, and acquiring first data and second data for the data operation; according to a first server cluster where the first data are located and a second server cluster used for generating the second data, obtaining a first server cluster average load between the first server cluster and the second server cluster, wherein the first server cluster and the second server cluster belong to the server clusters; based on the first server cluster average load and the second server cluster average load of the server clusters, dynamically adjusting the first server cluster average load or the second server cluster average load according to the magnitude relation between the first server cluster average load and the second server cluster average load; and according to the dynamically adjusted average load of the first server cluster or the second server cluster, carrying out load balancing processing on the server cluster so as to accelerate data of the server cluster.
In a second aspect, some embodiments of the present disclosure provide a data acceleration apparatus applied to server cluster data interaction, the apparatus comprising: a first acquisition unit configured to acquire first data and second data for a data operation performed by a server cluster in response to detection of the data operation; a second obtaining unit configured to obtain a first server cluster average load between the first server cluster and the second server cluster according to a first server cluster where the first data is located and a second server cluster for generating the second data, where the first server cluster and the second server cluster both belong to the server cluster; an adjustment unit configured to dynamically adjust the first server cluster average load or the second server cluster average load according to a magnitude relation between the first server cluster average load and the second server cluster average load based on the first server cluster average load and the second server cluster average load of the server cluster; and the load balancing unit is configured to perform load balancing processing on the server clusters according to the dynamically adjusted average load of the first server clusters or the dynamically adjusted average load of the second server clusters so as to accelerate data of the server clusters.
In a third aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following beneficial effects: the data acceleration method applied to server cluster data interaction of some embodiments requires no additional standby server, avoiding waste of server resources. It also effectively avoids the situation where the hardware performance of an individual server lowers the data interaction processing capability of the whole server cluster, improving the data transmission speed of multiple servers during data interaction. Specifically, server resources are wasted and data interaction efficiency is reduced because one or more standby servers are dedicated to load balancing, and because load balancing must consider the performance of each server, which directly affects the data interaction capability of the server cluster. Based on this, the method of some embodiments first acquires first data and second data for the data operation in response to detecting that the server cluster performs the data operation, providing the reference data needed to adjust load balance. Next, it obtains the first server cluster average load between the first server cluster where the first data is located and the second server cluster used to generate the second data, both belonging to the server cluster, which facilitates adjusting load balance between the two clusters.
Then, based on the first and second server cluster average loads, it dynamically adjusts one of them according to the magnitude relation between the two, so that server load can be adjusted according to the load relation. Finally, it performs load balancing on the server cluster according to the dynamically adjusted first or second server cluster average load, thereby accelerating the data of the server cluster. Thus no additional standby server is needed and waste of server resources is avoided; moreover, the situation where an individual server's hardware performance lowers the data interaction processing capability of the whole cluster is avoided, and the data transmission speed of multiple servers during data interaction is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of a data acceleration method applied to server cluster data interactions in accordance with the present disclosure;
FIG. 2 is a schematic diagram of the architecture of some embodiments of a data acceleration apparatus applied to server cluster data interactions in accordance with the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "an" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 shows a flow 100 of some embodiments of a data acceleration method applied to server cluster data interaction in accordance with the present disclosure. The data acceleration method applied to server cluster data interaction comprises the following steps:
Step 101, in response to detecting that the server cluster performs a data operation, acquiring first data and second data for the data operation.
In some embodiments, an execution body (e.g., a computing device) of the data acceleration method applied to server cluster data interaction may acquire first data and second data for the data operation in response to detecting that the server cluster performs the data operation. Here, the data operation may refer to a large-scale modular data operation, such as big-data early warning for electric power. A server cluster may contain multiple servers. The first data is the data on which the data operation is performed. The second data is the data used to generate the first data. For example, the first data may be the total power data of a certain area, and the second data may be the data generated by each power node in that area.
In practice, the execution subject may acquire the first data and the second data for the data operation by:
First, according to the operation rule corresponding to the data operation, acquire data meeting the operation rule as the first data. The data operation corresponds to a preset operation rule; for example, the rule may specify that the operation is performed by a particular preset neural network model. That is, data conforming to the operation rule may be acquired from the servers as the first data.
Second, according to the construction rule of the first data, acquire data conforming to the construction rule as the second data.
Large-scale data operations often require many kinds of data. Some of it is underlying data (the second data); some is data obtained from the underlying data by calculation or aggregation (the first data). The number of times or frequency with which the second data is referenced indicates how commonly it is used in such operations. Assigning the large data operation to the server with the most references or the highest call frequency reduces the server load caused by data calls, lowering overall server load and achieving data acceleration when data interaction is performed between servers.
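The assignment rationale above can be illustrated with a minimal sketch. The helper name and the dictionary-based representation are hypothetical, not from the patent; the idea is simply to hand the operation to the server whose second data is referenced most often, so fewer cross-server calls are needed:

```python
def pick_operation_server(reference_counts):
    """Choose the server whose second data has the highest reference
    count (or call frequency), minimizing cross-server data calls.

    reference_counts: dict mapping server name -> number of times the
    second data held by that server is referenced when generating the
    first data. (Illustrative structure, assumed for this sketch.)
    """
    return max(reference_counts, key=reference_counts.get)
```

For example, if server `s2` holds second data referenced 9 times while `s1` and `s3` are referenced 3 and 5 times, the operation would be assigned to `s2`.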
Step 102, obtaining a first server cluster average load between the first server cluster and the second server cluster according to the first server cluster where the first data is located and the second server cluster used for generating the second data.
In some embodiments, the execution body may obtain the first server cluster average load between the first server cluster and the second server cluster according to the first server cluster where the first data is located and the second server cluster used to generate the second data. Both the first server cluster and the second server cluster belong to the server cluster. That is, the first server cluster stores the first data and the second server cluster stores the second data. The mean of the first server cluster's current average server load and the second server cluster's current average server load may be determined as the first server cluster average load.
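The computation described in step 102 can be sketched as follows. The function names and the list-of-loads representation are assumptions for illustration; the arithmetic (a mean of the two clusters' own load averages) follows the description above:

```python
from statistics import mean

def cluster_load_average(server_loads):
    """Average load of one cluster: the mean over its servers' current loads."""
    return mean(server_loads)

def first_cluster_pair_average(first_cluster_loads, second_cluster_loads):
    """The 'first server cluster average load' of step 102: the mean of
    the two clusters' individual load averages (an assumption based on
    the description above)."""
    return mean([cluster_load_average(first_cluster_loads),
                 cluster_load_average(second_cluster_loads)])
```

For instance, clusters with per-server loads [2, 4] and [6, 8] have cluster averages 3 and 7, giving a first server cluster average load of 5.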
Step 103, dynamically adjusting the first server cluster average load or the second server cluster average load according to the magnitude relation between the first server cluster average load and the second server cluster average load based on the first server cluster average load and the second server cluster average load of the server cluster.
In some embodiments, the executing entity may dynamically adjust the first server cluster average load or the second server cluster average load according to a magnitude relation between the first server cluster average load and the second server cluster average load based on the first server cluster average load and the second server cluster average load of the server cluster.
In practice, the executing entity may dynamically adjust the average load of the first server cluster or the average load of the second server cluster by:
First, in the process of dynamically adjusting the first server cluster average load, perform load adjustment, according to the magnitude relation, on a first server cluster whose average load is greater than the corresponding second server cluster average load, so that the first cluster's average load no longer exceeds the second cluster's. That is, one or more lightly loaded servers may be selected from the server cluster and added to the first server cluster to reduce its average load, making the first server cluster average load not greater than the second server cluster average load.
Second, in the process of dynamically adjusting the second server cluster average load, take the first server cluster average load that does not exceed the second server cluster average load as the new second server cluster average load, and perform load balancing on the server cluster. That is, the load of each server in the server cluster may be adjusted against the new second server cluster average load.
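The first adjustment step can be sketched as a loop that merges the least-loaded spare servers into the first cluster until its average no longer exceeds the second cluster's. This is a minimal sketch under stated assumptions: server loads are plain numbers, and `spare_servers` (a hypothetical name) holds the loads of lightly loaded servers available elsewhere in the cluster:

```python
def rebalance(first_cluster_loads, second_cluster_avg, spare_servers):
    """Add the least-loaded spare servers to the first cluster until its
    average load is not greater than the second cluster's average load
    (step 103, first adjustment)."""
    loads = list(first_cluster_loads)
    spares = sorted(spare_servers)  # lightest spares first
    while spares and sum(loads) / len(loads) > second_cluster_avg:
        loads.append(spares.pop(0))
    return loads
```

Starting from loads [10, 8] (average 9) against a second-cluster average of 6, two spares with loads 1 and 2 are absorbed, bringing the first cluster's average down to 5.25.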
And 104, carrying out load balancing processing on the server clusters according to the dynamically adjusted average load of the first server clusters or the dynamically adjusted average load of the second server clusters so as to accelerate data of the server clusters.
In some embodiments, the execution body may perform load balancing on the server cluster according to the dynamically adjusted first server cluster average load or the dynamically adjusted second server cluster average load, thereby realizing data acceleration of the server cluster.
Optionally, during the process of executing the data operation, based on the first computing capability corresponding to the computing rule, the first server cluster conforming to the first computing capability is obtained as the third server cluster, so as to execute the data operation.
In some embodiments, the executing body may obtain, during the process of executing the data operation, based on the first computing capability corresponding to the computing rule, a first server cluster that meets the first computing capability as a third server cluster, so as to execute the data operation. The first computing capability corresponding to the computing rule may refer to a server computing power required to run the first data. That is, each first server conforming to the first computing capability described above may be acquired as a third server cluster to perform the data computing operation described above.
In practice, while the third server cluster executes the data operation, the execution body may acquire the second computing capability of the first server cluster; for a first server cluster whose second computing capability is less than the first computing capability, it generates, in a parallel manner, a third computing capability not less than the first computing capability, and takes the mutually parallel first server clusters as the third server cluster to execute the data operation. That is, the number of first servers in the first server cluster may be increased until the cluster's computing capability reaches a third computing capability not smaller than the first computing capability. "Parallel manner" refers to running the first servers of the first server cluster in parallel. The second computing capability refers to the computing power of the first server cluster.
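The parallel scaling step can be sketched as follows. The function and parameter names are hypothetical; capabilities are modeled as plain numbers whose sum represents the cluster's total computing power, per the description above:

```python
def scale_to_capability(cluster_caps, required_cap, pool_caps):
    """If the cluster's total computing capability (the second
    capability) is below the required first capability, add servers
    from a pool in parallel until the total (the third capability) is
    at least the required value. Strongest pool servers are added
    first, so as few servers as possible are consumed."""
    caps = list(cluster_caps)
    pool = sorted(pool_caps, reverse=True)
    while pool and sum(caps) < required_cap:
        caps.append(pool.pop(0))
    return caps
```

For example, a cluster with capabilities [3, 4] facing a required capability of 10 absorbs one pool server of capability 5, yielding a third capability of 12.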
Optionally, during the parallel processing of the first server cluster, load balancing is performed on the parallelized first server clusters according to the server load of the first server cluster corresponding to the minimum second computing capability, so that the third computing capability is not less than the first computing capability.
In some embodiments, during the parallel processing of the first server cluster, the execution body performs load balancing on the parallelized first server clusters according to the server load of the first server cluster corresponding to the minimum second computing capability, so that the third computing capability is not less than the first computing capability. That is, the server load of the first server cluster corresponding to the minimum second computing capability is determined and used as the balancing reference.
In some optional implementations of some embodiments, the executing entity may obtain the second computing capability of the first server cluster by:
and a first step of sequencing each first server corresponding to the second data according to the specific gravity of the second data in the first data in the process of acquiring the second operation capability to obtain a first server sequence. Wherein the specific gravity is used to represent the number of times or frequency of using the second data when the first data is generated by the second data. That is, the specific gravity may be up to the number of times the second data is used to generate the first data. That is, each first server corresponding to the second data may represent a first server storing first data generated using the second data.
And a second step of obtaining the computing capacity of each first server in the first server sequence to obtain a computing capacity sequence.
And a third step of determining the sum of the computing capacities included in the computing capacity sequence as a second computing capacity.
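The three steps above can be sketched as one function. The `(specific_gravity, capacity)` pair representation is an assumption for illustration; note that sorting does not change the sum, but it yields the first-server sequence the patent describes, which the load-balancing step then consumes:

```python
def second_computing_capability(servers):
    """Steps 1-3 of the optional implementation: order the first servers
    by the specific gravity (reference proportion) of their second data,
    collect each server's computing capacity in that order, and return
    the capacity sequence together with its sum (the second computing
    capability).

    servers: list of (specific_gravity, capacity) pairs (assumed shape).
    """
    ordered = sorted(servers, key=lambda s: s[0], reverse=True)
    capacity_sequence = [cap for _, cap in ordered]
    return capacity_sequence, sum(capacity_sequence)
```

Given servers with (gravity, capacity) pairs (2, 5), (7, 3) and (4, 6), the sequence ordered by gravity is [3, 6, 5] and the second computing capability is 14.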
The above related matters serve as an invention point of the present disclosure, solving the second technical problem mentioned in the Background: "the efficiency of data interaction is reduced." Factors that reduce the efficiency of data interaction are often as follows: when load balancing is performed, the performance of each server must be considered, and server performance directly affects the data interaction capability of the server cluster. Resolving these factors improves the efficiency of data interaction. To this end, first, while the third server cluster executes the data operation, the second computing capability of the first server cluster is acquired; for a first server cluster whose second computing capability is less than the first computing capability, a third computing capability not less than the first computing capability is generated in a parallel manner, and the mutually parallel first server clusters serve as the third server cluster to execute the data operation. Next, during the parallel processing of the first server cluster, load balancing is performed on the parallelized first server clusters according to the server load of the first server cluster corresponding to the minimum second computing capability, so that the third computing capability is not less than the first computing capability. Then, in the process of dynamically adjusting the first server cluster average load, load adjustment is performed, according to the magnitude relation, on a first server cluster whose average load is greater than the second server cluster average load, so that the first cluster average load does not exceed the second.
Finally, in the process of dynamically adjusting the second server cluster average load, the first server cluster average load that does not exceed the second server cluster average load is taken as the new second server cluster average load, and load balancing is performed on the server cluster. In this way, the loads of the first and second server clusters are adjusted so that the loads of the servers principally participating in the data operation are better balanced. Furthermore, the load balancing reference value of the whole server cluster is dynamically updated, so that newly added servers adapt to the load of the current cluster architecture in further operation. By dynamically adjusting the server cluster to the computing capability required by the data operation and performing load balancing, the effect of data acceleration is realized for multi-server data interaction.
With further reference to fig. 2, as an implementation of the method shown in the foregoing figures, the present disclosure provides some embodiments of a data acceleration apparatus applied to server cluster data interaction, where the embodiments of the data acceleration apparatus applied to server cluster data interaction correspond to those method embodiments shown in fig. 1, and the data acceleration apparatus applied to server cluster data interaction may be specifically applied to various electronic devices.
As shown in fig. 2, a data acceleration apparatus 200 applied to server cluster data interaction of some embodiments includes: a first acquisition unit 201, a second acquisition unit 202, an adjustment unit 203, and a load balancing unit 204. Wherein, the first obtaining unit 201 is configured to obtain first data and second data for the data operation in response to detecting that the server cluster performs the data operation; a second obtaining unit 202, configured to obtain, according to a first server cluster in which the first data is located and a second server cluster for generating the second data, an average load of the first server cluster between the first server cluster and the second server cluster, where the first server cluster and the second server cluster both belong to the server cluster; an adjusting unit 203 configured to dynamically adjust the first server cluster average load or the second server cluster average load according to a magnitude relation between the first server cluster average load and the second server cluster average load, based on the first server cluster average load and the second server cluster average load of the server cluster; the load balancing unit 204 is configured to perform load balancing processing on the server cluster according to the dynamically adjusted average load of the first server cluster or the dynamically adjusted average load of the second server cluster, so as to perform data acceleration on the server cluster.
It will be appreciated that the elements described in the data acceleration device 200 applied to server cluster data interaction correspond to the steps of the method described with reference to fig. 1. Thus, the operations, features and the beneficial effects described above for the method are equally applicable to the data acceleration device 200 and the units contained therein, which are applied to server cluster data interaction, and are not described herein again.
Referring now to fig. 3, a schematic diagram of an electronic device (e.g., computing device) 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic devices in some embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), car terminals (e.g., car navigation terminals), and the like, as well as stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 3 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various suitable actions and processes in accordance with a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic device 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; output devices 307 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, and the like; and communication devices 309. The communication device 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device 300 having various devices, it is to be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 309, or from storage device 308, or from ROM 302. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer readable signal medium may comprise a data signal that propagates in baseband or as part of a carrier wave, in which computer readable program code is carried. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the client and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be contained in the electronic device, or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to detecting that the server cluster performs a data operation, acquire first data and second data for the data operation; acquire, according to a first server cluster in which the first data is located and a second server cluster for generating the second data, a first server cluster average load between the first server cluster and the second server cluster, where the first server cluster and the second server cluster both belong to the server cluster; dynamically adjust, based on the first server cluster average load and the second server cluster average load of the server cluster, the first server cluster average load or the second server cluster average load according to the magnitude relation between the two; and perform load balancing processing on the server cluster according to the dynamically adjusted first server cluster average load or second server cluster average load, so as to perform data acceleration on the server cluster.
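The two adjustment branches performed by the carried programs (the first server cluster average load being greater than, or not greater than, the second server cluster average load) can be sketched numerically as follows; the load values are hypothetical and the function is an illustration, not the claimed implementation.

```python
def dynamic_adjust(first_avg: float, second_avg: float) -> tuple:
    """Return (first, second) cluster average loads after dynamic adjustment
    according to the magnitude relation between the two averages."""
    if first_avg > second_avg:
        # Branch 1: shift load off the first server cluster until its
        # average load is not greater than the second cluster's.
        first_avg = second_avg
    else:
        # Branch 2: a first-cluster average already not greater than the
        # second's becomes the new second-cluster average load used for
        # load balancing the whole server cluster.
        second_avg = first_avg
    return first_avg, second_avg

print(dynamic_adjust(0.8, 0.5))  # (0.5, 0.5)
print(dynamic_adjust(0.3, 0.5))  # (0.3, 0.3)
```

Either branch leaves the two averages equal, which is the balanced state the load balancing step then propagates across the server cluster.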
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, or combinations thereof, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor comprising a first acquisition unit, a second acquisition unit, an adjustment unit, and a load balancing unit. The names of these units do not, in some cases, limit the units themselves; for example, the second acquisition unit may also be described as "a unit for acquiring a first server cluster average load between the first server cluster and the second server cluster according to the first server cluster in which the first data is located and the second server cluster for generating the second data".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description is merely of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the invention, for example, technical solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.
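As a rough illustration of the computing-capability aggregation and cluster-paralleling steps recited in the claims below, the following sketch sorts first servers by the proportion of their second data, sums their computing capabilities, and parallels first server clusters until the combined third computing capability is not smaller than the required first computing capability. The server records, proportions, and capability values are hypothetical, and the sketch is written under stated assumptions rather than as the claimed implementation.

```python
# Each first server is modeled as a dict with a usage "proportion" of its
# second data and a numeric "capability"; both fields are assumptions.

def second_computing_capability(servers):
    """Sort servers by the proportion with which their second data is used
    when generating the first data, then sum the resulting capability sequence."""
    sequence = sorted(servers, key=lambda s: s["proportion"], reverse=True)
    capabilities = [s["capability"] for s in sequence]
    return sum(capabilities)

def build_third_cluster(clusters, first_capability):
    """Parallel first server clusters until the combined (third) computing
    capability is not smaller than the required first computing capability."""
    paralleled, third_capability = [], 0.0
    for cluster in clusters:
        if third_capability >= first_capability:
            break
        paralleled.append(cluster)
        third_capability += second_computing_capability(cluster)
    return paralleled, third_capability

cluster_a = [{"proportion": 0.7, "capability": 4.0},
             {"proportion": 0.3, "capability": 2.0}]
cluster_b = [{"proportion": 1.0, "capability": 5.0}]
chosen, total = build_third_cluster([cluster_a, cluster_b], first_capability=8.0)
print(len(chosen), total)  # 2 11.0
```

Here cluster_a alone provides a second computing capability of 6.0, which is smaller than the required 8.0, so cluster_b is paralleled with it and the mutually paralleled clusters serve as the third server cluster.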

Claims (5)

1. A data acceleration method applied to server cluster data interaction comprises the following steps:
in response to detecting that a server cluster performs a data operation, acquiring first data and second data for the data operation;
according to a first server cluster where the first data are located and a second server cluster used for generating the second data, obtaining a first server cluster average load between the first server cluster and the second server cluster, wherein the first server cluster and the second server cluster belong to the server clusters;
based on the first server cluster average load and the second server cluster average load of the server cluster, dynamically adjusting the first server cluster average load or the second server cluster average load according to the magnitude relation between the first server cluster average load and the second server cluster average load;
according to the dynamically adjusted average load of the first server cluster or the second server cluster, carrying out load balancing processing on the server cluster so as to accelerate data of the server cluster;
in the process of executing a data operation, based on a first computing capability corresponding to a computing rule, acquiring a first server cluster conforming to the first computing capability as a third server cluster, so as to execute the data computing operation;
wherein the acquiring a first server cluster conforming to the first computing capability as a third server cluster to execute the data computing operation includes:
acquiring a second computing capability of each first server cluster in the process of executing the data computing operation by the third server cluster; for a first server cluster whose second computing capability is smaller than the first computing capability, generating, by paralleling it with other first server clusters, a third computing capability that is not smaller than the first computing capability, and taking the mutually paralleled first server clusters as the third server cluster to execute the data computing operation;
in the parallel processing of the first server clusters, performing load balancing on the mutually paralleled first server clusters according to the server load of the first server cluster corresponding to the minimum value of the second computing capability, so that the third computing capability is not smaller than the first computing capability;
wherein the acquiring the second computing capability of the first server cluster includes:
in the process of acquiring the second computing capability, sorting each first server corresponding to the second data according to the proportion of the second data in the first data to obtain a first server sequence, wherein the proportion is used for representing the frequency or number of times the second data is used when the first data is generated through the second data;
acquiring the computing capacity of each first server in the first server sequence to obtain a computing capacity sequence;
determining a sum of the computing capabilities included in the computing capability sequence as a second computing capability;
wherein the dynamically adjusting the first server cluster average load or the second server cluster average load includes:
in the process of dynamically adjusting the first server cluster average load, performing, according to the magnitude relation, load adjustment on a first server cluster and a second server cluster for which the first server cluster average load is greater than the second server cluster average load, so that the first server cluster average load is not greater than the second server cluster average load;
and in the process of dynamically adjusting the second server cluster average load, taking a first server cluster average load that is not greater than the second server cluster average load as a new second server cluster average load, and performing load balancing processing on the server cluster.
2. The method of claim 1, wherein the acquiring first data and second data for the data operation comprises:
acquiring data conforming to the operation rule as first data according to the operation rule corresponding to the data operation;
and acquiring data conforming to the composition rule as second data according to the composition rule of the first data.
3. A data acceleration device for server cluster data interaction, comprising:
a first acquisition unit configured to acquire first data and second data for a data operation in response to detection of the server cluster performing the data operation;
the second acquisition unit is configured to acquire a first server cluster average load between the first server cluster and the second server cluster according to the first server cluster where the first data are located and the second server cluster used for generating the second data, wherein the first server cluster and the second server cluster belong to the server clusters;
an adjustment unit configured to dynamically adjust, based on the first server cluster average load and the second server cluster average load of the server cluster, the first server cluster average load or the second server cluster average load according to a magnitude relation between the first server cluster average load and the second server cluster average load; the adjustment unit is further configured to:
in the process of dynamically adjusting the first server cluster average load, perform, according to the magnitude relation, load adjustment on a first server cluster and a second server cluster for which the first server cluster average load is greater than the second server cluster average load, so that the first server cluster average load is not greater than the second server cluster average load;
and in the process of dynamically adjusting the second server cluster average load, take a first server cluster average load that is not greater than the second server cluster average load as a new second server cluster average load, and perform load balancing processing on the server cluster;
the load balancing unit is configured to perform load balancing processing on the server clusters according to the dynamically adjusted average load of the first server clusters or the dynamically adjusted average load of the second server clusters so as to accelerate data of the server clusters;
a third acquisition unit configured to acquire, in the process of executing a data computing operation, based on a first computing capability corresponding to a computing rule, a first server cluster conforming to the first computing capability as a third server cluster, so as to execute the data computing operation; the third acquisition unit is further configured to:
acquire a second computing capability of each first server cluster in the process of executing the data computing operation by the third server cluster; for a first server cluster whose second computing capability is smaller than the first computing capability, generate, by paralleling it with other first server clusters, a third computing capability that is not smaller than the first computing capability, and take the mutually paralleled first server clusters as the third server cluster to execute the data computing operation;
wherein the acquiring the second computing capability of the first server cluster includes:
in the process of acquiring the second computing capability, sorting each first server corresponding to the second data according to the proportion of the second data in the first data to obtain a first server sequence, wherein the proportion is used for representing the frequency or number of times the second data is used when the first data is generated through the second data;
acquiring the computing capacity of each first server in the first server sequence to obtain a computing capacity sequence;
determining a sum of the computing capabilities included in the computing capability sequence as a second computing capability;
and a second load balancing unit configured to, in the parallel processing of the first server clusters, perform load balancing on the mutually paralleled first server clusters according to the server load of the first server cluster corresponding to the minimum value of the second computing capability, so that the third computing capability is not smaller than the first computing capability.
4. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-2.
5. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-2.
CN202311028717.2A 2023-08-16 2023-08-16 Data acceleration method, device and equipment applied to server cluster data interaction Active CN116755889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311028717.2A CN116755889B (en) 2023-08-16 2023-08-16 Data acceleration method, device and equipment applied to server cluster data interaction


Publications (2)

Publication Number Publication Date
CN116755889A CN116755889A (en) 2023-09-15
CN116755889B true CN116755889B (en) 2023-10-27

Family

ID=87957462


Country Status (1)

Country Link
CN (1) CN116755889B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110262901A (en) * 2019-06-27 2019-09-20 深圳前海微众银行股份有限公司 A kind of data processing method and data processing system
CN110569124A (en) * 2019-08-15 2019-12-13 中国平安财产保险股份有限公司 Task allocation method and device
CN114461389A (en) * 2022-01-13 2022-05-10 中国工商银行股份有限公司 Load balancing method and device of server cluster and electronic equipment
WO2022228485A1 (en) * 2021-04-30 2022-11-03 华为技术有限公司 Data transmission method, data processing method, and related product
CN115952003A (en) * 2023-02-13 2023-04-11 中银金融科技有限公司 Method, device, equipment and storage medium for cluster server load balancing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10574741B2 (en) * 2016-04-18 2020-02-25 Nokia Technologies Oy Multi-level load balancing




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant