CN116028231A - Load balancing method, device, medium and electronic equipment - Google Patents

Load balancing method, device, medium and electronic equipment

Info

Publication number
CN116028231A
Authority
CN
China
Prior art keywords
item
data
resource allocation
symbols
analyzed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310147057.3A
Other languages
Chinese (zh)
Inventor
李由
梁鹏斌
宋成业
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lingxi Beijing Technology Co Ltd
Original Assignee
Lingxi Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lingxi Beijing Technology Co Ltd filed Critical Lingxi Beijing Technology Co Ltd
Priority to CN202310147057.3A priority Critical patent/CN116028231A/en
Publication of CN116028231A publication Critical patent/CN116028231A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The embodiments of the application provide a load balancing method, apparatus, medium and electronic device, wherein the method includes: monitoring the data to be analyzed generated by each of a plurality of items; if the ratio of the amount of data to be analyzed corresponding to a first item to the total amount of data to be analyzed corresponding to the plurality of items is determined to be larger than a set threshold, acquiring the initial item number allocated to the first item; splicing a plurality of marking symbols with the initial item number respectively to obtain a plurality of computing resource allocation symbols, wherein one marking symbol corresponds to one computing resource allocation symbol; and allocating computing resources in a service cluster to the data to be analyzed of the first item according to the plurality of computing resource allocation symbols. With this method and device, data can be distributed to different operators (i.e., computing resources) as evenly as possible, reducing slow script execution caused by data skew.

Description

Load balancing method, device, medium and electronic equipment
Technical Field
The present application relates to the field of load balancing, and in particular, embodiments of the present application relate to a method, an apparatus, a medium, and an electronic device for load balancing.
Background
Load balancing (Load Balance) means that a service cluster is built from multiple servers arranged symmetrically: every server has equal status and can provide service to the outside on its own, without assistance from the other servers. Through a load-sharing technique, externally submitted requests are distributed evenly across the servers in this symmetric structure, and the server that receives a request responds to the client independently. Load balancing can spread client requests evenly over a server array, so that important data can be obtained quickly and massive concurrent access to the service can be handled; with cluster technology, performance close to that of a mainframe can be obtained with minimal investment.
At present, load balancing algorithms mainly fall into two categories. Static load balancing algorithms assign tasks with fixed probabilities, regardless of server state information, for example round-robin (polling), weighted round-robin, random and weighted random. Dynamic load balancing algorithms determine task assignment according to the real-time load state of the servers, for example the least-connections method and the weighted least-connections method.
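For illustration only (not part of the claimed method), a minimal Python sketch of the two families; the server names, weights and connection counts below are hypothetical:

```python
import itertools
import random

servers = ["srv-1", "srv-2", "srv-3"]            # hypothetical server pool
weights = {"srv-1": 5, "srv-2": 3, "srv-3": 2}   # hypothetical static weights
active_connections = {s: 0 for s in servers}     # updated at runtime

# Static: round-robin ignores server state entirely.
_rr = itertools.cycle(servers)
def round_robin() -> str:
    return next(_rr)

# Static: weighted random picks servers in proportion to fixed weights.
def weighted_random() -> str:
    return random.choices(servers, weights=[weights[s] for s in servers])[0]

# Dynamic: least connections consults the real-time load state.
def least_connections() -> str:
    return min(servers, key=lambda s: active_connections[s])
```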
Load balancing techniques for multiple items in the related art have the drawback that an unsatisfactory balancing algorithm lowers the overall processing progress of the items, so market demand cannot be met.
Disclosure of Invention
The embodiments of the application aim to provide a load balancing method, apparatus, medium and electronic device. Through some embodiments of the application, data can be distributed to different operators (i.e., computing resources) as evenly as possible, reducing slow script execution caused by data skew.
In a first aspect, an embodiment of the present application provides a load balancing method, the method including: monitoring the data to be analyzed generated by each of a plurality of items; if the ratio of the amount of data to be analyzed corresponding to a first item to the total amount of data to be analyzed corresponding to the plurality of items is determined to be larger than a set threshold, acquiring the initial item number allocated to the first item; splicing a plurality of marking symbols with the initial item number respectively to obtain a plurality of computing resource allocation symbols, wherein one marking symbol corresponds to one computing resource allocation symbol; and allocating computing resources in a service cluster to the data to be analyzed of the first item according to the plurality of computing resource allocation symbols.
In these embodiments, the initial item number corresponding to an item is encoded into a plurality of numbers, and computing resources are then allocated to the data to be processed according to the plurality of computing resource allocation symbols. This effectively overcomes the technical defect that the related art can allocate only a single computing resource to one primary key (i.e., one initial item number), so that a plurality of computing resources can be allocated to the item that generates more data to be analyzed.
In some embodiments, allocating computing resources in the service cluster to the data to be analyzed of the first item according to the plurality of computing resource allocation symbols includes: selecting, from the service cluster, as many service resources as there are computing resource allocation symbols; and distributing the data to be analyzed of the first item to the respective service resources for parallel analysis processing.
By allocating a plurality of computing resources to the data to be analyzed according to the number of new codes generated by re-encoding the primary key, these embodiments effectively improve the processing speed of such data and thereby the data processing progress of all items.
In some embodiments, splicing the plurality of marking symbols with the initial item number to obtain the plurality of computing resource allocation symbols includes: concatenating random numbers from 1 to 9 before the initial item number.
Some embodiments of the present application derive a plurality of computing resource allocation symbols by adding a prefix to the initial project number.
In some embodiments, splicing the plurality of marking symbols with the initial item number to obtain the plurality of computing resource allocation symbols includes: concatenating random numbers from 1 to 9 after the initial item number.
Some embodiments of the present application derive a plurality of computing resource allocation symbols by adding a suffix to the initial project number.
In some embodiments, after the allocating computing resources in the service cluster for the data to be analyzed of the first item according to the plurality of computing resource allocation symbols, the method further comprises: obtaining analysis processing results of the computing resources to obtain a plurality of analysis processing results; and aggregating the plurality of analysis processing results to obtain a target analysis processing result.
Some embodiments of the present application aggregate the processing results of multiple computing resources.
In some embodiments, before the obtaining the analysis processing results of the respective computing resources, the method further comprises: storing the plurality of computing resource allocation symbols and analysis processing results of the computing resources in a target cache table; the step of obtaining the analysis processing results of the computing resources comprises the following steps: and acquiring the analysis processing result from the target cache table according to the plurality of computing resource allocation symbols.
Some embodiments of the present application allocate computing resources to the data to be processed according to the computing resource allocation symbols assigned to the first item and use these symbols to distinguish analysis processing results belonging to different items.
In some embodiments, the initial item number is a primary key assigned to the first item, and the plurality of computing resource allocation symbols are hashed from the primary key.
In a second aspect, some embodiments of the present application provide an apparatus for load balancing, the apparatus comprising: the monitoring module is configured to monitor data to be analyzed generated by each item in the plurality of items; a primary key acquisition module configured to acquire an initial item number allocated to a first item if it is determined that a ratio of an amount of data to be analyzed corresponding to the first item to a total amount of data to be analyzed corresponding to the plurality of items is greater than a set threshold; the hash processing module is configured to splice the plurality of marking symbols with the initial project numbers respectively to obtain a plurality of computing resource allocation symbols, wherein one marking symbol corresponds to one computing resource allocation symbol; and a resource allocation module configured to allocate computing resources in a service cluster for the data to be analyzed of the first item according to the plurality of computing resource allocation symbols.
In a third aspect, some embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs a method according to any embodiment of the first aspect.
In a fourth aspect, some embodiments of the present application provide an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, may implement a method according to any embodiment of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a load balancing system according to an embodiment of the present application;
FIG. 2 is one of the flowcharts of the method for load balancing provided in the embodiments of the present application;
FIG. 3 is a second flowchart of a method for load balancing according to an embodiment of the present disclosure;
fig. 4 is a block diagram of a device for load balancing according to an embodiment of the present application;
fig. 5 is a schematic diagram of electronic device composition according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
For example, suppose unusually heavy user interaction in one region or city generates a large amount of data to be analyzed, several times more than in other cities. To process this data, the related art, at the lower implementation layer, allocates computing resources according to the primary key (i.e., the initial item number allocated to an item): all data to be analyzed with the same primary key is allocated to the same computing resource. For example, five computers (as an example of five service resources) process the data corresponding to ten primary keys, each computer handling the data of two primary keys. If the amount of data assigned to one computer is very large, the distribution of computing resources becomes unbalanced and the data is skewed, which hurts execution efficiency, so the service consumer cannot promptly view the processing results of the data corresponding to the ten primary keys.
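A minimal sketch, for illustration only, of how keying every record of an item to a single worker produces skew; the key counts and record volumes below are hypothetical:

```python
# Hypothetical volumes: the item with primary key "P7" produced far more records than the others.
records_per_key = {f"P{i}": 20_000 for i in range(10)}
records_per_key["P7"] = 2_000_000

workers = [f"worker-{i}" for i in range(5)]

# Related-art behaviour: one primary key always maps to the same worker.
load = {w: 0 for w in workers}
for key, count in records_per_key.items():
    w = workers[hash(key) % len(workers)]
    load[w] += count

print(load)  # the worker that receives "P7" carries almost all of the load
```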
It can be seen that this data skew problem follows directly from the design of assigning one primary key per item and allocating the same service resource to everything that shares that primary key, which slows the processing progress of the plurality of items.
To solve these technical problems, the embodiments of the present application regenerate a plurality of sub-primary-keys (i.e., a plurality of computing resource allocation symbols) for an item with a larger amount of data to be analyzed, so that a plurality of service resources (also called computing resources) can be allocated, according to these sub-primary-keys, to the data to be analyzed generated by that item, thereby improving the data processing speed for items that generate large amounts of data.
Referring to fig. 1, fig. 1 is a load balancing system provided in an embodiment of the present application, where the system includes a plurality of clients (e.g., a first client 101 and a second client 102 in fig. 1), a load balancing server 200, and a service cluster 300 formed by a plurality of service resources.
The first client 101 and the second client 102 of fig. 1 are processing units respectively provided for one item, wherein data to be analyzed of the first item can be generated by the first client 101, and data to be analyzed of the second item can be generated by the second client 102. It should be noted that, in some embodiments of the present application, a plurality of processing units may be allocated to one item, that is, a plurality of clients generate data to be analyzed of one item.
The load balancing server 200 of fig. 1 is configured to receive data processing requests from the clients and to select one or more service resources from the service cluster 300 as computing units for the data to be analyzed of the corresponding item. Unlike the related art, which allocates all data to be analyzed of an item to a single service resource (one item corresponds to one primary key, and one primary key is allocated to only one computing resource), some embodiments of the present application re-encode the primary key set for an item to obtain a plurality of new codes and then use these new codes as the basis for allocating service resources, so that the item's data to be analyzed can be spread over a plurality of computing resources according to the plurality of new primary keys.
The service cluster 300 of fig. 1 illustratively includes: a first server 301 (as a first service resource, or first computing resource), a second server 302 (as a second service resource, or second computing resource), a third server 303 (as a third service resource, or third computing resource), and a fourth server 304 (as a fourth service resource, or fourth computing resource). Before the load balancing method of the present application is adopted, the first client 101 running the first item corresponds to one primary key (i.e., the initial item number), so the load balancing server allocates all data to be analyzed related to the first item to the first server 301 of fig. 1 according to that single primary key. After the load balancing method of the present application is adopted, the primary key corresponding to the first item is encoded into two new primary keys (i.e., two computing resource allocation symbols are obtained), so the load balancing server allocates both the first server 301 and the third server 303 to the data to be analyzed generated by the first item according to the two new primary keys, which clearly improves the processing speed of the first item. It should be noted that fig. 1 is only used to exemplify service resources; in some embodiments of the present application, several servers together may also serve as one schedulable service resource.
It should be noted that fig. 1 is only used to exemplarily illustrate a system for load balancing of some embodiments of the present application, and those skilled in the art may design different architectures according to application scenarios.
The method of load balancing performed by the load balancing server 200 of fig. 1 is exemplarily described below in connection with fig. 2.
As shown in fig. 2, an embodiment of the present application provides a method for load balancing, where the method includes:
s101, monitoring data to be analyzed generated by each item in the plurality of items.
S102, if the ratio of the amount of the data to be analyzed corresponding to the first item to the total amount of the data to be analyzed corresponding to the plurality of items is determined to be larger than a set threshold value, acquiring an initial item number allocated to the first item. For example, the initial item number is a primary key assigned to the corresponding item.
And S103, respectively splicing a plurality of marking symbols with the initial item number to obtain a plurality of computing resource allocation symbols, wherein one marking symbol corresponds to one computing resource allocation symbol.
For example, a marking symbol may be any digit from 1 to 9 or any English letter, and the total number of marking symbols is related to the ratio determined in S102: the larger the ratio, the more marking symbols can be selected to splice with the initial item number. In some embodiments of the present application a marking symbol may also be a composite symbol, i.e., consist of two characters, two digits, or one character and one digit. The embodiments of the present application do not limit the type of marking symbol; one possible way to generate such symbols is sketched below.
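For illustration only, a minimal Python sketch of choosing marking symbols whose count grows with the measured ratio; the scaling rule and the symbol alphabet are assumptions, not prescribed by the application:

```python
import math
import random
import string

def make_marking_symbols(ratio: float, max_symbols: int = 9) -> list[str]:
    """Pick more marking symbols as the item's share of the data grows."""
    # Assumed scaling rule: e.g. a 50% share yields ceil(0.5 * 9) = 5 symbols.
    count = max(2, min(max_symbols, math.ceil(ratio * max_symbols)))
    alphabet = list("123456789") + list(string.ascii_uppercase)
    return random.sample(alphabet, count)

symbols = make_marking_symbols(0.5)   # e.g. ['3', 'A', '7', 'K', '1']
```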
S104, allocating computing resources in the service cluster to the data to be analyzed of the first item according to the plurality of computing resource allocation symbols. That is, some embodiments of the present application allocate computing resources to one item through the plurality of computing resource allocation symbols, instead of allocating computing resources to the data to be analyzed based on a single initial item number.
In this way, the initial item number corresponding to an item is encoded into a plurality of numbers, and computing resources are then allocated to the data to be processed according to the plurality of computing resource allocation symbols. This effectively overcomes the technical defect that the related art can allocate only a single computing resource to one primary key (i.e., one initial item number), so that a plurality of computing resources can be allocated to the item that generates more data to be analyzed.
The implementation of the steps is exemplarily described below.
S101 needs to monitor the data to be analyzed generated by each item in the plurality of items in real time or periodically. For example, data to be analyzed corresponding to each item is acquired once a day.
S102 selects, from the plurality of items, an item with a larger amount of data to be analyzed as the target item whose number of primary keys is to be adjusted, and encodes its initial item number into a plurality of codes, so that more computing resources can be allocated to the item with more data; a sketch of this monitoring and selection step is given below.
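For illustration only, a minimal Python sketch of S101 and S102: monitoring the per-item volumes and picking the item whose share of the total exceeds the set threshold. The 45% threshold and the per-item volumes are assumptions:

```python
THRESHOLD = 0.45  # assumed value of the set threshold

def pick_skewed_items(volume_per_item: dict[str, int]) -> list[str]:
    """Return the items whose share of the monitored data exceeds the threshold."""
    total = sum(volume_per_item.values())
    return [item for item, amount in volume_per_item.items()
            if total and amount / total > THRESHOLD]

# Hypothetical daily volumes for three items.
today = {"item-A": 200_000_000, "item-B": 20_000_000, "item-C": 20_000_000}
skewed = pick_skewed_items(today)   # ['item-A'] -> fetch its initial item number next
```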
The process of obtaining a plurality of computing resource allocation symbols at S103 is exemplarily set forth below.
The process of deriving a plurality of computing resource allocation symbols from an initial item number is illustrated below, using the digits 1 to 9 as marking symbols. A computing resource allocation symbol may consist of a plurality of characters, a plurality of digits, or at least one character and at least one digit.
For example, in some embodiments of the present application, splicing the plurality of marking symbols with the initial item number to obtain the plurality of computing resource allocation symbols includes: concatenating random numbers from 1 to 9 before the initial item number. For example, if the initial item number is A, this step converts A into 1A, 5A and 6A, corresponding to three computing resource allocation symbols, after which three independent computing resources can be allocated for the item.
It will be appreciated that the random numbers from 1 to 9 may be replaced with any of the 26 English letters.
For example, in some embodiments of the present application, a prefix is added to the initial item number (i.e., the primary key) of the first item, which has the larger amount of data to be analyzed. If the primary key of the original 20,000 pieces of data to be analyzed is 1, the primary key 1 is modified into A1, B1 and so on; the data to be analyzed can then be distributed to different machines for processing according to the modified sub-primary-keys, and the related post-processing is performed after the processing is completed. That is, some embodiments of the present application change the way an item's primary key is encoded and thereby change how computing resources are allocated to the data to be processed.
Some embodiments of the present application thus derive the plurality of computing resource allocation symbols by adding a prefix to the initial item number, as sketched below.
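A minimal illustrative sketch of splicing random prefixes onto an initial item number to form computing resource allocation symbols; the symbol choice is an assumption:

```python
import random

def salt_with_prefix(initial_item_number: str, marking_symbols: list[str]) -> list[str]:
    """Prepend each marking symbol to the initial item number (one symbol per allocation symbol)."""
    return [f"{symbol}{initial_item_number}" for symbol in marking_symbols]

# Hypothetical: initial item number "A" and three randomly chosen digit symbols.
prefixes = random.sample("123456789", 3)               # e.g. ['1', '5', '6']
allocation_symbols = salt_with_prefix("A", prefixes)   # e.g. ['1A', '5A', '6A']
# The suffix variant described below would instead append the symbol: f"{initial_item_number}{symbol}".
```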
For example, in some embodiments, splicing the plurality of marking symbols with the initial item number to obtain the plurality of computing resource allocation symbols includes: concatenating random numbers from 1 to 9 after the initial item number. For example, if the initial item number is A, this step converts A into the three codes A1, A5 and A6, corresponding to three computing resource allocation symbols, after which three independent computing resources can be allocated for the item.
Some embodiments of the present application derive a plurality of computing resource allocation symbols by adding a suffix to the initial project number.
In some embodiments of the present application, S104 illustratively includes: selecting, from the service cluster, as many service resources (also called computing resources) as there are computing resource allocation symbols; and distributing the data to be analyzed of the first item to the respective service resources for parallel analysis processing. That is, some embodiments of the present application allocate a plurality of computing resources to the data to be analyzed according to the number of new codes generated by re-encoding the primary key, which effectively improves the processing speed of such data and thereby the data processing progress of all items. A sketch of this allocation step follows.
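For illustration only, a Python sketch of S104; the record layout and the mapping from allocation symbols to servers are assumptions:

```python
import random
from collections import defaultdict

def allocate(records: list[dict], allocation_symbols: list[str],
             cluster: list[str]) -> dict[str, list[dict]]:
    """Spread one item's records over as many service resources as there are allocation symbols."""
    # Pick as many resources from the cluster as there are allocation symbols
    # (the cluster is assumed to be at least that large).
    chosen = cluster[: len(allocation_symbols)]
    resource_of_symbol = dict(zip(allocation_symbols, chosen))
    per_resource = defaultdict(list)
    for record in records:
        symbol = random.choice(allocation_symbols)   # tag the record with one salted key
        per_resource[resource_of_symbol[symbol]].append({**record, "alloc_symbol": symbol})
    return per_resource

# Hypothetical usage: six records of item "A", salted keys 1A/5A/6A, four-server cluster.
batches = allocate([{"item": "A", "payload": i} for i in range(6)],
                   ["1A", "5A", "6A"],
                   ["srv-301", "srv-302", "srv-303", "srv-304"])
```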
It should be noted that some embodiments of the present application further need to aggregate the processing results obtained in S104.
In some embodiments of the present application, after S104, the method further comprises: obtaining analysis processing results of the service resources to obtain a plurality of analysis processing results; and aggregating the plurality of analysis processing results to obtain a target analysis processing result. Some embodiments of the present application aggregate the processing results of multiple computing resources.
For example, in some embodiments of the present application, before the analysis processing results of the respective service resources are obtained, the method further includes: storing the plurality of computing resource allocation symbols and the analysis processing results of the service resources in a target cache table; obtaining the analysis processing results of the service resources then includes: acquiring the analysis processing results from the target cache table according to the plurality of computing resource allocation symbols. In other words, some embodiments of the present application allocate computing resources to the data to be processed according to the computing resource allocation symbols assigned to the first item and use these symbols to distinguish analysis processing results belonging to different items, as sketched below.
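A minimal illustrative sketch of collecting the partial results by allocation symbol and aggregating them into the target result; the cache table is modelled as a plain dict and the merge rule as a sum, both of which are assumptions:

```python
# Hypothetical target cache table: allocation symbol -> partial result of one resource.
target_cache_table = {"1A": {"calls": 70_000_000},
                      "5A": {"calls": 65_000_000},
                      "6A": {"calls": 65_000_000}}

def aggregate(cache: dict[str, dict], allocation_symbols: list[str]) -> dict:
    """Fetch each resource's partial result by its allocation symbol and merge them."""
    partials = [cache[s] for s in allocation_symbols if s in cache]
    return {"calls": sum(p["calls"] for p in partials)}

target_result = aggregate(target_cache_table, ["1A", "5A", "6A"])  # {'calls': 200000000}
```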
It should be noted that, in some embodiments of the present application, the initial item number is a primary key allocated to the first item, and the plurality of computing resource allocation symbols are obtained by hashing the primary key.
The load balancing method of some embodiments of the present application is described below with reference to fig. 3, taking one day as the statistical period of each item's data to be analyzed.
First, the application scenario of fig. 3 is described. Item A runs a promotional activity on a certain day, and its outbound-call data surges: about 20 million records were produced on the previous day, and 200 million records on the day of the activity. The workload of the other items is unchanged, at roughly 20 million records per item per day. The company has built a Hadoop service cluster from 10 servers, and in the aggregation statistics keyed on the item ID, each node aggregates by item ID. Of the 10 servers, 9 each process 20 million records while one processes 200 million. The server handling the 200 million records takes far longer than the others to complete its task, and may even exceed its memory resources and fail, so the overall statistics task is severely delayed or cannot be completed at all.
The implementation of the scheme is described below in connection with fig. 3.
S201, start.
S202, acquiring the quantity of data to be analyzed of a certain item on the same day, acquiring the total quantity of data to be analyzed of all items on the same day, and calculating the ratio of the two.
In the above scenario, the data to be analyzed of each item is sampled first, and the share of item A's data to be analyzed is found to be 50%.
S203, judging whether there is an item whose share is larger than the threshold; if so, executing S204, otherwise continuing to execute S202.
Specifically, in the above scenario, assuming the threshold is set to 45%, the share of item A is larger than the threshold, so the item ID of item A (i.e., the initial item number, also called the primary key) needs to be re-encoded, for example by adding a prefix to it.
S204, adding a random prefix to the item number of the item whose share is greater than the threshold.
For example, item A is numbered A, and adding the prefix yields three computing resource allocation symbols: 1A, 2A and 3A. These are used as new primary keys for allocating computing resources.
For example, before the statistics keyed on the item ID are performed, random numbers from 1 to 9 are concatenated in front of the item ID of item A's data using SQL's concat() function.
S205, selecting, from the service cluster, a plurality of computing resources in one-to-one correspondence with the plurality of new numbers obtained after adding the random prefixes, obtaining a plurality of processing results for the item, and storing the processing results in a temporary table.
Specifically, in the above scenario, aggregation statistics are performed according to the spliced item IDs; the data of item A is now evenly distributed to three servers for computation, and the computed results are written into the temporary table temp.
S206, aggregating the data in the temporary table and removing the prefixes added to the plurality of new numbers to recover the item number.
That is, the item ID prefix of item A's aggregated data in temp is stripped using the substr function, and the data is aggregated again; a sketch of this two-stage aggregation is given below.
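For illustration only, a Python sketch of the two-stage aggregation described in S204 to S206; the record format and the counting metric are assumptions (in SQL, the same effect is obtained with concat() for the prefix and substr() to strip it):

```python
import random
from collections import Counter

def two_stage_count(records: list[str], skewed_id: str = "A", salts: int = 3) -> Counter:
    """Count records per item ID, salting the skewed ID so its load spreads over several nodes."""
    # Stage 1: aggregate on the salted item ID (concat(prefix, item_id) in SQL).
    salted = Counter()
    for item_id in records:
        key = f"{random.randint(1, salts)}{item_id}" if item_id == skewed_id else item_id
        salted[key] += 1                      # in a cluster, each distinct key lands on one node

    # Stage 2: strip the prefix (substr in SQL) and aggregate again on the original ID.
    final = Counter()
    for key, count in salted.items():
        original = key[1:] if key[1:] == skewed_id else key
        final[original] += count
    return final

totals = two_stage_count(["A"] * 200 + ["B"] * 20 + ["C"] * 20)
# Counter({'A': 200, 'B': 20, 'C': 20}) -- the 200 "A" records were first counted on up to 3 nodes.
```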
S207, re-processing the aggregation result through the service cluster, for example by splicing, to obtain the target processing result.
S208, ending.
It can be understood that the final data statistics can be completed after the above steps; the data load and running time of the plurality of servers are used to the fullest, and timely output of the metrics is guaranteed.
It should be noted that, when processing ultra-large data volumes, some embodiments of the present application first aggregate on the hashed (salted) primary key and then perform a second aggregation on the original primary key, so that the computing resources are fully utilized. The ways of hashing the primary key in some embodiments of the present application include, but are not limited to, random prefixes, hash processing, polling processing and the like. It can be appreciated that the load balancing method of some embodiments of the present application distributes data as evenly as possible among different operators (i.e., computing resources), reducing slow script execution caused by data skew; a sketch of these key-splitting strategies is given below.
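For illustration only, a Python sketch of three ways to map one primary key onto several sub-primary-keys; the strategy names mirror the text above, but their exact form here is an assumption:

```python
import hashlib
import itertools
import random

def split_random_prefix(primary_key: str, n: int) -> str:
    """Random prefix: any of n salts may be prepended to a given record's key."""
    return f"{random.randint(1, n)}{primary_key}"

def split_hash(primary_key: str, record_id: int, n: int) -> str:
    """Hash processing: derive the salt deterministically from the record itself."""
    salt = int(hashlib.md5(str(record_id).encode()).hexdigest(), 16) % n
    return f"{salt}{primary_key}"

_counter = itertools.count()
def split_round_robin(primary_key: str, n: int) -> str:
    """Polling (round-robin) processing: cycle through the n salts in order."""
    return f"{next(_counter) % n}{primary_key}"
```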
Referring to fig. 4, fig. 4 illustrates a load balancing apparatus provided by an embodiment of the present application. It should be understood that the apparatus corresponds to the method embodiment of fig. 2 and can perform the steps involved in that method embodiment; its specific functions may be found in the description above, and a detailed description is omitted here to avoid repetition. The apparatus includes at least one software functional module that can be stored in memory in the form of software or firmware or be built into the operating system of the apparatus. The load balancing apparatus comprises: a monitoring module 401, a primary key acquisition module 402, a hash processing module 403, and a resource allocation module 404.
And the monitoring module is configured to monitor data to be analyzed generated by each item in the plurality of items.
And the primary key acquisition module is configured to acquire an initial item number allocated to the first item if the ratio of the amount of data to be analyzed corresponding to the first item to the total amount of data to be analyzed corresponding to the plurality of items is determined to be greater than a set threshold value.
And the hash processing module is configured to splice the plurality of marking symbols with the initial project numbers respectively to obtain a plurality of computing resource allocation symbols, wherein one marking symbol corresponds to one computing resource allocation symbol.
And a resource allocation module configured to allocate computing resources in a service cluster for the data to be analyzed of the first item according to the plurality of computing resource allocation symbols.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding procedure in the foregoing method for the specific working procedure of the apparatus described above, and this will not be repeated here.
Some embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs a method as described in any of the embodiments included in the method of load balancing described above.
As shown in fig. 5, some embodiments of the present application provide an electronic device 500 comprising a memory 510, a processor 520, and a computer program stored on the memory 510 and executable on the processor 520, wherein the processor 520, when reading the program from the memory 510 and executing the program via a bus 530, can implement the method as described in any of the embodiments included in the method of load balancing described above.
Processor 520 may process the digital signals and may include various computing structures. Such as a complex instruction set computer architecture, a reduced instruction set computer architecture, or an architecture that implements a combination of instruction sets. In some examples, processor 520 may be a microprocessor.
Memory 510 may be used for storing instructions to be executed by processor 520 or data related to execution of the instructions. Such instructions and/or data may include code to implement some or all of the functions of one or more modules described in embodiments of the present application. The processor 520 of the disclosed embodiments may be used to execute instructions in the memory 510 to implement the method shown in fig. 2. Memory 510 includes dynamic random access memory, static random access memory, flash memory, optical memory, or other memory known to those skilled in the art.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The apparatus embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application; various modifications and variations may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included in the protection scope of the present application.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A method of load balancing, the method comprising:
monitoring data to be analyzed generated by each item in the plurality of items;
if the ratio of the amount of the data to be analyzed corresponding to the first item to the total amount of the data to be analyzed corresponding to the plurality of items is determined to be larger than a set threshold value, acquiring an initial item number allocated to the first item;
splicing a plurality of marking symbols with the initial item number respectively to obtain a plurality of computing resource allocation symbols, wherein one marking symbol corresponds to one computing resource allocation symbol;
and distributing computing resources in a service cluster for the data to be analyzed of the first item according to the computing resource distribution symbols.
2. The method of claim 1, wherein the allocating computing resources in a service cluster for the data to be analyzed for the first item according to the plurality of computing resource allocation symbols comprises:
selecting service resources with the same number as the total number of the computing resource allocation symbols from the service cluster;
and distributing the data to be analyzed of the first item to each service resource for parallel analysis processing.
3. The method of claim 2, wherein splicing the plurality of marking symbols with the initial item number to obtain the plurality of computing resource allocation symbols respectively includes:
the random numbers in 1 to 9 are concatenated before the initial item number.
4. The method of claim 2, wherein splicing the plurality of marking symbols with the initial item number to obtain the plurality of computing resource allocation symbols respectively comprises:
the random numbers in 1 to 9 are concatenated after the initial item number.
5. The method of claim 2, wherein after the allocating computing resources in a service cluster for the data to be analyzed for the first item according to the plurality of computing resource allocation symbols, the method further comprises:
obtaining analysis processing results of the computing resources to obtain a plurality of analysis processing results;
and aggregating the plurality of analysis processing results to obtain a target analysis processing result.
6. The method of claim 5, wherein prior to the obtaining the analysis processing results for the respective computing resources, the method further comprises:
storing the plurality of computing resource allocation symbols and analysis processing results of the computing resources in a target cache table;
the step of obtaining the analysis processing results of the computing resources comprises the following steps:
and acquiring the analysis processing result from the target cache table according to the plurality of computing resource allocation symbols.
8. The method of claim 2, wherein the initial item number is a primary key assigned to the first item, and the plurality of computing resource allocation symbols are obtained by hashing the primary key.
8. An apparatus for load balancing, the apparatus comprising:
the monitoring module is configured to monitor data to be analyzed generated by each item in the plurality of items;
a primary key acquisition module configured to acquire an initial item number allocated to a first item if it is determined that a ratio of an amount of data to be analyzed corresponding to the first item to a total amount of data to be analyzed corresponding to the plurality of items is greater than a set threshold;
the hash processing module is configured to splice a plurality of marking symbols with the initial item number respectively to obtain a plurality of computing resource allocation symbols, wherein one marking symbol corresponds to one computing resource allocation symbol;
and a resource allocation module configured to allocate computing resources in a service cluster for the data to be analyzed of the first item according to the plurality of computing resource allocation symbols.
9. A computer readable storage medium having stored thereon a computer program, which when executed by a processor, is adapted to carry out the method of any of claims 1-7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor is operable to implement the method of any one of claims 1-7 when the program is executed.
CN202310147057.3A 2023-02-22 2023-02-22 Load balancing method, device, medium and electronic equipment Pending CN116028231A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310147057.3A CN116028231A (en) 2023-02-22 2023-02-22 Load balancing method, device, medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310147057.3A CN116028231A (en) 2023-02-22 2023-02-22 Load balancing method, device, medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116028231A true CN116028231A (en) 2023-04-28

Family

ID=86074023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310147057.3A Pending CN116028231A (en) 2023-02-22 2023-02-22 Load balancing method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116028231A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160088072A1 (en) * 2014-09-19 2016-03-24 Facebook, Inc. Balancing load across cache servers in a distributed data store
CN110443695A (en) * 2019-07-31 2019-11-12 中国工商银行股份有限公司 Data processing method and its device, electronic equipment and medium
CN113411369A (en) * 2020-03-26 2021-09-17 山东管理学院 Cloud service resource collaborative optimization scheduling method, system, medium and equipment

Similar Documents

Publication Publication Date Title
CN107229555B (en) Identification generation method and device
CN110222048B (en) Sequence generation method, device, computer equipment and storage medium
CN109064345B (en) Message processing method, system and computer readable storage medium
US9595979B2 (en) Multiple erasure codes for distributed storage
CN110134889B (en) Short link generation method and device and server
CN106407207B (en) Real-time newly-added data updating method and device
US10116441B1 (en) Enhanced-security random data
EP3221797B1 (en) Testing systems and methods
JP2019523952A (en) Streaming data distributed processing method and apparatus
WO2022111313A1 (en) Request processing method and micro-service system
CN105045762A (en) Management method and apparatus for configuration file
AU2016367801A1 (en) Method and apparatus for generating random character string
CN111651695A (en) Method and device for generating and analyzing short link
CN112165451A (en) APT attack analysis method, system and server
CN111163186A (en) ID generation method, device, equipment and storage medium
CN113641505B (en) Resource allocation control method and device for server cluster
CN113010897A (en) Cloud computing security management method and system
CN116028231A (en) Load balancing method, device, medium and electronic equipment
CN112054919A (en) Method, device, storage medium and system for generating ID (identity) of container cluster under stateless condition
CN111177782A (en) Method and device for extracting distributed data based on big data and storage medium
CN110460634B (en) Edge computing consensus request management method and system
JP6233846B2 (en) Variable-length nonce generation
CN113608847A (en) Task processing method, device, equipment and storage medium
CN111966993B (en) Equipment identification code identification and generation algorithm test method, device, equipment and medium
US20170329722A1 (en) Importance based page replacement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20230428