CN115834585A - Data processing method and load balancing system


Publication number: CN115834585A
Authority: CN (China)
Prior art keywords: data, server, data gateway, push, gateway server
Legal status: Pending
Application number: CN202211265816.8A
Other languages: Chinese (zh)
Inventors: 刘继伟 (Liu Jiwei), 李国杰 (Li Guojie), 陈伟荣 (Chen Weirong)
Current and original assignee: Alipay Hangzhou Information Technology Co., Ltd.
Application filed by Alipay Hangzhou Information Technology Co., Ltd.
Priority application: CN202211265816.8A
Publication: CN115834585A

Abstract

One or more embodiments of the present specification disclose a data processing method and a load balancing system. The method comprises: acquiring data to be pushed to a data gateway server for data archiving; generating a push task based on the data to be pushed, the push task comprising at least one piece of data to be pushed; polling a plurality of data gateway servers based on a preset load balancing policy to determine, from among them, a target data gateway server that conforms to the policy, wherein the plurality of data gateway servers comprise data gateway servers belonging to the first data push server and/or data gateway servers belonging to data push servers other than the first data push server; and pushing the push task to the target data gateway server so that the target data gateway server performs data archiving on the at least one piece of data to be pushed.

Description

Data processing method and load balancing system
Technical Field
The present disclosure relates to the field of load balancing technologies, and in particular, to a data processing method and a load balancing system.
Background
With the advent of the mobile internet era, internet services handle enormous access volumes and generate enormous amounts of data every second. As a result, almost all internet applications now deploy their background servers in cluster mode. In such a deployment, every server can perform the same service function; for example, each server can process user access requests or process the data generated by the service, so that front-end internet services can be answered quickly.
Deploying background servers in cluster mode meets the usage demands of internet services to a certain extent, but it also creates new problems. Because every server in a cluster is a peer, the cluster differs from the traditional master-slave architecture, and the approach in which a master server balances the load of its slave servers is no longer suitable for servers deployed in cluster mode. How to schedule multiple background servers more reasonably in a cluster deployment, so as to make better use of server resources, has therefore become one of the problems that urgently needs to be solved.
Disclosure of Invention
In one aspect, one or more embodiments of the present specification provide a data processing method applied to a first data push server. The method includes: acquiring data to be pushed to a data gateway server for data archiving; generating a push task based on the data to be pushed, where the push task includes at least one piece of data to be pushed; polling a plurality of data gateway servers based on a preset load balancing policy to determine, from among the plurality of data gateway servers, a target data gateway server that conforms to the policy, where the plurality of data gateway servers include data gateway servers belonging to the first data push server and/or data gateway servers belonging to data push servers other than the first data push server; and pushing the push task to the target data gateway server so that the target data gateway server performs data archiving on the at least one piece of data to be pushed.
In another aspect, one or more embodiments of the present specification provide a load balancing system that includes a plurality of management areas and, distributed in each management area, a data push server, at least one data gateway server belonging to that data push server, and a storage system connected to the data gateway servers. The data push server is configured to: acquire data to be pushed to a data gateway server for data archiving; generate a push task based on the data to be pushed, the push task including at least one piece of data to be pushed; poll a plurality of data gateway servers based on a preset load balancing policy to determine, from among them, a target data gateway server that conforms to the policy, where the plurality of data gateway servers include data gateway servers belonging to the data push server and/or data gateway servers belonging to other data push servers; and push the push task to the target data gateway server. The data gateway server is configured to receive the push task pushed by the data push server and to send each piece of data to be pushed in the push task to the storage system over the network connection between the two. The storage system is configured to receive each piece of data to be pushed sent by the data gateway server and to archive it.
In yet another aspect, one or more embodiments of the present specification provide a data processing apparatus comprising a processor and a memory electrically connected to the processor. The memory stores a computer program, and the processor is configured to invoke and execute the computer program from the memory to implement: acquiring data to be pushed to a data gateway server for data archiving; generating a push task based on the data to be pushed, where the push task includes at least one piece of data to be pushed; polling a plurality of data gateway servers based on a preset load balancing policy to determine a target data gateway server that conforms to the policy, where the plurality of data gateway servers include data gateway servers belonging to a first data push server and/or data gateway servers belonging to data push servers other than the first data push server; and pushing the push task to the target data gateway server so that the target data gateway server performs data archiving on the at least one piece of data to be pushed.
In another aspect, one or more embodiments of the present specification provide a storage medium storing a computer program executable by a processor to implement the following process: acquiring data to be pushed to a data gateway server for data archiving; generating a push task based on the data to be pushed, where the push task includes at least one piece of data to be pushed; polling a plurality of data gateway servers based on a preset load balancing policy to determine a target data gateway server that conforms to the policy, where the plurality of data gateway servers include data gateway servers belonging to a first data push server and/or data gateway servers belonging to data push servers other than the first data push server; and pushing the push task to the target data gateway server so that the target data gateway server performs data archiving on the at least one piece of data to be pushed.
Drawings
To describe the technical solutions in one or more embodiments of the present specification or in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. The drawings described below are evidently only some of the embodiments recorded in one or more embodiments of the present specification; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic view of an application scenario of a data processing method according to an embodiment of the present specification;
Fig. 2 is a schematic flow chart of a data processing method according to an embodiment of the present specification;
Fig. 3 is a schematic swim lane diagram of a data processing method according to an embodiment of the present specification;
Fig. 4 is a schematic block diagram of a load balancing system according to an embodiment of the present specification;
Fig. 5 is a schematic block diagram of a load balancing system according to another embodiment of the present specification;
Fig. 6 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present specification.
Detailed Description
One or more embodiments of the present disclosure provide a data processing method and a load balancing system to solve the problem of unreasonable server resource allocation in existing service scenarios.
To help those skilled in the art better understand the technical solutions in one or more embodiments of the present disclosure, those solutions are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only a part, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art from one or more embodiments of the present disclosure without creative effort fall within the protection scope of one or more embodiments of the present disclosure.
At present, in many service scenarios, in order to record the state of the running system or service, key text information is output and recorded as logs on the disks of the servers that process the service or run the system. Log data recorded on server disks is scattered, yet some service scenarios need it, so a number of data push servers and data gateway servers are deployed in cluster mode in advance. Each data push server pulls log data from the server disks in real time, according to the requirements of operation and maintenance personnel, and pushes it to the data gateway servers, which then store the log data in different storage systems according to specific storage constraints. In a mass-data scenario, the data push servers pull and generate hundreds of millions of detail records per minute (each log entry contains at least one detail record) and send them to the data gateway server cluster simultaneously, which places high demands on the load balancing policy for the data gateway servers. A suitable load balancing policy allows cluster resources to be used more reasonably, improves the service capacity of the cluster, and improves the availability of the data gateway servers. On this basis, the embodiments of the present specification provide a data processing method and a load balancing system.
Fig. 1 is a schematic view of an application scenario of a data processing method according to an embodiment of the present specification. As shown in Fig. 1, the method is applied in a load balancing system that includes a plurality of management areas 10 (two are shown schematically) and a list server 20 connected to each management area 10 through a network. A data push server 110, at least one data gateway server 120, and a storage system 130 connected to the data gateway servers 120 are distributed in each management area 10, and the list server 20 is also connected to each data push server 110. Each data push server 110 can perform the same service function: obtain data to be pushed to a data gateway server 120 for data archiving, generate a push task from that data, poll the data gateway servers 120 based on a preset load balancing policy, determine a target data gateway server 120 that conforms to the policy, and push the push task to it. Each data gateway server 120 can likewise perform the same service function, such as archiving the at least one piece of data to be pushed. It should be understood that the deployment locations of the components shown in Fig. 1 are only an example and are not limited by this specification.
Fig. 2 is a schematic flow chart of a data processing method according to an embodiment of the present specification; the method can be executed by a first data push server. Optionally, the first data push server may be any one of the data push servers 110 shown in Fig. 1. As shown in Fig. 2, the method includes:
s202, data to be pushed to a data gateway server for data archiving processing is obtained.
And S204, generating a pushing task based on the data to be pushed.
The pushing task comprises at least one piece of data to be pushed.
And S206, polling the plurality of data gateway servers based on a preset load balancing strategy so as to determine a target data gateway server conforming to the load balancing strategy from the plurality of data gateway servers.
The plurality of data gateway servers include data gateway servers belonging to the first data push server and/or data gateway servers belonging to data push servers other than the first data push server. That is, the plurality may include both kinds, only data gateway servers belonging to the first data push server, or only data gateway servers belonging to other data push servers.
Alternatively, the target data gateway server may be a data gateway server belonging to the first data push server, or a data gateway server belonging to a data push server other than the first data push server.
S208: push the push task to the target data gateway server so that the target data gateway server performs data archiving on the at least one piece of data to be pushed.
In an embodiment, before archiving the at least one piece of data to be pushed, the target data gateway server may convert the data into a format that meets the requirements of the storage system, according to a configuration preset by operation and maintenance personnel, so that the data can be archived successfully.
With the technical solution of one or more embodiments of the present specification, the first data push server acquires data to be pushed to a data gateway server for archiving and generates a push task containing at least one piece of that data. It then polls the data gateway servers belonging to itself and/or to other data push servers based on a preset load balancing policy, determines a target data gateway server that conforms to the policy, and pushes the task to it so that the target archives the data. Compared with the current approach, in which a master server balances the load of its slave servers, this solution balances load across servers: data gateway server resources are allocated per push task, regardless of whether a data gateway server belongs to the first data push server that generated the task. This promotes reasonable use of every data gateway server's resources, avoids the push failures that occur when tasks are numerous and resources are allocated unreasonably, and improves the availability of the data gateway servers. Because each push task is pushed to a reasonably chosen target data gateway server, the response efficiency of push tasks is also ensured.
In an embodiment, before acquiring the data to be pushed (i.e., before S202), the first data push server may acquire the data according to a pre-configured data acquisition policy. The data acquisition policy may include one or more of: a data acquisition path, a data acquisition frequency, and a data type.
In one embodiment, generating a push task based on the data to be pushed (i.e., S204) may be performed as: generating each push task from a preset number of pieces of data to be pushed; or generating each push task from a preset data volume of data to be pushed.
Optionally, a preset number or a preset data volume may be set in the first data push server, so that after the data to be pushed is obtained, each push task is generated accordingly. For example, if a preset number of 100 is set, each push task is generated from 100 pieces of data to be pushed, i.e., each push task contains 100 pieces. If a preset data volume of 5 megabytes is set, each push task is generated from 5 megabytes of data to be pushed, i.e., each push task contains 5 megabytes of data.
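As a minimal illustration of this batching step, the following Python sketch groups acquired records into push tasks either by a preset count or by a preset byte volume. The class and function names are hypothetical and not part of the specification.

```python
# Hypothetical sketch of push-task generation; names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PushTask:
    records: List[bytes] = field(default_factory=list)

def batch_by_count(records: List[bytes], preset_count: int = 100) -> List[PushTask]:
    """Each push task contains `preset_count` records (e.g., 100)."""
    return [PushTask(records[i:i + preset_count])
            for i in range(0, len(records), preset_count)]

def batch_by_volume(records: List[bytes], preset_bytes: int = 5 * 1024 * 1024) -> List[PushTask]:
    """Each push task contains at most `preset_bytes` of data (e.g., 5 MB)."""
    tasks, current, size = [], PushTask(), 0
    for rec in records:
        if current.records and size + len(rec) > preset_bytes:
            tasks.append(current)        # flush the full task and start a new one
            current, size = PushTask(), 0
        current.records.append(rec)
        size += len(rec)
    if current.records:
        tasks.append(current)
    return tasks
```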
In one embodiment, the preset load balancing policy may include: no target parameter of the data gateway server exceeds its corresponding preset threshold.
The target parameters may include one or more of: device load information of the data gateway server, the data volume processed by the data gateway server within a first preset time period, and the in-transit data volume exchanged between the data gateway server and the storage system. The in-transit data volume is the volume of data to be pushed that has been sent to the storage system but has not yet been successfully archived by it.
Optionally, the device load information of the data gateway server may include one or more of the following: CPU (central processing unit) occupancy, network bandwidth occupancy, memory occupancy, and network quality.
In this embodiment, the device load information reflects the processing capability of the data gateway server. For example, if the load information includes CPU occupancy and network quality, then the higher the CPU occupancy and the worse the network quality, the worse the gateway's processing capability. By presetting a threshold for the device load information, a gateway whose load information exceeds the threshold is judged not to satisfy the load balancing policy, which screens out gateways with poor processing capability and avoids push failures. Note that since the device load information can include several items, a threshold must be set for each item actually used in a given scenario; for example, if the load information includes CPU occupancy and network quality, both a CPU occupancy threshold and a network quality threshold are needed.
Similarly, the data volume processed within the first preset time period reflects the gateway's service capability. If that volume exceeds its threshold, the gateway is under high service pressure, and a subsequent push might crash it, causing both a push failure and data loss. Presetting a threshold for this volume therefore screens out gateways with poor service capability and avoids push failures.
Likewise, the in-transit data volume exchanged between the data gateway server and the storage system reflects downstream service capability. If it exceeds its threshold, the downstream device (the storage system) is under high pressure, and a subsequent push might crash it, again causing push failure and loss. Presetting a threshold for the in-transit data volume therefore screens out gateways whose downstream service capability is poor and avoids push failures.
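The threshold test described above might look as follows; the metric names and threshold values are assumptions for illustration, not values taken from the specification.

```python
# Hypothetical policy check: a gateway conforms only if every reported
# target parameter stays at or below its preset threshold.
THRESHOLDS = {
    "cpu_occupancy": 0.85,                     # device load: CPU share
    "bandwidth_occupancy": 0.80,               # device load: bandwidth share
    "processed_bytes_in_window": 2 * 1024**3,  # handled in the first preset period
    "in_transit_bytes": 512 * 1024**2,         # sent to storage, not yet archived
}

def satisfies_policy(metrics: dict) -> bool:
    """metrics: the gateway's reported target parameters, keyed like THRESHOLDS."""
    return all(metrics.get(name, 0) <= limit
               for name, limit in THRESHOLDS.items())
```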
In other embodiments, the preset load balancing policy further includes: the network between the first data push server and the data gateway server is connected. Optionally, connectivity can be detected by periodically sending a PING (Packet Internet Groper) command to each data gateway server and checking the information each one returns.
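A connectivity probe of this kind could be sketched as below. It shells out to the system ping binary (the -c/-W flags are the Linux form); this is one assumed implementation, not the patent's own code.

```python
# Hypothetical reachability probe using the system `ping` command.
import subprocess

def gateway_reachable(host: str, timeout_s: int = 1) -> bool:
    """One ICMP echo request; a non-zero exit code means unreachable."""
    result = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), host],
                            capture_output=True)
    return result.returncode == 0

def probe_once(hosts: list) -> dict:
    """Map each gateway host to its current reachability."""
    return {h: gateway_reachable(h) for h in hosts}
```

A scheduler would call `probe_once` periodically and exclude unreachable gateways from subsequent polling.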
In one embodiment, polling the plurality of data gateway servers based on a preset load balancing policy to determine a target data gateway server conforming to the policy (i.e., S206) may be performed as the following steps A1-A3:
Step A1: determine, among the plurality of data gateway servers, the data gateway servers belonging to the first data push server.
Optionally, the first data push server may obtain polling related information from the list server in advance; the polling related information may include the management area in which each data gateway server is distributed. On this basis, the first data push server may determine from the polling related information which of the plurality of data gateway servers belong to it.
In one embodiment, data gateway servers belonging to the same data push server are distributed in the same management area, and data gateway servers belonging to different data push servers are distributed in different management areas. Optionally, a management area may be a machine room, a cell, a city, a country, and so on.
Step A2: poll the data gateway servers belonging to the first data push server to determine whether any of them satisfies the preset load balancing policy.
Step A3: if no data gateway server belonging to the first data push server satisfies the preset load balancing policy, poll the data gateway servers belonging to other data push servers among the plurality of data gateway servers to determine a target data gateway server that conforms to the policy.
Optionally, when the management area is a machine room and each data push server has permission to push data to data gateway servers in other machine rooms, the first data push server may first poll the data gateway servers belonging to itself to determine whether any of them satisfies the preset load balancing policy. If none does, it directly polls the data gateway servers belonging to the other data push servers to determine a target data gateway server that conforms to the policy.
For example, in the cross-room-permission case, suppose data push server a and its data gateway servers are distributed in machine room A, and data push server b and its data gateway servers are distributed in machine room B. Data push server a polls the data gateway servers in machine room A to check whether each satisfies the preset load balancing policy; if none does, it directly polls the data gateway servers in machine room B to determine a target data gateway server that conforms to the policy. Symmetrically, data push server b polls machine room B first and, if no gateway there satisfies the policy, polls machine room A.
Optionally, when the management area is a machine room and a data push server does not have permission to push data to data gateway servers in other machine rooms, the first data push server may poll the data gateway servers belonging to itself to determine whether any satisfies the preset load balancing policy. If none does, it pushes the push task to another data push server, which then polls its own data gateway servers and determines a target data gateway server that conforms to the policy.
For example, in the no-cross-room-permission case, again suppose data push server a and its data gateway servers are in machine room A, and data push server b and its data gateway servers are in machine room B. Data push server a polls the data gateway servers in machine room A; if none satisfies the policy, it pushes the task to another data push server (e.g., data push server b), which polls its own data gateway servers and determines a target. Symmetrically, data push server b polls machine room B first and, on failure, hands the task to data push server a.
In this embodiment, when determining a target data gateway server, the data gateway servers belonging to the first data push server are polled preferentially, and only when none of them satisfies the preset load balancing policy are the data gateway servers of other data push servers polled. Moreover, because the target must satisfy the policy, none of its target parameters exceeds the corresponding preset threshold, where the target parameters include one or more of: device load information, the data volume processed within the first preset time period, and the in-transit data volume exchanged with the storage system. The device load information reflects the gateway's processing capability, while the other two reflect its service capability and its downstream service capability. This embodiment therefore fully considers network overhead, the gateway's current processing capability, downstream service capability, and other factors, so the chosen target is more reasonable, can process the data to be pushed efficiently, and push failures are avoided.
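Steps A1-A3 amount to a two-tier selection: exhaust the local gateways first, then fall back to remote ones. A minimal sketch, assuming `satisfies_policy` reports each gateway's threshold/alarm state:

```python
# Hypothetical two-tier target selection (steps A1-A3).
from typing import Callable, List, Optional

def select_target(local_gateways: List[str],
                  remote_gateways: List[str],
                  satisfies_policy: Callable[[str], bool]) -> Optional[str]:
    for gw in local_gateways:        # A2: gateways of the first data push server
        if satisfies_policy(gw):
            return gw
    for gw in remote_gateways:       # A3: gateways of other data push servers
        if satisfies_policy(gw):
            return gw
    return None                      # no conforming gateway found
```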
In one embodiment, polling the plurality of data gateway servers based on a preset load balancing policy to determine a target data gateway server conforming to the policy (i.e., S206) may be performed as the following steps B1-B3. Alternatively, steps B1 to B3 may be the specific execution steps of step A2 or step A3 above.
Step B1: for each data gateway server, judge whether first alarm information sent by that data gateway server is currently received. The first alarm information indicates that at least one target parameter of the data gateway server exceeds its corresponding preset threshold.
Step B2: if first alarm information sent by the data gateway server is not currently received, determine that the data gateway server is a target data gateway server conforming to the load balancing policy.
Step B3: if first alarm information sent by the data gateway server is currently received, determine that the data gateway server does not conform to the load balancing policy.
In this embodiment, whether a data gateway server is a target conforming to the load balancing policy can be obtained accurately simply by judging whether its first alarm information is currently received, which makes the target data gateway server easy to determine.
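Steps B1-B3 reduce to a membership test; a minimal sketch, assuming pending first alarm information is tracked in a set keyed by gateway id:

```python
# Hypothetical alarm check (steps B1-B3): a gateway conforms to the
# policy exactly when no first alarm information from it is pending.
def conforms_to_policy(gateway_id: str, pending_alarms: set) -> bool:
    return gateway_id not in pending_alarms
```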
In one embodiment, the following steps C1-C2 may be performed before the plurality of data gateway servers are polled based on the preset load balancing policy (i.e., before S206):
Step C1: acquire the polling related information from the list server.
Optionally, the polling related information may include the management area in which each data gateway server is distributed, the distribution weight corresponding to each data gateway server, and the distribution weight corresponding to each management area. The list server pulls the polling related information in advance over its network connections to each management area and each data push server.
Optionally, the distribution weight corresponding to each management area is set in advance. For example, operation and maintenance staff may set it for each management area beforehand and update it after a management area is added or deleted.
Optionally, the distribution weight corresponding to each data gateway server may be preset and then continuously updated while push tasks are processed. For example, it may be preset by operation and maintenance staff, and each data gateway server keeps updating its own distribution weight as it processes push tasks.
In one embodiment, the first data push server may receive an updated distribution weight sent by a data gateway server that belongs to it, and forward the updated weight to the list server so that the list server updates the distribution weight recorded for that data gateway server.
Here, when the distribution weight update condition is met, the data gateway server performs a weighted calculation based on its own target parameters, determines its updated distribution weight from the result, and sends the updated weight to the first data push server. The target parameters may include one or more of: the in-transit data volume exchanged between the data gateway server and the storage system, the data volume processed by the data gateway server within the first preset time period, and the device load information of the data gateway server.
Optionally, the distribution weight update condition may include: the number of processed push tasks reaching a preset count threshold, and/or the elapse of every second preset time interval.
For example, when the target parameters include all three items, the data gateway server's weighted calculation may be implemented as: compute a first ratio of the in-transit data volume to its preset threshold, a second ratio of the data volume processed within the first preset time period to its preset threshold, and a third ratio of the device load information to its preset threshold, and then combine the first, second, and third ratios using preset weighting weights.
Optionally, the weighting weights are preset according to which quantity the load balancing system cares about most; that quantity receives the largest weight. For example, to focus on the data volume the gateway itself processes and avoid a crash, the weighting weight of the second ratio is set to be the largest of the three, e.g., 40% for the second ratio, 35% for the first, and 25% for the third.
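Putting the three ratios and the example 35/40/25 weighting together, the update could be sketched as follows. The mapping from the weighted score to the new distribution weight (here, an inverse-linear one) is an assumption of this sketch, since the specification only fixes the weighted calculation of the ratios.

```python
# Hypothetical distribution weight update. The 0.35/0.40/0.25 weighting
# follows the example in the text; the score-to-weight mapping is assumed.
def updated_weight(in_transit: float, processed: float, load: float,
                   in_transit_max: float, processed_max: float, load_max: float,
                   base_weight: float = 10.0) -> float:
    first = in_transit / in_transit_max   # downstream (storage) pressure
    second = processed / processed_max    # the gateway's own processing pressure
    third = load / load_max               # device load
    score = 0.35 * first + 0.40 * second + 0.25 * third
    # busier gateways get a smaller share of future polls (assumed mapping)
    return max(1.0, base_weight * (1.0 - score))
```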
In this embodiment, updating a data gateway server's distribution weight fully considers factors such as the gateway's processing capability and downstream service capability, so the re-determined weight better matches the actual situation of the current scenario. This helps every data gateway server's resources to be used reasonably and avoids push failures caused by numerous tasks and unreasonably allocated resources.
Step C2: store the polling related information locally at the first data push server.
In this embodiment, obtaining the polling related information from the list server and storing it locally before the polling provides the basis for the subsequent polling and helps it execute accurately.
In one embodiment, the polling is weighted polling. In this case, polling the data gateway servers that belong to the first data push server (i.e., step A2) may be performed as: weighted polling of those data gateway servers according to their corresponding distribution weights.
For example, suppose the data gateway servers belonging to the first data push server are data gateway server m with distribution weight 5 and data gateway server n with distribution weight 7. Weighted polling according to these weights then means that in every 12 polls, data gateway server m is polled 5 times and data gateway server n is polled 7 times.
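The 5-in-12 / 7-in-12 schedule can be produced by expanding each gateway into its weight's worth of slots and cycling; a minimal sketch:

```python
# Hypothetical weighted polling: weights m=5, n=7 give 5 and 7 visits
# per 12 polls respectively.
import itertools

def weighted_cycle(weights: dict):
    """Yield gateway ids in proportion to their distribution weights."""
    pool = [gw for gw, w in weights.items() for _ in range(w)]
    return itertools.cycle(pool)

cycle = weighted_cycle({"gateway_m": 5, "gateway_n": 7})
first_twelve = [next(cycle) for _ in range(12)]
assert first_twelve.count("gateway_m") == 5
assert first_twelve.count("gateway_n") == 7
```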
In this embodiment, because the weights reflect how the load balancing system currently favors each data gateway server, polling the gateways belonging to the first data push server in a weighted manner means that a gateway with a higher weight is polled more often and therefore processes more push tasks, which reduces the processing pressure on the other data gateway servers in the system.
In one embodiment, the polling is weighted polling. In this case, polling the data gateway servers belonging to other data push servers (i.e., step A3) may be performed as: weighted polling of those data gateway servers according to their corresponding distribution weights and the distribution weights of the management areas in which the other data push servers are located.
For example, suppose the other data push servers are data push server a, whose management area has distribution weight 2, and data push server b, whose management area has distribution weight 3. Data push server a owns data gateway server m (weight 5) and data gateway server n (weight 7); data push server b owns data gateway server x (weight 7) and data gateway server y (weight 3). Weighted polling according to both levels of weights then means: in every 5 polls, the gateways of data push server a are polled 2 times and the gateways of data push server b 3 times; among the gateways of data push server a, every 12 polls visit m 5 times and n 7 times; among the gateways of data push server b, every 10 polls visit x 7 times and y 3 times.
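The two-level schedule in this example can be built by nesting the same expansion: pick a management area by area weight, then a gateway inside it by gateway weight. A sketch under those assumptions:

```python
# Hypothetical two-level weighted polling: area weights a=2, b=3;
# gateway weights m=5, n=7 under a and x=7, y=3 under b.
import itertools

def two_level_cycle(areas: dict):
    """areas: {area: (area_weight, {gateway: gateway_weight})}"""
    area_pool = [a for a, (w, _) in areas.items() for _ in range(w)]
    gw_cycles = {a: itertools.cycle([g for g, w in gws.items()
                                     for _ in range(w)])
                 for a, (_, gws) in areas.items()}
    for area in itertools.cycle(area_pool):
        yield area, next(gw_cycles[area])

cycle = two_level_cycle({"area_a": (2, {"m": 5, "n": 7}),
                         "area_b": (3, {"x": 7, "y": 3})})
picks = [next(cycle) for _ in range(60)]
assert sum(1 for area, _ in picks if area == "area_a") == 24  # 2 of every 5
```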
In this embodiment, because the weights reflect how the load balancing system currently favors each management area and each data gateway server, weighted polling of the gateways belonging to other data push servers means that management areas and gateways with higher weights are polled more often and therefore process more push tasks, which reduces the processing pressure on the other data gateway servers in the system.
In one embodiment, while the push task is being pushed to the target data gateway server (i.e., during S208), whether the target still satisfies the preset load balancing policy may be detected in real time. If it does not, the newly received push task is discarded and second warning information is sent to the first data push server. The second warning information indicates that the push task currently sent by the first data push server has been discarded.
Optionally, whether the target data gateway server still satisfies the policy can be detected in real time by judging whether first alarm information from it is currently received: if not, the target still satisfies the policy; if so, it does not. The first alarm information indicates that at least one target parameter of the target data gateway server exceeds its corresponding preset threshold.
In this embodiment, detecting in real time whether the target still satisfies the policy and discarding newly received push tasks when it does not prevents the target data gateway server or the storage system from crashing. Sending the second warning information lets the first data push server learn in time which push tasks were discarded, so it can take corresponding remedial measures and avoid data loss.
In one embodiment, in the case that the target data gateway server is a data gateway server belonging to another data push server, after pushing the push task to the target data gateway server (i.e., S208), the following steps D1-D3 may be performed:
and D1, carrying out data push test on the data gateway server belonging to the first data push server by using a push task with a preset data volume at intervals of a third preset time.
Alternatively, the push task of the preset data amount may be a push task of a small data amount, such as a push task of 1 megabyte, a push task of 0.5 megabyte, a push task of 3 megabytes, or the like.
And D2, if the test is passed, increasing the data volume of the pushing task for the data pushing test, and performing the data pushing test on the data gateway server belonging to the first data pushing server at intervals of a third preset time length until all the data to be pushed are pushed to the data gateway server belonging to the first data pushing server, and stopping the data pushing test.
And if receiving successful receiving information fed back by the data gateway server belonging to the first data push server aiming at the push task for the data push test, determining that the test is passed.
And D3, if the test is not passed, polling the data gateway servers belonging to other data pushing servers based on a preset load balancing strategy so as to push the pushing task for the data pushing test to the target data gateway server conforming to the load balancing strategy.
Based on the preset load balancing policy, the specific execution manner of polling the data gateway servers belonging to other data push servers is consistent with the polling execution manner in step A3 and the related embodiments, and details are not described here.
In this embodiment, after the first data push server pushes the push task to the data gateway servers belonging to other data push servers, the data push server can perform a data push test on each data gateway server belonging to the first data push server by using the push task with a small data volume, so that data is gradually pushed from the management area to the management area, and network overhead in the data push process is reduced. And when the data push test is not passed, the push task for the data push test can be pushed to the target data gateway server conforming to the load balancing strategy based on the preset load balancing strategy, so that the condition of data push failure is avoided.
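Steps D1-D3 describe a progressive failback probe; a minimal sketch follows, in which `push`, `make_task`, `select_remote_target`, and `all_drained` are caller-supplied hypothetical callables, and resetting the probe size after a failed test is an assumption of this sketch.

```python
# Hypothetical failback probe (steps D1-D3). All callables are supplied
# by the caller; push(gateway, task) returns True on acknowledged receipt.
import time

def failback_probe(local_gateways, push, select_remote_target,
                   make_task, all_drained,
                   start_bytes: int = 1 * 1024 * 1024,  # e.g., a 1 MB probe
                   growth: float = 2.0,
                   interval_s: float = 60.0) -> None:
    size = start_bytes
    while not all_drained():                 # stop once all data flows locally again
        task = make_task(size)
        if any(push(gw, task) for gw in local_gateways):
            size *= growth                   # test passed: grow the next probe
        else:
            # test failed: re-poll other areas so the probe data is not lost
            push(select_remote_target(), task)
            size = start_bytes               # assumed: restart with a small probe
        time.sleep(interval_s)               # the "third preset time interval"
```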
Fig. 3 is a schematic swim lane diagram of a data processing method according to an embodiment of the present specification. As shown in Fig. 3, the method covers the interaction between a first data push server, the data gateway servers belonging to it, and the data gateway servers belonging to other data push servers, and includes the following steps:
s3.1, the first data pushing server generates a pushing task based on the obtained data to be pushed.
The data to be pushed is the data to be pushed to the data gateway server for data archiving processing. Each pushing task comprises at least one piece of data to be pushed. Alternatively, each push task may be generated based on a preset number of data to be pushed, or each push task may be generated based on a preset amount of data to be pushed.
And S3.2, the first data push server determines a data gateway server belonging to the first data push server in the plurality of data gateway servers according to the locally stored polling related information.
Alternatively, the first data push server may obtain the polling related information from the list server and store the polling related information locally at the first data push server. The polling related information may include a management area distributed by each data gateway server, an allocation weight corresponding to each data gateway server, and an allocation weight corresponding to each management area. The list server is used for pulling and obtaining the polling related information in advance based on the network connection between each management area and each data push server.
Wherein, a plurality of data gateway servers include: the data gateway server belongs to the first data push server, and/or belongs to other data push servers except the first data push server.
And S3.3, the first data push server performs weighted polling on the data gateway server belonging to the first data push server according to the distribution weight corresponding to the data gateway server belonging to the first data push server in the polling related information so as to determine whether the data gateway server belonging to the first data push server meets a preset load balancing strategy.
The preset load balancing policy may include: and the target parameter of the data gateway server does not exceed the corresponding preset threshold value. The target parameters may include one or more of the following: the data gateway server comprises equipment load information of the data gateway server, data volume processed by the data gateway server within a first preset time period, and in-transit data volume interacted between the data gateway server and the storage system. The data volume in transit of the data gateway server and the storage system interaction comprises the data volume of the data to be pushed, which is sent to the storage system and is not successfully archived by the storage system.
Optionally, during the weighted polling, whether first alarm information sent by each data gateway server is currently received can be judged. If not, that data gateway server is determined to be a target data gateway server conforming to the load balancing policy; if so, it is determined not to conform. The first alarm information indicates that at least one target parameter of the data gateway server exceeds its corresponding preset threshold.
S3.4: when the first data push server determines that a data gateway server belonging to it satisfies the preset load balancing policy, it pushes the push task to that server, which is thus the target data gateway server conforming to the policy.
S3.5: the target data gateway server, belonging to the first data push server, performs data archiving on the at least one piece of data to be pushed.
S3.6: when the first data push server determines that none of its own data gateway servers satisfies the preset load balancing policy, it performs weighted polling over the data gateway servers belonging to other data push servers, according to their distribution weights and the distribution weights of the management areas in which those data push servers are located, to determine a target data gateway server that conforms to the policy.
S3.7: the target data gateway server, belonging to another data push server, performs data archiving on the at least one piece of data to be pushed.
The specific processes of S3.1 to S3.7 are described in detail in the above embodiments, and are not described herein again.
As with the method embodiments above, this flow balances load across servers rather than having a master server balance its slaves: data gateway server resources are allocated per push task, regardless of which data push server a gateway belongs to. This promotes reasonable use of every data gateway server's resources, avoids push failures under heavy task load and unreasonable resource allocation, improves the availability of the data gateway servers, and ensures the response efficiency of push tasks.
In summary, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
Based on the same idea as the data processing method provided by one or more embodiments of the present specification, one or more embodiments further provide a load balancing system.
Fig. 4 is a schematic block diagram of a load balancing system according to an embodiment of the present disclosure. As shown in Fig. 4, the load balancing system includes a plurality of management areas 10 (three are schematically illustrated) and, distributed in each management area 10, a data push server 110, at least one data gateway server 120 belonging to the data push server 110, and a storage system 130 connected to the data gateway servers.
In this embodiment, the data push server 110 is configured to: acquire data to be pushed to a data gateway server for data archiving; generate a push task based on the data to be pushed, the push task including at least one piece of data to be pushed; poll a plurality of data gateway servers based on a preset load balancing policy to determine, from among them, a target data gateway server that conforms to the policy, where the plurality of data gateway servers include data gateway servers belonging to the data push server and/or data gateway servers belonging to other data push servers; and push the push task to the target data gateway server.
In this embodiment, the data gateway server 120 is configured to receive the push task pushed by the data push server and to send each piece of data to be pushed in the push task to the storage system over its network connection to the storage system. The storage system 130 is configured to receive each piece of data to be pushed sent by the data gateway server and to archive it.
In one embodiment, the data push server 110 is further configured to:
determining, among the plurality of data gateway servers, the data gateway servers belonging to the first data push server;
polling the data gateway servers belonging to the first data push server to determine whether any of them meets the preset load balancing policy;
and if none does, polling the data gateway servers belonging to the other data push servers among the plurality of data gateway servers to determine a target data gateway server that meets the load balancing policy.
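A possible rendering of this two-stage polling, assuming a `meets_policy` predicate that stands in for the preset load balancing policy (one such predicate is sketched after the next embodiment):

```python
from typing import Callable, Iterable, Optional


def select_target_gateway(local_gateways: Iterable,
                          remote_gateways: Iterable,
                          meets_policy: Callable[[object], bool]) -> Optional[object]:
    """Poll the gateways belonging to the first data push server; only if
    none of them satisfies the load balancing policy, poll the gateways
    belonging to the other data push servers."""
    for gw in local_gateways:       # first pass: the push server's own gateways
        if meets_policy(gw):
            return gw
    for gw in remote_gateways:      # fallback: gateways of other push servers
        if meets_policy(gw):
            return gw
    return None                     # no gateway currently conforms to the policy
```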
In one embodiment, the preset load balancing policy includes:
the target parameters of the data gateway server do not exceed the corresponding preset thresholds; the target parameters include one or more of: the device load information of the data gateway server, the data volume processed by the data gateway server within a first preset duration, and the in-transit data volume exchanged between the data gateway server and the storage system;
the in-transit data volume exchanged between the data gateway server and the storage system includes the data volume of data to be pushed that has been sent to the storage system but has not yet been successfully archived by the storage system.
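The policy check itself could be read as a simple threshold test over the three target parameters; the concrete threshold values below are assumptions, since the embodiment only requires "corresponding preset thresholds".

```python
from dataclasses import dataclass


@dataclass
class GatewayStats:
    device_load: float       # device load of the gateway, e.g. 0.0-1.0
    processed_volume: int    # bytes processed within the first preset duration
    in_transit_volume: int   # bytes sent to storage but not yet successfully archived


# hypothetical thresholds; the embodiment only requires "corresponding preset thresholds"
THRESHOLDS = {
    "device_load": 0.8,
    "processed_volume": 500 * 1024 * 1024,
    "in_transit_volume": 100 * 1024 * 1024,
}


def meets_policy(stats: GatewayStats) -> bool:
    """A gateway conforms to the policy when none of its target parameters
    exceeds the corresponding preset threshold."""
    return (stats.device_load <= THRESHOLDS["device_load"]
            and stats.processed_volume <= THRESHOLDS["processed_volume"]
            and stats.in_transit_volume <= THRESHOLDS["in_transit_volume"])
```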
In one embodiment, a plurality of data gateway servers belonging to the same data push server are distributed in the same management area; a plurality of data gateway servers belonging to different data push servers are distributed in different management areas.
In one embodiment, the polling is weighted polling; the data push server 110 is further configured to:
performing weighted polling on the data gateway servers belonging to the first data push server according to the distribution weights corresponding to those data gateway servers.
In one embodiment, the polling is weighted polling; the data push server 110 is further configured to perform weighted polling on the data gateway servers belonging to the other data push servers according to the distribution weights corresponding to those data gateway servers and the distribution weights corresponding to the management areas in which the other data push servers are located.
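One way to realize such weighted polling is smooth weighted round-robin. In the sketch below, `effective_weight` combines a gateway's distribution weight with its management area's weight by multiplication, which is an assumption — the embodiment does not fix the combination formula.

```python
from typing import Dict


class SmoothWeightedPoller:
    """Smooth weighted round-robin: successive calls to next() visit
    gateways in proportion to their effective weights."""

    def __init__(self, weights: Dict[str, int]):
        self.weights = dict(weights)              # effective weight per gateway
        self.current = {gw: 0 for gw in weights}  # running counters

    def next(self) -> str:
        total = sum(self.weights.values())
        for gw, w in self.weights.items():
            self.current[gw] += w                 # advance every counter by its weight
        chosen = max(self.current, key=self.current.get)
        self.current[chosen] -= total             # penalize the chosen gateway
        return chosen


def effective_weight(gateway_weight: int, area_weight: int = 1) -> int:
    # For the push server's own gateways only the gateway's distribution
    # weight applies; for gateways of other push servers, the weight of
    # their management area is also factored in (multiplying is an assumption).
    return gateway_weight * area_weight


# usage: two remote gateways whose management areas are weighted 3 and 1
poller = SmoothWeightedPoller({
    "gw-a": effective_weight(5, 3),
    "gw-b": effective_weight(5, 1),
})
order = [poller.next() for _ in range(8)]  # gw-a is chosen ~3x as often as gw-b
```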
In one embodiment, as shown in fig. 5, the load balancing system further includes a list server 20, and the list server 20 is connected to the management areas 10 and the data push servers 110 through a network.
The list server 20 is configured to acquire polling-related information over its network connections with each management area and each data push server; the polling-related information includes the management area in which each data gateway server is distributed, the distribution weight corresponding to each data gateway server, and the distribution weight corresponding to each management area.
The data push server 110 is further configured to obtain the polling-related information from the list server and to store it locally at the data push server.
In an embodiment, the data gateway server 120 is further configured to, when a distribution weight update condition is met, perform a weighted calculation based on its own target parameters, determine its updated distribution weight from the calculation result, and send the updated distribution weight to the data push server. The target parameters include one or more of: the in-transit data volume exchanged between the data gateway server and the storage system, the data volume processed by the data gateway server within the first preset duration, and the device load information of the data gateway server. The distribution weight update condition includes: the number of processed push tasks reaching a preset count threshold and/or a second preset duration elapsing.
The data push server 110 is further configured to receive the updated distribution weight sent by a data gateway server belonging to the data push server, and to send the updated distribution weight to the list server.
the list server 20 is further configured to update the distribution weight of the corresponding data gateway server based on the updated distribution weight.
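The gateway-side weight update could look roughly as follows. The weighting formula is an assumption (the embodiment only says a weighted calculation over the target parameters), and `GatewayStats`/`THRESHOLDS` reuse the names from the earlier policy sketch.

```python
import time


class GatewayWeightUpdater:
    """Sketch of the gateway-side distribution weight update; the concrete
    formula in compute_weight is assumed, not taken from the disclosure."""

    def __init__(self, count_threshold: int, period_seconds: float):
        self.count_threshold = count_threshold   # preset processed-task count threshold
        self.period_seconds = period_seconds     # the "second preset duration"
        self.tasks_since_update = 0
        self.last_update = time.monotonic()

    def should_update(self) -> bool:
        # update when the processed push task count reaches the threshold
        # and/or the second preset duration has elapsed
        return (self.tasks_since_update >= self.count_threshold
                or time.monotonic() - self.last_update >= self.period_seconds)

    def compute_weight(self, stats: "GatewayStats") -> int:
        # assumed formula: a busier gateway advertises a lower weight
        score = (0.5 * stats.device_load
                 + 0.3 * stats.in_transit_volume / THRESHOLDS["in_transit_volume"]
                 + 0.2 * stats.processed_volume / THRESHOLDS["processed_volume"])
        self.tasks_since_update = 0
        self.last_update = time.monotonic()
        return max(1, round(100 * (1.0 - min(score, 0.99))))
```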
In one embodiment, the data push server 110 is further configured to:
determining, for each data gateway server, whether first alarm information sent by that data gateway server has currently been received; the first alarm information indicates that at least one of the target parameters of the data gateway server exceeds its corresponding preset threshold;
if not, determining that the data gateway server is a target data gateway server conforming to the load balancing strategy.
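Read this way, the push server needs no polling round-trip for the policy check: a gateway is a candidate target simply while no first alarm is outstanding for it. A minimal sketch (names hypothetical):

```python
from typing import Iterable, Optional, Set


def conforms(gateway_id: str, alarmed_gateways: Set[str]) -> bool:
    """A gateway conforms iff no first alarm information (at least one
    target parameter over its threshold) is currently outstanding for it."""
    return gateway_id not in alarmed_gateways


def select_by_alarm(gateway_ids: Iterable[str],
                    alarmed_gateways: Set[str]) -> Optional[str]:
    # the first gateway with no outstanding first alarm becomes the target
    for gw in gateway_ids:
        if conforms(gw, alarmed_gateways):
            return gw
    return None
```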
In one embodiment, the data push server 110 is further configured to:
performing, every third preset duration, a data push test on the data gateway server belonging to the first data push server using a push task of a preset data volume;
if the test passes, increasing the data volume of the push task used for the test and repeating the data push test every third preset duration, and stopping the data push test once all the data to be pushed is again being pushed to the data gateway server belonging to the first data push server;
if the test fails, polling the data gateway servers belonging to the other data push servers based on the preset load balancing policy, so as to push the test push task to a target data gateway server conforming to the policy;
the test is determined to have passed if reception-success information, fed back by the data gateway server belonging to the first data push server for the test push task, is received.
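This amounts to a gradually widening recovery probe after a fail-over. A sketch, with the doubling of the test volume and the callback names (`try_push`, `push_remote`) as assumptions:

```python
import time
from typing import Callable, List


def probe_local_gateway(try_push: Callable[[List[bytes]], bool],
                        push_remote: Callable[[List[bytes]], None],
                        pending: List[bytes],
                        initial_count: int,
                        interval_seconds: float) -> bool:
    """Every third preset duration, send a test push task of a preset
    volume to the local gateway; on success grow the test volume until all
    pending data flows through the local gateway again; on failure reroute
    the test task to a remote target gateway and keep using remote ones."""
    count = initial_count
    while pending:
        batch, pending = pending[:count], pending[count:]
        if try_push(batch):            # success feedback => this round passed
            count *= 2                 # increase the test task's data volume (assumed factor)
        else:
            push_remote(batch)         # failed test task goes to a remote target gateway
            return False               # keep polling the other push servers' gateways
        time.sleep(interval_seconds)   # wait the third preset duration
    return True                        # all data pushed back through the local gateway
```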
In one embodiment, the data push server 110 is further configured to:
generating each push task based on a preset number of pieces of the data to be pushed; or,
generating each push task based on a preset data volume of the data to be pushed.
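Both batching strategies are straightforward; a sketch of each (function names hypothetical):

```python
from typing import Iterable, Iterator, List


def batch_by_count(records: Iterable[bytes], per_task: int) -> Iterator[List[bytes]]:
    """Generate one push task per preset number of pieces of data."""
    batch: List[bytes] = []
    for record in records:
        batch.append(record)
        if len(batch) == per_task:
            yield batch
            batch = []
    if batch:
        yield batch  # final, possibly smaller, push task


def batch_by_volume(records: Iterable[bytes], max_bytes: int) -> Iterator[List[bytes]]:
    """Generate one push task per preset total data volume."""
    batch: List[bytes] = []
    size = 0
    for record in records:
        if batch and size + len(record) > max_bytes:
            yield batch
            batch, size = [], 0
        batch.append(record)
        size += len(record)
    if batch:
        yield batch
```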
By adopting the system of one or more embodiments of the present specification, the first data push server acquires data to be pushed, generates a push task containing at least one piece of that data, polls the data gateway servers belonging to it and/or to other data push servers based on the preset load balancing policy, and pushes the task to a target data gateway server that conforms to the policy for archiving. Compared with the current approach of a master server balancing the load of its own slave servers, the system balances load across master-slave boundaries: a push task can be assigned to any conforming data gateway server, whether or not that gateway belongs to the data push server that generated the task. This promotes reasonable utilization of each data gateway server's resources, helps avoid push failures caused by unreasonable resource allocation when push tasks are numerous, improves the availability of the data gateway servers, and ensures the response efficiency of each push task.
It should be understood by those skilled in the art that the load balancing system can be used to implement the data processing method described above; the detailed description is similar to that of the method and is not repeated here to avoid redundancy.
Based on the same idea, one or more embodiments of the present specification further provide a data processing apparatus, as shown in fig. 6. Data processing apparatuses may vary considerably in configuration and performance; an apparatus may include one or more processors 601 and a memory 602, and the memory 602 may store one or more applications or data. The memory 602 may provide transient or persistent storage. An application program stored in the memory 602 may include one or more modules (not shown), each of which may include a series of computer-executable instructions for the data processing apparatus. Further, the processor 601 may be arranged to communicate with the memory 602 and to execute, on the data processing apparatus, the series of computer-executable instructions in the memory 602. The data processing apparatus may also include one or more power supplies 603, one or more wired or wireless network interfaces 604, one or more input/output interfaces 605, and one or more keyboards 606.
Specifically, in this embodiment, the data processing apparatus includes a memory and one or more programs stored in the memory. The one or more programs may include one or more modules, each of which may include a series of computer-executable instructions for the data processing apparatus, and the one or more programs are configured to be executed by the one or more processors and include computer-executable instructions for:
acquiring data to be pushed, which is to be pushed to a data gateway server for data archiving processing; generating a push task based on the data to be pushed, the push task comprising at least one piece of the data to be pushed; polling a plurality of data gateway servers based on a preset load balancing policy, so as to determine, from the plurality of data gateway servers, a target data gateway server that conforms to the load balancing policy, wherein the plurality of data gateway servers include data gateway servers belonging to the first data push server and/or data gateway servers belonging to data push servers other than the first data push server; and pushing the push task to the target data gateway server, so that the target data gateway server performs data archiving processing on the at least one piece of data to be pushed.
By adopting the apparatus of one or more embodiments of the present specification, the cross-master-slave load balancing effect described above for the method is likewise achieved: a push task is assigned to whichever data gateway server conforms to the load balancing policy, regardless of whether that gateway belongs to the first data push server that generated the task, thereby promoting reasonable utilization of each data gateway server's resources, helping to avoid push failures caused by unreasonable resource allocation when push tasks are numerous, improving the availability of the data gateway servers, and ensuring the response efficiency of each push task.
One or more embodiments of the present specification further provide a storage medium storing one or more computer programs. The one or more computer programs include instructions which, when executed by an electronic device comprising a plurality of application programs, enable the electronic device to perform the processes of the data processing method embodiments above, and in particular to perform:
acquiring data to be pushed to a data gateway server for data archiving processing; generating a push task based on the data to be pushed, the push task comprising at least one piece of the data to be pushed; polling a plurality of data gateway servers based on a preset load balancing policy, so as to determine, from the plurality of data gateway servers, a target data gateway server that conforms to the load balancing policy, wherein the plurality of data gateway servers include data gateway servers belonging to the first data push server and/or data gateway servers belonging to data push servers other than the first data push server; and pushing the push task to the target data gateway server, so that the target data gateway server performs data archiving processing on the at least one piece of data to be pushed.
By using the storage medium of one or more embodiments of the present specification, the same cross-master-slave load balancing effect is obtained: push tasks are assigned to conforming target data gateway servers regardless of which data push server they belong to, the resources of each data gateway server are used reasonably, push failures caused by unreasonable resource allocation under heavy task load are avoided, the availability of the data gateway servers is improved, and the response efficiency of each push task is ensured.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the various elements may be implemented in the same one or more software and/or hardware implementations in implementing one or more embodiments of the present description.
One skilled in the art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the present specification are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the system embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
The above description is intended to be illustrative of one or more embodiments of the present disclosure and is not intended to be limiting. Various modifications and alterations to one or more embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of one or more embodiments of the present specification should be included in the scope of claims of one or more embodiments of the present specification.

Claims (17)

1. A data processing method is applied to a first data push server and comprises the following steps:
acquiring data to be pushed to a data gateway server for data archiving processing;
generating a pushing task based on the data to be pushed; the pushing task comprises at least one piece of data to be pushed;
polling a plurality of data gateway servers based on a preset load balancing strategy so as to determine a target data gateway server which accords with the load balancing strategy from the plurality of data gateway servers; wherein the plurality of data gateway servers comprise: a data gateway server belonging to the first data push server, and/or a data gateway server belonging to a data push server other than the first data push server;
and pushing the pushing task to the target data gateway server so that the target data gateway server carries out data archiving processing on the at least one piece of data to be pushed.
2. The method of claim 1, wherein polling a plurality of data gateway servers based on a preset load balancing policy to determine a target data gateway server from the plurality of data gateway servers that meets the load balancing policy comprises:
determining a data gateway server belonging to the first data push server in the plurality of data gateway servers;
polling a data gateway server belonging to the first data push server in the plurality of data gateway servers to determine whether the data gateway server belonging to the first data push server meets the preset load balancing strategy;
and if it does not meet the preset load balancing policy, polling the data gateway servers belonging to the other data push servers among the plurality of data gateway servers to determine a target data gateway server that meets the load balancing policy.
3. The method of claim 1, the preset load balancing policy comprising:
the target parameters of the data gateway server do not exceed the corresponding preset thresholds; the target parameters include one or more of: the device load information of the data gateway server, the data volume processed by the data gateway server within a first preset duration, and the in-transit data volume exchanged between the data gateway server and a storage system;
wherein the in-transit data volume exchanged between the data gateway server and the storage system includes the data volume of the data to be pushed that has been sent to the storage system but has not yet been successfully archived by the storage system.
4. The method of claim 2, wherein a plurality of data gateway servers belonging to the same data push server are distributed in the same management area; a plurality of data gateway servers belonging to different data push servers are distributed in different management areas.
5. The method of claim 2, the polling being weighted polling;
the polling of the data gateway server belonging to the first data push server in the plurality of data gateway servers includes:
and performing weighted polling on the data gateway server belonging to the first data push server according to the distribution weight corresponding to the data gateway server belonging to the first data push server.
6. The method of claim 4, the polling being weighted polling;
the polling of the data gateway servers belonging to other data push servers in the plurality of data gateway servers includes:
and carrying out weighted polling on the data gateway servers belonging to other data push servers according to the distribution weights corresponding to the data gateway servers belonging to other data push servers and the distribution weights corresponding to the management areas where the other data push servers are located.
7. The method according to claim 5 or 6, before polling the plurality of data gateway servers based on the preset load balancing policy, the method further comprising:
acquiring polling related information from a list server, the polling related information comprising the management area in which each data gateway server is distributed, the distribution weight corresponding to each data gateway server, and the distribution weight corresponding to each management area, wherein the list server is configured to pull the polling related information in advance over its network connections with each management area and each data push server;
and storing the polling related information locally in the first data push server.
8. The method of claim 7, further comprising:
receiving an updated distribution weight sent by a data gateway server belonging to the first data push server, wherein the data gateway server is configured to, when a distribution weight update condition is met, perform a weighted calculation based on its own target parameters, determine its updated distribution weight from the calculation result, and send the updated distribution weight to the first data push server; the target parameters include one or more of: the in-transit data volume exchanged between the data gateway server and a storage system, the data volume processed by the data gateway server within a first preset duration, and the device load information of the data gateway server; and the distribution weight update condition includes: the number of processed push tasks reaching a preset count threshold and/or a second preset duration elapsing;
and sending the updated distribution weight to the list server so that the list server updates the distribution weight of the corresponding data gateway server based on the updated distribution weight.
9. The method of claim 3, wherein polling a plurality of data gateway servers based on a preset load balancing policy to determine a target data gateway server from the plurality of data gateway servers that meets the load balancing policy comprises:
determining, for each data gateway server, whether first alarm information sent by that data gateway server has currently been received, the first alarm information indicating that at least one of the target parameters of the data gateway server exceeds its corresponding preset threshold;
if not, determining that the data gateway server is a target data gateway server conforming to the load balancing strategy.
10. The method of claim 2, wherein, in a case where the target data gateway server is a data gateway server belonging to another data push server, after pushing the push task to the target data gateway server, the method further comprises:
performing, every third preset duration, a data push test on the data gateway server belonging to the first data push server using a push task of a preset data volume;
if the test passes, increasing the data volume of the push task used for the test and repeating the data push test every third preset duration, and stopping the data push test once all the data to be pushed is again being pushed to the data gateway server belonging to the first data push server;
if the test fails, polling the data gateway servers belonging to the other data push servers based on the preset load balancing policy, so as to push the test push task to a target data gateway server conforming to the policy;
wherein the test is determined to have passed if reception-success information, fed back by the data gateway server belonging to the first data push server for the test push task, is received.
11. The method of claim 1, wherein generating a push task based on the data to be pushed comprises:
generating each push task based on a preset number of pieces of the data to be pushed; or,
generating each push task based on a preset data volume of the data to be pushed.
12. A load balancing system comprises a plurality of management areas, data push servers distributed in each management area, at least one data gateway server belonging to the data push servers, and a storage system connected with the data gateway servers;
the data push server is configured to: acquire data to be pushed, which is to be pushed to a data gateway server for data archiving processing; generate a push task based on the data to be pushed, the push task comprising at least one piece of the data to be pushed; poll a plurality of data gateway servers based on a preset load balancing policy, so as to determine, from the plurality of data gateway servers, a target data gateway server that conforms to the load balancing policy, wherein the plurality of data gateway servers include data gateway servers belonging to the data push server and/or data gateway servers belonging to other data push servers; and push the push task to the target data gateway server;
the data gateway server is used for receiving the pushing task pushed by the data pushing server; sending each piece of data to be pushed in the pushing task to the storage system based on network connection with the storage system;
the storage system is used for receiving each piece of data to be pushed sent by the data gateway server; and carrying out data archiving processing on each piece of data to be pushed.
13. The system of claim 12, wherein the data push server is further configured to perform weighted polling on the data gateway servers belonging to other data push servers according to the distribution weights corresponding to the data gateway servers belonging to other data push servers and the distribution weights corresponding to the management areas where the other data push servers are located.
14. The system of claim 13, further comprising a list server, wherein the list server is connected to each management area and each data push server through a network;
the list server is configured to acquire polling related information over its network connections with each management area and each data push server, the polling related information comprising the management area in which each data gateway server is distributed, the distribution weight corresponding to each data gateway server, and the distribution weight corresponding to each management area;
the data push server is also used for acquiring the polling related information from the list server; and storing the polling related information locally in the data push server.
15. The system of claim 14, wherein:
the data gateway server is further configured to, when a distribution weight update condition is met, perform a weighted calculation based on its own target parameters, determine its updated distribution weight from the calculation result, and send the updated distribution weight to the data push server; the target parameters include one or more of: the in-transit data volume exchanged between the data gateway server and the storage system, the data volume processed by the data gateway server within a first preset duration, and the device load information of the data gateway server; and the distribution weight update condition includes: the number of processed push tasks reaching a preset count threshold and/or a second preset duration elapsing;
the data push server is further configured to receive the updated distribution weight sent by a data gateway server belonging to the data push server, and to send the updated distribution weight to the list server;
the list server is further configured to update the distribution weight of the corresponding data gateway server based on the updated distribution weight.
16. A data processing apparatus comprising a processor and a memory electrically connected to the processor, the memory storing a computer program, the processor being operable to invoke and execute the computer program from the memory to implement a data processing method as claimed in any of claims 1 to 11.
17. A storage medium storing a computer program executable by a processor to implement a data processing method as claimed in any one of claims 1 to 11.
CN202211265816.8A 2022-10-17 2022-10-17 Data processing method and load balancing system Pending CN115834585A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211265816.8A CN115834585A (en) 2022-10-17 2022-10-17 Data processing method and load balancing system


Publications (1)

Publication Number Publication Date
CN115834585A true CN115834585A (en) 2023-03-21

Family

ID=85524868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211265816.8A Pending CN115834585A (en) 2022-10-17 2022-10-17 Data processing method and load balancing system

Country Status (1)

Country Link
CN (1) CN115834585A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050033858A1 (en) * 2000-07-19 2005-02-10 Swildens Eric Sven-Johan Load balancing service
CN102801620A (en) * 2012-08-09 2012-11-28 苏州阔地网络科技有限公司 Drift control processing method and system for netmeeting
US20130117382A1 (en) * 2011-11-07 2013-05-09 Cellco Partnership D/B/A Verizon Wireless Push messaging platform with high scalability and high availability
US20160248866A1 (en) * 2015-02-19 2016-08-25 Akamai Technologies, Inc. Systems and methods for avoiding server push of objects already cached at a client
WO2016133965A1 (en) * 2015-02-18 2016-08-25 KEMP Technologies Inc. Methods for intelligent data traffic steering
CN108337156A (en) * 2017-01-20 2018-07-27 阿里巴巴集团控股有限公司 A kind of information-pushing method and device
CN111131443A (en) * 2019-12-23 2020-05-08 中国平安财产保险股份有限公司 Task pushing method and system
CN112468573A (en) * 2020-11-24 2021-03-09 新天科技股份有限公司 Data pushing method, device, equipment and storage medium
CN112799931A (en) * 2021-03-15 2021-05-14 北京视界云天科技有限公司 Log collection method, device, system, medium and electronic equipment
CN113268351A (en) * 2021-06-07 2021-08-17 北京金山云网络技术有限公司 Load balancing method and device for gateway service
CN114244602A (en) * 2021-12-15 2022-03-25 腾讯科技(深圳)有限公司 Multi-user online network service system, method, device and medium
US20220191116A1 (en) * 2020-12-16 2022-06-16 Capital One Services, Llc Tcp/ip socket resiliency and health management
WO2022213529A1 (en) * 2021-04-07 2022-10-13 华为云计算技术有限公司 Instance deployment method and apparatus, cloud system, computing device, and storage medium


Similar Documents

Publication Publication Date Title
CN107241281B (en) Data processing method and device
WO2019076315A1 (en) Dynamic allocation of edge computing resources in edge computing centers
CN111950988B (en) Distributed workflow scheduling method and device, storage medium and electronic equipment
CN108737132B (en) Alarm information processing method and device
CN113742031B (en) Node state information acquisition method and device, electronic equipment and readable storage medium
CN110908774B (en) Resource scheduling method, equipment, system and storage medium
CN111399764B (en) Data storage method, data reading device, data storage equipment and data storage medium
CN113301078A (en) Network system, service deployment and network division method, device and storage medium
CN109739627B (en) Task scheduling method, electronic device and medium
CN113553178A (en) Task processing method and device and electronic equipment
CN108268211A (en) A kind of data processing method and device
CN111045811A (en) Task allocation method and device, electronic equipment and storage medium
CN109428926B (en) Method and device for scheduling task nodes
CN108111566B (en) Cloud storage system capacity expansion method and device and cloud storage system
CN113301075A (en) Flow control method, distributed system, device and storage medium
CN111865622A (en) Cloud service metering and charging method and system based on rule engine cluster
CN112202829A (en) Social robot scheduling system and scheduling method based on micro-service
CN106790354B (en) Communication method and device for preventing data congestion
CN115834585A (en) Data processing method and load balancing system
CN115981871A (en) GPU resource scheduling method, device, equipment and storage medium
CN116010065A (en) Distributed task scheduling method, device and equipment
CN113965538B (en) Equipment state message processing method, device and storage medium
CN113301076B (en) Flow control method, distributed system, device and storage medium
CN115048186A (en) Method and device for processing expansion and contraction of service container, storage medium and electronic equipment
CN114237902A (en) Service deployment method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination