CN115941604A - Flow distribution method, device, equipment, storage medium and program product - Google Patents


Info

Publication number
CN115941604A
Authority
CN
China
Prior art keywords
idle
server
servers
request message
application interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211424383.6A
Other languages
Chinese (zh)
Inventor
郭赫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202211424383.6A priority Critical patent/CN115941604A/en
Publication of CN115941604A publication Critical patent/CN115941604A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Abstract

The invention discloses a traffic distribution method, apparatus, device, storage medium and program product. In the method, an application interface corresponding to a target server receives a request message and assigns the traffic identifier in the request message to the identifier value corresponding to that application interface. A service management module receives the assigned request messages and, at set time intervals, counts the number of request messages carrying the same traffic identifier; when this number exceeds a first preset number, it generates a traffic distribution rule according to the idle servers in an idle state and the difference between the number and the first preset number, and sends the rule to a network proxy component. The network proxy component then routes the request messages that the application interface of the target server would receive in the next set time period to the corresponding application interface of an idle server according to the traffic distribution rule. The invention can balance access traffic among multiple servers and improve their operating efficiency.

Description

Flow distribution method, device, equipment, storage medium and program product
Technical Field
The present invention relates to the field of traffic distribution technologies, and in particular, to a traffic distribution method, apparatus, device, storage medium, and program product.
Background
In the prior art, when traffic is allocated, the application interface through which a service requester accesses a specific server is generally fixed in advance, which can cause the following problems: one server receives too much access traffic, possibly exceeding its load so that request messages are lost, while another server receives little traffic, wasting resources. Because access traffic cannot be balanced across multiple servers, overall operating efficiency is low.
Therefore, a traffic distribution method is needed that can balance access traffic among multiple servers and improve their operating efficiency.
Disclosure of Invention
An embodiment of the invention provides a traffic distribution method comprising the following steps:
an application interface corresponding to a target server receives a request message and assigns the traffic identifier in the request message to the identifier value corresponding to that application interface;
the target server processes the request message and sends the assigned request message to a service management module;
the service management module receives the assigned request messages and, at set time intervals, counts the number of request messages carrying the same traffic identifier; when the number is greater than a first preset number, it generates a traffic distribution rule according to the idle servers in an idle state and the difference between the number and the first preset number, and sends the traffic distribution rule to a network proxy component;
and the network proxy component routes the request messages that the application interface corresponding to the target server would receive in the next set time period to the corresponding application interface of an idle server according to the traffic distribution rule.
Preferably, after routing the request messages of the next set time period to the application interface of the idle server, the method further includes:
the service management module judges whether the number of messages remaining unprocessed at the application interface of the target server at the end of the next set time period is less than the first preset number;
if so, it cancels the traffic distribution rule for the target server and sends a cancellation instruction to the network proxy component;
and the network proxy component receives the cancellation instruction and, in the subsequent set time period, routes the request messages intended for the application interface of the target server back to that application interface.
Preferably, generating the traffic distribution rule according to the idle servers in the idle state and the difference between the number and the first preset number further includes:
the service management module obtains, for the application interfaces of all other servers, the current message number in the current set time period and the historical message number for the next set time period;
determining the idle servers among the other servers according to the current message number and the historical message number;
calculating the ratio of the difference between the number and the first preset number to the first preset number;
and comparing the ratio with a first reference ratio and a second reference ratio, and generating the traffic distribution rule according to the comparison result and the idle servers.
Preferably, determining the idle servers among the other servers according to the current message number and the historical message number further includes:
determining as idle servers those other servers whose current message number is below the first preset number and whose historical message number is below a second preset number, the second preset number being smaller than the first preset number.
Preferably, generating the traffic distribution rule according to the comparison result and the idle servers further includes:
if the ratio is smaller than the first reference ratio, the traffic distribution rule is that subsequent request messages are shared evenly between the target server and any one idle server;
if the ratio is greater than or equal to the first reference ratio and smaller than the second reference ratio, the traffic distribution rule assigns subsequent request messages to at least one idle server;
if the ratio is greater than or equal to the second reference ratio, the performance of all idle servers is compared, at least one selected idle server is determined from them according to performance, and the traffic distribution rule assigns subsequent request messages to the selected idle server.
Preferably, the traffic distribution rule under which subsequent request messages are shared evenly between the target server and any one idle server further includes:
according to the request time order of the subsequent request messages, assigning the earlier messages to the idle server and the later messages to the target server.
Preferably, comparing the performance of all idle servers and determining the selected idle server according to performance further includes:
calculating performance data for all idle servers from their current CPU occupancy, memory utilization and bandwidth occupancy;
and determining as the selected idle server any idle server whose performance data is higher than set data.
An embodiment of the present invention further provides a traffic distribution apparatus, including: a target server, a service management module, and a network proxy component;
the application interface corresponding to the target server receives a request message and assigns the traffic identifier in the request message to the identifier value corresponding to that application interface;
the target server processes the request message and sends the assigned request message to the service management module;
the service management module receives the assigned request messages and, at set time intervals, counts the number of request messages carrying the same traffic identifier; when the number is greater than a first preset number, it generates a traffic distribution rule according to the idle servers in an idle state and the difference between the number and the first preset number, and sends the traffic distribution rule to the network proxy component;
and the network proxy component routes the request messages that the application interface corresponding to the target server would receive in the next set time period to the corresponding application interface of an idle server according to the traffic distribution rule.
The embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the method when executing the computer program.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the above method.
An embodiment of the present invention further provides a computer program product, where the computer program product includes a computer program, and when the computer program is executed by a processor, the computer program implements the method described above.
In the embodiment of the invention, when an application interface of a server becomes overloaded, subsequent requests are routed to application interfaces on other idle servers that can provide the corresponding service, so that access traffic is distributed in a balanced manner and the overall operating efficiency of the system is improved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort. In the drawings:
fig. 1 is a schematic flow chart illustrating a traffic distribution method provided in an embodiment of the present disclosure;
fig. 2 is a schematic flowchart illustrating a process after routing a request packet of a next set time period to an application interface corresponding to an idle server according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart illustrating a process for generating a traffic distribution rule according to idle servers in an idle state and a difference between the number and a first preset number, provided in an embodiment herein;
fig. 4 is a flow chart illustrating a process of generating a traffic distribution rule according to a comparison result and an idle server provided in an embodiment of the present disclosure;
FIG. 5 is a flow diagram illustrating a process for comparing the performance of all idle servers and determining a selected idle server from among all idle servers based on the performance provided by an embodiment herein;
fig. 6 is a schematic block diagram illustrating a flow distribution device provided in an embodiment of the present disclosure;
fig. 7 shows a schematic structural diagram of a computer device provided in an embodiment herein.
Description of the symbols of the drawings:
100. a target server;
200. a service management module;
300. a network proxy component;
702. a computer device;
704. a processor;
706. a memory;
708. a drive mechanism;
710. an input/output module;
712. an input device;
714. an output device;
716. a presentation device;
718. a graphical user interface;
720. a network interface;
722. a communication link;
724. a communication bus.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
In the prior art, when traffic is allocated, the application interface through which a service requester accesses a specific server is generally fixed in advance, which can cause the following problems: one server receives too much access traffic, possibly exceeding its load so that request messages are lost, while another server receives little traffic, wasting resources. Because access traffic cannot be balanced across multiple servers, overall operating efficiency is low.
To solve the above problems, embodiments herein provide a traffic distribution method. Fig. 1 is a schematic diagram of the steps of the traffic distribution method provided in the embodiments herein. This specification presents the method operation steps as described in the embodiments or the flow chart, but more or fewer operation steps may be included without inventive effort. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution; when an actual system or apparatus executes, the steps may be performed sequentially or in parallel according to the method of the embodiments or drawings.
Referring to fig. 1, provided herein is a traffic distribution method including:
S101: an application interface corresponding to a target server receives a request message and assigns the traffic identifier in the request message to the identifier value corresponding to that application interface;
S102: the target server processes the request message and sends the assigned request message to a service management module;
S103: the service management module receives the assigned request messages and, at set time intervals, counts the number of request messages carrying the same traffic identifier; when the number is greater than a first preset number, it generates a traffic distribution rule according to the idle servers in an idle state and the difference between the number and the first preset number, and sends the traffic distribution rule to a network proxy component;
S104: and the network proxy component routes the request messages that the application interface corresponding to the target server would receive in the next set time period to the corresponding application interface of an idle server according to the traffic distribution rule.
Each server includes a plurality of application interfaces; each application interface can provide a different service function and corresponds to a unique identifier value. An application interface of the target server can receive a request message carrying a traffic identifier; after application interface a of target server A receives the request message, the traffic identifier in the message is assigned the identifier value corresponding to application interface a.
While processing the request message, the target server sends the assigned request message to the service management module. After parsing the request message, the service management module obtains its traffic identifier and performs statistics once every set time period — for example, within one set time period there are 5 request messages with identifier 1 and 10 request messages with identifier 2. When a counted number exceeds the first preset number — for example, the first preset number is 8 and the number of request messages with identifier 2 is 10 — it is determined that application interface b, whose identifier value is 2, is too busy, and traffic needs to be distributed away from it.
Specifically, the service management module generates a traffic distribution rule according to the idle servers and the difference of 2 between 10 and 8, and sends the rule to the network proxy component. According to the rule, the network proxy component routes the request messages that application interface b of the target server would receive in the next set time period to application interface b of an idle server for processing. It should be noted that application interface b of the target server and application interface b of the idle server provide the same service.
With this method, when an application interface of a server becomes overloaded, subsequent requests are routed to application interfaces on other idle servers that can provide the corresponding service, so that access traffic is distributed in a balanced manner and the overall operating efficiency of the system is improved.
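The per-identifier counting and overload detection described above can be expressed as a short sketch (not part of the patent text; the `traffic_id` field name and dictionary-based message representation are assumptions made for the example):

```python
from collections import Counter

def find_busy_interfaces(messages, first_preset_number):
    """Count request messages per traffic identifier within one set time
    period and return, for each identifier whose count exceeds the first
    preset number, the difference used to generate the distribution rule."""
    counts = Counter(msg["traffic_id"] for msg in messages)
    busy = {tid: count - first_preset_number
            for tid, count in counts.items()
            if count > first_preset_number}
    return counts, busy

# Example from the text: 5 messages carry identifier 1, 10 carry
# identifier 2, and the first preset number is 8, so interface 2 is
# overloaded with a difference of 2.
```

In the running example, only identifier 2 exceeds the threshold, and its difference of 2 is what the service management module later turns into a ratio.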
Referring to fig. 2, in this embodiment, after routing the request messages of the next set time period to the application interface of the idle server, the method further includes:
S201: the service management module judges whether the number of messages remaining unprocessed at the application interface of the target server at the end of the next set time period is less than the first preset number;
S202: if so, the traffic distribution rule is cancelled for the target server, and a cancellation instruction is sent to the network proxy component;
S203: and the network proxy component receives the cancellation instruction and, in the subsequent set time period, routes the request messages intended for the application interface of the target server back to that application interface.
Generally, each service requester has a corresponding server, with the correspondence preset in advance; for example, service requesters numbered 1-5 send requests to server A, and service requesters numbered 6-10 send requests to server B.
After traffic is distributed, in the next set time period the requests that service requester 2 would originally send to server A are routed to server B for processing. However, to keep overall operation and maintenance of the system orderly and prevent chaotic traffic distribution, it can further be judged whether the number of remaining unprocessed messages has dropped below the first preset number at the end of the next set time period; if so, the application interface of the target server is no longer busy, and the requests sent by service requester 2 in the subsequent set time period can be returned to server A for processing.
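A minimal sketch of this cancellation check follows (names are hypothetical; a simple routing table keyed by target interface is assumed):

```python
def update_routing(routing_table, target_interface, idle_interface,
                   remaining_unprocessed, first_preset_number):
    """At the end of the next set time period, route traffic back to the
    target interface once its backlog has dropped below the first preset
    number (S201-S203); otherwise keep the idle-server route in place."""
    if remaining_unprocessed < first_preset_number:
        routing_table[target_interface] = target_interface  # rule cancelled
    else:
        routing_table[target_interface] = idle_interface    # rule stays active
    return routing_table
```

With a backlog of 3 against a threshold of 8, traffic returns to the target server; with a backlog of 9, the idle-server route remains.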
Referring to fig. 3, in this embodiment, generating the traffic distribution rule according to the idle servers in the idle state and the difference between the number and the first preset number further includes:
S301: the service management module obtains, for the application interfaces of all other servers, the current message number in the current set time period and the historical message number for the next set time period;
S302: determining the idle servers among the other servers according to the current message number and the historical message number;
S303: calculating the ratio of the difference between the number and the first preset number to the first preset number;
S304: comparing the ratio with a first reference ratio and a second reference ratio, and generating the traffic distribution rule according to the comparison result and the idle servers.
For example, if the current set time period is 8:00-9:00 on May 3, 2022, the next set time period is 9:00-10:00 on May 3, 2022. The historical message number for the next set time period may be the number of messages from 9:00-10:00 on May 2, 2022, or from 9:00-10:00 on May 3, 2021; this is not limited herein.
After determining the idle servers among the other servers, the difference between the message number of the overloaded application interface of the target server and the first preset number can be calculated; continuing the example above, the difference of 2 between 10 and 8 is 25% of the first preset number 8. The 25% ratio is then compared with the first reference ratio and the second reference ratio, and the traffic distribution rule is generated according to the comparison result and the idle servers.
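The ratio computation of S303 is simple arithmetic; as an illustrative sketch (function name is an assumption, not from the patent):

```python
def overload_ratio(message_count, first_preset_number):
    """Ratio of the excess over the first preset number to that
    number (S303)."""
    return (message_count - first_preset_number) / first_preset_number

# Running example: 10 messages against a first preset number of 8
# gives (10 - 8) / 8 = 0.25, i.e. 25%.
```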
In this embodiment, determining the idle servers among the other servers according to the current message number and the historical message number further includes:
determining as idle servers those other servers whose current message number is below the first preset number and whose historical message number is below a second preset number, the second preset number being smaller than the first preset number.
There can be various criteria for determining an idle server, and the above is just one example. The second preset number is smaller than the first preset number so that the idle server is guaranteed to remain idle in the next set time period.
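This criterion can be sketched as a single predicate (an illustrative sketch under the assumptions above, not the patent's only criterion):

```python
def is_idle(current_count, historical_count,
            first_preset_number, second_preset_number):
    """A server counts as idle when its current-period message number is
    below the first preset number AND its historical message number for
    the next period is below the (stricter) second preset number."""
    if second_preset_number >= first_preset_number:
        raise ValueError("second preset number must be smaller than the first")
    return (current_count < first_preset_number
            and historical_count < second_preset_number)
```

For example, with a first preset number of 8 and a second preset number of 5, a server with 4 current messages and 2 historical messages is idle, while one with 4 current but 6 historical messages is not.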
Referring to fig. 4, in this embodiment, generating the traffic distribution rule according to the comparison result and the idle servers further includes:
S401: if the ratio is smaller than the first reference ratio, the traffic distribution rule is that subsequent request messages are shared evenly between the target server and any one idle server;
S402: if the ratio is greater than or equal to the first reference ratio and smaller than the second reference ratio, the traffic distribution rule assigns subsequent request messages to at least one idle server;
S403: if the ratio is greater than or equal to the second reference ratio, the performance of all idle servers is compared, at least one selected idle server is determined from them according to performance, and the traffic distribution rule assigns subsequent request messages to the selected idle server.
Here the first reference ratio is smaller than the second reference ratio. If the ratio is smaller than the first reference ratio, the target server is not very busy and will become idle after a period of time; in this case the traffic distribution rule shares subsequent request messages evenly between the target server and any one idle server. Further:
according to the request time order of the subsequent request messages, the earlier messages are assigned to the idle server and the later messages to the target server.
For example, if the set time period is 1 hour, request messages received within the first half hour are assigned to any one idle server for processing, and request messages received within the second half hour are assigned to the target server.
If the ratio is greater than or equal to the first reference ratio and smaller than the second reference ratio, the target server is relatively busy, and subsequent request messages can be assigned to at least one idle server.
If the ratio is greater than or equal to the second reference ratio, the target server is very busy; the best-performing idle server is selected as the selected idle server according to performance, and subsequent request messages are assigned to it.
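The three-way decision of S401-S403 can be sketched as follows (the rule labels and the concrete reference ratios in the example are assumptions; the patent does not fix their values):

```python
def choose_rule(ratio, first_reference_ratio, second_reference_ratio):
    """Map the overload ratio onto one of the three distribution rules
    (S401-S403); requires first_reference_ratio < second_reference_ratio."""
    if ratio < first_reference_ratio:
        return "share evenly between target and one idle server"   # S401
    if ratio < second_reference_ratio:
        return "assign to at least one idle server"                # S402
    return "assign to best-performing selected idle server"        # S403
```

With hypothetical reference ratios of 0.2 and 0.5, the example's 25% overload ratio falls into the middle branch (S402).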
Referring to fig. 5, comparing the performance of all idle servers and determining the selected idle server according to performance further includes:
S501: calculating performance data for all idle servers from their current CPU occupancy, memory utilization and bandwidth occupancy;
S502: determining as the selected idle server any idle server whose performance data is higher than set data.
The best-performing idle server is determined from the hardware performance of the idle servers. Specifically, the idle server with the lowest CPU occupancy, lowest memory utilization and lowest bandwidth occupancy may be selected from all idle servers as the selected idle server; alternatively, different weights may be set for CPU occupancy, memory utilization and bandwidth occupancy, and all idle servers evaluated under these weights to select the selected idle server.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) involved in this application are authorized by the user or fully authorized by all parties. In addition, the technical solutions described in the embodiments of this application comply with relevant national laws and regulations regarding data collection, storage, use and processing.
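The weighted variant of S501-S502 can be sketched as below; the specific weights (0.4/0.3/0.3) and the score formula are illustrative assumptions, since the patent only states that different weight ratios may be set:

```python
def performance_data(cpu_occupancy, memory_utilization, bandwidth_occupancy,
                     weights=(0.4, 0.3, 0.3)):
    """Combine the three occupancy rates (each in [0, 1]) into a single
    score; lower occupancy yields higher performance data (S501)."""
    w_cpu, w_mem, w_bw = weights
    return (w_cpu * (1 - cpu_occupancy)
            + w_mem * (1 - memory_utilization)
            + w_bw * (1 - bandwidth_occupancy))

def select_idle_servers(server_stats, set_data):
    """Return the idle servers whose performance data exceeds the set
    data threshold (S502)."""
    return [name for name, (cpu, mem, bw) in server_stats.items()
            if performance_data(cpu, mem, bw) > set_data]
```

For instance, a lightly loaded server (20% CPU, 30% memory, 10% bandwidth) scores well above a heavily loaded one and would be chosen against a set-data threshold of 0.5.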
Referring to fig. 6, an embodiment of the present invention further provides a traffic distribution apparatus, as described in the following embodiments. Because the principle by which the apparatus solves the problem is similar to that of the traffic distribution method, the implementation of the apparatus can refer to the implementation of the traffic distribution method, and repeated details are omitted.
The apparatus comprises: a target server 100, a service management module 200, and a network proxy component 300;
the application interface corresponding to the target server 100 receives a request message and assigns the traffic identifier in the request message to the identifier value corresponding to that application interface;
the target server 100 processes the request message and sends the assigned request message to the service management module 200;
the service management module 200 receives the assigned request messages and, at set time intervals, counts the number of request messages carrying the same traffic identifier; when the number is greater than a first preset number, it generates a traffic distribution rule according to the idle servers in an idle state and the difference between the number and the first preset number, and sends the traffic distribution rule to the network proxy component 300;
and the network proxy component 300 routes the request messages that the application interface of the target server would receive in the next set time period to the corresponding application interface of an idle server according to the traffic distribution rule.
Referring to fig. 7, a computer device 702 is further provided in an embodiment of the present disclosure based on a traffic distribution method described above, where the method is executed on the computer device 702. Computer device 702 may include one or more processors 704, such as one or more Central Processing Units (CPUs) or Graphics Processors (GPUs), each of which may implement one or more hardware threads. The computer device 702 may also include any memory 706 for storing any kind of information, such as code, settings, data, etc., and in a particular embodiment a computer program on the memory 706 and executable on the processor 704, which computer program when executed by the processor 704 may perform instructions according to the above-described method. For example, and without limitation, the memory 706 can include any one or more of the following in combination: any type of RAM, any type of ROM, flash memory devices, hard disks, optical disks, etc. More generally, any memory may use any technology to store information. Further, any memory may provide volatile or non-volatile retention of information. Further, any memory may represent fixed or removable components of computer device 702. In one case, when the processor 704 executes associated instructions that are stored in any memory or combination of memories, the computer device 702 can perform any of the operations of the associated instructions. The computer device 702 also includes one or more drive mechanisms 708, such as a hard disk drive mechanism, an optical disk drive mechanism, or the like, for interacting with any of the memories.
Computer device 702 can also include an input/output module 710 (I/O) for receiving various inputs (via input device 712) and for providing various outputs (via output device 714). One particular output mechanism may include a presentation device 716 and an associated graphical user interface 718 (GUI). In other embodiments, the input/output module 710 (I/O), input device 712, and output device 714 may be omitted, with the computer device acting simply as one device in a network. Computer device 702 can also include one or more network interfaces 720 for exchanging data with other devices via one or more communication links 722. One or more communication buses 724 couple the above-described components together.
Communication link 722 may be implemented in any manner, such as over a local area network, a wide area network (e.g., the Internet), a point-to-point connection, etc., or any combination thereof. Communication link 722 may include any combination of hardwired links, wireless links, routers, gateway functions, name servers, etc., as dictated by any protocol or combination of protocols.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the above method.
An embodiment of the present invention further provides a computer program product comprising a computer program which, when executed by a processor, implements the method described above.
It should be understood that, in the various embodiments herein, the sequence numbers of the above processes do not imply an order of execution; the order of execution should be determined by the function and internal logic of each process and does not limit the implementation of the embodiments herein.
It should also be understood that, in the embodiments herein, the term "and/or" describes an association between related objects and indicates that three relationships are possible. For example, "A and/or B" may mean that A exists alone, that both A and B exist, or that B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it.
Those of ordinary skill in the art will appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the components and steps of the various examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present disclosure.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided herein, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical division, and other divisions are possible in practice; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or of another form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple network nodes. Some or all of the units may be selected according to actual needs to achieve the purposes of the embodiments herein.
In addition, the functional units in the embodiments herein may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented as a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The principles and embodiments of this document have been explained herein using specific examples, which are presented only to aid understanding of the methods and their core concepts. Meanwhile, those of ordinary skill in the art may, following the ideas of this document, make changes to the specific implementations and the scope of application. In summary, this description should not be understood as limiting this document.

Claims (11)

1. A method for distributing traffic, comprising:
an application interface corresponding to a target server receives a request message and sets a flow identifier in the request message to an identifier value corresponding to the application interface of the target server;
the target server processes the request message and sends the assigned request message to a service management module;
the service management module receives the assigned request messages, counts, at set time intervals, the number of request messages carrying the same flow identifier, and, when the number is greater than a first preset number, generates a traffic distribution rule according to idle servers that are in an idle state and the difference between the number and the first preset number, and sends the traffic distribution rule to a network agent component;
and the network agent component routes request messages received by the application interface corresponding to the target server within the next set time period to an application interface corresponding to an idle server according to the traffic distribution rule.
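Read as an algorithm, the interval-based counting and rule generation of claim 1 can be sketched as follows; the class and field names (`ServiceManager`, `flow_id`, `route_to`) and the threshold value are illustrative assumptions, not details fixed by the claims.

```python
from collections import Counter

# Illustrative value; the claims only name a "first preset number"
# without fixing it.
FIRST_PRESET_NUMBER = 100

class ServiceManager:
    """Counts assigned request messages per flow identifier in each set
    time interval and emits a traffic distribution rule when a flow's
    count exceeds the first preset number."""

    def __init__(self, idle_servers):
        self.counts = Counter()
        self.idle_servers = idle_servers  # servers currently in an idle state

    def receive(self, message):
        # Each message carries the flow identifier assigned by the
        # target server's application interface.
        self.counts[message["flow_id"]] += 1

    def end_of_interval(self):
        """Called once per set time interval; returns the rules to send
        to the network agent component."""
        rules = []
        for flow_id, count in self.counts.items():
            if count > FIRST_PRESET_NUMBER:
                overflow = count - FIRST_PRESET_NUMBER
                rules.append({
                    "flow_id": flow_id,
                    "overflow": overflow,
                    "route_to": [s["name"] for s in self.idle_servers],
                })
        self.counts.clear()  # start the next interval from zero
        return rules
```

A usage sketch: the network agent component would consume the returned rules and, for the next set time period, divert the matching flow's requests to the listed idle servers.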
2. The traffic distribution method according to claim 1, wherein after routing the request messages of the next set time period to the application interface corresponding to the idle server, the method further comprises:
the service management module determines whether the number of unprocessed messages remaining at the application interface corresponding to the target server at the end of the next set time period is less than the first preset number;
if so, cancels the use of the traffic distribution rule for the target server and sends a cancellation instruction to the network agent component;
and the network agent component receives the cancellation instruction and routes request messages received by the application interface corresponding to the target server in subsequent set time periods back to the application interface corresponding to the target server.
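The end-of-period check in claim 2 amounts to a simple threshold test. In this sketch, `maybe_cancel_rule` and the `proxy` object standing in for the network agent component are hypothetical names used only for illustration.

```python
def maybe_cancel_rule(remaining_unprocessed, first_preset, proxy):
    """If the target server's application interface has fewer remaining
    unprocessed messages than the first preset number at the end of the
    period, stop diverting its traffic (claim 2)."""
    if remaining_unprocessed < first_preset:
        proxy.cancel_rule()  # network agent component stops applying the rule
        return True          # later requests route back to the target server
    return False
```

Any object exposing a `cancel_rule()` method can play the role of the network agent component here; the function itself carries no state.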
3. The traffic distribution method according to claim 1, wherein generating the traffic distribution rule according to the idle servers in the idle state and the difference between the number and the first preset number further comprises:
the service management module obtains, for the application interfaces corresponding to all other servers, the current message number in the current set time period and the historical message number for the next set time period;
determining the idle servers among the other servers according to the current message number and the historical message number;
calculating the ratio of the difference between the number and the first preset number to the first preset number;
and comparing the ratio with a first reference ratio and a second reference ratio, and generating the traffic distribution rule according to the comparison result and the idle servers.
4. The traffic distribution method according to claim 3, wherein determining the idle servers among the other servers according to the current message number and the historical message number further comprises:
determining the other servers whose current message number is lower than the first preset number and whose historical message number is lower than a second preset number as the idle servers, wherein the second preset number is smaller than the first preset number.
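Claim 4's idle-server test combines the two counts with two thresholds. The dictionary layout of a server record below is an assumption made only for illustration.

```python
def find_idle_servers(servers, first_preset, second_preset):
    """A server is idle when its current message number is below the first
    preset number AND its historical message number (for the next set time
    period) is below the second, smaller preset number (claim 4)."""
    assert second_preset < first_preset  # relation required by claim 4
    return [s for s in servers
            if s["current_msgs"] < first_preset
            and s["historical_msgs"] < second_preset]
```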
5. The traffic distribution method according to claim 3, wherein generating the traffic distribution rule according to the comparison result and the idle servers further comprises:
if the ratio is smaller than the first reference ratio, the traffic distribution rule is that the target server and any one idle server evenly share the subsequent request messages;
if the ratio is greater than or equal to the first reference ratio and smaller than the second reference ratio, the traffic distribution rule is that the subsequent request messages are allocated to at least one idle server;
and if the ratio is greater than or equal to the second reference ratio, comparing the performance of all idle servers, determining at least one selected idle server from all idle servers according to the performance, and allocating the subsequent request messages to the selected idle server according to the traffic distribution rule.
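The three-way comparison in claim 5 can be sketched as below; the reference-ratio values, the rule encoding, and the `perf` field (anticipating the performance data of claim 7) are illustrative assumptions.

```python
def make_rule(count, first_preset, ref_ratio_1, ref_ratio_2,
              target_server, idle_servers):
    """Choose a traffic distribution rule from the overload ratio
    (count - first_preset) / first_preset, per claim 5."""
    ratio = (count - first_preset) / first_preset
    if ratio < ref_ratio_1:
        # Mild overload: target server and any one idle server share evenly.
        return {"mode": "split_even",
                "servers": [target_server, idle_servers[0]]}
    if ratio < ref_ratio_2:
        # Moderate overload: divert to at least one idle server.
        return {"mode": "offload", "servers": idle_servers[:1]}
    # Severe overload: prefer the best-performing idle server(s).
    best_first = sorted(idle_servers, key=lambda s: s["perf"], reverse=True)
    return {"mode": "offload_best", "servers": best_first[:1]}
```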
6. The traffic distribution method according to claim 5, wherein the traffic distribution rule in which the target server and any one idle server evenly share the subsequent request messages further comprises:
allocating the subsequent request messages, in order of their request times, alternately to the idle server first and then to the target server.
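Claim 6 refines the even split of claim 5: subsequent requests are taken in request-time order and handed to the idle server first, then the target server. The alternating reading of "first ... then", and the message fields below, are assumptions made to keep the sketch concrete.

```python
def alternate_allocate(messages, idle_server, target_server):
    """Allocate messages in request-time order, starting with the idle
    server and alternating with the target server, so that the two share
    the load evenly (one reading of claim 6)."""
    ordered = sorted(messages, key=lambda m: m["t"])  # request-time order
    return [(m["id"], idle_server if i % 2 == 0 else target_server)
            for i, m in enumerate(ordered)]
```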
7. The traffic distribution method according to claim 5, wherein comparing the performance of all idle servers and determining the selected idle server from all idle servers according to the performance further comprises:
calculating performance data for all idle servers according to each idle server's current CPU occupancy rate, memory usage rate, and bandwidth occupancy rate;
and determining the idle servers whose performance data is higher than set data as the selected idle servers.
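Claim 7 derives performance data from three utilisation figures but does not fix the formula; the weighted-sum combination and the weights below are assumptions chosen only to make the sketch concrete.

```python
def performance_score(cpu_occupancy, memory_usage, bandwidth_occupancy,
                      weights=(0.4, 0.3, 0.3)):
    """Combine the three utilisation figures named in claim 7 into one
    score; higher score = more spare capacity (assumed weighted-sum form)."""
    load = (weights[0] * cpu_occupancy
            + weights[1] * memory_usage
            + weights[2] * bandwidth_occupancy)
    return 1.0 - load

def select_idle_servers(idle_servers, set_data):
    """Keep the idle servers whose performance data exceeds the set data."""
    return [s for s in idle_servers
            if performance_score(s["cpu"], s["mem"], s["bw"]) > set_data]
```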
8. A traffic distribution apparatus, comprising: a target server, a service management module and a network agent component; wherein
an application interface corresponding to the target server receives a request message and sets a flow identifier in the request message to an identifier value corresponding to the application interface of the target server;
the target server processes the request message and sends the assigned request message to the service management module;
the service management module receives the assigned request messages, counts, at set time intervals, the number of request messages carrying the same flow identifier, and, when the number is greater than a first preset number, generates a traffic distribution rule according to idle servers that are in an idle state and the difference between the number and the first preset number, and sends the traffic distribution rule to the network agent component;
and the network agent component routes request messages received by the application interface corresponding to the target server within the next set time period to an application interface corresponding to an idle server according to the traffic distribution rule.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any of claims 1 to 7.
11. A computer program product, characterized in that the computer program product comprises a computer program which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
CN202211424383.6A 2022-11-15 2022-11-15 Flow distribution method, device, equipment, storage medium and program product Pending CN115941604A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211424383.6A CN115941604A (en) 2022-11-15 2022-11-15 Flow distribution method, device, equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211424383.6A CN115941604A (en) 2022-11-15 2022-11-15 Flow distribution method, device, equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN115941604A true CN115941604A (en) 2023-04-07

Family

ID=86698404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211424383.6A Pending CN115941604A (en) 2022-11-15 2022-11-15 Flow distribution method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN115941604A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579694A (en) * 2024-01-15 2024-02-20 国网浙江省电力有限公司宁波供电公司 Ubiquitous power internet of things-based data sharing management method and system
CN117579694B (en) * 2024-01-15 2024-04-16 国网浙江省电力有限公司宁波供电公司 Ubiquitous power internet of things-based data sharing management method and system

Similar Documents

Publication Publication Date Title
CN109218355B (en) Load balancing engine, client, distributed computing system and load balancing method
CN109618002B (en) Micro-service gateway optimization method, device and storage medium
EP3553657A1 (en) Method and device for allocating distributed system task
KR100383381B1 (en) A Method and Apparatus for Client Managed Flow Control on a Limited Memory Computer System
JP4739272B2 (en) Load distribution apparatus, virtual server management system, load distribution method, and load distribution program
CN109672711B (en) Reverse proxy server Nginx-based http request processing method and system
CN109933431B (en) Intelligent client load balancing method and system
JP5840301B2 (en) System and method for performing cloud-based centralized overload control of service components via explicit or virtualized machine-to-machine gate (M2M) way elements
CN104243405A (en) Request processing method, device and system
US20200050479A1 (en) Blockchain network and task scheduling method therefor
CN108933829A (en) A kind of load-balancing method and device
CN100489791C (en) Method and system for local authority partitioning of client resources
JPWO2018220708A1 (en) Resource allocation system, management device, method and program
JP2005310120A (en) Computer system, and task assigning method
CN115941604A (en) Flow distribution method, device, equipment, storage medium and program product
JP4834622B2 (en) Business process operation management system, method, process operation management apparatus and program thereof
EP2863597B1 (en) Computer-implemented method, computer system, computer program product to manage traffic in a network
CN109413117B (en) Distributed data calculation method, device, server and computer storage medium
KR20170014804A (en) Virtual machine provisioning system and method for cloud service
CN113268329A (en) Request scheduling method, device and storage medium
CN109086128B (en) Task scheduling method and device
CN112685167A (en) Resource using method, electronic device and computer program product
CN109670691A (en) Method, equipment and the customer service system distributed for customer service queue management and customer service
CN115242718A (en) Cluster current limiting method, device, equipment and medium
CN110046040B (en) Distributed task processing method and system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination