CN110933139A - System and method for solving high concurrency of Web server - Google Patents

System and method for solving high concurrency of Web server

Info

Publication number
CN110933139A
CN110933139A (application CN201911069556.5A)
Authority
CN
China
Prior art keywords: server, weight, nginx, utilization rate, node
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number
CN201911069556.5A
Other languages
Chinese (zh)
Inventor
彭宏
张茵
孟利民
卢为党
吴涛
Current Assignee (the listed assignees may be inaccurate): Zhejiang University of Technology ZJUT
Original Assignee: Zhejiang University of Technology ZJUT
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201911069556.5A
Publication of CN110933139A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services

Abstract

A system for solving high concurrency of a Web server, in which the database server consists of a Redis database and a MySQL database. The MySQL database serves as the container for persistent data; the Redis database serves as a MySQL cache: frequently used user data is cached in the Redis database, and the data in Redis is then synchronized into the MySQL database. The Nginx server periodically collects load information from each service server node at a set time interval; the Web server cluster processes the different requests of clients; the model training server is dedicated to calculating the real-time weight of each service server. A method for solving high concurrency of a Web server is also provided. By running the load balancing algorithm on an independent node, the invention reduces the computational load on the Nginx server and solves the problem of Web server overload under high-concurrency conditions.

Description

System and method for solving high concurrency of Web server
Technical Field
The invention relates to the technical field of internet, in particular to a system and a method for solving high concurrency of a Web server.
Background
With the rapid development of internet technology, user expectations keep rising, network applications have diversified, and the internet penetration rate has greatly increased. In January 2017, the China Internet Network Information Center published its "Statistical Report on Internet Development in China," which showed that by December 2016 the number of Chinese internet users had reached 731 million, roughly the total population of Europe, and that the internet penetration rate had passed one half. A growing and increasingly diverse user base poses greater challenges to the internet, and internet servers must be upgraded to serve more users.
The turnover of Tmall's 2018 Double Eleven shopping festival reached roughly two hundred billion yuan, and the server traffic on that day was enormous. How to handle such huge, bursty request volumes and the complex business logic behind different requests, so that the Web server can sustain high-concurrency responses and provide stable, reliable service, has become a problem that design developers cannot ignore.
Disclosure of Invention
To solve the above problems, the present invention provides a system and a method for solving high concurrency of a Web server. Starting from the weights of the Web servers, it updates the weights by periodically obtaining load information from the back-end servers, thereby distributing tasks effectively across the back-end server nodes and solving the problem of server overload caused by uneven task allocation.
The purpose of the invention can be realized by the following technical scheme:
a system for solving high concurrency of Web servers comprises a database server, a Nginx server, a Web server cluster and a model training server;
the database server consists of a MySQL database and a Redis database, wherein the MySQL database is used as a container of persistent data, the Redis database is used as a cache of the MySQL database, commonly used data of a user is cached in the Redis database, and then the data in the Redis database is synchronized into the MySQL database; when a client submits a request to a server through a forward proxy, firstly searching data cached in Redis, if the data are found, returning a corresponding result, if the data are not found, searching the MySQL database, returning a searched result and updating the Redis database;
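The cache-aside flow described above can be sketched as follows. This is a minimal Python simulation: the in-memory dicts stand in for a real Redis client and MySQL connection, and the class and field names are illustrative, not from the patent.

```python
# Hypothetical sketch of the cache-aside pattern: Redis is consulted first,
# MySQL is the fallback, and a miss refreshes the cache.

class CacheAsideStore:
    def __init__(self):
        self.redis = {}   # stands in for the Redis cache
        self.mysql = {}   # stands in for the persistent MySQL store
        self.cache_hits = 0
        self.cache_misses = 0

    def put(self, key, value):
        # Persist to MySQL; the cache is populated lazily on read.
        self.mysql[key] = value

    def get(self, key):
        # 1) Look in the Redis cache first.
        if key in self.redis:
            self.cache_hits += 1
            return self.redis[key]
        # 2) On a miss, fall back to MySQL and refresh the cache.
        self.cache_misses += 1
        value = self.mysql.get(key)
        if value is not None:
            self.redis[key] = value
        return value

store = CacheAsideStore()
store.put("user:42", {"name": "alice"})
assert store.get("user:42") == {"name": "alice"}   # miss: served from MySQL, cache filled
assert store.get("user:42") == {"name": "alice"}   # hit: served from Redis
assert store.cache_misses == 1 and store.cache_hits == 1
```

With a real deployment the same structure applies, with `redis-py` and a MySQL driver replacing the dicts; write-behind synchronization from Redis to MySQL, as the patent describes, would run separately.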
the Nginx server periodically collects load information of each service server node by taking set time as a reference, wherein the load information is the CPU utilization rate, the memory utilization rate, the disk IO utilization rate and the network bandwidth utilization rate of the service server;
the Web server cluster consists of a plurality of service servers and is used for processing different requests of the client;
the model training server is used for calculating real-time weights of all the service servers;
the Nginx server sends the load information to the model training server, and the model training server calculates the weight of each Web server at the current time according to a preset dynamic load balancing algorithm.
Further, the load balancing algorithm dynamically adjusts the weights on the basis of the Nginx weighted round robin algorithm; it improves on weighted round robin by periodically collecting the load information of each service server node at a set time interval and continuously updating the weight of each node.
The Nginx server also sends the collected parameter information of the service servers to the model training server, which computes weights from the received server load information. To avoid the resource waste caused by frequent weight updates, a threshold is added to the load balancing algorithm: the weights are updated only when the computed value for the server nodes exceeds the threshold; otherwise the weights are not updated;
Suppose U(S_i) represents the weight currently consumed by the i-th server node in the load cluster, i.e. the resource utilization of that node, i ∈ (1, ..., n), and

\bar{U} = \frac{1}{n}\sum_{i=1}^{n} U(S_i)

is the average resource utilization over the server nodes in the load cluster. With T_1 the preset threshold, the standard deviation of node resource utilization is

S_u = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(U(S_i) - \bar{U}\right)^2}

If the S_u computed for the nodes is greater than T_1, the Nginx server updates the node weights.
The high-concurrency access processing system adds an independent node serving as the model training server to share the working pressure of Nginx: the Nginx server sends the collected load information of the service servers to the model training server, the model training server carries out the algorithm computation and the weight-update judgment, and the obtained result is sent back to the Nginx server.
In the Web high-concurrency access processing system, the Nginx server sends the collected server load information to the model training server at one time, which reduces the number of communications among the internal servers of the system and prevents network congestion from affecting normal service.
In the Web high-concurrency access processing system, when analyzing the load information of the service servers, the CPU utilization, memory utilization, disk IO utilization and network bandwidth utilization are selected for calculating the weight of each service server, so as to avoid collecting so many parameters that the extra resource overhead outweighs the optimization gained.
In the dynamic load balancing algorithm, suppose S_i is the i-th server node, i ∈ (1, ..., n), and C(S_i), M(S_i), D(S_i), W(S_i) respectively denote the CPU, memory, disk IO and network bandwidth performance of each service server. Let P denote a performance index of the service servers, and let P_c(Total), P_m(Total), P_d(Total), P_w(Total) respectively denote the sums of the CPU, memory, disk IO and network bandwidth performance over all cluster nodes:

P_c(Total) = \sum_{i=1}^{n} C(S_i)

P_m(Total) = \sum_{i=1}^{n} M(S_i)

P_d(Total) = \sum_{i=1}^{n} D(S_i)

P_w(Total) = \sum_{i=1}^{n} W(S_i)
in order to obtain the real performance proportion W of each service serverp(Si) The load balancing algorithm compares the CPU performance of each server node with the proportion Pc(Si) Memory performance specific gravity Pm(Si) Disk IO Performance specific gravity Pd(Si) And performance specific gravity P of network bandwidthw(Si) Dividing the result by the sum of the performances of all the nodes in the four aspects, and finally multiplying the result by a specific weight coefficient of different performances in the whole system, wherein the coefficient of the specific weight of the CPU performance is KcCoefficient of memory performance specific gravity is KmThe coefficient of the specific gravity of the IO performance of the magnetic disk is KdThe coefficient of the network bandwidth performance proportion is KwAnd K isc+Km+Kd+Kw1. To make Wp(Si) For an integer, the algorithm adjusts W by a constant Ap(Si) The formula is as follows:
Figure BDA0002260514180000051
Suppose U_c(S_i), U_m(S_i), U_d(S_i), U_w(S_i) are respectively the real-time CPU, memory, disk IO and network bandwidth usage of the i-th server node. Multiplying each by its proportion coefficient gives the node's resource utilization U(S_i):

U(S_i) = K_c U_c(S_i) + K_m U_m(S_i) + K_d U_d(S_i) + K_w U_w(S_i)
The weight already consumed by the node, W_L(S_i), is then obtained by multiplying the resource utilization U(S_i) by the original performance weight W_p(S_i):

W_L(S_i) = U(S_i) \cdot W_p(S_i)
the above formula is a formula for calculating the dynamic weight in the load balancing algorithm;
Because frequently updating the weight of each service server node can introduce significant overhead, a threshold is introduced into the load balancing algorithm for dynamically adjusting the weights: the standard deviation of the servers' resource utilization is used as the evaluation metric, and the weights are modified only when it is higher than the set threshold; otherwise the current weights are kept, avoiding unnecessary resource waste;
Assuming the server cluster has n nodes in total, with U(S_i) the resource utilization of each service server node and

\bar{U} = \frac{1}{n}\sum_{i=1}^{n} U(S_i)

the average resource utilization over the nodes, the standard deviation S_u of the resource utilization is:

S_u = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(U(S_i) - \bar{U}\right)^2}
Assume a threshold T_1. When the computed standard deviation S_u of the resource utilization is greater than the threshold T_1, the node weights are updated; otherwise the weights are not updated;
the above formula is a formula for calculating the standard deviation of the resource utilization rate of the server node in the load balancing algorithm.
A method of resolving high concurrency in a Web server, comprising the steps of:
S1. After the Nginx server starts, it performs global initialization by reading the configuration file, and sets an initial weight for each service server according to the relevant parameter information;
S2. When a client initiates a request to the server, the Nginx server sends the request to the service server with the lightest load, thereby avoiding server overload;
S3. The Nginx server periodically obtains the CPU utilization, memory utilization, disk IO utilization and network bandwidth utilization of each service server at a set time interval, and sends this information to the model training server;
S4. After receiving the information sent by the Nginx server, the model training server calculates the real-time weight of each service server node according to the preset algorithm, compares it with the set threshold, judges whether the server's current weight needs to be updated, and finally sends the judgment result and the new weight to the Nginx server;
S5. The Nginx server processes the weight of each service server according to the feedback from the model training server.
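Steps S1 to S5 above can be condensed into one update cycle. This is a hedged sketch: the function names, the inverse-utilization recomputation and the threshold value are illustrative assumptions, not the patent's exact procedure.

```python
# One S1-S5 cycle: the Nginx side holds current weights, forwards the periodic
# load report at one time (S3), and applies the trainer's feedback (S5).

def trainer_feedback(report, weights, t1):
    """S4: compare the utilization stddev with T1 and return (update?, weights)."""
    utils = [r["util"] for r in report]
    mean = sum(utils) / len(utils)
    stddev = (sum((u - mean) ** 2 for u in utils) / len(utils)) ** 0.5
    if stddev <= t1:
        return False, weights                        # below threshold: keep weights
    # Illustrative recomputation: weight inversely related to utilization.
    return True, [round(100 * (1 - u)) for u in utils]

def nginx_cycle(report, weights, t1=0.1):
    """S3 + S5: send the load report, then process the trainer's feedback."""
    _updated, new_weights = trainer_feedback(report, weights, t1)
    return new_weights

weights = [50, 50, 50]                               # S1: initial weights
even = [{"util": 0.5}, {"util": 0.52}, {"util": 0.48}]
skew = [{"util": 0.9}, {"util": 0.2}, {"util": 0.4}]
assert nginx_cycle(even, weights) == [50, 50, 50]    # stddev under T1: unchanged
assert nginx_cycle(skew, weights) == [10, 80, 60]    # stddev over T1: recomputed
```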
The invention has the beneficial effects that:
(1) a model training server is added into the system and is specially used for running an algorithm of weight calculation, so that the calculation consumption of the Nginx load balancer is reduced, and the pressure resistance of the Nginx server is improved.
(2) In the system, the Nginx server periodically acquires relevant parameter information of each service server node by taking a set time as a reference, and transmits data information to the model training server at one time, so that the internal communication times of the server are reduced, and normal services are prevented from being influenced by network congestion.
(3) An optimization idea is introduced into a load balancing algorithm for dynamically adjusting the weight, a threshold concept is added, when the new weight of each server node exceeds a threshold value, the current weight is updated, otherwise, the current weight is not updated. The addition of the threshold value can avoid frequent updating of the weight, which causes excessive overhead.
Drawings
FIG. 1 is a system framework diagram consisting of a database server, a Nginx server, a model training server, and a plurality of Web servers of the present invention.
Fig. 2 is a flowchart of the Nginx server processing a client dynamic request.
FIG. 3 is a flow chart of a load algorithm to dynamically adjust weights.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 to 3, a system for solving high concurrency of a Web server includes a database server, a Nginx server, a Web server cluster, and a model training server.
The database server comprises a MySQL database and a Redis database. The MySQL database serves as the container for persistent data, and the Redis database serves as a shared data pool and data buffer pool: frequently used user data is cached in the in-memory Redis database and then synchronized into the MySQL database, which speeds up data access and queries;
the Nginx server periodically acquires the load information of each service server by taking a set time as a reference and transmits the load information to the model training server; the load information is the CPU utilization rate, the memory utilization rate, the disk IO utilization rate and the network bandwidth utilization rate of the service server;
the Web server cluster is realized by configuring a plurality of ports;
The model training server calculates dynamic weights from the received data, compares the computed result with the set threshold, judges whether the weights need to be updated, and finally returns the result to the Nginx server; the model training server calculates the weight of each Web server at the current time according to the preset load balancing algorithm.
Further, the load balancing algorithm dynamically adjusts the weights on the basis of the Nginx weighted round robin algorithm; it improves on weighted round robin by periodically collecting the load information of each service server node at a set time interval and continuously updating the weight of each node.
The Nginx server also sends the collected parameter information of the service servers to the model training server, which computes weights from the received server load information. To avoid the resource waste caused by frequent weight updates, a threshold is added to the load balancing algorithm: the weights are updated only when the computed value for the server nodes exceeds the threshold; otherwise the weights are not updated;
Suppose U(S_i) represents the weight currently consumed by the i-th server node in the load cluster, i.e. the resource utilization of that node, i ∈ (1, ..., n), and

\bar{U} = \frac{1}{n}\sum_{i=1}^{n} U(S_i)

is the average resource utilization over the server nodes in the load cluster. With T_1 the preset threshold, the standard deviation of node resource utilization is

S_u = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(U(S_i) - \bar{U}\right)^2}

If the S_u computed for the nodes is greater than T_1, the Nginx server updates the node weights.
The high-concurrency access processing system adds an independent node serving as the model training server to share the working pressure of Nginx: the Nginx server sends the collected load information of the service servers to the model training server, the model training server carries out the algorithm computation and the weight-update judgment, and the obtained result is sent back to the Nginx server.
In the Web high-concurrency access processing system, the Nginx server sends the collected server load information to the model training server at one time, which reduces the number of communications among the internal servers of the system and prevents network congestion from affecting normal service.
In the Web high-concurrency access processing system, when analyzing the load information of the service servers, the CPU utilization, memory utilization, disk IO utilization and network bandwidth utilization are selected for calculating the weight of each service server, so as to avoid collecting so many parameters that the extra resource overhead outweighs the optimization gained.
The steps of the load balancing algorithm are as follows:
S1. When the client initiates a request to the server, the Nginx server processes the request: it first queries the data in the Redis server and, if a result is found, returns the data to the client; if there is no result, it queries the data in the MySQL database.
S2. When Nginx receives a dynamic request from the client, it distributes tasks according to the current weight of each service server and sends the request to the service server node with the largest weight, i.e. the strongest processing capability, ensuring that the request can be processed effectively.
S3. After the Nginx server starts, it periodically reads the load information of each service server and sends the data to the model training server at one time; the model training server performs the weight calculation on the received load information and sends the processing result back to the Nginx server.
S4. After receiving the feedback from the model training server, the Nginx server performs the corresponding processing according to the preset load balancing algorithm for dynamically adjusting weights; if the feedback requires updating the current weight of a service server node, that node's weight is reassigned.
In step S2, the flow by which Nginx processes a client request is shown in Fig. 2. After a client initiates a dynamic request to the server, the Nginx server first compares the weights of the nodes in the server cluster and sends the request to the node with the largest weight. If the request is successfully delivered to that node, the result required by the client is returned; if delivery fails and the number of repeated failures exceeds 20, the Nginx server stops sending the request to that server.
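The dispatch-and-retry flow of Fig. 2 can be sketched as follows; `send` here is a stand-in for the real proxying, and the function names and response strings are illustrative.

```python
# Sketch of the Fig. 2 flow: pick the node with the largest weight and give up
# after 20 failed delivery attempts.

MAX_RETRIES = 20

def dispatch(weights, send):
    """Send to the max-weight node; stop after MAX_RETRIES failures."""
    target = max(range(len(weights)), key=lambda i: weights[i])
    for _ in range(MAX_RETRIES):
        result = send(target)
        if result is not None:
            return result          # success: return the node's response
    return None                    # too many failures: stop sending

# A flaky backend that succeeds on the third attempt.
attempts = {"n": 0}
def flaky_send(node):
    attempts["n"] += 1
    return f"response-from-{node}" if attempts["n"] >= 3 else None

assert dispatch([10, 30, 20], flaky_send) == "response-from-1"
assert attempts["n"] == 3
```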
In step S4, the load balancing algorithm for dynamically adjusting the weights is shown in Fig. 3; it is based on the weighted round robin algorithm. On top of the original Nginx weighted round robin, the new algorithm reassigns the load balancing weight of each service server node according to its running state, preventing individual nodes from overloading. Meanwhile, to avoid the resource waste caused by frequent weight updates, a threshold is introduced: the Nginx server updates a node's weight only when the standard deviation of the service server nodes' resource utilization is higher than the set threshold.
Suppose U(S_i) represents the weight currently consumed by the i-th server node in the load cluster, i.e. the resource utilization of that node, i ∈ (1, ..., n), and

\bar{U} = \frac{1}{n}\sum_{i=1}^{n} U(S_i)

is the average resource utilization over the server nodes in the load cluster. With T_1 the preset threshold, the standard deviation of node resource utilization is

S_u = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(U(S_i) - \bar{U}\right)^2}

If the S_u computed for the nodes is greater than T_1, the Nginx server updates the node weights.
The implementation steps of the load balancing algorithm are as follows:
(1) Initialize the Nginx server and assign an initial weight to each service server node;
(2) After receiving a client request, the Nginx server sends the request to the service server with the largest weight;
(3) The Nginx server sends the periodically read load information of the service servers to the model training server; the model training server runs the load balancing algorithm on the received data and sends feedback to the Nginx server;
(4) After receiving the feedback, Nginx processes the weight of each service server accordingly.

Claims (7)

1. A system for solving high concurrency of Web servers is characterized by comprising a database server, a Nginx front-end server, a model training server and a Web server cluster;
the database server consists of a MySQL database and a Redis database; the MySQL database is used as a container of persistent data; the Redis database is used as MySQL cache, commonly used data of a user is cached in the Redis database, and then the data in the Redis database is synchronized into the MySQL database; when a client submits a request to a server through a forward proxy, firstly, data cached in a Redis database is inquired, if a result is not inquired, a MySQL database is inquired, the inquired result is returned, and the Redis database is updated;
the Nginx server periodically collects load information of each service server node by taking set time as a reference, wherein the load information is the CPU utilization rate, the memory utilization rate, the disk IO utilization rate and the network bandwidth utilization rate of the service server;
the Web server cluster consists of a plurality of service servers and is used for processing different requests of the client;
the model training server is specially arranged for calculating the real-time weight of each service server;
the Nginx server sends the load information to the model training server, and the model training server calculates the weight of each Web server at the current time according to a preset load balancing algorithm.
2. The system for resolving high concurrency for Web servers of claim 1, wherein: the load balancing algorithm dynamically adjusts the weights on the basis of the Nginx weighted round robin algorithm; it improves on weighted round robin by periodically collecting the load information of each service server node at a set time interval and continuously updating the weight of each node.
3. The system for resolving high concurrency for Web servers of claim 2, wherein: the Nginx server also sends the collected parameter information of the service servers to the model training server, which computes weights from the received server load information; to avoid the resource waste caused by frequent weight updates, a threshold is added to the load balancing algorithm, and the weights are updated only when the computed value for the server nodes exceeds the threshold; otherwise the weights are not updated;
suppose U(S_i) represents the weight currently consumed by the i-th server node in the load cluster, i.e. the resource utilization of that node, i ∈ (1, ..., n), and

\bar{U} = \frac{1}{n}\sum_{i=1}^{n} U(S_i)

is the average resource utilization over the server nodes in the load cluster; with T_1 the preset threshold, the standard deviation of node resource utilization is

S_u = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(U(S_i) - \bar{U}\right)^2}

and if the computed S_u is greater than T_1, the Nginx server updates the node weights.
4. A system for resolving high concurrency for Web servers as recited in claim 3, wherein: the high-concurrency access processing system adds an independent node serving as the model training server to share the working pressure of Nginx; the Nginx server sends the obtained load information of the service servers to the model training server, the model training server carries out the algorithm operation and weight-update judgment, and the obtained result is sent back to the Nginx server.
5. The system for resolving high concurrency for Web servers of claim 4, wherein: in the Web high-concurrency access processing system, the Nginx server sends the collected server load information to the model training server at one time, which reduces the number of communications among the internal servers of the system and prevents network congestion from affecting normal service.
6. The system for resolving high concurrency for Web servers of claim 5, wherein: in the Web high-concurrency access processing system, when analyzing the load information of the service servers, the CPU utilization, memory utilization, disk IO utilization and network bandwidth utilization are selected for calculating the weight of each service server, so as to avoid collecting so many parameters that the extra resource overhead outweighs the optimization gained.
7. A method for implementing the system for resolving high concurrency of Web servers according to claim 1, wherein: the method comprises the following steps:
S1. After the Nginx server starts, it performs global initialization by reading the configuration file, and sets an initial weight for each service server according to the relevant parameter information;
S2. When a client initiates a request to the server, the Nginx server sends the request to the service server with the lightest load, thereby avoiding server overload;
S3. The Nginx server periodically obtains the CPU utilization, memory utilization, disk IO utilization and network bandwidth utilization of each service server at a set time interval, and sends this information to the model training server;
S4. After receiving the information sent by the Nginx server, the model training server calculates the real-time weight of each service server node according to the preset algorithm, compares it with the set threshold, judges whether the server's current weight needs to be updated, and finally sends the judgment result and the new weight to the Nginx server;
S5. The Nginx server processes the weight of each service server according to the feedback from the model training server.
CN201911069556.5A 2019-11-05 2019-11-05 System and method for solving high concurrency of Web server Pending CN110933139A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911069556.5A CN110933139A (en) 2019-11-05 2019-11-05 System and method for solving high concurrency of Web server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911069556.5A CN110933139A (en) 2019-11-05 2019-11-05 System and method for solving high concurrency of Web server

Publications (1)

Publication Number Publication Date
CN110933139A true CN110933139A (en) 2020-03-27

Family

ID=69852283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911069556.5A Pending CN110933139A (en) 2019-11-05 2019-11-05 System and method for solving high concurrency of Web server

Country Status (1)

Country Link
CN (1) CN110933139A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106657379A (en) * 2017-01-06 2017-05-10 重庆邮电大学 Implementation method and system for NGINX server load balancing
CN107800796A (en) * 2017-11-01 2018-03-13 重庆邮电大学 A kind of intelligent lighting managing and control system implementation method
CN108111586A (en) * 2017-12-14 2018-06-01 重庆邮电大学 The web cluster system and method that a kind of high concurrent is supported
US20180191815A1 (en) * 2016-12-29 2018-07-05 UBTECH Robotics Corp. Data transmission method and device, distributed storage system
CN109308221A (en) * 2018-08-02 2019-02-05 南京邮电大学 A kind of Nginx dynamic load balancing method based on WebSocket long connection
CN110012098A (en) * 2019-04-04 2019-07-12 浙江工业大学 A kind of web high concurrent access process system and method
CN111381971A (en) * 2020-03-17 2020-07-07 重庆邮电大学 Nginx-based dynamic weight load balancing method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZEPENG WEN; GONGLIANG LI; GUANGHONG YANG: "Research and Realization of Nginx-based Dynamic Feedback Load Balancing Algorithm", 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC) *
MENG LIMIN; PAN JINXUE: "Design of a load balancing algorithm for a video surveillance system", Journal of Zhejiang University of Technology *
ZHANG YAO: "Improvement and implementation of a high-concurrency Web server based on Nginx", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111935090B (en) * 2020-07-07 2021-04-06 上海微亿智造科技有限公司 Big data transmission and persistence method and system for industrial intelligent Internet of things
CN111935090A (en) * 2020-07-07 2020-11-13 上海微亿智造科技有限公司 Big data transmission and persistence method and system for industrial intelligent Internet of things
CN111930511A (en) * 2020-08-24 2020-11-13 北京工业大学 Identifier resolution node load balancing device based on machine learning
CN112019620B (en) * 2020-08-28 2021-12-28 中南大学 Web cluster load balancing method and system based on Nginx dynamic weighting
CN112019620A (en) * 2020-08-28 2020-12-01 中南大学 Web cluster load balancing algorithm and system based on Nginx dynamic weighting
CN112579650A (en) * 2020-12-25 2021-03-30 恩亿科(北京)数据科技有限公司 Data processing method and system based on Redis cache
CN112667600A (en) * 2020-12-28 2021-04-16 紫光云技术有限公司 Inventory solution method combining redis and MySQL
CN113110933A (en) * 2021-03-11 2021-07-13 浙江工业大学 System with Nginx load balancing technology
CN113110933B (en) * 2021-03-11 2024-04-09 浙江工业大学 System with Nginx load balancing technology
CN112883280A (en) * 2021-03-25 2021-06-01 贵阳货车帮科技有限公司 Processing system and method for user recommended content
CN114090394B (en) * 2022-01-19 2022-04-22 山东卓朗检测股份有限公司 Distributed server cluster load abnormity analysis method
CN114090394A (en) * 2022-01-19 2022-02-25 山东卓朗检测股份有限公司 Distributed server cluster load abnormity analysis method
CN114996022A (en) * 2022-07-18 2022-09-02 浙江出海数字技术有限公司 Multi-channel available big data real-time decision making system
CN114996022B (en) * 2022-07-18 2024-03-08 山西华美远东科技有限公司 Multi-channel available big data real-time decision-making system
CN117061526A (en) * 2023-10-12 2023-11-14 人力资源和社会保障部人事考试中心 Access peak anti-congestion method based on global and local service access control
CN117061526B (en) * 2023-10-12 2023-12-12 人力资源和社会保障部人事考试中心 Access peak anti-congestion method based on global and local service access control
CN117149099A (en) * 2023-10-31 2023-12-01 江苏华鲲振宇智能科技有限责任公司 Calculation and storage split server system and control method
CN117149099B (en) * 2023-10-31 2024-03-12 江苏华鲲振宇智能科技有限责任公司 Calculation and storage split server system and control method

Similar Documents

Publication Publication Date Title
CN110933139A (en) System and method for solving high concurrency of Web server
CN107734558A (en) A kind of control of mobile edge calculations and resource regulating method based on multiserver
CN113110933B (en) System with Nginx load balancing technology
CN101116056B (en) Systems and methods for content-aware load balancing
CN107426332B (en) A kind of load-balancing method and system of web server cluster
US8087025B1 (en) Workload placement among resource-on-demand systems
US20170142177A1 (en) Method and system for network dispatching
CN109547517B (en) Method and device for scheduling bandwidth resources
CN106657379A (en) Implementation method and system for NGINX server load balancing
US20170126583A1 (en) Method and electronic device for bandwidth allocation based on online media services
CN110012098A (en) A kind of web high concurrent access process system and method
CN108416465B (en) Workflow optimization method in mobile cloud environment
KR20150017984A (en) The method and apparatus for distributing data in a hybrid cloud environment
CN101557344A (en) Dynamic load balancing method based on spatial geographical locations
CN111381971A (en) Nginx-based dynamic weight load balancing method
CN111131486B (en) Load adjustment method and device of execution node, server and storage medium
CN105512053A (en) Mirror caching method for mobile transparent computing system server terminal multi-user access
Kang et al. Application of adaptive load balancing algorithm based on minimum traffic in cloud computing architecture
CN111338801A (en) Subtree migration method and device for realizing metadata load balance
CN110198267B (en) Traffic scheduling method, system and server
CN107566535B (en) Self-adaptive load balancing method based on concurrent access timing sequence rule of Web map service
Liang et al. A location-aware service deployment algorithm based on k-means for cloudlets
CN117155942A (en) Micro-service dynamic self-adaptive client load balancing method and system
Shi et al. QoS-awareness of microservices with excessive loads via inter-datacenter scheduling
CN113329432A (en) Edge service arrangement method and system based on multi-objective optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200327