CN106210028B - Method, server and system for preventing server overload - Google Patents

Method, server and system for preventing server overload

Info

Publication number
CN106210028B
CN106210028B CN201610526094.5A
Authority
CN
China
Prior art keywords
concurrent request
storage server
server
throughput
request amount
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610526094.5A
Other languages
Chinese (zh)
Other versions
CN106210028A (en)
Inventor
罗少奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd filed Critical Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201610526094.5A priority Critical patent/CN106210028B/en
Publication of CN106210028A publication Critical patent/CN106210028A/en
Application granted granted Critical
Publication of CN106210028B publication Critical patent/CN106210028B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

Embodiments of the present invention disclose a method, server and system for preventing server overload, which solve the technical problem that, in current distributed storage systems under high load, the storage server crashes because of an excessive amount of concurrent requests. The method for preventing server overload of the embodiments of the present invention comprises: continuously sending concurrent requests to a storage server according to an initial concurrent request amount window; calculating the throughput of the storage server according to the number of return packets returned by the storage server, and determining an actual concurrent request amount window according to the throughput; and using the size of the actual concurrent request amount window as a concurrent request threshold, and stopping sending concurrent requests to the storage server once the concurrent requests that have been sent reach the concurrent request threshold.

Description

Method, server and system for preventing server overload
Technical field
The present invention relates to the field of data processing technologies, and in particular to a method, server and system for preventing server overload.
Background technique
A distributed storage system disperses data across multiple independent devices. A traditional network storage system stores all data on a centralized storage server, so the storage server becomes the bottleneck of system performance as well as the focus of reliability and security, and cannot meet the needs of large-scale storage applications. A distributed network storage system uses a scalable architecture: multiple storage servers share the storage load and a location server locates the stored information, which not only improves the reliability, availability and access efficiency of the system but also makes it easy to scale.
A current distributed storage system uses a two-layer structure: the first layer is a proxy server, and the second layer is a message synchronization cluster and a storage service cluster. Under high load, such a system has the technical problem that an excessive amount of concurrent requests causes the storage server to crash.
Summary of the invention
Embodiments of the present invention provide a method, server and system for preventing server overload, which solve the technical problem that, in current distributed storage systems under high load, the storage server crashes because of an excessive amount of concurrent requests.
A method for preventing server overload provided in an embodiment of the present invention comprises:
continuously sending concurrent requests to a storage server according to an initial concurrent request amount window;
calculating the throughput of the storage server according to the number of return packets returned by the storage server, and determining an actual concurrent request amount window according to the throughput;
using the size of the actual concurrent request amount window as a concurrent request threshold, and stopping sending the concurrent requests to the storage server once the concurrent requests that have been sent reach the concurrent request threshold.
Optionally, before continuously sending concurrent requests to the storage server according to the size of the initial concurrent request amount window, the method further comprises:
setting the size of the initial concurrent request amount window to infinity.
Optionally, calculating the throughput of the storage server according to the number of return packets returned by the storage server specifically comprises:
calculating the throughput of the storage server from a preset time period and the number of return packets returned for the concurrent requests that were sent.
Optionally, determining the actual concurrent request amount window according to the throughput specifically comprises:
when the request count of the concurrent requests reaches a preset request count threshold, determining that the storage server is under high load;
setting the currently calculated throughput as the current actual concurrent request amount window.
Optionally, setting the currently calculated throughput as the current actual concurrent request amount window specifically comprises:
while continuously sending concurrent requests to the storage server, calculating the throughput of the storage server in real time;
judging whether the throughput calculated in real time is greater than the actual concurrent request amount window, and if so, updating the actual concurrent request amount window to the throughput calculated in real time.
A server provided in an embodiment of the present invention comprises:
a concurrent request sending unit, configured to continuously send concurrent requests to a storage server according to an initial concurrent request amount window;
a calculating unit, configured to calculate the throughput of the storage server according to the number of return packets returned by the storage server, and to determine an actual concurrent request amount window according to the throughput;
a system load capacity determining unit, configured to use the size of the actual concurrent request amount window as a concurrent request threshold, and to stop sending the concurrent requests to the storage server once the concurrent requests that have been sent reach the concurrent request threshold.
Optionally, the server further comprises:
a setting unit, configured to set the size of the initial concurrent request amount window to infinity.
Optionally, the calculating unit specifically comprises:
a calculating subunit, configured to calculate the throughput of the storage server from a preset time period and the number of return packets returned for the concurrent requests that were sent.
Optionally, the calculating unit further comprises:
a high load determining subunit, configured to determine that the storage server is under high load when the request count of the concurrent requests reaches a preset request count threshold;
an actual concurrent request amount determining subunit, configured to set the currently calculated throughput as the current actual concurrent request amount window.
Optionally, the actual concurrent request amount determining subunit specifically comprises:
a real-time calculating module, configured to calculate the throughput of the storage server in real time while concurrent requests are continuously sent to the storage server;
a judging module, configured to judge whether the throughput calculated in real time is greater than the actual concurrent request amount window, and if so, to update the actual concurrent request amount window to the throughput calculated in real time.
A system for preventing server overload provided in an embodiment of the present invention comprises:
several storage servers and the server described in any one of the embodiments above;
wherein the several storage servers and the server have established a communication connection with each other.
As can be seen from the above technical solutions, the embodiments of the present invention have the following advantage:
Embodiments of the present invention provide a method, server and system for preventing server overload. The overload prevention method comprises: continuously sending concurrent requests to a storage server according to an initial concurrent request amount window; calculating the throughput of the storage server according to the number of return packets returned by the storage server, and determining an actual concurrent request amount window according to the throughput; and using the size of the actual concurrent request amount window as a concurrent request threshold, so that sending of concurrent requests to the storage server stops once the concurrent requests that have been sent reach the threshold. In this way the load capacity of the storage server is assessed first, and the amount of concurrent requests sent is then controlled according to that load capacity, which solves the technical problem that, in current distributed storage systems under high load, the storage server crashes because of an excessive amount of concurrent requests.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of one embodiment of the method for preventing server overload provided in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another embodiment of the method for preventing server overload provided in an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of one embodiment of the server provided in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of another embodiment of the server provided in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of one embodiment of the system for preventing server overload provided in an embodiment of the present invention;
Fig. 6 is a schematic architecture diagram of the distributed storage system.
Detailed description of the embodiments
Embodiments of the present invention provide a method, server and system for preventing server overload, which solve the technical problem that, in current distributed storage systems under high load, the storage server crashes because of an excessive amount of concurrent requests.
To make the objectives, features and advantages of the present invention more obvious and easier to understand, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the embodiments described below are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, one embodiment of the method for preventing server overload provided in an embodiment of the present invention comprises:
101. Continuously send concurrent requests to the storage server according to the initial concurrent request amount window.
In this embodiment, when the processing capacity of the entire distributed storage system needs to be assessed to prevent the system from crashing, concurrent requests are first sent continuously to the storage server according to the initial concurrent request amount window.
102. Calculate the throughput of the storage server according to the number of return packets returned by the storage server, and determine the actual concurrent request amount window according to the throughput.
After concurrent requests are continuously sent to the storage server according to the initial concurrent request amount window, the throughput of the storage server is calculated according to the number of return packets returned by the storage server, and the actual concurrent request amount window is determined according to the throughput.
103. Use the size of the actual concurrent request amount window as the concurrent request threshold, and stop sending concurrent requests to the storage server once the concurrent requests that have been sent reach the concurrent request threshold.
After the throughput of the storage server has been calculated according to the number of return packets returned by the storage server and the actual concurrent request amount window has been determined according to the throughput, the size of the actual concurrent request amount window is used as the concurrent request threshold, and sending of concurrent requests to the storage server stops once the concurrent requests that have been sent reach the concurrent request threshold.
In this embodiment, concurrent requests are continuously sent to the storage server according to the initial concurrent request amount window; the throughput of the storage server is then calculated according to the number of return packets returned by the storage server, and the actual concurrent request amount window is determined according to the throughput; finally, the size of the actual concurrent request amount window is used as the concurrent request threshold, and sending of concurrent requests to the storage server stops once the concurrent requests that have been sent reach the threshold. The load capacity of the storage server is thus assessed first, and the amount of concurrent requests sent is then controlled according to that load capacity, which solves the technical problem that, in current distributed storage systems under high load, the storage server crashes because of an excessive amount of concurrent requests.
The above describes the overall flow of the method for preventing overload; the detailed flow is described below. Referring to Fig. 2, another embodiment of the method for preventing server overload provided in an embodiment of the present invention comprises:
201. Set the size of the initial concurrent request amount window to infinity.
In this embodiment, when the processing capacity of the entire distributed storage system needs to be assessed to prevent the system from crashing, the size of the initial concurrent request amount window is first set to infinity.
202. Continuously send concurrent requests to the storage server according to the initial concurrent request amount window.
After the size of the initial concurrent request amount window has been set to infinity, concurrent requests are continuously sent to the storage server according to the initial concurrent request amount window.
It should be noted that the concurrent request amount window indicates the number of requests that can be sent to the external service simultaneously.
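Purely as an illustration (the patent text itself contains no code), the concurrent request amount window can be thought of as a bound on the number of requests in flight at once. The following minimal Go sketch models it with a buffered channel acting as a semaphore; the type and function names are hypothetical, not taken from the patent.

```go
package main

import "fmt"

// requestWindow is an illustrative sketch of a concurrent request amount window:
// it bounds how many requests may be outstanding to the external service at once.
type requestWindow struct {
	slots chan struct{}
}

func newRequestWindow(size int) *requestWindow {
	return &requestWindow{slots: make(chan struct{}, size)}
}

// acquire blocks until fewer than `size` requests are in flight.
func (w *requestWindow) acquire() { w.slots <- struct{}{} }

// release frees a slot when the return packet for a request arrives.
func (w *requestWindow) release() { <-w.slots }

func main() {
	w := newRequestWindow(3) // window size 3: at most 3 requests in flight
	w.acquire()              // "send" one request
	fmt.Println("in-flight requests:", len(w.slots))
	w.release() // its return packet arrived
	fmt.Println("in-flight requests:", len(w.slots))
}
```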
203. Calculate the throughput of the storage server from the preset time period and the number of return packets returned for the concurrent requests that were sent.
After concurrent requests are continuously sent to the storage server according to the initial concurrent request amount window, the throughput of the storage server is calculated from the preset time period and the number of return packets returned for the concurrent requests that were sent.
204. When the request count of the concurrent requests reaches the preset request count threshold, determine that the storage server is under high load.
After the throughput of the storage server has been calculated from the preset time period and the number of return packets returned for the concurrent requests that were sent, if the request count of the concurrent requests reaches the preset request count threshold, it is determined that the storage server is under high load.
205. While continuously sending concurrent requests to the storage server, calculate the throughput of the storage server in real time.
After it has been determined that the storage server is under high load because the request count of the concurrent requests has reached the preset request count threshold, the currently calculated throughput needs to be set as the current actual concurrent request amount window. Specifically, to set the currently calculated throughput as the current actual concurrent request amount window, the throughput of the storage server can be calculated in real time while concurrent requests are continuously sent to the storage server.
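A minimal sketch of this real-time throughput calculation, under the assumption that the throughput is simply the number of return packets R counted over the preset time period T (as made explicit in the concrete scenario below); the names are illustrative, not taken from the patent.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// throughputMeter counts return packets and reports requests handled per second
// over a preset measurement period T. Illustrative sketch only.
type throughputMeter struct {
	returned int64 // return packets seen in the current period
	period   time.Duration
}

// onReturnPacket is called whenever a return packet arrives.
func (m *throughputMeter) onReturnPacket() { atomic.AddInt64(&m.returned, 1) }

// sample returns R/T for the period just ended and resets the counter.
func (m *throughputMeter) sample() float64 {
	r := atomic.SwapInt64(&m.returned, 0)
	return float64(r) / m.period.Seconds()
}

func main() {
	m := &throughputMeter{period: 10 * time.Second}
	for i := 0; i < 500; i++ {
		m.onReturnPacket() // pretend 500 return packets arrived during T
	}
	fmt.Printf("throughput = %.1f requests/second\n", m.sample()) // 500 / 10 = 50.0
}
```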
206. Judge whether the throughput calculated in real time is greater than the actual concurrent request amount window; if so, execute step 207.
After the throughput of the storage server has been calculated in real time while concurrent requests are continuously sent to the storage server, it is judged whether the throughput calculated in real time is greater than the actual concurrent request amount window; if so, step 207 is executed.
207. Update the actual concurrent request amount window to the throughput calculated in real time.
When it is judged that the throughput calculated in real time is greater than the actual concurrent request amount window, the actual concurrent request amount window is updated to the throughput calculated in real time.
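Steps 206 and 207 reduce to a one-line update rule: whenever the throughput measured in real time exceeds the current actual concurrent request amount window, the window is raised to that throughput. A hedged Go sketch with hypothetical names:

```go
package main

import "fmt"

// updateWindow applies the rule of steps 206-207: if the throughput measured in
// real time is greater than the current actual concurrent request amount window,
// the window is updated to that throughput; otherwise it is left unchanged.
// Illustrative sketch, not the patented implementation.
func updateWindow(window, realtimeThroughput float64) float64 {
	if realtimeThroughput > window {
		return realtimeThroughput
	}
	return window
}

func main() {
	w := 40.0                         // current actual window
	w = updateWindow(w, 55.0)         // measured throughput 55 > 40, so the window grows
	fmt.Println("updated window:", w) // prints 55
}
```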
208. Use the size of the actual concurrent request amount window as the concurrent request threshold, and stop sending concurrent requests to the storage server once the concurrent requests that have been sent reach the concurrent request threshold.
After the throughput of the storage server has been calculated according to the number of return packets returned by the storage server and the actual concurrent request amount window has been determined according to the throughput, the size of the actual concurrent request amount window is used as the concurrent request threshold, and sending of concurrent requests to the storage server stops once the concurrent requests that have been sent reach the concurrent request threshold.
The overload prevention process of the server is described in detail below with a concrete application scenario, as shown in Fig. 6. The application example is as follows:
The architecture of Fig. 6 is a distributed storage system, horsetable (a packet-type architecture table), that uses a two-layer structure. The first-layer service is horse_proxy (the proxy server); the second-layer services are sync_broker (the message synchronization cluster) and storage (the storage service cluster).
In this distributed storage system, the processing performance of the horse_proxy process is substantially better than that of sync_broker and storage, so the performance bottleneck of the system lies in sync_broker and storage. The horse_proxy service controls the concurrent request amount window by calculating the throughput of the external services (sync_broker and storage) when they are under high load, so as to prevent an excessive amount of concurrent requests from overwhelming the external services and to guarantee the normal service capability of the whole system. Here, throughput refers to the number of requests that can be handled per unit time, and the concurrent request amount refers to the number of requests sent per unit time.
The following steps apply to the horse_proxy and storage services:
(1) The concurrent request amount window W indicates the number of requests that can be sent to the external service simultaneously and is used to assess how many requests the external service can handle at once. The concurrent request amount window W is initialized to infinity, i.e. there is no restriction at the beginning; the horse_proxy service continuously sends requests to the storage service with a window of size W.
(2) The horse_proxy service calculates the throughput of the storage service from the number of return packets R received within a time period T (for example, 10 seconds): throughput = R / T.
(3) When the number of request timeouts reaches a certain threshold V, the horse_proxy service considers the storage service to be under high load, and takes the throughput at that moment as the concurrent request amount window W.
(4) Whenever the throughput is greater than the concurrent request amount window W, the horse_proxy service updates the concurrent request amount window W with the throughput.
(5) As long as the concurrent request amount window W is not full, the horse_proxy service can send requests to the storage service; once the window is full, sending stops.
In (3), if V is too small, the external service may be mistakenly judged to be under high load; if V is too large, the high-load state of the storage service may not be detected in time. The value of V therefore has to be weighed and set according to the actual conditions. A sketch of this control loop is given below.
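The following single-threaded Go sketch ties steps (1) to (5) together under the stated assumptions (W initialized to infinity, throughput measured as R/T per period, timeout threshold V). All names are hypothetical and this is not the actual horse_proxy implementation.

```go
package main

import (
	"fmt"
	"math"
)

// windowController is an illustrative sketch of steps (1)-(5).
type windowController struct {
	window       float64 // W: concurrent request amount window, starts unlimited
	timeouts     int     // request timeouts seen so far
	timeoutLimit int     // V: timeout threshold that marks the storage service as high load
	highLoad     bool
}

func newWindowController(v int) *windowController {
	return &windowController{window: math.Inf(1), timeoutLimit: v} // step (1): W = infinity
}

// canSend implements step (5): requests may be sent only while fewer than W are in flight.
func (c *windowController) canSend(inFlight int) bool {
	return float64(inFlight) < c.window
}

// onPeriodEnd is called once per measurement period T with the throughput R/T
// observed in that period and the number of timeouts seen (steps (2)-(4)).
func (c *windowController) onPeriodEnd(throughput float64, timeouts int) {
	c.timeouts += timeouts
	if !c.highLoad && c.timeouts >= c.timeoutLimit {
		c.highLoad = true
		c.window = throughput // step (3): high load detected, clamp W to the current throughput
	}
	if c.highLoad && throughput > c.window {
		c.window = throughput // step (4): observed capacity grew, so W follows it
	}
}

func main() {
	c := newWindowController(100) // V = 100 timeouts marks high load
	fmt.Println(c.canSend(1000))  // true: W is still unlimited

	c.onPeriodEnd(50, 120)     // 120 timeouts >= V, throughput 50 req/s, so W = 50
	fmt.Println(c.window)      // 50
	fmt.Println(c.canSend(60)) // false: 60 in-flight requests exceed W

	c.onPeriodEnd(80, 0) // storage recovered a bit: throughput 80 > 50, so W = 80
	fmt.Println(c.window)
}
```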
With the service capability of the horse_proxy service being better than that of the storage service, this ensures that the storage service is not overwhelmed and keeps its best service capability, so that the whole system does not overload and stop working. It should be noted that fluctuations may occur when the state changes, but such a fluctuation lasts only one or two periods of T seconds, and state changes are infrequent.
In the two-layer horsetable system, when the processing performance of the first-layer service is better than that of the second-layer service, the first-layer service controls the concurrent request amount by assessing the service capability of the second layer, so as to guarantee the normal service capability of the second-layer service and ultimately the normal service capability of the whole system, even under overload conditions.
In this embodiment, concurrent requests are continuously sent to the storage server according to the initial concurrent request amount window; the throughput of the storage server is then calculated according to the number of return packets returned by the storage server, and the actual concurrent request amount window is determined according to the throughput; finally, the size of the actual concurrent request amount window is used as the concurrent request threshold, and sending of concurrent requests to the storage server stops once the concurrent requests that have been sent reach the threshold. The load capacity of the storage server is thus assessed first and the amount of concurrent requests sent is then controlled according to that load capacity, which solves the technical problem that, in current distributed storage systems under high load, the storage server crashes because of an excessive amount of concurrent requests. Moreover, because the throughput is calculated continuously and compared with the current concurrent request amount window to determine the final actual concurrent request amount window, the service capability of the system is assessed more accurately and the normal service capability of the whole system is guaranteed.
Referring to Fig. 3, one embodiment of the server provided in an embodiment of the present invention comprises:
a concurrent request sending unit 301, configured to continuously send concurrent requests to the storage server according to the initial concurrent request amount window;
a calculating unit 302, configured to calculate the throughput of the storage server according to the number of return packets returned by the storage server, and to determine the actual concurrent request amount window according to the throughput;
a system load capacity determining unit 303, configured to use the size of the actual concurrent request amount window as the concurrent request threshold, and to stop sending concurrent requests to the storage server once the concurrent requests that have been sent reach the concurrent request threshold.
In this embodiment, the concurrent request sending unit 301 continuously sends concurrent requests to the storage server according to the initial concurrent request amount window; the calculating unit 302 then calculates the throughput of the storage server according to the number of return packets returned by the storage server and determines the actual concurrent request amount window according to the throughput; finally, the system load capacity determining unit 303 uses the size of the actual concurrent request amount window as the concurrent request threshold and stops sending concurrent requests to the storage server once the concurrent requests that have been sent reach the threshold. The load capacity of the storage server is thus assessed first and the amount of concurrent requests sent is then controlled according to that load capacity, which solves the technical problem that, in current distributed storage systems under high load, the storage server crashes because of an excessive amount of concurrent requests.
The units of the server have been described in detail above; their subunits are described in detail below. Referring to Fig. 4, another embodiment of the server provided in an embodiment of the present invention comprises:
a setting unit 401, configured to set the size of the initial concurrent request amount window to infinity;
a concurrent request sending unit 402, configured to continuously send concurrent requests to the storage server according to the initial concurrent request amount window;
a calculating unit 403, configured to calculate the throughput of the storage server according to the number of return packets returned by the storage server, and to determine the actual concurrent request amount window according to the throughput.
The calculating unit 403 specifically comprises:
a calculating subunit 4031, configured to calculate the throughput of the storage server from the preset time period and the number of return packets returned for the concurrent requests that were sent;
a high load determining subunit 4032, configured to determine that the storage server is under high load when the request count of the concurrent requests reaches the preset request count threshold;
an actual concurrent request amount determining subunit 4033, configured to set the currently calculated throughput as the current actual concurrent request amount window.
The actual concurrent request amount determining subunit 4033 specifically comprises:
a real-time calculating module 4031a, configured to calculate the throughput of the storage server in real time while concurrent requests are continuously sent to the storage server;
a judging module 4032b, configured to judge whether the throughput calculated in real time is greater than the actual concurrent request amount window, and if so, to update the actual concurrent request amount window to the throughput calculated in real time.
The server further comprises a system load capacity determining unit 404, configured to use the size of the actual concurrent request amount window as the concurrent request threshold, and to stop sending concurrent requests to the storage server once the concurrent requests that have been sent reach the concurrent request threshold.
In this embodiment, the concurrent request sending unit 402 continuously sends concurrent requests to the storage server according to the initial concurrent request amount window; the calculating unit 403 then calculates the throughput of the storage server according to the number of return packets returned by the storage server and determines the actual concurrent request amount window according to the throughput; finally, the system load capacity determining unit 404 uses the size of the actual concurrent request amount window as the concurrent request threshold and stops sending concurrent requests to the storage server once the concurrent requests that have been sent reach the threshold. The load capacity of the storage server is thus assessed first and the amount of concurrent requests sent is then controlled according to that load capacity, which solves the technical problem that, in current distributed storage systems under high load, the storage server crashes because of an excessive amount of concurrent requests. Moreover, because the throughput is calculated continuously and compared with the current concurrent request amount window to determine the final actual concurrent request amount window, the service capability of the system is assessed more accurately and the normal service capability of the whole system is guaranteed.
Referring to Fig. 5, one embodiment of the system for preventing server overload provided in an embodiment of the present invention comprises:
several storage servers 51 and the server 52 described in the embodiments of Fig. 3 and Fig. 4;
wherein the several storage servers 51 and the server 52 have established a communication connection with each other.
It should be noted that the storage servers 51 may form a storage service cluster, and the system may further comprise a message synchronization cluster and several clients.
In the two-layer structure of the distributed storage system, when the processing performance of the first-layer service is better than that of the second-layer service, the first-layer service controls the concurrent request amount by assessing the service capability of the second layer, so as to guarantee the normal service capability of the second-layer service and ultimately the normal service capability of the whole system, even under overload conditions.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system, apparatus and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary: the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between apparatuses or units may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
The above embodiments are merely intended to describe the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (5)

1. A method for preventing server overload, characterized by comprising:
continuously sending concurrent requests to a storage server according to an initial concurrent request amount window;
calculating the throughput of the storage server according to the number of return packets returned by the storage server, and determining an actual concurrent request amount window according to the throughput;
using the size of the actual concurrent request amount window as a concurrent request threshold, and stopping sending the concurrent requests to the storage server once the concurrent requests that have been sent reach the concurrent request threshold;
wherein calculating the throughput of the storage server according to the number of return packets returned by the storage server specifically comprises:
calculating the throughput of the storage server from a preset time period and the number of return packets returned for the concurrent requests that were sent,
the throughput of the storage server being equal to the number of return packets R divided by the preset time period T;
wherein determining the actual concurrent request amount window according to the throughput specifically comprises:
when the request count of the concurrent requests reaches a preset request count threshold, determining that the storage server is under high load;
setting the currently calculated throughput as the current actual concurrent request amount window;
wherein setting the currently calculated throughput as the current actual concurrent request amount window specifically comprises:
while continuously sending concurrent requests to the storage server, calculating the throughput of the storage server in real time;
judging whether the throughput calculated in real time is greater than the actual concurrent request amount window, and if so, updating the actual concurrent request amount window to the throughput calculated in real time.
2. The method for preventing server overload according to claim 1, characterized in that, before continuously sending concurrent requests to the storage server according to the size of the initial concurrent request amount window, the method further comprises:
setting the size of the initial concurrent request amount window to infinity.
3. A server, characterized by comprising:
a concurrent request sending unit, configured to continuously send concurrent requests to a storage server according to an initial concurrent request amount window;
a calculating unit, configured to calculate the throughput of the storage server according to the number of return packets returned by the storage server, and to determine an actual concurrent request amount window according to the throughput;
a system load capacity determining unit, configured to use the size of the actual concurrent request amount window as a concurrent request threshold, and to stop sending the concurrent requests to the storage server once the concurrent requests that have been sent reach the concurrent request threshold;
wherein the calculating unit specifically comprises:
a calculating subunit, configured to calculate the throughput of the storage server from a preset time period and the number of return packets returned for the concurrent requests that were sent,
the throughput of the storage server being equal to the number of return packets R divided by the preset time period T;
the calculating unit further comprising:
a high load determining subunit, configured to determine that the storage server is under high load when the request count of the concurrent requests reaches a preset request count threshold;
an actual concurrent request amount determining subunit, configured to set the currently calculated throughput as the current actual concurrent request amount window;
wherein the actual concurrent request amount determining subunit specifically comprises:
a real-time calculating module, configured to calculate the throughput of the storage server in real time while concurrent requests are continuously sent to the storage server;
a judging module, configured to judge whether the throughput calculated in real time is greater than the actual concurrent request amount window, and if so, to update the actual concurrent request amount window to the throughput calculated in real time.
4. The server according to claim 3, characterized in that the server further comprises:
a setting unit, configured to set the size of the initial concurrent request amount window to infinity.
5. A system for preventing server overload, characterized by comprising:
several storage servers, and the server according to any one of claims 3 and 4;
wherein the several storage servers and the server have established a communication connection with each other.
CN201610526094.5A 2016-07-05 2016-07-05 Method, server and system for preventing server overload Active CN106210028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610526094.5A CN106210028B (en) 2016-07-05 2016-07-05 Method, server and system for preventing server overload

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610526094.5A CN106210028B (en) 2016-07-05 2016-07-05 Method, server and system for preventing server overload

Publications (2)

Publication Number Publication Date
CN106210028A CN106210028A (en) 2016-12-07
CN106210028B true CN106210028B (en) 2019-09-06

Family

ID=57465462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610526094.5A Active CN106210028B (en) 2016-07-05 2016-07-05 Method, server and system for preventing server overload

Country Status (1)

Country Link
CN (1) CN106210028B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108255599A (en) * 2016-12-29 2018-07-06 北京京东尚科信息技术有限公司 Based on the treating method and apparatus largely asked

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101882105A (en) * 2010-06-01 2010-11-10 华南理工大学 Method for testing response time of Web page under concurrent environment
CN102148759A (en) * 2011-04-01 2011-08-10 许旭 Method for saving export bandwidth of backbone network by cache acceleration system
CN103236956A (en) * 2013-04-18 2013-08-07 神州数码网络(北京)有限公司 Method and switch for testing throughput of communication equipment
CN105207832A (en) * 2014-06-13 2015-12-30 腾讯科技(深圳)有限公司 Server stress testing method and device
CN105701207A (en) * 2016-01-12 2016-06-22 腾讯科技(深圳)有限公司 Request quantity forecast method of resource and application recommendation method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101882105A (en) * 2010-06-01 2010-11-10 华南理工大学 Method for testing response time of Web page under concurrent environment
CN102148759A (en) * 2011-04-01 2011-08-10 许旭 Method for saving export bandwidth of backbone network by cache acceleration system
CN103236956A (en) * 2013-04-18 2013-08-07 神州数码网络(北京)有限公司 Method and switch for testing throughput of communication equipment
CN105207832A (en) * 2014-06-13 2015-12-30 腾讯科技(深圳)有限公司 Server stress testing method and device
CN105701207A (en) * 2016-01-12 2016-06-22 腾讯科技(深圳)有限公司 Request quantity forecast method of resource and application recommendation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Web system performance testing tools; Shao Yanlin (邵燕琳); China Master's Theses Full-text Database, Information Science and Technology; 2007-12-15; pages 2, 12-13, 27 and 39-53 of the main text

Also Published As

Publication number Publication date
CN106210028A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
CN109618002B (en) Micro-service gateway optimization method, device and storage medium
CN103440202B (en) A kind of communication means based on RDMA, system and communication equipment
CN105897836A (en) Back source request processing method and device
CN105554102A (en) Elastic expansion method based on container cluster and application system thereof
CN106020926B (en) A kind of method and device transmitted for data in virtual switch technology
EP3745678B1 (en) Storage system, and method and apparatus for allocating storage resources
CN102281190A (en) Networking method for load balancing apparatus, server and client access method
CN113742135B (en) Data backup method, device and computer readable storage medium
CN106878197A (en) A kind of management system and method for the transmission of cloud platform message
US11316916B2 (en) Packet processing method, related device, and computer storage medium
CN105635083A (en) Service processing method and service processing system based on server and client architecture
CN105554125B (en) A kind of method and its system for realizing webpage fit using CDN
CN107529186A (en) The method and system of channel transmission upstream data, client, server
CN105978938A (en) Service processing equipment service status determining method and scheduling equipment
CN106304154B (en) A kind of data transmission method and PDCP entity of PDCP entity
CN106210028B (en) Method, server and system for preventing server overload
CN113726847B (en) Network system, network segmentation method and electronic equipment
CN108418752A (en) A kind of creation method and device of aggregation group
CN114116207A (en) Flow control method, device, equipment and system
CN118041937A (en) Data access method and device of storage device
WO2020010507A1 (en) Method for adjusting communication link, communication device, and unmanned aerial vehicle
CN108337328A (en) A kind of data exchange system, data uploading method and data download method
CN105099753B (en) The method of Network Management System and its processing business
CN105450679A (en) Method and system for performing data cloud storage
CN108632921A (en) A kind of core net switching method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210111

Address after: 510000 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511449 Wanda Plaza, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20161207

Assignee: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

Assignor: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Contract record no.: X2021440000053

Denomination of invention: Method, server and system for preventing overload of server

Granted publication date: 20190906

License type: Common License

Record date: 20210208