CN104363182B - Load-balancing method and system based on two-layer caching - Google Patents
- Publication number
- CN104363182B (application CN201410613099.2A)
- Authority
- CN
- China
- Prior art keywords
- management module
- terminal
- request
- data information
- memory management
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The present invention provides a load-balancing method and system based on two-layer caching. The method comprises the following steps: a memory management module and a disk management module are set up in a web server in advance, and parameters are configured for both modules; after the web server receives a terminal request, it queries the memory management module, retrieves the requested data, and feeds it back to the terminal; if the query fails, it continues by querying the disk management module, retrieves the requested data, and feeds it back to the terminal. The invention increases the probability that end-user requests are answered at the network layer, reduces the number of requests dispatched to the back-end application server cluster, relieves the request pressure on back-end servers, and allows the number of back-end application servers to be reduced, thereby lowering investment cost.
Description
Technical field
The invention belongs to the field of load balancing, and in particular relates to a load-balancing method and system based on two-layer caching.
Background technology
As enterprises expand, the performance and reliability requirements placed on web servers grow continuously. More and more enterprises build parallel clusters with load balancers to enhance the concurrency of their web services and improve system reliability, thereby increasing the computing resources available to the enterprise's web services. Simply adding more back-end application servers, however, not only increases cost for the enterprise but also wastes resources.

Most cache media are in-memory caches. It is well known that 16 GB memory modules are very expensive; moreover, as memory capacity grows, DRAM prices rise roughly exponentially, and the web server's mainboard limits the maximum memory size. Relying solely on enlarging the web server's memory to enhance its concurrency is therefore relatively difficult, and the price is often hard to bear.
Invention content
The present invention provides a load-balancing method and system based on two-layer caching to solve the above problems.

The present invention provides a load-balancing method based on two-layer caching, comprising the following steps:

A memory management module and a disk management module are set up in a web server in advance, and parameters are configured for the memory management module and the disk management module;

After the web server receives a terminal request, it queries the memory management module, retrieves the requested data, and feeds it back to the terminal; if the query fails, it continues by querying the disk management module, retrieves the requested data, and feeds it back to the terminal.
The present invention also provides a load-balancing system based on two-layer caching, comprising a terminal and a web server, the terminal being connected to the web server.

A memory management module and a disk management module are set up in the web server in advance, and parameters are configured for the memory management module and the disk management module.

After receiving a terminal request, the web server queries the memory management module, retrieves the requested data, and feeds it back to the terminal; if the query fails, it continues by querying the disk management module, retrieves the requested data, and feeds it back to the terminal.
Compared to the prior art, the load-balancing method and system based on two-layer caching provided by the invention work as follows: a memory management module and a disk management module are set up in the web server in advance, and parameters are configured for both modules; after the web server receives a terminal request, it queries the memory management module, retrieves the requested data, and feeds it back to the terminal; if the query fails, it continues by querying the disk management module, retrieves the requested data, and feeds it back to the terminal. On the one hand, this increases the probability that end-user requests are answered at the network layer; on the other hand, it reduces the number of requests dispatched to the back-end application server cluster, relieves the request pressure on back-end servers, and allows the number of back-end application servers to be reduced, achieving the goal of lowering investment cost.
Description of the drawings
The accompanying drawings described herein are provided for further understanding of the present invention and constitute part of this application. The illustrative embodiments of the invention and their descriptions are used to explain the invention and do not constitute improper limitations on it. In the drawings:
Fig. 1 shows the flow chart of the load-balancing method based on two-layer caching of Embodiment 1 of the present invention;
Fig. 2 shows the structure of the load-balancing system based on two-layer caching of Embodiment 2 of the present invention;
Fig. 3 shows the structure of the load-balancing system based on two-layer caching of Embodiment 3 of the present invention.
Specific implementation mode
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that, in the absence of conflict, the embodiments of this application and the features in the embodiments may be combined with each other.

Fig. 1 shows the flow chart of the load-balancing method based on two-layer caching of Embodiment 1 of the present invention, which includes the following steps:
Step 101: A memory management module and a disk management module are set up in the web server in advance, and parameters are configured for the memory management module and the disk management module.

The following parameters are configured for the memory management module:
1. Set the connection timeout, send timeout, read timeout, and cache expiry time;
2. Set the access request mode so that only internal access is accepted directly; external requests are not accepted directly.
For example:
memc_connect_timeout 100ms; # memc connection timeout
memc_send_timeout 100ms; # memc send timeout
memc_read_timeout 100ms; # memc read timeout
set $memc_exptime 1000; # cache expiry time
internal; # only accept internal access directly; external http requests are not served directly
The following parameters are configured for the disk management module:
1. Name the disk management module and allocate its memory size and disk size.
For example: the disk management module is named cache1, its allocated memory size is 100 MB, and its allocated disk size is 10 GB.
2. Assign the file storage directory for the disk management module and the number of characters used by each directory level.
For example, the file storage directory assigned to the disk management module is /data/ngx_cache/cache1 (the directory in which the cache1 files are stored).
levels=1:2 indicates that the first-level directory of the cache directory uses 1 character and the second-level directory uses 2 characters, i.e. paths of the form /data/ngx_cache/cache1/a/1b.
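The levels=1:2 layout above matches how nginx's proxy_cache_path directive maps a cache key to an on-disk path via its MD5 digest. As a minimal sketch (the function name and the choice of cache key are illustrative assumptions, not taken from the patent), the path computation looks like this:

```python
import hashlib

def cache_file_path(root: str, key: str, levels=(1, 2)) -> str:
    # nginx derives the on-disk path from the MD5 hex digest of the
    # cache key: with levels=1:2 the last hex character forms the
    # first directory level and the two characters before it form the
    # second level, e.g. /data/ngx_cache/cache1/a/1b/<32-char digest>.
    digest = hashlib.md5(key.encode()).hexdigest()
    path, pos = root, len(digest)
    for width in levels:
        path += "/" + digest[pos - width:pos]
        pos -= width
    return path + "/" + digest

print(cache_file_path("/data/ngx_cache/cache1", "some-key"))
```

Spreading files over two short directory levels this way keeps any single directory from accumulating an unbounded number of entries.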
Step 102: After the web server receives a terminal request, it queries the memory management module, retrieves the requested data, and feeds it back to the terminal; if the query fails, it continues by querying the disk management module, retrieves the requested data, and feeds it back to the terminal.

While the web server queries the disk management module, retrieves the requested data, and feeds it back to the terminal, it also sends the requested data to the memory management module, which stores it.
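The lookup order of steps 102 and 103 — memory first, then disk, then the back-end application servers — can be sketched as follows. Plain dicts stand in for the memory and disk management modules, and `fetch_from_app_server` is a hypothetical callback; caching the back-end result on disk is an illustrative assumption, not something the patent states.

```python
def handle_request(key, memory, disk, fetch_from_app_server):
    # Layer 1: query the memory management module first.
    if key in memory:
        return memory[key]
    # Layer 2: on a memory miss, query the disk management module;
    # a disk hit is also handed to the memory module for storage,
    # mirroring the write-back described in step 102.
    if key in disk:
        memory[key] = disk[key]
        return memory[key]
    # Both cache layers missed (step 103): forward a query request
    # to a back-end application server and return its answer.
    value = fetch_from_app_server(key)
    disk[key] = value  # assumption: cache the fetched result on disk
    return value
```

A request is only dispatched to the back-end cluster when both cache layers miss, which is what relieves the request pressure on the back-end servers.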
The memory management module sends information-acquisition requests to the disk management module, either periodically or in real time.

After the disk management module receives the information-acquisition request, it feeds an information response message back to the memory management module; the response message carries the requested data.

The specific period is set flexibly according to actual conditions and does not limit the protection scope of the present invention.
The disk management module deletes data that has not been accessed within a first preset period.

Before the disk management module deletes the data not accessed within the first preset period (for example, 24 hours), the method further includes: the disk management module sends the data not accessed within the first preset period to the memory management module; after the memory management module receives this data, if the free memory size exceeds the size of the data, the memory management module stores it.

The memory management module deletes data that has not been accessed within a second preset period (for example, 12 hours).

The first and second preset periods may be the same or different; the specific values are set flexibly according to actual conditions and do not limit the protection scope of the present invention.
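The eviction hand-off described above — the disk module offers soon-to-be-deleted idle entries to the memory module, which keeps them only if it has enough free space — can be sketched as follows. All names are illustrative, and modelling entry sizes as the byte-length of stored values is an assumption for the sketch.

```python
def evict_disk(disk, memory, last_access, now, period, memory_capacity):
    # Entries idle longer than `period` (the "first preset period",
    # e.g. 24 hours) are due for deletion from the disk module.
    idle = [k for k in disk if now - last_access[k] > period]
    # Before deleting, offer the idle entries to the memory module,
    # which stores them only if its free space exceeds their total
    # size (sizes modelled as len() of the stored byte values).
    idle_size = sum(len(disk[k]) for k in idle)
    used = sum(len(v) for v in memory.values())
    if memory_capacity - used > idle_size:
        for k in idle:
            memory[k] = disk[k]
    # Finally, delete the idle entries from the disk module.
    for k in idle:
        del disk[k]
```

The hand-off means data that has gone cold on disk still gets one more chance to be served from memory instead of being refetched from the back end.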
Step 103: If the web server's query of the disk management module also fails, it sends a query request to an application server, obtains the data requested by the end user, and feeds it back to the end user.
Fig. 2 shows the structure of the load-balancing system based on two-layer caching of Embodiment 2 of the present invention, comprising a terminal and a web server, the terminal being connected to the web server.

A memory management module and a disk management module are set up in the web server in advance, and parameters are configured for the memory management module and the disk management module.

After receiving a terminal request, the web server queries the memory management module, retrieves the requested data, and feeds it back to the terminal; if the query fails, it continues by querying the disk management module, retrieves the requested data, and feeds it back to the terminal.
Fig. 3 shows the structure of the load-balancing system based on two-layer caching of Embodiment 3 of the present invention, comprising a terminal, a web server, and application servers 1, 2, and 3; the terminal is connected to the web server, and the web server is directly connected to each of application servers 1, 2, and 3.

A memory management module and a disk management module are set up in the web server in advance, and parameters are configured for the memory management module and the disk management module.

After receiving a terminal request, the web server queries the memory management module, retrieves the requested data, and feeds it back to the terminal; if the query fails, it continues by querying the disk management module, retrieves the requested data, and feeds it back to the terminal.

If the web server's query of the disk management module also fails, it sends a query request to an application server, obtains the requested data, and feeds it back to the terminal.
Through the above scheme — setting up a memory management module and a disk management module in the web server in advance, configuring their parameters, querying the memory management module after the web server receives a terminal request, and falling back to the disk management module if that query fails — the invention, on the one hand, increases the probability that end-user requests are answered at the network layer; on the other hand, it reduces the number of requests dispatched to the back-end application server cluster, relieves the request pressure on back-end servers, and allows the number of back-end application servers to be reduced, achieving the goal of lowering investment cost.
The foregoing is only a preferred embodiment of the present invention and is not intended to restrict it; for those skilled in the art, the invention may be variously modified and varied. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (8)
1. A load-balancing method based on two-layer caching, characterized by comprising the following steps:
a memory management module and a disk management module are set up in a web server in advance, and parameters are configured for the memory management module and the disk management module;
after the web server receives a terminal request, it queries the memory management module, retrieves the requested data, and feeds it back to the terminal; if the query fails, it continues by querying the disk management module, retrieves the requested data, and feeds it back to the terminal;
wherein the parameters configured for the memory management module include: a connection timeout parameter, a send timeout parameter, a read timeout parameter, a cache expiry time parameter, and an access request mode parameter;
the memory management module sends information-acquisition requests to the disk management module periodically or in real time;
after the disk management module receives the information-acquisition request, it feeds an information response message back to the memory management module, the response message carrying the requested data.
2. The method according to claim 1, characterized in that the parameters configured for the disk management module include: an allocated memory size parameter, a disk size parameter, a file storage directory parameter, and a per-level directory character count parameter.
3. The method according to claim 1, characterized in that while the web server continues to query the disk management module, retrieves the requested data, and feeds it back to the terminal, it also sends the requested data to the memory management module, which stores it.
4. The method according to claim 1, characterized in that the disk management module deletes data that has not been accessed within a first preset period.
5. The method according to claim 4, characterized in that before the disk management module deletes the data not accessed within the first preset period, the method further comprises: the disk management module sends the data not accessed within the first preset period to the memory management module; after the memory management module receives this data, if the free memory size exceeds the size of the data not accessed within the first preset period, the memory management module stores it;
the memory management module deletes data that has not been accessed within a second preset period.
6. The method according to claim 1, characterized in that if the web server's query of the disk management module fails, it sends a query request to an application server, obtains the requested data, and feeds it back to the terminal.
7. A load-balancing system based on two-layer caching, characterized by comprising a terminal and a web server, the terminal being connected to the web server;
a memory management module and a disk management module are set up in the web server in advance, and parameters are configured for the memory management module and the disk management module;
after receiving a terminal request, the web server queries the memory management module, retrieves the requested data, and feeds it back to the terminal; if the query fails, it continues by querying the disk management module, retrieves the requested data, and feeds it back to the terminal;
wherein the parameters configured for the memory management module include: a connection timeout parameter, a send timeout parameter, a read timeout parameter, a cache expiry time parameter, and an access request mode parameter;
the memory management module sends information-acquisition requests to the disk management module periodically or in real time;
after the disk management module receives the information-acquisition request, it feeds an information response message back to the memory management module, the response message carrying the requested data.
8. The system according to claim 7, characterized by further comprising one or more application servers, the web server being directly connected to each of the one or more application servers;
if the web server's query of the disk management module fails, it sends a query request to the application server, obtains the requested data, and feeds it back to the terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410613099.2A CN104363182B (en) | 2014-11-04 | 2014-11-04 | Load-balancing method and system based on two-layer caching |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104363182A CN104363182A (en) | 2015-02-18 |
CN104363182B true CN104363182B (en) | 2018-07-31 |
Family
ID=52530409
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410613099.2A Active CN104363182B (en) | Load-balancing method and system based on two-layer caching | 2014-11-04 | 2014-11-04 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104363182B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111352915A (en) * | 2018-12-20 | 2020-06-30 | 北京奇虎科技有限公司 | Machine learning system, machine learning parameter server and implementation method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101075241A (en) * | 2006-12-26 | 2007-11-21 | 腾讯科技(深圳)有限公司 | Method and system for processing buffer |
CN101576855A (en) * | 2009-06-19 | 2009-11-11 | 深圳市科陆电子科技股份有限公司 | Data storing system and method based on cache |
Also Published As
Publication number | Publication date |
---|---|
CN104363182A (en) | 2015-02-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||