CN109729108A - Method, associated server and system for preventing cache breakdown - Google Patents

Method, associated server and system for preventing cache breakdown

Info

Publication number
CN109729108A
CN109729108A
Authority
CN
China
Prior art keywords
keyword
cache server
server
mapping
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711024741.3A
Other languages
Chinese (zh)
Other versions
CN109729108B (en)
Inventor
李凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Tmall Technology Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201711024741.3A
Publication of CN109729108A
Application granted
Publication of CN109729108B
Legal status: Active


Abstract

An embodiment of the present application provides a method, associated server, and system for preventing cache breakdown. A monitoring server monitors the access-traffic parameters of the cache servers in a cache server cluster; these parameters reflect each cache server's load. The monitoring server dynamically migrates the values of keywords held on heavily loaded cache servers to other cache servers, so that the overall load capacity of the whole cluster is used to the greatest extent, avoiding the cache breakdown that occurs when the momentary load on a single cache server grows too large.

Description

Method, associated server and system for preventing cache breakdown
Technical field
This application relates to the field of data caching technology, and in particular to a method, associated server, and system for preventing cache breakdown.
Background
Caching is a commonly used technique on the server side of websites. In read-heavy, write-light business scenarios, caching can effectively support highly concurrent access and protect back-end data sources such as databases. When a website uses a cache, it usually first checks whether the data exists in the cache; if it does, the cached content is returned directly, and if it does not, the database is queried directly and the query result is then cached. There are many cache products on the market, such as Redis and Memcached, and no matter which one is used, the problem of cache breakdown is essentially unavoidable. How to effectively prevent cache breakdown is therefore a problem that must be solved.
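The read path described above is the standard cache-aside pattern. A minimal sketch, with plain dicts standing in for a cache product such as Redis or Memcached and for the database (all names and data are illustrative, not from the patent):

```python
cache = {}
database = {"user:1": "Alice"}  # stand-in for the back-end data source

def read(key):
    """Return cached content if present; otherwise query the DB and cache the result."""
    if key in cache:
        return cache[key]          # cache hit: return the cached content directly
    value = database.get(key)      # cache miss: query the database directly
    cache[key] = value             # then cache the query result
    return value
```

Under burst traffic, every concurrent miss on the same key takes the database branch at once, which is exactly the breakdown scenario the application targets.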
Summary of the invention
To solve the problem that cache breakdown occurs easily when a cache is used, so that a large number of requests flow to the database and cause a database avalanche, the embodiments of the present application provide a method for preventing cache breakdown, an application server, a monitoring server, and a system. A monitoring server monitors the access-traffic parameters of the cache servers in a cache server cluster; these parameters reflect each cache server's load. The monitoring server dynamically migrates the values of keywords on heavily loaded cache servers to other cache servers, so as to use the overall load capacity of the entire cluster to the greatest extent and avoid the cache breakdown caused when the momentary load on a single cache server becomes too large.
A first aspect of the application provides a method for preventing cache breakdown, applied in a monitoring server and comprising:
monitoring the access-traffic parameter of each cache server in a cache server cluster, the access-traffic parameter including queries per second and/or network transfer traffic per second;
determining a first cache server whose access-traffic parameter exceeds a corresponding preset threshold;
selecting at least one keyword from the first cache server as a keyword to be migrated;
mapping the keyword to be migrated, and, according to the mapped keyword, migrating the value corresponding to the keyword to be migrated from the first cache server to a second cache server in the cache server cluster.
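The four steps of the first aspect can be sketched as follows. The data structures, the hottest-key selection, and the `"#m"` suffix mapping rule are all assumptions for illustration; the patent does not fix a concrete mapping function at this point.

```python
def migrate_hot_keys(servers, qps, threshold, key_qps):
    """servers: {name: {key: value}}; qps: {name: queries per second};
    key_qps: {name: {key: per-key qps}}. Returns a keyword mapping table {old: new}."""
    mapping = {}
    # step 2: determine the first cache servers whose traffic exceeds the preset threshold
    overloaded = [s for s in servers if qps[s] > threshold]
    for first in overloaded:
        # step 3: select the hottest keyword as the keyword to be migrated (assumed policy)
        key = max(servers[first], key=lambda k: key_qps[first].get(k, 0))
        # step 4: map the keyword, then move its value to a second, lighter server
        second = min((s for s in servers if s != first), key=lambda s: qps[s])
        mapped = key + "#m"                          # assumed mapping rule
        servers[second][mapped] = servers[first].pop(key)
        mapping[key] = mapped
    return mapping
```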
A second aspect of the application provides a monitoring server, comprising:
a monitoring module, configured to monitor the access-traffic parameter of each cache server in a cache server cluster, the access-traffic parameter including queries per second and/or network transfer traffic per second;
a determining module, configured to determine a first cache server whose access-traffic parameter exceeds a corresponding preset threshold;
a selecting module, configured to select at least one keyword from the first cache server as a keyword to be migrated;
a migrating module, configured to map the keyword to be migrated and, according to the mapped keyword, migrate the value corresponding to the keyword to be migrated from the first cache server to a second cache server in the cache server cluster.
A third aspect of the application provides a method for preventing cache breakdown, applied in an application server and comprising:
receiving a data request and determining the keyword relevant to the data request;
checking whether a mapped keyword corresponding to the keyword exists in a keyword mapping relation table; the keyword mapping relation table is pushed by a monitoring server and characterizes the mapping relation between migrated keywords and their mapped keywords;
if it exists, accessing the local cache according to the mapped keyword; if the value corresponding to the mapped keyword is not in the local cache, accessing a cache server in the cache server cluster according to the mapped keyword; and feeding the accessed value back to the client.
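The lookup order of the third aspect — mapping relation table first, then local cache, then the cache server cluster — can be sketched as below; the dict-based stand-ins are illustrative assumptions.

```python
def handle_request(key, mapping_table, local_cache, cluster):
    """Resolve a data request; returns the value to feed back to the client."""
    mapped = mapping_table.get(key)
    if mapped is not None:
        # the keyword was migrated: access by the mapped keyword
        if mapped in local_cache:
            return local_cache[mapped]
        return cluster.get(mapped)     # local miss: go to the cache server cluster
    return cluster.get(key)            # not migrated: access by the original keyword
```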
A fourth aspect of the application provides an application server, comprising:
a determining module, configured to receive a data request and determine the keyword relevant to the data request;
a checking module, configured to check whether a mapped keyword corresponding to the keyword exists in a keyword mapping relation table, the keyword mapping relation table being pushed by a monitoring server and characterizing the mapping relation between migrated keywords and their mapped keywords, and, if it exists, to trigger the access module;
an access module, configured to access the local cache according to the mapped keyword, to access a cache server in the cache server cluster according to the mapped keyword if the value corresponding to the mapped keyword is not in the local cache, and to feed the accessed value back to the client.
A fifth aspect of the application provides a system for preventing cache breakdown, the system comprising:
the monitoring server provided by the second aspect, the application server provided by the fourth aspect, and a cache server cluster;
wherein the cache servers in the cache server cluster store data using a key-value structure.
A sixth aspect of the application provides a monitoring server, comprising:
a processor, a memory, a network interface, and a bus system;
the bus system, configured to couple the hardware components of the monitoring server;
the network interface, configured to establish a communication connection between the monitoring server and at least one other server;
the memory, configured to store program instructions;
the processor, configured to read the instructions and/or data stored in the memory and perform the following operations:
monitoring the access-traffic parameter of each cache server in a cache server cluster, the access-traffic parameter including queries per second and/or network transfer traffic per second;
determining a first cache server whose access-traffic parameter exceeds a corresponding preset threshold;
selecting at least one keyword from the first cache server as a keyword to be migrated;
mapping the keyword to be migrated, and, according to the mapped keyword, migrating the value corresponding to the keyword to be migrated from the first cache server to a second cache server in the cache server cluster.
A seventh aspect of the application provides an application server, comprising:
a processor, a memory, a network interface, and a bus system;
the bus system, configured to couple the hardware components of the application server;
the network interface, configured to establish a communication connection between the application server and at least one other server;
the memory, configured to store program instructions;
the processor, configured to read the instructions and/or data stored in the memory and perform the following operations:
receiving a data request and determining the keyword relevant to the data request;
checking whether a mapped keyword corresponding to the keyword exists in a keyword mapping relation table; the keyword mapping relation table is pushed by a monitoring server and characterizes the mapping relation between migrated keywords and their mapped keywords;
if it exists, accessing the local cache according to the mapped keyword; if the value corresponding to the mapped keyword is not in the local cache, accessing a cache server in the cache server cluster according to the mapped keyword; and feeding the accessed value back to the client.
Compared with the prior art, the application has the following advantages:
The application uses a monitoring server to monitor the access-traffic parameter of every cache server in the entire cache server cluster; this parameter reflects the actual load of each cache server. The monitoring server determines the first cache servers whose access-traffic parameters exceed the corresponding preset thresholds, and selects at least one keyword from each first cache server as a keyword to be migrated. That is, the application uses the access-traffic parameter to pick out the more heavily loaded cache servers, selects some keywords on them, and migrates the values corresponding to those keywords away, so as to disperse the access pressure of heavily loaded cache servers onto less loaded ones. The keyword to be migrated is then mapped, and according to the mapped keyword the corresponding value is migrated from the first cache server to a second cache server in the cache server cluster. By means of keyword mapping, the application moves the values of some keywords on heavily loaded cache servers into less loaded cache servers, thereby using the overall load capacity of the entire cluster to the greatest extent and avoiding the cache breakdown caused when the momentary load of a single cache server grows too large.
Of course, a product implementing any method proposed by the application does not necessarily need to achieve all of the above advantages at the same time.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without any creative effort.
Fig. 1 is an example scenario diagram of the application in practical use;
Fig. 2 is a structural diagram of a system for preventing cache breakdown provided by an embodiment of the application;
Fig. 3 is a flowchart of a method for preventing cache breakdown provided by an embodiment of the application;
Fig. 4 is a flowchart of another method for preventing cache breakdown provided by an embodiment of the application;
Fig. 5 is a structural diagram of a monitoring server provided by an embodiment of the application;
Fig. 6 is a structural diagram of an application server provided by an embodiment of the application;
Fig. 7 is a hardware structure diagram of a monitoring server provided by an embodiment of the application;
Fig. 8 is a hardware structure diagram of an application server provided by an embodiment of the application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in those embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in the application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
To facilitate understanding of the technical solution provided by the present application, the research background of the technical solution is first briefly explained below.
During research, the inventor found that when distributed caching is used on the server side of a network service, cache breakdown is often encountered, causing a large number of requests to flow into the database (DB) and bringing about a database avalanche. Cache breakdown is especially likely when distributed caching is used on an e-commerce website. For example, e-commerce websites usually release product promotion campaigns to attract buyers, and such campaigns have a set start time; a large number of users therefore log into the campaign page through a mobile phone APP at the same moment the campaign starts, and at that instant the page's access traffic suddenly surges. To handle the page's data requests, the application server needs to remotely obtain the values corresponding to one or more keywords (keys) from the cache server cluster. If these keys all hash to the same cache server, the page's entire access traffic is pressed onto that single cache server: in the whole distributed deployment only this one cache server is being called while the other machines sit idle, and under the momentary surge in access traffic, i.e. at the access-traffic peak, this cache server can be broken down in an instant, a large number of requests flow to the DB, and the entire business cluster crashes.
In addition, the inventor also found during research that if the value corresponding to a "single key" of the campaign page, or to "multiple keys hashed to the same cache server", is too large, the cache server may trigger network-transfer rate limiting while at work, so that a large number of requests flow to the DB and the entire business cluster crashes.
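Why keys can pile up on one machine is easy to reproduce with simple modulo sharding. The toy hash below is illustrative only and is not the hashing scheme of any particular cache product:

```python
def shard(key, n_servers):
    """Pick a cache server index with simple modulo sharding (toy hash)."""
    return sum(ord(c) for c in key) % n_servers

# Both hypothetical campaign-page keys happen to land on the same server,
# so the page's entire access traffic is pressed onto that one machine.
servers_hit = {shard(k, 4) for k in ["page:17:a", "page:17:e"]}
```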
Moreover, because burst traffic is uncertain and unpredictable, ordinary capacity expansion or manual key adjustment cannot dynamically and in real time solve the cache breakdown caused by burst traffic in different scenarios.
Based on the above research, the application provides a method, associated server, and system for preventing cache breakdown. To help those skilled in the art understand the technical solution provided by the application, its use in practice is first introduced below in combination with an application scenario.
Referring to Fig. 1, which is an example scenario diagram of the application in practical use, the scenario includes a monitoring server 100, a cache server cluster 200, an application server 300, and a client 400. The method for preventing cache breakdown provided by the first aspect of the application can be applied, in the form of an application program, in the monitoring server 100; the method for preventing cache breakdown provided by the third aspect can likewise be applied, in the form of an application program, in the application server 300. The cache server cluster 200 refers to cache servers deployed in a distributed structure. From a hardware point of view, the cache server cluster may be multiple virtual cache servers deployed in a distributed structure on one physical machine, or multiple physical cache servers deployed in a distributed structure across multiple physical machines; of course, when multiple physical machines are used, virtual cache servers may also be deployed on one or more of them. From a hardware point of view, the monitoring server 100 may be a hardware device with data-processing and networking capability, such as a computer or a processor; the application server 300 is a hardware device able to provide business-data support to clients such as application programs (APPs) or browsers, such as a computer or a processor — for example, the application server 300 may be the server of the Taobao APP. In an actual hardware deployment, the monitoring server 100 may be implemented as a server cluster to cope with business scenarios with larger data volumes, or may of course be deployed as a single machine; likewise, the application server 300 may be implemented as a server cluster for business scenarios with larger data volumes or, if the data volume is small, deployed as a single machine. The application places no restriction on the specific number of monitoring servers and application servers; Fig. 1 is only an example scenario to help understand the principle of the application.
When the application is implemented, the monitoring server 100 monitors every cache server in the cache server cluster 200 and, according to each cache server's access-traffic parameter, determines the cache servers whose access-traffic parameters exceed the corresponding preset thresholds as first cache servers; the load on a first cache server can be considered excessively high, making cache breakdown likely. Therefore, some keys are selected from the first cache server as keywords to be migrated, and the values corresponding to these keywords are migrated, by way of keyword mapping, to a less loaded second cache server. For example, the monitoring server 100 determines that cache server 2001 is a first cache server and therefore migrates the value1 corresponding to key1 on this first cache server to the second cache server 2002: specifically, key1 is keyword-mapped into key11, and the value1 corresponding to key1 on the first cache server is then cached in the second cache server 2002 as the key-value pair key11-value1. In this way, through monitoring, key-value-pair migration, and similar operations, the monitoring server 100 relieves the load pressure of the first cache server 2001 and keeps it from being broken down. After completing the mapping, the monitoring server pushes the mapping relation table between key1 and key11 to the application server 300, so that when the application server 300 receives a data request sent by the client 400, if the request needs the value1 corresponding to key1, the application server 300 uses key11 to access cache server 2002 according to the key1-key11 mapping relation table, obtains the corresponding value1, and feeds value1 back to the client 400 as the access result for key1. The client 400 may take the form of an application program (APP) loaded in a terminal, or of a browser or the like, to realize its function.
It can be seen that, in practical use, the application has the monitoring server measure the load of each cache server in the cache server cluster by its access-traffic parameter, and dynamically distributes the key-value pairs on more heavily loaded cache servers to less loaded cache servers through key mapping, thereby preventing a single cache server from suffering cache breakdown because of a momentary surge in access traffic.
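The Fig. 1 walkthrough can be reproduced end to end in a few lines; the dicts below are illustrative stand-ins for the two cache servers and the pushed mapping relation table:

```python
server_2001 = {"key1": "value1"}   # first (overloaded) cache server
server_2002 = {}                   # second (lighter) cache server

# monitoring server: map key1 -> key11 and move the key-value pair
mapping_table = {"key1": "key11"}
server_2002["key11"] = server_2001.pop("key1")

# application server: resolve a client request for key1 via the pushed table
def resolve(key):
    mapped = mapping_table.get(key, key)
    return server_2002.get(mapped, server_2001.get(mapped))
```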
To fit the above usage scenario, an embodiment of the application provides a system for preventing cache breakdown, which is explained below in combination with Fig. 2.
Referring to Fig. 2, Fig. 2 shows a system for preventing cache breakdown provided by an embodiment of the application, the system comprising:
a monitoring server 201, an application server 202, and a cache server cluster 203.
Here the monitoring server 201 is configured to monitor the access-traffic parameter of each cache server in the cache server cluster, the access-traffic parameter including queries per second and/or network transfer traffic per second; to determine a first cache server whose access-traffic parameter exceeds the corresponding preset threshold; to select at least one keyword from the first cache server as a keyword to be migrated; and to map the keyword to be migrated and, according to the mapped keyword, migrate the value corresponding to the keyword to be migrated from the first cache server to a second cache server in the cache server cluster. Optionally, for the data processing of the monitoring server 201, refer to the description of the method embodiment shown in Fig. 3 below.
In addition, the monitoring server may also be configured to push a keyword mapping relation table to the application server, the keyword mapping relation table recording the mapping relation between keywords to be migrated and their mapped keywords.
The application server 202 is configured to receive a data request and determine the keyword relevant to the data request; to check whether a mapped keyword corresponding to the keyword exists in the keyword mapping relation table, the keyword mapping relation table being pushed by the monitoring server and characterizing the mapping relation between migrated keywords and their mapped keywords; and, if it exists, to access the local cache according to the mapped keyword, to access a cache server in the cache server cluster according to the mapped keyword if the value corresponding to the mapped keyword is not in the local cache, and to feed the accessed value back to the client. Optionally, for the data processing of the application server 202, refer to the description of the method embodiment shown in Fig. 4 below.
The cache servers in the cache server cluster 203 are configured to cache data in a key-value structure. Optionally, the cache server cluster may be deployed on a memcached architecture or on a redis architecture.
Here, key-value is a non-relational data model: the cache server cluster organizes, indexes, and stores data in the form of key-value pairs. Key-value storage is very useful for business data that does not involve many data relations or business relations; it can effectively reduce the number of disk reads and writes and offers better read-write performance.
With a system for preventing cache breakdown provided by an embodiment of the application, a monitoring server monitors the cache server cluster, measures each cache server's load by its access-traffic parameter, and dynamically distributes the key-value pairs on more heavily loaded cache servers to less loaded cache servers through key mapping, thereby preventing the cache breakdown of a single cache server caused by a momentary surge in access traffic. Moreover, the application server is notified of the dynamic changes in the cache by way of the keyword mapping relation table, which guarantees business continuity and real-time operation.
A method for preventing cache breakdown provided by an embodiment of the application is explained below with reference to Fig. 3.
Referring to Fig. 3, Fig. 3 is a flowchart of a method for preventing cache breakdown provided by an embodiment of the application. The method can be applied in a monitoring server in the form of an application program and may comprise the following steps:
301. The monitoring server monitors the access-traffic parameter of each cache server in the cache server cluster, the access-traffic parameter including queries per second and/or network transfer traffic per second.
In a specific implementation, the monitoring server communicates over the network with every cache server in the cache server cluster to obtain each cache server's access-traffic parameter.
In an optional implementation, each cache server actively reports its own access-traffic parameter to the monitoring server. For example, each cache server may periodically report its own queries per second and/or network transfer traffic per second to the monitoring server according to a preset period; the preset period may be a time unit on the order of seconds or minutes, for example 1 second or 1 minute. This approach effectively saves network resources, since the monitoring server only needs to receive the access-traffic parameters on schedule.
In another optional implementation, the monitoring server actively sends an access-traffic-parameter query request to every cache server in the cache server cluster, and each cache server, on receiving the query request, responds by feeding its access-traffic parameter back to the monitoring server. The monitoring server may send the query request to every cache server in the cluster in a broadcast manner. In addition, the monitoring server can, according to business needs, monitor the cache server cluster within a certain period of time and obtain the access-traffic parameters by actively sending requests. This approach strengthens the monitoring server's control over the caching situation of the whole distributed deployment, since the monitoring server can actively obtain the cache servers' access-traffic parameters only when the business requires it.
Here, a cache server's access-traffic parameter is determined from the access-traffic parameters of the individual keys on that cache server. Specifically, a cache server's queries per second (qps) is the sum of the qps of every key-value pair on that cache server, and a cache server's network transfer traffic per second is the sum of the network transfer traffic per second of all its key-value pairs, where each key's network transfer traffic per second is the product of that key's qps and the storage size occupied by its value; the network transfer traffic per second is also called the network transfer size per second. It can be seen that each cache server's access-traffic parameter measures that cache server's access pressure, i.e. its load pressure.
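The per-server accounting described above, as a small worked example; the per-key figures are made up for illustration:

```python
keys = {
    # key: (qps, value size in bytes)
    "key1": (500, 2048),
    "key2": (120, 512),
}

# server qps = sum of per-key qps
server_qps = sum(q for q, _ in keys.values())
# server network transfer per second = sum over keys of qps * value size
server_traffic = sum(q * size for q, size in keys.values())   # bytes per second
```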
302. Determine a first cache server whose access-traffic parameter exceeds the corresponding preset threshold.
While the cache server cluster is at work, if the momentary access pressure on some cache server becomes too large, that cache server suffers cache breakdown, a large number of requests are momentarily directed to the DB, and a DB avalanche follows. Therefore, in this embodiment of the application, the monitoring server uses step 302 to discover in time the cache servers on which cache breakdown is likely to occur.
In a specific implementation, a corresponding preset threshold is set in advance for each access-traffic parameter; the preset thresholds here are for a single cache server. The preset thresholds corresponding to queries per second and to network transfer traffic per second are explained separately below.
Specifically, the preset threshold corresponding to queries per second is determined from the single-cache-server upper limit of queries per second, which is the theoretical upper limit of queries per second determined by the hardware and software capability of a single server. To discover, in time and at the right moment, the cache servers on which cache breakdown is about to occur, the setting of this preset threshold is extremely important. The embodiment of the application therefore proposes an implementation in which the preset threshold corresponding to queries per second is the product of the single-cache-server upper limit of queries per second and an over-limit ratio. Optionally, the over-limit ratio is a value between 0.5 and 1, for example 0.8; of course, the over-limit ratio may also be a value less than 0.5.
Specifically, the preset threshold corresponding to network transfer traffic per second is determined from the single-cache-server upper limit of network transfer traffic per second, which is the theoretical upper limit of network transfer traffic per second determined by the hardware and software capability of a single server. To discover, in time and at the right moment, the cache servers on which cache breakdown is about to occur, the setting of this preset threshold is extremely important. The embodiment of the application therefore proposes an implementation in which the preset threshold corresponding to network transfer traffic per second is the product of the single-cache-server upper limit of network transfer traffic per second and an over-limit ratio. Optionally, the over-limit ratio is a value between 0.5 and 1, for example 0.8; of course, the over-limit ratio may also be a value less than 0.5.
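Both threshold rules reduce to the same formula: preset threshold = single-server upper limit × over-limit ratio. A sketch with assumed example limits (the upper-limit figures are illustrative, not from the patent):

```python
def preset_threshold(upper_limit, over_limit_ratio=0.8):
    """Threshold above which a cache server is treated as at risk of breakdown."""
    return upper_limit * over_limit_ratio

qps_threshold = preset_threshold(100_000)              # queries per second
traffic_threshold = preset_threshold(1_250_000_000)    # bytes/s, assumed NIC limit
```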
In a specific implementation, the monitoring server may determine one first cache server from the cache server cluster, or may determine at least two first cache servers; the monitoring server then performs the following steps 303 and 304 for each first cache server, thereby dispersing the pressure on a first cache server under heavy load to a second cache server under lighter load.
303. Select at least one keyword from the first cache server as a keyword to be migrated;
In a specific implementation, the monitoring server selects one keyword, or at least two keywords, from the cache of the first cache server as keywords to be migrated, so that the values corresponding to the selected keywords can be migrated to a second cache server, thereby relieving the load pressure on the first cache server.
In practice, the embodiment of the present application provides the following two optional implementations of step 303, each of which is explained below.
A first optional implementation comprises:
the monitoring server sorts the keywords in the first cache server in descending order of the flow-of-access parameter corresponding to each keyword;
the monitoring server selects the top M keywords in the sorted order as the keywords to be migrated, where M is a positive integer greater than or equal to 1.
In this optional implementation, if the flow-of-access parameter is queries per second, the keys cached in the first cache server are sorted in descending order of the queries per second corresponding to each key; if the flow-of-access parameter is per-second network transmission traffic, the keys cached in the first cache server are sorted in descending order of the per-second network transmission traffic corresponding to each key. The top M keys in the sorted order are then selected as the keys to be migrated, and the values corresponding to these keys are migrated to the second cache server.
With this implementation, the keys that contribute most to the load on the first cache server are selected preferentially, which quickly and effectively relieves the load pressure on the first cache server and prevents it from suffering cache breakdown in subsequent work.
A second optional implementation comprises:
sorting the keywords in the first cache server in ascending order of the flow-of-access parameter corresponding to each keyword;
selecting the top N keywords in the sorted order as the keywords to be migrated, where N is a positive integer greater than or equal to 1.
With this implementation, keys that contribute relatively little to the load on the first cache server are selected preferentially. The purpose of selecting in this way is to prevent the migration of the corresponding values from placing excessive pressure on the second cache server: the load pressure on the first cache server is still relieved, while the working performance of the second cache server is not significantly affected.
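Both selection strategies above reduce to sorting the cached keys by their per-key flow-of-access parameter and taking the head of the list; only the sort direction differs. A minimal sketch, with illustrative key names and traffic figures:

```python
def pick_keys_to_migrate(key_traffic, count, hottest_first=True):
    """key_traffic maps each cached key to its flow-of-access parameter
    (queries per second or per-second bytes).  hottest_first=True is the
    first strategy (top-M by descending traffic); False is the second
    strategy (top-N by ascending traffic)."""
    ranked = sorted(key_traffic, key=key_traffic.get, reverse=hottest_first)
    return ranked[:count]

stats = {"sku:1": 900, "sku:2": 120, "sku:3": 4500}
hottest = pick_keys_to_migrate(stats, count=1)                       # ["sku:3"]
coolest = pick_keys_to_migrate(stats, count=1, hottest_first=False)  # ["sku:2"]
```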
The monitoring server selects the keys to be migrated for each first cache server, and then performs the migration operation of step 304.
304. Map the keyword to be migrated, and, according to the mapped keyword, migrate the value corresponding to the keyword to be migrated from the first cache server to a second cache server in the cache server cluster.
In a specific implementation, if the monitoring server selects only one key to be migrated for a first cache server, the value corresponding to that key is migrated to one second cache server; if the monitoring server selects at least two keys to be migrated for a first cache server, the values corresponding to those keys may all be migrated to one second cache server, or may be migrated separately to at least two second cache servers.
Since the monitoring server may determine the first cache server from the cache server cluster according to queries per second, according to per-second network transmission traffic, or according to both taken together, the monitoring server correspondingly needs to determine the second cache server according to the corresponding flow-of-access parameter.
On this basis, the embodiment of the present application provides several optional implementations for determining the second cache server, each of which is explained below.
For the case where the monitoring server determines the first cache server according to queries per second, one optional implementation comprises:
sorting, in ascending order of queries per second, the cache servers in the cache server cluster whose queries per second do not exceed the preset per-second query-rate threshold;
selecting at least one cache server at the top of the sorted order as the second cache server.
For the case where the monitoring server determines the first cache server according to per-second network transmission traffic, one optional implementation comprises:
sorting, in ascending order of per-second network transmission traffic, the cache servers in the cache server cluster whose per-second network transmission traffic does not exceed the preset per-second network-transmission-traffic threshold;
selecting at least one cache server at the top of the sorted order as the second cache server.
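The determinations above follow the same pattern: exclude servers already over the threshold, sort the rest in ascending order of load, and take the head of the list. A sketch under that reading, with hypothetical server names and loads:

```python
def pick_second_servers(server_load, first_server, threshold, count=1):
    """Candidates are servers other than the overloaded first server whose
    load does not exceed the preset threshold; least-loaded come first."""
    candidates = [s for s, load in server_load.items()
                  if s != first_server and load <= threshold]
    candidates.sort(key=server_load.get)
    return candidates[:count]

# Hypothetical per-server QPS readings; cacheA is the overloaded first server.
qps = {"cacheA": 48_000, "cacheB": 9_000, "cacheC": 21_000}
targets = pick_second_servers(qps, first_server="cacheA", threshold=40_000)
```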
There is also a case in which the flow-of-access parameter includes both queries per second and per-second network transmission traffic;
in that case, step 302 is specifically:
taking, as the first cache server, a cache server whose queries per second exceed the preset per-second query-rate threshold and whose per-second network transmission traffic exceeds the preset per-second network-transmission-traffic threshold;
and the second cache server may also be determined in the following manner:
sorting the other cache servers in the cache server cluster, excluding the first cache server, in ascending order of the product of queries per second and per-second network transmission traffic;
selecting at least one cache server at the top of the sorted order as the second cache server.
In the above optional implementations, the monitoring server excludes the determined first cache server from the cache server cluster, sorts the remaining cache servers in ascending order of the flow-of-access parameter, and selects a top-ranked cache server from the remaining servers as the second cache server; preferably, the first-ranked cache server is selected as the second cache server. Alternatively, when multiple keys to be migrated have been selected from the first cache server, the monitoring server selects at least two top-ranked cache servers as second cache servers. A second cache server selected in this way carries a relatively small load and can share the access pressure of the first cache server.
In a specific implementation, in addition to the above implementations, the embodiment of the present application also proposes another optional implementation, which comprises:
selecting, according to the working performance of the cache servers in the cache server cluster, a cache server with better working performance as the second cache server. The working performance of a cache server includes CPU usage, storage-space utilization, remaining storage-space size, and so on.
For example, the other cache servers in the cache server cluster, excluding the first cache server, are sorted in ascending order of CPU usage, and at least one cache server at the top of the sorted order is selected as the second cache server.
Next, the implementation of mapping the keyword to be migrated in step 304 is explained.
In a specific implementation, the monitoring server may map the keyword to be migrated according to a mapping rule, the mapping rule being to append, at the tail of the keyword to be migrated, a character that causes the key to be hashed to the second cache server.
It should be explained here that this application places no restriction on the concrete form of the mapping rule, as long as the mapping rule guarantees that, after a key in the first cache server is mapped to key', the value corresponding to the key can be re-hashed according to key' into the designated second cache server. However, some cache server clusters employ a hash rule such as taking the last character of the key, modulo the number of cache servers in the cluster, as the hash function. On this basis, and in order to be better compatible with such cache server clusters, the embodiment of the present application proposes an optional mapping rule: appending, at the tail of the key, one character that causes the mapped key to be hashed to the second cache server.
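Under the example hash rule just described (last character of the key, modulo the cluster size), the mapping can be realized by searching for a single suffix character that lands on the desired second cache server. A sketch with an assumed four-server cluster; the hash rule, alphabet, and key names are illustrative:

```python
N_SERVERS = 4  # assumed cluster size

def shard_of(key):
    # Illustrative cluster hash rule from the text: last character of the
    # key, modulo the number of cache servers.
    return ord(key[-1]) % N_SERVERS

def map_key(key, target_shard):
    """Append one tail character so the mapped key' hashes to target_shard."""
    for c in "abcdefghijklmnopqrstuvwxyz0123456789":
        if ord(c) % N_SERVERS == target_shard:
            return key + c
    raise ValueError("no suffix character reaches the target shard")

mapped = map_key("hot:item:42", target_shard=1)
```

The monitoring server would record the `key -> key'` pair in the mapping-relation table and store the value under `key'`, which the cluster's own hash rule then routes to the chosen second cache server.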
According to the mapped keys, the monitoring server hashes the values corresponding to some of the keys in the first cache server into the second cache server, thereby relieving the access pressure on the first cache server, making maximal use of the overall load capacity of the entire cache server cluster, and avoiding the problem of cache breakdown caused by a momentarily heavy load on a single cache server.
In order that the business system can synchronize with dynamic changes in the cache server cluster in real time and continue to work normally, the monitoring server may also notify the application server of the dynamic cache adjustment; on the basis of the above method, the following step may therefore be added:
the monitoring server pushes a keyword mapping-relation table to the application server, the keyword mapping-relation table recording the mapping relations between the keywords to be migrated and the mapped keywords.
In a specific implementation, the monitoring server generates the keyword mapping-relation table after completing the key mapping and pushes it to each application server in the business system, so that the application servers learn of the key–value migration in a timely manner, thereby guaranteeing that the server cluster continues to provide normal business-data support to the application servers of the business system.
The method for preventing cache breakdown provided by the embodiment of the present application, as applied in the monitoring server, has been explained above.
Another method for preventing cache breakdown provided by the embodiment of the present application is explained below.
Referring to Fig. 4, Fig. 4 is a flow chart of another method for preventing cache breakdown provided by the embodiment of the present application. The method may be applied, in the form of an application program, in an application server, which may be a web application server or an APP application server; the application server communicates with the monitoring server and with the cache server cluster respectively. As shown in Fig. 4, the method comprises the following steps:
401. The application server receives a data request and determines a keyword relevant to the data request;
In a specific implementation, a client sends a data request to the application server, the data request containing a page URL and business-relevant parameters such as a service identifier and a page slot identifier; according to the information carried in the data request, the application server is able to determine the keyword (key) relevant to the data request.
The application server may receive a large number of identical data requests sent by different clients at the same time, but the application server is required to determine the relevant keyword for each data request.
For example, e-commerce websites usually release product promotion activities to attract users to purchase. A promotion often has a fixed activity time, so a large number of users log in to the activity page through the browsers of their personal terminals at the moment the promotion starts; the application server then receives, from the personal terminals of different users via their browsers, data requests relevant to the activity page, and determines the relevant key for each data request. Subsequently, the application server accesses the corresponding value according to the key and feeds the accessed value back to the client.
In the embodiment of the present application, after the application server determines the key, it does not directly access the corresponding value from the local cache according to that key; it first looks in the keyword mapping-relation table for a mapped key' corresponding to the key, that is, it first judges whether the value corresponding to the key has been migrated, within the cache server cluster, from the source cache server to another cache server. After performing step 401, the application server therefore needs to perform step 402.
402. The application server checks whether a mapped keyword corresponding to the keyword exists in the keyword mapping-relation table; the keyword mapping-relation table is pushed by the monitoring server and characterizes the mapping relations between the migrated keywords and the mapped keywords;
The keyword mapping-relation table is a relation table generated by the monitoring server according to the map operation performed on the keys, and contains the mapping relations between the keys whose values have been migrated and the mapped keys key'. For how the monitoring server performs the mapping and generates this relation table, reference may be made to the relevant description in the method embodiment shown in Fig. 3 above, which is not repeated here.
It should be noted that, after the application server receives the keyword mapping-relation table, it stores the table for use when processing data requests.
If the check result is no, the key relevant to the data request has not been mapped and the value corresponding to the key has not been migrated; the application server then directly accesses the corresponding value from the local cache according to the key. If the value is not in the local cache, the application server communicates with the cache servers in the cache server cluster, accesses the value corresponding to the key from the cluster, and feeds the accessed value back to the client.
If the check result is yes, the key relevant to the data request has been mapped and the value corresponding to the key has been migrated; the application server then performs step 403.
403. If the mapped keyword exists, the application server accesses the local cache according to the mapped keyword; if the value corresponding to the mapped keyword is not in the local cache, the application server accesses a cache server in the cache server cluster according to the mapped keyword, and feeds the accessed value back to the client.
In a specific implementation, what the client actually wants to access is the key, but the key has been mapped to key', and the monitoring server has migrated the value corresponding to the key from the source cache server to another cache server according to the mapped key'. That is, the value corresponding to the key can only be accessed through key'. The application server therefore reassigns the key as key' and first accesses the local cache using key'. If the value corresponding to key' exists in the local cache, the accessed value is fed back to the client; if not, the application server accesses the cache server cluster, obtains the corresponding value from the cache server that holds the value corresponding to key', and feeds the accessed value back to the client.
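Steps 401–403 amount to a lookup order: remap the key if the pushed mapping-relation table contains it, then try the local cache before falling back to the cluster. A minimal sketch; `cluster_get` stands in for the call to the cache server cluster, and all names are illustrative:

```python
def handle_request(key, mapping_table, local_cache, cluster_get):
    """Resolve a client key to its value per steps 401-403."""
    key = mapping_table.get(key, key)  # step 402: use key' when mapped
    value = local_cache.get(key)       # step 403: local cache first
    if value is None:
        value = cluster_get(key)       # fall back to the cluster
    return value

table = {"hot:item:42": "hot:item:42a"}    # pushed by the monitoring server
cluster = {"hot:item:42a": "blue widget"}  # value now lives under the mapped key
result = handle_request("hot:item:42", table, local_cache={}, cluster_get=cluster.get)
```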
With the method provided by the embodiment of the present application, the application server can quickly find, through the keyword mapping, the keys whose values have been migrated, and then quickly access the corresponding values according to the mapped keys. In this keyword-mapping manner, key–value pairs that have been migrated within the distributed servers to prevent cache breakdown remain accessible in real time, without changing the data-access logic of the client and without affecting the storage logic of the distributed servers. The overall load capacity of the entire cache server cluster is thereby used to the greatest extent, avoiding the problem of cache breakdown caused by a momentarily heavy load on a single cache server.
In practical applications, momentary high-concurrency requests for a single key easily cause cache breakdown. For example, on e-commerce websites there are often time-limited promotions, during which a large number of users logging in to the activity page simultaneously readily produces momentary high-concurrency data requests; this may momentarily break down the cache server storing the value of the key relevant to those data requests. To alleviate this problem, the embodiment of the present application also provides an optional solution. Specifically, when accessing the cache server cluster using the mapped keyword, the application server may also judge whether the flow-of-access parameter corresponding to the mapped keyword exceeds a preset threshold, that is, judge whether the value corresponding to the keyword is hot-spot data; if it is hot-spot data, the value corresponding to the keyword is preferentially cached in the local cache. In a concrete implementation, the following steps may be added on the basis of the method shown in Fig. 4:
The application server judges whether the flow-of-access parameter corresponding to the mapped keyword exceeds the corresponding preset threshold, the flow-of-access parameter including queries per second and/or per-second network transmission traffic;
if so, the value corresponding to the mapped keyword obtained from the cache server cluster is cached in the local cache.
The preset threshold includes:
a first preset threshold corresponding to the queries per second of the keyword, the first preset threshold being determined according to the per-second query-rate upper limit of a single cache server in the cache server cluster and the total number of application servers in the business system;
a second preset threshold corresponding to the per-second network transmission traffic of the keyword, the second preset threshold being determined according to the per-second network-transmission-traffic upper limit of a single cache server in the cache server cluster and the total number of application servers in the business system.
Here, the per-second query-rate upper limit of a single cache server is the theoretical upper value of queries per second determined by the software and hardware capabilities of the single server, and the per-second network-transmission-traffic upper limit of a single cache server is the theoretical upper value of per-second network transmission traffic determined by the software and hardware capabilities of the single server. The aim is to discover, at a suitable moment and in a timely and effective manner, the hot-spot keys that will lead to cache breakdown, that is, the keys whose data requests will break through the processing capacity of a single cache server. The setting of the preset thresholds is explained below by way of example.
For example, the first preset threshold may take the value S1*T1/G, the ratio between the product of the per-second query-rate upper limit S1 of a single cache server and an overload ratio T1, and the total number G of application servers in the business system. Optionally, the overload ratio T1 is a value between 0.5 and 1, for example 0.8; of course, the overload ratio T1 may also be a value less than 0.5.
For example, the second preset threshold may take the value S2*T2/G, the ratio between the product of the per-second network-transmission-traffic upper limit S2 of a single cache server and an overload ratio T2, and the total number G of application servers in the business system. Optionally, the overload ratio T2 is a value between 0.5 and 1, for example 0.8; of course, the overload ratio T2 may also be a value less than 0.5.
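The two example thresholds above divide a single cache server's overload-adjusted capacity evenly across the G application servers, since each application server observes only its share of the cluster-wide traffic for a key. A sketch with illustrative figures:

```python
def hotspot_thresholds(s1, s2, g, t1=0.8, t2=0.8):
    """First and second preset thresholds S1*T1/G and S2*T2/G.

    s1: per-second query-rate upper limit of one cache server;
    s2: its per-second network-transmission-traffic upper limit;
    g:  total number of application servers in the business system;
    t1, t2: overload ratios (0.8 matches the text's example).
    """
    return s1 * t1 / g, s2 * t2 / g

# Hypothetical: 50,000 QPS and 1 GB/s capacity, 10 application servers.
qps_th, net_th = hotspot_thresholds(s1=50_000, s2=1_000_000_000, g=10)
```

A key whose per-application-server traffic exceeds either threshold would be treated as a hot-spot key and cached locally.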
In this way, while responding to the data requests of clients, the application server can discover in time the hot-spot keys that may lead to cache breakdown and preferentially cache them in the local cache. Because hot-spot keys are discovered in time and stored in the local cache, a subsequently received data request for a hot-spot key can be answered by reading the value directly from the local cache and feeding it back to the client; momentary high-concurrency requests for a hot-spot key are thus intercepted at the local cache. That is, the application server actively shares the access pressure of the cache server, thereby evading the risk that high-concurrency requests for a single key break down the cache.
Corresponding to the method shown in Fig. 3, the embodiment of the present application also provides a monitoring server, which is explained below with reference to Fig. 5.
Referring to Fig. 5, Fig. 5 is a structural diagram of a monitoring server provided by the embodiment of the present application; the monitoring server includes:
a monitoring module 501, configured to monitor the flow-of-access parameter corresponding to each cache server in the cache server cluster, the flow-of-access parameter including queries per second and/or per-second network transmission traffic;
a determining module 502, configured to determine a first cache server whose flow-of-access parameter exceeds the corresponding preset threshold;
a selecting module 503, configured to select at least one keyword from the first cache server as a keyword to be migrated;
a migrating module 504, configured to map the keyword to be migrated and, according to the mapped keyword, migrate the value corresponding to the keyword to be migrated from the first cache server to a second cache server in the cache server cluster.
It should be explained here that, for the specific implementation of each functional module in the monitoring server, reference may be made to the description of the method embodiment shown in Fig. 3 above, which is not repeated here.
Corresponding to the method shown in Fig. 4, the embodiment of the present application also provides an application server, which is explained below with reference to Fig. 6.
Referring to Fig. 6, Fig. 6 is a structural diagram of an application server provided by the embodiment of the present application; the application server includes:
a determining module 601, configured to receive a data request and determine a keyword relevant to the data request;
a checking module 602, configured to check whether a mapped keyword corresponding to the keyword exists in the keyword mapping-relation table, the keyword mapping-relation table being pushed by the monitoring server and characterizing the mapping relations between the migrated keywords and the mapped keywords, and, if the mapped keyword exists, to trigger the access module;
an access module 603, configured to access the local cache according to the mapped keyword and, if the value corresponding to the mapped keyword is not in the local cache, to access a cache server in the cache server cluster according to the mapped keyword and feed the accessed value back to the client.
It should be explained here that, for the specific implementation of each functional module in the application server, reference may be made to the description of the method embodiment shown in Fig. 4 above, which is not repeated here.
Fig. 7 is a hardware structural diagram of a monitoring server 700 provided by the embodiment of the present application. In this embodiment, the monitoring server 700 may specifically include: a processor 701, a memory 702, a network interface 703, and a bus system 704.
The bus system 704 is configured to couple the hardware components of the monitoring server 700 together.
The network interface 703 is configured to implement a communication connection between the monitoring server 700 and at least one other server; the communication connection may be implemented via the Internet, a wide area network, a local area network, a metropolitan area network, or the like.
The memory 702 is configured to store program instructions and/or data.
The processor 701 is configured to read the instructions stored in the memory 702 and perform the following operations:
monitoring the flow-of-access parameter corresponding to each cache server in the cache server cluster, the flow-of-access parameter including queries per second and/or per-second network transmission traffic;
determining a first cache server whose flow-of-access parameter exceeds the corresponding preset threshold;
selecting at least one keyword from the first cache server as a keyword to be migrated;
mapping the keyword to be migrated and, according to the mapped keyword, migrating the value corresponding to the keyword to be migrated from the first cache server to a second cache server in the cache server cluster.
It should be noted that, for the specific implementation of each operation performed by the processor 701, reference may be made to the steps of the method embodiment shown in Fig. 3 above, which are not repeated here.
Fig. 8 is a hardware structural diagram of an application server 800 provided by the embodiment of the present application. In this embodiment, the application server 800 may specifically include: a processor 801, a memory 802, a network interface 803, and a bus system 804.
The bus system 804 is configured to couple the hardware components of the application server together;
the network interface 803 is configured to implement a communication connection between the application server and at least one other server;
the memory 802 is configured to store program instructions;
the processor 801 is configured to read the instructions stored in the memory and perform the following operations:
receiving a data request and determining a keyword relevant to the data request;
checking whether a mapped keyword corresponding to the keyword exists in the keyword mapping-relation table, the keyword mapping-relation table being pushed by the monitoring server and characterizing the mapping relations between the migrated keywords and the mapped keywords;
if the mapped keyword exists, accessing the local cache according to the mapped keyword and, if the value corresponding to the mapped keyword is not in the local cache, accessing a cache server in the cache server cluster according to the mapped keyword and feeding the accessed value back to the client.
It should be noted that, for the specific implementation of each operation performed by the processor 801, reference may be made to the steps of the method embodiment shown in Fig. 4 above, which are not repeated here.
It should also be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. Since the device-class embodiments are basically similar to the method embodiments, they are described relatively simply, and reference may be made to the corresponding parts of the method embodiments for relevant points.
Finally, it should be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or device. In the absence of further limitation, an element qualified by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The method, associated servers, and system for preventing cache breakdown provided by this application have been introduced in detail above. Specific examples are used herein to illustrate the principles and implementations of the application, and the above description of the embodiments is merely intended to help in understanding the methods of the application and their core idea. At the same time, for those skilled in the art, there will be changes in the specific implementation and application scope according to the idea of the application. In conclusion, the content of this specification should not be construed as limiting the application.

Claims (18)

1. A method for preventing cache breakdown, characterized by comprising:
monitoring a flow-of-access parameter corresponding to each cache server in a cache server cluster, the flow-of-access parameter including queries per second and/or per-second network transmission traffic;
determining a first cache server whose flow-of-access parameter exceeds a corresponding preset threshold;
selecting at least one keyword from the first cache server as a keyword to be migrated;
mapping the keyword to be migrated and, according to the mapped keyword, migrating a value corresponding to the keyword to be migrated from the first cache server to a second cache server in the cache server cluster.
2. The method according to claim 1, characterized in that the flow-of-access parameter includes queries per second;
the method then further comprises determining the second cache server in the following manner:
sorting, in ascending order of queries per second, the cache servers in the cache server cluster whose queries per second do not exceed a preset per-second query-rate threshold;
selecting at least one cache server at the top of the sorted order as the second cache server.
3. The method according to claim 1, characterized in that the flow-of-access parameter includes per-second network transmission traffic;
the method then further comprises determining the second cache server in the following manner:
sorting, in ascending order of per-second network transmission traffic, the cache servers in the cache server cluster whose per-second network transmission traffic does not exceed a preset per-second network-transmission-traffic threshold;
selecting at least one cache server at the top of the sorted order as the second cache server.
4. The method according to claim 1, wherein the access traffic parameter comprises queries per second and network transmission traffic per second;
the determining a first cache server whose access traffic parameter exceeds a corresponding preset threshold specifically comprises:
determining a first cache server whose queries per second exceed a preset queries-per-second threshold and whose network transmission traffic per second exceeds a preset network-transmission-traffic-per-second threshold;
the method further comprises determining the second cache server in the following manner:
sorting, in ascending order of the product of queries per second and network transmission traffic per second, the cache servers in the cache server cluster other than the first cache server;
selecting at least one top-ranked cache server as the second cache server.
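The second-server selection of claims 2 to 4 amounts to filtering out overloaded candidates, sorting the rest in ascending order of load, and taking the front of the list. A minimal sketch, assuming dict-based server records and an illustrative threshold:

```python
# Illustrative sketch of claims 2-4: pick the second cache server by sorting
# eligible candidates in ascending order of queries per second.
def pick_second_server(cluster, first_name, qps_limit=20_000):
    candidates = [
        s for s in cluster
        if s["name"] != first_name and s["qps"] <= qps_limit
    ]
    # ascending order: the least-loaded eligible server ranks first
    candidates.sort(key=lambda s: s["qps"])
    return candidates[0] if candidates else None

cluster = [
    {"name": "cache-1", "qps": 30_000},
    {"name": "cache-2", "qps": 5_000},
    {"name": "cache-3", "qps": 2_000},
]
second = pick_second_server(cluster, first_name="cache-1")
```

Claim 4's variant would sort on the product of queries per second and network transmission traffic per second instead of queries per second alone.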
5. The method according to any one of claims 2 to 4, wherein the selecting at least one top-ranked cache server as the second cache server comprises:
selecting the first-ranked cache server as the second cache server.
6. The method according to claim 1, wherein the selecting at least one keyword from the first cache server as a keyword to be migrated comprises:
sorting the keywords in the first cache server in descending order of the access traffic parameter corresponding to each keyword;
selecting the top M keywords as keywords to be migrated, where M is a positive integer greater than or equal to 1.
7. The method according to claim 1, wherein the selecting at least one keyword from the first cache server as a keyword to be migrated comprises:
sorting the keywords in the first cache server in ascending order of the access traffic parameter corresponding to each keyword;
selecting the top N keywords as keywords to be migrated, where N is a positive integer greater than or equal to 1.
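Claims 6 and 7 differ only in sort direction: claim 6 migrates the M hottest keywords, claim 7 the N coldest. A sketch, assuming a per-keyword QPS map (the keys and counts are illustrative):

```python
# Sketch of claims 6 and 7: rank keywords by their per-keyword access traffic
# and take the front of the ranking.
def keys_to_migrate(key_qps, count, hottest=True):
    """Return `count` keywords; hottest-first when hottest=True, coldest-first otherwise."""
    ranked = sorted(key_qps, key=key_qps.get, reverse=hottest)
    return ranked[:count]

key_qps = {"sku:1": 9_000, "sku:2": 50, "sku:3": 4_000}
hot = keys_to_migrate(key_qps, 1, hottest=True)    # claim 6: top-M hottest
cold = keys_to_migrate(key_qps, 2, hottest=False)  # claim 7: top-N coldest
```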
8. The method according to claim 1, wherein:
the queries per second corresponding to a cache server is the sum of the queries per second corresponding to all keywords in the cache server;
the network transmission traffic per second corresponding to a cache server is the sum of the network transmission traffic per second corresponding to all keywords in the cache server; wherein the network transmission traffic per second corresponding to each keyword is the product of the queries per second corresponding to the keyword and the storage space occupied by the value corresponding to the keyword.
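The aggregates in claim 8 are straightforward sums and products. A worked example with illustrative byte counts:

```python
# Worked example of claim 8: a server's queries per second is the sum over its
# keywords, and each keyword's network traffic per second is its QPS times the
# storage size of its value.
keywords = {
    "k1": {"qps": 100, "value_bytes": 512},
    "k2": {"qps": 40,  "value_bytes": 2048},
}

server_qps = sum(k["qps"] for k in keywords.values())
server_traffic = sum(k["qps"] * k["value_bytes"] for k in keywords.values())
# server_qps -> 140 queries/s; server_traffic -> 133120 bytes/s
```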
9. The method according to claim 1, wherein the mapping the keyword to be migrated comprises:
mapping the keyword to be migrated according to a mapping rule, the mapping rule being to append, to the tail of the keyword to be migrated, a character that causes the keyword to be hashed to the second cache server.
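The mapping rule of claim 9 can be sketched by searching for a suffix character that makes the cluster's routing function land on the target server. The byte-sum modulo hash below is an assumption made for the example; the patent only requires that the appended character hash the new keyword to the second cache server.

```python
import string

def route(key, n_servers):
    """Hypothetical routing: hash a key onto one of n_servers slots."""
    return sum(key.encode()) % n_servers

def map_keyword(key, target_slot, n_servers):
    """Try single-character suffixes until the mapped keyword routes to target_slot."""
    for ch in string.ascii_lowercase + string.digits:
        if route(key + ch, n_servers) == target_slot:
            return key + ch
    return None  # no single-character suffix reaches the target slot

mapped = map_keyword("hotkey", target_slot=2, n_servers=4)
```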
10. The method according to claim 1, further comprising:
pushing a keyword mapping relationship table to an application server, the keyword mapping relationship table being used to record the mapping relationship between the keyword to be migrated and the mapped keyword.
11. A monitoring server, comprising:
a monitoring module, configured to monitor an access traffic parameter corresponding to each cache server in a cache server cluster, the access traffic parameter comprising queries per second and/or network transmission traffic per second;
a determining module, configured to determine a first cache server whose access traffic parameter exceeds a corresponding preset threshold;
a selecting module, configured to select at least one keyword from the first cache server as a keyword to be migrated;
a migrating module, configured to map the keyword to be migrated, and to migrate, according to the mapped keyword, the value corresponding to the keyword to be migrated from the first cache server to a second cache server in the cache server cluster.
12. A method for preventing cache breakdown, comprising:
receiving a data request, and determining a keyword related to the data request;
checking whether a mapped keyword corresponding to the keyword exists in a keyword mapping relationship table, the keyword mapping relationship table being pushed by a monitoring server and used to characterize the mapping relationship between a migrated keyword and the corresponding mapped keyword;
if the mapped keyword exists, accessing a local cache according to the mapped keyword; if the value corresponding to the mapped keyword does not exist in the local cache, accessing a cache server in a cache server cluster according to the mapped keyword; and feeding back the accessed value to a client.
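The application-server read path of claim 12 is a three-level lookup: mapping table, local cache, then the cluster. A minimal sketch in which plain dicts stand in for the pushed mapping table and the real cache clients:

```python
# Sketch of claim 12's read path on the application server.
mapping_table = {"hotkey": "hotkeyb"}        # pushed by the monitoring server
local_cache = {}                             # application-server local cache
cluster_cache = {"hotkeyb": "cached-value"}  # stands in for the cache server cluster

def handle_request(key):
    mapped = mapping_table.get(key, key)     # use the mapped keyword if one exists
    if mapped in local_cache:
        return local_cache[mapped]           # serve from the local cache
    return cluster_cache.get(mapped)         # otherwise fall through to the cluster

result = handle_request("hotkey")
```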
13. The method according to claim 12, wherein, when accessing the cache server in the cache server cluster according to the mapped keyword, the method further comprises:
judging whether the access traffic parameter corresponding to the mapped keyword exceeds a corresponding preset threshold, the access traffic parameter comprising queries per second and/or network transmission traffic per second;
if so, storing the value corresponding to the mapped keyword, accessed from the cache server, in the local cache.
14. The method according to claim 13, wherein the preset threshold comprises:
a first preset threshold corresponding to the queries per second of a keyword, the first preset threshold being determined according to the queries-per-second upper limit of a cache server in the cache server cluster and the total number of application servers in the business system;
a second preset threshold corresponding to the network transmission traffic per second of a keyword, the second preset threshold being determined according to the network-transmission-traffic-per-second upper limit of a cache server in the cache server cluster and the total number of application servers in the business system.
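One plausible reading of claim 14 is that the per-keyword thresholds divide each cache server's upper limits across the application servers, since every application server may cache a hot keyword locally and query independently. The division below is an assumption about how "determined according to" is computed, and the numbers are illustrative:

```python
# Worked example of claim 14's threshold derivation (assumed formula).
server_qps_limit = 80_000           # QPS upper limit of one cache server
server_traffic_limit = 400_000_000  # bytes/s upper limit of one cache server
n_app_servers = 40                  # application servers in the business system

first_threshold = server_qps_limit // n_app_servers       # per-keyword QPS threshold
second_threshold = server_traffic_limit // n_app_servers  # per-keyword bytes/s threshold
```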
15. An application server, comprising:
a determining module, configured to receive a data request and determine a keyword related to the data request;
a checking module, configured to check whether a mapped keyword corresponding to the keyword exists in a keyword mapping relationship table, the keyword mapping relationship table being pushed by a monitoring server and used to characterize the mapping relationship between a migrated keyword and the corresponding mapped keyword, and, if the mapped keyword exists, to trigger an accessing module;
the accessing module, configured to access a local cache according to the mapped keyword; and, if the value corresponding to the mapped keyword does not exist in the local cache, to access a cache server in a cache server cluster according to the mapped keyword and feed back the accessed value to a client.
16. A system for preventing cache breakdown, comprising:
the monitoring server according to claim 11, the application server according to claim 15, and a cache server cluster;
wherein the cache servers in the cache server cluster store data in a key-value structure.
17. A monitoring server, comprising:
a processor, a memory, a network interface, and a bus system;
the bus system being configured to couple the hardware components of the monitoring server;
the network interface being configured to implement a communication connection between the monitoring server and at least one other server;
the memory being configured to store program instructions;
the processor being configured to read the instructions and/or data stored in the memory and perform the following operations:
monitoring an access traffic parameter corresponding to each cache server in a cache server cluster, the access traffic parameter comprising queries per second and/or network transmission traffic per second;
determining a first cache server whose access traffic parameter exceeds a corresponding preset threshold;
selecting at least one keyword from the first cache server as a keyword to be migrated;
mapping the keyword to be migrated, and migrating, according to the mapped keyword, the value corresponding to the keyword to be migrated from the first cache server to a second cache server in the cache server cluster.
18. An application server, comprising:
a processor, a memory, a network interface, and a bus system;
the bus system being configured to couple the hardware components of the application server;
the network interface being configured to implement a communication connection between the application server and at least one other server;
the memory being configured to store program instructions;
the processor being configured to read the instructions and/or data stored in the memory and perform the following operations:
receiving a data request, and determining a keyword related to the data request;
checking whether a mapped keyword corresponding to the keyword exists in a keyword mapping relationship table, the keyword mapping relationship table being pushed by a monitoring server and used to characterize the mapping relationship between a migrated keyword and the corresponding mapped keyword;
if the mapped keyword exists, accessing a local cache according to the mapped keyword; if the value corresponding to the mapped keyword does not exist in the local cache, accessing a cache server in a cache server cluster according to the mapped keyword; and feeding back the accessed value to a client.
CN201711024741.3A 2017-10-27 2017-10-27 Method for preventing cache breakdown, related server and system Active CN109729108B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711024741.3A CN109729108B (en) 2017-10-27 2017-10-27 Method for preventing cache breakdown, related server and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711024741.3A CN109729108B (en) 2017-10-27 2017-10-27 Method for preventing cache breakdown, related server and system

Publications (2)

Publication Number Publication Date
CN109729108A true CN109729108A (en) 2019-05-07
CN109729108B CN109729108B (en) 2022-01-14

Family

ID=66290831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711024741.3A Active CN109729108B (en) 2017-10-27 2017-10-27 Method for preventing cache breakdown, related server and system

Country Status (1)

Country Link
CN (1) CN109729108B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102244685A (en) * 2011-08-11 2011-11-16 中国科学院软件研究所 Distributed type dynamic cache expanding method and system supporting load balancing
US20170005953A1 (en) * 2015-07-04 2017-01-05 Broadcom Corporation Hierarchical Packet Buffer System
CN106357426A (en) * 2016-08-26 2017-01-25 东北大学 Large-scale distribution intelligent data collection system and method based on industrial cloud
CN107145386A (en) * 2017-04-28 2017-09-08 广东欧珀移动通信有限公司 Data migration method, terminal device and computer-readable recording medium
US20170277598A1 (en) * 2016-03-28 2017-09-28 International Business Machines Corporation Application aware export to object storage of low-reference data in deduplication repositories


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222034A (en) * 2019-06-04 2019-09-10 北京奇艺世纪科技有限公司 A kind of database maintenance method and device
CN113141264A (en) * 2020-01-16 2021-07-20 北京京东振世信息技术有限公司 High-concurrency access processing method and device and storage medium
CN113141264B (en) * 2020-01-16 2023-12-08 北京京东振世信息技术有限公司 High concurrency access processing method, device and storage medium
CN111367672A (en) * 2020-03-05 2020-07-03 北京奇艺世纪科技有限公司 Data caching method and device, electronic equipment and computer storage medium
CN111586438A (en) * 2020-04-27 2020-08-25 北京文香信息技术有限公司 Method, device and system for processing service data
CN111586438B (en) * 2020-04-27 2021-08-17 安徽文香科技有限公司 Method, device and system for processing service data
CN111885184A (en) * 2020-07-29 2020-11-03 深圳壹账通智能科技有限公司 Method and device for processing hot spot access keywords in high concurrency scene
CN113760974A (en) * 2020-09-08 2021-12-07 北京沃东天骏信息技术有限公司 Dynamic caching method, device and system
CN112307069A (en) * 2020-11-12 2021-02-02 京东数字科技控股股份有限公司 Data query method, system, device and storage medium
CN113765978A (en) * 2020-11-17 2021-12-07 北京沃东天骏信息技术有限公司 Hotspot request detection system, method, device, server and medium
CN114116796A (en) * 2021-11-02 2022-03-01 浪潮云信息技术股份公司 Distributed cache system for preventing cache treading
CN114422434A (en) * 2021-12-08 2022-04-29 联动优势电子商务有限公司 Hot key storage method and device

Also Published As

Publication number Publication date
CN109729108B (en) 2022-01-14

Similar Documents

Publication Publication Date Title
CN109729108A Method, related server and system for preventing cache breakdown
Amiri et al. DBProxy: A dynamic data cache for Web applications
US10262005B2 (en) Method, server and system for managing content in content delivery network
JP5725661B2 (en) Distributed search system
US8849838B2 (en) Bloom filter for storing file access history
EP2369494A1 (en) Web application based database system and data management method therof
US20140337484A1 (en) Server side data cache system
CN108170768A (en) database synchronization method, device and readable medium
US20110161825A1 (en) Systems and methods for testing multiple page versions across multiple applications
CN106648464B (en) Multi-node mixed block cache data reading and writing method and system based on cloud storage
CN104426718B (en) Data decryptor server, cache server and redirection method for down loading
US10656839B2 (en) Apparatus and method for cache provisioning, configuration for optimal application performance
WO2021142965A1 (en) Data synchronization method and apparatus, and computer device and storage medium
CN109120709A (en) A kind of caching method, device, equipment and medium
CN102868727A (en) Method for realizing high availability of logical volume
Zhou et al. Improving big data storage performance in hybrid environment
CN104021137B (en) A kind of client based on catalogue mandate is locally opened and closed the method and system of file
US11397711B1 (en) Proxy-based database scaling
CN107181773A (en) Data storage and data managing method, the equipment of distributed memory system
CN103957252B (en) The journal obtaining method and its system of cloud stocking system
CN104281486B (en) A kind of virtual machine treating method and apparatus
CN103442000B (en) WEB caching replacement method and device, http proxy server
US11954533B2 (en) Using machine learning techniques to flow control clients in a deduplication file system
CN113626463B (en) Web performance optimization method under high concurrency access
Zhao et al. DotSlash: Handling web hotspots at dynamic content web sites

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221117

Address after: Room 507, floor 5, building 3, No. 969, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Patentee after: ZHEJIANG TMALL TECHNOLOGY Co.,Ltd.

Address before: Fourth Floor, One Capital Place, P.O. Box 847, Grand Cayman, Cayman Islands

Patentee before: ALIBABA GROUP HOLDING Ltd.
