CN102244685B - Distributed type dynamic cache expanding method and system for supporting load balancing - Google Patents

Distributed type dynamic cache expanding method and system for supporting load balancing

Info

Publication number
CN102244685B
CN102244685B CN201110230333XA CN201110230333A
Authority
CN
China
Prior art keywords
data
node
caching server
migration
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201110230333XA
Other languages
Chinese (zh)
Other versions
CN102244685A (en)
Inventor
黄涛
秦秀磊
张文博
魏峻
钟华
朱鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Jun'an Tai Investment Group Co., Ltd.
Original Assignee
Institute of Software of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Software of CAS filed Critical Institute of Software of CAS
Priority to CN201110230333XA priority Critical patent/CN102244685B/en
Publication of CN102244685A publication Critical patent/CN102244685A/en
Application granted granted Critical
Publication of CN102244685B publication Critical patent/CN102244685B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a distributed cache dynamic scaling method and system supporting load balancing, belonging to the field of software technology. The method comprises the steps of: 1) each cache server periodically monitors its own resource utilization; 2) each cache server calculates its weighted load value $L_i$ from the currently monitored resource utilization and sends it to the cache cluster manager; 3) the cache cluster manager calculates the current average load value $\bar{L}$ of the distributed cache system from the weighted load values $L_i$; when $\bar{L}$ is higher than the threshold $thre_{max}$, an expansion operation is executed; when $\bar{L}$ is lower than the given threshold $thre_{min}$, a shrink operation is executed. The system comprises cache servers, cache clients and a cache cluster manager; each cache server is connected to the cache clients and the cache cluster manager through a network. The invention distributes network traffic evenly among the cache nodes, optimizes resource utilization, and solves the problems of data consistency and continuous availability of the service.

Description

A distributed cache dynamic scaling method and system supporting load balancing
Technical field
The present invention relates to a distributed cache dynamic scaling method and system, and in particular to a distributed cache dynamic scaling method and system supporting load balancing, belonging to the field of software technology.
Background technology
In cloud computing environments, the number of users and the volume of network traffic have grown explosively. How to provide good support for large-capacity, business-critical applications on cheap, standardized hardware and software platforms has become a problem faced by many enterprises. The server-side bottleneck usually appears at the database; to further address this problem, distributed caching technology was introduced. Distributed caching shortens the distance between clustered object data and applications. As a key technology for accelerating data access and providing distributed data sharing, it plays an important role in improving system scalability and guaranteeing system reliability.
Forrester pointed out in a 2010 technical report that scalability is one of the key characteristics of a distributed caching system. The present invention divides the scaling of caching systems into two classes: static scaling and dynamic scaling. Static scaling means that when cache nodes are added or removed, the running system must be stopped, the configuration information updated according to the node changes, and the whole system restarted. Dynamic scaling is online scaling: the system automatically completes the addition and deletion of cache nodes, the migration of cached data and the update of configuration information according to load changes; it is self-management and self-adaptation without human participation. Traditional caching systems take improving data access speed as their main purpose and are mostly applied to page caching, processing caching and data object caching. Besides accelerating application access, current caching systems are also used to store state data and to provide high-availability support, so data consistency and continuous service availability must be guaranteed during scaling. Most existing distributed caching systems, such as OSCache, Memcached and Terracotta EX, do not support dynamic scaling: the system must be restarted when its scale changes, causing service interruption and data loss, which cannot meet application requirements.
However, implementing a dynamically scalable distributed caching system faces challenges in two aspects. 1) During dynamic scaling, part of the cached data must be migrated between nodes. Interruption of the cache service during migration causes request failures or the loss of important business and state data, and the data inconsistency produced when multiple users concurrently operate on multiple copies of the same data brings losses to users. In addition, data migration must take the migration overhead and the current system load into account, reasonably controlling the migration progress to avoid overloading the system and crashing the service. Therefore, how to guarantee the continuous availability of the cache service and data consistency becomes a major challenge in the scaling process. 2) Hot-spot data regions, i.e., regions with relatively high access load, may exist in the caching system; their location often depends on the characteristics of the application and the user access pattern. Hot-spot regions easily become system bottlenecks and in turn affect the availability of the cache service, so rebalancing of hot-spot data must be considered during scaling.
In a distributed caching system, the data balancing strategy distributes data across the cache server nodes; when fetching data, the item is retrieved from the node computed by the strategy, which avoids flooding-style searches and improves lookup efficiency. An early popular strategy is the modulo (remainder) algorithm. It is simple to implement: its core idea is to place data according to the remainder of the key hash divided by the number of server nodes. Its deficiency is that when the number of server nodes changes (e.g., on node failure or scaling), the server node that a key maps to changes drastically: only a small amount of data can still be accessed at its former node, the rest is effectively relocated to new nodes, and the hit rate drops sharply.
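As a minimal illustration (not from the patent), the following Java snippet shows how fragile modulo placement is: changing the node count from 4 to 5 remaps roughly 80% of keys.

```java
// Minimal sketch: fraction of keys remapped when hash % N goes from N=4 to N=5.
public class ModuloRemapDemo {
    public static void main(String[] args) {
        int before = 4, after = 5, keys = 100_000, moved = 0;
        for (int k = 0; k < keys; k++) {
            int h = Integer.toString(k).hashCode() & 0x7fffffff; // cheap key hash
            if (h % before != h % after) moved++;
        }
        // Roughly (N-1)/N of keys move, which is why the hit rate collapses.
        System.out.printf("%.1f%% of keys changed node%n", 100.0 * moved / keys);
    }
}
```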
To address the above problem, scholars at MIT (D. Karger, E. Lehman, T. Leighton, R. Panigrahy, M. Levine, and D. Lewin. Consistent hashing and random trees: distributed caching protocols for relieving hot spots on the World Wide Web. In Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, STOC '97, pp. 654-663, 1997) proposed the now well-known consistent hashing algorithm in 1997; the algorithm is still widely used in application-layer multicast, P2P, caching platforms and other fields. Its basic idea is to define the output range of the hash function as a fixed ring-shaped space, called the hash ring; each server node is mapped to the value at some random position on the ring. The key of each cached data item is also mapped to a position on the ring; moving clockwise from there, the first position representing a server node determines the cache server on which the item is stored. Consistent hashing solves the data partitioning and routing problems well and minimizes the influence of node changes on the data distribution, as shown in Fig. 1. The Dynamo paper (D. Hastorun, M. Jampani, G. Kakulapati, A. Pilchin, S. Sivasubramanian, P. Vosshall, and W. Vogels, "Dynamo: Amazon's highly available key-value store," In Proceedings of ACM Symposium on Operating Systems Principles (SOSP '07), pp. 205-220, 2007) analyzed consistent hashing in depth and identified two deficiencies: 1) the random distribution of service nodes on the hash ring causes an uneven distribution of data and load; 2) the algorithm ignores the performance differences between service nodes. In real scenarios, the influence of hot-spot partitions on the system cannot be fully eliminated by improving the distributed hashing algorithm alone.
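For concreteness, here is a minimal Java sketch of the basic consistent-hash ring from the STOC '97 scheme (one ring position per node; illustrative, not the patent's implementation):

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashRing {
    private final SortedMap<Integer, String> ring = new TreeMap<>();

    public void addNode(String node)    { ring.put(hash(node), node); }
    public void removeNode(String node) { ring.remove(hash(node)); }

    /** First node clockwise from the key's position, wrapping around. */
    public String nodeFor(String key) {
        if (ring.isEmpty()) throw new IllegalStateException("no nodes");
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        int pos = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(pos);
    }

    private static int hash(String s) { return s.hashCode() & 0x7fffffff; }
}
```

Adding or removing one node only relocates the keys between that node's position and its predecessor, which is exactly the property the modulo scheme lacks.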
Chiu et al. of Washington State University (D. Chiu, A. Shetty, G. Agrawal. Elastic Cloud Caches for Accelerating Service-Oriented Computations. In Proceedings of the ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis (SC '10), pp. 1-11, 2010) studied the dynamic scaling problem of caching systems and proposed a data migration strategy based on a greedy method. When a user request inserts data into cache node n and that node lacks free memory, a data migration event is triggered and the migration algorithm moves part of node n's data to other nodes; the node with the lightest current load is preferred as the destination. If migration might overflow the destination node's memory, a new node is allocated. When migration completes, the hash mapping table is updated synchronously. For node shrinking, the two nodes with the lightest current load are chosen and their data merged. This work has two main deficiencies. First, the scaling process does not consider the influence of heterogeneous nodes and hot-spot partitions, so it cannot dynamically balance hot-spot partitions. Second, no data consistency guarantee is provided: migration operations may fail or lose data under adverse network conditions, and inconsistent client hash mapping tables may cause some users' requests to fail, or concurrent operations on multiple copies of an object to produce inconsistency.
The work of Flavio et al. at ETH Zurich (http://systems.ethz.pubzone.org/servlet/Attachment?attachmentId=109&versionId=1378371) mainly targets the dynamic scaling problem of cloud storage systems. It monitors hot-spot partitions online using statistical methods and migrates part of a hot partition's data to a more lightly loaded neighbor node; to ensure migration completes smoothly, the hash algorithm it uses preserves the ordering between keys. Its deficiency is that load-balancing decisions are made locally at each cache node without global information, so the system often needs many iterations to reach a stable equilibrium, convergence is slow, and the overhead introduced is large.
Summary of the invention
The object of the present invention is to overcome the problems in existing schemes and to provide a distributed cache dynamic scaling method and system supporting load balancing. The method monitors the resource utilization of each cache node online and decides whether and how to scale the system based on the calculated weighted load values and the system's average load value. For the data balancing problem during dynamic scaling, the method considers the influence of hot-spot data on system availability and proposes a load-balancing method suited to heterogeneous environments. For the data consistency problem during dynamic scaling, the invention implements a data access protocol based on three-phase requests; at the same time, to eliminate as much as possible the influence of data migration on system availability, the invention adopts a controlled data migration method that effectively controls the migration progress and reduces migration overhead.
When dynamically scaling the caching system, the method mainly involves three basic mechanisms: data routing, data balancing and information synchronization. The three parts are described in detail below.

(1) The routing mechanisms of distributed caches can be divided into three kinds: client-driven, server-driven and load-balancer routing. The method adopts client-driven routing: the client maintains the routing table and forwards requests directly to the destination node according to it. Compared with the other two mechanisms, this has the following advantages: routing data to the destination node needs only a single hop, effectively reducing the network overhead of multi-hop forwarding; the cache server does not need to bear the route-forwarding function, which benefits its performance; and the response time is short.
(2) The data balancing mechanism adopted by the invention adds support for heterogeneous nodes: each cache node i is assigned an initial weight $w_i$ according to its processing capability. The whole hash space is divided into Q equal-sized data partitions (Q >> K, Q >> N, where K is the number of backups and N is the number of cache service nodes), and cache node i is mapped onto $T_i$ partitions as follows:

$$T_i = \frac{w_i \cdot Q}{\sum_{j=1}^{N} w_j} \qquad (1)$$

The mapping relation between partitions and cache nodes is kept in the partition-server mapping table (row i of the table stores the cache node corresponding to partition i). The client locates a data item by a two-level hash mapping, as shown in Fig. 2: first the key of the data item is mapped onto the hash ring by a hash function, and the resulting position is labeled a token; the token is then mapped by a second-level hash function to a partition identifier between 1 and Q; after obtaining the partition identifier, the client looks up the cache node holding this partition in the partition-server mapping table. The partition-server mapping may change during dynamic scaling, in which case the client must synchronously update this mapping table (also called the routing table or partition routing table).
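The two-level lookup can be sketched as follows; the class and method names are illustrative, since the patent specifies the scheme but not an API:

```java
import java.util.Map;

// Sketch: key -> token on the hash ring -> partition id in [1, Q]
// -> cache server from the partition-server mapping table.
public class TwoLevelLocator {
    private final int q;                                  // number of partitions Q
    private final Map<Integer, String> partitionToServer; // routing table

    public TwoLevelLocator(int q, Map<Integer, String> table) {
        this.q = q;
        this.partitionToServer = table;
    }

    public String locate(String key) {
        int token = key.hashCode() & 0x7fffffff; // position on the ring
        int partition = (token % q) + 1;         // second-level mapping
        return partitionToServer.get(partition); // partition-server table
    }
}
```

Because the ring position is reduced to one of Q fixed partitions, scaling only rewrites rows of the partition-server table instead of remapping individual keys.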
(3) The partition routing table is the key to the correct operation of the whole caching system. During dynamic scaling and data migration the partition routing information changes, so a data synchronization mechanism is needed to ensure that routing information is updated in time and effectively confirmed. Existing data synchronization mechanisms mainly include client validation, server validation, TTL (Time To Live) and piggyback validation; the invention synchronizes partition routing information by piggyback validation. Because the amount of validation information is small, the network overhead introduced by this mechanism is relatively small, and neither the client nor the server needs to record much extra information, so no significant overhead is produced. Concretely, when the client sends a data access request to a cache server, the request carries the partition-route version information and a synchronization signal; when the server parses the request and finds the synchronization signal, it adds the synchronization result to the response; if the client receives an out-of-sync signal, it requests the latest routing information from the server again; otherwise this request completes.
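The request and response shapes involved in piggyback validation might look as follows; the type and field names are assumptions for illustration, not the patent's wire format:

```java
// Illustrative request/response shapes for piggyback validation.
class CacheRequest {
    String key;
    int routeVersion;          // version of the client's routing table
    boolean syncSignal = true; // ask the server to validate the version
}

class CacheResponse {
    byte[] value;                // null if absent (or not yet migrated)
    boolean routeStale;          // true when the client's version is outdated
    String[] newerRoutingTables; // serialized RT_n (and RT_{n-1} if needed)
    boolean migrating;           // server state: migrating vs. stable
}
```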
Based on the above three mechanisms, the technical solution of the invention, shown in Fig. 3, comprises the following steps:

1) In each sampling window of a characteristic sampling period, monitor the utilization of the processor (CPU), memory and network resources of each cache node.
2) At the end of each characteristic sampling period, each cache node calculates its weighted load value for that period, as in equation (2):

$$L_i = \sum_{j=1}^{M} (\alpha \cdot CPU_{ij} + \beta \cdot Mem_{ij} + \gamma \cdot Net_{ij}) / M \qquad (2)$$

where $L_i$ is the weighted load value of cache node i in this sampling period, α, β and γ are the weights of CPU, memory and network respectively, $CPU_{ij}$, $Mem_{ij}$ and $Net_{ij}$ are the CPU, memory and network resource utilization of node i in the j-th sampling window, and M is the number of sampling windows.

Each cache node sends its calculated weighted load value to the cache cluster manager; the manager calculates the system's average load value for the period and decides whether and how to scale the system, as in equation (3):

$$\bar{L} = \sum_{i=1}^{N} L_i / N \qquad (3)$$

where $\bar{L}$ is the system average load value and N is the number of cache nodes. When the system average load value is higher than the threshold $thre_{max}$, a node expansion operation is executed; when it is lower than the threshold $thre_{min}$, a node shrink operation is executed. The cache cluster manager is responsible for coordinating the cache nodes to complete the update of partition routing information and the migration of data partitions.
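Equations (2) and (3) plus the threshold test amount to the following sketch (structure and names assumed; the weights and thresholds shown are those chosen in the embodiment later in this description):

```java
// Sketch of the load aggregation and scaling decision.
public class ClusterManagerSketch {
    static final double ALPHA = 0.1, BETA = 0.4, GAMMA = 0.5;
    static final double THRE_MAX = 0.70, THRE_MIN = 0.30;

    /** Weighted load of one node over M sampling windows, equation (2). */
    static double weightedLoad(double[] cpu, double[] mem, double[] net) {
        double sum = 0;
        for (int j = 0; j < cpu.length; j++)
            sum += ALPHA * cpu[j] + BETA * mem[j] + GAMMA * net[j];
        return sum / cpu.length;
    }

    /** Equation (3) plus the threshold decision. */
    static String decide(double[] nodeLoads) {
        double mean = 0;
        for (double l : nodeLoads) mean += l;
        mean /= nodeLoads.length;
        if (mean > THRE_MAX) return "EXPAND";
        if (mean < THRE_MIN) return "SHRINK";
        return "STEADY";
    }
}
```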
3) When cache nodes are dynamically expanded, part of the cached data must be migrated between nodes. Considering the influence of hot-spot data on system availability, the method achieves load balancing by moving the data partitions on unbalanced service nodes, on top of the existing data partitioning scheme. Suppose the system has N existing cache nodes; the goal is to solve for the data partition migration scheme when the (N+1)-th node dynamically joins. Let $transBytes_i$ denote the average network traffic of cache node i in a characteristic sampling period and $w_i$ its weight. The variance D(T) of the weighted network traffic T marks how balanced the network traffic is; the weighted network traffic $T_i$ and the average weighted network traffic $\bar{T}$ are defined as:

$$T_i = \frac{transBytes_i}{w_i}, \qquad \bar{T} = \sum_{i=1}^{N+1} T_i / (N+1) \qquad (4)$$

In a characteristic sampling period, if the weighted network traffic of cache node i satisfies $|T_i - \bar{T}| / \bar{T} \le \varepsilon$ (ε is a threshold), the node is considered to be in the load-balanced state; otherwise it is in an unbalanced state. The migration node set is defined as the set of nodes satisfying $|T_i - \bar{T}| / \bar{T} > \varepsilon$. According to the relation between $T_i$ and the average weighted network traffic $\bar{T}$, the migration node set is further divided into the migration-out node set MigrationOutSet (node i belongs to it if and only if $T_i > \bar{T}$) and the migration-in node set MigrationInSet (node i belongs to it if and only if $T_i < \bar{T}$); the new node is also added to MigrationInSet. The basic unit of migration is the data partition; the per-partition network traffic of cache node i is denoted by the set $\{t_{i1}, t_{i2}, t_{i3}, \ldots, t_{ik}\}$, where k is the number of partitions on that node. The migration direction is constrained to be one-way: data can only move from nodes in MigrationOutSet to nodes in MigrationInSet. The objective function is shown in equation (5):

$$\text{Objective:} \quad \min \sum_{i=1}^{S} D(T_i) \qquad (5)$$

The goal of solving for the data partition migration scheme is to minimize the sum of variances of the weighted network traffic T over the migration node set, where S is the number of nodes in the migration node set (S < N). Treating the remaining space of the migration-in nodes as knapsacks to be filled and the excess data of the migration-out nodes as items to be loaded, the problem becomes a 0-1 multiple knapsack problem (0-1 MKP). Since 0-1 MKP is NP-hard (S. Hanafi, A. Freville. An efficient tabu search approach for the 0-1 multidimensional knapsack problem. European Journal of Operational Research, 1998, 106: 659-675), the invention uses a branch-and-bound method to solve for an approximate optimal solution of the problem.
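The division into migration sets by equation (4) might be sketched as follows (names assumed); the 0-1 MKP step itself is elided and would be handed to a branch-and-bound solver:

```java
import java.util.List;

// Sketch of the migration-set division of step 3.
public class MigrationPlanner {
    /** Divide unbalanced nodes into migration-out / migration-in sets. */
    static void divide(double[] transBytes, double[] weights, double eps,
                       List<Integer> outSet, List<Integer> inSet) {
        int n = transBytes.length;
        double[] t = new double[n];
        double mean = 0;
        for (int i = 0; i < n; i++) {          // equation (4): T_i and T-bar
            t[i] = transBytes[i] / weights[i];
            mean += t[i];
        }
        mean /= n;
        for (int i = 0; i < n; i++) {
            if (Math.abs(t[i] - mean) / mean <= eps) continue; // balanced
            if (t[i] > mean) outSet.add(i); else inSet.add(i);
        }
        // A newly added node (traffic 0) naturally falls into inSet.
    }
}
```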
4) Based on the data migration scheme obtained in step 3), the cache cluster manager coordinates the cache nodes to complete the data migration. Data consistency and service availability during migration must be guaranteed: the method adopts a data access protocol based on three-phase requests to achieve consistent access to migrating data, and a controlled data migration method to prevent migration from imposing excessive overhead on the cache servers, thereby ensuring the continuous availability of the cache service.

1. Data access protocol based on three-phase requests

To guarantee continuity of user access during migration, a partition is deleted from the migration-out node only after its migration has completed. Thus while data is being migrated, the same data item may exist simultaneously on the migration-out node and the migration-in node, and the two versions may be inconsistent. Note that when a migration operation begins, the cache service nodes update their routing information, and client and server synchronize the routing-table version number to judge whether the current routing information is the latest version; if it is an old version, an up-to-date routing table must be maintained. Therefore, during data migration, when a client accesses migrating data, a migration-in-node-priority strategy is adopted: when the data on the migration-in node and the migration-out node conflict ($V_i \neq V_o$), the cache client regards the migration-in node's data $V_i$ as valid and returns it (i.e., directly returns $V_i$ to the user), as shown in Fig. 4. When updating data, since it is difficult to track the migration state of each data item, the protocol updates both sides: the client first updates the data $V_o$ on the migration-out node (an update directly overwrites the data), and after receiving the update-success acknowledgement, updates the migration-in node, to avoid lost updates (a lost update here means that the data had already migrated to the migration-in node before the update request arrived at the migration-out node; if only the migration-out node were updated, the client would later read stale, inconsistent data from the migration-in node), as shown in Fig. 5.

Because client-driven partition routing is adopted, to achieve consistent access to migrating data the client must choose its data access behavior according to the server state, keeping state synchronized. To this end, the method implements a data access protocol based on client-side three-phase requests. In this protocol the cache client keeps routing information of multiple versions and divides request handling into three phases. In the first phase, the client synchronizes routing information with the cache service node using piggyback validation; the result of the synchronization guarantees that the requesting client holds the routing tables of the two latest versions (denoted $RT_n$ and $RT_{n-1}$). In the second phase, after synchronization, the client routes with the latest partition routing table $RT_n$ and requests the cached data, again with piggyback validation. In the third phase, the client acts according to the data value and the server state (migrating or stable) returned in the second phase. If the server is in the migrating state, the client first checks whether the returned data value is empty. If it is not empty, the requested data has already migrated to the destination node, and the client returns the value directly to the application service that issued the request; if it is empty, the requested data has not yet migrated, so the client uses the previous-version routing information $RT_{n-1}$ to find the migration-out node and sends a data request to that node. If the server is in the stable state (expansion finished, no migration in progress), the migration has completed successfully; the cached data now has a unique storage location in the caching system, and the client returns the value directly to the requesting application service.
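The three phases can be condensed into the following client-side sketch (helper names are assumptions; CacheResponse is the illustrative type shown earlier):

```java
// Condensed sketch of the client-side three-phase read.
abstract class ThreePhaseClient {
    RoutingTable rtN, rtN1; // two newest routing tables (RT_n, RT_{n-1})

    byte[] get(String key) {
        syncRoutingTables();                           // phase 1: piggyback sync
        CacheResponse r = send(rtN.nodeFor(key), key); // phase 2: route by RT_n
        if (!r.migrating) return r.value;              // stable: unique location
        if (r.value != null) return r.value;           // already migrated in
        // phase 3: not migrated yet, read from the migration-out node
        return send(rtN1.nodeFor(key), key).value;
    }

    interface RoutingTable { String nodeFor(String key); }
    abstract CacheResponse send(String node, String key); // network I/O elided
    abstract void syncRoutingTables();                    // piggyback sync elided
}
```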
2. controlled data are moved
During the cache node dynamic expansion, resource utilization ratio is positioned at saturation point, and the data migration operation can be introduced extra network and computing cost, node is providing service if move out this moment, this part expense might cause system service abnormal end, and too fast migration velocity may be aggravated system originally with regard to already present resource pressure simultaneously.Finish smoothly in order to ensure migration operation, eliminate simultaneously the influence that the data migration causes system availability as much as possible, the present invention adopts a kind of controlled data migration method, namely rationally control migration operation progress (data cached migrate to low load node from the high capacity node usually, so migration operation is bigger to the node influence of moving out) according to the performance condition of node of moving out.The principal element that influences caching server request treatment effeciency is the I/O network bandwidth, in the inventive method, the node of moving out is monitored its network overhead, and when the network flow velocity reached the bandwidth threshold of a certain setting, the node of moving out can reduce θ % with the data migration velocity.
Meanwhile, if the long or frequent interruption of transition process duration then can cause system to be in transition state for a long time and increases extra data sync expense, reduce the entire system performance.Therefore, can not interrupt the data migration operation fully when slowing down the data migration velocity, the minimum available network flow velocity of the inventive method setting data migration is 10% of this meshed network bandwidth.
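A minimal sketch of this rate controller, assuming the θ = 20% reduction and the 10%-of-bandwidth floor used in the embodiment:

```java
// Sketch of the migration-rate controller on the migration-out node.
public class MigrationThrottle {
    private final double bandwidth; // node bandwidth, bytes/s
    private final double threshold; // trigger threshold, bytes/s
    private double migrationRate;   // current migration rate, bytes/s

    public MigrationThrottle(double bandwidth, double threshold, double start) {
        this.bandwidth = bandwidth;
        this.threshold = threshold;
        this.migrationRate = start;
    }

    /** Called after each fixed-size chunk is transferred. */
    public double adjust(double observedNetRate) {
        if (observedNetRate >= threshold) {
            migrationRate *= 0.80;                                    // theta = 20%
            migrationRate = Math.max(migrationRate, 0.10 * bandwidth); // floor
        }
        return migrationRate;
    }
}
```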
5) When cache nodes dynamically shrink, the data migration scheme is likewise determined first. The two nodes i', j' with the lightest load in the system (measured by L, with $L_{j'} < L_{i'}$) are selected for partition merging: all partitions of node i' are migrated to node j'. Among the remaining cache nodes, according to the relation between the weighted network traffic $T_i$ and the average weighted network traffic $\bar{T}$, the migration node set satisfying $|T_i - \bar{T}| / \bar{T} > \varepsilon'$ is further divided into a migration-out node set and a migration-in node set. The branch-and-bound method is used to solve for the data partition migration scheme, with the objective of minimizing the sum of variances of the weighted network traffic T over the migration node set. The data migration is completed according to this scheme; consistency during migration is guaranteed by the three-phase-request data access protocol, while the controlled data migration method effectively regulates the migration progress. When the migration finishes, node i' is removed from the cache cluster, as shown in Fig. 6.
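The node-pairing step of the shrink operation can be sketched as follows (illustrative names; only the selection of i' and j' is shown, the rebalancing then proceeds as in the expansion case):

```java
// Sketch: find the two nodes with the smallest weighted load L; all
// partitions of the heavier one (i') migrate to the lighter one (j'),
// and i' is removed once migration completes.
public class ShrinkPairing {
    /** Returns { i', j' } as indices into loads. */
    static int[] pickMergePair(double[] loads) {
        int jPrime = 0, iPrime = 1; // jPrime = lightest, iPrime = second lightest
        if (loads[iPrime] < loads[jPrime]) { int t = jPrime; jPrime = iPrime; iPrime = t; }
        for (int k = 2; k < loads.length; k++) {
            if (loads[k] < loads[jPrime])      { iPrime = jPrime; jPrime = k; }
            else if (loads[k] < loads[iPrime]) { iPrime = k; }
        }
        return new int[] { iPrime, jPrime }; // { source i', destination j' }
    }
}
```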
Compared with the prior art, the present invention has the following technical advantages:

1) The data balancing mechanism adopted by the invention adds support for heterogeneous nodes and considers the influence of hot-spot partitions on system availability at runtime; by moving the data partitions on unbalanced service nodes, it ensures an even distribution of network traffic among the cache nodes and optimizes resource utilization.

2) For the data consistency problem during dynamic scaling, the invention implements a data access protocol based on three-phase requests. The protocol synchronizes partition routing information by piggyback validation, introducing little synchronization overhead and being easy to implement. In the three-phase request handling, the client maintains routing information of multiple versions and, combined with route and state synchronization with the server, solves the data access inconsistency problem during migration; by reasonably controlling the migration progress with the controlled data migration method, the influence of data migration on system availability is effectively avoided.

3) The invention supports elastic resource provisioning and performance tuning for cache platforms. By monitoring the resource utilization of cache nodes, it helps administrators discover performance bottlenecks in the system in time and completes the dynamic expansion of cache nodes automatically, guaranteeing the continuity and consistency of service quality; when resource utilization is low, it completes the dynamic shrinking of nodes automatically, realizing on-demand provisioning of resources while reducing human participation.
Description of drawings
Fig. 1 shows consistent hashing;

Fig. 2 shows the two-level hash mapping;

Fig. 3 shows the distributed cache dynamic expansion method;

Fig. 4 shows the Get operation during data migration;

Fig. 5 shows the Update operation during data migration;

Fig. 6 shows the distributed cache dynamic shrinking method;

Fig. 7 shows the deployment topology;

Fig. 8 shows three-phase request handling;

Fig. 9 shows the migration control flow;

Fig. 10 shows the dynamic expansion performance test results.
Embodiment
The invention is further described below with reference to specific embodiments and the drawings.
The whole distributed caching system consists of three parts: cache servers (Cache Server), cache clients (Cache Client) and a cache cluster manager (Cache Admin), connected to each other through a network. Each cache server runs independently and is monitored and managed uniformly by the cache cluster manager through a management agent. The management agent resides on the same physical node as the cache server and is responsible for generating JMX management MBeans; after receiving a control command from the cache cluster manager, the management agent automatically adapts the command and controls the cache service process to execute the corresponding operation.
In the cache cluster manager, the topology monitor uses JGroups-based multicast to monitor server node topology changes and obtains the performance information of each service node by remotely accessing the management agent through JMX, finally providing a unified monitoring view to control the whole cache cluster. The cache cluster manager mainly consists of a multicast communication component, the topology monitor, a controller, a cluster management module and a Web console. The cluster management module mainly provides two major functions: data partition redistribution and cluster scale adjustment (including dynamic expansion and shrinking).
The cache server is mainly divided into two parts: the cache service and the management agent. The cache service consists of a data management module, a command control engine, a state management module and a migration management module. The data management module manages the allocated memory space and is also responsible for the organization, storage and lookup of cached data. The command control engine schedules cache requests according to the node state during cache command processing, handles the protocol-related parts of communication, and completes the cache request processing. The state management module manages the cache server's state, including the server migration state, the cluster scale-change state and the piggyback handling of routing information in client requests. The migration management module is responsible for migration work between nodes. The main duty of the management agent is to provide topology management for the cache server cluster and lifecycle management of the cache service process.
The cache client mainly consists of the cache client API, the client kernel, the partition selector, the request forwarder, the multi-version configuration manager and the adaptive integration module, and is responsible for communication and state synchronization between applications and the cache servers. The cache client API, client kernel and partition selector implement the client's basic functions; the request forwarder and multi-version configuration manager support the three-phase-request data access protocol proposed by the method; the adaptive integration module enhances the maintainability and usability of existing caching systems. The cache client API provides a series of service interfaces including reading, writing and deleting data. The client kernel mainly comprises the node management module, the connection management module and the communication module, providing the client's core functions; the connection management module maintains the client's connections to all cache servers and the communication necessary for using those connections. The cache client is implemented on Java NIO, which is more efficient and has lower resource overhead than the conventional blocking I/O model. The conventional blocking I/O model usually pre-creates a number of connections to form a connection pool to improve processing efficiency, while NIO needs to create only one connection, significantly reducing the overhead of thread creation and switching and better supporting highly concurrent request processing. After the node management module obtains a connection, it calls the communication module to perform NIO network data exchange. The partition selector implements a route selection algorithm based on consistent hashing: on data access, the key is first mapped to the corresponding partition, and the cache server holding that partition is then found from the partition routing table. The request forwarder handles the synchronization information returned from the server and forwards subsequent requests according to it. The multi-version configuration manager maintains routing table information of multiple versions: at client startup it connects to any cache server node, obtains and parses the routing information, and realizes self-configuration of connection properties; when the cache server side is in the migrating state, it provides support for historical routing information. Meanwhile, to seamlessly integrate the caching system with existing application servers (such as Tomcat and JBoss) and provide loosely coupled state backup and high-availability support, the adaptive integration module adapts the state object module (Session) of each application server, so that the cache service can be used by simply editing a configuration file. The adaptive integration module also provides a general Java object serialization method that converts Java objects into XML, enabling flexible serialization and deserialization.
As the experimental environment of this embodiment, the front end uses LoadRunner to generate load, the middleware Web container uses Tomcat to process user requests, and the back end consists of a DB2 database and the distributed caching system. The distributed caching system is mainly used to cache business data, accelerating application access while eliminating the database access bottleneck. The deployment topology is shown in Fig. 7, and the environment configuration is shown in Table 1.
The Web application adopted by this embodiment is a simple online shopping system, including functions such as browsing goods, ordering goods and confirming orders. The business data of this application is kept in the DB2 database; the initial data set is 3,000,000 book records, each comprising bibliographic information and book content information. The application transparently accesses the data in the caching system through the Hibernate framework. The test scripts are recorded with LoadRunner and include operations such as browsing pages of the online bookstore and submitting shopping lists, with a think time of 0.3 seconds added between requests.
Table 1. Test environment configuration (the table is reproduced only as an image in the original publication; its contents are not recoverable here).
The specific flow of the method in this embodiment is as follows:

1) In this embodiment, the management agent provides various statistics of the cache service, such as cache read count, update count, hit count and per-partition network traffic, as well as status monitoring of the cache service node, such as CPU, memory and network utilization. In each sampling window of a characteristic sampling period, the management agent monitors the utilization of the CPU, memory and network resources of its node (expressed as percentages for ease of calculation). This embodiment defines each characteristic sampling period as 300 seconds, composed of 10 sampling windows, each 30 seconds wide.
2) At the end of each characteristic sampling period, the management agent calculates the weighted load value for that period. α, β and γ are the weights of CPU, memory and network respectively; considering that the critical resources of a distributed caching system are memory and network I/O, with CPU slightly lower in priority, this embodiment sets α, β, γ to 0.1, 0.4 and 0.5 respectively. The cache cluster manager calculates the system's average load value for the period from the node monitoring information obtained remotely through JMX, and decides whether and how to scale based on this value. This embodiment sets the node expansion threshold $thre_{max}$ to 70% and the node shrink threshold $thre_{min}$ to 30%: when the system average load value is higher than 70%, a node expansion operation is executed; when it is lower than 30%, a node shrink operation is executed.
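Gathered in one place, the embodiment's parameter choices (including the ε, ε' and θ values given later in this section) might be encoded as follows; the class and constant names are illustrative assumptions:

```java
// Parameter values taken from the embodiment text; names are illustrative.
public final class ScalingConfig {
    public static final int SAMPLING_PERIOD_SECONDS = 300; // 10 windows of 30 s
    public static final int SAMPLING_WINDOW_SECONDS = 30;
    public static final double ALPHA = 0.1;  // CPU weight
    public static final double BETA  = 0.4;  // memory weight
    public static final double GAMMA = 0.5;  // network weight
    public static final double THRE_MAX = 0.70;       // expand above 70% load
    public static final double THRE_MIN = 0.30;       // shrink below 30% load
    public static final double EPSILON = 0.15;        // balance threshold, expansion
    public static final double EPSILON_PRIME = 0.10;  // balance threshold, shrink
    public static final double THETA = 0.20;          // migration slowdown factor
    public static final double MIN_RATE_FRACTION = 0.10; // of node bandwidth
    private ScalingConfig() {}
}
```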
3) When cache nodes are dynamically expanded, the data partition migration scheme for the joining node must first be solved. In this embodiment, Server1, Server2 and Server3 are the existing server nodes in the caching system, and the fourth node, Server4, is the node to be added. The cache cluster manager remotely obtains the network traffic information (transBytes) of the three service nodes through JMX. Since all cache nodes have identical configurations, the weights $w_i$ are all set to 1. Based on this information, the cache cluster manager calculates the weighted network traffic $T_i$ of each node and the system's average weighted network traffic $\bar{T}$. In a characteristic sampling period, the state of a cache node is classified as balanced or unbalanced according to the relation between $|T_i - \bar{T}| / \bar{T}$ and the threshold ε; this embodiment sets ε to 15%. Among the nodes in the unbalanced state, those whose weighted network traffic exceeds the mean are put into the migration-out node set (MigrationOutSet), and the new node Server4 together with the nodes below the mean are put into the migration-in node set (MigrationInSet). The basic unit of migration is the data partition, and the migration direction is constrained so that partition data on nodes in MigrationOutSet moves to nodes in MigrationInSet. Based on the per-partition network traffic information of each cache node and the migration node sets, this embodiment uses the branch-and-bound method to solve for an approximate optimal migration scheme, with the objective of minimizing the sum of variances of the weighted network traffic T over the migration node set.
4) Based on the data migration scheme obtained in step 3), the cache cluster manager first uses the controller to send the new partition routing information to all service nodes to update the routing configuration, and then uses the controller to direct each node to complete the data migration according to the migration plan. To guarantee data consistency during migration, the method adopts the data access protocol based on three-phase requests: the state management module of the service node handles the server-side migration state and the piggybacked routing information in client requests, and the client request forwarder handles the synchronization information from the server and performs subsequent request forwarding.
1. Data access protocol based on three-phase requests

To illustrate the processing flow, suppose the system initially has three cache nodes, Server1, Server2 and Server3; the whole hash ring is divided into 12 equal-sized data partitions, each node holding 4; the partition routing table is denoted $RT_{n-1}$ (Server1, Server2 and Server3 are abbreviated as S1, S2 and S3 in the routing table). When the new node Server4 joins during dynamic expansion, according to the migration scheme solved in step 3), partitions 4, 8 and 12 must be migrated from their original service nodes to Server4, and the latest partition routing table is denoted $RT_n$, as shown in Fig. 8. Suppose the migration has not finished yet and the client receives a data access request with key Key1.
(1) The client hashes the key (using the consistent hashing algorithm) and maps the result to position a on the hash ring; the two-level hash mapping then maps position a into data partition 4. Suppose the client's current routing table version is t (t ≤ n-1; if the client has not requested data for a long time, its routing table version may be older). According to this routing information, data partition 4 is stored on server node Server D (D may be 1, 2 or 3); the client sends a data access request to Server D, piggybacking the version synchronization information in the request;

(2) Cache server node Server D receives the piggybacked information and verifies the version: if t < n-1, it returns the routing tables of the two latest versions, $RT_n$ and $RT_{n-1}$; if t = n-1, it returns the latest routing table $RT_n$. The first-phase request is now complete;

(3) The client reads the latest partition routing table $RT_n$; according to it, Key1 is stored on server node Server4, so the client sends a data access request to Server4;

(4) Since the data migration has not finished yet, Server4 piggybacks a flag bit in the response indicating that the server is in the migrating state. If the data has already migrated to Server4, the data is returned directly to the application service that issued the request; if it has not migrated yet, a null value is returned. The second-phase request is now complete;

(5) On receiving the message that the server is in the migrating state, the client checks whether the data value returned from the server is empty. If it is not empty, the value is returned directly to the application service that issued the request; otherwise an extra request is needed: the client locally reads the previous-version routing information $RT_{n-1}$, according to which Key1 is located on server node Server1, and sends a data access request to Server1;

(6) Server1 returns the request result. The third-phase request is now complete;

(7) The cache client returns the result to the application service that issued the request.
2. controlled data are moved
Migration management module in the buffer memory service node is responsible for data partition in internodal migration work, when migration orders arrival and parsing to be finished, creates the background migrate thread and carries out data migration work.The migration thread obtains the block chained list of corresponding subregion by migration subregion view, begins to carry out the data migration operation from the afterbody of chained list, namely adds the high priority data migration of caching system at first, and the back adds the data of caching system and moves at last.Watch-dog monitoring system network traffics expense after the migration thread is finished the volume of transmitted data of fixed size, is obtained system mode from watch-dog simultaneously, and adjusts the migration velocity in next data migration cycle.In the methods of the invention, when the network flow velocity reaches the network bandwidth threshold value of setting, migration velocity is reduced by 20%.The minimal network flow velocity of reserving is 10% of this meshed network bandwidth.After all data partition migrations finished, the replacement server state was stable state.Be expressed as Fig. 9.
This embodiment tests the dynamic expansion method. The experiment uses LoadRunner to simulate concurrent user access; each transaction comprises 10 servlet requests, and the average transaction response time collected during the experiment serves as the evaluation criterion, shown in Fig. 10. Initially the caching system has three service nodes, Server1, Server2 and Server3, and the user concurrency is set to 600. During the warm-up phase, as the Web application keeps putting business data into the caching system, the system's average transaction response time decreases gradually; after the cache space reaches a certain load, application performance stabilizes with an average transaction response time of 12.9 seconds. The access load is then increased by raising the user concurrency to 1000; the cache server cluster reaches its resource saturation point, and dynamic expansion adds one new cache node, Server4, to ensure the continuity of service quality. Since the data migration introduces overhead to the system, the average transaction response time rises slightly (by 0.8 seconds) during this period. After the expansion completes, the system's processing capacity increases and the response time falls again, finally stabilizing at an average transaction response time of 8.1 seconds, 4.8 seconds lower than the stable value before expansion.
To further prove the validity of the load-balancing algorithm, this embodiment defines the load imbalance degree of the cache cluster as the ratio of the variance D(T) of the weighted network traffic T to the square of the average weighted network traffic $\bar{T}$. The load imbalance data collected after the new node joins is shown in Table 2; a sketch of the metric follows the table. Compared with dynamic expansion without load balancing, the method achieves a better load-balancing effect, reducing the system's load imbalance degree by 82.4%.
Table 2. Load imbalance degree of the two methods

Dynamic expansion with load balancing: 6.28×10⁻³
Dynamic expansion without load balancing: 3.56×10⁻²
Ratio of the two: 0.176
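A minimal sketch of this imbalance metric, which is simply the squared coefficient of variation of the weighted network traffic:

```java
// Imbalance degree D(T) / T-bar^2: variance over squared mean.
public class ImbalanceMetric {
    static double imbalanceDegree(double[] weightedFlows) {
        double mean = 0;
        for (double t : weightedFlows) mean += t;
        mean /= weightedFlows.length;
        double var = 0;
        for (double t : weightedFlows) var += (t - mean) * (t - mean);
        var /= weightedFlows.length;
        return var / (mean * mean);
    }
}
```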
5) When the system's average load value falls below the 30% threshold, a node shrink operation is executed. In this embodiment, suppose Server1, Server2, Server3 and Server4 are the existing server nodes in the caching system. The migration scheme is established first: the two cache nodes with the smallest weighted load value L in the system are selected (assumed to be Server2 and Server3, with $L_{Server3} < L_{Server2}$), and all partitions of Server2 are migrated to Server3. To additionally balance the load across the cache nodes, among the remaining nodes Server1 and Server4, nodes satisfying $(T_i - \bar{T}) / \bar{T} > \varepsilon'$ are added to the migration-out node set and nodes satisfying $(\bar{T} - T_i) / \bar{T} > \varepsilon'$ are added to the migration-in node set, with the threshold ε' set to 10%. This embodiment uses the branch-and-bound method to solve for an approximate optimal migration scheme. The data migration is completed according to this scheme; the three-phase-request data access protocol guarantees data consistency during migration, and the controlled data migration method effectively regulates the migration progress. When the migration finishes, node Server2 is removed from the cache cluster.

Claims (10)

1. A distributed cache dynamic scaling method supporting load balancing, the steps of which include:

1) each cache server periodically monitors its own resource utilization;

2) each cache server calculates its weighted load value $L_i$ from the currently monitored resource utilization and sends it to the cache cluster manager;

3) the cache cluster manager calculates the current average load value $\bar{L}$ of the distributed cache system from the weighted load values $L_i$ of the cache servers; when $\bar{L}$ is higher than a set threshold $thre_{max}$, an expansion operation is performed; when $\bar{L}$ is lower than a set threshold $thre_{min}$, a shrink operation is performed; wherein,

a) the expansion operation is: first calculate the weighted network traffic $T_i$ of each cache server and the system's average network traffic $\bar{T}$; then form the migration-out node set from the cache servers satisfying $(T_i - \bar{T}) / \bar{T} > \varepsilon$ and the migration-in node set from the cache servers satisfying $(\bar{T} - T_i) / \bar{T} > \varepsilon$, while adding the new node to the migration-in node set; finally, according to the objective function $\min \sum_{i=1}^{S} D(T_i)$, determine the scheme for migrating data from the cache servers in the migration-out node set to the cache servers in the migration-in node set, and perform the data migration;

b) the shrink operation is: first select the two cache servers with the smallest weighted load value L in the system and merge them; then calculate the weighted network traffic $T_i$ of each cache server and the system's average network traffic $\bar{T}$; then form the migration-out node set from the cache servers satisfying $(T_i - \bar{T}) / \bar{T} > \varepsilon'$ and the migration-in node set from the cache servers satisfying $(\bar{T} - T_i) / \bar{T} > \varepsilon'$; finally, according to the objective function $\min \sum_{i=1}^{S} D(T_i)$, determine the scheme for migrating data from the cache servers in the migration-out node set to the cache servers in the migration-in node set, and perform the data migration; wherein ε and ε' are respectively set thresholds, D(T) is the variance of the weighted network traffic T, and S is the total number of cache servers participating in the data migration.
2. The method of claim 1, wherein the resources comprise: processor, memory and network.
3. The method of claim 2, wherein the weighted load value $L_i$ is computed as:

$$L_i = \sum_{j=1}^{M} (\alpha \cdot CPU_{ij} + \beta \cdot Mem_{ij} + \gamma \cdot Net_{ij}) / M$$

where α is the weight of CPU, β is the weight of memory, γ is the weight of network, $CPU_{ij}$ is the CPU utilization of node i in the j-th sampling window, $Mem_{ij}$ is the memory utilization of node i in the j-th sampling window, $Net_{ij}$ is the network utilization of node i in the j-th sampling window, and M is the number of sampling windows.
4. The method of claim 1, 2 or 3, wherein the branch-and-bound method is used to solve the objective function to obtain the migration scheme.
5. The method of claim 1, 2 or 3, wherein if a client updates migrating data during the data migration, the client first updates the migrating data on the migration-out node and, after the update operation receives the update-success acknowledgement, updates the migration-in node.
6. The method of claim 1, 2 or 3, wherein if a client accesses migrating data during the data migration and the data returned by the migration-in node is inconsistent with the data returned by the migration-out node, the client takes the data returned by the migration-in node.
7. The method of claim 1, 2 or 3, wherein if a client accesses migrating data during the data migration, then:

1) the client synchronizes routing information with the cache server holding the accessed data using piggyback validation, guaranteeing that the client holds the routing tables of the two latest versions: $RT_n$ and $RT_{n-1}$;

2) the client routes with the partition routing table $RT_n$ and requests the cached data from the cache server using piggyback validation;

3) the client acts according to the cache server's return value and the returned server state, that is: when the server state is the migrating state, if the return value is not empty, the client directly returns the result to the application service that issued the request; if the return value is empty, the client finds the migration-out node according to the previous-version routing information $RT_{n-1}$ and sends the data request to that node; when the caching system is in the stable state, the client directly sends the cache server's return value to the requesting application service.
8. The method of claim 1, 2 or 3, characterized in that said move-out node monitors its network overhead; when its network transfer rate reaches a preset bandwidth threshold, the move-out node reduces the data migration rate by θ%, provided that the reduced data migration rate is not lower than a preset minimum data migration rate.
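A sketch of this throttling rule, with θ and the minimum rate as illustrative values:

```python
def throttle_migration_rate(rate, net_speed, bandwidth_threshold,
                            theta=20.0, min_rate=1.0):
    """Reduce the migration rate by theta percent once the move-out node's
    network speed reaches the bandwidth threshold, but never below min_rate."""
    if net_speed >= bandwidth_threshold:
        return max(rate * (1.0 - theta / 100.0), min_rate)
    return rate
```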
9. A load-balancing-supporting distributed cache dynamic scaling system, characterized by comprising caching servers, cache clients and a cache cluster manager; said caching servers are each connected to said cache clients and to said cache cluster manager through a network; wherein:
said caching server is configured to, according to control commands received from the cache cluster manager, control the cache service process to perform the corresponding operations;
said cache cluster manager is configured to monitor changes in the caching server node topology and to obtain the performance information of each caching server, finally providing a unified monitoring view to control the whole cache cluster; wherein said cache cluster manager calculates the current average load value L̄ of the distributed cache system from the weighted load values L_i of the caching servers; when L̄ is higher than a preset threshold thre_max, an expansion operation is performed; when L̄ is lower than a preset threshold thre_min, a shrinkage operation is performed;
the expansion operation is: first, the weighted network traffic T_i of each caching server and the average network traffic T̄ of the system are calculated; then the caching servers satisfying (T_i − T̄)/T̄ > ε form the move-out node set, and the caching servers satisfying (T̄ − T_i)/T̄ > ε′ form the move-in node set, while the newly added nodes are added to said move-in node set; finally, according to the objective function of minimizing the variance D(T) of the weighted network traffic and the total number S of caching servers involved in the migration, a migration scheme for migrating the data of the caching servers in said move-out node set to the caching servers in said move-in node set is determined, and the data migration is performed;
the shrinkage operation is: first, the two caching servers with the smallest weighted load values L in the system are selected and merged; then the weighted network traffic T_i of each caching server and the average network traffic T̄ of the system are calculated; then the caching servers satisfying (T_i − T̄)/T̄ > ε form the move-out node set, and the caching servers satisfying (T̄ − T_i)/T̄ > ε′ form the move-in node set; finally, according to the objective function of minimizing the variance D(T) of the weighted network traffic and the total number S of caching servers involved in the migration, a migration scheme for migrating the data of the caching servers in said move-out node set to the caching servers in said move-in node set is determined, and the data migration is performed; wherein ε and ε′ are preset thresholds, D(T) is the variance of the weighted network traffic T, and S is the total number of caching servers involved in the data migration;
said cache client is responsible for the communication and state synchronization between the application and the caching servers.
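For illustration, one decision step of the cluster manager described in claim 9, reduced to the threshold comparison on the average weighted load (a sketch, not the patented implementation):

```python
def control_step(loads, thre_max, thre_min):
    """loads: dict server -> weighted load L_i; returns the chosen action."""
    avg = sum(loads.values()) / len(loads)  # current average load
    if avg > thre_max:
        return "expand"
    if avg < thre_min:
        return "shrink"
    return "steady"
```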
10. The system of claim 9, characterized in that said caching server comprises a cache service module and a management agent; said cache service module comprises a data management module, a command control engine, a state management module and a migration management module; said client comprises a request forwarder and a multi-version configuration manager; wherein:
said data management module is configured to manage the allocated memory space, and is also responsible for the organization, storage and querying of cached data;
said command control engine is configured to schedule cache requests according to node state during cache command processing, to perform communication protocol handling, and to complete the cache request processing procedure;
said state management module is configured for state management of the caching server, including the server migration state, the cache cluster scale-change state, and the piggyback handling of routing information in client requests;
said migration management module is responsible for data migration between caching server nodes;
said management agent is configured to provide topology management and cache service process lifecycle management for the caching server cluster;
said request forwarder is configured to process the synchronization information returned from the caching server side, and to forward subsequent requests according to this information;
said multi-version configuration manager is configured to maintain routing table information of multiple versions.
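A sketch of the client-side multi-version configuration manager of claim 10, keeping the two most recent routing tables (RT_n and RT_{n-1}) available during a scale change; the class and method names are invented for illustration:

```python
class MultiVersionConfigManager:
    """Retains the routing tables of the most recent versions."""

    def __init__(self, keep=2):
        self.keep = keep
        self.tables = []  # newest last: [..., RT_{n-1}, RT_n]

    def install(self, routing_table):
        """Install a new routing table version, dropping the oldest."""
        self.tables.append(routing_table)
        self.tables = self.tables[-self.keep:]

    def current(self):
        return self.tables[-1]   # RT_n

    def previous(self):
        """RT_{n-1}; falls back to the current table if only one exists."""
        return self.tables[-2] if len(self.tables) > 1 else self.tables[-1]
```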
CN201110230333XA 2011-08-11 2011-08-11 Distributed type dynamic cache expanding method and system for supporting load balancing Expired - Fee Related CN102244685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110230333XA CN102244685B (en) 2011-08-11 2011-08-11 Distributed type dynamic cache expanding method and system for supporting load balancing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110230333XA CN102244685B (en) 2011-08-11 2011-08-11 Distributed type dynamic cache expanding method and system for supporting load balancing

Publications (2)

Publication Number Publication Date
CN102244685A CN102244685A (en) 2011-11-16
CN102244685B true CN102244685B (en) 2013-09-18

Family

ID=44962515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110230333XA Expired - Fee Related CN102244685B (en) 2011-08-11 2011-08-11 Distributed type dynamic cache expanding method and system for supporting load balancing

Country Status (1)

Country Link
CN (1) CN102244685B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104516952A (en) * 2014-12-12 2015-04-15 华为技术有限公司 Memory partition deployment method and device
US9619391B2 (en) 2015-05-28 2017-04-11 International Business Machines Corporation In-memory caching with on-demand migration
CN109857725A (en) * 2019-02-20 2019-06-07 北京百度网讯科技有限公司 Data base management method and device, server and computer-readable medium

Families Citing this family (107)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801766B (en) * 2011-11-18 2015-01-07 北京安天电子设备有限公司 Method and system for load balancing and data redundancy backup of cloud server
CN103188159B (en) * 2011-12-28 2016-08-10 英业达股份有限公司 The management method of hardware usefulness and high in the clouds arithmetic system
CN102662759A (en) * 2012-03-20 2012-09-12 浪潮电子信息产业股份有限公司 Energy saving method based on CPU (central processing unit) load in cloud OS (operating system)
CN102855299A (en) * 2012-08-16 2013-01-02 上海引跑信息科技有限公司 Method for realizing iterative migration of distributed database without interrupting service
CN102843244A (en) * 2012-09-18 2012-12-26 苏州阔地网络科技有限公司 Method and system for web conferencing load distribution
CN102843304A (en) * 2012-09-21 2012-12-26 苏州阔地网络科技有限公司 Network conference load control method and network conference load control system
CN102843305A (en) * 2012-09-21 2012-12-26 苏州阔地网络科技有限公司 Method and system for web conferencing load distribution
CN102857577B (en) * 2012-09-24 2015-10-28 北京联创信安科技有限公司 A kind of system and method for cluster-based storage automatic load balancing
CN102932424B (en) * 2012-09-29 2015-04-08 浪潮(北京)电子信息产业有限公司 Method and system for synchronizing data caching of distributed parallel file system
CN103812895A (en) * 2012-11-12 2014-05-21 华为技术有限公司 Scheduling method, management nodes and cloud computing cluster
CN103838770A (en) * 2012-11-26 2014-06-04 中国移动通信集团北京有限公司 Logic data partition method and system
CN102984274A (en) * 2012-12-13 2013-03-20 江苏新彩软件有限公司 Dynamic load optimization method for lottery cloud service system based on data mining
CN103095599A (en) * 2013-01-18 2013-05-08 浪潮电子信息产业股份有限公司 Dynamic feedback weighted integration load scheduling method of cloud computing operating system
CN103139295A (en) * 2013-01-30 2013-06-05 广东电网公司电力调度控制中心 Cloud computing resource dispatch method and device
CN104009862A (en) * 2013-02-27 2014-08-27 腾讯科技(深圳)有限公司 Equipment scheduling method and system
CN103227754B (en) * 2013-04-16 2017-02-08 浪潮(北京)电子信息产业有限公司 Dynamic load balancing method of high-availability cluster system, and node equipment
CN110677305B (en) * 2013-06-24 2023-04-07 中国银联股份有限公司 Automatic scaling method and system in cloud computing environment
CN103346914A (en) * 2013-07-03 2013-10-09 曙光信息产业(北京)有限公司 Method and device for topological structure update of distributed file system
CN103336726B (en) * 2013-07-10 2016-08-31 北京百度网讯科技有限公司 The method and apparatus of multitask conflict in detection linux system
CN103428102B (en) * 2013-08-06 2016-08-10 北京智谷睿拓技术服务有限公司 The method and system of balancing dynamic load is realized in distributed network
JP6131170B2 (en) * 2013-10-29 2017-05-17 株式会社日立製作所 Computer system and data arrangement control method
CN103605575B (en) * 2013-11-18 2017-10-13 深圳市远行科技股份有限公司 A kind of Cloud Foundry platform applications dispatch system and method
US9519869B2 (en) 2013-11-25 2016-12-13 International Business Machines Corporation Predictive computer system resource monitoring
CN103731341B (en) * 2013-12-30 2018-08-03 广州华多网络科技有限公司 A kind of method and system that instant messaging business is handled
CN103763346B (en) * 2013-12-31 2017-07-25 华为技术有限公司 A kind of distributed resource scheduling method and device
CN103916396B (en) * 2014-04-10 2016-09-21 电子科技大学 A kind of cloud platform application example automatic telescopic method based on loaded self-adaptive
CN103997526B (en) * 2014-05-21 2018-05-22 中国科学院计算技术研究所 A kind of expansible SiteServer LBS and method
CN103970907A (en) * 2014-05-28 2014-08-06 浪潮电子信息产业股份有限公司 Method for dynamically expanding database cluster
CN104023068B (en) * 2014-06-13 2017-12-15 北京信诺瑞得软件系统有限公司 A kind of method that Passive Mode elastic calculation scheduling of resource is realized in load balancing
CN105338026B (en) * 2014-07-24 2018-10-09 阿里巴巴集团控股有限公司 The acquisition methods of data resource, device and system
CN105573838B (en) * 2014-10-14 2022-04-29 创新先进技术有限公司 Cache health degree detection method and device
AU2014410705B2 (en) * 2014-11-05 2017-05-11 Xfusion Digital Technologies Co., Ltd. Data processing method and apparatus
CN104484130A (en) * 2014-12-04 2015-04-01 北京同有飞骥科技股份有限公司 Construction method of horizontal expansion storage system
CN106034144B (en) * 2015-03-12 2019-10-15 中国人民解放军国防科学技术大学 A kind of fictitious assets date storage method based on load balancing
CN104850634A (en) * 2015-05-22 2015-08-19 中国联合网络通信集团有限公司 Data storage node adjustment method and system
CN106330743B (en) * 2015-06-29 2020-10-13 中兴通讯股份有限公司 Method and device for measuring flow balance degree
CN105099935A (en) * 2015-07-28 2015-11-25 小米科技有限责任公司 Server load control method and device
CN106411971B (en) * 2015-07-29 2020-04-21 腾讯科技(深圳)有限公司 Load adjusting method and device
CN106612296A (en) * 2015-10-21 2017-05-03 阿里巴巴集团控股有限公司 A method and apparatus for assigning user equipment connection requests
CN105183670B (en) * 2015-10-27 2018-11-27 北京百度网讯科技有限公司 Data processing method and device for distributed cache system
CN105357296B (en) * 2015-10-30 2018-10-23 河海大学 Elastic caching system under a kind of Docker cloud platforms
CN105338109B (en) * 2015-11-20 2018-10-12 小米科技有限责任公司 Fragment dispatching method, device and distributed server system
CN106817432B (en) * 2015-11-30 2020-09-11 华为技术有限公司 Method, system and equipment for elastically stretching virtual resources in cloud computing environment
US9910713B2 (en) * 2015-12-21 2018-03-06 Amazon Technologies, Inc. Code execution request routing
CN106909557B (en) * 2015-12-23 2020-06-16 中国电信股份有限公司 Memory cluster storage method and device and memory cluster reading method and device
CN105516369A (en) * 2016-02-04 2016-04-20 城云科技(杭州)有限公司 Video cloud platform load balancing method and video cloud platform load balancing dispatcher
CN105847352B (en) * 2016-03-22 2019-09-17 聚好看科技股份有限公司 Expansion method, device and distributed cache system based on distributed cache system
CN105959427B (en) * 2016-04-25 2020-01-21 中国互联网络信息中心 DNS server automatic expansion method
CN114443557A (en) * 2016-04-26 2022-05-06 安博科技有限公司 Data beacon pulse generator powered by information slingshot
CN107608783A (en) * 2016-07-11 2018-01-19 中兴通讯股份有限公司 A kind of method and device of data processing
US10097344B2 (en) * 2016-07-15 2018-10-09 Mastercard International Incorporated Method and system for partitioned blockchains and enhanced privacy for permissioned blockchains
CN106250226B (en) * 2016-08-02 2019-06-18 福建省华渔教育科技有限公司 Method for scheduling task and system based on consistency hash algorithm
CN107689977B (en) * 2016-08-05 2021-12-07 厦门雅迅网络股份有限公司 Routing method and system for distributed caching and pushing
CN106371916B (en) * 2016-08-22 2019-01-22 浪潮(北京)电子信息产业有限公司 A kind of thread optimized method and device thereof of storage system IO
CN106484528B (en) * 2016-09-07 2019-08-27 北京百度网讯科技有限公司 For realizing the method and device of cluster dynamic retractility in Distributed Architecture
CN106503098B (en) * 2016-10-14 2021-11-12 中金云金融(北京)大数据科技股份有限公司 Block chain cloud service framework system built in Paas service layer
CN106815075B (en) * 2016-12-15 2020-06-12 上海交通大学 Regional decomposition optimization method for building fire numerical simulation
CN106603299B (en) * 2016-12-28 2020-05-01 北京奇艺世纪科技有限公司 Method and device for generating service health index
CN106886460B (en) * 2017-02-22 2021-07-20 北京百度网讯科技有限公司 Load balancing method and device
CN106790705A (en) * 2017-02-27 2017-05-31 郑州云海信息技术有限公司 A kind of Distributed Application local cache realizes system and implementation method
CN106873919A (en) * 2017-03-20 2017-06-20 郑州云海信息技术有限公司 A kind of date storage method and device based on cloud storage system
CN107018197A (en) * 2017-04-13 2017-08-04 南京大学 A kind of holding load dynamic retractility mobile awareness Complex event processing method in a balanced way
CN107193989B (en) * 2017-05-31 2021-05-28 郑州云海信息技术有限公司 NAS cluster cache processing method and system
CN107196865B (en) * 2017-06-08 2020-07-24 中国民航大学 Load-aware adaptive threshold overload migration method
CN109218341B (en) * 2017-06-29 2022-02-25 北京京东尚科信息技术有限公司 Load balancing method and device for monitoring server and server
CN107391033B (en) * 2017-06-30 2020-07-07 北京奇虎科技有限公司 Data migration method and device, computing equipment and computer storage medium
CN107463630A (en) * 2017-07-14 2017-12-12 太仓诚泽网络科技有限公司 Multiterminal webpage control system
CN109729108B (en) * 2017-10-27 2022-01-14 阿里巴巴集团控股有限公司 Method for preventing cache breakdown, related server and system
CN107819867A (en) * 2017-11-18 2018-03-20 洛阳理工学院 The load-balancing method and device of a kind of cluster network
CN108089918B (en) * 2017-12-06 2020-07-14 华中科技大学 Graph computation load balancing method for heterogeneous server structure
CN108182105B (en) * 2017-12-12 2023-08-15 苏州大学 Local dynamic migration method and control system based on Docker container technology
CN108111586A (en) * 2017-12-14 2018-06-01 重庆邮电大学 The web cluster system and method that a kind of high concurrent is supported
CN108234616A (en) * 2017-12-25 2018-06-29 深圳华强聚丰电子科技有限公司 A kind of high-available distributed web caching systems and method
CN108183947A (en) * 2017-12-27 2018-06-19 深圳天源迪科信息技术股份有限公司 Distributed caching method and system
CN108519954A (en) * 2018-03-23 2018-09-11 北京焦点新干线信息技术有限公司 A kind of method and device of centralized management caching
CN108512919B (en) * 2018-03-25 2021-07-13 上海米卡信息技术服务有限公司 Cloud storage space allocation method and server
CN108551474B (en) * 2018-03-26 2021-03-09 南京邮电大学 Load balancing method of server cluster
CN108712457B (en) * 2018-04-03 2022-06-07 苏宁易购集团股份有限公司 Method and device for adjusting dynamic load of back-end server based on Nginx reverse proxy
CN108924244B (en) * 2018-07-24 2022-02-25 阿里巴巴(中国)有限公司 Distributed system and flow distribution method and device for same
CN109376013B (en) * 2018-10-11 2020-12-15 北京小米智能科技有限公司 Load balancing method and device
CN109471720A (en) * 2018-10-19 2019-03-15 曙光信息产业(北京)有限公司 Online operational system
CN109525662A (en) * 2018-11-14 2019-03-26 程桂平 The method of copy is set for Hot Contents
CN109656956B (en) * 2018-12-14 2023-06-09 浪潮软件集团有限公司 Method and device for realizing centralized caching of service system data
CN109857528B (en) * 2019-01-10 2021-08-27 北京三快在线科技有限公司 Data migration speed adjusting method and device, storage medium and mobile terminal
CN110169774B (en) * 2019-05-28 2022-06-14 深圳正指向科技有限公司 Motion state identification system and method based on block chain
CN112015326B (en) * 2019-05-28 2023-02-17 浙江宇视科技有限公司 Cluster data processing method, device, equipment and storage medium
CN110374851A (en) * 2019-07-19 2019-10-25 爱景智能装备(无锡)有限公司 The method of startup-shutdown priority is quickly judged in a kind of multiple air compressors
CN112241398A (en) * 2019-07-19 2021-01-19 北京京东尚科信息技术有限公司 Data migration method and system
CN110943925B (en) * 2019-11-26 2020-11-03 腾讯科技(深圳)有限公司 Method and related device for synchronizing routing information
CN111245743B (en) * 2020-01-09 2023-09-08 浙江吉利汽车研究院有限公司 Information processing method, storage medium, gateway and automobile
CN113468127A (en) * 2020-03-30 2021-10-01 同方威视科技江苏有限公司 Data caching method, device, medium and electronic equipment
CN111600794B (en) * 2020-07-24 2020-12-18 腾讯科技(深圳)有限公司 Server switching method, terminal, server and storage medium
CN114584565B (en) * 2020-12-01 2024-01-30 中移(苏州)软件技术有限公司 Application protection method and system, electronic equipment and storage medium
CN112738339B (en) * 2020-12-29 2022-09-23 杭州东信北邮信息技术有限公司 Service instance lossless capacity expansion and reduction method under telecommunication domain micro-service architecture
CN112717376B (en) * 2021-01-04 2022-12-02 厦门梦加网络科技股份有限公司 Method and system for enhancing stability of mobile phone online game
CN113709054A (en) * 2021-07-16 2021-11-26 济南浪潮数据技术有限公司 Keepallved-based LVS (Low Voltage differential Signaling) system deployment adjusting method, device and system
CN113986522A (en) * 2021-08-29 2022-01-28 中盾创新数字科技(北京)有限公司 Load balancing-based distributed storage server capacity expansion system
CN114328062B (en) * 2021-11-18 2023-09-05 芯华章科技股份有限公司 Method, device and storage medium for checking cache consistency
CN113835868B (en) * 2021-11-25 2022-04-15 之江实验室 Buffer scheduling method based on feedback and fair queue service quality perception
CN114466017B (en) * 2022-03-14 2024-03-12 阿里巴巴(中国)有限公司 Data monitoring method and device for kubernetes edge cluster
CN115242721A (en) * 2022-07-05 2022-10-25 中国电子科技集团公司第十四研究所 Embedded system and data flow load balancing method based on same
CN115297131B (en) * 2022-08-01 2023-05-26 东北大学 Sensitive data distributed storage method based on consistent hash
CN115412464B (en) * 2022-11-01 2023-03-24 江苏荣泽信息科技股份有限公司 Dynamic expansion method of block chain based on flow
CN116028234B (en) * 2023-03-31 2023-07-21 山东浪潮科学研究院有限公司 Distributed database load balancing method, device, equipment and storage medium
CN117251341A (en) * 2023-09-27 2023-12-19 中国科学院空天信息创新研究院 Real-time monitoring method and device for cache service cluster, electronic equipment and medium
CN117614956B (en) * 2024-01-24 2024-03-29 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Intra-network caching method and system for distributed storage and storage medium
CN118426713A (en) * 2024-07-05 2024-08-02 北京天弘瑞智科技有限公司 Cluster file distributed management method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6078960A (en) * 1998-07-03 2000-06-20 Acceleration Software International Corporation Client-side load-balancing in client server network
CN101697526A (en) * 2009-10-10 2010-04-21 中国科学技术大学 Method and system for load balancing of metadata management in distributed file system
CN102143215A (en) * 2011-01-20 2011-08-03 中国人民解放军理工大学 Network-based PB level cloud storage system and processing method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6078960A (en) * 1998-07-03 2000-06-20 Acceleration Software International Corporation Client-side load-balancing in client server network
CN101697526A (en) * 2009-10-10 2010-04-21 中国科学技术大学 Method and system for load balancing of metadata management in distributed file system
CN102143215A (en) * 2011-01-20 2011-08-03 中国人民解放军理工大学 Network-based PB level cloud storage system and processing method thereof

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Component-Level Dynamic Cluster Management System; Luo Rong, Wang Tao, Zhang Wenbo, Xie Fei; Application Research of Computers; Apr. 30, 2011; Vol. 28, No. 4; full text *
Design and Implementation of a Scalable Distributed VOD System; Yang Can, Lu Zhengding, Zou Xuecheng; Journal of Huazhong University of Science and Technology; Jun. 30, 2005; Vol. 33, No. 1; pp. 28-31 *
An Adaptive Cluster Adjustment Method for Cloud Environments; Zhou Huanyun, Wang Wei, Zhang Wenbo; Journal of Frontiers of Computer Science and Technology; May 31, 2011; No. 4; full text *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104516952A (en) * 2014-12-12 2015-04-15 华为技术有限公司 Memory partition deployment method and device
CN104516952B (en) * 2014-12-12 2018-02-13 华为技术有限公司 A kind of memory partitioning dispositions method and device
US9619391B2 (en) 2015-05-28 2017-04-11 International Business Machines Corporation In-memory caching with on-demand migration
CN109857725A (en) * 2019-02-20 2019-06-07 北京百度网讯科技有限公司 Data base management method and device, server and computer-readable medium

Also Published As

Publication number Publication date
CN102244685A (en) 2011-11-16

Similar Documents

Publication Publication Date Title
CN102244685B (en) Distributed type dynamic cache expanding method and system for supporting load balancing
US8108612B2 (en) Location updates for a distributed data store
US7457835B2 (en) Movement of data in a distributed database system to a storage location closest to a center of activity for the data
US9996552B2 (en) Method for generating a dataset structure for location-based services and method and system for providing location-based services to a mobile device
CN103237046B (en) Support distributed file system and the implementation method of mixed cloud storage application
Nishio et al. Data management issues in mobile and peer-to-peer environments
CN111580930A (en) Native cloud application architecture supporting method and system for domestic platform
CN101753405A (en) Cluster server memory management method and system
CN102164184A (en) Computer entity access and management method for cloud computing network and cloud computing network
Mayer et al. Graph: Heterogeneity-aware graph computation with adaptive partitioning
CN109639773B (en) Dynamically constructed distributed data cluster control system and method thereof
WO2021120633A1 (en) Load balancing method and related device
US20170351620A1 (en) Caching Framework for Big-Data Engines in the Cloud
CN104679594A (en) Middleware distributed calculating method
CN107807983A (en) A kind of parallel processing framework and design method for supporting extensive Dynamic Graph data query
Le et al. Dynastar: Optimized dynamic partitioning for scalable state machine replication
CN112492022A (en) Cluster, method, system and storage medium for improving database availability
Duan et al. A novel load balancing scheme for mobile edge computing
CN107197039B (en) A kind of PAAS platform service packet distribution method and system based on CDN
CN112698941A (en) Real-time database query method based on dynamic load balancing
Panigrahi et al. DATALET: An approach to manage big volume of data in cyber foraged environment
CN115357375A (en) Server less parallel computing method and system facing MPI
Tenzakhti et al. Replication algorithms for the world-wide web
WO2015055502A2 (en) Method of partitioning storage in a distributed data storage system and corresponding device
Fang et al. Design and evaluation of a pub/sub service in the cloud

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191022

Address after: 250002 room 701, Runxiang building, No. 87, Jingqi Road, Shizhong District, Jinan City, Shandong Province

Patentee after: Jinan Jun'an Tai Investment Group Co., Ltd.

Address before: 100190 No. four, 4 South Street, Haidian District, Beijing, Zhongguancun

Patentee before: Institute of Software, Chinese Academy of Sciences

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130918

Termination date: 20200811

CF01 Termination of patent right due to non-payment of annual fee