CN104142896A - Cache control method and system

Info

Publication number
CN104142896A
CN104142896A
Authority
CN
China
Prior art keywords
cached data
access
application
node
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310172568.7A
Other languages
Chinese (zh)
Other versions
CN104142896B (en)
Inventor
杨琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taobao China Software Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201310172568.7A priority Critical patent/CN104142896B/en
Publication of CN104142896A publication Critical patent/CN104142896A/en
Application granted granted Critical
Publication of CN104142896B publication Critical patent/CN104142896B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

A cache control method comprises the following steps: a cluster comprising a plurality of servers is organized into a hierarchical network structure; each layer of the structure is formed by connecting a plurality of nodes in the form of a set first structure; each first structure forms a node of the next-higher layer; the bottom layer comprises one or more first structures, and each server in the cluster serves as a node in a first structure of the bottom layer; a first structure is formed from n nodes according to a predetermined rule: each of the n nodes is connected to k of the n nodes, where 0 < k < n and n > 1; and when cached data is determined to have been updated, an update notification is sent to the first structure at the top layer and forwarded layer by layer down to the first structures at the bottom layer, until every node of the bottom-layer first structures has received the update notification, so that the servers in the cluster update the cached data. The invention further provides a cache control system. With the cache control method and system, the local cache of each server in the cluster can be updated and synchronized in a timely manner.

Description

Cache control method and system
Technical field
The present application relates to the field of computers, and in particular to a cache control method and system.
Background art
In the prior art, there are generally two kinds of caching methods:
One is the local cache: data is kept in the memory of the local server, so an access only traverses the system bus and is fast. Examples include the open-source EhCache, Google MemCached, and OSCache. A local cache is deployed together with the application and therefore occupies memory of the application server.
The other is the distributed cache: data that an application needs to cache is stored, over the network, on distributed cache servers. A cache-server cluster can also be built, so the cache space is in theory large enough, and no memory of the application server is occupied. However, a distributed cache requires interaction over the network and consumes network bandwidth; in addition, there is the computational cost of serializing data, so its performance is poorer.
For a local cache, when data changes, the common implementation is to invalidate the related data in all the caches in the cluster by message notification. When the related data is needed again, each server fetches the data from the data source and puts it back into the cache. This approach can neither update the cached data in time nor guarantee transactional synchronization across the cluster.
Summary of the invention
The technical problem to be solved by the present application is to provide a cache control method and system that achieve timely update of cached data and transactional synchronization.
To solve the above problem, the present application provides a cache control method, comprising:
organizing a cluster comprising a plurality of servers into a hierarchical network structure;
wherein each layer of the hierarchical network structure is formed by connecting a plurality of nodes in the form of a set first structure; each first structure comprising a plurality of nodes forms one node of the next-higher layer; the bottom layer of the hierarchical network structure comprises one or more first structures, and each server in the cluster serves as a node in a first structure of the bottom layer of the hierarchical network structure;
the first structure is formed from every n nodes according to a predetermined rule, the predetermined rule comprising: each of the n nodes is connected to k of the n nodes, where 0 < k < n and n > 1;
when it is determined that cached data has been updated, sending an update notification to the first structure at the top layer of the hierarchical network structure, the notification being forwarded layer by layer from the top-layer first structure down to the bottom-layer first structures, until every node of the bottom-layer first structures, i.e. every server in the cluster, has received the update notification, so that each server in the cluster updates the cached data.
The above method may further have the following feature: sending the update notification to the first structure at the top layer of the hierarchical network structure, the notification being forwarded layer by layer from the top-layer first structure down to the bottom-layer first structures until every node of the bottom-layer first structures receives the update notification, comprises:
setting one master node in each first structure;
sending the update notification to the master node of the top-layer first structure;
after receiving the update notification, each node performing the following forwarding process:
sending the update notification to its adjacent nodes within the same first structure; and, if the node is not a bottom-layer node, sending the update notification to the master node of the corresponding first structure in the next-lower layer of the hierarchical network structure.
The above method may further have the following feature: before performing the forwarding process after receiving the update notification, each node determines whether it has already processed the update notification; if so, it discards the notification; if not, it performs the forwarding process.
The above method may further have the following feature: after receiving the update notification and the new cached data, the server updates the original cached data.
The above method may further have the following feature: n is 10, k is 3, and the first structure is a Petersen graph.
The above method may further have the following feature: the method further comprises:
storing the cached data of an application in a distributed cache, and storing cached data whose usage frequency and/or size meets a predetermined condition in the local caches of the servers in the cluster;
after receiving access requests from one or more applications for the same cached data, if the requested cached data does not exist in the distributed cache, fetching the requested cached data from the data source, sending it to the one or more applications, and updating it into the distributed cache.
The above method may further have the following feature: the predetermined condition comprises:
the usage frequency is greater than a first threshold and the occupied cache space is less than a second threshold;
and/or the usage frequency is greater than a fourth threshold and the occupied cache space is greater than a third threshold.
The above method may further have the following feature: the method further comprises:
after receiving a cached-data access request from an application, if the requested cached data is located in the local cache of the server that initiated the request or in the distributed cache, obtaining the requested cached data from the local cache or the distributed cache and sending it to the application.
The above method may further have the following feature: obtaining the requested cached data from the local cache or the distributed cache and sending it to the application comprises:
when the requested cached data meets the predetermined condition and exists in the local cache of the server that initiated the request, obtaining it from that local cache and sending it to the application;
when the requested cached data meets the predetermined condition but does not exist in the local cache of the server that initiated the request, or when the requested cached data does not meet the predetermined condition, determining whether the requested cached data exists in the distributed cache; if so, obtaining it from the distributed cache and sending it to the application.
The above method may further have the following feature: the method further comprises:
deploying a central control unit on each server, the central control units on different servers communicating with one another;
after receiving access requests from a plurality of applications for the same cached data, if the requested cached data does not exist in the distributed cache, fetching the requested cached data from the data source, sending it to the plurality of applications, and updating it into the distributed cache comprises:
after the plurality of central control units each receive an access request for the same cached data from an application on its own server, if the requested cached data does not exist in the distributed cache, one of the central control units fetches the requested cached data from the data source, sends it to the application on its own server, updates it into the distributed cache, and sends it to the other central control units, which in turn send it to the applications on their respective servers.
The present application further provides a cache control system, comprising a cluster and a central control unit, the central control unit comprising a configuration module and an update module, wherein:
the cluster comprises a plurality of servers, each server comprising a local cache system;
the configuration module is configured to organize the plurality of servers of the cluster into a hierarchical network structure and to record the hierarchical network structure, wherein each layer of the hierarchical network structure is formed by connecting a plurality of nodes in the form of a set first structure; each first structure comprising a plurality of nodes forms one node of the next-higher layer; the bottom layer of the hierarchical network structure comprises one or more first structures, and each server in the cluster serves as a node in a first structure of the bottom layer; the first structure is formed from every n nodes according to a predetermined rule, the predetermined rule comprising: each of the n nodes is connected to k of the n nodes, where 0 < k < n and n > 1;
the update module is configured to: when it is determined that cached data has been updated, send an update notification to the first structure at the top layer of the hierarchical network structure, the notification being forwarded layer by layer from the top-layer first structure down to the bottom-layer first structures, until every node of the bottom-layer first structures, i.e. every server in the cluster, has received the update notification, so that each server in the cluster updates the cached data.
The above system may further have the following feature: the configuration module is further configured to set one master node in each first structure;
sending, by the update module, the update notification to the top-layer first structure of the hierarchical network structure, the notification being forwarded layer by layer from the top-layer first structure down to the bottom-layer first structures until every node of the bottom-layer first structures receives it, comprises:
sending the update notification to the master node of the top-layer first structure; after receiving the update notification, each node performs the following forwarding process: it sends the update notification to its adjacent nodes within the same first structure and, if it is not a bottom-layer node, to the master node of the corresponding first structure in the next-lower layer of the hierarchical network structure.
The above system may further have the following feature: the update module is further configured such that, before performing the forwarding process after receiving the update notification, each node determines whether it has already processed the update notification; if so, it discards the notification; if not, it performs the forwarding process.
The above system may further have the following feature: the server is further configured to replace the original cached data with the new cached data after receiving the update notification and the new cached data.
The above system may further have the following feature: n is 10, k is 3, and the first structure is a Petersen graph.
The above system may further have the following feature: the system further comprises a distributed cache system, and the central control unit further comprises a storage control module and an access control module, wherein:
the storage control module is configured to store the cached data of an application in the distributed cache system and to store cached data whose usage frequency and/or size meets a predetermined condition in the local cache systems of the servers in the cluster;
the access control module is configured to, after receiving access requests from one or more applications for the same cached data, if the requested cached data does not exist in the distributed cache system, fetch the requested cached data from the data source, send it to the one or more applications, and update it into the distributed cache system.
The above system may further have the following feature: the predetermined condition comprises:
the usage frequency is greater than a first threshold and the occupied cache space is less than a second threshold;
and/or the usage frequency is greater than a fourth threshold and the occupied cache space is greater than a third threshold.
The above system may further have the following feature: the access control module is further configured to, after receiving a cached-data access request from an application, if the requested cached data is located in the local cache system of the server that initiated the request or in the distributed cache system, obtain the requested cached data from the local cache system or the distributed cache system and send it to the application.
The above system may further have the following feature: obtaining, by the access control module, the requested cached data from the local cache system or the distributed cache system and sending it to the application comprises:
when the requested cached data meets the predetermined condition and exists in the local cache system of the server that initiated the request, obtaining it from that local cache system and sending it to the application;
when the requested cached data meets the predetermined condition but does not exist in the local cache system of the server that initiated the request, or when the requested cached data does not meet the predetermined condition, determining whether the requested cached data exists in the distributed cache system; if so, obtaining it from the distributed cache system and sending it to the application.
The above system may further have the following feature: the central control unit is deployed on each server, and the central control units on different servers communicate with one another;
the access control module is further configured such that, when the access control modules of the central control units on a plurality of servers each receive an access request for the same cached data from an application on its own server, if the requested cached data does not exist in the distributed cache system, the access control module of one of the central control units fetches the requested cached data from the data source, sends it to the application on its own server, notifies the update module to update it into the distributed cache system, and sends it to the access control modules of the other central control units, which in turn send it to the applications on their respective servers.
The application comprises following advantage:
1, use local cache and distributed caching, Hoisting System performance, also avoids local EMS memory occupation too much simultaneously.
2, data, except leaving local cache China and foreign countries in, also can be deposited in local disk for large data, avoid too much taking local internal memory, and in addition, large data are directly obtained from this locality, can avoid the loss of large object data Internet Transmission and serializing.
3, form the first structure by multiple nodes and carry out update notification, realize in the cluster of P2P pattern local cache synchronous, ensure the upgrading in time and synchronously of local cache of the each server in cluster.With respect to broadcast type update notification mode of the prior art, in the application because update notification exists redundancy (taking Petersen figure as example, each node receives notice 3 times), can avoid the problem that causes certain applications server to be informed to while only notifying 1 time.In addition, in broadcast type update notification mode, in the single application server needs of initiation update notification and cluster, between all the other servers, carry out mutual, interactive operation is too much, high to individual server performance requirement, in the embodiment of the present application, individual server only need receive update notification three times, sends three times update notification, low to individual server performance requirement.
4, by two sections of submission strategies of central control unit, control synchronous transactional.
Certainly, arbitrary product of enforcement the application might not need to reach above-described all advantages simultaneously.
Brief description of the drawings
Fig. 1 is a schematic diagram of a cache control system according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a Petersen graph according to an embodiment of the present application;
Fig. 3 is a block diagram of the cache control system of the present application.
Detailed description of the embodiments
To make the purpose, technical solutions and advantages of the present application clearer, the embodiments of the present application are described in detail below with reference to the accompanying drawings. It should be noted that, where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another in any manner.
In addition, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that herein.
In the present application, cached data is managed in a unified manner by a central control unit: when an application needs to access cached data, it sends an access request to the central control unit, and the central control unit returns the cached data to the application. In addition, by organizing the servers in the cluster into a hierarchical network structure, the local caches in the cluster are synchronized in a P2P manner, ensuring that the local cache of each server in the cluster is updated and synchronized in time.
An embodiment of the present application provides a cache control method, comprising:
organizing a cluster comprising a plurality of servers into a hierarchical network structure;
wherein each layer of the hierarchical network structure is formed by connecting a plurality of nodes in the form of a set first structure; each first structure comprising a plurality of nodes forms one node of the next-higher layer; the bottom layer of the hierarchical network structure comprises one or more first structures, and each server in the cluster serves as a node in a first structure of the bottom layer;
the first structure is formed from every n nodes according to a predetermined rule, the predetermined rule comprising: each of the n nodes is connected to k of the n nodes, where 0 < k < n and n > 1;
when it is determined that cached data has been updated, sending an update notification to the first structure at the top layer of the hierarchical network structure, the notification being forwarded layer by layer from the top-layer first structure down to the bottom-layer first structures, until every node of the bottom-layer first structures, i.e. every server in the cluster, has received the update notification, so that each server in the cluster updates the cached data.
In an alternative of this embodiment, the hierarchical network structure may be configured as follows:
each application server in the cluster is taken as a node, and every n nodes are connected according to the predetermined rule to generate the first structures of the bottom layer of the hierarchical network structure, every n nodes generating one first structure;
if there is only one first structure at the bottom layer, the bottom layer of the hierarchical network structure is also the top layer, and the configuration ends;
if there is more than one first structure at the bottom layer, each bottom-layer first structure is taken as a node of the current layer (i.e. the layer above the bottom layer), and every n nodes of the current layer are connected according to the predetermined rule to generate new first structures;
it is then determined whether the number of first structures in the current layer is greater than one: if there is only one, the configuration ends; if there is more than one, each first structure in the current layer is taken as a node of the next-higher layer, those nodes are connected according to the predetermined rule, and new first structures are generated; this step is repeated until only one first structure is generated at the top layer of the hierarchical network structure. When generating a first structure, if fewer than n nodes remain, the missing nodes are replaced with empty (null) nodes before the first structure is generated. A minimal sketch of this construction is given below.
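The following Java sketch illustrates this bottom-up construction under stated assumptions: it only groups every n nodes into a first structure and pads with null nodes, while the intra-group wiring of each node to k others (e.g. the Petersen adjacency) is omitted here and shown separately later; all class and method names are hypothetical and not taken from the patent.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical construction of the hierarchical network structure: servers form the
// bottom-layer first structures; each first structure then becomes one node of the
// next-higher layer, until a single top-level first structure remains.
public class HierarchyBuilder {

    // A "first structure" groups up to n member nodes; null marks an empty node.
    public static class FirstStructure {
        final List<Object> members = new ArrayList<>();
    }

    public static List<List<FirstStructure>> build(List<String> servers, int n) {
        List<List<FirstStructure>> layers = new ArrayList<>();
        List<?> currentNodes = servers;          // bottom layer: real servers
        while (true) {
            List<FirstStructure> layer = new ArrayList<>();
            for (int i = 0; i < currentNodes.size(); i += n) {
                FirstStructure fs = new FirstStructure();
                for (int j = 0; j < n; j++) {
                    // pad with null nodes when fewer than n members remain
                    fs.members.add(i + j < currentNodes.size() ? currentNodes.get(i + j) : null);
                }
                layer.add(fs);
            }
            layers.add(layer);
            if (layer.size() == 1) {             // a single first structure: top layer reached
                return layers;
            }
            currentNodes = layer;                // each first structure becomes a node above
        }
    }

    public static void main(String[] args) {
        List<String> servers = new ArrayList<>();
        for (int i = 0; i < 23; i++) servers.add("server-" + i);
        List<List<FirstStructure>> layers = build(servers, 10);
        System.out.println("layers: " + layers.size());                          // 2 for 23 servers, n = 10
        System.out.println("bottom first structures: " + layers.get(0).size());  // 3
    }
}
```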
In an alternative of this embodiment, sending the update notification to the first structure at the top layer of the hierarchical network structure, the notification being forwarded layer by layer from the top-layer first structure down to the bottom-layer first structures until every node of the bottom-layer first structures receives the update notification, comprises:
one master node may be set in each first structure;
the update notification is sent to the master node of the top-layer first structure;
after receiving the update notification, each node performs the following forwarding process:
it sends the update notification to its adjacent nodes within the same first structure; and, if the node is not a bottom-layer node, it sends the update notification to the master node of the corresponding first structure in the next-lower layer of the hierarchical network structure.
The method may further comprise: before performing the forwarding process after receiving the update notification, each node may determine whether it has already processed the update notification; if so, it discards the notification; if not, it performs the forwarding process, as illustrated by the sketch below.
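A minimal Java sketch of this per-node forwarding process, assuming a node knows its adjacent nodes within its own first structure and, if it is not a bottom-layer node, the master node of its corresponding lower-layer first structure; duplicates are detected with a set of already-seen notification IDs. All names are illustrative, not from the patent.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative node-side handling of an update notification: forward it to the
// adjacent nodes in the same first structure, and to the master node of the
// lower-layer first structure if this node is not at the bottom layer.
// A notification that was already processed is discarded.
public class NotifyingNode {
    private final String name;
    private final List<NotifyingNode> neighbors = new ArrayList<>(); // same first structure
    private NotifyingNode lowerMaster;                               // null for bottom-layer nodes
    private final Set<String> processed = new HashSet<>();

    public NotifyingNode(String name) { this.name = name; }
    public void addNeighbor(NotifyingNode n) { neighbors.add(n); }
    public void setLowerMaster(NotifyingNode m) { lowerMaster = m; }

    public void onUpdateNotification(String id) {
        if (!processed.add(id)) return;            // already processed: discard the duplicate
        System.out.println(name + " processes " + id);
        for (NotifyingNode n : neighbors) n.onUpdateNotification(id);  // forward within the structure
        if (lowerMaster != null) lowerMaster.onUpdateNotification(id); // push one layer down
    }

    public static void main(String[] args) {
        NotifyingNode a = new NotifyingNode("A"), b = new NotifyingNode("B"), c = new NotifyingNode("C");
        a.addNeighbor(b); a.addNeighbor(c);
        b.addNeighbor(a); b.addNeighbor(c);
        c.addNeighbor(a); c.addNeighbor(b);
        a.onUpdateNotification("update-1");        // each node logs the notification exactly once
    }
}
```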
In an alternative of this embodiment, the method further comprises: after receiving the update notification and the new cached data, the server updates the original cached data.
In an alternative of this embodiment, after the server receives the update notification and the new cached data, the server is notified that the change takes effect; after receiving the instruction that the change takes effect, the server replaces the original cached data with the new cached data.
In an alternative of this embodiment, the first structure is a Petersen graph, n is 10, and k is 3. Of course, n and k may take other values and the first structure may be another structure; for example, n = 5, k = 2, and the first structure is a pentagon.
In an alternative of this embodiment, the method further comprises:
storing the cached data of an application in a distributed cache, and storing cached data whose usage frequency and/or size meets a predetermined condition in the local caches of the servers in the cluster;
after receiving access requests from one or more applications for the same cached data, if the requested cached data does not exist in the distributed cache, fetching the requested cached data from the data source, sending it to the one or more applications, and updating it into the distributed cache.
The predetermined condition may be set as required. In an alternative of this embodiment, the predetermined condition comprises:
the usage frequency is greater than a first threshold and the occupied cache space is less than a second threshold;
and/or the usage frequency is greater than a fourth threshold and the occupied cache space is greater than a third threshold. A possible encoding of this condition is sketched below.
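For illustration only, a small Java predicate for this predetermined condition, assuming the usage frequency is a count over a fixed period and the cache space is measured in bytes; the threshold names mirror the first to fourth thresholds above and their values are hypothetical.

```java
// Hypothetical check of the "predetermined condition" that decides whether cached
// data is also kept in the servers' local caches: either frequently used and small,
// or frequently used and large.
public class CachePolicy {
    private final long firstThreshold;   // minimum usage frequency for small items
    private final long secondThreshold;  // maximum size (bytes) for small items
    private final long thirdThreshold;   // minimum size (bytes) for large items
    private final long fourthThreshold;  // minimum usage frequency for large items

    public CachePolicy(long first, long second, long third, long fourth) {
        this.firstThreshold = first;
        this.secondThreshold = second;
        this.thirdThreshold = third;
        this.fourthThreshold = fourth;
    }

    public boolean meetsPredeterminedCondition(long usageFrequency, long sizeInBytes) {
        boolean smallAndHot = usageFrequency > firstThreshold && sizeInBytes < secondThreshold;
        boolean largeAndHot = usageFrequency > fourthThreshold && sizeInBytes > thirdThreshold;
        return smallAndHot || largeAndHot;
    }
}
```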
In an alternative of this embodiment, the method further comprises:
after receiving a cached-data access request from an application, if the requested cached data is located in the local cache of the server that initiated the request or in the distributed cache, obtaining the requested cached data from the local cache or the distributed cache and sending it to the application.
In an alternative of this embodiment, obtaining the requested cached data from the local cache or the distributed cache and sending it to the application comprises:
when the requested cached data meets the predetermined condition and exists in the local cache of the server that initiated the request, obtaining it from that local cache and sending it to the application;
when the requested cached data meets the predetermined condition but does not exist in the local cache of the server that initiated the request, or when the requested cached data does not meet the predetermined condition, determining whether the requested cached data exists in the distributed cache; if so, obtaining it from the distributed cache and sending it to the application.
In an alternative of this embodiment, the method further comprises:
deploying a central control unit on each server, the central control units on different servers communicating with one another;
after receiving access requests from a plurality of applications for the same cached data, if the requested cached data does not exist in the distributed cache, fetching the requested cached data from the data source, sending it to the plurality of applications, and updating it into the distributed cache comprises:
after the plurality of central control units each receive an access request for the same cached data from an application on its own server, if the requested cached data does not exist in the distributed cache, one of the central control units fetches the requested cached data from the data source, sends it to the application on its own server, updates it into the distributed cache, and sends it to the other central control units, which in turn send it to the applications on their respective servers. A sketch of this coordinated fetch is given below.
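The following Java sketch shows one way such a coordinated fetch could look, under the assumption that the central control units designate a single fetcher per cache key (here, simply the first requester) so that only one of them hits the data source; the designation rule, the interfaces and all names are hypothetical and not prescribed by the patent.

```java
import java.util.List;

// Hypothetical coordination among central control units on a cache miss:
// exactly one unit fetches from the data source, updates the distributed cache,
// and hands the value to its peers, which serve their local applications.
public class CentralControlUnit {
    public interface DataSource       { String load(String key); }
    public interface DistributedCache { String get(String key); void put(String key, String value); }

    private final int id;
    private final DistributedCache distributedCache;
    private final DataSource dataSource;

    public CentralControlUnit(int id, DistributedCache cache, DataSource source) {
        this.id = id;
        this.distributedCache = cache;
        this.dataSource = source;
    }

    // Called when the application on this unit's server requests cached data.
    public String handleRequest(String key, List<CentralControlUnit> allRequesters) {
        String value = distributedCache.get(key);
        if (value != null) {
            return value;                                  // hit in the distributed cache
        }
        CentralControlUnit fetcher = allRequesters.get(0); // assumed designation of one fetcher
        if (this == fetcher) {
            value = dataSource.load(key);                  // only the fetcher touches the data source
            distributedCache.put(key, value);              // update the distributed cache
            for (CentralControlUnit peer : allRequesters) {
                if (peer != this) peer.receiveFromPeer(key, value);
            }
            return value;
        }
        return null;                                       // non-fetchers wait for receiveFromPeer(...)
    }

    // Peers deliver the fetched value to the application on their own server.
    public void receiveFromPeer(String key, String value) {
        System.out.println("unit " + id + " delivers " + key + " to its local application");
    }
}
```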
In an alternative of this embodiment, storing the data that meets the predetermined condition in the local cache of the application server of the application comprises:
storing the data that meets a first sub-condition in a first interval of the local cache of the application server of the application;
storing the data that meets a second sub-condition in the memory of the application server of the application or storing it as files, and storing an index table of the data that meets the second sub-condition in a second interval of the local cache of the application server, the index table indicating the storage locations of the data that meets the second sub-condition.
The central control unit obtains the requested cached data from the local cache or the distributed cache and sends it to the application in two cases, handled according to whether the requested data meets the predetermined condition. Since cached data that meets the predetermined condition is also stored in the local cache, if the data to be accessed meets the predetermined condition, the central control unit first searches the local cache; if the local cache does not contain the requested cached data, it searches the distributed cache; if the distributed cache does not contain it either, it fetches the requested cached data from the data source. If the data to be accessed does not meet the predetermined condition, the central control unit searches the distributed cache directly, and if the distributed cache does not contain the requested cached data, it fetches it from the data source. Specifically:
1) when the requested cached data meets the predetermined condition, the central control unit determines whether the requested cached data exists in the local cache of the application server that initiated the request; if so, it obtains the requested cached data from that local cache and sends it to the application; if not, it determines whether the requested cached data exists in the distributed cache, and if so, obtains it from the distributed cache and sends it to the application;
2) when the requested cached data does not meet the predetermined condition, the central control unit determines whether the requested cached data exists in the distributed cache, and if so, obtains it from the distributed cache and sends it to the application. This lookup order is sketched below.
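A minimal Java sketch of this lookup order, assuming simple get interfaces for the local cache, the distributed cache, and the data source; whether a key meets the predetermined condition is taken as an input here, and all names are illustrative rather than taken from the patent.

```java
// Illustrative lookup order used by a central control unit: data that meets the
// predetermined condition is tried in the local cache first, then the distributed
// cache, then the data source; other data skips the local cache.
public class CacheLookup {
    public interface Store { String get(String key); }

    private final Store localCache;
    private final Store distributedCache;
    private final Store dataSource;

    public CacheLookup(Store localCache, Store distributedCache, Store dataSource) {
        this.localCache = localCache;
        this.distributedCache = distributedCache;
        this.dataSource = dataSource;
    }

    public String lookup(String key, boolean meetsPredeterminedCondition) {
        if (meetsPredeterminedCondition) {
            String value = localCache.get(key);      // only eligible data lives in the local cache
            if (value != null) return value;
        }
        String value = distributedCache.get(key);    // the full data set lives in the distributed cache
        if (value != null) return value;
        return dataSource.get(key);                  // last resort: the true data source
    }
}
```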
In the above solution, the cached data is stored in both the distributed cache and the local cache, which, compared with using a single cache alone, improves system performance and also avoids occupying too much local memory. In addition, the cached data is managed in a unified manner by the central control unit: when an application needs to access cached data, it is always obtained through the central control unit, which avoids system congestion, or even an avalanche, caused by multiple servers accessing the data source at the same time.
The present application is further described below through a specific embodiment.
The full set of cached data is stored in the distributed cache. The cached data is classified by size and usage frequency, and a copy of part of the data is also stored in the local cache; when the application runs, this high-priority data is obtained from the local cache.
In this embodiment, the data to be cached is divided into the following four classes:
Class A: data whose occupied cache space is greater than a fifth threshold but whose usage frequency within a set period is less than a sixth threshold, e.g. commodity data.
Class B: data whose usage frequency is greater than the first threshold and whose occupied cache space is less than the second threshold, e.g. runtime parameters of the system.
Class C: data whose occupied cache space is greater than the third threshold and whose usage frequency is greater than the fourth threshold.
Class D: data whose occupied cache space is greater than the third threshold and whose usage frequency is less than or equal to the fourth threshold.
In this embodiment, the Class B and Class C data are the data that meets the aforementioned predetermined condition, while the Class A and Class D data do not meet the predetermined condition. Therefore, Class A and Class D data only need to be stored in the distributed cache, whereas Class B and Class C data, besides being stored in the distributed cache, also have a copy stored in the local cache, as illustrated by the classification sketch below.
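For illustration, a Java enum sketch of this A/B/C/D classification, assuming the size and frequency thresholds from above are supplied as parameters; the thresholds are placeholders, and the order in which the checks are applied is an assumption, since the patent does not specify how overlapping conditions are resolved.

```java
// Hypothetical classification of data to be cached into the four classes of this
// embodiment. t1/t4/t6 are usage-frequency thresholds; t2/t3/t5 are sizes in bytes.
public class CacheDataClassifier {
    public enum DataClass { A, B, C, D, UNCLASSIFIED }

    private final long t1, t2, t3, t4, t5, t6;

    public CacheDataClassifier(long t1, long t2, long t3, long t4, long t5, long t6) {
        this.t1 = t1; this.t2 = t2; this.t3 = t3; this.t4 = t4; this.t5 = t5; this.t6 = t6;
    }

    public DataClass classify(long sizeInBytes, long usageFrequency) {
        if (usageFrequency > t1 && sizeInBytes < t2)  return DataClass.B;  // small and hot
        if (sizeInBytes > t3 && usageFrequency > t4)  return DataClass.C;  // large and hot
        if (sizeInBytes > t3 && usageFrequency <= t4) return DataClass.D;  // large, not hot
        if (sizeInBytes > t5 && usageFrequency < t6)  return DataClass.A;  // large, rarely used
        return DataClass.UNCLASSIFIED;
    }

    // Only classes B and C meet the predetermined condition and get a local-cache copy.
    public boolean keepLocalCopy(DataClass c) {
        return c == DataClass.B || c == DataClass.C;
    }
}
```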
The cache control system of this embodiment is based on the architecture shown in Fig. 1. As shown in Fig. 1, it comprises: a central control unit 101, a distributed cache 102, an application A cluster 103, a data source (Datastore) 104, and an application B 105. The central control unit 101 is a logical entity; in this example it is drawn separately for convenience of illustration, but physically the central control unit 101 may be deployed on each application server in the application A cluster 103, wherein:
Central control unit 101: manages the local caches in the cluster environment, manages cached data shared between different applications, and performs synchronized management of the Class B and Class C data in the local caches and the distributed cache of the same application. The central control unit may be deployed on the application server together with the application, so there is no network-communication overhead with respect to the application. The central control unit may be a JAR (Java Archive) package, a DLL library, or the like.
Distributed cache (center cache) 102: under the control of the central control unit, stores all cached data, including the above Class A, B, C and D data; the distributed cache may be implemented based on a distributed hash table algorithm, e.g. memcache, tair, etc.
Application A cluster 103: a cluster environment for application A. Application A is deployed on every server, and every server contains a local cache (such as ehCache or Google MemCached). The data in the local cache of each server in the application A cluster is a subset of the data in the distributed cache, and each server caches the above Class B and Class C data under the control of the central control unit.
Data source 104: stores the true source of the data needed by the applications, typically a database or a file system, such as mysql or TFS (Taobao File System).
Application B 105: may be a cluster environment or a stand-alone environment; for convenience of description, it is a stand-alone environment in this embodiment.
The scenario relationship between application A and application B: the users of application A are ordinary users, while application B is used by administrators to configure the runtime parameters of application A.
In this embodiment, the local cache in each server of the application A cluster may be divided into two intervals: the first interval stores the Class B data distributed by the central control unit; the second interval stores an index table of the Class C data distributed by the central control unit, the Class C data itself being kept separately as files or in memory, with the index table indicating the storage locations of the Class C data.
The central control unit may manage the Class C data placed in the second interval according to algorithms such as FIFO (First In First Out) and maximum access count, thereby avoiding the cost of transferring and serializing large data over the network.
The central control unit manages the first-interval data table and the second-interval data table configured for the application (for example, if a certain configuration parameter is frequently used, the name of that parameter is configured in the table). Data recorded in the tables is obtained from the local cache; if it is not found there, it is obtained from the distributed cache 102; if it is not found in the distributed cache 102 either, it is obtained from the data source 104. A sketch of such a two-interval local cache is given below.
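The Java sketch below shows one possible shape of such a two-interval local cache, assuming the first interval is an in-memory map of small Class B values and the second interval is an index table mapping keys to on-disk file locations for large Class C data; the use of a plain map and temporary files is an assumption made for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// Hypothetical two-interval local cache: the first interval holds small Class B
// values directly in memory; the second interval holds only an index table that
// points to where the large Class C data is stored on local disk.
public class TwoIntervalLocalCache {
    private final Map<String, String> firstInterval = new HashMap<>();      // Class B values
    private final Map<String, Path> secondIntervalIndex = new HashMap<>();  // Class C index table

    public void putClassB(String key, String value) {
        firstInterval.put(key, value);
    }

    public void putClassC(String key, byte[] largeValue) throws IOException {
        Path file = Files.createTempFile("class-c-" + key, ".bin");
        Files.write(file, largeValue);             // large data stays on local disk
        secondIntervalIndex.put(key, file);        // the index table records its location
    }

    // Returns the value if it is held locally, or null so the caller can fall back
    // to the distributed cache and then to the data source.
    public byte[] get(String key) throws IOException {
        String small = firstInterval.get(key);
        if (small != null) return small.getBytes();
        Path location = secondIntervalIndex.get(key);
        if (location != null) return Files.readAllBytes(location);
        return null;
    }
}
```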
Class A and Class D data is obtained by the central control unit directly from the distributed cache 102; if it is not found in the distributed cache 102, it is obtained from the data source 104.
When the application A cluster environment starts for the first time, the first server to start loads all the Class B and Class C data through its central control unit and places it in the distributed cache and the local cache; when the other servers start, they can then load the data directly from the distributed cache into their local caches through their central control units.
When an external application has modified Class B or Class C data, it sends a notification to the central control unit, which triggers the P2P data-synchronization process based on the Petersen-graph algorithm, ensuring that the local caches of all servers hold the same mirror of the data.
As shown in Fig. 2, a Petersen graph comprises 10 nodes, and each node is connected with three other nodes.
The mechanism for synchronized management between the local caches of the application cluster may be a P2P model, implemented based on the Petersen-graph algorithm.
Suppose the application A cluster contains N servers.
The cluster is configured into a hierarchical network structure based on Petersen graphs as follows:
each layer of the hierarchical network structure is formed by connecting a plurality of nodes in the form of a Petersen graph; each Petersen graph comprising a plurality of nodes forms one node of the next-higher layer; the bottom layer of the hierarchical network structure comprises one or more Petersen graphs, and each server in the cluster serves as a node in a Petersen graph of the bottom layer.
A Petersen graph comprises 10 nodes, and each node is connected to three other nodes in the same Petersen graph. If more than one Petersen graph is generated in the same layer, each Petersen graph of the current generation is taken as a node, and every 10 such nodes form a new Petersen graph; this step is repeated until a single Petersen graph is finally generated. During this process, if fewer than 10 nodes remain when a particular Petersen graph is generated, specific empty (null) nodes are used as substitutes to make up 10 nodes before the Petersen graph is generated.
One node in each Petersen graph may be designated as the master node of that Petersen graph.
Through the above steps, a nested Petersen graph is generated: the bottom-layer Petersen graphs are composed of actual server nodes, and the top layer is a single Petersen graph. The above method of generating a hierarchical network structure based on Petersen graphs is also applicable to generating hierarchical network structures whose first structure is not a Petersen graph. The standard wiring of a single Petersen graph is sketched below.
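For reference, a small Java sketch that wires up the standard Petersen graph used as the first structure here: an outer 5-cycle, an inner five-node star (each inner node connected to the inner node two positions away), and spokes joining outer node i to inner node i, giving each of the 10 nodes exactly 3 neighbours. The array-of-lists representation is an implementation choice, not taken from the patent.

```java
import java.util.ArrayList;
import java.util.List;

// Builds the adjacency lists of the Petersen graph: 10 nodes, each of degree 3.
// Nodes 0..4 form the outer cycle, nodes 5..9 the inner five-point star, and
// node i is joined to node i + 5 by a spoke.
public class PetersenGraph {

    public static List<List<Integer>> adjacency() {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 10; i++) adj.add(new ArrayList<>());
        for (int i = 0; i < 5; i++) {
            addEdge(adj, i, (i + 1) % 5);          // outer 5-cycle
            addEdge(adj, 5 + i, 5 + (i + 2) % 5);  // inner pentagram
            addEdge(adj, i, i + 5);                // spoke
        }
        return adj;
    }

    private static void addEdge(List<List<Integer>> adj, int a, int b) {
        adj.get(a).add(b);
        adj.get(b).add(a);
    }

    public static void main(String[] args) {
        List<List<Integer>> adj = adjacency();
        for (int i = 0; i < 10; i++) {
            System.out.println("node " + i + " -> " + adj.get(i)); // each list has 3 entries
        }
    }
}
```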
The P2P-mode cache synchronization based on the above Petersen graphs proceeds as follows:
Step 401: the cached-data update notification is passed to the master node of the top-layer Petersen graph of the hierarchical network structure.
Step 402: after receiving the update notification, each node first determines whether it has already processed this notification; if not, it forwards the update notification to its three adjacent nodes in the same layer.
Specifically, after the master node of the top-layer Petersen graph receives the update notification, it notifies its adjacent nodes in the same layer; after an adjacent node in the same layer receives the update notification, if it has not yet processed it, it continues to forward the notification to its three adjacent nodes in the same layer, and so on, so that the update notification is forwarded to all nodes in the same layer.
In this way, each node of the same layer receives the notification at most three times but only processes the first one it receives, which avoids redundant operations.
Step 403: if a node is a Petersen graph rather than a server, it further notifies the master node of that Petersen graph, until the notification reaches the server nodes of the bottom-layer Petersen graphs.
Step 404: after receiving the update notification, a server node reads the new data into memory and notifies the central control unit that the data has been received successfully.
Step 405: after all servers have notified the central control unit that the data was received successfully, the central control unit notifies all servers that the change takes effect; each server then overwrites the original data with the new data and notifies the central control unit that activation succeeded.
Step 406: when all servers have reported successful activation, the whole process ends.
Except for the nodes of the bottom-layer Petersen graphs, which are servers, the other nodes are logical nodes; the handling of the update notification by a logical node is carried out by the central control unit, until the update notification reaches the bottom-layer server nodes. The central control units on the servers interact with one another to complete the notification. A sketch of the two-phase commit in steps 404 to 406 is given below.
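The following Java sketch illustrates the two-phase commit pattern of steps 404 to 406 under simplifying assumptions: synchronous calls instead of asynchronous notifications, and a coordinator that first collects "data received" acknowledgements from every server and only then tells all of them to activate the new cached data. The class and method names are illustrative.

```java
import java.util.List;

// Hypothetical two-phase cache update: phase 1 distributes the new data and waits
// for every server to confirm receipt; phase 2 tells every server to switch over.
// Only when all servers acknowledge both phases is the update considered complete.
public class TwoPhaseCacheCommit {

    public interface CacheServer {
        boolean receiveNewData(String key, String newValue); // phase 1: stage the data in memory
        boolean activate(String key);                        // phase 2: overwrite the old data
    }

    private final List<CacheServer> servers;

    public TwoPhaseCacheCommit(List<CacheServer> servers) {
        this.servers = servers;
    }

    public boolean update(String key, String newValue) {
        for (CacheServer server : servers) {                 // phase 1: prepare
            if (!server.receiveNewData(key, newValue)) {
                return false;                                // a failure here could trigger retry,
            }                                                // passive invalidation, or rollback
        }
        for (CacheServer server : servers) {                 // phase 2: commit
            if (!server.activate(key)) {
                return false;
            }
        }
        return true;                                         // all servers now hold the same mirror
    }
}
```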
In the prior art, in a cluster environment, cached data between servers is usually synchronized by one server broadcasting a notification to the other servers to invalidate their caches. The next time another server accesses the data, it first queries the cache and, on a miss, fetches the data from the true data source and refreshes the cache. This approach can neither update the cached data in time nor guarantee transactional synchronization across the cluster. In this embodiment, after one application server updates its cached data, the cached data of the other application servers is updated through the Petersen-graph-based P2P synchronization, which guarantees the efficiency of the cache update.
If a server fails to synchronize the update, the central control unit may communicate directly with the failed server to update it; or the local cache of the failed application server may simply be invalidated and updated passively, where passive update means that the data is only marked as invalid, and new cached data is fetched from the data source to replace the invalid cached data the next time it is needed; or the central control unit may notify all application servers to roll back, where rolling back in this embodiment means returning to the state before the cached-data update was performed. These options allow the handling that best matches the business requirements to be chosen, as sketched below.
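A compact Java sketch of these three recovery options, under the assumption that the coordinator knows which servers failed; the enum values and method names are placeholders chosen for illustration.

```java
import java.util.List;

// Hypothetical handling of servers that failed to apply a synchronized cache update:
// retry them directly, invalidate their local caches for passive (lazy) refresh,
// or roll every server back to the state before the update.
public class SyncFailureHandler {
    public enum Strategy { DIRECT_RETRY, PASSIVE_INVALIDATE, ROLLBACK_ALL }

    public interface CacheServer {
        boolean retryUpdate(String key, String newValue);
        void invalidate(String key);        // lazy refresh from the data source on next use
        void rollback(String key);          // restore the pre-update cached data
    }

    public void handle(Strategy strategy, String key, String newValue,
                       List<CacheServer> failedServers, List<CacheServer> allServers) {
        switch (strategy) {
            case DIRECT_RETRY:
                for (CacheServer s : failedServers) s.retryUpdate(key, newValue);
                break;
            case PASSIVE_INVALIDATE:
                for (CacheServer s : failedServers) s.invalidate(key);
                break;
            case ROLLBACK_ALL:
                for (CacheServer s : allServers) s.rollback(key);
                break;
        }
    }
}
```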
The above solution has the following advantages:
1. The characteristics of the Petersen graph are used to spread the notification and the new cached-data package in a P2P, virus-like manner.
2. Each server receives the cache-update notification and sends the cache-update notification a fixed number of times, three each, which is simple and controllable.
3. A two-phase commit approach is used. In the present application, the two-phase commit means that, after a server receives the update notification, it reads the new cached data into memory and notifies the central control unit that the data was received successfully; after all servers have notified the central control unit of successful receipt, the central control unit notifies all servers that the change takes effect; each server then overwrites the original cached data with the new cached data and notifies the central control unit that activation succeeded. This ensures transactional consistency, so that the data in the local cache of every server in the cluster is identical. In the prior art, in a cluster environment, cached data between servers is usually synchronized by one server broadcasting a notification to the other servers to invalidate their caches; the next time another server accesses the data, it first queries the cache and, on a miss, fetches the data from the data source and refreshes the cache. By contrast, the present application can update the cached data in time and guarantee transactional synchronization across the cluster.
The embodiment of the present application also provides a kind of cache control system, as shown in Figure 3, comprising:
Cluster 301 and central control unit 302, described cluster 301 comprises multiple servers 3011, each server comprises local cache system 3012; Described central control unit 302 comprises configuration module 3021 and update module 3022, wherein
Described configuration module 3021 for, the described multiple servers of described cluster are formed to level network structures, record described level network structure: wherein, every one deck of described level network structure is formed by connecting with the form of the first structure of setting by multiple nodes; Each first structure that comprises multiple nodes forms a node in last layer network structure; The bottom of described level network structure comprises one or more the first structures, and the each server in described cluster is as a node in the first structure of the bottom of described level network structure; Described the first structure is formed by pre-defined rule by every n node, described pre-defined rule comprises: the each node in a described n node is connected with k node in this n node, 0 < k < n, n > 1;
Described update module 3022 for: while judging that data cached generation is upgraded, send first structure of update notification to the top layer of described level network structure, successively be forwarded to the first structure of the described bottom by the first structure of described top layer, until each node of the first structure of the described bottom obtains described update notification, be that each server in described cluster obtains described update notification, so that the each server update in described cluster is data cached.
Described configuration module 3021 also for: set a host node in each described the first structure;
Described update module 3022 sends first structure of update notification to the top layer of described level network structure, successively be forwarded to the first structure of the described bottom by the first structure of described top layer, comprise until each node of the first structure of the described bottom obtains described update notification:
Described update notification is sent to the host node of the first structure of described top layer; Each node receives after described update notification, carry out following forward process: described update notification is sent to the adjacent node in same the first structure, and, if the non-bottom node of described node, sends to the host node in lower one deck the first structure corresponding in this level network structure by described update notification.
In a kind of alternatives of the present embodiment, described update module 3022 also for: each node receives after described update notification, carry out before described forward process, judge whether processed described update notification, if processed, abandon this update notification, if do not had, just carry out described forward process.
In a kind of alternatives of the present embodiment, described server 3011 also for: obtain described update notification and receive new data cached after, use described new data cached renewal former data cached.
In a kind of alternatives of the present embodiment, described n is 10, and described k is 3, and described the first structure is Petersen figure.
In a kind of alternatives of the present embodiment, described system also comprises: distributed cache system 303; Described central control unit 302 also comprises storage control module 3023 and access control module 3024, wherein:
Described storage control module 3023 for, by data cached the storing in described distributed cache system 303 of application, and, frequency of utilization and/or size are met in the data cached local cache system 3012 that stores the server in described cluster into of predetermined condition;
Described access control module 3024 for, receive after the same data cached request of access of one or more application access, if there is not the data cached of described one or more application request access in described distributed cache system 303, after obtaining described one or more application request access data cached to data source, the data cached of described one or more application request access sent to described one or more application, and by data cached being updated in described distributed cache system 303 of described one or more application request access.
In a kind of alternatives of the present embodiment, described predetermined condition comprises:
Frequency of utilization is greater than first threshold and the spatial cache that takies is less than Second Threshold;
And/or frequency of utilization is greater than the 4th threshold value and the spatial cache that takies is greater than the 3rd threshold value.
In a kind of alternatives of the present embodiment, described access control module 3024 also for, receive after the cache data access request of application, if the data cached of described application request access is arranged in the local cache system 3012 of the server of initiating described cache data access request or is positioned at described distributed cache system 303, from described local cache system 3012 or described distributed cache system 303, obtain the data cached of described application request access and send to described application.
In a kind of alternatives of the present embodiment, described access control module 3024 obtains the data cached of described application request access and sends to described application to comprise from described local cache system 3012 or described distributed cache system 303:
In the time that described application request access data cached meets described predetermined condition and initiates to exist in the local cache system of server of described cache data access request described application request access data cached, from initiate the local cache system 3012 of server of described cache data access request, obtain the data cached of described application request access, send to described application;
When meeting described predetermined condition and initiate, described application request access data cached there is not the data cached of described application request access in the local cache system 3012 of server of described cache data access request, or, when described application request access data cached do not meet described predetermined condition, judge and in described distributed cache system 303, whether have the data cached of described application request access, if existed, from described distributed cache system 303, obtain the data cached of described application request access, send to described application.
In a kind of alternatives of the present embodiment, on each described server 3011, be furnished with described central control unit 302, and the intercommunication of described central control unit 302 on different server;
Described access control module 3024 also for, when the access control module 3024 of the central control unit 302 on multiple servers receives respectively after the same data cached request of access of application access on the server at its place, if there is not the data cached of described multiple application request access in described distributed cache system 303, from obtaining described multiple application request access data cached, described data source sent to the application on the server at described its place by the access control module 3024 of one of them central control unit 302, and notify described update module 3022 by data cached being updated in described distributed cache system 303 of described multiple application request access, and, by the data cached access control module that remains other central control unit in described multiple central control unit that sends to of described multiple application request access, sent to the application on the server at place separately by the access control module of described other central control unit.
One of ordinary skill in the art will appreciate that all or some of the steps of the above method may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc. Alternatively, all or some of the steps of the above embodiments may be implemented with one or more integrated circuits. Correspondingly, each module/unit in the above embodiments may be implemented in the form of hardware or in the form of a software functional module. The present application is not limited to any particular combination of hardware and software.

Claims (20)

1. A cache control method, characterized by comprising:
forming a cluster comprising a plurality of servers into a hierarchical network structure;
wherein each layer of the hierarchical network structure is formed by connecting a plurality of nodes in the form of a set first structure; each first structure comprising a plurality of nodes forms one node of the network structure one layer above; the bottommost layer of the hierarchical network structure comprises one or more first structures, and each server in the cluster serves as a node in a first structure of the bottommost layer of the hierarchical network structure;
the first structure is formed by every n nodes according to a predetermined rule, the predetermined rule comprising: each of the n nodes is connected with k nodes among the n nodes, where 0 < k < n and n > 1;
when it is determined that cached data has been updated, sending an update notification to the first structure of the top layer of the hierarchical network structure, and forwarding it layer by layer from the first structure of the top layer down to the first structures of the bottommost layer, until each node of the first structures of the bottommost layer, that is, each server in the cluster, obtains the update notification, so that each server in the cluster updates its cached data.
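To make the layering of claim 1 concrete, the sketch below groups servers into first structures of at most n nodes, treats each group as one node of the layer above, and repeats until a single top-layer structure remains; the intra-group wiring (each node connected to k of the n nodes) is omitted here and illustrated for the Petersen case after claim 5. This is a simplified reading of the claim, not the application's prescribed grouping rule.

```python
def build_hierarchy(servers, n):
    """Return a list of layers, top layer first; each layer is a list of
    groups ("first structures") of at most n nodes."""
    if not servers:
        return []
    layers = []
    nodes = list(servers)                 # bottom layer: one node per server
    while True:
        groups = [nodes[i:i + n] for i in range(0, len(nodes), n)]
        layers.append(groups)
        if len(groups) == 1:              # a single first structure: top layer
            break
        nodes = groups                    # each group becomes one node above
    return list(reversed(layers))         # top layer first, bottom layer last
```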
2. The method of claim 1, characterized in that sending the update notification to the first structure of the top layer of the hierarchical network structure and forwarding it layer by layer from the first structure of the top layer down to the first structures of the bottommost layer until each node of the first structures of the bottommost layer obtains the update notification comprises:
designating a master node in each first structure;
sending the update notification to the master node of the first structure of the top layer;
after receiving the update notification, each node performs the following forwarding process:
sending the update notification to its adjacent nodes in the same first structure; and, if the node is not a node of the bottommost layer, sending the update notification to the master node of the corresponding first structure in the next lower layer of the hierarchical network structure.
3. The method of claim 2, characterized in that the method further comprises: after receiving the update notification and before performing the forwarding process, each node determines whether it has already processed the update notification; if so, it discards the update notification; if not, it performs the forwarding process.
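A sketch of the forwarding process of claims 2 and 3, assuming each node object exposes a `neighbors` list (its k adjacent nodes in the same first structure), a `child_master` reference (the master node of the first structure it represents one layer below, or `None` for a bottommost-layer node), a `seen` set for the duplicate check, and an `apply_update` hook; all of these names are assumptions made for illustration, and direct recursion stands in for what would be network messages.

```python
def on_update_notification(node, notification_id):
    """Forwarding process of claims 2-3, run when a node receives the notification."""
    if notification_id in node.seen:          # already processed: discard (claim 3)
        return
    node.seen.add(notification_id)
    node.apply_update(notification_id)        # assumed no-op except on bottommost-layer servers
    for neighbor in node.neighbors:           # forward within the same first structure (claim 2)
        on_update_notification(neighbor, notification_id)
    if node.child_master is not None:         # non-bottom node: push to the layer below
        on_update_notification(node.child_master, notification_id)
```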
4. The method of claim 1, characterized in that the method further comprises: after obtaining the update notification and receiving the new cached data, the server updates the original cached data.
5. The method of any one of claims 1 to 4, characterized in that n is 10, k is 3, and the first structure is a Petersen graph.
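For n = 10 and k = 3 the first structure is the Petersen graph; the standard construction below (an outer 5-cycle, an inner 5-vertex star, and five spokes) is one common way to wire it and is given purely as an illustration.

```python
def petersen_edges():
    """Edges of the Petersen graph: vertices 0-4 form the outer cycle,
    vertices 5-9 the inner star, with a spoke joining i and i + 5."""
    edges = set()
    for i in range(5):
        edges.add(frozenset((i, (i + 1) % 5)))           # outer 5-cycle
        edges.add(frozenset((5 + i, 5 + (i + 2) % 5)))   # inner pentagram
        edges.add(frozenset((i, i + 5)))                 # spoke
    return edges

assert len(petersen_edges()) == 15    # 10 vertices, each of degree k = 3
```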
6. The method of any one of claims 1 to 4, characterized in that the method further comprises:
storing the cached data of applications in a distributed cache, and storing cached data whose usage frequency and/or size meets a predetermined condition in the local caches of the servers in the cluster;
after receiving access requests from one or more applications for the same cached data, if the cached data requested by the one or more applications does not exist in the distributed cache, obtaining the requested cached data from a data source, sending it to the one or more applications, and updating it into the distributed cache.
7. The method of claim 6, characterized in that the predetermined condition comprises:
the usage frequency is greater than a first threshold and the occupied cache space is less than a second threshold;
and/or the usage frequency is greater than a fourth threshold and the occupied cache space is greater than a third threshold.
8. The method of claim 6, characterized in that the method further comprises:
after receiving a cache data access request from an application, if the cached data requested by the application is located in the local cache of the server that initiated the cache data access request or is located in the distributed cache, obtaining the requested cached data from the local cache or the distributed cache and sending it to the application.
9. The method of claim 8, characterized in that obtaining the requested cached data from the local cache or the distributed cache and sending it to the application comprises:
when the cached data requested by the application meets the predetermined condition and exists in the local cache of the server that initiated the cache data access request, obtaining the requested cached data from the local cache of that server and sending it to the application;
when the cached data requested by the application meets the predetermined condition but does not exist in the local cache of the server that initiated the cache data access request, or when the cached data requested by the application does not meet the predetermined condition, determining whether the requested cached data exists in the distributed cache, and if so, obtaining the requested cached data from the distributed cache and sending it to the application.
10. The method of claim 6, characterized in that the method further comprises:
arranging a central control unit on each server, the central control units on different servers communicating with one another;
wherein, after receiving access requests from a plurality of applications for the same cached data, if the cached data requested by the plurality of applications does not exist in the distributed cache, obtaining the requested cached data from the data source, sending it to the plurality of applications, and updating it into the distributed cache comprises:
after the central control units on a plurality of servers each receive an access request for the same cached data from an application on their respective servers, if the cached data requested by the plurality of applications does not exist in the distributed cache, one of the central control units obtains the requested cached data from the data source, sends it to the application on its own server, and updates it into the distributed cache; and sends the requested cached data to the other central control units among the plurality of central control units, which send it to the applications on their respective servers.
11. A cache control system, characterized in that the system comprises a cluster and a central control unit, the central control unit comprising a configuration module and an update module, wherein:
the cluster comprises a plurality of servers, each server comprising a local cache system;
the configuration module is configured to form the plurality of servers of the cluster into a hierarchical network structure and record the hierarchical network structure, wherein: each layer of the hierarchical network structure is formed by connecting a plurality of nodes in the form of a set first structure; each first structure comprising a plurality of nodes forms one node of the network structure one layer above; the bottommost layer of the hierarchical network structure comprises one or more first structures, and each server in the cluster serves as a node in a first structure of the bottommost layer of the hierarchical network structure; the first structure is formed by every n nodes according to a predetermined rule, the predetermined rule comprising: each of the n nodes is connected with k nodes among the n nodes, where 0 < k < n and n > 1;
the update module is configured to: when it is determined that cached data has been updated, send an update notification to the first structure of the top layer of the hierarchical network structure, and forward it layer by layer from the first structure of the top layer down to the first structures of the bottommost layer, until each node of the first structures of the bottommost layer, that is, each server in the cluster, obtains the update notification, so that each server in the cluster updates its cached data.
12. The system of claim 11, characterized in that:
the configuration module is further configured to designate a master node in each first structure;
the update module sending the update notification to the first structure of the top layer of the hierarchical network structure and forwarding it layer by layer from the first structure of the top layer down to the first structures of the bottommost layer until each node of the first structures of the bottommost layer obtains the update notification comprises:
sending the update notification to the master node of the first structure of the top layer; after receiving the update notification, each node performs the following forwarding process: sending the update notification to its adjacent nodes in the same first structure, and, if the node is not a node of the bottommost layer, sending the update notification to the master node of the corresponding first structure in the next lower layer of the hierarchical network structure.
13. The system of claim 12, characterized in that the update module is further configured such that, after receiving the update notification and before performing the forwarding process, each node determines whether it has already processed the update notification; if so, it discards the update notification; if not, it performs the forwarding process.
14. The system of claim 11, characterized in that the server is further configured to: after obtaining the update notification and receiving new cached data, update the original cached data with the new cached data.
15. The system of any one of claims 11 to 14, characterized in that n is 10, k is 3, and the first structure is a Petersen graph.
16. The system of any one of claims 11 to 14, characterized in that the system further comprises a distributed cache system, and the central control unit further comprises a storage control module and an access control module, wherein:
the storage control module is configured to store the cached data of applications in the distributed cache system, and to store cached data whose usage frequency and/or size meets a predetermined condition in the local cache systems of the servers in the cluster;
the access control module is configured to: after receiving access requests from one or more applications for the same cached data, if the cached data requested by the one or more applications does not exist in the distributed cache system, obtain the requested cached data from a data source, send it to the one or more applications, and update it into the distributed cache system.
17. The system of claim 16, characterized in that the predetermined condition comprises:
the usage frequency is greater than a first threshold and the occupied cache space is less than a second threshold;
and/or the usage frequency is greater than a fourth threshold and the occupied cache space is greater than a third threshold.
18. The system of claim 16, characterized in that the access control module is further configured to: after receiving a cache data access request from an application, if the cached data requested by the application is located in the local cache system of the server that initiated the cache data access request or is located in the distributed cache system, obtain the requested cached data from the local cache system or the distributed cache system and send it to the application.
19. The system of claim 18, characterized in that the access control module obtaining the requested cached data from the local cache system or the distributed cache system and sending it to the application comprises:
when the cached data requested by the application meets the predetermined condition and exists in the local cache system of the server that initiated the cache data access request, obtaining the requested cached data from the local cache system of that server and sending it to the application;
when the cached data requested by the application meets the predetermined condition but does not exist in the local cache system of the server that initiated the cache data access request, or when the cached data requested by the application does not meet the predetermined condition, determining whether the requested cached data exists in the distributed cache system, and if so, obtaining the requested cached data from the distributed cache system and sending it to the application.
20. The system of claim 16, characterized in that:
the central control unit is arranged on each server, and the central control units on different servers communicate with one another;
the access control module is further configured such that, after the access control modules of the central control units on a plurality of servers each receive an access request for the same cached data from an application on their respective servers, if the cached data requested by the plurality of applications does not exist in the distributed cache system, the access control module of one of the central control units obtains the requested cached data from the data source, sends it to the application on its own server, notifies the update module to update the requested cached data into the distributed cache system, and sends the requested cached data to the access control modules of the other central control units among the plurality of central control units, which send it to the applications on their respective servers.
CN201310172568.7A 2013-05-10 2013-05-10 A kind of buffer control method and system Active CN104142896B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310172568.7A CN104142896B (en) 2013-05-10 2013-05-10 A kind of buffer control method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310172568.7A CN104142896B (en) 2013-05-10 2013-05-10 A kind of buffer control method and system

Publications (2)

Publication Number Publication Date
CN104142896A true CN104142896A (en) 2014-11-12
CN104142896B CN104142896B (en) 2017-05-31

Family

ID=51852075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310172568.7A Active CN104142896B (en) 2013-05-10 2013-05-10 A kind of buffer control method and system

Country Status (1)

Country Link
CN (1) CN104142896B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8103799B2 (en) * 1997-03-05 2012-01-24 At Home Bondholders' Liquidating Trust Delivering multimedia services
CN1451116A (en) * 1999-11-22 2003-10-22 阿茨达科姆公司 Distributed cache synchronization protocol
US20020099833A1 (en) * 2001-01-24 2002-07-25 Steely Simon C. Cache coherency mechanism using arbitration masks
CN101674233A (en) * 2008-09-12 2010-03-17 中国科学院声学研究所 Peterson graph-based storage network structure and data read-write method thereof

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580422A (en) * 2014-12-26 2015-04-29 赞奇科技发展有限公司 Cluster rendering node data access method based on shared cache
CN104935654A (en) * 2015-06-10 2015-09-23 华为技术有限公司 Caching method, write point client and read client in server cluster system
WO2016197666A1 (en) * 2015-06-10 2016-12-15 华为技术有限公司 Cache method, write point client and read client in server cluster system
CN104935654B (en) * 2015-06-10 2018-08-21 华为技术有限公司 Caching method, write-in point client in a kind of server cluster system and read client
CN105701219A (en) * 2016-01-14 2016-06-22 北京邮电大学 Distributed cache implementation method
CN105701219B (en) * 2016-01-14 2019-04-02 北京邮电大学 A kind of implementation method of distributed caching
CN107231395A (en) * 2016-03-25 2017-10-03 阿里巴巴集团控股有限公司 Date storage method, device and system
CN106921648A (en) * 2016-11-15 2017-07-04 阿里巴巴集团控股有限公司 Date storage method, application server and remote storage server
CN106506704A (en) * 2016-12-29 2017-03-15 北京奇艺世纪科技有限公司 A kind of buffering updating method and device
CN106790705A (en) * 2017-02-27 2017-05-31 郑州云海信息技术有限公司 A kind of Distributed Application local cache realizes system and implementation method
JP6998391B2 (en) 2017-04-13 2022-02-10 インターナショナル・ビジネス・マシーンズ・コーポレーション Scalable data center network topology for distributed switches
JP2020517141A (en) * 2017-04-13 2020-06-11 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation Scalable data center network topology for distributed switches
CN107301048B (en) * 2017-06-23 2020-09-01 北京中泰合信管理顾问有限公司 Internal control management system of application response type shared application architecture
CN107301048A (en) * 2017-06-23 2017-10-27 北京中泰合信管理顾问有限公司 Using the internal control and management system of response type sharing application framework
WO2019068246A1 (en) * 2017-10-03 2019-04-11 Huawei Technologies Co., Ltd. Controller communications in access networks
US10511524B2 (en) 2017-10-03 2019-12-17 Futurewei Technologies, Inc. Controller communications in access networks
CN108536481A (en) * 2018-02-28 2018-09-14 努比亚技术有限公司 A kind of application program launching method, mobile terminal and computer storage media
CN108446356A (en) * 2018-03-12 2018-08-24 上海哔哩哔哩科技有限公司 Data cache method, server and data buffering system
CN108446356B (en) * 2018-03-12 2023-08-29 上海哔哩哔哩科技有限公司 Data caching method, server and data caching system
WO2020257981A1 (en) * 2019-06-24 2020-12-30 Continental Automotive Gmbh Process for software and function update of hierarchic vehicle systems
CN112148202A (en) * 2019-06-26 2020-12-29 杭州海康威视数字技术股份有限公司 Training sample reading method and device
CN112148202B (en) * 2019-06-26 2023-05-26 杭州海康威视数字技术股份有限公司 Training sample reading method and device
CN113392126A (en) * 2021-08-17 2021-09-14 北京易鲸捷信息技术有限公司 Execution plan caching and reading method based on distributed database
CN113392126B (en) * 2021-08-17 2021-11-02 北京易鲸捷信息技术有限公司 Execution plan caching and reading method based on distributed database

Also Published As

Publication number Publication date
CN104142896B (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN104142896A (en) Cache control method and system
KR101429555B1 (en) System and method for providing high availability data
US11604811B2 (en) Systems and methods for adaptive data replication
CN103905537A (en) System for managing industry real-time data storage in distributed environment
CN103138912B (en) Method of data synchronization and system
CN109639773B (en) Dynamically constructed distributed data cluster control system and method thereof
CN113268472B (en) Distributed data storage system and method
US20090083210A1 (en) Exchange of syncronization data and metadata
CN103312624A (en) Message queue service system and method
CN115599747B (en) Metadata synchronization method, system and equipment of distributed storage system
CN112804332B (en) Message processing system, method, device, equipment and computer readable storage medium
CN111190547A (en) Distributed container mirror image storage and distribution system and method
CN110837409A (en) Method and system for executing task regularly
CN111338806A (en) Service control method and device
CN105069152A (en) Data processing method and apparatus
CN114385561A (en) File management method and device and HDFS system
CN102624932A (en) Index-based remote cloud data synchronizing method
CN107547605B (en) message reading and writing method based on node queue and node equipment
WO2023065900A1 (en) Device state message processing method and message distribution system
CN113254511B (en) Distributed vector retrieval system and method
CN111600958B (en) Service discovery system, service data management method, server, and storage medium
CN111767345B (en) Modeling data synchronization method, modeling data synchronization device, computer equipment and readable storage medium
CN114625566A (en) Data disaster tolerance method and device, electronic equipment and storage medium
KR101681651B1 (en) System and method for managing database
CN117742571A (en) Data processing method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20211109

Address after: Room 554, floor 5, building 3, No. 969, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Patentee after: TAOBAO (CHINA) SOFTWARE CO.,LTD.

Address before: P.O. Box 847, 4th Floor, Tower One, Capital Building, Grand Cayman, Cayman Islands (British)

Patentee before: ALIBABA GROUP HOLDING Ltd.