CN104142896B - Cache control method and system - Google Patents

Cache control method and system

Info

Publication number
CN104142896B
CN104142896B (application CN201310172568.7A)
Authority
CN
China
Prior art keywords
cached data
access
application
node
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310172568.7A
Other languages
Chinese (zh)
Other versions
CN104142896A (en)
Inventor
杨琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taobao China Software Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201310172568.7A
Publication of CN104142896A
Application granted
Publication of CN104142896B
Status: Active
Anticipated expiration


Abstract

A cache control method, comprising: organizing a cluster comprising multiple servers into a hierarchical network structure; in this structure, each layer is formed by connecting multiple nodes according to a preset first structure; each first structure constitutes one node in the network structure of the layer above; the bottom layer comprises one or more first structures, and each server in the cluster serves as one node in a first structure of the bottom layer; a first structure is formed from every n nodes according to a predetermined rule: each of the n nodes is connected to k of the n nodes, where 0 < k < n and n > 1; when it is determined that cached data has been updated, an update notification is sent to the top-layer first structure and forwarded layer by layer down to the bottom-layer first structures, until every node of the bottom-layer first structures obtains the update notification, so that every server in the cluster updates its cached data. The application also provides a cache control system. The application ensures that the local caches of the servers in a cluster are updated promptly and kept synchronized.

Description

Cache control method and system
Technical field
The present application relates to the field of computer technology, and in particular to a cache control method and system.
Background art
In the prior art, two kinds of caching approaches generally exist:
One is the local cache: data is stored in the memory of the local server, and an access traverses only the system bus, so it is fast. Examples include the open-source EhCache, Google MemCached, and OSCache. A local cache is deployed together with the application and occupies additional memory on the application server.
The other is the distributed cache: the data to be cached is stored, via the network, on distributed cache servers. A cluster of cache servers can be built, so the cache space is in principle sufficiently large, and no application-server memory is occupied. However, a distributed cache requires interaction over the network, which consumes network bandwidth, and it incurs the computational cost of serializing the data, so its performance is poorer.
For a local cache, when data is modified, a common implementation is to notify, via messages, all servers in the cluster to invalidate the related data in their caches. When the related data is needed again, each server fetches it from the data source and puts it back into the cache. This approach cannot achieve prompt updating of cached data or guarantee transactional synchronization across the cluster.
Summary of the invention
The technical problem to be solved by the present application is to provide a cache control method and system that achieve prompt updating and transactional synchronization of cached data.
To solve the above problem, the present application provides a cache control method, comprising:
organizing a cluster comprising multiple servers into a hierarchical network structure;
wherein each layer of the hierarchical network structure is formed by connecting multiple nodes according to a preset first structure; each first structure comprising multiple nodes constitutes one node in the network structure of the layer above; the bottom layer of the hierarchical network structure comprises one or more first structures, and each server in the cluster serves as one node in a first structure of the bottom layer;
the first structure is formed from every n nodes according to a predetermined rule, the predetermined rule comprising: each of the n nodes is connected to k of the n nodes, where 0 < k < n and n > 1;
when it is determined that cached data has been updated, sending an update notification to the top-layer first structure of the hierarchical network structure, and forwarding it layer by layer from the top-layer first structure down to the bottom-layer first structures, until every node of the bottom-layer first structures obtains the update notification, that is, every server in the cluster obtains the update notification, so that every server in the cluster updates its cached data.
The above method may further have the following feature: sending the update notification to the top-layer first structure of the hierarchical network structure, and forwarding it layer by layer from the top-layer first structure down to the bottom-layer first structures, until every node of the bottom-layer first structures obtains the update notification, comprises:
setting one master node in each first structure;
sending the update notification to the master node of the top-layer first structure;
after each node receives the update notification, performing the following forwarding process:
sending the update notification to the adjacent nodes within the same first structure; and, if the node is not a bottom-layer node, sending the update notification to the master node of the corresponding first structure in the next layer down of the hierarchical network structure.
The above method may further have the following feature: after each node receives the update notification and before performing the forwarding process, it determines whether it has already processed this update notification; if so, it discards the update notification, and if not, it performs the forwarding process.
The above method may further have the following feature: after obtaining the update notification and receiving the new cached data, the server updates the original cached data.
The above method may further have the following feature: n is 10, k is 3, and the first structure is a Petersen graph.
The above method may further have the following feature: the method further comprises:
storing the cached data of applications in a distributed cache, and storing cached data whose usage frequency and/or size satisfies a predetermined condition in the local caches of the servers in the cluster;
after access requests from one or more applications for the same cached data are received, if the cached data requested by the one or more applications does not exist in the distributed cache, obtaining the requested cached data from the data source, sending it to the one or more applications, and updating it into the distributed cache.
The above method may further have the following feature: the predetermined condition comprises:
the usage frequency is greater than a first threshold and the occupied cache space is less than a second threshold;
and/or the usage frequency is greater than a fourth threshold and the occupied cache space is greater than a third threshold.
The above method may further have the following feature: the method further comprises:
after a cached-data access request from an application is received, if the requested cached data is located in the local cache of the server that initiated the request or in the distributed cache, obtaining the requested cached data from that local cache or from the distributed cache and sending it to the application.
The above method may further have the following feature: obtaining the requested cached data from the local cache or the distributed cache and sending it to the application comprises:
when the requested cached data satisfies the predetermined condition and the local cache of the server that initiated the request contains it, obtaining the requested cached data from that local cache and sending it to the application;
when the requested cached data satisfies the predetermined condition but is absent from the local cache of the server that initiated the request, or when the requested cached data does not satisfy the predetermined condition, determining whether the distributed cache contains the requested cached data, and if so, obtaining it from the distributed cache and sending it to the application.
The above method may further have the following feature: the method further comprises:
deploying a central control device on each server, the central control devices on different servers communicating with one another;
wherein, after access requests from multiple applications for the same cached data are received, if the cached data requested by the multiple applications does not exist in the distributed cache, then obtaining the requested cached data from the data source, sending it to the multiple applications, and updating it into the distributed cache comprises:
after the central control devices on multiple servers each receive an access request from an application on their own server for the same cached data, if the requested cached data does not exist in the distributed cache, one of the central control devices obtains the requested cached data from the data source, sends it to the application on its own server, and updates it into the distributed cache; it also sends the requested cached data to the other central control devices, which send it to the applications on their respective servers.
The present application also provides a cache control system, comprising a cluster and a central control device, the central control device comprising a configuration module and an update module, wherein:
the cluster comprises multiple servers, each server comprising a local cache system;
the configuration module is configured to organize the multiple servers of the cluster into a hierarchical network structure and to record that structure, wherein each layer of the hierarchical network structure is formed by connecting multiple nodes according to a preset first structure; each first structure comprising multiple nodes constitutes one node in the network structure of the layer above; the bottom layer of the hierarchical network structure comprises one or more first structures, and each server in the cluster serves as one node in a first structure of the bottom layer; the first structure is formed from every n nodes according to a predetermined rule, the predetermined rule comprising: each of the n nodes is connected to k of the n nodes, where 0 < k < n and n > 1;
the update module is configured to: when it is determined that cached data has been updated, send an update notification to the top-layer first structure of the hierarchical network structure and forward it layer by layer from the top-layer first structure down to the bottom-layer first structures, until every node of the bottom-layer first structures obtains the update notification, that is, every server in the cluster obtains the update notification, so that every server in the cluster updates its cached data.
The above system may further have the following feature: the configuration module is further configured to set one master node in each first structure;
the update module sending the update notification to the top-layer first structure of the hierarchical network structure, and forwarding it layer by layer from the top-layer first structure down to the bottom-layer first structures, until every node of the bottom-layer first structures obtains the update notification, comprises:
sending the update notification to the master node of the top-layer first structure; after each node receives the update notification, performing the following forwarding process: sending the update notification to the adjacent nodes within the same first structure, and, if the node is not a bottom-layer node, sending the update notification to the master node of the corresponding first structure in the next layer down of the hierarchical network structure.
The above system may further have the following feature: the update module is further configured so that, after each node receives the update notification and before the forwarding process is performed, it determines whether it has already processed this update notification; if so, the update notification is discarded, and if not, the forwarding process is performed.
The above system may further have the following feature: the server is further configured to: after obtaining the update notification and receiving the new cached data, update the original cached data with the new cached data.
The above system may further have the following feature: n is 10, k is 3, and the first structure is a Petersen graph.
The above system may further have the following feature: the system further comprises a distributed cache system, and the central control device further comprises a storage control module and an access control module, wherein:
the storage control module is configured to store the cached data of applications in the distributed cache system, and to store cached data whose usage frequency and/or size satisfies a predetermined condition in the local cache systems of the servers in the cluster;
the access control module is configured to: after access requests from one or more applications for the same cached data are received, if the cached data requested by the one or more applications does not exist in the distributed cache system, obtain the requested cached data from the data source, send it to the one or more applications, and update it into the distributed cache system.
The above system may further have the following feature: the predetermined condition comprises:
the usage frequency is greater than a first threshold and the occupied cache space is less than a second threshold;
and/or the usage frequency is greater than a fourth threshold and the occupied cache space is greater than a third threshold.
The above system may further have the following feature: the access control module is further configured to: after a cached-data access request from an application is received, if the requested cached data is located in the local cache system of the server that initiated the request or in the distributed cache system, obtain the requested cached data from that local cache system or from the distributed cache system and send it to the application.
The above system may further have the following feature: the access control module obtaining the requested cached data from the local cache system or the distributed cache system and sending it to the application comprises:
when the requested cached data satisfies the predetermined condition and the local cache system of the server that initiated the request contains it, obtaining the requested cached data from that local cache system and sending it to the application;
when the requested cached data satisfies the predetermined condition but is absent from the local cache system of the server that initiated the request, or when the requested cached data does not satisfy the predetermined condition, determining whether the distributed cache system contains the requested cached data, and if so, obtaining it from the distributed cache system and sending it to the application.
The above system may further have the following feature: a central control device is deployed on each server, and the central control devices on different servers communicate with one another;
the access control module is further configured so that, after the access control modules of the central control devices on multiple servers each receive an access request from an application on their own server for the same cached data, if the requested cached data does not exist in the distributed cache system, the access control module of one of the central control devices obtains the requested cached data from the data source and sends it to the application on its own server; it notifies the update module to update the requested cached data into the distributed cache system, and it sends the requested cached data to the access control modules of the other central control devices, which send it to the applications on their respective servers.
The present application has the following advantages:
1. Using a local cache and a distributed cache at the same time improves system performance while avoiding excessive local memory usage.
2. Besides being stored in the local cache, data (in particular large data) can also be stored on the local disk, avoiding excessive local memory usage; in addition, large data can be obtained directly from the local disk, avoiding the cost of transmitting and serializing large objects over the network.
3. Organizing multiple nodes into first structures for update notification achieves local cache synchronization in a peer-to-peer cluster, ensuring that the local caches of the servers in the cluster are updated promptly and kept synchronized. Compared with broadcast-style update notification in the prior art, the update notifications in the present application are redundant (taking the Petersen graph as an example, each node receives the notification 3 times), which avoids the problem that a single notification fails to reach some application server. Furthermore, with broadcast-style notification, the single application server that initiates the update must interact with every other server in the cluster; the interactions are too numerous and the performance requirements on that single server are high. In the embodiments of the present application, a single server only needs to receive three update notifications and send three update notifications, so the performance requirement on a single server is low.
4. Transactional synchronization is controlled through the two-phase commit strategy of the central control devices.
Of course, a product implementing the present application does not necessarily need to achieve all of the above advantages at the same time.
Brief description of the drawings
Fig. 1 is a schematic diagram of a cache control system according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a Petersen graph according to an embodiment of the present application;
Fig. 3 is a block diagram of the cache control system of the present application.
Detailed description of the embodiments
To make the purpose, technical solution, and advantages of the present application clearer, the embodiments of the present application are described in detail below with reference to the accompanying drawings. It should be noted that, where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another.
In addition, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one herein.
In the present application, cached data is managed centrally by central control devices: when an application needs to access cached data, it sends an access request to a central control device, which returns the cached data to the application. Furthermore, by organizing the servers of a cluster into a hierarchical network structure, local cache synchronization is achieved in a peer-to-peer cluster, ensuring that the local caches of the servers in the cluster are updated promptly and kept synchronized.
An embodiment of the present application provides a cache control method, comprising:
organizing a cluster comprising multiple servers into a hierarchical network structure;
wherein each layer of the hierarchical network structure is formed by connecting multiple nodes according to a preset first structure; each first structure comprising multiple nodes constitutes one node in the network structure of the layer above; the bottom layer of the hierarchical network structure comprises one or more first structures, and each server in the cluster serves as one node in a first structure of the bottom layer;
the first structure is formed from every n nodes according to a predetermined rule, the predetermined rule comprising: each of the n nodes is connected to k of the n nodes, where 0 < k < n and n > 1;
when it is determined that cached data has been updated, sending an update notification to the top-layer first structure of the hierarchical network structure, and forwarding it layer by layer from the top-layer first structure down to the bottom-layer first structures, until every node of the bottom-layer first structures obtains the update notification, that is, every server in the cluster obtains the update notification, so that every server in the cluster updates its cached data.
In an alternative of this embodiment, one configuration process of the hierarchical network structure is as follows:
taking each application server in the cluster as a node, connecting every n nodes according to the predetermined rule to generate the bottom-layer first structures of the hierarchical network structure, one first structure per n nodes;
if the bottom layer has only 1 first structure, the bottom layer of the hierarchical network structure is also the top layer, and configuration ends;
if the bottom layer has more than 1 first structure, taking each bottom-layer first structure as one node in the current layer (i.e., the layer above the bottom layer), and connecting every n nodes of the current-layer network structure according to the predetermined rule to generate new first structures;
continuing to determine whether the number of first structures in the current layer is greater than 1; if there is only 1, configuration ends; if there is more than 1, taking each first structure in the current layer as one node in the layer above the current layer, connecting those nodes according to the predetermined rule, and generating new first structures; this step is repeated until only 1 first structure is generated in the top layer of the hierarchical network structure. When generating a first structure, if fewer than n nodes remain, the missing nodes are substituted with empty nodes so that a first structure can still be generated.
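The configuration process above can be sketched as follows. This is a minimal illustration in Python, not the patent's implementation: the node labels for upper-layer nodes and the use of None for empty nodes are assumptions made for clarity.

```python
def build_hierarchy(servers, n=10):
    """Group nodes into 'first structures' of size n, layer by layer,
    until a single top-layer structure remains. If fewer than n nodes
    remain when forming a structure, the missing slots are padded with
    None (empty nodes). Returns a list of layers, bottom first; each
    layer is a list of structures (lists of n node labels)."""
    layers = []
    nodes = list(servers)
    while True:
        structures = []
        for i in range(0, len(nodes), n):
            group = nodes[i:i + n]
            group += [None] * (n - len(group))  # pad with empty nodes
            structures.append(group)
        layers.append(structures)
        if len(structures) == 1:
            break  # only one structure in this layer: it is the top layer
        # each structure of this layer becomes one node of the layer above
        nodes = [("struct", len(layers) - 1, j) for j in range(len(structures))]
    return layers
```

For example, a cluster of 25 servers with n = 10 yields a bottom layer of three structures (the third padded with five empty nodes) and a top layer of one structure whose first three nodes stand for the bottom-layer structures.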
In an alternative of this embodiment, sending the update notification to the top-layer first structure of the hierarchical network structure, and forwarding it layer by layer from the top-layer first structure down to the bottom-layer first structures, until every node of the bottom-layer first structures obtains the update notification, comprises:
one master node may be set in each first structure;
the update notification is sent to the master node of the top-layer first structure;
after each node receives the update notification, the following forwarding process is performed:
the update notification is sent to the adjacent nodes within the same first structure; and, if the node is not a bottom-layer node, the update notification is sent to the master node of the corresponding first structure in the next layer down of the hierarchical network structure.
The method may further comprise: after each node receives the update notification and before performing the forwarding process, determining whether it has already processed this update notification; if so, the update notification is discarded, and if not, the forwarding process is performed.
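The forwarding process within one first structure, with duplicate discarding, can be simulated as follows. This is an illustrative Python sketch of the message flow, not the patent's wire protocol; using a pentagon first structure (n = 5, k = 2), every node ends up receiving the notification exactly k times.

```python
from collections import deque

def flood(adj, master):
    """Simulate update-notification forwarding inside one first structure:
    the master node receives the notification first; each node that
    processes the notification for the first time forwards it to all its
    adjacent nodes, while nodes that have already processed it discard
    duplicates. Returns (set of nodes reached, notifications sent)."""
    processed = {master}
    sent = 0
    queue = deque([master])
    while queue:
        node = queue.popleft()
        for neighbour in adj[node]:
            sent += 1                       # one notification per edge use
            if neighbour not in processed:  # duplicate? discard, don't forward
                processed.add(neighbour)
                queue.append(neighbour)
    return processed, sent
```

With the pentagon, all 5 nodes are reached and 10 notifications are sent, so each node receives 2; the redundancy is what tolerates a lost notification.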
In an alternative of this embodiment, the method further comprises: after obtaining the update notification and receiving the new cached data, the server updates the original cached data.
In an alternative of this embodiment, after the server obtains the update notification and receives the new cached data, the server is notified that the change takes effect; after the server receives the change-takes-effect instruction, it updates the original cached data with the new cached data.
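The change-takes-effect step above amounts to a two-step update: the new cached data is first staged, and only replaces the original data when the separate commit instruction arrives. A minimal sketch, with class and method names that are assumptions for illustration:

```python
class CacheNode:
    """A server's local cache entry under the two-step update: new data
    is staged on receipt, and replaces the old data only when the
    'change takes effect' instruction (commit) is received."""
    def __init__(self, data):
        self.data = data
        self.staged = None
    def receive_update(self, new_data):
        self.staged = new_data       # step 1: stage the new cached data
    def commit(self):
        if self.staged is not None:
            self.data = self.staged  # step 2: the change takes effect
            self.staged = None
```

Until commit is called, readers still see the original data, which is what allows all servers in the cluster to switch over together.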
In an alternative of this embodiment, the first structure is a Petersen graph, with n being 10 and k being 3. Of course, n and k can also take other values, and the first structure can also be another structure; for example, with n = 5 and k = 2 the first structure is a pentagon, and so on.
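For reference, the Petersen graph used here as the first structure (n = 10, k = 3) can be constructed as follows; the vertex numbering is an assumption for illustration.

```python
def petersen():
    """Adjacency sets of the Petersen graph: an outer 5-cycle (nodes
    0-4), an inner pentagram (nodes 5-9, each joined to the inner node
    two steps away), and spokes joining outer node i to inner node
    i + 5. Every node ends up connected to k = 3 of the n = 10 nodes."""
    adj = {v: set() for v in range(10)}
    def link(a, b):
        adj[a].add(b)
        adj[b].add(a)
    for i in range(5):
        link(i, (i + 1) % 5)          # outer cycle
        link(5 + i, 5 + (i + 2) % 5)  # inner pentagram
        link(i, 5 + i)                # spoke
    return adj
```

Each node having degree 3 is what gives every server exactly three incoming and three outgoing update notifications, as noted in the advantages above.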
In an alternative of this embodiment, the method further comprises:
storing the cached data of applications in a distributed cache, and storing cached data whose usage frequency and/or size satisfies a predetermined condition in the local caches of the servers in the cluster;
after access requests from one or more applications for the same cached data are received, if the cached data requested by the one or more applications does not exist in the distributed cache, obtaining the requested cached data from the data source, sending it to the one or more applications, and updating it into the distributed cache.
The predetermined condition can be set as needed. In an alternative of this embodiment, the predetermined condition comprises:
the usage frequency is greater than a first threshold and the occupied cache space is less than a second threshold;
and/or the usage frequency is greater than a fourth threshold and the occupied cache space is greater than a third threshold.
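The predetermined condition above can be expressed as a simple predicate. The threshold names follow the text; concrete values, and the pairing of frequency thresholds with size thresholds, are deployment choices and are illustrative here.

```python
def satisfies_condition(freq, size, t1, t2, t3, t4):
    """Predetermined condition for keeping data in a server's local
    cache: either small, frequently used data (frequency above the
    first threshold t1 and occupied cache space below the second
    threshold t2), or large, frequently used data (frequency above
    the fourth threshold t4 and occupied space above the third
    threshold t3)."""
    small_and_hot = freq > t1 and size < t2
    large_and_hot = freq > t4 and size > t3
    return small_and_hot or large_and_hot
```

The two branches correspond to the two sections of the local cache described further below: small hot entries stay in memory, while large hot entries qualify for disk-backed local storage.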
In an alternative of this embodiment, the method further comprises:
after a cached-data access request from an application is received, if the requested cached data is located in the local cache of the server that initiated the request or in the distributed cache, obtaining the requested cached data from that local cache or from the distributed cache and sending it to the application.
In an alternative of this embodiment, obtaining the requested cached data from the local cache or the distributed cache and sending it to the application comprises:
when the requested cached data satisfies the predetermined condition and the local cache of the server that initiated the request contains it, obtaining the requested cached data from that local cache and sending it to the application;
when the requested cached data satisfies the predetermined condition but is absent from the local cache of the server that initiated the request, or when the requested cached data does not satisfy the predetermined condition, determining whether the distributed cache contains the requested cached data, and if so, obtaining it from the distributed cache and sending it to the application.
In an alternative of this embodiment, the method further comprises:
deploying a central control device on each server, the central control devices on different servers communicating with one another;
wherein, after access requests from multiple applications for the same cached data are received, if the cached data requested by the multiple applications does not exist in the distributed cache, then obtaining the requested cached data from the data source, sending it to the multiple applications, and updating it into the distributed cache comprises:
after the central control devices on multiple servers each receive an access request from an application on their own server for the same cached data, if the requested cached data does not exist in the distributed cache, one of the central control devices obtains the requested cached data from the data source, sends it to the application on its own server, and updates it into the distributed cache; it also sends the requested cached data to the other central control devices among the multiple central control devices, and those other central control devices send it to the applications on their respective servers.
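The coordination just described can be sketched as follows. This is a minimal Python illustration, not the patent's design: the function name, the dict-based caches and data source, and choosing the first device in the list as the one that fetches are all assumptions.

```python
def serve_same_key(devices, key, distributed_cache, data_source):
    """`devices` are the central control devices that each received a
    request for the same cached data `key` from an application on their
    own server. On a distributed-cache miss, only one device fetches
    from the data source; it updates the distributed cache and forwards
    the value to the other devices. Returns {device: value}, i.e. what
    each device hands to its application."""
    results = {}
    if key not in distributed_cache:
        fetcher = devices[0]                 # a single device does the fetch
        value = data_source[key]
        results[fetcher] = value             # serves its own application
        distributed_cache[key] = value       # updates the distributed cache
        for other in devices[1:]:
            results[other] = value           # forwarded to the other devices
    else:
        for d in devices:
            results[d] = distributed_cache[key]
    return results
```

The point of the design is that a burst of identical requests causes exactly one hit on the data source, with the fetched value fanned out peer-to-peer.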
In an alternative of this embodiment, storing the data satisfying the predetermined condition in the local cache of the application server of the application comprises:
storing data satisfying a first sub-condition in a first section of the local cache of the application server of the application;
storing data satisfying a second sub-condition on the application server of the application, or storing it in the form of files, and storing an index table for the data satisfying the second sub-condition in a second section of the local cache of the application server of the application, the index table indicating the storage locations of the data satisfying the second sub-condition.
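The two-section local cache above can be sketched as follows. This is an illustrative Python sketch: the size cutoff as the sub-condition, the JSON file layout, and the class name are assumptions, not specified by the patent.

```python
import json
import os
import tempfile

class LocalCache:
    """Two-section local cache: a first in-memory section for small
    entries, and a second section holding only an index table that
    records where large entries are stored as files on local disk."""
    def __init__(self, directory, size_limit=1024):
        self.memory = {}             # first section: small data in memory
        self.index = {}              # second section: key -> file location
        self.directory = directory
        self.size_limit = size_limit
    def put(self, key, value):
        blob = json.dumps(value)
        if len(blob) <= self.size_limit:
            self.memory[key] = value
        else:
            path = os.path.join(self.directory, f"{key}.json")
            with open(path, "w") as f:
                f.write(blob)
            self.index[key] = path   # index table records the location
    def get(self, key):
        if key in self.memory:
            return self.memory[key]
        if key in self.index:
            with open(self.index[key]) as f:
                return json.load(f)
        return None
```

Keeping only the index table in memory is what lets large data live locally (avoiding network transfer and serialization of large objects) without occupying excessive local memory.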
The CCU obtains the cached data requested by an application from the local cache or the distributed cache and returns it to the application, handling two cases separately according to whether the requested data meets the predetermined condition. Because cached data meeting the predetermined condition is also stored in the local cache, a request for such data is served by first searching the local cache; if the data is not there, the distributed cache is searched; if the data is not in the distributed cache either, it is fetched from the data source. A request for cached data that does not meet the predetermined condition searches the distributed cache directly, and falls back to the data source if the data is not there. Specifically:
1) When the cached data requested by the application meets the predetermined condition, determine whether it exists in the local cache of the application server that initiated the access request; if so, obtain the requested cached data from that local cache and send it to the application. If it does not exist in the local cache of the application server that initiated the request, determine whether it exists in the distributed cache; if so, obtain the requested cached data from the distributed cache and send it to the application.

2) When the cached data requested by the application does not meet the predetermined condition, determine whether it exists in the distributed cache; if so, obtain the requested cached data from the distributed cache and send it to the application.
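The two lookup paths above can be sketched as follows. This is a minimal illustration under stated assumptions; the names `local_cache`, `distributed_cache`, and `data_source` are placeholders, not identifiers from the patent:

```python
def get_cached_data(key, meets_condition, local_cache, distributed_cache, data_source):
    """Return the cached value for `key`, searching in the order described
    above: local -> distributed -> data source when the data meets the
    predetermined condition, distributed -> data source otherwise."""
    if meets_condition:
        value = local_cache.get(key)
        if value is not None:
            return value
    value = distributed_cache.get(key)
    if value is not None:
        return value
    # Miss everywhere: fetch from the true data source and write back
    # into the distributed cache for subsequent requests.
    value = data_source[key]
    distributed_cache[key] = value
    return value
```

With plain dictionaries standing in for the caches, a hit in the local cache short-circuits the lookup, and a miss in both caches populates the distributed cache from the data source.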
In the above scheme, cached data is stored in a combination of the distributed cache and the local caches. Compared with using a single cache alone, this improves system performance while avoiding excessive use of local memory. In addition, cached data is managed and fetched uniformly by the CCU, which prevents multiple servers from accessing the data source simultaneously when they all need the same cached data, a situation that can cause system congestion or even a cache avalanche.
The application is further illustrated below with a specific embodiment.
Besides storing the full set of cached data in the distributed cache, the data is classified by size and frequency of use, and part of it is also stored in the local cache; at run time an application obtains this part of the data preferentially from the local cache.

In this embodiment, the data to be cached is divided into the following four classes:
Class A: occupies cache space greater than a fifth threshold, but is used less often than a sixth threshold within a set period. Example: commodity data.

Class B: frequency of use greater than a first threshold, occupied cache space smaller than a second threshold. Example: runtime parameter data.

Class C: occupies cache space greater than a third threshold, frequency of use greater than a fourth threshold.

Class D: occupies cache space greater than the third threshold, frequency of use less than or equal to the fourth threshold.
In this embodiment, class B and class C data meet the aforementioned predetermined condition, while class A and class D data do not. Therefore, class A and class D data are stored only in the distributed cache, whereas class B and class C data are stored in the distributed cache and additionally copied into the local caches.
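A sketch of the four-class rule, assuming one plausible precedence among the overlapping conditions (the patent does not fix an evaluation order, and the threshold names `t1`..`t6` are placeholders for the first through sixth thresholds):

```python
def classify(size, frequency, thresholds):
    """Classify a cache entry by occupied space and frequency of use.
    `thresholds` maps 't1'..'t6' to the six thresholds in the text."""
    if frequency > thresholds["t1"] and size < thresholds["t2"]:
        return "B"          # small, hot: also kept in the local cache
    if size > thresholds["t3"]:
        if frequency > thresholds["t4"]:
            return "C"      # large, hot: index in local cache, body elsewhere
        return "D"          # large, cold: distributed cache only
    if size > thresholds["t5"] and frequency < thresholds["t6"]:
        return "A"          # e.g. commodity data: distributed cache only
    return "D"              # default: distributed cache only
```

Only entries classified as "B" or "C" meet the predetermined condition and are mirrored into the local cache.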
The cache control system of this embodiment is based on the architecture shown in Fig. 2, which includes: a central control unit (CCU) 101, a distributed cache 102, an application-A cluster 103, a data source (Datastore) 104, and an application B 105. The CCU 101 is a logical entity; it is drawn separately here for convenience of illustration, but physically the CCU 101 can be deployed on each application server in the application-A cluster 103. Specifically:
CCU 101: manages the local caches in the cluster environment, manages cached data shared between different applications, and synchronizes the class-B and class-C data between the local cache of an application and the distributed cache. The CCU can be deployed on the application server together with the application, so that the application incurs no network-communication overhead when using it. The CCU may be, for example, a JAR (Java Archive) package or a DLL library.
Distributed cache (center cache) 102: under the control of the CCU, stores all cached data, including the four classes A, B, C, and D described above. The distributed caching can be based on a distributed hash table algorithm, e.g. Memcache or Tair.
Application-A cluster 103: application A runs in a cluster environment; application A is deployed on every server, and every server contains a local cache (such as Ehcache or Google MemCached). The data in the local cache of each server in the application-A cluster is a subset of the data in the distributed cache; under the control of the CCU, each server caches the class-B and class-C data described above.
Data source 104: stores the true source of the data needed by the application, usually a database or a file system, such as MySQL or TFS (Taobao File System).
Application B 105: may run in a cluster environment or a stand-alone environment; for ease of description it is a stand-alone environment in this embodiment.
Relationship between application A and application B in this scenario: the users of application A are ordinary users; application B is used by administrators to configure the runtime parameters of application A.
In this embodiment, the local cache on each server in the application-A cluster can be divided into two sections: the first section stores the class-B data distributed by the CCU; the second section stores an index table for the class-C data distributed by the CCU. The class-C data itself is kept separately, in a file or in memory, and the index table indicates the storage location of the class-C data.
The CCU can manage the class-C index entries placed in the second section with a FIFO (First In First Out) or a most-used-count scheduling algorithm, thereby avoiding the cost of transmitting and serializing large data over the network.
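A minimal sketch of the second-section index table managed in FIFO order (illustrative names only; the patent equally allows a most-used-count policy):

```python
from collections import OrderedDict

class FifoIndexTable:
    """Index table for class-C data, evicted in first-in-first-out order.
    Values are storage locations (e.g. file paths), not the data itself."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def put(self, key, location):
        if key in self.entries:
            self.entries[key] = location      # update, keep insertion order
            return
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry
        self.entries[key] = location

    def lookup(self, key):
        return self.entries.get(key)          # None on a miss
```

Because only locations are stored, evicting an index entry never moves the large class-C payload itself.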
The CCU manages a first-section data table and a second-section data table configured for the application (for example, if a certain configuration parameter is frequently used, the name of this parameter is entered in the table). Data marked in the tables is obtained from the local cache; if not found there, it is obtained from the distributed cache 102; if not found there either, it is obtained from the data source 104.
Class-A and class-D data are always obtained by the CCU directly from the distributed cache 102; if not found there, they are obtained from the data source 104.
When the application-A cluster environment starts for the first time, the first server to start loads all class-B and class-C data one by one through its CCU and places them into the distributed cache and its local cache; when the other servers subsequently start, their CCUs can load the data into their local caches directly from the distributed cache.
When application B modifies class-B or class-C data, it sends a notification to the CCU, triggering the P2P data-synchronization process based on the Petersen graph algorithm, which ensures that the local caches of all servers hold the same mirror of the data.
As shown in Fig. 2, a Petersen graph comprises 10 nodes, each of which can be connected to three other nodes.

The mechanism for synchronizing the local caches across the application cluster can be a P2P pattern, implemented on the basis of the Petersen graph algorithm.
Assume that the application-A cluster contains N servers.

The cluster is configured into a hierarchical network structure of Petersen graphs as follows:

Each layer of the hierarchical network structure is formed by connecting multiple nodes in the form of Petersen graphs; each Petersen graph comprising multiple nodes constitutes one node in the next layer up. The bottom layer of the hierarchical network structure comprises one or more Petersen graphs, and each server in the cluster serves as one node in a bottom-layer Petersen graph.
A Petersen graph comprises 10 nodes, and each node is connected to 3 other nodes in the same Petersen graph. If more than one Petersen graph is generated in the same layer, each generated Petersen graph is treated as a node, and every 10 such nodes are again composed into a Petersen graph. This step is repeated until a single Petersen graph is ultimately generated. If, while generating a Petersen graph, fewer than 10 nodes remain, special empty (null) nodes are used to fill the graph up to 10 nodes before it is generated.

Each Petersen graph can designate one of its nodes as the host node of that graph.

Through the above steps, a nested Petersen structure is generated: the bottom-layer Petersen graphs are composed of actual server nodes, and the top layer is a single Petersen graph. The method of generating a hierarchical network structure from Petersen graphs described above can also be applied to generate hierarchical network structures whose first structure is not a Petersen graph.
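The bottom-up construction can be sketched as follows. This is a simplified model under stated assumptions: each group of 10 nodes becomes one Petersen graph (represented only as its member list, not its edges), padded with `None` placeholders for the empty nodes:

```python
def build_hierarchy(servers, group_size=10):
    """Group nodes into Petersen graphs of `group_size` members, layer by
    layer, until a single top-level graph remains. Each graph is a list
    of its members; missing members are None (empty nodes)."""
    layers = []
    level = list(servers)
    while True:
        graphs = []
        for i in range(0, len(level), group_size):
            group = level[i:i + group_size]
            group += [None] * (group_size - len(group))  # pad with empty nodes
            graphs.append(group)
        layers.append(graphs)
        if len(graphs) == 1:
            return layers   # top layer reached: a single Petersen graph
        level = graphs      # each graph becomes one node in the layer above
```

For example, 23 servers yield three bottom-layer graphs (the last padded with seven empty nodes), which themselves form a single top-layer graph.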
The cache-synchronization steps of the P2P pattern based on the above Petersen graphs are as follows:
Step 401: the cached-data update notification is delivered to the host node of the top-layer Petersen graph of the hierarchical network structure.

Step 402: after a node receives the update notification, it first determines whether it has already processed this notification; if not, it forwards the notification to its three adjacent nodes in the same layer.

Specifically, after the host node of the top-layer Petersen graph receives the update notification, it notifies its adjacent nodes in the same layer. After an adjacent node in the same layer receives the notification, if it has not processed it before, it continues to forward the notification to its own three adjacent nodes in the same layer, and so on, so that the notification is forwarded to all nodes of the same layer.

In this way, every node in a layer receives at most three notifications but only processes the first one. This treatment ensures that a node acts only when the update notification is received for the first time, avoiding redundant operations.

Step 403: if a node is itself a Petersen graph rather than a server, it in turn notifies the host node of that Petersen graph, until the notification reaches the server nodes of the bottom-layer Petersen graphs.
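The per-node forwarding rule of steps 402 and 403 can be sketched as follows (a minimal illustration; the function and parameter names are placeholders, not identifiers from the patent):

```python
def handle_notification(node, notice_id, seen, neighbors, child_host):
    """Process an update notification at one node: act only on the first
    copy, then forward to same-layer adjacent nodes and, for a non-bottom
    node, to the host node of the first structure it represents."""
    if (node, notice_id) in seen:
        return []                               # duplicate: discard
    seen.add((node, notice_id))
    targets = list(neighbors.get(node, []))     # adjacent nodes, same layer
    if node in child_host:
        targets.append(child_host[node])        # host of the next layer down
    return targets
```

A second delivery of the same notification to the same node produces no further forwarding, matching the at-most-three-receipts, process-once behaviour described above.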
Step 404: after a server node receives the update notification, it reads the new data into memory and notifies the CCU that the data has been received successfully.

Step 405: after all servers have notified the CCU that the data was received successfully, the CCU notifies all servers to bring the change into effect; each server overwrites the old data with the new data, and in turn notifies the CCU that activation succeeded.

Step 406: when all servers have reported successful activation, the whole process ends.

Except for the nodes of the bottom-layer Petersen graphs, which are servers, all other nodes are logical nodes; the processing of update notifications at a logical node is carried out by the CCU, until the update notification reaches the bottom-layer server nodes. The CCUs on the individual servers interact with one another to complete the notification operation.
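The two-phase flow of steps 404 to 406 can be sketched as follows (a simplified single-coordinator model; the class and method names are illustrative, not from the patent):

```python
class TwoPhaseCacheUpdate:
    """Phase 1: every server stages the new data in memory and acknowledges
    receipt. Phase 2: once all receipts are in, the coordinator tells every
    server to swap the staged data in, replacing the old cached data."""
    def __init__(self, servers):
        self.servers = servers  # server name -> {"live": current data}

    def propagate(self, new_data):
        # Phase 1: deliver the update; each server stages it in memory.
        for state in self.servers.values():
            state["staged"] = new_data
        if not all("staged" in s for s in self.servers.values()):
            return False        # some server failed to acknowledge receipt
        # Phase 2: all receipts confirmed, activate everywhere.
        for state in self.servers.values():
            state["live"] = state.pop("staged")
        return True
```

Activation happens only after every server has staged the data, which is what keeps the local caches of the cluster consistent as a single transaction.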
In the prior art, cached data in a cluster environment is usually synchronized between servers by having one server broadcast a message notifying the other servers that their caches are invalid. The next time another server accesses the data, it first queries its cache and, on a miss, fetches the data from the true data source again while updating its cache. That method cannot guarantee timeliness or transactional (cluster-wide) synchronization. In this embodiment, after one application server updates its cached data, the cached data of the other application servers is updated through the Petersen-graph-based P2P pattern, ensuring the efficiency of cache updates.
If the synchronized update of an application server fails, the CCU can communicate directly with the failed server to update it; or the local cache of the failed application server can be invalidated directly and updated in a passive mode, where passive updating means only marking the data invalid, so that when the invalid cached data is next needed, new cached data is obtained from the data source to replace it; or the CCU can notify all application servers to roll back, where rolling back in this embodiment means returning to the state before the cached-data update was made. These options allow the behaviour to be matched to different business requirements.
This scheme has the following advantages:

1. Using the characteristics of the Petersen graph, update notifications and new cached-data packages spread in a viral P2P fashion.

2. Each server receives at most three cache-update notifications and forwards at most three, which is simple and controllable.

3. A two-phase commit approach is used. In this application, two-phase commit means that after a server receives the update notification it reads the new cached data into memory and notifies the CCU of successful receipt; after all servers have notified the CCU of successful receipt, the CCU notifies all servers to bring the change into effect; each server overwrites the old cached data with the new cached data, and then notifies the CCU that activation succeeded. This guarantees transactional consistency, keeping the data in the local caches of all servers in the cluster identical. Compared with the prior-art approach for cluster environments, in which one server broadcasts a message to invalidate the caches of the other servers, which then re-fetch from the data source and update their caches on the next miss, this application performs cached-data updates in a timely manner and guarantees transactional (cluster-wide) synchronization.
An embodiment of this application also provides a cache control system, as shown in Fig. 3, including:

a cluster 301 and a CCU 302. The cluster 301 includes multiple servers 3011, each server containing a local cache system 3012; the CCU 302 includes a configuration module 3021 and an update module 3022, wherein:
the configuration module 3021 is configured to form the multiple servers of the cluster into a hierarchical network structure and record that structure, wherein each layer of the hierarchical network structure is formed by connecting multiple nodes in the form of a set first structure; each first structure comprising multiple nodes constitutes one node in the next layer up; the bottom layer of the hierarchical network structure comprises one or more first structures, and each server in the cluster serves as one node in a bottom-layer first structure of the hierarchical network structure; each first structure is formed from n nodes according to a predetermined rule, the predetermined rule including: each of the n nodes is connected to k of the other nodes, 0 < k < n, n > 1;

the update module 3022 is configured to: when a cached-data update is determined, send an update notification to the top-layer first structure of the hierarchical network structure, to be forwarded layer by layer from the top-layer first structure down to the bottom-layer first structures, until every node of the bottom-layer first structures, i.e. every server in the cluster, obtains the update notification, so that every server in the cluster updates the cached data.
The configuration module 3021 is further configured to set a host node in each first structure.

That the update module 3022 sends the update notification to the top-layer first structure of the hierarchical network structure, to be forwarded layer by layer from the top-layer first structure down to the bottom-layer first structures until every node of the bottom-layer first structures obtains the update notification, includes:

sending the update notification to the host node of the top-layer first structure; after each node receives the update notification, it performs the following forwarding: it sends the update notification to the adjacent nodes in the same first structure, and, if the node is not a bottom-layer node, it also sends the update notification to the host node of the corresponding first structure in the next layer down of the hierarchical network structure.
In an alternative of this embodiment, the update module 3022 is further configured so that, after a node receives the update notification and before performing the forwarding, it determines whether it has already processed this update notification; if so, it discards the update notification; if not, it performs the forwarding.
In an alternative of this embodiment, the server 3011 is further configured to: after obtaining the update notification and receiving the new cached data, overwrite the old cached data with the new cached data.
In an alternative of this embodiment, n is 10, k is 3, and the first structure is a Petersen graph.
In an alternative of this embodiment, the system further includes a distributed cache system 303, and the CCU 302 further includes a storage control module 3023 and an access control module 3024, wherein:

the storage control module 3023 is configured to store the cached data of an application in the distributed cache system 303 and to store the cached data whose frequency of use and/or size meets a predetermined condition in the local cache systems 3012 of the servers in the cluster;

the access control module 3024 is configured to: after receiving access requests from one or more applications for the same cached data, if the requested cached data is absent from the distributed cache system 303, obtain the requested cached data from the data source, send it to the one or more applications, and write it into the distributed cache system 303.
In an alternative of this embodiment, the predetermined condition includes:

frequency of use greater than a first threshold and occupied cache space smaller than a second threshold;

and/or, occupied cache space greater than a third threshold and frequency of use greater than a fourth threshold.
In an alternative of this embodiment, the access control module 3024 is further configured to: after receiving a cache-data access request from an application, if the requested cached data is located in the local cache system 3012 of the server that initiated the request or in the distributed cache system 303, obtain the requested cached data from the local cache system 3012 or the distributed cache system 303 and send it to the application.
In an alternative of this embodiment, that the access control module 3024 obtains the cached data requested by the application from the local cache system 3012 or the distributed cache system 303 and sends it to the application includes:

when the requested cached data meets the predetermined condition and exists in the local cache system of the server that initiated the cache-data access request, obtaining the requested cached data from the local cache system 3012 of that server and sending it to the application;

when the requested cached data meets the predetermined condition but does not exist in the local cache system 3012 of the server that initiated the cache-data access request, or when the requested cached data does not meet the predetermined condition, determining whether it exists in the distributed cache system 303 and, if so, obtaining the requested cached data from the distributed cache system 303 and sending it to the application.
In an alternative of this embodiment, a CCU 302 is deployed on each server 3011, and the CCUs 302 on different servers communicate with one another;

the access control module 3024 is further configured so that, after the access control modules 3024 of the CCUs 302 on multiple servers each receive a request from an application on their own server for the same cached data, if the requested cached data is absent from the distributed cache system 303, the access control module 3024 of one of the CCUs 302 obtains the requested cached data from the data source and sends it to the application on its own server, notifies the update module 3022 to write the requested cached data into the distributed cache system 303, and sends the cached data to the access control modules of the other CCUs, each of which sends it to the application on the server where it resides.
Those of ordinary skill in the art will appreciate that all or part of the steps in the above methods can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc. Optionally, all or part of the steps of the above embodiments can also be implemented with one or more integrated circuits. Correspondingly, each module/unit in the above embodiments can be implemented in the form of hardware, or in the form of software function modules. This application is not restricted to any particular combination of hardware and software.

Claims (20)

1. A cache control method, characterized by comprising:

forming a cluster comprising multiple servers into a hierarchical network structure;

wherein each layer of the hierarchical network structure is formed by connecting multiple nodes in the form of a set first structure; each first structure comprising multiple nodes constitutes one node in the next layer up; the bottom layer of the hierarchical network structure comprises one or more first structures, and each server in the cluster serves as one node in a bottom-layer first structure of the hierarchical network structure;

each first structure is formed from n nodes according to a predetermined rule, the predetermined rule comprising: each of the n nodes is connected to k of the other nodes, 0 < k < n, n > 1;

when a cached-data update is determined, sending an update notification to the top-layer first structure of the hierarchical network structure, to be forwarded layer by layer from the top-layer first structure down to the bottom-layer first structures, until every node of the bottom-layer first structures, i.e. every server in the cluster, obtains the update notification, so that every server in the cluster updates the cached data.
2. The method of claim 1, characterized in that sending the update notification to the top-layer first structure of the hierarchical network structure, to be forwarded layer by layer from the top-layer first structure down to the bottom-layer first structures until every node of the bottom-layer first structures obtains the update notification, comprises:

setting a host node in each first structure;

sending the update notification to the host node of the top-layer first structure;

after each node receives the update notification, performing the following forwarding:

sending the update notification to the adjacent nodes in the same first structure; and, if the node is not a bottom-layer node, sending the update notification to the host node of the corresponding first structure in the next layer down of the hierarchical network structure.
3. The method of claim 2, characterized in that the method further comprises: after each node receives the update notification and before performing the forwarding, determining whether it has already processed this update notification; if so, discarding the update notification; if not, performing the forwarding.
4. The method of claim 1, characterized in that the method further comprises: after the server obtains the update notification and receives the new cached data, updating the old cached data.
5. The method of any one of claims 1 to 4, characterized in that n is 10, k is 3, and the first structure is a Petersen graph.
6. The method of any one of claims 1 to 4, characterized in that the method further comprises:

storing the cached data of an application in a distributed cache, and storing the cached data whose frequency of use and/or size meets a predetermined condition in the local caches of the servers in the cluster;

after receiving access requests from one or more applications for the same cached data, if the requested cached data is absent from the distributed cache, obtaining the requested cached data from the data source, sending it to the one or more applications, and writing it into the distributed cache.
7. The method of claim 6, characterized in that the predetermined condition comprises:

frequency of use greater than a first threshold and occupied cache space smaller than a second threshold;

and/or, occupied cache space greater than a third threshold and frequency of use greater than a fourth threshold.
8. The method of claim 6, characterized in that the method further comprises:

after receiving a cache-data access request from an application, if the requested cached data is located in the local cache of the server that initiated the cache-data access request or in the distributed cache, obtaining the requested cached data from the local cache or the distributed cache and sending it to the application.
9. The method of claim 8, characterized in that obtaining the cached data requested by the application from the local cache or the distributed cache and sending it to the application comprises:

when the requested cached data meets the predetermined condition and exists in the local cache of the server that initiated the cache-data access request, obtaining the requested cached data from the local cache of that server and sending it to the application;

when the requested cached data meets the predetermined condition but does not exist in the local cache of the server that initiated the cache-data access request, or when the requested cached data does not meet the predetermined condition, determining whether it exists in the distributed cache and, if so, obtaining the requested cached data from the distributed cache and sending it to the application.
10. The method of claim 6, characterized in that the method further comprises:

deploying a central control unit (CCU) on each server, the CCUs on different servers communicating with one another;

wherein receiving access requests from multiple applications for the same cached data and, if the requested cached data is absent from the distributed cache, obtaining the requested cached data from the data source, sending it to the multiple applications, and writing it into the distributed cache, comprises:

after the CCUs on multiple servers each receive a request from an application on their own server for the same cached data, if the requested cached data is absent from the distributed cache, one of the CCUs obtains the requested cached data from the data source, sends it to the application on the server where it resides, writes it into the distributed cache, and sends the requested cached data to the other CCUs, each of which sends it to the application on the server where it resides.
11. A cache control system, characterized in that the system comprises: a cluster and a central control unit (CCU), the CCU comprising a configuration module and an update module, wherein:

the cluster comprises multiple servers, each server containing a local cache system;

the configuration module is configured to form the multiple servers of the cluster into a hierarchical network structure and record that structure, wherein each layer of the hierarchical network structure is formed by connecting multiple nodes in the form of a set first structure; each first structure comprising multiple nodes constitutes one node in the next layer up; the bottom layer of the hierarchical network structure comprises one or more first structures, and each server in the cluster serves as one node in a bottom-layer first structure of the hierarchical network structure; each first structure is formed from n nodes according to a predetermined rule, the predetermined rule comprising: each of the n nodes is connected to k of the other nodes, 0 < k < n, n > 1;

the update module is configured to: when a cached-data update is determined, send an update notification to the top-layer first structure of the hierarchical network structure, to be forwarded layer by layer from the top-layer first structure down to the bottom-layer first structures, until every node of the bottom-layer first structures, i.e. every server in the cluster, obtains the update notification, so that every server in the cluster updates the cached data.
12. The system of claim 11, characterised in that:
The configuration module is further configured to designate a master node within each first structure;
The update module sending the update notification to the top-layer first structure of the hierarchical network structure, the notification being forwarded layer by layer from the top-layer first structure down to the bottom-layer first structures until every node of the bottom-layer first structures receives the update notification, comprises:
Sending the update notification to the master node of the top-layer first structure; after receiving the update notification, each node performs the following forwarding: it sends the update notification to its adjacent nodes within the same first structure, and, if the node is not a bottom-layer node, it also sends the update notification to the master node of the corresponding first structure in the next layer down of the hierarchical network structure.
13. The system of claim 12, characterised in that the update module is further configured such that, after a node receives the update notification and before it performs the forwarding, the node checks whether it has already processed that update notification; if it has, it discards the notification, and only if it has not does it perform the forwarding.
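The forwarding rule of claims 12 and 13 can be sketched as a flood with a duplicate check: each node relays the notification to its neighbours in the same first structure, non-bottom nodes additionally hand it to the master node of their child structure, and already-seen notifications are dropped so the flood terminates. The flat-dict topology representation here is an illustrative assumption.

```python
# Hypothetical sketch of the claim-12/13 propagation rule.
# neighbours:   node -> adjacent nodes within the same first structure
# child_master: node -> master node of its next-layer-down structure
#               (absent for bottom-layer nodes)
# seen:         node -> set of update ids already processed (claim 13)
def propagate(update_id, start, neighbours, child_master, seen):
    stack = [start]                       # notification sent to top master
    while stack:
        node = stack.pop()
        if update_id in seen[node]:       # claim 13: discard duplicates
            continue
        seen[node].add(update_id)         # mark as processed, then forward
        stack.extend(neighbours[node])    # to adjacent nodes, same structure
        if node in child_master:          # non-bottom node: also forward to
            stack.append(child_master[node])  # the child structure's master

# Toy two-layer topology: triangle A-B-C on top, A's child structure D-E-F.
neighbours = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"],
              "D": ["E", "F"], "E": ["D", "F"], "F": ["D", "E"]}
child_master = {"A": "D"}
seen = {node: set() for node in neighbours}
propagate("update-1", "A", neighbours, child_master, seen)
```

After the call, every node in both structures has processed `update-1` exactly once, despite the cycles in each triangle.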
14. The system of claim 11, characterised in that each server is further configured to: after receiving the update notification and the new cached data, replace the original cached data with the new cached data.
15. The system of any one of claims 11 to 14, characterised in that n is 10, k is 3, and the first structure is a Petersen graph.
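The Petersen graph named in claim 15 is the standard 10-node, 3-regular graph: an outer 5-cycle, an inner pentagram, and five spokes joining them. A minimal construction, purely for illustration:

```python
# Build the Petersen graph: n = 10 nodes, each of degree k = 3,
# matching the "first structure" of claim 15.
def petersen_graph():
    adj = {v: set() for v in range(10)}
    def connect(a, b):
        adj[a].add(b)
        adj[b].add(a)
    for i in range(5):
        connect(i, (i + 1) % 5)          # outer 5-cycle over nodes 0..4
        connect(i, i + 5)                # spoke to inner node 5..9
        connect(i + 5, 5 + (i + 2) % 5)  # inner pentagram over nodes 5..9
    return adj

g = petersen_graph()
assert all(len(neigh) == 3 for neigh in g.values())  # k = 3 at every node
```

Its small diameter (2) and uniform degree are presumably why it suits the per-structure flood of claim 12: any notification reaches all ten nodes within two hops.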
16. The system of any one of claims 11 to 14, characterised in that the system further comprises a distributed cache system, and the central control device further comprises a storage control module and an access control module, wherein:
The storage control module is configured to store the cached data of applications in the distributed cache system and, additionally, to store cached data whose frequency of use and/or size meets a predetermined condition in the local cache systems of the servers in the cluster;
The access control module is configured to: after receiving access requests from one or more applications for the same cached data, if the cached data requested by the one or more applications is absent from the distributed cache system, obtain that cached data from the data source, send the cached data requested by the one or more applications to the one or more applications, and update the cached data requested by the one or more applications into the distributed cache system.
17. The system of claim 16, characterised in that the predetermined condition comprises:
the frequency of use is greater than a first threshold and the occupied cache space is less than a second threshold;
and/or, the frequency of use is greater than a third threshold and the occupied cache space is greater than a fourth threshold.
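Claim 17's condition can be written as a simple predicate. The concrete threshold values below are illustrative assumptions; the patent only names them as the first through fourth thresholds.

```python
# Hypothetical sketch of the claim-17 predicate deciding whether cached
# data qualifies for a server's local cache system.
def meets_predetermined_condition(freq, size,
                                  t1=100, t2=1024,    # first/second thresholds
                                  t3=1000, t4=4096):  # third/fourth thresholds
    cond_a = freq > t1 and size < t2   # frequently used and small
    cond_b = freq > t3 and size > t4   # very frequently used, even if large
    return cond_a or cond_b

assert meets_predetermined_condition(freq=200, size=512)       # small and hot
assert not meets_predetermined_condition(freq=10, size=512)    # cold data
```

The "and/or" in the claim means an implementation may apply either branch alone or both, as sketched here with the disjunction.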
18. The system of claim 16, characterised in that the access control module is further configured to: after receiving a cache data access request from an application, if the cached data requested by the application is present in the local cache system of the server that initiated the cache data access request or in the distributed cache system, obtain the cached data requested by the application from the local cache system or from the distributed cache system and send it to the application.
19. The system of claim 18, characterised in that the access control module obtaining the cached data requested by the application from the local cache system or the distributed cache system and sending it to the application comprises:
when the cached data requested by the application meets the predetermined condition and is present in the local cache system of the server that initiated the cache data access request, obtaining the cached data requested by the application from that local cache system and sending it to the application;
when the cached data requested by the application meets the predetermined condition but is absent from the local cache system of the server that initiated the cache data access request, or when the cached data requested by the application does not meet the predetermined condition, determining whether the cached data requested by the application is present in the distributed cache system and, if so, obtaining it from the distributed cache system and sending it to the application.
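The lookup order of claims 18 and 19, combined with the data-source fallback of claim 16, amounts to a tiered read path: local cache (only for data meeting the predetermined condition), then distributed cache, then data source with a write-back. All names below are illustrative; plain dicts stand in for the cache systems.

```python
# Hypothetical sketch of the claim-19 tiered lookup, with the claim-16
# fallback to the data source on a full miss.
def get_cached_data(key, local_cache, distributed_cache, data_source,
                    meets_condition):
    if meets_condition(key):                # claim 19: local tier applies only
        value = local_cache.get(key)        # to data meeting the condition
        if value is not None:
            return value                    # served from local cache system
    value = distributed_cache.get(key)      # claim 19: distributed tier
    if value is not None:
        return value
    value = data_source[key]                # claim 16: full miss -> data source
    distributed_cache[key] = value          # write back into distributed cache
    return value

local = {"a": 1}
dist = {"b": 2}
source = {"a": 9, "b": 9, "c": 3}
hot = lambda key: key == "a"               # only "a" meets the condition
results = [get_cached_data(k, local, dist, source, hot) for k in ("a", "b", "c")]
```

Here `"a"` is served locally, `"b"` from the distributed cache, and `"c"` from the data source, which also populates the distributed cache.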
20. The system of claim 16, characterised in that:
A central control device is deployed on each of the servers, and the central control devices on different servers communicate with one another;
The access control module is further configured such that, after the access control modules of the central control devices on multiple servers each receive, from the application on their respective server, an access request for the same cached data, if the cached data requested by the multiple applications is absent from the distributed cache system, the access control module of one of the central control devices obtains the cached data requested by the multiple applications from the data source and sends it to the application on its own server; the update module updates the cached data requested by the multiple applications into the distributed cache system, and the cached data requested by the multiple applications is sent to the access control modules of the other central control devices among the multiple central control devices, which send it to the applications on their respective servers.
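Claim 20 describes a single-fetch coordination: when several servers miss on the same key concurrently, only one central control device goes to the data source, then fans the value out to its peers. The leader-by-lowest-id election below is an illustrative assumption; the patent only says "one of" the central control devices performs the fetch.

```python
# Hypothetical sketch of the claim-20 coordination among central control
# devices on different servers.
class CentralControlDevice:
    def __init__(self, device_id, data_source, distributed_cache):
        self.device_id = device_id
        self.data_source = data_source
        self.distributed_cache = distributed_cache
        self.delivered = {}  # key -> value handed to the local application

    def deliver(self, key, value):
        # Stands in for "send to the application on the server where it is".
        self.delivered[key] = value

def handle_concurrent_miss(key, devices):
    leader = min(devices, key=lambda d: d.device_id)  # assumed election rule
    value = leader.data_source[key]                   # single fetch at source
    leader.distributed_cache[key] = value             # update distributed cache
    for d in devices:                                 # leader serves its own app,
        d.deliver(key, value)                         # peers serve theirs
    return value

source = {"k": 42}
dist = {}
devices = [CentralControlDevice(i, source, dist) for i in range(3)]
value = handle_concurrent_miss("k", devices)
```

The point of the scheme is that the data source sees one request rather than one per server, while every requesting application still receives the value.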
CN201310172568.7A 2013-05-10 2013-05-10 A kind of buffer control method and system Active CN104142896B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310172568.7A CN104142896B (en) 2013-05-10 2013-05-10 A kind of buffer control method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310172568.7A CN104142896B (en) 2013-05-10 2013-05-10 A kind of buffer control method and system

Publications (2)

Publication Number Publication Date
CN104142896A CN104142896A (en) 2014-11-12
CN104142896B true CN104142896B (en) 2017-05-31

Family

ID=51852075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310172568.7A Active CN104142896B (en) 2013-05-10 2013-05-10 A kind of buffer control method and system

Country Status (1)

Country Link
CN (1) CN104142896B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580422A (en) * 2014-12-26 2015-04-29 赞奇科技发展有限公司 Cluster rendering node data access method based on shared cache
CN104935654B (en) * 2015-06-10 2018-08-21 华为技术有限公司 Caching method, write-in point client in a kind of server cluster system and read client
CN105701219B (en) * 2016-01-14 2019-04-02 北京邮电大学 A kind of implementation method of distributed caching
CN107231395A (en) * 2016-03-25 2017-10-03 阿里巴巴集团控股有限公司 Date storage method, device and system
CN106921648A (en) * 2016-11-15 2017-07-04 阿里巴巴集团控股有限公司 Date storage method, application server and remote storage server
CN106506704A (en) * 2016-12-29 2017-03-15 北京奇艺世纪科技有限公司 A kind of buffering updating method and device
CN106790705A (en) * 2017-02-27 2017-05-31 郑州云海信息技术有限公司 A kind of Distributed Application local cache realizes system and implementation method
US10355939B2 (en) * 2017-04-13 2019-07-16 International Business Machines Corporation Scalable data center network topology on distributed switch
CN107301048B (en) * 2017-06-23 2020-09-01 北京中泰合信管理顾问有限公司 Internal control management system of application response type shared application architecture
US10511524B2 (en) * 2017-10-03 2019-12-17 Futurewei Technologies, Inc. Controller communications in access networks
CN108536481A (en) * 2018-02-28 2018-09-14 努比亚技术有限公司 A kind of application program launching method, mobile terminal and computer storage media
CN108446356B (en) * 2018-03-12 2023-08-29 上海哔哩哔哩科技有限公司 Data caching method, server and data caching system
WO2020257981A1 (en) * 2019-06-24 2020-12-30 Continental Automotive Gmbh Process for software and function update of hierarchic vehicle systems
CN112148202B (en) * 2019-06-26 2023-05-26 杭州海康威视数字技术股份有限公司 Training sample reading method and device
CN113392126B (en) * 2021-08-17 2021-11-02 北京易鲸捷信息技术有限公司 Execution plan caching and reading method based on distributed database

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1451116A (en) * 1999-11-22 2003-10-22 阿茨达科姆公司 Distributed cache synchronization protocol
CN101674233A (en) * 2008-09-12 2010-03-17 中国科学院声学研究所 Peterson graph-based storage network structure and data read-write method thereof
US8103799B2 (en) * 1997-03-05 2012-01-24 At Home Bondholders' Liquidating Trust Delivering multimedia services

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6961825B2 (en) * 2001-01-24 2005-11-01 Hewlett-Packard Development Company, L.P. Cache coherency mechanism using arbitration masks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8103799B2 (en) * 1997-03-05 2012-01-24 At Home Bondholders' Liquidating Trust Delivering multimedia services
CN1451116A (en) * 1999-11-22 2003-10-22 阿茨达科姆公司 Distributed cache synchronization protocol
CN101674233A (en) * 2008-09-12 2010-03-17 中国科学院声学研究所 Peterson graph-based storage network structure and data read-write method thereof

Also Published As

Publication number Publication date
CN104142896A (en) 2014-11-12

Similar Documents

Publication Publication Date Title
CN104142896B (en) A kind of buffer control method and system
CN112087333B (en) Micro-service registry cluster and information processing method thereof
CN106506605B (en) SaaS application construction method based on micro-service architecture
CN103581276B (en) Cluster management device, system, service customer end and correlation method
CN103973725B (en) A kind of distributed cooperative algorithm and synergist
CN107368369B (en) Distributed container management method and system
CN102244685A (en) Distributed type dynamic cache expanding method and system supporting load balancing
US9847903B2 (en) Method and apparatus for configuring a communication system
WO2018166398A1 (en) System for managing license in nfv network
CN109639773B (en) Dynamically constructed distributed data cluster control system and method thereof
CN112463366A (en) Cloud-native-oriented micro-service automatic expansion and contraction capacity and automatic fusing method and system
CN111858045A (en) Multitask GPU resource scheduling method, device, equipment and readable medium
CN107992270B (en) Method and device for globally sharing cache of multi-control storage system
JP2016144169A (en) Communication system, queue management server, and communication method
CN109733444B (en) Database system and train monitoring management equipment
CN102624932A (en) Index-based remote cloud data synchronizing method
CN116723077A (en) Distributed IT automatic operation and maintenance system
CN111600958B (en) Service discovery system, service data management method, server, and storage medium
CN111382132A (en) Medical image data cloud storage system
US20220019485A1 (en) Preserving eventually consistent distributed state of multi-layer applications
CN111083182B (en) Distributed Internet of things equipment management method and device
CN107819858B (en) Method and device for managing cloud service during dynamic expansion and contraction of cloud service
CN110110004B (en) Data operation method, device and storage medium
CN111400110A (en) Database access management system
CN105868045A (en) Data caching method and apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20211109

Address after: Room 554, floor 5, building 3, No. 969, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Patentee after: TAOBAO (CHINA) SOFTWARE CO.,LTD.

Address before: Fourth Floor, P.O. Box 847, Capital Building, Grand Cayman, Cayman Islands

Patentee before: ALIBABA GROUP HOLDING Ltd.