CN201601694U - Distribution type cache system - Google Patents
- Publication number
- CN201601694U
- Authority
- CN
- China
- Prior art keywords
- cache
- client
- sequence number
- contents
- server end
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Information Transfer Between Computers (AREA)
Abstract
The utility model relates to a distributed cache system comprising one or more cache clients, which request storage of cache content associated with a key value or request the cache content corresponding to a given sequence number under a key value, and a cache server, which stores the cache content associated with a key value in a storage unit and returns either the sequence number assigned to that content or the content matching the sequence number requested by a client. The cache clients are connected to the cache server over a TCP/IP network. When a client needs to obtain several cache contents associated with one key value, it sends a single request and the server responds once, instead of exchanging multiple request/response pairs; when a client wants only one of the contents under a key value, it need not download the entire storage unit from the server, but can download just the content matching the required sequence number.
Description
Technical field
The utility model relates to the field of distributed caching and, more particularly, to a distributed cache system.
Background technology
In most existing distributed caching methods, key values correspond one-to-one with cache contents. When a client needs to obtain several related cache contents, it must send multiple requests and the server must respond multiple times, which increases communication traffic and causes problems such as low efficiency. Conversely, when a client caches several related contents under a single key value and later needs only one of them, it must download the entire cached content from the server, which likewise adds unnecessary traffic and reduces efficiency.
The utility model content
The technical problem to be solved by the utility model, in view of the above defects of the prior art, is to provide a distributed cache system in which clients can access cache contents conveniently, reducing the number of communications and the traffic between client and server and improving communication efficiency.
The technical solution adopted to solve this problem is to construct a distributed cache system comprising one or more cache clients, which request storage of cache content associated with a key value or request the cache content corresponding to a given sequence number under a key value, and a cache server, which stores the cache content associated with a key value in a storage unit and returns the sequence number assigned to that content to the cache client, or returns the content matching the sequence number requested by the cache client.
The one or more cache clients are connected to the cache server over a TCP/IP network.
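The storage scheme just described can be sketched as a minimal in-memory model. The class and method names below are illustrative assumptions; the patent specifies no implementation, only the behavior that each key value owns one storage unit and each stored content receives a sequence number.

```python
# Minimal sketch of the cache server's data model (assumed names; the patent
# does not specify an implementation). Each key value maps to one storage
# unit: a list of (sequence number, content) pairs in ascending order.
class CacheServer:
    def __init__(self):
        self.units = {}      # key value -> storage unit: [(seq, content), ...]
        self.next_seq = {}   # key value -> next sequence number to assign

    def store(self, key, content):
        """Store content under a key value and return its sequence number."""
        seq = self.next_seq.get(key, 1)
        self.next_seq[key] = seq + 1
        self.units.setdefault(key, []).append((seq, content))
        return seq

    def fetch(self, key, seqs):
        """Return the contents under a key value matching the requested sequence numbers."""
        return [c for s, c in self.units.get(key, []) if s in seqs]
```

A client storing two contents under the same key value would receive sequence numbers 1 and 2 and could later fetch either content individually, which is the access pattern the scheme above describes.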
In the utility model, the cache server comprises multiple storage units, each storage unit corresponding to one key value of the cache clients.
In the utility model, the cache clients hold multiple key values and multiple sequence numbers; each key value of a cache client corresponds to one or more sequence numbers, and each sequence number corresponds to one piece of the client's cache content, which is stored in the storage unit of the cache server corresponding to that key value.
In the utility model, each storage unit of the cache server holds one or more cache contents, stored in ascending order of their corresponding sequence numbers.
In the utility model, a cache client requests one or more cache contents by sequence number, and the cache server retrieves and returns the contents matching the requested sequence numbers.
In the utility model, a cache client sends, at different moments, multiple requests to store different cache contents associated with the same key value.
In the utility model, when a cache client requests multiple cache contents, it needs to send only one request and the cache server needs to make only one response, retrieving and returning the contents matching the requested sequence numbers.
A distributed cache system according to the utility model has the following beneficial effects. When a cache client needs to obtain several cache contents associated with the same key value, it does not have to send multiple requests, nor does the cache server have to respond multiple times: the client sends a single request carrying the sequence numbers of the needed contents, and the server returns the contents matching those sequence numbers. When a cache client has cached several related contents in the storage unit corresponding to one key value and wants only one of them, it need not download the entire storage unit from the server; it simply downloads the content matching the required sequence number. Clients can thus access cache contents conveniently, the number of communications and the traffic between client and server are reduced, and communication efficiency is improved.
Description of drawings
The utility model is described in further detail below with reference to the drawings and embodiments, in which:
Fig. 1 is a schematic diagram of the system structure of the utility model;
Fig. 2 is an interaction diagram of a cache client of an embodiment of the utility model sending multiple requests to the cache server to store associated contents;
Fig. 3 is a schematic diagram of the distribution of cache contents in the storage units of the cache server in an embodiment of the utility model;
Fig. 4 is an interaction diagram of a cache client of an embodiment of the utility model requesting cache contents from the cache server.
Embodiment
To make the purpose, technical solution and advantages of the utility model clearer, the utility model is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the utility model and are not intended to limit it.
Fig. 1 is a schematic diagram of the system structure of the utility model. The embodiment shown in Fig. 1 comprises a cache server and several cache clients, connected over a TCP/IP network. A cache client requests the cache server to store cache content and requests cached content; the cache server responds to client requests, stores the clients' cache contents, and returns cache contents to the clients.
A client requests the server to store the cache content associated with a key value, or requests the content corresponding to a given sequence number under a key value. The server responds to the client's request: it stores the client's content associated with a key value in a storage unit and returns the corresponding sequence number to the client, or returns the content matching the sequence number requested by the client.
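The two request types just described could be encoded as messages like the following. This is purely an illustrative assumption: the patent specifies TCP/IP transport but no wire format, and the field names here are invented for the sketch.

```python
# Hypothetical request messages for the two client operations described
# above. The patent prescribes no message format; JSON is an assumption.
import json

def store_request(key, content):
    # "store the cache content associated with key value `key`"
    return json.dumps({"op": "store", "key": key, "content": content})

def fetch_request(key, seq):
    # "return the content corresponding to sequence number `seq` under `key`"
    return json.dumps({"op": "fetch", "key": key, "seq": seq})
```

On receiving a store request the server would reply with the assigned sequence number; on receiving a fetch request it would reply with the matching content, mirroring the request/response pairs above.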
Fig. 2 is an interaction diagram of a cache client of an embodiment of the utility model sending multiple requests to the cache server to store associated contents. As can be seen from Fig. 2:
1) At moment T1, the cache client sends a cache request to store content C1 associated with key value K1; the cache server stores C1 and returns the corresponding sequence number S1 to the client.
2) At moment T2, the cache client sends a cache request to store content C2 associated with key value K1; the cache server stores C2 and returns the corresponding sequence number S2 to the client. S2 is greater than S1.
3) At moment T3, the cache client sends a cache request to store content C3 associated with key value K1; the cache server stores C3 and returns the corresponding sequence number S3 to the client. S3 is greater than S2.
The cache server stores the cache contents in the storage unit corresponding to key value K1 in ascending order of sequence number, as shown in Fig. 3.
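The Fig. 2 interaction can be sketched as follows. The code is self-contained and illustrative only; concrete sequence numbers 1, 2, 3 stand in for S1, S2, S3.

```python
# Self-contained sketch of the Fig. 2 interaction: three successive store
# requests for key value K1 receive strictly increasing sequence numbers.
# All names are illustrative; the patent does not prescribe this code.
unit = []  # storage unit for key value K1: (sequence number, content) pairs

def store(content):
    seq = len(unit) + 1        # sequence numbers assigned in ascending order
    unit.append((seq, content))
    return seq

s1 = store("C1")  # moment T1
s2 = store("C2")  # moment T2
s3 = store("C3")  # moment T3
assert s1 < s2 < s3            # matches S1 < S2 < S3 in Fig. 2
```

After the three requests, the storage unit holds (S1, C1), (S2, C2), (S3, C3) in ascending order of sequence number, which is the layout Fig. 3 depicts.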
Fig. 3 is a schematic diagram of the distribution of cache contents in the storage units of the cache server in an embodiment of the utility model. The cache server stores cache contents in storage units. Each key value corresponds one-to-one with a storage unit of the cache server; within the storage unit corresponding to a key value, the contents associated with that key value are numbered in ascending order.
As can be seen from Fig. 3, the storage unit for key value K1 stores three cache contents C1, C2 and C3 with corresponding sequence numbers S1, S2 and S3, where S1 < S2 < S3. The storage unit for key value K2 stores one cache content C1 with sequence number S1. The storage unit for key value K3 stores five cache contents C1 through C5 with sequence numbers S1 through S5, where S1 < S2 < S3 < S4 < S5.
Fig. 4 is an interaction diagram of a cache client of an embodiment of the utility model requesting cache contents from the cache server. The client requests one or more cache contents by sequence number, and the server retrieves and returns the contents matching the requested sequence numbers; the client sends only one request and the server makes only one response.
As can be seen from Fig. 4, when the client requests the content with sequence number S1 under key value K1, the cache server returns the corresponding content C1. When the client requests the contents under K1 with sequence numbers greater than S1, those sequence numbers are S2 and S3 (as can be seen from Fig. 3), and the server returns the corresponding contents C2 and C3. When the client requests all cache contents under K1, which comprise sequence numbers S1, S2 and S3 (as can be seen from Fig. 3), the server returns the corresponding contents C1, C2 and C3.
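The three retrieval patterns of Fig. 4 can be sketched against the Fig. 3 layout of key value K1. The function names are invented for illustration; concrete sequence numbers 1..3 stand in for S1..S3.

```python
# Sketch of the three Fig. 4 retrieval patterns (illustrative names only).
unit = [(1, "C1"), (2, "C2"), (3, "C3")]  # storage unit for key value K1

def get_exact(seq):
    """Content whose sequence number equals seq."""
    return [c for s, c in unit if s == seq]

def get_after(seq):
    """Contents whose sequence numbers are greater than seq."""
    return [c for s, c in unit if s > seq]

def get_all():
    """Every content associated with the key value."""
    return [c for _, c in unit]
```

Requesting sequence number S1 yields C1 alone; requesting sequence numbers greater than S1 yields C2 and C3; requesting the whole key value yields all three contents, without the client ever downloading more of the storage unit than it asked for.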
In the distributed cache system of the utility model, when a cache client needs to obtain several cache contents associated with the same key value, it does not have to send multiple requests, nor does the cache server have to respond multiple times: the client sends a single request carrying the sequence numbers of the needed contents, and the server returns the contents matching those sequence numbers. When a cache client has cached several related contents in the storage unit corresponding to one key value and wants only one of them, it need not download the entire storage unit from the server; it simply downloads the content matching the required sequence number. Clients can thus access cache contents conveniently, the number of communications and the traffic between client and server are reduced, and communication efficiency is improved.
The above are only preferred embodiments of the utility model and are not intended to limit it; any modifications, equivalent replacements and improvements made within the spirit and principles of the utility model shall fall within its scope of protection.
Claims (4)
1. A distributed cache system, characterized in that it comprises:
one or more cache clients (100) for requesting storage of the cache content associated with a key value, or for requesting the cache content corresponding to a given sequence number under a key value; and
a cache server (200) for storing the cache content associated with a key value in a storage unit and returning the corresponding sequence number to the cache client (100), or for returning the cache content matching the sequence number requested by the cache client (100);
the one or more cache clients (100) being connected to the cache server (200) over a TCP/IP network.
2. The distributed cache system according to claim 1, characterized in that the cache server (200) comprises multiple storage units, each storage unit of the cache server (200) corresponding to one key value of the cache clients (100).
3. The distributed cache system according to claim 1, characterized in that the cache clients (100) hold multiple key values and multiple sequence numbers; each key value of a cache client (100) corresponds to one or more sequence numbers, and each sequence number corresponds to one piece of cache content of the cache client (100), which is stored in the storage unit of the cache server (200) corresponding to that key value.
4. The distributed cache system according to claim 1, characterized in that each storage unit of the cache server (200) holds one or more cache contents, stored in the storage unit in ascending order of their corresponding sequence numbers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN2009201338062U (CN201601694U) | 2009-07-10 | 2009-07-10 | Distribution type cache system
Publications (1)
Publication Number | Publication Date
---|---
CN201601694U (en) | 2010-10-06
Family
ID=42812753
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN2009201338062U (Expired - Fee Related) | Distribution type cache system | 2009-07-10 | 2009-07-10
Country Status (1)
Country | Link |
---|---|
CN (1) | CN201601694U (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104461929A (en) * | 2013-09-23 | 2015-03-25 | 中国银联股份有限公司 | Distributed type data caching method based on interceptor |
CN104461929B (en) * | 2013-09-23 | 2018-03-23 | 中国银联股份有限公司 | Distributed data cache method based on blocker |
CN105847365A (en) * | 2016-03-28 | 2016-08-10 | 乐视控股(北京)有限公司 | Content caching method and content caching system |
CN105847382A (en) * | 2016-04-20 | 2016-08-10 | 乐视控股(北京)有限公司 | CDN file distribution method and system |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | C14 | Grant of patent or utility model |
 | GR01 | Patent grant |
 | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2010-10-06; Termination date: 2017-07-10