CN107241444A - Distributed cache data management system, method and device - Google Patents
Distributed cache data management system, method and device
- Publication number: CN107241444A
- Application number: CN201710637356.XA
- Authority
- CN
- China
- Prior art keywords
- data
- read
- client
- high speed
- nonvolatile storage
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
Abstract
The invention discloses a distributed cache data management system, method and device. The management system includes a back-end storage and a high-speed resource buffer pool. The high-speed resource buffer pool is composed of multiple high-performance non-volatile memories joined in a cluster; each high-performance non-volatile memory corresponds to one client and serves as that client's read/write cache. When a client writes data, it writes the data directly into its corresponding high-performance non-volatile memory in the high-speed resource buffer pool. When a client needs to read data, it can see all data cached in the high-speed resource buffer pool; once the required data is determined to be in the pool, it is read directly from the pool. The invention therefore not only shortens the IO stack path and speeds up data access, but also guarantees secure access to data and data consistency when different clients operate on the back-end storage at the same time.
Description
Technical field
The present invention relates to the technical field of distributed storage, and in particular to a distributed cache data management system, method and device.
Background technology
The development of the internet industry has driven a geometric increase in stored data, so the volume of stored data keeps growing, and improving data access efficiency has become key to internet development. At present, a common industry method for improving data access efficiency is: when a client needs to access data in the back-end storage, a cache is enabled and the data is accessed from the cache, which shortens the IO stack path and speeds up data access. The detailed process is: after the cache is enabled, the cached data is usually placed in memory (RAM), which is equivalent to reading and writing data directly from memory, thereby improving data access speed. Especially in business scenarios where the cached data changes infrequently, data access speed can be improved considerably.
However, while the above scheme improves data access speed, it also carries a certain risk. When a client reads cached data from memory, if an abnormal condition occurs at the client, such as a host power failure, a server crash, or a network outage, the cached data in memory is lost, and even after the client recovers, the cached data in memory cannot be restored. To guarantee secure access to data, another data access scheme has been provided, as shown in Fig. 1: a high-performance non-volatile memory replaces RAM as the read/write cache. Each high-performance non-volatile memory corresponds to one client and is invisible to all other clients; for example, client A can only see high-performance non-volatile memory A, and client B can only see high-performance non-volatile memory B. When client A needs to write data block A into the block storage of the distributed storage resource pool of the back-end storage, client A first caches data block A in high-performance non-volatile memory A, and then writes it to the block storage through high-performance non-volatile memory A. If data block A has not yet been written to the block storage but is still cached in high-performance non-volatile memory A, and client B believes that client A has already written data block A to the block storage, client B will perform a read of data block A from the block storage through high-performance non-volatile memory B. In one case, data block A is not found in the block storage; in the other, data block A is found but is not the newest version. In both cases, the data written by client A and the data read by client B are inconsistent.
Because of the various problems of the traditional schemes, most enterprise vendors sacrifice storage performance: the cache is not enabled and data access goes directly from the client to the server. Although this guarantees secure access to data, it greatly reduces data access efficiency.
Content of the invention
In view of this, the present invention discloses a distributed cache data management system, method and device, so as to shorten the IO stack path and speed up data access while guaranteeing secure access to data and data consistency when different clients operate on the back-end storage at the same time.
A distributed cache data management system, applied to a server end, includes:
a high-speed resource buffer pool composed of multiple high-performance non-volatile memories joined in a cluster, each high-performance non-volatile memory corresponding to one client and serving as the read/write cache of that client;
and a back-end storage connected with the high-speed resource buffer pool;
the high-speed resource buffer pool is configured, when a client writes data, to receive the data to be written and write it directly into the high-performance non-volatile memory corresponding to that client;
the high-speed resource buffer pool is further configured, when a client reads data, to receive a data read instruction and, according to the instruction, to search each high-performance non-volatile memory for the data to be read carried in the instruction; if the data is found, the found data is read; otherwise, the data to be read is fetched from the back-end server.
Preferably, the back-end storage includes multiple mechanical hard disks.
Preferably, the high-performance non-volatile memory is used to store the most frequently accessed data within a preset time period up to the current time.
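The pooled read/write flow described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class name, the dict-based pool, and the backend interface are all assumptions made for the example. The key property it shows is that every client searches the one shared pool, so a write by any client is visible to all others.

```python
class PooledCache:
    """Sketch: the NVM devices of all clients join one shared pool,
    so any client can see all cached data (illustrative only)."""

    def __init__(self, backend):
        self.backend = backend   # back-end store, e.g. an HDD resource pool
        self.pool = {}           # the shared high-speed buffer pool

    def write(self, client, key, data):
        # Write-back: the write lands in the shared pool and returns at once.
        self.pool[key] = data

    def read(self, client, key):
        # Any client searches the whole shared pool first,
        # and falls back to the back-end store on a miss.
        if key in self.pool:
            return self.pool[key]
        return self.backend.get(key)


backend = {"blockB": b"cold data"}
cache = PooledCache(backend)
cache.write("clientA", "blockA", b"hot data")
latest = cache.read("clientB", "blockA")   # client B sees client A's write
```

Because the pool is shared rather than private per host, the stale-read scenario from the background section (client B missing client A's unflushed block) cannot occur in this sketch.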
A distributed cache data management method, applied to a high-speed resource buffer pool composed of multiple high-performance non-volatile memories joined in a cluster, each high-performance non-volatile memory corresponding to one client and serving as the read/write cache of that client;
the method includes:
when a client writes data, receiving the data to be written and writing it directly into the high-performance non-volatile memory corresponding to that client;
when a client reads data, receiving a data read instruction;
according to the data read instruction, searching each high-performance non-volatile memory for the data to be read carried in the instruction;
if the data is found, reading the found data;
otherwise, fetching the data to be read from the back-end server.
A distributed cache data management device, applied to a high-speed resource buffer pool composed of multiple high-performance non-volatile memories joined in a cluster, each high-performance non-volatile memory corresponding to one client and serving as the read/write cache of that client;
the device includes:
a writing unit, configured, when a client writes data, to receive the data to be written and write it directly into the high-performance non-volatile memory corresponding to that client;
a receiving unit, configured, when a client reads data, to receive a data read instruction;
a searching unit, configured, according to the data read instruction, to search each high-performance non-volatile memory for the data to be read carried in the instruction;
a first reading unit, configured, when the searching unit finds the data to be read, to read it;
a second reading unit, configured, when the searching unit does not find the data to be read, to fetch it from the back-end server.
As can be seen from the above technical scheme, the invention discloses a distributed cache data management system, method and device. The management system includes a back-end storage and a high-speed resource buffer pool composed of multiple high-performance non-volatile memories joined in a cluster, each corresponding to one client and serving as the read/write cache of that client. When a client writes data, it writes the data directly into its corresponding high-performance non-volatile memory in the high-speed resource buffer pool; when a client needs to read data, it first reads from the pool and only reads from the back-end server when the data is not cached in the pool. Because the invention clusters the high-performance non-volatile memories corresponding to all clients, the memories share their resources and every client can see all cached data; by first attempting the read in the high-speed resource buffer pool, a client can determine whether the required data is cached in the pool, and reads from the back-end server only when it is not. The invention therefore not only shortens the IO stack path and speeds up data access, but also guarantees secure access to data and data consistency when different clients operate on the back-end storage at the same time.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical schemes of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the invention; those of ordinary skill in the art can obtain other drawings from the disclosed drawings without creative work.
Fig. 1 is a frame diagram of traditional cached data processing;
Fig. 2 is a frame diagram of a distributed cache data management system disclosed in an embodiment of the present invention;
Fig. 3 is a signaling interaction diagram of a high-speed resource buffer pool and a back-end server disclosed in an embodiment of the present invention.
Embodiments
The technical schemes in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the invention.
The embodiments of the invention disclose a distributed cache data management system, method and device, so as to shorten the IO stack path and speed up data access while guaranteeing secure access to data and data consistency when different clients operate on the back-end storage at the same time.
Referring to Fig. 2, a frame diagram of a distributed cache data management system disclosed in an embodiment of the invention, the system is applied to a server end and includes a high-speed resource buffer pool 11 and a back-end storage 12 connected with the high-speed resource buffer pool 11.
Wherein:
The high-speed resource buffer pool 11 is composed of multiple high-performance non-volatile memories 111 joined in a cluster; each high-performance non-volatile memory 111 corresponds to one client 10 and serves as the read/write cache (Cache) of that client 10.
Specifically, a high-performance non-volatile memory 111 is a memory that guarantees data is not lost when an abnormal condition occurs in the environment, such as a host power failure, a server crash, or a network outage.
In the present invention, the high-performance non-volatile memory 111 is used as the read/write Cache of the client 10. When the client 10 needs to access data in the back-end storage 12, the cache is enabled and the cached data is placed in the high-performance non-volatile memory 111; the client 10 can then read and write data directly in its corresponding high-performance non-volatile memory 111, which shortens the IO stack path and improves data access speed. When an abnormal condition occurs in the environment of the high-performance non-volatile memory 111, the cached data is retained on the high-performance non-volatile memory 111; after the environment recovers, the client 10 continues to read data from the high-performance non-volatile memory 111. Compared with RAM, the high-performance non-volatile memory 111 greatly improves the security of data access.
It should be noted that one client 10 corresponds to one high-performance non-volatile memory 111.
A cluster (trunked) communication system is a mobile communication system for group dispatch and control, mainly used in the professional mobile communication field. The channels available to the system are shared by all of its users, with automatic channel selection; it is a multi-purpose, high-efficiency wireless dispatch communication system that shares resources, costs, channel equipment, and services.
In this embodiment, by joining multiple high-performance non-volatile memories 111 in a cluster, unified management of the high-performance non-volatile memories 111 of different clients 10 is achieved, forming a unified distributed cache. Because these high-performance non-volatile memories 111 are no longer caches private to their own hosts, each client 10 can see all cached data in the high-speed resource buffer pool 11.
The high-speed resource buffer pool 11 is configured, when a client 10 writes data, to receive the data to be written and write it directly into the high-performance non-volatile memory 111 corresponding to that client 10.
Specifically, the present invention uses a write-back mechanism: when the client 10 needs to write data, it writes the data directly into its corresponding high-performance non-volatile memory 111 in the high-speed resource buffer pool 11; that is, the client 10 returns as soon as the data is written into the distributed cache.
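The write-back mechanism can be sketched as below. The class, the dirty-set bookkeeping, and the explicit `flush()` are illustrative assumptions (the patent does not specify when or how the cache drains to the back-end); the point shown is that a write is acknowledged once it reaches the cache, before the back-end store is updated.

```python
class WriteBackCache:
    """Sketch of a write-back cache: writes return once cached;
    the back-end store is updated later by a flush (illustrative)."""

    def __init__(self, backend):
        self.backend = backend
        self.cache = {}
        self.dirty = set()       # keys not yet flushed to the back-end

    def write(self, key, data):
        self.cache[key] = data
        self.dirty.add(key)      # return immediately: back-end untouched

    def flush(self):
        # Drain all dirty entries to the back-end store.
        for key in sorted(self.dirty):
            self.backend[key] = self.cache[key]
        self.dirty.clear()


backend = {}
wb = WriteBackCache(backend)
wb.write("blockA", b"v1")
before_flush = "blockA" in backend   # still absent from the back-end
wb.flush()
after_flush = backend.get("blockA")  # now persisted
```

In the patented scheme the window between write and flush is safe precisely because the cache is a shared, non-volatile pool: other clients read the pool first, so they never observe the stale back-end copy.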
The high-speed resource buffer pool 11 is further configured, when a client 10 reads data, to receive a data read instruction and, according to the instruction, to search each high-performance non-volatile memory 111 for the data to be read carried in the instruction; if the data is found, the found data is read; otherwise, the data to be read is fetched from the back-end server 12.
Specifically, when the client 10 needs to read data, it first reads in the high-speed resource buffer pool 11. Because the high-performance non-volatile memories 111 corresponding to the clients 10 have been joined in a cluster, the memories share their resources and the client 10 can see all cached data. By first attempting the read in the high-speed resource buffer pool 11, the client can determine whether the required data is cached in the pool, and reads from the back-end server 12 only when the pool does not cache the data. Therefore, when different clients 10 operate on the back-end storage 12 at the same time, data consistency is guaranteed, effectively avoiding the problem in traditional schemes where client B reads from the back-end server 12 before the data of client A has been stored there, producing inconsistent data.
In summary, in the distributed cache data management system disclosed by the invention, because the high-performance non-volatile memory 111 is used in the distributed storage as the read/write Cache of the client 10, cached data is retained on the high-performance non-volatile memory 111 during an environmental abnormality; after the environment recovers, the client 10 continues to read data from the high-performance non-volatile memory 111. Compared with RAM, the high-performance non-volatile memory 111 greatly improves the security of data access.
The distributed cache used by the invention manages the high-performance non-volatile memories 111 of different clients 10 uniformly through clustering, forming one high-speed resource buffer pool, so that data consistency is guaranteed when different clients 10 operate on the back-end storage 12 at the same time, effectively avoiding the problem in traditional schemes where client B reads from the back-end server 12 before the data of client A has been stored there, producing inconsistent data.
The invention uses a write-back mechanism: when the client 10 writes data, the data is written directly into the distributed cache and the write returns; when the client 10 reads data and the required data is found in the distributed cache (namely the high-speed resource buffer pool 11), it is read directly from the distributed cache, which effectively shortens the IO stack path and improves overall performance.
It should be noted that in the above embodiment, the high-performance non-volatile memories 111 in the high-speed resource buffer pool 11 are mainly used to store the most frequently accessed data within a preset time period up to the current time, namely hot data.
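One plausible way to identify such hot data is to count accesses inside a sliding time window and cache the most frequent keys. This sketch is an assumption for illustration — the patent does not specify the selection algorithm, and the class name, window length, and explicit timestamps are all invented here.

```python
from collections import Counter, deque

class HotDataTracker:
    """Sketch: rank keys by access frequency within a preset
    time window (illustrative; not taken from the patent)."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()    # (timestamp, key), oldest first

    def record(self, key, now):
        self.events.append((now, key))

    def hottest(self, n, now):
        # Drop accesses that fall outside the preset time period.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()
        counts = Counter(key for _, key in self.events)
        return [key for key, _ in counts.most_common(n)]


tracker = HotDataTracker(window_seconds=60)
tracker.record("blockA", now=0)
tracker.record("blockB", now=50)
tracker.record("blockB", now=55)
hot = tracker.hottest(1, now=65)   # blockA's access at t=0 has expired
```

A real implementation would likely use coarser buckets or decayed counters to bound memory, but the windowed-frequency idea is the same.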
Preferably, the back-end storage 12 includes multiple mechanical hard disks.
In practical applications, the back-end storage 12 is usually composed of large-capacity mechanical hard disks, which are managed together to build a resource pool; volumes of different sizes are carved out of the resource pool and can be applied to a variety of scenarios. For example, the resource pool may be divided into volumes of different sizes such as 10T and 100T; in a traffic-police project, video is stored in the 100T space and pictures are stored in the 10T space.
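The volume-carving idea above can be sketched with simple capacity accounting. The function name, the (name, size) request format, and the pool capacity are assumptions for illustration; a real system would also track placement across disks.

```python
def carve_volumes(pool_capacity_tb, requests_tb):
    """Sketch: carve differently sized volumes out of one back-end
    resource pool built from mechanical hard disks (illustrative)."""
    volumes, free = [], pool_capacity_tb
    for name, size in requests_tb:
        if size > free:
            raise ValueError(f"pool exhausted: {size}T requested, {free}T free")
        volumes.append((name, size))
        free -= size
    return volumes, free


# e.g. a traffic-police project: video in a 100T volume, pictures in 10T
volumes, free = carve_volumes(200, [("video", 100), ("pictures", 10)])
```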
Corresponding to the above system embodiment, the invention also discloses a distributed cache data management method. The method is applied to a high-speed resource buffer pool 11 composed of multiple high-performance non-volatile memories 111 joined in a cluster; each high-performance non-volatile memory 111 corresponds to one client 10 and serves as the read/write cache of that client 10.
The signaling interaction process between the high-speed resource buffer pool 11 and the back-end server 12 is shown in Fig. 3.
As shown in Fig. 3, the signaling interaction process includes:
Step S101: when a client 10 writes data, receiving the data to be written and writing it directly into the high-performance non-volatile memory 111 corresponding to that client 10.
Specifically, the present invention uses a write-back mechanism: when the client 10 needs to write data, it writes the data directly into its corresponding high-performance non-volatile memory 111 in the high-speed resource buffer pool 11; that is, the client 10 returns as soon as the data is written into the distributed cache.
Step S102: when a client 10 reads data, receiving a data read instruction.
Step S103: according to the data read instruction, searching each high-performance non-volatile memory 111 for the data to be read carried in the instruction.
Step S104: if the data is found, reading the found data.
Step S105: otherwise, fetching the data to be read from the back-end server 12.
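Steps S101 to S105 can be condensed into one dispatch function. The dict-based request format is an assumption made for the example; the control flow, however, follows the five steps directly.

```python
def handle_request(pool, backend, request):
    """Sketch of steps S101-S105 as a single dispatch (illustrative)."""
    if request["op"] == "write":           # S101: write lands in the pool
        pool[request["key"]] = request["data"]
        return "ack"
    key = request["key"]                   # S102: read instruction received
    if key in pool:                        # S103: search the buffer pool
        return pool[key]                   # S104: found, read from the pool
    return backend[key]                    # S105: fall back to the back-end


pool, backend = {}, {"blockB": b"archived"}
ack = handle_request(pool, backend, {"op": "write", "key": "blockA", "data": b"fresh"})
hit = handle_request(pool, backend, {"op": "read", "key": "blockA"})
miss = handle_request(pool, backend, {"op": "read", "key": "blockB"})
```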
Specifically, when the client 10 needs to read data, it first reads in the high-speed resource buffer pool 11. Because the high-performance non-volatile memories 111 corresponding to the clients 10 have been joined in a cluster, the memories share their resources and the client 10 can see all cached data. By first attempting the read in the high-speed resource buffer pool 11, the client can determine whether the required data is cached in the pool, and reads from the back-end server 12 only when the pool does not cache the data. Therefore, when different clients 10 operate on the back-end storage 12 at the same time, data consistency is guaranteed, effectively avoiding the problem in traditional schemes where client B reads from the back-end server 12 before the data of client A has been stored there, producing inconsistent data.
In the present invention, the high-performance non-volatile memory 111 is used as the read/write Cache of the client 10. When the client 10 needs to access data in the back-end storage 12, the cache is enabled and the cached data is placed in the high-performance non-volatile memory 111; the client 10 can then read and write data directly in its corresponding high-performance non-volatile memory 111, which shortens the IO stack path and improves data access speed. When an abnormal condition occurs in the environment of the high-performance non-volatile memory 111, the cached data is retained on the high-performance non-volatile memory 111; after the environment recovers, the client 10 continues to read data from the high-performance non-volatile memory 111. Compared with RAM, the high-performance non-volatile memory 111 greatly improves the security of data access.
In this embodiment, by joining multiple high-performance non-volatile memories 111 in a cluster, unified management of the high-performance non-volatile memories 111 of different clients 10 is achieved, forming a unified distributed cache. Because these high-performance non-volatile memories 111 are no longer caches private to their own hosts, each client 10 can see all cached data in the high-speed resource buffer pool 11.
In summary, in the distributed cache data management method disclosed by the invention, because the high-performance non-volatile memory 111 is used in the distributed storage as the read/write Cache of the client 10, cached data is retained on the high-performance non-volatile memory 111 during an environmental abnormality; after the environment recovers, the client 10 continues to read data from the high-performance non-volatile memory 111. Compared with RAM, the high-performance non-volatile memory 111 greatly improves the security of data access.
The distributed cache used by the invention manages the high-performance non-volatile memories 111 of different clients 10 uniformly through clustering, forming one high-speed resource buffer pool, so that data consistency is guaranteed when different clients 10 operate on the back-end storage 12 at the same time, effectively avoiding the problem in traditional schemes where client B reads from the back-end server 12 before the data of client A has been stored there, producing inconsistent data.
The invention uses a write-back mechanism: when the client 10 writes data, the data is written directly into the distributed cache and the write returns; when the client 10 reads data and the required data is found in the distributed cache (namely the high-speed resource buffer pool 11), it is read directly from the distributed cache, which effectively shortens the IO stack path and improves overall performance.
The invention also discloses a distributed cache data management device. The device is applied to a high-speed resource buffer pool 11 composed of multiple high-performance non-volatile memories 111 joined in a cluster; each high-performance non-volatile memory 111 corresponds to one client 10 and serves as the read/write cache of that client 10.
The device includes:
a writing unit, configured, when a client 10 writes data, to receive the data to be written and write it directly into the high-performance non-volatile memory 111 corresponding to that client 10;
specifically, the present invention uses a write-back mechanism: when the client 10 needs to write data, it writes the data directly into its corresponding high-performance non-volatile memory 111 in the high-speed resource buffer pool 11; that is, the client 10 returns as soon as the data is written into the distributed cache;
a receiving unit, configured, when a client 10 reads data, to receive a data read instruction;
a searching unit, configured, according to the data read instruction, to search each high-performance non-volatile memory 111 for the data to be read carried in the instruction;
a first reading unit, configured, when the searching unit finds the data to be read, to read it;
a second reading unit, configured, when the searching unit does not find the data to be read, to fetch it from the back-end server 12.
Specifically, when the client 10 needs to read data, it first reads in the high-speed resource buffer pool 11. Because the high-performance non-volatile memories 111 corresponding to the clients 10 have been joined in a cluster, the memories share their resources and the client 10 can see all cached data. By first attempting the read in the high-speed resource buffer pool 11, the client can determine whether the required data is cached in the pool, and reads from the back-end server 12 only when the pool does not cache the data. Therefore, when different clients 10 operate on the back-end storage 12 at the same time, data consistency is guaranteed, effectively avoiding the problem in traditional schemes where client B reads from the back-end server 12 before the data of client A has been stored there, producing inconsistent data.
In summary, in the distributed cache data management device disclosed by the invention, because the high-performance non-volatile memory 111 is used in the distributed storage as the read/write Cache of the client 10, cached data is retained on the high-performance non-volatile memory 111 during an environmental abnormality; after the environment recovers, the client 10 continues to read data from the high-performance non-volatile memory 111. Compared with RAM, the high-performance non-volatile memory 111 greatly improves the security of data access.
The distributed cache used by the invention manages the high-performance non-volatile memories 111 of different clients 10 uniformly through clustering, forming one high-speed resource buffer pool, so that data consistency is guaranteed when different clients 10 operate on the back-end storage 12 at the same time, effectively avoiding the problem in traditional schemes where client B reads from the back-end server 12 before the data of client A has been stored there, producing inconsistent data.
The invention uses a write-back mechanism: when the client 10 writes data, the data is written directly into the distributed cache and the write returns; when the client 10 reads data and the required data is found in the distributed cache (namely the high-speed resource buffer pool 11), it is read directly from the distributed cache, which effectively shortens the IO stack path and improves overall performance.
It should be noted that in the device embodiment, the working principle of each component can be found in the corresponding parts of the method embodiment and the system embodiment, and is not repeated here.
Finally, it should also be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar among the embodiments, reference may be made to one another.
The foregoing description of the disclosed embodiments enables those skilled in the art to make or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (5)
1. A distributed cache data management system, applied to a server end, comprising:
a high-speed cache resource pool, composed of a plurality of high-performance nonvolatile memories in a cluster mode, each of the high-performance nonvolatile memories corresponding to one client and serving as a read/write cache memory of that client;
and a back-end memory connected with the high-speed cache resource pool;
wherein the high-speed cache resource pool is configured to, when a client writes data, receive data to be written and write the data to be written directly into the high-performance nonvolatile memory corresponding to that client;
and the high-speed cache resource pool is further configured to, when a client reads data, receive a data read instruction and, according to the data read instruction, search each high-performance nonvolatile memory for the data to be read carried in the read instruction; if found, read the data to be read; if not, read the data to be read from the back-end server.
2. The distributed cache data management system according to claim 1, wherein the back-end memory comprises a plurality of mechanical hard disks.
3. The distributed cache data management system according to claim 1, wherein the high-performance nonvolatile memory is configured to store the most frequently accessed data within a preset time period up to the current time.
4. A distributed cache data management method, applied to a high-speed cache resource pool, the high-speed cache resource pool being composed of a plurality of high-performance nonvolatile memories in a cluster mode, each of the high-performance nonvolatile memories corresponding to one client and serving as a read/write cache memory of that client;
the method comprising:
when a client writes data, receiving data to be written and writing the data to be written directly into the high-performance nonvolatile memory corresponding to that client;
when a client reads data, receiving a data read instruction;
according to the data read instruction, searching each high-performance nonvolatile memory for the data to be read carried in the read instruction;
if found, reading the data to be read;
if not, reading the data to be read from the back-end server.
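The steps of the method above can be sketched as follows (an illustrative outline only, not the claimed implementation; class and method names are hypothetical, and Python dictionaries stand in for the nonvolatile memories and the back-end server):

```python
class CachePool:
    """Hypothetical sketch of the claimed method: each client maps to its own
    high-performance nonvolatile memory within the shared pool, and a read
    searches every memory in the pool before falling back to the back-end
    server."""

    def __init__(self, clients, backend):
        # one cache per client, standing in for one nonvolatile memory each
        self.nvm = {c: {} for c in clients}
        self.backend = backend  # stands in for the back-end server

    def handle_write(self, client, key, value):
        # write directly into the memory corresponding to this client
        self.nvm[client][key] = value

    def handle_read(self, client, key):
        # search each high-performance nonvolatile memory in the pool
        for mem in self.nvm.values():
            if key in mem:
                return mem[key]
        # not cached anywhere in the pool: read from the back-end server
        return self.backend[key]

pool = CachePool(["A", "B"], backend={"x": 0})
pool.handle_write("A", "y", 42)
print(pool.handle_read("B", "y"))  # B sees data written by A via the shared pool
print(pool.handle_read("A", "x"))  # pool miss: served from the back-end server
```

Searching the whole pool rather than only the reader's own memory is what lets client B observe client A's recent writes, which is the consistency property the description emphasizes.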
5. A distributed cache data management apparatus, applied to a high-speed cache resource pool, the high-speed cache resource pool being composed of a plurality of high-performance nonvolatile memories in a cluster mode, each of the high-performance nonvolatile memories corresponding to one client and serving as a read/write cache memory of that client;
the apparatus comprising:
a writing unit, configured to, when a client writes data, receive data to be written and write the data to be written directly into the high-performance nonvolatile memory corresponding to that client;
a receiving unit, configured to, when a client reads data, receive a data read instruction;
a searching unit, configured to search, according to the data read instruction, each high-performance nonvolatile memory for the data to be read carried in the read instruction;
a first reading unit, configured to read the data to be read when the searching unit finds the data to be read;
a second reading unit, configured to read the data to be read from the back-end server when the searching unit does not find the data to be read.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710637356.XA CN107241444B (en) | 2017-07-31 | 2017-07-31 | Distributed cache data management system, method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107241444A true CN107241444A (en) | 2017-10-10 |
CN107241444B CN107241444B (en) | 2020-07-07 |
Family
ID=59989468
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710637356.XA Active CN107241444B (en) | 2017-07-31 | 2017-07-31 | Distributed cache data management system, method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107241444B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101739220A (en) * | 2009-02-25 | 2010-06-16 | 浪潮电子信息产业股份有限公司 | Method for designing multi-controller memory array |
CN102262512A (en) * | 2011-07-21 | 2011-11-30 | 浪潮(北京)电子信息产业有限公司 | System, device and method for realizing disk array cache partition management |
CN105657057A (en) * | 2012-12-31 | 2016-06-08 | 华为技术有限公司 | Calculation and storage fused cluster system |
CN106406764A (en) * | 2016-09-21 | 2017-02-15 | 郑州云海信息技术有限公司 | A high-efficiency data access system and method for distributed SAN block storage |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108549584A (en) * | 2018-01-25 | 2018-09-18 | 北京奇艺世纪科技有限公司 | A kind of server-side gray scale down method and device |
CN108549584B (en) * | 2018-01-25 | 2020-11-27 | 北京奇艺世纪科技有限公司 | Method and device for degrading gray level of server side |
CN109726191A (en) * | 2018-12-12 | 2019-05-07 | 中国联合网络通信集团有限公司 | A kind of processing method and system across company-data, storage medium |
CN109726191B (en) * | 2018-12-12 | 2021-02-02 | 中国联合网络通信集团有限公司 | Cross-cluster data processing method and system and storage medium |
CN112328513A (en) * | 2020-10-14 | 2021-02-05 | 合肥芯碁微电子装备股份有限公司 | Scanning type exposure system and data caching and scheduling method and device thereof |
CN112328513B (en) * | 2020-10-14 | 2024-02-02 | 合肥芯碁微电子装备股份有限公司 | Scanning exposure system and data caching and scheduling method and device thereof |
CN112214178A (en) * | 2020-11-13 | 2021-01-12 | 新华三大数据技术有限公司 | Storage system, data reading method and data writing method |
CN112214178B (en) * | 2020-11-13 | 2022-08-19 | 新华三大数据技术有限公司 | Storage system, data reading method and data writing method |
CN112764690A (en) * | 2021-02-03 | 2021-05-07 | 北京同有飞骥科技股份有限公司 | Distributed storage system |
Also Published As
Publication number | Publication date |
---|---|
CN107241444B (en) | 2020-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107241444A (en) | A kind of distributed caching data management system, method and device | |
US10956601B2 (en) | Fully managed account level blob data encryption in a distributed storage environment | |
US10789215B1 (en) | Log-structured storage systems | |
EP3673376B1 (en) | Log-structured storage systems | |
US10530888B2 (en) | Cached data expiration and refresh | |
US11294881B2 (en) | Log-structured storage systems | |
US10659225B2 (en) | Encrypting existing live unencrypted data using age-based garbage collection | |
US7010617B2 (en) | Cluster configuration repository | |
US11423015B2 (en) | Log-structured storage systems | |
US10885022B1 (en) | Log-structured storage systems | |
US11074017B2 (en) | Log-structured storage systems | |
US20190007206A1 (en) | Encrypting object index in a distributed storage environment | |
US11422728B2 (en) | Log-structured storage systems | |
CN104765575B (en) | information storage processing method | |
US20200145374A1 (en) | Scalable cloud hosted metadata service | |
US20150205819A1 (en) | Techniques for optimizing data flows in hybrid cloud storage systems | |
US10903981B1 (en) | Log-structured storage systems | |
CN103270499B (en) | log storing method and system | |
CN104537112B (en) | A kind of method of safe cloud computing | |
US10579597B1 (en) | Data-tiering service with multiple cold tier quality of service levels | |
US10942852B1 (en) | Log-structured storage systems | |
CN109725842A (en) | Accelerate random writing layout with the system and method for mixing the distribution of the bucket in storage system | |
CN103595799A (en) | Method for achieving distributed shared data bank | |
CN101673271A (en) | Distributed file system and file sharding method thereof | |
CN104079600B (en) | File memory method, device, access client and meta data server system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||