CN101788887A - System and method of I/O cache stream based on database in disk array - Google Patents

System and method of I/O cache stream based on database in disk array

Info

Publication number
CN101788887A
Authority
CN
China
Prior art keywords
read, data, request, management module, cache management
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201010108204A
Other languages
Chinese (zh)
Inventor
吕烁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Beijing Electronic Information Industry Co Ltd
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd filed Critical Inspur Beijing Electronic Information Industry Co Ltd
Priority to CN201010108204A priority Critical patent/CN101788887A/en
Publication of CN101788887A publication Critical patent/CN101788887A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a method and a system for an I/O cache stream based on a database in a disk array. In the method, a cache management module receives an I/O request generated by a database operation and searches itself for data according to the received I/O request; if the corresponding data is found, it is returned to the database; otherwise, the cache management module directs an underlying redundant array of independent disks (RAID) module to read the data according to the I/O request, caches the data read by the underlying RAID module, and returns the read data to the database. The cache management module also sends the descriptive information of the received I/O request to a read-ahead module; after receiving that information, the read-ahead module prefetches data from the underlying RAID module according to a preset read-ahead policy and caches the prefetched data in the cache management module. The method improves the query response speed of the database server.

Description

System and method for an I/O cache stream based on a database in a disk array
Technical field
The present invention relates to an I/O cache stream technology in a disk array, generally intended for database applications built on disk arrays.
Background art
With the continuous development of network applications and e-commerce, the traffic of every website keeps growing and the scale of databases expands with it, so the performance problems of database systems become increasingly prominent; if user requests are processed too slowly, normal use is severely affected.
Common enterprise database applications are built on disk arrays. How to provide, within the disk array, a scheme that can be applied to a database application system to improve the performance of the database system is therefore a challenge posed by the sharp growth in transaction volume.
Summary of the invention
The technical problem to be solved by the present invention is to propose a method and a system for an I/O cache stream based on a database in a disk array, which can improve the query response speed of a database server and shorten the service response time.
To solve the above technical problem, the invention provides a system for an I/O cache stream based on a database in a disk array, comprising an interface management module, a cache management module, a read-ahead module, and an underlying redundant array of independent disks (RAID) module. The interface management module is connected to the cache management module and the read-ahead module respectively; the cache management module is connected to the database, the read-ahead module, and the underlying RAID module respectively; and the read-ahead module is also connected to the underlying RAID module, wherein:
the interface management module is configured to set one or more of the cache capacity of the cache management module, the read-ahead window size of the read-ahead module, and the maximum read-ahead size;
the cache management module is configured to receive the I/O requests generated by database operations and to search itself for the data corresponding to each I/O request: if the data is found, it is returned to the database; if not, the underlying RAID module is directed to perform the data read according to the I/O request, the data read by the underlying RAID module is cached, and the read data is returned to the database; the cache management module also sends the descriptive information of each received I/O request to the read-ahead module, and receives and caches the prefetched data sent by the read-ahead module;
the read-ahead module is configured, after receiving the descriptive information of an I/O request from the cache management module, to prefetch data from the underlying RAID module according to a preset read-ahead policy and to send the prefetched data to the cache management module;
the underlying RAID module is configured to process the database's I/O requests under the control of the cache management module and to return the read data to the cache management module.
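To make the division of labor concrete, the following is a minimal C sketch of how the four modules and their connections could be represented; all structure and field names are hypothetical, since the invention defines the modules only by their roles and links.

    #include <stdio.h>

    struct raid_module      { const char *name; };            /* kernel layer */
    struct readahead_module { struct raid_module *raid; };    /* kernel layer */
    struct cache_module     { struct raid_module *raid;
                              struct readahead_module *ra; }; /* kernel layer */
    struct iface_module {                                     /* application layer */
        size_t cache_capacity;   /* configures the cache management module */
        size_t window_size;      /* configures the read-ahead window */
        size_t max_readahead;    /* upper bound on window_size */
        struct cache_module *cache;
        struct readahead_module *ra;
    };

    int main(void)
    {
        struct raid_module raid = { "bottom raid" };
        struct readahead_module ra = { &raid };
        struct cache_module cache = { &raid, &ra };
        struct iface_module iface = { 1 << 20, 4000000, 16000000, &cache, &ra };
        printf("window=%zu max=%zu\n", iface.window_size, iface.max_readahead);
        return 0;
    }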
Further, the above system may also have the following feature:
the preset read-ahead policy comprises: the read-ahead module maintains a history; after receiving the descriptive information of an I/O request from the cache management module, it saves that information into the history and performs pattern recognition on the descriptive information of the I/O request and at least its last preceding history record; if the data corresponding to the I/O request and to its at least one preceding history record can form a sequential data stream or a reverse-order data stream, the read-ahead module prefetches data from the underlying RAID module according to the identified pattern.
Further, the above system may also have the following feature:
the read-ahead module, when prefetching data, first identifies the start position and offset of the data corresponding to the I/O request according to the descriptive information of the received I/O request, then obtains the start position of the data to be prefetched according to the identified pattern, and finally prefetches data from that start position according to the identified pattern.
Further, the above system may also have the following feature:
the cache management module, when searching for data according to an I/O request, does so after the read-ahead module has identified the start position and offset of the data corresponding to the I/O request, and performs the search in the cache management module according to that start position and offset.
Further, the above system may also have the following feature:
the interface management module belongs to the application layer, while the cache management module, the read-ahead module, and the underlying RAID module belong to the kernel layer.
To solve the above technical problem, the invention also proposes a method for an I/O cache stream based on a database in a disk array, comprising:
a cache management module receives an I/O request generated by a database operation and searches itself for data according to the received I/O request; if the corresponding data is found, it is returned to the database; otherwise, an underlying redundant array of independent disks (RAID) module is directed to perform the data read according to the I/O request, the data read by the underlying RAID module is cached, and the read data is returned to the database; and
the cache management module sends the descriptive information of the received I/O request to a read-ahead module; after receiving the descriptive information of the I/O request, the read-ahead module prefetches data from the underlying RAID module according to a preset read-ahead policy and caches the prefetched data in the cache management module.
Further, the above method may also have the following feature:
the preset read-ahead policy comprises: the read-ahead module maintains a history; after receiving the descriptive information of an I/O request from the cache management module, it saves that information into the history and performs pattern recognition on the descriptive information of the I/O request and at least its last preceding history record; if the data corresponding to the I/O request and to its at least one preceding history record can form a sequential data stream or a reverse-order data stream, data is prefetched from the underlying RAID module according to the identified pattern.
Further, the above method may also have the following feature:
the read-ahead module, when prefetching data, first identifies the start position and offset of the data corresponding to the I/O request according to the descriptive information of the received I/O request, then obtains the start position of the data to be prefetched according to the identified pattern, and finally prefetches data from that start position according to the identified pattern.
Further, the above method may also have the following feature:
the cache management module, when searching for data according to an I/O request, does so after the read-ahead module has identified the start position and offset of the data corresponding to the I/O request, and performs the search in the cache management module according to that start position and offset.
Further, the above method may also have the following feature:
one or more of the cache capacity of the cache management module, the read-ahead window size of the read-ahead module, and the maximum read-ahead size are adjusted according to user configuration.
The method and system for an I/O cache stream based on a database in a disk array proposed by the present invention can improve the query response speed of a database server, shorten the service response time, and significantly improve system performance, thereby meeting the response-time challenge posed by an ever-growing number of users.
Description of drawings
Fig. 1 is a block diagram of a system for an I/O cache stream based on a database in a disk array according to an embodiment of the invention;
Fig. 2 is a flowchart of a method for an I/O cache stream based on a database in a disk array according to an embodiment of the invention.
Embodiment
When a database performs data operations for storage and management, such as queries or modifications, it generates corresponding I/O requests. The database may typically be Oracle, DB2, SQL Server, or the like. To speed up the response to the I/O requests generated by database operations, the embodiment of the invention provides a system and method for an I/O cache stream based on a database. Its basic idea is as follows: by means of an effective read-ahead policy, data that is likely to be read is prefetched and cached in advance, so that when an I/O request arrives it can be answered quickly by a cache lookup and fed back to the application layer.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, the figure shows a system for an I/O cache stream based on a database in a disk array according to an embodiment of the invention, comprising an interface management module, a cache management module, a read-ahead module, and an underlying redundant array of independent disks (RAID) module, wherein:
the interface management module is connected to the cache management module and the read-ahead module respectively, and is configured to set one or more of the cache capacity of the cache management module, the read-ahead window size of the read-ahead module, and the maximum read-ahead size;
the cache management module is connected to the read-ahead module and the underlying RAID module respectively, and is configured to receive the I/O requests generated by database operations and to search itself for the data corresponding to each I/O request: if the data is found, it is returned to the database; if not, the underlying RAID module is directed to read the data corresponding to the I/O request, the data read by the underlying RAID module is cached, and that data is returned to the database; the cache management module also sends the descriptive information of each received I/O request to the read-ahead module, and receives and caches the prefetched data sent by the read-ahead module;
the read-ahead module is connected to the cache management module and the underlying RAID module respectively, and is configured, after receiving the descriptive information of an I/O request from the cache management module, to prefetch data according to a preset read-ahead policy and to send the prefetched data to the cache management module.
The preset read-ahead policy comprises: the read-ahead module maintains a history; after receiving the descriptive information of an I/O request from the cache management module, it saves that information into the history and performs pattern recognition on the descriptive information of the I/O request and at least its last preceding history record; if the data corresponding to the I/O request and to its at least one preceding history record can form a sequential data stream or a reverse-order data stream, data is prefetched from the underlying RAID module according to the identified pattern. For example, if the current I/O request reads the data of blocks 40-49 and the previous history record of the I/O request read the data of blocks 30-39, the two reads cover blocks 30-39 and then 40-49 in succession and can form a sequential data stream; the next read is then likely to be a sequential segment starting from block 50, so such a segment can be prefetched and saved in the cache management module. In that way, if the next I/O request reads the data of blocks 50-59, the data can be found directly in the cache management module without a bottom-layer I/O operation, which speeds up the data search. As another example, if the current I/O request reads the data of blocks 39-30 and the previous history record of the I/O request read the data of blocks 49-40, the two reads cover blocks 49-40 and then 39-30 in succession and can form a reverse-order data stream; the next read is then likely to be a segment running backward from block 29, so such a segment can be prefetched and saved in the cache management module. In that way, if the next I/O request reads the data of blocks 29-20, the data can be found directly in the cache management module without a bottom-layer I/O operation, which likewise speeds up the data search. A code sketch of this pattern recognition follows.
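The following is a minimal sketch, in C, of the sequential/reverse-order recognition and prefetch-start computation described above; the descriptor layout (start block plus length) and all names are assumptions for illustration, since the invention does not prescribe a concrete data structure.

    #include <stdio.h>

    typedef struct {
        long start;  /* first block covered by the request */
        long len;    /* number of blocks read */
    } io_desc;

    typedef enum { PAT_NONE, PAT_SEQUENTIAL, PAT_REVERSE } pattern;

    /* Compare the current request with the most recent history record. */
    static pattern recognize(const io_desc *prev, const io_desc *cur)
    {
        if (cur->start == prev->start + prev->len)
            return PAT_SEQUENTIAL;          /* e.g. 30-39 then 40-49 */
        if (cur->start + cur->len == prev->start)
            return PAT_REVERSE;             /* e.g. 49-40 then 39-30 */
        return PAT_NONE;
    }

    /* Start block of the prefetch segment, given the identified pattern
       and the configured read-ahead window (in blocks). */
    static long prefetch_start(const io_desc *cur, pattern p, long window)
    {
        if (p == PAT_SEQUENTIAL)
            return cur->start + cur->len;   /* continue forward, e.g. from 50 */
        if (p == PAT_REVERSE)
            return cur->start - window;     /* continue backward, e.g. 20-29 */
        return -1;                          /* no stream: do not prefetch */
    }

    int main(void)
    {
        io_desc prev = { 30, 10 }, cur = { 40, 10 };  /* blocks 30-39, 40-49 */
        pattern p = recognize(&prev, &cur);
        printf("pattern=%d prefetch from block %ld\n",
               p, prefetch_start(&cur, p, 10));       /* prefetches 50-59 */
        return 0;
    }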
When prefetching data, the read-ahead module reads a data length equal to the read-ahead window size configured by the interface management module; the read-ahead window size does not exceed the maximum read-ahead size configured by the interface management module.
When prefetching data, the read-ahead module first identifies the start position and offset of the data corresponding to the I/O request according to the descriptive information of the received I/O request, then obtains the start position of the data to be prefetched according to the identified pattern, and finally prefetches data from that start position according to the identified pattern. When searching for data according to an I/O request, the cache management module may, after the read-ahead module has identified the start position and offset of the data corresponding to the I/O request, perform the search in the cache management module according to that start position and offset.
The underlying RAID module is configured to process the database's I/O requests under the control of the cache management module and to return the read data to the cache management module.
The interface management module and the database belong to the application layer, while the cache management module, the read-ahead module, and the underlying RAID module belong to the kernel layer.
Referring to Fig. 2, the figure shows a method for an I/O cache stream based on a database in a disk array according to an embodiment of the invention, comprising the following steps:
Step S201: the database performs an operation and generates a corresponding I/O request;
the I/O request is produced by the block device layer of the operating system when the database performs an operation such as a query or a modification;
Step S202: a cache management module receives the I/O request generated by the database operation and first searches itself for data according to the I/O request; if the corresponding data is found, it is returned to the database; otherwise, the underlying RAID module is directed to perform the data read according to the I/O request, the data read by the underlying RAID module is cached, and the read data is returned to the database;
Step S203: the cache management module sends the descriptive information of the received I/O request to a read-ahead module; after receiving the descriptive information of the I/O request, the read-ahead module prefetches data from the underlying RAID module according to a preset read-ahead policy and caches the prefetched data in the cache management module. A sketch of this read path is given below.
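Below is a minimal sketch of the read path of steps S202-S203, assuming a toy direct-mapped cache of single blocks; cache_read, raid_read, readahead_notify, and the slot layout are illustrative stand-ins, not the invention's actual implementation.

    #include <stdio.h>

    #define CACHE_SLOTS 8
    #define BLOCK 64

    typedef struct {
        long start;              /* block number cached in this slot */
        int  valid;
        char data[BLOCK];
    } cache_slot;

    static cache_slot cache[CACHE_SLOTS];

    /* Stand-in for the underlying RAID module read. */
    static void raid_read(long start, char *buf)
    {
        snprintf(buf, BLOCK, "block %ld from raid", start);
    }

    /* Step S203 stand-in: forward the request descriptor to the
       read-ahead module (here merely logged). */
    static void readahead_notify(long start)
    {
        printf("readahead: saw request for block %ld\n", start);
    }

    /* Step S202: search the cache first; on a miss, direct the RAID
       module to read, cache the result, and return it. */
    static const char *cache_read(long start)
    {
        int slot = (int)(start % CACHE_SLOTS);
        readahead_notify(start);                 /* descriptor to read-ahead */
        if (cache[slot].valid && cache[slot].start == start)
            return cache[slot].data;             /* hit: return directly */
        raid_read(start, cache[slot].data);      /* miss: bottom-layer read */
        cache[slot].start = start;
        cache[slot].valid = 1;
        return cache[slot].data;
    }

    int main(void)
    {
        printf("%s\n", cache_read(40));  /* miss: served via raid, then cached */
        printf("%s\n", cache_read(40));  /* hit: served from the cache */
        return 0;
    }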
The preset read-ahead policy comprises: the read-ahead module maintains a history; after receiving the descriptive information of an I/O request from the cache management module, it saves that information into the history and performs pattern recognition on the descriptive information of the I/O request and at least its last preceding history record; if the data corresponding to the I/O request and to its at least one preceding history record can form a sequential data stream or a reverse-order data stream, data is prefetched from the underlying RAID module according to the identified pattern.
For example, if the current I/O request reads the data of blocks 40-49 and the previous history record of the I/O request read the data of blocks 30-39, the two reads cover blocks 30-39 and then 40-49 in succession and can form a sequential data stream; the next read is then likely to be a sequential segment starting from block 50, so such a segment can be prefetched and saved in the cache management module. In that way, if the next I/O request looks up the data of blocks 50-59, it can be found directly in the cache management module without a bottom-layer operation, which speeds up the data search. When prefetching, the data length read is the read-ahead window size configured by the interface management module, and this window size does not exceed the maximum read-ahead size configured by the interface management module.
As another example, if the current I/O request reads the data of blocks 39-30 and the previous history record of the I/O request read the data of blocks 49-40, the two reads cover blocks 49-40 and then 39-30 in succession and can form a reverse-order data stream; the next read is then likely to be a segment running backward from block 29, so such a segment can be prefetched and saved in the cache management module. In that way, if the next I/O request reads the data of blocks 29-20, it can be found directly in the cache management module without a bottom-layer I/O operation, which likewise speeds up the data search.
One or more of the cache capacity of the cache management module, the read-ahead window size of the read-ahead module, and the maximum read-ahead size can be adjusted according to user configuration. For example, when prefetching data the read-ahead window can be enlarged; the default setting is 4,000,000. By enlarging the read-ahead window, the data that the database application will need in the coming period is already in the buffer, so the application obtains the data it needs directly from the buffer instead of waiting on disk I/O; this can greatly improve the performance of the database system. A sketch of such window tuning follows.
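The following is a minimal sketch of such window tuning, assuming the window grows on each confirmed stream hit and is clamped to the configured maximum read-ahead size; the doubling policy and the 16,000,000 maximum are assumptions for illustration, while the 4,000,000 default comes from the text (its unit is not specified there).

    #include <stdio.h>

    #define DEFAULT_WINDOW 4000000L  /* default read-ahead window per the text */

    /* Grow the read-ahead window on each confirmed stream hit, but never
       past the user-configured maximum read-ahead size. */
    static long next_window(long window, long max_readahead)
    {
        long grown = window * 2;     /* assumed doubling policy */
        return grown > max_readahead ? max_readahead : grown;
    }

    int main(void)
    {
        long max_readahead = 16000000L;  /* example user configuration */
        long w = DEFAULT_WINDOW;
        for (int i = 0; i < 4; i++) {
            printf("window=%ld\n", w);
            w = next_window(w, max_readahead);
        }
        return 0;
    }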
When searching for data according to an I/O request, the cache management module may first call the read-ahead module to identify the start position and offset of the data corresponding to the I/O request, and then perform the search in the cache management module according to that start position and offset.
The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the invention may be subject to various changes and variations. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A system for an I/O cache stream based on a database in a disk array, characterized by comprising an interface management module, a cache management module, a read-ahead module, and an underlying redundant array of independent disks (RAID) module, wherein the interface management module is connected to the cache management module and the read-ahead module respectively, the cache management module is connected to the database, the read-ahead module, and the underlying RAID module respectively, and the read-ahead module is also connected to the underlying RAID module, wherein:
the interface management module is configured to set one or more of the cache capacity of the cache management module, the read-ahead window size of the read-ahead module, and the maximum read-ahead size;
the cache management module is configured to receive the I/O requests generated by database operations and to search itself for the data corresponding to each I/O request: if the data is found, it is returned to the database; if not, the underlying RAID module is directed to perform the data read according to the I/O request, the data read by the underlying RAID module is cached, and the read data is returned to the database; the cache management module also sends the descriptive information of each received I/O request to the read-ahead module, and receives and caches the prefetched data sent by the read-ahead module;
the read-ahead module is configured, after receiving the descriptive information of an I/O request from the cache management module, to prefetch data from the underlying RAID module according to a preset read-ahead policy and to send the prefetched data to the cache management module;
the underlying RAID module is configured to process the database's I/O requests under the control of the cache management module and to return the read data to the cache management module.
2. The system as claimed in claim 1, characterized in that:
the preset read-ahead policy comprises: the read-ahead module maintains a history; after receiving the descriptive information of an I/O request from the cache management module, it saves that information into the history and performs pattern recognition on the descriptive information of the I/O request and at least its last preceding history record; if the data corresponding to the I/O request and to its at least one preceding history record can form a sequential data stream or a reverse-order data stream, data is prefetched from the underlying RAID module according to the identified pattern.
3. The system as claimed in claim 1 or 2, characterized in that:
the read-ahead module, when prefetching data, first identifies the start position and offset of the data corresponding to the I/O request according to the descriptive information of the received I/O request, then obtains the start position of the data to be prefetched according to the identified pattern, and finally prefetches data from that start position according to the identified pattern.
4. The system as claimed in claim 3, characterized in that:
the cache management module, when searching for data according to an I/O request, does so after the read-ahead module has identified the start position and offset of the data corresponding to the I/O request, and performs the search in the cache management module according to that start position and offset.
5. The system as claimed in claim 1, characterized in that:
the interface management module belongs to the application layer, while the cache management module, the read-ahead module, and the underlying RAID module belong to the kernel layer.
6. A method for an I/O cache stream based on a database in a disk array, characterized by comprising:
a cache management module receives an I/O request generated by a database operation and searches itself for data according to the received I/O request; if the corresponding data is found, it is returned to the database; otherwise, an underlying redundant array of independent disks (RAID) module is directed to perform the data read according to the I/O request, the data read by the underlying RAID module is cached, and the read data is returned to the database; and
the cache management module sends the descriptive information of the received I/O request to a read-ahead module; after receiving the descriptive information of the I/O request, the read-ahead module prefetches data from the underlying RAID module according to a preset read-ahead policy and caches the prefetched data in the cache management module.
7. The method as claimed in claim 6, characterized in that:
the preset read-ahead policy comprises: the read-ahead module maintains a history; after receiving the descriptive information of an I/O request from the cache management module, it saves that information into the history and performs pattern recognition on the descriptive information of the I/O request and at least its last preceding history record; if the data corresponding to the I/O request and to its at least one preceding history record can form a sequential data stream or a reverse-order data stream, data is prefetched from the underlying RAID module according to the identified pattern.
8. The method as claimed in claim 6 or 7, characterized in that:
the read-ahead module, when prefetching data, first identifies the start position and offset of the data corresponding to the I/O request according to the descriptive information of the received I/O request, then obtains the start position of the data to be prefetched according to the identified pattern, and finally prefetches data from that start position according to the identified pattern.
9. The method as claimed in claim 8, characterized in that:
the cache management module, when searching for data according to an I/O request, does so after the read-ahead module has identified the start position and offset of the data corresponding to the I/O request, and performs the search in the cache management module according to that start position and offset.
10. The method as claimed in claim 6, characterized in that:
one or more of the cache capacity of the cache management module, the read-ahead window size of the read-ahead module, and the maximum read-ahead size are adjusted according to user configuration.
CN201010108204A 2010-02-05 2010-02-05 System and method of I/O cache stream based on database in disk array Pending CN101788887A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010108204A CN101788887A (en) 2010-02-05 2010-02-05 System and method of I/O cache stream based on database in disk array

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010108204A CN101788887A (en) 2010-02-05 2010-02-05 System and method of I/O cache stream based on database in disk array

Publications (1)

Publication Number Publication Date
CN101788887A true CN101788887A (en) 2010-07-28

Family

ID=42532116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010108204A Pending CN101788887A (en) 2010-02-05 2010-02-05 System and method of I/O cache stream based on database in disk array

Country Status (1)

Country Link
CN (1) CN101788887A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930472A (en) * 2010-09-09 2010-12-29 南京中兴特种软件有限责任公司 Parallel query method for distributed database
CN102073463A (en) * 2010-12-28 2011-05-25 创新科存储技术有限公司 Flow prediction method and device, and prereading control method and device
CN102073463B (en) * 2010-12-28 2012-08-22 创新科存储技术有限公司 Flow prediction method and device, and prereading control method and device
CN102904923A (en) * 2012-06-21 2013-01-30 华数传媒网络有限公司 Data reading method and data reading system capable of relieving disk reading bottleneck
CN102904923B (en) * 2012-06-21 2016-01-06 华数传媒网络有限公司 A kind of method and system alleviating the digital independent of disk reading bottleneck
CN105487987B (en) * 2015-11-20 2018-09-11 深圳市迪菲特科技股份有限公司 A kind of concurrent sequence of processing reads the method and device of IO
CN105487987A (en) * 2015-11-20 2016-04-13 深圳市迪菲特科技股份有限公司 Method and device for processing concurrent sequential reading IO (Input/Output)
CN106681939A (en) * 2017-01-03 2017-05-17 北京华胜信泰数据技术有限公司 Reading method and device for disk page
CN106681939B (en) * 2017-01-03 2019-08-23 北京华胜信泰数据技术有限公司 Reading method and device for disk page
CN107273053A (en) * 2017-06-22 2017-10-20 郑州云海信息技术有限公司 A kind of method and apparatus of digital independent
CN113609093A (en) * 2021-06-30 2021-11-05 济南浪潮数据技术有限公司 Reverse order reading method, system and related device of distributed file system
CN113609093B (en) * 2021-06-30 2023-12-22 济南浪潮数据技术有限公司 Reverse order reading method, system and related device of distributed file system
CN114442948A (en) * 2022-01-14 2022-05-06 济南浪潮数据技术有限公司 Method, device and equipment for pre-reading storage system and storage medium

Similar Documents

Publication Publication Date Title
CN101788887A (en) System and method of I/O cache stream based on database in disk array
US10198356B2 (en) Distributed cache nodes to send redo log records and receive acknowledgments to satisfy a write quorum requirement
CA2910211C (en) Object storage using multiple dimensions of object information
CN100583096C (en) Methods for managing deletion of data
CN110262922B (en) Erasure code updating method and system based on duplicate data log
US20140337484A1 (en) Server side data cache system
JP2020038623A (en) Method, device, and system for storing data
US20080177803A1 (en) Log Driven Storage Controller with Network Persistent Memory
US9817587B1 (en) Memory-based on-demand data page generation
US9182912B2 (en) Method to allow storage cache acceleration when the slow tier is on independent controller
CN107291889A (en) A kind of date storage method and system
TW201140430A (en) Allocating storage memory based on future use estimates
JP2016512906A (en) Multi-layer storage management for flexible data placement
WO2015154352A1 (en) Data migration method and device for distributed file system, and metadata server
CN103516549B (en) A kind of file system metadata log mechanism based on shared object storage
CN109598156A (en) Engine snapshot stream method is redirected when one kind is write
CN103678523A (en) Distributed cache data access method and device
CN101147118A (en) Methods and apparatus for reconfiguring a storage system
US11422721B2 (en) Data storage scheme switching in a distributed data storage system
US20230176791A1 (en) Metadata compaction
CN114265814A (en) Data lake file system based on object storage
CN110704431A (en) Hierarchical storage management method for mass data
US11099983B2 (en) Consolidating temporally-related data within log-based storage
CN103491124A (en) Method for processing multimedia message data and distributed cache system
CN204102026U (en) Large database concept all-in-one

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20100728