CN105138292A - Disk data reading method - Google Patents

Disk data reading method

Info

Publication number
CN105138292A
CN105138292A (application CN201510562824.2A)
Authority
CN
China
Prior art keywords
cache
data
level
disk
level cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510562824.2A
Other languages
Chinese (zh)
Inventor
陈虹宇
罗阳
苗宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SICHUAN SHENHU TECHNOLOGY Co Ltd
Original Assignee
SICHUAN SHENHU TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SICHUAN SHENHU TECHNOLOGY Co Ltd
Priority to CN201510562824.2A
Publication of CN105138292A
Legal status: Pending

Abstract

The invention provides a disk data reading method, comprising the steps in which a user sends a data I/O request to a server, the server sends a hardware command to a storage system, and the two-level cache structure of the storage system processes the I/O request. The method optimizes the cache response time by improving the cache architecture and the read/write flow, and is suitable for data reading in large-scale storage systems.

Description

Disk data reading method
Technical field
The present invention relates to data storage, and in particular to a disk data reading method.
Background art
With the development of information technology, computing systems depend more and more on the performance of storage systems. In the era centered on computing technology, data volumes were small and I/O problems were often ignored; what people paid more attention to was the performance gap between the processor and the memory. With the development of network storage and the shift from computation-centric to data-centric systems, massive amounts of data are stored on disks, I/O problems have gradually received attention, and the performance gap between the processor and the disk has become apparent. Because the speed of accessing data on disk is far lower than the speed at which the processor computes, the I/O bottleneck has become the main bottleneck limiting system performance. In some service-oriented applications, such as e-commerce, search engines and social networks, the data volumes reach the PB level; a single-level cache cannot meet their performance requirements, and multi-level cache architectures are often used. Many solutions for managing multi-level cache space have been proposed in recent years; although these memory-based multi-level cache architectures can effectively improve system performance, they are extremely wasteful in terms of price and power consumption, and their cost-effectiveness is not high.
Summary of the invention
To solve the above problems in the prior art, the present invention proposes a disk data reading method, comprising:
a user sends a data I/O request to a server, the server sends a hardware command to a storage system, and the I/O request is processed by the two-level cache structure of the storage system.
Preferably, the two-level cache structure comprises a first-level cache and a second-level cache, wherein the memory of the redundant controllers of a disk array is used as the first-level cache and a solid-state disk is used as the second-level cache; the first-level cache and the second-level cache cooperate with each other; after the second-level cache has been built, multiple applications use the second-level cache simultaneously; the solid-state disk works in the form of a block device, and the components above it regard the solid-state disk as a block device; a user uses the solid-state disk to create a pool and establish logical disks, and then performs partitioning and file-system configuration; the pool is a logical storage pool formed from a group of disk drives according to a RAID type, and a logical disk is a logical storage unit established in the pool, on which a file system or application is set up;
The cache structure logically comprises a relocation module, a data analysis module and a data migration module; the relocation module maintains a mapping table that records the correspondence between blocks in the cache and blocks on the solid-state disk and the disk array; when a new request arrives at the relocation module, the mapping table is searched first, and if the requested block is on the solid-state disk the request is redirected to the solid-state disk, otherwise the disk array is accessed directly; the relocation module also intercepts I/O requests and forwards them to the data analysis module, which collects the I/O requests and updates a block table describing the workload access pattern; the data analysis module periodically analyzes the data access history, decides which blocks should be relocated to the solid-state disk, and requests the data migration module to relocate the data blocks across the storage devices; the data analysis module can run either in kernel mode or in user mode; the data migration module is used to send I/O commands to the block devices and to update the mapping table to reflect the latest mapping changes.
Preferably, the server sending a hardware command to the storage system further comprises:
building two-level cache logical disks of different numbers according to the writable attribute, wherein the capacity of the writable cache equals the capacity of the memory on the controller, and the configured logical disk construction type is sent to the server; the two-level cache supports building a cache pool of RAID 1 or RAID 10 type, and the pool capacity is calculated according to the redundancy attribute: if a non-redundant two-level cache is built, the two-level cache pool occupies 100% of the capacity; if a redundant two-level cache is built, the two-level cache pool occupies 50% of the capacity; and each disk array creates only one two-level cache pool;
After the two-level cache is implemented, data cached in the first-level cache can also be mapped to the second-level cache; each controller has a mapping table residing in its first-level cache, and a copy of this mapping table is also kept in the second-level cache itself; the second-level cache mapping table is an address table, and each 32 kB block contains an index describing the location of the data cached in the second-level cache; after the second-level cache is configured with a second-level cache pool and three logical disks, the first logical disk serves controller 0, the second serves controller 1, and the last is the designated writable region;
Processing the I/O request by the two-level cache structure of the storage system further comprises:
when an I/O request is received, determining from the request type whether it is a read or a write; for a read, the first-level cache is queried first and on a hit the data is returned; if the first-level cache space is insufficient, data is migrated to the second-level cache; if the first-level cache misses but the second-level cache hits, the data in the second-level cache is returned to the first-level cache and then returned via the first-level cache; if both levels miss, the disk array is read and the fetched data is loaded into the first-level cache; for a write, the dirty data is written to the first-level cache, and when the number of dirty pages reaches a threshold, the data in the first-level cache is migrated to the second-level cache and the second-level cache writes the data back to the disk array according to a predefined write-back policy; the write-back policy takes into account two factors, the read/write block size and the storage space utilization; in the write-back state, data is written to disk only when it is evicted from the cache, and a write-back requires writing data from the cache to disk and writing the updated data into the cache.
Compared with the prior art, the present invention has the following advantages:
the present invention proposes a disk data reading method that optimizes the cache response time by improving the cache structure and the read/write flow, and is suitable for data reading in large-scale storage systems.
Brief description of the drawings
Fig. 1 is a flowchart of the disk data reading method according to an embodiment of the present invention.
Detailed description of embodiments
A detailed description of one or more embodiments of the present invention is provided below, together with the accompanying drawings that illustrate the principles of the invention. The invention is described in connection with such embodiments, but is not limited to any particular embodiment; its scope is defined only by the claims, and the invention covers many alternatives, modifications and equivalents. Numerous specific details are set forth in the following description to provide a thorough understanding of the invention; these details are provided for illustrative purposes, and the invention may be practiced according to the claims without some or all of these details.
One aspect of the present invention provides a disk data reading method. Fig. 1 is a flowchart of the disk data reading method according to an embodiment of the present invention. The invention is based on a two-level cache architecture of memory and solid-state disk: request response time is reduced by accessing the solid-state disk instead of an ordinary hard disk, especially for data with random access patterns, so that the performance of the solid-state disk is fully exploited.
The redundant controllers of the disk array each have their own exclusive memory, referred to here as the first-level cache; the second-level cache is built by adding solid-state disks and is shared across the disk array. Multiple applications are allowed to use the second-level cache at the same time; the cache automatically detects whether data resides on the solid-state disk and needs no application-specific tuning. After the second-level cache has been built, the first-level cache and the second-level cache cooperate with each other.
When a second-level cache is introduced into an existing storage system, the performance potential of the solid-state disk should be exploited as far as possible, and the factors affecting storage performance inevitably include policy design and cost management. The present invention implements the two-level cache mechanism from a functional point of view, and the two-level cache is managed as a component of the large data storage system.
The solid-state disk in the two-level cache structure works in the form of a block device; the components above it, such as the file system, regard the solid-state disk as a simple block device and do not need to care about its internal structure. A user can create a pool with solid-state disks, establish logical disks, and then perform partitioning and file-system configuration. The cache structure logically consists of three parts: a relocation module, a data analysis module and a data migration module. The relocation module maintains a mapping table that records the correspondence between blocks in the cache and blocks on the solid-state disk and the disk array. When a new request arrives at the relocation module, the mapping table is searched first; if the requested block is on the solid-state disk, the request is redirected to the solid-state disk, otherwise the disk array is accessed directly. The relocation module also intercepts I/O requests and forwards them to the data analysis module, which collects the requests and updates a block table describing the workload access pattern. The data analysis module periodically analyzes the data access history, decides which blocks should be relocated to the solid-state disk, and requests the data migration module to relocate those blocks across the storage devices; it can run either in kernel mode or in user mode. The data migration module sends I/O commands to the block devices and updates the mapping table to reflect the latest mapping changes.
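As a concrete illustration of this three-module split, the following is a minimal Python sketch of a relocation module that keeps a mapping table, redirects hits to the SSD, and forwards every request to an analysis module; all class and method names (RelocationModule, AnalysisModule, migrate and so on) are illustrative assumptions rather than identifiers from the patent.

```python
# Minimal sketch of the relocation / analysis / migration split described above.
# All names (RelocationModule, AnalysisModule, ssd, hdd_array) are illustrative
# assumptions, not identifiers from the patent.

class AnalysisModule:
    """Collects access statistics and decides which blocks deserve the SSD."""
    def __init__(self):
        self.access_count = {}          # block number -> number of accesses

    def record(self, block):
        self.access_count[block] = self.access_count.get(block, 0) + 1

    def hot_blocks(self, threshold=3):
        return [b for b, n in self.access_count.items() if n >= threshold]


class RelocationModule:
    """Redirects requests to the SSD when the mapping table says the block is cached."""
    def __init__(self, ssd, hdd_array, analysis):
        self.ssd = ssd                  # dict-like block device: block -> data
        self.hdd_array = hdd_array      # dict-like block device: block -> data
        self.mapping = {}               # block on the disk array -> block on the SSD
        self.analysis = analysis

    def read(self, block):
        self.analysis.record(block)     # every request is also forwarded for analysis
        if block in self.mapping:       # hit in the mapping table: go to the SSD
            return self.ssd[self.mapping[block]]
        return self.hdd_array[block]    # miss: access the disk array directly

    def migrate(self, block, ssd_block):
        """Called by the data migration module after copying the block to the SSD."""
        self.ssd[ssd_block] = self.hdd_array[block]
        self.mapping[block] = ssd_block  # update the mapping table to the new location
```

A data migration thread would periodically call hot_blocks() and then migrate() for each block it promotes, which matches the periodic analysis and relocation described above.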
The process of managing the disk array can be regarded as Socket communication over the hardware interface: a client sends requests through the management software, and the server, acting as a relay, parses the user request and sends hardware commands to the disk array.
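A minimal sketch of this client-to-server-to-disk-array relay follows, assuming a toy line-based text protocol; the port number and the send_hardware_command() placeholder are assumptions made only for illustration.

```python
# Minimal sketch of the client -> server -> disk array relay described above,
# assuming a toy line-based text protocol. The port number and the
# send_hardware_command() placeholder are illustrative assumptions.
import socket

def send_hardware_command(cmd: str) -> str:
    # Stand-in for the vendor-specific hardware interface of the disk array.
    return f"OK {cmd.strip()}"

def serve_one_request(host="127.0.0.1", port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(4096).decode()       # parse the user's request
            reply = send_hardware_command(request)   # relay it as a hardware command
            conn.sendall(reply.encode())             # return the result to the client
```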
The two-level cache sets up a service thread. First, a pool structure is established: a pool is a logical storage concept in which a group of disk drives is organized according to a RAID type; a disk drive can belong to only one pool, and a storage system can contain any number of pools. The disks listed for a pool must be solid-state disks of the same type, which involves comparing attributes such as disk speed and capacity.
The user sends a build request to the server, and the server sends hardware commands to the disk array. Then, according to the writable attribute, either two or three two-level cache logical disks are built; a logical disk is a logical storage unit established in the pool, on which a file system or application is set up. If there is no writable attribute, two logical disks are built: one serves controller 0, one serves controller 1, and each has half of the pool capacity. If there is a writable attribute, three logical disks are built: one serves controller 0, one serves controller 1, and one serves as the writable cache; the capacity of the writable cache equals the capacity of the memory on the controller, and the two controller logical disks divide the remaining capacity equally. The configured logical disk construction type is sent to the server, and the server sends hardware commands to the disk array to construct the logical disks.
Finally, the two-level cache is built and takes effect, and task scheduling sends hardware commands to the disk array through the server.
The two-level cache supports building a cache pool of RAID 1 or RAID 10 type, and the pool capacity is calculated according to the redundancy attribute. If a non-redundant two-level cache is built, the two-level cache pool occupies 100% of the capacity; if a redundant two-level cache is built, the two-level cache pool occupies 50% of the capacity, and a redundant two-level cache has better fault tolerance. In addition, each disk array can create only one two-level cache pool.
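The sizing rules above (pool capacity by redundancy, and two or three logical disks depending on the writable attribute) can be summarized in a small sketch; the plan_layout() function and its field names are assumptions made only for illustration.

```python
# Rough illustration of the sizing rules described above: the cache pool takes
# 100% of the capacity without redundancy and 50% with RAID 1/10 redundancy,
# and 2 or 3 logical disks are created depending on the writable attribute.
# plan_layout() and its field names are illustrative assumptions.

def plan_layout(raw_capacity_gb, redundant, writable, controller_mem_gb=0):
    pool_capacity = raw_capacity_gb * (0.5 if redundant else 1.0)
    if not writable:
        # Two logical disks, one per controller, each taking half of the pool.
        return {
            "pool_capacity_gb": pool_capacity,
            "logical_disks": [
                {"role": "controller 0", "size_gb": pool_capacity / 2},
                {"role": "controller 1", "size_gb": pool_capacity / 2},
            ],
        }
    # Three logical disks: the writable cache is as large as the controller
    # memory, and the two controller disks split the remaining capacity equally.
    remaining = pool_capacity - controller_mem_gb
    return {
        "pool_capacity_gb": pool_capacity,
        "logical_disks": [
            {"role": "controller 0", "size_gb": remaining / 2},
            {"role": "controller 1", "size_gb": remaining / 2},
            {"role": "writable cache", "size_gb": controller_mem_gb},
        ],
    }

# Example: an 800 GB SSD pool with RAID 1 redundancy and a writable cache
# sized to 16 GB of controller memory.
print(plan_layout(800, redundant=True, writable=True, controller_mem_gb=16))
```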
Each controller in the array can access the second-level cache and benefit from the solid-state disk; after the two-level cache is implemented, data cached in the first-level cache can also be mapped to the second-level cache. Each controller has a mapping table residing in its first-level cache, and for safety a copy of this mapping table is also kept in the second-level cache itself. The second-level cache mapping table is an address table: each 32 kB block contains an index describing the location of the data cached in the second-level cache. After the second-level cache is configured with a second-level cache pool and three logical disks, the first logical disk serves controller 0, the second serves controller 1, and the last is the designated writable region.
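As a small illustration of this address-table idea, the following sketch maps a byte offset to a 32 kB block index and looks it up in a mapping table; the BLOCK_SIZE constant and the lookup() helper are assumptions for illustration only.

```python
# Small illustration of the 32 kB address-table lookup described above.
# BLOCK_SIZE and lookup() are illustrative assumptions, not patent identifiers.
BLOCK_SIZE = 32 * 1024

def lookup(mapping_table, byte_offset):
    """Return the SSD location of the 32 kB block containing byte_offset, or None."""
    index = byte_offset // BLOCK_SIZE       # which 32 kB block the offset falls in
    return mapping_table.get(index)         # None means the data is not in the L2 cache
```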
When the first-level and second-level caches are initialized, data is read from the disk array and mapped into the cache. When the cache space is insufficient or the preset upper limit is reached, both the first-level cache and the second-level cache use the LRU replacement algorithm so that cache space remains available. When an I/O request is received, whether it is a read or a write is determined from the request type. For a read, the first-level cache is queried first and on a hit the data is returned; if the first-level cache space is insufficient, data is migrated to the second-level cache; if the first-level cache misses but the second-level cache hits, the data in the second-level cache is returned to the first-level cache and then returned via the first-level cache; if both levels miss, the disk array is read and the fetched data is loaded into the first-level cache. For a write, the dirty data is written to the first-level cache; when the number of dirty pages reaches a threshold, the data in the first-level cache is migrated to the second-level cache, and the second-level cache writes the data back to the disk array according to a predefined write-back policy.
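The read and write paths just described can be condensed into the following sketch, which models both cache levels as LRU-ordered dictionaries and uses a dirty-page threshold to trigger migration; the class name, method names and default capacities are illustrative assumptions, not the patented implementation.

```python
# Sketch of the read/write flow described above: LRU replacement at both levels,
# misses filled into the first-level cache, and dirty pages migrated to the
# second level once a threshold is reached. All names and capacities are
# illustrative assumptions.
from collections import OrderedDict

class TwoLevelCache:
    def __init__(self, disk_array, l1_capacity=4, l2_capacity=16, dirty_threshold=2):
        self.disk_array = disk_array            # dict: block -> data (backing disk array)
        self.l1 = OrderedDict()                 # first-level cache (controller memory)
        self.l2 = OrderedDict()                 # second-level cache (solid-state disk)
        self.dirty = set()                      # dirty blocks currently held in L1
        self.l1_capacity = l1_capacity
        self.l2_capacity = l2_capacity
        self.dirty_threshold = dirty_threshold

    def _make_room_in_l1(self):
        # LRU at both levels: demote the least recently used L1 block to L2,
        # and write an evicted L2 block back to the disk array.
        while len(self.l1) >= self.l1_capacity:
            block, data = self.l1.popitem(last=False)
            if len(self.l2) >= self.l2_capacity:
                victim, vdata = self.l2.popitem(last=False)
                self.disk_array[victim] = vdata   # write back on eviction from L2
            self.l2[block] = data

    def read(self, block):
        if block in self.l1:                      # first-level cache hit
            self.l1.move_to_end(block)
            return self.l1[block]
        if block in self.l2:                      # L1 miss, L2 hit: return via L1
            data = self.l2.pop(block)
        else:                                     # both levels miss: read the disk array
            data = self.disk_array[block]
        self._make_room_in_l1()
        self.l1[block] = data                     # update the first-level cache
        return data

    def write(self, block, data):
        self._make_room_in_l1()
        self.l1[block] = data                     # dirty data goes to L1 first
        self.dirty.add(block)
        if len(self.dirty) >= self.dirty_threshold:
            # Threshold reached: migrate dirty pages from L1 to L2; L2 writes
            # them back to the disk array later (here: when they are evicted).
            for b in list(self.dirty):
                if b in self.l1:
                    if len(self.l2) >= self.l2_capacity:
                        victim, vdata = self.l2.popitem(last=False)
                        self.disk_array[victim] = vdata
                    self.l2[b] = self.l1.pop(b)
            self.dirty.clear()
```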
The write-back policy is set by considering two factors together, the read/write block size and the storage space utilization, so as to reduce the overhead caused by small writes and lower the write amplification factor. In the write-back state, data is written to disk only when it is evicted from the cache; a write-back requires writing data from the cache to disk and writing the updated data into the cache. Because data may be written to the cache many times without any disk access, write-back is very efficient.
In summary, the present invention proposes a disk data reading method that optimizes the cache response time by improving the cache structure and the read/write flow, and is suitable for data reading in large-scale storage systems.
Obviously, those skilled in the art should understand that the modules or steps of the present invention described above can be implemented with a general-purpose computing system; they can be concentrated in a single computing system or distributed over a network formed by multiple computing systems; optionally, they can be implemented with program code executable by a computing system, so that they can be stored in a storage system and executed by a computing system. The present invention is therefore not limited to any specific combination of hardware and software.
It should be understood that the above embodiments of the present invention are only intended to illustrate or explain the principles of the invention and do not limit it. Therefore, any modifications, equivalent replacements, improvements and the like made without departing from the spirit and scope of the present invention shall fall within the protection scope of the present invention. Furthermore, the appended claims are intended to cover all changes and modifications that fall within the scope and boundaries of the claims, or the equivalents of such scope and boundaries.

Claims (3)

1. A disk data reading method, characterized by comprising:
a user sends a data I/O request to a server, the server sends a hardware command to a storage system, and the I/O request is processed by the two-level cache structure of the storage system.
2. The method according to claim 1, characterized in that the two-level cache structure comprises a first-level cache and a second-level cache, wherein the memory of the redundant controllers of a disk array is used as the first-level cache and a solid-state disk is used as the second-level cache; the first-level cache and the second-level cache cooperate with each other; after the second-level cache has been built, multiple applications use the second-level cache simultaneously; the solid-state disk works in the form of a block device, and the components above it regard the solid-state disk as a block device; a user uses the solid-state disk to create a pool and establish logical disks, and then performs partitioning and file-system configuration; the pool is a logical storage pool formed from a group of disk drives according to a RAID type, and a logical disk is a logical storage unit established in the pool, on which a file system or application is set up;
The cache structure logically comprises a relocation module, a data analysis module and a data migration module; the relocation module maintains a mapping table that records the correspondence between blocks in the cache and blocks on the solid-state disk and the disk array; when a new request arrives at the relocation module, the mapping table is searched first, and if the requested block is on the solid-state disk the request is redirected to the solid-state disk, otherwise the disk array is accessed directly; the relocation module also intercepts I/O requests and forwards them to the data analysis module, which collects the I/O requests and updates a block table describing the workload access pattern; the data analysis module periodically analyzes the data access history, decides which blocks should be relocated to the solid-state disk, and requests the data migration module to relocate the data blocks across the storage devices; the data analysis module can run either in kernel mode or in user mode; the data migration module is used to send I/O commands to the block devices and to update the mapping table to reflect the latest mapping changes.
3. The method according to claim 2, characterized in that the server sending a hardware command to the storage system further comprises:
building two-level cache logical disks of different numbers according to the writable attribute, wherein the capacity of the writable cache equals the capacity of the memory on the controller, and the configured logical disk construction type is sent to the server; the two-level cache supports building a cache pool of RAID 1 or RAID 10 type, and the pool capacity is calculated according to the redundancy attribute: if a non-redundant two-level cache is built, the two-level cache pool occupies 100% of the capacity; if a redundant two-level cache is built, the two-level cache pool occupies 50% of the capacity; and each disk array creates only one two-level cache pool;
After the two-level cache is implemented, data cached in the first-level cache can also be mapped to the second-level cache; each controller has a mapping table residing in its first-level cache, and a copy of this mapping table is also kept in the second-level cache itself; the second-level cache mapping table is an address table, and each 32 kB block contains an index describing the location of the data cached in the second-level cache; after the second-level cache is configured with a second-level cache pool and three logical disks, the first logical disk serves controller 0, the second serves controller 1, and the last is the designated writable region;
Processing the I/O request by the two-level cache structure of the storage system further comprises:
when an I/O request is received, determining from the request type whether it is a read or a write; for a read, the first-level cache is queried first and on a hit the data is returned; if the first-level cache space is insufficient, data is migrated to the second-level cache; if the first-level cache misses but the second-level cache hits, the data in the second-level cache is returned to the first-level cache and then returned via the first-level cache; if both levels miss, the disk array is read and the fetched data is loaded into the first-level cache; for a write, the dirty data is written to the first-level cache, and when the number of dirty pages reaches a threshold, the data in the first-level cache is migrated to the second-level cache and the second-level cache writes the data back to the disk array according to a predefined write-back policy; the write-back policy takes into account two factors, the read/write block size and the storage space utilization; in the write-back state, data is written to disk only when it is evicted from the cache, and a write-back requires writing data from the cache to disk and writing the updated data into the cache.
CN201510562824.2A 2015-09-07 2015-09-07 Disk data reading method Pending CN105138292A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510562824.2A CN105138292A (en) 2015-09-07 2015-09-07 Disk data reading method

Publications (1)

Publication Number Publication Date
CN105138292A true CN105138292A (en) 2015-12-09

Family

ID=54723652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510562824.2A Pending CN105138292A (en) 2015-09-07 2015-09-07 Disk data reading method

Country Status (1)

Country Link
CN (1) CN105138292A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102117248A (en) * 2011-03-09 2011-07-06 浪潮(北京)电子信息产业有限公司 Caching system and method for caching data in caching system
CN103631528A (en) * 2012-08-21 2014-03-12 苏州捷泰科信息技术有限公司 Read-write method and system with solid state disk as cache and read-write controller
CN102945207A (en) * 2012-10-26 2013-02-27 浪潮(北京)电子信息产业有限公司 Cache management method and system for block-level data
CN103678166A (en) * 2013-08-16 2014-03-26 记忆科技(深圳)有限公司 Method and system for using solid-state disk as cache of computer
CN103858112A (en) * 2013-12-31 2014-06-11 华为技术有限公司 Data-caching method, device and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YE CHEN et al.: "Design and Implementation of a Second-Level Cache for a Mass Storage System" (一种海量存储系统二级缓存的设计与实现), Computer and Modernization (《计算机与现代化》) *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105573669A (en) * 2015-12-11 2016-05-11 上海爱数信息技术股份有限公司 IO read speeding cache method and system of storage system
CN106919339A (en) * 2015-12-25 2017-07-04 华为技术有限公司 A kind of method that hard disk array and hard disk array process operation requests
CN107301023A (en) * 2017-06-29 2017-10-27 郑州云海信息技术有限公司 A kind of solid-state disk configuration information management method and device
CN109213693A (en) * 2017-06-30 2019-01-15 伊姆西Ip控股有限责任公司 Memory management method, storage system and computer program product
CN107589911A (en) * 2017-09-05 2018-01-16 郑州云海信息技术有限公司 A kind of I O process method and device of SSD cachings
CN107656702A (en) * 2017-09-27 2018-02-02 联想(北京)有限公司 Accelerate the method and its system and electronic equipment of disk read-write
CN107656702B (en) * 2017-09-27 2020-11-20 联想(北京)有限公司 Method and system for accelerating hard disk read-write and electronic equipment
CN107506156A (en) * 2017-09-28 2017-12-22 焦点科技股份有限公司 A kind of io optimization methods of block device
CN107506156B (en) * 2017-09-28 2020-05-12 焦点科技股份有限公司 Io optimization method of block device
CN108132893A (en) * 2017-12-06 2018-06-08 中国航空工业集团公司西安航空计算技术研究所 A kind of constant Cache for supporting flowing water
CN108196795A (en) * 2017-12-30 2018-06-22 惠龙易通国际物流股份有限公司 A kind of date storage method, equipment and computer storage media
CN108196795B (en) * 2017-12-30 2020-09-04 惠龙易通国际物流股份有限公司 Data storage method and device and computer storage medium
CN110196785B (en) * 2018-02-27 2022-06-14 浙江宇视科技有限公司 Data backup management method and device and electronic equipment
CN110196785A (en) * 2018-02-27 2019-09-03 浙江宇视科技有限公司 Backup data management method, apparatus and electronic equipment
CN109032851A (en) * 2018-06-26 2018-12-18 华为技术有限公司 A kind of link failure determines method and apparatus
CN109032851B (en) * 2018-06-26 2021-01-12 华为技术有限公司 Link fault determination method and device
WO2020014869A1 (en) * 2018-07-17 2020-01-23 华为技术有限公司 Method and device for processing i/o request
US11249663B2 (en) 2018-07-17 2022-02-15 Huawei Technologies Co., Ltd. I/O request processing method and device
CN109446222A (en) * 2018-08-28 2019-03-08 厦门快商通信息技术有限公司 A kind of date storage method of Double buffer, device and storage medium
CN110968271A (en) * 2019-11-25 2020-04-07 北京劲群科技有限公司 High-performance data storage method, system and device
CN110968271B (en) * 2019-11-25 2024-02-20 北京劲群科技有限公司 High-performance data storage method, system and device
CN114296646A (en) * 2021-12-24 2022-04-08 天翼云科技有限公司 Caching method, device, server and storage medium based on IO service
CN114296646B (en) * 2021-12-24 2023-06-23 天翼云科技有限公司 Caching method and device based on IO service, server and storage medium
CN116880776A (en) * 2023-09-06 2023-10-13 上海凯翔信息科技有限公司 Data processing system for storing data
CN116880776B (en) * 2023-09-06 2023-11-17 上海凯翔信息科技有限公司 Data processing system for storing data
CN117234430A (en) * 2023-11-13 2023-12-15 苏州元脑智能科技有限公司 Cache frame, data processing method, device, equipment and storage medium
CN117234430B (en) * 2023-11-13 2024-02-23 苏州元脑智能科技有限公司 Cache frame, data processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105138292A (en) Disk data reading method
US9229653B2 (en) Write spike performance enhancement in hybrid storage systems
US20130318196A1 (en) Storage system and storage control method for using storage area based on secondary storage as cache area
CN104317736B (en) A kind of distributed file system multi-level buffer implementation method
CN102576293A (en) Data management in solid-state storage devices and tiered storage systems
CN106066890B (en) Distributed high-performance database all-in-one machine system
CN107615254A (en) The cache memory architectures and algorithm of blending objects storage device
CN1770088A (en) Incremental backup operations in storage networks
CN1770114A (en) Copy operations in storage networks
US11042324B2 (en) Managing a raid group that uses storage devices of different types that provide different data storage characteristics
CN102117248A (en) Caching system and method for caching data in caching system
CN103026346A (en) Logical to physical address mapping in storage systems comprising solid state memory devices
CN102981963A (en) Implementation method for flash translation layer of solid-state disc
CN104267912A (en) NAS (Network Attached Storage) accelerating method and system
CN103037004A (en) Implement method and device of cloud storage system operation
CN102945207A (en) Cache management method and system for block-level data
CN106528001A (en) Cache system based on nonvolatile memory and software RAID
US11128535B2 (en) Computer system and data management method
CN104503923B (en) A kind of asymmetric disk array cache dispatching method
CN102637147A (en) Storage system using solid state disk as computer write cache and corresponding management scheduling method
CN102262512A (en) System, device and method for realizing disk array cache partition management
CN103916459A (en) Big data filing and storing system
CN105786400A (en) Heterogeneous hybrid memory module, system and storage method
CN109213693A (en) Memory management method, storage system and computer program product
CN109739696B (en) Double-control storage array solid state disk caching acceleration method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151209