CN106776363A - Cache performance optimization method and system and data writing method - Google Patents

Cache performance optimization method and system and data writing method

Info

Publication number
CN106776363A
CN106776363A (application CN201611225904.XA)
Authority
CN
China
Prior art keywords
cache
data
upper level
backup
lower level
Prior art date
Legal status
Granted
Application number
CN201611225904.XA
Other languages
Chinese (zh)
Other versions
CN106776363B (en)
Inventor
柳增运
Current Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201611225904.XA (granted as CN106776363B)
Publication of CN106776363A
Application granted
Publication of CN106776363B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 - Error detection or correction of the data by redundancy in operation
    • G06F 11/1402 - Saving, restoring, recovering or retrying
    • G06F 11/1446 - Point-in-time backing up or restoration of persistent data
    • G06F 11/1448 - Management of the data involved in backup or backup restore
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 - Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F 11/2082 - Data synchronisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806 - Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/0811 - Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies

Abstract

The invention belongs to the technical field of computer storage systems, and specifically relates to a cache performance optimization method and system. The cache is divided from a single layer into two layers, namely an upper-level cache and a lower-level cache, wherein the upper-level cache is responsible for reading and writing the cached data, and the lower-level cache keeps the data between the two nodes synchronized. A quick backup layer and a mirror layer are added between the upper-level cache and the lower-level cache, wherein the quick backup layer is responsible for copying data to the virtual disk, and the mirror layer is responsible for keeping the data backup in the virtual disk consistent with the physical disk. The data writing process is protected by an atomic protection mechanism. The invention uses layered data synchronization and multiple copies to add data safety operations and to prevent data from being damaged or lost. When writing data, the write operation returns as soon as the data has been written to the cache, which improves write performance. The atomic protection mechanism applied to data writes avoids erroneous operations in which the written data becomes unsynchronized because of the cache layering.

Description

Cache performance optimization method and system and data writing method
Technical field
The invention belongs to the technical field of caching in computer storage systems, and in particular relates to a cache performance optimization method, a cache performance optimization system and a data writing method.
Background art
A cache is a buffer for data exchange (referred to as a Cache). When a piece of hardware or a client needs to receive data, the system first searches the cache for the required data; if the data is found it is used directly, and only if it cannot be found does the system search main memory or the hard disk. Because the cache runs much faster than main memory, the purpose of the cache is to improve running performance.
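As a purely illustrative aside (this sketch is not part of the patent text), the lookup order just described, cache first, then main memory, then hard disk, could be modelled roughly as follows; the dictionary-backed tiers and all names are assumptions made only for this example:

```python
# Illustrative sketch of the lookup order described above: cache, then main
# memory, then hard disk. The dictionary-backed tiers are an assumption.
class TieredLookup:
    def __init__(self):
        self.cache = {}    # fastest tier
        self.memory = {}   # main memory
        self.disk = {}     # slowest tier, standing in for the hard disk

    def read(self, key):
        for tier in (self.cache, self.memory, self.disk):
            if key in tier:
                value = tier[key]
                self.cache[key] = value  # keep a copy so later reads hit the cache
                return value
        raise KeyError(key)
```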
In modern computer storage systems, the performance gap between the processor and the underlying storage system keeps growing. Improving the performance of the caching system, and especially of multi-level caching systems, therefore becomes more and more important. Faced with large-scale data applications, data centres keep increasing the main memory capacity used as cache in order to meet the ever-increasing performance requirements of users. As the amount of data grows, improving cache read/write performance becomes increasingly important, and on this basis the present invention proposes a cache structure design that improves cache read/write performance.
Summary of the invention
The present invention addresses the problems of the prior art that, faced with current data volumes, the amount of data the cache has to process keeps growing and thereby degrades storage performance, and that data storage becomes slower and slower and cannot satisfy user needs well, and proposes a cache performance optimization method.
In order to achieve the above object, the present invention is realised through the following technical solutions:
The present invention provides a cache performance optimization method, comprising:
dividing each cache into an upper-level cache and a lower-level cache, each cache corresponding to one upper-level cache and one lower-level cache;
associating a plurality of upper-level caches with one another;
arranging a backup layer between the upper-level cache and the lower-level cache;
associating a plurality of lower-level caches with one another.
Further, arranging a backup layer between the upper-level cache and the lower-level cache comprises:
dividing the backup layer into a quick backup layer and a mirror layer, wherein the quick backup layer is used for backing up the data in the upper-level cache to a logical volume, and the mirror layer is used for backing up the data in the logical volume to a physical disk.
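As an illustration only (not the patent's own implementation), the two sub-layers of the backup layer could be modelled as follows, assuming the logical volume and the physical disk can be represented as simple block maps; all class and method names are hypothetical:

```python
class QuickBackupLayer:
    """Backs up data from the upper-level cache to the logical volume."""
    def __init__(self, logical_volume: dict):
        self.logical_volume = logical_volume

    def backup(self, block_id: int, data: bytes) -> None:
        self.logical_volume[block_id] = data


class MirrorLayer:
    """Keeps the physical disk consistent with the data held in the logical volume."""
    def __init__(self, logical_volume: dict, physical_disk: dict):
        self.logical_volume = logical_volume
        self.physical_disk = physical_disk

    def mirror(self, block_id: int) -> None:
        self.physical_disk[block_id] = self.logical_volume[block_id]
```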
Further, associating the plurality of upper-level caches with one another comprises: synchronizing the data in any upper-level cache among the plurality of upper-level caches; and associating the plurality of lower-level caches with one another comprises: synchronizing the data in any lower-level cache among the plurality of lower-level caches.
Further, the method also comprises:
arranging a compression layer between the backup layer and the lower-level cache, the compression layer being used for compressing the data in the upper-level cache and then flushing it down to the lower-level cache.
Preferably, the method also comprises:
providing a write control module in each cache, the write control module being used for judging whether the data written into the upper-level cache meets the flush-down condition.
The present invention also provides a cache performance optimization system, comprising a plurality of caches, each cache being divided into an upper-level cache and a lower-level cache, with a backup layer and a compression layer arranged between the upper-level cache and the lower-level cache; the upper-level caches of the plurality of caches are associated with one another, and the lower-level caches of the plurality of caches are associated with one another; the backup layer is used for backing up the data in the upper-level cache to a logical volume or a physical disk, and the compression layer compresses the data received from the upper-level cache and passes it down to the lower-level cache.
Further, the backup layer comprises a quick backup layer and a mirror layer, wherein the quick backup layer is used for backing up the data in the upper-level cache to a logical volume, and the mirror layer is used for backing up the data in the logical volume to a physical disk.
Preferably, each cache is provided with a write control module, the write control module being used for judging whether the data written into the upper-level cache meets the flush-down condition.
The present invention further provides a data writing method for the cache performance optimization system, comprising:
after data is written into one of the upper-level caches, synchronizing the data among the plurality of upper-level caches;
backing up the data in the upper-level cache to the logical volume and the physical disk through the backup layer;
compressing the data in the upper-level cache through the compression layer and flushing it down to the lower-level cache;
after the data has been flushed down to the lower-level cache, synchronizing the data among the plurality of lower-level caches.
Preferably, after the data is written into one of the caches, the method also comprises:
judging whether the data written into the upper-level cache meets the flush-down condition; data that meets the flush-down condition is flushed down, and data that does not meet it is not processed further.
Preferably, judging whether the data written into the upper-level cache meets the flush-down condition comprises:
judging whether the size of the data block written into the upper-level cache does not exceed a preset value;
detecting whether the logical volume is online;
when the results of the above judgment and detection are both yes, the flush-down condition is considered met; when either result is no, the flush-down condition is considered not met.
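A minimal sketch of this flush-down check, assuming the 32K preset value of one of the embodiments and a boolean online flag for the logical volume (the names are illustrative, not taken from the patent):

```python
PRESET_BLOCK_SIZE = 32 * 1024  # 32K preset value, as in the example embodiment

def meets_flush_down_condition(block: bytes, volume_online: bool) -> bool:
    size_ok = len(block) <= PRESET_BLOCK_SIZE  # the data block must not exceed the preset value
    return size_ok and volume_online           # both checks must be satisfied
```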
Further, backing up the data in the upper-level cache to the logical volume and the physical disk through the backup layer comprises:
backing up the data in the upper-level cache to the logical volume through the quick backup layer of the backup layer;
backing up the data in the logical volume to the physical disk through the mirror layer of the backup layer.
The cache performance optimization method provided by the present invention has the following beneficial effects:
1. The present invention uses layered data synchronization and multiple copies, adding data safety operations and preventing data from being damaged or lost. When writing data, the write to the upper-level cache returns a result immediately, which improves write performance. Read-ahead and cache read operations are placed in the bottom-level cache, and because the bottom-level cache is closer to the disk this greatly helps improve disk read performance.
2. In the present invention the mirror layer keeps the data backup in the logical volume consistent with the physical disk, in order to prevent data loss or damage. At the same time a judgment mechanism is established to ensure that, when a data write fails or a data block does not satisfy the set value, the upper-level cache does not flush the data down to the lower-level cache, i.e. it does not change the data value in the logical volume, thereby protecting the data in the volume.
The beneficial effects of the cache performance optimization system and the data writing method provided by the present invention are similar to those of the cache performance optimization method and are not repeated here.
Brief description of the drawings:
Fig. 1 is a schematic flow chart of the cache performance optimization method provided by an embodiment of the present invention;
Fig. 2 is a schematic module diagram of the cache performance optimization system provided by an embodiment of the present invention;
Fig. 3 is a schematic flow chart of the data writing method of the cache performance optimization system provided by an embodiment of the present invention.
Specific embodiments:
Exemplary embodiments of the present invention are described in detail below with reference to the accompanying drawings.
This embodiment provides a cache performance optimization method, comprising:
dividing each cache into an upper-level cache and a lower-level cache, each cache corresponding to one upper-level cache and one lower-level cache;
associating a plurality of upper-level caches with one another;
arranging a backup layer between the upper-level cache and the lower-level cache;
associating a plurality of lower-level caches with one another.
Referring to Fig. 1, which is a schematic flow chart of the cache performance optimization method provided by an embodiment of the present invention, this embodiment provides a cache performance optimization method comprising the following steps.
Step S101: the cache is divided into an upper-level cache and a lower-level cache.
In this embodiment two parallel caches exist at the same time; each cache is layered into an upper-level cache and a lower-level cache, and each lower-level cache has a corresponding upper-level cache.
As an alternative implementation, three or more parallel caches may exist at the same time; each cache is likewise layered into an upper-level cache and a lower-level cache, and each lower-level cache has a corresponding upper-level cache.
Step S102: a write control module is provided in the cache.
The write control module is used for judging whether the data written into the upper-level cache meets the flush-down condition. Judging whether the data written into the upper-level cache meets the flush-down condition involves two checks:
judging whether the size of the data block written into the upper-level cache does not exceed a preset value;
detecting whether the logical volume is online;
when the results of both checks are yes, the flush-down condition is considered met; when either result is no, the flush-down condition is considered not met.
As one implementation, the preset value is set to 32K.
As another implementation, the write control module provides an atomic protection mechanism. Its specific content is as follows: a data protection size of nK is set, where n = 2^m and m is a natural number with 2 ≤ m ≤ 5. During a cache write, when the data block exceeds nK or the data write fails, the upper-level cache does not flush the data down to the bottom-level cache and does not change the data value in the logical volume. For example, if the protection size is set to 32K and the size of a data block written to the cache exceeds 32K, then although the data is written into the upper-level cache, a failure status is still returned to the host; the upper-level cache does not flush this data block down, i.e. it does not change the data in the logical volume, thereby protecting the data in the volume.
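A hypothetical sketch of this size-guard branch of the atomic protection mechanism, assuming the protection size nK with n = 2^m and 2 ≤ m ≤ 5 as stated above; returning False stands in for reporting a failure status to the host:

```python
class AtomicWriteGuard:
    """Illustrative size guard; not the patent's own implementation."""
    def __init__(self, m: int = 5):
        assert 2 <= m <= 5, "m must be a natural number with 2 <= m <= 5"
        self.protect_size = (2 ** m) * 1024  # nK bytes, e.g. 32K when m = 5

    def write(self, upper_cache: dict, block_id: int, data: bytes) -> bool:
        upper_cache[block_id] = data         # the data still lands in the upper-level cache
        if len(data) > self.protect_size:
            return False                     # failure status to the host; the block is never flushed down
        return True                          # the caller may flush this block down
```

Under this sketch the caller only triggers the flush-down path when write() returns True, so an oversized or failed write never changes the data in the logical volume.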
Further, as another implementation, the write control module provides an atomic protection mechanism for detecting whether the logical volume is offline. If, during a data write, the upper-level cache detects that the logical volume is offline, a failure state is returned to the host and the data in the upper-level cache is not flushed down to the lower-level cache; when the upper-level cache detects that the logical volume is back online, the data write operation is performed again until it succeeds. The atomic protection mechanism thus avoids erroneous operations in which the written data becomes unsynchronized because of the cache layering.
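The offline-volume branch could be sketched as follows, again purely as an illustration under stated assumptions; volume.is_online(), flush_down() and the polling loop are hypothetical stand-ins for whatever mechanism the storage controller actually uses:

```python
import time

def notify_host_failure(block_id):
    """Stand-in for returning a failure state to the host (assumption)."""
    print(f"write of block {block_id} reported to the host as failed")

def write_block(upper_cache, volume, flush_down, block_id, data, poll_interval=0.1):
    upper_cache[block_id] = data              # the data still lands in the upper-level cache
    if volume.is_online():                    # logical volume reachable: normal flush-down
        flush_down(block_id, data)
        return
    notify_host_failure(block_id)             # volume offline: failure state, nothing is flushed down
    while not volume.is_online():             # wait until the volume is detected online again
        time.sleep(poll_interval)
    while not flush_down(block_id, data):     # then repeat the write operation until it succeeds
        time.sleep(poll_interval)
```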
Step S103: the upper-level caches are associated with one another.
In this embodiment the two upper-level caches are associated, the data in either upper-level cache is synchronized with the other upper-level cache, and the two upper-level caches are peer nodes of each other.
Step S104: a quick backup layer is arranged between the upper-level cache and the lower-level cache.
The quick backup layer is used for backing up the data in the upper-level cache to the logical volume.
Step S105: a mirror layer is arranged between the upper-level cache and the lower-level cache.
The mirror layer is used for backing up the data in the logical volume to the physical disk.
Step S106: a compression layer is arranged between the mirror layer and the lower-level cache.
The compression layer is used for compressing the data in the upper-level cache and then flushing it down to the lower-level cache.
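A small sketch of the compression layer; zlib is used here only because some compressor is needed for the example, the patent does not name a specific algorithm:

```python
import zlib

class CompressionLayer:
    """Compresses upper-level cache data before it is flushed down (illustrative)."""
    def flush_down(self, upper_cache: dict, lower_cache: dict, block_id: int) -> None:
        raw = upper_cache[block_id]                  # uncompressed data from the upper-level cache
        lower_cache[block_id] = zlib.compress(raw)   # compressed copy stored in the lower-level cache
```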
Step S107: the plurality of lower-level caches are associated with one another.
In this embodiment the two lower-level caches are associated, the data in either lower-level cache is synchronized with the other lower-level cache, and the two lower-level caches are peer nodes of each other.
In this embodiment, the specific processing performed by the quick backup layer, the mirror layer and the compression layer can be understood as follows: when data is written from the host side towards the disk, the data first arrives at the upper-level cache, the upper-level cache synchronizes the uncompressed data to the peer node, and the data is then copied and backed up by the quick backup layer and the mirror layer. Because two parallel caches exist at the same time, there are two copies of the data after the mirror layer; these two copies are compressed by the compression layer and flushed down to the lower-level caches, so each lower-level cache holds a compressed copy. The two copies may be identical or different, since the compression algorithms may differ, and they therefore need to be synchronized to the peer node.
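Putting the pieces together, the write path of this paragraph could be sketched end to end as below, under the assumption of exactly two parallel caches represented as dictionaries; the structure and names are illustrative, not the patent's own code:

```python
import zlib

def host_write(node, peer, block_id, data):
    node["upper"][block_id] = data                     # data arrives at the upper-level cache
    peer["upper"][block_id] = data                     # uncompressed synchronization to the peer node
    for n in (node, peer):
        n["logical_volume"][block_id] = data           # quick backup layer: copy to the logical volume
        n["physical_disk"][block_id] = data            # mirror layer: logical volume mirrored to physical disk
        n["lower"][block_id] = zlib.compress(data)     # compression layer: compressed flush-down
    peer["lower"][block_id] = node["lower"][block_id]  # the two lower-level caches are then synchronized
```

Each node dictionary here is assumed to hold the keys "upper", "lower", "logical_volume" and "physical_disk", mirroring the layered structure of this embodiment.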
An embodiment of the present invention also provides a cache performance optimization system, comprising a plurality of caches, each cache being divided into an upper-level cache and a lower-level cache, with a backup layer and a compression layer arranged between the upper-level cache and the lower-level cache; the upper-level caches of the plurality of caches are associated with one another, the lower-level caches of the plurality of caches are associated with one another, the backup layer is used for backing up the data in the upper-level cache to a logical volume or a physical disk, and the compression layer compresses the data received from the upper-level cache and passes it down to the lower-level cache.
Referring to Fig. 2, which is a schematic module diagram of the cache performance optimization system provided by an embodiment of the present invention, this embodiment provides a cache performance optimization system comprising two parallel caches, each cache being divided into an upper-level cache and a lower-level cache, with a backup layer and a compression layer arranged between the upper-level cache and the lower-level cache; the upper-level caches of the two caches are associated with each other, the lower-level caches of the two caches are associated with each other, the backup layer is used for backing up the data in the upper-level cache to a logical volume or a physical disk, and the compression layer compresses the data received from the upper-level cache and passes it down to the lower-level cache.
Further, the backup layer comprises a quick backup layer and a mirror layer, wherein the quick backup layer is used for backing up the data in the upper-level cache to the logical volume, and the mirror layer is used for backing up the data in the logical volume to the physical disk.
Preferably, each cache is provided with a write control module, the write control module being used for judging whether the data written into the upper-level cache meets the flush-down condition.
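The module composition of such a system could be captured by the following hypothetical data layout, a sketch under the assumption of dictionary-backed stores rather than a definitive implementation:

```python
from dataclasses import dataclass, field

@dataclass
class CacheNode:
    upper: dict = field(default_factory=dict)           # upper-level cache: read/write front end
    lower: dict = field(default_factory=dict)           # lower-level cache: kept in sync with the peer
    logical_volume: dict = field(default_factory=dict)  # target of the quick backup layer
    physical_disk: dict = field(default_factory=dict)   # target of the mirror layer
    preset_block_size: int = 32 * 1024                  # used by the write control module

@dataclass
class CacheSystem:
    nodes: list                                          # two (or more) parallel cache nodes
```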
An embodiment of the present invention also provides a data writing method for the cache performance optimization system, comprising:
after data is written into one of the upper-level caches, synchronizing the data among the plurality of upper-level caches;
backing up the data in the upper-level cache to the logical volume and the physical disk through the backup layer;
compressing the data in the upper-level cache through the compression layer and flushing it down to the lower-level cache;
after the data has been flushed down to the lower-level cache, synchronizing the data among the plurality of lower-level caches.
Referring to Fig. 3, which is a schematic flow chart of the data writing method of the cache performance optimization system provided by an embodiment of the present invention; in this embodiment the number of caches is two, and a data writing method for the cache performance optimization system is provided, comprising:
Step S301: after data is written into one of the upper-level caches, it is synchronized to the other upper-level cache.
Step S302: the data written into the upper-level cache is judged as to whether it meets the flush-down condition; data that meets the flush-down condition is flushed down, and data that does not meet it is not processed further.
Judging whether the data written into the upper-level cache meets the flush-down condition comprises:
judging whether the size of the data block written into the upper-level cache does not exceed a preset value;
detecting whether the logical volume is online;
when the results of the above judgment and detection are both yes, the flush-down condition is considered met; when either result is no, the flush-down condition is considered not met.
As one implementation, the preset value is set to 32K.
As another implementation, the judgment of the data block size is realised by the atomic protection mechanism. Its specific content is as follows: a data protection size of nK is set, where n = 2^m and m is a natural number with 2 ≤ m ≤ 5. During a cache write, when the data block exceeds nK or the data write fails, the upper-level cache does not flush the data down to the bottom-level cache and does not change the data value in the logical volume. For example, if the protection size is set to 32K and the size of a data block written to the cache exceeds 32K, then although the data is written into the upper-level cache, a failure status is still returned to the host; the upper-level cache does not flush this data block down, i.e. it does not change the data in the logical volume, thereby protecting the data in the volume.
Further, as another implementation, it is detected whether the logical volume is offline. If, during a data write, the upper-level cache detects that the logical volume is offline, a failure state is returned to the host and the data in the upper-level cache is not flushed down to the lower-level cache; when the upper-level cache detects that the logical volume is back online, the data write operation is performed again until it succeeds. The atomic protection mechanism thus avoids erroneous operations in which the written data becomes unsynchronized because of the cache layering.
Step S303: the data in the upper-level cache is backed up to the logical volume through the quick backup layer of the backup layer.
Step S304: the data in the logical volume is backed up to the physical disk through the mirror layer of the backup layer.
Step S305: the compression layer compresses the data in the upper-level cache and flushes it down to the lower-level cache.
Step S306: after the data has been flushed down to the lower-level cache, it is synchronized among the plurality of lower-level caches.
In this embodiment the two lower-level caches are associated, the data in either lower-level cache is synchronized with the other lower-level cache, and the two lower-level caches are peer nodes of each other.
The specific processing performed by the quick backup layer, the mirror layer and the compression layer can be understood as follows: when data is written from the host side towards the disk, the data first arrives at the upper-level cache, the upper-level cache synchronizes the uncompressed data to the peer node, and the data is then copied and backed up by the quick backup layer and the mirror layer. Because two parallel caches exist at the same time, there are two copies of the data after the mirror layer; these two copies are compressed by the compression layer and flushed down to the lower-level caches, so each lower-level cache holds a compressed copy. The two copies may be identical or different, since the compression algorithms may differ, and they therefore need to be synchronized to the peer node. In this way there are two compressed copies of the data in the two lower-level caches, which adds data safety operations and prevents the data from being damaged or lost.
In addition, a data reading method can also be derived from the cache performance optimization system. In this case the upper-level cache serves as a buffer, and the basic caching operations during data reading, such as read-ahead, are performed against the lower-level cache; the lower-level cache interfaces directly with the virtual disk, so data reading performance is high.
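A rough sketch of such a read path, assuming the lower-level cache prefetches a small window of blocks from the virtual disk and ignoring compression for brevity; the read-ahead window size and all names are assumptions:

```python
def read_block(node, virtual_disk, block_id, readahead=4):
    if block_id in node["upper"]:                        # hit in the upper-level cache (the buffer)
        return node["upper"][block_id]
    for bid in range(block_id, block_id + readahead):    # lower-level cache performs read-ahead
        if bid not in node["lower"] and bid in virtual_disk:
            node["lower"][bid] = virtual_disk[bid]       # lower-level cache talks directly to the virtual disk
    data = node["lower"][block_id]
    node["upper"][block_id] = data                       # populate the buffer for subsequent reads
    return data
```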
The above are only schematic specific embodiments of the present invention and do not limit the scope of the present invention. Any equivalent variations and modifications made by those skilled in the art without departing from the concept and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A cache performance optimization method, characterised in that it comprises:
dividing each cache into an upper-level cache and a lower-level cache, each cache corresponding to one upper-level cache and one lower-level cache;
associating a plurality of upper-level caches with one another;
arranging a backup layer between the upper-level cache and the lower-level cache;
associating a plurality of lower-level caches with one another.
2. The cache performance optimization method according to claim 1, characterised in that arranging a backup layer between the upper-level cache and the lower-level cache comprises:
dividing the backup layer into a quick backup layer and a mirror layer, wherein the quick backup layer is used for backing up the data in the upper-level cache to a logical volume, and the mirror layer is used for backing up the data in the logical volume to a physical disk.
3. The cache performance optimization method according to claim 1, characterised in that associating the plurality of upper-level caches with one another comprises: synchronizing the data in any upper-level cache among the plurality of upper-level caches; and associating the plurality of lower-level caches with one another comprises: synchronizing the data in any lower-level cache among the plurality of lower-level caches.
4. The cache performance optimization method according to claim 1, characterised in that it further comprises:
arranging a compression layer between the backup layer and the lower-level cache, the compression layer being used for compressing the data in the upper-level cache and then flushing it down to the lower-level cache; preferably, a write control module is provided in each cache, the write control module being used for judging whether the data written into the upper-level cache meets the flush-down condition.
5. A cache performance optimization system, characterised in that it comprises a plurality of caches, each cache being divided into an upper-level cache and a lower-level cache, with a backup layer and a compression layer arranged between the upper-level cache and the lower-level cache; the upper-level caches of the plurality of caches are associated with one another, the lower-level caches of the plurality of caches are associated with one another, the backup layer is used for backing up the data in the upper-level cache to a logical volume or a physical disk, and the compression layer compresses the data received from the upper-level cache and passes it down to the lower-level cache.
6. The cache performance optimization system according to claim 5, characterised in that the backup layer comprises a quick backup layer and a mirror layer, wherein the quick backup layer is used for backing up the data in the upper-level cache to a logical volume, and the mirror layer is used for backing up the data in the logical volume to a physical disk; preferably, each cache is provided with a write control module, the write control module being used for judging whether the data written into the upper-level cache meets the flush-down condition.
7. A data writing method of a cache performance optimization system, characterised in that it comprises:
after data is written into one of the upper-level caches, synchronizing the data among the plurality of upper-level caches;
backing up the data in the upper-level cache to a logical volume and a physical disk through a backup layer;
compressing the data in the upper-level cache through a compression layer and flushing it down to the lower-level cache;
after the data has been flushed down to the lower-level cache, synchronizing the data among the plurality of lower-level caches.
8. The data writing method of the cache performance optimization system according to claim 7, characterised in that, after the data is written into one of the caches, the method further comprises:
judging whether the data written into the upper-level cache meets the flush-down condition; data that meets the flush-down condition is flushed down, and data that does not meet it is not processed further.
9. The data writing method of the cache performance optimization system according to claim 8, characterised in that judging whether the data written into the upper-level cache meets the flush-down condition comprises:
judging whether the size of the data block written into the upper-level cache does not exceed a preset value;
detecting whether the logical volume is online;
when the results of the above judgment and detection are both yes, the flush-down condition is considered met; when either result is no, the flush-down condition is considered not met.
10. The data writing method of the cache performance optimization system according to claim 7, characterised in that backing up the data in the upper-level cache to the logical volume and the physical disk through the backup layer comprises:
backing up the data in the upper-level cache to the logical volume through a quick backup layer of the backup layer;
backing up the data in the logical volume to the physical disk through a mirror layer of the backup layer.
CN201611225904.XA; priority date 2016-12-27; filing date 2016-12-27; Cache performance optimization method and system and data writing method; Active; granted as CN106776363B (en)

Priority Applications (1)

Application Number: CN201611225904.XA (granted as CN106776363B (en))
Priority Date: 2016-12-27
Filing Date: 2016-12-27
Title: Cache performance optimization method and system and data writing method

Applications Claiming Priority (1)

Application Number: CN201611225904.XA (granted as CN106776363B (en))
Priority Date: 2016-12-27
Filing Date: 2016-12-27
Title: Cache performance optimization method and system and data writing method

Publications (2)

Publication Number Publication Date
CN106776363A true CN106776363A (en) 2017-05-31
CN106776363B CN106776363B (en) 2020-05-12

Family

ID=58921520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611225904.XA Active CN106776363B (en) 2016-12-27 2016-12-27 Cache performance optimization method and system and data writing method

Country Status (1)

Country Link
CN (1) CN106776363B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184260A (en) * 2011-06-09 2011-09-14 中国人民解放军国防科学技术大学 Method for accessing mass data in cloud calculation environment
US8775729B2 (en) * 2011-07-22 2014-07-08 International Business Machines Corporation Prefetching data tracks and parity data to use for destaging updated tracks
CN105007307A (en) * 2015-06-18 2015-10-28 浪潮(北京)电子信息产业有限公司 Storage control method and system
CN106776369A (en) * 2016-12-12 2017-05-31 郑州云海信息技术有限公司 A kind of method and device for caching mirror image

Also Published As

Publication number Publication date
CN106776363B (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN106685743B (en) Block scm cluster processing system and method
CN105404673B (en) Efficient File system constituting method based on NVRAM
US6393516B2 (en) System and method for storage media group parity protection
CN103473150B (en) A kind of fragment rewrite method in data deduplication system
CN104077380B (en) A kind of data de-duplication method, apparatus and system
CN102053802B (en) Network RAID (redundant array of independent disk) system
CN106407224B (en) The method and apparatus of file compacting in a kind of key assignments storage system
CN103763383A (en) Integrated cloud storage system and storage method thereof
CN102521058A (en) Disk data pre-migration method of RAID (Redundant Array of Independent Disks) group
CN106155943B (en) A kind of method and device of the power down protection of dual control storage equipment
US7386664B1 (en) Method and system for mirror storage element resynchronization in a storage virtualization device
CN107329708A (en) A kind of distributed memory system realizes data cached method and system
US20170277450A1 (en) Lockless parity management in a distributed data storage system
CN108121510A (en) OSD choosing methods, method for writing data, device and storage system
CN105955841B (en) A kind of method that RAID dual controllers carry out write buffer mirror image using disk
CN106657356A (en) Data writing method and device for cloud storage system, and cloud storage system
CN109582213A (en) Data reconstruction method and device, data-storage system
CN106933493A (en) Method and apparatus for caching disk array dilatation
CN110196818A (en) Data cached method, buffer memory device and storage system
KR20180061493A (en) Recovery technique of data intergrity with non-stop database server redundancy
CN104778132A (en) Multi-core processor directory cache replacement method
CN113867627B (en) Storage system performance optimization method and system
CN103645995B (en) Write the method and device of data
US7035978B2 (en) Method, system, and program for policies for improving throughput in remote mirroring systems
CN106776363A (en) Caching performance optimization method, system and method for writing data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200416

Address after: Building 9, No. 1 Guanpu Road, Guoxiang Street, Wuzhong Economic Development Zone, Suzhou City, Jiangsu Province, 215000

Applicant after: SUZHOU LANGCHAO INTELLIGENT TECHNOLOGY Co.,Ltd.

Address before: Room 1601, 16th Floor, No. 278 Xinyi Road, Zhengdong New District, Zhengzhou City, Henan Province, 450000

Applicant before: ZHENGZHOU YUNHAI INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant