CN109947584A - A degraded read/write method based on cloud computing distributed block storage - Google Patents
A degraded read/write method based on cloud computing distributed block storage
- Publication number
- CN109947584A, CN201910142761.3A
- Authority
- CN
- China
- Prior art keywords
- failure
- write
- node
- record
- offset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Memory System Of A Hierarchy Structure (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a degraded read/write method based on cloud computing distributed block storage, comprising the following steps: step 1, when writing one or more copies of an OID fails, store the OID, the copies whose write failed, and related information such as the offset and length of the data being written in a cache; step 2, when one copy of the OID is written successfully, first check whether there is an earlier failure record, and if failed data still remains, re-initiate a write request for the failed data. By degrading in this way, the invention guarantees both the strong consistency of the multiple copies and normal read/write even when the network or a physical machine is abnormal; it intelligently detects abnormally behaving nodes in the cluster and actively removes them from the cluster.
Description
Technical field
The present invention relates to the field of computer technology, and more particularly to a degraded read/write method based on cloud computing distributed block storage.
Background art
To protect the reliability of user data, cloud computing distributed block storage generally keeps 3 copies of the user data, that is, the same data is stored 3 times. To guarantee strong consistency, a write of user data can return success only after all 3 copies have been written successfully. If writing a copy fails because of a bad disk, a crashed machine or an unreachable network, the operation hangs and keeps retrying, either until all copies are eventually written successfully or until the failed node leaves the cluster, 3 new nodes are computed from the new cluster topology, and the retry continues on them until the write succeeds.
In this situation a write operation may hang for a considerable time, during which the user cannot save data, which causes great inconvenience to the user.
Summary of the invention
In view of the above drawbacks of the prior art, the technical problem to be solved by the invention is to provide a degraded read/write method based on cloud computing distributed block storage that uses degradation to guarantee, when the network or a physical machine is abnormal, both the strong consistency of the multiple copies and normal read/write, and that intelligently detects abnormally behaving nodes in the cluster and actively removes them from the cluster.
To achieve the above object, the present invention provides a degraded read/write method based on cloud computing distributed block storage, comprising the following steps:
Step 1: when writing one or more copies of an OID fails, store the OID, the copies whose write failed, and related information such as the offset and length of the data being written in a cache, and store the failed data itself on the node's hard disk; if this copy of the OID already has an earlier failure record, OR together the offsets and lengths of the two failure records and merge them into a single failure record;
Step 2: when one copy of the OID is written successfully, first check whether there is an earlier failure record; if there is, AND the successful offset and length with the failed offset and length; if failed data still remains, re-initiate a write request for the failed data; if that write succeeds, delete the earlier failure record and data, otherwise keep them.
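The failure records described in steps 1 and 2 can be viewed as byte ranges of the form [offset, offset+length). The following is a minimal Python sketch of the two range operations, the OR-merge of two failure records and the removal of a successfully written range; the function names are illustrative and not taken from the patent:

```python
def merge_failure_records(a, b):
    """OR two failure records (offset, length) into one covering record.

    Assumes the two failed ranges overlap or touch, as in the example
    given in the description.
    """
    start = min(a[0], b[0])
    end = max(a[0] + a[1], b[0] + b[1])
    return (start, end - start)


def subtract_successful_range(failed, success):
    """Remove a successfully written (offset, length) range from a failure record.

    Returns the remaining failed (offset, length) pieces; an empty list
    means the failure record can be deleted.
    """
    f_start, f_end = failed[0], failed[0] + failed[1]
    s_start, s_end = success[0], success[0] + success[1]
    remaining = []
    if f_start < s_start:                        # failed piece left of the successful range
        remaining.append((f_start, min(f_end, s_start) - f_start))
    if f_end > s_end:                            # failed piece right of the successful range
        start = max(f_start, s_end)
        remaining.append((start, f_end - start))
    return remaining


# Example from the description: success offset=0, length=5 covers [0,4];
# failure offset=3, length=10 covers [3,12]; the remainder is (5, 8), i.e. [5,12].
print(subtract_successful_range((3, 10), (0, 5)))   # [(5, 8)]
```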
In the above degraded read/write method based on cloud computing distributed block storage, a write request is processed by the following steps:
1. Node A receives the client write request;
2. 3 data nodes are computed from the current cluster topology;
3. The write request is sent to the 3 data nodes, and node A waits for the results;
4. The return values of the 3 data nodes are analyzed;
5. If a node fails to execute the request for a specific reason, the failure-related information (oid, node, offset, length, request time and so on) is saved in node A's cache, the failure-related information and the data content are saved on the disk of the physical machine where A resides, and a success value is returned to the client;
6. Each node whose write succeeded is checked for an earlier write-failure record; if one exists, the information of the successful write must be applied to it:
A. Update offset and length: merge the successful offset and length with the offset and length of the earlier failure record; for example, if the successful write is offset=0, length=5, i.e. [0,4], and the failed write is offset=3, length=10, i.e. [3,12], then after the merge the failure record becomes [5,12];
B. Check the result after updating offset and length: if a failed segment remains after the merge, re-write the previously stored failed data: first update the request time, then send the request to the node; the node, on receiving the request, checks the request time of the oid before writing the data; if the request time is older than the current one, it returns failure and does not write, and the write is actually executed only when the request times are equal; if the write succeeds, success is returned and the failure record is deleted; if no failed segment remains after the merge, the failure record is deleted;
7. Each node whose write failed is checked; if it already has an earlier write-failure record, merge the offset, length and data and update the request time;
8. If a node exceeds the threshold of consecutive write failures, that data node is actively removed from the cluster and recovery is triggered.
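Summarising the write path above as code: the request is fanned out to 3 data nodes, and a partial failure is recorded for later repair instead of blocking the client. This is a minimal Python sketch under stated assumptions: the helpers `compute_data_nodes`, `send_write`, `save_failure_record` and `save_failed_data` are illustrative names, and the rule that at least one copy must succeed before the degraded success is returned is an assumption the patent does not spell out:

```python
import time

REPLICAS = 3

def handle_client_write(cluster, cache, local_disk, oid, offset, length, data):
    """Degraded write on the coordinating node A (steps 1-5 above)."""
    nodes = cluster.compute_data_nodes(oid, REPLICAS)        # step 2: pick 3 data nodes
    results = [node.send_write(oid, offset, length, data)    # step 3: fan out and wait
               for node in nodes]

    failed_nodes = [n for n, ok in zip(nodes, results) if not ok]   # step 4: analyse results
    if len(failed_nodes) == len(nodes):
        return False            # assumption: no copy succeeded, so degradation is not attempted

    request_time = time.time()
    for node in failed_nodes:   # step 5: remember what still has to be repaired
        cache.save_failure_record(oid=oid, node=node.name, offset=offset,
                                  length=length, request_time=request_time)
        local_disk.save_failed_data(oid, node.name, offset, data)

    return True                 # step 5: report success to the client despite partial failure
```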
In the above degraded read/write method based on cloud computing distributed block storage, the handling of a write request has a corresponding handling of a read request, with the following steps:
1. Node A receives the read request;
2. Node A first checks whether the data segment to be read exists in A's cache; if it does, the local hard-disk data is read directly; if it does not, 3 nodes are computed from the topology and one of them is selected to read the data.
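A minimal Python sketch of this read path, under the reading that a cache hit means the segment was saved on node A's local disk during a degraded write; `has_failure_record`, `read_local`, `compute_data_nodes` and `send_read` are assumed helper names:

```python
def handle_client_read(cluster, cache, local_disk, oid, offset, length):
    """Degraded read on node A: prefer the locally cached segment, else a replica."""
    if cache.has_failure_record(oid, offset, length):     # segment tracked locally on A
        return local_disk.read_local(oid, offset, length)
    nodes = cluster.compute_data_nodes(oid, 3)            # normal path: compute the 3 replicas
    return nodes[0].send_read(oid, offset, length)        # select one of them to read from
```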
In the above degraded read/write method based on cloud computing distributed block storage, the handling of a write request also has a corresponding exception handling, as follows: if node A crashes abnormally, then when A is restarted the failure records saved on A must be read back into the cache.
The beneficial effects of the present invention are:
By degrading, the present invention guarantees that, when the network or a physical machine is abnormal, both the strong consistency of the multiple copies and normal read/write are preserved; it intelligently detects abnormally behaving nodes in the cluster and actively removes them from the cluster.
The concept of the present invention, its specific structure and the technical effects it produces are further described below with reference to the accompanying drawing, so that the purpose, features and effects of the present invention can be fully understood.
Brief description of the drawings
Fig. 1 is a flow chart of the invention.
Specific embodiment
As shown in Fig. 1, a degraded read/write method based on cloud computing distributed block storage comprises the following steps:
Step 1: when writing one or more copies of an OID fails, store the OID, the copies whose write failed, and related information such as the offset and length of the data being written in a cache, and store the failed data itself on the node's hard disk; if this copy of the OID already has an earlier failure record, OR together the offsets and lengths of the two failure records and merge them into a single failure record;
Step 2: when one copy of the OID is written successfully, first check whether there is an earlier failure record; if there is, AND the successful offset and length with the failed offset and length; if failed data still remains, re-initiate a write request for the failed data; if that write succeeds, delete the earlier failure record and data, otherwise keep them.
In the present embodiment, a write request is processed by the following steps:
1. Node A receives the client write request;
2. 3 data nodes are computed from the current cluster topology;
3. The write request is sent to the 3 data nodes, and node A waits for the results;
4. The return values of the 3 data nodes are analyzed;
5. If a node fails to execute the request for a specific reason, the failure-related information (oid, node, offset, length, request time and so on) is saved in node A's cache, the failure-related information and the data content are saved on the disk of the physical machine where A resides, and a success value is returned to the client;
6. Each node whose write succeeded is checked for an earlier write-failure record; if one exists, the information of the successful write must be applied to it:
A. Update offset and length: merge the successful offset and length with the offset and length of the earlier failure record; for example, if the successful write is offset=0, length=5, i.e. [0,4], and the failed write is offset=3, length=10, i.e. [3,12], then after the merge the failure record becomes [5,12];
B. Check the result after updating offset and length: if a failed segment remains after the merge, re-write the previously stored failed data: first update the request time, then send the request to the node; the node, on receiving the request, checks the request time of the oid before writing the data; if the request time is older than the current one, it returns failure and does not write, and the write is actually executed only when the request times are equal; if the write succeeds, success is returned and the failure record is deleted; if no failed segment remains after the merge, the failure record is deleted;
7. Each node whose write failed is checked; if it already has an earlier write-failure record, merge the offset, length and data and update the request time;
8. If a node exceeds the threshold of consecutive write failures, that data node is actively removed from the cluster and recovery is triggered.
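Two details of the embodiment lend themselves to a short sketch: the request-time check a data node performs before executing a repair write (sub-step B) and the eviction of a node that keeps failing (step 8). The attribute names and the `evict`/`trigger_recovery` helpers below are assumptions for illustration, not terms from the patent:

```python
def execute_repair_write(node_state, oid, request_time, offset, data):
    """Data-node side of sub-step B: refuse stale repair requests.

    The node compares the request time carried by the repair request with the
    request time it has recorded for this oid; an older request is rejected,
    and the write is actually executed only when the two times are equal.
    """
    recorded = node_state.request_times.get(oid, request_time)
    if request_time < recorded:
        return False                                   # stale repair request: do not write
    if request_time == recorded:
        node_state.disk.write(oid, offset, data)       # times match: execute the write
        return True
    return False                                       # newer than recorded: out of sync, refuse


def record_write_failure(cluster, node, threshold=5):
    """Step 8: count consecutive write failures and evict a persistently bad node."""
    node.consecutive_failures += 1
    if node.consecutive_failures >= threshold:         # threshold value is illustrative
        cluster.evict(node)                            # kick the data node out of the cluster
        cluster.trigger_recovery()                     # re-replicate its data elsewhere
```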
In the present embodiment, the handling of a write request has a corresponding handling of a read request, with the following steps:
1. Node A receives the read request;
2. Node A first checks whether the data segment to be read exists in A's cache; if it does, the local hard-disk data is read directly; if it does not, 3 nodes are computed from the topology and one of them is selected to read the data.
In the present embodiment, the handling of a write request also has a corresponding exception handling, as follows: if node A crashes abnormally, then when A is restarted the failure records saved on A must be read back into the cache.
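A minimal sketch of this restart recovery, assuming the failure records were persisted on node A's disk as a JSON file; the file path, format and `save_failure_record` helper are illustrative assumptions, not specified by the patent:

```python
import json
import os

FAILURE_RECORD_FILE = "/var/lib/blockstore/failure_records.json"   # illustrative path

def reload_failure_records(cache, path=FAILURE_RECORD_FILE):
    """On restart of node A, read the persisted failure records back into the
    in-memory cache so degraded repair can continue where it left off."""
    if not os.path.exists(path):
        return 0                               # nothing was pending before the crash
    with open(path, "r", encoding="utf-8") as f:
        records = json.load(f)                 # list of {oid, node, offset, length, request_time}
    for record in records:
        cache.save_failure_record(**record)    # same helper used on the write path
    return len(records)
```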
The preferred embodiments of the present invention have been described in detail above. It should be appreciated that a person skilled in the art can, following the concept of the present invention, make many modifications and variations without creative work. Therefore, any technical solution that a person skilled in the art can obtain on the basis of the prior art through logical analysis, reasoning or limited experimentation under the concept of the present invention shall fall within the scope of protection defined by the claims.
Claims (4)
1. A degraded read/write method based on cloud computing distributed block storage, characterized by comprising the following steps:
Step 1: when writing one or more copies of an OID fails, store the OID, the copies whose write failed, and related information such as the offset and length of the data being written in a cache, and store the failed data on the node's hard disk; if this copy of the OID already has an earlier failure record, OR together the offsets and lengths of the two failure records and merge them into a single failure record;
Step 2: when one copy of the OID is written successfully, first check whether there is an earlier failure record; if there is, AND the successful offset and length with the failed offset and length; if failed data still remains, re-initiate a write request for the failed data; if that write succeeds, delete the earlier failure record and data, otherwise keep them.
2. The degraded read/write method based on cloud computing distributed block storage according to claim 1, characterized in that a write request is processed by the following steps:
1. Node A receives the client write request;
2. 3 data nodes are computed from the current cluster topology;
3. The write request is sent to the 3 data nodes, and node A waits for the results;
4. The return values of the 3 data nodes are analyzed;
5. If a node fails to execute the request for a specific reason, the failure-related information (oid, node, offset, length, request time and so on) is saved in node A's cache, the failure-related information and the data content are saved on the disk of the physical machine where A resides, and a success value is returned to the client;
6. Each node whose write succeeded is checked for an earlier write-failure record; if one exists, the information of the successful write must be applied to it:
A. Update offset and length: merge the successful offset and length with the offset and length of the earlier failure record; for example, if the successful write is offset=0, length=5, i.e. [0,4], and the failed write is offset=3, length=10, i.e. [3,12], then after the merge the failure record becomes [5,12];
B. Check the result after updating offset and length: if a failed segment remains after the merge, re-write the previously stored failed data: first update the request time, then send the request to the node; the node, on receiving the request, checks the request time of the oid before writing the data; if the request time is older than the current one, it returns failure and does not write, and the write is actually executed only when the request times are equal; if the write succeeds, success is returned and the failure record is deleted; if no failed segment remains after the merge, the failure record is deleted;
7. Each node whose write failed is checked; if it already has an earlier write-failure record, merge the offset, length and data and update the request time;
8. If a node exceeds the threshold of consecutive write failures, that data node is actively removed from the cluster and recovery is triggered.
3. The degraded read/write method based on cloud computing distributed block storage according to claim 2, characterized in that the handling of a write request has a corresponding handling of a read request, with the following steps:
1. Node A receives the read request;
2. Node A first checks whether the data segment to be read exists in A's cache; if it does, the local hard-disk data is read directly; if it does not, 3 nodes are computed from the topology and one of them is selected to read the data.
4. The degraded read/write method based on cloud computing distributed block storage according to claim 2, characterized in that the handling of a write request also has a corresponding exception handling, as follows: if node A crashes abnormally, then when A is restarted the failure records saved on A must be read back into the cache.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910142761.3A CN109947584A (en) | 2019-02-26 | 2019-02-26 | A degraded read/write method based on cloud computing distributed block storage |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109947584A (en) | 2019-06-28 |
Family
ID=67007071
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910142761.3A Pending CN109947584A (en) | 2019-02-26 | 2019-02-26 | A kind of degrade reading/writing method based on the storage of cloud computing distributed block |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109947584A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7500020B1 (en) * | 2003-12-31 | 2009-03-03 | Symantec Operating Corporation | Coherency of replicas for a distributed file sharing system |
US20150278019A1 (en) * | 2014-03-26 | 2015-10-01 | International Business Machines Corporation | Efficient handing of semi-asynchronous raid write failures |
CN106933494A (en) * | 2015-12-31 | 2017-07-07 | 伊姆西公司 | Mix the operating method and device of storage device |
WO2017167056A1 (en) * | 2016-03-29 | 2017-10-05 | 阿里巴巴集团控股有限公司 | Virtual machine data storage method and apparatus |
CN106445419A (en) * | 2016-09-28 | 2017-02-22 | 乐视控股(北京)有限公司 | Data storage method and device and distributed storage system |
CN107045426A (en) * | 2017-04-14 | 2017-08-15 | 北京粉笔蓝天科技有限公司 | A kind of many copy read methods and system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190628 |