CN108509156B - Data reading method, device, equipment and system - Google Patents


Info

Publication number: CN108509156B
Authority: CN (China)
Prior art keywords: data, storage device, target storage, check code, target
Legal status: Active
Application number: CN201810300369.2A
Other languages: Chinese (zh)
Other versions: CN108509156A
Inventor: 马文霜
Current Assignee: Tencent Technology Shenzhen Co Ltd; Tencent Cloud Computing Beijing Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd; Tencent Cloud Computing Beijing Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd and Tencent Cloud Computing Beijing Co Ltd
Priority to CN201810300369.2A
Publication of CN108509156A
Application granted
Publication of CN108509156B

Classifications

    • G06F 3/0614 — Interfaces specially adapted for storage systems: improving the reliability of storage systems
    • G06F 3/0638 — Interfaces specially adapted for storage systems: organizing or formatting or addressing of data
    • G06F 3/0655 — Interfaces specially adapted for storage systems: vertical data movement, i.e. input-output transfer between one or more hosts and one or more storage devices
    • G06F 3/067 — Interfaces specially adapted for storage systems: distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 9/45558 — Hypervisors; virtual machine monitors: hypervisor-specific management and integration aspects
    • G06F 2009/45579 — Hypervisor I/O management, e.g. providing access to device drivers or storage

Abstract

The embodiments of the application disclose a data reading method, apparatus, device and system, relating to the field of storage technology. The method is applied to a host machine of a virtual machine and includes the following steps: each time a read request from the virtual machine requesting to read data from a storage device cluster is obtained, reading first data from a target location of a target storage device of the storage device cluster according to the read request; detecting whether the first data is consistent with second data previously written to the target location; and if the first data is consistent with the second data, responding to the read request with the first data. With the technical scheme provided by the embodiments of the application, the correctness of the data is checked each time it is read from a storage device, ensuring that the data read is the data previously written, which helps discover data errors in time and avoids situations where the data read differs from the data written.

Description

Data reading method, device, equipment and system
Technical Field
The embodiment of the application relates to the technical field of storage, in particular to a data reading method, device, equipment and system.
Background
With the development of cloud storage technology, CBS (Cloud Block Storage, also called cloud disk) serves as a stable, reliable, low-latency and scalable persistent block storage device that provides block-level data storage services for the CVM (Cloud Virtual Machine).
Considering that data stored in the CBS may be lost or corrupted due to software or hardware faults, in the related art a scheme for checking the correctness of the stored data is configured on the CBS side. For example, the CBS periodically detects whether problems have occurred in the data it stores and, when a problem is found, repairs the data using snapshots and logs, so as to ensure the correctness of the stored data as much as possible.
In the related art, because the data correctness check is performed entirely on the CBS side, data problems are discovered with delay, and situations where the data read is inconsistent with the data written may still occur.
Disclosure of Invention
The embodiments of the application provide a data reading method, apparatus, device and system, which can be used to solve the problem in the related art of read-write inconsistency caused by delayed discovery of data problems. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a data reading method, which is applied to a host of a virtual machine, and the method includes:
each time a read request from the virtual machine requesting to read data from a storage device cluster is obtained, reading first data from a target location of a target storage device of the storage device cluster according to the read request;
detecting whether the first data is consistent with second data written into the target position;
and if the first data is consistent with the second data, responding to the read request with the first data.
In another aspect, an embodiment of the present application provides a data reading apparatus, which is applied to a host of a virtual machine, the apparatus including:
a data reading module, configured to read, each time a read request from the virtual machine requesting to read data from a storage device cluster is obtained, first data from a target location of a target storage device of the storage device cluster according to the read request;
a data detection module, configured to detect whether the first data is consistent with second data written to the target location;
and a request response module, configured to respond to the read request with the first data when the first data is consistent with the second data.
In another aspect, an embodiment of the present application provides a data storage system, where the system includes: the system comprises a virtual machine, a host machine of the virtual machine and a storage device cluster;
the virtual machine is used for generating a read request and submitting the read request to the host machine, wherein the read request is used for requesting to read data from a target position of a target storage device of the storage device cluster;
the host is configured to read, each time a read request from the virtual machine is obtained, first data from the target location of the target storage device according to the read request; detect whether the first data is consistent with second data written to the target location; and respond to the read request with the first data when the first data is consistent with the second data.
In yet another aspect, an embodiment of the present application provides a computer device, which includes a processor and a memory, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the memory and is loaded and executed by the processor to implement the data reading method of the above aspect.
In a further aspect, an embodiment of the present application provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the data reading method in the above aspect.
In still another aspect, the present application provides a computer program product which, when executed, performs the data reading method of the above aspect.
With the technical scheme provided by the embodiments of the application, the correctness of the data is checked each time it is read from a storage device, ensuring that the data read is the data previously written, which helps discover data errors in time and avoids situations where the data read differs from the data written.
Drawings
FIG. 1 is a schematic diagram of a data storage system provided by one embodiment of the present application;
FIG. 2 is a flow chart of a data reading method provided by an embodiment of the present application;
FIG. 3 is a flow chart of a data reading method according to another embodiment of the present application;
FIG. 4 is a diagram illustrating a data writing process according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a periodic verification process provided by an embodiment of the present application;
FIG. 6 is an architecture diagram of a data storage system provided by one embodiment of the present application;
FIG. 7A is a flowchart illustrating the processing of a read request by the real-time verification module;
FIG. 7B is a flowchart illustrating the processing of a write request by the real-time verification module;
FIG. 8 is a schematic diagram of the real-time verification module generating and verifying a check code;
FIG. 9 is a block diagram of a data reading device provided in one embodiment of the present application;
fig. 10 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of a data storage system according to an embodiment of the present application is shown. The data storage system may be referred to as a cloud storage system. The system may include: CVM 11, host 12, and storage device cluster 13.
The CVM 11, also called a cloud server, is a high-performance, highly stable cloud virtual machine that provides elastically resizable computing resources in the cloud storage system. In general, there are multiple CVMs 11.
The host 12 is the host machine of the CVM 11. In general, there are also multiple hosts 12, deployed in a distributed manner. One or more CVMs 11 may be deployed on each host 12.
The storage device cluster 13 is used for providing data storage service for the CVM 11 and supporting the CVM 11 to read and write data. A plurality of storage devices 131 may be included in the storage device cluster 13.
Optionally, the storage device 131 is a CBS (cloud disk), which may serve as a system disk or a data disk of the CVM 11. The CVM 11 interacts with the CBS through disk read and write operations. The CBS provides the CVM 11 with an efficient and reliable storage device: a highly available, highly reliable, low-cost, customizable block storage device that can be used as an independent, scalable hard disk of the CVM 11. The CBS provides block-level data storage and employs a multi-copy distributed mechanism to guarantee data reliability for the CVM 11. The CBS supports automatic replication within an availability zone and backs up data on different devices, which avoids data loss caused by single-device failure and improves the availability and durability of the data. By performance tier, cloud disks can be classified into standard cloud disks, high-performance cloud disks, SSD (Solid State Drive) cloud disks, and the like.
Referring to fig. 2, a flowchart of a data reading method according to an embodiment of the present application is shown. The method may be applied in a host 12 of the data storage system shown in fig. 1 to process read requests issued by the CVM 11. The method may include the steps of:
step 201, each time a read request from the virtual machine requesting to read data from the storage device cluster is obtained, read first data from a target location of a target storage device of the storage device cluster according to the read request.
The read request is submitted to the host 12 by the virtual machine; for example, the virtual machine submits the read request it generates to a request queue, and the host 12 then obtains the read request from the request queue and processes it. In the embodiments of the present application, the virtual machine is configured to initiate data read/write requests to storage devices in the storage device cluster; for example, the virtual machine may be the CVM 11 in the data storage system shown in fig. 1.
The read request requests that data be read from the storage device cluster; for example, it requests that data be read from a target location of a target storage device of the storage device cluster. The target storage device may be any storage device in the storage device cluster 13. When the storage device cluster 13 is a cloud disk cluster, the target storage device may be a target cloud disk, which may be any cloud disk in the cluster.
The target location indicates the storage location, within the target storage device, of the data requested to be read. Optionally, the target location is represented using a location parameter and a length parameter. The location parameter indicates the starting position of the requested data in the target storage device and may be expressed as an offset; the length parameter indicates the length of the requested data and may be expressed as a length. In one example, the read request includes identification information of the target storage device, the location parameter, and the length parameter.
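The (offset, length) representation described above can be illustrated with a minimal sketch; the class and field names here are hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ReadRequest:
    device_id: str  # identification information of the target storage device
    offset: int     # starting position of the requested data, in bytes
    length: int     # length of the requested data, in bytes

# A request for 4096 bytes starting at byte 8192 of a device "cbs-001"
req = ReadRequest(device_id="cbs-001", offset=8192, length=4096)
# The target location covers bytes [offset, offset + length)
assert (req.offset, req.offset + req.length) == (8192, 12288)
```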
Optionally, when the storage devices in the storage device cluster 13 are block devices, that is, when they store data in fixed-size blocks, the read request requests that at least one data block be read from the target location of the target storage device. The size of each data block is determined by the storage device; for example, each data block is 4 KB (kilobytes).
In one example, the host 12 sends a data reading request to the target storage device according to identification information of the target storage device carried in the reading request, where the data reading request may carry a location parameter and a length parameter, and after receiving the data reading request, the target storage device obtains first data stored at the target location according to the location parameter and the length parameter, and sends the first data to the host 12.
In another example, the host 12 forwards the read request to a host node device in the storage device cluster 13, the host node device sends a data read request to the target storage device according to identification information of the target storage device carried in the read request, where the data read request may carry a location parameter and a length parameter, and after receiving the data read request, the target storage device obtains first data stored at a target location according to the location parameter and the length parameter, and sends the first data to the host node device, and then the host node device sends the first data to the host 12.
Step 202, detecting whether the first data is consistent with the second data written into the target position.
After reading the first data, the host 12 detects whether the read first data is consistent with the second data written at the target location of the target storage device; that is, it checks the correctness of the first data, thereby avoiding situations where the data read differs from the data written.
In a possible embodiment, the step 202 includes the following sub-steps:
1. calculating a check code of the first data;
optionally, the host 12 calculates the check code of the first data by using a preset algorithm, which is the same as the algorithm used for calculating the check code when writing data into the target storage device. Illustratively, the predetermined algorithm is a CRC (Cyclic Redundancy Check) algorithm. The length of the CRC algorithm support information field and the check field may be arbitrarily selected.
In one example, for each first data block of a preset length included in the first data, the host 12 calculates the check code corresponding to that block, thereby obtaining the check code of the first data. The first data comprises at least one first data block. The preset length may be determined according to the sector division of the storage device; for example, the preset length is 4 KB, and a check code of 1 byte is generated for each 4 KB data block. The host 12 concatenates the check codes of the first data blocks in order to obtain the check code of the first data.
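The per-block check-code scheme described above can be sketched as follows. This is an illustrative model only: the patent does not fix a concrete CRC variant, so the sketch truncates Python's standard CRC-32 to one byte to stand in for the 1-byte-per-4-KB code, and all names are hypothetical.

```python
import zlib

BLOCK_SIZE = 4096  # preset length: 4 KB per data block

def block_check_code(block: bytes) -> bytes:
    # Stand-in for the 1-byte check code: low byte of CRC-32
    # (the patent does not specify the exact CRC variant).
    return bytes([zlib.crc32(block) & 0xFF])

def data_check_code(data: bytes) -> bytes:
    # Split the data into 4 KB blocks and concatenate the per-block
    # codes in order, mirroring the splicing described above.
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return b"".join(block_check_code(b) for b in blocks)

second_data = bytes(8192)                  # two 4 KB blocks previously written
stored_code = data_check_code(second_data)
assert len(stored_code) == 2               # one byte per 4 KB block

# On read, recompute the code of the first data and compare with the stored one:
first_data = bytes(8192)                   # what was read back from the device
assert data_check_code(first_data) == stored_code  # consistency check passes
```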
2. Acquiring a check code of second data from pre-stored check data;
optionally, the host 12 obtains, according to the target position of the data requested to be read, a check code corresponding to the target position from pre-stored check data, that is, a check code generated correspondingly when the second data is written in the target position before.
3. Detecting whether the check code of the first data is the same as the check code of the second data;
4. if the check code of the first data is the same as the check code of the second data, determining that the first data is consistent with the second data;
5. and if the check code of the first data is different from the check code of the second data, determining that the first data is inconsistent with the second data.
If the check code of the first data is the same as the check code of the second data, it indicates that the first data read by the host 12 is the second data written in the target position before, and the read first data passes the correctness check; if the check code of the first data is different from the check code of the second data, it indicates that the first data read by the host 12 is not the second data previously written at the target location, and the read first data fails the correctness check.
It should be noted that, in the embodiments of the present application, the host 12 performs this correctness check every time it receives any read request sent by any of the virtual machines deployed on it, thereby achieving real-time checking and avoiding delayed discovery of data errors.
And step 203, if the first data is consistent with the second data, responding to the read request by adopting the first data.
In the case that the read first data passes the correctness check, the host 12 sends the first data to the virtual machine in response to the read request.
In summary, with the technical scheme provided by the embodiments of the application, the correctness of the data is checked each time it is read from a storage device, ensuring that the read data is the data previously written, which helps discover data errors in time and avoids situations where the data read differs from the data written.
In an alternative embodiment provided based on the embodiment of fig. 2, as shown in fig. 3, the step 202 further includes the following steps:
and step 204, if the first data is inconsistent with the second data, stopping the read-write operation related to the target storage device.
When the read data fails the correctness check, the host 12 immediately stops read-write operations related to the target storage device; that is, read-write requests from all devices to the target storage device are suspended and no longer processed, preventing further deterioration of the correctness of the data stored on the target storage device.
Step 205, sending a data repair request to the target storage device.
The data repair request is used to trigger the target storage device to repair the first data at the target location into the second data according to the snapshot and the log. Optionally, the data repair request includes the identification information of the target storage device, the location parameter, the length parameter, and a flag indicating that a data repair operation is to be performed; for example, the flag may be added to the read request to generate the data repair request.
A snapshot is a fully available copy of a given data set that includes an image of the corresponding data at some point in time (the point at which the copy began); it can be regarded as a duplicate of the data it represents. The target storage device may generate a snapshot of the data it stores at regular intervals, for example every hour. Snapshots mainly serve data backup and recovery: when the storage device is damaged, a snapshot enables fast recovery of the data to its state at an available point in time. The log records each read-write operation of the target storage device and is used to guarantee the transactionality and consistency of each data change; it may be, for example, a binlog.
After receiving the data repair request, the target storage device determines that the first data stored at the target location needs to be repaired. The target storage device rolls the stored data back to its state before data was written at the target location according to the recorded snapshot, and then performs the recorded read-write operations in order according to the log, so that the correct second data is written at the target location again, achieving data recovery.
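The repair step above can be modeled in miniature: roll the storage back to the snapshot state, then replay the logged writes in order. This is a toy sketch under the assumption that locations and their data can be modeled as dictionary entries; none of the names come from the patent's actual implementation.

```python
def repair_from_snapshot_and_log(snapshot: dict, log: list) -> dict:
    """Restore storage to the snapshot state, then replay logged writes."""
    storage = dict(snapshot)            # roll back to the snapshot state
    for location, data in log:          # replay each recorded write in order
        storage[location] = data
    return storage

# Snapshot taken before the second data was written at location 0:
snapshot = {0: b"stale", 4096: b"other"}
# Log of write operations recorded since the snapshot:
log = [(0, b"second data"), (4096, b"rewritten block")]

repaired = repair_from_snapshot_and_log(snapshot, log)
assert repaired[0] == b"second data"    # correct data restored at the target
```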
It should be added that the embodiments of the present application do not limit the execution order of step 204 and step 205: step 205 may be executed before, after, or simultaneously with step 204.
Optionally, when the first data is inconsistent with the second data, the host 12 may also issue an alert to inform operation and maintenance personnel and/or the user that a data error exists in the target storage device.
In the embodiments of the application, because the correctness of the data is checked every time it is read from the storage device, further writes of erroneous data derived from erroneous data can be avoided. If other erroneous data were written based on erroneous data, the content recorded in the storage device's log would no longer be trustworthy, and correct data could not be recovered from an untrusted log. With the scheme provided by the embodiments of the application, this situation does not occur, so the usability of the log is preserved and correct data can be recovered from it.
Step 206, receiving the second data sent by the target storage device.
Optionally, after receiving the second data sent by the target storage device, the host 12 detects whether the received second data is consistent with the second data written to the target location; if so, step 207 is performed; if not, an alarm is raised. This consistency detection is the same as the method described in step 202 of the embodiment of fig. 2 and is not repeated here.
Step 207, respond to the read request with the second data.
The host 12 sends the second data to the virtual machine in response to the read request.
In summary, with the technical scheme provided by the embodiments of the application, when an error is found in data stored in a storage device, a repair operation is triggered immediately, and the repaired, correct data is then used to respond to the read request, ensuring the correctness of data reading.
In another alternative embodiment provided based on the embodiment of fig. 2 or fig. 3, as shown in fig. 4, the data writing process includes the following steps:
in step 401, a write request from a virtual machine is obtained.
The write request is submitted to the host 12 by the virtual machine, for example, the virtual machine submits the write request generated by the virtual machine to the request queue, and then the host 12 obtains the write request from the request queue and processes the write request. In practical applications, the request queue for storing the read request and the request queue for storing the write request may share the same queue, or may be two different queues, which is not limited in the embodiment of the present application. The host 12 may asynchronously process the read and write requests in the queue.
The write request requests that second data be written at a target location of a target storage device. Optionally, the write request includes identification information of the target storage device, a location parameter, a length parameter, and the second data to be written. In addition, the embodiments of the present application describe the technical solution using the above write request only as an example; the host 12 may process any write request received from any virtual machine in a similar way, using the following method flow.
Optionally, when the storage devices in the storage device cluster 13 are block devices, the second data requested to be written includes at least one data block.
Step 402, writing second data at a target location of a target storage device according to the write request.
In one example, the host 12 sends a data write request to the target storage device according to the identification information of the target storage device carried in the write request, where the data write request may carry a location parameter, a length parameter, and second data, and the target storage device writes the second data at the target location according to the location parameter and the length parameter after receiving the data write request.
In another example, the host 12 forwards the write request to a host node device in the storage device cluster 13, and the host node device sends a data write request to the target storage device according to the identification information of the target storage device carried in the write request, where the data write request may carry a location parameter, a length parameter, and second data, and after receiving the data write request, the target storage device writes the second data at the target location according to the location parameter and the length parameter.
In addition, for the case of storage in a block device, since the unit of data reading and writing is a data block, the step 402 may include the following sub-steps:
1. detecting whether data is written at a target position of a target storage device;
2. if the data is written in the target position of the target storage device, reading third data from the target position of the target storage device; detecting whether the read third data is consistent with the previously written data; if the read third data is consistent with the previously written data, determining fourth data actually written into the target position of the target storage device according to the second data and the third data; writing fourth data at the target location of the target storage device;
3. if no data is written at the target location of the target storage device, writing second data at the target location of the target storage device.
For example, suppose the second data actually to be written by the write request is 1 KB and each data block is 4 KB. The host 12 detects whether data has been written at the target location. When data has been written there (for example, 1 KB of data), the host 12 reads that 1 KB of third data from the target location, and after the third data passes the correctness check, merges the 1 KB of second data and the 1 KB of third data into one data block, zero-fills the remaining 2 KB of the block, and then writes the data block to the target location. The specific process of performing the correctness check on the third data is the same as that described above for the first data and is not repeated here.
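The read-modify-write step above can be sketched as follows — a minimal Python illustration assuming a 4 KB block size; the function name and the zero-padding policy come from the example in this paragraph, not from any concrete implementation:

```python
BLOCK_SIZE = 4096  # assumed block size from the example above (4 KB)

def merge_into_block(existing: bytes, new_data: bytes) -> bytes:
    """Combine previously written data with the new partial write and
    zero-fill the remainder so the result is exactly one data block."""
    merged = existing + new_data
    if len(merged) > BLOCK_SIZE:
        raise ValueError("combined data exceeds one block")
    # Pad the unused tail of the block with zero bytes.
    return merged + b"\x00" * (BLOCK_SIZE - len(merged))
```

With 1 KB of third data and 1 KB of second data, this yields a full 4 KB block whose last 2 KB are zeros, matching the example above.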
In step 403, a check code of the second data is calculated.
Optionally, the host 12 calculates the check code of the second data by using a preset algorithm, for example, the preset algorithm is a CRC algorithm. In an example, for each second data block with a preset length included in the second data, the host 12 calculates a check code corresponding to each second data block, respectively, to obtain a check code of the second data. Wherein the second data comprises at least one second data block. The host 12 calculates the check code of the second data in the same way as the check code of the first data, as described above.
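The per-block check code calculation can be illustrated with a short Python sketch. The text does not fix a particular CRC variant or the preset block length, so `zlib.crc32` and a 4 KB chunk are assumptions here; taking the low byte mirrors the 1-byte-per-4 KB layout shown later in fig. 8:

```python
import zlib

CHUNK = 4096  # assumed "preset length" of each data block

def check_codes(data: bytes) -> list:
    """Compute one check code per fixed-length data block.
    zlib.crc32 stands in for the unspecified CRC variant; masking to
    the low byte yields a 1-byte code per block."""
    return [zlib.crc32(data[i:i + CHUNK]) & 0xFF
            for i in range(0, len(data), CHUNK)]
```

The same function serves for both the first data (on read) and the second data (on write), since the text states both use the same calculation.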
It should be added that the execution order of step 402 and step 403 is not limited in this embodiment of the application: step 403 may be performed before, after, or simultaneously with step 402. Optionally, the host 12 calculates the check code of the second data only after confirming that the second data was written successfully, so as to avoid unnecessary calculation.
Step 404, saving the check code of the second data.
The host 12 saves the check code of the second data into the check data. Optionally, the check data is stored in a memory file system of the host 12, where it can be retained long-term or permanently.
Optionally, the check data is also stored in a hot cache of the host 12, which serves as a front-end cache for the memory file system and provides faster reads than the memory file system.
In one example, the host 12 saves the check code of the second data in both the memory file system and the hot cache. Subsequently, when the check code of the second data needs to be read, the hot cache is searched first for fast access; if the hot cache does not contain the check code of the second data, it is read from the memory file system.
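The two-tier lookup described above can be sketched as follows; the plain dictionaries standing in for the hot cache and the memory file system are illustrative only:

```python
def lookup_check_code(block_id, hot_cache: dict, memory_fs: dict):
    """Try the hot cache first for speed, fall back to the memory file
    system, and warm the hot cache on a miss so later reads are fast."""
    if block_id in hot_cache:
        return hot_cache[block_id]
    code = memory_fs.get(block_id)
    if code is not None:
        hot_cache[block_id] = code  # promote the entry into the hot cache
    return code
```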
In addition, the check data stored in the hot cache can be cleaned up periodically to avoid occupying excessive cache space. In one example, when the data stored in the hot cache reaches a preset threshold, it is cleaned up using an LRU (Least Recently Used) algorithm. In another example, every predetermined time period, stored data in the hot cache that was neither read nor written during the most recent time period is flushed. Both approaches clear check data that is unlikely to be used and free cache space. In practice, the check data in the hot cache may be cleaned up periodically using either approach, a combination of the two, or a combination of at least one of them with other approaches.
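A threshold-triggered LRU cleanup of the hot cache can be sketched in Python; the `HotCache` class and its entry-count capacity bound are illustrative, not part of the described scheme:

```python
from collections import OrderedDict

class HotCache:
    """Capacity-bounded check-code cache with LRU eviction."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._entries = OrderedDict()  # insertion order tracks recency

    def put(self, block_id, code):
        if block_id in self._entries:
            self._entries.move_to_end(block_id)
        self._entries[block_id] = code
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict least recently used

    def get(self, block_id):
        if block_id not in self._entries:
            return None
        self._entries.move_to_end(block_id)  # mark as recently used
        return self._entries[block_id]
```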
In the embodiment of the present application, the data writing process is described only by the example of writing the second data at the target location of the target storage device. The above write procedure applies equally when writing data to any location of any storage device in the storage device cluster 13.
In summary, according to the technical scheme provided by the embodiment of the application, when data is written into the storage device, the corresponding check code is generated and stored, so that correctness check is performed in a subsequent data reading process, and correctness of data reading is ensured.
In another alternative embodiment provided based on the embodiments of fig. 2, fig. 3 or fig. 4, a mechanism for periodically checking the data stored in the storage device is also provided. As shown in fig. 5, this embodiment takes the periodic verification of data stored in a target storage device as an example; the same periodic verification mechanism applies to any storage device in the storage device cluster 13. The periodic verification process may include the following steps:
step 501, when the verification period of the target storage device is reached, reading data to be verified from the target storage device.
When the verification period of the target storage device is reached, the host 12 reads the data to be verified from the target storage device. The verification period may be preset according to actual requirements, for example, 12 hours, 1 day, or 1 week, which is not limited in this embodiment of the application. In addition, the extraction order of the data to be verified may be determined using an LRU algorithm, so that data in the target storage device that has not been accessed for a long time is verified first, because recently accessed data has already been verified by the real-time checking mechanism provided in the embodiments of fig. 2 and fig. 3.
In addition, the host 12 may dynamically configure configuration parameters for periodic verification, including but not limited to whether all data is verified, a verification period, a criterion for data that is not accessed for a long time, and the like, and the host 12 implements a periodic verification mechanism according to the configuration parameters.
Optionally, when the verification period of the target storage device is reached, the host 12 obtains the load of the target storage device and detects whether it is smaller than a preset threshold. If the load is smaller than the preset threshold, the host 12 reads the data to be verified from the target storage device and executes the subsequent periodic verification steps. If the load is not smaller than the preset threshold, the periodic verification process for the data stored in the target storage device is not executed, or is stopped. The load of the target storage device may be determined according to the number of read-write requests or read-write operations directed to it. In this way, the storage device is periodically verified only when it is idle for reads and writes, which prevents the periodic verification from interfering with normal read-write operations.
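The load-gated periodic verification of steps 501–503 can be sketched as follows; the `device` interface (`load()`, `blocks_by_last_access()`) and the `verify` callback are hypothetical names introduced only for illustration:

```python
def run_periodic_check(device, threshold: int, verify):
    """Sketch of the periodic scrub: skip the cycle entirely when the
    device is busy, otherwise verify least-recently-accessed data first
    and collect the block ids that fail verification."""
    if device.load() >= threshold:
        return []            # device busy: postpone this cycle
    failures = []
    # blocks_by_last_access() is assumed to yield (block_id, data)
    # pairs, oldest access first (LRU order).
    for block_id, data in device.blocks_by_last_access():
        if not verify(block_id, data):
            failures.append(block_id)  # candidate for repair (step 503)
    return failures
```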
Step 502, detecting whether the data to be verified is consistent with the data written into the target storage device.
The host 12 may calculate a check code of the data to be verified, obtain the check code of the data written into the target storage device from the pre-stored check data, and then detect whether the two check codes are the same. If they are the same, the data to be verified is the same as the data written into the target storage device and is correct; if not, the data to be verified differs from the data written into the target storage device and contains an error.
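The comparison in step 502 reduces to recomputing the check code of the data read back and comparing it with the code saved at write time. A minimal sketch, again assuming a CRC-based 1-byte check code as in fig. 8:

```python
import zlib

def verify_block(data: bytes, stored_code: int) -> bool:
    """Recompute the check code of data read back and compare it with
    the code saved at write time; a mismatch signals corruption."""
    return (zlib.crc32(data) & 0xFF) == stored_code
```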
In step 503, if the data to be verified is inconsistent with the data written into the target storage device, a repair operation on the data to be verified is triggered.
In the embodiment of the present application, the specific flow of the repair operation on the data to be verified is not limited. For example, the repair operation may include stopping read-write operations related to the target storage device, sending an alarm, triggering the target storage device to repair the data to be verified according to the snapshot and the log, or notifying operation and maintenance personnel to perform manual repair.
In summary, according to the technical scheme provided by the embodiment of the application, the correctness of the data stored in the storage device can be further ensured by periodically checking the data stored in the storage device.
Referring to FIG. 6, an architecture diagram corresponding to the data storage system provided in FIG. 1 is illustrated.
The operational states of the CVM 11 include a user mode and a kernel mode. The CVM 11 normally works in user mode and switches to kernel mode when data needs to be read from or written to the storage device cluster 13. The CVM 11 may send a read request or a write request to the host 12 through the first interface. Optionally, the CVM 11 sends the read request or write request to the host 12 through a virtio driver.
The temporary storage component of the host 12 is used for temporarily storing the read request or write request issued by the CVM 11. Optionally, the temporary storage component is a vring component. When the host 12 is in user mode, it acquires the read request or write request from the temporary storage component and performs the corresponding processing.
As shown in fig. 6, when the host 12 acquires from the vring component a read request for reading data from a target location of a target storage device of the storage device cluster 13, the host 12 triggers, through the real-time check module, the block device driver to read the first data from the storage device cluster 13. Optionally, the block device driver is a qemu driver. After the block device driver is triggered, the host 12 switches from user mode to kernel mode, the block device driver submits the read request to the general block layer for processing, and the general block layer generates a data read request according to the read request and sends it to the target storage device through the second interface. Optionally, the second interface is an iSCSI driver. Optionally, the target storage device is a CBS.
The target storage device may include a proxy layer, an access layer, and a cell layer. The proxy layer is used for interacting with the host 12, and includes receiving a read-write request sent by the host 12 and responding to the read-write request sent by the host. In addition, the proxy layer is also used for maintaining snapshots and logs and executing data recovery operations. The access layer is used for realizing the butt joint between the proxy layer and the cell layer and realizing the access to the data stored in the cell layer. The cell layer is used for storing data.
Still taking the above read-request processing flow as an example: after the proxy layer of the target storage device receives the data read request, it reads the first data from the cell layer and sends the first data to the host 12. After receiving the first data, the real-time check module of the host 12 generates a check code of the first data, acquires from the hot cache or the memory file system the check code of the second data previously written at the target location of the target storage device, and detects whether the two check codes are consistent. If they are consistent, the host 12 sends the first data to the CVM 11; if not, the host 12 triggers the target storage device to perform a data repair operation, then acquires the correct second data and sends it to the CVM 11.
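The read path just described — compare check codes, return the data on a match, trigger repair on a mismatch — can be condensed into a small sketch; `compute` and `repair` are illustrative callbacks standing in for the real-time check module and the snapshot/log-based recovery:

```python
def handle_read(read_back: bytes, stored_code: int, compute, repair):
    """Return the data read back if its check code matches the stored
    one; otherwise trigger the repair callback (snapshot/log recovery
    on the storage side) and return the repaired data."""
    if compute(read_back) == stored_code:
        return read_back          # first data is correct: respond with it
    return repair()               # mismatch: obtain the correct second data
```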
As shown in fig. 6, when the host 12 obtains a write request from the vring component, the host 12 writes the corresponding data to the storage device cluster 13, generates a check code of the data through the real-time check module, and stores the check code in the memory file system.
In addition, the host 12 is configured with a periodic check module for implementing the periodic verification process described above. Optionally, the periodic check module runs as an independent process in the host 12. On the one hand, this reduces the changes required to the qemu driver, since only the real-time check module needs to be added to it; on the other hand, the periodic verification mechanism can still run when the CVM 11 is stopped.
Referring collectively to fig. 7A, a flow of execution of the real-time check module in processing a read request is shown. The real-time verification module reads data from the target storage device according to the read request (71); calculating a check code (72) of the read data; detecting whether the read data is correct or not according to the check code (73); if the data is correct, feeding back the read data to the CVM 11 (74); and if not, sending an alarm, stopping the read-write operation related to the target storage equipment, and waiting for the target storage equipment to repair the data by using the snapshot and the log (75).
Referring collectively to FIG. 7B, the flow of execution of the real-time check module in processing a write request is shown. After receiving the write request, the real-time check module writes the data to the target storage device (76); generates a check code for the data (77); and stores the check code in the hot cache and the memory file system (78).
Optionally, the real-time check module may generate the check code of the data using a CRC algorithm; as shown in fig. 8, a 1-byte check code is generated for every 4 KB of data. The real-time check module generates a check code each time data is written to the storage device and verifies the check code each time data is read from the storage device.
The following are embodiments of the apparatus and system of the present application, and for details not disclosed in the embodiments of the apparatus and system of the present application, reference is made to embodiments of the method of the present application.
Referring to fig. 9, a block diagram of a data reading apparatus according to an embodiment of the present application is shown. The device has the functions of realizing the method examples, and the functions can be realized by hardware or by hardware executing corresponding software. The apparatus may include: a request acquisition module 910, a data reading module 920, a data detection module 930, and a request response module 940.
A request acquisition module 910, configured to obtain a read request from a virtual machine, where the read request is used to request to read data from a storage device cluster.
A data reading module 920, configured to read, when a read request from the virtual machine is acquired each time, first data from a target location of a target storage device of the storage device cluster according to the read request.
A data detecting module 930, configured to detect whether the first data is consistent with the second data written into the target location.
A request response module 940, configured to respond to the read request with the first data when the first data is consistent with the second data.
In summary, according to the technical scheme provided by the embodiment of the application, each time data is read from the storage device, a correctness check is performed on the read data to ensure that it is the data previously written. This helps detect data errors in time and avoids the situation of reading data other than what was written.
In an alternative embodiment provided based on the embodiment of fig. 9, the data detection module 930 is configured to:
calculating a check code of the first data;
acquiring a check code of the second data from pre-stored check data;
detecting whether the check code of the first data is the same as the check code of the second data;
if the check code of the first data is the same as the check code of the second data, determining that the first data is consistent with the second data;
and if the check code of the first data is different from the check code of the second data, determining that the first data is inconsistent with the second data.
In another optional embodiment provided based on the embodiment of fig. 9, the apparatus further comprises: the device comprises an operation stopping module, a request sending module and a data receiving module.
And the operation stopping module is used for stopping the read-write operation related to the target storage equipment when the first data is inconsistent with the second data.
A request sending module, configured to send a data repair request to the target storage device, where the data repair request is used to trigger the target storage device to repair the first data at the target location into the second data according to the snapshot and the log.
And the data receiving module is used for receiving the second data sent by the target storage device.
The request response module 940 is further configured to respond to the read request with the second data.
In another optional embodiment provided based on the embodiment of fig. 9, the apparatus further comprises: the device comprises a data writing module, a check calculation module and a check storage module.
The request acquisition module 910 is further configured to obtain a write request from the virtual machine, where the write request is used to request that the second data be written at a target location of the target storage device.
And the data writing module is used for writing the second data at the target position of the target storage equipment according to the writing request.
And the check calculation module is used for calculating the check code of the second data.
And the check storage module is used for storing the check code of the second data.
Optionally, the data writing module is specifically configured to:
detecting whether data has been written at a target location of the target storage device;
if the data is written in the target position of the target storage equipment, reading third data from the target position of the target storage equipment; detecting whether the read third data is consistent with the previously written data; if the read third data is consistent with the previously written data, determining fourth data actually written into the target position of the target storage device according to the second data and the third data; writing the fourth data at a target location of the target storage device;
and if no data is written in the target position of the target storage equipment, writing the second data in the target position of the target storage equipment.
Optionally, the check saving module is specifically configured to save the check code of the second data in the memory file system and the hot cache.
Optionally, the apparatus further comprises: and a cache cleaning module.
The cache cleaning module is used for cleaning up the stored data using an LRU algorithm when the data stored in the hot cache reaches a preset threshold; and/or, every predetermined time period, clearing stored data in the hot cache that was neither read nor written during the most recent time period.
In another optional embodiment provided based on the embodiment of fig. 9, the apparatus further comprises: and a periodic checking module.
The periodic checking module is used for reading data to be checked from the target storage equipment when the checking period of the target storage equipment is reached; detecting whether the data to be verified is consistent with the data written into the target storage equipment or not; and if the data to be verified is inconsistent with the data written into the target storage equipment, triggering the repair operation of the data to be verified.
Optionally, the periodic checking module is further configured to obtain a load of the target storage device when a checking period of the target storage device arrives; detecting whether the load is smaller than a preset threshold value; and if the load is smaller than the preset threshold value, executing the step of reading the data to be verified from the target storage device.
It should be noted that, when the apparatus provided in the foregoing embodiments implements its functions, the division into the above functional modules is merely illustrative; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and method embodiments provided above belong to the same concept; for details of their specific implementation, refer to the method embodiments, which are not repeated here.
In an exemplary embodiment of the present application, a data storage system is also provided. Wherein:
the virtual machine is configured to generate a read request, and submit the read request to the host 12, where the read request is used to request to read data from a target location of a target storage device of the storage device cluster 13.
The host 12 is configured to read first data from a target position of the target storage device according to the read request each time the read request from the virtual machine is acquired; detecting whether the first data is consistent with second data written into the target position; responding to the read request with the first data when the first data is consistent with the second data.
Optionally, the host 12 detects whether the first data is consistent with the second data written into the target location, and specifically is configured to:
calculating a check code of the first data;
acquiring a check code of the second data from pre-stored check data;
detecting whether the check code of the first data is the same as the check code of the second data;
if the check code of the first data is the same as the check code of the second data, determining that the first data is consistent with the second data;
and if the check code of the first data is different from the check code of the second data, determining that the first data is inconsistent with the second data.
Optionally, the host 12 calculates a check code of the first data, and is specifically configured to:
for each first data block with a preset length contained in the first data, respectively calculating a check code corresponding to each first data block to obtain a check code of the first data;
wherein the first data comprises at least one of the first data blocks.
Optionally, the host 12 is further configured to stop the read-write operation related to the target storage device when the first data is inconsistent with the second data; and sending a data repair request to the target storage device.
The target storage device is used for repairing the first data at the target position into the second data according to the snapshot and the log; sending the second data to the host 12.
The host 12 is further configured to respond to the read request with the second data.
Optionally, the virtual machine is further configured to submit a write request to the host 12, where the write request is used to request that the second data be written at the target location of the target storage device.
The host 12 is further configured to obtain the write request; writing the second data at a target location of the target storage device according to the write request; calculating a check code of the second data; and saving the check code of the second data.
Optionally, the host 12 writes the second data at the target location of the target storage device according to the write request, specifically to:
detecting whether data has been written at a target location of the target storage device;
if the data is written in the target position of the target storage equipment, reading third data from the target position of the target storage equipment; detecting whether the read third data is consistent with the previously written data; if the read third data is consistent with the previously written data, determining fourth data actually written into the target position of the target storage device according to the second data and the third data; writing the fourth data at a target location of the target storage device;
and if no data is written in the target position of the target storage equipment, writing the second data in the target position of the target storage equipment.
Optionally, the host 12 calculates a check code of the second data, and is specifically configured to:
for each second data block with a preset length contained in the second data, respectively calculating a check code of each second data block to obtain the check code of the second data;
wherein the second data comprises at least one of the second data blocks.
Optionally, the host 12 is specifically configured to store the check code of the second data in the memory file system and the hot cache.
Optionally, the host 12 is further configured to read data to be verified from the target storage device when the verification period of the target storage device arrives; detecting whether the data to be verified is consistent with the data written into the target storage equipment or not; and when the data to be verified is inconsistent with the data written into the target storage equipment, triggering the repair operation of the data to be verified.
Referring to fig. 10, a block diagram of a computer device according to an embodiment of the present application is shown. For example, the computer device may be a server. The computer device is used for implementing the data reading method provided in the above embodiments. Specifically:
the computer apparatus 1000 includes a Central Processing Unit (CPU)1001, a system memory 1004 including a Random Access Memory (RAM)1002 and a Read Only Memory (ROM)1003, and a system bus 1005 connecting the system memory 1004 and the central processing unit 1001. The computer device 1000 also includes a basic input/output system (I/O system) 1006, which facilitates the transfer of information between devices within the computer, and a mass storage device 1007, which stores an operating system 1013, application programs 1014, and other program modules 1015.
The basic input/output system 1006 includes a display 1008 for displaying information and an input device 1009, such as a mouse, keyboard, etc., for user input of information. Wherein the display 1008 and input device 1009 are connected to the central processing unit 1001 through an input-output controller 1010 connected to the system bus 1005. The basic input/output system 1006 may also include an input/output controller 1010 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input-output controller 1010 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1007 is connected to the central processing unit 1001 through a mass storage controller (not shown) connected to the system bus 1005. The mass storage device 1007 and its associated computer-readable media provide non-volatile storage for the computer device 1000. That is, the mass storage device 1007 may include a computer readable medium (not shown) such as a hard disk or CD-ROM drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1004 and mass storage device 1007 described above may be collectively referred to as memory.
According to various embodiments of the invention, the computer device 1000 may also operate by being connected, through a network such as the Internet, to remote computers on the network. That is, the computer device 1000 may be connected to the network 1012 through the network interface unit 1011 connected to the system bus 1005, or may use the network interface unit 1011 to connect to other types of networks or remote computer systems (not shown).
In an example embodiment, there is also provided a computer device comprising a processor and a memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions. The at least one instruction, at least one program, set of codes, or set of instructions is configured to be executed by one or more processors to implement the above-described data reading method.
In an exemplary embodiment, a computer readable storage medium is also provided, in which at least one instruction, at least one program, a set of codes or a set of instructions is stored, which when executed by a processor of a computer device implements the above data reading method.
Alternatively, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided for implementing the above-described data reading method when the computer program product is executed.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A data reading method is applied to a host machine of a virtual machine, and the method comprises the following steps:
reading first data from a target position of a target storage device of a storage device cluster according to a read request when the read request for requesting to read data from the storage device cluster from the virtual machine is acquired every time;
for each first data block with a preset length contained in the first data, respectively calculating a check code corresponding to each first data block to obtain a check code of the first data, wherein the first data comprises at least one first data block;
acquiring a check code of second data written in the target position from pre-stored check data, wherein the check data is stored in a memory file system and a hot cache;
detecting whether the check code of the first data is the same as the check code of the second data;
if the check code of the first data is the same as the check code of the second data, determining that the first data is consistent with the second data;
if the first data is consistent with the second data, adopting the first data to respond to the read request;
if the first data is inconsistent with the second data, stopping read-write operations related to the target storage device;
sending a data repair request to the target storage device, wherein the data repair request is used for triggering the target storage device to repair the first data at the target position into the second data according to the snapshot and the log;
receiving the second data sent by the target storage device;
responding to the read request with the second data.
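The read path of claim 1 can be sketched as follows. This is an illustrative, in-memory simulation, not the patented implementation: CRC32 stands in for the unspecified check-code algorithm, the block size is arbitrary, and a plain `backup` dict stands in for the snapshot-and-log repair on the storage device.

```python
# Illustrative sketch of the read path in claim 1: per-block check codes
# of the data just read are compared with the check codes recorded at
# write time; on a mismatch, the (simulated) target storage device is
# asked to repair the data at the target position.
import zlib

BLOCK_SIZE = 4  # preset block length; a real system would use e.g. 4 KiB


def block_checksums(data: bytes, block_size: int = BLOCK_SIZE) -> list:
    """Compute a CRC32 check code for each fixed-length block of `data`."""
    return [zlib.crc32(data[i:i + block_size])
            for i in range(0, len(data), block_size)]


def read_with_verification(device: dict, position: int,
                           check_data: dict) -> bytes:
    """Read from `position`, verify against the stored check codes, and
    trigger a (simulated) repair from `device['backup']` on mismatch."""
    first_data = device['blocks'][position]
    if block_checksums(first_data) == check_data[position]:
        return first_data                      # consistent: serve the read
    # Inconsistent: "stop I/O", repair from snapshot/log, then respond.
    second_data = device['backup'][position]   # stands in for snapshot+log
    device['blocks'][position] = second_data
    return second_data


# Tiny in-memory stand-in for a target storage device.
device = {'blocks': {0: b'corrupted!!!'}, 'backup': {0: b'hello world!'}}
check_data = {0: block_checksums(b'hello world!')}  # saved at write time
result = read_with_verification(device, 0, check_data)
```

Note that the read request is always answered with verified data: either the first data (check codes match) or the repaired second data.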
2. The method of claim 1, further comprising:
if the check code of the first data is different from the check code of the second data, determining that the first data is inconsistent with the second data.
3. The method of claim 1, further comprising:
obtaining a write request from the virtual machine, the write request requesting that the second data be written at a target location of the target storage device;
writing the second data at a target location of the target storage device according to the write request;
calculating a check code of the second data;
saving the check code of the second data.
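The write path of claim 3 can be sketched in a few lines. All names here are illustrative, a plain dict stands in for the memory file system and hot cache in which the check data is stored, and CRC32 is only one possible check-code choice.

```python
# Minimal sketch of the write path in claim 3: the host writes the data
# at the target position, computes its check code, and saves the check
# code so that later reads can be verified against it.
import zlib


def write_with_checksum(device: dict, position: int, data: bytes,
                        check_data: dict) -> None:
    device[position] = data                  # write at the target position
    check_data[position] = zlib.crc32(data)  # compute and save the check
                                             # code (dict stands in for the
                                             # memory file system/hot cache)


device, check_data = {}, {}
write_with_checksum(device, 0, b'second data', check_data)
```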
4. The method of claim 3, wherein the writing the second data at the target location of the target storage device according to the write request comprises:
detecting whether data has been written at the target location of the target storage device;
if data has been written at the target location of the target storage device, reading third data from the target location of the target storage device; detecting whether the read third data is consistent with the previously written data; if the read third data is consistent with the previously written data, determining, according to the second data and the third data, fourth data to be actually written at the target location of the target storage device; and writing the fourth data at the target location of the target storage device;
if no data has been written at the target location of the target storage device, writing the second data at the target location of the target storage device.
5. The method of claim 3, wherein the calculating the check code of the second data comprises:
for each second data block with a preset length contained in the second data, respectively calculating a check code of each second data block to obtain the check code of the second data;
wherein the second data comprises at least one of the second data blocks.
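The per-block computation of claim 5 has a practical payoff worth showing: computing one check code per fixed-length block, rather than a single code over the whole write, lets a later read localize exactly which block diverged. The sketch below is illustrative; block size and the CRC32 choice are assumptions.

```python
# Sketch of the per-block check codes in claim 5: comparing the two code
# lists element by element pinpoints the corrupted block.
import zlib


def per_block_checksums(data: bytes, block_size: int) -> list:
    """One CRC32 check code per fixed-length block of `data`."""
    return [zlib.crc32(data[i:i + block_size])
            for i in range(0, len(data), block_size)]


original = b'AAAABBBBCCCC'
damaged = b'AAAAXXXXCCCC'           # middle block corrupted
orig_codes = per_block_checksums(original, 4)
bad_codes = per_block_checksums(damaged, 4)
mismatched = [i for i, (a, b) in enumerate(zip(orig_codes, bad_codes))
              if a != b]
```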
6. The method of claim 1, further comprising:
when the stored data in the hot cache reaches a preset threshold, evicting stored data using a least recently used (LRU) algorithm;
and/or,
clearing, at preset intervals, stored data in the hot cache that has not been read or written within the latest time period.
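The LRU policy of claim 6 can be sketched with `collections.OrderedDict` standing in for the real hot cache; the class name, the entry-count threshold, and the keys are all hypothetical.

```python
# Hedged sketch of the hot-cache policy in claim 6: once the preset
# threshold is exceeded, the least recently used check code is evicted.
from collections import OrderedDict


class HotCache:
    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self._entries = OrderedDict()

    def get(self, key):
        if key in self._entries:
            self._entries.move_to_end(key)   # mark as recently used
            return self._entries[key]
        return None

    def put(self, key, value):
        if key in self._entries:
            self._entries.move_to_end(key)
        self._entries[key] = value
        if len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)  # evict least recently used


cache = HotCache(max_entries=2)
cache.put('pos0', 0x1111)
cache.put('pos1', 0x2222)
cache.get('pos0')            # touch pos0 so pos1 becomes least recent
cache.put('pos2', 0x3333)    # exceeds the threshold: pos1 is evicted
```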
7. The method according to any one of claims 1 to 6, further comprising:
when a verification period of the target storage device is reached, reading data to be verified from the target storage device;
detecting whether the data to be verified is consistent with the data written to the target storage device;
if the data to be verified is inconsistent with the data written to the target storage device, triggering a repair operation on the data to be verified.
8. The method of claim 7, further comprising:
when the verification period of the target storage device is reached, acquiring a load of the target storage device;
detecting whether the load is less than a preset threshold;
if the load is less than the preset threshold, performing the step of reading the data to be verified from the target storage device.
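Claims 7 and 8 together describe a load-gated background scrub. A minimal sketch, under assumed names and thresholds (the claims do not fix the check-code algorithm, the load metric, or the threshold value):

```python
# Illustrative sketch of claims 7 and 8: at each verification period the
# scrub runs only when the device load is below a preset threshold, and
# returns the positions whose data fails verification.
import zlib

LOAD_THRESHOLD = 0.5  # preset threshold (hypothetical fraction of capacity)


def scrub(device_blocks: dict, check_data: dict, load: float) -> list:
    """Return the positions needing repair, or [] if skipped or clean."""
    if load >= LOAD_THRESHOLD:
        return []                    # defer scrubbing on a busy device
    return [pos for pos, data in device_blocks.items()
            if zlib.crc32(data) != check_data[pos]]


blocks = {0: b'good', 1: b'bad?'}
codes = {0: zlib.crc32(b'good'), 1: zlib.crc32(b'ok..')}  # pos 1 diverged
busy_result = scrub(blocks, codes, load=0.9)   # skipped: load too high
idle_result = scrub(blocks, codes, load=0.1)   # runs: finds position 1
```

Gating on load keeps the scrub from competing with foreground reads and writes, which is the point of claim 8.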
9. A data reading apparatus, applied to a host of a virtual machine, the apparatus comprising:
a data reading module, configured to read, each time a read request from the virtual machine requesting to read data from a storage device cluster is acquired, first data from a target position of a target storage device of the storage device cluster according to the read request;
the data detection module is configured to calculate, for each first data block with a preset length included in the first data, a check code corresponding to each first data block, to obtain a check code of the first data, where the first data includes at least one first data block; acquiring a check code of second data written in the target position from pre-stored check data, wherein the check data is stored in a memory file system and a hot cache; detecting whether the check code of the first data is the same as the check code of the second data; if the check code of the first data is the same as the check code of the second data, determining that the first data is consistent with the second data;
a request response module, configured to respond to the read request with the first data when the first data is consistent with the second data;
an operation stopping module, configured to stop read-write operations related to the target storage device when the first data is inconsistent with the second data;
a request sending module, configured to send a data repair request to the target storage device, where the data repair request is used to trigger the target storage device to repair the first data at the target location into the second data according to the snapshot and the log;
the data receiving module is used for receiving the second data sent by the target storage device;
the request response module being further configured to respond to the read request with the second data.
10. A data storage system, the system comprising: the system comprises a virtual machine, a host machine of the virtual machine and a storage device cluster;
the virtual machine is used for generating a read request and submitting the read request to the host machine, wherein the read request is used for requesting to read data from a target position of a target storage device of the storage device cluster;
the host machine is configured to, each time the read request from the virtual machine is acquired, read first data from the target position of the target storage device according to the read request; for each first data block with a preset length contained in the first data, respectively calculate a check code corresponding to each first data block to obtain a check code of the first data, wherein the first data comprises at least one first data block; acquire a check code of second data written at the target position from pre-stored check data, wherein the check data is stored in a memory file system and a hot cache; detect whether the check code of the first data is the same as the check code of the second data; if the check code of the first data is the same as the check code of the second data, determine that the first data is consistent with the second data; when the first data is consistent with the second data, respond to the read request with the first data; if the first data is inconsistent with the second data, stop read-write operations related to the target storage device; send a data repair request to the target storage device, wherein the data repair request is used for triggering the target storage device to repair the first data at the target position into the second data according to the snapshot and the log; receive the second data sent by the target storage device; and respond to the read request with the second data.
11. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement a data reading method according to any one of claims 1 to 8.
12. A computer readable storage medium having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by a processor to implement a data reading method according to any one of claims 1 to 8.
CN201810300369.2A 2018-04-04 2018-04-04 Data reading method, device, equipment and system Active CN108509156B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810300369.2A CN108509156B (en) 2018-04-04 2018-04-04 Data reading method, device, equipment and system

Publications (2)

Publication Number Publication Date
CN108509156A CN108509156A (en) 2018-09-07
CN108509156B true CN108509156B (en) 2021-06-11

Family

ID=63380792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810300369.2A Active CN108509156B (en) 2018-04-04 2018-04-04 Data reading method, device, equipment and system

Country Status (1)

Country Link
CN (1) CN108509156B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308288B (en) * 2018-09-26 2020-12-08 新华三云计算技术有限公司 Data processing method and device
CN110968577B (en) * 2018-09-27 2023-04-07 阿里巴巴集团控股有限公司 Method and system for writing and reading resources and time sequence storage system
WO2020151002A1 (en) * 2019-01-25 2020-07-30 华为技术有限公司 Data repair method and device
CN111274268B (en) * 2020-01-15 2023-09-05 平安科技(深圳)有限公司 Internet of things data transmission method and device, medium and electronic equipment
CN111736762B (en) * 2020-05-21 2023-04-07 平安国际智慧城市科技股份有限公司 Synchronous updating method, device, equipment and storage medium of data storage network
CN114442925A (en) * 2021-12-07 2022-05-06 苏州浪潮智能科技有限公司 Nonvolatile storage hard disk multi-queue submission scheduling method, device and storage medium
CN114153649B (en) * 2021-12-09 2023-04-14 合肥康芯威存储技术有限公司 Data storage device, control method thereof and electronic device
CN114785714B (en) * 2022-03-01 2023-08-22 阿里巴巴(中国)有限公司 Message transmission delay detection method, storage medium and equipment
CN117319242A (en) * 2022-06-23 2023-12-29 华为技术有限公司 Data storage method and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103823708A (en) * 2014-02-27 2014-05-28 深圳市深信服电子科技有限公司 Virtual machine read-write request processing method and device
US9152545B1 (en) * 2013-04-16 2015-10-06 Emc Corporation Read-write access in a read-only environment
CN105183382A (en) * 2015-09-09 2015-12-23 浪潮(北京)电子信息产业有限公司 Data block protection method and device
CN106201349A (en) * 2015-12-31 2016-12-07 华为技术有限公司 A kind of method and apparatus processing read/write requests in physical host
CN106648969A (en) * 2016-10-26 2017-05-10 郑州云海信息技术有限公司 Method and system for inspecting damaged data in disk



Similar Documents

Publication Publication Date Title
CN108509156B (en) Data reading method, device, equipment and system
US8943358B2 (en) Storage system, apparatus, and method for failure recovery during unsuccessful rebuild process
JP4321705B2 (en) Apparatus and storage system for controlling acquisition of snapshot
US8689047B2 (en) Virtual disk replication using log files
US9946655B2 (en) Storage system and storage control method
US20130339784A1 (en) Error recovery in redundant storage systems
US20180293145A1 (en) Managing health conditions to determine when to restart replication after a swap triggered by a storage health event
US20130339569A1 (en) Storage System and Method for Operating Thereof
US8819478B1 (en) Auto-adapting multi-tier cache
US9507668B2 (en) System and method for implementing a block-based backup restart
JP5963228B2 (en) Storage system and data backup method
JP2006221623A (en) Detection and recovery of dropped write in storage device
JP4903244B2 (en) Computer system and failure recovery method
US20190026179A1 (en) Computing system and error handling method for computing system
WO2012053085A1 (en) Storage control device and storage control method
CN110413218B (en) Method, apparatus and computer program product for fault recovery in a storage system
US20160196085A1 (en) Storage control apparatus and storage apparatus
US8782465B1 (en) Managing drive problems in data storage systems by tracking overall retry time
JP2003345528A (en) Storage system
US10235255B2 (en) Information processing system and control apparatus
JP4535371B2 (en) Disk array control program, method and apparatus
JP6599725B2 (en) Information processing apparatus, log management method, and computer program
US8495256B2 (en) Hard disk drive availability following transient vibration
CN111240903A (en) Data recovery method and related equipment
JP2001075741A (en) Disk control system and data maintenance method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant