CA3129984A1 - Method and system for accessing distributed block storage system in user mode - Google Patents

Method and system for accessing distributed block storage system in user mode

Info

Publication number
CA3129984A1
Authority
CA
Canada
Prior art keywords
data
block storage
distributed block
accessing
thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CA3129984A
Other languages
French (fr)
Inventor
Jian Shen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
10353744 Canada Ltd
Original Assignee
10353744 Canada Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 10353744 Canada Ltd filed Critical 10353744 Canada Ltd
Publication of CA3129984A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching

Abstract

The present invention discloses a method and a system for accessing a distributed block storage system in user mode. The LIO TCMU of a computing node includes an accessing module, and the method comprises: after a connection has been established between the LIO TCMU and the iSCSI initiator, receiving via the accessing module a data read request sent by the iSCSI initiator; judging whether there is target data corresponding to the data read request in a cache of the accessing module; if yes, returning the target data to a data accessor; and if not, generating a corresponding thread in a preconfigured thread pool in the accessing module, so that the thread requests the corresponding target data from the distributed block storage cluster and returns it to the data accessor. By adding such functions as caching, pre-reading and write merging on the client side, the present invention moves tasks originally processed by the server side forward to the client side, thereby reducing the bandwidth overhead of the cluster and allowing services to be provided to more computing nodes, so as to enhance the servicing capability and response speed of the entire cluster and improve access performance.

Description

METHOD AND SYSTEM FOR ACCESSING DISTRIBUTED BLOCK STORAGE
SYSTEM IN USER MODE
BACKGROUND OF THE INVENTION
Technical Field
[0001] The present invention relates to the field of distributed storage technology, and more particularly to a method and a system for accessing a distributed block storage system in user mode.
Description of Related Art
[0002] Conventionally, distributed block storage is mainly employed to provide cloud disk services for physical servers and virtual machines. In the current state of the art, implementations of distributed block storage generally do not expose the standard SCSI block interface directly; rather, the iSCSI mode is employed to provide uniform access interfaces to the outside, as shown in Fig. 1. For ease of implementation, many solutions chose, at the initial stage of design, to perform secondary development based on TGT, so as to make the iSCSI Target support distributed block storage.
[0003] However, this prior-art approach has the following inherent disadvantages:
[0004] A single design is made for both the application scenario of the physical server and the application scenario of the virtual machine, with no optimization directed to their respective characteristics;
[0005] The TGT-based realization performs poorly in the scenario in which many initiators access one unified Target; and
[0006] TGT creates, by default, sixteen IO threads for each LUN, has no function for automatic adjustment according to load pressure, and therefore wastes resources.
[0007] In summary, there is an urgent need for a novel method of accessing a distributed block storage system, so as to address the aforementioned problems.

SUMMARY OF THE INVENTION
[0008] In order to solve the problems existing in the prior art, embodiments of the present invention provide a method and a system for accessing a distributed block storage system in user mode.
[0009] To solve one or more of the aforementioned technical problems, the present invention proposes the following technical solutions.
[0010] According to the first aspect, there is provided a method of accessing a distributed block storage system in user mode, wherein the distributed block storage system contains a computing node and a distributed block storage cluster, the computing node includes an iSCSI initiator and LIO TCMU, the LIO TCMU is provided therein with an accessing module, and the method comprises the following steps:
[0011] receiving a data read request coming from a data accessor and sent by the iSCSI initiator via the accessing module, after the LIO TCMU and the iSCSI initiator have been connected with each other;
[0012] judging whether there is target data corresponding to the data read request in a cache of the accessing module; and
[0013] if yes, returning the target data to the data accessor; if not, generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to facilitate the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor.
[0014] Further, the step of generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to facilitate the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor further includes:
[0015] writing the target data into the cache after the target data corresponding to the data read request has been requested from the distributed block storage cluster through execution of the thread.
[0016] Further, the method further comprises:
[0017] receiving a data write request coming from the data accessor and sent by the iSCSI initiator via the accessing module, the data write request including to-be-processed data to be written into the distributed block storage cluster;
[0018] writing the to-be-processed data into a cache of the computing node, and generating a corresponding data write task based on the data write request;
[0019] periodically executing the data write task to preprocess the to-be-processed data; and
[0020] generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
[0021] Further, the accessing module is communicable with the distributed block storage cluster through a preset communication protocol, so as to perform read and/or write operation(s).
[0022] According to the second aspect, there is provided a method of accessing a distributed block storage system in user mode, wherein the distributed block storage system contains a computing node and a distributed block storage cluster, the computing node includes a virtual machine deployed on a physical server, the virtual machine includes an accessing module, and the method comprises the following steps:
[0023] receiving a data read request sent by a data accessor via the accessing module;
[0024] judging whether there is target data corresponding to the data read request in a cache of the accessing module; and
[0025] if yes, returning the target data to the data accessor; if not, generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to facilitate the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor.

[0026] Further, the step of generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to facilitate the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor further includes:
[0027] writing the target data into the cache after the target data corresponding to the data read request has been requested from the distributed block storage cluster through execution of the thread.
[0028] Further, the method further comprises:
[0029] receiving a data write request sent by the data accessor via the accessing module, the data write request including to-be-processed data to be written into the distributed block storage cluster;
[0030] writing the to-be-processed data into the cache of the computing node, and generating a corresponding data write task based on the data write request;
[0031] periodically executing the data write task to preprocess the to-be-processed data; and
[0032] generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
[0033] Further, the accessing module is communicable with the distributed block storage cluster through a preset communication protocol, so as to perform read and/or write operation(s).
[0034] According to the third aspect, there is provided a distributed block storage system, the system comprises a computing node and a distributed block storage cluster, the computing node includes iSCSI initiator and LIO TCMU, wherein the LIO TCMU is provided with an accessing module, and the accessing module includes:
[0035] a data receiving module, for receiving a data read request coming from a data accessor and sent by the iSCSI initiator, after the LIO TCMU and the iSCSI initiator have been connected with each other;

[0036] a data judging module, for judging whether there is target data corresponding to the data read request in a cache of the accessing module;
[0037] a data returning module, for returning the target data to the data accessor; and
[0038] a data requesting module, for generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to facilitate the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor.
[0039] According to the fourth aspect, there is provided a distributed block storage system, the system comprises a computing node and a distributed block storage cluster, the computing node includes a virtual machine deployed on a physical server, the virtual machine includes an accessing module, and the accessing module includes:
[0040] a data receiving module, for receiving a data read request sent by a data accessor;
[0041] a data judging module, for judging whether there is target data corresponding to the data read request in a cache of the accessing module;
[0042] a data returning module, for returning the target data to the data accessor; and
[0043] a data requesting module, for generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to facilitate the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor.
[0044] Technical solutions provided by the embodiments of the present invention bring about the following advantageous effects.
[0045] In the method and system of accessing a distributed block storage system in user mode provided by the embodiments of the present invention, after the LIO TCMU and the iSCSI initiator have been connected with each other, a data read request coming from a data accessor and sent by the iSCSI initiator is received via the accessing module, and it is judged whether there is target data corresponding to the data read request in a cache of the accessing module; if yes, the target data is returned to the data accessor; if not, a corresponding thread is generated in a preconfigured thread pool in the accessing module, so that the thread requests the target data from the distributed block storage cluster and returns it to the data accessor. By adding such functions as caching, pre-reading and write merging on the client side, the tasks originally processed by the server side are moved forward to the client side, which reduces the bandwidth overhead of the cluster, makes it possible to provide services to more computing nodes, and lowers the enterprise's total cost of ownership of the cluster, while enhancing the servicing capability and response speed of the entire cluster and improving access performance.
[0046] In the method and system of accessing a distributed block storage system in user mode provided by the embodiments of the present invention, a data read request sent by a data accessor is received via the accessing module, and it is judged whether there is target data corresponding to the data read request in a cache of the accessing module; if yes, the target data is returned to the data accessor; if not, a corresponding thread is generated in a preconfigured thread pool in the accessing module, so that the thread requests the target data from the distributed block storage cluster and returns it to the data accessor. By adding such functions as caching, pre-reading and write merging on the client side, the tasks originally processed by the server side are moved forward to the client side, which reduces the bandwidth overhead of the cluster, makes it possible to provide services to more computing nodes, lowers the enterprise's total cost of ownership of the cluster, reduces the component parts of the framework as a whole, simplifies the framework, and makes maintenance and deployment convenient, while enhancing the servicing capability and response speed of the entire cluster and improving access performance.
BRIEF DESCRIPTION OF THE DRAWINGS
[0047] To explain more clearly the technical solutions in the embodiments of the present invention, the drawings required for the following description of the embodiments are briefly introduced below. Apparently, the drawings described below are directed to merely some embodiments of the present invention, and persons ordinarily skilled in the art can acquire other drawings based on these drawings without creative effort.
[0048] Fig. 1 is a view illustrating the architecture of a prior-art distributed block storage system shown according to an exemplary embodiment;
[0049] Fig. 2 is a view illustrating the architecture of a separately designed distributed block storage system shown according to an exemplary embodiment;
[0050] Fig. 3 is a view illustrating the architecture of a distributed block storage system under the application scenario of a physical server shown according to an exemplary embodiment;
[0051] Fig. 4 is a view illustrating the architecture of a distributed block storage system under the application scenario of a virtual machine shown according to an exemplary embodiment;
[0052] Fig. 5 is a flowchart of the method of accessing a distributed block storage system in user mode shown according to an exemplary embodiment;
[0053] Fig. 6 is a flowchart of the method of accessing a distributed block storage system in user mode under the application scenario of a physical server shown according to an exemplary embodiment; and
[0054] Fig. 7 is a flowchart of the method of accessing a distributed block storage system in user mode under the application scenario of a virtual machine shown according to an exemplary embodiment.
DETAILED DESCRIPTION OF THE INVENTION
[0055] To make the objectives, technical solutions and advantages of the present invention more lucid and clear, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Apparently, the embodiments described below are only some, rather than all, of the embodiments of the present invention. All other embodiments achievable by persons ordinarily skilled in the art on the basis of the embodiments in the present invention without creative effort shall fall within the protection scope of the present invention.
[0056] Embodiment 1
[0057] As noted in the Description of Related Art, and with reference to Fig. 1, the standard SCSI block interface is generally not provided directly when distributed block storage is realized; rather, the iSCSI mode is employed to provide access interfaces to the outside. For instance, the physical server still performs access via iSCSI, but the realization of TGT itself restricts its use in commercial environments, as computing resources cannot be utilized with high effectiveness. Moreover, the activity of the TGT open source community is not very high, so there is a certain concern in using TGT. In view of these problems, and in order to ease realization under the application scenario of a physical server, many solutions chose, at the initial stage of design, to perform secondary development based on TGT, so as to make the iSCSI Target support distributed block storage. However, this approach has the following inherent disadvantages:
[0058] The TGT-based realization performs poorly in the scenario in which multiple initiators access a single target; and
[0059] TGT creates, by default, sixteen IO threads for each LUN, has no function for automatic adjustment according to load pressure, and therefore wastes resources.
[0060] To achieve function reuse and to reduce the difficulty of engineering realization, a method of accessing a distributed block storage system in user mode is creatively proposed in Embodiment 1 of the present invention. The method is adapted for application in the scenario of a physical server: an accessing module that realizes unified distributed block storage is disposed in the LIO TCMU of the computing node, and access interfaces can be provided in library (Lib) form. This accessing module mainly realizes the functions of the client side of the distributed block storage, communicates directly with the distributed block storage cluster via a private protocol, and enables LIO to directly support a self-defined distributed block device type; it can hence directly access the distributed block storage cluster, perform such operations as read and write, and enhance the servicing capability and response speed of the entire cluster.
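The patent does not specify this private protocol. Purely as an illustrative C sketch, a request travelling from the accessing module to a storage node might be framed by a fixed header along the following lines; every name, field and constant here is an invented assumption, not part of the disclosed protocol.

    #include <stdint.h>

    /* Hypothetical wire header framing one client-to-cluster request.
     * All fields are invented for illustration only. */
    #define BLOCKSYS_MAGIC 0x424C4B53u          /* "BLKS" */

    enum blocksys_op {
        BLOCKSYS_OP_READ  = 1,
        BLOCKSYS_OP_WRITE = 2,
    };

    struct blocksys_req_hdr {
        uint32_t magic;       /* BLOCKSYS_MAGIC, cheap sanity check          */
        uint16_t version;     /* protocol version, for rolling upgrades      */
        uint16_t opcode;      /* enum blocksys_op                            */
        uint64_t request_id;  /* matches replies to outstanding requests     */
        uint64_t volume_id;   /* which distributed block device is addressed */
        uint64_t offset;      /* byte offset within the volume               */
        uint32_t length;      /* payload bytes following this header         */
        uint32_t crc32;       /* integrity check over header and payload     */
    } __attribute__((packed));

A header of this kind would let the IO transceiving threads described below route each reply back to the request that produced it.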
[0061] Fig. 2 is a view illustrating the architecture of a separately designed distributed block storage system shown according to an exemplary embodiment, and Fig. 3 is a view illustrating the architecture of a distributed block storage system under the application scenario of a physical server shown according to an exemplary embodiment. With reference to Figs. 2 and 3, in the embodiments of the present invention, in order to replace the TGT mode, LIO is selected for use in performing secondary development, so as to provide an iSCSI interface to access the distributed block device.
Specifically speaking, the distributed block storage accessing module is accessed via the LIO TCMU interface, and the LIO backend storage types are thereby extended. LIO TCMU is a userspace interface provided by the LIO kernel module, so the difficulty of kernel development and interference with the system can be avoided. With further reference to Figs. 2 and 3, the distributed block storage system at least includes a computing node and a distributed block storage cluster; the distributed block storage cluster includes a plurality of distributed block storage devices; the computing node includes an iSCSI initiator and LIO TCMU, wherein the LIO TCMU is provided with an accessing module, through which such functions as caching, pre-reading and write merging can be realized directly at the client side, and the tasks originally processed by the server side are moved forward to the client side, whereby the servicing capability and response speed of the entire cluster are enhanced.
[0062] Specifically, the foregoing solution can be realized via the following steps.
[0063] Step 1 - realizing the distributed block storage accessing module, which includes, but is not limited to, such functions as link management, message transceiving, data caching, pre-reading, and write caching.
[0064] Specifically, in order to enhance the reading/writing performance of the distributed block device, an accessing module (LibBlockSys) is disposed in the LIO TCMU of the computing node. This accessing module externally provides operation interfaces for the block storage device, including, but not limited to, creating a block device, opening the block device, closing the block device, the reading operation, the writing operation, the capacity expanding operation, and the snapshot operation. The accessing module can at least realize the following functions.
[0065] The accessing module can establish multi-link communications with each node of the distributed block storage cluster, so that concurrent capacity is enhanced, and the number of links is automatically and dynamically adjusted according to the message pressure.
[0066] The accessing module can be realized in a multi-thread mode, and the threads are mainly divided into two types: IO transceiving threads and IO processing threads. The two types of threads each constitute a thread pool, namely an IO transceiving thread pool responsible for sending and receiving network data, and an IO processing thread pool responsible for the actual processing of data, such as control message analysis, message processing, EC (erasure coding) processing, and so on.
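The two-pool arrangement can be sketched in C with POSIX threads as follows. This is a minimal illustration, not code from the patent: io_task, task_queue, recv_next_message and handle_message are invented names, and the two stub functions stand in for the real network and protocol logic.

    #include <pthread.h>
    #include <stdlib.h>

    /* One unit of work handed from the transceiving pool to the processing pool. */
    struct io_task {
        void *payload;                       /* raw message data */
        struct io_task *next;
    };

    /* A minimal blocking FIFO shared by the two pools. */
    struct task_queue {
        struct io_task *head, *tail;
        pthread_mutex_t lock;
        pthread_cond_t nonempty;
    };

    static void queue_push(struct task_queue *q, struct io_task *t)
    {
        pthread_mutex_lock(&q->lock);
        t->next = NULL;
        if (q->tail) q->tail->next = t; else q->head = t;
        q->tail = t;
        pthread_cond_signal(&q->nonempty);
        pthread_mutex_unlock(&q->lock);
    }

    static struct io_task *queue_pop(struct task_queue *q)
    {
        pthread_mutex_lock(&q->lock);
        while (!q->head)
            pthread_cond_wait(&q->nonempty, &q->lock);
        struct io_task *t = q->head;
        q->head = t->next;
        if (!q->head) q->tail = NULL;
        pthread_mutex_unlock(&q->lock);
        return t;
    }

    /* Placeholder: real code would block on the cluster links here. */
    static struct io_task *recv_next_message(void)
    {
        return calloc(1, sizeof(struct io_task));
    }

    /* Placeholder: message analysis, EC decoding, and so on. */
    static void handle_message(struct io_task *t)
    {
        free(t);
    }

    /* IO transceiving thread: pulls messages off the network and queues them. */
    static void *transceive_main(void *arg)
    {
        struct task_queue *q = arg;
        for (;;)
            queue_push(q, recv_next_message());
        return NULL;
    }

    /* IO processing thread: consumes queued messages and processes them. */
    static void *process_main(void *arg)
    {
        struct task_queue *q = arg;
        for (;;)
            handle_message(queue_pop(q));
        return NULL;
    }

At start-up each pool would be created with pthread_create over a statically initialized task_queue, and the pool sizes would then be grown or shrunk with the load, which is the dynamic adjustment the preceding paragraphs describe.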
[0067] The accessing module can realize a cache mechanism for local data. In use, the reading operation preferentially hits the local cache, and sends a read request to the cluster on a miss. The writing operation is first cached in local memory or SSD; the write data is subsequently merged, aggregated, and deduplicated via timed tasks, and a write request is then sent to the cluster.
[0068] As should be noted here, as a preferred embodiment of the present invention, the cache mechanism can, during specific implementation, adopt B-tree storage and an LRU policy to cache hotspot data.
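The LRU half of that mechanism reduces to a short C sketch over fixed-size blocks. For brevity the lookup below is a linear scan of the LRU list; a real implementation, as the paragraph above suggests, would index the entries with a B-tree. All identifiers are invented for illustration.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLOCK_SIZE     4096
    #define CACHE_CAPACITY 1024          /* cached blocks before eviction */

    struct cache_entry {
        uint64_t lba;                    /* logical block address (key)   */
        uint8_t data[BLOCK_SIZE];
        struct cache_entry *prev, *next; /* LRU list links                */
    };

    struct block_cache {
        struct cache_entry *head, *tail; /* head = most recently used     */
        size_t count;
    };

    /* Move an entry to the front of the LRU list. */
    static void lru_touch(struct block_cache *c, struct cache_entry *e)
    {
        if (c->head == e) return;
        if (e->prev) e->prev->next = e->next;     /* unlink */
        if (e->next) e->next->prev = e->prev;
        if (c->tail == e) c->tail = e->prev;
        e->prev = NULL;                            /* push to the front */
        e->next = c->head;
        if (c->head) c->head->prev = e;
        c->head = e;
        if (!c->tail) c->tail = e;
    }

    /* Linear scan for brevity; the patent suggests a B-tree index here. */
    static struct cache_entry *cache_lookup(struct block_cache *c, uint64_t lba)
    {
        for (struct cache_entry *e = c->head; e; e = e->next)
            if (e->lba == lba) { lru_touch(c, e); return e; }
        return NULL;
    }

    /* Insert a block, evicting the least recently used entry when full. */
    static void cache_insert(struct block_cache *c, uint64_t lba,
                             const uint8_t *data)
    {
        struct cache_entry *e = cache_lookup(c, lba);
        if (!e) {
            if (c->count >= CACHE_CAPACITY) {
                e = c->tail;                      /* evict coldest entry */
                c->tail = e->prev;
                if (c->tail) c->tail->next = NULL; else c->head = NULL;
            } else {
                e = calloc(1, sizeof(*e));
                c->count++;
            }
            e->lba = lba;
            e->prev = NULL;                       /* relink at the head  */
            e->next = c->head;
            if (c->head) c->head->prev = e;
            c->head = e;
            if (!c->tail) c->tail = e;
        }
        memcpy(e->data, data, BLOCK_SIZE);
    }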
[0069] Step 2 - adding the accessing module to TCMU-runner, and invoking the distributed block storage client-side interface to support the new storage type, wherein TCMU-runner is a daemon that handles the userspace side of the LIO user backstore.
[0070] Specifically, after the accessing module (LibBlockSys) has been realized, it is possible to invoke this accessing module in TCMU and in hypervisor software. Under the application scenario of the physical server, secondary development is performed on TCMU-runner, and a self-defined block device accessing module is added to it, so that LIO is enabled to support distributed block storage and to directly access the distributed block storage cluster. During specific implementation, the build file is first amended, that is to say, CMakeLists.txt in the open-iscsi/tcmu-runner project is amended to compile the distributed block device accessing module into TCMU-runner. The compiled TCMU-runner is subsequently installed, targetcli is used to configure a LUN under backstore/user:xxx, and the corresponding target is configured in iSCSI. Finally, the iSCSI initiator can be used to access the configured target and to perform reading and writing operations on the locally mapped block device.
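In outline, the handler added to TCMU-runner would look like the following C sketch. The blocksys_* names and the "blocksys" subtype are invented, the callback bodies are stubs, and the signatures follow the general shape of the tcmur_handler interface in recent tcmu-runner releases; that interface has changed between versions, so this is a sketch rather than a drop-in handler.

    #include <stdbool.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include "tcmu-runner.h"   /* tcmu-runner's handler API header */

    /* Open the device: parse the cfgstring, connect to the cluster via the
     * accessing module (LibBlockSys), and start the IO thread pools. */
    static int blocksys_open(struct tcmu_device *dev, bool reopen)
    {
        return 0;
    }

    /* Close the device: flush the write cache and drop the cluster links. */
    static void blocksys_close(struct tcmu_device *dev)
    {
    }

    /* Read: serve from the local cache when possible, otherwise have a
     * pool thread fetch the data from the cluster and back-fill the cache. */
    static int blocksys_read(struct tcmu_device *dev, struct tcmulib_cmd *cmd,
                             struct iovec *iov, size_t iov_cnt,
                             size_t length, off_t offset)
    {
        return TCMU_STS_OK;    /* status constants also vary by version */
    }

    /* Write: stage in the local cache; timed tasks merge and flush later. */
    static int blocksys_write(struct tcmu_device *dev, struct tcmulib_cmd *cmd,
                              struct iovec *iov, size_t iov_cnt,
                              size_t length, off_t offset)
    {
        return TCMU_STS_OK;
    }

    static struct tcmur_handler blocksys_handler = {
        .name    = "Distributed block storage handler (LibBlockSys)",
        .subtype = "blocksys",        /* targetcli: /backstores/user:blocksys */
        .open    = blocksys_open,
        .close   = blocksys_close,
        .read    = blocksys_read,
        .write   = blocksys_write,
    };

    /* tcmu-runner loads the plugin and calls handler_init() at start-up. */
    int handler_init(void)
    {
        return tcmur_register_handler(&blocksys_handler);
    }

With such a handler compiled in, a LUN could then be created under backstore/user:blocksys with targetcli and exported through an iSCSI target, matching the configuration flow just described.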
[0071] As should be noted here, a hypervisor is middleware running between the physical server and the operating system that allows a plurality of operating systems and applications to share one set of underlying physical hardware. The hypervisor can be regarded as a "meta" operating system in a virtual environment; it coordinates all physical devices and virtual machines on the access server, and is therefore also referred to as a virtual machine monitor. The hypervisor is the kernel of all virtualization techniques, and uninterrupted support for workload migration is one of its basic functions. When the server starts and executes the hypervisor, an adequate amount of memory, CPU, network and disk resources is allocated to each virtual machine, and the guest operating systems of all virtual machines are loaded.
[0072] Embodiment 2
[0073] Fig. 5 is a flowchart of the method of accessing a distributed block storage system in user mode shown according to an exemplary embodiment, and Fig. 6 is a flowchart of the method of accessing a distributed block storage system in user mode under the application scenario of a physical server shown according to an exemplary embodiment. Referring to Figs. 5 and 6, the method comprises the following steps.
[0074] S101 - receiving a data read request coming from a data accessor and sent by the iSCSI initiator via the accessing module, after the LIO TCMU and the iSCSI initiator have been connected with each other.
[0075] Specifically, under the application scenario of a physical server, the computing node includes an iSCSI initiator (the iSCSI initiating party) and LIO TCMU, where LIO specifically means Linux-IO and TCMU specifically means TCM in userspace. An accessing module disposed in the LIO TCMU realizes such functions as caching, pre-reading, and write merging at the client side. After the data accessor sends the data read request, the request is transmitted via the iSCSI initiator to the accessing module in the LIO TCMU, where it is received and processed. That is to say, after the iSCSI initiator has been connected with LIO, LIO directly invokes the distributed device accessing module interface while processing the iSCSI message, opens the block device, and completes the connection with the cluster and the creation of threads via the accessing module. When the initiator reads and writes the block device, LIO directly invokes the reading/writing interface realized by the accessing module, performs read/write request interaction with the cluster, and completes the read/write tasks.
[0076] S102 - judging whether there is target data corresponding to the data read request in a cache of the accessing module.
[0077] Specifically, in the embodiments of the present invention, the accessing module is configured with a cache mechanism for local data. After receiving the data read request, the accessing module analyzes the data read request, and judges, based on the analysis result, whether there is target data corresponding to the data read request in a cache of the computing node.
[0078] S103 - if yes, returning the target data to the data accessor; if not, generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to facilitate the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor.
[0079] Specifically, if target data corresponding to the data read request is found in the cache, the target data is directly returned to the data accessor, so the target data no longer needs to be requested from the distributed block storage cluster, which enhances the servicing capability and response speed of the entire cluster. If the target data is not found in the cache, the data read request is sent to the distributed block storage cluster, so as to obtain the target data and return it to the data accessor.
[0080] Specifically, in the embodiments of the present invention, the accessing module can be realized in a multi-thread mode, and the threads are mainly divided into two types: IO transceiving threads and IO processing threads. The two types of threads each constitute a thread pool, namely an IO transceiving thread pool responsible for sending and receiving network data, and an IO processing thread pool responsible for the actual processing of data, such as control message analysis, message processing, EC processing, and so on. When target data corresponding to the data read request is not found in the cache, a corresponding thread is generated in a preconfigured thread pool on the basis of the data read request, the target data is then requested from the distributed block storage cluster through execution of the thread, and the target data returned by the distributed block storage cluster is sent to the data accessor.
[0081] As a preferred embodiment in the embodiments of the present invention, the step of generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to facilitate the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor further includes:
[0082] writing the target data into the cache after the target data corresponding to the data read request has been requested from the distributed block storage cluster through execution of the thread.
[0083] Specifically, in the embodiments of the present invention, after the target data corresponding to the data read request has been requested from the distributed block storage cluster, the target data further needs to be written into the cache, so that the data can be hit directly in the cache when a subsequent read request for the same target data is received, whereby the number of switches between user mode and kernel mode is reduced. During specific operation, the target data can be written into the cache by the processing thread.
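Taken together, S101 to S103 and the cache back-fill just described reduce to the following C sketch. Every function it relies on (cache_lookup, cache_insert, thread_pool_submit, cluster_fetch_block, reply_to_accessor) is an assumed interface standing in for internals of the accessing module; none of these names comes from the patent.

    #include <stdint.h>

    #define BLOCK_SIZE 4096

    /* Assumed interfaces into the accessing module (declarations only). */
    struct block_cache;
    struct cache_entry { uint64_t lba; uint8_t data[BLOCK_SIZE]; };
    struct cache_entry *cache_lookup(struct block_cache *c, uint64_t lba);
    void cache_insert(struct block_cache *c, uint64_t lba, const uint8_t *data);
    void thread_pool_submit(void (*fn)(void *), void *arg);
    int  cluster_fetch_block(uint64_t lba, uint8_t *out);    /* blocking RPC */
    void reply_to_accessor(void *iscsi_ctx, const uint8_t *data);

    struct read_req {
        struct block_cache *cache;
        uint64_t lba;            /* logical block the accessor asked for */
        void *iscsi_ctx;         /* opaque handle used to answer it      */
    };

    /* Worker run by a pool thread on a cache miss (S103, miss branch). */
    static void fetch_and_reply(void *arg)
    {
        struct read_req *r = arg;
        uint8_t buf[BLOCK_SIZE];
        if (cluster_fetch_block(r->lba, buf) == 0) {
            cache_insert(r->cache, r->lba, buf);  /* back-fill for next time */
            reply_to_accessor(r->iscsi_ctx, buf);
        }
    }

    /* S101-S103: answer from the cache when possible; otherwise hand the
     * request to the preconfigured thread pool. */
    void handle_read_request(struct read_req *r)
    {
        struct cache_entry *hit = cache_lookup(r->cache, r->lba);   /* S102 */
        if (hit)
            reply_to_accessor(r->iscsi_ctx, hit->data);  /* hit: no RPC */
        else
            thread_pool_submit(fetch_and_reply, r);      /* miss: S103  */
    }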
[0084] As a preferred embodiment in the embodiments of the present invention, the method further comprises:
[0085] receiving a data write request coming from the data accessor and sent by the iSCSI initiator via the accessing module, the data write request including to-be-processed data to be written into the distributed block storage cluster;
[0086] writing the to-be-processed data into a cache of the computing node, and generating a corresponding data write task based on the data write request;
[0087] periodically executing the data write task to preprocess the to-be-processed data; and
[0088] generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
[0089] Specifically, likewise in the embodiments of the present invention, after the data accessor sends the data write request, the data write request is transmitted via the iSCSI initiator to the accessing module in the LIO TCMU, where it is received and processed. When the accessing module receives the data write request, the to-be-processed data carried in the write request is first written into the cache of the computing node (namely the cache of the kernel module); a corresponding data write task is subsequently generated according to the data write request; the data write task is periodically executed to preprocess the to-be-processed data; finally, a corresponding thread is generated in the preconfigured thread pool, and the preprocessed to-be-processed data is written into the distributed block storage cluster through execution of the thread. As should be noted here, in the embodiments of the present invention, the preprocessing of the to-be-processed data includes such operations as merging, aggregation, and deduplication, which are not repeated here.
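The write path of this preferred embodiment can be sketched in C as follows. Locking and error handling are omitted, and all names (handle_write_request, periodic_flush, cluster_write_block, and the rest) are invented assumptions rather than interfaces disclosed in the patent.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096

    /* Assumed interfaces into the accessing module (declarations only). */
    void cache_insert(void *cache, uint64_t lba, const uint8_t *data);
    void thread_pool_submit(void (*fn)(void *), void *arg);
    int  cluster_write_block(uint64_t lba, const uint8_t *data);

    /* One pending write task generated from a data write request. */
    struct write_task {
        uint64_t lba;
        uint8_t data[BLOCK_SIZE];
        struct write_task *next;
    };

    /* Unflushed tasks; a real implementation needs a lock around this. */
    static struct write_task *pending;

    /* Receive a write: stage the data in the cache and queue a write task. */
    void handle_write_request(void *cache, uint64_t lba, const uint8_t *data)
    {
        cache_insert(cache, lba, data);        /* fast local staging        */
        struct write_task *t = malloc(sizeof(*t));
        t->lba = lba;
        memcpy(t->data, data, BLOCK_SIZE);
        t->next = pending;                     /* record for the timed task */
        pending = t;
    }

    /* Executed by a pool thread: the actual RPC to the storage cluster. */
    static void flush_one(void *arg)
    {
        struct write_task *t = arg;
        cluster_write_block(t->lba, t->data);
        free(t);
    }

    /* Timed task: preprocess the batch (merge overlapping writes, aggregate
     * adjacent blocks, drop superseded data), then submit to the pool. */
    void periodic_flush(void)
    {
        for (;;) {
            sleep(1);                          /* flush interval: illustrative */
            struct write_task *batch = pending;
            pending = NULL;
            /* merging / aggregation / deduplication would happen here */
            while (batch) {
                struct write_task *t = batch;
                batch = batch->next;
                thread_pool_submit(flush_one, t);
            }
        }
    }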
[0090] As a preferred embodiment in the embodiments of the present invention, the accessing module is communicable with the distributed block storage cluster through a preset communication protocol, so as to perform read and/or write operation(s).
[0091] Embodiment 3
[0092] In the embodiments of the present invention, there is further provided a distributed block storage system corresponding to Embodiment 2. The system comprises a computing node and a distributed block storage cluster, the computing node includes an iSCSI initiator and LIO TCMU, the LIO TCMU is provided therein with an accessing module, and the accessing module includes:
[0093] a data receiving module, for receiving a data read request coming from a data accessor and sent by the iSCSI initiator, after the LIO TCMU and the iSCSI initiator have been connected with each other;
[0094] a data judging module, for judging whether there is target data corresponding to the data read request in a cache of the accessing module;
[0095] a data returning module, for returning the target data to the data accessor; and
[0096] a data requesting module, for generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to facilitate the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor.
[0097] As a preferred embodiment in the embodiments of the present invention, the data requesting module is further employed for:
[0098] writing the target data into the cache after the target data corresponding to the data read request has been requested from the distributed block storage cluster through execution of the thread.
[0099] As a preferred embodiment in the embodiments of the present invention, the accessing module is further employed for:
[0100] receiving a data write request coming from the data accessor and sent by the iSCSI initiator via the accessing module, the data write request including to-be-processed data to be written into the distributed block storage cluster;
[0101] writing the to-be-processed data into a cache of the computing node, and generating a corresponding data write task based on the data write request;
[0102] periodically executing the data write task to preprocess the to-be-processed data; and
[0103] generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
[0104] As a preferred embodiment in the embodiments of the present invention, the accessing module is communicable with the distributed block storage cluster through a preset communication protocol, so as to perform read and/or write operation(s).
[0105] Embodiment 4
[0106] With further reference to Fig. 1, under the virtual-machine application scenario, the current general practice is to realize the corresponding reading/writing interfaces in the open source iSCSI Target: the host maps the distributed block device as a local device via the iSCSI Initiator, and hence provides the virtual machine with reading/writing access. At the same time, in order to guarantee fault-free access, a multipath design, using Multipath for example, is required on the iSCSI channel. Such an application scenario has the following defects.
[0107] The path by which the virtual machine accesses distributed block storage is divided into three sections, namely the iSCSI Initiator, the iSCSI Target, and the distributed cluster; the reading/writing access paths are long, so the efficiency with which the virtual machine reads and writes a distributed block device is not very high;
[0108] The multipath design makes the deployment of the entire cluster and the use of the virtual machine more inconvenient; the more component parts the cluster has, the more fault points there are in the cluster, thus increasing use cost and maintenance difficulty.
[0109] Likewise, to achieve function reuse and to reduce the difficulty of engineering realization, a method of accessing a distributed block storage system in user mode is creatively proposed in Embodiment 4 of the present invention. The method is adapted for application in the scenario of a virtual machine: the computing node is a virtual machine deployed on a physical server, and an accessing module that realizes unified distributed block storage is disposed in the virtual machine. This accessing module mainly realizes the functions of the client side of the distributed block storage and communicates directly with the distributed block storage cluster via a private protocol; it can hence directly access the distributed block storage cluster, perform such operations as read and write, and enhance the servicing capability and response speed of the entire cluster.
[0110] Fig. 4 is a view illustrating the architecture of a distributed block storage system under the application scenario of a virtual machine shown according to an exemplary embodiment.
Referring to Figs. 2 and 4, in the embodiments of the present invention, in order to shorten the access path of the virtual machine to distributed block storage, the iSCSI component parts are reduced, secondary development is carried out on the virtual machine software, and a backend storage driver is added by disposing an accessing module in the virtual machine, so as to support the self-developed distributed block storage access. The direct access of the virtual machine to distributed block storage via a private protocol through the accessing module not only shortens read/write latency, but also avoids multipath design considerations and reduces the fault points of the entire cluster. Further referring to Figs. 2 and 4, the distributed block storage system at least includes a computing node and a distributed block storage cluster; the distributed block storage cluster includes a plurality of distributed block storage devices; the computing node includes a virtual machine deployed on a physical server, and the virtual machine includes an accessing module, through which such functions as caching, pre-reading and write merging can be realized directly at the client side, and the tasks originally processed by the server side are moved forward to the client side, whereby the servicing capability and response speed of the entire cluster are enhanced.
[0111] Specifically, the foregoing solution can be realized via the following steps.
[0112] Step 1 - realizing the distributed block storage accessing module, which includes, but is not limited to, such functions as link management, message transceiving, data caching, pre-reading, and write caching.
[0113] Specifically, in order to enhance the reading/writing performance of the distributed block device, an accessing module (LibBlockSys) is disposed in the virtual machine. This accessing module externally provides operation interfaces for the block storage device, including, but not limited to, creating a block device, opening the block device, closing the block device, the reading operation, the writing operation, the capacity expanding operation, and the snapshot operation. The accessing module can at least realize the following functions.
[0114] The accessing module can establish multi-link communications with the various nodes of the distributed block storage cluster, so that concurrent capacity is enhanced, and the number of links is automatically and dynamically adjusted according to the message pressure.
[0115] The accessing module can be realized in a multi-thread mode, and the threads are mainly divided into two types: IO transceiving threads and IO processing threads. The two types of threads each constitute a thread pool, namely an IO transceiving thread pool responsible for sending and receiving network data, and an IO processing thread pool responsible for the actual processing of data, such as control message analysis, message processing, EC processing, and so on.
[0116] The accessing module can realize a cache mechanism for local data. In use, the reading operation preferentially hits the local cache, and sends a read request to the cluster on a miss. The writing operation is first cached in local memory or SSD; the write data is subsequently merged, aggregated, and deduplicated via timed tasks, and a write request is then sent to the cluster.

[0117] As should be noted here, as a preferred embodiment of the present invention, the cache mechanism can, during specific implementation, adopt B-tree storage and an LRU policy to cache hotspot data.
[0118] Step 2 - adding the accessing module to the virtual machine.
[0119] Specifically, after the accessing module (LibBlockSys) has been realized, it is possible to invoke this accessing module in the virtual machine. Under the application scenario of the virtual machine, the virtualization management software usually used includes QEMU/KVM, XEN, VirtualBox, etc.; all of this software is open source, and secondary development can be performed on it to add a self-defined block device backend module (namely the accessing module), so that the virtual machine can directly access the distributed block storage cluster. Taking QEMU/KVM as an example, during specific implementation, invocation of the distributed block device is added to the QEMU block module, and a protocol name for the distributed block device is added, so that QEMU can support the self-developed block storage protocol. The Makefile is amended so that the distributed block device accessing module is compiled into QEMU. The compiled QEMU is thereafter started, the self-defined protocol name and the block storage cluster configuration files are configured at start-up, and QEMU loads the block device based on the configuration items. The distributed block device then appears in the virtual machine, where formatting, mounting and accessing can be performed on it.
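Under these assumptions, the backend added to QEMU would take the shape of a protocol-level BlockDriver, roughly as sketched below in C. The "blocksys" protocol name and the BlockSysState type are invented, the callback bodies are stubs, and the hook names and signatures (.bdrv_open, .bdrv_co_preadv, .bdrv_co_pwritev) follow QEMU's general driver pattern, which differs between QEMU releases (older trees use .bdrv_file_open for protocol drivers, for example).

    #include "qemu/osdep.h"        /* headers from the QEMU source tree */
    #include "block/block_int.h"

    /* Per-device state for the hypothetical "blocksys" protocol driver. */
    typedef struct BlockSysState {
        void *client;   /* handle into the accessing module (LibBlockSys) */
    } BlockSysState;

    /* Open: parse the cluster address from the options, connect via the
     * private protocol, and start the IO thread pools. */
    static int blocksys_open(BlockDriverState *bs, QDict *options,
                             int flags, Error **errp)
    {
        return 0;
    }

    /* Close: flush the caches and disconnect from the cluster. */
    static void blocksys_close(BlockDriverState *bs)
    {
    }

    /* Read: cache lookup first; on a miss a pool thread fetches from the
     * cluster (signatures vary across QEMU versions). */
    static int coroutine_fn blocksys_co_preadv(BlockDriverState *bs,
                                               int64_t offset, int64_t bytes,
                                               QEMUIOVector *qiov,
                                               BdrvRequestFlags flags)
    {
        return 0;
    }

    /* Write: stage in the write cache; timed tasks merge and flush later. */
    static int coroutine_fn blocksys_co_pwritev(BlockDriverState *bs,
                                                int64_t offset, int64_t bytes,
                                                QEMUIOVector *qiov,
                                                BdrvRequestFlags flags)
    {
        return 0;
    }

    static BlockDriver bdrv_blocksys = {
        .format_name     = "blocksys",       /* illustrative protocol name */
        .protocol_name   = "blocksys",
        .instance_size   = sizeof(BlockSysState),
        .bdrv_open       = blocksys_open,
        .bdrv_close      = blocksys_close,
        .bdrv_co_preadv  = blocksys_co_preadv,
        .bdrv_co_pwritev = blocksys_co_pwritev,
    };

    /* block_init() registers the driver when QEMU starts, which is what
     * makes the self-defined protocol name resolvable at start-up. */
    static void bdrv_blocksys_init(void)
    {
        bdrv_register(&bdrv_blocksys);
    }
    block_init(bdrv_blocksys_init);

A guest started with a drive option naming this protocol would then see the distributed block device, on which formatting, mounting and access can be performed as described above.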
[0120] Embodiment 5
[0121] Fig. 7 is a flowchart of the method of accessing a distributed block storage system in user mode under the application scenario of a virtual machine shown according to an exemplary embodiment. Referring to Figs. 5 and 7, the method comprises the following steps.
[0122] S101 - receiving a data read request sent by a data accessor via the accessing module.
[0123] Specifically, under the application scenario of a virtual machine, the computing node includes a virtual machine deployed on a physical server, and the virtual machine includes an accessing module that realizes such functions as caching, pre-reading, and write merging at the client side. After the data accessor sends the data read request, the request is received and processed by the accessing module. During specific implementation, the information of the distributed block device to be connected is configured in the start parameters of the virtual machine. After the virtual machine is started, it directly invokes the distributed device backend module interface, opens the block device, and completes the connection with the cluster and the creation of threads via the module. When the virtual machine reads and writes the block device, the reading/writing interface realized by the accessing module is directly invoked to perform read/write request interaction with the cluster and complete the read/write tasks.
[0124] S102 - judging whether there is target data corresponding to the data read request in a cache of the accessing module.
[0125] Specifically, in the embodiments of the present invention, the accessing module is configured to have cache mechanism of local data. After receiving the data read request, the accessing module performs analysis processing on the data read request, and judges whether there is target data corresponding to the data read request in a cache of the computing node based on the analysis result.
[0126] S103 - if yes, returning the target data to the data accessor; if not, generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to facilitate the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor.
[0127] Specifically, if target data corresponding to the data read request is found in the cache, the target data is directly returned to the data accessor, so the target data no longer needs to be requested from the distributed block storage cluster, which enhances the servicing capability and response speed of the entire cluster. If the target data is not found in the cache, the data read request is sent to the distributed block storage cluster, so as to obtain the target data and return it to the data accessor.
[0128] Specifically, in the embodiments of the present invention, the accessing module can be realized in a multi-thread mode, and the threads are mainly divided into two types: IO transceiving threads and IO processing threads. The two types of threads each constitute a thread pool, namely an IO transceiving thread pool responsible for sending and receiving network data, and an IO processing thread pool responsible for the actual processing of data, such as control message analysis, message processing, EC processing, and so on. When target data corresponding to the data read request is not found in the cache, a corresponding thread is generated in a preconfigured thread pool on the basis of the data read request, the target data is then requested from the distributed block storage cluster through execution of the thread, and the target data returned by the distributed block storage cluster is sent to the data accessor.
[0129] As a preferred embodiment in the embodiments of the present invention, the step of generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to facilitate the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor further includes:
[0130] writing the target data into the cache after the target data corresponding to the data read request has been requested from the distributed block storage cluster through execution of the thread.
[0131] Specifically, in the embodiments of the present invention, after the target data corresponding to the data read request has been requested from the distributed block storage cluster, the target data further needs to be written into the cache, so that the data can be hit directly in the cache when a subsequent read request for the same target data is received, whereby the number of switches between user mode and kernel mode is reduced. During specific operation, the target data can be written into the cache by the processing thread.
[0132] As a preferred embodiment in the embodiments of the present invention, the method further comprises:

[0133] receiving a data write request sent by the data accessor via the accessing module, the data write request including to-be-processed data to be written into the distributed block storage cluster;
[0134] writing the to-be-processed data into the cache of the computing node, and generating a corresponding data write task based on the data write request;
[0135] periodically executing the data write task to preprocess the to-be-processed data; and
[0136] generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
[0137] Specifically, likewise in the embodiments of the present invention, after the data accessor sends the data write request, the data write request is received and processed by the accessing module. When the accessing module receives the data write request, the to-be-processed data carried in the write request is first written into the cache of the computing node (namely the cache of the kernel module); a corresponding data write task is subsequently generated according to the data write request; the data write task is periodically executed to preprocess the to-be-processed data; finally, a corresponding thread is generated in the preconfigured thread pool, and the preprocessed to-be-processed data is written into the distributed block storage cluster through execution of the thread. As should be noted here, in the embodiments of the present invention, the preprocessing of the to-be-processed data includes such operations as merging, aggregation, and deduplication, which are not repeated here.
[0138] As a preferred embodiment in the embodiments of the present invention, the accessing module is communicable with the distributed block storage cluster through a preset communication protocol, so as to perform read and/or write operation(s).
[0139] Embodiment 6
[0140] In the embodiments of the present invention, there is further provided a distributed block storage system corresponding to Embodiment 5. The system comprises a computing node and a distributed block storage cluster, the computing node includes a virtual machine deployed on a physical server, the virtual machine includes an accessing module, and the accessing module includes:
[0141] a data receiving module, for receiving a data read request sent by a data accessor;
[0142] a data judging module, for judging whether there is target data corresponding to the data read request in a cache of the accessing module;
[0143] a data returning module, for returning the target data to the data accessor; and
[0144] a data requesting module, for generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to facilitate the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor.
[0145] As a preferred embodiment in the embodiments of the present invention, the data requesting module is further employed for:
[0146] writing the target data into the cache after the target data corresponding to the data read request has been requested from the distributed block storage cluster through execution of the thread.
[0147] As a preferred embodiment in the embodiments of the present invention, the accessing module is further employed for:
[0148] receiving a data write request sent by the data accessor via the accessing module, the data write request including to-be-processed data to be written into the distributed block storage cluster;
[0149] writing the to-be-processed data into the cache of the computing node, and generating a corresponding data write task based on the data write request;
[0150] periodically executing the data write task to preprocess the to-be-processed data; and
[0151] generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
[0152] As a preferred embodiment in the embodiments of the present invention, the accessing module is communicable with the distributed block storage cluster through a preset communication protocol, so as to perform read and/or write operation(s).
[0153] To sum up, the technical solutions provided by the embodiments of the present invention bring about the following advantageous effects.
[0154] In the method and system of accessing a distributed block storage system in user mode provided by the embodiments of the present invention, after the LIO TCMU and the iSCSI initiator have been connected with each other, a data read request coming from a data accessor and sent by the iSCSI initiator is received via the accessing module, and it is judged whether there is target data corresponding to the data read request in a cache of the accessing module; if yes, the target data is returned to the data accessor; if not, a corresponding thread is generated in a preconfigured thread pool in the accessing module, so that the thread requests the target data from the distributed block storage cluster and returns it to the data accessor. By adding such functions as caching, pre-reading and write merging on the client side, the tasks originally processed by the server side are moved forward to the client side, which reduces the bandwidth overhead of the cluster, makes it possible to provide services to more computing nodes, and lowers the enterprise's total cost of ownership of the cluster, while enhancing the servicing capability and response speed of the entire cluster and improving access performance.
[0155] In the method and system of accessing a distributed block storage system in user mode provided by the embodiments of the present invention, a data read request sent by a data accessor is received via the accessing module, and it is judged whether there is target data corresponding to the data read request in a cache of the accessing module; if yes, the target data is returned to the data accessor; if not, a corresponding thread is generated in a preconfigured thread pool in the accessing module, so that the thread requests the target data from the distributed block storage cluster and returns it to the data accessor. By adding such functions as caching, pre-reading and write merging on the client side, the tasks originally processed by the server side are moved forward to the client side, which reduces the bandwidth overhead of the cluster, makes it possible to provide services to more computing nodes, lowers the enterprise's total cost of ownership of the cluster, reduces the component parts of the framework as a whole, simplifies the framework, and makes maintenance and deployment convenient, while enhancing the servicing capability and response speed of the entire cluster and improving access performance.
[0156] It should be noted that the various embodiments in this Description are described progressively; identical or similar sections of the embodiments can be cross-referenced from one another, while the gist of each embodiment lies in its difference from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their descriptions are relatively simple, and the relevant sections can be cross-referenced from the corresponding sections of the method embodiments. The foregoing descriptions of the system embodiments are merely schematic: units explained as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; that is to say, they can be located at a single site, or distributed over a plurality of network units. Some or all of the modules can be selected as practically required to realize the objectives of the embodiment solutions, which are understandable and implementable without creative effort by persons ordinarily skilled in the art.
[0157] As comprehensible to persons ordinarily skilled in the art, all or some of the steps in the aforementioned embodiments can be completed by hardware, or by a program instructing relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium can be a read-only memory, a magnetic disk, an optical disk, or the like.
[0158] The foregoing embodiments are merely preferred embodiments of the present invention and are not to be construed as restricting the present invention. Any amendment, equivalent substitution or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.


Claims (10)

What is claimed is:
1. A method of accessing distributed block storage system in user state, the distributed block storage system containing a computing node and a distributed block storage cluster, characterized in that the computing node includes an iSCSI initiator and an LIO TCMU, wherein the LIO TCMU is provided with an accessing module, and that the method comprises the following steps:
receiving a data read request coming from a data accessor and sent by the iSCSI initiator via the accessing module, after the LIO TCMU and the iSCSI initiator have been connected with each other;
judging whether there is target data corresponding to the data read request in a cache of the accessing module; if yes, returning the target data to the data accessor; and if not, generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to enable the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor.
2. The method of accessing distributed block storage system in user state according to Claim 1, characterized in that the step of generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to enable the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor further includes:
writing the target data into the cache after the target data corresponding to the data read request has been requested from the distributed block storage cluster through execution of the thread.
3. The method of accessing distributed block storage system in user state according to Claim 1 or 2, characterized by further comprising:
receiving a data write request coming from the data accessor and sent by the iSCSI initiator via the accessing module, the data write request including to-be-processed data to be written into the distributed block storage cluster;
writing the to-be-processed data into a cache of the computing node, and generating a corresponding data write task based on the data write request;
periodically executing the data write task to preprocess the to-be-processed data; and
generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
4. The method of accessing distributed block storage system in user state according to Claim 1 or 2, characterized in that the accessing module is communicable with the distributed block storage cluster through a preset communication protocol, so as to perform read and/or write operation(s).
5. A method of accessing distributed block storage system in user state, the distributed block storage system containing a computing node and a distributed block storage cluster, characterized in that the computing node includes a virtual machine deployed on a physical server, wherein the virtual machine includes an accessing module, and that the method comprises the following steps:
receiving a data read request sent by a data accessor via the accessing module;
judging whether there is target data corresponding to the data read request in a cache of the accessing module; if yes, returning the target data to the data accessor; and if not, generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to enable the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor.
6. The method of accessing distributed block storage system in user state according to Claim 5, characterized in that the step of generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to enable the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor further includes:
writing the target data into the cache after the target data corresponding to the data read request has been requested from the distributed block storage cluster through execution of the thread.
7. The method of accessing distributed block storage system in user state according to Claim 5 or 6, characterized by further comprising:
receiving a data write request sent by the data accessor via the accessing module, the data write request including to-be-processed data to be written into the distributed block storage cluster;
writing the to-be-processed data into the cache of the computing node, and generating a corresponding data write task based on the data write request;
periodically executing the data write task to preprocess the to-be-processed data; and
generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
8. The method of accessing distributed block storage system in user state according to Claim 5 or 6, characterized in that the accessing module is communicable with the distributed block storage cluster through a preset communication protocol, so as to perform read and/or write operation(s).
9. A distributed block storage system, comprising a computing node and a distributed block storage cluster, characterized in that the computing node includes an iSCSI initiator and an LIO TCMU, wherein the LIO TCMU is provided with an accessing module, and that the accessing module includes:
a data receiving module, for receiving a data read request coming from a data accessor and sent by the iSCSI initiator, after the LIO TCMU and the iSCSI initiator have been connected with each other;
a data judging module, for judging whether there is target data corresponding to the data read request in a cache of the accessing module;
a data returning module, for returning the target data to the data accessor; and
a data requesting module, for generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to enable the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor.
10. A distributed block storage system, comprising a computing node and a distributed block storage cluster, characterized in that the computing node includes a virtual machine deployed on a physical server, wherein the virtual machine includes an accessing module, and that the accessing module includes:
a data receiving module, for receiving a data read request sent by a data accessor;
a data judging module, for judging whether there is target data corresponding to the data read request in a cache of the accessing module;
a data returning module, for returning the target data to the data accessor; and
a data requesting module, for generating a corresponding thread in a preconfigured thread pool in the accessing module, so as to enable the thread to request target data corresponding to the data read request from the distributed block storage cluster and return the target data to the data accessor.
CA3129984A 2020-09-03 2021-09-03 Method and system for accessing distributed block storage system in user mode Pending CA3129984A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010919809.X 2020-09-03
CN202010919809.XA CN112052291A (en) 2020-09-03 2020-09-03 Method and system for accessing distributed block storage system by user mode

Publications (1)

Publication Number Publication Date
CA3129984A1 true CA3129984A1 (en) 2022-03-03

Family

ID=73608339

Family Applications (1)

Application Number Title Priority Date Filing Date
CA3129984A Pending CA3129984A1 (en) 2020-09-03 2021-09-03 Method and system for accessing distributed block storage system in user mode

Country Status (2)

Country Link
CN (1) CN112052291A (en)
CA (1) CA3129984A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113542412B (en) * 2021-07-16 2024-01-05 天翼云科技有限公司 Data transmission method, device, electronic equipment and storage medium
CN115981547A (en) * 2021-10-14 2023-04-18 华为技术有限公司 Data system, data access method, data access device and data processing unit
CN114047874A (en) * 2021-10-20 2022-02-15 北京天融信网络安全技术有限公司 Data storage system and method based on TCMU virtual equipment
CN114003328B (en) * 2021-11-01 2023-07-04 北京天融信网络安全技术有限公司 Data sharing method and device, terminal equipment and desktop cloud system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544324B (en) * 2013-11-11 2017-09-08 北京搜狐新媒体信息技术有限公司 A kind of data access method of kernel state, apparatus and system
CN111008233B (en) * 2019-11-24 2023-02-28 浪潮电子信息产业股份有限公司 Method, device and equipment for accessing KV database
CN112039999A (en) * 2020-09-03 2020-12-04 苏宁云计算有限公司 Method and system for accessing distributed block storage system in kernel mode

Also Published As

Publication number Publication date
CN112052291A (en) 2020-12-08

Similar Documents

Publication Publication Date Title
US9317320B2 (en) Hypervisor-based server duplication system and method and storage medium storing server duplication computer program
US11809753B2 (en) Virtual disk blueprints for a virtualized storage area network utilizing physical storage devices located in host computers
CA3129984A1 (en) Method and system for accessing distributed block storage system in user mode
US20200387405A1 (en) Communication Method and Apparatus
CA2480459C (en) Persistent key-value repository with a pluggable architecture to abstract physical storage
US8495254B2 (en) Computer system having virtual storage apparatuses accessible by virtual machines
US9582221B2 (en) Virtualization-aware data locality in distributed data processing
US10877787B2 (en) Isolation of virtual machine I/O in multi-disk hosts
US20200125403A1 (en) Dynamic multitasking for distributed storage systems
US11205244B2 (en) Resiliency schemes for distributed storage systems
US11099952B2 (en) Leveraging server side cache in failover scenario
US8875132B2 (en) Method and apparatus for implementing virtual proxy to support heterogeneous systems management
CA3129982A1 (en) Method and system for accessing distributed block storage system in kernel mode
US20140082275A1 (en) Server, host and method for reading base image through storage area network
US8838768B2 (en) Computer system and disk sharing method used thereby
CN116032930A (en) Network storage method, storage system, data processing unit and computer system
US8583852B1 (en) Adaptive tap for full virtual machine protection
CN113691465A (en) Data transmission method, intelligent network card, computing device and storage medium
CN113704165B (en) Super fusion server, data processing method and device
CN117093158B (en) Storage node, system and data processing method and device of distributed storage system
CN112965790B (en) PXE protocol-based virtual machine starting method and electronic equipment
WO2023179040A1 (en) Node switching method and related system
AU2003220549B2 (en) Key-value repository with a pluggable architecture
CN117581205A (en) Virtualization engine for virtualizing operations in a virtualized system

Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20220916
