CN118034579A - Access method and device for storage pool - Google Patents

Access method and device for storage pool

Info

Publication number
CN118034579A
CN118034579A (application CN202310330666.2A)
Authority
CN
China
Prior art keywords
storage pool
data
computing device
application program
data forwarding
Prior art date
Legal status
Pending
Application number
CN202310330666.2A
Other languages
Chinese (zh)
Inventor
陈雷明
何锐
刘宸睿
Current Assignee
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co Ltd filed Critical Huawei Cloud Computing Technologies Co Ltd
Priority to PCT/CN2023/128077 (published as WO2024093958A1)
Publication of CN118034579A

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0607 Improving or facilitating administration by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G06F3/061 Improving I/O performance
    • G06F3/0629 Configuration or reconfiguration of storage systems
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F13/28 Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45579 I/O management, e.g. providing access to device drivers or storage

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An embodiment of the present application discloses a storage pool access method for improving the flexibility with which a computing device cluster accesses a storage pool. The method comprises the following steps. The computing device cluster provides an application program interface through which computing nodes in the cluster access the storage pool; the software development kit (SDK) of the interface is deployed on a virtual machine or container of the cluster. By calling the application program interface, the cluster sends a storage pool access request to a data forwarding proxy module, which handles the data forwarding process for the cluster. The cluster then sends the storage pool access request to a storage node in the storage pool through the data forwarding proxy module.

Description

Access method and device for storage pool
The present application claims priority to Chinese patent application No. 202211366811.4, entitled "A method and apparatus for storage pool access", filed with the China National Intellectual Property Administration on November 1, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiment of the application relates to the field of computers, in particular to a method and a device for accessing a storage pool.
Background
With the development of data access services, more and more computing device clusters use storage pools for data access. At present, a computing device cluster needs the engine layer of its data service to store data in, and retrieve data from, the storage pool.
Currently, the engine layer of a computing device cluster is deployed on physical machines; that is, the computing devices access the storage pool through physical network interfaces. When the cluster needs to request new storage resources from the storage pool, physical network interfaces must be added, so the cluster cannot request storage resources elastically.
Although the engine layer can instead be deployed on virtual machines, the cluster then accesses the storage resources of the storage pool as cloud disks: expanding the cluster requires adding cloud disks synchronously, and each computing node can access only its corresponding cloud disk. As a result, the cluster's access to the storage pool is not flexible enough, and its scaling elasticity is poor.
Disclosure of Invention
The embodiment of the application provides a method and a device for accessing a storage pool, which are used for improving the flexibility of accessing the storage pool.
The method for accessing a storage pool according to the first aspect of the embodiments of the present application may be performed by a computing device or a computing device cluster, by a component of the cluster (for example, a processor, a chip, or a chip system), or by a logic module or software capable of implementing all or part of the functions of the cluster. Taking execution by a computing device cluster as an example, the method provided in the first aspect includes the following steps. The cluster provides an application program interface through which computing nodes in the cluster access the storage pool; the software development kit (SDK) of the interface is deployed on a virtual machine or container of the cluster. By calling the interface, the cluster sends a storage pool access request to a data forwarding proxy module, which handles the data forwarding process for the cluster. The cluster then sends the storage pool access request to a storage node in the storage pool through the data forwarding proxy module.
In this embodiment, because the SDK of the application program interface can be deployed on the virtual machine or container of a computing node, the virtual machine or container can access any storage node in the storage pool through the interface. Compared with accessing the storage pool through a physical network interface or as a cloud disk, this improves the flexibility with which the computing device cluster accesses the storage nodes in the storage pool.
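As an illustration of the first aspect, the call path from a virtual machine to the data forwarding proxy might be modeled as below. All names (LogInterface, EchoProxy, write_log, read_log) are hypothetical, since the patent does not specify the SDK's actual API; this is a sketch of the described roles, not an implementation.

```python
class EchoProxy:
    """Stand-in for the data forwarding proxy module; it simply records
    the storage pool access requests it would forward to storage nodes."""
    def __init__(self):
        self.sent = []

    def forward(self, request):
        self.sent.append(request)
        return {"status": "ok"}


class LogInterface:
    """Hypothetical client for the application program interface whose
    SDK is deployed in the VM or container of a computing node."""
    def __init__(self, proxy):
        self.proxy = proxy  # data forwarding proxy (e.g. on a DPU)

    def write_log(self, log_id: int, record: bytes):
        # The log ID indicates where in the storage pool the proxy writes.
        return self.proxy.forward({"op": "write", "log_id": log_id,
                                   "data": bytes(record)})

    def read_log(self, log_id: int, index: int):
        # The log ID indicates where in the storage pool the proxy reads.
        return self.proxy.forward({"op": "read", "log_id": log_id,
                                   "index": index})
```

A VM-side data service would hold one `LogInterface` and never talk to storage nodes directly; all forwarding is delegated to the proxy.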
In a possible implementation, when sending the storage pool access request to the data forwarding proxy module by calling the application program interface, the computing device cluster calls the interface and transfers the request to the data forwarding proxy module of the data processing unit through a direct memory access (DMA) controller, without involving the central processing unit of the computing node, thereby achieving direct transmission of the storage pool access request.
In this embodiment, a computing node of the computing device cluster can send the storage pool access request to the data forwarding proxy module by DMA, which improves the communication efficiency between the computing node and the data processing unit and thus the input/output (IO) performance of the computing device cluster.
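The direct path just described can be pictured as a submission queue into which the VM posts request descriptors and from which the data processing unit polls them. This toy model only stands in for what a DMA controller would do in hardware (no CPU-mediated copy on the receive side); every name in it is illustrative, not from the patent.

```python
from collections import deque

class SubmissionQueue:
    """Toy model of the direct request path between a computing node's VM
    and the DPU's data forwarding proxy. `post` stands in for the DMA
    transfer of a request descriptor into DPU memory; `poll` is the DPU
    side consuming descriptors."""
    def __init__(self):
        self.ring = deque()

    def post(self, descriptor: dict):
        # In hardware, a DMA engine would move this descriptor without
        # involving the computing node's CPU in the copy.
        self.ring.append(descriptor)

    def poll(self):
        # DPU-side consumption; returns None when the queue is empty.
        return self.ring.popleft() if self.ring else None
```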
In a possible implementation, when sending a storage pool access request by calling the application program interface, the computing device cluster sends a log data write request to the data forwarding proxy module; the request carries a log identifier (log ID) that indicates the location in the storage pool at which the data forwarding proxy module writes the data.
In this embodiment, the virtual machine of a computing device can send the log data write request to the data forwarding proxy module through the application program interface, which improves both the flexibility and the efficiency with which the computing device cluster writes log data to the storage pool.
In a possible implementation, when sending a storage pool access request by calling the application program interface, the computing device cluster sends a log data read request to the data forwarding proxy module; the request carries a log identifier that indicates the location in the storage pool from which the data forwarding proxy module reads the data.
In this embodiment, the virtual machine of a computing device can send the log data read request to the data forwarding proxy module through the application program interface, which improves both the flexibility and the efficiency with which the computing device cluster reads log data from the storage pool.
In one possible implementation, the computing device cluster determines a partition identifier (pt ID) from the log identifier and the number of partitions, queries the routing table with the partition identifier, and determines the one or more storage node identifiers (disk IDs) corresponding to that partition.
In this embodiment, the computing device can determine the identifier of the storage node to be accessed from the log identifier carried in the storage pool access request, so the computing node can accurately locate the storage node to be accessed, which improves the practicability of the cluster's access to the storage nodes in the storage pool.
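The two-step lookup above (log ID to pt ID, then pt ID to disk IDs) can be sketched as follows. The modulo mapping and the partition count of 64 are assumptions for illustration; the patent says only that the partition identifier is derived from the log identifier and the number of partitions.

```python
NUM_PARTITIONS = 64  # assumed value; not given in the patent

def locate(log_id: int, routing_table: dict) -> list:
    """Map a log ID to the disk IDs storing its partition:
    log ID -> partition ID (pt ID) -> storage node IDs (disk IDs)."""
    pt_id = log_id % NUM_PARTITIONS  # assumed derivation from log ID
                                     # and the number of partitions
    # The routing table maps each pt ID to one or more disk IDs.
    return routing_table.get(pt_id, [])
```

With `routing_table = {5: ["disk-a", "disk-b"]}`, a log ID of 69 maps to partition 5 and therefore to both disks.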
In a possible implementation, the data forwarding agent module is deployed on a data processing unit (DPU) chip. The computing node can offload the data forwarding process to the data forwarding agent module of the data processing unit, and the agent module forwards the storage pool access message.
Offloading the data forwarding process to the data forwarding agent module of the data processing unit improves the processing efficiency of the computing device cluster for computing tasks and also improves its input/output (IO) performance.
In a possible implementation, the application program interface includes a log interface: the virtual machine or container of a computing node can call the log interface to write log data to the storage pool, or call it to read log data from a storage node of the storage pool.
The application program interface in the embodiments of the present application may be any of various interfaces. Because it includes a log interface that the virtual machine or container of a computing node can call to read and write log data in the storage nodes of the storage pool, the efficiency with which the computing device cluster reads and writes log data is improved.
A second aspect of the embodiments of the present application provides an apparatus for accessing a storage pool, comprising a transceiver unit and a processing unit. The processing unit is configured to provide an application program interface through which computing nodes in the computing device cluster access the storage pool; the software development kit (SDK) of the interface is deployed on a virtual machine or container of the cluster. The transceiver unit is configured to send a storage pool access request to the data forwarding proxy module by calling the application program interface; the data forwarding proxy module handles the data forwarding process of the computing device cluster. The transceiver unit is further configured to send the storage pool access request to a storage node in the storage pool through the data forwarding proxy module.
In one possible implementation, the transceiver unit is specifically configured to invoke the application program interface and send the storage pool access request to the data forwarding proxy module through the DMA controller.
In a possible implementation manner, the transceiver unit is specifically configured to send a data write request to the data forwarding agent module by calling an application program interface, where the data write request carries a log identifier, and the log identifier is used to indicate a data writing location of the data forwarding agent module in the storage pool.
In a possible implementation, the transceiver unit is specifically configured to invoke the application program interface to send a data read request to the data forwarding agent module; the data read request carries a log identifier that indicates the location in the storage pool from which the data forwarding agent module reads the data.
In a possible implementation, the processing unit is further configured to determine a partition identifier based on the log identifier, query the routing table according to the partition identifier, and determine one or more storage node identifiers corresponding to the partition identifier.
In a possible implementation, the data forwarding agent module is deployed on the data processing unit DPU chip.
A third aspect of the embodiments of the present application provides a computing device cluster comprising one or more computing devices. Each computing device comprises a processor coupled to a memory; the memory is configured to store instructions that, when executed by the processor, cause the computing device cluster to perform the method of the first aspect or any one of its possible implementations.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium having stored thereon instructions that, when executed, cause a computer to perform the method of the first aspect or any of the possible implementation manners of the first aspect.
A fifth aspect of the embodiments of the present application provides a computer program product comprising instructions which, when executed, cause a computer to implement the method of the first aspect or any one of the possible implementation manners of the first aspect.
It should be appreciated that any of the above-mentioned advantages achieved by the computing device cluster, the computer-readable medium, or the computer program product may refer to the advantages of the corresponding method, and will not be described herein.
Drawings
FIG. 1 is a schematic diagram of a system architecture of a storage pool access system according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method for accessing a storage pool according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a method for accessing a storage pool according to an embodiment of the present application;
FIG. 4 is a schematic diagram of another method for accessing a storage pool according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another method for accessing a storage pool according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a storage pool access device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a computing device according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a computing device cluster according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of another computing device cluster according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a method and a device for accessing a storage pool, which are used for improving the flexibility of accessing the storage pool by a computing device cluster.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or "such as" should not be construed as preferred or advantageous over other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
First, some terms related to embodiments of the present application will be described to facilitate understanding of the scheme by those skilled in the art.
A virtual machine (VM) is a complete computer system that has full hardware functionality, is simulated entirely in software, and runs in a completely isolated environment; any work that can be done on a server can be done in a virtual machine. When a virtual machine is created on a server, part of the physical machine's hard disk and memory capacity is used as the virtual machine's hard disk and memory. Each virtual machine has an independent hard disk and operating system, and users can operate a virtual machine just as they would the server.
Remote Dictionary Server (Redis) is an open-source, high-performance key-value database written in C. Redis stores data in memory and is commonly used as a distributed cache to improve the performance of reading and writing data.
A data processing unit (DPU) is a special-purpose, data-centric processor that supports a variety of infrastructure-layer services, such as storage, security, and quality of service. A DPU enhances a computing device's processing capability for network security, network protocols, and distributed storage.
The system architecture to which the storage pool access method provided by the embodiment of the present application is applied is described below.
Referring to fig. 1, fig. 1 is a schematic diagram of a system architecture to which the storage pool access method according to an embodiment of the present application is applied. In the example shown in fig. 1, the storage pool access system 100 includes a computing device cluster 101 and a storage pool 102. The computing device cluster 101 includes computing nodes 1011 and data processing units 1012; there may be one or more computing nodes 1011, each corresponding to a data processing unit 1012. The storage pool 102 includes one or more storage nodes 1021. The specific functions of the various parts of the storage pool access system 100 provided by the embodiment of the present application are described below.
The computing device cluster 101 is configured to process users' computing tasks and to transfer data to the storage nodes in the storage pool 102. The computing device cluster 101 includes one or more computing nodes 1011, which may be the computing devices that make up the cluster, for example physical servers or desktop computers. At the hardware level, each computing node 1011 contains a processor and memory. At the software level, applications and a client run on the computing node 1011. The client is configured to receive data access requests triggered by an application program, interact with the storage nodes 1021 through the data processing unit 1012, and send the data access requests to the storage nodes 1021.
One or more virtual machines or containers may also be deployed on a computing node 1011; the present application is described in terms of virtual machines. A user can process computing tasks in the virtual machines and persist the resulting data to the storage pool 102. A virtual machine can also provide a variety of data services to the user, such as Remote Dictionary Server (Redis): the virtual machine can invoke the application program interface to send data generated by the Redis service to the data processing unit 1012, which forwards the data to the storage pool 102 for persistence. The virtual machine may also invoke the application program interface to load data from the storage pool 102 into the memory of the computing node 1011.
The data processing unit 1012 is configured to handle the interaction between the computing node 1011 and the storage pool 102. The data processing unit 1012 includes a data forwarding agent module that forwards storage pool access requests sent by virtual machines in the computing node 1011 to the storage nodes of the storage pool 102. The data processing unit 1012 may be integrated into the computing device as a chip or may be a stand-alone device; this is not specifically limited.
The storage pool 102 is used to store data sent by the cluster of computing devices 101. The storage pool 102 includes one or more storage nodes 1021, such as servers, desktop computers, and hard disks. The storage node 1021 includes one or more controllers and one or more hard disks. The hard disk is used to store data, and may be a magnetic disk or other type of storage medium, such as a solid state disk, or the like. The controller is configured to write data to the hard disk or read data from the hard disk according to the data access request sent by the computing node 1011. In the process of reading and writing data, the controller needs to convert an address carried in the storage pool access request into an address which can be identified by the hard disk.
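The controller's address translation mentioned above can be illustrated by a minimal block-addressing sketch. The 4096-byte block size is an assumption for illustration and is not taken from the patent.

```python
BLOCK_SIZE = 4096  # assumed block size; not specified in the patent

def to_disk_address(pool_offset: int):
    """Translate a byte offset carried in a storage pool access request
    into a (block number, offset within block) pair that a hard disk
    can address, as the storage node's controller must do."""
    return pool_offset // BLOCK_SIZE, pool_offset % BLOCK_SIZE
```

For example, byte offset 8200 falls 8 bytes into block 2.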
The following describes a method and a device for accessing a storage pool according to an embodiment of the present application with reference to the accompanying drawings.
Referring to fig. 2, fig. 2 is a flowchart of a method for accessing a storage pool according to an embodiment of the present application. In the example shown in fig. 2, the storage pool access method provided by the embodiment of the present application includes the following steps:
201. The computing device cluster provides an application program interface for computing nodes in the computing device cluster to access the storage pool through the application program interface.
The cluster of computing devices provides an application program interface for computing nodes in the cluster of computing devices to access the storage pool through the application program interface. The software tool package SDK of the application program interface is deployed on a virtual machine or a container of a computing node in the computing device cluster, and the application program interface can be called in the virtual machine of the computing node to communicate with a data processing unit of the computing device cluster.
The data service in the embodiment of the application comprises a Redis service, and the Redis service can realize caching of the data in the storage pool. Specifically, the Redis service can call an application program interface in the virtual machine to read the data in the storage pool to the memory, or can call the application program interface in the virtual machine to write the data in the memory to the storage pool, so as to realize the persistence of the data.
The application program interface in the embodiment of the application comprises a log interface, the log interface is used for transmitting log data, and the virtual machine of the computing node can call the log interface to write the log data into the storage pool. The log data is record-oriented and append-only data: the log data is not stored in single-byte form, but is recorded in the form of indivisible data blocks, and the log data can only be appended and cannot be modified.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating calling an application program interface according to an embodiment of the application. In the embodiment shown in fig. 3, a software tool kit of a log interface is installed in a virtual machine of a computing node in the computing device cluster, the software tool kit provides the log interface, and the virtual machine accesses the storage pool by calling the log interface. For example, virtual machine VM1 appends log data generated by the Redis service to the storage pool by calling the log interface.
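As a rough illustration of the record-oriented, append-only semantics described above, the log interface might be sketched as follows. This is not the actual SDK; the class and method names (LogInterface, append, read) are assumptions for illustration only:

```python
# Hypothetical sketch of an append-only, record-oriented log interface.
# The names (LogInterface, append, read) are illustrative assumptions,
# not the actual SDK API described in this application.

class LogInterface:
    """Records are indivisible blocks: append-only, no in-place modification."""

    def __init__(self):
        self._records = []  # each entry is one indivisible record (bytes)

    def append(self, record: bytes) -> int:
        """Append one record and return its position in the log."""
        self._records.append(bytes(record))
        return len(self._records) - 1

    def read(self, index: int) -> bytes:
        """Read back a whole record; byte-level access is intentionally absent."""
        return self._records[index]


log = LogInterface()
pos = log.append(b"SET key1 value1")  # e.g. a Redis AOF-style record
assert log.read(pos) == b"SET key1 value1"
```

The sketch only models the calling convention a virtual machine would see; forwarding to the data processing unit is omitted.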
202. The computing device cluster sends a storage pool access request to the data forwarding agent module by invoking an application program interface.
The computing device cluster sends a storage pool access request to a data forwarding proxy module of the data processing unit by calling an application program interface, wherein the data forwarding proxy module is used for processing a data forwarding process of the computing device cluster. For example, a virtual machine of a compute node in a computing device cluster sends a storage pool access request to a data forwarding proxy module by invoking a log interface, where the storage pool access request is used to request writing or reading of log data in a storage pool, and the data forwarding proxy module is capable of processing a forwarding process of the log data.
In a possible implementation manner, the data forwarding proxy module is deployed on a data processing unit (DPU) chip, that is, the computing node can offload the data forwarding process to the data processing unit, and the data processing unit forwards the storage pool access request. The DPU chip of the data processing unit in the embodiment of the application can be embedded in a network card of the computing device cluster; the form of the DPU chip is not specifically limited.
According to the embodiment of the application, the computing node can unload the data forwarding process to the data forwarding agent module of the data processing unit for execution, so that the processing efficiency of the computing equipment cluster on the computing task is improved, the data transmission efficiency of the data processing unit is also improved, and the IO performance of the computing equipment cluster is improved.
In a possible implementation manner, when the computing device cluster sends a storage pool access request to the data forwarding proxy module by calling an application program interface, a virtual machine of a computing node in the computing device cluster calls the application program interface, and sends the storage pool access request to the data forwarding proxy module through the direct memory access DMA controller.
For example, the virtual machine of the computing node calls the log interface, and directly sends the log data from the memory of the computing node to the data forwarding proxy module of the data processing unit through the direct memory access (DMA) controller, without participation of the central processing unit of the computing node, so as to realize direct transmission of the log data.
With continued reference to fig. 3, in the embodiment shown in fig. 3, the virtual machine of the computing node in the computing device cluster sends a storage pool access request to the data forwarding agent module of the data processing unit by calling the log interface, where the storage pool access request includes a log data write request and a log data read request. For example, the Redis service of the virtual machine VM1 sends the log data write request to the data forwarding proxy module of the data processing unit by calling the log interface, and the virtual machine VM1 can send the log data write request to the data forwarding proxy module by means of direct memory access.
According to the embodiment of the application, the computing node can send the storage pool access request to the data forwarding proxy module in a Direct Memory Access (DMA) mode, so that the communication efficiency of the computing node and the data processing unit is improved, and the IO performance of the computing device cluster is further improved.
In a possible implementation manner, in a process that the computing device cluster sends a storage pool access request to the data forwarding agent module by calling an application program interface, the computing device cluster sends a log data writing request to the data forwarding agent module by calling the application program interface, where the log data writing request carries a log identifier (log ID), and the log identifier is used to indicate a data writing location of the data forwarding agent module in the storage pool, where the data writing location is, for example, a storage node identifier.
Referring to fig. 4, fig. 4 is a schematic diagram of a storage pool data writing process according to an embodiment of the present application. In the example shown in fig. 4, the virtual machine of the computing node sends log data generated by the Redis service to the data forwarding agent module through the log interface. For example, the virtual machine is to send 2 MB of log data, where the log data includes 4 data blocks, and each log data block has a size of 512 KB. After the virtual machine calls the log interface to send a log data write request, the quality of service (QoS) control module of the computing node first checks whether the current data traffic is overloaded, and if the current data traffic is overloaded, the QoS control module rejects the log data write request.
In the example shown in fig. 4, if the QoS control module finds that the current data traffic is not overloaded, the check module of the computing node continues to perform a cyclic redundancy check (CRC) on the log data write request, and if the check fails, the check module rejects the log data write request. If the check passes, the computing node performs erasure code (EC) encoding on the log data in the log data write request through the encoding module, so as to generate encoded data blocks, where the data blocks include log data blocks and check blocks. For example, the computing node performs EC encoding on the log data in the log data write request through the encoding module to obtain 6 data blocks, where 4 data blocks are log data blocks and 2 data blocks are check blocks. The computing device sends the EC-encoded data blocks to the data forwarding agent module according to the log data write request, and the data forwarding agent module sends the data blocks to a storage node of the storage pool.
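The write-path steps above (admission check, CRC, EC encoding into fixed-size blocks) can be sketched roughly as follows. This is a simplified illustration under stated assumptions: a real 4+2 erasure code would use Reed-Solomon style coding, whereas here a single XOR parity block stands in for the parity computation, purely to show the block/parity structure.

```python
# Simplified sketch of the write path: CRC check, then splitting 2 MB of
# log data into 512 KB blocks with a toy XOR parity block. Real EC (e.g.
# the 4+2 scheme in the example) uses Reed-Solomon coding; single XOR
# parity is used here only for illustration.

import zlib

BLOCK = 512 * 1024  # 512 KB log data block

def crc_ok(data: bytes, expected: int) -> bool:
    """Cyclic redundancy check on the payload of a write request."""
    return zlib.crc32(data) == expected

def ec_encode(data: bytes):
    """Split into 512 KB data blocks and compute one XOR parity block."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    parity = bytearray(BLOCK)
    for b in blocks:
        for i, byte in enumerate(b):
            parity[i] ^= byte
    return blocks, bytes(parity)

data = b"\x01" * (2 * 1024 * 1024)  # 2 MB of log data
assert crc_ok(data, zlib.crc32(data))
blocks, parity = ec_encode(data)
assert len(blocks) == 4 and all(len(b) == BLOCK for b in blocks)
```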
In one possible implementation, the computing device cluster determines a partition identifier (pt ID) based on the log identifier. Specifically, the computing device cluster determines the partition identifier according to the log identifier (log ID) and the partition number (pt num), where the partition identifier is used to query the routing table for the storage node identifier (disk ID) corresponding to the partition identifier. The computing device cluster can query the routing table based on the partition identifier and determine one or more storage node identifiers corresponding to the partition identifier.
In the example shown in fig. 4, in the process that the computing device sends the EC-encoded data blocks to the data forwarding agent module according to the log data write request, the writing position of the log data in the storage node needs to be determined according to the log identifier carried in the log data write request. Specifically, the computing node calculates the partition identifier according to the log identifier and the partition number, queries the routing table according to the partition identifier, and determines the storage node identifier corresponding to the partition identifier. The computing device then sends the data write request to a buffer corresponding to the storage node identifier in the data forwarding agent module according to the storage node identifier.
According to the embodiment of the application, the computing device cluster can determine the partition identification according to the log identification and the partition number. The following calculation relation is satisfied among the partition identification, the log identification and the partition number:
pt ID = log ID % pt num
According to the embodiment of the application, the computing device can determine the identification of the storage node to be accessed by the computing device cluster according to the log identification carried in the storage pool access request, so that the computing node can accurately determine the position of the storage node to be accessed, and the accuracy of the computing device cluster in accessing the storage node in the storage pool is improved.
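A minimal sketch of this routing step might look like the following; the routing-table contents and disk naming are placeholder assumptions, not data from the application:

```python
# Sketch of partition-based routing: pt ID = log ID % pt num, then a
# routing-table lookup mapping the partition identifier to storage node
# identifiers (disk IDs). Table contents are placeholder assumptions.

PT_NUM = 8  # partition number (pt num)

# routing table: partition identifier -> storage node identifiers
ROUTING_TABLE = {pt: [f"disk-{pt}-a", f"disk-{pt}-b"] for pt in range(PT_NUM)}

def route(log_id: int):
    """Compute the partition identifier and look up its storage nodes."""
    pt_id = log_id % PT_NUM
    return pt_id, ROUTING_TABLE[pt_id]

pt_id, disks = route(42)  # 42 % 8 == 2
assert pt_id == 2
assert disks == ["disk-2-a", "disk-2-b"]
```

Because the partition identifier is a pure function of the log identifier and the partition number, every computing node resolves the same log to the same storage nodes without extra coordination.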
In a possible implementation manner, in a process that the computing device cluster sends a storage pool access request to the data forwarding proxy module by calling the application program interface, the computing device cluster sends a data reading request to the data forwarding proxy module by calling the application program interface, where the data reading request carries a log identifier, and the log identifier is used to indicate a data reading position of the data forwarding proxy module in the storage pool, and the data reading position is, for example, an identifier of a storage node.
Referring to fig. 5, fig. 5 is a schematic flowchart of data reading in a storage pool according to an embodiment of the application. In the example shown in fig. 5, when the Redis service of the virtual machine in the computing node needs to read log data from the storage pool and the virtual machine calls the log interface to send a log data read request, the QoS control module of the computing node first checks whether the current data traffic is overloaded, and if the current data traffic is overloaded, the QoS control module rejects the log data read request of the Redis service.
In the example shown in fig. 5, if the QoS control module finds that the current data traffic is not overloaded, the splitter of the computing node determines the log data blocks corresponding to the log data to be read by the log data read request, determines, according to the log identifier of the log data read request, the reading position in the storage pool of each log data block, and sends the log data read request corresponding to each log data block to the data forwarding proxy module of the data processing unit.
In the example shown in fig. 5, the log data read request sent by the computing node carries an offset and a length (len) that are used to indicate the specific location of the log data in the storage node, for example, the storage address of the log data in the storage node. The computing node sends the log data read request to the data forwarding agent module through the direct memory access (DMA) controller.
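The splitter's behavior, one logical read becoming per-block requests each carrying an offset and a length, can be sketched as follows; the request field and function names are assumptions for illustration:

```python
# Illustrative shape of per-block log read requests: each carries an
# offset and a length (len) locating the block within the storage node.
# Field and function names are assumptions, not the actual wire format.

from dataclasses import dataclass

@dataclass
class LogReadRequest:
    log_id: int
    offset: int  # byte offset of the block within the storage node
    length: int  # number of bytes to read

def split_read(log_id: int, total_len: int, block: int = 512 * 1024):
    """Split one logical read into per-block requests, as the splitter does."""
    return [LogReadRequest(log_id, off, min(block, total_len - off))
            for off in range(0, total_len, block)]

reqs = split_read(log_id=7, total_len=2 * 1024 * 1024)  # 2 MB -> 4 requests
assert len(reqs) == 4
assert reqs[-1].offset == 3 * 512 * 1024
```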
203. The computing device cluster sends a storage pool access request to storage nodes in the storage pool through the data forwarding agent module.
The computing device cluster sends a storage pool access request to storage nodes in the storage pool through the data forwarding agent module. Specifically, after receiving a storage pool access request sent by a virtual machine of a computing node, a data forwarding agent module in a data processing unit sends the storage pool access request to a controller corresponding to the storage node according to the storage pool access request.
In one possible implementation, the data forwarding agent in the data processing unit may communicate data with the storage nodes in the storage pool via a transmission control protocol (transmission control protocol, TCP).
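A minimal sketch of this TCP hop, assuming a simple length-prefixed wire format, could look like the following. The framing, timeout, and 4-byte status reply are assumptions for illustration; the application does not specify the actual protocol details beyond TCP:

```python
# Sketch of the agent-to-storage-node hop over TCP. The length-prefixed
# wire format and the 4-byte status reply are illustrative assumptions;
# only the use of TCP itself is stated in the text above.

import socket
import struct

def frame(payload: bytes) -> bytes:
    """Length-prefix a request so the controller can delimit it on the stream."""
    return struct.pack("!I", len(payload)) + payload

def forward_request(node_addr, payload: bytes) -> bytes:
    """Relay one storage pool access request to a storage node controller."""
    with socket.create_connection(node_addr, timeout=5) as sock:
        sock.sendall(frame(payload))
        return sock.recv(4)  # e.g. a status code from the controller

msg = frame(b"WRITE log-block-0")
assert msg[:4] == struct.pack("!I", 17)
```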
After the controller of the storage node receives the storage pool access request, data is written in the hard disk or read from the hard disk according to the storage pool access request. Specifically, after the controller of the storage node receives the log data writing request, the log data is written into the hard disk according to the log data writing request, and after the controller of the storage node receives the log data reading request, the controller of the storage node reads the log data in the hard disk according to the log data reading request.
With continued reference to fig. 3, in the embodiment shown in fig. 3, after the data forwarding proxy module of the data processing unit receives a storage pool access request sent by the virtual machine of the computing node through the log interface, the data forwarding proxy module forwards the storage pool access request to the IO controller corresponding to the storage node in the storage pool, where the storage pool access request includes a log data write request and a log data read request.
For example, in the example shown in fig. 3, the Redis service in the virtual machine VM1 sends log data to the data forwarding proxy module of the data processing unit by calling the log interface, the data forwarding proxy module forwards the log data to the controller corresponding to the storage node in the storage pool, and the controller of the storage node writes the log data to the hard disk of the storage node.
With continued reference to fig. 4, in the example shown in fig. 4, after the data forwarding agent module of the data processing unit receives the data write request sent by the computing node, the data forwarding agent module sends the log data write request to the storage pool. Specifically, the data forwarding agent module sends the log data writing request to the storage node corresponding to the storage node identifier in the storage pool according to the storage node identifier, so that the log data is written into the corresponding storage node, and the persistence of the log data is completed.
With continued reference to fig. 5, in the example shown in fig. 5, after the data forwarding agent module of the data processing unit receives the data read request sent by the computing node, the data forwarding agent module sends the log data read request to the storage pool. Specifically, the data forwarding agent module sends the log data read request to the storage node corresponding to the storage node identifier in the storage pool. After the computing node reads the log data blocks from the different storage nodes, the computing device performs a cyclic redundancy check (CRC) on the read log data blocks, and the log data blocks are combined through the merging (merger) module to obtain the log data to be read.
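The read completion path, CRC verification of each returned block followed by merging in order, can be sketched as follows; the function name and tuple shape are illustrative assumptions:

```python
# Sketch of the read completion path: each log data block read from a
# storage node is CRC-checked, then the blocks are merged in order.
# Function and tuple shapes are illustrative assumptions.

import zlib

def verify_and_merge(blocks):
    """blocks: list of (index, data, crc) tuples gathered from storage nodes."""
    for _, data, crc in blocks:
        if zlib.crc32(data) != crc:
            raise IOError("CRC mismatch on log data block")
    ordered = sorted(blocks, key=lambda b: b[0])  # merger reassembles in order
    return b"".join(data for _, data, _ in ordered)

# blocks may arrive out of order from different storage nodes
parts = [(1, b"world", zlib.crc32(b"world")),
         (0, b"hello ", zlib.crc32(b"hello "))]
assert verify_and_merge(parts) == b"hello world"
```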
From the above embodiments, it can be seen that, in the embodiments of the present application, a virtual machine or a container of a computing node can deploy the software tool kit of the application program interface, so that the virtual machine or the container of the computing node can access any storage node in the storage pool based on the application program interface, thereby improving the flexibility with which the computing device cluster accesses each storage node in the storage pool.
The storage pool access method provided by the embodiment of the application is introduced above, and the storage pool access device provided by the embodiment of the application is introduced below.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a storage pool access device according to an embodiment of the present application. In the example shown in fig. 6, the storage pool access device 600 is used to implement the steps performed by the computing node in the above embodiments, and the storage pool access device 600 includes a transceiver unit 601 and a processing unit 602.
The processing unit 602 is configured to provide an application program interface, where the application program interface is configured to access a storage pool through the application program interface by a computing node in the computing device cluster, and a software tool kit SDK of the application program interface is deployed on a virtual machine or a container of the computing device cluster. The transceiver unit 601 is configured to send a storage pool access request to a data forwarding proxy module by calling an application program interface, where the data forwarding proxy module is configured to process a data forwarding process of a computing device cluster. The transceiver unit 601 is further configured to send a storage pool access request to a storage node in a storage pool through the data forwarding agent module.
In a possible implementation manner, the transceiver unit 601 is specifically configured to invoke the application program interface and send the storage pool access request to the data forwarding agent module through the direct memory access (DMA) controller.
In a possible implementation manner, the transceiver unit 601 is specifically configured to send a data write request to the data forwarding agent module by calling an application program interface, where the data write request carries a log identifier, and the log identifier is used to indicate a data writing location of the data forwarding agent module in the storage pool.
In a possible implementation manner, the transceiver unit 601 is specifically configured to invoke the application program interface to send a data read request to the data forwarding agent module, where the data read request carries a log identifier, and the log identifier is used to indicate a data reading position of the data forwarding agent module in the storage pool.
In a possible implementation manner, the processing unit 602 is further configured to determine a partition identifier based on the log identifier, query the routing table according to the partition identifier, and determine one or more storage node identifiers corresponding to the partition identifier.
In a possible implementation, the data forwarding agent module is deployed on the data processing unit DPU chip.
It should be understood that the division of the units in the above apparatus is merely a division of logical functions, and in actual implementation the units may be fully or partially integrated into one physical entity or physically separated. The units in the apparatus may all be implemented in the form of software invoked by a processing element, may all be implemented in hardware, or some units may be implemented in the form of software invoked by a processing element while other units are implemented in hardware. For example, each unit may be a separately disposed processing element, may be integrated in a chip of the apparatus, or may be stored in a memory in the form of a program whose functions are invoked and executed by a processing element of the apparatus. Furthermore, all or some of these units may be integrated together or implemented independently. The processing element described herein may in turn be a processor, which may be an integrated circuit with signal processing capability. In implementation, the steps of the above method or the above units may be implemented by an integrated logic circuit of hardware in a processor element, or in the form of software invoked by a processing element.
It should be noted that, for simplicity of description, the above method embodiments are all described as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the order of the actions described, and should further understand that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily essential to the present application.
Other reasonable combinations of steps that can be conceived by those skilled in the art from the foregoing description are also within the scope of the application. Furthermore, the description of the preferred embodiments is not intended to limit the application to those embodiments.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a computing device according to an embodiment of the application. As shown in fig. 7, the computing device 700 includes: a processor 701, a memory 702, a communication interface 703 and a bus 704, where the processor 701, the memory 702 and the communication interface 703 are coupled through the bus 704. The memory 702 stores instructions, and when the instructions are executed, the computing device 700 performs the methods performed by the computing node in the method embodiments described above.
The computing device 700 may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (application specific integrated circuit, ASIC), or one or more digital signal processors (digital signal processor, DSP), or one or more field programmable gate arrays (field programmable gate array, FPGA), or a combination of at least two of these integrated circuit forms. For another example, when the units in the apparatus are implemented in the form of a processing element scheduling a program, the processing element may be a general-purpose processor, such as a central processing unit (central processing unit, CPU) or another processor that can invoke a program. For another example, the units may be integrated together and implemented in the form of a system-on-a-chip (SOC).
The processor 701 may be a central processing unit (central processing unit, CPU), but may also be another general purpose processor, a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general purpose processor may be a microprocessor, but in the alternative, it may be any conventional processor.
The memory 702 may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (read-only memory, ROM), a programmable ROM (programmable ROM, PROM), an erasable programmable ROM (erasable PROM, EPROM), an electrically erasable programmable ROM (electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (random access memory, RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (synchlink DRAM, SLDRAM), and direct rambus random access memory (direct rambus RAM, DR RAM).
The memory 702 stores executable program codes, and the processor 701 executes the executable program codes to implement the functions of the foregoing transceiver unit and the processing unit, respectively, thereby implementing the foregoing storage pool access method. That is, the memory 702 has stored thereon instructions for performing the above-described pool access method.
Communication interface 703 enables communication between computing device 700 and other devices or communication networks using a transceiver module such as, but not limited to, a network interface card, transceiver, or the like.
The bus 704 may include a power bus, a control bus, a status signal bus, and the like in addition to a data bus. The bus may be a peripheral component interconnect express (peripheral component interconnect express, PCIe) bus, an extended industry standard architecture (extended industry standard architecture, EISA) bus, a unified bus (unified bus, Ubus or UB), a compute express link (compute express link, CXL), a cache coherent interconnect for accelerators (cache coherent interconnect for accelerators, CCIX) bus, or the like. The buses may be divided into address buses, data buses, control buses, etc.
Referring to fig. 8, fig. 8 is a schematic diagram of a computing device cluster according to an embodiment of the application. As shown in fig. 8, the computing device cluster 800 includes at least one computing device 700.
The same instructions for performing the above-described storage pool access method may be stored in the memory 702 of one or more computing devices 700 in the computing device cluster 800.
In some possible implementations, the memory 702 of one or more computing devices 700 in the computing device cluster 800 may also each have stored therein a portion of instructions for performing the above-described pool access method. In other words, a combination of one or more computing devices 700 may collectively execute instructions for performing the above-described storage pool access method.
It should be noted that, the memory 702 in different computing devices 700 in the computing device cluster 800 may store different instructions for performing part of the functions of the storage pool access apparatus described above. That is, the instructions stored by the memory 702 in the different computing devices 700 may implement the functionality of one or more modules in the transceiver unit and the processing unit.
In some possible implementations, one or more computing devices 700 in the computing device cluster 800 may be connected by a network. Wherein the network may be a wide area network or a local area network, etc.
Referring to fig. 9, fig. 9 is a schematic diagram of a network connection of computing devices in a computing device cluster according to an embodiment of the present application. As shown in fig. 9, two computing devices 700A and 700B are connected by a network. Specifically, the connection to the network is made through the communication interface in each computing device.
In one possible implementation, instructions to perform the functions of the transceiver module are stored in memory in computing device 700A. Meanwhile, instructions to perform the functions of the processing module are stored in memory in computing device 700B.
It should be appreciated that the functionality of computing device 700A shown in fig. 9 may also be performed by a plurality of computing devices. Likewise, the functionality of computing device 700B may be performed by multiple computing devices.
In another embodiment of the present application, there is also provided a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor of a device, perform a method performed by the computing device in the above-described method embodiment.
In another embodiment of the present application, there is also provided a computer program product comprising computer-executable instructions stored in a computer-readable storage medium. The apparatus performs the method performed by the computing apparatus in the method embodiments described above when the processor of the apparatus executes the computer-executable instructions.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as standalone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other various media capable of storing program code.

Claims (15)

1. A method for accessing a storage pool, comprising:
The method comprises the steps that a computing device cluster provides an application program interface, wherein the application program interface is used for enabling computing nodes in the computing device cluster to access a storage pool through the application program interface, and a software tool kit SDK of the application program interface is deployed on a virtual machine or a container of the computing device cluster;
The computing device cluster sends a storage pool access request to a data forwarding proxy module by calling the application program interface, wherein the data forwarding proxy module is used for processing a data forwarding process of the computing device cluster;
the computing device cluster sends the storage pool access request to storage nodes in the storage pool through the data forwarding agent module.
2. The method of claim 1, wherein the computing device cluster sending a storage pool access request to a data forwarding agent module by invoking the application program interface comprises:
the computing device cluster calling the application program interface, and sending the storage pool access request to the data forwarding agent module through a direct memory access (DMA) controller.
3. The method of claim 1 or 2, wherein the computing device cluster sending a storage pool access request to a data forwarding agent module by calling the application program interface comprises:
the computing device cluster sending a log data write request to the data forwarding agent module by calling the application program interface, wherein the log data write request carries a log identifier, and the log identifier is used for indicating, to the data forwarding agent module, a data write position in the storage pool.
4. The method of claim 1 or 2, wherein the computing device cluster sending a storage pool access request to a data forwarding agent module by calling the application program interface comprises:
the computing device cluster sending a log data read request to the data forwarding agent module by calling the application program interface, wherein the log data read request carries a log identifier, and the log identifier is used for indicating, to the data forwarding agent module, a data read position in the storage pool.
5. The method of claim 3 or 4, wherein the method further comprises:
the computing device cluster determining a partition identifier based on the log identifier; and
the computing device cluster querying a routing table according to the partition identifier, and determining one or more storage node identifiers corresponding to the partition identifier.
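As an illustrative aside (not part of the claims), the lookup recited in claim 5 can be sketched as follows: a partition identifier is derived from the log identifier, and a routing table maps that partition to one or more storage node identifiers. The hash-based partitioning scheme and the routing-table layout are assumptions made for this example; the claims do not prescribe them.

```python
# Hypothetical sketch of the claim-5 lookup: log identifier -> partition
# identifier -> storage node identifiers. Hashing and table layout are
# illustrative assumptions, not taken from the patent.
import zlib

NUM_PARTITIONS = 16

# Assumed routing table: partition id -> storage node ids holding that partition.
ROUTING_TABLE = {
    p: [f"node-{p % 4}", f"node-{(p + 1) % 4}"] for p in range(NUM_PARTITIONS)
}


def partition_of(log_id: str) -> int:
    """Derive the partition identifier from the log identifier (assumed CRC hash)."""
    return zlib.crc32(log_id.encode("utf-8")) % NUM_PARTITIONS


def storage_nodes_for(log_id: str) -> list[str]:
    """Query the routing table for the storage node ids of the log's partition."""
    return ROUTING_TABLE[partition_of(log_id)]
```

Because the same log identifier always hashes to the same partition, repeated reads and writes for one log are routed to the same set of storage nodes.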
6. The method of any one of claims 1 to 5, wherein the data forwarding agent module is deployed on a data processing unit (DPU) chip.
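To make the claimed access path concrete, the following is a minimal sketch of the flow of claims 1 to 4: an SDK in the virtual machine or container exposes the application program interface, a data forwarding agent module (per claim 6, deployable on a DPU chip) receives the storage pool access request, and forwards it to a storage node. All class and method names here are illustrative assumptions, not names taken from the patent.

```python
# Hypothetical sketch of the claimed access path:
# application -> SDK (API) -> data forwarding agent module -> storage node.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    op: str            # "read" or "write"
    log_id: str        # log identifier indicating the position in the storage pool
    payload: bytes = b""


class StorageNode:
    """Stands in for a storage node in the storage pool."""

    def __init__(self) -> None:
        self.logs: dict[str, bytes] = {}

    def handle(self, req: AccessRequest) -> bytes:
        if req.op == "write":
            self.logs[req.log_id] = req.payload
            return b"OK"
        return self.logs.get(req.log_id, b"")


class DataForwardingAgent:
    """Stands in for the data forwarding agent module; forwards requests onward."""

    def __init__(self, node: StorageNode) -> None:
        self.node = node

    def forward(self, req: AccessRequest) -> bytes:
        return self.node.handle(req)


class StoragePoolSDK:
    """Stands in for the SDK deployed in the virtual machine or container."""

    def __init__(self, agent: DataForwardingAgent) -> None:
        self.agent = agent

    def write_log(self, log_id: str, data: bytes) -> bytes:
        return self.agent.forward(AccessRequest("write", log_id, data))

    def read_log(self, log_id: str) -> bytes:
        return self.agent.forward(AccessRequest("read", log_id))
```

In this sketch the application never addresses the storage node directly; every request passes through the agent, which is where the DMA transfer (claim 2) and DPU offload (claim 6) would sit in a real deployment.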
7. An access apparatus for a storage pool, comprising:
a processing unit, configured to provide an application program interface, wherein the application program interface is used by computing nodes in a computing device cluster to access a storage pool, and a software development kit (SDK) of the application program interface is deployed on a virtual machine or a container of the computing device cluster; and
a transceiver unit, configured to send a storage pool access request to a data forwarding agent module by calling the application program interface, wherein the data forwarding agent module is configured to handle data forwarding for the computing device cluster;
wherein the transceiver unit is further configured to send, through the data forwarding agent module, the storage pool access request to a storage node in the storage pool.
8. The apparatus of claim 7, wherein the transceiver unit is specifically configured to:
call the application program interface, and send the storage pool access request to the data forwarding agent module through a direct memory access (DMA) controller.
9. The apparatus of claim 7 or 8, wherein the transceiver unit is specifically configured to:
send a data write request to the data forwarding agent module by calling the application program interface, wherein the data write request carries a log identifier, and the log identifier is used for indicating, to the data forwarding agent module, a data write position in the storage pool.
10. The apparatus of claim 7 or 8, wherein the transceiver unit is specifically configured to:
send a data read request to the data forwarding agent module by calling the application program interface, wherein the data read request carries a log identifier, and the log identifier is used for indicating, to the data forwarding agent module, a data read position in the storage pool.
11. The apparatus of claim 9 or 10, wherein the processing unit is further configured to:
determine a partition identifier based on the log identifier; and
query a routing table according to the partition identifier, and determine one or more storage node identifiers corresponding to the partition identifier.
12. The apparatus of any one of claims 7 to 11, wherein the data forwarding agent module is deployed on a data processing unit (DPU) chip.
13. A computing device cluster, comprising a processor coupled with a memory, wherein the memory stores instructions that, when executed by the processor, cause the computing device cluster to perform the method of any one of claims 1 to 6.
14. A computer readable storage medium having instructions stored thereon, which when executed, cause a computer to perform the method of any of claims 1 to 6.
15. A computer program product comprising instructions which, when executed, cause a computer to carry out the method of any one of claims 1 to 6.
CN202310330666.2A 2022-11-01 2023-03-30 Access method and device for storage pool Pending CN118034579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/128077 WO2024093958A1 (en) 2022-11-01 2023-10-31 Access method and apparatus for storage pool

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2022113668114 2022-11-01
CN202211366811 2022-11-01

Publications (1)

Publication Number Publication Date
CN118034579A true CN118034579A (en) 2024-05-14

Family

ID=90993951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310330666.2A Pending CN118034579A (en) 2022-11-01 2023-03-30 Access method and device for storage pool

Country Status (1)

Country Link
CN (1) CN118034579A (en)

Similar Documents

Publication Publication Date Title
US11868617B2 (en) Virtualizing non-volatile storage at a peripheral device
US11093148B1 (en) Accelerated volumes
CN110119304B (en) Interrupt processing method and device and server
WO2024041412A1 (en) Storage system and method, and hardware offload card
US11582285B2 (en) Asynchronous workflow and task api for cloud based processing
CN110046187B (en) Data processing system, method and device
CN111694519B (en) Method, system and server for mounting cloud hard disk on bare metal server
CN110870286B (en) Fault tolerance processing method and device and server
CN111666184B (en) Solid state drive SSD hard disk testing method and device and electronic equipment
CN113067875A (en) Access method, device and equipment based on dynamic flow control of micro-service gateway
CN109857553B (en) Memory management method and device
CN115470156A (en) RDMA-based memory use method, system, electronic device and storage medium
CN114416470A (en) Cloud monitoring method, system, equipment and computer storage medium
CN107276998B (en) OpenSSL-based performance optimization method and device
CN112650710A (en) Data migration sending method and device, storage medium and electronic device
US20200326942A1 (en) Parameter management between programs
CN118034579A (en) Access method and device for storage pool
CN115934338A (en) Inter-process communication method and device
CN114840307A (en) Container loading method, device, equipment and storage medium
WO2024093958A1 (en) Access method and apparatus for storage pool
CN116346382A (en) Method and device for blocking malicious TCP connection and electronic equipment
CN112637201A (en) Request processing method, device, equipment and system of web server
CN118606079B (en) Socket interface-based communication method and system
CN113722110B (en) Computer system, memory access method and device
CN116155890B (en) Method and device for realizing distributed file system

Legal Events

Date Code Title Description
PB01 Publication