CN112596669A - Data processing method and device based on distributed storage - Google Patents

Data processing method and device based on distributed storage

Info

Publication number
CN112596669A
Authority
CN
China
Prior art keywords
network card
intelligent network
controller
storage
data
Prior art date
Legal status
Pending
Application number
CN202011339302.3A
Other languages
Chinese (zh)
Inventor
钟晋明
彭洪渊
管树发
Current Assignee
New H3C Cloud Technologies Co Ltd
Original Assignee
New H3C Cloud Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by New H3C Cloud Technologies Co Ltd
Priority to CN202011339302.3A
Publication of CN112596669A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0614 Improving the reliability of storage systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Abstract

The present application relates to the field of data storage technologies, and in particular to a data processing method and apparatus based on distributed storage. Each storage node in the distributed storage is configured with a corresponding intelligent network card, each intelligent network card runs a distributed storage service, and each intelligent network card establishes a data channel with the controller used for managing local storage resources on its corresponding storage node. The method includes: the intelligent network card receives a data read-write request sent by a client; the intelligent network card forwards the data read-write request to the controller used for managing local storage resources on its corresponding storage node, so that the controller processes the data read-write request through the corresponding data channel and sends the processing result to the intelligent network card; and the intelligent network card receives the processing result sent by the controller and forwards it to the client.

Description

Data processing method and device based on distributed storage
Technical Field
The present application relates to the field of data storage technologies, and in particular, to a data processing method and apparatus based on distributed storage.
Background
Distributed storage means storing data in a distributed manner on a plurality of independent devices. A traditional network storage system uses a centralized storage server to hold all data, so the storage server becomes the bottleneck of system performance as well as the focus of reliability and security concerns, and cannot meet the requirements of large-scale storage applications. A distributed network storage system adopts a scalable architecture, uses multiple storage servers to share the storage load, and uses location servers to locate the stored information, which not only improves the reliability, availability, and access efficiency of the system, but also makes it easy to scale.
In a traditional Server SAN (software-defined storage) distributed storage architecture, each server is equipped with an ordinary network card and the distributed storage software runs on the host's x86 CPU. Communication between nodes uses the TCP protocol and passes through the kernel network protocol stack. The distributed storage software on each node manages the physical host's local (PCIe) NVMe devices, and the nodes work together as a cluster to provide storage services externally and to ensure the consistency of the stored data.
However, in the conventional distributed storage architecture, the distributed storage software running on the physical machine occupies system resources of the physical machine such as CPU and memory. The physical machine's CPU resources are shared with virtual machines, so the number of CPUs occupied by the distributed storage software limits the number of virtual machines that can be created. The performance of distributed storage running on a physical machine may also be affected by the operation of the physical machine's OS and the virtual machines. In addition, the distributed storage software running on the physical machine manages its local storage resources and consumes a further share of system resources, since it accesses the abstract block layer through system calls.
Disclosure of Invention
The application provides a data processing method and device based on distributed storage, which are used for solving the problem of low reliability of storage service caused by the shortage of system resources of a storage server in the prior art.
In a first aspect, the present application provides a data processing method based on distributed storage, where each storage node in the distributed storage is configured with a corresponding intelligent network card, each intelligent network card runs a distributed storage service, and each intelligent network card establishes a data channel with a controller on the corresponding storage node for managing local storage resources, where the method includes:
the intelligent network card receives a data read-write request sent by a client;
the intelligent network card forwards the data read-write request to a controller which is used for managing local storage resources on the corresponding storage node, so that the controller processes the data read-write request through the corresponding data channel and sends the processing result to the intelligent network card;
and the intelligent network card receives the processing result sent by the controller and forwards the processing result to the client.
Optionally, before the intelligent network card receives a data read-write request sent by the client, the method further includes:
the method comprises the steps that initialization operation is carried out on the intelligent network card based on user configuration, so that the intelligent network card runs distributed storage service, and a data channel is established between the intelligent network card and a controller which is used for managing local storage resources and is arranged on a storage node corresponding to the intelligent network card.
Optionally, the step of performing an initialization operation by the intelligent network card based on the user configuration to enable the intelligent network card to run the distributed storage service includes:
the intelligent network card locally runs the distributed storage service based on the configuration instruction issued by the user, and forms a distributed storage service cluster with other intelligent network cards running the distributed storage service.
Optionally, the step of performing an initialization operation on the intelligent network card based on user configuration, so that the intelligent network card establishes a data channel with the controller used for managing local storage resources on its corresponding storage node, includes:
the intelligent network card configures the NVMe over Fabrics protocol locally based on a configuration command issued by the user and, acting as the initiator end, establishes a remote direct memory access (RDMA) data channel with the controller used for managing local storage resources on its corresponding storage node, wherein the controller used for managing local storage resources on the storage node corresponding to the intelligent network card is configured as the target end of the NVMe over Fabrics protocol.
Optionally, the step of the intelligent network card configuring the NVMe over Fabrics protocol locally based on a configuration command issued by the user and, as the initiator end, establishing a remote direct memory access RDMA data channel with the controller used for managing local storage resources on its corresponding storage node includes:
the intelligent network card locally configures an RDMA IP address based on the configuration command issued by the user, and uses the initiator tool of the NVMe over Fabrics protocol to connect to the controller used for managing local storage resources on its corresponding storage node; the controller used for managing local storage resources on the storage node corresponding to the intelligent network card configures an RDMA IP address on a local network card chip interface with RDMA capability based on a configuration command issued by the user, and configures the NVMe over Fabrics protocol on a network card chip with NVMe-oF target offload capability to act as the target end.
In a second aspect, the present application provides a data processing apparatus based on distributed storage, where each storage node in the distributed storage is configured with a corresponding apparatus, each apparatus runs a distributed storage service, and each apparatus establishes a data channel with a controller on the corresponding storage node for managing local storage resources, where the apparatus includes:
the first receiving unit is used for receiving a data read-write request sent by a client;
the forwarding unit is used for forwarding the data read-write request to a controller which is used for managing local storage resources on a corresponding storage node, so that the controller processes the data read-write request through the corresponding data channel and sends the processing result to the device;
and the second receiving unit is used for receiving the processing result sent by the controller and forwarding the processing result to the client.
Optionally, the apparatus further comprises:
the device comprises an initialization unit used for carrying out initialization operation based on user configuration so as to enable the device to run the distributed storage service and enable the device to establish a data channel with a controller used for managing local storage resources on a corresponding storage node.
Optionally, an initialization operation is performed based on user configuration, so that when the apparatus runs the distributed storage service, the initialization unit is specifically configured to:
and based on a configuration instruction issued by a user, locally operating the distributed storage service, and forming a distributed storage service cluster with other devices operating the distributed storage service.
Optionally, when an initialization operation is performed based on user configuration, so that the apparatus establishes a data channel with a controller on a corresponding storage node, where the controller is used to manage local storage resources, the initialization unit is specifically configured to:
based on a configuration command issued by the user, configuring the NVMe over Fabrics protocol locally on the device and, acting as the initiator end, establishing a remote direct memory access (RDMA) data channel with the controller used for managing local storage resources on the corresponding storage node, wherein the controller used for managing local storage resources on the storage node corresponding to the device is configured as the target end of the NVMe over Fabrics protocol.
Optionally, when the NVMe over Fabrics protocol is configured locally based on a configuration command issued by the user and an RDMA data channel is established, as the initiator end, with the controller used for managing local storage resources on the corresponding storage node, the initialization unit is specifically configured to:
based on a configuration command issued by the user, locally configure an RDMA IP address so that the device acts as the initiator end, and use the initiator tool of the NVMe over Fabrics protocol to connect to the controller used for managing local storage resources on the corresponding storage node; the controller used for managing local storage resources on the storage node corresponding to the device configures an RDMA IP address on a local network card chip interface with RDMA capability based on a configuration command issued by the user, and configures the NVMe over Fabrics protocol on a network card chip with NVMe-oF target offload capability to act as the target end.
In a third aspect, an embodiment of the present application provides an intelligent network card, where the intelligent network card includes:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory and for executing the steps of the method according to any one of the above first aspects in accordance with the obtained program instructions.
In a fourth aspect, the present application further provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the steps of the method according to any one of the above first aspects.
To sum up, in the data processing method based on distributed storage provided in the embodiments of the present application, each storage node in the distributed storage is configured with a corresponding intelligent network card, each intelligent network card runs a distributed storage service, and each intelligent network card establishes a data channel with the controller used for managing local storage resources on its corresponding storage node. The method includes: the intelligent network card receives a data read-write request sent by a client; the intelligent network card forwards the data read-write request to the controller used for managing local storage resources on its corresponding storage node, so that the controller processes the data read-write request through the corresponding data channel and sends the processing result to the intelligent network card; and the intelligent network card receives the processing result sent by the controller and forwards it to the client.
By adopting the data processing method based on distributed storage provided by the embodiments of the present application, functions such as the distributed storage service and disk management are migrated from the storage server to the corresponding intelligent network card. Managing the disks and reading and writing disk data therefore no longer occupy the system resources of the storage server; the multiple intelligent network cards in the distributed storage work together to provide the distributed storage cluster service, and the distributed storage performance is no longer constrained by the system resources of the storage server. The performance of the storage server is improved, and at the same time the reliability of the distributed storage service is improved.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments described in the present application, and those skilled in the art can obtain other drawings from these drawings.
FIG. 1 is a schematic structural diagram of a data processing system based on distributed storage according to an embodiment of the present application;
fig. 2 is a detailed flowchart of a data processing method based on distributed storage according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a data processing apparatus based on distributed storage according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an intelligent network card provided in the embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
The structure of the data processing system based on distributed storage provided in the embodiments of the present application is described in detail below with reference to a specific application scenario. Exemplarily, referring to fig. 1, which shows a schematic structural diagram of a data processing system based on distributed storage provided by the present application, the distributed storage includes three storage servers (storage server 1, storage server 2, and storage server 3). Each storage server is configured with a corresponding intelligent network card (intelligent network card 1, intelligent network card 2, and intelligent network card 3), and each storage server includes storage resources for storing data and a controller for managing the local storage resources. Each intelligent network card runs a distributed storage service, and a data channel is established between the intelligent network card of each storage server and the controller used for managing the local storage resources on that storage server. The storage resources managed by each controller may consist of multiple disks. A disk may specifically be a disk conforming to the Non-Volatile Memory Host Controller Interface Specification (NVMe), and may be referred to as an NVMe disk in this specification.
It should be noted that the above system structure, the number of devices, the number of disks, and so on are only used for illustration and do not constitute a limitation.
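On that illustrative basis only, the fig. 1 topology could be captured as configuration data along the lines of the following minimal Python sketch; the host names, device paths, and IP addresses are hypothetical placeholders rather than part of the embodiment.

    # Hypothetical description of the fig. 1 topology: three storage servers, each
    # paired with one intelligent network card (smart NIC) and one local controller
    # that manages NVMe disks; every name and address below is a placeholder.
    CLUSTER_TOPOLOGY = [
        {
            "storage_server": f"storage-server-{i}",
            "smart_nic": f"smart-nic-{i}",
            "controller": f"controller-{i}",
            "nvme_disks": [f"/dev/nvme{n}n1" for n in range(2)],  # local NVMe disks
            # one RDMA data channel per (smart NIC, controller) pair
            "rdma_channel": {"nic_ip": f"1.1.{i}.1", "controller_ip": f"1.1.{i}.2"},
        }
        for i in (1, 2, 3)
    ]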
The intelligent network card may have an independent Central Processing Unit (CPU). The intelligent network card may also be called an accelerator card; it has certain network and storage acceleration capabilities and can generally be implemented with a Field-Programmable Gate Array (FPGA). Since the intelligent network card can perform the data input/output processing for the distributed storage, the processor of the storage server is freed from this work, and only the intelligent network card handles the data input/output.
Exemplarily, referring to fig. 2, a detailed flowchart of a data processing method based on distributed storage provided in an embodiment of the present application is shown, where each storage node in the distributed storage is respectively configured with a corresponding intelligent network card, each intelligent network card runs a distributed storage service, and each intelligent network card respectively establishes a data channel with a controller for managing local storage resources on the corresponding storage node, where the method includes the following steps:
step 200: and the intelligent network card receives a data read-write request sent by the client.
Specifically, after a management device serving as the management node receives a data read-write request sent by a client, it assigns the request to one storage server in the distributed storage for processing; that is, the management device sends the data read-write request to the intelligent network card corresponding to the storage server that will process the request.
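As a non-authoritative illustration of this dispatch step, the management node might pick a storage node and hand the request to that node's intelligent network card roughly as in the sketch below; the hash-based placement policy and all helper names are assumptions and are not part of the claimed method.

    import hashlib

    def pick_storage_node(volume_id: str, nodes: list) -> dict:
        """Hypothetical placement policy: hash the volume id onto one storage node."""
        digest = hashlib.sha256(volume_id.encode()).digest()
        return nodes[digest[0] % len(nodes)]

    def dispatch_request(request: dict, nodes: list) -> dict:
        """Management node: forward a client read/write request to the smart NIC of
        the chosen storage node (the forwarding transport is left abstract here)."""
        node = pick_storage_node(request["volume_id"], nodes)
        return {"target_smart_nic": node["smart_nic"], "request": request}

    # Usage with the CLUSTER_TOPOLOGY sketch above:
    # dispatch_request({"volume_id": "vol-1", "op": "read", "offset": 0,
    #                   "length": 4096}, CLUSTER_TOPOLOGY)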
Further, in this embodiment of the application, before the intelligent network card receives the data read-write request sent by the client, the method further includes the following steps:
the method comprises the steps that initialization operation is carried out on the intelligent network card based on user configuration, so that the intelligent network card runs distributed storage service, and a data channel is established between the intelligent network card and a controller which is used for managing local storage resources and is arranged on a storage node corresponding to the intelligent network card.
Specifically, in the embodiment of the present application, when the intelligent network card performs the initialization operation based on user configuration so that the intelligent network card runs the distributed storage service, a preferred implementation is that the intelligent network card runs the distributed storage service locally based on a configuration instruction issued by the user, and forms a distributed storage service cluster with the other intelligent network cards running the distributed storage service.
That is to say, in the embodiment of the present application, the distributed storage service is run on the smart network card.
Further, when the intelligent network card performs the initialization operation based on user configuration so that the intelligent network card establishes a data channel with the controller used for managing local storage resources on its corresponding storage node, a preferred implementation is that the intelligent network card configures the NVMe over Fabrics (NVMe-oF) protocol locally based on a configuration command issued by the user and, acting as the initiator end, establishes a remote direct memory access (RDMA) data channel with the controller used for managing local storage resources on its corresponding storage node, where that controller is configured as the target end of the NVMe over Fabrics protocol.
Specifically, when the intelligent network card configures the NVMe over Fabrics protocol locally based on a configuration command issued by the user and, as the initiator end, establishes an RDMA data channel with the controller used for managing local storage resources on its corresponding storage node, a preferred implementation is as follows: the intelligent network card locally configures an RDMA IP address based on the configuration command issued by the user, and uses the initiator tool of the NVMe over Fabrics protocol to connect, as the initiator end, to the controller used for managing local storage resources on its corresponding storage node; the controller used for managing local storage resources on the storage node corresponding to the intelligent network card configures an RDMA IP address on a local network card chip interface with RDMA capability based on a configuration command issued by the user, and configures the NVMe over Fabrics protocol on a network card chip with NVMe-oF target offload capability to act as the target end. That is to say, the network card chip included in the controller on each storage server is a network card chip that integrates the RDMA function and the NVMe-oF target offload capability.
For example, the intelligent network cards and the controllers corresponding to the storage servers are initialized: the NVMe over Fabrics protocol is configured for each intelligent network card and each controller, each intelligent network card connects as the initiator end to its controller acting as the target end, and the distributed storage service is started.
Optionally, the initialization process may specifically include the following steps:
step 1: each smart network card may be configured with an area for storing metadata corresponding to the storage data in the storage server corresponding to the smart network card, and therefore, may also be referred to as a metadata area. In the initialization process, metadata stored in the storage resource is loaded to the metadata area.
In another preferred implementation, an area for storing metadata corresponding to storage data in a storage resource on the storage server is configured in the memory of the storage server, and therefore may also be referred to as a metadata area. Then, in the initialization process, the metadata in the storage resource is loaded into the metadata area of the memory.
Step 2: each controller may configure an Internet Protocol (IP) address for Remote Direct Memory Access (RDMA) and configure the NVMe over Fabrics protocol so that it acts as the target end.
Step 3: each intelligent network card may configure an RDMA IP address and the NVMe over Fabrics protocol as the initiator end, use the initiator tool of the NVMe over Fabrics protocol to connect to its corresponding controller, and start the distributed storage service.
Starting the distributed storage service may specifically include: each intelligent network card marks and records its allocated disks locally, and writes the metadata corresponding to the data stored on those disks into the metadata area configured for the intelligent network card through the RDMA protocol.
Each intelligent network card can perform input/output processing on the stored data through the RDMA protocol and the metadata corresponding to the stored data.
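Purely as a sketch of how steps 1 to 3 might fit together on the intelligent network card, the fragment below assumes a Linux-style environment on the smart NIC where the standard nvme-cli initiator tool is available; the metadata handling is reduced to an in-memory placeholder, and every helper name and parameter is hypothetical rather than part of the embodiment.

    import subprocess

    METADATA_AREA: dict = {}  # placeholder for the smart NIC's metadata area (step 1)

    def load_metadata(snapshot: dict) -> None:
        """Step 1 (simplified): load the metadata describing the stored data into the
        metadata area; a real implementation would read it from the storage resource."""
        METADATA_AREA.update(snapshot)

    def connect_to_controller(controller_ip: str, subsystem_nqn: str) -> None:
        """Step 3 (simplified): as the NVMe over Fabrics initiator, attach the
        controller's target over RDMA using the nvme-cli initiator tool."""
        subprocess.run(
            ["nvme", "connect", "-t", "rdma", "-a", controller_ip, "-s", "4420",
             "-n", subsystem_nqn],
            check=True,
        )

    def start_distributed_storage_service(controller_ip: str, subsystem_nqn: str) -> None:
        load_metadata({})  # step 1; the metadata source is omitted in this sketch
        # step 2 (controller side) is assumed to have been done already: the controller
        # configured its RDMA IP address and exposed its local disks as an NVMe-oF target
        connect_to_controller(controller_ip, subsystem_nqn)  # step 3
        # from here on, the distributed storage service on the NIC can record the
        # attached disks and serve client read-write requests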
In the embodiment of the present application, when the RDMA data channel of each storage server is enabled, a preferred implementation is to configure an IP address such as 1.1.1.2 or 1.1.2.2 for the RDMA-capable network card chip interface of each storage server, for which the RoCEv2 protocol may be used, and to configure the NVMe over Fabrics protocol on the network card chip with NVMe-oF target offload capability so that it acts as the target end.
When the RDMA data channel of each intelligent network card is enabled, a preferred implementation is to configure IP addresses such as 1.1.1.1, 1.1.2.1, and so on for the RDMA-capable network card chip interface on each intelligent network card, and the RoCEv2 protocol may likewise be used.
That is to say, each storage server enables an RDMA data channel: for example, an IP address (for example, IP 11) is configured for the RDMA-capable network card chip interface on storage server 1, and the NVMe over Fabrics protocol is configured on that network card chip to act as the target end, where the network card chip integrates the RDMA function and the NVMe-oF target offload capability. Each intelligent network card likewise enables an RDMA data channel: an IP address (for example, IP 21) is configured for the RDMA-capable network card chip interface on intelligent network card 1, where intelligent network card 1 is the intelligent network card corresponding to storage server 1. Intelligent network card 1 can then act as the initiator end, and a data channel is established between the network card chip interface configured with IP 21 on intelligent network card 1 and the network card chip interface configured with IP 11 on storage server 1.
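The controller-side target configuration that the text attributes to a network card chip with NVMe-oF target offload capability is normally done by vendor tooling; purely for illustration, a comparable effect can be approximated with the Linux software target (nvmet) driven through configfs, as in the sketch below, where the NQN, device path, and the 1.1.1.2 address are example values modelled on the text rather than the offloaded implementation itself.

    import os

    def configure_software_nvmet_target(nqn: str, nvme_device: str, traddr: str) -> None:
        """Illustrative stand-in for the hardware-offloaded NVMe-oF target: expose one
        local NVMe namespace over RDMA (RoCEv2) via the Linux nvmet configfs tree."""
        base = "/sys/kernel/config/nvmet"
        subsys = f"{base}/subsystems/{nqn}"
        ns = f"{subsys}/namespaces/1"
        port = f"{base}/ports/1"

        os.makedirs(ns, exist_ok=True)
        with open(f"{subsys}/attr_allow_any_host", "w") as f:
            f.write("1\n")               # demo only: accept any initiator
        with open(f"{ns}/device_path", "w") as f:
            f.write(nvme_device + "\n")  # e.g. /dev/nvme0n1 on the storage server
        with open(f"{ns}/enable", "w") as f:
            f.write("1\n")

        os.makedirs(port, exist_ok=True)
        for attr, value in (("addr_trtype", "rdma"),   # RDMA transport
                            ("addr_adrfam", "ipv4"),
                            ("addr_traddr", traddr),   # e.g. 1.1.1.2 on storage server 1
                            ("addr_trsvcid", "4420")):
            with open(f"{port}/{attr}", "w") as f:
                f.write(value + "\n")
        # bind the subsystem to the RDMA port so initiators (the smart NICs) can connect
        os.symlink(subsys, f"{port}/subsystems/{nqn}")

    # Example values only:
    # configure_software_nvmet_target("nqn.2020-11.example:server1",
    #                                 "/dev/nvme0n1", "1.1.1.2")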
Step 210: the intelligent network card forwards the data read-write request to the controller used for managing local storage resources on its corresponding storage node, so that the controller processes the data read-write request through the corresponding data channel and sends the processing result to the intelligent network card.
Specifically, in the embodiment of the present application, after receiving the data read-write request sent by the client, the intelligent network card sends the request to the controller in the corresponding storage server. On receiving the data read-write request, the controller parses it, initiates a DMA operation through the RDMA data channel established between the controller and the intelligent network card, and returns the data read-write result to the intelligent network card. In other words, the intelligent network card directly accesses the storage resources on the storage server to perform the read and write operations on those resources.
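As an informal sketch of this data path rather than the literal implementation of the embodiment: once the NVMe-oF connection is up, the controller-managed namespace appears on the smart NIC as an attached block device, so forwarding a client read or write reduces to an I/O on that device. The device path and request fields below are assumed.

    import os

    ATTACHED_DEVICE = "/dev/nvme1n1"  # hypothetical: namespace attached via nvme connect

    def handle_client_request(request: dict) -> dict:
        """Smart NIC data path sketch: carry out the client's read or write against the
        controller-managed storage over the established NVMe-oF/RDMA channel, then
        return the processing result so it can be forwarded back to the client."""
        fd = os.open(ATTACHED_DEVICE, os.O_RDWR)
        try:
            if request["op"] == "read":
                data = os.pread(fd, request["length"], request["offset"])
                return {"status": "ok", "data": data}
            os.pwrite(fd, request["data"], request["offset"])
            os.fsync(fd)
            return {"status": "ok"}
        finally:
            os.close(fd)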
Step 220: the intelligent network card receives the processing result sent by the controller and forwards the processing result to the client.
Specifically, after receiving the data read-write result, the intelligent network card forwards the data read-write result to the client, and completes the data read-write operation.
Therefore, the CPU of the storage server does not need to parse the data read-write request: the distributed storage service runs on the intelligent network card, the intelligent network card directly manages the disks on the storage server, and the read and write operations on the disk data can be carried out through the intelligent network card.
Based on the same inventive concept as the method embodiment, the following describes in detail the structure of the data processing apparatus based on distributed storage according to the embodiment of the present application with reference to a specific application scenario.
Exemplarily, referring to fig. 3, a schematic structural diagram of a data processing apparatus for distributed storage according to an embodiment of the present application is shown, where each storage node in the distributed storage is respectively configured with a corresponding apparatus, each apparatus runs a distributed storage service, and each apparatus respectively establishes a data channel with a controller for managing local storage resources on the corresponding storage node, where the apparatus includes:
a first receiving unit 30, configured to receive a data read-write request sent by a client;
the forwarding unit 31 is configured to forward the data read-write request to a controller on a corresponding storage node, where the controller is configured to manage local storage resources, so that the controller performs data processing on the data read-write request through a corresponding data channel of the controller, and sends a processing result to the device;
a second receiving unit 32, configured to receive the processing result sent by the controller, and forward the processing result to the client.
Optionally, the apparatus further comprises:
the device comprises an initialization unit used for carrying out initialization operation based on user configuration so as to enable the device to run the distributed storage service and enable the device to establish a data channel with a controller used for managing local storage resources on a corresponding storage node.
Optionally, an initialization operation is performed based on user configuration, so that when the apparatus runs the distributed storage service, the initialization unit is specifically configured to:
and based on a configuration instruction issued by a user, locally operating the distributed storage service, and forming a distributed storage service cluster with other devices operating the distributed storage service.
Optionally, when an initialization operation is performed based on user configuration, so that the apparatus establishes a data channel with a controller on a corresponding storage node, where the controller is used to manage local storage resources, the initialization unit is specifically configured to:
based on a configuration command issued by the user, configuring the NVMe over Fabrics protocol locally on the device and, acting as the initiator end, establishing a remote direct memory access (RDMA) data channel with the controller used for managing local storage resources on the corresponding storage node, wherein the controller used for managing local storage resources on the storage node corresponding to the device is configured as the target end of the NVMe over Fabrics protocol.
Optionally, when the NVMe over Fabrics protocol is configured locally based on a configuration command issued by the user and an RDMA data channel is established, as the initiator end, with the controller used for managing local storage resources on the corresponding storage node, the initialization unit is specifically configured to:
based on a configuration command issued by the user, locally configure an RDMA IP address so that the device acts as the initiator end, and use the initiator tool of the NVMe over Fabrics protocol to connect to the controller used for managing local storage resources on the corresponding storage node; the controller used for managing local storage resources on the storage node corresponding to the device configures an RDMA IP address on a local network card chip interface with RDMA capability based on a configuration command issued by the user, and configures the NVMe over Fabrics protocol on a network card chip with NVMe-oF target offload capability to act as the target end.
The above units may be one or more integrated circuits configured to implement the above methods, for example one or more Application Specific Integrated Circuits (ASICs), one or more digital signal processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. As another example, when one of the above units is implemented in the form of a processing element scheduling program code, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of invoking program code. As yet another example, these units may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Further, for the intelligent network card provided in the embodiment of the present application, from the hardware perspective a schematic diagram of its hardware architecture may be as shown in fig. 4. The intelligent network card may include a memory 40 and a processor 41, where the memory 40 is used to store program instructions, and the processor 41 calls the program instructions stored in the memory 40 and executes the above method embodiments in accordance with the obtained program instructions. The specific implementation and technical effects are similar and are not described here again.
Optionally, the present application further provides an intelligent network card, which includes at least one processing element (or chip) for executing the above method embodiments.
Optionally, the present application also provides a program product, such as a computer-readable storage medium, having stored thereon computer-executable instructions for causing the computer to perform the above-described method embodiments.
Here, a machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disc (e.g., an optical disc or a DVD), a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A data processing method based on distributed storage is characterized in that each storage node in the distributed storage is respectively provided with a corresponding intelligent network card, each intelligent network card runs distributed storage service, each intelligent network card respectively establishes a data channel with a controller used for managing local storage resources on the corresponding storage node, and the method comprises the following steps:
the intelligent network card receives a data read-write request sent by a client;
the intelligent network card forwards the data read-write request to a controller which is used for managing local storage resources on the corresponding storage node, so that the controller processes the data read-write request through the corresponding data channel and sends the processing result to the intelligent network card;
and the intelligent network card receives the processing result sent by the controller and forwards the processing result to the client.
2. The method of claim 1, wherein before the smart network card receives the data read-write request sent by the client, the method further comprises:
the method comprises the steps that initialization operation is carried out on the intelligent network card based on user configuration, so that the intelligent network card runs distributed storage service, and a data channel is established between the intelligent network card and a controller which is used for managing local storage resources and is arranged on a storage node corresponding to the intelligent network card.
3. The method of claim 2, wherein the step of the intelligent network card performing an initialization operation based on the user configuration to cause the intelligent network card to run the distributed storage service comprises:
the intelligent network card locally runs the distributed storage service based on the configuration instruction issued by the user, and forms a distributed storage service cluster with other intelligent network cards running the distributed storage service.
4. The method of claim 2, wherein the step of the intelligent network card performing initialization operations based on user configuration so that the intelligent network card establishes a data channel with a controller on its corresponding storage node for managing local storage resources comprises:
the method comprises the steps that an intelligent network card configures an NVMe over Fabrics protocol locally based on a configuration command issued by a user, and establishes a remote direct data access RDMA data channel as an initiator end and a controller used for managing local storage resources on a storage node corresponding to the initiator end, wherein the controller used for managing the local storage resources on the storage node corresponding to the intelligent network card is configured as a target end of the NVMe over Fabrics protocol.
5. The method of claim 4, wherein the step of the intelligent network card configuring the NVMe over Fabrics protocol locally based on a configuration command issued by a user, and establishing a remote direct data access RDMA data channel as a controller on an initiator end and a storage node corresponding thereto for managing local storage resources comprises:
the intelligent network card locally configures the IP address of RDMA based on a configuration command issued by a user, and uses an initiator tool in an NVMe over Fabrics protocol to connect a controller for managing local storage resources on a corresponding storage node; the controller used for managing local storage resources on the storage node corresponding to the intelligent network card configures an RDMA IP address on a local network card chip interface with RDMA capability based on a configuration command issued by a user, and configures an NVMe over Fabrics protocol on a network card chip with NVMe-oF target off-load capability as a target end.
6. A data processing apparatus based on distributed storage, wherein each storage node in the distributed storage is configured with a corresponding apparatus, each apparatus runs a distributed storage service, and each apparatus establishes a data channel with a controller for managing local storage resources on its corresponding storage node, respectively, the apparatus comprising:
the first receiving unit is used for receiving a data read-write request sent by a client;
the forwarding unit is used for forwarding the data read-write request to a controller which is used for managing local storage resources on a corresponding storage node, so that the controller processes the data read-write request through the corresponding data channel and sends the processing result to the device;
and the second receiving unit is used for receiving the processing result sent by the controller and forwarding the processing result to the client.
7. The apparatus of claim 6, wherein the apparatus further comprises:
the device comprises an initialization unit used for carrying out initialization operation based on user configuration so as to enable the device to run the distributed storage service and enable the device to establish a data channel with a controller used for managing local storage resources on a corresponding storage node.
8. The apparatus according to claim 7, wherein an initialization operation is performed based on a user configuration, so that when the apparatus runs the distributed storage service, the initialization unit is specifically configured to:
and based on a configuration instruction issued by a user, locally operating the distributed storage service, and forming a distributed storage service cluster with other devices operating the distributed storage service.
9. The apparatus according to claim 7, wherein when performing an initialization operation based on user configuration, so that the apparatus establishes a data channel with a controller on a corresponding storage node, where the controller is used to manage local storage resources, the initialization unit is specifically configured to:
based on a configuration command issued by the user, configuring the NVMe over Fabrics protocol locally on the apparatus and, acting as the initiator end, establishing a remote direct memory access (RDMA) data channel with the controller used for managing local storage resources on the corresponding storage node, wherein the controller used for managing local storage resources on the storage node corresponding to the apparatus is configured as the target end of the NVMe over Fabrics protocol.
10. The apparatus of claim 9, wherein, when the NVMe over Fabrics protocol is configured locally based on a configuration command issued by the user and an RDMA data channel is established, as the initiator end, with the controller used for managing local storage resources on the corresponding storage node, the initialization unit is specifically configured to:
based on a configuration command issued by the user, locally configure an RDMA IP address so that the apparatus acts as the initiator end, and use the initiator tool of the NVMe over Fabrics protocol to connect to the controller used for managing local storage resources on the corresponding storage node; the controller used for managing local storage resources on the storage node corresponding to the apparatus configures an RDMA IP address on a local network card chip interface with RDMA capability based on a configuration command issued by the user, and configures the NVMe over Fabrics protocol on a network card chip with NVMe-oF target offload capability as the target end.
CN202011339302.3A 2020-11-25 2020-11-25 Data processing method and device based on distributed storage Pending CN112596669A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011339302.3A CN112596669A (en) 2020-11-25 2020-11-25 Data processing method and device based on distributed storage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011339302.3A CN112596669A (en) 2020-11-25 2020-11-25 Data processing method and device based on distributed storage

Publications (1)

Publication Number Publication Date
CN112596669A true CN112596669A (en) 2021-04-02

Family

ID=75183927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011339302.3A Pending CN112596669A (en) 2020-11-25 2020-11-25 Data processing method and device based on distributed storage

Country Status (1)

Country Link
CN (1) CN112596669A (en)


Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104125303A (en) * 2014-08-12 2014-10-29 北京奇虎科技有限公司 Data read-and-write requesting method, client end and data read-and-write requesting system
US20170034268A1 (en) * 2015-07-31 2017-02-02 Netapp, Inc. Systems, methods and devices for rdma read/write operations
CN106775434A (en) * 2015-11-19 2017-05-31 华为技术有限公司 A kind of implementation method of NVMe networkings storage, terminal, server and system
CN106657365A (en) * 2016-12-30 2017-05-10 清华大学 High concurrent data transmission method based on RDMA (Remote Direct Memory Access)
CN108268208A (en) * 2016-12-30 2018-07-10 清华大学 A kind of distributed memory file system based on RDMA
CN108694021A (en) * 2017-04-03 2018-10-23 三星电子株式会社 The system and method for configuring storage device using baseboard management controller
US20190163364A1 (en) * 2017-11-30 2019-05-30 Eidetic Communications Inc. System and method for tcp offload for nvme over tcp-ip
CN108063821A (en) * 2017-12-19 2018-05-22 国网湖南省电力有限公司 A kind of Electric Power Marketing System based on X86-based
CN110535811A (en) * 2018-05-25 2019-12-03 中兴通讯股份有限公司 Remote memory management method and system, server-side, client, storage medium
CN109274647A (en) * 2018-08-27 2019-01-25 杭州创谐信息技术股份有限公司 Distributed credible memory exchanges method and system
CN110941576A (en) * 2018-09-21 2020-03-31 苏州库瀚信息科技有限公司 System, method and device for memory controller with multi-mode PCIE function
US20200195718A1 (en) * 2018-12-12 2020-06-18 International Business Machines Corporation Workflow coordination in coordination namespace
CN109617735A (en) * 2018-12-26 2019-04-12 华为技术有限公司 Cloud computation data center system, gateway, server and message processing method
CN110113420A (en) * 2019-05-08 2019-08-09 重庆大学 Distributed Message Queue management system based on NVM
CN110191194A (en) * 2019-06-13 2019-08-30 华中科技大学 A kind of Distributed File System Data transmission method and system based on RDMA network
CN110445848A (en) * 2019-07-22 2019-11-12 阿里巴巴集团控股有限公司 Method and apparatus for issued transaction
CN110515724A (en) * 2019-08-13 2019-11-29 新华三大数据技术有限公司 Resource allocation method, device, monitor and machine readable storage medium
CN110677402A (en) * 2019-09-24 2020-01-10 深圳前海微众银行股份有限公司 Data integration method and device based on intelligent network card

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
左旭彤 et al.: "Low-latency networks: architecture, key scenarios and research prospects", Journal on Communications *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327903A (en) * 2021-12-30 2022-04-12 苏州浪潮智能科技有限公司 NVMe-oF management system, resource allocation method and IO read-write method
CN114327903B (en) * 2021-12-30 2023-11-03 苏州浪潮智能科技有限公司 NVMe-oF management system, resource allocation method and IO read-write method
CN114726883A (en) * 2022-04-27 2022-07-08 重庆大学 Embedded RDMA system
CN114726883B (en) * 2022-04-27 2023-04-07 重庆大学 Embedded RDMA system

Similar Documents

Publication Publication Date Title
CN112596960B (en) Distributed storage service switching method and device
US9413683B2 (en) Managing resources in a distributed system using dynamic clusters
JP5510556B2 (en) Method and system for managing virtual machine storage space and physical hosts
CN110069346B (en) Method and device for sharing resources among multiple processes and electronic equipment
US9916215B2 (en) System and method for selectively utilizing memory available in a redundant host in a cluster for virtual machines
TWI694700B (en) Data processing method and device, user terminal
CN109032533B (en) Data storage method, device and equipment
CN113204407B (en) Memory supermanagement method and device
WO2019028682A1 (en) Multi-system shared memory management method and device
CN112596669A (en) Data processing method and device based on distributed storage
WO2023160083A1 (en) Method for executing transactions, blockchain, master node, and slave node
CN108595346B (en) Feature library file management method and device
WO2021086693A1 (en) Management of multiple physical function non-volatile memory devices
CN113312182B (en) Cloud computing node, file processing method and device
CN112631994A (en) Data migration method and system
WO2024001025A1 (en) Pre-execution cache data cleaning method and blockchain node
CN114785662B (en) Storage management method, device, equipment and machine-readable storage medium
CN112800057A (en) Fingerprint table management method and device
CN110096355B (en) Shared resource allocation method, device and equipment
CN113704165B (en) Super fusion server, data processing method and device
CN109101188B (en) Data processing method and device
CN115499314A (en) Cluster node IP modification method and device
CN115037783B (en) Data transmission method and device
CN111258748B (en) Distributed file system and control method
CN117311729A (en) System deployment method, device, equipment and machine-readable storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210402)