CN112995311B - Service providing method, device and storage medium - Google Patents

Service providing method, device and storage medium

Info

Publication number
CN112995311B
Authority
CN
China
Prior art keywords
iscsi
storage node
client
address
storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110184185.6A
Other languages
Chinese (zh)
Other versions
CN112995311A (en)
Inventor
林福寿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Star Net Ruijie Networks Co Ltd
Original Assignee
Beijing Star Net Ruijie Networks Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Star Net Ruijie Networks Co Ltd filed Critical Beijing Star Net Ruijie Networks Co Ltd
Priority to CN202110184185.6A priority Critical patent/CN112995311B/en
Publication of CN112995311A publication Critical patent/CN112995311A/en
Application granted granted Critical
Publication of CN112995311B publication Critical patent/CN112995311B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/563 Data redirection of data network streams

Abstract

The embodiments of the present application provide a service providing method, a service providing device, and a storage medium. In the embodiments of the present application, the iSCSI client is redirected to a first storage node whose IP address is the same as that of the iSCSI client, which means that the iSCSI server and the iSCSI client can be deployed on the first storage node at the same time. I/O data between the iSCSI client and the iSCSI server therefore does not need to be transmitted across the inter-node network, which localizes the load, improves the I/O performance of the iSCSI server, and reduces the bandwidth requirement of the iSCSI service network. In this case, if the iSCSI clients are deployed on the storage nodes in a load-balanced manner, load balancing of the iSCSI servers is achieved as well.

Description

Service providing method, device and storage medium
Technical Field
The present disclosure relates to the field of distributed storage technologies, and in particular, to a service providing method, device, and storage medium.
Background
The Internet Small Computer System Interface (iSCSI) is a storage technology based on the Transmission Control Protocol/Internet Protocol (TCP/IP). The iSCSI protocol adopts a client/server model, in which the client is commonly referred to as the initiator and the server is referred to as the target; data is encapsulated and reliably transferred at the block level between the initiator and the target.
As a commonly used block storage protocol, the iSCSI protocol is widely applied in distributed storage systems. The initiator under the iSCSI protocol is typically deployed on a client of the distributed storage system, while the target is typically deployed on a storage node of the distributed storage system to provide iSCSI services for the initiator. In practical applications, the target suffers from low I/O performance when providing iSCSI services for the initiator.
Disclosure of Invention
Aspects of the present application provide a service providing method, apparatus, and storage medium for improving I/O performance of an iSCSI server.
An embodiment of the present application provides a service providing method, which is applicable to a storage node serving as a master node in a distributed storage system and includes the following steps: receiving a message sent by an Internet Small Computer System Interface (iSCSI) client, wherein the message includes an Internet Protocol (IP) address of the iSCSI client; determining whether a first storage node whose IP address is the same as the IP address of the iSCSI client exists in the distributed storage system; and if so, redirecting the iSCSI client to the first storage node, and controlling the first storage node to run an iSCSI server corresponding to the iSCSI client, so that the iSCSI server provides the iSCSI service for the iSCSI client.
An embodiment of the present application further provides a server device, which includes: a memory and a processor; the memory is configured to store a computer program; and the processor, coupled with the memory, is configured to execute the computer program to: receive a message sent by an Internet Small Computer System Interface (iSCSI) client, wherein the message includes an Internet Protocol (IP) address of the iSCSI client; determine whether a first storage node whose IP address is the same as the IP address of the iSCSI client exists in the distributed storage system; and if so, redirect the iSCSI client to the first storage node and control the first storage node to run an iSCSI server corresponding to the iSCSI client, so that the iSCSI server provides the iSCSI service for the iSCSI client.
The present application also provides a computer-readable storage medium storing a computer program, which when executed by a processor causes the processor to implement the steps in the service providing method provided by the embodiments of the present application.
In the embodiments of the present application, the iSCSI client is redirected to a first storage node whose IP address is the same as that of the iSCSI client, which means that the iSCSI server and the iSCSI client can be deployed on the first storage node at the same time. I/O data between the iSCSI client and the iSCSI server therefore does not need to be transmitted across the inter-node network, which localizes the load, improves the I/O performance of the iSCSI server, and reduces the bandwidth requirement of the iSCSI service network. In this case, if the iSCSI clients are deployed on the storage nodes in a load-balanced manner, load balancing of the iSCSI servers is achieved as well.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1a is a schematic diagram of a distributed storage system according to an exemplary embodiment of the present application;
FIG. 1b is a schematic diagram of another distributed storage system according to an exemplary embodiment of the present application;
FIG. 1c is a schematic diagram of an internal module of a storage node according to an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a service providing method according to an exemplary embodiment of the present application;
FIG. 3 is a schematic structural diagram of a server device according to an exemplary embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to specific embodiments of the present application and the corresponding drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
FIG. 1a is a schematic structural diagram of a distributed storage system according to an exemplary embodiment of the present application. As shown in FIG. 1a, the distributed storage system 100 includes: at least one storage node 101.
In this embodiment, the distributed storage system 100 stores data dispersedly on a plurality of independent devices and uses these storage devices to share the storage load, while a server is used to locate the stored information. This not only improves the reliability, availability, and access efficiency of the system, but also makes it easy to expand, constructing the dispersed storage devices into one large virtual storage pool for upper-layer applications. As shown in FIG. 1b, the distributed storage system 100 is a general core component of the hyper-converged system 104. The hyper-converged system 104 is a basic system that integrates computing resources and storage devices. In the distributed storage system 100 within the hyper-converged system 104, storage functions and computing functions can be integrated on the same storage node 101, so that the storage node 101 combines computing and storage resources, and multiple storage nodes 101 can be aggregated through a network to achieve modular, seamless lateral expansion (Scale-Out) and form a uniform resource pool.
In this embodiment, as shown in FIG. 1b, the hyper-converged system 104 further includes a management node 102, which may perform functions such as attribute maintenance, file operation logging, authorizing access, and creating or logging out nodes. In addition, the management node 102 may create the storage nodes 101 of the distributed storage system 100 on the basis of the nodes 103 in the hyper-converged system 104. A node 103 in the hyper-converged system 104 contains computing resources and is a resource provider of the hyper-converged system 104. The node 103 includes a deployment module, which provides an add interface to the outside; the management node 102 may add the node 103 to the distributed storage system 100 by calling the add interface, and create a Storage module and a management (Zookeeper) module on the node 103 to obtain a storage node 101. As shown in FIG. 1c, the storage node 101 at least includes the device module, the Storage module, and the Zookeeper module. The Storage module is used to run the iSCSI service, provide the disk virtualization function, and support functions such as configuring storage pools and volumes. The Zookeeper module is used for cluster management on the one hand, and on the other hand can serve as a configuration center to store relevant configuration, such as the virtual IP address of the iSCSI service.
In this embodiment, when the management node 102 creates the first storage node 101 in the distributed storage system, a virtual IP address is passed to the Zookeeper module, and the Zookeeper module stores the virtual IP address and shares it with the other storage nodes 101 in the distributed storage system 100. In addition, when a storage node 101 is added to the distributed storage system 100, an actual IP address is assigned to it. All storage nodes 101 in the distributed storage system 100 elect a master node 101a; the virtual IP address is run by the master node 101a, while the storage nodes 101 other than the master node 101a run their actual IP addresses. Optionally, when another storage node is re-elected as the master node in the distributed storage system, the storage node re-elected as the master node configures the virtual IP address as its own IP address for use, and the storage node that served as the master node before the re-election clears the configured virtual IP address. The virtual IP address can only be run by the master node 101a, so whether a storage node is the master node can be distinguished by the virtual IP address, and the same virtual IP address is used regardless of which storage node is the master node.
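The handover of the virtual IP address on re-election can be sketched as follows. This is a minimal illustrative sketch and not the patent's implementation: it assumes a Linux environment in which the virtual IP address is configured and removed with the ip addr command on a known network interface, and that the election result is delivered to a callback; all names and addresses are hypothetical.
    # Illustrative sketch of the virtual-IP handover on master re-election.
    # Assumptions (not from the patent): Linux `ip addr` manages the address on
    # a known interface, and the election outcome is delivered to this callback.
    import subprocess

    VIRTUAL_IP = "192.168.10.100/24"   # hypothetical virtual IP shared by the cluster
    IFACE = "eth0"                     # hypothetical service-network interface

    def on_master_changed(was_master: bool, is_master: bool) -> None:
        """Only the current master node runs the virtual IP address."""
        if was_master and not is_master:
            # The node that was master before the re-election clears the virtual IP.
            subprocess.run(["ip", "addr", "del", VIRTUAL_IP, "dev", IFACE], check=False)
        elif is_master and not was_master:
            # The newly elected master configures the virtual IP as its own address.
            subprocess.run(["ip", "addr", "add", VIRTUAL_IP, "dev", IFACE], check=False)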
In this embodiment, the iSCSI protocol is widely used in the distributed storage system 100 as a commonly used block storage protocol. iSCSI is a protocol that transmits commands over the TCP/IP protocol and adopts a client/server model; in this embodiment, the client is referred to as the iSCSI client and the server is referred to as the iSCSI server, and data is encapsulated and reliably transferred between the iSCSI client and the iSCSI server at the block level. The iSCSI server may be deployed on at least one storage node 101 in the distributed storage system 100; the iSCSI client may be deployed on at least one storage node 101 in the distributed storage system 100, or on a node outside the distributed storage system 100, such as the node 103, which is not limited. FIG. 1a illustrates an example in which the iSCSI clients are distributed across the storage nodes 101 in the distributed storage system 100.
In this embodiment, in order to ensure high availability of the iSCSI service, the distributed storage system 100 needs to be configured with a virtual IP address. The storage node 101 running the virtual IP address is referred to as the master node, denoted 101a. The iSCSI client accesses the master node 101a using the virtual IP address, the master node 101a redirects the login request to a certain storage node 101 in the distributed storage system 100, and the storage node 101 to which the request is redirected provides the iSCSI service for the iSCSI client.
In this embodiment, the iSCSI client may obtain the virtual IP address run by the master node in order to establish a connection with the master node, and the manner in which the iSCSI client obtains the virtual IP address is not limited. In an alternative embodiment, the iSCSI client is deployed on a storage node 101 in the distributed storage system; since the virtual IP address is configured by the management node 102, the iSCSI client may obtain the virtual IP address run by the master node through the management node 102. In yet another alternative embodiment, the iSCSI client is deployed on a node outside the distributed storage system; when the iSCSI client is deployed on such a node, the virtual IP address is notified to the iSCSI client, so that the iSCSI client can obtain it.
In this embodiment, the iSCSI client establishes a connection with the master node 101a through the virtual IP address, and the iSCSI client may send a packet to the master node 101a, where the packet carries the IP address of the iSCSI client. For example, the source IP address of the packet may be set to the IP address of the iSCSI client, and the destination address of the packet may be set to the virtual IP address of the master node 101a. The IP address of the iSCSI client is the IP address of the node where the iSCSI client is located; for example, if the iSCSI client is deployed on a storage node 101, it is the actual IP address run by that storage node 101, and if the iSCSI client is deployed on a node 103, it is the IP address run by the node 103, which is not limited. The master node 101a may receive the packet; optionally, the iSCSI service running in the Storage module on the master node 101a listens for iSCSI connections, and if a packet sent by an iSCSI client carries the virtual IP address run by the master node 101a, the master node 101a receives the packet.
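The listening behaviour described above can be sketched with a plain TCP socket bound to the virtual IP. This is an illustrative assumption only (a real iSCSI target parses login PDUs rather than raw connections), and the addresses are hypothetical.
    # Illustrative sketch: the master node's iSCSI service listens on the virtual
    # IP; the client's IP address is obtained from the accepted connection. A real
    # target would parse iSCSI login PDUs; this only shows the listening and
    # source-address extraction described above.
    import socket

    VIRTUAL_IP = "192.168.10.100"   # hypothetical virtual IP run by the master node
    ISCSI_PORT = 3260               # conventional iSCSI port

    def serve_one_login() -> str:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((VIRTUAL_IP, ISCSI_PORT))   # only the master holds this address
            srv.listen()
            conn, (client_ip, _port) = srv.accept()
            with conn:
                return client_ip                  # used for the redirect decision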
In this embodiment, after receiving the packet sent by the iSCSI client, the master node 101a parses the IP address of the iSCSI client from the packet and determines whether a first storage node 101b whose IP address is the same as the IP address of the iSCSI client exists in the distributed storage system. If such a first storage node 101b exists, the master node 101a redirects the iSCSI client to the first storage node 101b and controls the first storage node 101b to run the iSCSI server corresponding to the iSCSI client, so that the iSCSI server provides the iSCSI service for the iSCSI client.
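The redirect decision made by the master node can be sketched as follows. This is a minimal illustrative sketch under assumed data structures (a list of node records with an "ip" field); it is not the patent's implementation, and port 3260 is only the conventional iSCSI port.
    # Minimal sketch of the master node's redirect decision: prefer the storage
    # node whose actual IP equals the iSCSI client's IP, so that client and
    # target end up co-located. Data structures and names are assumptions.
    from typing import Optional

    ISCSI_PORT = 3260  # conventional iSCSI port, used here for illustration

    def find_colocated_node(client_ip: str, storage_nodes: list) -> Optional[dict]:
        """Return the first storage node whose IP matches the client's IP, if any."""
        for node in storage_nodes:
            if node["ip"] == client_ip:
                return node
        return None

    def decide_redirect(client_ip: str, storage_nodes: list) -> dict:
        node = find_colocated_node(client_ip, storage_nodes)
        if node is not None:
            # I/O between client and server stays on this node (load localization).
            return {"redirect_ip": node["ip"], "redirect_port": ISCSI_PORT}
        # No co-located node: fall back to performance-based selection (a later
        # sketch illustrates the weighted selection of the second storage node).
        raise LookupError("no matching node; select a second storage node instead")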
In this embodiment, the iSCSI client is redirected to the first storage node 101b having the same IP address as the iSCSI client, and the first storage node 101b is controlled to run the iSCSI server corresponding to the iSCSI client. This means that the iSCSI client and the iSCSI server are deployed on the same first storage node 101b, so that I/O data between the iSCSI client and the iSCSI server does not need to be transmitted across the inter-node network, which improves the I/O performance of the iSCSI server, localizes the load, and reduces the bandwidth requirement of the iSCSI service network. In this case, if the iSCSI clients are deployed on the storage nodes in a load-balanced manner, load balancing of the iSCSI servers is achieved as well.
In this embodiment, the manner of redirecting the iSCSI client to the first storage node 101b is not limited. One implementation of redirecting the iSCSI client to the first storage node 101b includes: sending the IP address of the first storage node 101b and the port number of the iSCSI service to the iSCSI client, so that the iSCSI client can establish a connection with the first storage node 101b.
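In the standard iSCSI protocol (RFC 7143), such a redirect is commonly conveyed in a Login Response whose status indicates "Target moved temporarily" together with a TargetAddress key/value pair; the patent only states that the node's IP address and the port number are sent to the client, so the mapping below is an assumption for illustration.
    # Hedged sketch: expressing the redirect as an iSCSI TargetAddress key, as a
    # standard initiator would expect after a "Target moved temporarily" login
    # response. The portal group tag and the helper itself are illustrative.
    def build_redirect_keys(node_ip: str, iscsi_port: int = 3260,
                            portal_group_tag: int = 1) -> dict:
        """Text key telling the initiator where to re-login."""
        return {"TargetAddress": f"{node_ip}:{iscsi_port},{portal_group_tag}"}

    # Example: build_redirect_keys("10.0.0.12") returns
    #   {"TargetAddress": "10.0.0.12:3260,1"}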
In an alternative embodiment, the iSCSI client may be deployed outside the distributed storage system while the iSCSI server is deployed on a storage node 101 in the distributed storage system 100. In this case, the iSCSI client and the iSCSI server cannot be deployed on the same node, that is, there is no first storage node 101b in the distributed storage system 100 whose IP address is the same as the IP address of the iSCSI client, and an appropriate storage node 101 may instead be selected in the distributed storage system to deploy the iSCSI server that provides the iSCSI service for the iSCSI client. Based on this, if there is no first storage node 101b in the distributed storage system 100 whose IP address is the same as that of the iSCSI client, a second storage node 101c is selected based on the performance parameters of each storage node 101 in the distributed storage system 100, the iSCSI client is redirected to the second storage node 101c, and the second storage node 101c is controlled to run the iSCSI server corresponding to the iSCSI client, so that the iSCSI server provides the iSCSI service for the iSCSI client.
In the present embodiment, the performance parameters of the storage node 101 are not limited, and may include at least one of: the number of connected iSCSI clients, the number of computing units, the size of the storage space, and the size of the network bandwidth. The number of computing units may be the number of CPUs or GPUs, the number of cores, and the like; the size of the storage space may be the size of the memory or the hard disk, for example, 1 GB, 5 GB, or 1 TB; the network bandwidth may be, for example, 200 Mbps (megabits per second), 500 Mbps, or the like.
Further optionally, one embodiment of selecting the second storage node 101c from among the storage nodes 101 based on the performance parameters of each storage node 101 in the distributed storage system 100 includes: performing a weighted summation of the performance parameters of each storage node 101 in the distributed storage system 100 to obtain a comprehensive performance parameter for each storage node 101, and selecting, from among the storage nodes 101, a storage node whose comprehensive performance parameter meets a set condition as the second storage node 101c. The set condition may be selecting the storage node with the best comprehensive performance parameter, or selecting a storage node at random from the storage nodes whose comprehensive performance parameter exceeds a set threshold, where the set threshold may be 50%, 60%, 80%, or the like, which is not limited. In this embodiment, selecting the second storage node based on the performance parameters of each storage node in the distributed storage system takes the performance of every storage node into account comprehensively, which is beneficial to achieving load balancing of the distributed storage system.
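The weighted summation can be sketched as below; the parameter names, the weights, and the assumption of pre-normalized inputs are illustrative rather than values given in the patent.
    # Minimal sketch of the weighted-sum "comprehensive performance parameter".
    # Weights and field names are assumptions; inputs are taken as already
    # normalized to comparable scales.
    WEIGHTS = {
        "connected_clients": -0.4,  # more connected iSCSI clients -> less attractive
        "cpu_cores": 0.2,
        "free_storage_gb": 0.2,
        "bandwidth_mbps": 0.2,
    }

    def comprehensive_score(node: dict) -> float:
        """Weighted sum of a node's performance parameters."""
        return sum(weight * node.get(name, 0.0) for name, weight in WEIGHTS.items())

    def select_second_node(storage_nodes: list) -> dict:
        """Example set condition: pick the node with the best comprehensive score."""
        return max(storage_nodes, key=comprehensive_score)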
In an alternative embodiment, in order to achieve high availability of the iSCSI server, before determining whether the first storage node 101b whose IP address is the same as the IP address of the iSCSI client exists in the distributed storage system 100, it may first be determined whether the iSCSI server corresponding to the iSCSI client is already running on a storage node in the distributed storage system. If so, the iSCSI server corresponding to the iSCSI client has already been deployed in the distributed storage system 100; to maintain high availability, there is no need to deploy the iSCSI server for that iSCSI client again, and the iSCSI client may be redirected directly to the third storage node 101d on which the iSCSI server is running.
In this embodiment, the manner of determining whether the iSCSI server corresponding to the iSCSI client is already running on a storage node in the distributed storage system is not limited. In an alternative embodiment, the correspondence between the identification information of each iSCSI server and the storage node running that iSCSI server is maintained in the Zookeeper module. Based on this, after receiving the packet sent by the iSCSI client, the master node 101a may also parse the identification information of the iSCSI server requested by the iSCSI client, and look up, in the maintained correspondence, the storage node corresponding to that identification information. If no storage node is found, the iSCSI server requested by the iSCSI client has not been deployed on any storage node 101 in the distributed storage system 100, that is, no storage node in the distributed storage system 100 is running the iSCSI server corresponding to the iSCSI client. If a storage node corresponding to the identification information is found, the iSCSI server corresponding to the iSCSI client is already running on a storage node in the distributed storage system.
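A lookup of this correspondence can be sketched as follows, assuming the mapping from the requested target's identification information (e.g. its IQN) to a node identifier is stored under a ZooKeeper path and read with the kazoo client; the znode layout and all names are assumptions for illustration.
    # Hedged sketch of the target-to-node lookup kept in ZooKeeper. The znode
    # path layout and the node-ID encoding are illustrative assumptions.
    from typing import Optional
    from kazoo.client import KazooClient

    def find_node_running_target(zk: KazooClient, target_iqn: str) -> Optional[str]:
        """Return the node ID recorded for this target, or None if not deployed."""
        path = "/iscsi/targets/" + target_iqn      # hypothetical znode layout
        if zk.exists(path) is None:
            return None                            # target not deployed on any node
        data, _stat = zk.get(path)
        return data.decode("utf-8")                # e.g. the third storage node's ID

    # Usage (addresses are assumptions):
    #   zk = KazooClient(hosts="node1:2181,node2:2181,node3:2181")
    #   zk.start()
    #   node_id = find_node_running_target(zk, "iqn.2021-02.com.example:vol1")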
It should be noted that the master node 101a may itself run an iSCSI server; that is, when redirecting the iSCSI client, the master node 101a may redirect it to itself. In other words, the first storage node 101b, the second storage node 101c, or the third storage node 101d may or may not be the master node 101a, which is not limited.
In this embodiment, after the master node 101a redirects the iSCSI client to another storage node (the first storage node 101b, the second storage node 101c, or the third storage node 101d, collectively referred to herein as the target storage node), the iSCSI client disconnects from the master node 101a and then initiates the connection login again on the redirected-to node.
The login processing after redirection includes the following steps (a simplified sketch of step (3) follows the list):
(1) The iSCSI service of the target storage node listens for iSCSI connections on the IP address and port number of the service network;
(2) The iSCSI client receives the IP address of the target storage node and the port number of the iSCSI service sent by the master node 101a, establishes an iSCSI connection based on that IP address and port number, and sends a login message to the target storage node;
(3) The target storage node receives and parses the login message, performs an access control list (Access Control Lists, ACL) rule check on it, then performs a challenge handshake authentication protocol (Challenge Handshake Authentication Protocol, CHAP) authentication check, and sends a response message to the iSCSI client after the checks pass;
(4) The iSCSI client receives the response message and the login succeeds.
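The check in step (3) can be sketched as follows. The ACL content and all data structures are illustrative assumptions; the CHAP verification follows RFC 1994, in which the expected response is the MD5 digest of the identifier, the shared secret, and the challenge.
    # Simplified sketch of step (3): ACL rule check followed by CHAP verification
    # on the target storage node. ACL entries and names are assumptions; the CHAP
    # arithmetic (MD5 over id || secret || challenge) follows RFC 1994.
    import hashlib

    ACL_ALLOWED_INITIATORS = {"iqn.2021-02.com.example:client-1"}  # hypothetical ACL

    def acl_check(initiator_iqn: str) -> bool:
        return initiator_iqn in ACL_ALLOWED_INITIATORS

    def chap_expected_response(chap_id: int, secret: bytes, challenge: bytes) -> bytes:
        """Expected CHAP response = MD5(id || secret || challenge), per RFC 1994."""
        return hashlib.md5(bytes([chap_id]) + secret + challenge).digest()

    def handle_login(initiator_iqn: str, chap_id: int, secret: bytes,
                     challenge: bytes, client_response: bytes) -> str:
        if not acl_check(initiator_iqn):
            return "login rejected: ACL check failed"
        if client_response != chap_expected_response(chap_id, secret, challenge):
            return "login rejected: CHAP authentication failed"
        return "login success"  # the target then sends the response message (step 4)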
FIG. 2 is a flowchart of a service providing method according to an exemplary embodiment of the present application; the method is applicable to a storage node serving as a master node in a distributed storage system. As shown in FIG. 2, the method includes:
s201, receiving a message sent by an iSCSI client, wherein the message comprises an IP address of the iSCSI client;
s202, judging whether a first storage node with the same IP address as the IP address of the iSCSI client exists in the distributed storage system or not;
s203, if the iSCSI client exists, redirecting the iSCSI client to the first storage node, and controlling the first storage node to operate an iSCSI service end corresponding to the iSCSI client so as to provide iSCSI service for the iSCSI client by the iSCSI service end.
In an optional embodiment, if there is no first storage node in the distributed storage system whose IP address is the same as the IP address of the iSCSI client, a second storage node is selected from the storage nodes based on the performance parameters of each storage node in the distributed storage system, the iSCSI client is redirected to the second storage node, and the second storage node is controlled to run an iSCSI server corresponding to the iSCSI client, so that the iSCSI server provides the iSCSI service for the iSCSI client.
In an alternative embodiment, redirecting the iSCSI client to the first storage node or the second storage node includes: sending the IP address of the first storage node or the second storage node and the port number of the iSCSI service to the iSCSI client, so that the iSCSI client can establish a connection with the first storage node or the second storage node.
In an alternative embodiment, the performance parameters of the storage node include: at least one of the number of connected iSCSI clients, the number of computing units, the size of the storage space, and the size of the network bandwidth. Selecting the second storage node from among the storage nodes based on the performance parameters of each storage node in the distributed storage system then includes: performing a weighted summation of the performance parameters of each storage node in the distributed storage system to obtain the comprehensive performance parameter of each storage node; and selecting, from among the storage nodes, a storage node whose comprehensive performance parameter meets the set condition as the second storage node.
In an optional embodiment, before determining whether the first storage node having the same IP address as the IP address of the iSCSI client exists in the distributed storage system, the method further includes: determining whether an iSCSI server corresponding to the iSCSI client is already running on a storage node in the distributed storage system; and if so, redirecting the iSCSI client to the third storage node on which the iSCSI server is running.
In an alternative embodiment, receiving a message sent by an iSCSI client includes: if the message sent by the iSCSI client carries the virtual IP address, receiving the message; the virtual IP address is the IP address used by the master node.
In an alternative embodiment, the method provided in this embodiment further includes: when another storage node is re-elected as the master node in the distributed storage system, the storage node re-elected as the master node configures the virtual IP address as its own IP address for use, and the storage node that served as the master node before the re-election clears the configured virtual IP address.
It should be noted that, the execution subjects of each step of the method provided in the above embodiment may be the same device, or the method may also be executed by different devices. For example, the execution subject of step S201 to step S203 may be the device a; for another example, the execution subject of steps S201 and S202 may be the device a, and the execution subject of step S203 may be the device B; etc.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations appearing in a specific order are included, but it should be clearly understood that the operations may be performed out of the order in which they appear herein or performed in parallel, the sequence numbers of the operations such as S201, S202, etc. are merely used to distinguish between the various operations, and the sequence numbers themselves do not represent any order of execution. In addition, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first" and "second" herein are used to distinguish different messages, devices, modules, etc., and do not represent a sequence, and are not limited to the "first" and the "second" being different types.
Fig. 3 is a schematic structural diagram of a server device according to an exemplary embodiment of the present application. As shown in fig. 3, the apparatus includes: a memory 34 and a processor 35.
Memory 34 is used to store computer programs and may be configured to store various other data to support operations on the server device. Examples of such data include instructions for any application or method operating on a server device.
The memory 34 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
A processor 35, coupled to the memory 34, is configured to execute the computer program in the memory 34 to: receive a message sent by an iSCSI client, wherein the message includes the IP address of the iSCSI client; determine whether a first storage node whose IP address is the same as the IP address of the iSCSI client exists in the distributed storage system; and if so, redirect the iSCSI client to the first storage node and control the first storage node to run an iSCSI server corresponding to the iSCSI client, so that the iSCSI server provides the iSCSI service for the iSCSI client.
In an alternative embodiment, the processor 35 is further configured to: if the distributed storage system does not have a first storage node with the same IP address as the iSCSI client, select a second storage node based on the performance parameters of each storage node in the distributed storage system, redirect the iSCSI client to the second storage node, and control the second storage node to run an iSCSI server corresponding to the iSCSI client, so that the iSCSI server provides the iSCSI service for the iSCSI client.
In an alternative embodiment, the processor 35, when redirecting the iSCSI client to the first storage node or the second storage node, is specifically configured to: send the IP address of the first storage node or the second storage node and the port number of the iSCSI service to the iSCSI client, so that the iSCSI client can establish a connection with the first storage node or the second storage node.
In an alternative embodiment, the performance parameters of the storage node include: at least one of the number of connected iSCSI clients, the number of computing units, the size of the storage space, and the size of the network bandwidth. When selecting the second storage node from among the storage nodes based on the performance parameters of each storage node in the distributed storage system, the processor 35 is specifically configured to: perform a weighted summation of the performance parameters of each storage node in the distributed storage system to obtain the comprehensive performance parameter of each storage node; and select, from among the storage nodes, a storage node whose comprehensive performance parameter meets the set condition as the second storage node.
In an alternative embodiment, before determining whether there is a first storage node in the distributed storage system having an IP address that is the same as the IP address of the iSCSI client, the processor 35 is further configured to: determine whether an iSCSI server corresponding to the iSCSI client is already running on a storage node in the distributed storage system; and if so, redirect the iSCSI client to the third storage node on which the iSCSI server is running.
In an alternative embodiment, the processor 35, when receiving a packet sent by an iSCSI client, is specifically configured to: receive the message if the message sent by the iSCSI client carries the virtual IP address; the virtual IP address is the IP address used by the master node.
In an alternative embodiment, the processor 35 is further configured to: when another storage node is re-elected as the master node in the distributed storage system, the storage node re-elected as the master node configures the virtual IP address as its own IP address for use, and the storage node that served as the master node before the re-election clears the configured virtual IP address.
In the embodiments of the present application, the iSCSI client is redirected to a first storage node whose IP address is the same as that of the iSCSI client, which means that the iSCSI server and the iSCSI client can be deployed on the first storage node at the same time. I/O data between the iSCSI client and the iSCSI server therefore does not need to be transmitted across the inter-node network, which localizes the load, improves the I/O performance of the iSCSI server, and reduces the bandwidth requirement of the iSCSI service network. In this case, if the iSCSI clients are deployed on the storage nodes in a load-balanced manner, load balancing of the iSCSI servers is achieved as well.
Further, as shown in FIG. 3, the server device further includes: a communication component 36, a power supply component 38, and the like. Only some components are schematically shown in FIG. 3, which does not mean that the server device only includes the components shown in FIG. 3. The server device of this embodiment may be a conventional server, a cloud server, a server array, or the like.
Accordingly, the embodiments of the present application further provide a computer readable storage medium storing a computer program, where the computer program when executed can implement the steps that may be executed by the server device in the embodiments of the service providing method.
The communication component of FIG. 3 is configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device where the communication component is located can access a wireless network based on a communication standard, such as WiFi, or a mobile communication network such as 2G, 3G, 4G/LTE, or 5G, or a combination thereof. In one exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component further comprises a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power supply assembly in fig. 3 provides power for various components of the device in which the power supply assembly is located. The power components may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the devices in which the power components are located.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or nonvolatile memory, such as read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A service providing method, which is applicable to a storage node serving as a master node in a distributed storage system, comprising:
receiving a message sent by an Internet Small Computer System Interface (iSCSI) client, wherein the message comprises an Internet Protocol (IP) address of the iSCSI client;
judging whether a first storage node with the same IP address as the IP address of the iSCSI client exists in the distributed storage system or not;
if so, redirecting the iSCSI client to the first storage node, and controlling the first storage node to run an iSCSI server corresponding to the iSCSI client so as to provide iSCSI service for the iSCSI client by the iSCSI server.
2. The method as recited in claim 1, further comprising:
if the distributed storage system does not have the first storage node with the same IP address as the IP address of the iSCSI client, selecting a second storage node from the distributed storage system based on the performance parameters of each storage node, redirecting the iSCSI client to the second storage node, and controlling the second storage node to operate an iSCSI server corresponding to the iSCSI client so as to provide iSCSI service for the iSCSI client by the iSCSI server.
3. A method as in claim 2 wherein redirecting the iSCSI client to the first storage node or second storage node comprises:
and sending the IP address of the first storage node or the second storage node and the port number of the iSCSI service to the iSCSI client so that the iSCSI client can establish connection with the first storage node or the second storage node.
4. The method of claim 2, wherein the performance parameters of the storage node comprise: at least one of the number of connected iSCSI clients, the number of computing units, the size of the storage space, and the size of the network bandwidth;
selecting a second storage node from the plurality of storage nodes based on performance parameters of each storage node in the distributed storage system, comprising:
carrying out weighted summation on the performance parameters of all the storage nodes in the distributed storage system to obtain the comprehensive performance parameters of all the storage nodes;
and selecting a storage node with the comprehensive performance parameters meeting the set conditions from the storage nodes as a second storage node.
5. A method according to claim 1 or 2, further comprising, prior to determining whether a first storage node in the distributed storage system having an IP address that is the same as the IP address of the iSCSI client is present:
judging whether an iSCSI server corresponding to the iSCSI client is already operated on a storage node in the distributed storage system;
if yes, the iSCSI client is redirected to a third storage node running the iSCSI server.
6. A method as in claim 1 wherein receiving the message sent by the iSCSI client comprises:
if the message sent by the iSCSI client carries the virtual IP address, receiving the message; the virtual IP address is an IP address used by the master node.
7. The method as recited in claim 6, further comprising:
when another storage node is re-elected as the master node in the distributed storage system, the storage node re-elected as the master node configures the virtual IP address as its own IP address for use, and the storage node that served as the master node before the re-election clears the configured virtual IP address.
8. A server device, comprising: a memory and a processor;
the memory is used for storing a computer program;
the processor, coupled to the memory, is configured to execute the computer program for:
receiving a message sent by an Internet Small Computer System Interface (iSCSI) client, wherein the message comprises an Internet Protocol (IP) address of the iSCSI client; judging whether a first storage node with the same IP address as the IP address of the iSCSI client exists in the distributed storage system or not; if so, redirecting the iSCSI client to the first storage node, and controlling the first storage node to run an iSCSI server corresponding to the iSCSI client so as to provide iSCSI service for the iSCSI client by the iSCSI server.
9. The apparatus of claim 8, wherein the processor is further configured to:
if the distributed storage system does not have the first storage node with the same IP address as the IP address of the iSCSI client, selecting a second storage node from the distributed storage system based on the performance parameters of each storage node, redirecting the iSCSI client to the second storage node, and controlling the second storage node to operate an iSCSI server corresponding to the iSCSI client so as to provide iSCSI service for the iSCSI client by the iSCSI server.
10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1-7.
CN202110184185.6A 2021-02-08 2021-02-08 Service providing method, device and storage medium Active CN112995311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110184185.6A CN112995311B (en) 2021-02-08 2021-02-08 Service providing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110184185.6A CN112995311B (en) 2021-02-08 2021-02-08 Service providing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN112995311A CN112995311A (en) 2021-06-18
CN112995311B (en) 2023-05-30

Family

ID=76393354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110184185.6A Active CN112995311B (en) 2021-02-08 2021-02-08 Service providing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN112995311B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115577197B (en) * 2022-12-07 2023-10-27 杭州城市大数据运营有限公司 Component discovery method, system and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10838620B2 (en) * 2016-05-26 2020-11-17 Nutanix, Inc. Efficient scaling of distributed storage systems
CN109474700B (en) * 2018-12-18 2021-09-24 创新科技术有限公司 Access method of iSCSI client, storage medium, client and storage node
CN112261079B (en) * 2020-09-11 2022-05-10 苏州浪潮智能科技有限公司 Distributed block storage service link management method and system based on iSCSI

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8244864B1 (en) * 2001-03-20 2012-08-14 Microsoft Corporation Transparent migration of TCP based connections within a network load balancing system
CN108600308A (en) * 2018-03-20 2018-09-28 新华三技术有限公司 Data uploading method, device, storage medium and server
CN111404978A (en) * 2019-09-06 2020-07-10 杭州海康威视系统技术有限公司 Data storage method and cloud storage system

Also Published As

Publication number Publication date
CN112995311A (en) 2021-06-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant