WO2017133483A1 - Storage system - Google Patents

Storage system

Info

Publication number
WO2017133483A1
Authority
WO
WIPO (PCT)
Prior art keywords: storage, node, nodes, network, server
Application number
PCT/CN2017/071830
Other languages
English (en)
French (fr)
Inventor
王东临
金友兵
齐宇
Original Assignee
北京书生国际信息技术有限公司
书生云公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京书生国际信息技术有限公司, 书生云公司 filed Critical 北京书生国际信息技术有限公司
Priority to EP17746803.0A priority Critical patent/EP3413538A4/en
Publication of WO2017133483A1 publication Critical patent/WO2017133483A1/zh
Priority to US16/054,536 priority patent/US20180341419A1/en
Priority to US16/121,080 priority patent/US10782989B2/en
Priority to US16/139,712 priority patent/US10782898B2/en
Priority to US16/140,951 priority patent/US20190028542A1/en
Priority to US16/378,076 priority patent/US20190235777A1/en

Classifications

    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers (G PHYSICS; G06F ELECTRIC DIGITAL DATA PROCESSING); G06F3/0601 Interfaces specially adapted for storage systems:
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0607 Improving or facilitating administration by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G06F3/0614 Improving the reliability of storage systems
    • G06F3/0635 Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
    • G06F3/0644 Management of space entities, e.g. partitions, extents, pools (organizing or formatting or addressing of data)
    • G06F3/0662 Virtualisation aspects
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F3/0671 In-line storage system
    • G06F3/0689 Disk arrays, e.g. RAID, JBOD (plurality of storage devices)
    • G06F9/45533 Hypervisors; Virtual machine monitors (emulation, interpretation, software simulation, e.g. virtualisation of execution engines)
    • G06F9/45558 Hypervisor-specific management and integration aspects; G06F2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F9/5077 Logical partitioning of resources; Management or configuration of virtualized resources (allocation of resources, e.g. of the CPU)
    • H04L65/40 Support for services or applications (network arrangements, protocols or services for supporting real-time applications in data packet communication)
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data, e.g. network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Definitions

  • the present invention relates to the technical field of data storage systems, and more particularly to a storage system.
  • FIG. 1 shows a schematic diagram of the architecture of a prior art storage system.
  • each storage node S is connected to the TCP/IP network through an access switch (with core network switches forming the core of the network).
  • Each storage node is a separate physical server, and each server has a number of storage media of its own.
  • The storage nodes are connected by a storage network, such as an IP network, to form a storage pool.
  • each compute node is likewise connected to the TCP/IP network through an access switch (via the core network switches) and accesses the whole storage pool over TCP/IP. Access in this way is less efficient.
  • an object of embodiments of the present invention is to provide a storage system that does not require physical data migration when rebalancing dynamically.
  • a storage system includes:
  • At least two storage nodes connected to the storage network
  • each storage device including at least one storage medium
  • the storage network is configured such that each storage node can access all storage media without resorting to other storage nodes.
  • the storage system provided by the embodiment of the present invention offers a globally accessible storage pool that supports multi-point control, has excellent scalability and high availability, and can reach a very large capacity by continually adding storage media.
  • It also improves the system's resilience against single-point failures of storage nodes.
  • FIG. 1 is a schematic diagram showing the architecture of a prior art storage system
  • FIG. 2 is a block diagram showing the architecture of a storage system in accordance with an embodiment of the present invention;
  • FIG. 3 shows a block diagram of a specific storage system 30 constructed in accordance with one embodiment of the present invention.
  • the storage system includes a storage network, a storage node connected to the storage network, and a storage device that is also connected to the storage network.
  • Each storage device includes at least one storage medium.
  • for example, a storage device commonly used by the inventors can hold 45 storage media.
  • the storage network is configured such that each storage node can access all storage media without resorting to other storage nodes.
  • each storage node can access all storage media without resorting to other storage nodes, so that all the storage media are in effect shared by all the storage nodes, realizing the effect of a global storage pool.
  • In the prior art, the storage node sits on the storage-medium side; strictly speaking, the storage media are built-in disks of the physical machine where the storage node runs.
  • In the embodiment of the present invention, by contrast, the physical machine hosting the storage node is independent of the storage device, and the storage device serves mainly as a channel connecting the storage media to the storage network.
  • the storage node side further includes a computing node, and the computing node and the storage node are disposed in a physical server, and the physical server is connected to the storage device through the storage network.
  • the converged storage system constructed according to the embodiment of the present invention, in which the compute node and the storage node are located in the same physical machine, can reduce the number of physical devices required, thereby reducing the cost.
  • the compute node can also locally access the storage resources it wishes to access.
  • because the compute node and the storage node are aggregated on the same physical server, data exchange between the two can be as simple as shared memory, and the performance is particularly excellent.
  • the I/O data path between the compute node and the storage medium consists of: (1) storage medium to storage node; and (2) storage node to the compute node aggregated in the same physical server (a CPU-bus path).
  • By contrast, in the prior-art system of FIG. 1, the I/O data path between the compute node and the storage medium consists of: (1) storage medium to storage node; (2) storage node to storage-network access switch; (3) storage-network access switch to core switch; (4) core switch to compute-network access switch; and (5) compute-network access switch to compute node.
  • the total data path of the storage system of the embodiment of the present invention is thus only about as long as item (1) of the conventional storage system. That is, by compressing the I/O data path length to the extreme, the storage system provided by the embodiment of the present invention can greatly improve the I/O-channel performance of the storage system, and its actual running behavior is very close to the I/O channel of a local hard disk.
  • the storage node may be a virtual machine of a physical server, a container, or a module directly running on a physical operating system of the server, and the computing node may also be a virtual machine of the same physical machine server, A container or a module running directly on the physical operating system of the server.
  • each storage node may correspond to one or more compute nodes.
  • one physical server may be divided into multiple virtual machines, one of which serves as a storage node while the other virtual machines serve as compute nodes; alternatively, a module on the physical OS may serve as the storage node, for better performance.
  • the virtualization technology forming the virtual machine may be KVM, Xen, VMware or Hyper-V virtualization technology,
  • and the container technology forming the container may be Docker, Rocket, Odin, Chef, LXC, Vagrant, Ansible, Zone, Jail or Hyper-V container technology.
  • each storage node is responsible at any given time for managing only a fixed set of storage media, and no storage medium is written by multiple storage nodes simultaneously, which avoids data conflicts.
  • This allows every storage node to access the storage media it manages without resorting to other storage nodes, and guarantees the integrity of the data stored in the storage system.
  • all the storage media in the system may be divided according to storage logic.
  • the storage pool of the entire system may be divided into a logical storage hierarchy structure such as a storage area, a storage group, and a storage block.
  • the storage block is the smallest storage unit.
  • the storage pool may be divided into at least two storage areas.
  • each storage area may be divided into at least one storage group. In a preferred embodiment, each storage area is divided into at least two storage groups.
  • the storage area and the storage group can be merged such that one level can be omitted in the storage hierarchy.
  • each storage area may be composed of at least one storage block, wherein the storage block may be a complete storage medium or a part of a storage medium.
  • to build redundant storage within a storage area, each storage area (or storage group) may be composed of at least two storage blocks, so that when any one of the storage blocks fails, the complete stored data can be reconstructed from the remaining storage blocks in the group.
  • the redundant storage mode can be a multi-copy mode, a redundant array of independent disks (RAID) mode, or an erasure-code mode.
  • the redundant storage mode can be established by the ZFS file system.
  • to withstand hardware failures of storage devices or storage media, the plurality of storage blocks included in each storage area (or storage group) are not located in the same storage medium, or even in the same storage device. In an embodiment of the invention, no two storage blocks included in the same storage area (or storage group) are located in the same storage medium/storage device. In another embodiment of the present invention, the number of storage blocks of the same storage area (or storage group) located in the same storage medium/storage device is preferably less than or equal to the redundancy of the redundant storage.
  • for example, with RAID 5 the redundancy of the redundant storage is 1, so at most one storage block of the same storage group may reside on the same storage device; for RAID 6, the redundancy of the redundant storage is 2, so at most two storage blocks of the same storage group may reside on the same storage device.
  • in one embodiment, each storage node can read and write only the storage areas it manages. Since read operations on the same storage block by multiple storage nodes do not conflict with each other, while simultaneous writes to one storage block by multiple storage nodes easily produce conflicts, in another embodiment each storage node may write only the storage areas it manages but may read both the storage areas it manages and those managed by other storage nodes; that is, write operations are local, while read operations can be global.
  • the storage system may further include a storage control node coupled to the storage network for determining a storage area managed by each storage node.
  • each storage node may include a storage allocation module for determining the storage areas managed by that storage node; this can be implemented through communication and a coordination algorithm among the storage allocation modules included in the storage nodes, for example based on the principle of load balancing between the various storage nodes.
  • when a storage node is detected to have failed, some or all of the other storage nodes may be configured so that they take over the storage areas previously managed by the failed storage node.
  • one of the storage nodes may take over the storage areas managed by the failed storage node, or at least two other storage nodes may take over, each taking over a portion of the storage areas managed by the failed node; for example, at least two other storage nodes respectively take over different storage groups in the storage area.
  • the storage medium may include, but is not limited to, a hard disk, flash memory, SRAM, DRAM, NVMe, or other forms.
  • the access interface of the storage medium may include, but is not limited to, a SAS interface, SATA interface, PCI/e interface, DIMM interface, NVMe interface, SCSI interface, or AHCI interface.
  • the storage network may include at least one storage switching device, and the storage node accesses the storage medium through data exchange between the storage switching devices included therein.
  • the storage node and the storage medium are respectively connected to the storage switching device through the storage channel.
  • the storage switching device may be a SAS switch or a PCI/e switch.
  • the storage channel may be a SAS (Serial Attached SCSI) channel or a PCI/e channel.
  • compared with traditional IP-based storage solutions, the SAS-based switching solution has the advantages of high performance, large bandwidth, and a large number of disks per device.
  • Used in conjunction with the SAS interface on a host bus adapter (HBA) or on the server motherboard, the storage provided by a SAS fabric can easily be accessed simultaneously by multiple connected servers.
  • the SAS switch is connected to the storage device through a SAS cable, and the storage device and the storage media are also connected by SAS interfaces.
  • the storage device internally routes the SAS channel to each storage medium (a SAS switch chip may be set inside the storage device). The bandwidth of a SAS network can reach 24 Gb or 48 Gb, tens of times that of Gigabit Ethernet and several times that of expensive 10-Gigabit Ethernet. At the link layer, SAS improves on an IP network by about an order of magnitude; at the transport layer, the TCP three-way handshake and four-way close carry a high overhead, and TCP's delayed-acknowledgement mechanism and slow start can sometimes cause delays on the order of 100 milliseconds.
  • SAS networks therefore offer significant advantages in bandwidth and latency over Ethernet-based TCP/IP. Those skilled in the art will appreciate that the performance of the PCI/e channel can likewise meet the needs of the system.
  • the storage network may include at least two storage switching devices, and each storage node may connect to any storage device, and thus to the storage media, through any one of the storage switching devices.
  • When any storage switching device, or a storage channel connected to it, fails, the storage nodes read and write the data on the storage devices through the other storage switching devices.
  • the storage devices in the storage system 30 are constructed as a plurality of JBODs 307-310, each connected through SAS data cables to two SAS switches 305 and 306, which constitute the switching core of the storage network included in the storage system.
  • At the front end are at least two servers 301 and 302, each connected to the two SAS switches 305 and 306 via an HBA device (not shown) or a SAS interface on the motherboard.
  • Each server runs a storage node that manages some or all of the disks in all the JBODs using information obtained from the SAS links.
  • the storage areas, storage groups, and storage blocks described above in this application may be used to divide the JBOD disks into different storage groups.
  • Each storage node manages one or more such storage groups.
  • when redundant storage is used within each storage group, the metadata of the redundant storage can be placed on the disks themselves, so that the redundant storage can be identified directly from disk by other storage nodes.
  • the storage node can install a monitoring and management module that is responsible for monitoring the status of local storage and other servers.
  • when a whole JBOD is abnormal, or a disk on a JBOD is abnormal, data reliability is ensured by the redundant storage.
  • when a server fails, the management module in the storage node of another, pre-designated server will, based on the data on the disks, locally identify and take over the disks previously managed by the storage node of the failed server.
  • The storage services originally provided by the storage node of the failed server are then continued by the storage node on the new server. A new, highly available global storage pool structure is thus realized.
  • the exemplary storage system 30 so constructed provides a multi-point-controllable, globally accessible storage pool.
  • the hardware uses multiple servers to provide external services, and uses JBOD to store disks.
  • Multiple JBODs are connected to two SAS switches, and the two switches are respectively connected to the server's HBA cards, thereby ensuring that all disks on the JBOD can be accessed by all servers.
  • the SAS redundant link also ensures high availability on the link.
  • each server uses redundant storage technology to select disks from each JBOD to form redundant storage, so that the loss of a single JBOD does not make data unavailable.
  • when a server fails, the module that monitors the overall state will schedule another server to access, through the SAS channel, the disks managed by the storage node of the failed server and quickly take over the disks that node was responsible for, achieving highly available global storage.
  • although FIG. 3 illustrates JBODs as the disk enclosures by way of example, it should be understood that the embodiment of the present invention shown in FIG. 3 also supports storage devices other than JBOD.
  • the above takes one (whole) storage medium as one storage block by way of example; the same applies where a part of one storage medium is used as one storage block.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to a storage system comprising: a storage network; at least two storage nodes connected to the storage network; and at least one storage device connected to the storage network, each storage device including at least one storage medium, wherein the storage network is configured so that every storage node can access all storage media without resorting to any other storage node. Embodiments of the present invention provide a storage system that can be rebalanced dynamically without physical data migration.

Description

Storage system

Technical field

The present invention relates to the technical field of data storage systems, and more particularly to a storage system.

Background

As computer applications grow in scale, the demand for storage space keeps increasing. Accordingly, pooling the storage resources of multiple devices (for example, storage media such as disk groups) into a single storage pool that provides storage services has become the mainstream approach. A traditional storage system is usually composed of multiple distributed storage nodes connected by a TCP/IP network. FIG. 1 is a schematic diagram of the architecture of a prior-art storage system. As shown in FIG. 1, in a traditional storage system each storage node S is connected to the TCP/IP network through an access switch (with core network switches forming the core of the network). Each storage node is a separate physical server, and each server has a number of storage media of its own. The storage nodes are connected by a storage network, such as an IP network, to form a storage pool.

On the other side, the compute nodes are likewise connected to the TCP/IP network through access switches (via the core network switches) and access the whole storage pool over TCP/IP. Access in this manner is relatively inefficient.

More importantly, in existing storage systems, whenever dynamic rebalancing is involved, the physical data on the storage nodes must be migrated to achieve balance.
Summary of the invention

In view of this, an object of embodiments of the present invention is to provide a storage system that does not require physical data migration when rebalancing dynamically.

According to an embodiment of the present invention, a storage system is provided. The storage system comprises:

a storage network;

at least two storage nodes connected to the storage network; and

at least one storage device connected to the storage network, each storage device including at least one storage medium;

wherein the storage network is configured so that every storage node can access all storage media without resorting to any other storage node.

The storage system provided by embodiments of the present invention offers a globally accessible storage pool with multi-point control, excellent scalability and high availability; a very large storage capacity can be achieved by continually adding storage media, and the system's resilience against single-point failures of storage nodes is improved.
Brief description of the drawings

FIG. 1 is a schematic diagram of the architecture of a prior-art storage system;

FIG. 2 is a schematic diagram of the architecture of a storage system according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of the architecture of a specific storage system 30 constructed according to one embodiment of the present invention.
Detailed description

The present disclosure is described more fully below with reference to the accompanying drawings, in which embodiments of the disclosure are shown. These embodiments may, however, be implemented in many different forms and should not be construed as limited to those set forth herein; rather, these examples are provided so that the disclosure will be thorough and complete, and will fully convey its scope to those skilled in the art.

Various embodiments of the present invention are described in detail below, by way of example, with reference to the accompanying drawings.
FIG. 2 is a schematic diagram of the architecture of a storage system according to an embodiment of the present invention. As shown in FIG. 2, the storage system includes a storage network, storage nodes connected to the storage network, and storage devices likewise connected to the storage network. Each storage device includes at least one storage medium; for example, a storage device commonly used by the inventors can hold 45 storage media. The storage network is configured so that every storage node can access all storage media without resorting to any other storage node.

With the storage system provided by this embodiment of the invention, every storage node can access all storage media without resorting to any other storage node, so all the storage media are in effect shared by all the storage nodes, achieving the effect of a global storage pool.

At the same time, as can be seen from the above description, in the prior art the storage node sits on the storage-medium side, or strictly speaking, the storage media are built-in disks of the physical machine where the storage node runs; in the embodiment of the present invention, the physical machine hosting the storage node is independent of the storage device, and the storage device serves mainly as a channel connecting the storage media to the storage network.

In this way, when dynamic rebalancing is needed, no physical data has to be migrated between storage media; it suffices to rebalance, by configuration, which storage media each storage node manages.
In another embodiment of the present invention, the storage-node side further includes compute nodes, and a compute node and a storage node are placed in one physical server, which is connected to the storage devices through the storage network. The converged storage system constructed according to this embodiment, with compute node and storage node in the same physical machine, reduces the overall number of physical devices required and thus reduces cost. At the same time, a compute node can locally access the storage resources it wishes to access. Moreover, because the compute node and the storage node are aggregated on the same physical server, data exchange between the two can be as simple as shared memory, giving particularly excellent performance.

In the storage system provided by this embodiment of the invention, the I/O data path between a compute node and a storage medium consists of: (1) storage medium to storage node; and (2) storage node to the compute node aggregated in the same physical server (a CPU-bus path). By contrast, in the prior-art storage system shown in FIG. 1, the I/O data path between a compute node and a storage medium consists of: (1) storage medium to storage node; (2) storage node to storage-network access switch; (3) storage-network access switch to core switch; (4) core switch to compute-network access switch; and (5) compute-network access switch to compute node. Clearly, the total data path of the storage system of this embodiment is only about as long as item (1) of the traditional storage system. In other words, by compressing the I/O data path length to the extreme, the storage system provided by this embodiment can greatly improve the I/O-channel performance of the storage system, and its actual running behavior is very close to the I/O channel of a local hard disk.
In one embodiment of the present invention, the storage node may be a virtual machine on a physical server, a container, or a module running directly on the server's physical operating system; the compute node may likewise be a virtual machine on the same physical server, a container, or a module running directly on that server's physical operating system. In one embodiment, each storage node may correspond to one or more compute nodes.

Specifically, one physical server may be divided into multiple virtual machines, one of which serves as the storage node while the other virtual machines serve as compute nodes; alternatively, a module on the physical OS may serve as the storage node, for better performance.

In one embodiment of the present invention, the virtualization technology forming the virtual machines may be KVM, Xen, VMware or Hyper-V virtualization technology, and the container technology forming the containers may be Docker, Rocket, Odin, Chef, LXC, Vagrant, Ansible, Zone, Jail or Hyper-V container technology.
In one embodiment of the present invention, each storage node is responsible at any given time for managing only a fixed set of storage media, and no storage medium is written by multiple storage nodes at the same time, which avoids data conflicts; this allows every storage node to access the storage media it manages without resorting to any other storage node, and guarantees the integrity of the data stored in the storage system.

In one embodiment of the present invention, all the storage media in the system may be divided according to storage logic. Specifically, the storage pool of the whole system may be divided into a logical storage hierarchy of storage areas, storage groups and storage blocks, where the storage block is the smallest storage unit. In one embodiment of the invention, the storage pool may be divided into at least two storage areas.

In one embodiment of the present invention, each storage area may be divided into at least one storage group; in a preferred embodiment, each storage area is divided into at least two storage groups.

In some embodiments, storage areas and storage groups can be merged, so that one level of the storage hierarchy may be omitted.
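The logical hierarchy just described (storage pool, storage areas, storage groups, storage blocks) can be pictured with a minimal data model. The following Python sketch is purely illustrative; the class and field names are assumptions of this example, not terms defined by the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StorageBlock:
    medium_id: str                 # the storage medium holding this block
    offset: int = 0                # a block may be part of a medium,
    length: Optional[int] = None   # or, with length=None, a whole medium

@dataclass
class StorageGroup:
    """Unit within which redundant storage is built."""
    blocks: List[StorageBlock] = field(default_factory=list)
    scheme: str = "raid5"          # e.g. "raid5", "raid6", "3-copy"

@dataclass
class StorageArea:
    """Unit of management: owned by exactly one storage node."""
    groups: List[StorageGroup] = field(default_factory=list)
    owner_node: Optional[str] = None

@dataclass
class StoragePool:
    """All storage media of the system, divided into areas."""
    areas: List[StorageArea] = field(default_factory=list)
```

Merging areas and groups, as allowed above, simply collapses StorageArea and StorageGroup into a single level.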
In one embodiment of the present invention, each storage area (or storage group) may consist of at least one storage block, where a storage block may be a complete storage medium or a part of one. To build redundant storage inside a storage area, each storage area (or storage group) may consist of at least two storage blocks, so that when any one storage block fails, the complete stored data can be reconstructed from the remaining storage blocks in the group. The redundant storage mode may be a multi-copy mode, a redundant array of independent disks (RAID) mode, or an erasure-code mode. In one embodiment of the invention, the redundant storage may be established through the ZFS file system. In one embodiment of the invention, to withstand hardware failures of storage devices/storage media, the storage blocks contained in one storage area (or storage group) are not located on the same storage medium, or even in the same storage device. In one embodiment of the invention, no two storage blocks of the same storage area (or storage group) are located on the same storage medium/storage device. In another embodiment of the present invention, the number of storage blocks of the same storage area (or storage group) located on the same storage medium/storage device is preferably less than or equal to the redundancy of the redundant storage. For example, when RAID 5 is used for redundancy, the redundancy of the redundant storage is 1, so at most one storage block of the same storage group may reside on the same storage device; for RAID 6, the redundancy of the redundant storage is 2, so at most two storage blocks of the same storage group may reside on the same storage device.
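The placement rule at the end of the preceding paragraph (no more blocks of one group on a single device than the scheme's redundancy) can be checked mechanically. A minimal sketch, assuming blocks are given as (block, device) pairs, a representation chosen here only for illustration:

```python
from collections import Counter

def placement_ok(group_blocks, redundancy):
    """True if no storage device holds more blocks of this storage group
    than the redundancy of the scheme allows (1 for RAID 5, 2 for RAID 6)."""
    per_device = Counter(device for _block, device in group_blocks)
    return all(n <= redundancy for n in per_device.values())

# A RAID 5 group (redundancy 1) spread over three devices passes; the same
# group with two blocks on one device fails:
assert placement_ok([("b0", "dev1"), ("b1", "dev2"), ("b2", "dev3")], 1)
assert not placement_ok([("b0", "dev1"), ("b1", "dev1"), ("b2", "dev2")], 1)
```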
In one embodiment of the present invention, each storage node can read and write only the storage areas it manages. Since read operations on the same storage block by multiple storage nodes do not conflict with one another, while simultaneous writes to one storage block by multiple storage nodes easily produce conflicts, in another embodiment each storage node may write only the storage areas it manages but may read both the storage areas it manages and those managed by other storage nodes; that is, write operations are local, while read operations can be global.
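The write-local/read-global discipline just described amounts to a small access-control predicate. A sketch under the assumption that area ownership is kept in a simple map (all names here are invented for the example):

```python
class AccessPolicy:
    """Writes only to areas a node manages; reads may be global."""

    def __init__(self, owner_by_area):
        self.owner_by_area = owner_by_area  # e.g. {"area0": "node0"}

    def may_write(self, node, area):
        # Only the managing node writes, so no two nodes ever write
        # the same storage block at the same time.
        return self.owner_by_area.get(area) == node

    def may_read(self, node, area):
        # Reads do not conflict, so any node may read any known area.
        return area in self.owner_by_area

policy = AccessPolicy({"area0": "node0", "area1": "node1"})
assert policy.may_write("node0", "area0")
assert not policy.may_write("node1", "area0")
assert policy.may_read("node1", "area0")
```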
In one embodiment, the storage system may further include a storage control node, connected to the storage network, for determining the storage areas managed by each storage node. In another embodiment, each storage node may include a storage allocation module for determining the storage areas managed by that storage node; this can be implemented through communication and a coordination algorithm among the storage allocation modules included in the storage nodes, and the algorithm may, for example, follow the principle of load balancing across the storage nodes.
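The patent leaves the coordination algorithm open, asking only that it can, for example, balance load across the storage nodes. One minimal reading is a round-robin assignment of areas to nodes; the sketch below is an assumption of this example, not the patent's algorithm:

```python
def assign_areas(areas, nodes):
    """Spread storage areas evenly over storage nodes (round robin).
    Rebalancing recomputes this map; no data moves, because every node
    can reach every storage medium through the storage network."""
    return {area: nodes[i % len(nodes)] for i, area in enumerate(areas)}

# assign_areas(["a0", "a1", "a2"], ["n0", "n1"])
# -> {"a0": "n0", "a1": "n1", "a2": "n0"}
```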
In one embodiment, when a storage node is detected to have failed, some or all of the other storage nodes can be configured so that they take over the storage areas previously managed by the failed storage node. For example, one of the storage nodes may take over the storage areas managed by the failed node, or at least two other storage nodes may take over, each taking over part of the storage areas managed by the failed node; for instance, the at least two other storage nodes may respectively take over different storage groups within the storage area.
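A takeover plan of the kind just described, with the failed node's areas redistributed across one or more survivors, might look like the following. Least-loaded-first is an assumed tie-breaking rule; the patent does not prescribe one:

```python
def plan_takeover(failed, areas_by_node, survivors):
    """Map each storage area of the failed node to a surviving node,
    always choosing the currently least-loaded survivor."""
    load = {n: len(areas_by_node.get(n, ())) for n in survivors}
    plan = {}
    for area in areas_by_node.get(failed, ()):
        target = min(load, key=load.get)
        plan[area] = target
        load[target] += 1
    return plan

# plan_takeover("n2", {"n0": ["a0"], "n1": ["a1"], "n2": ["a2", "a3"]},
#               ["n0", "n1"])  ->  {"a2": "n0", "a3": "n1"}
```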
In one embodiment, the storage medium may include, but is not limited to, a hard disk, flash memory, SRAM, DRAM, NVMe or other forms, and the access interface of the storage medium may include, but is not limited to, a SAS interface, SATA interface, PCI/e interface, DIMM interface, NVMe interface, SCSI interface or AHCI interface.
In one embodiment of the present invention, the storage network may include at least one storage switching device, and the storage nodes access the storage media through data exchange across the storage switching devices it includes. Specifically, the storage nodes and the storage media are each connected to the storage switching devices through storage channels.

In one embodiment of the present invention, the storage switching device may be a SAS switch or a PCI/e switch; correspondingly, the storage channel may be a SAS (Serial Attached SCSI) channel or a PCI/e channel.

Taking the SAS channel as an example, compared with traditional IP-based storage solutions, a SAS-switched solution offers high performance, large bandwidth and a large number of disks per device. Used together with the SAS interface on a host bus adapter (HBA) or on the server motherboard, the storage provided by a SAS fabric can easily be accessed simultaneously by multiple connected servers.

Specifically, the SAS switch is connected to the storage device by a SAS cable, and the storage device and the storage media are also connected through SAS interfaces; for example, the storage device internally routes the SAS channel to each storage medium (a SAS switch chip may be placed inside the storage device). The bandwidth of a SAS network can reach 24 Gb or 48 Gb, tens of times that of Gigabit Ethernet and several times that of expensive 10-Gigabit Ethernet. At the link layer, SAS improves on an IP network by roughly an order of magnitude; at the transport layer, the TCP three-way handshake and four-way close carry a high overhead, and TCP's delayed-acknowledgement mechanism and slow start can sometimes cause delays on the order of 100 milliseconds, whereas the latency of the SAS protocol is only a small fraction of that of TCP, a still greater improvement. In short, a SAS network has enormous advantages over Ethernet-based TCP/IP in bandwidth and latency. Those skilled in the art will appreciate that the performance of a PCI/e channel can likewise meet the needs of the system.

In one embodiment of the present invention, the storage network may include at least two storage switching devices, and each storage node can connect to any storage device, and thus to the storage media, through any one of the storage switching devices. When any storage switching device, or a storage channel connected to one, fails, the storage nodes read and write the data on the storage devices through the other storage switching devices.
Referring to FIG. 3, it shows a specific storage system 30 constructed according to one embodiment of the present invention. The storage devices in storage system 30 are built as multiple JBODs 307-310, each connected by SAS data cables to two SAS switches 305 and 306, which constitute the switching core of the storage network included in the storage system. At the front end are at least two servers 301 and 302, each connected to the two SAS switches 305 and 306 through an HBA device (not shown) or a SAS interface on the motherboard. A basic network connection exists between the servers for monitoring and communication. Each server runs a storage node that uses information obtained from the SAS links to manage some or all of the disks in all the JBODs. Specifically, the storage areas, storage groups and storage blocks described above in this application may be used to divide the JBOD disks into different storage groups, with each storage node managing one or more such storage groups. When redundant storage is used within each storage group, the metadata of the redundant storage can be kept on the disks themselves, so that the redundant storage can be identified directly from disk by other storage nodes.
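Keeping the redundant-storage metadata on the member disks themselves is what lets another storage node identify a group straight from disk. A minimal sketch of such a record; the field names and the JSON encoding are assumptions made for illustration, not the format used by the patent or by ZFS:

```python
import json
import os

def write_group_metadata(mount_point, group_id, member_disks, scheme):
    """Stamp a disk with the identity of the storage group it belongs to."""
    record = {"group_id": group_id, "members": member_disks, "scheme": scheme}
    with open(os.path.join(mount_point, ".group_meta"), "w") as f:
        json.dump(record, f)

def identify_group(mount_point):
    """A takeover node reads the stamp to reassemble the group locally."""
    with open(os.path.join(mount_point, ".group_meta")) as f:
        return json.load(f)
```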
In the exemplary storage system 30 shown, a storage node can run a monitoring and management module responsible for monitoring the state of local storage and of the other servers. When a whole JBOD is abnormal, or a disk within a JBOD is abnormal, data reliability is ensured by the redundant storage. When a server fails, the management module in the storage node of another, pre-designated server will, based on the data on the disks, locally identify and take over the disks previously managed by the storage node of the failed server. The storage services originally provided externally by the storage node of the failed server are then continued by the storage node on the new server. A new, highly available global storage pool structure is thus realized.

It can be seen that the exemplary storage system 30 so constructed provides a multi-point-controllable, globally accessible storage pool. On the hardware side, multiple servers provide the external services and JBODs house the disks. Each JBOD is connected to both SAS switches, and the two switches are in turn connected to the servers' HBA cards, ensuring that all disks in the JBODs can be accessed by all servers. The redundant SAS links also ensure high availability at the link level.

Locally on each server, redundant-storage technology is used to select disks from each JBOD to form redundant storage, so that the loss of a single JBOD does not make data unavailable. When a server fails, the module monitoring the overall state schedules another server to access, through the SAS channels, the disks managed by the storage node of the failed server and quickly take over the disks the failed node was responsible for, achieving highly available global storage.
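Selecting the disks of one redundant group from different JBODs is what keeps a single JBOD failure survivable. A sketch of that selection rule, assuming free disks are tracked per JBOD in a dict of lists (a data layout invented for this example):

```python
def build_group(free_disks_by_jbod, width):
    """Pick one free disk from each of `width` different JBODs, so that
    no single JBOD holds more than one member of the group."""
    candidates = [(j, d) for j, d in free_disks_by_jbod.items() if d]
    if len(candidates) < width:
        raise ValueError("not enough JBODs with free disks for this group")
    return [(jbod, disks.pop(0)) for jbod, disks in candidates[:width]]

# A 4-wide group, one disk from each of four JBODs:
# build_group({"jbod1": ["d1"], "jbod2": ["d5"],
#              "jbod3": ["d9"], "jbod4": ["d13"]}, 4)
```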
Although FIG. 3 has been described using JBODs to house the disks by way of example, it should be understood that the embodiment of the present invention shown in FIG. 3 also supports storage devices other than JBOD. In addition, the above takes one (whole) storage medium as one storage block by way of example; the same applies equally where a part of one storage medium serves as one storage block.

It should be understood that, in order not to obscure the embodiments of the present invention, the specification describes only certain key, though not necessarily essential, techniques and features, and may leave undescribed some features that those skilled in the art are able to realize.

The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement and the like made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (17)

  1. A storage system, characterized by comprising:
    a storage network;
    at least two storage nodes connected to the storage network; and
    at least one storage device connected to the storage network, each storage device including at least one storage medium,
    wherein the storage network is configured so that every storage node can access all storage media without resorting to any other storage node.
  2. The system according to claim 1, characterized in that all the storage media included in the storage system constitute a storage pool, the storage pool is divided into at least two storage areas, and each storage node is responsible for managing zero or more storage areas.
  3. The storage system according to claim 1, characterized by further comprising:
    a storage control node, connected to the storage network, for determining the storage areas managed by each of the at least two storage nodes; or
    the storage node further comprises:
    a storage allocation module for determining the storage areas managed by the storage node.
  4. The storage system according to claim 1, characterized in that each storage node corresponds to one or more compute nodes, and each storage node and its corresponding compute nodes are located in the same server.
  5. The storage system according to claim 4, characterized in that the storage node is a virtual machine of the server, a container, or a module running directly on the physical operating system of the server; and/or
    the compute node is a virtual machine of the server, a container, or a module running directly on the physical operating system of the server.
  6. The storage system according to claim 5, characterized in that the virtualization technology forming the virtual machine is KVM, Xen, VMware or Hyper-V virtualization technology; and/or
    the container technology forming the container is Docker, Rocket, Odin, Chef, LXC, Vagrant, Ansible, Zone, Jail or Hyper-V container technology.
  7. The storage system according to claim 3, characterized in that each storage node is set to read and write only the storage areas it manages; or
    each storage node is set to write only the storage areas it manages, but to read both the storage areas it manages and the storage areas managed by other storage nodes; or
    the management of the storage areas of any storage node can be taken over by one or more other storage nodes.
  8. The storage system according to any one of claims 1-7, characterized in that the storage network comprises at least one storage switching device, and all of the at least two storage nodes and all storage media of the at least one storage device are connected to the storage switching device through corresponding storage channels.
  9. The storage system according to claim 8, characterized in that the storage switching device is a SAS switch or a PCI/e switch, and the corresponding storage channel is a SAS channel or a PCI/e channel.
  10. The storage system according to any one of claims 1-7, characterized in that the storage network comprises at least two storage switching devices, and each of the at least two storage nodes can be connected to each of the storage media through any one of the storage switching devices.
  11. The storage system according to claim 10, characterized in that, when any storage switching device or a storage channel connected to a storage switching device fails, the storage nodes read and write the storage media through the other storage switching devices.
  12. The storage system according to any one of claims 1-7, characterized in that each of the at least two storage areas consists of at least two storage blocks, a storage block being a complete storage medium or a part of a storage medium.
  13. The storage system according to claim 12, characterized in that the at least two storage blocks composing each storage area are divided into one or more storage groups, and the storage blocks within each storage group hold data in a redundant storage mode.
  14. The storage system according to claim 13, characterized in that the redundant storage mode is RAID, erasure coding or a multi-copy mode.
  15. The storage system according to claim 13, characterized in that the number of storage blocks of one storage group located on the same storage device is less than or equal to the redundancy of the redundant storage.
  16. The storage system according to claim 13, characterized in that one storage group has at most one storage block in any one storage device.
  17. The storage system according to any one of claims 1-7, characterized in that the storage device is a JBOD; and/or the storage medium is a hard disk, flash memory, SRAM or DRAM; and/or the interface of the storage medium is a SAS interface, SATA interface, PCI/e interface, DIMM interface, NVMe interface, SCSI interface or AHCI interface.
PCT/CN2017/071830 2011-10-11 2017-01-20 Storage system WO2017133483A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP17746803.0A EP3413538A4 (en) 2016-02-03 2017-01-20 Storage system
US16/054,536 US20180341419A1 (en) 2016-02-03 2018-08-03 Storage System
US16/121,080 US10782989B2 (en) 2016-02-03 2018-09-04 Method and device for virtual machine to access storage device in cloud computing management platform
US16/139,712 US10782898B2 (en) 2016-02-03 2018-09-24 Data storage system, load rebalancing method thereof and access control method thereof
US16/140,951 US20190028542A1 (en) 2016-02-03 2018-09-25 Method and device for transmitting data
US16/378,076 US20190235777A1 (en) 2011-10-11 2019-04-08 Redundant storage system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610076422.6 2016-02-03
CN201610076422.6A CN105472047B (zh) 2016-02-03 2016-02-03 Storage system

Related Parent Applications (3)

Application Number Title Priority Date Filing Date
PCT/CN2017/077758 Continuation-In-Part WO2017162179A1 (zh) 2011-10-11 2017-03-22 Load rebalancing method and apparatus for storage system
US15/594,374 Continuation-In-Part US20170249093A1 (en) 2011-10-11 2017-05-12 Storage method and distributed storage system
US16/139,712 Continuation-In-Part US10782898B2 (en) 2011-10-11 2018-09-24 Data storage system, load rebalancing method thereof and access control method thereof

Related Child Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2017/077751 Continuation-In-Part WO2017162174A1 (zh) 2011-10-11 2017-03-22 Storage system
US16/054,536 Continuation-In-Part US20180341419A1 (en) 2011-10-11 2018-08-03 Storage System

Publications (1)

Publication Number Publication Date
WO2017133483A1 true WO2017133483A1 (zh) 2017-08-10

Family

ID=55609308

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/071830 WO2017133483A1 (zh) 2011-10-11 2017-01-20 存储系统

Country Status (4)

Country Link
US (1) US20180341419A1 (zh)
EP (1) EP3413538A4 (zh)
CN (1) CN105472047B (zh)
WO (1) WO2017133483A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105786414A * 2016-03-24 2016-07-20 天津书生云科技有限公司 Storage system, access method for storage system and access apparatus for storage system
CN105472047B * 2016-02-03 2019-05-14 天津书生云科技有限公司 Storage system
CN106020737A * 2016-06-16 2016-10-12 浪潮(北京)电子信息产业有限公司 High-density storage architecture system with globally shared disks
CN106708745A * 2016-12-05 2017-05-24 郑州云海信息技术有限公司 24-bay NVMe dynamic allocation structure and method
CN106708653B * 2016-12-29 2020-06-30 广州中国科学院软件应用技术研究所 Hybrid tax big-data security protection method based on erasure codes and multiple replicas
CN109726153B * 2017-10-27 2023-02-24 伊姆西Ip控股有限责任公司 Integrated apparatus for a storage device, corresponding storage device and method of manufacturing the same
CN110515536B * 2018-05-22 2020-10-27 杭州海康威视数字技术股份有限公司 Data storage system
CN110557354B * 2018-05-31 2020-10-13 杭州海康威视数字技术股份有限公司 Method and apparatus for implementing inter-node communication, and electronic device
CN111324311B * 2020-02-28 2021-09-14 苏州浪潮智能科技有限公司 LUN partitioning method and device
US11899585B2 (en) 2021-12-24 2024-02-13 Western Digital Technologies, Inc. In-kernel caching for distributed cache
US11934663B2 (en) 2022-01-10 2024-03-19 Western Digital Technologies, Inc. Computational acceleration for distributed cache
US11797379B2 (en) * 2022-02-04 2023-10-24 Western Digital Technologies, Inc. Error detection and data recovery for distributed cache

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6148349A (en) * 1998-02-06 2000-11-14 Ncr Corporation Dynamic and consistent naming of fabric attached storage by a file system on a compute node storing information mapping API system I/O calls for data objects with a globally unique identification
CN201699750U * 2010-05-10 2011-01-05 北京月新时代科技有限公司 Clustered storage device
CN201805454U * 2010-09-21 2011-04-20 北京同有飞骥科技股份有限公司 High-performance storage system with parallel cache synchronization links
CN103634350A * 2012-08-24 2014-03-12 阿里巴巴集团控股有限公司 Storage system and implementation method thereof
CN105472047A * 2016-02-03 2016-04-06 天津书生云科技有限公司 Storage system
CN105872031A * 2016-03-26 2016-08-17 天津书生云科技有限公司 Storage system
CN105897859A * 2016-03-25 2016-08-24 天津书生云科技有限公司 Storage system
CN205620984U * 2016-04-01 2016-10-05 南京紫光云信息科技有限公司 Tiered data storage device

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6542961B1 (en) * 1998-12-22 2003-04-01 Hitachi, Ltd. Disk storage system including a switch
US8161134B2 (en) * 2005-09-20 2012-04-17 Cisco Technology, Inc. Smart zoning to enforce interoperability matrix in a storage area network
US8332402B2 (en) * 2007-06-28 2012-12-11 Apple Inc. Location based media items
US9135044B2 (en) * 2010-10-26 2015-09-15 Avago Technologies General Ip (Singapore) Pte. Ltd. Virtual function boot in multi-root I/O virtualization environments to enable multiple servers to share virtual functions of a storage adapter through a MR-IOV switch
CN203982354U * 2014-06-19 2014-12-03 天津书生投资有限公司 Redundant storage system
US10140136B2 * 2013-11-07 2018-11-27 Datrium, Inc. Distributed virtual array data storage system and method
JP6354290B2 * 2014-04-24 2018-07-11 富士通株式会社 Information processing system, control method for information processing system, and control program for information processing system
CN104657316B * 2015-03-06 2018-01-19 北京百度网讯科技有限公司 Server
CN105045336A * 2015-06-25 2015-11-11 北京百度网讯科技有限公司 JBOD
US9823849B2 * 2015-06-26 2017-11-21 Intel Corporation Method and apparatus for dynamically allocating storage resources to compute nodes
CN104965677B * 2015-06-26 2018-04-13 北京百度网讯科技有限公司 Storage system
CN105068836A * 2015-08-06 2015-11-18 北京百度网讯科技有限公司 Remotely sharable boot system based on a SAS network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6148349A (en) * 1998-02-06 2000-11-14 Ncr Corporation Dynamic and consistent naming of fabric attached storage by a file system on a compute node storing information mapping API system I/O calls for data objects with a globally unique identification
CN201699750U * 2010-05-10 2011-01-05 北京月新时代科技有限公司 Clustered storage device
CN201805454U * 2010-09-21 2011-04-20 北京同有飞骥科技股份有限公司 High-performance storage system with parallel cache synchronization links
CN103634350A * 2012-08-24 2014-03-12 阿里巴巴集团控股有限公司 Storage system and implementation method thereof
CN105472047A * 2016-02-03 2016-04-06 天津书生云科技有限公司 Storage system
CN105897859A * 2016-03-25 2016-08-24 天津书生云科技有限公司 Storage system
CN105872031A * 2016-03-26 2016-08-17 天津书生云科技有限公司 Storage system
CN205620984U * 2016-04-01 2016-10-05 南京紫光云信息科技有限公司 Tiered data storage device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3413538A4 *

Also Published As

Publication number Publication date
EP3413538A4 (en) 2018-12-26
US20180341419A1 (en) 2018-11-29
CN105472047A (zh) 2016-04-06
CN105472047B (zh) 2019-05-14
EP3413538A1 (en) 2018-12-12

Similar Documents

Publication Publication Date Title
WO2017133483A1 (zh) Storage system
WO2017162179A1 (zh) Load rebalancing method and apparatus for storage system
WO2017162177A1 (zh) Redundant storage system, redundant storage method and redundant storage apparatus
WO2017162176A1 (zh) Storage system, access method for storage system and access apparatus for storage system
US8595434B2 (en) Smart scalable storage switch architecture
US8898385B2 (en) Methods and structure for load balancing of background tasks between storage controllers in a clustered storage environment
JP5523468B2 (ja) Active-active failover for directly attached storage systems
WO2017167106A1 (zh) Storage system
US8099532B2 (en) Intelligent dynamic multi-zone single expander connecting dual ported drives
US8788753B2 (en) Systems configured for improved storage system communication for N-way interconnectivity
JP5635621B2 (ja) Storage system and data transfer method of storage system
WO2017162178A1 (zh) Access control method and apparatus for storage system
US20150160878A1 (en) Non-disruptive configuration of a virtualization controller in a data storage system
US10782898B2 (en) Data storage system, load rebalancing method thereof and access control method thereof
JP2020533689A (ja) Thin provisioning using cloud-based ranks
Dufrasne et al. IBM DS8870 Architecture and Implementation (release 7.5)
US10782989B2 (en) Method and device for virtual machine to access storage device in cloud computing management platform
JP2021124796A (ja) Distributed computing system and resource allocation method
US11831762B1 (en) Pre-generating secure channel credentials
US11237916B2 (en) Efficient cloning of logical storage devices
US11467930B2 (en) Distributed failover of a back-end storage director
JP5856665B2 (ja) Storage system and data transfer method of storage system
Lebedev Creating a storage system for use in Smart IoT Lab
KR102353930B1 (ko) Disaggregated memory device
US20180095924A1 (en) Controlling data storage devices across multiple servers

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 17746803; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase. Ref country code: DE
WWE Wipo information: entry into national phase. Ref document number: 2017746803; Country of ref document: EP
ENP Entry into the national phase. Ref document number: 2017746803; Country of ref document: EP; Effective date: 20180903