WO2005015349A2 - Method for embedding a server into a storage subsystem - Google Patents

Method for embedding a server into a storage subsystem

Info

Publication number
WO2005015349A2
WO2005015349A2 · PCT/US2004/025383 · US2004025383W
Authority
WO
WIPO (PCT)
Prior art keywords
processors
processor
storage
server
medium
Prior art date
Application number
PCT/US2004/025383
Other languages
English (en)
Other versions
WO2005015349A3 (fr)
Inventor
Wayne Karpoff
David Southwell
Jason Gunthorpe
Original Assignee
Yottayotta, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yottayotta, Inc. filed Critical Yottayotta, Inc.
Priority to EP04780250A priority Critical patent/EP1668518A4/fr
Priority to CA002535097A priority patent/CA2535097A1/fr
Publication of WO2005015349A2 publication Critical patent/WO2005015349A2/fr
Publication of WO2005015349A3 publication Critical patent/WO2005015349A3/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0658Controller construction arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0605Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Definitions

  • In a Storage Area Network (SAN) architecture, the division of functionality is typically as described in Figure 1.
  • Software managing block-related functionality, such as the block virtualization layer 134 and block cache management 136, is implemented on a separate storage subsystem.
  • Higher-level functionality such as file system functionality 124 and other data management functionality 122 is implemented on a traditional computer system operating as a server 102.
  • Examples of data management functionality include databases, data life cycle management software, hierarchical storage management software, and specialized software such as PACS software used in the medical industry.
  • Various data management software systems may be used in combination with each other.
  • Communication between the server 102 and the storage subsystem 104 involves industry standard protocols such as Fibre Channel 142 driven by layers of device drivers 126 and 128 on the server side and target drivers 130 and 132 on the storage subsystem side.
  • This physical network 142 combined with layers of device drivers and target software adds considerable latency to I/O operations.
  • Positioning the file system within the server makes heterogeneous operation a challenge as building a single file system that supports multiple operating systems is non-trivial.
  • What is commonly referred to as Network Attached File Systems (NAS), as shown in Figure 2, moves most of the file system functionality 132 into the storage subsystem.
  • Industry standard protocols, such as NFS and CIFS, allow multiple operating systems to communicate with a single file system image.
  • multiple heterogeneous servers can share a single file system.
  • Communication between the server 202 and the storage subsystem 204 typically uses common networks such as Ethernet.
  • a server is embedded directly into a storage subsystem.
  • Data management functionality written for traditional servers may be implemented within a stand-alone storage subsystem, generally without software changes to the ported subsystems.
  • the hardware executing the storage subsystem and server subsystem is implemented in a way that provides reduced or negligible latency, compared to traditional architectures, when communicating between the storage subsystem and the server subsystem.
  • a plurality of clustered controllers are used.
  • traditional load-balancing software can be used to provide scalability of server functions.
  • One end-result is a storage system that provides a wide range of data management functionality, that supports a heterogeneous collection of clients, that can be quickly customized for specific applications, that easily leverages existing third party software, and that provides optimal performance.
  • a method for embedding functionality normally present in a server computer system into a storage system.
  • the method typically includes providing a storage system having a first processor and a second processor coupled to the first processor by an interconnect medium, wherein processes for controlling the storage system execute on the first processor, porting an operating system normally found on a server system to the second processor, and modifying the operating system to allow for low latency communications between the first and second processors.
  • a storage system typically includes a first processor configured to control storage functionality, a second processor, an interconnect medium communicably coupling the first and second processors, an operating system ported to the second processor, wherein said operating system is normally found on a server system, and wherein the operating system is modified to allow low latency communication between the first and second processors.
  • a method is provided for optimizing communication performance between server and storage system functionality in a storage system.
  • the method typically includes providing a storage system having a first processor and a second processor coupled to the first processor by an interconnect medium, porting an operating system normally found on a server system to the second processor, modifying the operating system to allow for low latency communications between the first and second processors, and porting one or more file system and data management applications normally resident on a server system to the second processor.
  • a method for implementing clustered embedded server functionality in a storage system controlled by a plurality of storage controllers.
  • the method typically includes providing a plurality of storage controllers, each storage controller having a first processor and a second processor communicably coupled to the first processor by a first interconnect medium, wherein for each storage controller, an operating system normally found on a server system is ported to the second processor, wherein said operating system allows low latency communications between the first and second processors.
  • the method also typically includes providing a second interconnect medium between each of said plurality of storage controllers.
  • the second interconnect medium may handle all inter-processor communications.
  • a third interconnect medium is provided in some aspects, wherein inter-processor communications between the first processors occur over one of the second and third mediums and inter-processor communications between the second processors occur over the other one of the second and third mediums.
  • a storage system that implements clustered embedded server functionality using a plurality of storage controllers.
  • the system typically includes a plurality of storage controllers, each storage controller having a first processor and a second processor communicably coupled to the first processor by a first interconnect medium, wherein for each storage controller, processes for controlling the storage system execute on the first processor, an operating system normally found on a server system is ported to the second processor, wherein said operating system allows low latency communications between the first and second processors, and one or more file system and data management applications normally resident on a server system are ported to the second processor.
  • the system also typically includes a second interconnect medium between each of said plurality of storage controllers, wherein said second interconnect medium handles inter-processor communications between the controller cards.
  • a third interconnect medium is provided in some aspects, wherein inter-processor communications between the first processors occur over one of the second and third mediums and inter-processor communications between the second processors occur over the other one of the second and third mediums.
  • FIG. 1 illustrates traditional storage area network (SAN) software towers.
  • FIG. 2 illustrates traditional network attached storage (NAS) software towers.
  • FIG. 3 illustrates a server tower embedded in a storage system, such as a storage controller node, according to one embodiment of the present invention.
  • FIG. 4 illustrates embedded server hardware in a storage system, such as a storage controller node, according to one embodiment of the present invention.
  • FIG. 5 illustrates an alternate I/O module supporting InfiniBand according to one embodiment of the present invention.
  • FIG. 6 illustrates an alternate I/O module supporting 8-Gigabit Ethernet ports according to one embodiment of the present invention.
  • FIG. 7 illustrates embedded server software modules according to one embodiment.
  • FIG. 8 illustrates a memory allocation scheme according to one embodiment.
  • the data management functionality is moved within the storage subsystem, in order to maximize the utilization of existing software, including third party software, and to minimize porting effort.
  • the data management functionality is implemented as two separate software towers running on two separate microprocessors. While any high-speed communication link between the processors could be used, a preferred implementation uses hardware having two (or more) microprocessors that house a storage software tower and a server software tower, with each microprocessor having direct access to a common memory.
  • An example of a server tower embedded in a storage system according to one embodiment is shown in Figures 3 and 4.
  • both processors 410 and 412 can access both banks of memory 420 and 422 via the HyperTransport™ bus 330.
  • the HyperTransport™ architecture is described in http://www.hypertransport.org/tech_specifications.html, which is hereby incorporated by reference. It will be apparent to one skilled in the art that alternate bus architectures may be used, such as Ethernet, a system bus, PCI, proprietary networks and busses, etc.
  • the processors 410 and 412, bus 430 and memory 420 and 422 in Figure 4 are implemented in a single storage controller node, e.g., in a single NetStorager™ controller card as shown in FIG. 3, according to one embodiment.
  • processor virtualization software can be used to emulate two separate processors executing on a single 'real' processor. It will also be apparent that the software tower can run as a task of the server tower.
  • connectors 430 are used to connect the I/O portions of the hardware. This allows alternate I/O modules to be used to provide alternate host protocol connections such as InfiniBand®, e.g. as shown in Figure 5, or to increase Ethernet connectivity, e.g. as shown in Figure 6. The preferred implementation allows resulting I/O ports to be assigned to either software tower as desired.
  • a second tower that normally runs on an external server 706 is placed on the second processor, e.g., processor 412 of FIG. 4.
  • a traditional operating system, such as Linux 756, is ported to the second processor and used to host the overlying software layers. This allows easy adoption, usually without modification, of existing software designed for a traditional server environment.
  • the common memory, e.g., memory 420 and 422 of FIG. 4, is partitioned into two regions, one for each software tower.
  • a small common region is reserved for managing data structures involved with inter-processor communications.
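As an illustration only, the partitioning just described might be laid out roughly as in the following C sketch. The region names, the sizes, and the ipc_region fields are assumptions made for this sketch; the patent itself specifies only one region per software tower plus a small common region holding the inter-processor data structures.

```c
#include <stdint.h>

/* Illustrative sizes; the real partitioning is a design parameter. */
#define STORAGE_TOWER_REGION_SIZE  (512u * 1024 * 1024)  /* storage software tower */
#define SERVER_TOWER_REGION_SIZE   (512u * 1024 * 1024)  /* server (Linux) tower   */
#define IPC_REGION_SIZE            (64u * 1024)          /* small common region    */

/* Hypothetical contents of the small common region: one work queue head per
 * direction plus doorbell words used to raise an interrupt or event trap. */
struct ipc_region {
    uint64_t queue_to_server;    /* first pending request, storage -> server */
    uint64_t queue_to_storage;   /* first pending request, server -> storage */
    uint32_t doorbell_server;
    uint32_t doorbell_storage;
};

/* Overall map of the common memory (e.g. banks 420 and 422). */
struct common_memory_map {
    uint8_t           storage_tower[STORAGE_TOWER_REGION_SIZE];
    uint8_t           server_tower[SERVER_TOWER_REGION_SIZE];
    struct ipc_region ipc;       /* shared inter-processor structures */
};
```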
  • a two-way mailbox algorithm is used in one aspect for communicating between the "shared memory device drivers" running on each of the two processors as follows. Each processor maintains a work queue for the other processor. Only the initiator links work onto the end of the queue. When one processor "A" needs to notify processor "B" of a communication, the following steps occur in one aspect:
  • 1. Processor A (the initiator) allocates a control block from its own memory space.
    2. Processor A sets the "completed" flag to false.
    3. Processor A fills in other fields as required by the request.
    4. Processor A links the request on the end of a linked list of requests destined for processor B.
    5. Processor A notifies processor B via an interrupt or event trap of the presence of work in the queue.
    6. Processor B starts at the top of the queue processing uncompleted requests. When a request is completed, Processor B sets the respective "completed" flag to True and provides an interrupt to Processor A.
    7. Processor A begins at the top of the queue, noting which transactions have been completed and unlinking them. The order of storing addresses is important to ensure that transactions can be unlinked without a semaphore.
  • an integer field representing priority is included and the list is scanned multiple times, looking for requests of decreasing priority.
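A minimal C sketch of the two-way mailbox described in the steps above. The structure layout, field names, and the notify_peer() doorbell are illustrative assumptions; the patent specifies only the per-direction queue owned by the initiator, the "completed" flag set by the target, interrupt/event-trap notification, and the optional priority field.

```c
#include <stdbool.h>
#include <stddef.h>

/* One control block per request, allocated from the initiator's own region. */
struct ipc_request {
    struct ipc_request *next;   /* singly linked list; initiator appends at the tail */
    volatile bool completed;    /* set to true by the target when the work is done   */
    int priority;               /* optional field scanned for decreasing priorities  */
    int opcode;                 /* what the target is being asked to do              */
    void *buffer;               /* pre-allocated by the target (see following points)*/
    size_t length;
};

struct ipc_queue {
    struct ipc_request *head;
    struct ipc_request *tail;
};

/* Stub doorbell: a real system would raise an interrupt or event trap on the peer. */
static void notify_peer(void) { }

/* Steps 1-5: the initiator (processor A) posts a request to processor B. */
void post_request(struct ipc_queue *q, struct ipc_request *req)
{
    req->completed = false;                 /* step 2 */
    req->next = NULL;
    if (q->tail)                            /* step 4: only the initiator ever   */
        q->tail->next = req;                /* links onto the end of the queue   */
    else
        q->head = req;
    q->tail = req;
    notify_peer();                          /* step 5 */
}

/* Step 7: the initiator walks its queue and unlinks completed requests.
 * This simplified version only reaps from the front; careful ordering of the
 * pointer stores is what lets the real system do this without a semaphore. */
void reap_completed(struct ipc_queue *q)
{
    while (q->head && q->head->completed) {
        struct ipc_request *done = q->head;
        q->head = done->next;
        if (q->head == NULL)
            q->tail = NULL;
        /* 'done' would be returned to the initiator's free pool here */
    }
}
```

Because only the initiator appends and only completed entries are unlinked, neither side ever writes the same pointer concurrently, which is the design property the patent relies on to avoid a semaphore.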
  • data buffers are pre-allocated by the request target and can be used by the source processor to receive actual data.
  • the processor initiating the request is responsible for copying the data blocks from its memory to the pre-allocated buffer on the receiving processor.
  • the actual data copying is deferred until deemed more convenient, thus minimizing latency associated with individual transactions.
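The buffer handling described in the preceding three points might look like the following sketch. Only the idea is taken from the patent (the target pre-allocates buffers in its own region; the initiator copies into them, possibly later); the pool size and function names are assumptions.

```c
#include <string.h>
#include <stddef.h>

#define IPC_BUF_SIZE  4096
#define IPC_BUF_COUNT 64

/* Buffers pre-allocated by the request target inside its own memory region. */
static char ipc_buffers[IPC_BUF_COUNT][IPC_BUF_SIZE];
static int  ipc_buf_used[IPC_BUF_COUNT];   /* 0 = free; managed only by the target */

/* The target hands a free buffer to the source when a request is set up. */
void *target_grab_buffer(void)
{
    for (int i = 0; i < IPC_BUF_COUNT; i++) {
        if (!ipc_buf_used[i]) {
            ipc_buf_used[i] = 1;
            return ipc_buffers[i];
        }
    }
    return NULL;   /* caller must retry or back off */
}

/* The initiating processor copies data blocks from its own memory into the
 * pre-allocated buffer on the receiving processor.  The copy can be issued
 * immediately or deferred to a more convenient time (e.g. a low-priority
 * task), keeping the latency of posting the request itself low. */
void source_fill_buffer(void *target_buf, const void *src, size_t len)
{
    memcpy(target_buf, src, len < IPC_BUF_SIZE ? len : IPC_BUF_SIZE);
}
```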
  • This is preferably done without modifications outside the device driver layer of the Linux operating system; e.g. during a write operation, by "nailing" the I/O page to be written and using the Linux page image for the I/O operations in the storage system.
  • the page can be replicated as a background function on the Storage System processor (the processor implementing storage system control functionality).
  • the Server Device Driver is notified that the page is now "clean" and can be "un-nailed."
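A runnable C sketch of the write-path flow described in the three points above. Every function in it (nail_page, storage_write, the background "clean" callback) is a hypothetical stand-in rather than a real Linux or storage-system API; an actual implementation would live in the server's block device driver and use the kernel's page-pinning machinery.

```c
#include <stdio.h>

struct page_ref { void *vaddr; unsigned long phys; };

/* Hypothetical stand-ins: a real driver would pin pages via the kernel's
 * page-reference mechanisms and reach the storage tower over the mailbox. */
static void nail_page(struct page_ref *p)   { printf("nail   page %lx\n", p->phys); }
static void unnail_page(struct page_ref *p) { printf("unnail page %lx\n", p->phys); }
static void storage_write(unsigned long phys, unsigned long long lba)
{
    printf("storage tower writes page %lx at LBA %llu using the Linux page image\n",
           phys, lba);
}

/* Invoked once the storage tower has replicated the page in the background. */
static void on_page_clean(struct page_ref *p)
{
    unnail_page(p);   /* page is now "clean" and may be un-nailed */
}

/* Server-side device driver write entry point (sketch). */
static void server_driver_write(struct page_ref *p, unsigned long long lba)
{
    nail_page(p);                 /* keep the Linux page image stable           */
    storage_write(p->phys, lba);  /* storage tower uses the same page directly  */
    on_page_clean(p);             /* in reality deferred to a background task   */
}

int main(void)
{
    struct page_ref pg = { .vaddr = NULL, .phys = 0x1000 };
    server_driver_write(&pg, 42);
    return 0;
}
```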
  • the virtual memory management modules of both the Server operating system and the storage system work cooperatively in using common I/O buffers, thus advantageously avoiding unnecessary copies and minimizing the redundancy of space usage.
  • multiple storage system controller nodes are clustered together.
  • the concept of clustering controllers was introduced in US Patent No. 6,148,414. Additional refinements of clustered controllers were introduced in US Application No. 2002/0188655.
  • One advantageous result of implementing aspects of the present invention in multiple storage system controllers is that multiple Storage System Towers can export a given virtual volume of storage to multiple embedded servers. The performance scales as additional Storage System towers are added.
  • Clustered file systems are now common, wherein multiple file system modules running on multiple servers can export to their host a common file system image.
  • An example of a clustered file system is the Red Hat Global File System (http://www.redhat.com/software/rha/gfs/). If the file system 726 (FIG. 7) chosen is a clustered file system, then software layers above the file system, regardless of which controller they reside on, will see a common file system image.
  • Data Management Applications 722 that support multiple invocations to a single file image will now scale as more storage controller modules are added. Examples of software that can benefit from this environment include web servers and parallel databases. I/O intensive applications, such as data mining applications, obtain significant performance benefits from running directly on the storage controller.
  • the file system allocates its buffer space using the common buffer allocation routines described above.
  • the buffers are the largest storage consumer of a file system. Allocating them from the common pool 810, rather than the Server Tower specific pool 840, optimizes the usage of controller memory and makes the overall system more flexible.
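A small sketch of that allocation choice, assuming a two-pool interface loosely modeled on the common pool 810 and the Server Tower specific pool 840 of FIG. 8; the enum, function names, and fallback to malloc are illustrative assumptions.

```c
#include <stdlib.h>

/* Two illustrative pools: a common pool shared by both towers (cf. 810) and a
 * server-tower-specific pool (cf. 840). */
enum mem_pool { POOL_COMMON, POOL_SERVER_TOWER };

void *pool_alloc(enum mem_pool pool, size_t size)
{
    /* In this sketch both pools fall back to malloc; in the real system the
     * common pool would be carved out of the shared controller memory. */
    (void)pool;
    return malloc(size);
}

/* File-system buffers are the largest consumer, so they are drawn from the
 * common pool rather than the server tower's private pool. */
void *alloc_fs_buffer(size_t size)
{
    return pool_alloc(POOL_COMMON, size);
}

/* Ordinary server-tower allocations keep using the tower-specific pool. */
void *alloc_server_private(size_t size)
{
    return pool_alloc(POOL_SERVER_TOWER, size);
}
```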
  • Porting common software that balances application software execution load between multiple servers, such as LSF from Platform Computing, onto the server tower 724 allows single instance applications to benefit from the scalability of the overall platform.
  • the load-balancing layer 724 moves applications between controllers to balance the execution performance of controllers and allow additional controllers to be added to a live system to increase performance.

Abstract

According to the invention, a server is embedded directly into a storage subsystem. Data copying is minimized when moving between the storage subsystem domain and the server domain. Data management functionality written for traditional servers is implemented within a stand-alone storage subsystem, generally without software changes to the ported subsystems. The hardware executing the storage subsystem and the server subsystem can be implemented so as to provide reduced latency, compared to traditional architectures, when communicating between the storage subsystem and the server subsystem. When a plurality of clustered controllers is used, traditional load-balancing software can be used to provide scalability of the server functions. The end result is a storage system that provides a wide range of data management functionality, that supports a heterogeneous collection of clients, that can be quickly customized for specific applications, that can readily leverage existing third-party software, and that provides optimal performance.
PCT/US2004/025383 2003-08-08 2004-08-06 Procede d'integration d'un serveur dans un sous-systeme de stockage WO2005015349A2 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP04780250A EP1668518A4 (fr) 2003-08-08 2004-08-06 Procede d'integration d'un serveur dans un sous-systeme de stockage
CA002535097A CA2535097A1 (fr) 2003-08-08 2004-08-06 Procede d'integration d'un serveur dans un sous-systeme de stockage

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US49396403P 2003-08-08 2003-08-08
US60/493,964 2003-08-08

Publications (2)

Publication Number Publication Date
WO2005015349A2 true WO2005015349A2 (fr) 2005-02-17
WO2005015349A3 WO2005015349A3 (fr) 2005-12-01

Family

ID=34135305

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/025383 WO2005015349A2 (fr) 2003-08-08 2004-08-06 Procede d'integration d'un serveur dans un sous-systeme de stockage

Country Status (3)

Country Link
EP (1) EP1668518A4 (fr)
CA (1) CA2535097A1 (fr)
WO (1) WO2005015349A2 (fr)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5163131A (en) * 1989-09-08 1992-11-10 Auspex Systems, Inc. Parallel i/o network file server architecture
EP0510245A1 (fr) * 1991-04-22 1992-10-28 Acer Incorporated Système et procédé d'écriture rapide d'informations d'un ordinateur à un système de mémoire
DE4328862A1 (de) * 1993-08-27 1995-03-02 Sel Alcatel Ag Verfahren und Vorrichtung zum Zwischenspeichern von Datenpaketen sowie Vermittlungsstelle mit einer solchen Vorrichtung
US5873103A (en) * 1994-02-25 1999-02-16 Kodak Limited Data storage management for network interconnected processors using transferrable placeholders
US6928575B2 (en) * 2000-10-12 2005-08-09 Matsushita Electric Industrial Co., Ltd. Apparatus for controlling and supplying in phase clock signals to components of an integrated circuit with a multiprocessor architecture
US7325051B2 (en) * 2001-11-06 2008-01-29 International Business Machines Corporation Integrated storage appliance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of EP1668518A4 *

Also Published As

Publication number Publication date
WO2005015349A3 (fr) 2005-12-01
EP1668518A2 (fr) 2006-06-14
CA2535097A1 (fr) 2005-02-17
EP1668518A4 (fr) 2009-03-04

Similar Documents

Publication Publication Date Title
US11934883B2 (en) Computer cluster arrangement for processing a computation task and method for operation thereof
JP5347396B2 (ja) マルチプロセッサシステム
US7676625B2 (en) Cross-coupled peripheral component interconnect express switch
US8046425B1 (en) Distributed adaptive network memory engine
US9354954B2 (en) System and method for achieving high performance data flow among user space processes in storage systems
US7451278B2 (en) Global pointers for scalable parallel applications
US20140208072A1 (en) User-level manager to handle multi-processing on many-core coprocessor-based systems
US20060020769A1 (en) Allocating resources to partitions in a partitionable computer
CN101163133B (zh) 一种多机虚拟环境下实现资源共享的通信系统及通信方法
Hou et al. Cost effective data center servers
EP2284702A1 (fr) Fonctionnement de processeurs de cellule sur un réseau
JP2002342280A (ja) 区分処理システム、区分処理システムにおけるセキュリティを設ける方法、およびそのコンピュータ・プログラム
US11922537B2 (en) Resiliency schemes for distributed storage systems
US20040093390A1 (en) Connected memory management
US20070150699A1 (en) Firm partitioning in a system with a point-to-point interconnect
CA2335561A1 (fr) Methode, systeme et produit de programme client-serveur heterogene pour un environnement de traitement partitionne
US20190042456A1 (en) Multibank cache with dynamic cache virtualization
US11093161B1 (en) Storage system with module affinity link selection for synchronous replication of logical storage volumes
CN1464415A (zh) 一种多处理器系统
US20050071545A1 (en) Method for embedding a server into a storage subsystem
CN110447019B (zh) 存储器分配管理器及由其执行的用于管理存储器分配的方法
WO2005015349A2 (fr) Procede d'integration d'un serveur dans un sous-systeme de stockage
Theodoropoulos et al. REMAP: Remote mEmory manager for disaggregated platforms
Osmon et al. The Topsy project: a position paper
KR20230107086A (ko) 캐시-일관성을 위한 장치 및 방법

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase in:

Ref document number: 2535097

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2004780250

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2004780250

Country of ref document: EP