WO2003050707A1 - Managing storage resources attached to a data network - Google Patents

Managing storage resources attached to a data network

Info

Publication number
WO2003050707A1
WO2003050707A1 (PCT/IB2002/005214)
Authority
WO
WIPO (PCT)
Prior art keywords
storage
virtual
storage resources
resources
client
Prior art date
Application number
PCT/IB2002/005214
Other languages
English (en)
Other versions
WO2003050707A8 (fr)
Inventor
Avraham Shillo
Original Assignee
Monosphere Limited
Priority date
Filing date
Publication date
Priority claimed from IL14707301A external-priority patent/IL147073A0/xx
Application filed by Monosphere Limited filed Critical Monosphere Limited
Priority to EP02781614A priority Critical patent/EP1456766A1/fr
Priority to CA002469624A priority patent/CA2469624A1/fr
Priority to AU2002348882A priority patent/AU2002348882A1/en
Priority to KR10-2004-7008877A priority patent/KR20040071187A/ko
Priority to JP2003551695A priority patent/JP2005512232A/ja
Publication of WO2003050707A1 publication Critical patent/WO2003050707A1/fr
Publication of WO2003050707A8 publication Critical patent/WO2003050707A8/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10: File systems; File servers

Definitions

  • The present invention relates to the field of data networks. More particularly, the invention concerns a method for dynamically managing and allocating storage resources attached to a data network among a plurality of workstations also connected to said data network.
  • A central dedicated file server is conventionally used as a repository of computer storage for a network. If the number of files is large, the file server may be distributed over multiple computer systems. However, as the volume of computer storage grows, the use of dedicated file servers represents a potential bottleneck: the data throughput required for transmitting many files to and from a central dedicated file server is a major cause of network congestion.
  • QoS: Quality of Service
  • The present invention is directed to a method for dynamically managing and allocating storage resources, attached to a data network, to applications executed by users connected to the data network through access points.
  • The physical storage resource allocated to each application, and the performance of that physical storage resource, are periodically monitored.
  • One or more physical storage resources are represented by a corresponding virtual storage space, which is aggregated in a virtual storage repository.
  • The physical storage requirements of each application are periodically monitored.
  • Each physical storage resource is divided into a plurality of physical storage segments, each having performance attributes that correspond to the performance of its physical storage resource.
  • The repository is divided into a plurality of virtual storage segments, and each physical storage segment is mapped to a corresponding virtual storage segment having similar performance attributes.
  • A virtual storage resource is introduced, consisting of a combination of virtual storage segments optimized for the application according to the performance attributes of their corresponding physical storage segments and the application's requirements.
  • Physical storage space is re-allocated to the application by redirecting each virtual storage segment of the combination to a corresponding physical storage segment.
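The redirection mechanism described in the bullets above can be sketched as a simple mapping table. This is a minimal illustrative sketch, not the patent's implementation; the names (`SegmentMap`, `redirect`, `resolve`) are invented for this example.

```python
class SegmentMap:
    """Maps virtual storage segments to physical storage segments."""

    def __init__(self):
        # virtual segment id -> (storage node id, physical segment id)
        self._table = {}

    def redirect(self, vseg, node, pseg):
        # Re-allocating physical space is just an update of this mapping;
        # the application keeps addressing the same virtual segment.
        self._table[vseg] = (node, pseg)

    def resolve(self, vseg):
        return self._table[vseg]

m = SegmentMap()
m.redirect("vseg-0", "node-A", "pseg-17")
assert m.resolve("vseg-0") == ("node-A", "pseg-17")
m.redirect("vseg-0", "node-B", "pseg-3")   # transparent re-allocation
assert m.resolve("vseg-0") == ("node-B", "pseg-3")
```

Because the application only ever sees virtual segment identifiers, updating the table moves its data without the application noticing.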
  • The parameters for evaluating performance are: the level of usage, by the application, of data/data files stored in the physical storage resource; the reliability of the physical storage resource; the available storage space on the physical storage resource; the access time to data stored in the physical storage resource; and the delay of data exchange between the computer executing the application and the access point of the physical storage resource.
  • The performance of each physical storage resource is repeatedly evaluated, and the physical storage requirements of each application are monitored.
  • The redirection of each virtual storage segment to another corresponding physical storage segment is dynamically changed in response to changes in performance and/or requirements.
  • Evaluation may be performed by defining a plurality of storage nodes, each representing an access point to a physical storage resource connected thereto. One or more parameters associated with each storage node are monitored, and a dynamic score is assigned to each storage node.
  • A storage priority is assigned to each storage node.
  • Each virtual storage segment associated with an application having execution priority is redirected to a set of storage nodes having higher storage priority values.
  • The performance of each storage node is dynamically monitored, and the storage node's priority is changed in response to the monitoring results. Whenever desired, the redirection of each virtual storage segment is changed.
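A dynamic score combining the monitored parameters might look like the sketch below. The weighting formula is purely illustrative (the patent does not specify one); `node_score` and the parameter names are assumptions for this example.

```python
def node_score(free_space_gb, access_ms, net_delay_ms, reliability):
    """Toy dynamic score: more free space and higher reliability raise
    the score; access time and network delay lower it."""
    return free_space_gb * reliability / (1.0 + access_ms + net_delay_ms)

# Periodic monitoring would refresh these measurements per node.
nodes = {
    "node-A": node_score(500, access_ms=8, net_delay_ms=2, reliability=0.99),
    "node-B": node_score(200, access_ms=4, net_delay_ms=1, reliability=0.95),
}

# Segments of a high-priority application are redirected to the
# highest-scoring nodes.
best = max(nodes, key=nodes.get)
assert best == "node-A"
```

Because the scores are recomputed as conditions change, the same redirection step naturally migrates segments away from degrading nodes.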
  • The access time of an application to required data blocks is decreased by storing duplicates of the data files in several different storage nodes and allowing the application to access the duplicate stored in the storage node having the best performance.
  • Physical storage resources are added to or removed from the data network in a way that is transparent to currently executed applications: the content of the repository is updated according to the addition/removal, the performance of each added physical storage resource is evaluated, and the redirection of at least one virtual storage segment is dynamically changed, to physical storage segments derived from the added resource and/or to another corresponding physical storage segment, in response to the performance.
  • A data read operation from a virtual storage resource may be carried out by sending a request from the application, such that the request specifies the location of the requested data in the virtual storage resource.
  • The location of the requested data in the virtual storage resource is mapped into a pool of at least one storage node containing at least a portion of the requested data.
  • One or more storage nodes having the shortest response time to fulfill the request are selected from the pool.
  • The request is directed to the selected storage nodes having the lowest data exchange load, and the application is allowed to read the requested data from the selected storage nodes.
  • A data write operation to a virtual storage resource is carried out by sending a request from the application, such that the request determines the data to be written and the location in the virtual storage resource to which the data should be written.
  • A pool of potential storage nodes for storing the data is created. At least one storage node, whose physical location in the data network gives the shortest response time to fulfill the request, is selected from the pool. The request is directed to the selected storage nodes having the lowest data exchange load, and the application is allowed to write the data into the selected storage nodes.
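The selection rule shared by the read and write paths above (shortest response time first, then lowest data-exchange load) can be sketched as follows. The dictionary fields are illustrative assumptions, not terms from the patent.

```python
def select_node(pool):
    """From a pool of candidate storage nodes, pick the one with the
    shortest response time; break ties by the lowest data-exchange load."""
    return min(pool, key=lambda n: (n["response_ms"], n["load"]))

pool = [
    {"id": "node-A", "response_ms": 5, "load": 0.9},
    {"id": "node-B", "response_ms": 5, "load": 0.2},   # same speed, less loaded
    {"id": "node-C", "response_ms": 12, "load": 0.1},
]
assert select_node(pool)["id"] == "node-B"
```

The same helper serves both operations: for a read, the pool contains nodes that hold a duplicate of the data; for a write, it contains nodes with free space for the new block.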
  • Each application can access every storage node: a computer that is linked to at least one storage node, and that has access to physical storage resources which are inaccessible to the application, serves as a mediator between the application and the inaccessible storage resources.
  • The data throughput performance of each mediator is evaluated for each application, and the load required to provide accessibility to inaccessible storage resources for each application is dynamically distributed between two or more mediators, according to the evaluation results.
  • Physical storage space is re-allocated for each application by redirecting the virtual storage segments that correspond to the application to two or more storage nodes, such that the load is dynamically distributed between the two or more storage nodes according to their corresponding scores, thereby balancing the load between them.
  • The re-allocation of physical storage resources to each application may be carried out by continuously, or periodically, monitoring the level of demand for actual physical storage space, allocating actual physical storage space for the application in response to the level of demand for the time period during which the space is actually required by the application, and dynamically changing the level of allocation in response to changes in the level of demand.
  • The present invention is also directed to a system for dynamically managing and allocating storage resources, attached to a data network, to applications executed by users connected to the data network through access points, operating according to the method described hereinabove.
  • FIG. 1 schematically illustrates the architecture of a system for dynamically managing and allocating storage resources to application servers/workstations, connected to a data network, according to a preferred embodiment of the invention.
  • FIG. 2 schematically illustrates the structure and mapping between physical and virtual storage resources, according to a preferred embodiment of the invention.
  • FIGs. 3A and 3B schematically illustrate read and write operations performed in a system for dynamically managing and allocating storage resources to application servers/workstations connected to a data network, according to a preferred embodiment of the invention.
  • The present invention comprises the following components:
  • a Storage Domain Supervisor located on a System Management server for managing a storage allocation policy and distributing storage to storage clients;
  • Storage Clients located on every computer that needs to use the storage space.
  • Fig. 1 schematically illustrates the architecture of a system for dynamically managing and allocating storage resources to application servers/workstations connected to a data network, according to a preferred embodiment of the invention.
  • The data network 100 includes a Local-Area-Network (LAN) 101 that comprises a network administrator 102, a plurality of workstations 103 to 106, each having a local storage 103a to 106a, respectively, and a plurality of Network-Area-Storage (NAS) servers 110 and 111, each of which contains large amounts of storage space for the LAN's usage.
  • LAN: Local-Area-Network
  • NAS: Network-Area-Storage
  • The NAS servers 110 and 111 conduct continuous communication (over communication path 170) with application servers 121 to 123, which are connected to the LAN 101, and on which the applications used by the workstations 103 to 106 are run.
  • This communication path 170 is used to temporarily store data files required for running the applications by workstations in the LAN 101.
  • The application servers 121 to 123 may contain their own (local storage) hard disk 121a, or they can use storage services provided by an external Storage Area Network (SAN) 140, utilizing several of its storage disks 141 to 143.
  • SAN: Storage Area Network
  • Each access point to the network of an independent storage resource (a physical storage component such as a hard disk) is referred to as a storage node.
  • Conventionally, each of the application servers 121 to 123 would store its applications' data on its own respective hard disk 121a (if sufficient), or on its corresponding disk 141 to 143 allocated by the SAN 140.
  • A managing server 150 is added to the LAN 101.
  • The managing server 150 identifies all the physical storage resources (i.e., all the hard disks) that are connected to the network 100 and collects them into a virtual storage pool 160. The pool is actually implemented by a plurality of segments distributed among the physical storage resources, using predetermined criteria that are dynamically processed and evaluated, such that the distribution is transparent to each application.
  • The managing server 150 monitors (by running the Storage Domain Supervisor component installed therein) all the applications that are currently being used by the network's workstations 103 to 106.
  • The server 150 can therefore detect how much disk space each application actually consumes on the application server that runs it.
  • Server 150 re-allocates virtual storage resources to each application according to its actual needs and level of usage.
  • The server 150 processes the collected knowledge in order to generate dynamic indications to the network administrator 102, for regulating and re-allocating the available storage space among the running applications, while presenting, to each application, the amount of virtual storage space expected by that application for proper operation.
  • The server 150 is situated in parallel with the network communication path 171 between the LAN 101 and the application servers 121 to 123. This configuration ensures that the server 150 is not a bottleneck for the data flowing through communication path 171, so data congestion is avoided.
  • The re-allocation process is based on the fact that many applications, while consuming great quantities of disk resources, actually utilize only part of these resources.
  • The remaining resources, which the applications do not utilize, only need to be visible to the applications, not operated on. For example, an application may consume 15 GB of disk space while only 10 GB are actually used for installation and data files. In order to operate properly, the application requires the remaining 5 GB to be available on its allocated disk, but hardly ever (or never) uses them.
  • The re-allocation process takes over these unused portions of disk resources and allocates them to applications that need them for their actual operation. This way, the network's virtual storage volume can be sized above the actual physical storage space.
  • Allocation of the actual physical storage space is performed for each application on demand (dynamically), and only for the time period during which it is actually required by that application.
  • The level of demand is continuously, or periodically, monitored; if a reduction in demand is detected, the amount of allocated physical storage space is reduced accordingly for that application and may be allocated to other applications whose demand is currently increasing. The same may be done when allocating a virtual storage resource for each application.
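The demand-driven allocation described above is what is now commonly called thin provisioning. A minimal sketch, using the 15 GB / 10 GB example from the text; the class and method names are invented for illustration.

```python
class ThinVolume:
    """On-demand ('thin') allocation: the application sees the full
    virtual size, while physical backing tracks actual demand."""

    def __init__(self, virtual_gb):
        self.virtual_gb = virtual_gb   # what the application is shown
        self.physical_gb = 0           # what is actually backed by disk

    def set_demand(self, used_gb):
        # Grow or shrink the physical allocation with measured demand;
        # freed space returns to the shared pool for other applications.
        self.physical_gb = min(used_gb, self.virtual_gb)

v = ThinVolume(virtual_gb=15)
v.set_demand(10)                       # application actually uses 10 GB
assert (v.virtual_gb, v.physical_gb) == (15, 10)
v.set_demand(7)                        # demand dropped: 3 GB returned to pool
assert v.physical_gb == 7
```

Summing `virtual_gb` over all volumes can therefore exceed the real disk capacity, which is exactly the over-subscription the re-allocation process exploits.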
  • A further optional feature of the system is its liquidity: an indication of how much additional storage the system should allocate for immediate use by an application. Liquidity provides better storage allocation performance and ensures that an application will not run out of storage resources due to an unexpected increase in storage demand. Storage volume usage indicators alert the System Manager before the application runs out of available storage resources.
  • Yet a further optional feature of the system is its accessibility, which allows an application server to access all of the network's storage devices (storage nodes), even if some of those storage devices can only be accessed by a limited number of computers within the network. This is achieved by using computers that have access to otherwise inaccessible disks as mediators, lending their access to applications that request the inaccessible data.
  • The data throughput performance of each mediator (i.e., the amount of data handled successfully by that mediator in a given time period) is evaluated, and the load required to provide this accessibility is dynamically distributed between different mediators for each application according to the evaluation results (load balancing between mediators).
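Distributing the mediation load in proportion to each mediator's measured throughput could be sketched as below. The proportional-split policy and all names are illustrative assumptions; the patent only states that the load is distributed according to the evaluation results.

```python
def split_load(mediators, total_mb):
    """Distribute an application's traffic across mediators in proportion
    to each mediator's measured throughput (MB/s)."""
    total_tp = sum(mediators.values())
    return {m: total_mb * tp / total_tp for m, tp in mediators.items()}

# med-1 has proven four times the throughput of med-2, so it takes
# four times the traffic.
shares = split_load({"med-1": 80.0, "med-2": 20.0}, total_mb=100)
assert shares == {"med-1": 80.0, "med-2": 20.0}
```

Re-running the split as throughput measurements change gives the dynamic re-balancing the text describes.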
  • The server 150 creates virtual storage volumes 161, 162 and 163 (in the virtual storage pool 160) for application servers 121, 122 and 123, respectively. These virtual volumes are reflected as virtual disks 121b, 122b and 123b. This means that even though an application does not have all the physical disk resources required for running, it receives an indication from the network administrator 102 that all of these resources are available to it, while in fact its unutilized resources are allocated to other applications.
  • The application servers therefore only know the sizes of their virtual disks, not of their physical disks. Since the resource demands of each application vary constantly, the sizes of the virtual disks seen by the application servers also vary.
  • Each virtual storage volume is divided into predetermined storage segments ("chunks"), which are dynamically mapped back to physical storage resources (e.g., disks 121a, 141 to 143) by distributing them among the corresponding physical storage resources.
  • A storage node agent, a software component that executes the redirection of data exchange between allocated physical and virtual storage resources, is provided for each storage node.
  • The resources of each storage node that is linked to an end user's workstation are also added to the virtual storage pool 160. Mapping is carried out by defining a plurality of storage nodes, 130a to 130i, each connected to a corresponding physical storage resource.
  • Each storage node is evaluated and characterized by performance parameters derived from the predetermined criteria, for example: the available physical storage on that node, the data delay to reach that node over the data network, the access time to the disk connected to that storage node, etc.
  • Server 150 dynamically evaluates each storage node and, for each application, distributes (by allocation) the physical storage segments that correspond to that application between the storage nodes found optimal for that application, in a way that is transparent to the application.
  • Each request from an application to access its data files is directed to the corresponding storage nodes that currently contain these data files.
  • The evaluation process is repeated, and data files are moved from node to node according to the evaluation results.
  • The operation of server 150 is controlled from a management console 164, which communicates with it via a LAN/WAN 165 and provides dynamic indications to the network administrator 102.
  • Server 150 comprises pointers to locations in the virtual storage pool 160 that correspond to every file in the system, so an application making a request for a file need not know its actual physical location.
  • The virtual storage pool 160 maintains a set of tables that map the virtual storage space to the set of physical storage volumes located on different disks (storage nodes) throughout the network.
  • Any client application can access every file on every storage disk connected to the network through the virtual storage pool 160.
  • A client application identifies itself when forwarding a request for data, so that its security level of access can be extracted from the appropriate table in the virtual storage pool 160.
  • FIG. 2 schematically illustrates the structure and mapping between physical and virtual storage resources, according to a preferred embodiment of the invention.
  • Each virtual storage volume (e.g., 161) that is associated with an application is divided into equal storage "chunks", which are sub-divided into segments, such that each segment is associated (as a result of continuous evaluation) with an optimal storage node.
  • Each segment of a chunk is mapped, through its corresponding optimal storage node, into a "mini-chunk" located at a corresponding partition of the disk associated with that node.
  • Each chunk may thus be mapped to (distributed among) a plurality of disks, each having different performance and located at a different location on the data network.
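The chunk/segment/mini-chunk hierarchy might be laid out as in the sketch below. The sizes, the round-robin placement (standing in for the patent's performance-driven node choice), and all names are illustrative assumptions.

```python
CHUNK_MB = 64            # assumed chunk size; the patent fixes none
SEGMENTS_PER_CHUNK = 4   # assumed sub-division; illustrative only

def place_chunk(chunk_id, nodes):
    """Assign each segment of a chunk to a storage node. Round-robin
    here stands in for the continuous performance evaluation that
    would pick the optimal node per segment."""
    return {
        (chunk_id, seg): nodes[seg % len(nodes)]
        for seg in range(SEGMENTS_PER_CHUNK)
    }

# One chunk of one virtual volume, spread over two disks on the network:
layout = place_chunk(0, ["node-A", "node-B"])
assert layout[(0, 0)] == "node-A" and layout[(0, 3)] == "node-B"
```

Each `(chunk, segment)` entry would point, via the chosen node's storage node agent, at a mini-chunk on that node's disk partition.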
  • The hierarchical architecture proposed by the invention allows scalability of the storage network while essentially maintaining its performance.
  • A network is divided into areas (for example, separate LANs), which are connected to each other.
  • A selected computer in each predetermined area maintains a local routing table that maps the virtual storage space to the set of physical storage resources located in that area. Whenever access is required to a storage volume that is not mapped locally, the computer seeks the location of the requested storage volume in the virtual storage pool 160 and accesses its data.
  • The local routing tables are updated each time the data in the storage area is changed. Only the virtual storage pool 160 maintains a comprehensive view of the metadata changes (i.e., data related to attributes, structure and location of stored data files) for all areas. This minimizes the number of times the virtual storage pool 160 must be accessed in order to reach files in any storage node on the network, as well as the metadata traffic required for updating the local routing tables, particularly in large storage networks.
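The two-level lookup (per-area routing table first, global pool on a miss) can be sketched as follows; `AreaRouter` and its methods are invented names for illustration.

```python
class AreaRouter:
    """Two-level lookup: a per-area routing table resolves local
    volumes; misses fall back to the global virtual storage pool."""

    def __init__(self, local_table, global_pool):
        self.local = local_table
        self.pool = global_pool

    def locate(self, volume):
        if volume in self.local:     # cheap, area-local answer
            return self.local[volume]
        return self.pool[volume]     # one trip to the global pool 160

pool = {"vol-1": "node-A", "vol-2": "node-Z"}      # comprehensive view
r = AreaRouter({"vol-1": "node-A"}, pool)          # area knows vol-1 only
assert r.locate("vol-1") == "node-A"   # resolved locally
assert r.locate("vol-2") == "node-Z"   # resolved via the global pool
```

Most lookups hit the local table, which is what keeps metadata traffic to the central pool low as the network grows.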
  • The physical storage resources may be implemented using a Redundant Array of Independent Disks (RAID), a way of redundantly storing the same data on multiple hard disks (i.e., in different places). Maintaining multiple copies of files is a much more cost-efficient approach, since there is no operational delay involved in their restoration, and the backup copies of those files can be used immediately.
  • RAID: Redundant Array of Independent Disks
  • FIGs. 3A and 3B schematically illustrate read and write operations performed in a system for dynamically managing and allocating storage resources to application servers/workstations, connected to a data network, according to a preferred embodiment of the invention.
  • A user application running on a storage client issues a request to read data.
  • This request is forwarded through the File System and accesses the Low Level Device component of the storage client, which is typically presented as a disk.
  • The Low Level Device then calls the Blocks Allocator.
  • The Blocks Allocator uses the Volume Mapping table to convert the virtual location of the requested data (the allocated virtual drive in the virtual storage pool 160, as specified by the volume and offset parameters of the request) into the physical location (the storage node) in the network where this data is actually stored.
  • In order to decide from which storage nodes it is best to retrieve data, the storage client periodically sends a file-read request to each storage node in the network and measures the response time. It then builds a table of the optimal storage nodes having the shortest read access time (highest priority) with respect to the storage client's location. The Load Balancer uses this table to determine the best storage nodes from which to retrieve the requested data. Data can be retrieved from the storage node having the highest priority; alternatively, if that node is congested due to parallel requests from other applications, data is retrieved from another storage node having similar or next-best priority.
  • The RAID Controller, which is in charge of I/O operations in the system, sends the request through the various network communication cards. It then accesses the appropriate storage nodes and retrieves the requested data.
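The per-client priority table built from the periodic read probes can be sketched as below; the function name and the congestion-fallback step are illustrative assumptions consistent with the description above.

```python
def build_priority_table(probe_times_ms):
    """Order storage nodes by measured read response time, fastest
    first, mirroring the per-client priority table described above."""
    return sorted(probe_times_ms, key=probe_times_ms.get)

# Response times measured by this storage client's periodic probes:
table = build_priority_table({"node-A": 12.0, "node-B": 3.5, "node-C": 7.1})
assert table == ["node-B", "node-C", "node-A"]

# If the top node is congested by parallel requests, the Load Balancer
# falls back to the next-best entry:
fallback = table[1]
assert fallback == "node-C"
```

Because the probes repeat periodically, the table tracks changing network conditions rather than a one-time measurement.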
  • The write operation is performed similarly.
  • The request for writing data received from the user application again has three parameters; this time, instead of the length of the data (which appeared in the read operation), there is the actual data to be written.
  • The initial steps are the same, up to the point where the Blocks Allocator extracts from the Volume Mapping table the exact location into which the data should be written.
  • The Blocks Allocator uses the Node Speed Results and Usage Information tables to check all available storage nodes throughout the network and form a pool of potential storage space for writing the data.
  • The Blocks Allocator allocates the storage necessary for creating at least two duplicates of a data block for each user request to create a new data file.
  • The Load Balancer evaluates each remote storage node according to a priority determined by parameters such as:
  • the amount of storage remaining on the storage node.
  • Data is written to the storage node having the highest priority, or alternatively, the performance of each storage node is continuously (or periodically) evaluated for each application.
  • Data write operations can then be dynamically distributed for each application between different (or even all) storage nodes, according to the evaluation results (load balancing between storage nodes). The combination of storage nodes used for each write operation varies for each application in response to variations in the evaluation results.
  • The RAID Controller issues a write request to the appropriate NAS and SAN devices and sends them the data via the various network communication cards. The data is then received and saved in the appropriate storage nodes inside the appropriate NAS and SAN devices.
  • Multiple duplicates of every file are stored on at least two different nodes in the network, for backup in case of a system failure.
  • The file usage patterns, stored in the profile table associated with each file, are evaluated for each requested file.
  • Data throughput over the network is increased by eliminating access contention for a file: the file's usage is evaluated and duplicates of the file are stored in separate storage nodes on the network according to the evaluation results.
  • File distribution can be performed by generating multiple file duplicates simultaneously in different nodes of the network, rather than by a central server. Consequently, the distribution is decentralized and bottleneck states are eliminated.
  • The mapping process is performed dynamically, without interrupting the application. Hence, new storage disks may be added to the data network by simply registering them in the virtual storage pool.
  • Updated metadata about the storage locations of every duplicate of every file, and about every block (small storage segment on a hard disk) comprising those files, is maintained dynamically in the tables of the virtual storage pool 160.
  • The level of redundancy for different files is also set dynamically: files with important data are replicated in more locations throughout the network and are thus better protected against storage failures.


Abstract

The invention concerns a computer network formed of multiple storage nodes (103a-106a), each comprising a physical storage resource (121a). A system management server (150) on the network (100) identifies the physical storage device (121a) on the network (100) and adds it to a virtual storage pool (160). When an application (121) executed on a storage client device accesses network storage, the system management server (150) allocates a segment of the virtual storage pool (160) to the application. The virtual storage pool segment is stored in a physical storage resource on the network. The system management server monitors the application's usage of network storage and transparently and dynamically re-allocates the virtual segment to an optimal physical storage resource.
PCT/IB2002/005214 2001-12-10 2002-12-04 Managing storage resources attached to a data network WO2003050707A1 (fr)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP02781614A EP1456766A1 (fr) 2001-12-10 2002-12-04 Managing storage resources attached to a data network
CA002469624A CA2469624A1 (fr) 2001-12-10 2002-12-04 Managing storage resources attached to a data network
AU2002348882A AU2002348882A1 (en) 2001-12-10 2002-12-04 Managing storage resources attached to a data network
KR10-2004-7008877A KR20040071187A (ko) 2001-12-10 2002-12-09 데이터 네트워크에 배속된 스토리지 자원의 관리
JP2003551695A JP2005512232A (ja) 2001-12-10 2002-12-09 データ・ネットワークに付属されたストレージ資源の管理

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
IL14707301A IL147073A0 (en) 2001-12-10 2001-12-10 Method for managing the storage resources attached to a data network
IL147073 2001-12-10
US10/279,755 US20030110263A1 (en) 2001-12-10 2002-10-23 Managing storage resources attached to a data network
US10/279,755 2002-10-23

Publications (2)

Publication Number Publication Date
WO2003050707A1 true WO2003050707A1 (fr) 2003-06-19
WO2003050707A8 WO2003050707A8 (fr) 2004-11-04

Family

ID=26324055

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2002/005214 WO2003050707A1 (fr) 2001-12-10 2002-12-04 Managing storage resources attached to a data network

Country Status (5)

Country Link
EP (1) EP1456766A1 (fr)
JP (1) JP2005512232A (fr)
CN (1) CN1602480A (fr)
CA (1) CA2469624A1 (fr)
WO (1) WO2003050707A1 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005008524A1 (fr) * 2003-07-16 2005-01-27 Joltid Ltd. Systeme de base de donnees repartie
JP2006048680A (ja) * 2004-07-30 2006-02-16 Hewlett-Packard Development Co Lp 複数のインスタンスアプリケーションに対し負荷分散装置を動作させるシステムおよび方法
CN100440888C (zh) * 2004-01-17 2008-12-03 中国科学院计算技术研究所 基于网络存储和资源虚拟化的大型服务系统的管理系统及其方法
CN100440830C (zh) * 2004-04-13 2008-12-03 中国科学院计算技术研究所 一种基于网络的计算环境可动态重构的系统及其方法
US7497384B2 (en) 2002-10-25 2009-03-03 Symbol Technologies, Inc. Methods and systems for the negotiation of a population of RFID tags with improved security
WO2009047192A3 (fr) * 2007-10-10 2009-09-11 Telefonaktiebolaget Lm Ericsson (Publ) Gestion des ressources dans un réseau de communications
US8918620B2 (en) 2010-06-21 2014-12-23 Fujitsu Limited Storage control apparatus, storage system and method
US9047216B2 (en) 2003-08-14 2015-06-02 Compellent Technologies Virtual disk drive system and method
EP2930910A4 (fr) * 2012-12-31 2015-11-25 Huawei Tech Co Ltd Procédé et système de partage de ressources mémoire
EP2983339A4 (fr) * 2014-05-22 2016-07-20 Huawei Tech Co Ltd Appareil d'interconnexion de n uds, noeud de commande de ressource et système de serveur

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9489150B2 (en) 2003-08-14 2016-11-08 Dell International L.L.C. System and method for transferring data between different raid data storage types for current data and replay data
JP2009238114A (ja) * 2008-03-28 2009-10-15 Hitachi Ltd Storage management method, storage management program, storage management apparatus, and storage management system
TWI413376B (zh) * 2008-05-02 2013-10-21 Hon Hai Prec Ind Co Ltd Network storage management apparatus and method
CN101834904A (zh) * 2010-05-14 2010-09-15 杭州华三通信技术有限公司 Data backup method and device
WO2012147127A1 (fr) 2011-04-26 2012-11-01 Hitachi, Ltd. Computer system and method for controlling the computer system
CN102255962B (zh) * 2011-07-01 2013-11-06 华为数字技术(成都)有限公司 Distributed storage method, apparatus, and system
CN103186349B (zh) * 2011-12-27 2016-03-02 杭州信核数据科技股份有限公司 Block-level distributed storage system and data read/write method therefor
WO2013112538A1 (fr) * 2012-01-23 2013-08-01 Citrix Systems, Inc. Storage encryption
CN105657057A (zh) * 2012-12-31 2016-06-08 华为技术有限公司 Cluster system converging computing and storage
CA2960150C (fr) * 2014-09-04 2018-01-02 Iofabric Inc. Application-centric distributed storage system and method
JP6375849B2 (ja) 2014-10-09 2018-08-22 富士通株式会社 File system, control program for management apparatus, and method for controlling a file system
CN110502184B (zh) * 2018-05-17 2021-01-05 杭州海康威视系统技术有限公司 Method for storing data, method for reading data, apparatus, and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030033398A1 (en) * 2001-08-10 2003-02-13 Sun Microsystems, Inc. Method, system, and program for generating and using configuration policies
US20030046369A1 (en) * 2000-10-26 2003-03-06 Sim Siew Yong Method and apparatus for initializing a new node in a network
US20030058277A1 (en) * 1999-08-31 2003-03-27 Bowman-Amuah Michel K. A view configurer in a presentation services patterns environment

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7497384B2 (en) 2002-10-25 2009-03-03 Symbol Technologies, Inc. Methods and systems for the negotiation of a population of RFID tags with improved security
US7480658B2 (en) 2003-07-16 2009-01-20 Joltid Ltd. Distributed database system and method having nodes co-ordinated in a decentralized manner
WO2005008524A1 (fr) * 2003-07-16 2005-01-27 Joltid Ltd. Distributed database system
US9047216B2 (en) 2003-08-14 2015-06-02 Compellent Technologies Virtual disk drive system and method
US10067712B2 (en) 2003-08-14 2018-09-04 Dell International L.L.C. Virtual disk drive system and method
CN100440888C (zh) * 2004-01-17 2008-12-03 中国科学院计算技术研究所 Management system and method for a large-scale service system based on network storage and resource virtualization
CN100440830C (zh) * 2004-04-13 2008-12-03 中国科学院计算技术研究所 Dynamically reconfigurable network-based computing environment system and method
JP2006048680A (ja) * 2004-07-30 2006-02-16 Hewlett-Packard Development Co Lp System and method for operating a load balancer for multiple instance applications
JP4621087B2 (ja) * 2004-07-30 2011-01-26 ヒューレット−パッカード デベロップメント カンパニー エル.ピー. System and method for operating a load balancer for multiple instance applications
WO2009047192A3 (fr) * 2007-10-10 2009-09-11 Telefonaktiebolaget Lm Ericsson (Publ) Handling resources in a communications network
US8538447B2 (en) 2007-10-10 2013-09-17 Telefonaktiebolaget L M Ericsson (Publ) Handling resources in a communications network
CN101822121A (zh) * 2007-10-10 2010-09-01 爱立信电话股份有限公司 Handling resources in a communications network
US8918620B2 (en) 2010-06-21 2014-12-23 Fujitsu Limited Storage control apparatus, storage system and method
EP2930910A4 (fr) * 2012-12-31 2015-11-25 Huawei Tech Co Ltd Method and system for sharing storage resources
EP3188449A1 (fr) * 2012-12-31 2017-07-05 Huawei Technologies Co., Ltd. Method and system for sharing storage resources
US9733848B2 (en) 2012-12-31 2017-08-15 Huawei Technologies Co., Ltd. Method and system for pooling, partitioning, and sharing network storage resources
US10082972B2 (en) 2012-12-31 2018-09-25 Huawei Technologies Co., Ltd. Method and system for pooling, partitioning, and sharing network storage resources
US10481804B2 (en) 2012-12-31 2019-11-19 Huawei Technologies Co., Ltd. Cluster system with calculation and storage converged
US11042311B2 (en) 2012-12-31 2021-06-22 Huawei Technologies Co., Ltd. Cluster system with calculation and storage converged
EP3244592A1 (fr) * 2014-05-22 2017-11-15 Huawei Technologies Co., Ltd. Node interconnection apparatus, resource control node, and server system
EP2983339A4 (fr) * 2014-05-22 2016-07-20 Huawei Tech Co Ltd Node interconnection apparatus, resource control node, and server system
US10310756B2 (en) 2014-05-22 2019-06-04 Huawei Technologies Co., Ltd. Node interconnection apparatus, resource control node, and server system
US11023143B2 (en) 2014-05-22 2021-06-01 Huawei Technologies Co., Ltd. Node interconnection apparatus, resource control node, and server system
EP4083777A1 (fr) * 2014-05-22 2022-11-02 Huawei Technologies Co., Ltd. Resource control node and method
US11789619B2 (en) 2014-05-22 2023-10-17 Huawei Technologies Co., Ltd. Node interconnection apparatus, resource control node, and server system
US11899943B2 (en) 2014-05-22 2024-02-13 Huawei Technologies Co., Ltd. Node interconnection apparatus, resource control node, and server system

Also Published As

Publication number Publication date
WO2003050707A8 (fr) 2004-11-04
EP1456766A1 (fr) 2004-09-15
JP2005512232A (ja) 2005-04-28
CN1602480A (zh) 2005-03-30
CA2469624A1 (fr) 2003-06-19

Similar Documents

Publication Publication Date Title
US20030110263A1 (en) Managing storage resources attached to a data network
WO2003050707A1 (fr) Managing storage resources attached to a data network
US7181524B1 (en) Method and apparatus for balancing a load among a plurality of servers in a computer system
JP4634812B2 (ja) Storage system with the capability to allocate virtual storage segments among multiple controllers
US6928459B1 (en) Plurality of file systems using weighted allocation to allocate space on one or more storage devices
KR100490723B1 (ko) File-level striping apparatus and method
US6715054B2 (en) Dynamic reallocation of physical storage
US7171459B2 (en) Method and apparatus for handling policies in an enterprise
US7424491B2 (en) Storage system and control method
US6148377A (en) Shared memory computer networks
US11544226B2 (en) Metadata control in a load-balanced distributed storage system
US20040153481A1 (en) Method and system for effective utilization of data storage capacity
US6269410B1 (en) Method and apparatus for using system traces to characterize workloads in a data storage system
JP2005216306A (ja) Storage system including the capability to move a virtual storage device group without moving data
US20020052980A1 (en) Method and apparatus for event handling in an enterprise
US6961727B2 (en) Method of automatically generating and disbanding data mirrors according to workload conditions
JP2004013547A (ja) Data allocation method and information processing system
US10657045B2 (en) Apparatus, system, and method for maintaining a context stack
US20210374097A1 (en) Access redirection in a distributive file system
US20080192643A1 (en) Method for managing shared resources
JP4224279B2 (ja) File management program
AU2002348882A1 (en) Managing storage resources attached to a data network
US11755216B2 (en) Cache memory architecture and management
CN114816276A (zh) Method for providing disk rate limiting based on logical volume management under Kubernetes

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2002348882

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 2003551695

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2469624

Country of ref document: CA

Ref document number: 1257/CHENP/2004

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 1020047008877

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 20028247108

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 2002781614

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2002781614

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 2002781614

Country of ref document: EP