US20200026779A1 - Storage system with decrement protection of reference counts
- Publication number
- US20200026779A1 (U.S. application Ser. No. 16/040,231)
- Authority
- United States
- Prior art keywords
- content
- request
- decrement
- based signature
- write
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G06F17/30097—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/14—Details of searching files based on file metadata
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/13—File access structures, e.g. distributed indices
- G06F16/137—Hash-based
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
- G06F3/0632—Configuration or reconfiguration of storage systems by initialisation or re-initialisation of storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45579—I/O management, e.g. providing access to device drivers or storage
Definitions
- the field relates generally to information processing systems, and more particularly to storage in information processing systems.
- A volatile write cache temporarily stores or caches data to be later written to a persistent data storage location (i.e., destaged) during a background destaging process.
- the information processing system may often have a fixed-size page granularity and the system may support write input/output (IO) requests for data segments smaller than the system's page granularity, i.e., small write requests.
- the write cache temporarily stores the new segment data associated with the small write request for later destaging in a write cache destaging process. During the destaging process, the new segment data associated with the small write request is hardened.
- the data in the data page targeted by the small write request may be read and combined with the new segment data of the small write request to form a new data page which then is stored in the persistent data storage location.
- a received write request is considered a pending or “inflight” write request prior to being stored in the persistent data storage location, e.g., while awaiting or being processed in the destaging process.
- the data pages in the persistent data storage location may each have an associated reference count that indicates the number of references to that page in an address-to-hash (A2H) mapping of the information processing system.
- the reference count for a given data page may be updated as the number of references to that given data page increases or decreases. For example, increment (“Incref”) and decrement (“Decref”) commands may be issued to increment or decrement the reference count associated with a data page in the persistent data storage location.
- When its reference count is decremented to zero or another predetermined value, the given data page may be removed or marked for removal since the data page is no longer used by the system.
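- As a minimal illustration of these semantics, consider the following hedged Python sketch; the function names and in-memory dictionary are illustrative assumptions, not the patent's implementation:

```python
# Minimal sketch of per-page reference counting (hypothetical names).
ref_counts = {}  # physical location -> reference count


def release_physical_block(location):
    """Placeholder for returning the physical capacity to a free pool."""


def incref(location):
    ref_counts[location] = ref_counts.get(location, 0) + 1


def decref(location):
    ref_counts[location] -= 1
    if ref_counts[location] == 0:
        # No remaining references: the page may be removed or marked
        # for removal, releasing its physical capacity.
        del ref_counts[location]
        release_physical_block(location)
```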
- Illustrative embodiments provide techniques for decrement protection of reference counts for inflight small write requests in a storage system.
- a storage system comprises a plurality of storage devices and an associated storage controller.
- the plurality of storage devices are configured to store a plurality of data pages.
- Each of the data pages has a content-based signature derived from content of that data page.
- the content-based signatures of the data pages are associated with physical locations in the plurality of storage devices where the data pages are stored.
- the plurality of storage devices store a reference count for each physical location. A given reference count indicates a number of the data pages that map via their respective content-based signatures to the same physical location in the plurality of storage devices.
- the storage controller is configured to receive a write input/output (IO) request.
- the write IO request includes a data segment that is smaller than a page granularity of the plurality of storage devices.
- In response to receiving the write IO request, the storage controller is configured to determine a content-based signature associated with the data segment.
- the content-based signature corresponds to a target data page stored at one of the physical locations.
- In response to a decrement request to decrement a reference count of the physical location corresponding to the content-based signature of the target data page, the storage controller is configured to postpone the decrement request.
- the storage controller may be implemented using at least one processing device comprising a processor coupled to a memory.
- the storage controller may be further configured to increment an inflight write count corresponding to the determined content-based signature of the target data page in a data structure associated with the storage controller in response to determining the content-based signature associated with the data segment.
- the storage controller may be further configured to decrement the inflight write count in response to completion of the received write IO request.
- the storage controller may be further configured to execute the postponed decrement request in response to the inflight write count being decremented to a predetermined value.
- In response to the decrement request, the storage controller may be further configured to set a decrement postponed flag corresponding to the content-based signature of the target data page in a data structure associated with the storage controller.
- the storage controller may be further configured to determine whether the decrement postponed flag corresponding to the content-based signature of the target data page is set in the data structure. In response to determining that the decrement postponed flag corresponding to the content-based signature of the target data page is set in the data structure, the storage controller may be further configured to decrement the reference count of the physical location corresponding to the content-based signature of the target data page.
- In response to a recovery of the storage system after an event, the storage controller may be further configured to reset the data structure.
- the storage controller may be further configured to determine whether any recovered write IO requests include a data segment smaller than a page granularity of the plurality of storage devices and, for a given write IO request that includes a data segment smaller than a page granularity of the plurality of storage devices, the storage controller may be further configured to increment the inflight write count corresponding to the content-based signature of a data page targeted by the given write IO request in the data structure.
- the storage controller may be further configured to determine whether the decrement request postponed journal includes a decrement request corresponding to the content-based signature of the data page targeted by the given write IO request.
- the storage controller may be further configured to set the decrement postponed flag corresponding to the content-based signature of the data page targeted by the given write IO request in the data structure.
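- The recovery behavior described in the preceding paragraphs can be sketched as follows (Python; the tracking-table layout, attribute names and journal representation are illustrative assumptions beyond what the text specifies):

```python
def recover_decref_protection(recovered_writes, postponed_journal,
                              page_granularity, tracking_table):
    # After an event such as a failure, the volatile tracking data
    # structure is reset and rebuilt from the recovered inflight
    # write IO requests and the persistent postponed-decrement journal.
    tracking_table.clear()
    for write in recovered_writes:
        # Only small writes (segments below page granularity) are tracked.
        if len(write.data_segment) < page_granularity:
            sig = write.target_signature  # content-based signature
            entry = tracking_table.setdefault(
                sig, {"inflight_count": 0, "postponed": False})
            entry["inflight_count"] += 1
            # Re-mark decrements that were postponed before the event.
            if sig in postponed_journal:
                entry["postponed"] = True
    return tracking_table
```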
- FIG. 1 is a block diagram of an information processing system comprising a content addressable storage system configured with functionality for decrement protection of reference counts for inflight small write requests in an illustrative embodiment.
- FIG. 2 shows an example of a set of user data pages in an illustrative embodiment.
- FIG. 3 shows an example of a set of metadata pages in an illustrative embodiment.
- FIG. 4 shows an example of a decref hash table in an illustrative embodiment.
- FIGS. 5A-5C are flow diagrams of portions of a process for decrement protection of reference counts for inflight small write requests in an illustrative embodiment.
- FIGS. 6 and 7 show examples of processing platforms that may be utilized to implement at least a portion of an information processing system in illustrative embodiments.
- Illustrative embodiments will be described herein with reference to exemplary information processing systems and associated computers, servers, storage devices and other processing devices. It is to be appreciated, however, that these and other embodiments are not restricted to the particular illustrative system and device configurations shown. Accordingly, the term “information processing system” as used herein is intended to be broadly construed, so as to encompass, for example, processing systems comprising cloud computing and storage systems, as well as other types of processing systems comprising various combinations of physical and virtual processing resources. An information processing system may therefore comprise, for example, at least one data center or other cloud-based system that includes one or more clouds hosting multiple tenants that share cloud resources. Numerous other types of enterprise computing and storage systems are also encompassed by the term “information processing system” as that term is broadly used herein.
- FIG. 1 shows an information processing system 100 configured in accordance with an illustrative embodiment.
- the information processing system 100 comprises a computer system 101 that includes compute nodes 102-1, 102-2, . . . 102-N.
- the compute nodes 102 communicate over a network 104 with a content addressable storage system 105 .
- the computer system 101 is assumed to comprise an enterprise computer system or other arrangement of multiple compute nodes associated with respective users.
- the compute nodes 102 illustratively comprise respective processing devices of one or more processing platforms.
- the compute nodes 102 can comprise respective virtual machines (VMs) each having a processor and a memory, although numerous other configurations are possible.
- the compute nodes 102 can additionally or alternatively be part of cloud infrastructure such as an Amazon Web Services (AWS) system.
- Other examples of cloud-based systems that can be used to provide compute nodes 102 and possibly other portions of system 100 include Google Cloud Platform (GCP) and Microsoft Azure.
- the compute nodes 102 may be viewed as examples of what are more generally referred to herein as “host devices” or simply “hosts.” Such host devices are configured to write data to and read data from the content addressable storage system 105 .
- the compute nodes 102 and the content addressable storage system 105 may be implemented on a common processing platform, or on separate processing platforms. A wide variety of other types of host devices can be used in other embodiments.
- the compute nodes 102 in some embodiments illustratively provide compute services such as execution of one or more applications on behalf of each of one or more users associated with respective ones of the compute nodes 102 .
- Compute and/or storage services may be provided for users under a platform-as-a-service (PaaS) model, although it is to be appreciated that numerous other cloud infrastructure arrangements could be used. Also, illustrative embodiments can be implemented outside of the cloud infrastructure context, as in the case of a stand-alone enterprise-based computing and storage system.
- Such users of the storage system 105 in some cases are referred to herein as respective “clients” of the storage system 105 .
- the network 104 is assumed to comprise a portion of a global computer network such as the Internet, although other types of networks can be part of the network 104 , including a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, a cellular network, a wireless network such as a WiFi or WiMAX network, or various portions or combinations of these and other types of networks.
- the network 104 in some embodiments therefore comprises combinations of multiple different types of networks each comprising processing devices configured to communicate using Internet Protocol (IP) or other communication protocols.
- some embodiments may utilize one or more high-speed local networks in which associated processing devices communicate with one another utilizing Peripheral Component Interconnect express (PCIe) cards of those devices, and networking protocols such as InfiniBand, Gigabit Ethernet or Fibre Channel.
- Numerous alternative networking arrangements are possible in a given embodiment, as will be appreciated by those skilled in the art.
- the content addressable storage system 105 is accessible to the compute nodes 102 of the computer system 101 over the network 104 .
- the content addressable storage system 105 comprises a plurality of storage devices 106 , an associated storage controller 108 , and an associated cache 109 .
- the storage devices 106 are configured to store metadata pages 110 and user data pages 112 , and may also store additional information not explicitly shown such as checkpoints and write journals.
- the metadata pages 110 and the user data pages 112 are illustratively stored in respective designated metadata and user data areas of the storage devices 106 . Accordingly, metadata pages 110 and user data pages 112 may be viewed as corresponding to respective designated metadata and user data areas of the storage devices 106 .
- a given “page” as the term is broadly used herein should not be viewed as being limited to any particular range of fixed sizes.
- a page size of 8 kilobytes (KB) is used, but this is by way of example only and can be varied in other embodiments.
- page sizes of 4 KB, 16 KB or other values can be used.
- illustrative embodiments can utilize any of a wide variety of alternative paging arrangements for organizing the metadata pages 110 and the user data pages 112 .
- the user data pages 112 are part of a plurality of logical units (LUNs) configured to store files, blocks, objects or other arrangements of data, each also generally referred to herein as a “data item,” on behalf of users associated with compute nodes 102 .
- Each such LUN may comprise particular ones of the above-noted pages of the user data area.
- the user data stored in the user data pages 112 can include any type of user data that may be utilized in the system 100 .
- the term “user data” herein is therefore also intended to be broadly construed.
- the storage devices 106 comprise solid state drives (SSDs). Such SSDs are implemented using non-volatile memory (NVM) devices such as flash memory. Other types of NVM devices that can be used to implement at least a portion of the storage devices 106 include non-volatile random access memory (NVRAM), phase-change RAM (PC-RAM) and magnetic RAM (MRAM). Various combinations of multiple different types of NVM devices may also be used.
- a given storage system as the term is broadly used herein can include a combination of different types of storage devices, as in the case of a multi-tier storage system comprising a flash-based fast tier and a disk-based capacity tier.
- each of the fast tier and the capacity tier of the multi-tier storage system comprises a plurality of storage devices with different types of storage devices being used in different ones of the storage tiers.
- the fast tier may comprise flash drives while the capacity tier comprises hard disk drives.
- the particular storage devices used in a given storage tier may be varied in other embodiments, and multiple distinct storage device types may be used within a single storage tier.
- The term “storage device” as used herein is intended to be broadly construed, so as to encompass, for example, flash drives, solid state drives, hard disk drives, hybrid drives or other types of storage devices.
- the content addressable storage system 105 illustratively comprises a scale-out all-flash storage array such as an XtremIO™ storage array from Dell EMC of Hopkinton, Mass.
- Other types of storage arrays, including by way of example VNX® and Symmetrix VMAX® storage arrays from Dell EMC, can be used to implement storage systems in other embodiments.
- The term “storage system” as used herein is therefore intended to be broadly construed, and should not be viewed as being limited to content addressable storage systems or flash-based storage systems.
- a given storage system as the term is broadly used herein can comprise, for example, network-attached storage (NAS), storage area networks (SANs), direct-attached storage (DAS) and distributed DAS, as well as combinations of these and other storage types, including software-defined storage.
- the content addressable storage system 105 in the embodiment of FIG. 1 is configured to generate hash metadata providing a mapping between content-based digests of respective ones of the user data pages 112 and corresponding physical locations of those pages in the user data area.
- Content-based digests generated using hash functions are also referred to herein as “hash digests.”
- Such hash digests or other types of content-based digests are examples of what are more generally referred to herein as “content-based signatures” of the respective user data pages 112 .
- the hash metadata generated by the content addressable storage system 105 is illustratively stored as metadata pages 110 in the metadata area.
- the generation and storage of the hash metadata is assumed to be performed under the control of the storage controller 108 .
- the hash metadata may be stored in the metadata area in a plurality of entries corresponding to respective buckets each comprising multiple cache lines, although other arrangements can be used.
- the hash metadata may also be loaded into cache 109 .
- Each of the metadata pages 110 characterizes a plurality of the user data pages 112 .
- a given set of user data pages 200 representing a portion of the user data pages 112 illustratively comprises a plurality of user data pages denoted User Data Page 1, User Data Page 2, . . . User Data Page n.
- Each of the user data pages in this example is characterized by a LUN identifier, an offset and a content-based signature.
- the content-based signature is generated as a hash function of content of the corresponding user data page.
- Illustrative hash functions that may be used to generate the content-based signature include SHA1, where SHA denotes Secure Hashing Algorithm, or other SHA protocols known to those skilled in the art.
- the content-based signature is utilized to determine the location of the corresponding user data page within the user data area of the storage devices 106 of the content addressable storage system 105 .
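- As a concrete illustration, a 20-byte SHA1 digest of an 8 KB page can serve as its content-based signature. The following is a hedged Python sketch; the function name is illustrative:

```python
import hashlib


def content_based_signature(page_bytes: bytes) -> bytes:
    # SHA1 over the page content yields the 20-byte hash digest used
    # as the page's content-based signature.
    return hashlib.sha1(page_bytes).digest()


page = b"\x00" * 8192  # an 8 KB user data page
signature = content_based_signature(page)
assert len(signature) == 20
```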
- Each of the metadata pages 110 in the present embodiment is assumed to have a signature that is not content-based.
- the metadata page signatures may be generated using hash functions or other signature generation algorithms that do not utilize content of the metadata pages as input to the signature generation algorithm.
- each of the metadata pages is assumed to characterize a different set of the user data pages.
- FIG. 3 shows a given set of metadata pages 300 representing a portion of the metadata pages 110 in an illustrative embodiment.
- the metadata pages in this example include metadata pages denoted Metadata Page 1, Metadata Page 2, . . . Metadata Page m, having respective signatures denoted Signature 1, Signature 2, . . . Signature m.
- Each such metadata page characterizes a different set of n user data pages.
- the characterizing information in each metadata page can include the LUN identifiers, offsets and content-based signatures for each of the n user data pages that are characterized by that metadata page.
- FIGS. 2 and 3 are examples only, and numerous alternative user data and metadata page configurations can be used in other embodiments.
- the content addressable storage system 105 in the FIG. 1 embodiment is implemented as at least a portion of a clustered storage system and includes a plurality of storage nodes 115 each comprising a corresponding subset of the storage devices 106 .
- Other clustered storage system arrangements comprising multiple storage nodes can be used in other embodiments.
- a given clustered storage system may include not only storage nodes 115 but also additional storage nodes 120 coupled to network 104 .
- the additional storage nodes 120 may be part of another clustered storage system of the system 100 .
- Each of the storage nodes 115 and 120 of the system 100 is assumed to be implemented using at least one processing device comprising a processor coupled to a memory.
- the storage controller 108 of the content addressable storage system 105 is implemented in a distributed manner so as to comprise a plurality of distributed storage controller components implemented on respective ones of the storage nodes 115 of the content addressable storage system 105 .
- the storage controller 108 is therefore an example of what is more generally referred to herein as a “distributed storage controller.” In subsequent description herein, the storage controller 108 may be more particularly referred to as a distributed storage controller.
- Each of the storage nodes 115 in this embodiment further comprises a set of processing modules configured to communicate over one or more networks with corresponding sets of processing modules on other ones of the storage nodes 115 .
- the sets of processing modules of the storage nodes 115 collectively comprise at least a portion of the distributed storage controller 108 of the content addressable storage system 105 .
- the distributed storage controller 108 in the present embodiment is configured to implement functionality for decrement protection of reference counts for inflight small write requests in the content addressable storage system 105 .
- the storage devices 106 are configured to store user data pages 200 and metadata pages 300 in respective user data page and metadata page areas.
- Each of the user data pages 200 comprises a logical address and a content-based signature derived from content of that data page.
- each of the metadata pages 300 characterizes a plurality of the user data pages 200 and associates the content-based signatures of those user data pages with respective physical blocks in the storage devices 106 .
- the modules of the distributed storage controller 108 in the present embodiment more particularly comprise different sets of processing modules implemented on each of the storage nodes 115 .
- the set of processing modules of each of the storage nodes 115 comprises at least a control module 108 C, a data module 108 D and a routing module 108 R.
- the distributed storage controller 108 further comprises one or more management (“MGMT”) modules 108 M.
- In some embodiments, only a single one of the storage nodes 115 may include a management module 108 M. It is also possible that management modules 108 M may be implemented on each of at least a subset of the storage nodes 115 .
- Communication links may be established between the various processing modules of the distributed storage controller 108 using well-known communication protocols such as IP, Transmission Control Protocol (TCP), and remote direct memory access (RDMA).
- respective sets of IP links used in data transfer and corresponding messaging could be associated with respective different ones of the routing modules 108 R.
- Ownership of a user data logical address space within the content addressable storage system 105 is illustratively distributed among the control modules 108 C.
- the cache 109 of storage system 105 in the FIG. 1 embodiment includes write cache entries 109-1, 109-2, . . . 109-N, which store incoming IO request data for later destaging to storage devices 106 .
- Cache 109 may illustratively comprise volatile memory such as, e.g., random access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), or any other kind of volatile memory.
- cache 109 may additionally or alternatively comprise any non-volatile memory as described above with respect to storage devices 106 .
- cache 109 may support a variety of operations or functions of storage system 105 including, for example, write cache, read cache, temporary metadata storage, or other similar operations.
- cache 109 may be included as a component of storage controller 108 .
- the caches 109 of each storage node 115 may operate together as a single cache 109 of the content addressable storage system 105 where the components of a given storage node 115 may access any portion of the cache 109 including those portions included as components of other storage nodes 115 .
- Functionality for decrement protection of reference counts for inflight small write requests is illustratively implemented using distributed processing modules such as the processing modules 108 C, 108 D, 108 R and 108 M of the distributed storage controller 108 .
- the management module 108 M of the storage controller 108 may include decref protection logic 116 that engages corresponding control logic instances in all of the control modules 108 C and routing modules 108 R in order to implement processes for decrement protection of reference counts for inflight small write requests within the system 100 , as will be described in more detail below in conjunction with FIGS. 5A-5C .
- the content addressable storage system 105 comprises an XtremIO™ storage array suitably modified to incorporate techniques for decrement protection of reference counts for inflight small write requests as disclosed herein.
- the control modules 108 C, data modules 108 D and routing modules 108 R of the distributed storage controller 108 illustratively comprise respective C-modules, D-modules and R-modules of the XtremIO™ storage array.
- the one or more management modules 108 M of the distributed storage controller 108 in such arrangements illustratively comprise decref protection logic 116 , although other types and arrangements of system-wide management modules can be used in other embodiments.
- functionality for decrement protection of reference counts for inflight small write requests in some embodiments is implemented under the control of decref protection logic 116 of the distributed storage controller 108 , utilizing the C-modules, D-modules and R-modules of the XtremIO™ storage array.
- each user data page typically has a size of 8 KB and its content-based signature is a 20-byte signature generated using an SHA1 hash function. Also, each page has a LUN identifier and an offset, and so is characterized by <lun_id, offset, signature>.
- the content-based signature in the present example comprises a content-based digest of the corresponding data page.
- a content-based digest is more particularly referred to as a “hash digest” of the corresponding data page, as the content-based signature is illustratively generated by applying a hash function such as SHA1 to the content of that data page.
- the full hash digest of a given data page is given by the above-noted 20-byte signature.
- the hash digest may be represented by a corresponding “hash handle,” which in some cases may comprise a particular portion of the hash digest.
- the hash handle illustratively maps on a one-to-one basis to the corresponding full hash digest within a designated cluster boundary or other specified storage resource boundary of a given storage system.
- the hash handle provides a lightweight mechanism for uniquely identifying the corresponding full hash digest and its associated data page within the specified storage resource boundary.
- the hash digest and hash handle are both considered examples of “content-based signatures” as that term is broadly used herein.
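- For illustration only, a hash handle might be derived as a fixed-length portion of the full digest. The following is a hedged sketch; the portion chosen and its length are assumptions, since the text only states that a handle may comprise a particular portion of the digest:

```python
def hash_handle(hash_digest: bytes, handle_len: int = 6) -> bytes:
    # A shorter handle that maps one-to-one to the full 20-byte digest
    # within a designated cluster boundary or other storage resource
    # boundary; the storage system must maintain that mapping.
    assert len(hash_digest) == 20
    return hash_digest[:handle_len]  # assumed: leading bytes of digest
```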
- storage controller components in an XtremIO™ storage array illustratively include C-module, D-module and R-module components.
- separate instances of such components can be associated with each of a plurality of storage nodes in a clustered storage system implementation.
- the distributed storage controller in this example is configured to group consecutive pages into page groups, to arrange the page groups into slices, and to assign the slices to different ones of the C-modules.
- the D-module allows a user to locate a given user data page based on its signature.
- Each metadata page also has a size of 8 KB and includes multiple instances of the <lun_id, offset, signature> for respective ones of a plurality of the user data pages.
- Such metadata pages are illustratively generated by the C-module but are accessed using the D-module based on a metadata page signature.
- the metadata page signature in this embodiment is a 20-byte signature but is not based on the content of the metadata page. Instead, the metadata page signature is generated based on an 8-byte metadata page identifier that is a function of the LUN identifier and offset information of that metadata page.
- the metadata page signature is more particularly computed using a signature generation algorithm that generates the signature to include a hash of the 8-byte metadata page identifier, one or more ASCII codes for particular predetermined characters, as well as possible additional fields.
- the last bit of the metadata page signature may always be set to a particular logic value so as to distinguish it from the user data page signature in which the last bit may always be set to the opposite logic value.
- the metadata page signature is used to retrieve the metadata page via the D-module.
- This metadata page will include the <lun_id, offset, signature> for the user data page if the user data page exists.
- the signature of the user data page is then used to retrieve that user data page, also via the D-module.
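- A hedged Python sketch of this metadata page signature scheme follows; the 8-byte identifier layout, the omitted additional fields and the exact logic values are assumptions beyond what the text specifies:

```python
import hashlib
import struct


def metadata_page_signature(lun_id: int, offset: int) -> bytes:
    # An 8-byte metadata page identifier that is a function of the LUN
    # identifier and offset is hashed to form a 20-byte signature that
    # is not based on the metadata page's content.
    page_id = struct.pack(">II", lun_id, offset)  # assumed 8-byte layout
    sig = bytearray(hashlib.sha1(page_id).digest())
    # Force the last bit to one logic value so metadata page signatures
    # are distinguishable from user data page signatures, whose last
    # bit would carry the opposite value (values assumed here).
    sig[-1] |= 0x01
    return bytes(sig)
```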
- Additional details regarding the control modules 108 C, data modules 108 D, routing modules 108 R and management module(s) 108 M of distributed storage controller 108 can be found in U.S. Pat. No. 9,104,326, entitled “Scalable Block Data Storage Using Content Addressing,” which is incorporated by reference herein.
- Alternative arrangements of these and other storage node processing modules of a distributed storage controller in a content addressable storage system can be used in other embodiments.
- Each of the storage nodes 115 of the storage system 105 comprises a set of processing modules configured to communicate over one or more networks with corresponding sets of processing modules on other ones of the storage nodes.
- a given such set of processing modules implemented on a particular storage node illustratively includes at least one control module 108 C, at least one data module 108 D and at least one routing module 108 R, and possibly a management module 108 M.
- These sets of processing modules of the storage nodes collectively comprise at least a portion of the distributed storage controller 108 .
- The term “write request” as used herein is intended to be broadly construed, so as to encompass one or more IO operations directing that at least one data item of a storage system be written to in a particular manner.
- a given write request is illustratively received in a storage system from a host device.
- a write request is received in a distributed storage controller of the storage system, and directed from one processing module to another processing module of the distributed storage controller. More particularly, in the embodiment to be described below in conjunction with FIGS. 5A-5C , a received write request is directed from a routing module of the distributed storage controller to a particular control module of the distributed storage controller.
- the write request is stored in the write cache portion of cache 109 , acknowledged, and subsequently destaged at a later time to a persistent data storage location on one or more of storage devices 106 .
- Other arrangements for receiving and processing write requests from one or more host devices can be used.
- Communications between control modules 108 C and routing modules 108 R of the distributed storage controller 108 may be performed in a variety of ways.
- An example embodiment is implemented in the XtremIO™ context, and the C-modules, D-modules and R-modules of the storage nodes 115 in this context are assumed to be configured to communicate with one another over a high-speed internal network such as an InfiniBand network.
- the C-modules, D-modules and R-modules coordinate with one another to accomplish various IO processing tasks.
- the logical block addresses or LBAs of a logical layer of the storage system 105 correspond to respective physical blocks of a physical layer of the storage system 105 .
- the user data pages of the logical layer are organized by LBA and have reference via respective content-based signatures to particular physical blocks of the physical layer.
- Each of the physical blocks has an associated reference count 114 that is maintained within the storage system, for example, in storage devices 106 .
- Reference counts 114 may alternatively be stored or maintained in storage controller 108 or other portions of content addressable storage system 105 .
- the reference count 114 for a given physical block indicates the number of logical blocks that point to that same physical block.
- In releasing logical address space, a dereferencing operation is generally executed for each of the LBAs being released. More particularly, the reference count 114 of the corresponding physical block is decremented. A reference count 114 of zero or another predetermined value indicates that there are no longer any logical blocks that reference the corresponding physical block, and so that physical block can be released.
- the process is assumed to be carried out by the processing modules 108 C, 108 D, 108 R and 108 M. It is further assumed that the control modules 108 C temporarily store data pages in the cache 109 of the content addressable storage system 105 and later destage the temporarily stored data pages via the data modules 108 D in accordance with write requests received from host devices via the routing modules 108 R.
- the host devices illustratively comprise respective ones of the compute nodes 102 of the computer system 101 .
- the write requests from the host devices identify particular data pages to be written in the storage system 105 by their corresponding logical addresses each comprising a LUN ID and an offset.
- a given one of the content-based signatures illustratively comprises a hash digest of the corresponding data page, with the hash digest being generated by applying a hash function to the content of that data page.
- the hash digest may be uniquely represented within a given storage resource boundary by a corresponding hash handle.
- the storage system 105 utilizes a two-level mapping process to map logical block addresses to physical block addresses.
- the first level of mapping uses an address-to-hash (“A2H”) table and the second level of mapping uses a hash-to-physical (“H2P”) table, sometimes known as a hash metadata (“HMD”) table, with the A2H and H2P tables corresponding to respective logical and physical layers of the content-based signature mapping within the storage system 105 .
- the first level of mapping using the A2H table associates logical addresses of respective data pages with respective content-based signatures of those data pages. This is also referred to as logical layer mapping.
- the second level of mapping using the H2P table associates respective ones of the content-based signatures with respective physical storage locations in one or more of the storage devices 106 . This is also referred to as physical layer mapping.
- For a given write request, both of the corresponding A2H and H2P tables are updated in conjunction with the processing of that write request.
- the A2H table may be updated when the page data for the write request is stored in cache 109 and the H2P table may be updated when the page data is hardened to storage devices 106 during a destaging process.
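- A minimal sketch of this two-level lookup (Python; plain dictionaries stand in for the A2H and H2P tables, which is an assumption made for illustration):

```python
a2h = {}  # logical address (lun_id, offset) -> content-based signature
h2p = {}  # content-based signature -> physical location


def record_write(lun_id, offset, signature):
    # A2H is updated when the page data enters the write cache.
    a2h[(lun_id, offset)] = signature


def record_destage(signature, physical_location):
    # H2P is updated when the page is hardened to the storage devices.
    h2p[signature] = physical_location


def physical_location_of(lun_id, offset):
    signature = a2h[(lun_id, offset)]  # logical layer mapping
    return h2p[signature]              # physical layer mapping
```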
- The A2H and H2P tables are examples of what are more generally referred to herein as “mapping tables” of respective first and second distinct types. Other types and arrangements of mapping tables or other content-based signature mapping information may be used in other embodiments.
- the reference counts 114 mentioned above are illustratively maintained for respective physical blocks in the storage devices 106 and each such reference count 114 indicates for its corresponding physical block the number of logical blocks that point to that same physical block. When all logical block references to a given physical block are removed, the reference count 114 for that physical block becomes zero or another predetermined value, and its capacity can be released.
- a given “dereferencing operation” as that term is broadly used herein is intended to encompass decrementing of a reference count 114 associated with a physical block.
- the storage controller 108 makes the released logical address space available to users, executes dereferencing operations for respective ones of the physical blocks corresponding to the released logical address space, and releases any physical capacity for which the corresponding reference counts 114 reach zero or another predetermined value.
- the logical address space illustratively comprises one or more ranges of logical block addresses or LBAs each comprising a LUN ID and an offset.
- LBAs can identify a particular one of the user data pages 200 .
- the LBAs each correspond to one or more physical blocks in the storage devices 106 .
- Other types of LBAs and logical address spaces can be used in other embodiments.
- the term “logical address” as used herein is therefore intended to be broadly construed.
- a given such logical address space may be released responsive to deletion of a corresponding storage volume, snapshot or any other arrangement of data stored in the storage system 105 .
- Other conditions within the storage system 105 can also result in release of logical address space including, for example, snapshot merges, write shadows, or other conditions.
- the storage controller 108 illustratively makes the released logical address space available to users in order of released logical address. More particularly, the storage controller 108 can make the released logical address space available to users in order of released logical address by making each of its corresponding released logical addresses immediately available responsive to that logical address being released. For example, release of one or more LBAs or a range of LBAs by one or more users can result in those LBAs being made available to one or more other users in the same order in which the LBAs are released.
- the corresponding physical blocks may be released in a different order, through accumulation and reordered execution of dereferencing operations as described in the above-cited U.S. patent application Ser. No. 15/884,577.
- the storage controller 108 in some embodiments accumulates multiple dereferencing operations for each of at least a subset of the metadata pages 300 , and executes the accumulated dereferencing operations for a given one of the metadata pages 300 responsive to the accumulated dereferencing operations for the given metadata page reaching a threshold number of dereferencing operations.
- In executing the accumulated dereferencing operations for the physical blocks, execution of each of the dereferencing operations more particularly involves decrementing a reference count 114 of a corresponding one of the physical blocks, and releasing the physical block responsive to the reference count 114 reaching a designated number, such as zero. Moreover, in executing the accumulated dereferencing operations for the physical blocks, at least a subset of the accumulated dereferencing operations are first reordered into an order that more closely matches a physical layout of the corresponding physical blocks on the storage devices 106 . The reordered dereferencing operations are then executed in that order.
- the physical blocks may be released in the storage system 105 in a different order than that in which their corresponding logical blocks are released.
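- A hedged sketch of this accumulate-reorder-execute pattern (Python; the threshold value, the block attributes and the use of a device offset as the reordering key are illustrative assumptions):

```python
DECREF_THRESHOLD = 64  # assumed batch size; not specified in the text

pending_decrefs = []  # accumulated dereferencing operations


def accumulate_decref(block):
    """Queue a dereferencing operation; execute the batch at threshold."""
    pending_decrefs.append(block)
    if len(pending_decrefs) >= DECREF_THRESHOLD:
        # Reorder to more closely match the physical layout on the
        # storage devices before executing.
        for blk in sorted(pending_decrefs, key=lambda b: b.device_offset):
            blk.ref_count -= 1
            if blk.ref_count == 0:
                blk.release()  # hypothetical capacity-release hook
        pending_decrefs.clear()
```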
- The storage controller 108 , illustratively comprising the modules 108 C, 108 R and 108 M as illustrated in FIG. 1 as well as additional modules such as data modules 108 D, is configured to implement functionality for decrement protection of reference counts for inflight small write requests in the content addressable storage system 105 .
- Execution of a small write IO request received in the storage system 105 from a host device illustratively involves two stages: a synchronous part, in which the new data segment is stored in the write cache and the request is acknowledged, and an asynchronous destaging part.
- The construction and hardening of the new data page is done in the destaging stage by combining the target data page, e.g., the data page located on storage devices 106 at the mapped location corresponding to the content-based signature, with the new data segment stored in the write cache during the synchronous part.
- the content-based signature of the target data page may be determined, for example, from the address specified in the received small write IO request, e.g., a LUN identifier plus an offset, using the A2H table.
- Since the target data page is combined with the new data segment during destaging of the write cache for the small write IO request, it is important that the target data page is not removed until the new data page generated from the combined target data page and new data segment has been hardened in the storage devices 106 .
- the reference count of the target data page should not be decremented to zero or another predetermined value while a new data segment targeting the data page is currently pending destaging, e.g., an inflight small write IO request.
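- The destaging step amounts to a read-modify-write at page granularity, sketched below (Python; the 8 KB page size comes from the example above, and the function name and offset parameter are illustrative):

```python
PAGE_SIZE = 8192  # the 8 KB page granularity used in the example


def destage_small_write(target_page: bytes, segment: bytes,
                        seg_offset: int) -> bytes:
    # Combine the target data page read from the storage devices with
    # the new data segment held in the write cache, producing the new
    # data page that is then hardened to persistent storage.
    assert len(segment) < PAGE_SIZE  # a "small" write by definition
    assert seg_offset + len(segment) <= PAGE_SIZE
    new_page = bytearray(target_page)
    new_page[seg_offset:seg_offset + len(segment)] = segment
    return bytes(new_page)
```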
- every data page stored in storage devices 106 has a reference count 114 that counts the number of references to the page in the A2H mapping.
- a page is removed when its reference count 114 is decremented to zero or another predetermined value.
- the storage controller 108 is responsible for the update of the reference counts 114 by sending increment (“Incref”) and decrement (“Decref”) commands to the storage devices 106 .
- Page reference counts 114 are generally decremented when overwriting an address and when volumes are deleted.
- For example, a logical volume management (LVM) component may detect that the content-based signature was fully shadowed by all the snapshots that were originated from an origin snapshot in a snapshot tree, and consequently initiate a decrement request to the reference count 114 corresponding to this content-based signature. If such a decrement command is executed and reduces the reference count 114 of the target data page to zero or another predetermined value, the target data page may be deleted.
- One solution that prevents the reference count of the target data page from decrementing to zero or another predetermined value while there is an inflight small write IO request targeting the data page is to increment the reference count of the target data page for any inflight small write IO requests that target the data page.
- this solution may waste a significant amount of processing resources since such an increment operation on the reference count of the target data page would be performed for every small write IO request, regardless of whether the reference count of the target data page will be decremented by the storage controller 108 while the destaging of the small write IO request is pending.
- performing an increment operation on the reference count of the target data page for each inflight small write IO request may also increase IO latency as an additional operation must be performed during the synchronous part of each IO request process and may be performed on a reference count located at a different node thus wasting network resources.
- decref protection logic 116 addresses these issues by preventing the target data page from being deleted before all related small write IO request transactions targeting that data page are completed, e.g., by persisting a new data page to the storage devices 106 .
- the decref protection logic 116 postpones a Decref request issued by the storage controller 108 for a data page associated with a content-based signature if the data page is referenced by any inflight small write IO request transactions, until all corresponding inflight small write IO request transactions are completed.
- only the first Decref transaction for the data page associated with the content-based signature may be postponed.
- any subsequent Decref transactions may be executed normally. For example, since postponing even a single Decref transaction will prevent the reference count from being decremented to zero or another predetermined value and the target data page from being deleted, only one Decref transaction need be postponed to ensure that the target data page does not get deleted.
- all Decref transactions for the data page associated with the content-based signature may be postponed.
- no Decref transactions for a target data page may be allowed to proceed when an inflight write IO request targets that data page.
- a given instance of storage controller 108 comprises decref protection logic 116 , an associated decref hash table 400 , and an associated decref journal 118 .
- Decref protection logic 116 implements a process for decrement protection of reference counts for data pages targeted by inflight small write IO requests that are smaller in size than the page granularity of the system.
- the decref protection logic 116 may postpone a Decref transaction that would otherwise decrement the reference count 114 for a data page targeted by a small write IO request to zero or another predetermined value.
- Decref hash table 400 stores an inflight write count 404 and a decref postponed flag 406 corresponding to a content-based signature 402 , e.g., hash digest or hash handle, associated with a data page targeted by an inflight small write IO request.
- the content-based signature 402 may be used as an index into decref hash table 400 to access the inflight write count 404 and decref postponed flag 406 corresponding to the target data page.
- decref hash table 400 may be stored in a volatile memory of controller 108 , in cache 109 , or in other storage of system 105 .
- While decref hash table 400 is described as a hash table in the illustrative embodiment, any other data structure may be used to store the content-based signature 402, inflight write count 404, and decref postponed flag 406.
- Inflight write count 404 is a counter that reflects the number of inflight small write IO request transactions that are overwriting the target data page.
- Decref postponed flag 406 is a flag indicating whether or not a Decref transaction was postponed.
- Decref journal 118 is a data structure that is stored persistently, for example, in NVRAM of storage system 105, in storage devices 106, or in any other persistent storage associated with storage system 105, and is configured to store a content-based signature for a postponed Decref transaction.
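- These structures might be represented as follows (a hedged Python sketch; the field names mirror the reference numerals above, and the in-memory dictionary and list are illustrative stand-ins for the actual table and persistent journal):

```python
from dataclasses import dataclass


@dataclass
class DecrefEntry:
    inflight_write_count: int = 0   # inflight write count 404
    decref_postponed: bool = False  # decref postponed flag 406


# Keyed by content-based signature 402 (hash digest or hash handle).
decref_hash_table: dict[bytes, DecrefEntry] = {}

# Persistent journal of signatures for postponed Decref transactions.
decref_journal: list[bytes] = []
```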
- An example process that occurs when a small write IO request is received may be implemented as follows:
- The content-based signature 402, e.g., hash digest, hash handle, or other content-based signature, of the data page targeted by the small write IO request is used as an index into the decref hash table 400: the corresponding inflight write count 404 is incremented when the small write IO request is received, and is decremented upon completion of the request. When the inflight write count 404 is decremented to a predetermined value such as zero, any postponed Decref request for that content-based signature is executed.
- When a Decref request is issued by storage controller 108, the Decref request is either executed or postponed according to the following logic: if the inflight write count 404 for the corresponding content-based signature 402 is greater than zero and the decref postponed flag 406 is not set, the Decref request is postponed, its content-based signature is written to the decref journal 118, and the decref postponed flag 406 is set; otherwise, the Decref request is executed normally.
- additional Decref requests may also be written to the decref journal 118 , e.g., accumulated for later execution in decrementing the reference count 114 of the target data page corresponding to the content-based signature 402 .
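- Putting the pieces together, a hedged sketch of this runtime logic (Python, building on the structures sketched above; the handler names and the zero thresholds are illustrative assumptions):

```python
def decrement_reference_count(signature):
    """Placeholder for issuing the actual Decref to the storage devices."""


def on_small_write_received(signature):
    entry = decref_hash_table.setdefault(signature, DecrefEntry())
    entry.inflight_write_count += 1


def on_decref_request(signature):
    entry = decref_hash_table.get(signature)
    if entry and entry.inflight_write_count > 0 and not entry.decref_postponed:
        # Postpone the first Decref: persist it and set the flag.
        entry.decref_postponed = True
        decref_journal.append(signature)
    else:
        decrement_reference_count(signature)  # execute normally


def on_small_write_completed(signature):
    entry = decref_hash_table[signature]
    entry.inflight_write_count -= 1
    if entry.inflight_write_count == 0:
        if entry.decref_postponed:
            # All inflight small writes are done: execute the postponed
            # Decref and clear the persisted journal entry.
            decref_journal.remove(signature)
            decrement_reference_count(signature)
        del decref_hash_table[signature]
```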
- the decref protection logic 116 described above guarantees that a data page is not removed (i.e., its reference count 114 is not decremented to zero or another predetermined value) until all inflight small write IO request transactions referencing it are successfully completed, and thus guarantees the consistency of the second stage of the IO flow.
- Since the protection occurs in response to a Decref request instead of for each IO request, waste of processing resources may be reduced and IO latency may be preserved.
- The above-described functionality of the storage controller 108 for decrement protection of reference counts for inflight small write requests is carried out under the control of the decref protection logic 116 , operating in conjunction with corresponding control 108 C and routing 108 R modules, to access the data modules 108 D.
- the modules 108 C, 108 D, 108 R and 108 M of the distributed storage controller 108 therefore collectively implement an illustrative process for decrement protection of reference counts for inflight small write requests of content addressable storage system 105 .
- The arrangement of storage controller processing modules 108C, 108D, 108R and 108M as shown in the FIG. 1 embodiment is presented by way of example only. Numerous alternative arrangements of processing modules of a distributed storage controller may be used to implement functionality for decrement protection of reference counts for inflight small write requests in a clustered storage system in other embodiments.
- The storage controller 108 in other embodiments can be implemented at least in part within the computer system 101, in another system component, or as a stand-alone component coupled to the network 104.
- The computer system 101 and content addressable storage system 105 in the FIG. 1 embodiment are assumed to be implemented using at least one processing platform, each such platform comprising one or more processing devices, each having a processor coupled to a memory.
- Such processing devices can illustratively include particular arrangements of compute, storage and network resources.
- The processing devices in some embodiments are implemented at least in part utilizing virtual resources such as VMs or Linux containers (LXCs), or combinations of both, as in an arrangement in which Docker containers or other types of LXCs are configured to run on VMs.
- The storage controller 108 can be implemented in the form of one or more LXCs running on one or more VMs. Other arrangements of one or more processing devices of a processing platform can be used to implement the storage controller 108. Other portions of the system 100 can similarly be implemented using one or more processing devices of at least one processing platform.
- The computer system 101 and the content addressable storage system 105 may be implemented on respective distinct processing platforms, although numerous other arrangements are possible. For example, in some embodiments, at least portions of the computer system 101 and the content addressable storage system 105 are implemented on the same processing platform.
- The content addressable storage system 105 can therefore be implemented at least in part within at least one processing platform that implements at least a subset of the compute nodes 102.
- The term “processing platform” as used herein is intended to be broadly construed so as to encompass, by way of illustration and without limitation, multiple sets of processing devices and associated storage systems that are configured to communicate over one or more networks.
- Distributed implementations of the system 100 are possible, in which certain components of the system reside in one data center in a first geographic location while other components of the cluster reside in one or more other data centers in one or more other geographic locations that are potentially remote from the first geographic location.
- Numerous other distributed implementations of one or both of the computer system 101 and the content addressable storage system 105 are possible. Accordingly, the content addressable storage system 105 can also be implemented in a distributed manner across multiple data centers.
- Different arrangements of system components such as computer system 101, compute nodes 102, network 104, content addressable storage system 105, storage devices 106, storage controller 108 and storage nodes 115 and 120 can be used in other embodiments.
- FIGS. 5A-5C more particularly show example processes for decrement protection of reference counts for inflight small write requests implemented in a storage system such as the content addressable storage system 105 of the FIG. 1 embodiment.
- The content addressable storage system 105 may comprise a scale-out all-flash storage array such as an XtremIO™ storage array.
- A given such storage array can be configured to provide storage redundancy using well-known RAID techniques such as RAID 5 or RAID 6, although other storage redundancy configurations can be used.
- The term “storage system” as used herein is therefore intended to be broadly construed, and should not be viewed as being limited to content addressable storage systems or flash-based storage systems.
- The storage devices of such a storage system illustratively implement a plurality of LUNs configured to store files, blocks, objects or other arrangements of data.
- A given storage system can be implemented using at least one processing platform, each such platform comprising one or more processing devices, each having a processor coupled to a memory.
- Such processing devices can illustratively include particular arrangements of compute, storage and network resources.
- The processing devices in some embodiments are implemented at least in part utilizing virtual resources such as VMs or LXCs, or combinations of both, as in an arrangement in which Docker containers or other types of LXCs are configured to run on VMs.
- Components of a distributed storage controller can each be implemented in the form of one or more LXCs running on one or more VMs.
- Other arrangements of one or more processing devices of a processing platform can be used to implement a distributed storage controller and/or its components.
- Other portions of the information processing system 100 can similarly be implemented using one or more processing devices of at least one processing platform.
- The term “processing platform” as used herein is intended to be broadly construed so as to encompass, by way of illustration and without limitation, multiple sets of processing devices and associated storage systems that are configured to communicate over one or more networks.
- The process as shown in FIG. 5A includes steps 502 through 508 and illustrates a synchronous portion of the small write request, e.g., the temporary storage of the data segment associated with the small write request in cache 109.
- The process as shown in FIG. 5B includes steps 510 through 518 and illustrates the functionality that occurs when a decref request is received.
- The process as shown in FIG. 5C includes steps 520 through 532 and illustrates an asynchronous portion of the small write request, e.g., the destaging of the data segment associated with the small write request from cache 109 into storage devices 106.
- The processes of FIGS. 5A-5C are suitable for use in the system 100 but are more generally applicable to other types of information processing systems, each comprising one or more storage systems.
- The steps are illustratively performed by cooperative interaction of control logic instances of processing modules of a distributed storage controller.
- A given such storage controller can therefore comprise a distributed storage controller implemented in the manner illustrated in FIGS. 1-4.
- Small write IO requests are received by storage controller 108, for example, from computer system 101 or other host devices.
- The small write IO requests may include write requests for data segments that are smaller than the page granularity of the storage devices 106.
- The storage controller 108 may generate one or more IO threads to service the small write IO requests.
- The content-based signatures of data pages targeted by the received small write IO requests may be determined, for example, as described above.
- The IO threads may store the data segments included in the small write IO requests in cache 109.
- In step 508, the inflight write count 404 stored in decref hash table 400 may be incremented for the content-based signatures corresponding to any data pages targeted by the small write IO requests.
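- A hedged sketch of this synchronous portion, reusing the hypothetical structures modeled above, might look as follows; compute_signature and the write_cache dictionary are stand-ins for the content-based signature generation and write cache 109 described elsewhere, not the actual XtremIO™ implementations.

```python
import hashlib

def compute_signature(page_content: bytes) -> bytes:
    # Stand-in for content-based signature generation: a 20-byte SHA1 digest.
    return hashlib.sha1(page_content).digest()

def on_small_write(target_page: bytes, data_segment: bytes, write_cache: dict) -> None:
    """Synchronous portion of FIG. 5A: cache the segment, track the inflight write."""
    signature = compute_signature(target_page)                   # determine signature
    write_cache.setdefault(signature, []).append(data_segment)   # stage in cache 109
    entry = decref_hash_table.setdefault(signature, DecrefEntry())
    entry.inflight_write_count += 1                              # step 508
```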
- In step 510, storage controller 108 may determine whether a decrement request has been issued.
- The decrement request may be issued by the storage controller 108 in response to another operation.
- Alternatively, the decrement request may be issued by another controller associated with storage controller 108 and received by storage controller 108, e.g., as part of a distributed system. If no decrement request has been issued, the process ends.
- In step 512, in response to a decrement request being issued, the storage controller determines whether the decref postponed flag 406 has been set for the corresponding content-based signature 402 in decref hash table 400.
- In step 514, if the decref postponed flag 406 for the corresponding content-based signature has not been set in decref hash table 400, the decref postponed flag 406 is set; the decrement request is then postponed in step 516 and the process ends.
- In step 518, if the decref postponed flag 406 was determined to already be set in step 512, the decrement request is executed, e.g., the reference count is decremented, and the process ends.
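- In the hypothetical Python sketch, this FIG. 5B logic could be expressed as follows; the branch taken when no table entry exists is an assumption (a decrement for a page with no inflight small writes is simply executed), since the description focuses on pages that are being tracked.

```python
def on_decref_request(signature: bytes) -> None:
    """FIG. 5B: postpone the first Decref for a tracked page, execute later ones."""
    entry = decref_hash_table.get(signature)
    if entry is None:
        execute_decrement(signature)      # assumed: no inflight small writes
        return
    if not entry.decref_postponed:        # step 512: flag not yet set
        entry.decref_postponed = True     # step 514: set the flag
        decref_journal.append(signature)  # step 516: postpone, journal persistently
    else:
        execute_decrement(signature)      # step 518: execute the decrement
```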
- In step 520, storage controller 108 completes an inflight small write IO request, e.g., by performing destaging on the data segment associated with the inflight small write IO request that is stored in the cache 109.
- For example, the data segment associated with the small write IO request is combined with the target data page and the combined data page may be persisted in storage devices 106.
- In step 522, in response to completion of a small write IO request, the storage controller 108 decrements the inflight write count 404 stored in decref hash table 400 at the content-based signature 402 corresponding to the target data page associated with the destaged small write IO request.
- In step 524, the storage controller 108 determines whether the inflight write count 404 has been decremented to zero or to another predetermined value. If the inflight write count 404 has not been decremented to zero or to another predetermined value, the process ends.
- In step 526, if the inflight write count 404 has been decremented to zero or to another predetermined value, storage controller 108 determines whether the decref postponed flag 406 for the corresponding content-based signature is set.
- In step 528, if the decref postponed flag 406 is set, the storage controller 108 removes the entry corresponding to the postponed decref request from the decref journal 118, e.g., the content-based signature of the target data page may be removed from the decref journal 118.
- In step 530, the storage controller 108 executes the postponed decrement request.
- In step 532, the storage controller 108 removes the hash table entry included in decref hash table 400 for the corresponding content-based signature. The process then ends.
- Returning to step 526, if the decref postponed flag 406 is not set, the process proceeds directly to step 532 and the storage controller 108 removes the hash table entry included in decref hash table 400 for the corresponding content-based signature.
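- The corresponding destage-completion handler in the hypothetical sketch:

```python
def on_destage_complete(signature: bytes) -> None:
    """FIG. 5C: finish tracking once a small write has been destaged."""
    entry = decref_hash_table[signature]
    entry.inflight_write_count -= 1        # step 522
    if entry.inflight_write_count > 0:     # step 524: writes still inflight
        return
    if entry.decref_postponed:             # step 526
        decref_journal.remove(signature)   # step 528: clear the journal entry
        execute_decrement(signature)       # step 530: execute the postponed Decref
    del decref_hash_table[signature]       # step 532: drop the table entry
```

- In this sketch the postponed decrement executes exactly once, after the last inflight small write targeting the page completes, which matches the guarantee described above.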
- The processes of FIGS. 5A-5C and other features and functionality for decrement protection of reference counts for inflight small write requests as described above can be adapted for use with other types of information processing systems, including by way of example an information processing system in which the host devices and the storage system are both implemented on the same processing platform.
- The particular processing operations shown in FIGS. 5A-5C are presented by way of illustrative example only and should not be construed as limiting the scope of the disclosure in any way.
- Alternative embodiments can use other types of processing operations for implementing decrement protection of reference counts for inflight small write requests.
- The ordering of the process steps may be varied in other embodiments, or certain steps may be performed at least in part concurrently with one another rather than serially.
- One or more of the process steps may be repeated periodically, or multiple instances of the process can be performed in parallel with one another in order to implement a plurality of different process instances for decrement protection of reference counts for inflight small write requests for respective different storage systems or portions thereof within a given information processing system.
- The processes of FIGS. 5A-5C can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device such as a computer or server.
- A memory or other storage device having executable program code of one or more software programs embodied therein is an example of what is more generally referred to herein as a “processor-readable storage medium.”
- A storage controller such as storage controller 108 that is configured to control performance of one or more steps of the processes of FIGS. 5A-5C can be implemented as part of what is more generally referred to herein as a processing platform comprising one or more processing devices each comprising a processor coupled to a memory.
- A given such processing device may correspond to one or more virtual machines or other types of virtualization infrastructure such as Docker containers or other types of LXCs.
- The storage controller 108 may be implemented at least in part using processing devices of such processing platforms.
- Respective distributed modules of such a storage controller can be implemented in respective LXCs running on respective ones of the processing devices of a processing platform.
- In some embodiments, the storage system comprises an XtremIO™ storage array suitably modified to incorporate techniques for decrement protection of reference counts for inflight small write requests as disclosed herein.
- In such arrangements, the control modules 108C, data modules 108D, routing modules 108R and management module(s) 108M of the distributed storage controller 108 in system 100 illustratively comprise C-modules, D-modules, R-modules and SYM module(s), respectively.
- These exemplary processing modules of the distributed storage controller 108 can be configured to implement functionality for decrement protection of reference counts for inflight small write requests in accordance with the processes of FIGS. 5A-5C .
- The C-module, D-module, R-module and decref protection logic components of an XtremIO™ storage array can be incorporated into other processing modules or components of a centralized or distributed storage controller in other types of storage systems.
- Illustrative embodiments of content addressable storage systems or other types of storage systems with functionality for decrement protection of reference counts for inflight small write requests as disclosed herein can provide a number of significant advantages relative to conventional arrangements.
- For example, some embodiments can advantageously inhibit the deletion of data pages that are required for inflight write IO requests, which prevents data loss.
- As another example, some embodiments can advantageously reduce IO processing waste and latency, for example, by removing the need to increment the reference count for every data page having an associated pending write IO request, instead postponing only those decrement requests that specifically target data pages with inflight write IO requests.
- These and other advantages are illustratively provided in clustered storage systems comprising storage controllers that are distributed over multiple storage nodes. Similar advantages can be provided in other types of storage systems.
- A given such processing platform comprises at least one processing device comprising a processor coupled to a memory.
- The processor and memory in some embodiments comprise respective processor and memory elements of a virtual machine or container provided using one or more underlying physical machines.
- The term “processing device” as used herein is intended to be broadly construed so as to encompass a wide variety of different arrangements of physical processors, memories and other device components as well as virtual instances of such components.
- A “processing device” in some embodiments can comprise or be executed across one or more virtual processors. Processing devices can therefore be physical or virtual and can be executed across one or more physical or virtual processors. It should also be noted that a given virtual device can be mapped to a portion of a physical one.
- The cloud infrastructure further comprises sets of applications running on respective ones of the virtual machines under the control of the hypervisor. It is also possible to use multiple hypervisors each providing a set of virtual machines using at least one underlying physical machine. Different sets of virtual machines provided by one or more hypervisors may be utilized in configuring multiple instances of various components of the system.
- Cloud infrastructure as disclosed herein can include cloud-based systems such as AWS, GCP and Microsoft Azure.
- Virtual machines provided in such systems can be used to implement at least portions of one or more of a computer system and a content addressable storage system in illustrative embodiments.
- Such cloud-based systems can also include object stores such as Amazon S3, GCP Cloud Storage, and Microsoft Azure Blob Storage.
- In some embodiments, the cloud infrastructure additionally or alternatively comprises a plurality of containers implemented using container host devices.
- A given container of cloud infrastructure illustratively comprises a Docker container or other type of LXC.
- The containers may run on virtual machines in a multi-tenant environment, although other arrangements are possible.
- The containers may be utilized to implement a variety of different types of functionality within the system 100.
- For example, containers can be used to implement respective processing devices providing compute and/or storage services of a cloud-based system.
- Containers may be used in combination with other virtualization infrastructure such as virtual machines implemented using a hypervisor.
- Processing platforms will now be described in greater detail with reference to FIGS. 6 and 7. Although described in the context of system 100, these platforms may also be used to implement at least portions of other information processing systems in other embodiments.
- FIG. 6 shows an example processing platform comprising cloud infrastructure 600 .
- The cloud infrastructure 600 comprises a combination of physical and virtual processing resources that may be utilized to implement at least a portion of the information processing system 100.
- The cloud infrastructure 600 comprises multiple virtual machines (VMs) and/or container sets 602-1, 602-2, . . . 602-L implemented using virtualization infrastructure 604.
- The virtualization infrastructure 604 runs on physical infrastructure 605, and illustratively comprises one or more hypervisors and/or operating system level virtualization infrastructure.
- The operating system level virtualization infrastructure illustratively comprises kernel control groups of a Linux operating system or other type of operating system.
- The cloud infrastructure 600 further comprises sets of applications 610-1, 610-2, . . . 610-L running on respective ones of the VMs/container sets 602-1, 602-2, . . . 602-L under the control of the virtualization infrastructure 604.
- The VMs/container sets 602 may comprise respective VMs, respective sets of one or more containers, or respective sets of one or more containers running in VMs.
- In some implementations, the VMs/container sets 602 comprise respective VMs implemented using virtualization infrastructure 604 that comprises at least one hypervisor.
- Such implementations can provide functionality for decrement protection of reference counts for inflight small write requests of the type described above for one or more processes running on a given one of the VMs.
- For example, each of the VMs can implement such functionality for one or more processes running on that particular VM.
- An example of a hypervisor platform that may be used to implement a hypervisor within the virtualization infrastructure 604 is VMware® vSphere®, which may have an associated virtual infrastructure management system such as VMware® vCenter™.
- The underlying physical machines may comprise one or more distributed processing platforms that include one or more storage systems.
- In other implementations, the VMs/container sets 602 comprise respective containers implemented using virtualization infrastructure 604 that provides operating system level virtualization functionality, such as support for Docker containers running on bare metal hosts, or Docker containers running on VMs.
- The containers are illustratively implemented using respective kernel control groups of the operating system.
- Such implementations can provide functionality for decrement protection of reference counts for inflight small write requests of the type described above for one or more processes running on different ones of the containers.
- For example, a container host device supporting multiple containers of one or more container sets can implement one or more instances of decref protection logic for use in protecting reference counts of data pages targeted by inflight small write requests.
- One or more of the processing modules or other components of system 100 may each run on a computer, server, storage device or other processing platform element.
- A given such element may be viewed as an example of what is more generally referred to herein as a “processing device.”
- The cloud infrastructure 600 shown in FIG. 6 may represent at least a portion of one processing platform.
- The processing platform 700 shown in FIG. 7 is another example of such a processing platform.
- The processing platform 700 in this embodiment comprises a portion of system 100 and includes a plurality of processing devices, denoted 702-1, 702-2, 702-3, . . . 702-K, which communicate with one another over a network 704.
- The network 704 may comprise any type of network, including by way of example a global computer network such as the Internet, a WAN, a LAN, a satellite network, a telephone or cable network, a cellular network, a wireless network such as a WiFi or WiMAX network, or various portions or combinations of these and other types of networks.
- The processing device 702-1 in the processing platform 700 comprises a processor 710 coupled to a memory 712.
- The processor 710 may comprise a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other type of processing circuitry, as well as portions or combinations of such circuitry elements.
- The memory 712 may comprise random access memory (RAM), read-only memory (ROM) or other types of memory, in any combination.
- The memory 712 and other memories disclosed herein should be viewed as illustrative examples of what are more generally referred to as “processor-readable storage media” storing executable program code of one or more software programs.
- Articles of manufacture comprising such processor-readable storage media are considered illustrative embodiments.
- A given such article of manufacture may comprise, for example, a storage array, a storage disk or an integrated circuit containing RAM, ROM or other electronic memory, or any of a wide variety of other types of computer program products.
- The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals. Numerous other types of computer program products comprising processor-readable storage media can be used.
- Also included in the processing device 702-1 is network interface circuitry 714, which is used to interface the processing device with the network 704 and other system components, and which may comprise conventional transceivers.
- The other processing devices 702 of the processing platform 700 are assumed to be configured in a manner similar to that shown for processing device 702-1 in the figure.
- The particular processing platform 700 shown in the figure is presented by way of example only, and system 100 may include additional or alternative processing platforms, as well as numerous distinct processing platforms in any combination, with each such platform comprising one or more computers, servers, storage devices or other processing devices.
- Other processing platforms used to implement illustrative embodiments can comprise different types of virtualization infrastructure, in place of or in addition to virtualization infrastructure comprising virtual machines.
- Such virtualization infrastructure illustratively includes container-based virtualization infrastructure configured to provide Docker containers or other types of LXCs.
- As another example, portions of a given processing platform in some embodiments can comprise converged infrastructure such as VxRail™, VxRack™, VxRack™ FLEX, VxBlock™ or Vblock® converged infrastructure from VCE, the Virtual Computing Environment Company, now the Converged Platform and Solutions Division of Dell EMC.
- Components of an information processing system as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device.
- For example, at least portions of the functionality of one or more components of the storage controller 108 of system 100 are illustratively implemented in the form of software running on one or more processing devices.
Description
- The field relates generally to information processing systems, and more particularly to storage in information processing systems.
- In some information processing systems, a volatile write cache temporarily stores or caches data to be later written to a persistent data storage location (i.e., destaged) during a background destaging process. Such an information processing system often has a fixed-size page granularity, and the system may support write input/output (IO) requests for data segments smaller than the system's page granularity, i.e., small write requests. When a small write request is received, the write cache temporarily stores the new segment data associated with the small write request for later destaging in a write cache destaging process. During the destaging process, the new segment data associated with the small write request is hardened. For example, the data in the data page targeted by the small write request may be read and combined with the new segment data of the small write request to form a new data page which is then stored in the persistent data storage location. A received write request is considered a pending or “inflight” write request prior to being stored in the persistent data storage location, e.g., while awaiting or being processed in the destaging process.
- In some systems, the data pages in the persistent data storage location may each have an associated reference count that indicates the number of references to that page in an address-to-hash (A2H) mapping of the information processing system. The reference count for a given data page may be updated as the number of references to that given data page increases or decreases. For example, increment (“Incref”) and decrement (“Decref”) commands may be issued to increment or decrement the reference count associated with a data page in the persistent data storage location.
- When the reference count for a given data page is decremented to zero, the given data page may be removed or marked for removal since the data page is no longer used by the system.
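- As a concrete illustration of this background behavior (the bare reference-count mechanics, not the decrement protection that is the subject of this disclosure), a minimal Python sketch might look as follows; the names are hypothetical.

```python
ref_counts: dict[str, int] = {}  # reference count per stored data page

def incref(page_id: str) -> None:
    ref_counts[page_id] = ref_counts.get(page_id, 0) + 1

def decref(page_id: str) -> None:
    ref_counts[page_id] -= 1
    if ref_counts[page_id] == 0:
        del ref_counts[page_id]  # no remaining references; page can be removed
```

- A Decref arriving while a small write targeting the page is still inflight would remove the page prematurely under this naive scheme; the decrement protection techniques described in this disclosure prevent exactly that.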
- Illustrative embodiments provide techniques for decrement protection of reference counts for inflight small write requests in a storage system.
- In one embodiment, a storage system comprises a plurality of storage devices and an associated storage controller. The plurality of storage devices are configured to store a plurality of data pages. Each of the data pages has a content-based signature derived from content of that data page. The content-based signatures of the data pages are associated with physical locations in the plurality of storage devices where the data pages are stored. The plurality of storage devices store a reference count for each physical location. A given reference count indicates a number of the data pages that map via their respective content-based signatures to the same physical location in the plurality of storage devices.
- The storage controller is configured to receive a write input/output (IO) request. The write IO request includes a data segment that is smaller than a page granularity of the plurality of storage devices.
- In response to receiving the write IO request, the storage controller is configured to determine a content-based signature associated with the data segment. The content-based signature corresponds to a target data page stored at one of the physical locations.
- In response to a decrement request to decrement a reference count of the physical location corresponding to the content-based signature of the target data page, the storage controller is configured to postpone the decrement request.
- The storage controller may be implemented using at least one processing device comprising a processor coupled to a memory.
- In some embodiments, the storage controller may be further configured to increment an inflight write count corresponding to the determined content-based signature of the target data page in a data structure associated with the storage controller in response to determining the content-based signature associated with the data segment.
- The storage controller may be further configured to decrement the inflight write count in response to completion of the received write IO request. The storage controller may be further configured to execute the postponed decrement request in response to the inflight write count being decremented to a predetermined value.
- In some embodiments, in response to the decrement request, the storage controller may be further configured to set a decrement postponed flag corresponding to the content-based signature of the target data page in a data structure associated with the storage controller.
- In response to a second decrement request to decrement the reference count of the physical location corresponding to the content-based signature of the target data page, the storage controller may be further configured to determine whether the decrement postponed flag corresponding to the content-based signature of the target data page is set in the data structure. In response to determining that the decrement postponed flag corresponding to the content-based signature of the target data page is set in the data structure, the storage controller may be further configured to decrement the reference count of the physical location corresponding to the content-based signature of the target data page.
- In some embodiments, in response to a recovery of the storage system after an event, the storage controller may be further configured to reset the data structure. The storage controller may be further configured to determine whether any recovered write IO requests include a data segment smaller than a page granularity of the plurality of storage devices and, for a given write IO request that includes a data segment smaller than a page granularity of the plurality of storage devices, the storage controller may be further configured to increment the inflight write count corresponding to the content-based signature of a data page targeted by the given write IO request in the data structure. The storage controller may be further configured to determine whether the decrement request postponed journal includes a decrement request corresponding to the content-based signature of the data page targeted by the given write IO request. In response to determining that the decrement request postponed journal includes a decrement request corresponding to the content-based signature of the data page targeted by the given write IO request, the storage controller may be further configured to set the decrement postponed flag corresponding to the content-based signature of the data page targeted by the given write IO request in the data structure.
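- A hedged sketch of this recovery flow, continuing the hypothetical structures sketched elsewhere in this description; recovered_writes and its fields are illustrative stand-ins for whatever the write cache recovery yields, not an actual API.

```python
def recover(recovered_writes, page_size: int) -> None:
    """Rebuild the volatile tracking state after an event, from persistent state."""
    decref_hash_table.clear()                      # reset the data structure
    journaled = set(decref_journal)                # journal survives in NVRAM
    for write in recovered_writes:
        if len(write.data_segment) >= page_size:
            continue                               # only small writes are tracked
        signature = compute_signature(write.target_page)
        entry = decref_hash_table.setdefault(signature, DecrefEntry())
        entry.inflight_write_count += 1            # re-count inflight small writes
        if signature in journaled:
            entry.decref_postponed = True          # re-mark postponed Decrefs
```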
- These and other illustrative embodiments include, without limitation, apparatus, systems, methods and processor-readable storage media.
- FIG. 1 is a block diagram of an information processing system comprising a content addressable storage system configured with functionality for decrement protection of reference counts for inflight small write requests in an illustrative embodiment.
- FIG. 2 shows an example of a set of user data pages in an illustrative embodiment.
- FIG. 3 shows an example of a set of metadata pages in an illustrative embodiment.
- FIG. 4 shows an example of a decref hash table in an illustrative embodiment.
- FIGS. 5A-5C are flow diagrams of portions of a process for decrement protection of reference counts for inflight small write requests in an illustrative embodiment.
- FIGS. 6 and 7 show examples of processing platforms that may be utilized to implement at least a portion of an information processing system in illustrative embodiments.
- Illustrative embodiments will be described herein with reference to exemplary information processing systems and associated computers, servers, storage devices and other processing devices. It is to be appreciated, however, that these and other embodiments are not restricted to the particular illustrative system and device configurations shown. Accordingly, the term “information processing system” as used herein is intended to be broadly construed, so as to encompass, for example, processing systems comprising cloud computing and storage systems, as well as other types of processing systems comprising various combinations of physical and virtual processing resources. An information processing system may therefore comprise, for example, at least one data center or other cloud-based system that includes one or more clouds hosting multiple tenants that share cloud resources. Numerous other types of enterprise computing and storage systems are also encompassed by the term “information processing system” as that term is broadly used herein.
- FIG. 1 shows an information processing system 100 configured in accordance with an illustrative embodiment. The information processing system 100 comprises a computer system 101 that includes compute nodes 102-1, 102-2, . . . 102-N. The compute nodes 102 communicate over a network 104 with a content addressable storage system 105. The computer system 101 is assumed to comprise an enterprise computer system or other arrangement of multiple compute nodes associated with respective users.
- The compute nodes 102 illustratively comprise respective processing devices of one or more processing platforms. For example, the compute nodes 102 can comprise respective virtual machines (VMs) each having a processor and a memory, although numerous other configurations are possible.
- The compute nodes 102 can additionally or alternatively be part of cloud infrastructure such as an Amazon Web Services (AWS) system. Other examples of cloud-based systems that can be used to provide compute nodes 102 and possibly other portions of system 100 include Google Cloud Platform (GCP) and Microsoft Azure.
- The compute nodes 102 may be viewed as examples of what are more generally referred to herein as “host devices” or simply “hosts.” Such host devices are configured to write data to and read data from the content addressable storage system 105. The compute nodes 102 and the content addressable storage system 105 may be implemented on a common processing platform, or on separate processing platforms. A wide variety of other types of host devices can be used in other embodiments.
- The compute nodes 102 in some embodiments illustratively provide compute services such as execution of one or more applications on behalf of each of one or more users associated with respective ones of the compute nodes 102.
- The term “user” herein is intended to be broadly construed so as to encompass numerous arrangements of human, hardware, software or firmware entities, as well as combinations of such entities. Compute and/or storage services may be provided for users under a platform-as-a-service (PaaS) model, although it is to be appreciated that numerous other cloud infrastructure arrangements could be used. Also, illustrative embodiments can be implemented outside of the cloud infrastructure context, as in the case of a stand-alone enterprise-based computing and storage system.
- Such users of the storage system 105 in some cases are referred to herein as respective “clients” of the storage system 105.
- The network 104 is assumed to comprise a portion of a global computer network such as the Internet, although other types of networks can be part of the network 104, including a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, a cellular network, a wireless network such as a WiFi or WiMAX network, or various portions or combinations of these and other types of networks. The network 104 in some embodiments therefore comprises combinations of multiple different types of networks each comprising processing devices configured to communicate using Internet Protocol (IP) or other communication protocols.
- The content
addressable storage system 105 is accessible to thecompute nodes 102 of thecomputer system 101 over thenetwork 104. The contentaddressable storage system 105 comprises a plurality ofstorage devices 106, an associatedstorage controller 108, and an associatedcache 109. Thestorage devices 106 are configured to storemetadata pages 110 anduser data pages 112, and may also store additional information not explicitly shown such as checkpoints and write journals. The metadata pages 110 and theuser data pages 112 are illustratively stored in respective designated metadata and user data areas of thestorage devices 106. Accordingly,metadata pages 110 anduser data pages 112 may be viewed as corresponding to respective designated metadata and user data areas of thestorage devices 106. - A given “page” as the term is broadly used herein should not be viewed as being limited to any particular range of fixed sizes. In some embodiments, a page size of 8 kilobytes (KB) is used, but this is by way of example only and can be varied in other embodiments. For example, page sizes of 4 KB, 16 KB or other values can be used. Accordingly, illustrative embodiments can utilize any of a wide variety of alternative paging arrangements for organizing the
metadata pages 110 and the user data pages 112. - The
user data pages 112 are part of a plurality of logical units (LUNs) configured to store files, blocks, objects or other arrangements of data, each also generally referred to herein as a “data item,” on behalf of users associated withcompute nodes 102. Each such LUN may comprise particular ones of the above-noted pages of the user data area. The user data stored in theuser data pages 112 can include any type of user data that may be utilized in thesystem 100. The term “user data” herein is therefore also intended to be broadly construed. - It is assumed in the present embodiment that the
storage devices 106 comprise solid state drives (SSDs). Such SSDs are implemented using non-volatile memory (NVM) devices such as flash memory. Other types of NVM devices that can be used to implement at least a portion of thestorage devices 106 include non-volatile random access memory (NVRAM), phase-change RAM (PC-RAM) and magnetic RAM (MRAM). Various combinations of multiple different types of NVM devices may also be used. - However, it is to be appreciated that other types of storage devices can be used in other embodiments. For example, a given storage system as the term is broadly used herein can include a combination of different types of storage devices, as in the case of a multi-tier storage system comprising a flash-based fast tier and a disk-based capacity tier. In such an embodiment, each of the fast tier and the capacity tier of the multi-tier storage system comprises a plurality of storage devices with different types of storage devices being used in different ones of the storage tiers. For example, the fast tier may comprise flash drives while the capacity tier comprises hard disk drives. The particular storage devices used in a given storage tier may be varied in other embodiments, and multiple distinct storage device types may be used within a single storage tier. The term “storage device” as used herein is intended to be broadly construed, so as to encompass, for example, flash drives, solid state drives, hard disk drives, hybrid drives or other types of storage devices.
- In some embodiments, the content
addressable storage system 105 illustratively comprises a scale-out all-flash storage array such as an XtremIO™ storage array from Dell EMC of Hopkinton, Mass. Other types of storage arrays, including by way of example VNX® and Symmetrix VMAX® storage arrays also from Dell EMC, can be used to implement storage systems in other embodiments. - The term “storage system” as used herein is therefore intended to be broadly construed, and should not be viewed as being limited to content addressable storage systems or flash-based storage systems. A given storage system as the term is broadly used herein can comprise, for example, network-attached storage (NAS), storage area networks (SANs), direct-attached storage (DAS) and distributed DAS, as well as combinations of these and other storage types, including software-defined storage.
- Other particular types of storage products that can be used in implementing a given storage system in an illustrative embodiment include all-flash and hybrid flash storage arrays such as Unity™, software-defined storage products such as ScaleIO™ and ViPR®, cloud storage products such as Elastic Cloud Storage (ECS), object-based storage products such as Atmos®, and scale-out NAS clusters comprising Isilon® platform nodes and associated accelerators, all from Dell EMC. Combinations of multiple ones of these and other storage products can also be used in implementing a given storage system in an illustrative embodiment.
- The content
addressable storage system 105 in the embodiment ofFIG. 1 is configured to generate hash metadata providing a mapping between content-based digests of respective ones of theuser data pages 112 and corresponding physical locations of those pages in the user data area. Content-based digests generated using hash functions are also referred to herein as “hash digests.” Such hash digests or other types of content-based digests are examples of what are more generally referred to herein as “content-based signatures” of the respective user data pages 112. The hash metadata generated by the contentaddressable storage system 105 is illustratively stored asmetadata pages 110 in the metadata area. - The generation and storage of the hash metadata is assumed to be performed under the control of the
storage controller 108. The hash metadata may be stored in the metadata area in a plurality of entries corresponding to respective buckets each comprising multiple cache lines, although other arrangements can be used. In some aspects, the hash metadata may also be loaded intocache 109. - Each of the metadata pages 110 characterizes a plurality of the user data pages 112. For example, as illustrated in
FIG. 2 , a given set ofuser data pages 200 representing a portion of theuser data pages 112 illustratively comprises a plurality of user data pages denotedUser Data Page 1,User Data Page 2, . . . User Data Page n. Each of the user data pages in this example is characterized by a LUN identifier, an offset and a content-based signature. The content-based signature is generated as a hash function of content of the corresponding user data page. Illustrative hash functions that may be used to generate the content-based signature include SHA1, where SHA denotes Secure Hashing Algorithm, or other SHA protocols known to those skilled in the art. The content-based signature is utilized to determine the location of the corresponding user data page within the user data area of thestorage devices 106 of the contentaddressable storage system 105. - Each of the
metadata pages 110 in the present embodiment is assumed to have a signature that is not content-based. For example, the metadata page signatures may be generated using hash functions or other signature generation algorithms that do not utilize content of the metadata pages as input to the signature generation algorithm. Also, each of the metadata pages is assumed to characterize a different set of the user data pages. - This is illustrated in
FIG. 3 , which shows a given set ofmetadata pages 300 representing a portion of themetadata pages 110 in an illustrative embodiment. The metadata pages in this example include metadata pages denotedMetadata Page 1,Metadata Page 2, . . . Metadata Page m, having respective signatures denotedSignature 1,Signature 2, . . . Signature m. Each such metadata page characterizes a different set of n user data pages. For example, the characterizing information in each metadata page can include the LUN identifiers, offsets and content-based signatures for each of the n user data pages that are characterized by that metadata page. It is to be appreciated, however, that the user data and metadata page configurations shown inFIGS. 2 and 3 are examples only, and numerous alternative user data and metadata page configurations can be used in other embodiments. - The content
addressable storage system 105 in theFIG. 1 embodiment is implemented as at least a portion of a clustered storage system and includes a plurality ofstorage nodes 115 each comprising a corresponding subset of thestorage devices 106. Other clustered storage system arrangements comprising multiple storage nodes can be used in other embodiments. A given clustered storage system may include not onlystorage nodes 115 but alsoadditional storage nodes 120 coupled tonetwork 104. Alternatively, theadditional storage nodes 120 may be part of another clustered storage system of thesystem 100. Each of thestorage nodes system 100 is assumed to be implemented using at least one processing device comprising a processor coupled to a memory. - The
storage controller 108 of the contentaddressable storage system 105 is implemented in a distributed manner so as to comprise a plurality of distributed storage controller components implemented on respective ones of thestorage nodes 115 of the contentaddressable storage system 105. Thestorage controller 108 is therefore an example of what is more generally referred to herein as a “distributed storage controller.” In subsequent description herein, thestorage controller 108 may be more particularly referred to as a distributed storage controller. - Each of the
storage nodes 115 in this embodiment further comprises a set of processing modules configured to communicate over one or more networks with corresponding sets of processing modules on other ones of thestorage nodes 115. The sets of processing modules of thestorage nodes 115 collectively comprise at least a portion of the distributedstorage controller 108 of the contentaddressable storage system 105. - The distributed
storage controller 108 in the present embodiment is configured to implement functionality for decrement protection of reference counts for inflight small write requests in the contentaddressable storage system 105. - As noted above, the
storage devices 106 are configured to storeuser data pages 200 andmetadata pages 300 in respective user data page and metadata page areas. Each of theuser data pages 200 comprises a logical address and a content-based signature derived from content of that data page, and each of the metadata pages 300 characterizes a plurality of theuser data pages 200 and associates the content-based signatures of those user data pages with respective physical blocks in thestorage devices 106. - The modules of the distributed
storage controller 108 in the present embodiment more particularly comprise different sets of processing modules implemented on each of thestorage nodes 115. The set of processing modules of each of thestorage nodes 115 comprises at least acontrol module 108C, adata module 108D and arouting module 108R. The distributedstorage controller 108 further comprises one or more management (“MGMT”)modules 108M. For example, only a single one of thestorage nodes 115 may include amanagement module 108M. It is also possible thatmanagement modules 108M may be implemented on each of at least a subset of thestorage nodes 115. - Communication links may be established between the various processing modules of the distributed
storage controller 108 using well-known communication protocols such as IP, Transmission Control Protocol (TCP), and remote direct memory access (RDMA). For example, respective sets of IP links used in data transfer and corresponding messaging could be associated with respective different ones of therouting modules 108R. - Ownership of a user data logical address space within the content
addressable storage system 105 is illustratively distributed among thecontrol modules 108C. - The
cache 109 ofstorage system 105 in theFIG. 1 embodiment includes write cache entries 109-1, 109-2 109-N which store incoming IO request data for later destaging tostorage devices 106.Cache 109 may illustratively comprise volatile memory such as, e.g., random access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), or any other kind of volatile memory. In some embodiments,cache 109 may additionally or alternatively comprise any non-volatile memory as described above with respect tostorage devices 106. In some embodiments,cache 109 may support a variety of operations or functions ofstorage system 105 including, for example, write cache, read cache, temporary metadata storage, or other similar operations. While illustrated as a separate component ofstorage system 105, in some embodiments,cache 109 may be included as a component ofstorage controller 108. In some aspects, thecaches 109 of eachstorage node 115 may operate together as asingle cache 109 of the contentaddressable storage system 105 where the components of a givenstorage node 115 may access any portion of thecache 109 including those portions included as components ofother storage nodes 115. - It is desirable in these and other storage system contexts to implement functionality for decrement protection of reference counts for inflight small write requests (“decref protection”) across multiple distributed processing modules, such as the
processing modules storage controller 108. - The
management module 108M of thestorage controller 108 may includedecref protection logic 116 that engages corresponding control logic instances in all of thecontrol modules 108C androuting modules 108R in order to implement processes for decrement protection of reference counts for inflight small write requests within thesystem 100, as will be described in more detail below in conjunction withFIGS. 5A-5C . - In some embodiments, the content
addressable storage system 105 comprises an XtremIO™ storage array suitably modified to incorporate techniques for decrement protection of reference counts for inflight small write requests as disclosed herein. In arrangements of this type, thecontrol modules 108C,data modules 108D androuting modules 108R of the distributedstorage controller 108 illustratively comprise respective C-modules, D-modules and R-modules of the XtremIO™ storage array. The one ormore management modules 108M of the distributedstorage controller 108 in such arrangements illustratively comprisedecref protection logic 116, although other types and arrangements of system-wide management modules can be used in other embodiments. Accordingly, functionality for decrement protection of reference counts for inflight small write requests in some embodiments is implemented under the control ofdecref protection logic 116 of the distributedstorage controller 108, utilizing the C-modules, D-modules and R-modules of the XtremIO™ storage array. - In the above-described XtremIO™ storage array example, each user data page typically has a size of 8 KB and its content-based signature is a 20-byte signature generated using an SHA1 hash function. Also, each page has a LUN identifier and an offset, and so is characterized by <lun_id, offset, signature>.
- The content-based signature in the present example comprises a content-based digest of the corresponding data page. Such a content-based digest is more particularly referred to as a “hash digest” of the corresponding data page, as the content-based signature is illustratively generated by applying a hash function such as SHA1 to the content of that data page. The full hash digest of a given data page is given by the above-noted 20-byte signature. The hash digest may be represented by a corresponding “hash handle,” which in some cases may comprise a particular portion of the hash digest. The hash handle illustratively maps on a one-to-one basis to the corresponding full hash digest within a designated cluster boundary or other specified storage resource boundary of a given storage system. In arrangements of this type, the hash handle provides a lightweight mechanism for uniquely identifying the corresponding full hash digest and its associated data page within the specified storage resource boundary. The hash digest and hash handle are both considered examples of “content-based signatures” as that term is broadly used herein.
- Examples of techniques for generating and processing hash handles for respective hash digests of respective data pages are disclosed in U.S. Pat. No. 9,208,162, entitled “Generating a Short Hash Handle,” and U.S. Pat. No. 9,286,003, entitled “Method and Apparatus for Creating a Short Hash Handle Highly Correlated with a Globally-Unique Hash Signature,” both of which are incorporated by reference herein.
- As mentioned previously, storage controller components in an XtremIO™ storage array illustratively include C-module, D-module and R-module components. For example, separate instances of such components can be associated with each of a plurality of storage nodes in a clustered storage system implementation.
- The distributed storage controller in this example is configured to group consecutive pages into page groups, to arrange the page groups into slices, and to assign the slices to different ones of the C-modules.
- The D-module allows a user to locate a given user data page based on its signature. Each metadata page also has a size of 8 KB and includes multiple instances of the <lun_id, offset, signature> for respective ones of a plurality of the user data pages. Such metadata pages are illustratively generated by the C-module but are accessed using the D-module based on a metadata page signature.
- The metadata page signature in this embodiment is a 20-byte signature but is not based on the content of the metadata page. Instead, the metadata page signature is generated based on an 8-byte metadata page identifier that is a function of the LUN identifier and offset information of that metadata page.
- If a user wants to read a user data page having a particular LUN identifier and offset, the corresponding metadata page identifier is first determined, then the metadata page signature is computed for the identified metadata page, and then the metadata page is read using the computed signature. In this embodiment, the metadata page signature is more particularly computed using a signature generation algorithm that generates the signature to include a hash of the 8-byte metadata page identifier, one or more ASCII codes for particular predetermined characters, as well as possible additional fields. The last bit of the metadata page signature may always be set to a particular logic value so as to distinguish it from the user data page signature in which the last bit may always be set to the opposite logic value.
- The metadata page signature is used to retrieve the metadata page via the D-module. This metadata page will include the <lun_id, offset, signature> for the user data page if the user page exists. The signature of the user data page is then used to retrieve that user data page, also via the D-module.
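- As an illustration of the metadata page addressing just described, the following sketch derives a 20-byte metadata page signature from an 8-byte metadata page identifier. The identifier packing, the marker characters, and the choice of final-bit value are assumptions for illustration only.

```python
import hashlib

def metadata_page_id(lun_id: int, offset: int) -> bytes:
    # Hypothetical packing of LUN identifier and offset into the 8-byte identifier.
    return lun_id.to_bytes(4, "big") + offset.to_bytes(4, "big")

def metadata_page_signature(lun_id: int, offset: int) -> bytes:
    # Hash of the 8-byte identifier plus an assumed ASCII marker field, with the
    # last bit forced to one logic value so metadata page signatures can be
    # distinguished from user data page signatures (opposite logic value).
    sig = bytearray(hashlib.sha1(metadata_page_id(lun_id, offset) + b"MD").digest())
    sig[-1] |= 0x01
    return bytes(sig)

print(metadata_page_signature(7, 4096).hex())
```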
- Additional examples of content addressable storage functionality implemented in some embodiments by
control modules 108C, data modules 108D, routing modules 108R and management module(s) 108M of distributed storage controller 108 can be found in U.S. Pat. No. 9,104,326, entitled “Scalable Block Data Storage Using Content Addressing,” which is incorporated by reference herein. Alternative arrangements of these and other storage node processing modules of a distributed storage controller in a content addressable storage system can be used in other embodiments.
- Each of the
storage nodes 115 of the storage system 105 comprises a set of processing modules configured to communicate over one or more networks with corresponding sets of processing modules on other ones of the storage nodes. A given such set of processing modules implemented on a particular storage node illustratively includes at least one control module 108C, at least one data module 108D and at least one routing module 108R, and possibly a management module 108M. These sets of processing modules of the storage nodes collectively comprise at least a portion of the distributed storage controller 108.
- The term “write request” as used herein is intended to be broadly construed, so as to encompass one or more IO operations directing that at least one data item of a storage system be written to in a particular manner. A given write request is illustratively received in a storage system from a host device. For example, in some embodiments, a write request is received in a distributed storage controller of the storage system, and directed from one processing module to another processing module of the distributed storage controller. More particularly, in the embodiment to be described below in conjunction with
FIGS. 5A-5C, a received write request is directed from a routing module of the distributed storage controller to a particular control module of the distributed storage controller. The write request is stored in the write cache portion of cache 109, acknowledged, and subsequently destaged at a later time to a persistent data storage location on one or more of storage devices 106. Other arrangements for receiving and processing write requests from one or more host devices can be used.
- Communications between
control modules 108C and routing modules 108R of the distributed storage controller 108 may be performed in a variety of ways. An example embodiment is implemented in the XtremIO™ context, and the C-modules, D-modules and R-modules of the storage nodes 115 in this context are assumed to be configured to communicate with one another over a high-speed internal network such as an InfiniBand network. The C-modules, D-modules and R-modules coordinate with one another to accomplish various IO processing tasks.
- The logical block addresses or LBAs of a logical layer of the
storage system 105 correspond to respective physical blocks of a physical layer of the storage system 105. The user data pages of the logical layer are organized by LBA and have reference via respective content-based signatures to particular physical blocks of the physical layer.
- Each of the physical blocks has an associated
reference count 114 that is maintained within the storage system, for example, in storage devices 106. Reference counts 114 may alternatively be stored or maintained in storage controller 108 or other portions of content addressable storage system 105. The reference count 114 for a given physical block indicates the number of logical blocks that point to that same physical block.
- In releasing logical address space in the storage system, a dereferencing operation is generally executed for each of the LBAs being released. More particularly, the
reference count 114 of the corresponding physical block is decremented. A reference count 114 of zero or another predetermined value indicates that there are no longer any logical blocks that reference the corresponding physical block, and so that physical block can be released.
- The manner in which functionality for decrement protection of reference counts for inflight small write requests is provided in the
FIG. 1 embodiment will now be described. The process is assumed to be carried out by the processing modules 108C, 108D, 108R and 108M. The control modules 108C temporarily store data pages in the cache 109 of the content addressable storage system 105 and later destage the temporarily stored data pages via the data modules 108D in accordance with write requests received from host devices via the routing modules 108R. The host devices illustratively comprise respective ones of the compute nodes 102 of the computer system 101.
- The write requests from the host devices identify particular data pages to be written in the
storage system 105 by their corresponding logical addresses each comprising a LUN ID and an offset. - As noted above, a given one of the content-based signatures illustratively comprises a hash digest of the corresponding data page, with the hash digest being generated by applying a hash function to the content of that data page. The hash digest may be uniquely represented within a given storage resource boundary by a corresponding hash handle.
- The
storage system 105 utilizes a two-level mapping process to map logical block addresses to physical block addresses. The first level of mapping uses an address-to-hash (“A2H”) table and the second level of mapping uses a hash-to-physical (“H2P”) table, sometimes known as a hash metadata (“HMD”) table, with the A2H and H2P tables corresponding to respective logical and physical layers of the content-based signature mapping within the storage system 105.
- The first level of mapping using the A2H table associates logical addresses of respective data pages with respective content-based signatures of those data pages. This is also referred to as logical layer mapping.
- The second level of mapping using the H2P table associates respective ones of the content-based signatures with respective physical storage locations in one or more of the
storage devices 106. This is also referred to as physical layer mapping. - For a given write request, both of the corresponding A2H and H2P tables are updated in conjunction with the processing of that write request. For example, the A2H table may be updated when the page data for the write request is stored in
cache 109 and the H2P table may be updated when the page data is hardened to storage devices 106 during a destaging process.
- The A2H and H2P tables described above are examples of what are more generally referred to herein as “mapping tables” of respective first and second distinct types. Other types and arrangements of mapping tables or other content-based signature mapping information may be used in other embodiments.
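- A minimal sketch of this two-level mapping, with Python dictionaries standing in for the A2H and H2P tables, is shown below; the data types and update points are simplified assumptions for illustration only.

```python
import hashlib

def signature(page: bytes) -> bytes:
    return hashlib.sha1(page).digest()

a2h: dict[tuple[int, int], bytes] = {}  # logical layer: (lun_id, offset) -> signature
h2p: dict[bytes, int] = {}              # physical layer: signature -> physical block

def record_write(lun_id: int, offset: int, page: bytes, phys_block: int) -> None:
    sig = signature(page)
    a2h[(lun_id, offset)] = sig   # A2H update, e.g., when page data enters cache
    h2p[sig] = phys_block         # H2P update, e.g., when page data is destaged

def locate(lun_id: int, offset: int) -> int:
    return h2p[a2h[(lun_id, offset)]]

record_write(0, 0, bytes(8192), phys_block=42)
assert locate(0, 0) == 42
```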
- The reference counts 114 mentioned above are illustratively maintained for respective physical blocks in the
storage devices 106 and each such reference count 114 indicates for its corresponding physical block the number of logical blocks that point to that same physical block. When all logical block references to a given physical block are removed, the reference count 114 for that physical block becomes zero or another predetermined value, and its capacity can be released. A given “dereferencing operation” as that term is broadly used herein is intended to encompass decrementing of a reference count 114 associated with a physical block.
- As mentioned previously, in conjunction with release of logical address space in the
storage system 105, the storage controller 108 makes the released logical address space available to users, executes dereferencing operations for respective ones of the physical blocks corresponding to the released logical address space, and releases any physical capacity for which the corresponding reference counts 114 reach zero or another predetermined value.
- Techniques for efficient release of logical and physical capacity in a storage system such as
storage system 105 are disclosed in U.S. patent application Ser. No. 15/884,577, filed Jan. 31, 2018 and entitled “Storage System with Decoupling and Reordering of Logical and Physical Capacity Removal,” which is incorporated by reference herein. Such techniques may be utilized in illustrative embodiments disclosed herein, but are not required in any particular illustrative embodiment. - The logical address space illustratively comprises one or more ranges of logical block addresses or LBAs each comprising a LUN ID and an offset. For example, each LBA can identify a particular one of the user data pages 200. The LBAs each correspond to one or more physical blocks in the
storage devices 106. Other types of LBAs and logical address spaces can be used in other embodiments. The term “logical address” as used herein is therefore intended to be broadly construed. - A given such logical address space may be released responsive to deletion of a corresponding storage volume, snapshot or any other arrangement of data stored in the
storage system 105. Other conditions within the storage system 105 can also result in release of logical address space including, for example, snapshot merges, write shadows, or other conditions.
- The
storage controller 108 illustratively makes the released logical address space available to users in order of released logical address. More particularly, the storage controller 108 can make the released logical address space available to users in order of released logical address by making each of its corresponding released logical addresses immediately available responsive to that logical address being released. For example, release of one or more LBAs or a range of LBAs by one or more users can result in those LBAs being made available to one or more other users in the same order in which the LBAs are released.
- The corresponding physical blocks may be released in a different order, through accumulation and reordered execution of dereferencing operations as described in the above-cited U.S. patent application Ser. No. 15/884,577. For example, the
storage controller 108 in some embodiments accumulates multiple dereferencing operations for each of at least a subset of the metadata pages 300, and executes the accumulated dereferencing operations for a given one of the metadata pages 300 responsive to the accumulated dereferencing operations for the given metadata page reaching a threshold number of dereferencing operations.
- In executing the accumulated dereferencing operations for the physical blocks, execution of each of the dereferencing operations more particularly involves decrementing a
reference count 114 of a corresponding one of the physical blocks, and releasing the physical block responsive to the reference count 114 reaching a designated number, such as zero. Moreover, in executing the accumulated dereferencing operations for the physical blocks, at least a subset of the accumulated dereferencing operations are first reordered into an order that more closely matches a physical layout of the corresponding physical blocks on the storage devices 106. The reordered dereferencing operations are then executed in that order.
- As a result, the physical blocks may be released in the
storage system 105 in a different order than that in which their corresponding logical blocks are released. This provides a number of significant advantages as outlined in the above-cited U.S. patent application Ser. No. 15/884,577. - Other embodiments can be configured to release physical capacity in other ways. For example, physical capacity in some embodiments can be released in the same order in which logical capacity is released.
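- For illustration, the following sketch models the accumulation and reordered execution of dereferencing operations described above; the threshold value and the use of a simple sort as a stand-in for matching the physical layout are assumptions of this sketch, and the full techniques are those of the above-cited U.S. patent application Ser. No. 15/884,577.

```python
from collections import defaultdict

THRESHOLD = 4                      # assumed batch size per metadata page

ref_count: dict[int, int] = {}     # physical block -> reference count
pending = defaultdict(list)        # metadata page -> accumulated dereferencing ops

def queue_decref(meta_page: int, phys_block: int) -> None:
    pending[meta_page].append(phys_block)
    if len(pending[meta_page]) >= THRESHOLD:
        execute_batch(meta_page)

def execute_batch(meta_page: int) -> None:
    # Reorder the accumulated operations; sorting by block number is a crude
    # stand-in for matching the physical layout on the storage devices.
    for blk in sorted(pending.pop(meta_page)):
        ref_count[blk] -= 1
        if ref_count[blk] == 0:
            print(f"physical block {blk} released")

for blk in (9, 3, 7, 1):
    ref_count[blk] = 1
for blk in (9, 3, 7, 1):
    queue_decref(meta_page=0, phys_block=blk)   # batch executes on the 4th op
```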
- As indicated above, the
storage controller 108, illustratively comprising the modules 108C, 108R and 108M of FIG. 1 as well as additional modules such as data modules 108D, is configured to implement functionality for decrement protection of reference counts for inflight small write requests in the content addressable storage system 105.
- Execution of a small write IO request received in the
storage system 105 from a host device illustratively involves the following operations: - 1. A synchronous part where the new segment of data is persisted in the write cache portion of
cache 109 and the IO request is acknowledged. - 2. An asynchronous part that destages the new data segment by a background destager. The construction and hardening of the new data page is done in this stage by combining the target data page, e.g., the data page located on
storage devices 106 at the mapped location corresponding to the content-based signature, and the new data segment stored in write cache during the synchronous part. The content-based signature of the target data page may be determined, for example, from the address specified by the received small write IO request, e.g., a LUN and an offset, via a lookup in the A2H table.
- Since the target data page is combined with the new data segment during destaging of the write cache for the small write IO request, it is important that the target data page is not removed until the new data page generated from the combined target data page and new data segment has been hardened in the
storage devices 106. For example, the reference count of the target data page should not be decremented to zero or another predetermined value while a new data segment targeting the data page is currently pending destaging, e.g., an inflight small write IO request.
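- The read-modify-write nature of this destaging step can be illustrated with the following sketch, in which the offset arithmetic and sizes are assumptions for illustration only; the point is that the target page must still exist when destaging runs.

```python
PAGE_SIZE = 8192

def build_new_page(target_page: bytes, segment: bytes, offset: int) -> bytes:
    # Combine the target data page with the cached new data segment to
    # construct the full new page that will be hardened to storage.
    assert len(target_page) == PAGE_SIZE
    assert offset + len(segment) <= PAGE_SIZE
    return target_page[:offset] + segment + target_page[offset + len(segment):]

target = bytes(PAGE_SIZE)
new_page = build_new_page(target, b"\xff" * 512, offset=1024)
assert len(new_page) == PAGE_SIZE
```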
- As mentioned above, every data page stored in storage devices 106 has a reference count 114 that counts the number of references to the page in the A2H mapping. A page is removed when its reference count 114 is decremented to zero or another predetermined value. The storage controller 108 is responsible for the update of the reference counts 114 by sending increment (“Incref”) and decrement (“Decref”) commands to the storage devices 106. Page reference counts 114 are generally decremented when overwriting an address and when volumes are deleted.
- However, there are also logical volume management (LVM) flows that may not be aware of write cache dependencies and can issue a Decref request for a data page even when there are inflight small write IO request transactions referencing the content-based signature of the data page, e.g., the content-based signature of the data page targeted by the small write IO request. For example, the LVM component may detect that the content-based signature was fully shadowed by all the snapshots that were originated from an origin snapshot in a snapshot tree, and consequently initiate a decrement request to the
reference count 114 corresponding to this content-based signature. If such a decrement command is executed and reduces the reference count 114 of the target data page to zero or another predetermined value, the target data page may be deleted. However, if there are inflight small write IO requests for a shadow write of this content-based signature, a new data page for such a small write IO request cannot be constructed, since the target data page required for its construction has been deleted. Hence such flows may result in data loss.
- One solution that prevents the reference count of the target data page from decrementing to zero or another predetermined value while there is an inflight small write IO request targeting the data page is to increment the reference count of the target data page for any inflight small write IO requests that target the data page. However, this solution may waste a significant amount of processing resources since such an increment operation on the reference count of the target data page would be performed for every small write IO request, regardless of whether the reference count of the target data page will be decremented by the
storage controller 108 while the destaging of the small write IO request is pending. In addition, performing an increment operation on the reference count of the target data page for each inflight small write IO request may also increase IO latency, as an additional operation must be performed during the synchronous part of each IO request and may be performed on a reference count located at a different node, thus wasting network resources.
- In an illustrative embodiment,
decref protection logic 116 is disclosed that addresses these issues by preventing the target data page from being deleted before all related small write IO request transactions targeting that data page are completed, e.g., by persisting a new data page to the storage devices 106. The decref protection logic 116 postpones a Decref request issued by the storage controller 108 for a data page associated with a content-based signature, if the data page is referenced by any inflight small write IO request transactions, until all corresponding inflight small write IO request transactions are completed.
- In some illustrative embodiments, all Decref transactions for the data page associated with the content-based signature may be postponed. For example, in this embodiment, no Decref transactions for a target data page may be allowed to proceed when an inflight write IO request targets that data page.
- With reference now to
FIGS. 1 and 4, in some illustrative embodiments, a given instance of storage controller 108 comprises decref protection logic 116, an associated decref hash table 400, and an associated decref journal 118. Decref protection logic 116 implements a process for decrement protection of reference counts for data pages targeted by inflight small write IO requests that are smaller in size than the page granularity of the system. For example, the decref protection logic 116 may postpone a Decref transaction that would otherwise decrement the reference count 114 for a data page targeted by a small write IO request to zero or another predetermined value.
- Decref hash table 400 stores an
inflight write count 404 and a decref postponed flag 406 corresponding to a content-based signature 402, e.g., hash digest or hash handle, associated with a data page targeted by an inflight small write IO request. For example, the content-based signature 402 may be used as an index into decref hash table 400 to access the inflight write count 404 and decref postponed flag 406 corresponding to the target data page. In an illustrative embodiment, decref hash table 400 may be stored in a volatile memory of controller 108, in cache 109, or in other storage of system 105. While decref hash table 400 is described as a hash table in the illustrative embodiment, any other data structure may be used to store the content-based signature 402, inflight write count 404, and decref postponed flag 406.
-
Inflight write count 404 is a counter that reflects the number of inflight small write IO request transactions that are overwriting the target data page. - Decref postponed
flag 406 is a flag indicating whether or not a Decref transaction was postponed. -
Decref journal 118 is a data structure that is stored persistently, for example, in NVRAM of storage system 105, in storage devices 106, or in any other persistent storage associated with storage system 105, and is configured to store a content-based signature for a postponed Decref transaction. An illustrative code sketch of these structures is provided following the example process below.
- 1. On receipt of a small write IO request, the content-based
signature 402, e.g., hash digest, hash handle, or other content-based signature, of the data page targeted by the small write IO request may be used as an index into the decref hash table 400: -
- a. If the content-based
signature 402 for the target data page already exists in the decref hash table 400, increment theinflight write count 404 corresponding to that content-basedsignature 402. - b. If the content-based
signature 402 doesn't exist in the decref hash table 400, add an entry for the content-basedsignature 402 in the decref hash table 400, set the correspondinginflight write count 404 to 1, and clear the decref postponedflag 406.
- a. If the content-based
- 2. When a Decref request is issued by
storage controller 108, the Decref request is either executed or postponed according to the following logic: -
- 1. If the content-based
signature 402 is found in the decref hash table 400, e.g., there are inflight small write IO requests targeting the data page corresponding to that content-based signature 402:- a. If the decref postponed
flag 406 is cleared:- i. Add the Decref request to the
decrefjournal 118 for later execution (e.g., adding the content-based signature associated with the decref request as an entry in the decref journal 118). - ii. Set the decref postponed
flag 406 corresponding to the content-basedsignature 402 in the decref hash table 400.
- i. Add the Decref request to the
- b. Else (i.e. decref postponed
flag 406 is already set):- i. Execute the Decref request by decrementing the
reference count 114 for the data page corresponding to the content-basedsignature 402.
- i. Execute the Decref request by decrementing the
- a. If the decref postponed
- 1. If the content-based
- In some embodiments, when the decref postponed
flag 406 is already set, additional Decref requests may also be written to thedecref journal 118, e.g., accumulated for later execution in decrementing thereference count 114 of the target data page corresponding to the content-basedsignature 402. - 3. On completion of a small write IO Request:
-
- a. Decrement the
inflight write count 404 in the decref hash table 400 corresponding to the content-basedsignature 402 of the data page targeted by the small write IO request. - b. If
inflight write count 404 is decremented to zero or another predetermined value, and decref postponedflag 406 is set (i.e. a Decref request was postponed):- i. Remove the Decref request from the
decref journal 118. - ii. Remove the hash table entry of the decref hash table 400 corresponding to the content-based
signature 402 of the data page. - iii. Execute the Decref request for the target data page corresponding to the content-based
signature 402, e.g., decrementing thereference count 114 for the target data page instorage devices 106.
- i. Remove the Decref request from the
- a. Decrement the
- 4. On recovery (e.g., after a system failure due to power outage or other event): restore the decref hash table 400 by:
-
- a. Resetting the decref hash table 400, e.g., by clearing out any data stored in the hash table or resetting the decref hash table 400 to its original initialization state.
- b. Analyzing any recovered write cache transactions, inserting the corresponding content-based
signatures 402 to be protected into the decref hash table 400, e.g., the content-based signatures targeted by any recovered small write IO requests, and incrementing the correspondinginflight write count 404 for each small write IO request targeting a corresponding content-basedsignature 402. - c. Analyze any recovered Decref request entries from
decrefjournal 118 and set the corresponding Decref postponeflag 406 in the decref hash table 400.
- The
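- By way of illustration only, the following Python sketch models steps 1 through 4 above in simplified form. The structure names mirror the decref hash table 400, decref journal 118 and reference counts 114, but the data types, the immediate-execution path for signatures with no inflight writes, and the omission of persistence and locking are assumptions of this sketch rather than features of the described embodiments.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    inflight_writes: int = 0        # models inflight write count 404
    decref_postponed: bool = False  # models decref postponed flag 406

table: dict[bytes, Entry] = {}      # models decref hash table 400
journal: set[bytes] = set()         # models decref journal 118 (persistence elided)
ref_count: dict[bytes, int] = {}    # models reference counts 114

def execute_decref(sig: bytes) -> None:
    ref_count[sig] -= 1             # page release at zero is elided here

def on_small_write(sig: bytes) -> None:                  # step 1
    table.setdefault(sig, Entry()).inflight_writes += 1

def on_decref(sig: bytes) -> None:                       # step 2
    entry = table.get(sig)
    if entry is None:
        execute_decref(sig)         # assumed normal path: no inflight writes
    elif not entry.decref_postponed:
        journal.add(sig)            # postpone only the first Decref request
        entry.decref_postponed = True
    else:
        execute_decref(sig)         # one withheld Decref keeps the count above zero

def on_write_complete(sig: bytes) -> None:               # step 3
    entry = table[sig]
    entry.inflight_writes -= 1
    if entry.inflight_writes == 0:
        if entry.decref_postponed:
            journal.discard(sig)
            execute_decref(sig)     # postponed Decref finally executes
        del table[sig]

def on_recovery(cached_sigs, journaled_sigs) -> None:    # step 4
    table.clear()
    for sig in cached_sigs:
        table.setdefault(sig, Entry()).inflight_writes += 1
    for sig in journaled_sigs:
        if sig in table:
            table[sig].decref_postponed = True
```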
decref protection logic 116 described above guarantees that a data page is not removed (i.e. decremented to zero or another predetermined value) until all inflight small write IO request transactions referencing it are successfully completed, and thus guarantees the consistency of the second stage of the IO flow. In addition, since the protection occurs in response to a Decref request instead of for each IO request, waste of processing resources may be reduced and IO latency may be preserved. - The above-described decrement protection of reference counts for inflight small write requests functionality of the
storage controller 108 is carried out under the control of the decref protection logic 116 of the storage controller 108, operating in conjunction with corresponding control 108C and routing 108R modules, to access the data modules 108D. The modules of the storage controller 108 therefore collectively implement an illustrative process for decrement protection of reference counts for inflight small write requests of content addressable storage system 105.
- It should also be understood that the particular arrangement of storage
controller processing modules of the FIG. 1 embodiment is presented by way of example only. Numerous alternative arrangements of processing modules of a distributed storage controller may be used to implement functionality for decrement protection of reference counts for inflight small write requests in a clustered storage system in other embodiments.
addressable storage system 105, thestorage controller 108 in other embodiments can be implemented at least in part within thecomputer system 101, in another system component, or as a stand-alone component coupled to thenetwork 104. - The
computer system 101 and contentaddressable storage system 105 in theFIG. 1 embodiment are assumed to be implemented using at least one processing platform each comprising one or more processing devices each having a processor coupled to a memory. Such processing devices can illustratively include particular arrangements of compute, storage and network resources. For example, processing devices in some embodiments are implemented at least in part utilizing virtual resources such as VMs or Linux containers (LXCs), or combinations of both as in an arrangement in which Docker containers or other types of LXCs are configured to run on VMs. - As a more particular example, the
storage controller 108 can be implemented in the form of one or more LXCs running on one or more VMs. Other arrangements of one or more processing devices of a processing platform can be used to implement the storage controller 108. Other portions of the system 100 can similarly be implemented using one or more processing devices of at least one processing platform.
- The
computer system 101 and the content addressable storage system 105 may be implemented on respective distinct processing platforms, although numerous other arrangements are possible. For example, in some embodiments, at least portions of the computer system 101 and the content addressable storage system 105 are implemented on the same processing platform. The content addressable storage system 105 can therefore be implemented at least in part within at least one processing platform that implements at least a subset of the compute nodes 102.
- The term “processing platform” as used herein is intended to be broadly construed so as to encompass, by way of illustration and without limitation, multiple sets of processing devices and associated storage systems that are configured to communicate over one or more networks. For example, distributed implementations of the
system 100 are possible, in which certain components of the system reside in one data center in a first geographic location while other components of the cluster reside in one or more other data centers in one or more other geographic locations that are potentially remote from the first geographic location. Thus, it is possible in some implementations of the system 100 for different ones of the compute nodes 102 to reside in different data centers than the content addressable storage system 105. Numerous other distributed implementations of one or both of the computer system 101 and the content addressable storage system 105 are possible. Accordingly, the content addressable storage system 105 can also be implemented in a distributed manner across multiple data centers.
- Accordingly, different numbers, types and arrangements of system components such as
computer system 101, compute nodes 102, network 104, content addressable storage system 105, storage devices 106, storage controller 108 and storage nodes 115 can be used in other embodiments.
system 100 as illustrated inFIG. 1 are presented by way of example only. In other embodiments, only subsets of these components, or additional or alternative sets of components, may be used, and such components may exhibit alternative functionality and configurations. For example, as indicated previously, in some illustrative embodiments a given content addressable storage system or other type of storage system with functionality for decrement protection of reference counts for inflight small write requests can be offered to cloud infrastructure customers or other users as a PaaS offering. - Additional details of illustrative embodiments will be described below with reference to the flow diagrams of
FIGS. 5A-5C .FIGS. 5A-5C more particularly show example processes for decrement protection of reference counts for inflight small write requests implemented in storage system such as contentaddressable storage system 105 of theFIG. 1 embodiment. The contentaddressable storage system 105 may comprise a scale-out all-flash storage array such as an XtremIO™ storage array. A given such storage array can be configured to provide storage redundancy using well-known RAID techniques such as RAID 5 or RAID 6, although other storage redundancy configurations can be used. - The term “storage system” as used herein is therefore intended to be broadly construed, and should not be viewed as being limited to content addressable storage systems or flash-based storage systems.
- The storage devices of such a storage system illustratively implement a plurality of LUNs configured to store files, blocks, objects or other arrangements of data.
- A given storage system can be implemented using at least one processing platform each comprising one or more processing devices each having a processor coupled to a memory. Such processing devices can illustratively include particular arrangements of compute, storage and network resources. For example, processing devices in some embodiments are implemented at least in part utilizing virtual resources such as VMs or LXCs, or combinations of both as in an arrangement in which Docker containers or other types of LXCs are configured to run on VMs.
- As a more particular example, components of a distributed storage controller can each be implemented in the form of one or more LXCs running on one or more VMs. Other arrangements of one or more processing devices of a processing platform can be used to implement a distributed storage controller and/or its components. Other portions of the
information processing system 100 can similarly be implemented using one or more processing devices of at least one processing platform. - The term “processing platform” as used herein is intended to be broadly construed so as to encompass, by way of illustration and without limitation, multiple sets of processing devices and associated storage systems that are configured to communicate over one or more networks.
- The operation of the
information processing system 100 will now be further described with reference to the flow diagrams of the illustrative embodiment ofFIGS. 5A-5C . The process as shown inFIG. 5A includessteps 502 through 508 and illustrates a synchronous portion of the small write request, e.g., the temporary storage of the data segment associated with the small write request incache 109. The process as shown inFIG. 5B includessteps 510 through 518 and illustrates the functionality that occurs when a decref request is received. The process as shown inFIG. 5C includessteps 520 through 532 and illustrates an asynchronous portion of the small write request, e.g., the destaging of the data segment associated with the small write request fromcache 109 intostorage devices 106. The processes shown inFIGS. 5A-5C are suitable for use in thesystem 100 but is more generally applicable to other types of information processing systems each comprising one or more storage systems. The steps are illustratively performed by cooperative interaction of control logic instances of processing modules of a distributed storage controller. A given such storage controller can therefore comprise a distributed storage controller implemented in the manner illustrated inFIGS. 1-4 . - With reference now to
FIG. 5A, the synchronous portion of the small write request will now be described.
- In
step 502, small write IO requests are received by storage controller 108, for example, from computer system 101 or other host devices. The small write IO requests may include write requests for data segments that are smaller than the page granularity of the storage devices 106. In some embodiments, the storage controller 108 may generate one or more IO threads to service the small write IO requests.
- In
step 504, the content-based signatures of data pages targeted by the received small write IO requests may be determined, for example, as described above. - In
step 506, the IO threads may store the data segments included in the small write IO requests in cache 109.
- In
step 508, the inflight write count 404 stored in decref hash table 400 may be incremented for the content-based signatures corresponding to any data pages targeted by the small write IO requests.
- With reference now to
FIG. 5B, the decref request functionality will now be described.
- In
step 510, storage controller 108 may determine whether a decrement request has been issued. In some embodiments, the decrement request may be issued by the controller 108 in response to another operation. In some embodiments, the decrement request may be issued by another controller associated with storage controller 108 and received by storage controller 108, e.g., as part of a distributed system. If no decrement request has been issued, the process ends.
- In
step 512, in response to a decrement request being issued, the storage controller determines whether the decref postponed flag 406 has been set for the corresponding content-based signature 402 in decref hash table 400.
- In
step 514, if the decref postponed flag 406 for the corresponding content-based signature has not been set in decref hash table 400, the decref postponed flag 406 is set, and in step 516 the decrement request is postponed and the process ends.
- In
step 518, if the decref postponed flag 406 was determined to already be set in step 512, the decrement request is executed, e.g., the reference count is decremented, and the process ends.
- With reference now to
FIG. 5C, the asynchronous portion of the small write request will now be described.
- In
step 520, storage controller 108 completes an inflight write IO request, e.g., by performing destaging on the data segment associated with the inflight small write IO request that is stored in the cache 109. For example, the data segment associated with the small write IO request is combined with the target data page and the combined data page may be persisted in storage devices 106.
- In
step 522, in response to completion of a small write IO request, the storage controller 108 decrements the inflight write count 404 stored in decref hash table 400 at the content-based signature 402 corresponding to the target data page associated with the completed destaged small write IO request.
- In
step 524, the storage controller 108 determines whether the inflight write count 404 has been decremented to zero or to another predetermined value. If the inflight write count 404 has not been decremented to zero or to another predetermined value, the process ends.
- In
step 526, if the inflight write count 404 has been decremented to zero or to another predetermined value, storage controller 108 determines whether the decref postponed flag 406 for the corresponding content-based signature is set.
- In
step 528, the storage controller 108 removes the entry corresponding to the decref request from the decref journal 118, e.g., the content-based signature of the target data page may be removed from the decref journal 118.
- In
step 530, if the decref postponed flag 406 for the corresponding content-based signature is set, the storage controller 108 executes the decrement request.
- In
step 532, the storage controller 108 removes the hash table entry included in the decref hash table 400 for the corresponding content-based signature. The process then ends.
- Referring back to step 526, if the decref postponed
flag 406 is not set, the process proceeds to step 532 and the storage controller 108 removes the hash table entry included in the decref hash table 400 for the corresponding content-based signature.
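- For illustration, the following self-contained sketch simulates one pass through the sequence of FIGS. 5A-5C for a single content-based signature, under the same simplifying assumptions as the earlier sketch.

```python
# Self-contained simulation of one pass through FIGS. 5A-5C for a single
# content-based signature; names and types are illustrative assumptions.
state: dict[str, list] = {}   # signature -> [inflight_writes, decref_postponed]
journal: set[str] = set()     # postponed decrement requests
refs = {"sig": 2}             # reference count of the target data page

def fig_5a(sig: str) -> None:           # steps 502-508: cache segment, count write
    state.setdefault(sig, [0, False])[0] += 1

def fig_5b(sig: str) -> None:           # steps 510-518: postpone or execute decref
    if sig in state and not state[sig][1]:
        journal.add(sig)
        state[sig][1] = True            # first decref is postponed
    else:
        refs[sig] -= 1                  # executed immediately

def fig_5c(sig: str) -> None:           # steps 520-532: destage and drain
    entry = state[sig]
    entry[0] -= 1
    if entry[0] == 0:
        if entry[1]:
            journal.discard(sig)
            refs[sig] -= 1              # postponed decref finally executes
        del state[sig]

fig_5a("sig")
fig_5b("sig")
assert refs["sig"] == 2                 # still protected while write is inflight
fig_5c("sig")
assert refs["sig"] == 1 and "sig" not in state and not journal
```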
- It is also to be appreciated that the processes of FIGS. 5A-5C and other features and functionality for decrement protection of reference counts for inflight small write requests as described above can be adapted for use with other types of information systems, including by way of example an information processing system in which the host devices and the storage system are both implemented on the same processing platform.
- The particular processing operations and other system functionality described in conjunction with the flow diagrams of
FIGS. 5A-5C are presented by way of illustrative example only and should not be construed as limiting the scope of the disclosure in any way. Alternative embodiments can use other types of processing operations for implementing decrement protection of reference counts for inflight small write requests. For example, the ordering of the process steps may be varied in other embodiments, or certain steps may be performed at least in part concurrently with one another rather than serially. Also, one or more of the process steps may be repeated periodically, or multiple instances of the process can be performed in parallel with one another in order to implement a plurality of different process instances for decrement protection of reference counts for inflight small write requests for respective different storage systems or portions thereof within a given information processing system. - Functionality such as that described in conjunction with the flow diagrams of
FIGS. 5A-5C can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device such as a computer or server. As will be described below, a memory or other storage device having executable program code of one or more software programs embodied therein is an example of what is more generally referred to herein as a “processor-readable storage medium.” - For example, a storage controller such as
storage controller 108 that is configured to control performance of one or more steps of the processes of FIGS. 5A-5C can be implemented as part of what is more generally referred to herein as a processing platform comprising one or more processing devices each comprising a processor coupled to a memory. A given such processing device may correspond to one or more virtual machines or other types of virtualization infrastructure such as Docker containers or other types of LXCs. The storage controller 108, as well as other system components, may be implemented at least in part using processing devices of such processing platforms. For example, in a distributed implementation of the storage controller 108, respective distributed modules of such a storage controller can be implemented in respective LXCs running on respective ones of the processing devices of a processing platform.
- As described previously, in the context of an XtremIO™ storage array, the
control modules 108C, data modules 108D, routing modules 108R and management module(s) 108M of the distributed storage controller 108 in system 100 illustratively comprise C-modules, D-modules, R-modules and SYM module(s), respectively. These exemplary processing modules of the distributed storage controller 108 can be configured to implement functionality for decrement protection of reference counts for inflight small write requests in accordance with the processes of FIGS. 5A-5C.
- In addition, the above-described functionality associated with C-module, D-module, R-module and decref protection logic components of an XtremIO™ storage array can be incorporated into other processing modules or components of a centralized or distributed storage controller in other types of storage systems.
- Illustrative embodiments of content addressable storage systems or other types of storage systems with functionality for decrement protection of reference counts for inflight small write requests as disclosed herein can provide a number of significant advantages relative to conventional arrangements.
- For example, some embodiments can advantageously inhibit the deletion of data pages that are required for inflight write IO requests which prevents data loss. In addition, some embodiments can advantageously reduce IO processing waste and latency, for example, by removing the need to increment the reference count for every data page having an associated pending write IO request and instead only postponing decrement requests specifically targeting data pages with inflight write IO requests.
- These and other embodiments include clustered storage systems comprising storage controllers that are distributed over multiple storage nodes. Similar advantages can be provided in other types of storage systems.
- It is to be appreciated that the particular advantages described above and elsewhere herein are associated with particular illustrative embodiments and need not be present in other embodiments. Also, the particular types of information processing system features and functionality as illustrated in the drawings and described above are exemplary only, and numerous other arrangements may be used in other embodiments.
- As mentioned previously, at least portions of the
information processing system 100 may be implemented using one or more processing platforms. A given such processing platform comprises at least one processing device comprising a processor coupled to a memory. The processor and memory in some embodiments comprise respective processor and memory elements of a virtual machine or container provided using one or more underlying physical machines. The term “processing device” as used herein is intended to be broadly construed so as to encompass a wide variety of different arrangements of physical processors, memories and other device components as well as virtual instances of such components. For example, a “processing device” in some embodiments can comprise or be executed across one or more virtual processors. Processing devices can therefore be physical or virtual and can be executed across one or more physical or virtual processors. It should also be noted that a given virtual device can be mapped to a portion of a physical one. - Some illustrative embodiments of a processing platform that may be used to implement at least a portion of an information processing system comprise cloud infrastructure including virtual machines implemented using a hypervisor that runs on physical infrastructure. The cloud infrastructure further comprises sets of applications running on respective ones of the virtual machines under the control of the hypervisor. It is also possible to use multiple hypervisors each providing a set of virtual machines using at least one underlying physical machine. Different sets of virtual machines provided by one or more hypervisors may be utilized in configuring multiple instances of various components of the system.
- These and other types of cloud infrastructure can be used to provide what is also referred to herein as a multi-tenant environment. One or more system components such as
storage system 105, or portions thereof, are illustratively implemented for use by tenants of such a multi-tenant environment. - As mentioned previously, cloud infrastructure as disclosed herein can include cloud-based systems such as AWS, GCP and Microsoft Azure. Virtual machines provided in such systems can be used to implement at least portions of one or more of a computer system and a content addressable storage system in illustrative embodiments. These and other cloud-based systems in illustrative embodiments can include object stores such as Amazon S3, GCP Cloud Storage, and Microsoft Azure Blob Storage.
- In some embodiments, the cloud infrastructure additionally or alternatively comprises a plurality of containers implemented using container host devices. For example, a given container of cloud infrastructure illustratively comprises a Docker container or other type of LXC. The containers may run on virtual machines in a multi-tenant environment, although other arrangements are possible. The containers may be utilized to implement a variety of different types of functionality within the
system 100. For example, containers can be used to implement respective processing devices providing compute and/or storage services of a cloud-based system. Again, containers may be used in combination with other virtualization infrastructure such as virtual machines implemented using a hypervisor. - Illustrative embodiments of processing platforms will now be described in greater detail with reference to
FIGS. 6 and 7. Although described in the context of system 100, these platforms may also be used to implement at least portions of other information processing systems in other embodiments.
- FIG. 6 shows an example processing platform comprising cloud infrastructure 600. The cloud infrastructure 600 comprises a combination of physical and virtual processing resources that may be utilized to implement at least a portion of the information processing system 100. The cloud infrastructure 600 comprises multiple virtual machines (VMs) and/or container sets 602-1, 602-2, . . . 602-L implemented using virtualization infrastructure 604. The virtualization infrastructure 604 runs on physical infrastructure 605, and illustratively comprises one or more hypervisors and/or operating system level virtualization infrastructure. The operating system level virtualization infrastructure illustratively comprises kernel control groups of a Linux operating system or other type of operating system.
- The
cloud infrastructure 600 further comprises sets of applications 610-1, 610-2, . . . 610-L running on respective ones of the VMs/container sets 602-1, 602-2, . . . 602-L under the control of the virtualization infrastructure 604. The VMs/container sets 602 may comprise respective VMs, respective sets of one or more containers, or respective sets of one or more containers running in VMs.
- In some implementations of the
FIG. 6 embodiment, the VMs/container sets 602 comprise respective VMs implemented using virtualization infrastructure 604 that comprises at least one hypervisor. Such implementations can provide decrement protection functionality of the type described above for one or more processes running on a given one of the VMs. For example, each of the VMs can implement decrement protection logic for one or more processes running on that particular VM.
- An example of a hypervisor platform that may be used to implement a hypervisor within the
virtualization infrastructure 604 is the VMware® vSphere® which may have an associated virtual infrastructure management system such as the VMware® vCenter™. The underlying physical machines may comprise one or more distributed processing platforms that include one or more storage systems. - In other implementations of the
FIG. 6 embodiment, the VMs/container sets 602 comprise respective containers implemented using virtualization infrastructure 604 that provides operating system level virtualization functionality, such as support for Docker containers running on bare metal hosts, or Docker containers running on VMs. The containers are illustratively implemented using respective kernel control groups of the operating system. Such implementations can provide decrement protection functionality of the type described above for one or more processes running on different ones of the containers. For example, a container host device supporting multiple containers of one or more container sets can implement one or more instances of decref protection logic for use in protecting reference counts of data pages targeted by inflight small write requests.
system 100 may each run on a computer, server, storage device or other processing platform element. A given such element may be viewed as an example of what is more generally referred to herein as a “processing device.” The cloud infrastructure 600 shown in FIG. 6 may represent at least a portion of one processing platform. Another example of such a processing platform is processing platform 700 shown in FIG. 7.
- The
processing platform 700 in this embodiment comprises a portion of system 100 and includes a plurality of processing devices, denoted 702-1, 702-2, 702-3, . . . 702-K, which communicate with one another over a network 704.
- The
network 704 may comprise any type of network, including by way of example a global computer network such as the Internet, a WAN, a LAN, a satellite network, a telephone or cable network, a cellular network, a wireless network such as a WiFi or WiMAX network, or various portions or combinations of these and other types of networks. - The processing device 702-1 in the
processing platform 700 comprises a processor 710 coupled to a memory 712.
- The
processor 710 may comprise a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other type of processing circuitry, as well as portions or combinations of such circuitry elements. - The
memory 712 may comprise random access memory (RAM), read-only memory (ROM) or other types of memory, in any combination. The memory 712 and other memories disclosed herein should be viewed as illustrative examples of what are more generally referred to as “processor-readable storage media” storing executable program code of one or more software programs.
- Also included in the processing device 702-1 is
network interface circuitry 714, which is used to interface the processing device with the network 704 and other system components, and may comprise conventional transceivers.
- The
other processing devices 702 of the processing platform 700 are assumed to be configured in a manner similar to that shown for processing device 702-1 in the figure.
- Again, the
particular processing platform 700 shown in the figure is presented by way of example only, and system 100 may include additional or alternative processing platforms, as well as numerous distinct processing platforms in any combination, with each such platform comprising one or more computers, servers, storage devices or other processing devices.
- As another example, portions of a given processing platform in some embodiments can comprise converged infrastructure such as VxRail™, VxRack™, VxRack™ FLEX, VxBlock™ or Vblock® converged infrastructure from VCE, the Virtual Computing Environment Company, now the Converged Platform and Solutions Division of Dell EMC.
- It should therefore be understood that, in other embodiments, different arrangements of additional or alternative elements may be used. At least a subset of these elements may be collectively implemented on a common processing platform, or each such element may be implemented on a separate processing platform.
- Also, numerous other arrangements of computers, servers, storage devices or other components are possible in the
information processing system 100. Such components can communicate with other elements of the information processing system 100 over any type of network or other communication media.
- As indicated previously, components of an information processing system as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device. For example, at least portions of the functionality of one or more components of the
storage controller 108 of system 100 are illustratively implemented in the form of software running on one or more processing devices.
- It should again be emphasized that the above-described embodiments are presented for purposes of illustration only. Many variations and other alternative embodiments may be used. For example, the disclosed techniques are applicable to a wide variety of other types of information processing systems, storage systems, storage nodes, storage devices, storage controllers, processing modules, decrement protection processes and associated control logic. Also, the particular configurations of system and device elements and associated processing operations illustratively shown in the drawings can be varied in other embodiments. Moreover, the various assumptions made above in the course of describing the illustrative embodiments should also be viewed as exemplary rather than as requirements or limitations of the disclosure. Numerous other alternative embodiments within the scope of the appended claims will be readily apparent to those skilled in the art.
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/040,231 US10558613B1 (en) | 2018-07-19 | 2018-07-19 | Storage system with decrement protection of reference counts |
US16/732,976 US10942895B2 (en) | 2018-07-19 | 2020-01-02 | Storage system with decrement protection of reference counts |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/040,231 US10558613B1 (en) | 2018-07-19 | 2018-07-19 | Storage system with decrement protection of reference counts |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/732,976 Continuation US10942895B2 (en) | 2018-07-19 | 2020-01-02 | Storage system with decrement protection of reference counts |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200026779A1 true US20200026779A1 (en) | 2020-01-23 |
US10558613B1 US10558613B1 (en) | 2020-02-11 |
Family
ID=69163050
Family Applications (2)
Application Number | Status | Priority Date | Filing Date | Title |
---|---|---|---|---|
US16/040,231 (US10558613B1) | Active (anticipated expiration 2038-08-10) | 2018-07-19 | 2018-07-19 | Storage system with decrement protection of reference counts |
US16/732,976 (US10942895B2) | Active | 2018-07-19 | 2020-01-02 | Storage system with decrement protection of reference counts |
Family Applications After (1)
Application Number | Status | Priority Date | Filing Date | Title |
---|---|---|---|---|
US16/732,976 (US10942895B2) | Active | 2018-07-19 | 2020-01-02 | Storage system with decrement protection of reference counts |
Country Status (1)
Country | Link |
---|---|
US (2) | US10558613B1 (en) |
Families Citing this family (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10884650B1 (en) | 2017-10-25 | 2021-01-05 | EMC IP Holding Company LLC | Opportunistic compression of replicated data in a content addressable storage system |
US10956078B2 (en) | 2018-03-27 | 2021-03-23 | EMC IP Holding Company LLC | Storage system with loopback replication process providing object-dependent slice assignment |
US10866969B2 (en) | 2018-03-28 | 2020-12-15 | EMC IP Holding Company LLC | Storage system with loopback replication process providing unique identifiers for collision-free object pairing |
US10983962B2 (en) | 2018-05-29 | 2021-04-20 | EMC IP Holding Company LLC | Processing device utilizing polynomial-based signature subspace for efficient generation of deduplication estimate |
US10977216B2 (en) | 2018-05-29 | 2021-04-13 | EMC IP Holding Company LLC | Processing device utilizing content-based signature prefix for efficient generation of deduplication estimate |
US11593313B2 (en) | 2018-05-29 | 2023-02-28 | EMC IP Holding Company LLC | Processing device configured for efficient generation of data reduction estimates for combinations of datasets |
US11609883B2 (en) | 2018-05-29 | 2023-03-21 | EMC IP Holding Company LLC | Processing device configured for efficient generation of compression estimates for datasets |
US10826990B2 (en) | 2018-07-23 | 2020-11-03 | EMC IP Holding Company LLC | Clustered storage system configured for bandwidth efficient processing of writes at sizes below a native page size |
US10684915B2 (en) * | 2018-07-25 | 2020-06-16 | EMC IP Holding Company LLC | Efficient packing of compressed data in storage system implementing data striping |
US10635533B2 (en) | 2018-07-30 | 2020-04-28 | EMC IP Holding Company LLC | Efficient computation of parity data in storage system implementing data striping |
US10754559B1 (en) | 2019-03-08 | 2020-08-25 | EMC IP Holding Company LLC | Active-active storage clustering with clock synchronization |
US11137929B2 (en) | 2019-06-21 | 2021-10-05 | EMC IP Holding Company LLC | Storage system configured to support cascade replication |
US11249654B2 (en) | 2020-02-18 | 2022-02-15 | EMC IP Holding Company LLC | Storage system with efficient data and parity distribution across mixed-capacity storage devices |
US11144232B2 (en) | 2020-02-21 | 2021-10-12 | EMC IP Holding Company LLC | Storage system with efficient snapshot pair creation during synchronous replication of logical storage volumes |
US11079969B1 (en) | 2020-02-25 | 2021-08-03 | EMC IP Holding Company LLC | Disk array enclosure configured for metadata and data storage processing |
US11281386B2 (en) | 2020-02-25 | 2022-03-22 | EMC IP Holding Company LLC | Disk array enclosure with metadata journal |
US11061618B1 (en) | 2020-02-25 | 2021-07-13 | EMC IP Holding Company LLC | Disk array enclosure configured to determine metadata page location based on metadata identifier |
US11144461B2 (en) | 2020-03-09 | 2021-10-12 | EMC IP Holding Company LLC | Bandwidth efficient access to persistent storage in a distributed storage system |
US11010251B1 (en) | 2020-03-10 | 2021-05-18 | EMC IP Holding Company LLC | Metadata update journal destaging with preload phase for efficient metadata recovery in a distributed storage system |
US11157198B2 (en) | 2020-03-12 | 2021-10-26 | EMC IP Holding Company LLC | Generating merge-friendly sequential IO patterns in shared logger page descriptor tiers |
US11157177B2 (en) | 2020-03-16 | 2021-10-26 | EMC IP Holding Company LLC | Hiccup-less failback and journal recovery in an active-active storage system |
US11126361B1 (en) | 2020-03-16 | 2021-09-21 | EMC IP Holding Company LLC | Multi-level bucket aggregation for journal destaging in a distributed storage system |
US11194664B2 (en) | 2020-04-20 | 2021-12-07 | EMC IP Holding Company LLC | Storage system configured to guarantee sufficient capacity for a distributed raid rebuild process |
US11169880B1 (en) | 2020-04-20 | 2021-11-09 | EMC IP Holding Company LLC | Storage system configured to guarantee sufficient capacity for a distributed raid rebuild process |
US11494301B2 (en) | 2020-05-12 | 2022-11-08 | EMC IP Holding Company LLC | Storage system journal ownership mechanism |
US11392295B2 (en) | 2020-05-27 | 2022-07-19 | EMC IP Holding Company LLC | Front-end offload of storage system processing |
US11093161B1 (en) | 2020-06-01 | 2021-08-17 | EMC IP Holding Company LLC | Storage system with module affinity link selection for synchronous replication of logical storage volumes |
US11513882B2 (en) | 2020-06-08 | 2022-11-29 | EMC IP Holding Company LLC | Dynamic modification of IO shaping mechanisms of multiple storage nodes in a distributed storage system |
US11886911B2 (en) | 2020-06-29 | 2024-01-30 | EMC IP Holding Company LLC | End-to-end quality of service mechanism for storage system using prioritized thread queues |
US11327812B1 (en) | 2020-10-19 | 2022-05-10 | EMC IP Holding Company LLC | Distributed storage system with per-core rebalancing of thread queues |
US11436138B2 (en) | 2020-10-21 | 2022-09-06 | EMC IP Holding Company LLC | Adaptive endurance tuning of solid-state storage system |
US11853568B2 (en) | 2020-10-21 | 2023-12-26 | EMC IP Holding Company LLC | Front-end offload of storage system hash and compression processing |
US11531470B2 (en) | 2020-10-21 | 2022-12-20 | EMC IP Holding Company LLC | Offload of storage system data recovery to storage devices |
US11616722B2 (en) | 2020-10-22 | 2023-03-28 | EMC IP Holding Company LLC | Storage system with adaptive flow control using multiple feedback loops |
US11314416B1 (en) | 2020-10-23 | 2022-04-26 | EMC IP Holding Company LLC | Defragmentation of striped volume in data storage system |
US11687245B2 (en) | 2020-11-19 | 2023-06-27 | EMC IP Holding Company LLC | Dynamic slice assignment in a distributed storage system |
US11435921B2 (en) | 2020-11-19 | 2022-09-06 | EMC IP Holding Company LLC | Selective deduplication in a distributed storage system |
US11494405B2 (en) | 2020-12-21 | 2022-11-08 | EMC IP Holding Company LLC | Lock contention resolution for active-active replication performed in conjunction with journal recovery |
US11481291B2 (en) | 2021-01-12 | 2022-10-25 | EMC IP Holding Company LLC | Alternative storage node communication channel using storage devices group in a distributed storage system |
US11875198B2 (en) | 2021-03-22 | 2024-01-16 | EMC IP Holding Company LLC | Synchronization object issue detection using object type queues and associated monitor threads in a storage system |
US11520527B1 (en) | 2021-06-11 | 2022-12-06 | EMC IP Holding Company LLC | Persistent metadata storage in a storage system |
US11775202B2 (en) | 2021-07-12 | 2023-10-03 | EMC IP Holding Company LLC | Read stream identification in a distributed storage system |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7484096B1 (en) | 2003-05-28 | 2009-01-27 | Microsoft Corporation | Data validation using signatures and sampling |
US7444464B2 (en) | 2004-11-08 | 2008-10-28 | Emc Corporation | Content addressed storage device configured to maintain content address mapping |
US20070283117A1 (en) * | 2006-06-05 | 2007-12-06 | Microsoft Corporation | Unmanaged memory accessor |
US7788464B2 (en) * | 2006-12-22 | 2010-08-31 | Microsoft Corporation | Scalability of virtual TLBs for multi-processor virtual machines |
US8295615B2 (en) | 2007-05-10 | 2012-10-23 | International Business Machines Corporation | Selective compression of synchronized content based on a calculated compression ratio |
US8095726B1 (en) | 2008-03-31 | 2012-01-10 | Emc Corporation | Associating an identifier with a content unit |
US9495382B2 (en) | 2008-12-10 | 2016-11-15 | Commvault Systems, Inc. | Systems and methods for performing discrete data replication |
US8214612B1 (en) | 2009-09-28 | 2012-07-03 | Emc Corporation | Ensuring consistency of replicated volumes |
US9678968B1 (en) * | 2010-05-03 | 2017-06-13 | Panzura, Inc. | Deleting a file from a distributed filesystem |
US9104326B2 (en) | 2010-11-15 | 2015-08-11 | Emc Corporation | Scalable block data storage using content addressing |
EP3467832B1 (en) * | 2010-12-17 | 2020-05-20 | Everspin Technologies, Inc. | Memory controller and method for interleaving dram and mram accesses |
US10241810B2 (en) * | 2012-05-18 | 2019-03-26 | Nvidia Corporation | Instruction-optimizing processor with branch-count table in hardware |
US8977602B2 (en) | 2012-06-05 | 2015-03-10 | Oracle International Corporation | Offline verification of replicated file system |
US9152686B2 (en) | 2012-12-21 | 2015-10-06 | Zetta Inc. | Asynchronous replication correctness validation |
US8949488B2 (en) | 2013-02-15 | 2015-02-03 | Compellent Technologies | Data replication with dynamic compression |
US9639461B2 (en) * | 2013-03-15 | 2017-05-02 | Sandisk Technologies Llc | System and method of processing of duplicate data at a data storage device |
US9268806B1 (en) | 2013-07-26 | 2016-02-23 | Google Inc. | Efficient reference counting in content addressable storage |
US9208162B1 (en) | 2013-09-26 | 2015-12-08 | Emc Corporation | Generating a short hash handle |
US9001608B1 (en) * | 2013-12-06 | 2015-04-07 | Intel Corporation | Coordinating power mode switching and refresh operations in a memory device |
US9286003B1 (en) | 2013-12-31 | 2016-03-15 | Emc Corporation | Method and apparatus for creating a short hash handle highly correlated with a globally-unique hash signature |
US9606870B1 (en) | 2014-03-31 | 2017-03-28 | EMC IP Holding Company LLC | Data reduction techniques in a flash-based key/value cluster storage |
US10467246B2 (en) | 2014-11-25 | 2019-11-05 | Hewlett Packard Enterprise Development Lp | Content-based replication of data in scale out system |
CN107615252A (en) | 2015-01-05 | 2018-01-19 | 邦存科技有限公司 | Metadata management in a scale-out storage system |
US9569357B1 (en) * | 2015-01-08 | 2017-02-14 | Pure Storage, Inc. | Managing compressed data in a storage system |
US10884633B2 (en) | 2015-01-13 | 2021-01-05 | Hewlett Packard Enterprise Development Lp | System and method for optimized signature comparisons and data replication |
US9600193B2 (en) | 2015-02-04 | 2017-03-21 | Delphix Corporation | Replicating snapshots from a source storage system to a target storage system |
US10296219B2 (en) * | 2015-05-28 | 2019-05-21 | Vmware, Inc. | Data deduplication in a block-based storage system |
US10496672B2 (en) | 2015-12-30 | 2019-12-03 | EMC IP Holding Company LLC | Creating replicas at user-defined points in time |
US10402120B2 (en) * | 2016-07-15 | 2019-09-03 | Advanced Micro Devices, Inc. | Memory controller arbiter with streak and read/write transaction management |
US10235396B2 (en) * | 2016-08-29 | 2019-03-19 | International Business Machines Corporation | Workload optimized data deduplication using ghost fingerprints |
- 2018-07-19: US application US16/040,231 filed; granted as US10558613B1 (status: Active)
- 2020-01-02: US application US16/732,976 filed; granted as US10942895B2 (status: Active)
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111881211A (en) * | 2020-07-24 | 2020-11-03 | 北京浪潮数据技术有限公司 | Method, system and equipment for synchronizing storage data and computer storage medium |
CN112256206A (en) * | 2020-10-30 | 2021-01-22 | 新华三技术有限公司成都分公司 | IO processing method and device |
US20230409544A1 (en) * | 2022-06-15 | 2023-12-21 | Dell Products L.P. | Inline deduplication for ckd using hash table for ckd track meta data |
US11954079B2 (en) * | 2022-06-15 | 2024-04-09 | Dell Products L.P. | Inline deduplication for CKD using hash table for CKD track meta data |
Also Published As
Publication number | Publication date |
---|---|
US20200142859A1 (en) | 2020-05-07 |
US10942895B2 (en) | 2021-03-09 |
US10558613B1 (en) | 2020-02-11 |
Similar Documents
Publication | Title |
---|---|
US10942895B2 (en) | Storage system with decrement protection of reference counts |
US10838863B2 (en) | Storage system with write cache release protection |
US20200272542A1 (en) | Storage system with snapshot generation control utilizing monitored differentials of respective storage volumes |
US10261693B1 (en) | Storage system with decoupling and reordering of logical and physical capacity removal |
US11392551B2 (en) | Storage system utilizing content-based and address-based mappings for deduplicatable and non-deduplicatable types of data |
US10691373B2 (en) | Object headers facilitating storage of data in a write buffer of a storage system |
US10705965B2 (en) | Metadata loading in storage systems |
US10754736B2 (en) | Storage system with scanning and recovery of internal hash metadata structures |
US10831735B2 (en) | Processing device configured for efficient generation of a direct mapped hash table persisted to non-volatile block memory |
US10817385B2 (en) | Storage system with backup control utilizing content-based signatures |
US10852999B2 (en) | Storage system with decoupling of reference count updates |
US10996887B2 (en) | Clustered storage system with dynamic space assignments across processing modules to counter unbalanced conditions |
US10747677B2 (en) | Snapshot locking mechanism |
US11144461B2 (en) | Bandwidth efficient access to persistent storage in a distributed storage system |
US10296451B1 (en) | Content addressable storage system utilizing content-based and address-based mappings |
US10929047B2 (en) | Storage system with snapshot generation and/or preservation control responsive to monitored replication data |
US11126361B1 (en) | Multi-level bucket aggregation for journal destaging in a distributed storage system |
US11010251B1 (en) | Metadata update journal destaging with preload phase for efficient metadata recovery in a distributed storage system |
US10922147B2 (en) | Storage system destaging based on synchronization object with watermark |
US11645174B2 (en) | Recovery flow with reduced address lock contention in a content addressable storage system |
US11086558B2 (en) | Storage system with storage volume undelete functionality |
US11494301B2 (en) | Storage system journal ownership mechanism |
US10942654B2 (en) | Hash-based data recovery from remote storage system |
US10996871B2 (en) | Hash-based data recovery from remote storage system responsive to missing or corrupted hash digest |
US10671320B2 (en) | Clustered storage system configured with decoupling of process restart from in-flight command execution |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner: EMC IP HOLDING COMPANY LLC, MASSACHUSETTS. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHVEIDEL, VLADIMIR;KAMRAN, LIOR;BARUCH, ORAN;REEL/FRAME:046596/0261. Effective date: 2018-07-19 |
FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
AS | Assignment | Owner: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS. Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:047648/0422. Effective date: 2018-09-06. Owner: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NORTH CAROLINA. Free format text: PATENT SECURITY AGREEMENT (CREDIT);ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:047648/0346. Effective date: 2018-09-06 |
AS | Assignment | Owner: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS. Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES, INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:049452/0223. Effective date: 2019-03-20 |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |
AS | Assignment | Owner: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS. Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001. Effective date: 2020-04-09 |
AS | Assignment | Owners: EMC IP HOLDING COMPANY LLC, TEXAS; EMC CORPORATION, MASSACHUSETTS; DELL PRODUCTS L.P., TEXAS. Free format text: RELEASE OF SECURITY INTEREST AT REEL 047648 FRAME 0346;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058298/0510. Effective date: 2021-11-01 |
AS | Assignment | Owners: EMC IP HOLDING COMPANY LLC, TEXAS; EMC CORPORATION, MASSACHUSETTS; DELL PRODUCTS L.P., TEXAS. Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (047648/0422);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060160/0862. Effective date: 2022-03-29 |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |