CN116795282A - System and method for sending commands to storage devices
- Publication number
- CN116795282A (application CN202310279793.4A)
- Authority
- CN
- China
- Prior art keywords
- storage
- access
- storage device
- operation request
- access granularity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/0644—Management of space entities, e.g. partitions, extents, pools
Abstract
A method includes storing, at a computing device, access granularity criteria associated with a storage area. The method further includes receiving a storage operation request requesting access to a first portion of a storage area at a first access granularity. The method also includes sending a command from the computing device to the storage device based on the storage operation request in response to the storage operation request meeting the access granularity criteria.
Description
Cross reference to related applications
The present application claims priority to and the benefit of U.S. provisional application No. 63/322,221, entitled "CXL SSD FOR THE NEXT-GEN DATA CENTER INFRASTRUCTURE," filed on March 21, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates generally to systems and methods for sending commands to storage devices.
Background
A storage device may store data on behalf of applications executing at a computing device. During execution, an application may issue one or more commands to the storage device, and these commands may change the stored data.
The above information disclosed in this Background section is provided only to enhance understanding of the background of the present disclosure, and therefore it may contain information that does not form part of the prior art.
Disclosure of Invention
Described herein, in various embodiments, are systems, methods, and apparatuses related to sending commands to storage devices.
A method includes storing, at a computing device, access granularity criteria associated with a storage area. The method further includes receiving a storage operation request requesting access to a first portion of the storage region at a first access granularity. The method also includes sending a command from the computing device to the storage device based on the storage operation request in response to the storage operation request meeting the access granularity criteria.
A computer-readable storage device storing instructions executable by a processor to perform operations comprising storing, at a computing device, access granularity criteria associated with a storage region. The operations further include receiving a storage operation request requesting access to a first portion of the storage region at a first access granularity. The operations further comprise: in response to the storage operation request meeting the access granularity criteria, a command is sent from the computing device to the storage device based on the storage operation request.
A system includes a storage device and a computing device. The computing device is configured to store access granularity criteria associated with a storage area of the storage device. The computing device is further configured to receive a storage operation request requesting access to a first portion of the storage area at a first access granularity. The computing device is further configured to send a command to the storage device based on the storage operation request in response to the storage operation request meeting the access granularity criteria.
Drawings
The above-described and other aspects of the present technology will be better understood when the present application is read in view of the following drawings, in which like reference numerals refer to similar or identical elements:
FIG. 1 is a diagram of a system for sending commands to a storage device.
FIG. 2 is another diagram of a system for sending commands to a storage device.
FIG. 3 is another diagram of a system for sending commands to a storage device.
FIG. 4 is another diagram of a system for sending commands to a storage device.
FIG. 5 is a diagram of another system for sending commands to a storage device.
FIG. 6 is a diagram of another system for sending commands to a storage device.
FIG. 7 is a flow chart of a method for sending commands to a storage device.
FIG. 8 is a flow chart of another method for sending commands to a storage device.
FIG. 9 is a flow chart of a method for flushing a cache.
FIG. 10 is a diagram of a computing device including a computer-readable storage device having instructions executable to send commands to the storage device.
While the technology is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described. The figures may not be drawn to scale. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the technology to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present technology as defined by the appended claims.
Detailed Description
The details of one or more embodiments of the subject matter described herein are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
Various embodiments of the present disclosure now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments are shown. Indeed, this disclosure may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. The term "or" is used herein in both the alternative and the conjunctive sense, unless otherwise indicated. The terms "illustrative" and "example" are used as examples with no indication of quality level. Like numbers refer to like elements throughout. The arrows in each figure depict bi-directional data flow and/or bi-directional data flow capabilities. The terms "path," "pathway," and "route" are used interchangeably herein.
Embodiments of the present disclosure may be implemented in various ways, including as a computer program product that comprises an article of manufacture. The computer program product may include a non-transitory computer-readable storage medium storing applications, program components, scripts, source code, program code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and the like (also referred to herein as executable instructions, instructions for execution, computer program products, program code, and/or similar terms used interchangeably herein). Such non-transitory computer-readable storage media include all computer-readable media (including volatile and non-volatile media).
In one embodiment, the non-volatile computer-readable storage medium may include a floppy disk, a flexible disk, a hard disk, solid-state storage (SSS) (e.g., a solid state drive (SSD)), a solid state card (SSC), a solid state module (SSM), an enterprise flash drive, a magnetic tape, or any other non-transitory magnetic medium, and the like. The non-volatile computer-readable storage medium may also include punch cards, paper tape, optical mark sheets (or any other physical medium with a pattern of holes or other optically recognizable marks), compact disc read-only memory (CD-ROM), compact disc-rewritable (CD-RW), digital versatile discs (DVD), Blu-ray discs (BD), any other non-transitory optical medium, and the like. Such non-volatile computer-readable storage media may also include read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory (e.g., serial, NAND, NOR, and the like), multimedia memory cards (MMC), secure digital (SD) memory cards, SmartMedia cards, CompactFlash (CF) cards, memory sticks, and the like. In addition, the non-volatile computer-readable storage medium may also include conductive-bridging random access memory (CBRAM), phase-change random access memory (PRAM), ferroelectric random access memory (FeRAM), non-volatile random access memory (NVRAM), magnetoresistive random access memory (MRAM), resistive random access memory (RRAM), silicon-oxide-nitride-oxide-silicon memory (SONOS), floating junction gate random access memory (FJG RAM), Millipede memory, racetrack memory, and the like.
In one embodiment, the volatile computer-readable storage medium may include random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), fast page mode dynamic random access memory (FPM DRAM), extended data output dynamic random access memory (EDO DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), double data rate type two synchronous dynamic random access memory (DDR2 SDRAM), double data rate type three synchronous dynamic random access memory (DDR3 SDRAM), Rambus dynamic random access memory (RDRAM), twin transistor RAM (TTRAM), thyristor RAM (T-RAM), zero-capacitor RAM (Z-RAM), Rambus in-line memory modules (RIMM), dual in-line memory modules (DIMM), single in-line memory modules (SIMM), video random access memory (VRAM), cache memory (including various levels), flash memory, register memory, and the like. It should be appreciated that where an embodiment is described as using a computer-readable storage medium, other types of computer-readable storage media may be used in place of or in addition to the computer-readable storage media described above.
It should be appreciated that the various embodiments of the present disclosure may also be implemented as a method, apparatus, system, computing device, computing entity, or the like. As such, embodiments of the present disclosure may take the form of an apparatus, system, computing device, computing entity, or the like that executes instructions stored on a computer-readable storage medium to perform certain steps or operations. Accordingly, embodiments of the present disclosure may also take the form of an entirely hardware embodiment, an entirely computer program product embodiment and/or an embodiment containing a combination of computer program products and hardware performing certain steps or operations.
Embodiments of the present disclosure are described below with reference to block diagrams and flowcharts. Accordingly, it should be understood that each block of the block diagrams and flowchart illustrations may be implemented in the form of computer program products, entirely hardware embodiments, combinations of hardware and computer program products, and/or apparatus, systems, computing devices, computing entities, etc. that perform instructions, operations, steps, and similar words (e.g., executable instructions, instructions for execution, program code, etc.) on a computer readable storage medium. For example, retrieval, loading, and execution of code may be performed sequentially, such that one instruction is retrieved, loaded, and executed at a time. In some example embodiments, the retrieving, loading, and/or executing may be performed in parallel such that multiple instructions are retrieved, loaded, and/or executed together. Thus, such embodiments may result in a specially configured machine performing the steps or operations specified in the block diagrams and flowchart illustrations. Accordingly, the block diagrams and flowchart illustrations support various combinations of embodiments for performing the specified instructions, operations or steps.
Systems and methods for sending commands to storage devices are disclosed. The systems and methods may selectively send requested commands to the storage device based on access granularity in order to maintain a consistent view of data stored at the storage device.
The storage devices may support access at different granularities. For example, the storage device may support block level (e.g., 4 Kilobytes (KB), 512 bytes (B), etc.) and byte level access. The storage device may support other granularities and may support more than two granularities. Accessing storage areas at different granularities may result in corrupted data. For example, access paths of different granularity may have different cache systems. Thus, if one access path has cached a particular memory address, altering that memory address using another granularity of access may lead to consistency issues. The present disclosure provides systems and methods for controlling memory access based on granularity of access (or associated factors, such as access paths associated with granularity). Thus, the disclosed systems and methods may provide consistent access to storage devices at various granularities.
Referring to FIG. 1, a system 100 for sending commands to a storage device is shown. The system 100 may support more than one access granularity for storage commands. The system 100 includes a computing device 102 and a storage device 108. The computing device 102 includes a processor 104 and a memory device 106.
The processor 104 includes a Central Processing Unit (CPU), a Graphics Processor Unit (GPU), a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), another type of processor, or any combination thereof. The processor 104 may be implemented with a Complex Instruction Set Computer (CISC) architecture, a Reduced Instruction Set Computer (RISC) architecture, another type of computer architecture, or any combination thereof.
The memory device 106 includes volatile memory, non-volatile memory, another type of memory, or any combination thereof. Examples of volatile memory include Dynamic Random Access Memory (DRAM), static Random Access Memory (SRAM), resistive random access memory (ReRAM), and the like. Examples of non-volatile memory include read-only memory (ROM), programmable read-only memory (PROM), erasable PROM (EPROM), electrically Erasable PROM (EEPROM), flash memory, hard disk drives, and the like.
Computing device 102 may correspond to a personal computer, a mobile telephone device, a server computer, another type of computer, or any combination thereof. The storage device 108 includes volatile memory, non-volatile memory, another type of memory, or any combination thereof. In some implementations, the storage device 108 is a component of the computing device 102.
The computing device 102 is directly or indirectly connected to the storage device 108. An indirect connection refers to a connection including an intermediary device, and a direct connection refers to a connection not including an intermediary device. The connection may be wireless or wired. As will be further discussed herein, in certain examples, the computing device 102 communicates with the storage device 108 over a peripheral component interconnect express (PCIe) link (or other link) using a Compute Express Link (CXL) protocol (or another cache coherence protocol).
The memory device 106 stores access granularity criteria 116 associated with a storage area. The access granularity criteria 116 may be an association between the storage region and a first access granularity (e.g., 4KB, 64B, etc.). The access granularity criteria 116 may be placed in the memory device 106 by an application executed by the processor 104, an operating system executed by the processor 104, other software executed by the processor 104, or by another source. The storage area may correspond to a physical storage space (e.g., an address range) of the storage device 108 or to a virtualized address space that may be translated to an address of the storage device 108. In some examples, the storage area corresponds to a file or a region of a file. The association may be directly between the storage area and the first access granularity, or may be between the storage area and an attribute associated with the first access granularity. For example, the access granularity criteria 116 may associate the storage region with an access path, a protocol, or another attribute associated with the first access granularity. In a particular example, the access granularity criteria 116 correspond to a lock indicating that the storage region is to be exclusively accessed using the first granularity (or a corresponding attribute, such as an access path, protocol, etc.). In another example, the storage area may include a physical address range of the storage device 108 that may be mapped to more than one virtual storage address used by the computing device 102. Each of the more than one virtual storage addresses may be used by the computing device 102 to access the physical address range at a different granularity. The access granularity criteria 116 between the storage region and the first access granularity may correspond to locking one or more of the more than one virtual storage addresses.
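By way of a non-limiting illustration that is not part of the original application, the access granularity criteria 116 could be represented as a small record that associates a storage region with the granularity, or access path, that is currently permitted. All names and fields in the following Python sketch are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AccessGranularityCriteria:
    """Hypothetical record tying a storage region to a permitted access granularity."""
    region_start: int                            # first address of the storage region
    region_length: int                           # size of the region in bytes
    permitted_granularity: Optional[int] = None  # e.g., 4096 for block access, 64 for byte-level access
    permitted_path: Optional[str] = None         # e.g., "CXL.io/NVMe" or "CXL.mem"
    locked: bool = False                         # True once the region is locked to the permitted granularity

    def covers(self, address: int, length: int) -> bool:
        """Return True if the requested address range falls inside this region."""
        return (self.region_start <= address
                and address + length <= self.region_start + self.region_length)
```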
In operation, the processor 104 receives a storage operation request 110 (e.g., a read command, a write command, or another type of storage access command). In some examples, the storage operation request 110 may be received from an application executing at the processor 104. The storage operation request 110 indicates a first portion of the storage area. Based on (e.g., in response to) the storage operation request 110, the processor 104 may determine that the storage operation request 110 meets the access granularity criteria 116 (e.g., has an access granularity indicated as granted for the storage region, is associated with an access protocol granted access to the storage region, targets a virtual storage address that is associated with the storage region and unlocked, etc.). Based on the storage operation request 110 satisfying the access granularity criteria 116, the processor 104 may issue a command 114 to the storage device 108 based on the storage operation request 110. The command 114 may simply be the storage operation request 110 or may correspond to a translation of the storage operation request 110. The command 114 corresponds to a storage operation (e.g., read, write, etc.) that targets the first portion of the storage area at the first granularity. Thus, commands for requests meeting the stored access granularity criteria 116 may be communicated by the processor 104 to the storage device 108.
In a particular example, the storage operation request 110 indicates that a storage area is to be accessed via cxl.io using a non-volatile memory express (NVMe) protocol (e.g., access according to the protocol may be associated with a 4KB granularity). The access granularity criteria 116 may correspond to a lock indicating that a storage region (e.g., a range of physical addresses on the storage device 108) is locked to NVMe protocol (e.g., 4KB granularity) access. The processor 104 may confirm that the access granularity criteria 116 is satisfied by a first access granularity of the storage operation request 110 (e.g., an NVMe protocol command type of the storage operation request 110) and, based on this determination, issue a command 114 to the storage device 108. In this example, the command 114 may correspond to an NVMe command.
In another example, the access granularity criteria 116 may indicate that a virtual memory address range used by a load/store access path (e.g., a CXL.mem path) to access a physical address range of a storage device at 64B granularity is unlocked. The store operation request 110 may target virtual store addresses in the unlocked virtual store address range. Thus, the processor 104 may communicate the command 114 to the storage device 108. The command 114 may include a virtual memory address or a translation of a virtual memory address. Alternatively, the access granularity criteria 116 may not include information about the virtual memory address range utilized by the load/store access path (e.g., may not include a lock for that range). In this case, the processor 104 may also consider the virtual memory address to meet the access granularity criteria.
Controlling access to the storage device 108 based on the requested access granularity (or associated attributes, such as access path/protocol/etc.) may provide a mechanism for consistent access to the storage device 108 at different granularities.
FIG. 2 illustrates an example of the system 100 rejecting a storage operation request based on access granularity. In operation, the processor 104 receives a second storage operation request 210 (e.g., a read command, a write command, or another type of memory access command). In some examples, the second storage operation request 210 may be received from an application executing at the processor 104. The second storage operation request 210 indicates a second portion of the storage area. Based on (e.g., in response to) the second storage operation request 210, the processor 104 may determine that the second access granularity of the second storage operation request 210 fails to meet the access granularity criteria 116 (e.g., has an access granularity different from the granularity granted for the storage region, is associated with a protocol or access path that is not granted access to the storage region, targets a virtual address that is associated with the storage region but locked, etc.). Based on the second storage operation request 210 failing to meet the access granularity criteria 116, the processor 104 may issue a rejection indication 214 (e.g., to the application that generated the second storage operation request 210). Thus, requests that do not meet the stored access granularity criteria 116 may be denied by the processor 104.
In a particular example, the second storage operation request 210 indicates that the storage region is to be accessed using a load or store operation via CXL.mem (e.g., access according to this protocol may be associated with a 64B granularity). The access granularity criteria 116 may correspond to a lock indicating that the storage area (e.g., a range of physical addresses on the storage device 108) is locked to NVMe protocol (e.g., 4KB granularity) access. The processor 104 may determine that the access granularity criteria 116 are not satisfied by the second storage operation request 210 (e.g., by the load/store command type of the second storage operation request 210) and, based on this determination, issue the rejection indication 214.
In another example, the access granularity criteria 116 can indicate that a virtual storage address range used by the NVMe access path to access a physical address range of the storage device at block granularity is locked. The second storage operation request 210 may target a virtual storage address in the locked virtual storage address range. Thus, the processor 104 may issue the rejection indication 214.
Denying access to a memory location in the storage device 108 based on the requested access granularity (or associated attributes, such as access path, protocol, target virtual address, etc.) may prevent access to the memory location by different access paths with different cache systems. Thus, consistency of the data stored in the storage device 108 may be maintained.
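A minimal sketch of the accept/reject decision described with respect to FIGS. 1 and 2 is shown below. The request fields, the criteria table, and the return values are assumptions made for illustration only, not the claimed implementation:

```python
def handle_storage_request(request: dict, criteria: dict) -> dict:
    """Forward a request when its granularity matches the criteria stored for the
    targeted region; otherwise return a rejection indication (compare 114 and 214)."""
    in_region = (criteria["region_start"] <= request["address"]
                 < criteria["region_start"] + criteria["region_length"])
    if in_region and criteria["locked"] and request["granularity"] != criteria["permitted_granularity"]:
        # FIG. 2 case: the region is locked to a different access granularity.
        return {"type": "rejection", "reason": "region locked to another access granularity"}
    # FIG. 1 case: the access granularity criteria are met, so a command is issued.
    return {"type": "command", "payload": request}


# Example: a region locked to 4 KB (NVMe-style) access rejects a 64 B load/store request.
criteria = {"region_start": 0x1000, "region_length": 0x4000,
            "permitted_granularity": 4096, "locked": True}
print(handle_storage_request({"address": 0x2000, "granularity": 64}, criteria))    # rejection
print(handle_storage_request({"address": 0x2000, "granularity": 4096}, criteria))  # command
```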
Fig. 3 illustrates an example of the system 100 updating access granularity criteria. In operation, the processor 104 receives a request 310 to update the access granularity criteria associated with the storage area. For example, the request 310 may indicate a newly permitted access granularity, a newly unpermitted access granularity, a newly permitted access path or protocol (e.g., associated with a particular access granularity), a newly unpermitted access path or protocol, a newly unlocked virtual address (associated with a particular access granularity), a newly locked virtual address, or a combination thereof. The request 310 may be received from an application or operating system executing at the processor 104. The processor 104 stores the updated access granularity criteria 316 in the memory device 106. The updated access granularity criteria 316 are associated with the storage area.
Fig. 4 illustrates that the system 100 may send commands to the storage device 108 based on the updated access granularity criteria 316. In the illustrated example, the computing device 102 receives the second storage operation request 210. Based on the second storage operation request 210 meeting the updated access granularity criteria 316, the processor 104 issues a second command 414 to the storage device 108. The processor 104 may determine that the second storage operation request 210 meets the updated access granularity criteria 316 using the same procedure described with respect to FIG. 1 for determining that the storage operation request 110 meets the access granularity criteria 116.
Thus, the system 100 may switch between supported access granularities for particular storage areas in the storage device 108. In some implementations, once the system 100 locks a particular memory region to a particular access granularity, memory access requests of other granularities are not allowed until the lock is removed (e.g., by an application executing at the processor 104).
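Continuing the same hypothetical sketch, a request such as the request 310 might simply replace the stored record, for example re-locking the region to a different granularity. The names below are illustrative, and a real implementation would typically also drain or flush the previously permitted access path first:

```python
def update_access_criteria(criteria: dict, new_granularity: int) -> dict:
    """Hypothetical handling of an update request: re-lock the region to a new granularity.
    Flushing caches on the previously permitted access path is assumed to happen elsewhere."""
    updated = dict(criteria)
    updated["permitted_granularity"] = new_granularity
    updated["locked"] = True
    return updated


criteria = {"region_start": 0x1000, "region_length": 0x4000,
            "permitted_granularity": 4096, "locked": True}
# Switch the region from block-granularity (4 KB) access to byte-level (64 B) access.
criteria_316 = update_access_criteria(criteria, new_granularity=64)
print(criteria_316)
```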
The system 100 of fig. 1-4 may include various components in addition to those shown. For example, computing device 102 may include additional processors, communication interfaces, storage devices, output devices, and the like. Further, the storage device 108 may include a processor, a storage medium, a communication interface, and the like.
Referring to FIG. 5, a system 500 for controlling commands sent to a CXL storage device based on access granularity is illustrated. The system 500 may correspond to the system 100 described above. The system 500 includes a computing device 502 and a CXL storage device 510.
The computing device 502 executes an application 504 and an operating system 506 (e.g., at a processor such as the processor 104). The computing device 502 may correspond to the computing device 102 of FIG. 1 and may include a personal computer, a mobile device (such as a smartphone), a server computer, or another type of computing device. The application 504 may correspond to any computing application that accesses memory. In some implementations, the application 504 includes a deep learning recommendation model (DLRM) application. A DLRM application can access relatively large amounts of data (e.g., terabytes). Thus, data access at a first, relatively large granularity (e.g., 512B or 4KB blocks) may be efficient. However, some functions of a DLRM application may depend on relatively small amounts of data. Accessing a relatively small amount of data using the first access granularity may move more data than the DLRM application will use for those functions. Thus, for some functions, data access at a second, relatively smaller granularity (e.g., 64B) may be more efficient.
The operating system 506 manages the memory space accessible to the application 504. Managing the memory space can include translating between virtual addresses used by the application 504 and addresses (e.g., physical addresses or additional virtual addresses) identified by the CXL storage device 510. In some implementations, the operating system 506 sends commands of a first access granularity (e.g., NVMe commands) to the CXL storage device 510 via a first protocol (e.g., CXL.io) and commands of a second access granularity (e.g., memory load/store commands) via a second protocol (e.g., CXL.mem). Managing the memory space may also include placing locks (e.g., access criteria) on portions of memory (e.g., memory ranges, objects (such as files), etc.). In some cases, a lock may restrict all access to a portion of memory, restrict access at a particular access granularity, restrict access via a particular access protocol (e.g., NVMe, load/store, CXL.mem, CXL.io, etc.), restrict access based on another criterion, or a combination thereof.
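As a rough sketch of this dispatch step (with assumed names and a deliberately simplified translation step), an operating system could route each translated command to CXL.io or CXL.mem based on the access granularity of the command:

```python
def route_command(granularity: str, device_address: int, data: bytes) -> dict:
    """Hypothetical dispatch: block-granularity commands travel as NVMe over CXL.io,
    byte-granularity commands travel as memory load/store operations over CXL.mem."""
    if granularity == "block":          # e.g., 512 B or 4 KB accesses
        return {"link": "CXL.io", "command": "NVMe write",
                "lba": device_address // 4096, "length_blocks": max(1, len(data) // 4096)}
    if granularity == "byte":           # e.g., 64 B cache-line accesses
        return {"link": "CXL.mem", "command": "store",
                "address": device_address, "length_bytes": len(data)}
    raise ValueError("unsupported access granularity")


print(route_command("byte", 0x10040, b"\x00" * 64))
print(route_command("block", 0x10000, b"\x00" * 4096))
```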
The computing device 502 includes a PCIe connector 508. The PCIe connector 508 may include a U.2 connector, an M.2 connector, or another type of connector.
The CXL storage device 510 includes a PCIe connector 512, an FPGA 526, and a PCIe storage device 518. The PCIe storage device 518 may include a solid state drive, a hard drive, another storage device, or a combination thereof configured to operate over PCIe. The CXL storage device 510 is configured to provide access to the PCIe storage device 518 through the PCIe connector 512 at more than one access granularity. The PCIe connector 512 may include a U.2 connector, an M.2 connector, or another type of connector.
The FPGA 526 includes a CXL endpoint (EP) intellectual property (IP) block 522. The CXL EP IP block 522 is configured to manage CXL protocol messages exchanged between the computing device 502 and the CXL storage device 510.
The FPGA 526 also includes a cache 516. The cache 516 may include DRAM, SRAM, another type of memory, or a combination thereof. The cache 516 is configured to cache data retrieved from the PCIe storage device 518 at a first granularity (e.g., 512B or 4KB blocks) in order to provide access at a second granularity (e.g., 64B granularity). The cache 516 may also be configured to store data to be written to the PCIe storage device 518 at the second granularity. The data may ultimately be written to the PCIe storage device 518 at the first granularity.
The FPGA 526 also includes an NVMe request generator IP block 514. The NVMe request generator IP block 514 is configured to generate NVMe requests based on signals from the CXL EP IP block 522. These NVMe requests are sent to the PCIe storage device 518. For example, the CXL EP IP block 522 can instruct the NVMe request generator IP block 514 to generate an NVMe request for a data block in response to a cache miss at the cache 516.
The FPGA 526 also includes a CXL to PCI IP block 520. The CXL to PCI IP block 520 is configured to translate messages received over CXL.io (e.g., NVMe messages over CXL) into PCIe messages (e.g., NVMe messages over PCIe) based on signals from the CXL EP IP block 522. For example, the CXL to PCI IP block 520 may extract an NVMe read request from a CXL.io message and encapsulate the NVMe read request in a PCIe message for transmission to the PCIe storage device 518.
The FPGA 526 also includes a PCIe IP block 524. The PCIe IP block 524 is configured to exchange PCIe messages with the PCIe storage device 518. In some examples, the PCIe IP block 524 includes a U.2 connector, an M.2 connector, or another type of PCIe connector.
In a first example operation, the application 504 sends a write command targeting a virtual address to the operating system 506. The operating system 506 translates the virtual address to a translated address associated with the CXL storage device 510, generates an NVMe command targeting the translated address, and sends the NVMe command to the CXL storage device 510 over the PCIe connector 508 using the CXL.io protocol. The CXL storage device 510 receives the NVMe command at the PCIe connector 512. The CXL EP IP block 522 forwards the NVMe-over-CXL.io message to the CXL to PCI IP block 520. The CXL to PCI IP block 520 converts the NVMe message over CXL.io to an NVMe message over PCIe and sends it to the PCIe IP block 524 for transmission to the PCIe storage device 518. Based on the NVMe command, the PCIe storage device 518 writes the data to its storage medium at a first granularity (e.g., 512B or 4KB blocks).
In a second example operation, the application 504 sends a read command targeting a virtual address to the operating system 506. The operating system 506 translates the virtual address to a translated address associated with the CXL storage device 510, generates an NVMe command targeting the translated address, and sends the NVMe command to the CXL storage device 510 over the PCIe connector 508 using the CXL.io protocol. The CXL storage device 510 receives the NVMe command at the PCIe connector 512. The CXL EP IP block 522 forwards the NVMe-over-CXL.io message to the CXL to PCI IP block 520. The CXL to PCI IP block 520 converts the NVMe message over CXL.io to an NVMe message over PCIe and sends it to the PCIe IP block 524 for transmission to the PCIe storage device 518. Based on the NVMe command, the PCIe storage device 518 returns data to the computing device 502 at the first granularity.
In a third example operation, the application 504 sends a store command targeting a virtual address to the operating system 506. The operating system 506 translates the virtual address to a translated address associated with the CXL storage device 510, generates a memory store command targeting the translated address, and sends the memory store command to the CXL storage device 510 over the PCIe connector 508 using the CXL.mem protocol. The CXL storage device 510 receives the memory store command at the PCIe connector 512. The CXL EP IP block 522 determines whether the translated address is cached in the cache 516. In response to the cache 516 caching the translated address, the CXL EP IP block 522 is configured to overwrite the cache entry for the translated address at a second access granularity (e.g., 64B). In response to a cache miss for the translated address, the CXL EP IP block 522 is configured to store the data in a new entry in the cache 516. According to a cache eviction policy, the CXL EP IP block 522 is configured to trigger the NVMe request generator IP block 514 to generate an NVMe request for writing the data to the PCIe storage device 518 at the first granularity. The PCIe IP block 524 transmits the NVMe request to the PCIe storage device 518, and the PCIe storage device 518 writes the data to a storage medium of the PCIe storage device 518 at the first granularity.
In a fourth example operation, the application 504 sends a load command targeting a virtual address to the operating system 506. The operating system 506 translates the virtual address to a translated address associated with the CXL storage device 510, generates a memory load command targeting the translated address, and sends the memory load command to the CXL storage device 510 over the PCIe connector 508 using the CXL.mem protocol. The CXL storage device 510 receives the memory load command at the PCIe connector 512. The CXL EP IP block 522 determines whether the translated address is cached in the cache 516. In response to the cache 516 caching the translated address, the CXL EP IP block 522 is configured to return the cache entry for the translated address to the computing device 502 at the second access granularity (e.g., 64B). In response to a cache miss for the translated address, the CXL EP IP block 522 is configured to trigger the NVMe request generator IP block 514 to generate an NVMe request that requests the data at the translated address from the PCIe storage device 518 at the first granularity. The PCIe IP block 524 transmits the NVMe request to the PCIe storage device 518, and the PCIe storage device 518 returns the data to the FPGA 526 at the first granularity for storage in the cache 516. The CXL EP IP block 522 then returns the entry of the cache 516 to the computing device 502 at the second granularity.
Thus, while the underlying storage device supports one access granularity, the CXL storage device 510 supports access at more than one access granularity by implementing a first access path (e.g., CXL.io) that operates at the native access granularity of the PCIe storage device 518 and a second access path (e.g., CXL.mem) that uses a cache to hold data from the underlying storage device at the first access granularity, so that data can be accessed and manipulated at the second access granularity while fewer transactions are sent to the underlying storage device. Because a different cache structure is used in each access path, if a particular physical address of the PCIe storage device 518 were accessed through both paths at the same time, the computing device 502 could receive conflicting views of the data stored in the PCIe storage device 518. To prevent inconsistent views of the data stored at the PCIe storage device 518, the computing device 502 manages access to the CXL storage device 510 based on access granularity criteria, as described herein.
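The cache-based second path can be illustrated with the following simplified, write-back sketch, which mirrors the third and fourth example operations above. The class name, policies, and callbacks are assumptions for illustration and do not describe the actual FPGA design:

```python
class GranularityBridgeCache:
    """Illustrative write-back cache: the backing store is accessed only in 4 KB blocks,
    while callers read and write 64 B lines (compare cache 516)."""

    BLOCK = 4096
    LINE = 64

    def __init__(self, read_block, write_block):
        self._read_block = read_block     # callable(block_index) -> BLOCK bytes (e.g., an NVMe read)
        self._write_block = write_block   # callable(block_index, data) -> None (e.g., an NVMe write)
        self._blocks = {}                 # block_index -> bytearray(BLOCK)
        self._dirty = set()

    def _load(self, block_index):
        if block_index not in self._blocks:
            # Cache miss: fetch the whole block from the backing store at block granularity.
            self._blocks[block_index] = bytearray(self._read_block(block_index))
        return self._blocks[block_index]

    def load_line(self, address):
        """Return one 64 B line, fetching the containing 4 KB block on a miss."""
        block = self._load(address // self.BLOCK)
        offset = address % self.BLOCK
        return bytes(block[offset:offset + self.LINE])

    def store_line(self, address, data):
        """Overwrite one 64 B line; the block is written back later at block granularity."""
        assert len(data) == self.LINE
        block_index = address // self.BLOCK
        block = self._load(block_index)
        offset = address % self.BLOCK
        block[offset:offset + self.LINE] = data
        self._dirty.add(block_index)

    def flush(self):
        """Write dirty blocks back to the backing store (compare eviction/flush behavior)."""
        for block_index in sorted(self._dirty):
            self._write_block(block_index, bytes(self._blocks[block_index]))
        self._dirty.clear()


backing = {}
cache = GranularityBridgeCache(
    read_block=lambda i: backing.get(i, bytes(4096)),
    write_block=lambda i, data: backing.__setitem__(i, data))
cache.store_line(0x40, b"A" * 64)   # byte-granularity store
cache.flush()                       # block-granularity write-back
```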
It should be noted that the system 500 is provided for illustrative purposes and may be modified or replaced with other systems that provide access to a storage device at more than one access granularity. For example, the computing device 502 and the CXL storage device 510 may communicate via a protocol other than PCIe (such as Ethernet). As another example, the CXL storage device 510 may be replaced with a storage device that supports other multi-protocol access. Thus, the computing device 502 may send access requests through protocols other than CXL.io and CXL.mem. As another example, the FPGA 526 may be replaced by an ASIC, a central processing unit, or another type of processor. In some implementations, the functionality of the FPGA 526 is implemented by a controller (an ASIC or other processing device) of the PCIe storage device 518. Thus, the computing device 502 may communicate directly with the PCIe storage device 518 through a PCIe connection. In some implementations, the PCIe storage device 518 may be replaced with another type of storage device, such as a Serial ATA (SATA), Universal Serial Bus (USB), or Serial Attached SCSI (SAS) storage device. Furthermore, the storage device may operate according to protocols other than NVMe. As with the other figures shown and described herein, additional components other than those shown may be included in the examples.
Fig. 6 is a diagram illustrating an abstraction of a storage address space in a system 600 for sending commands to storage devices. In some examples, system 600 corresponds to system 100 or system 500. The system 600 includes a computing device 602, such as the computing device 102 or the computing device 502. Computing device 602 executes applications 604 and operating system 606. The application 604 accesses one or more storage spaces managed by the operating system 606. Application 604 may correspond to application 504 and operating system 606 may correspond to operating system 506.
The system 600 also includes a storage device 646 and a storage device 648. The storage space managed by operating system 606 may correspond to physical storage space in storage device 646, storage device 648, or a combination thereof. The storage 646 may include volatile storage such as DRAM, SRAM, or the like. Storage device 648 may include non-volatile memory, such as a solid state drive, a hard disk drive, another type of non-volatile memory, or a combination thereof. Storage 648 may also include volatile memory. In some examples, storage device 646, storage device 648, or a combination thereof, correspond to components of CXL storage device 510. Storage device 648 may correspond to PCIe storage device 518.
The operating system 606 provides the application 604 with file system 610 space for storage operations at the first access granularity. In addition, the operating system 606 provides the application 604 with a virtual storage address range 616 for storage operations at the second access granularity.
The operating system 606 is configured to map the virtual memory 608 to a memory pool that includes a first portion 622 and a second portion 624. For example, the operating system 606 may receive a memory access request (e.g., a load or store operation) from the application 604. The memory access request may identify a virtual address in the virtual memory 608. The operating system 606 may then translate the virtual address to a translated address in the memory pool and output a command containing the translated address to the storage device 648 (e.g., the CXL storage device 510 or the storage device 108).
In addition, the operating system 606 is configured to map the file system 610 to a storage pool 636. For example, the operating system 606 may receive a memory access request (e.g., a read or write request) from the application 604. The memory access request may identify a virtual address or an object in the file system 610. The operating system 606 may then translate the virtual address or object to a translated address in the storage pool 636 and output a command containing the translated address to the storage device 648 (e.g., the CXL storage device 510 or the storage device 108).
The operating system 606 is configured to send memory accesses directed to the first portion 622 of the memory pool to the storage device 646 and to send memory accesses directed to the second portion 624 of the memory pool to the storage device 648. The storage device 648 is configured to map the second portion 624 of the memory pool to physical addresses in the storage device 648. The storage device 648 is also configured to map the storage pool 636 to physical addresses in the storage device 648. Thus, a physical address in the storage device 648 may be accessed both by a first path through the file system 610 and by a second path through the virtual memory 608. The application 604 may issue memory access requests of the first granularity to the file system 610 and memory access requests of the second granularity to the virtual memory 608.
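The dual mapping can be pictured with the toy tables below, which are illustrative only: the same physical block range of the storage device 648 is reachable both through file-system (block-granularity) addresses and through memory-pool (byte-granularity) addresses:

```python
PHYSICAL_BASE = 0x80000          # assumed physical offset of the file on storage device 648

# First path: file-system blocks map to storage-pool locations and on to physical blocks.
file_system_map = {("file_620", block): PHYSICAL_BASE + block * 4096 for block in range(4)}

# Second path: virtual addresses in a mapped range land in the second portion of the
# memory pool, which the storage device maps onto the same physical blocks.
VIRTUAL_BASE = 0x7F0000000000
memory_pool_map = {VIRTUAL_BASE + offset: PHYSICAL_BASE + offset
                   for offset in range(0, 4 * 4096, 64)}

# Both paths resolve to the same physical address for the same data, which is why
# concurrent access through both paths must be coordinated.
assert file_system_map[("file_620", 1)] == memory_pool_map[VIRTUAL_BASE + 4096]
```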
In operation, the application 604 may issue a command to write a file 620 to the file system 610. The operating system 606 may then issue commands (e.g., NVMe commands) to write the file to the storage pool 636 at a first storage pool location 638, a second storage pool location 640, a third storage pool location 642, and a fourth storage pool location 644. The storage device 648 may translate the storage pool locations 638, 640, 642, 644 to physical addresses in the storage device 648 and write the file to those physical addresses.
The application 604 may also issue a memory map command 621 to the operating system 606 to map the file in the file system 610 to the virtual memory 608 at the virtual storage address range 616. Based on the memory map command 621, the operating system 606 maps the file to the virtual storage address range 616 in the virtual memory 608 and instructs the storage device 648 to place the file 620 at a first location 628, a second location 630, a third location 632, and a fourth location 634 in the second portion 624 of the memory pool. Rather than moving the data in the storage device 648, the storage device 648 may map the physical addresses of the file 620 in the storage device 648 to the locations 628, 630, 632, 634 in the second portion 624 of the memory pool. To prevent inconsistent views of the file 620, the operating system 606 may place a lock on the virtual addresses in the file system 610 corresponding to the file 620. Because memory accesses through the virtual memory 608 and through the file system 610 use different access granularities, the lock may be considered an access granularity criterion. Based on the lock, the operating system 606 may deny memory access requests targeting the file 620 in the file system 610. Thus, the system 600 can provide a consistent view of the file 620 by employing access-granularity-based control of memory accesses.
In some implementations, the operating system 606 can create the mapping between the locations 628, 630, 632, 634 and physical address ranges in the storage device 648 in response to the memory map command 621 and without intermediate commands. In other implementations, the operating system 606 can create the mapping between the locations 628, 630, 632, 634 and the physical address ranges as memory access commands are received from the application 604 for corresponding portions of the file 620. For example, in response to an access request from the application 604 targeting a virtual storage address in the address range 616, the operating system 606 may add a first portion of the file 620 to the first location 628. Waiting to add portions of the file 620 to the memory pool until they are accessed may reduce the overhead associated with creating and maintaining the mapping (e.g., a page table) between the memory pool and the storage device 648.
The operating system 606 may release the lock based on a command from the application 604. In some implementations, releasing the lock on the file 620 in the file system 610 can include placing the lock on the virtual storage address range 616. Releasing the lock may also include operating system 606 issuing a command to storage device 648 (e.g., CXL storage device 510 or storage device 108) to flush (e.g., evict) cache entries associated with virtual storage address range 616 to storage device 648. For example, operating system 506 may issue a command to CXL EP IP block 522 to flush entries of cache 516 corresponding to memory locations 628, 630, 632, 634 to PCIe storage device 518. Accordingly, CXL EP IP block 522 can instruct NVMe request generator IP block 514 to generate one or more NVMe requests to write cached entries to PCIe storage 518 (e.g., at block granularity).
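A condensed sketch of this lock lifecycle (memory-map, then release with a flush) is shown below; the state dictionary, function names, and flush callback are hypothetical and stand in for the operating system and device behavior described above:

```python
def memory_map_file(os_state: dict, file_id: str) -> int:
    """Hypothetical handling of a memory-map command: expose the file through the
    byte-granularity path and lock out the block-granularity (file system) path."""
    os_state["locks"][file_id] = "file_system_path_locked"
    virtual_base = os_state["next_virtual_base"]
    os_state["mappings"][file_id] = virtual_base
    return virtual_base


def release_mapping(os_state: dict, file_id: str, flush_cache) -> None:
    """Release the lock: flush byte-path cache entries back to the device (e.g., evict
    entries of a device cache via block writes), then lock the byte-granularity path."""
    flush_cache(os_state["mappings"].pop(file_id))
    os_state["locks"][file_id] = "virtual_address_range_locked"


os_state = {"locks": {}, "mappings": {}, "next_virtual_base": 0x7F0000000000}
base = memory_map_file(os_state, "file_620")
release_mapping(os_state, "file_620",
                flush_cache=lambda addr: print(f"flush cached entries mapped at {hex(addr)}"))
```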
Referring to FIG. 7, a method 700 of sending a command to a storage device is shown. Method 700 may be performed by a computing device, such as computing device 102, computing device 502, or computing device 602. The method 700 includes storing access granularity criteria associated with a storage area at 702. For example, the computing device 102 (e.g., the processor 104 of the computing device 102) may store access granularity criteria 116 associated with storage areas in the storage device 106. The access granularity criteria 116 may include locks on storage objects (e.g., files), storage addresses (e.g., virtual or physical storage addresses of the storage device 108), or storage ranges (e.g., virtual or physical storage address ranges of the storage device 108). A storage object, storage address, or storage range may be associated with accessing data at the storage device 108 at a particular access granularity (e.g., through a particular access path associated with the particular access granularity, such as cxl.io or cxl.mem). The lock may prevent access to a particular physical address of the storage device 108 at a particular access granularity. The access granularity criteria 116 may correspond to an association between a storage object, storage address, or storage address range and an access granularity or a characteristic associated with an access granularity, such as an access path (e.g., cxl.io or cxl.mem) or an access protocol (e.g., NVMe or memory load/store). The association may indicate that the access granularity is granted or not granted access to a storage object, storage address, or storage address range.
As another example, the operating system 506 may store locks on particular memory objects, memory addresses, or memory address ranges associated with accessing physical addresses of the PCIe storage device 518 using block-based NVMe commands, or may store locks on particular memory objects, memory addresses, or memory address ranges associated with accessing physical addresses of the PCIe storage device 518 using byte-addressable load/store commands.
As another example, the operating system 606 may store locks that prevent access to files 620 in the file system 610. Thus, block level access to the file may be disabled. Alternatively, the operating system 606 may store locks that prevent access to the virtual storage address range 616. Thus, byte level access to the file may be disabled.
The method 700 further includes receiving a storage operation request requesting access to a first portion of a storage region at a first access granularity at 704. For example, the processor 104 may receive a store operation request 110 requesting access to a first portion of a storage area. The store operation request 110 can explicitly indicate the requested access granularity or implicitly indicate the requested access granularity (e.g., based on indicated memory addresses, memory objects, memory ranges, access protocols, access paths, etc.).
As another example, application 504 may issue a store operation command to operating system 506. The store operation command may include an address associated with accessing data stored at the PCIe storage device 518 using a cxl.mem path (e.g., byte level granularity) or a cxl.io path (e.g., block level granularity).
As another example, an application 604 may issue a store operation command to an operating system 606. The storage operation command may include an address of the file system 610 (e.g., a virtual address for block level access of data on the storage device 648) or an address in the virtual memory 608 (a virtual address for byte level access of data on the storage device 648).
The method 700 further includes, at 706, sending a command to the storage device based on the storage operation request in response to the storage operation request meeting the access granularity criteria. For example, the processor 104 may send the command 114 in response to the store operation request 110 meeting access granularity criteria 116 associated with the storage region. To illustrate, the processor 104 may send the command 114 in response to the access granularity criteria 116 indicating that the address to which the storage operation request 110 is directed is unlocked (e.g., by including an explicit indication that the address is unlocked or by excluding an indication that the address is locked) or in response to the access granularity or associated characteristics of the storage operation request 110 corresponding to the permitted access granularity or associated characteristics of the storage region as indicated by the access granularity criteria 116.
As another example, operating system 506 can issue a command to CXL storage device 510 through cxl.mem or cxl.io in response to determining that the target address of the request from application 504 is unlocked.
As another example, operating system 606 may issue a command to storage device 648 in response to determining that the target address of the request from application 604 is unlocked.
Thus, the method 700 may selectively issue storage commands to a storage device based on access granularity criteria. Accordingly, the method 700 may be used in a system that supports multiple access granularities for accessing a storage device in order to present a consistent view of data in the storage device. In some implementations, a storage device, such as the CXL storage device 510, may perform the method 700 to selectively issue a command to another storage device (e.g., the PCIe storage device 518). For example, the CXL EP IP block 522 of the FPGA 526 can be configured to perform the method 700.
Referring to fig. 8, a method 800 of selectively sending or rejecting commands to a storage device is illustrated. Method 800 may be performed by a computing device, such as computing device 102, computing device 502, or computing device 602. Further, method 800 can be performed by a storage device (e.g., by CXL storage device 510) that manages access to another storage device.
The method 800 includes receiving a storage operation request requesting access to a first portion of a storage area at 802. For example, the processor 104 may receive the storage operation request 110 or the second storage operation request 210 (e.g., from an application executing at the processor 104). The requests 110, 210 may include a memory load request, a memory store request, a write request (e.g., NVMe write), a read request (e.g., NVMe read), another type of memory access, or a combination thereof. The requests 110, 210 may target a storage area (e.g., a physical storage range) of the storage device 108.
The method 800 further includes determining whether the store operation request meets access granularity criteria at 804. For example, the processor 104 may determine whether the storage operation request 110 or the storage operation request 210 meets the access granularity criteria 116 associated with the storage region. This determination may include determining whether the request 110, 210 targets a locked (or unlocked) storage address, storage address range, storage object, etc., as indicated by the access granularity criteria 116. The determination may include determining whether the access granularity of the requests 110, 210 or associated characteristics (e.g., access path, access protocol, etc.) satisfy the association stored in the access granularity criteria 116. The access granularity criteria 116 may indicate permitted access, non-permitted access, or a combination thereof.
The method 800 further includes, at 806, sending a command to the storage device based on the storage operation request in response to the storage operation request meeting the access granularity criteria. For example, the processor 104 may send the command 114 to the storage device 108 in response to the storage operation request 110 meeting the access granularity criteria 116. The command may correspond to a translation of the store operation request 110. For example, the command 114 may include a translation of an address indicated by the store operation request 110, the command 114 may be translated into a different protocol than the store operation request 110, the command 114 may encapsulate the store operation request 110, or a combination thereof.
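As a concrete illustration of the translation described above, the sketch below converts a byte-addressed storage operation request into a block-oriented command while retaining the original request as an encapsulated payload. The 4 KiB block size, the field names, and the dictionary-shaped command are assumptions for this example only.

```python
# Illustrative request-to-command translation; the block size and field names are assumed.
BLOCK_SIZE = 4096  # 4 KiB blocks, as one possible device access granularity

def translate_request(op: str, byte_address: int, length: int) -> dict:
    """Translate a byte-addressed request into a block command, encapsulating the original."""
    lba = byte_address // BLOCK_SIZE                       # address translation
    last = (byte_address + length - 1) // BLOCK_SIZE
    return {
        "opcode": op,
        "lba": lba,
        "num_blocks": last - lba + 1,
        "encapsulated": {"byte_address": byte_address, "length": length},
    }

# Example: a 64B load at byte offset 8192 becomes a one-block read of LBA 2.
command = translate_request("read", 8192, 64)
assert command["lba"] == 2 and command["num_blocks"] == 1
```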
The method 800 further includes outputting a rejection indication in response to the store operation request failing to meet the access granularity criteria at 808. For example, the processor 104 may output the rejection indication 214 in response to the second storage operation request 210 failing to satisfy the access granularity criteria 116. The rejection indication 214 may be output to an application executing at the processor 104. In some implementations, the rejection indication 214 corresponds to an error message or error flag.
Thus, the method 800 may selectively send a storage command or a rejection indication based on access granularity criteria. Accordingly, the method 800 may present a consistent view of data stored at a storage device that supports multiple access granularities (e.g., through different access paths).
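The following sketch illustrates the send-or-reject decision of the method 800, assuming a criteria check like the one sketched earlier has already produced a boolean result. The errno-based error flag and the tuple return shape are assumptions made for illustration; the disclosure does not prescribe how the rejection indication is encoded.

```python
# Illustrative send-or-reject dispatch (method 800); the error encoding is assumed.
import errno
from typing import Optional, Tuple

def complete_request(command: dict, meets_criteria: bool) -> Tuple[Optional[dict], Optional[dict]]:
    """Return (command_to_send, rejection_indication); exactly one element is non-None."""
    if meets_criteria:
        return command, None                        # 806: forward the command to the device
    rejection = {"error": errno.EACCES,             # 808: surface an error flag to the caller
                 "detail": "access granularity criteria not met"}
    return None, rejection

# Example: a 64B store into a region currently locked for block-granularity access is rejected.
cmd, rej = complete_request({"op": "write", "addr": 0x2000, "len": 64}, meets_criteria=False)
assert cmd is None and rej["error"] == errno.EACCES
```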
Referring to fig. 9, a method 900 of mapping a storage area to a second space is shown. Method 900 may be performed by a computing device, such as computing device 102, computing device 502, or computing device 602. Further, method 900 can be performed by a storage device (e.g., by CXL storage device 510) that manages access to another storage device.
The method 900 includes, at 902, mapping a storage region to a first space associated with a first access granularity. For example, the operating system 606 may place the file 620 in the file system 610 and map the file 620 in the file system 610 to the storage pool locations 638, 640, 642, 644. The storage pool locations 638, 640, 642, 644 can be mapped (e.g., by the operating system 606 or by a storage device 648, such as the CXL storage device 510) to physical addresses (e.g., a storage area) in the storage device 648. The location of the file 620 in the file system 610 or the storage pool locations 638, 640, 642, 644 may correspond to the first space. Accessing the file 620 through the file system 610 is associated with the first access granularity (e.g., 512B or 4KB blocks).
In another example, operating system 506 may map virtual addresses associated with CXL.mem accesses to a physical address range of PCIe storage device 518 at 64B granularity.
The method 900 further includes receiving a request to map a storage area to a second space associated with a second access granularity at 904. For example, operating system 606 can receive memory-mapped command 621 from application 604. The memory map command 621 may request that the file 620 be placed into virtual memory 608. Virtual memory 608 is associated with a second access granularity (e.g., 64B).
In another example, operating system 506 may receive a request to map a physical address range of PCIe storage device 518 to a virtual address associated with cxl.io access at 512B or 4KB block granularity.
The method 900 further includes initiating a cache flush at 906. For example, the operating system 606 may flush any caches of data stored at the file system 610 that are maintained by the computing device 602 or the storage device 648.
In another example, operating system 506 may instruct CXL EP IP block 522 to flush an entry associated with a physical address range in cache 516 to PCIe storage 518.
The method 900 further includes mapping the storage region to a second space associated with a second access granularity. For example, the operating system 606 may map the address range 616 in the virtual memory 608 to storage pool locations 628, 630, 632, 634, which are mapped to physical address ranges of the storage device 648.
In another example, operating system 506 maps the physical address range of PCIe storage device 518 to a virtual address associated with cxl.io access at 512B or 4KB block granularity.
Thus, the method 900 may flush caches associated with one access granularity in response to a request to access data at another access granularity. It should be noted that caches in the access path other than those illustrated may also be flushed. For example, the computing device 102, the computing device 502, or the computing device 602 may maintain one or more caches associated with one or more access granularities, and these caches may be flushed based on a request to access data at a different access granularity. Similarly, the storage device 108, the CXL storage device 510, the PCIe storage device 618, or the storage device 648 may include caching mechanisms in addition to those illustrated, and a caching mechanism associated with one access granularity may be flushed in response to a request to access data at a different access granularity.
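The sketch below ties the steps of the method 900 together: a region mapped for block-granularity access is remapped for byte-granularity access, and the cache associated with the old access path is flushed first so the new path observes current data. The dict-based bookkeeping, the 512B/64B granularities, and the class and field names are assumptions made purely for this illustration.

```python
# Illustrative remap-with-flush flow (method 900); the bookkeeping structures are assumed.
class GranularityManager:
    def __init__(self) -> None:
        self.mappings: dict = {}      # region -> (space, access granularity)
        self.dirty_cache: dict = {}   # region -> data cached along the current access path

    def map_region(self, region: str, space: str, granularity: int) -> None:
        # 902 (and the final mapping step): record the space and granularity for the region.
        self.mappings[region] = (space, granularity)

    def flush(self, region: str, backing_store: dict) -> None:
        # 906: write back any data cached for the old access path.
        if region in self.dirty_cache:
            backing_store[region] = self.dirty_cache.pop(region)

    def remap(self, region: str, new_space: str, new_granularity: int, backing_store: dict) -> None:
        # 904/906: on a remap request, flush before installing the new mapping.
        self.flush(region, backing_store)
        self.map_region(region, new_space, new_granularity)

backing_store: dict = {}
mgr = GranularityManager()
mgr.map_region("file_620", "file_system_610", 512)              # block-granularity access path
mgr.dirty_cache["file_620"] = b"block-cached data"              # data cached along that path
mgr.remap("file_620", "virtual_memory_608", 64, backing_store)  # switch to 64B granularity
assert backing_store["file_620"] == b"block-cached data"        # flushed data visible to the new path
assert mgr.mappings["file_620"] == ("virtual_memory_608", 64)
```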
With reference to FIG. 10, a computing device 1000 is shown that includes a processor 1004 and a computer-readable storage device 1006. The computer-readable storage device 1006 may include non-volatile memory, optical storage, another type of storage, or a combination thereof. The computer-readable storage device 1006 stores access-granularity-based control instructions 1008 that are executable by the processor 1004 to perform one or more of the methods or operations described herein with respect to FIGS. 1-9. A similar computer-readable storage device may store instructions to program an FPGA to perform one or more of the operations described herein.
In some examples, X corresponds to Y based on X matching Y. For example, the first ID may be determined to correspond to a second ID that matches (e.g., has the same value as) the first ID. In other examples, X corresponds to Y based on X being associated with (e.g., linked to) Y. For example, X may be associated with Y through a mapping data structure.
Some embodiments may be implemented in one or a combination of hardware, firmware, and software. Other embodiments may also be implemented as instructions stored on a computer-readable storage device, which may be read and executed by at least one processor to perform the operations described herein. A computer-readable storage device may include any non-transitory memory mechanism for storing information in a form readable by a machine (e.g., a computer). For example, computer-readable storage devices may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, and other storage devices and media.
As used in this document, the term "communication" is intended to include transmission, reception, or both. This may be particularly useful in claims that describe data being sent by one device and received by another, where only the functionality of one of those devices is required to infringe the claim. Similarly, a bidirectional exchange of data between two devices (where both devices transmit and receive during the exchange) may be described as "communication" when only the functionality of one of those devices is being claimed. The term "communicating" as used herein with respect to a wireless communication signal includes transmitting the wireless communication signal and/or receiving the wireless communication signal. For example, a wireless communication unit capable of communicating a wireless communication signal may include a wireless transmitter for transmitting the wireless communication signal to at least one other wireless communication unit, and/or a wireless communication receiver for receiving the wireless communication signal from at least one other wireless communication unit.
Some embodiments may be used in conjunction with various devices and systems, such as Personal Computers (PCs), desktop computers, mobile computers, laptop computers, notebook computers, tablet computers, server computers, handheld devices, Personal Digital Assistant (PDA) devices, handheld PDA devices, on-board devices, off-board devices, hybrid devices, vehicular devices, non-vehicular devices, mobile or portable devices, consumer devices, non-mobile or non-portable devices, wireless communication stations, wireless communication devices, Wireless Access Points (APs), wired or wireless routers, wired or wireless modems, video devices, audio-video (A/V) devices, wired or wireless networks, wireless area networks, Wireless Video Area Networks (WVANs), Local Area Networks (LANs), Wireless LANs (WLANs), Personal Area Networks (PANs), Wireless PANs (WPANs), and the like.
Some embodiments may be associated with a one-way and/or two-way radio communication system, a cellular radiotelephone communication system, a mobile telephone, a cellular telephone, a radiotelephone, a Personal Communications System (PCS) device, a PDA device that includes a wireless communication device, a mobile or portable Global Positioning System (GPS) device, a device that includes a GPS receiver or transceiver or chip, a device that includes a Radio Frequency Identification (RFID) element or chip, a Multiple-Input Multiple-Output (MIMO) transceiver or device, a Single-Input Multiple-Output (SIMO) transceiver or device, a Multiple-Input Single-Output (MISO) transceiver or device, a device having one or more internal and/or external antennas, a Digital Video Broadcasting (DVB) device or system, a multi-standard radio device or system, a wired or wireless handheld device such as a smartphone, a Wireless Application Protocol (WAP) device, or the like.
Some embodiments may be used in conjunction with one or more types of wireless communication signals and/or systems that conform to one or more wireless communication protocols, such as, for example, Radio Frequency (RF), Infrared (IR), Frequency Division Multiplexing (FDM), Orthogonal FDM (OFDM), Time Division Multiplexing (TDM), Time Division Multiple Access (TDMA), Extended TDMA (E-TDMA), General Packet Radio Service (GPRS), Extended GPRS, Code Division Multiple Access (CDMA), Wideband CDMA (WCDMA), CDMA 2000, single-carrier CDMA, Multi-Carrier Modulation (MDM), Discrete Multi-Tone (DMT), Bluetooth™, Global Positioning System (GPS), Wi-Fi, Wi-Max, ZigBee™, Ultra-Wideband (UWB), Global System for Mobile communications (GSM), 2G, 2.5G, 3G, 3.5G, 4G, fifth generation (5G) mobile networks, 3GPP, Long Term Evolution (LTE), LTE-Advanced, Enhanced Data rates for GSM Evolution (EDGE), and the like. Other embodiments may be used in various other devices, systems, and/or networks.
Although an example processing system has been described above, embodiments of the subject matter and functional operations described herein may be implemented in other types of digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
Embodiments of the subject matter and operations described herein may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described herein may be implemented as one or more computer programs, i.e., one or more components of computer program instructions, encoded on a computer storage medium for execution by, or to control the operation of, information/data processing apparatus. Alternatively or additionally, the program instructions may be encoded on a manually-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information/data for transmission to suitable receiver apparatus for execution by information/data processing apparatus. The computer storage medium may be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Furthermore, while the computer storage medium is not a propagated signal, the computer storage medium may be a source or destination of computer program instructions encoded in an artificially generated propagated signal. Computer storage media may also be or be included in one or more separate physical components or media (e.g., a plurality of CDs, disks, or other storage devices).
The operations described herein may be implemented as operations performed by an information/data processing apparatus on information/data stored on one or more computer-readable storage devices or received from other sources.
The term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system-on-a-chip, or multiple ones or combinations of the foregoing. The apparatus may comprise a dedicated logic circuit, such as an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). In addition to hardware, the apparatus may include code that creates an execution environment for the computer program in question, such as code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment may implement a variety of different computing model infrastructures, such as web services, distributed computing, and grid computing infrastructures.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a component, subroutine, object, or other unit suitable for use in a computing environment. The computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or information/data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more components, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input information/data and generating output. Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and information/data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive information/data from or transfer information/data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical, or optical disks. However, the computer need not have such devices. Devices suitable for storing computer program instructions and information/data include all forms of non-volatile memory, media, and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, embodiments of the subject matter described herein can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information/data to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input. In addition, the computer may interact with the user by sending and receiving documents to and from the device used by the user; for example, by sending a web page to a web browser on a user's client device in response to a request received from the web browser.
Embodiments of the subject matter described herein can be implemented in a computing system that includes a back-end component, e.g., as an information/data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a web browser through which a user can interact with an embodiment of the subject matter described herein, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital information/data communication (e.g., a communication network). Examples of communication networks include local area networks ("LANs") and wide area networks ("WANs"), internetworks (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, the server sends information/data (e.g., HTML pages) to the client device (e.g., for purposes of displaying information/data to and receiving user input from a user interacting with the client device). Information/data generated at the client device (e.g., results of user interactions) may be received at the server from the client device.
While this specification contains many specifics of particular embodiments, these should not be construed as limitations on the scope of any embodiments or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described herein in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Additionally, the processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
Many modifications and other embodiments of the disclosure set forth herein will come to mind to one skilled in the art to which these embodiments pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the embodiments are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
The following claims describe examples according to the present disclosure, however, these claims do not limit the scope of the present disclosure.
Statement 1: a disclosed method includes storing, at a computing device, access granularity criteria associated with a storage area. The disclosed method further includes receiving a storage operation request requesting access to a first portion of a storage area at a first access granularity. The disclosed method further comprises: in response to the storage operation request meeting the access granularity criteria, a command is sent from the computing device to the storage device based on the storage operation request.
Statement 2: The method of Statement 1, further comprising: receiving a second storage operation request requesting access to a second portion of the storage region at a second access granularity different from the first access granularity; and outputting an indication that the second storage operation request is denied based on the access granularity criteria.
Statement 3: The method of any of Statements 1 or 2, further comprising storing updated access granularity criteria. The method may further include receiving a second storage operation request requesting access to a second portion of the storage region at a second access granularity. The method may further comprise: in response to the second storage operation request meeting the updated access granularity criteria, sending a second command from the computing device to the storage device based on the second storage operation request.
Statement 4: In the method of Statement 3, the updated access granularity criteria may be stored in response to a request from an application.
Statement 5: The method of any of Statements 3 or 4, further comprising initiating a cache flush at the storage device.
Statement 6: In the method of any of Statements 1-5, the storage area may correspond to a file.
Statement 7: In the method of any of Statements 1-5, the storage area may correspond to a region of a file.
Statement 8: The method of any of Statements 1-7, wherein the storage region may correspond to an address range in an address space.
Statement 9: The method of any of Statements 1-7, wherein the access granularity criteria may correspond to locks on a virtual storage address range associated with accessing the storage area at the second access granularity.
Statement 10: a computer-readable storage device may store instructions executable by a processor to perform operations comprising storing, at a computing device, access granularity criteria associated with a storage area. The operations may also include receiving a storage operation request requesting access to a first portion of the storage region at a first access granularity. The operations may further include: in response to the storage operation request meeting the access granularity criteria, a command is sent from the computing device to the storage device based on the storage operation request.
Statement 11: The computer-readable storage device of Statement 10, wherein the operations further comprise receiving a second storage operation request, the second storage operation request requesting access to a second portion of the storage area at a second access granularity different from the first access granularity. The operations may further include outputting an indication that the second storage operation request was denied based on the access granularity criteria.
Statement 12: In the computer-readable storage device of any of Statements 10 or 11, the operations may further comprise storing updated access granularity criteria. The operations may further include receiving a second storage operation request requesting access to a second portion of the storage region at a second access granularity. The operations may further include: in response to the second storage operation request meeting the updated access granularity criteria, sending a second command from the computing device to the storage device based on the second storage operation request.
Statement 13: In the computer-readable storage device of Statement 12, the updated access granularity criteria may be stored in response to a request from an application.
Statement 14: In the computer-readable storage device of any of Statements 12 or 13, the operations may further comprise initiating a cache flush at the storage device.
Statement 15: The computer-readable storage device of any of Statements 11-14, wherein the storage area corresponds to a file.
Statement 16: The computer-readable storage device of any of Statements 11-14, wherein the storage area corresponds to a region of a file.
Statement 17: A system may include a storage device and a computing device. The computing device may be configured to store access granularity criteria associated with a storage area of the storage device. The computing device may be further configured to receive a storage operation request requesting access to a first portion of the storage region at a first access granularity. The computing device may be further configured to send a command to the storage device based on the storage operation request in response to the storage operation request meeting the access granularity criteria.
Statement 18: The system of Statement 17, wherein the computing device is further configured to receive a second storage operation request requesting access to a second portion of the storage area at a second access granularity different from the first access granularity. The computing device may be further configured to output an indication that the second storage operation request is denied based on the access granularity criteria.
Statement 19: The system of any of Statements 17-18, wherein the computing device is further configured to store updated access granularity criteria. The computing device may be further configured to receive a second storage operation request requesting access to a second portion of the storage area at a second access granularity. The computing device may be further configured to send a second command to the storage device based on the second storage operation request in response to the second storage operation request meeting the updated access granularity criteria.
Statement 20: The system of any of Statements 17-19, wherein the storage device may include a cache, and the computing device may be further configured to initiate an eviction of an entry in the cache.
Claims (20)
1. A method, comprising:
storing, at the computing device, access granularity criteria associated with the storage region;
receiving a storage operation request requesting access to a first portion of a storage area at a first access granularity; and
in response to the storage operation request meeting the access granularity criteria, sending a command from the computing device to the storage device based on the storage operation request.
2. The method of claim 1, further comprising:
receiving a second storage operation request requesting access to a second portion of the storage region at a second access granularity different from the first access granularity; and
outputting, based on the access granularity criteria, an indication that the second storage operation request is denied.
3. The method of claim 1, further comprising:
storing updated access granularity criteria;
receiving a second storage operation request requesting access to a second portion of the storage area at a second access granularity; and
in response to the second storage operation request meeting the updated access granularity criteria, sending a second command from the computing device to the storage device based on the second storage operation request.
4. A method according to claim 3, wherein the updated access granularity criteria is stored in response to a request from an application.
5. The method according to claim 3, further comprising: initiating a cache flush at the storage device.
6. The method of claim 1, wherein the storage area corresponds to a file.
7. The method of claim 1, wherein the storage area corresponds to a region of a file.
8. The method of claim 1, wherein the storage area corresponds to an address range in an address space.
9. The method of claim 1, wherein the access granularity criteria corresponds to a lock on a virtual storage address range associated with accessing the storage region at the second access granularity.
10. A computer-readable storage device storing instructions executable by a processor to perform operations comprising:
storing, at the computing device, access granularity criteria associated with the storage region;
receiving a storage operation request requesting access to a first portion of a storage area at a first access granularity; and
in response to the storage operation request meeting the access granularity criteria, sending a command from the computing device to the storage device based on the storage operation request.
11. The computer-readable storage device of claim 10, wherein the operations further comprise:
receiving a second storage operation request requesting access to a second portion of the storage region at a second access granularity different from the first access granularity; and
outputting, based on the access granularity criteria, an indication that the second storage operation request is denied.
12. The computer-readable storage device of claim 10, wherein the operations further comprise:
storing updated access granularity criteria;
receiving a second storage operation request requesting access to a second portion of the storage area at a second access granularity; and
in response to the second storage operation request meeting the updated access granularity criteria, sending a second command from the computing device to the storage device based on the second storage operation request.
13. The computer-readable storage device of claim 12, wherein the updated access granularity criteria is stored in response to a request from an application.
14. The computer-readable storage device of claim 12, wherein the operations further comprise initiating a cache flush at the storage device.
15. The computer-readable storage device of claim 11, wherein the storage area corresponds to a file.
16. The computer-readable storage device of claim 11, wherein the storage area corresponds to a region of a file.
17. A system, comprising:
a storage device; and
a computing device configured to:
store access granularity criteria associated with a storage area of the storage device;
receive a storage operation request requesting access to a first portion of the storage area at a first access granularity; and
send, in response to the storage operation request meeting the access granularity criteria, a command to the storage device based on the storage operation request.
18. The system of claim 17, wherein the computing device is further configured to:
receive a second storage operation request requesting access to a second portion of the storage region at a second access granularity different from the first access granularity; and
output, based on the access granularity criteria, an indication that the second storage operation request is denied.
19. The system of claim 17, wherein the computing device is further configured to:
store updated access granularity criteria;
receive a second storage operation request requesting access to a second portion of the storage area at a second access granularity; and
send, in response to the second storage operation request meeting the updated access granularity criteria, a second command to the storage device based on the second storage operation request.
20. The system of claim 17, wherein the storage device comprises a cache, and wherein the computing device is further configured to initiate an eviction of an entry in the cache.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
US63/322,221 | 2022-03-21 | |
US18/123,252 (US20230297517A1) | 2022-03-21 | 2023-03-17 | Systems and methods for sending a command to a storage device
Publications (1)
Publication Number | Publication Date
---|---
CN116795282A | 2023-09-22
Family ID: 88042820
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202310279793.4A (Pending) | System and method for sending commands to storage devices | 2022-03-21 | 2023-03-21
Country Status (1)
Country | Link
---|---
CN | CN116795282A
Legal Events
Code | Title
---|---
PB01 | Publication