CN116594931B - Computing system and method for operating a computing system - Google Patents
- Publication number: CN116594931B (application CN202310609420.9A)
- Authority
- CN
- China
- Prior art keywords
- memory
- interface
- processor
- enhancement
- function
- Prior art date
- Legal status: Active (assumed status; not a legal conclusion — Google has not performed a legal analysis)
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
        - G06F13/14—Handling requests for interconnection or transfer
          - G06F13/16—Handling requests for interconnection or transfer for access to memory bus
            - G06F13/1668—Details of memory controller
            - G06F13/1605—Handling requests for access to memory bus based on arbitration
              - G06F13/1652—Arbitration in a multiprocessor architecture
                - G06F13/1657—Access to multiple memories
      - G06F12/00—Accessing, addressing or allocating within memory systems or architectures
        - G06F12/02—Addressing or allocation; Relocation
          - G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
            - G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
              - G06F12/0877—Cache access modes
                - G06F12/0882—Page mode
Abstract
A computing system and method for operating a computing system are provided. A pseudo-main memory system includes memory adapter circuitry to perform memory augmentation using compression, deduplication, and/or error correction. The memory adapter circuit is coupled to the memory and employs a memory enhancement method to increase the effective storage capacity of the memory. The memory adapter circuit is also connected to the memory bus and implements an NVDIMM-F interface or a modified NVDIMM-F interface for connecting to the memory bus.
Description
The present application is a divisional application of the inventive patent application filed on September 30, 2017, with application number 201710913938.6, entitled "Computing system and method for operating a computing system".
The present application claims the benefit of U.S. Provisional Application Ser. No. 62/489,997, filed on April 25, 2017, U.S. Patent Application Ser. No. 15/282,848, filed on September 30, 2016, and U.S. Patent Application Ser. No. 15/663,619, filed on July 28, 2017, each of which is incorporated herein by reference in its entirety.
Technical Field
One or more aspects in accordance with embodiments of the present invention relate to data storage, and more particularly, to a system for storing data using memory augmentation (memory augmentation).
Background
Some modern applications (such as databases, virtual desktop infrastructure, and data analytics) may have large main memory footprints. As such systems scale, their memory capacity needs may grow super-linearly.
Accordingly, there is a need for a system and method that provides greater storage capacity.
Disclosure of Invention
Aspects of embodiments of the present disclosure are directed to a pseudo main memory system. The system comprises memory adapter circuitry to perform memory enhancement using compression, deduplication, and/or error correction. The memory adapter circuit is coupled to a memory and employs a memory enhancement method to increase the effective storage capacity of the memory. The memory adapter circuit is also connected to a memory bus and implements a flash-backed non-volatile dual in-line memory module (NVDIMM-F) interface, or a modified NVDIMM-F interface, for connection to the memory bus.
According to an embodiment of the present invention, there is provided a computing system including: a central processing unit; and a memory system comprising a memory adapter circuit and a first memory, wherein the memory adapter circuit has a first memory interface connected to the central processing unit and a second memory interface connected to the first memory, the first memory interface being a double data rate synchronous dynamic random access memory interface, the memory adapter circuit being configured to store data in, and retrieve data from, the first memory using an enhancement of the storage capacity of the first memory.
In one embodiment, the enhancement includes at least one of compression, deduplication, and error correction.
In one embodiment, the first memory interface is a second or higher generation double data rate synchronous dynamic random access memory interface.
In one embodiment, the second memory interface is a second or higher generation double data rate synchronous dynamic random access memory interface.
In one embodiment, the first memory interface is an NVDIMM-F interface, and the computing system is configured to operate the memory system as a block device.
In one embodiment, the central processor is connected to the memory adapter circuit through the memory management circuit.
In one embodiment, the first memory is a dynamic random access memory and the second memory interface is a second or higher generation double data rate synchronous dynamic random access memory interface.
In one embodiment, the memory adapter circuit is a separate integrated circuit configured to perform compression, deduplication, and error correction.
In one embodiment, the computing system includes a second memory connected to the central processor through a memory management circuit.
In one embodiment, the second memory is connected to the memory management circuit through a third memory interface, wherein the third memory interface is a second or higher generation double data rate synchronous dynamic random access memory interface.
In one embodiment, the central processor is configured to maintain a page cache in the second memory, and the central processor is configured to: when a clean page is evicted from the page cache, invoke a clean cache function for the clean page, wherein the clean cache function is configured to: store the clean page in the first memory when sufficient space is available in the first memory; otherwise, store the clean page in persistent memory.
In one embodiment, the clean cache function is configured to evaluate whether sufficient space is available in the first memory based on an estimated enhancement ratio, wherein the estimated enhancement ratio is a function of the enhancement ratio of data stored in the first memory during a set time interval.
In one embodiment, the central processor is configured to maintain a user memory space in the second memory, and the central processor is configured to: when a dirty page is reclaimed from the user memory space, invoke a front-end swap function for the dirty page, wherein the front-end swap function is configured to: store the dirty page in the first memory when sufficient space is available in the first memory; otherwise, store the dirty page in persistent memory.
In one embodiment, the front-end swap function is configured to evaluate whether sufficient space is available in the first memory based on an estimated enhancement ratio, wherein the estimated enhancement ratio is a function of the enhancement ratio of data stored in the first memory during the set time interval.
In one embodiment, the central processor is configured to: execute one or more applications, and return, in response to a call to the sysinfo function by the one or more applications: a value of total available memory based on the size of the first memory and the size of the second memory, and a value of total free memory based on the amount of free memory in the first memory and the amount of free memory in the second memory.
In one embodiment, the value of the total free memory is the sum of (i) the product of the amount of free memory in the first memory and a minimum enhancement ratio and (ii) the amount of free memory in the second memory, wherein, when a set time interval has elapsed since system start-up, the minimum enhancement ratio is a function of the enhancement ratio of data stored in the first memory during the set interval; otherwise, the minimum enhancement ratio is 2.0.
According to an embodiment of the present invention, there is provided a method for operating a computing system including: a central processing unit; and a memory system comprising a memory adapter circuit and a first memory, wherein the memory adapter circuit has a first memory interface connected to the central processing unit and a second memory interface connected to the first memory, the first memory interface being a double data rate synchronous dynamic random access memory interface, the method comprising: storing data in the first memory and retrieving data from the first memory using an enhancement of the storage capacity of the first memory.
In one embodiment, the enhancement includes at least one of compression, deduplication, and error correction.
In one embodiment, the method comprises: operating the memory system as a block device using the NVDIMM-F protocol.
In one embodiment, the memory adapter circuit is a separate integrated circuit configured to perform compression, deduplication, and error correction.
According to an embodiment of the present invention, there is provided a computing system including: a central processing unit; and a memory system comprising: a first memory; and a memory adapter device for storing data in, and retrieving data from, the first memory using an enhancement of the storage capacity of the first memory, wherein the memory adapter device has a first memory interface connected to the central processing unit, the first memory interface being an NVDIMM-F interface, and a second memory interface connected to the first memory, the computing system being configured to operate the memory system as a block device.
Drawings
These and other features and advantages of the present invention will be appreciated and understood with reference to the present specification, claims and appended drawings, wherein:
FIG. 1 is a block diagram of a system memory hierarchy according to an embodiment of the present invention;
FIG. 2 is a hardware block diagram of a computing system according to an embodiment of the invention;
FIG. 3 is a hybrid hardware software block diagram of a computing system according to an embodiment of the invention;
FIG. 4 is a software block diagram of a system for modifying a response to a sysinfo function call according to an embodiment of the invention.
Detailed Description
The detailed description set forth below in connection with the appended drawings is intended as a description of exemplary embodiments of a pseudo-main memory system (pseudo main memory system) provided in accordance with the present invention and is not intended to represent the only form in which the present invention may be constructed or utilized. The description sets forth the features of the invention with respect to the illustrated embodiments. It is to be understood, however, that the same or equivalent functions and structures may be accomplished by different embodiments that are also intended to be encompassed within the spirit and scope of the invention. Like element numerals are intended to indicate like elements or features as expressed elsewhere herein.
In some embodiments, the system is an efficient pseudo-memory mechanism, which may be referred to as "memory ABCDE", for deploying in-line memory augmentation through compression and/or deduplication and/or error correction. Such a system may significantly increase effective memory density while relying on hardware techniques that are entirely local to the memory system. Challenges for the memory ABCDE system include integration on a double data rate synchronous dynamic random access memory (DDR) bus, and management of the variable memory density that such a system may provide (e.g., memory density that varies with application data or with external conditions, such as conditions that may affect error rates). Further, some applications may not be written to explicitly use the additional capacity provided by the memory ABCDE system. In some embodiments, the system presents a simulated system memory capacity to enable user-space applications to address the larger volume of memory.
In some embodiments, the operating system is aware of the physical organization and capacity of the underlying memory and performs the operations needed to mask these details from applications. Below user space, the operating system's Memory Management Unit (MMU) (or "memory management circuit") reuses the transcendent memory feature of the LINUX™ kernel to present the memory ABCDE system as a fast-swap block device on a DDR interface, such as a fourth-generation DDR (DDR4) interface.
FIG. 1 illustrates a system memory hierarchy using a memory ABCDE system as pseudo main memory, according to one embodiment. Since the memory ABCDE system may operate as a block device, it may be referred to as the memory ABCDE "driver".
FIG. 2 illustrates a hardware block diagram of a computing system, according to one embodiment. In some embodiments, a computing system using the memory ABCDE system may include the following three components. First, the computing system may include a memory ABCDE system 240 based on the form factor and interface of a flash-backed non-volatile dual in-line memory module (NVDIMM-F) (e.g., in FIG. 2, the interface between the memory ABCDE system 240 and the memory management circuit 220). NVDIMM-F memory may have attributes similar to those of the memory ABCDE system because, for example, both may exhibit variable storage density. Second, the computing system may use a software infrastructure based on the transcendent memory feature. Such an infrastructure may include a driver, referred to as the memory ABCDE driver, used by the operating system to access the memory ABCDE system. Third, the computing system may employ a modified system function (e.g., a modified sysinfo() function) to simulate an increased main memory capacity.
The NVDIMM-F protocol may be employed in related-art applications for placing flash modules on a DDR memory bus. Such applications may use an interface that supports only short access bursts with 64-byte cache lines, to enable block access with long access latency. In such applications, the address space may be large enough that the DDR command bus cannot carry a logical block address (LBA) within its pin limits. Thus, the NVDIMM-F interface relies on the DRAM data bus to send commands (including addresses) to the flash module.
In some embodiments, the NVDIMM-F protocol is instead adapted to provide block access to a dynamic random access memory (DRAM) based memory ABCDE system. In some embodiments, since the memory ABCDE system may have a lower capacity than a flash-based system, the NVDIMM-F protocol may be modified to use the command and address bus (rather than the data bus) for commands and addresses. In such an embodiment, the address is written (by the memory ABCDE driver) on the command and address bus, giving direct access to the memory location (rather than writing the address itself into a small buffer, as in the unmodified NVDIMM-F protocol).
In some embodiments, the system ensures that read and write commands to the memory ABCDE system are uncached, so that commands are sent directly to the memory ABCDE system rather than waiting to be flushed from the central processing unit (CPU) cache. To achieve this, the memory ABCDE driver uses cache-line-flush (e.g., CLFLUSH) and persistent-commit (e.g., PCOMMIT) CPU instructions to ensure that commands reach the memory ABCDE system. In addition, an efficient memory-to-memory direct memory access (DMA) engine in the memory ABCDE system may be employed to transfer data between block-based internal pages in the memory ABCDE system and the computing system's DDR4 bus, for rapidly migrating pages back and forth between main memory and the memory ABCDE system.
In the embodiment of FIG. 2, central processor 210 communicates, through memory management circuit 220, with main memory 230, which may be DDR memory (e.g., DDR4 memory) or other memory configured to interface to a second or higher generation double data rate synchronous dynamic random access memory bus (e.g., DDR2, DDR3, DDR4, or DDR5 memory). The memory management circuit 220 is also connected to a memory ABCDE system 240, which comprises a memory adapter circuit 250 and an intermediate memory 260 (referred to as "intermediate memory" since its role can be considered intermediate between that of the main memory 230 and that of a persistent storage device, such as a solid state drive (SSD)).
The memory adapter circuit 250 may be a system on a chip (SoC); for example, it may be a separate integrated circuit including a processor, memory (for storing programs and data for the processor), and other logic and driver circuitry. The memory adapter circuit 250 may have a first memory interface (e.g., a DDR4 interface), through which it is connected to the memory management circuit 220 (e.g., over a DDR bus), and a second memory interface, through which it is connected to the intermediate memory 260. The second memory interface may be any suitable interface compatible with the intermediate memory 260 (e.g., if the intermediate memory 260 is a DDR4 memory, the second memory interface is a DDR4 interface). As described above, memory adapter circuit 250 may implement the NVDIMM-F protocol or a modified NVDIMM-F protocol at the first memory interface (the latter sending addresses on the command and address bus instead of the data bus).
In some embodiments, the memory adapter circuit 250 is further configured to provide memory enhancement by one or more of compression, deduplication, and error correction. For example, the memory adapter circuit 250 may compress data received from the memory management circuit 220 and store the compressed data in the intermediate memory 260; upon request by the memory management circuit 220, the memory adapter circuit 250 may retrieve the compressed data from the intermediate memory 260, decompress it, and send the decompressed data to the memory management circuit 220. Similarly, memory adapter circuit 250 may deduplicate the data stored in intermediate memory 260 (restoring the duplicate entries when the data originally containing them is requested by the memory management circuit 220), and may encode the data using an error correction code before storing it in intermediate memory 260 and perform error correction on any data retrieved from intermediate memory 260.
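The compress-then-deduplicate store path described above can be sketched in a few lines. This is a minimal Python model of the idea, not the patent's hardware implementation; the class and method names (`MemoryAdapter`, `store`, `retrieve`) are illustrative assumptions:

```python
import hashlib
import zlib

class MemoryAdapter:
    """Toy model of the store/retrieve path: compress each page,
    then deduplicate identical compressed blocks by content hash."""

    def __init__(self):
        self.blocks = {}      # content hash -> compressed bytes (dedup store)
        self.page_table = {}  # page address -> content hash

    def store(self, addr, page: bytes):
        compressed = zlib.compress(page)
        digest = hashlib.sha256(compressed).hexdigest()
        # Deduplication: identical pages share a single stored block.
        self.blocks.setdefault(digest, compressed)
        self.page_table[addr] = digest

    def retrieve(self, addr) -> bytes:
        return zlib.decompress(self.blocks[self.page_table[addr]])

    def stored_bytes(self):
        return sum(len(b) for b in self.blocks.values())

adapter = MemoryAdapter()
page = b"A" * 4096                    # a highly compressible 4 KiB page
adapter.store(0x1000, page)
adapter.store(0x2000, page)           # duplicate page -> deduplicated
assert adapter.retrieve(0x1000) == page
assert adapter.stored_bytes() < 4096  # far less than two raw pages
```

The effective capacity gain (the "enhancement ratio" discussed below) is simply raw bytes accepted divided by `stored_bytes()`.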
FIG. 3 illustrates a hybrid hardware-software block diagram, according to some embodiments. The central processor 210 is able to access a conjoined memory 310 that includes the main memory 230 and the memory ABCDE system 240. At start-up, instructions in the basic input/output system (BIOS) register an address range on the DDR bus to the memory ABCDE system, disable address interleaving within that range, and designate the range as corresponding to a block device. The memory ABCDE system 240 is registered as a block device because (i) some of its operations (such as compressing and decompressing data) may be better suited to block access than to single-word access, and (ii) the central processor 210 will therefore not depend on it behaving as synchronous DDR memory.
In some embodiments, when the operating system is loaded, the memory ABCDE driver 320 registers and implements a front-end swap (frontswap) function and a clean cache (cleancache) function. If the transcendent memory feature is present and available in the LINUX™ kernel, these functions are invoked through that feature of the kernel. The transcendent memory feature can intercept kernel operations that evict (i) clean cache pages or (ii) dirty user pages, and make calls to the cleancache and frontswap functions of the memory ABCDE driver 320. For example, when a clean cache page is evicted, the transcendent memory feature may intercept the eviction and invoke the cleancache function, which may copy the page from main memory 230 to the memory ABCDE system 240, from which it can later be accessed by the central processor 210, or copied back to main memory 230, more quickly than if the page had been deleted (in which case a subsequent access would require the page to be restored from persistent storage, e.g., from an SSD or hard drive). When a dirty user page is evicted by the kernel, the transcendent memory feature may intercept the eviction and invoke the frontswap function, which may copy the dirty user page to the memory ABCDE system 240; this may complete faster than writing the page to persistent storage.
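The division of labor between the two hooks, clean evicted page-cache pages via cleancache and dirty reclaimed pages via frontswap, with fall-back to persistent storage when the pseudo memory is full, can be modeled as below. This is a hedged Python sketch: the method names only loosely echo the kernel's cleancache/frontswap callbacks and are not the actual kernel API.

```python
class MemoryABCDEDriver:
    """Toy model of the two hooks the driver registers: one for
    evicted clean page-cache pages, one for reclaimed dirty pages."""

    def __init__(self, capacity_pages):
        self.capacity_pages = capacity_pages
        self.pages = {}  # (kind, key) -> page data

    def _store(self, key, page):
        if len(self.pages) < self.capacity_pages:
            self.pages[key] = page
            return True          # kept in pseudo main memory
        return False             # caller falls back to persistent storage

    # Invoked when the kernel evicts a clean page-cache page.
    def cleancache_put_page(self, key, page):
        return self._store(("clean", key), page)

    # Invoked when the kernel swaps out a dirty user page.
    def frontswap_store(self, key, page):
        return self._store(("dirty", key), page)

    def load(self, kind, key):
        return self.pages.get((kind, key))

driver = MemoryABCDEDriver(capacity_pages=2)
assert driver.cleancache_put_page(7, b"clean page")
assert driver.frontswap_store(3, b"dirty page")
assert not driver.frontswap_store(4, b"overflow")  # would go to SSD/HDD
assert driver.load("clean", 7) == b"clean page"
```

Returning `False` from a hook is what lets the kernel continue down its normal path (dropping the clean page, or writing the dirty page to the swap device).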
The memory ABCDE driver 320 may include a method for determining whether to accept or reject any write request received by the memory ABCDE system 240. The memory ABCDE driver 320 may make this determination by multiplying the free space in the intermediate memory 260 by an estimated enhancement ratio (augmentation ratio) and comparing the product with the amount of data in the write request. For example, the estimated enhancement ratio may be an estimated deduplication ratio, i.e., an estimate of how much more data can be stored as a result of using deduplication. In some embodiments, the memory ABCDE driver 320 is configured to generate the estimated enhancement ratio from an actual enhancement ratio, measured over completed write operations, that varies slowly over time, so that large fluctuations in the actual enhancement ratio (e.g., for write operations involving small amounts of data) do not produce large fluctuations in the estimate. For example, the estimated enhancement ratio may be set to 1.0 at start-up and, after a time interval of set length has elapsed, periodically updated to equal the average of the actual enhancement ratio during the currently ending time interval of that length.
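The admission policy just described, free space times a slowly varying estimated enhancement ratio compared against the request size, with the estimate refreshed once per interval, might look like the following Python sketch (the class name and the interval-averaging rule are illustrative assumptions):

```python
class AdmissionController:
    """Accept a write only if free_space * estimated_ratio covers the
    request. The estimate is an interval-averaged actual ratio, so a
    single small anomalous write cannot swing it."""

    def __init__(self):
        self.est_ratio = 1.0   # conservative value at start-up
        self.interval = []     # actual ratios observed this interval

    def record_write(self, raw_bytes, stored_bytes):
        self.interval.append(raw_bytes / stored_bytes)

    def end_interval(self):
        # Periodic update: estimate becomes the interval's average.
        if self.interval:
            self.est_ratio = sum(self.interval) / len(self.interval)
        self.interval = []

    def accepts(self, free_bytes, request_bytes):
        return free_bytes * self.est_ratio >= request_bytes

ctrl = AdmissionController()
assert not ctrl.accepts(free_bytes=4096, request_bytes=8192)  # ratio 1.0
ctrl.record_write(raw_bytes=8192, stored_bytes=2048)          # 4:1 observed
ctrl.record_write(raw_bytes=4096, stored_bytes=2048)          # 2:1 observed
ctrl.end_interval()                                           # estimate: 3.0
assert ctrl.accepts(free_bytes=4096, request_bytes=8192)
```

Averaging over a whole interval is what keeps the estimate slow-moving: a single tiny write with an extreme ratio contributes only one sample to the next update.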
End users and application frameworks may be specifically designed to avoid the use of non-paged memory, because other non-paged memory systems may reside in secondary storage (e.g., persistent storage) and may have long access latencies. In some embodiments, since the memory ABCDE system 240 provides non-paged memory, such applications may unnecessarily forgo the benefits of the memory ABCDE system 240. One possible solution would be for developers to rewrite application libraries and middleware frameworks, but this poses the significant challenge of modifying a large number of existing frameworks.
Thus, in some embodiments, the kernel may be modified to allow memory ABCDE system 240 to emulate main memory for the purpose of responding to system calls, such as calls to a system information (sysinfo) function.
FIG. 4 is a software block diagram of a system for modifying a response to a sysinfo function call according to an embodiment of the invention. Referring to FIG. 4, in some embodiments, when an application or middleware 410 calls sysinfo, the returned structure 420 may include (i) a value of total available memory ("tran" in FIG. 4) based on the size of the main memory 230 and the size of the intermediate memory 260, and (ii) a value of total free memory ("fram" in FIG. 4) based on the amount of free memory in the main memory 230 and the amount of free memory in the intermediate memory 260.
The value of the total free memory ("fram" in FIG. 4) that is returned is increased by an amount ("add_fram" in FIG. 4) based on the amount of free memory in the intermediate memory 260.
The value of the total available memory ("tran" in FIG. 4) that is returned is increased by an amount ("add_tran" in FIG. 4) based on the size of the intermediate memory 260.
The system memory information for the main memory 230 ("si_meminfo" in FIG. 4) includes totalram, freeram, sharedram, and bufferram.
totalram in FIG. 4 is the value of total available memory based on the size of the main memory 230.
freeram in FIG. 4 is the value of total free memory based on the amount of free memory in the main memory 230.
sharedram in FIG. 4 is memory shared by the SoC 250 and the CPU.
bufferram in FIG. 4 is the area occupied by CPU buffers.
A call to the sysinfo function is handled in the kernel by do_sysinfo ("do_sysinfo" in FIG. 4).
do_sysinfo fills in a sysinfo structure ("struct sysinfo info" in FIG. 4).
The values of totalram and freeram in this structure are returned to the application or middleware 410.
The amounts added to the reported total memory and free memory to account for the intermediate memory 260 may take into account the expected enhancement ratio for the data to be stored in the intermediate memory 260. In some embodiments, the value of total free memory returned is equal to the sum of (i) the amount of free memory in main memory 230 and (ii) the estimated enhancement ratio multiplied by the amount of free memory in the intermediate memory 260. The estimated enhancement ratio may be calculated as described above, or according to a more conservative algorithm producing an estimate that may be referred to as a "minimum enhancement ratio" (e.g., by using a fixed value of 1.0 or 2.0 at system start-up, or whenever a meaningful estimate cannot yet be established from the data). When a meaningful estimate is available from the data, the estimated enhancement ratio may be calculated, for example, as the minimum actual enhancement ratio over completed write operations during a period of time.
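As an illustration of the arithmetic, the following Python sketch computes the modified sysinfo values under the assumptions stated above (a fixed minimum ratio of 2.0 before a meaningful estimate exists); the function name and parameters are hypothetical, not the kernel's interface:

```python
def augmented_sysinfo(main_total, main_free, inter_total, inter_free,
                      est_ratio, warmed_up):
    """Return (totalram, freeram) as the modified sysinfo would report
    them. Before a meaningful estimate exists, fall back to a fixed
    minimum enhancement ratio of 2.0."""
    min_ratio = est_ratio if warmed_up else 2.0
    totalram = main_total + inter_total
    freeram = main_free + min_ratio * inter_free
    return totalram, freeram

# 16 GiB main memory + 8 GiB intermediate memory; estimate not yet
# established, so the conservative fixed ratio applies:
total, free = augmented_sysinfo(16, 12, 8, 8, est_ratio=None, warmed_up=False)
assert total == 24
assert free == 12 + 2.0 * 8   # 28 "GiB" reported as free
```

Note the asymmetry: free memory is scaled by the ratio (it reflects how much more data could still be absorbed), while total memory is shown here as the raw sum of the two sizes, one plausible reading of the description above.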
This approach may present a further challenge: the implementation of the mlock() system function. When called, this system function is designed to pin (lock) a specified amount of memory, starting at a virtual address, into main memory, to prevent it from being swapped to secondary storage. In operation, in some embodiments, it may occur that a portion of this memory resides in main memory 230 and another portion resides in the memory ABCDE system 240. To honor such user-space requests, the memory ABCDE driver 320 may therefore be configured to ensure that the affected pages currently in the memory ABCDE system 240 remain locked in place, deferring any swap of those pages to secondary (e.g., persistent) storage.
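A toy model of this pinning behavior: pages inside the locked range that reside in the pseudo main memory are exempted from eviction to persistent storage. The Python class below is illustrative only and does not reflect the driver's actual data structures:

```python
class PseudoMemory:
    """Sketch of honoring mlock() when a locked range straddles main
    memory and the pseudo main memory: pinned pages held in the pseudo
    memory must not be swapped out to persistent storage."""

    def __init__(self):
        self.pages = {}     # page number -> page data
        self.pinned = set()

    def mlock_range(self, first_page, n_pages):
        # Pin only the pages of the locked range that live here;
        # the rest of the range is already pinned in main memory.
        for p in range(first_page, first_page + n_pages):
            if p in self.pages:
                self.pinned.add(p)

    def evict_to_persistent(self):
        evicted = [p for p in list(self.pages) if p not in self.pinned]
        for p in evicted:
            del self.pages[p]  # would be written to SSD/HDD here
        return evicted

mem = PseudoMemory()
mem.pages = {10: b"a", 11: b"b", 12: b"c"}
mem.mlock_range(10, 2)                       # pin pages 10 and 11
assert mem.evict_to_persistent() == [12]     # only the unpinned page leaves
assert sorted(mem.pages) == [10, 11]
```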
In summary, some embodiments provide a pseudo-main memory system. The system includes a memory adapter circuit for performing memory augmentation using compression, deduplication, and/or error correction. The memory adapter circuit is coupled to the memory and employs a memory enhancement method to increase the effective storage capacity of the memory.
It will be understood that, although the terms "first," "second," "third," etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed above could be termed a second element, component, region, layer or section without departing from the spirit and scope of the present inventive concept.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the inventive concepts. As used herein, the terms "substantially," "about," and similar terms are used as terms of approximation and not as terms of degree, and are intended to account for the inherent deviations in measured or calculated values that would be recognized by those of ordinary skill in the art. As used herein, the term "major component" refers to a component that is present in a composition, polymer, or product in an amount greater than the amount of any other single component in the composition or product. In contrast, the term "primary component" refers to a component that makes up at least 50% by weight or more of the composition, polymer, or product. As used herein, the term "major portion", when applied to a plurality of items, means at least half of the items.
As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. A statement such as "at least one of", when it follows a list of elements, modifies the entire list of elements rather than individual elements of the list. Further, the use of "may" when describing embodiments of the inventive concept refers to "one or more embodiments of the inventive concept". Also, the term "exemplary" is intended to refer to an example or illustration. As used herein, the term "use" may be considered equivalent to the term "utilize".
It will be understood that when an element or layer is referred to as being "on," "connected to," "coupled to," or "adjacent to" another element or layer, it can be directly on, connected to, coupled to, or directly adjacent to the other element or layer, or one or more intervening elements or layers may be present. In contrast, when an element or layer is referred to as being "directly on," "directly connected to," "directly coupled to," or "directly adjacent to" another element or layer, there are no intervening elements or layers present. When there are intervening elements between a first element and a second element connected to the first element, the first element may be said to be connected to the second element "through" the intervening elements.
Any numerical range recited herein is intended to include all sub-ranges of the same numerical precision subsumed within the recited range. For example, a range of "1.0 to 10.0" is intended to include all subranges between (and including) the recited minimum value of 1.0 and the recited maximum value of 10.0, that is, having a minimum value equal to or greater than 1.0 and a maximum value equal to or less than 10.0, such as, for example, 2.4 to 7.6. Any maximum numerical limitation recited herein is intended to include all lower numerical limitations subsumed therein, and any minimum numerical limitation recited in this specification is intended to include all higher numerical limitations subsumed therein.
While exemplary embodiments of a pseudo-main memory system have been specifically described and illustrated herein, many modifications and variations will be apparent to those skilled in the art. Accordingly, it is to be understood that a pseudo-main memory system constructed according to the principles of this invention may be embodied other than as specifically described herein. The invention is defined in the following claims and their equivalents.
Claims (20)
1. A system, comprising:
a processor;
a memory system including an adapter module and a first memory; and
a second memory connected to the processor through the management module,
wherein the adapter module has a first interface connected to the processor and a second interface connected to the first memory,
wherein the adapter module is configured to store data in and retrieve data from the first memory with an enhancement of the storage capacity of the first memory, the enhancement being configured to increase the free memory of the first memory according to an enhancement ratio,
wherein the adapter module is further configured to estimate the enhancement ratio based on compression and/or deduplication of previous write operations, and
wherein the processor is further configured to evaluate whether sufficient space is available in the first memory based on an estimated enhancement ratio that is a function of the enhancement ratio of the data stored in the first memory over a time interval.
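The estimation recited in claim 1 can be illustrated with a short sketch. The function names and the shape of the write-history records below are illustrative assumptions, not the claimed implementation: the enhancement ratio is simply the raw bytes written divided by the bytes actually consumed after compression and/or deduplication, and the free memory visible to the processor is the physical free space scaled by that ratio.

```python
def estimate_enhancement_ratio(write_history):
    """Estimate the enhancement ratio from previous write operations.

    Each history entry records the raw bytes written and the bytes
    actually consumed after compression and deduplication.  The record
    layout here is an illustrative assumption.
    """
    raw = sum(entry["raw_bytes"] for entry in write_history)
    stored = sum(entry["stored_bytes"] for entry in write_history)
    if stored == 0:
        return 1.0  # no write history yet; assume no capacity gain
    return raw / stored


def usable_free_memory(physical_free_bytes, write_history):
    """Free memory as seen by the processor: physical free space in the
    first memory scaled by the estimated enhancement ratio."""
    return physical_free_bytes * estimate_enhancement_ratio(write_history)
```

For example, if 600 bytes of raw writes occupied only 200 bytes after compression and deduplication, the estimated ratio is 3.0, and 1000 physical free bytes would be reported as 3000 usable bytes.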
2. The system of claim 1, wherein the enhancement comprises at least one of:
compression;
deduplication;
error correction.
3. The system of claim 1, wherein the first interface is a second or higher generation double data rate synchronous dynamic random access memory interface.
4. The system of claim 3, wherein the second interface is a second or higher generation double data rate synchronous dynamic random access memory interface.
5. The system of claim 3, wherein the first interface is an NVDIMM-F interface and the system is configured to operate the memory system as a block device.
6. The system of claim 1, wherein the processor is connected to the adapter module through the management module.
7. The system of claim 1, wherein the first memory is a dynamic random access memory and the second interface is a second or higher generation double data rate synchronous dynamic random access memory interface.
8. The system of claim 1, wherein the adapter module is a separate integrated circuit configured to perform at least one of:
compression;
deduplication;
error correction.
9. The system of claim 1, wherein the second memory is connected to the management module through a third interface, the third interface being a second or higher generation double data rate synchronous dynamic random access memory interface.
10. The system of claim 1, wherein the processor is configured to maintain a page cache in the second memory,
wherein the processor is configured to invoke a clean cache function for a clean page when the clean page is evicted from the page cache, wherein the clean cache function is configured to:
store the clean page in the first memory when sufficient space is available in the first memory;
otherwise, store the clean page in persistent memory.
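The routing described in claims 10 and 11 can be sketched as follows. The `Memory` and `Page` classes and the decision rule are minimal illustrative assumptions (capacity accounting after a store is elided); the point shown is that the eviction target depends on the effective free space, i.e. physical free space scaled by the estimated enhancement ratio.

```python
class Memory:
    """Minimal stand-in for a memory device (illustrative only)."""
    def __init__(self, free_bytes):
        self.free_bytes = free_bytes
        self.pages = []

    def store(self, page):
        self.pages.append(page)


class Page:
    """Minimal stand-in for an evicted page."""
    def __init__(self, size):
        self.size = size


def clean_cache_put(page, first_memory, persistent_memory, estimated_ratio):
    """Route a clean page evicted from the page cache.

    If the enhancement-scaled free space of the first memory can absorb
    the page, store it there; otherwise fall back to persistent memory.
    """
    effective_free = first_memory.free_bytes * estimated_ratio
    if effective_free >= page.size:
        first_memory.store(page)
    else:
        persistent_memory.store(page)
```

With 10 physical free bytes and an estimated ratio of 3.0, a 25-byte clean page still fits in the first memory (effective free space 30), while a 40-byte page is diverted to persistent memory. The front-end swap function of claims 12 and 13 applies the same decision to dirty pages evicted from user memory space.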
11. The system of claim 10, wherein the clean cache function is configured to evaluate whether sufficient space is available in the first memory based on the estimated enhancement ratio.
12. The system of claim 1, wherein the processor is configured to maintain user memory space in the second memory, and
the processor is configured to invoke a front-end swap function for a dirty page when the dirty page is evicted from the user memory space, wherein the front-end swap function is configured to:
store the dirty page in the first memory when sufficient space is available in the first memory;
otherwise, store the dirty page in persistent memory.
13. The system of claim 12, wherein the front-end swap function is configured to evaluate whether sufficient space is available in the first memory based on the estimated enhancement ratio.
14. The system of claim 1, wherein the processor is configured to:
the execution of one or more of the applications is performed,
returning, in response to the one or more applications being applied to the sysinfo function:
based on the value of the total available memory of the size of the first memory and the size of the second memory,
a value of total free memory based on an amount of free memory in the first memory and an amount of free memory in the second memory.
15. The system of claim 14, wherein the value of the total free memory is a sum of:
the amount of free memory in the second memory,
a product of the minimum enhancement ratio and an amount of free memory in the first memory, wherein:
when a set time interval has elapsed since system start-up, the minimum enhancement ratio is a function of the enhancement ratio of the data stored in the first memory over the set time interval;
otherwise, the minimum enhancement ratio is 2.0.
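The sum recited in claims 14 and 15 reduces to a one-line formula. The function below is an illustrative sketch (names and the elapsed/interval parameterization are assumptions): total free memory is the free space in the second memory plus the free space in the first memory scaled by the minimum enhancement ratio, with a default ratio of 2.0 until the set interval has elapsed since start-up.

```python
def total_free_memory(free_second, free_first, measured_ratio, elapsed, interval):
    """Total free memory reported to applications via sysinfo.

    free_second, free_first: free bytes in the second and first memory.
    measured_ratio: enhancement ratio measured over the set interval.
    elapsed: seconds since system start-up; interval: the set interval.
    Until the interval has elapsed, the default ratio 2.0 recited in
    claim 15 is used instead of the measured value.
    """
    min_ratio = measured_ratio if elapsed >= interval else 2.0
    return free_second + min_ratio * free_first
```

For instance, with 100 free bytes in the second memory, 50 in the first, and a measured ratio of 3.5, the reported total is 275.0 once the interval has elapsed, and 200.0 (using the 2.0 default) before it.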
16. A method for operating a computer system, the computer system comprising:
a processor;
a memory system including an adapter module and a first memory; and
a second memory connected to the processor through the management module,
wherein the adapter module has a first interface connected to the processor and a second interface connected to the first memory,
the method comprises the following steps: storing data in and retrieving data from a first memory with an enhancement of a storage capacity of the first memory, the enhancement being configured to increase a free memory of the first memory according to an enhancement ratio,
wherein the adapter module is configured to estimate the enhancement ratio based on compression and/or deduplication of previous write operations, and
wherein the processor is further configured to evaluate whether sufficient space is available in the first memory based on an estimated enhancement ratio that is a function of the enhancement ratio of the data stored in the first memory over a time interval.
17. The method of claim 16, wherein the enhancement comprises at least one of:
compression;
deduplication;
error correction.
18. The method of claim 16, comprising: operating the memory system as a block device using the NVDIMM-F protocol.
19. The method of claim 16, wherein the adapter module is a separate integrated circuit configured to perform at least one of:
compression;
deduplication;
error correction.
20. A system, comprising:
a processor;
a memory system, comprising:
a first memory; and
memory adapter means for storing data in and retrieving data from the first memory with an enhancement of the storage capacity of the first memory, the enhancement being configured to increase the free memory of the first memory according to an enhancement ratio, the memory adapter means being configured to estimate the enhancement ratio based on compression and/or de-duplication of previous write operations,
the memory adapter device has a first interface connected to the processor and a second interface connected to the first memory,
the first interface is an NVDIMM-F interface
The system is configured to operate the memory system as a block device,
wherein the system further comprises: a second memory connected to the processor through the management module,
wherein the processor is further configured to evaluate whether sufficient space is available in the first memory based on an estimated enhancement ratio that is a function of the enhancement ratio of the data stored in the first memory over a time interval.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310609420.9A CN116594931B (en) | 2016-09-30 | 2017-09-30 | Computing system and method for operating a computing system |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/282,848 US10372606B2 (en) | 2016-07-29 | 2016-09-30 | System and method for integrating overprovisioned memory devices |
US15/282,848 | 2016-09-30 | ||
US201762489997P | 2017-04-25 | 2017-04-25 | |
US62/489,997 | 2017-04-25 | ||
US15/663,619 | 2017-07-28 | ||
US15/663,619 US10515006B2 (en) | 2016-07-29 | 2017-07-28 | Pseudo main memory system |
CN201710913938.6A CN107885676B (en) | 2016-09-30 | 2017-09-30 | Computing system and method for operating a computing system |
CN202310609420.9A CN116594931B (en) | 2016-09-30 | 2017-09-30 | Computing system and method for operating a computing system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710913938.6A Division CN107885676B (en) | 2016-09-30 | 2017-09-30 | Computing system and method for operating a computing system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116594931A CN116594931A (en) | 2023-08-15 |
CN116594931B true CN116594931B (en) | 2024-04-05 |
Family
ID=61781195
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310609420.9A Active CN116594931B (en) | 2016-09-30 | 2017-09-30 | Computing system and method for operating a computing system |
CN201710913938.6A Active CN107885676B (en) | 2016-09-30 | 2017-09-30 | Computing system and method for operating a computing system |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710913938.6A Active CN107885676B (en) | 2016-09-30 | 2017-09-30 | Computing system and method for operating a computing system |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP6788566B2 (en) |
KR (1) | KR102266477B1 (en) |
CN (2) | CN116594931B (en) |
TW (1) | TWI710903B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11074007B2 (en) * | 2018-08-08 | 2021-07-27 | Micron Technology, Inc. | Optimize information requests to a memory system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101201783A (en) * | 2006-12-14 | 2008-06-18 | 英业达股份有限公司 | Method for early warning insufficiency of memory space of network memory system |
CN103631533A (en) * | 2012-08-27 | 2014-03-12 | 三星电子株式会社 | Computing device and operating method of computing device |
CN104364760A (en) * | 2012-05-06 | 2015-02-18 | 桑迪士克科技股份有限公司 | Parallel computation with multiple storage devices |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5668957A (en) * | 1995-11-02 | 1997-09-16 | International Business Machines Corporation | Method and apparatus for providing virtual DMA capability on an adapter connected to a computer system bus with no DMA support |
JP2004139503A (en) * | 2002-10-21 | 2004-05-13 | Matsushita Electric Ind Co Ltd | Storage device and its control method |
US8001294B2 (en) * | 2004-09-28 | 2011-08-16 | Sony Computer Entertainment Inc. | Methods and apparatus for providing a compressed network in a multi-processing system |
JP4700562B2 (en) * | 2006-05-18 | 2011-06-15 | 株式会社バッファロー | Data storage device and data storage method |
US20100161909A1 (en) * | 2008-12-18 | 2010-06-24 | Lsi Corporation | Systems and Methods for Quota Management in a Memory Appliance |
KR20100133710A (en) * | 2009-06-12 | 2010-12-22 | 삼성전자주식회사 | Memory system and code data loading method therof |
JP2011128792A (en) * | 2009-12-16 | 2011-06-30 | Toshiba Corp | Memory management device |
US8966477B2 (en) * | 2011-04-18 | 2015-02-24 | Intel Corporation | Combined virtual graphics device |
US20150242432A1 (en) * | 2014-02-21 | 2015-08-27 | Microsoft Corporation | Modified Memory Compression |
2017
- 2017-09-04 TW TW106130111A patent/TWI710903B/en active
- 2017-09-28 KR KR1020170126199A patent/KR102266477B1/en active IP Right Grant
- 2017-09-29 JP JP2017190020A patent/JP6788566B2/en active Active
- 2017-09-30 CN CN202310609420.9A patent/CN116594931B/en active Active
- 2017-09-30 CN CN201710913938.6A patent/CN107885676B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101201783A (en) * | 2006-12-14 | 2008-06-18 | 英业达股份有限公司 | Method for early warning insufficiency of memory space of network memory system |
CN104364760A (en) * | 2012-05-06 | 2015-02-18 | 桑迪士克科技股份有限公司 | Parallel computation with multiple storage devices |
CN103631533A (en) * | 2012-08-27 | 2014-03-12 | 三星电子株式会社 | Computing device and operating method of computing device |
Also Published As
Publication number | Publication date |
---|---|
TW201823994A (en) | 2018-07-01 |
JP6788566B2 (en) | 2020-11-25 |
KR20180036591A (en) | 2018-04-09 |
CN107885676A (en) | 2018-04-06 |
CN116594931A (en) | 2023-08-15 |
TWI710903B (en) | 2020-11-21 |
JP2018060538A (en) | 2018-04-12 |
KR102266477B1 (en) | 2021-06-18 |
CN107885676B (en) | 2023-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11030088B2 (en) | Pseudo main memory system | |
US10956323B2 (en) | NVDIMM emulation using a host memory buffer | |
US8938601B2 (en) | Hybrid memory system having a volatile memory with cache and method of managing the same | |
JP5752989B2 (en) | Persistent memory for processor main memory | |
Bae et al. | 2B-SSD: The case for dual, byte-and block-addressable solid-state drives | |
US7991963B2 (en) | In-memory, in-page directory cache coherency scheme | |
KR102443600B1 (en) | hybrid memory system | |
CN109952565B (en) | Memory access techniques | |
JP2015026379A (en) | Controller management of memory array of storage device using magnetic random access memory (mram) | |
US11520520B2 (en) | Memory system and method of controlling nonvolatile memory | |
CN110851076A (en) | Memory system and deduplication memory system | |
CN110597742A (en) | Improved storage model for computer system with persistent system memory | |
CN110851074B (en) | Embedded reference counter and special data pattern automatic detection | |
CN116594931B (en) | Computing system and method for operating a computing system | |
JP2022516000A (en) | Compression of data stored in the cache memory in the cache memory hierarchy | |
CN115168317B (en) | LSM tree storage engine construction method and system | |
EP3916567B1 (en) | Method for processing page fault by processor | |
CN114625307A (en) | Computer readable storage medium, and data reading method and device of flash memory chip | |
US20220188228A1 (en) | Cache evictions management in a two level memory controller mode | |
Li et al. | A NUMA-aware Key-Value Store for Hybrid Memory Architecture | |
US20220365712A1 (en) | Method and device for accessing memory | |
KR20200034560A (en) | Storage device and operating method of storage device | |
CN113448487A (en) | Computer readable storage medium, method and device for writing flash memory management table | |
KR20170017355A (en) | Method for storaging and restoring system status of computing apparatus and computing apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||