US20140359241A1 - Memory data management - Google Patents

Memory data management

Info

Publication number
US20140359241A1
Authority
US
United States
Prior art keywords
data
computer
memory units
physical memory
memory
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/906,691
Other versions
US9043569B2 (en)
Inventor
Timothy J. Dell
Manoj Dusanapudi
Prasanna Jayaraman
Anil B. Lingambudi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Application filed by International Business Machines Corp
Priority to US13/906,691
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. Assignment of assignors interest (see document for details). Assignors: DELL, TIMOTHY J.; LINGAMBUDI, ANIL B.; JAYARAMAN, PRASANNA; DUSANAPUDI, MANOJ
Publication of US20140359241A1
Application granted
Publication of US9043569B2
Legal status: Expired - Fee Related

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C 16/00: Erasable programmable read-only memories
    • G11C 16/02: Erasable programmable read-only memories electrically programmable
    • G11C 16/06: Auxiliary circuits, e.g. for writing into memory
    • G11C 16/34: Determination of programming status, e.g. threshold voltage, overprogramming or underprogramming, retention
    • G11C 16/349: Arrangements for evaluating degradation, retention or wearout, e.g. by counting erase cycles
    • G11C 16/3495: Circuits or methods to detect or delay wearout of nonvolatile EPROM or EEPROM memory devices, e.g. by counting numbers of erase or reprogram cycles, by using multiple memory areas serially or cyclically
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/30: Monitoring
    • G06F 11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3466: Performance evaluation by tracing or monitoring
    • G06F 11/348: Circuit details, i.e. tracer hardware
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/0223: User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023: Free address space management
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0625: Power saving in storage systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646: Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/0647: Migration mechanisms
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/30: Monitoring
    • G06F 11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/0223: User address space allocation, e.g. contiguous or non contiguous base addressing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/06: Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2201/00: Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/88: Monitoring involving counting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/25: Using a specific main memory architecture
    • G06F 2212/253: Centralized memory
    • G06F 2212/2532: Centralized memory comprising a plurality of modules

Abstract

A method and computer-readable storage media are provided for rearranging data in physical memory units. In one embodiment, a method may include monitoring utilization counters. The method may further include comparing the utilization counters for a match with an instance in a first table containing one or more instances when data may be rearranged in the physical memory units. The table may further indicate where the data should be relocated by a rearrangement. The method may also include continuing to monitor the utilization counters if a match is not found with an instance in the first table. The method may further include rearranging the data in the physical memory units if a match between the utilization counters and an instance in the first table is found.

Description

    TECHNICAL FIELD
  • This disclosure generally relates to memory systems, and in particular, to management of data in a memory system.
  • BACKGROUND
  • Modern computer systems, such as servers, use a packaged type of volatile memory in their main memories. The main memory is where the computer holds the current programs and data that are in use. These programs in the main memory hold the instructions that the processor executes and the data that those instructions work with. The main memory is an important part of the main processing subsystem of the computer, tied in with the processor, cache, motherboard, and chipset, allowing the computer system to function.
  • SUMMARY
  • In one embodiment, a method is provided for rearranging data in physical memory units. The method may include monitoring utilization counters. The method may further include comparing the utilization counters for a match with an instance in a first table containing one or more instances when data may be rearranged in the physical memory units. The table may further indicate where the data should be relocated by a rearrangement. The method may also include continuing to monitor the utilization counters if a match is not found with an instance in the first table. The method may further include rearranging the data in the physical memory units if a match between the utilization counters and an instance in the first table is found.
  • In another embodiment, computer-readable storage media are provided for rearranging data in physical memory units. The computer-readable storage media may provide for monitoring utilization counters. The computer-readable storage media may further provide for comparing the utilization counters for a match with an instance in a first table containing one or more instances when data may be rearranged in the physical memory units. The table may further indicate where the data should be relocated by a rearrangement. The computer-readable storage media may also provide for continuing to monitor the utilization counters if a match is not found with an instance in the first table. The computer-readable storage media may further provide for rearranging the data in the physical memory units if a match between the utilization counters and an instance in the first table is found.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements or steps.
  • FIG. 1A depicts a high-level block diagram of an exemplary system, according to an embodiment of the invention.
  • FIG. 1B is a block diagram of example data that may reside in the memory subsystem, according to an embodiment of the invention.
  • FIG. 2 is a flowchart illustrating a method 200 for managing data in memory, according to an embodiment of the invention.
  • FIG. 3 is a detailed illustration of a table that may be found in block 210 of FIG. 2, according to an embodiment of the invention.
  • FIG. 4 is an exemplary diagram of data rearrangement, according to an embodiment of the invention.
  • DETAILED DESCRIPTION
  • In many systems, such as computer systems and electronic devices, physical memory (herein referred to as memory) may be used to store data used by the system. This may include operating systems, application software, and information used by the systems or applications. In many systems, the memory may be composed of more than one element, resulting in a plurality of physical memory units. For example, the memory may include multiple chips, modules, or cards. Systems may also use a mixture of memory types and locations. The frequency of information access may vary, along with the type of access occurring. The access may vary even for part of a single program or application. For example, for some applications the graphical information may be loaded into memory and frequently accessed, while a data table of options or preferences may also be loaded into the memory but accessed far less frequently than the graphical information. Embodiments of the present invention may allow for the management of the location, or placement, of data loaded into memory units. The management of the location of data in memory units may allow for improved energy usage, run times, or system life.
  • FIG. 1A depicts a high-level block diagram of an exemplary system for implementing an embodiment of the invention. The mechanisms and apparatus of embodiments of the present invention apply equally to any appropriate computing system. The major components of the computer system 001 comprise one or more CPUs 002, a memory subsystem 004, a terminal interface 012, a storage interface 014, an I/O (Input/Output) device interface 016, and a network interface 018, all of which are communicatively coupled, directly or indirectly, for inter-component communication via a memory bus 003, an I/O bus 008, and an I/O bus interface unit 010.
  • The computer system 001 contains one or more general-purpose programmable central processing units (CPUs) 002A, 002B, 002C, and 002D, herein generically referred to as the CPU 002. In an embodiment, the computer system 001 contains multiple processors typical of a relatively large system; however, in another embodiment the computer system 001 may alternatively be a single CPU system. Each CPU 002 executes instructions stored in the memory subsystem 004 and may comprise one or more levels of on-board cache.
  • In an embodiment, the memory subsystem 004 may comprise a random-access semiconductor memory, storage device, or storage medium (either volatile or non-volatile) for storing data and programs. In another embodiment, the memory subsystem 004 represents the entire virtual memory of the computer system 001, and may also include the virtual memory of other computer systems coupled to the computer system 001 or connected via a network. The memory subsystem 004 is conceptually a single monolithic entity, but in other embodiments the memory subsystem 004 is a more complex arrangement, such as a hierarchy of caches and other memory devices. For example, memory may exist in multiple levels of caches, and these caches may be further divided by function, so that one cache holds instructions while another holds non-instruction data, which is used by the processor or processors. Memory may be further distributed and associated with different CPUs or sets of CPUs, as is known in any of various so-called non-uniform memory access (NUMA) computer architectures.
  • The main memory or memory subsystem 004 may contain elements for control and flow of memory used by the CPU 002. This may include all or a portion of the following: a memory controller 005, a memory buffer 006, and one or more memory units, or devices, 007 a, 007 b, 007 c, and 007 d (generically referred to as 007). In the illustrated embodiment, the memory devices may be individual or sets of dual in-line memory modules (DIMMs), which are a series of dynamic random-access memory integrated circuits mounted on a printed circuit board and designed for use in personal computers, workstations, and servers. In various embodiments, these elements may be connected with buses for communication of data and instructions. In other embodiments, these elements may be combined into single chips that perform multiple duties or integrated into various types of memory modules. Although the illustrated elements are shown as being contained within the memory subsystem 004 in the computer system 001, in other embodiments some or all of them may be on different computer systems and may be accessed remotely, e.g., via a network.
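  • For illustration only, the following C sketch models the memory subsystem elements described above as hypothetical data structures; the names (struct dimm, struct memory_subsystem, NUM_DIMMS) and fields are assumptions and not part of the disclosure.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_DIMMS 4                    /* e.g., DIMMs 007a-007d */

struct dimm {                          /* one memory unit, or device, 007 */
    uint64_t capacity_bytes;           /* total capacity of the module */
    uint64_t used_bytes;               /* bytes currently holding data */
    bool     powered_on;               /* whether the module is powered */
};

struct memory_subsystem {              /* memory subsystem 004 */
    struct dimm dimms[NUM_DIMMS];      /* memory units 007a-007d */
    /* a fuller model would also represent the memory controller 005
     * and the memory buffer 006 */
};
```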
  • Although the memory bus 003 is shown in FIG. 1A as a single bus structure providing a direct communication path among the CPUs 002, the memory subsystem 004, and the I/O bus interface 010, the memory bus 003 may in fact comprise multiple different buses or communication paths, which may be arranged in any of various forms, such as point-to-point links in hierarchical, star or web configurations, multiple hierarchical buses, parallel and redundant paths, or any other appropriate type of configuration. Furthermore, while the I/O bus interface 010 and the I/O bus 008 are shown as single respective units, the computer system 001 may, in fact, contain multiple I/O bus interface units 010, multiple I/O buses 008, or both. While multiple I/O interface units are shown, which separate the I/O bus 008 from various communications paths running to the various I/O devices, in other embodiments some or all of the I/O devices are connected directly to one or more system I/O buses.
  • In various embodiments, the computer system 001 is a multi-user mainframe computer system, a single-user system, or a server computer or similar device that has little or no direct user interface, but receives requests from other computer systems (clients). In other embodiments, the computer system 001 is implemented as a desktop computer, portable computer, laptop or notebook computer, tablet computer, pocket computer, telephone, smart phone, or any other appropriate type of electronic device.
  • FIG. 1A is intended to depict the representative major components of an exemplary computer system 001. But individual components may have greater complexity than represented in FIG. 1A, components other than or in addition to those shown in FIG. 1A may be present, and the number, type, and configuration of such components may vary. Several particular examples of such complexities or additional variations are disclosed herein. The particular examples disclosed are for example only and are not necessarily the only such variations.
  • FIG. 1B is a block diagram of example data that may reside in the memory subsystem 004, according to an embodiment of the invention. With reference to the content of a memory, the term data as used herein may refer to program instructions or information that is processed using program instructions. In the illustrated embodiment, the memory subsystem 004 may contain data such as a hypervisor 101, an operating system 102, an application A 103, an application B 104, the table 105, and utilization counters 106. In alternative embodiments, the utilization counters may be implemented in hardware, such as within the memory controller 005. The memory subsystem 004 may also include firmware 108. In various embodiments, these examples of data may be more numerous or individual ones may be absent. This data may be subdivided and may fully or partially reside in one or more of the elements of the memory subsystem 004, such as the memory buffer 006 or DIMMs 007 a-007 b. In various embodiments, the complete data for one of these examples may reside only partially in the memory subsystem 004. For example, the operating system 102 may only be partially loaded into the memory subsystem 004. The operating system 102 may load additional data parts from storage using the storage interface 014, for example, as needed. In various embodiments, the hypervisor 101 or operating system 102 may be fully or partially in virtual memory. It is understood that the processor 002 deals with virtual memory addresses and that the virtual memory addresses are translated through known means to access physical memory units, such as DIMMs 007 a-007 d. In various embodiments, the hypervisor 101 or operating system 102 may be fully or partially in the physical memory, such as DIMMs 007 a-007 b, of the memory subsystem 004.
  • FIG. 2 is a flowchart illustrating a method 200 that may be used for the management of the location, or placement, of data loaded into memory units, according to an embodiment of the invention. The method may start at block 201. In block 205, utilization counters 106 may be monitored. Counters may be used to track hot and cold pages. In an embodiment of the invention, the utilization counters 106 may be used to track data density and access frequency in one or more memory units. For example, the utilization counters 106 may track data density, access frequency, and types of access for each DIMM in a group of DIMMs 007 a-007 d. The use of the utilization counters 106 may allow for the determination of the capability to split the data on DIMMs 007 a-007 d. The use of the utilization counters 106 may also allow the data occupancy of a particular DIMM 007, such as the percentage of DIMM 007 a that holds data, to be identified.
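  • As a minimal sketch only, and not the claimed implementation, the per-DIMM utilization counters 106 described for block 205 might be represented as follows; the structure and field names are assumptions.

```c
#include <stdint.h>

struct util_counters {                 /* one set of counters per DIMM 007 */
    uint8_t  density_pct;              /* occupancy: percent of the DIMM holding data */
    uint32_t accesses_per_window;      /* access frequency over a sampling window */
    uint32_t read_accesses;            /* reads observed in the window */
    uint32_t write_accesses;           /* writes observed in the window */
};

/* Derived value: the share of accesses that are reads, in percent. */
static inline uint8_t read_share_pct(const struct util_counters *c)
{
    uint32_t total = c->read_accesses + c->write_accesses;
    return total ? (uint8_t)((100u * c->read_accesses) / total) : 0;
}
```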
  • In block 210, a table 105 may be accessed and compared to the utilization counters 106 being monitored in block 205. The table 105 may include one or more instances when data may be relocated. The table 105 may also include where or how data should be relocated for a specific instance. This table 105 may be located anywhere that it may be accessed by the system. This may include such locations as a hard disk drive, a solid state device (SSD), removable memory cards, optical storage, flash memory devices, network attached storage (NAS), connections to storage area network (SAN) devices or to a data cloud, or other devices that may store non-volatile data. A copy of the table 105 may also be stored within the memory subsystem 004 in some systems. The table 105 is discussed in greater detail below.
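  • For illustration, one hypothetical in-memory form of an instance, or case, in table 105 is sketched below, pairing the conditions under which data may be relocated with the action to take; the condition fields, action names, and metric_priority field are assumptions, not the actual layout of table 105.

```c
#include <stdint.h>

enum relocation_action {               /* the action to take when a case is matched */
    ACTION_NONE,
    ACTION_MERGE_AND_POWER_DOWN,       /* consolidate data, then power the source DIMM down */
    ACTION_SPREAD_FOR_PERFORMANCE      /* spread hot data across DIMMs */
};

struct table_entry {                   /* one instance, or case, in table 105 */
    uint8_t max_density_pct;           /* condition: DIMM occupancy below this value */
    uint8_t min_read_share_pct;        /* condition: share of reads above this value */
    enum relocation_action action;     /* where or how the data should be relocated */
    uint8_t metric_priority;           /* e.g., 0 = power savings, 1 = access speed */
};
```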
  • In block 215, a determination is made whether there is a match between the use of the memory, such as DIMMs 007 a-007 d, according to the utilization counters 106 being monitored and an instance, or case, in the table 105. If the answer is "no", the method may proceed back to block 205 and monitor the utilization counters 106. If the answer is "yes", the method may proceed to selection of an entity, such as the hypervisor 101 or operating system 102, to handle the rearrangement of data in the memory units in block 220.
  • In various embodiments, the determination may stop once a single match between the utilization counters 106 and an instance in the table 105 is made. In other embodiments, the method may look for multiple or all matches between the utilization counters 106 and instances on the table 105. In embodiments where more than one match may be found, the selection between multiple instances may be based upon a chosen performance metric. In various embodiments, the selection may be based on a prioritization of performance metrics built into the table 105 for the various instances. In other embodiments, the selection of the performance metric may be based upon user input or settings. In other embodiments, the selection of the performance metric may be based upon the programs or applications that may be affected by the rearrangement of the data in the memory units. For example, the user may indicate or set a preference that the performance metric used in selecting an instance should be energy efficiency. In another example, a program may require that the instance resulting in the fastest data access be selected, based on the preferred performance metric. Some examples of performance metrics may be power consumption, access speed, and load balancing. It is contemplated that other performance metrics may be used and be within the scope of the invention.
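  • Assuming the hypothetical util_counters and table_entry sketches above, the matching of blocks 210 and 215, together with selection among multiple matching cases by a preferred performance metric, might look like the following; the helper names and the selection rule are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>

static int case_matches(const struct table_entry *e,
                        const struct util_counters *c)
{
    return c->density_pct < e->max_density_pct &&
           read_share_pct(c) > e->min_read_share_pct;
}

/* Returns the index of the chosen case, or -1 when no case matches and
 * monitoring simply continues (back to block 205). */
static int find_best_case(const struct table_entry *tbl, size_t n_cases,
                          const struct util_counters *c,
                          uint8_t preferred_metric)
{
    int best = -1;
    for (size_t i = 0; i < n_cases; i++) {
        if (!case_matches(&tbl[i], c))
            continue;
        /* the first match wins unless a later match better fits the
         * preferred performance metric */
        if (best < 0 || (tbl[i].metric_priority == preferred_metric &&
                         tbl[best].metric_priority != preferred_metric))
            best = (int)i;
    }
    return best;
}
```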
  • In block 220, an entity to perform the rearrangement of data in the memory units may be selected. In various embodiments, the entity may be a hypervisor 101 or an operating system 102. In some embodiments where there is a hypervisor manager that takes care of memory allocation, it may be selected to rearrange the data in the memory units. In various embodiments, there may be a determination that the data in the memory units is, or may be, used by multiple operating systems or by multiple instances of the same operating system, which may require the data rearrangement to be done by a hypervisor 101. In various embodiments, the memory mapping may be within the operating system 102 and the data rearrangement may be managed by the operating system 102. In embodiments where the data rearrangement is managed by an operating system 102, the hypervisor 101 may have to be informed or updated.
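  • A minimal sketch of the entity selection in block 220, under the assumption that the decision reduces to two flags; the enum values and flag names are hypothetical.

```c
enum rearranging_entity { ENTITY_HYPERVISOR, ENTITY_OPERATING_SYSTEM };

static enum rearranging_entity
select_entity(int data_shared_across_operating_systems,
              int hypervisor_owns_memory_mapping)
{
    /* data used by multiple operating systems (or multiple instances of the
     * same operating system), or a memory mapping owned by the hypervisor,
     * points to the hypervisor 101 performing the rearrangement */
    if (data_shared_across_operating_systems || hypervisor_owns_memory_mapping)
        return ENTITY_HYPERVISOR;

    /* otherwise the operating system 102 may manage the rearrangement,
     * informing the hypervisor 101 of the updated mapping afterwards */
    return ENTITY_OPERATING_SYSTEM;
}
```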
  • In block 225, a trigger for the rearrangement of the data in the memory units may be initialized. In various embodiments, the operating system 102 or hypervisor 101 may poll the counters to initiate the data rearrangement. In some embodiments, an interrupt methodology may be used to trigger the memory management.
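  • The polling trigger of block 225 might, as a sketch that reuses the hypothetical helpers above, look like the loop below; an interrupt-driven variant would invoke the same check from an interrupt handler instead of a loop. The rearrange_data hook is assumed, not defined by the disclosure.

```c
#include <stddef.h>
#include <stdint.h>

/* hypothetical hand-off into block 230, performed by the selected entity */
void rearrange_data(size_t dimm_index, const struct table_entry *matched_case);

static void poll_and_trigger(const struct table_entry *tbl, size_t n_cases,
                             const struct util_counters *counters,
                             size_t num_dimms, uint8_t preferred_metric)
{
    for (size_t d = 0; d < num_dimms; d++) {
        int matched = find_best_case(tbl, n_cases, &counters[d], preferred_metric);
        if (matched >= 0)
            rearrange_data(d, &tbl[matched]);   /* initiate the rearrangement */
    }
}
```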
  • In block 230, the data is rearranged. As previously discussed, this may result in moving some or all of the data from one or more memory units to one or more different memory units. In various embodiments, the rearrangement of data may be automatic. In alternate embodiments, the rearrangement may occur incrementally. In these embodiments, the rearrangement may happen on an as-needed basis, be tied to utilization, or be periodic. The method may end in block 250.
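  • As an illustration of the incremental rearrangement described for block 230, the sketch below copies data off a source memory unit in fixed-size increments; memcpy over mapped regions stands in for whatever copy and remap mechanism the hypervisor or operating system actually uses, and the function name and chunking scheme are assumptions.

```c
#include <stddef.h>
#include <string.h>

static void migrate_incrementally(unsigned char *src_dimm_region,
                                  unsigned char *dst_dimm_region,
                                  size_t bytes_to_move, size_t chunk_bytes)
{
    size_t moved = 0;
    while (moved < bytes_to_move) {
        size_t n = bytes_to_move - moved;
        if (n > chunk_bytes)
            n = chunk_bytes;                 /* move one increment at a time */
        memcpy(dst_dimm_region + moved, src_dimm_region + moved, n);
        moved += n;
        /* a real system would update the corresponding virtual-to-physical
         * address translations here before moving the next increment */
    }
}
```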
  • FIG. 3 is a detailed illustration of a table 105 that may be found in block 210 of FIG. 2, according to an embodiment of the invention. In the illustrated embodiment, the table 105 may enable movement of data present in DIMM 007 a or DIMM 007 b. The relocation or rearrangement of the data may be based on the frequency of access (columns 4 & 5) or the type of access (columns 6 & 7) being made to the data, using the table 105. The table 105 may give rules or guidelines about the conditions for data movement to occur (columns 2 through 7). The table 105 may also give instructions on the action to take (column 8) if a case is matched. In various embodiments, the table 105 may contain guidance for cases of scarcely utilized memory configurations, where power savings are of higher priority. In other embodiments, the table 105 may base rearrangement of data on hot and cold pages, so that data movement may occur to favor performance.
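  • For illustration only, a hypothetical instance of table 105 is shown below using the table_entry sketch from above. Case "1" is chosen to be consistent with the FIG. 4 example (source DIMM occupancy below 30%, reads above 50%, action: move the data and power the source DIMM down); the remaining values are invented placeholders and do not reproduce the actual contents of FIG. 3.

```c
static const struct table_entry table_105[] = {
    /* case 1: lightly occupied, read-mostly DIMM: consolidate its data
     * elsewhere and power the source DIMM down (favoring power savings) */
    { .max_density_pct = 30, .min_read_share_pct = 50,
      .action = ACTION_MERGE_AND_POWER_DOWN, .metric_priority = 0 },

    /* case 2: invented placeholder favoring performance by spreading hot
     * (frequently accessed) pages across DIMMs */
    { .max_density_pct = 80, .min_read_share_pct = 0,
      .action = ACTION_SPREAD_FOR_PERFORMANCE, .metric_priority = 1 },
};
```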
  • In various embodiments, a multitude of tables may be available to the system. In such embodiments, the selection of the table 105 to use may be based upon user selection, program or application preferences, or operating system programming. In various embodiments, the table selection may be based upon a preferred or designated performance metric, similar to the performance metric used for selecting between instances on a single table 105. Some examples of performance metrics may be power consumption, access speed, and load balancing. For example, an algorithm may select a different table to use when currently favoring performance over power. It is contemplated that a variety of tables, parameters, performance metrics, and selection criteria may be used and be within the scope of the invention.
  • FIG. 4 is an exemplary diagram of data rearrangement, according to an embodiment of the invention. In the illustrated embodiment, DIMM 007 a and DIMM 007 b are shown as being used by a system and containing data. DIMM 007 a is initially shown containing low frequency accessed (LFA) data 420 with a density of less than 30% and unused space 410 a. DIMM 007 b is initially shown containing high frequency accessed (HFA) data 430 with a density of less than 50% and unused space 410 b. In this embodiment, the method 200 may be used with table 105 to determine that data may be rearranged, or moved. In this example, the LFA data may have an access frequency of less than 30%, and the accesses may contain greater than 50% reads. The information on the density, frequency, and type of access may be found in the utilization counters 106. Using the example parameters, the data on DIMMs 007 a and 007 b, per the utilization counters 106, matches case "1" on table 105.
  • In the illustrated embodiment, data movement 230 may occur since there is a match between the information in the utilization counters 106 and a case on table 105. Following the data movement, DIMM 007 a contains only unused space 410 c. DIMM 007 b now contains unused space 410 d, LFA data 420, and HFA data 430. In this embodiment, DIMM 007 a has had all data rearranged, or migrated, off of it and may be powered down as directed by table 105. This embodiment is for example purposes only and other embodiments may vary as previously mentioned.
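  • Tying the sketches above together, the FIG. 4 scenario might be exercised as follows: counters resembling DIMM 007 a (roughly 25% occupancy, about 75% of accesses being reads) match case "1" of the hypothetical table_105, whose action is to consolidate the data and power the source DIMM down. The concrete numbers are illustrative assumptions.

```c
#include <stdio.h>

int main(void)
{
    /* counters as they might look for DIMM 007a in FIG. 4 */
    struct util_counters dimm_007a = {
        .density_pct = 25,
        .accesses_per_window = 40,
        .read_accesses = 30,           /* 75% of accesses are reads */
        .write_accesses = 10
    };

    int matched = find_best_case(table_105, 2, &dimm_007a,
                                 0 /* prefer power savings */);
    if (matched >= 0 &&
        table_105[matched].action == ACTION_MERGE_AND_POWER_DOWN)
        printf("case %d matched: migrate data off DIMM 007a, then power it down\n",
               matched + 1);
    return 0;
}
```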
  • Exemplary embodiments have been described in the context of a fully functional computer system for rearranging data in physical memory units. Readers of skill in the art will recognize, however, that embodiments also may include a computer program product disposed upon computer-readable storage medium or media (or machine-readable storage medium or media) for use with any suitable data processing system or storage system. The computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Persons skilled in the art will immediately recognize that any computer or storage system having suitable programming means will be capable of executing the steps of a method disclosed herein as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the claims.
  • As will be appreciated by one skilled in the art, aspects may be embodied as a system, method, or computer program product. Accordingly, aspects may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • Any combination of one or more computer readable medium(s) may be used. The computer readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer readable signal medium or a computer readable storage medium may be a non-transitory medium in an embodiment. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the C programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, or on one or more modules of a storage system. The program code may execute partly on a user's computer or one module and partly on a remote computer or another module, or entirely on the remote computer, server, or other module. In the latter scenario, the remote computer or other module may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • Aspects are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function or act specified in the flowchart, or block diagram block or blocks.
  • The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions or acts specified in the flowchart, or block diagram block or blocks.
  • The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
  • The terms “server” and “mobile client” are used herein for convenience only, and in various embodiments a computer system that operates as a mobile client computer in one environment may operate as a server computer in another environment, and vice versa. The mechanisms and apparatus of embodiments of the present invention apply equally to any appropriate computing system, including a computer system that does not employ the mobile client-server model.
  • While this disclosure has described the details of various embodiments shown in the drawings, these details are not intended to limit the scope of the invention as claimed in the appended claims.

Claims (20)

What is claimed is:
1. A computer implemented method for managing the use of physical memory units, comprising:
monitoring utilization counters;
comparing the utilization counters for a match with an instance in a first table containing one or more instances when data may be rearranged in the physical memory units and where the data should be relocated by a rearrangement;
continuing to monitor the utilization counters if a match is not found with an instance in the first table; and
rearranging the data in the physical memory units if a match between the utilization counters and an instance in the first table is found.
2. The method of claim 1, the rearrangement of data in the memory further comprising:
selecting an entity to rearrange the data;
rearranging the data in the physical memory units by the entity.
3. The method of claim 2, wherein the entity is a hypervisor.
4. The method of claim 2, wherein the entity is an operating system.
5. The method of claim 1, further comprising:
rearranging the data in the physical memory units to modify a performance metric.
6. The method of claim 5, wherein the performance metric is power consumption.
7. The method of claim 5, wherein the performance metric is access speed.
8. The method of claim 5, wherein the performance metric is load balancing.
9. The method of claim 1, further comprising:
selecting the first table from a plurality of tables based upon a goal criterion.
10. The method of claim 1, wherein the physical memory units are a first and a second DIMM.
11. A computer-readable storage medium having executable code stored thereon to cause a machine to rearrange data in physical memory units, comprising:
monitoring utilization counters;
comparing the utilization counters for a match with an instance in a first table containing one or more instances when data may be rearranged in the physical memory units and where the data should be relocated by a rearrangement;
continuing to monitor the utilization counters if a match is not found with an instance in the first table; and
rearranging the data in the physical memory units if a match between the utilization counters and an instance in the first table is found.
12. The computer-readable storage medium of claim 11, the rearrangement of data in the physical memory units further comprising:
selecting an entity to rearrange the data;
rearranging the data in the physical memory units by the entity.
13. The computer-readable storage medium of claim 12, wherein the entity is a hypervisor.
14. The computer-readable storage medium of claim 12, wherein the entity is an operating system.
15. The computer-readable storage medium of claim 11, further comprising:
rearranging the data in the physical memory units to modify a performance metric.
16. The computer-readable storage medium of claim 15, wherein the performance metric is power consumption.
17. The computer-readable storage medium of claim 15, wherein the performance metric is access speed.
18. The computer-readable storage medium of claim 15, wherein the performance metric is load balancing.
19. The computer-readable storage medium of claim 11, further comprising:
selecting the first table from a plurality of tables based upon a goal criterion.
20. The computer-readable storage medium of claim 11, wherein the physical memory units are a first and a second DIMM.
US13/906,691 2013-05-31 2013-05-31 Memory data management Expired - Fee Related US9043569B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/906,691 US9043569B2 (en) 2013-05-31 2013-05-31 Memory data management

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/906,691 US9043569B2 (en) 2013-05-31 2013-05-31 Memory data management

Publications (2)

Publication Number Publication Date
US20140359241A1 true US20140359241A1 (en) 2014-12-04
US9043569B2 US9043569B2 (en) 2015-05-26

Family

ID=51986514

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/906,691 Expired - Fee Related US9043569B2 (en) 2013-05-31 2013-05-31 Memory data management

Country Status (1)

Country Link
US (1) US9043569B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11222004B2 (en) * 2016-11-03 2022-01-11 International Business Machines Corporation Management of a database with relocation of data units thereof
US10606696B2 (en) 2017-12-04 2020-03-31 International Business Machines Corporation Internally-generated data storage in spare memory locations

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7272734B2 (en) 2004-09-02 2007-09-18 International Business Machines Corporation Memory management to enable memory deep power down mode in general computing systems
US8359187B2 (en) 2005-06-24 2013-01-22 Google Inc. Simulating a different number of memory circuit devices
US20070050549A1 (en) 2005-08-31 2007-03-01 Verdun Gary J Method and system for managing cacheability of data blocks to improve processor power management
US8255628B2 (en) 2006-07-13 2012-08-28 International Business Machines Corporation Structure for multi-level memory architecture with data prioritization
US7496711B2 (en) 2006-07-13 2009-02-24 International Business Machines Corporation Multi-level memory architecture with data prioritization
US7707379B2 (en) 2006-07-13 2010-04-27 International Business Machines Corporation Dynamic latency map for memory optimization
US8108609B2 (en) 2007-12-04 2012-01-31 International Business Machines Corporation Structure for implementing dynamic refresh protocols for DRAM based cache
US20090307409A1 (en) 2008-06-06 2009-12-10 Apple Inc. Device memory management
US9727473B2 (en) 2008-09-30 2017-08-08 Intel Corporation Methods to communicate a timestamp to a storage system
US8463984B2 (en) 2009-12-31 2013-06-11 Seagate Technology Llc Dynamic data flow management in a multiple cache architecture
US20110252215A1 (en) 2010-04-09 2011-10-13 International Business Machines Corporation Computer memory with dynamic cell density
US8799553B2 (en) 2010-04-13 2014-08-05 Apple Inc. Memory controller mapping on-the-fly

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030028711A1 (en) * 2001-07-30 2003-02-06 Woo Steven C. Monitoring in-use memory areas for power conservation
US7761678B1 (en) * 2004-09-29 2010-07-20 Verisign, Inc. Method and apparatus for an improved file repository
US20100017632A1 (en) * 2006-07-21 2010-01-21 International Business Machines Corporation Managing Power-Consumption
US20090100214A1 (en) * 2007-10-12 2009-04-16 Bei-Chuan Chen Management Platform For Extending Lifespan Of Memory In Storage Devices
US20110320754A1 (en) * 2010-02-23 2011-12-29 Hitachi, Ltd Management system for storage system and method for managing storage system
US20110289296A1 (en) * 2010-05-18 2011-11-24 Hitachi, Ltd. Storage apparatus and control method thereof
US20130275650A1 (en) * 2010-12-16 2013-10-17 Kabushiki Kaisha Toshiba Semiconductor storage device
US20120272039A1 (en) * 2011-04-22 2012-10-25 Naveen Muralimanohar Retention-value associted memory
US20130179636A1 (en) * 2012-01-05 2013-07-11 Hitachi, Ltd. Management apparatus and management method of computer system
US20130191591A1 (en) * 2012-01-25 2013-07-25 Korea Electronics Technology Institute Method for volume management
US20130238832A1 (en) * 2012-03-07 2013-09-12 Netapp, Inc. Deduplicating hybrid storage aggregate
US20130268741A1 (en) * 2012-04-04 2013-10-10 International Business Machines Corporation Power reduction in server memory system
US20140189196A1 (en) * 2013-01-02 2014-07-03 International Business Machines Corporation Determining weight values for storage devices in a storage tier to use to select one of the storage devices to use as a target storage to which data from a source storage is migrated

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150046913A1 (en) * 2013-07-09 2015-02-12 International Business Machines Corporation Data splitting for multi-instantiated objects
US9311065B2 (en) * 2013-07-09 2016-04-12 International Business Machines Corporation Data splitting for multi-instantiated objects
US20150363325A1 (en) * 2014-06-11 2015-12-17 Vmware, Inc. Identification of low-activity large memory pages
US9330015B2 (en) * 2014-06-11 2016-05-03 Vmware, Inc. Identification of low-activity large memory pages
US9501422B2 (en) 2014-06-11 2016-11-22 Vmware, Inc. Identification of low-activity large memory pages
US20160086654A1 (en) * 2014-09-21 2016-03-24 Advanced Micro Devices, Inc. Thermal aware data placement and compute dispatch in a memory system
US9947386B2 (en) * 2014-09-21 2018-04-17 Advanced Micro Devices, Inc. Thermal aware data placement and compute dispatch in a memory system
US10680926B2 (en) * 2015-04-09 2020-06-09 Riverbed Technology, Inc. Displaying adaptive content in heterogeneous performance monitoring and troubleshooting environments

Also Published As

Publication number Publication date
US9043569B2 (en) 2015-05-26

Similar Documents

Publication Publication Date Title
US9043569B2 (en) Memory data management
Skourtis et al. Flash on rails: Consistent flash performance through redundancy
US11748322B2 (en) Utilizing different data compression algorithms based on characteristics of a storage system
US20160202931A1 (en) Modular architecture for extreme-scale distributed processing applications
Moon et al. Introducing ssds to the hadoop mapreduce framework
US10990291B2 (en) Software assist memory module hardware architecture
US11210282B2 (en) Data placement optimization in a storage system according to usage and directive metadata embedded within the data
US9984102B2 (en) Preserving high value entries in an event log
US20150052328A1 (en) User-controlled paging
JP2020021417A (en) Database management system and method
US9147499B2 (en) Memory operation of paired memory devices
US20220382672A1 (en) Paging in thin-provisioned disaggregated memory
US10089228B2 (en) I/O blender countermeasures
US20220237112A1 (en) Tiered persistent memory allocation
US9305036B2 (en) Data set management using transient data structures
US8964495B2 (en) Memory operation upon failure of one of two paired memory devices
WO2017079373A1 (en) Redundant disk array using heterogeneous disks
US9875037B2 (en) Implementing multiple raid level configurations in a data storage device
US20180165291A1 (en) Disk storage allocation
US10120616B1 (en) Storage management system and method
US10740130B1 (en) Administrative system for rendering a user interface within a virtual machine to allow a user to administer the virtual machine and group of underlying hardware of a hypervisor
US20140181385A1 (en) Flexible utilization of block storage in a computing system
US9210032B2 (en) Node failure management
KR20130101779A (en) Hybrid virtual disk service system

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DELL, TIMOTHY J.;DUSANAPUDI, MANOJ;JAYARAMAN, PRASANNA;AND OTHERS;SIGNING DATES FROM 20130419 TO 20130422;REEL/FRAME:030522/0843

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20190526