US20120191900A1 - Memory management device - Google Patents


Info

Publication number
US20120191900A1
Authority
US
United States
Prior art keywords
data
memory
region
information
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/351,582
Other languages
English (en)
Inventor
Atsushi Kunimatsu
Masaki Miyagawa
Hiroshi Nozue
Kazuhiro Kawagome
Hiroto Nakai
Hiroyuki Sakamoto
Tsutomu Owa
Tsutomu Unesaki
Reina Nishino
Kenichi Maeda
Mari Takada
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2009169371A external-priority patent/JP2011022933A/ja
Priority claimed from JP2010048335A external-priority patent/JP2011186559A/ja
Priority claimed from JP2010048332A external-priority patent/JP5322978B2/ja
Priority claimed from JP2010048337A external-priority patent/JP2011186561A/ja
Priority claimed from JP2010048331A external-priority patent/JP2011186555A/ja
Priority claimed from JP2010048338A external-priority patent/JP2011186562A/ja
Priority claimed from JP2010048339A external-priority patent/JP2011186563A/ja
Priority claimed from JP2010048329A external-priority patent/JP2011186554A/ja
Priority claimed from JP2010048328A external-priority patent/JP2011186553A/ja
Priority claimed from JP2010048333A external-priority patent/JP2011186557A/ja
Priority claimed from JP2010048334A external-priority patent/JP2011186558A/ja
Application filed by Individual
Assigned to KABUSHIKI KAISHA TOSHIBA reassignment KABUSHIKI KAISHA TOSHIBA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: OWA, TSUTOMU, UNESAKI, TSUTOMU, KUNIMATSU, ATSUSHI, MAEDA, KENICHI, NAKAI, HIROTO, SAKAMOTO, HIROYUKI, TAKADA, MARI, KAWAGOME, KAZUHIRO, NISHINO, REINA, NOZUE, HIROSHI, MIYAGAWA, MASAKI
Publication of US20120191900A1
Priority to US14/938,589 (US10776007B2)
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023 Free address space management
    • G06F12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246 Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/061 Improving I/O performance
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629 Configuration or reconfiguration of storage systems
    • G06F3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/0638 Organizing or formatting or addressing of data
    • G06F3/0653 Monitoring storage devices or systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10 Providing a specific technical effect
    • G06F2212/1016 Performance improvement
    • G06F2212/1032 Reliability improvement, data loss prevention, degraded operation etc
    • G06F2212/1036 Life time enhancement
    • G06F2212/20 Employing a main memory using a specific memory technology
    • G06F2212/205 Hybrid memory, e.g. using both volatile and non-volatile memory
    • G06F2212/72 Details relating to flash memory management
    • G06F2212/7201 Logical to physical mapping or translation of blocks or pages
    • G06F2212/7202 Allocation control and policies
    • G06F2212/7208 Multiple device management, e.g. distributing data over multiple flash devices

Definitions

  • the present invention relates to a memory management device that manages access to a memory.
  • A volatile semiconductor memory, for example a DRAM (Dynamic Random Access Memory), is used as a main memory device of a processor.
  • a nonvolatile semiconductor memory is used as a secondary storage device in combination with the volatile semiconductor memory.
  • Patent Literature 1 (Jpn. Pat. Appln. KOKAI Publication No. 2008-242944) proposes an integrated memory management device.
  • a NAND flash memory is used as a main memory for an MPU.
  • a cache controller of the integrated memory management device implements, in addition to memory management of the primary cache memory and the secondary cache memory, memory management of the main memory.
  • Patent Literature 2 (Jpn. Pat. Appln. KOKAI Publication No. 7-146820) discloses a technology that adopts a flash memory as the main memory device of an information processing device.
  • a flash memory is connected to a memory bus of a system via a cache memory, which is a volatile memory.
  • the cache memory is provided with an address array that records information such as addresses and an access history of data stored in the cache memory.
  • A controller references an access destination address to supply data in the cache memory or the flash memory to the memory bus, or to store data supplied from the memory bus.
  • Patent Literature 3 (Jpn. Pat. Appln. KOKAI Publication No. 2001-266580) discloses an invention allowing different kinds of semiconductor memory devices to connect to a common bus.
  • a semiconductor memory device includes a random access memory chip and a package including the random access memory chip.
  • the package has a plurality of pins to electrically connect the random access memory chip to an external device.
  • the plurality of pins provides a memory function commonly to the random access memory chip and a nonvolatile semiconductor memory that can electrically be erased and programmed.
  • Each of the plurality of pins is arranged in the position of a corresponding pin of the nonvolatile semiconductor memory.
  • the present invention provides a memory management device capable of efficiently using a nonvolatile semiconductor memory.
  • a memory management device controls writing into and reading from a main memory including a nonvolatile semiconductor memory and a volatile semiconductor memory in response to a writing request and a reading request from a processor.
  • the memory management device includes a coloring information storage unit that stores coloring information generated based on a data characteristic of write target data to be written into at least one of the nonvolatile semiconductor memory and the volatile semiconductor memory, and a writing management unit that references the coloring information to determine a region into which the write target data is written from the nonvolatile semiconductor memory and the volatile semiconductor memory.
  • a memory management device capable of efficiently using a nonvolatile semiconductor memory can be provided.
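A rough, hypothetical sketch of the two units named in this summary may help: a coloring-information store consulted by the writing management unit to choose a destination region. All names, types, and the threshold policy below are illustrative assumptions, not the patent's implementation.

    #include <stdint.h>
    #include <stddef.h>

    enum dest { DEST_VOLATILE, DEST_NONVOLATILE };

    struct coloring_info {
        uint64_t logical_addr;
        int write_freq_hint;   /* estimated from the data characteristic */
    };

    /* coloring information storage unit: a tiny static table for illustration */
    static const struct coloring_info coloring_store[] = {
        { 0x1000, 5 },         /* hot data: rewritten often   */
        { 0x2000, 1 },         /* cold data: rarely rewritten */
    };

    static const struct coloring_info *coloring_lookup(uint64_t logical_addr)
    {
        for (size_t i = 0; i < sizeof coloring_store / sizeof coloring_store[0]; i++)
            if (coloring_store[i].logical_addr == logical_addr)
                return &coloring_store[i];
        return NULL;
    }

    /* writing management unit: reference the coloring information to
     * determine the region into which the write target data is written */
    static enum dest choose_write_region(uint64_t logical_addr)
    {
        const struct coloring_info *c = coloring_lookup(logical_addr);
        /* assumed policy: frequently rewritten data goes to the volatile
         * memory to spare the nonvolatile memory's limited life */
        return (c && c->write_freq_hint >= 4) ? DEST_VOLATILE : DEST_NONVOLATILE;
    }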
  • FIG. 1 is a block diagram showing an example of a structure of a memory management device and an information processing device according to a first embodiment of the present invention.
  • FIG. 2 is a block diagram showing an example of a structure of the memory management device and the information processing device according to the first embodiment.
  • FIG. 3 is a diagram showing an example of a memory map of a mixed main memory according to the first embodiment.
  • FIG. 4 is a diagram showing an example of address conversion information according to the first embodiment.
  • FIG. 5 is a diagram showing an example of a coloring table according to the first embodiment.
  • FIG. 6 is a diagram illustrating an example of static color information according to the first embodiment.
  • FIG. 7 is a flow chart showing an example of data arrangement processing according to the first embodiment.
  • FIG. 8 is a diagram showing an example of a configuration of the coloring table according to the first embodiment.
  • FIG. 9 is a diagram showing a first example of a setting of static color information to various kinds of data.
  • FIG. 10 is a diagram showing a second example of settings of static color information to various kinds of data.
  • FIG. 11 is a flow chart showing an example of generation processing of the coloring table according to the first embodiment.
  • FIG. 12 is a flow chart showing an example of generation processing of an entry of the coloring table according to the first embodiment.
  • FIG. 13 is a diagram showing a first example of an alignment of entries of the coloring table according to the first embodiment.
  • FIG. 14 is a diagram showing a second example of the alignment of entries of the coloring table according to the first embodiment.
  • FIG. 15 is a diagram showing an example of a method of calculating a dynamic writing frequency DW_color and a dynamic reading frequency DR_color based on dynamic color information and static color information.
  • FIG. 16 is a flow chart showing an example of reading processing of data according to the first embodiment.
  • FIG. 17 is a flow chart showing an example of decision processing of reading method of data according to the first embodiment.
  • FIG. 18 is a flow chart showing an example of writing processing of data according to the first embodiment.
  • FIG. 19 is a flow chart showing an example of decision processing of writing destination region of data according to the first embodiment.
  • FIG. 20 is a diagram illustrating decision processing of a block into which data is to be written according to the first embodiment.
  • FIG. 21 is a graph showing an example of a change of an erasure count in an arbitrary block region of the nonvolatile semiconductor memory.
  • FIG. 22 shows graphs showing an example of a change when a threshold for a difference of an erasure count is set small for wear leveling.
  • FIG. 23 is a graph showing an example of grouping of a block region in accordance with the erasure count.
  • FIG. 24 is a diagram showing determination criteria for grouping the block region in accordance with the erasure count.
  • FIG. 25 is a diagram showing an example of a search of the block region for wear leveling.
  • FIG. 26 is a block diagram showing an example of a memory management device further including a cache memory in the memory management device.
  • FIG. 27 is a block diagram showing implementation examples of the memory management device, the mixed main memory, and a processor.
  • FIG. 28 is a block diagram showing an example of another structural aspect of the memory management device and the information processing device according to the first embodiment of the present invention.
  • FIG. 29 is a perspective view showing an example of the plurality of memory management devices managing the plurality of nonvolatile semiconductor memories.
  • FIG. 30 shows a physical address space of a volatile semiconductor memory according to a second embodiment.
  • FIG. 31 shows an example of a relationship between the coloring information and areas of the volatile semiconductor memory.
  • FIG. 32 shows another example of the relationship between the coloring information and the areas of the volatile semiconductor memory.
  • FIG. 33 shows an example of a data structure for managing a free space and a used space of the volatile semiconductor memory according to the second embodiment.
  • FIG. 34 shows an example of write processing to the volatile semiconductor memory according to the second embodiment.
  • FIG. 35 shows an example of an erasure processing to the volatile semiconductor memory according to the second embodiment.
  • FIG. 36 is a diagram showing a truth value of a valid/invalid flag of the nonvolatile semiconductor memory in the address conversion information according to the third embodiment of the present invention.
  • FIG. 37 is a diagram showing a state transition of the valid/invalid flag of the nonvolatile semiconductor memory.
  • FIG. 38 is a flow diagram showing processing when a release of the mixed main memory is requested, according to the third embodiment.
  • FIG. 39 is a diagram illustrating a formation of explicit free space in the volatile semiconductor memory when the release of a memory in FIG. 38 is requested.
  • FIG. 40 is a flow diagram showing processing when acquisition of the mixed main memory is requested, according to the third embodiment.
  • FIG. 41 is a flow chart diagram showing processing when memory data reading is requested in FIG. 40 .
  • FIG. 42 is a flow chart showing processing when memory data writing is requested in FIG. 40 .
  • FIG. 43 is a block diagram showing an example of a principal portion of a functional configuration of a memory management device according to a fourth embodiment of the present invention.
  • FIG. 44 is a diagram showing an example of a data structure of a block size when write target data is not classified based on the coloring information.
  • FIG. 45 is a diagram showing an example of a data structure of a block size when write target data is classified based on the coloring information.
  • FIG. 46 is a diagram showing an example of a relationship between the address conversion information and the physical address space (NAND logical address) of the nonvolatile semiconductor memory according to the fourth embodiment.
  • FIG. 47 is a diagram showing an example of a logical/physical conversion table (NAND logical/physical conversion table) of the nonvolatile semiconductor memory.
  • FIG. 48 is a data structure diagram showing an example of a reservation list.
  • FIG. 49 is a flow chart showing an example of processing of a group value calculation unit and a reservation list management unit according to the fourth embodiment.
  • FIG. 50 is a diagram showing an example of a state transition of the address conversion information according to the fourth embodiment.
  • FIG. 51 is a diagram showing an example of a dirty bit field according to a fifth embodiment.
  • FIG. 52 is a flow chart showing shut down processing according to the fifth embodiment.
  • FIG. 53 is a diagram showing the coloring table applied in the fifth embodiment.
  • FIG. 54 is a flow chart showing setting processing of pre-reading hint information according to the fifth embodiment.
  • FIG. 55 is a flow chart showing an example of processing of an operating system at activation according to the fifth embodiment.
  • FIG. 56 is a block diagram showing an example of a relationship between a virtual address region in a virtual address space and attribute information according to a sixth embodiment.
  • FIG. 57 is a flow chart showing an example of setting processing of second attribute information of virtual address region data by the operating system.
  • FIG. 58 is a diagram showing an example of a setting of static color information based on the virtual address region data.
  • FIG. 59 is a diagram showing an example of a dependence relationship between commands and libraries.
  • FIG. 60 is a diagram showing an example of scores of the commands and scores of the libraries.
  • FIG. 61 is a diagram showing another calculation example of the scores of the libraries based on the scores of commands.
  • FIG. 62 is a diagram showing an example of a setting of static color information using the scores of the libraries.
  • FIG. 63 is a diagram showing an example of variables or functions brought together by a compiler.
  • FIG. 64 is a diagram showing an example of a setting of the static color information using the compiler.
  • FIG. 65 is a diagram showing an example of a setting of the static color information based on a usage frequency of a dynamically generated memory region.
  • FIG. 66 is a block diagram showing an example of configurations of a memory management device, information processing device, and memory device according to a seventh embodiment of the present invention.
  • FIG. 67 is a graph showing an example of a change of an erasure count of a memory unit.
  • FIG. 68 is a graph showing an example of a usage state of the memory device based on the erasure count of the memory device.
  • FIG. 69 is a graph showing an example of the usage state of the memory device based on a reading occurrence count of the memory device.
  • FIG. 70 is a flow chart showing an example of processing notifying the memory device of the usage state based on the erasure count of the memory device.
  • FIG. 71 is a flow chart showing an example of notifying the memory device of the usage state based on the reading occurrence count of the memory device H 32 a.
  • FIG. 72 is a diagram showing an example of data included in management information.
  • FIG. 73 is a flow chart showing an example of processing after the memory device is electrically connected to the memory management device until access to the memory device is started.
  • FIG. 74 is a flow chart showing processing after the memory management device receives a removal notification from the memory device until the memory device becomes removable.
  • FIG. 75 is a diagram showing an example of removing state of the memory device.
  • FIG. 76 is a block diagram showing an example of a reuse of the memory device.
  • FIG. 77 is a block diagram showing an example of a change of an access count when control is executed so that the access count for one memory device becomes larger than the access count for another memory device, based on the coloring information.
  • FIG. 78 is a diagram showing an example of a configuration of a memory management device according to an eighth embodiment of the present invention.
  • FIG. 79 is a schematic diagram showing a first example of dynamic switching of nonvolatile semiconductor memories according to the eighth embodiment.
  • FIG. 80 is a schematic diagram showing a second example of dynamic switching of nonvolatile semiconductor memories according to the eighth embodiment.
  • FIG. 81 is a state transition diagram showing a first example of switching control of a memory region by a switching control unit according to the eighth embodiment.
  • FIG. 82 is a state transition diagram showing a second example of switching control of a memory region by a switching control unit according to the eighth embodiment.
  • FIG. 83 is a block diagram showing an example of a relationship between a memory management device according to a ninth embodiment of the present invention and an address space.
  • FIG. 84 is a flow chart showing an example of a writing operation by a processor 3 b and the memory management device according to the ninth embodiment.
  • FIG. 85 is a diagram showing an example of a configuration of an information processing device and a network system according to a tenth embodiment of the present invention.
  • FIG. 86 is a flow chart showing an example of processing of a profile information management unit according to the tenth embodiment.
  • FIG. 87 is a flow chart showing an example of upload processing of profile information by a user terminal according to the tenth embodiment.
  • FIG. 88 is a flow chart showing an example of download processing of the profile information by the user terminal according to the tenth embodiment.
  • FIG. 89 is a block diagram showing an example of a network system according to an eleventh embodiment of the present invention.
  • FIG. 90 is a block diagram showing an example of a configuration of a memory management device according to the eleventh embodiment.
  • FIG. 91 is a block diagram showing a first relationship between a processor logical address and a network logical address according to the eleventh embodiment.
  • FIG. 92 is a block diagram showing a second relationship between a processor logical address and a network logical address according to the eleventh embodiment.
  • FIG. 93 is a block diagram showing a third relationship between a processor logical address and a network logical address according to the eleventh embodiment.
  • FIG. 94 is a block diagram showing a fourth relationship between a processor logical address and a network logical address according to the eleventh embodiment.
  • FIG. 95 is a block diagram showing a fifth relationship between a processor logical address and a network logical address according to the eleventh embodiment.
  • FIG. 96 is a block diagram showing an example of a virtual address space of the network system according to the eleventh embodiment.
  • FIG. 97 is a block diagram showing a first example of a configuration of the processor logical address and the network logical address according to the eleventh embodiment.
  • FIG. 98 is a block diagram showing a second example of a configuration of the processor logical address and the network logical address according to the eleventh embodiment.
  • FIG. 99 is a block diagram showing a third example of a configuration of the processor logical address and the network logical address according to the eleventh embodiment.
  • FIG. 100 is a diagram showing an example of calculation to estimate the number of bits of an address needed to access data stored in a large number of devices connected to a network.
  • FIG. 1 is a block diagram showing an example of the memory management device and the information processing device according to the present embodiment.
  • the information processing device 100 includes the memory management device 1 , a mixed main memory 2 , and processors 3 a , 3 b , 3 c.
  • The processor 3 a , 3 b , or 3 c is, for example, an MPU (Micro Processor Unit) or a GPU (Graphical Processor Unit).
  • the processors 3 a , 3 b , 3 c include primary cache memories 4 a , 4 b , 4 c and secondary cache memories 5 a , 5 b , 5 c respectively.
  • the processors 3 a , 3 b , 3 c execute processes 6 a , 6 b , 6 c to process various kinds of data respectively. In the execution of the processes 6 a , 6 b , 6 c , the processors 3 a , 3 b , 3 c specify data by using a virtual address.
  • To write data (write target data) into the mixed main memory 2 , the processors 3 a , 3 b , 3 c generate a writing request. To read data (read target data) from the mixed main memory 2 , the processors 3 a , 3 b , 3 c generate a reading request.
  • Each of the processors 3 a , 3 b , 3 c includes a page table (not shown) for converting a virtual address into a physical address (logical address for the mixed main memory 2 ) of the MPU or GPU.
  • the processors 3 a , 3 b , 3 c convert a virtual address into a logical address based on the page table to specify write target data by the logical address.
  • the processors 3 a , 3 b , 3 c convert a virtual address into a logical address based on the page table to specify read target data by the logical address.
  • the memory management device 1 manages access (writing, reading) to the mixed main memory 2 by the processors 3 a , 3 b , 3 c .
  • the memory management device 1 includes a processing unit 15 , a working memory 16 , and an information storage unit 17 .
  • the memory management device 1 stores memory usage information 11 , memory specific information 12 , address conversion information 13 , and a coloring table 14 described later in the information storage unit 17 .
  • the coloring table 14 stored in the information storage unit 17 of the memory management device 1 may be a portion of the coloring table 14 stored in nonvolatile semiconductor memories 9 , 10 .
  • For example, frequently used data of the coloring table 14 stored in the nonvolatile semiconductor memories 9 , 10 may be stored in the information storage unit 17 of the memory management device 1 .
  • the memory management device 1 references the coloring table 14 and the like to manage access to the mixed main memory 2 by the processors 3 a , 3 b , 3 c . Details thereof will be described later.
  • the mixed main memory 2 includes a first memory, a second memory, and a third memory.
  • the first memory has a greater accessible upper limit count than the second memory.
  • the second memory has a greater accessible upper limit count than the third memory. Note that the accessible upper limit count is a statistically expected value and does not mean that the relationship is always guaranteed.
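A minimal sketch of this three-tier ordering, with invented field names and purely illustrative limit values (the patent gives no code or concrete counts):

    #include <stdint.h>

    /* Three-tier model of the mixed main memory 2: the accessible upper
     * limit count decreases from the first memory to the third. The
     * counts are statistically expected values, per the text above. */
    struct tier_limits {
        uint64_t writable_upper_limit;
        uint64_t readable_upper_limit;
        uint64_t erasable_upper_limit;
    };

    /* illustrative ordering only; real values are device-specific */
    static const struct tier_limits tiers[3] = {
        { UINT64_MAX, UINT64_MAX, UINT64_MAX },  /* 1st: volatile memory 8  */
        { 100000,     1000000,    100000     },  /* 2nd: SLC NAND memory 9  */
        { 10000,      100000,     10000      },  /* 3rd: MLC NAND memory 10 */
    };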
  • the first memory may have a faster data transfer speed (access speed) than the second memory.
  • the first memory is assumed to be a volatile semiconductor memory 8 .
  • As the volatile semiconductor memory 8 , for example, a memory commonly used in a computer as the main memory, such as a DRAM (Dynamic Random Access Memory), FPM-DRAM, EDO-DRAM, or SDRAM, is used.
  • A nonvolatile semiconductor memory such as an MRAM (Magnetoresistive Random Access Memory) or FeRAM (Ferroelectric Random Access Memory) may also be adopted if it can be accessed as fast as a DRAM and has essentially no accessible upper limit count.
  • the second memory is assumed to be the nonvolatile semiconductor memory 9 .
  • As the nonvolatile semiconductor memory 9 , for example, an SLC (Single Level Cell)-type NAND flash memory is used.
  • SLC stands for Single Level Cell; MLC stands for Multi Level Cell.
  • Compared with the MLC, the SLC can be read from and written to faster and has higher reliability.
  • However, the SLC has a higher bit cost than the MLC and is less suitable for increased capacities.
  • the third memory is assumed to be the nonvolatile semiconductor memory 10 .
  • As the nonvolatile semiconductor memory 10 , for example, an MLC-type NAND flash memory is used.
  • Compared with the SLC, the MLC is slower to read from and write to and has lower reliability.
  • However, the MLC has a lower bit cost than the SLC and is suitable for increased capacities.
  • In the present embodiment, the nonvolatile semiconductor memory 9 is an SLC-type NAND flash memory and the nonvolatile semiconductor memory 10 is an MLC-type NAND flash memory, but, for example, the nonvolatile semiconductor memory 9 may be a 2-bit/Cell MLC-type NAND flash memory and the nonvolatile semiconductor memory 10 may be a 3-bit/Cell MLC-type NAND flash memory.
  • Reliability means the degree of resistance to an occurrence of data corruption (durability) when data is read from a storage device. Durability of the SLC is higher than durability of the MLC. High durability means a greater accessible upper limit count and lower durability means a smaller accessible upper limit count.
  • the SLC can store 1-bit information in one memory cell.
  • the MLC can store 2-bit information or more in one memory cell. That is, the mixed main memory 2 according to the present embodiment has decreasing order of durability of the volatile semiconductor memory 8 , the nonvolatile semiconductor memory 9 , and the nonvolatile semiconductor memory 10 .
  • the nonvolatile semiconductor memories 9 , 10 such as NAND flash memories are cheap and can be increased in capacity.
  • As the nonvolatile semiconductor memories 9 , 10 , instead of NAND flash memories, other kinds of memory such as NOR flash memories, PRAM (Phase change memory), or ReRAM (Resistive Random access memory) can be used, for example.
  • Alternatively, an MLC may be adopted as the third memory, and an MLC operated in a pseudo-SLC mode, which writes data using only the lower pages of the MLC, may be adopted as the second memory.
  • In this case, the second memory and the third memory can be configured from a single common chip type, which is advantageous in terms of manufacturing costs.
  • In this manner, an information processing device including the mixed main memory 2 , formed by mixing the volatile semiconductor memory 8 , the SLC-type nonvolatile semiconductor memory 9 , and the MLC-type nonvolatile semiconductor memory 10 as a main memory, is realized.
  • the mixed main memory 2 is a heterogeneous main memory in which arrangement of data is managed by the memory management device 1 .
  • the memory usage information 11 , the memory specific information 12 , the address conversion information 13 , and the coloring table 14 are stored in predetermined regions of the nonvolatile semiconductor memories 9 , 10 .
  • the memory usage information 11 includes the number of times of writing occurrences and the number of times of reading occurrences of each page region of the nonvolatile semiconductor memories 9 , 10 , the number of times of erasure of each block region, and the size of the region being used.
  • the memory specific information 12 includes the memory size of the volatile semiconductor memory 8 , the memory sizes of the nonvolatile semiconductor memories 9 , 10 , the page sizes and block sizes of the nonvolatile semiconductor memories 9 , 10 , and the accessible upper limit counts (the writable upper limit count, readable upper limit count, and erasable upper limit count) of each region.
  • the page size is the unit of data size for writing into or reading from the nonvolatile semiconductor memories 9 , 10 .
  • the block size is the unit of data erasure size of the nonvolatile semiconductor memories 9 , 10 . In the nonvolatile semiconductor memories 9 , 10 , the block size is larger than the page size.
  • the address conversion information 13 is information used to convert a logical address provided by the processors 3 a , 3 b , 3 c into a physical address corresponding to the logical address. Details of the address conversion information 13 will be described later.
  • the coloring table 14 is a table to hold coloring information for each piece of data.
  • the coloring information includes static color information and dynamic color information. Details thereof will be described later.
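The four stores just listed might be pictured roughly as follows; the layouts and field names are assumptions for illustration, not the patent's:

    #include <stdint.h>

    struct memory_usage_info {           /* per nonvolatile memory 9, 10  */
        uint32_t *page_write_count;      /* writing occurrences per page  */
        uint32_t *page_read_count;       /* reading occurrences per page  */
        uint32_t *block_erase_count;     /* erasures per block            */
        uint64_t  used_size;             /* size of the region in use     */
    };

    struct memory_specific_info {
        uint64_t dram_size;              /* volatile memory 8             */
        uint64_t slc_size, mlc_size;     /* nonvolatile memories 9, 10    */
        uint32_t page_size;              /* unit of reading and writing   */
        uint32_t block_size;             /* unit of erasure (> page_size) */
        uint64_t write_limit, read_limit, erase_limit;  /* per region     */
    };

    struct address_conversion_entry {    /* detailed with FIG. 4 below    */
        uint64_t logical, phys_volatile, phys_nonvolatile;
        uint8_t  valid;
    };

    struct coloring_entry {              /* detailed with FIGS. 5 and 8   */
        uint32_t static_color, dynamic_color;
    };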
  • FIG. 2 is a block diagram showing an example of the configuration of the memory management device 1 and the information processing device 100 according to the present embodiment.
  • the processor 3 b of the processors 3 a , 3 b , 3 c in FIG. 1 is selected as the processor to be described, but the description that follows also applies to the other processors 3 a , 3 c.
  • An operating system 27 is executed by the processor 3 b and has the right to access the coloring table 14 stored in the information storage unit 17 .
  • the processing unit 15 of the memory management device 1 includes an address management unit 18 , a reading management unit 19 , a writing management unit 20 , a coloring information management unit 21 , a memory usage information management unit 22 , and a relocation unit 23 . Further, the coloring information management unit 21 includes an access frequency calculation unit 24 and a dynamic color information management unit 25 .
  • the processing unit 15 performs various kinds of processing based on information stored in the information storage unit 17 while using the working memory 16 .
  • the working memory 16 is used, for example, as a buffer and is used as a working region for various data conversions and the like.
  • The functional blocks included in the processing unit 15 can be realized by either hardware or software (for example, the operating system 27 , firmware, or the like) or a combination of both. Whether the functional blocks are realized as hardware or software depends on the concrete embodiment or on design limitations imposed on the whole information processing device 100 . A person skilled in the art can realize these functions by various methods for each concrete embodiment, and determining such an embodiment is included in the scope of the present invention. This also applies to the functional blocks used in the description that follows.
  • the address management unit 18 allocates a physical address to a logical address and stores the allocated physical address and the logical address into the address conversion information 13 . Accordingly, the processing unit 15 can acquire a physical address corresponding to a logical address with reference to the address conversion information 13 .
  • the reading management unit 19 manages read processing of read target data to be read from the mixed main memory 2 when the processors 3 a , 3 b , 3 c issue a reading request.
  • the writing management unit 20 manages write processing of write target data into the mixed main memory 2 when the processors 3 a , 3 b , 3 c issue a writing request.
  • the coloring information management unit 21 manages the coloring table 14 .
  • the memory usage information management unit 22 manages the memory usage information 11 of the mixed main memory 2 .
  • the relocation unit 23 relocates data arranged at a physical address corresponding to any logical address based on coloring information included in the coloring table 14 asynchronously to the operations of the processors 3 a , 3 b , 3 c .
  • the relocation unit 23 periodically relocates data included in the nonvolatile semiconductor memory 10 whose reading frequency or writing frequency is high into the nonvolatile semiconductor memory 9 based on, for example, dynamic color information described later.
  • the relocation unit 23 periodically relocates data included in the nonvolatile semiconductor memory 9 whose reading frequency or writing frequency is low into the nonvolatile semiconductor memory 10 based on, for example, the dynamic color information.
  • the relocation unit 23 can relocate data between the volatile semiconductor memory 8 and the nonvolatile semiconductor memories 9 , 10 .
  • Write processing by the writing management unit 20 described later relocates data by performing determination processing of a writing destination memory region and determination processing of a writing destination block region each time an update of data occurs.
  • the relocation unit 23 periodically relocates data.
  • The trigger for starting the operation of the relocation unit 23 may be a period set by the developer or a period that can be set through the user interface.
  • the relocation unit 23 may operate when the information processing device 100 pauses.
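A sketch of such a relocation decision, assuming (this is not stated in the excerpt) that the two dynamic frequencies are simply summed and compared against a threshold:

    /* Sketch of the periodic relocation policy: hot data (by dynamic
     * frequency) moves to the durable SLC memory 9, cold data to the
     * MLC memory 10. The single-threshold test is an assumption. */
    enum nv_region { NV9_SLC, NV10_MLC };

    static enum nv_region relocation_target(double dw_color, double dr_color,
                                            double hot_threshold)
    {
        /* assumed combination of the two dynamic frequencies */
        double hotness = dw_color + dr_color;
        return (hotness >= hot_threshold) ? NV9_SLC : NV10_MLC;
    }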
  • The access frequency calculation unit 24 calculates access frequency information (a dynamic writing frequency DW_color and a dynamic reading frequency DR_color) of data based on coloring information included in the coloring table 14 .
  • the dynamic color information management unit 25 manages dynamic color information included in the coloring table 14 .
  • FIG. 3 is a diagram showing an example of a memory map of the mixed main memory 2 according to the present embodiment.
  • the mixed main memory 2 includes the volatile semiconductor memory 8 (DRAM region), the nonvolatile semiconductor memory 9 (SLC region), and the nonvolatile semiconductor memory 10 (2-bit/Cell region, 3-bit/Cell region, 4-bit/Cell region).
  • the 2-bit/Cell region, 3-bit/Cell region, and 4-bit/Cell region constitute an MLC region.
  • The DRAM region, SLC region, 2-bit/Cell region, 3-bit/Cell region, and 4-bit/Cell region are collectively called memory regions.
  • the volatile semiconductor memory 8 is composed of, for example, a 128-Mbyte DRAM region.
  • the nonvolatile semiconductor memory 9 is composed of, for example, a 2-Gbyte B region, a 128-Mbyte B redundant block region, a 2-Gbyte C region, and a 128-Mbyte C redundant block region.
  • Each memory region of the nonvolatile semiconductor memory 9 is an SLC-type NAND flash memory.
  • the nonvolatile semiconductor memory 10 is composed of, for example, a 2-bit/Cell region composed of a 4-Gbyte A region and a 128-Mbyte A redundant block region, a 3-bit/Cell region composed of a 4-Gbyte D region and a 128-Mbyte D redundant block region, and a 4-bit/Cell region composed of a 4-Gbyte E region and a 128-Mbyte E redundant block region.
  • Each memory region of the nonvolatile semiconductor memory 10 is an MLC-type NAND flash memory. As shown in FIG. 3 , a physical address is allocated to each memory region.
  • The memory specific information 12 includes 1) the memory size of the volatile semiconductor memory 8 (DRAM region) in the memory space of the mixed main memory 2 , 2) the memory sizes of the nonvolatile semiconductor memories 9 , 10 in the memory space of the mixed main memory 2 , 3) the block size and page size of the NAND flash memory constituting the memory space of the mixed main memory 2 , 4) memory space information (containing the erasable upper limit count, readable upper limit count, and writable upper limit count) allocated as an SLC region (binary region) in the nonvolatile semiconductor memory 9 , 5) memory space information (likewise containing the three upper limit counts) allocated to the 2-bit/Cell region, 6) memory space information allocated to the 3-bit/Cell region, and 7) memory space information allocated to the 4-bit/Cell region.
  • FIG. 4 is a diagram showing an example of the address conversion information 13 according to the present embodiment.
  • the logical address, physical address of the volatile semiconductor memory 8 , physical address of the nonvolatile semiconductor memories 9 , 10 , and valid/invalid flag are managed in tabular form.
  • In each entry of the address conversion information 13 , a logical address, at least one of the physical addresses of the volatile semiconductor memory 8 and the nonvolatile semiconductor memories 9 , 10 corresponding to the logical address, and the valid/invalid flag are registered.
  • The valid/invalid flag is information indicating whether each entry is valid: 1 indicates valid and 0 indicates invalid. The initial value of the valid/invalid flag of each entry is 0. An entry whose valid/invalid flag is 0 is an entry to which no logical address is mapped or whose logical address has been erased after being mapped. An entry whose valid/invalid flag is 1 has a logical address mapped to it, and a physical address corresponding to that logical address is present in at least one of the volatile semiconductor memory 8 and the nonvolatile semiconductor memories 9 , 10 .
  • the logical address, the physical address of the volatile semiconductor memory 8 , and the physical address of the nonvolatile semiconductor memories 9 , 10 are managed by one entry of the address conversion information 13 .
  • Alternatively, the logical address and the physical address of the volatile semiconductor memory 8 may be managed by a tag RAM, while the logical address and the physical address of the nonvolatile semiconductor memories 9 , 10 are managed by the address conversion information 13 .
  • In this case, the tag RAM is first referenced, and if no physical address corresponding to the logical address is found in the tag RAM, the address conversion information 13 is referenced.
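A minimal sketch of this lookup, with an assumed entry layout matching the columns of FIG. 4; the linear scan is for clarity only:

    #include <stdint.h>
    #include <stddef.h>

    #define ADDR_NONE UINT64_MAX   /* hypothetical "no mapping" marker */

    struct addr_entry {
        uint64_t logical;
        uint64_t phys_volatile;     /* address in volatile memory 8, or ADDR_NONE      */
        uint64_t phys_nonvolatile;  /* address in nonvolatile memory 9/10, or ADDR_NONE */
        uint8_t  valid;             /* 1 = valid, 0 = invalid (initial value)          */
    };

    /* Linear scan for clarity; a real device would index by logical address. */
    static const struct addr_entry *lookup(const struct addr_entry *tbl,
                                           size_t n, uint64_t logical)
    {
        for (size_t i = 0; i < n; i++)
            if (tbl[i].valid && tbl[i].logical == logical)
                return &tbl[i];
        return NULL;  /* no physical address mapped to this logical address */
    }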
  • FIG. 5 is a diagram showing an example of the coloring table 14 according to the present embodiment.
  • coloring information is provided for each piece of data.
  • the unit of data size of data to which coloring information is provided is, for example, the minimum unit of reading and writing.
  • the minimum unit of reading and writing is the page size of a NAND flash memory.
  • the coloring table 14 associates coloring information for each piece of data and stores the coloring information in units of entry.
  • An index is attached to each entry of the coloring table 14 .
  • the index is a value generated based on a logical address.
  • the coloring information includes static color information and dynamic color information.
  • The static color information is generated based on a property of the data to which the coloring information is attached, and is a kind of hint information used to determine an arrangement (writing) region of the data in the mixed main memory 2 .
  • the dynamic color information is information containing at least one of the number of times and the frequency of reading and writing data. The dynamic color information may be used as hint information.
  • FIG. 6 is a diagram illustrating an example of static color information according to the present embodiment.
  • The static color information includes at least one of “importance”, “reading frequency/writing frequency”, and “data life” of the data.
  • the reading frequency described with reference to FIG. 6 corresponds to a static reading frequency described later and the writing frequency corresponds to a static writing frequency.
  • Importance is a value set by estimating the importance of data based on the type of the data or the like.
  • Reading frequency/writing frequency is a value set by estimating the frequency with which data is read or written based on the type of the data or the like.
  • Data life is a value set by estimating a period (data life) in which data is used without being erased based on the type of the data or the like.
  • “Importance”, “reading frequency/writing frequency”, and “data life” are estimated from, for example, a property of a file held by a file system or a property of a region temporarily used by a program.
  • A property of a file held by a file system is determined based on data attributes added to the file containing the data to which the coloring information is attached.
  • Data attributes added to the file include the header information of the file, the file name, the file position, and file management data (information held in an inode). If, for example, the file is located in the Trash of the file system, it is estimated that the importance of the data contained in the file is low, the reading frequency/writing frequency is low, and the data life is short. Based on this property, a low writing frequency, a low reading frequency, and a short data life are set in the coloring information of the data.
  • A property of a region temporarily used by a program includes a property determined based on the data type at program execution time for a program that handles the data to which the coloring information is attached, and a property determined based on the data type at program file generation time.
  • The data type at program execution time is classified based on, for example, which of the stack region, heap region, and text region the data is mapped to during program execution.
  • For data mapped to the stack region or heap region, it is estimated that the writing frequency is high, the reading frequency is high, the importance is high, and the data life is short.
  • Accordingly, a high writing frequency, a high reading frequency, high importance, and a short data life are set in the static coloring information of such data.
  • For data mapped to the text region, it is estimated that the writing frequency is low, the reading frequency is high, the importance is high, and the data life is long because the data is read-only.
  • Accordingly, a low writing frequency, a high reading frequency, high importance, and a long data life are set in the static coloring information of such data.
  • Data type estimation at program file generation time estimates the importance, reading frequency, and writing frequency of data handled by a program when the program file is generated.
  • the static color information may be directly set by the user through the user interface.
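The estimation rules above can be summarized in a small sketch; the enum values and the concrete numbers are hypothetical stand-ins for "low" and "high":

    enum life { LIFE_SHORT, LIFE_LONG };

    struct static_color {
        int sw_color;        /* static writing frequency estimate */
        int sr_color;        /* static reading frequency estimate */
        int importance;
        enum life sl_color;  /* data life */
    };

    enum data_kind { KIND_TRASH_FILE, KIND_STACK_OR_HEAP, KIND_TEXT_REGION };

    static struct static_color estimate_static_color(enum data_kind k)
    {
        switch (k) {
        case KIND_TRASH_FILE:    /* low everything, short life        */
            return (struct static_color){1, 1, 1, LIFE_SHORT};
        case KIND_STACK_OR_HEAP: /* high R/W and importance, short    */
            return (struct static_color){5, 5, 5, LIFE_SHORT};
        case KIND_TEXT_REGION:   /* read-only: low W, high R, long    */
            return (struct static_color){1, 5, 5, LIFE_LONG};
        }
        return (struct static_color){0, 0, 0, LIFE_SHORT};
    }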
  • FIG. 7 is a flow chart showing an example of data arrangement processing.
  • the mixed main memory 2 includes the volatile semiconductor memory 8 and the nonvolatile semiconductor memories 9 , 10 .
  • the memory region of the volatile semiconductor memory 8 or the nonvolatile semiconductor memories 9 , 10 is determined as an arrangement destination.
  • the writing management unit 20 references coloring information attached to the write target data (step S 1 ).
  • the writing management unit 20 references “data life” of the coloring information to determine the data life of the write target data (step S 2 ).
  • the writing management unit 20 references “importance” of the coloring information of the write target data to determine the importance of the write target data (step S 5 ).
  • the writing management unit 20 selects the nonvolatile semiconductor memory 9 with high durability (reliability) as a memory region in which the write target data is arranged (step S 7 ). Further, the writing management unit 20 determines whether to cache the write target data in the volatile semiconductor memory 8 based on the coloring information of the write target data (cache method based on coloring information) (step S 8 ) and determines the nonvolatile semiconductor memory 9 as the memory region in which the write target data is arranged (step S 12 ).
  • the writing management unit 20 selects the nonvolatile semiconductor memory 10 with low durability as a memory region in which the write target data is arranged (step S 9 ). Further, the writing management unit 20 determines the reading frequency and the writing frequency of the write target data based on the coloring information (dynamic color information, static color information) of the write target data (step S 10 ).
  • the writing management unit 20 selects the nonvolatile semiconductor memory 9 as a memory region in which the write target data is arranged (step S 7 ). Further, the writing management unit 20 determines whether to cache the write target data in the volatile semiconductor memory 8 based on the coloring information of the write target data (cache method based on coloring information) (step S 8 ) and determines the nonvolatile semiconductor memory 9 as the memory region in which the write target data is arranged (step S 12 ).
  • the writing management unit 20 determines whether to cache the write target data in the volatile semiconductor memory 8 based on the coloring information of the write target data (cache method based on coloring information) (step S 8 ) and determines the nonvolatile semiconductor memory 10 as the memory region in which the write target data is arranged (step S 12 ).
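One plausible reading of this FIG. 7 flow as code; the excerpt omits the exact branch conditions, so the predicates below are assumptions:

    #include <stdbool.h>

    enum arrange_region { ARRANGE_NV9_SLC, ARRANGE_NV10_MLC };

    struct hints { bool important; bool high_access_freq; };

    static enum arrange_region arrange(struct hints h)          /* S1      */
    {
        /* whether to also cache the data in the volatile memory 8 is
         * decided separately from the coloring information (S8) */
        if (h.important)                                        /* S2, S5  */
            return ARRANGE_NV9_SLC;                             /* S7, S12 */
        if (h.high_access_freq)                                 /* S10     */
            return ARRANGE_NV9_SLC;                             /* S7, S12 */
        return ARRANGE_NV10_MLC;                                /* S9, S12 */
    }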
  • FIG. 8 is a diagram showing an example of the configuration of the coloring table 14 according to the present embodiment.
  • Regarding the coloring table 14 shown in FIG. 8 , a case in which, in particular, the reading frequency, writing frequency, and data life shown in FIGS. 5 and 6 are used as the coloring information will be described.
  • As coloring information, any one of “importance”, “reading frequency/writing frequency”, and “data life” may be used, any two may be combined, or all may be combined. Further, other coloring information that is not shown in FIG. 6 may be separately defined and used.
  • the coloring table 14 is a table that associates coloring information with each piece of data and holds the coloring information in units of entry.
  • the data size of data associated with the coloring information by the coloring table 14 is, for example, the minimum unit of reading or writing.
  • the minimum data size of reading or writing is the page size of a NAND flash memory. It is assumed below that the data size of data associated with the coloring information by the coloring table 14 is the page size, but the present embodiment is not limited to such an example.
  • An index is attached to each entry of the coloring table 14 .
  • Coloring information held by the coloring table 14 includes static color information and dynamic color information.
  • the index is a value generated based on a logical address.
  • The static color information includes a value SW_color indicating the static writing frequency, a value SR_color indicating the static reading frequency, a data life SL_color, and a time ST_color at which the data was generated.
  • the static writing frequency SW_color is a value set by estimating the frequency with which data is written based on the type of the data or the like.
  • The static reading frequency SR_color is a value set by estimating the frequency with which data is read based on the type of the data or the like. For example, a larger value is set to the static writing frequency SW_color for data estimated to have a higher writing frequency, and a larger value is set to the static reading frequency SR_color for data estimated to have a higher reading frequency.
  • the data life SL_color is a value set by estimating a period (data life) in which data is used without being erased based on the type of the data or the like.
  • The static color information is a value statically predetermined by a program (process) that generates the data.
  • the operating system 27 executed in the information processing device 100 may predict static color information based on a file extension, a file header of data, or the like.
  • the dynamic color information includes a writing count DWC_color of data and a reading count DRC_color of data.
  • the writing count DWC_color of data is the number of times the data is written into the mixed main memory 2 .
  • the reading count DRC_color of data is the number of times the data is read from the mixed main memory 2 .
  • the dynamic color information management unit 25 manages for each piece of data the number of times the data is written into the mixed main memory 2 based on the writing count DWC_color.
  • the dynamic color information management unit 25 manages for each piece of data the number of times the data is read from the mixed main memory 2 based on the reading count DRC_color.
  • the mixed main memory 2 is used as a main memory.
  • the dynamic color information management unit 25 increments the writing count DWC_color of data each time the data is written.
  • the dynamic color information management unit 25 also increments the reading count DRC_color of data each time the data is read.
  • the access frequency calculation unit 24 calculates the dynamic writing frequency DW_color from the writing count DWC_color of data.
  • the access frequency calculation unit 24 calculates the dynamic reading frequency DR_color from the reading count DRC_color of data.
  • the dynamic writing frequency DW_color is a value indicating the frequency with which the data is written into the mixed main memory 2 .
  • the dynamic reading frequency DR_color is a value indicating the frequency with which the data is read from the mixed main memory 2 .
  • the calculation method of the dynamic writing frequency DW_color and the dynamic reading frequency DR_color will be described later.
  • the memory management device 1 determines the write region, reading method and the like by referencing coloring information.
  • FIG. 9 is a diagram showing a first example of a setting of static color information (the static writing frequency SW_color, the static reading frequency SR_color, and the data life SL_color) to various kinds of data.
  • FIG. 10 is a diagram showing a second example of a setting of static color information (the static writing frequency SW_color, the static reading frequency SR_color, and the data life SL_color) to various kinds of data.
  • the reading frequency of the text region of a kernel is normally high and the writing frequency thereof is low.
  • the operating system 27 sets the static reading frequency SR_color of the text region in which the operating system 27 operates to 5 and the static writing frequency SW_color to 1.
  • the operating system 27 predicts that the data life SL_color of the text region of the kernel is long.
  • both the reading frequency and the writing frequency of the data region of the kernel are normally high.
  • the operating system 27 sets the static reading frequency SR_color to 5 and the static writing frequency SW_color to 5 for the data region of the kernel.
  • the data life SL_color is assumed to be SHORT.
  • the reading frequency of the text region of a user program is low when compared with that of the kernel, which is reenterably invoked by all processes. However, while a process is active, the reading frequency is high, as with the kernel. Thus, the static writing frequency SW_color is set to 1 and the static reading frequency SR_color is set to 4 for the text region of the user program.
  • the data life SL_color for the text region of the user program is commonly long because the data life SL_color is a period lasting until the program is uninstalled. Thus, the data life SL_color for the text region of the user program is set to LONG.
  • a region dynamically secured for a program is roughly divided into two regions.
  • One type of region holds data (including the stack region) that is discarded when execution of a program ends.
  • Such data has a short data life SL_color, and its reading frequency and writing frequency are high.
  • Thus, the static reading frequency SR_color is set to 4 and the static writing frequency SW_color is set to 4 for data discarded when execution of a program ends.
  • Another region dynamically secured for the program is a region generated by the program for a new file. Data generated by the program has the long data life SL_color and the read and write frequencies thereof depend on the type of a generated file.
  • the data life SL_color of a file is set to be long for data handled as a file to be referenced by a process.
  • a case when a system file whose file extension is, for example, SYS, dll, DRV and the like is read will be described.
  • Data having such an extension is a file read when the operating system 27 performs various kinds of processing.
  • When the operating system 27 is installed on the mixed main memory 2 , data having such an extension is rarely updated after being written once.
  • a file having such an extension is predicted to have a relatively high access frequency among files, but its access frequency is low when compared with the text region of a program (kernel). Therefore, the operating system 27 sets the static writing frequency SW_color of data having such an extension to 1 and the static reading frequency SR_color to 3.
  • This setting shows that the predicted writing frequency of the data is extremely low and the predicted reading frequency is high. That is, data having such an extension is predicted to be rewritten only a few times, for example when the operating system 27 is updated or another program is installed, and thus is handled almost like read-only data.
  • the number of users who use a program to edit an audio file is small.
  • the frequency of writing music data compressed by, for example, MP3 is considered to be low.
  • the frequency of reading music data is considered to be higher than the frequency of writing music data.
  • the static writing frequency SW_color of music data compressed by MP3 or the like is set to 1 and the static reading frequency SR_color thereof to 2.
  • the number of users who use a video editing program is small.
  • the frequency of writing video data compressed by, for example, MPEG is considered to be low.
  • the frequency of reading video data is considered to be higher than the frequency of writing video data.
  • the static writing frequency SW_color of video data compressed by MPEG or the like is set to 1 and the static reading frequency SR_color thereof to 2.
  • the static writing frequency SW_color of a text file is set to 3 and the static reading frequency SR_color thereof to 3.
  • the reading frequency and writing frequency of a browser cache file are considered to be equal to or higher than those of a media file of music data or video data. Therefore, the static writing frequency SW_color of the browser cache file is set to 1 and the static reading frequency SR_color thereof to 3.
  • the static writing frequency SW_color of a file arranged in a directory whose access frequency is low, such as the Trash, is set to 1 and the static reading frequency SR_color thereof to 1.
  • Photo data whose extension is typically JPEG and movie data whose extension is typically MOV are rarely rewritten after being written once.
  • the predicted frequency with which such photo data or movie data is accessed from a program is low.
  • the operating system 27 sets a small value to the static writing frequency SW_color and the static reading frequency SR_color of photo data and movie data.
  • FIG. 11 is a flow chart showing an example of generation processing of the coloring table 14 .
  • the coloring table 14 is generated when the system is initially activated.
  • the coloring table 14 is arranged in any region of the nonvolatile semiconductor memories 9 , 10 .
  • the address at which the coloring table 14 is arranged may be determined by the implementation of the memory management device 1 .
  • In step T 1 , the information processing device 100 is turned on and activated.
  • In step T 2 , the coloring information management unit 21 converts a base address of the coloring table 14 to a logical address and generates an index for each piece of data.
  • In step T 3 , the coloring information management unit 21 sets the base address of the coloring table 14 to the information storage unit 17 .
  • the information storage unit 17 is composed of, for example, registers.
  • the base address of the coloring table 14 is set to, for example, a coloring table register.
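  • A minimal sketch of the lookup implied by these steps is shown below, assuming one entry per page and a hypothetical 4-KB page size; the coloring_entry struct is the one sketched earlier, and the base pointer stands in for the value held in the coloring table register.

```c
#define PAGE_SIZE 4096u   /* assumed minimum read/write unit (NAND page size) */

/* Locate the coloring table entry for a logical address: the index is
 * derived from the logical address (one entry per page), and the entry
 * sits at base + index, where base comes from the coloring table
 * register of the information storage unit 17. */
static struct coloring_entry *
coloring_entry_for(struct coloring_entry *base, uint64_t logical_addr)
{
    uint64_t index = logical_addr / PAGE_SIZE;   /* index attached to the entry */
    return &base[index];
}
```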
  • FIG. 12 is a flow chart showing an example of generation processing of an entry of the coloring table 14 .
  • the processes 6 a , 6 b , 6 c executed by the processors 3 a , 3 b , 3 c issue a request to secure a region in the logical address space to arrange new data (step U 1 ).
  • Unused regions in the logical address space are managed by the operating system 27 and the logical address is determined by the operating system 27 (step U 2 ).
  • When new data is generated by the processes 6 a , 6 b , 6 c , the operating system 27 generates static color information based on the type of the newly generated data or the like (step U 3 ).
  • the static color information is generated for each page size of the generated data. If, for example, the data size of the generated data is larger than the page size, the data is divided into the page size and static color information is generated for each divided page size. It is assumed below that the data size of the write target data is equal to the page size, but the present embodiment is not limited to such an example.
  • the operating system 27 references the coloring table 14 based on the base address set to the information storage unit 17 (step U 4 ).
  • the operating system 27 registers the generated static color information with an entry of the coloring table 14 to which the index corresponding to the secured logical address is attached (step U 5 ).
  • the processes 6 a , 6 b , 6 c executed by the processors 3 a , 3 b , 3 c issue a reading request or writing request to the secured logical address space.
  • the address management unit 18 determines the physical address for the logical address to which data is written and this processing will be described later.
  • FIG. 13 is a diagram showing a first example of an alignment of entries of the coloring table 14 .
  • FIG. 14 is a diagram showing a second example of an alignment of entries of the coloring table 14 .
  • Entries of the coloring table 14 correspond to the minimum read size of data (for example, the page size of a NAND flash memory), but the processes 6 a , 6 b , 6 c are not forced to align data to the minimum read size when mapping data to the logical address space. Thus, a plurality of pieces of data may correspond to one entry of the coloring table 14 .
  • In this case, the operating system 27 causes the data whose reading frequency and writing frequency are estimated to be the highest, among the plurality of pieces of data corresponding to one entry, to represent the entry.
  • Alternatively, the operating system 27 sets weighted average values of the static writing frequency SW_color and the static reading frequency SR_color of the pieces of data, with the size of the data occupying the entry used as the weight.
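  • As a sketch of this weighted-average policy, the function below weights each piece's frequency value by the size it occupies within the entry; the function and parameter names are illustrative only.

```c
#include <stdint.h>

/* Weighted average of per-piece frequency values sharing one entry,
 * weighted by the size each piece occupies in the entry. */
static uint8_t weighted_color(const uint8_t *color, const uint32_t *size, int n)
{
    uint64_t sum = 0, total = 0;
    for (int i = 0; i < n; i++) {
        sum   += (uint64_t)color[i] * size[i];
        total += size[i];
    }
    return total ? (uint8_t)(sum / total) : 0;   /* 0 if the entry is empty */
}
```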
  • the static writing frequency SW_color and the static reading frequency SR_color shown in the coloring table 14 are embedded in source code such as the operating system 27 by a program developer or predicted by the operating system 27 .
  • a file or photo data may be used for another purpose than intended by the program developer.
  • data such as photo data is accessed almost exclusively for reading and content of photo data is rarely rewritten.
  • If the static writing frequency SW_color and the static reading frequency SR_color of the coloring table 14 can be rewritten by the user, a specific file can be moved to a region that allows a larger number of rewrites at a higher speed.
  • For this purpose, the file system of the operating system 27 may be extended so that coloring information of each piece of data can be rewritten by software of the operating system 27 .
  • FIG. 15 is a diagram showing an example of the method of calculating the dynamic writing frequency DW_color and the dynamic reading frequency DR_color based on dynamic color information and static color information.
  • the horizontal axis represents the time and the vertical axis represents the number of times of access (the writing count DWC_color or the reading count DRC_color).
  • coloring information (including the data generation time) is generated for the newly generated data and registered with a new entry of the coloring table 14 and then, the data is written into the mixed main memory 2 .
  • the number of times of access (the writing count DWC_color and the reading count DRC_color) increases with the passage of time.
  • the number of times of access is increased by the dynamic color information management unit 25 .
  • the access frequency calculation unit 24 of the memory management device 1 calculates the dynamic writing frequency DW_color and the dynamic reading frequency DR_color based on the number of times of access.
  • the writing count DWC_color of the data and the reading count DRC_color of the data at the current time can be determined by referencing the coloring table 14 .
  • the dynamic writing frequency DW_color at the current time is determined by a time average (average rate of change a) of the writing count DWC_color from the data generation time ST_color to the current time.
  • the dynamic reading frequency DR_color at the current time is determined by a time average (average rate of change a) of the reading count DRC_color from the data generation time ST_color to the current time. Accordingly, the dynamic writing frequency DW_color and the dynamic reading frequency DR_color of the data are calculated based on the dynamic color information (the writing count DWC_color and the reading count DRC_color).
  • Whether the frequency of access to the data is high or low is determined based on the calculated dynamic writing frequency DW_color and dynamic reading frequency DR_color.
  • Whether the frequency of access is high or low is determined based on, for example, the memory specific information 12 of the mixed main memory 2 into which the data is written and the calculated dynamic writing frequency DW_color and dynamic reading frequency DR_color.
  • “accessible upper limit count ⁇ weight 1 /data life” is set as the inclination of Formula A and “accessible upper limit count ⁇ weight 2 /data life” is set as the inclination of Formula B, where weight 1 >weight 2 holds.
  • Weight 1 and weight 2 can arbitrarily be set in accordance with the mixed main memory 2 into which the data from which the dynamic writing frequency DW_color and the dynamic reading frequency DR_color are calculated is written.
  • If the number of times of access exceeds the line of Formula A, the dynamic access frequency of the data is determined to be high.
  • If the number of times of access lies between the lines of Formula A and Formula B, the dynamic access frequency of the data is determined to be medium.
  • If the number of times of access falls below the line of Formula B, the dynamic access frequency of the data is determined to be low.
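  • These three determinations can be condensed into the sketch below, which compares the access count against the two reference lines whose slopes follow Formula A and Formula B; the function signature and the double-precision arithmetic are assumptions of the sketch.

```c
#include <stdint.h>

enum freq_level { FREQ_LOW, FREQ_MEDIUM, FREQ_HIGH };

/* Classify a dynamic access frequency (for DW_color or DR_color) by
 * comparing the access count against the lines of Formula A and
 * Formula B, whose slopes are (accessible upper limit count * weight /
 * data life) with weight1 > weight2. */
static enum freq_level
classify_access(uint64_t count, uint64_t st_color, uint64_t now,
                double upper_limit, double data_life,
                double weight1, double weight2)
{
    double elapsed = (double)(now - st_color);      /* time since generation */
    double line_a = upper_limit * weight1 / data_life * elapsed; /* Formula A */
    double line_b = upper_limit * weight2 / data_life * elapsed; /* Formula B */

    if ((double)count >= line_a) return FREQ_HIGH;
    if ((double)count >= line_b) return FREQ_MEDIUM;
    return FREQ_LOW;
}
```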
  • FIG. 16 is a flow chart showing an example of the processing to read the data.
  • the processes 6 a , 6 b , 6 c executed by the processors 3 a , 3 b , 3 c issue a reading request for data (read target data) (step W 1 ).
  • a virtual address specifying the read target data is converted into a logical address based on a page table (not shown) included in the processors 3 a , 3 b , 3 c (step W 2 ).
  • the reading management unit 19 references the valid/invalid flag of the entry of the logical address corresponding to the read target data of the address conversion information 13 (step W 3 ).
  • If the valid/invalid flag of the address conversion information 13 is 0 (step W 3 a ), the data is undefined because writing to the logical address has never occurred. In this case, the reading management unit 19 behaves as if reading zero data for the size of the reading request (step W 8 ) before proceeding to the processing in step W 10 .
  • If the valid/invalid flag of the address conversion information 13 is 1 (step W 3 a ), data writing to the logical address has occurred at least once. In this case, the reading management unit 19 references the address conversion information 13 to determine whether data corresponding to the logical address is stored in the volatile semiconductor memory 8 (step W 4 ).
  • If the reading management unit 19 determines that data corresponding to the logical address is stored in the volatile semiconductor memory 8 (step W 4 a ), the processing proceeds to step W 10 to read the data from the volatile semiconductor memory 8 .
  • If the reading management unit 19 determines that data corresponding to the logical address is not stored in the volatile semiconductor memory 8 (step W 4 a ), the reading management unit 19 determines the method of reading the read target data from the nonvolatile semiconductor memories 9 , 10 by referencing the coloring table 14 (step W 5 ). Decision processing of the reading method will be described later.
  • Next, the reading management unit 19 determines whether the read target data needs to be moved (rewritten) by referencing the memory usage information 11 and the memory specific information 12 of the nonvolatile semiconductor memories 9 , 10 in which the read target data is stored (step W 6 ).
  • If the reading management unit 19 determines that the read target data does not need to be moved (step W 6 a ), the processing proceeds to step W 9 .
  • If the reading management unit 19 determines that the read target data needs to be moved (step W 6 a ), the reading management unit 19 moves the read target data to another region of the nonvolatile semiconductor memories 9 , 10 (step W 7 ) and then the processing proceeds to step W 9 .
  • In step W 9 , the memory usage information management unit 22 increments the reading count of the memory usage information 11 when data is read from a nonvolatile memory region.
  • In step W 10 , the dynamic color information management unit 25 increments the reading count DRC_color of the data in the coloring table 14 when the data is read.
  • In step W 11 , the reading management unit 19 reads the data based on a physical address obtained from the logical address and the address conversion information 13 .
  • FIG. 17 is a flow chart showing an example of decision processing of the reading method of data.
  • the decision processing of the reading method is processing to determine whether to use a memory region of the volatile semiconductor memory 8 as a cache when data is read from a memory region of the nonvolatile semiconductor memories 9 , 10 . This processing corresponds to step W 5 in FIG. 16 .
  • the mixed main memory 2 includes, as described above, the volatile semiconductor memory 8 and the nonvolatile semiconductor memories 9 , 10 .
  • a portion of the volatile semiconductor memory 8 can be used as a cache memory.
  • data whose reading frequency is high is read after being cached in the volatile semiconductor memory 8 .
  • data whose reading frequency is low is read directly from the nonvolatile semiconductor memories 9 , 10 without being cached in the volatile semiconductor memory 8 .
  • the reading management unit 19 checks whether there is free space into which the read target data can be written in the volatile semiconductor memory 8 (DRAM region) (step V 4 ). If there is free space in the volatile semiconductor memory 8 (step V 4 a ), the reading management unit 19 caches the read target data in the volatile semiconductor memory 8 (DRAM region) from the nonvolatile semiconductor memories 9 , 10 (step V 5 ).
  • If there is no free space in the volatile semiconductor memory 8 (step V 4 a ), the reading management unit 19 secures free space by writing data stored in the volatile semiconductor memory 8 back to the nonvolatile semiconductor memories 9 , 10 and erasing that data from the volatile semiconductor memory 8 (step V 6 ).
  • the reading management unit 19 checks for free space of the volatile semiconductor memory 8 again (step V 7 ). The processing proceeds to step V 5 if free space is present in the volatile semiconductor memory 8 (step V 7 a ) and the processing proceeds to step V 8 if free space is not present in the volatile semiconductor memory 8 (step V 7 a ).
  • the reading management unit 19 does not cache the read target data in the volatile semiconductor memory 8 and reads the read target data directly from the nonvolatile semiconductor memories 9 , 10 (step V 8 ).
  • the reading method is determined, as described above, by referencing the static reading frequency SR_color and the dynamic reading frequency DR_color.
  • In FIG. 17 , a determination of the data life SL_color is not executed. The reason therefor will be described. As will be described later, data whose data life SL_color is short is arranged in the volatile semiconductor memory 8 when the data is written. Thus, data whose valid/invalid flag is 1 and whose data life SL_color indicates a short life will be stored in the volatile semiconductor memory 8 . As a result, the determination based on the data life SL_color is not needed in FIG. 17 .
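  • The branch structure of steps V 4 to V 8 described above can be condensed into the sketch below; dram_has_room() and write_back_one_line() are hypothetical placeholder helpers standing in for the free-space check and the write-back of step V 6 , not functions from the embodiment.

```c
enum read_path { READ_VIA_DRAM_CACHE, READ_DIRECT };

static int dram_has_room(void)        { return 0; /* placeholder free-space check */ }
static void write_back_one_line(void) { /* placeholder write-back to NAND (step V 6) */ }

/* Decide how to read data from the nonvolatile memories: cache it in
 * the DRAM region if its reading frequency is high and space can be
 * secured, otherwise read it directly (steps V 4 to V 8). */
static enum read_path decide_read_path(int reading_freq_is_high)
{
    if (!reading_freq_is_high)
        return READ_DIRECT;                  /* low frequency: bypass the cache */
    if (dram_has_room())                     /* step V 4 */
        return READ_VIA_DRAM_CACHE;          /* step V 5: cache, then read */
    write_back_one_line();                   /* step V 6: secure free space */
    return dram_has_room() ? READ_VIA_DRAM_CACHE   /* step V 7 */
                           : READ_DIRECT;          /* step V 8 */
}
```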
  • the reading method of the data shown in FIGS. 9 and 10 is determined as described below by following the flow chart of the decision processing of the reading method of data illustrated in FIG. 17 .
  • a high reading frequency and a low writing frequency are estimated for the text region of the kernel for which 5 is set to the static reading frequency SR_color and 1 is set to the static writing frequency SW_color.
  • First data in the text region of the kernel is read when the operating system 27 performs various kinds of processing; thus, the reading count increases and it becomes necessary to read the first data even faster.
  • the memory management device 1 writes the first data read from the nonvolatile semiconductor memories 9 , 10 into the secondary cache memory 5 b or the primary cache memory 4 b of the processor 3 b and also transfers the read first data to the memory region of the volatile semiconductor memory 8 of the mixed main memory 2 in parallel.
  • the first data is read from the secondary cache memory 5 b or the primary cache memory 4 b of the processor 3 b or, if no cache hit occurs, from the memory region of the volatile semiconductor memory 8 of the mixed main memory 2 .
  • the first data stored in the memory region of the volatile semiconductor memory 8 of the mixed main memory 2 is held in the volatile semiconductor memory 8 till power-off as long as the memory region of the volatile semiconductor memory 8 is not exhausted.
  • the data region of the kernel for which 5 is set to the static reading frequency SR_color and 5 is set to the static writing frequency SW_color is a region that is newly generated and initialized each time the system (the information processing device 100 ) is activated.
  • a second data life SL_color in the data region of the kernel is estimated to be short.
  • the memory management device 1 first references the second data life SL_color.
  • Second data is present in the volatile semiconductor memory 8 as long as the memory region of the volatile semiconductor memory 8 is not exhausted and is erased from the volatile semiconductor memory 8 at power-off.
  • the reading frequency for the region of a user program for which 4 is set to the static reading frequency SR_color and 1 is set to the static writing frequency SW_color is lower than the reading frequency of the kernel that is reenterably invoked by all processes.
  • Third data in the region of the user program is arranged in the memory region of the volatile semiconductor memory 8 , but if the memory region of the volatile semiconductor memory 8 of the mixed main memory 2 is fully occupied, the third data is to be written back from the volatile semiconductor memory 8 to the memory region of the nonvolatile semiconductor memories 9 , 10 .
  • the order of third data to be written back is determined based on information of the coloring table 14 . When written back, the third data is moved from the volatile semiconductor memory 8 to the nonvolatile semiconductor memories 9 , 10 in ascending order of reading count.
  • Of the fourth data in a region that is dynamically secured by a program and for which 4 is set to the static reading frequency SR_color and 4 is set to the static writing frequency SW_color, the fourth data whose data life SL_color is set to be short is present, like data in the data region of the kernel, in the volatile semiconductor memory 8 as long as the memory region of the volatile semiconductor memory 8 is not exhausted, and is erased from the volatile semiconductor memory 8 at power-off.
  • fourth data whose data life SL_color is set to be long is arranged in the memory region of the volatile semiconductor memory 8 , but if the memory region of the volatile semiconductor memory 8 of the mixed main memory 2 is fully occupied, the fourth data is to be written back from the volatile semiconductor memory 8 to the memory region of the nonvolatile semiconductor memories 9 , 10 .
  • An extremely low writing frequency and a high predicted reading frequency are estimated by the operating system 27 for fifth data included in a file class for which 1 is set to the static writing frequency SW_color and 3 is set to the static reading frequency SR_color.
  • the memory management device 1 arranges the fifth data in the memory region of the volatile semiconductor memory 8 , but if the memory region of the volatile semiconductor memory 8 of the mixed main memory 2 is fully occupied, the fifth data is to be written back from the volatile semiconductor memory 8 to the memory region of the nonvolatile semiconductor memories 9 , 10 .
  • the extremely low static writing frequency SW_color and the low predicted static reading frequency SR_color are estimated by the operating system 27 for sixth data included in a file class for which 1 is set to the static writing frequency SW_color and 2 is set to the static reading frequency SR_color. If the static reading frequency SR_color is not determined to be high like in this case, the memory management device 1 directly accesses the nonvolatile semiconductor memories 9 , 10 without passing through a cache of the volatile semiconductor memory 8 when reading data.
  • the extremely low static writing frequency SW_color and the extremely low predicted static reading frequency SR_color are estimated by the operating system 27 for seventh data included in a file class for which 1 is set to the static writing frequency SW_color and 1 is set to the static reading frequency SR_color. If the static reading frequency is not determined to be high like in this case, the memory management device 1 directly accesses the nonvolatile semiconductor memories 9 , 10 without passing through a cache of the volatile semiconductor memory 8 when reading data.
  • the reading method of the read target data is determined, as described above, based on coloring information of the read target data. Accordingly, the reading method suited to the characteristic of the read target data (the static reading frequency SR_color, the static writing frequency SW_color, and the data life SL_color) can be used, improving efficiency to read data.
  • FIG. 18 is a flow chart showing an example of write processing of data.
  • the processes 6 a , 6 b , 6 c executed by the processors 3 a , 3 b , 3 c issue a writing request for data (write target data) (step X 1 ).
  • a virtual address specifying the write target data is converted into a logical address based on a page table (not shown) included in the processors 3 a , 3 b , 3 c (step X 2 ).
  • the writing management unit 20 determines a write target memory region of the mixed main memory 2 by referencing the coloring table 14 (step X 3 ). The selection of the write target memory region will be described later.
  • the writing management unit 20 determines whether the write target memory selected in step X 3 is the volatile semiconductor memory 8 (step X 4 ). If, as a result of the determination, the selected write target memory is the volatile semiconductor memory 8 (step X 4 a ), processing in step X 7 is performed and, if the selected write target memory is a nonvolatile semiconductor memory (step X 4 a ), processing in step X 5 is performed.
  • In step X 5 , the writing management unit 20 determines a write target block region in the memory region of the nonvolatile semiconductor memories 9 , 10 by referencing the memory usage information 11 and the coloring table 14 .
  • In step X 6 , the address management unit 18 updates the address conversion information 13 based on the physical address of a page in the write target block. If the nonvolatile semiconductor memories 9 , 10 are NAND flash memories, the same physical address is not overwritten and thus, an update of the physical address accompanying the writing is needed.
  • After the physical address of the writing destination is determined, the writing management unit 20 performs write processing of data (step X 7 ). Subsequently, the address management unit 18 sets the valid/invalid flag of the address conversion information 13 to 1 (step X 8 ). The dynamic color information management unit 25 increments the writing count DWC_color of the coloring table 14 (step X 9 ) and the memory usage information management unit 22 increments the writing count of the memory usage information 11 (step X 10 ).
  • FIG. 19 is a flow chart showing an example of decision processing of the writing destination region of data.
  • In step Y 1 , the writing management unit 20 references the data life SL_color of the write target data.
  • In step Y 2 , the writing management unit 20 determines whether or not the data life SL_color is longer than a predetermined value. If the data life SL_color is equal to or longer than the predetermined value, the processing proceeds to step Y 9 .
  • Otherwise, in step Y 3 , the writing management unit 20 checks for free space in the DRAM region and, in step Y 4 , the writing management unit 20 determines whether there is free space in the DRAM region.
  • If there is free space, in step Y 5 , the writing management unit 20 writes the write target data into the DRAM region.
  • If there is no free space, in step Y 6 , the writing management unit 20 performs write-back processing from the DRAM region to the nonvolatile semiconductor memories. Then, in step Y 7 , the writing management unit 20 checks for free space in the DRAM region and, in step Y 8 , the writing management unit 20 determines whether there is free space in the DRAM region.
  • If there is free space in the DRAM region, the processing returns to step Y 5 and the writing management unit 20 writes the write target data into the DRAM region.
  • If there is no free space in the DRAM region, the processing proceeds to step Y 9 .
  • In step Y 9 , the writing management unit 20 references the static writing frequency SW_color of the write target data managed by the coloring table 14 .
  • In step Y 10 , the writing management unit 20 determines whether 5 is set to the static writing frequency SW_color (whether or not the static writing frequency SW_color of the write target data is high).
  • If 5 is set to the static writing frequency SW_color, the processing proceeds to step Y 13 and the writing management unit 20 selects the B region as the writing destination of the write target data.
  • Otherwise, in step Y 11 , the memory management device 1 references the static reading frequency SR_color of the write target data managed by the coloring table 14 .
  • In step Y 12 , the writing management unit 20 determines to which of 1 to 5 the static reading frequency SR_color is set.
  • If, in step Y 12 , 5 is set to the static reading frequency SR_color, in step Y 13 , the writing management unit 20 selects the B region as the writing destination of the write target data.
  • If, in step Y 12 , 4 is set to the static reading frequency SR_color, in step Y 14 , the writing management unit 20 selects the A region as the writing destination of the write target data.
  • If, in step Y 12 , 3 is set to the static reading frequency SR_color, the processing proceeds to step Y 15 .
  • In step Y 15 , the writing management unit 20 calculates the dynamic writing frequency DW_color of the data based on coloring information of the data.
  • In step Y 16 , the writing management unit 20 references the static writing frequency SW_color of the write target data managed by the coloring table 14 .
  • In step Y 17 , the writing management unit 20 determines whether or not “the static writing frequency SW_color is equal to or more than 3 or the dynamic writing frequency DW_color of the data is at a high level” holds.
  • If, in step Y 17 , “SW_color is equal to or more than 3 or the dynamic writing frequency DW_color of the data is at a high level” does not hold, the processing proceeds to step Y 14 and the writing management unit 20 selects the A region.
  • If, in step Y 17 , “SW_color is equal to or more than 3 or the dynamic writing frequency DW_color of the data is at a high level” holds, the processing proceeds to step Y 18 and the writing management unit 20 selects the C region.
  • If, in step Y 12 , 2 is set to the static reading frequency SR_color, in step Y 19 , the writing management unit 20 calculates the dynamic writing frequency DW_color of the data based on coloring information of the data.
  • In step Y 20 , the writing management unit 20 references the static writing frequency SW_color of the write target data managed by the coloring table 14 .
  • In step Y 21 , the writing management unit 20 determines whether or not “SW_color is equal to or more than 3 or the calculated dynamic writing frequency DW_color is at a high level” holds.
  • If, in step Y 21 , “SW_color is equal to or more than 3 or the calculated dynamic writing frequency DW_color is at a high level” holds, the processing proceeds to step Y 18 and the writing management unit 20 selects the C region.
  • If, in step Y 21 , “SW_color is equal to or more than 3 or the calculated dynamic writing frequency DW_color is at a high level” does not hold, the processing proceeds to step Y 22 .
  • In step Y 22 , the writing management unit 20 determines whether or not “SW_color is equal to or more than 2 or the calculated dynamic writing frequency DW_color is at a medium level” holds.
  • If, in step Y 22 , “SW_color is equal to or more than 2 or the calculated dynamic writing frequency DW_color is at a medium level” holds, the processing proceeds to step Y 23 and the writing management unit 20 selects the D region.
  • If, in step Y 22 , “SW_color is equal to or more than 2 or the calculated dynamic writing frequency DW_color is at a medium level” does not hold, the processing proceeds to step Y 24 and the writing management unit 20 selects the E region.
  • If, in step Y 12 , 1 is set to the static reading frequency SR_color, in step Y 25 , the writing management unit 20 calculates the dynamic writing frequency DW_color of the data based on coloring information of the data.
  • In step Y 26 , the writing management unit 20 references the static writing frequency SW_color of the write target data managed by the coloring table 14 . Then, the processing returns to step Y 21 .
  • In the example described above, the writing destination region of data is determined by using both the static color information and the dynamic color information, but the writing destination region may be determined by using only the static color information. That is, a portion of the flow chart in the example of FIG. 19 may be reused to determine the writing destination region of data based on the static color information alone.
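  • As one illustration, the flow of FIG. 19 can be condensed into the C sketch below. The region naming follows the later description that B and C are SLC regions and A is an MLC region; treating D and E as further MLC regions, the 1-to-5 values, and the dw_high/dw_medium flags (the dynamic level computed in steps Y 15 , Y 19 , and Y 25 ) are assumptions of the sketch, not the definitive flow.

```c
#include <stdint.h>

enum region { REGION_A, REGION_B, REGION_C, REGION_D, REGION_E, REGION_DRAM };

/* Select the writing destination for one piece of write target data,
 * mirroring the flow of FIG. 19 (steps Y 1 to Y 26). */
static enum region
select_write_region(int sl_is_short, uint8_t sw, uint8_t sr,
                    int dw_high, int dw_medium)
{
    if (sl_is_short)
        return REGION_DRAM;                        /* steps Y 1 to Y 8 */
    if (sw == 5)
        return REGION_B;                           /* steps Y 10 and Y 13 */
    switch (sr) {                                  /* step Y 12 */
    case 5:
        return REGION_B;                           /* step Y 13 */
    case 4:
        return REGION_A;                           /* step Y 14 */
    case 3:
        return (sw >= 3 || dw_high) ? REGION_C     /* steps Y 17 and Y 18 */
                                    : REGION_A;    /* step Y 14 */
    default: /* sr is 2 or 1 */
        if (sw >= 3 || dw_high)   return REGION_C; /* steps Y 21 and Y 18 */
        if (sw >= 2 || dw_medium) return REGION_D; /* steps Y 22 and Y 23 */
        return REGION_E;                           /* step Y 24 */
    }
}
```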
  • the developer of the operating system 27 makes settings as shown in FIGS. 9 and 10 for implementation of the data reading method of the reading management unit 19 and the data writing method of the writing management unit 20 .
  • the number of times the first data is read from the text region of the kernel for which 5 is set to SR_color and 1 is set to SW_color is estimated to be large and the number of times the first data is written thereinto is estimated to be small.
  • the first data is moved to the volatile semiconductor memory 8 during system operation before being read or written based on the decision operation of the reading method shown in FIG. 17 .
  • the frequency with which the first data is actually written into the nonvolatile semiconductor memories 9 , 10 is low.
  • the writing management unit 20 writes the first data into the B region of the nonvolatile semiconductor memory 9 , which is an SLC.
  • the data region of the kernel for which 5 is set to SR_color and 5 is set to SW_color is a region that is newly generated and initialized each time the information processing device 100 is activated and thus, the data life of the second data in the data region of the kernel is estimated to be short.
  • the writing management unit 20 first references the data life SL_color of the second data.
  • the second data is always present in the volatile semiconductor memory 8 during operation of the information processing device 100 and is erased from the volatile semiconductor memory 8 at power-off. Therefore, the second data is not written into the nonvolatile semiconductor memories 9 , 10 .
  • the reading frequency for the region of the user program for which 4 is set to SR_color and 1 is set to SW_color is lower than the reading frequency of the kernel that is reenterably invoked by all processes.
  • the third data in the region of the user program is written into the memory region of the nonvolatile semiconductor memories 9 , 10 only if not accessed for a long time by the reading method shown in FIG. 16 .
  • the frequency with which the third data is written into the nonvolatile semiconductor memories 9 , 10 is low.
  • the third data is low in importance when compared with data in the text region of the kernel and so is written into the A region, which is an MLC region in FIG. 19 .
  • Of the fourth data in a region that is dynamically secured by a program and for which 4 is set to SR_color and 4 is set to SW_color, the fourth data whose data life SL_color is set to be short is always present, like data in the data region of the kernel, in the volatile semiconductor memory 8 during operation of the information processing device 100 .
  • the writing management unit 20 first references the data life SL_color of the fourth data.
  • the fourth data is always present in the volatile semiconductor memory 8 during system operation, is erased from the volatile semiconductor memory 8 at power-off and thus is not written into the nonvolatile semiconductor memories 9 , 10 .
  • the fourth data whose data life SL_color is set to be long is arranged in the memory region of the volatile semiconductor memory 8 , but if the memory region of the volatile semiconductor memory 8 of the mixed main memory 2 is fully occupied, the fourth data is to be written back from the volatile semiconductor memory 8 to the memory region of the nonvolatile semiconductor memories 9 , 10 .
  • the text region of the program is high in importance of data and thus, data in the text region of the program is written into the C region, which is an SLC.
  • An extremely low writing frequency and a high predicted reading frequency are estimated by the operating system 27 for the fifth data in a system file class for which 1 is set to SW_color and 3 is set to SR_color.
  • the writing management unit 20 arranges the fifth data in the memory region of the volatile semiconductor memory 8 , but if the memory region of the volatile semiconductor memory 8 of the mixed main memory 2 is fully occupied, the fifth data is to be written back from the volatile semiconductor memory 8 to the memory region of the nonvolatile semiconductor memories 9 , 10 .
  • the writing frequency of the fifth data is determined to be low and thus, the writing management unit 20 arranges the fifth data in the MLC region.
  • A high writing frequency and a high predicted reading frequency are estimated by the operating system 27 for a file class for which 3 is set to SW_color and 3 is set to SR_color.
  • the writing management unit 20 arranges data in the file class for which 3 is set to SW_color and 3 is set to SR_color in the SLC region.
  • An extremely low writing frequency and a low predicted reading frequency are estimated by the operating system 27 for the sixth data included in a file class for which 1 is set to SW_color and 2 is set to SR_color.
  • the sixth data is determined to be low in importance as a file and thus, the writing management unit 20 arranges the sixth data in the MLC region.
  • An extremely low writing frequency and an extremely low predicted reading frequency are estimated by the operating system 27 for the seventh data included in a file class for which 1 is set to SW_color and 1 is set to SR_color.
  • the seventh data is determined to be low in importance as a file and thus, the writing management unit 20 arranges the seventh data in the MLC region.
  • the writing management unit 20 determines the physical address of writing destination. In this case, the writing management unit 20 suppresses an occurrence of wear leveling to reduce unnecessary erasure processing by referencing the coloring table 14 to appropriately select the physical address of writing destination.
  • the wear leveling means interchanging (exchanging) data between blocks so that, for example, a difference between the maximum erasure count of a block and the minimum erasure count of a block is within a predetermined threshold. For example, data in a NAND flash memory cannot be overwritten without erasure processing and thus, a data movement destination needs to be an unused block and erasure processing of a block that has stored data arises.
  • FIG. 20 is a diagram illustrating decision processing of a write target block for data.
  • Data in the nonvolatile semiconductor memories 9 , 10 is erased in units of block.
  • An erasure count EC for each block region of the nonvolatile semiconductor memories 9 , 10 can be acquired by referencing the memory usage information 11 .
  • the ratio of the erasure count EC to the upper limit of the erasure count (erasable upper limit count) of a block region is set as a wear-out rate.
  • If the erasure count EC reaches the erasable upper limit count, the wear-out rate is 100%. If the wear-out rate is 100%, data is not written into the block region.
  • the writing management unit 20 writes write target data whose writing frequency (the static writing frequency SW_color, the dynamic writing frequency DW_color) is low (for example, SW_color is 1 and DW_color is “medium”) into a block region with a high wear-out rate (for example, a wear-out rate of 90% or more) by referencing the coloring table 14 .
  • If the erasure count EC of a block region is sufficiently lower than the upper limit of the erasure count of the block region (for example, the wear-out rate is less than 10%), a large number of data writes to the block region can still be executed.
  • the writing management unit 20 writes write target data whose writing frequency (the static writing frequency SW_color, the dynamic writing frequency DW_color) is high (for example, SW_color is 5 and DW_color is “high”) into a block region with a low wear-out rate (for example, the wear-out rate is less than 10%) by referencing the coloring table 14 .
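  • The wear-out rate and the pairing rule above can be sketched as follows; the 90% and 10% boundaries are the example values from the text, and the function names are illustrative.

```c
#include <stdint.h>

/* Wear-out rate of a block region: the ratio of its erasure count EC
 * to its erasable upper limit count (taken from the memory specific
 * information 12), expressed as a percentage. */
static double wear_out_rate(uint32_t ec, uint32_t erasable_upper_limit)
{
    return 100.0 * (double)ec / (double)erasable_upper_limit;
}

/* Pairing rule: high-frequency data goes to barely worn blocks, while
 * low-frequency data may go to heavily worn blocks. */
static int block_suits_data(double rate, int write_freq_is_high)
{
    return write_freq_is_high ? (rate < 10.0)    /* low wear-out only */
                              : (rate >= 90.0);  /* worn blocks acceptable */
}
```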
  • the block region into which the write target data is written is determined, as described above, based on coloring information of the write target data and the wear-out rate of the block region. Accordingly, the write target block region suited to properties (writing frequency) of the write target data can be selected, improving reliability of data. Moreover, as will be described below, the life of a mixed main memory can be prolonged.
  • FIG. 21 is a graph showing an example of a change of the erasure count in an arbitrary block of the nonvolatile semiconductor memories 9 , 10 .
  • the vertical axis represents the erasure count and the horizontal axis represents the time.
  • FIG. 21 shows a change of the erasure count of an arbitrary block region of the nonvolatile semiconductor memories 9 , 10 . It is preferable for the erasure count of a block region to reach the erasable upper limit count when the life expected of the block region is reached.
  • the threshold for a difference of the erasure count of each block region can be set small for wear leveling.
  • FIG. 22 shows graphs showing an example of a change when the threshold for a difference of the erasure count is set small for wear leveling.
  • FIG. 22 shows the range of a variation of the erasure count of each block region. As shown in FIG. 22 , the variation of the erasure count of each block region is made smaller by reducing the threshold, but an occurrence count of erasure processing for wear leveling increases, which could result in a shorter life of the whole nonvolatile semiconductor memories 9 , 10 .
  • the writing management unit 20 makes a selection of the erasure block region based on the memory usage information 11 , the memory specific information 12 , and the coloring table 14 when data is written.
  • FIG. 23 is a graph showing an example of grouping of block regions in accordance with the erasure count.
  • FIG. 24 is a diagram showing determination criteria for grouping block regions in accordance with the erasure count.
  • each block region is grouped based on the erasure count.
  • Information showing a result of grouping a block region is stored as the memory usage information 11 .
  • the information showing the result of grouping the block region may also be stored as the memory specific information 12 .
  • a thick line in FIG. 23 shows a change of a minimum erasure count and a broken line shows a threshold of wear leveling. As shown in FIG. 23 , each block region is classified into a group of a respective erasure count within a range of the threshold (within a range of a variation) of wear leveling.
  • the memory usage information management unit 22 determines to which group the block region belongs based on a determination table as shown in FIG. 24 and stores the group in the memory usage information 11 .
  • the interval between the minimum erasure count among the erasure counts of all block regions and the value obtained by adding, to the minimum erasure count, the threshold for determining whether to implement wear leveling is divided by the number of groups.
  • the groups are set as h, g, f, e, d, c, b, a upward in the divided range.
  • the upper limit of the erasure count and the lower limit of the erasure count are set for each group.
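  • A sketch of this group assignment follows; the eight groups (h through a) match FIG. 24, while the integer arithmetic and the function name are assumptions of the sketch.

```c
#include <stdint.h>

#define NUM_GROUPS 8   /* groups h, g, f, e, d, c, b, a, from lowest upward */

/* Map a block region's erasure count to a group index (0 = group h with
 * the smallest counts, NUM_GROUPS-1 = group a with the largest): the
 * range [min_ec, min_ec + threshold) is divided evenly into NUM_GROUPS
 * sub-ranges, where threshold is the wear-leveling threshold. */
static int erase_group(uint32_t ec, uint32_t min_ec, uint32_t threshold)
{
    uint32_t span = threshold / NUM_GROUPS;        /* width of one group */
    uint32_t off  = ec > min_ec ? ec - min_ec : 0;
    int g = (int)(off / (span ? span : 1));        /* avoid division by zero */
    return g >= NUM_GROUPS ? NUM_GROUPS - 1 : g;
}
```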
  • FIG. 25 is a diagram showing an example of a search of block regions for wear leveling.
  • the writing management unit 20 determines the group serving as a reference to search for the block region of write target data based on information of the coloring table 14 . If, for example, the access frequency of the write target data is high, a group whose erasure count is small is determined as the reference and if the access frequency of the write target data is low, a group whose erasure count is large is determined as the reference. It is assumed below that the group c is determined for the write target data.
  • the writing management unit 20 searches for a block region belonging to the determined group c of the write target data based on the memory usage information 11 .
  • If a block region belonging to the group c is found, the block region is determined as the writing destination of the write target data.
  • If no such block region is found, the writing management unit 20 searches for a block region belonging to the group b in the neighborhood of the determined group c of the write target data.
  • If a block region belonging to the neighboring group b is found, it is selected as the writing destination of the write target data.
  • If no block region is found there either, a search of the neighboring group d of the group c is further performed, and the search continues similarly until a block region is determined.
  • After the writing destination is determined, the writing management unit 20 writes the data and the address management unit 18 updates the address conversion information 13 .
  • the writing management unit 20 may determine an address of the writing destination by using another search method of a block region.
  • the writing management unit 20 manages writable (already erased) block regions as a tree structure (such as a B-Tree, B+Tree, or RB-Tree) in which the erasure count is used as a key and an erasure block region is used as a node, and stores the tree structure in the memory specific information 12 or the memory usage information 11 .
  • the writing management unit 20 searches the tree by using a reference erasure count as a key to extract a block region with the closest erasure count.
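  • The nearest-key search can be pictured with the sketch below, where a sorted array of erasure counts stands in for the tree structure; the binary search and the array representation are assumptions made for brevity, not the embodiment's data structure.

```c
#include <stdint.h>

/* Return the index of the writable block whose erasure count is closest
 * to the reference count ref; ec_sorted is assumed to be sorted in
 * ascending order with n > 0 (the array stands in for the tree keyed
 * by erasure count). */
static int find_closest_block(const uint32_t *ec_sorted, int n, uint32_t ref)
{
    int lo = 0, hi = n - 1;
    while (lo < hi) {                       /* lower-bound binary search */
        int mid = lo + (hi - lo) / 2;
        if (ec_sorted[mid] < ref) lo = mid + 1; else hi = mid;
    }
    if (lo > 0) {                           /* compare with the neighbor below */
        int64_t below = (int64_t)ref - (int64_t)ec_sorted[lo - 1];
        int64_t here  = (int64_t)ec_sorted[lo] - (int64_t)ref;
        if (below <= here) return lo - 1;
    }
    return lo;
}
```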
  • When data is erased, the operating system 27 erases the content of the coloring table 14 concerning the data.
  • the address management unit 18 erases a physical address corresponding to a logical address of the erased data in the address conversion information 13 .
  • FIG. 26 is a block diagram showing an example of the memory management device further including a cache memory in the memory management device 1 according to the present embodiment.
  • the processor 3 b of the processors 3 a , 3 b , 3 c will representatively be described, but the other processors 3 a , 3 c can also be described in the same manner.
  • the memory management device 1 further includes a cache memory 28 .
  • the processor 3 b can directly access the primary cache memory 4 b , the secondary cache memory 5 b , and further the cache memory 28 .
  • the memory management device 1 accesses the mixed main memory 2 .
  • FIG. 27A is a block diagram showing a first implementation example of the memory management device 1 , the mixed main memory 2 , and the processor 3 a .
  • A case when the volatile semiconductor memory 8 is a DRAM and the nonvolatile semiconductor memories 9 , 10 are NAND flash memories will be described, but the present embodiment is not limited to such an example.
  • the processor 3 a includes a memory controller (MMU) 3 ma , the primary cache memory 4 a , and the secondary cache memory 5 a .
  • the memory management device 1 includes a DRAM controller.
  • the processor 3 a and the memory management device 1 are formed on the same board (for example, SoC).
  • the volatile semiconductor memory 8 is controlled by the DRAM controller included in the memory management device 1 .
  • the nonvolatile semiconductor memories 9 , 10 are controlled by the memory management device 1 .
  • the memory module on which the volatile semiconductor memory 8 is mounted and the memory module on which the nonvolatile semiconductor memories 9 , 10 are mounted are separate modules.
  • FIG. 27B is a block diagram showing a second implementation example of the memory management device 1 , the mixed main memory 2 , and the processor 3 a .
  • A case when the volatile semiconductor memory 8 is a DRAM and the nonvolatile semiconductor memories 9 , 10 are NAND flash memories will be described, but the present embodiment is not limited to such an example.
  • the description of the same elements as those in FIG. 27A is omitted.
  • the memory management device 1 is electrically connected to the chip on which the processor 3 a is mounted from outside. Also, the volatile semiconductor memory 8 is connected to the memory management device 1 .
  • the memory management device 1 includes the DRAM controller (not shown).
  • Another configuration mode of the memory management device 1 and the information processing device 100 according to the present embodiment will be described with reference to FIG. 28 .
  • In the configuration described above, counting (incrementing) of the writing count DWC_color and the reading count DRC_color of data is managed by the dynamic color information management unit 25 of the memory management device 1 .
  • In the configuration mode in FIG. 28 , by contrast, the writing count DWC_color and the reading count DRC_color of data are counted by memory controllers 3 ma , 3 mb , 3 mc included in the processors 3 a , 3 b , 3 c .
  • the memory controller 3 ma of the memory controllers 3 ma , 3 mb , 3 mc will representatively be described, but the other memory controllers 3 mb , 3 mc can also be described in the same manner.
  • the memory controller 3 ma included in the processor 3 a includes a counter cta that counts the writing count DWC_color and the reading count DRC_color of data. Further, the memory controller 3 ma includes count information cia that manages the writing count DWC_color and the reading count DRC_color of data.
  • When, for example, the processor 3 a issues a load instruction on data, the counter cta counts (increments) the reading count DRC_color of the data and updates the count information cia. Also, when, for example, the processor 3 a issues a store instruction on data, the counter cta counts (increments) the writing count DWC_color of the data and updates the count information cia.
  • the writing count DWC_color and the reading count DRC_color of data managed by the count information cia are periodically reflected in the writing count DWC_color and the reading count DRC_color of the coloring table 14 of the memory management device 1 of the data.
  • With the configuration mode in FIG. 28 , the following effect is gained. That is, if the operating frequency of the memory management device 1 is on the order of MHz while the operating frequency of the processor 3 a is on the order of GHz, it may be difficult for the memory management device 1 to count every writing and reading caused by the processor 3 a . In the configuration mode in FIG. 28 , by contrast, writing and reading are counted by the counter cta of the processor 3 a and thus, the writing count and reading count can be counted even at a high operating frequency.
  • FIG. 29 is a perspective view showing an example of the plurality of memory management devices managing the plurality of nonvolatile semiconductor memories.
  • one memory module 30 is formed from the one memory management device 1 and a plurality of NAND flash memories 29 .
  • In the example of FIG. 29 , three memory modules 30 are formed.
  • the plurality of nonvolatile semiconductor memories 29 are, for example, NAND flash memories and are used as the nonvolatile semiconductor memories 9 , 10 described above.
  • the memory management device 1 manages access to the plurality of nonvolatile semiconductor memories 29 belonging to the same memory module 30 .
  • the plurality of the memory management devices 1 included in a plurality of the memory modules 30 operates like one memory management device in cooperation with each other.
  • the memory management device 1 of the memory module 30 includes an ECC function and a RAID function for the plurality of nonvolatile semiconductor memories 29 in the memory module 30 and performs mirroring and striping.
  • each of the nonvolatile semiconductor memories 29 can be hot-swapped (exchanged).
  • a button 31 is associated with each of the plurality of nonvolatile semiconductor memories 29 .
  • the button 31 includes a warning output unit (for example, an LED). If, for example, the warning output unit is in a first color (green), the normal state is indicated and if the warning output unit is in a second color (red), a state requiring swapping is indicated.
  • If the button 31 is pressed, a notification is sent to the processes 6 a , 6 b , 6 c and the operating system 27 , and if it is safe to dismount, such as when no access occurs, the button 31 turns to a third color (blue) and the nonvolatile semiconductor memory 29 corresponding to the button 31 becomes hot-swappable.
  • Alternatively, after the button 31 requesting hot-swapping is pressed, a lamp indicating that the nonvolatile semiconductor memory 29 is hot-swappable is lit when write-back is completed, and then the nonvolatile semiconductor memory 29 is swapped.
  • the processing unit 15 of the memory management device 1 determines whether or not the writing count or reading count of each of the nonvolatile semiconductor memories 29 has reached a predetermined ratio of the accessible upper limit count written in the memory specific information 12 by referencing the memory usage information 11 and the memory specific information 12 stored in the information storage unit 17 . If the writing count or reading count has reached the predetermined ratio of the writable upper limit count or readable upper limit count, the processing unit 15 issues a notification or warning of memory swapping.
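  • A sketch of this threshold check is given below; the ratio parameter and the function name are illustrative, not taken from the embodiment.

```c
#include <stdint.h>

/* Decide whether to issue a swap notification or warning for one
 * nonvolatile semiconductor memory 29: true when its write count or
 * read count has reached the given ratio (e.g., 0.9) of the writable
 * or readable upper limit count from the memory specific information. */
static int needs_swap_warning(uint64_t write_count, uint64_t write_limit,
                              uint64_t read_count,  uint64_t read_limit,
                              double ratio)
{
    return (double)write_count >= ratio * (double)write_limit ||
           (double)read_count  >= ratio * (double)read_limit;
}
```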
  • the processing unit 15 of the memory management device 1 pre-loads, into the cache memory 28 , data that is likely to be accessed frequently, by referencing coloring information corresponding to the data stored in the nonvolatile semiconductor memories 29 .
  • For example, the processing unit 15 pre-loads, prior to a predetermined time, periodic data that is likely to be accessed at the predetermined time.
  • the arrangement of data is determined based on durability of each memory in the mixed main memory 2 so that the life of the mixed main memory 2 can be prolonged. Moreover, fast access to the mixed main memory 2 can be realized.
  • Swapping can be eliminated by using the memory management device 1 and the mixed main memory 2 according to the present embodiment.
  • the nonvolatile semiconductor memories 9 , 10 are used as a main memory. Accordingly, the storage capacity of the main memory can be increased and a secondary storage device using a hard disk or SSD (Solid State Disk) does not have to be used.
  • Because the nonvolatile semiconductor memories 9 , 10 are used as a main memory in the present embodiment, instant-on can be made faster.
  • the basic type of computer architecture has a problem called the von Neumann bottleneck, caused by the speed gap between the CPU and the main memory.
  • this problem has been mitigated by installing a high-speed cache memory (such as an SRAM) between the main memory and CPU core.
  • a memory management device capable of improving the hit rate of the cache memory when a nonvolatile semiconductor memory is used as the main memory will be described.
  • the present embodiment uses the nonvolatile semiconductor memories 9 , 10 as the main memory and a portion of the volatile semiconductor memory 8 as the cache memory.
  • the volatile semiconductor memory 8 used as the cache memory will be described.
  • FIG. 30 shows a physical address space of the volatile semiconductor memory (hereinafter, simply called the cache memory) 8 .
  • the physical address space of the cache memory 8 is divided into a plurality of areas (L 0 to L 5 ). Each area does not have to be contiguous in the physical address space.
• the size of each area is set in such a way that, for example, the area size increases from lower to upper areas. Further, an upper area is enabled to expand into the adjacent lower area.
  • the maximum expansion size of each area is managed by an area limit ELM.
  • An upper area has a larger area size and thus, data in the area is likely to be held for a long period of time.
• a lower area has a smaller area size and thus, data in the area is likely to be held for only a short period of time.
  • data whose write out priority is low is arranged in an upper area and data whose write out priority is high is arranged in a lower area.
  • the arrangement processing is performed by, for example, the writing management unit 20 in FIG. 1 .
  • the write out priority is determined by using coloring information. “Write out” means movement of data from the volatile semiconductor memory 8 to the nonvolatile semiconductor memories 9 , 10 .
  • the cache memory 8 includes a cache header CHD.
  • the cache header CHD stores management information of each area. That is, the area limit ELM, a free cache line list FCL, and an area cache line list ECL of each area are stored in the cache header CHD.
  • the free cache line list FCL is a data structure that manages free space of the cache memory 8 and stores a plurality of nodes as management information corresponding to cache lines belonging to no area.
  • the area cache line list ECL is a data structure that manages used space of the cache memory 8 and stores nodes acquired from the free cache line list FCL for each area.
  • a content of the cache header CHD is initialized by reading from a nonvolatile semiconductor memory when the information processing device 100 is activated.
• when the information processing device 100 is shut down, the content of the cache header CHD is saved in the nonvolatile semiconductor memory.
  • the area limit ELM can be set by the user to fit to the usage form of the user and an interface to enable the setting may be provided.
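The cache header CHD described above lends itself to a small data-structure sketch. The following C declarations are an illustrative assumption of how the area limit ELM, the free cache line list FCL, and the area cache line list ECL could be laid out; none of the names or sizes come from the patent itself.

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_AREAS 6                          /* areas L0..L5 */

struct cache_node {                          /* one node ND per cache line */
    uint64_t line_phys_addr;                 /* physical address of the cache line */
    uint8_t  belonging_area;                 /* one of L0..L5 */
    uint8_t  update_flag;                    /* 0: clean/erased, 1: updated, not written back */
    struct cache_node *next;
};

struct cache_header {                        /* cache header CHD */
    size_t area_limit[NUM_AREAS];            /* area limit ELM: max lines per area */
    struct cache_node *free_list;            /* free cache line list FCL */
    struct cache_node *area_list[NUM_AREAS]; /* area cache line list ECL per area */
};
```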
  • Data written into the mixed main memory 2 includes, as described above, coloring information as hint information to determine an arrangement (writing) region in the mixed main memory 2 .
  • FIGS. 31A and 31B and FIGS. 32A and 32B show examples of tables (CET) showing a correspondence relationship between coloring information of the coloring table 14 and each area of the cache memory 8 shown in FIG. 30 .
• FIG. 31A gives a higher priority to read access to enable improvement of the hit rate of reading. More specifically, FIG. 31A shows the correspondence relationship among the data life SL_color, the static reading frequency information SR_color, and the dynamic reading frequency information DR_color as coloring information, and the area of the volatile semiconductor memory 8 . As shown in FIG. 31A , data having a higher reading frequency, that is, a larger value of the static reading frequency information SR_color, is arranged in an increasingly upper area of the volatile semiconductor memory 8 . That is, to give a higher priority to read access, the static reading frequency information SR_color and the dynamic reading frequency information DR_color are referenced to arrange data with large values thereof in an upper area with a larger area size. The upper area has a larger area size and data in the area is likely to be held for a long period of time. Thus, the cache hit rate of read access can be improved.
  • Data whose data life is “S” is arranged in area L 5 regardless of other coloring information. For example, data in the process of operation has a short data life and the need for writing the data into the nonvolatile semiconductor memories 9 , 10 is low. However, a large number of pieces of such data exist. Thus, such data is arranged in area L 5 with the largest size in the cache memory 8 .
• FIG. 31B gives a higher priority to write access to enable improvement of the hit rate of writing. More specifically, FIG. 31B shows the correspondence relationship among the data life SL_color, the static writing frequency information SW_color, and the dynamic writing frequency information DW_color as coloring information, and the area of the volatile semiconductor memory 8 . That is, to give a higher priority to write access, the static writing frequency information SW_color and the dynamic writing frequency information DW_color are referenced to arrange data with large values thereof in an upper area with a larger area size. Accordingly, the cache hit rate of write access can be improved.
  • Data whose data life is “S” is arranged, like in FIG. 31A , in area L 5 .
  • FIG. 32A takes both of the reading frequency and the writing frequency into consideration and improvement of the hit rate is enabled if at least one of the reading frequency and the writing frequency is high. More specifically, FIG. 32A shows the correspondence relationship among the data life SL_color as coloring information, the sum of the value of the static reading frequency information SR_color and the value of the static writing frequency information SW_color, and the area of the volatile semiconductor memory 8 .
• FIG. 32B is a modification of FIG. 32A in which the reading frequency and the writing frequency are weighted, and enables improvement of the hit rate by setting weights to the reading frequency and the writing frequency.
• the area of the volatile semiconductor memory 8 is associated with the value of SR_color*W + SW_color*(1 − W).
• in FIGS. 32A and 32B , data whose data life is “S” is arranged, like in FIGS. 31A and 31B , in area L 5 .
  • One of the tables CET showing relationships between coloring information and areas shown in FIGS. 31A and 31B and FIGS. 32A and 32B is stored in, for example, the information storage unit 17 .
  • Relationships between coloring information and areas are not limited to examples shown in FIGS. 31A and 31B and FIGS. 32A and 32B and can be changed in response to a user's request.
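To make the table lookup concrete, here is a hedged C sketch of a CET-style mapping from coloring information to an area, including the weighted form of FIG. 32B. The thresholds, the field ranges, and the weight value are invented for illustration; only the rule that short-lived data goes to area L5 and that a higher combined frequency goes to an upper area follows the text.

```c
#include <stdio.h>

enum area { L0, L1, L2, L3, L4, L5 };

struct coloring {
    char data_life;     /* 'S' (short) or 'L' (long): SL_color */
    int  sr_color;      /* static reading frequency, assumed range 0..5 */
    int  sw_color;      /* static writing frequency, assumed range 0..5 */
};

/* Weighted variant of FIG. 32B: the area is chosen from SR_color*W + SW_color*(1 - W). */
static enum area select_area(const struct coloring *c, double w)
{
    double v;
    if (c->data_life == 'S')        /* short-lived data always goes to L5 */
        return L5;
    v = c->sr_color * w + c->sw_color * (1.0 - w);
    if (v >= 4.0) return L4;        /* illustrative thresholds */
    if (v >= 3.0) return L3;
    if (v >= 2.0) return L2;
    if (v >= 1.0) return L1;
    return L0;
}

int main(void)
{
    struct coloring c = { .data_life = 'L', .sr_color = 4, .sw_color = 2 };
    printf("area = L%d\n", (int)select_area(&c, 0.7)); /* read-weighted: W = 0.7 */
    return 0;
}
```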
• areas of the volatile semiconductor memory 8 are set to be expandable.
  • FIG. 33 shows an example of the free cache line list FCL and the area cache line list ECL stored in the cache header CHD of the cache memory 8 .
  • the free cache line list FCL is, as described above, a data structure showing a free space of the cache memory 8 and is composed of a plurality of nodes ND corresponding to cache lines. Each node ND is composed of a physical address of a cache line, a belonging area, and an update flag.
  • the cache line corresponds to the page size (I/O size) of the nonvolatile semiconductor memories 9 , 10 .
  • Each node ND stores the physical address of a cache line.
  • the belonging area is one of areas L 0 to L 5 set to the cache memory.
  • the update flag is a flag indicating whether or not an update of data of the cache line has occurred. “0” of the update flag indicates that data has been erased or data has been written into the volatile semiconductor memory 8 and the written data has not been updated.
  • “1” of the update flag indicates that data in a cache line has been updated and the update of the data has not been reflected in the nonvolatile semiconductor memories 9 , 10 .
  • the update flag is controlled by, for example, the processing unit 15 .
  • the processing unit 15 sets the corresponding update flag to “0” when data is written from the nonvolatile semiconductor memories 9 , 10 into the cache memory 8 and sets the update flag to “1” when the written data is updated in the cache memory 8 .
  • the processing unit 15 also sets the corresponding update flag to “0” when data in the cache memory 8 is erased and further sets the corresponding update flag to “0” when an update of data of the cache memory 8 is reflected in the nonvolatile semiconductor memories 9 , 10 .
• the update flag need not be arranged in each node; for example, the content of a field indicating a dirty bit stored in the information storage unit 17 may be referenced instead.
  • the area cache line list ECL is, as described above, a data structure that manages a used space of the cache memory 8 and stores the node corresponding to the cache line contained in each area. That is, when data read from the nonvolatile semiconductor memories 9 , 10 is written into the cache memory 8 , a belonging area of each node of the free cache line list FCL is searched based on coloring information attached to the data and if free space is available, the node thereof is acquired and arranged in the corresponding area of the area cache line list ECL. If write data is data to be written into area L 5 , each node of the free cache line list FCL is searched and one node of area L 5 or lower areas L 4 to L 0 as an expansion region is acquired. The acquired node is connected to the area cache line list ECL corresponding to area L 5 .
  • the data is also written into the cache memory 8 according to the physical address of the cache line of the acquired node. Further, the update flag of the node ND is set to “0”.
  • the area cache line list ECL is managed based on an algorithm such as FIFO (First-in/First-out) and LRU (Least Recently Used).
  • the cache line corresponding to the node positioned, for example, at the head of the area cache line list ECL is always a write out target of the area.
  • the number of nodes arranged corresponding to each area in the area cache line list ECL is managed by the area limit ELM so that the length of the list of each area should not exceed the area limit ELM.
• management by software processing using the cache header is described here as the management method of the cache area, but management by hardware, using a configuration in which the cache line is managed by a cache tag, may also be used.
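The list manipulation described above, acquiring a node ND from the FCL and appending it to an area's ECL while honoring the area limit ELM, can be sketched as follows. This is a minimal singly-linked-list sketch under assumed type layouts, not the patent's code.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct node {                       /* node ND for one cache line */
    uint64_t line_addr;             /* physical address of the cache line */
    int      area;                  /* belonging area L0..L5 */
    int      update_flag;           /* 0 = clean, 1 = not yet written back */
    struct node *next;
};

struct area_state {
    struct node *ecl;               /* area cache line list (FIFO order) */
    size_t       len;               /* current number of nodes */
    size_t       limit;             /* area limit ELM (includes the expansion region) */
};

/* Detach the first FCL node whose belonging area matches `area`
 * or, as an expansion, one of the lower areas down to `low`. */
static struct node *acquire_free_node(struct node **fcl, int area, int low)
{
    for (struct node **pp = fcl; *pp; pp = &(*pp)->next) {
        if ((*pp)->area <= area && (*pp)->area >= low) {
            struct node *nd = *pp;
            *pp = nd->next;
            nd->next = NULL;
            return nd;
        }
    }
    return NULL;                    /* no free cache line available for this area */
}

/* Append to the tail of the ECL if the area limit ELM permits. */
static bool attach_to_area(struct area_state *st, struct node *nd)
{
    if (st->len >= st->limit)
        return false;               /* the list may not exceed the area limit */
    struct node **pp = &st->ecl;
    while (*pp)
        pp = &(*pp)->next;
    *pp = nd;
    nd->update_flag = 0;            /* data just written from NAND is clean */
    st->len++;
    return true;
}
```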
  • FIG. 34 shows write processing of data by, for example, the processing unit 15 . That is, FIG. 34 shows a flow of processing when data is newly read from the nonvolatile semiconductor memories 9 , 10 and an arrangement of the data in the volatile semiconductor memory 8 is determined.
  • the size of each area is variable in the present embodiment and thus, the process until data is written changes depending on whether or not an area is expandable.
• When data is to be arranged in the cache memory 8 , first a data arrangement area of the cache memory 8 is determined (step S 31 ). That is, an area of the cache memory 8 to arrange the read data is determined based on the correspondence relationships shown in FIGS. 31A and 31B and FIGS. 32A and 32B .
  • the table CET shown in FIG. 31A is referenced based on coloring information attached to data read from the nonvolatile semiconductor memories 9 , 10 . If the data life of the coloring information attached to data is “L”, the value of the static reading frequency information SR_color is “1”, and the reading frequency is “high”, the data is arranged in the area L 0 . If the data life of the coloring information attached to data is “L”, the value of SR_color is “4”, and the reading frequency is “high”, the data is arranged in the area L 4 .
• Whether or not the area is expandable is determined (step S 32 ).
• the current size of the area can be recognized from, for example, the number of nodes of the area cache line list. Thus, the current size is compared with the value of the area limit ELM written in the cache header CHD. If, as a result, the current size is smaller than the value of the area limit ELM, the area is determined to be expandable.
• Whether or not the node ND corresponding to the area is present in the free cache line list FCL is determined (step S 33 ). That is, belonging areas of nodes in the free cache line list FCL are searched to determine whether the corresponding area is present. In this case, if data is data to be written into the area L 4 , the area L 4 is expandable to a portion of the area L 3 and thus, the area L 4 and area L 3 are searched.
  • the node ND is acquired from the free cache line list (step S 34 ).
  • the physical address of the cache line is acquired from the acquired node ND. Based on the physical address, the data read from the nonvolatile semiconductor memories 9 , 10 is written into the cache memory 8 (step S 35 ).
  • the cache header CHD is updated (step S 36 ). That is, the node ND acquired from the free cache line list FCL is moved to the area cache line list ECL and the update flag is set to “0”.
• Next, the address conversion table is updated (step S 37 ). That is, the physical address of the nonvolatile semiconductor memories 9 , 10 corresponding to the data written into the cache memory 8 is written into the address conversion table.
• If, in step S 33 , the corresponding node ND is determined not to be present in the free cache line list FCL, the area cache line list ECL is searched from the bottom area (step S 38 ). That is, to generate the new node ND, it is necessary to transfer any one piece of data in the cache memory 8 to the nonvolatile semiconductor memories 9 , 10 to generate a free area. Thus, all areas from the bottom area L 0 to area L 5 of the area cache line list ECL shown in FIG. 33 are searched.
  • the area L 4 is expandable to a portion of the lower area.
  • the node ND of the lower area of the area cache line list ECL is acquired.
• Whether the node ND has been acquired is determined (step S 39 ).
  • the physical address of the cache line is acquired from the acquired node ND and the data in the cache memory 8 is written into the nonvolatile semiconductor memories 9 , 10 based on the physical address (step S 40 ).
  • the cache header CHD is updated (step S 41 ). That is, the free node ND is generated by the data corresponding to the node ND of the area cache line list ECL being written into the nonvolatile semiconductor memories 9 , 10 . The node ND is moved to the free cache line list FCL and the update flag is set to data “0”.
• Then, the control is moved to step S 33 .
  • the free node ND is present in the free cache line list FCL and thus, the node ND is acquired and the data is written to the physical address specified by the node (steps S 33 to S 35 ).
  • the cache header CHD and the address conversion table are updated (steps S 36 and S 37 ).
• If, in step S 32 , the area expansion is determined to be difficult, the nodes ND of the area in the area cache line list ECL are searched and the first node ND is acquired (step S 42 ).
  • the acquired node ND is a node of an area whose priority is low.
• the physical address of the cache line is acquired from the acquired node, and the data in the cache memory 8 is written into the nonvolatile semiconductor memories 9 , 10 based on the physical address (step S 40 ). Then, the cache header is updated (step S 41 ).
• If, in step S 39 , the node ND cannot be acquired as a result of searching the area cache line list ECL, the cache memory 8 cannot be used and thus, the data is written into the nonvolatile semiconductor memories 9 , 10 (step S 43 ). Then, the address conversion table is updated (step S 37 ).
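The whole FIG. 34 flow can be condensed into a compilable C skeleton. Every helper below is a hypothetical stub standing in for machinery the text describes elsewhere (coloring lookup, list handling, NAND I/O); only the branching between steps S31 to S43 follows the description above.

```c
#include <stdbool.h>

typedef struct { int area; } data_t;   /* write target plus its coloring-derived area */
typedef struct { int unused; } cache_t;

/* Hypothetical hooks, stubbed so the skeleton compiles. */
static int  select_area(cache_t *c, data_t *d)        { (void)c; return d->area; }
static bool area_expandable(cache_t *c, int a)        { (void)c; (void)a; return true; }
static int  find_free_node(cache_t *c, int a)         { (void)c; (void)a; return -1; }
static int  evict_candidate(cache_t *c)               { (void)c; return -1; }
static int  evict_head_of_area(cache_t *c, int a)     { (void)c; (void)a; return 0; }
static void write_back(cache_t *c, int nd)            { (void)c; (void)nd; }
static void node_to_fcl(cache_t *c, int nd)           { (void)c; (void)nd; }
static void write_line(cache_t *c, int nd, data_t *d) { (void)c; (void)nd; (void)d; }
static void node_to_ecl(cache_t *c, int nd, int a)    { (void)c; (void)nd; (void)a; }
static void write_to_nand(cache_t *c, data_t *d)      { (void)c; (void)d; }
static void update_addr_table(cache_t *c, data_t *d)  { (void)c; (void)d; }

static void cache_write(cache_t *c, data_t *d)
{
    int area = select_area(c, d);                  /* S31: area from coloring info */
    int nd;
    if (area_expandable(c, area)) {                /* S32 */
        nd = find_free_node(c, area);              /* S33: search FCL (incl. expansion) */
        if (nd < 0) {
            int victim = evict_candidate(c);       /* S38: search ECL from bottom area */
            if (victim < 0) {                      /* S39: no node could be acquired */
                write_to_nand(c, d);               /* S43: bypass the cache */
                update_addr_table(c, d);           /* S37 */
                return;
            }
            write_back(c, victim);                 /* S40: flush the victim to NAND */
            node_to_fcl(c, victim);                /* S41: header update, flag <- 0 */
            nd = find_free_node(c, area);          /* back to S33 */
        }
    } else {
        int victim = evict_head_of_area(c, area);  /* S42: head node of this area */
        write_back(c, victim);                     /* S40 */
        node_to_fcl(c, victim);                    /* S41 */
        nd = find_free_node(c, area);              /* S33 */
    }
    if (nd < 0)
        return;                                    /* defensive: should not happen after S41 */
    write_line(c, nd, d);                          /* S34-S35: write to the cache line */
    node_to_ecl(c, nd, area);                      /* S36: header update, flag <- 0 */
    update_addr_table(c, d);                       /* S37 */
}
```
(Erasure of the cache memory)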
  • FIG. 35 shows an example of an erasure operation of the cache memory 8 .
  • the cache memory 8 is assumed to be erasable by software.
• If, as shown in FIG. 35 , an erasure request of data stored in the cache memory 8 is issued (step S 51 ), update flags of each node ND are searched to detect data not yet reflected in the nonvolatile semiconductor memories 9 , 10 (step S 52 ). That is, for example, a node whose update flag is data “1” in the area cache line list ECL is detected. As a result, if there is no update flag with the data “1”, the processing ends.
• If an update flag with the data “1” is detected, the data in the cache memory 8 is written into the nonvolatile semiconductor memories 9 , 10 based on the physical address of the cache line of the node ND (step S 53 ).
• Next, the cache header CHD is updated (step S 54 ). That is, the node of the area cache line list ECL is moved to the free cache line list FCL and the update flag is set to data “0”. Next, the control is moved to step S 52 . Such an operation is repeated until there is no longer an update flag with the data “1”.
  • data whose importance is high is stored in an upper area of the volatile semiconductor memory 8 based on the relationship between coloring information attached to data and areas of the cache memory 8 . Therefore, the hit rate of the cache memory 8 can be improved.
• when the hit rate of the cache memory 8 is high, the number of times of accessing the nonvolatile semiconductor memories 9 , 10 can be reduced so that the nonvolatile semiconductor memories 9 , 10 can be protected.
  • upper areas have an expansion region and data can be written thereinto until the expansion region is full. If the area is small, data whose importance is high but is not accessed frequently is likely to be written back from the cache memory 8 based on, for example, an algorithm of LRU. However, data infrequently accessed can be left in the cache memory by making an upper area expandable to lower areas to secure a wide area including the expansion region. Therefore, the hit rate of the cache memory 8 can be improved.
  • the cache memory 8 is divided into the areas of L 0 to L 5 for each piece of coloring information.
• when the area L 5 as an upper area stores data equal to or more than a specified size, the area can be expanded to a portion of the area L 4 thereunder. If data is written into an expansion region and the area cannot be further expanded, data in the cache memory 8 is written back to the nonvolatile semiconductor memories 9 , 10 based on an algorithm such as FIFO, LRU, or the like.
  • the bottom area L 0 has no expansion region and if the area becomes full, data in the cache memory 8 is written back based on an algorithm such as FIFO, LRU, or the like.
  • the present embodiment is a modification of the first embodiment.
  • the present embodiment relates to an example capable of reducing the number of times of accessing the nonvolatile semiconductor memories (NAND flash memories) 9 , 10 so that the memory life can be prolonged.
  • the initial value of the flag is “0”.
  • the flag “0(invalid)” indicates that the corresponding logical address is not mapped to the nonvolatile semiconductor memories 9 , 10 or has been erased after being mapped.
  • the flag “1(valid)” indicates that the corresponding logical address is mapped to at least one of the nonvolatile semiconductor memories 9 , 10 .
• when all pages in a block in the nonvolatile semiconductor memories 9 , 10 have the flag “0(invalid)”, all data in the block can be erased. Even one page having the flag “1(valid)” makes the block non-erasable.
  • the valid/invalid flag makes a state transition from the flag “0” to “1”.
  • the valid/invalid flag makes a state transition from the flag “1” to “0”.
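A small sketch of the erasability rule implied by the flags above: a block of the NAND flash memory may be erased only when every page in it carries the flag “0(invalid)”. The geometry and types are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGES_PER_BLOCK 64            /* assumed geometry */

/* valid[i] is 1 while the page holds live (mapped) data, 0 otherwise. */
static bool block_erasable(const uint8_t valid[PAGES_PER_BLOCK])
{
    for (int i = 0; i < PAGES_PER_BLOCK; i++)
        if (valid[i])                 /* one valid page keeps the block alive */
            return false;
    return true;
}

int main(void)
{
    uint8_t valid[PAGES_PER_BLOCK] = {0};
    valid[3] = 1;                     /* one page still mapped */
    printf("erasable: %d\n", block_erasable(valid));  /* prints 0 */
    valid[3] = 0;                     /* invalidated after a release request */
    printf("erasable: %d\n", block_erasable(valid));  /* prints 1 */
    return 0;
}
```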
• In step S 001 , for example, an operating system OS (corresponding to the operating system 27 in the first embodiment) of the processor 3 a sends a memory release request (argument: logical address) of the volatile semiconductor memory 8 to the memory management device 1 .
• In step S 002 , the memory management device 1 that has received the memory release request references the address conversion information (address conversion table) 13 to examine whether the physical address in the volatile semiconductor memory 8 corresponding to the logical address given as the argument of the memory release request has a defined value, and also examines the valid/invalid bit of the nonvolatile semiconductor memories 9 , 10 , thereby checking whether or not the applicable data is in the volatile semiconductor memory 8 or the nonvolatile semiconductor memories 9 , 10 .
• If the valid/invalid bit of the nonvolatile semiconductor memories 9 , 10 is “0”, the memory management device 1 determines that the logical address given as the argument is not mapped to the nonvolatile semiconductor memories 9 , 10 , and if the valid/invalid bit of the nonvolatile semiconductor memories 9 , 10 is “1”, the memory management device 1 determines that the logical address given as the argument is mapped to the nonvolatile semiconductor memories 9 , 10 .
• the memory management device 1 references the physical address of the volatile semiconductor memory 8 and the physical addresses of the nonvolatile semiconductor memories 9 , 10 to check presence/absence of the data in the volatile semiconductor memory 8 and the nonvolatile semiconductor memories 9 , 10 and exercises the following control:
• if the data is present in both the volatile semiconductor memory 8 and the nonvolatile semiconductor memories 9 , 10 , the memory management device 1 erases the data at the physical address in the volatile semiconductor memory 8 corresponding to the logical address requested to be released to form explicit free space in the volatile semiconductor memory 8 and sets the dirty bit of the volatile semiconductor memory 8 to “0” (step S 003 ).
  • the dirty bit of the volatile semiconductor memory 8 is a bit indicating that data in the volatile semiconductor memory 8 has been rewritten and is present, for example, in a header region or the like of the volatile semiconductor memory 8 .
• the memory management device 1 sets the valid/invalid bit of the physical address in the nonvolatile semiconductor memories 9 , 10 corresponding to the logical address requested to be released to “0”, invalidating it as an erasure target for the nonvolatile semiconductor memories 9 , 10 (step S 004 ).
• a data erasure operation is not actually performed on the nonvolatile semiconductor memories (NAND) 9 , 10 in a strict sense; the data is only marked as an erasure target by removing the valid bit.
• if the data is present only in the volatile semiconductor memory 8 , the memory management device 1 similarly erases the data at the physical address in the volatile semiconductor memory 8 corresponding to the logical address requested to be released to form explicit free space and sets the dirty bit of the volatile semiconductor memory 8 to “0” (step S 005 ).
  • the memory management device 1 receives a logical address specifying a release position for the mixed main memory 2 including the volatile semiconductor memory (first memory) 8 and the nonvolatile semiconductor memories (second memory) 9 , 10 from the processor 3 and examines the specified logical address, the physical address of the volatile semiconductor memory (first memory) 8 , the physical addresses of the nonvolatile semiconductor memories (second memory) 9 , 10 , and the valid/invalid flag of data at a physical address of the nonvolatile semiconductor memories (second memory) 9 , 10 by referencing the address conversion information 13 to check the physical address at which data corresponding to the logical address requested to release is present.
• if the corresponding data is present in the volatile semiconductor memory (first memory) 8 , the memory management device 1 erases the data to form explicit free space, and if the corresponding data is also present in the nonvolatile semiconductor memories (second memory) 9 , 10 , the memory management device 1 does not actually perform an erasure operation of the data, but invalidates the valid/invalid flag by setting the flag to “0”. In other words, the memory management device 1 forms explicit free space in the volatile semiconductor memory (DRAM) 8 for the logical address specified by the memory release request.
  • FIG. 39 is a diagram illustrating a formation of explicit free space in the volatile semiconductor memory when a release of a memory in FIG. 38 is requested.
  • erased explicit free space FSO can be formed at a physical address xh corresponding to the logical address specified by a memory release request in memory space of the volatile semiconductor memory (DRAM) 8 .
  • the amount of data of the volatile semiconductor memory 8 can be reduced and thus, the number of times of accessing the nonvolatile semiconductor memories 9 , 10 can advantageously be reduced to prolong the memory life of the nonvolatile semiconductor memories 9 , 10 .
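The release flow (steps S001 to S005) reduces to a short routine. The entry layout below is an assumption made for illustration; the point is that the DRAM copy is freed explicitly while the NAND copy is merely invalidated, never erased on the spot.

```c
#include <stdbool.h>
#include <stdint.h>

struct at_entry {                 /* one row of the address conversion information 13 */
    bool     dram_mapped;         /* physical address in the DRAM is defined */
    uint64_t dram_addr;
    bool     nand_valid;          /* valid/invalid bit for the NAND copy */
    uint64_t nand_addr;
    bool     dirty;               /* DRAM dirty bit */
};

static void dram_erase(uint64_t addr) { (void)addr; } /* hypothetical hook */

static void memory_release(struct at_entry *e)
{
    if (e->dram_mapped) {                 /* data (also) in the DRAM */
        dram_erase(e->dram_addr);         /* S003/S005: form explicit free space */
        e->dram_mapped = false;
        e->dirty = false;                 /* dirty bit <- 0 */
    }
    if (e->nand_valid)
        e->nand_valid = false;            /* S004: invalidate only; no NAND erase */
}
```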
• In step S 011 , for example, an application App in the processor 3 a sends a memory acquisition request to the operating system OS.
• In step S 012 , the operating system OS secures a portion of free (arbitrary) logical address space.
  • the secured logical address is not yet mapped to a physical address in the volatile semiconductor memory or the nonvolatile semiconductor memory and will be mapped only when a writing request is received.
  • a memory region can be secured based on coloring information (hint information) before data reading and data writing described below.
  • the present example is advantageous in that the memory life of the nonvolatile semiconductor memories 9 , 10 can be prolonged.
• In step S 013 , the application App requests data reading from the operating system OS.
• In step S 014 , the operating system OS requests data reading from the memory management device 1 .
  • the memory management device 1 searches for an entry corresponding to the logical address for which a data reading request is made by referencing the address conversion information 13 .
• In step S 015 , the application App requests data writing to the operating system OS.
• In step S 016 , the operating system OS requests data writing to the memory management device 1 .
  • the memory management device 1 references the address conversion information 13 to enter a mapping result for the secured logical address (the physical address in the volatile semiconductor memory or the nonvolatile semiconductor memory). If mapped to the nonvolatile semiconductor memories 9 , 10 , the valid/invalid flag indicating presence/absence of data in the nonvolatile semiconductor memories 9 , 10 is validated by setting the flag to “1”.
• The processing flow of a memory data reading request in step S 014 will be described in detail with reference to FIG. 41 .
• In step S 201 , the application App of the processor 3 a first requests reading from the operating system OS, and the operating system OS requests memory data reading from the memory management device 1 by specifying the logical address.
• In step S 202 , the memory management device 1 that has received the memory data reading request determines whether data corresponding to the logical address is present in the volatile semiconductor memory 8 by referencing the address conversion information (table) 13 .
• In step S 203 , if a determination is made in step S 202 that data corresponding to the logical address is present in the volatile semiconductor memory 8 (Yes), the operating system OS reads the data at the physical address in the volatile semiconductor memory 8 corresponding to the logical address via the memory management device 1 and terminates the operation (End).
• In step S 204 , if a determination is made in step S 202 that data corresponding to the logical address is not present in the volatile semiconductor memory 8 (No), the memory management device 1 determines whether data corresponding to the logical address is present in the nonvolatile semiconductor memories 9 , 10 by referencing the address conversion information (table) 13 again.
• In step S 205 , if a determination is made in step S 204 that corresponding data is present in the nonvolatile semiconductor memories 9 , 10 (Yes), the operating system OS reads the data stored in the nonvolatile semiconductor memories 9 , 10 corresponding to the logical address via the memory management device 1 .
• In step S 206 , the operating system OS writes the data read from the nonvolatile semiconductor memories 9 , 10 in step S 205 into the volatile semiconductor memory 8 via the memory management device 1 .
• In step S 207 , the memory management device 1 sets the physical address in the volatile semiconductor memory 8 of the entry of the address conversion information 13 , sets the valid/invalid bit in the address conversion information 13 to “1”, sets the dirty bit to “0”, and terminates the operation (End).
• In step S 208 , if a determination is made in step S 204 that corresponding data is not present in the nonvolatile semiconductor memories 9 , 10 (No), the operating system OS sends zero-cleared data created by the memory management device 1 to the processor 3 a side and terminates the operation (End).
  • “Sending zero-cleared data to the processor 3 a side” essentially means that if the data is actually present in at least one of the volatile semiconductor memory and nonvolatile semiconductor memories, a content of the data present at the physical address corresponding to the logical address is sent. In this case, however, the data is not yet mapped and there is no corresponding data and thus, instead of actually sending the content of the data, data padded with zeros for the size is sent as data.
  • the zero-cleared data may be written into the volatile semiconductor memory 8 .
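The read path (steps S201 to S208) can be sketched as follows, with in-memory arrays standing in for the DRAM and the NAND and with an assumed 4 KiB page; this is an illustration of the control flow, not the device's actual interface.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PAGE 4096

struct entry {
    bool dram_present, nand_valid, dirty;
    uint8_t dram[PAGE], nand[PAGE];       /* stand-ins for the two memories */
};

static void mm_read(struct entry *e, uint8_t out[PAGE])
{
    if (e->dram_present) {                        /* S202: hit in the DRAM */
        memcpy(out, e->dram, PAGE);               /* S203 */
        return;
    }
    if (e->nand_valid) {                          /* S204: hit in the NAND */
        memcpy(e->dram, e->nand, PAGE);           /* S205-S206: stage into the DRAM */
        e->dram_present = true;                   /* S207: table entry updated */
        e->dirty = false;                         /*        dirty bit <- 0 */
        memcpy(out, e->dram, PAGE);
        return;
    }
    memset(out, 0, PAGE);                         /* S208: zero-cleared data */
}
```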
• The processing flow when memory data writing is requested in step S 016 will be described in detail with reference to FIG. 42 .
• In step S 301 , for example, the application App of the processor 3 a first issues a writing request to the operating system OS, and the operating system OS requests memory data writing from the memory management device 1 by specifying the logical address.
• In step S 302 , the memory management device 1 that has received the memory data writing request determines whether data corresponding to the logical address is present in the volatile semiconductor memory 8 by referencing the address conversion information (table) 13 .
• In step S 303 , if a determination is made in step S 302 that data corresponding to the logical address is present in the volatile semiconductor memory 8 (Yes), the operating system OS writes the data to the physical address in the volatile semiconductor memory 8 corresponding to the logical address via the memory management device 1 .
• In step S 304 , the memory management device 1 references the address conversion information 13 to set the dirty bit of the entry in the volatile semiconductor memory 8 corresponding to the address to “1” (End).
• In step S 305 , if a determination is made in step S 302 that data corresponding to the logical address is not present in the volatile semiconductor memory 8 (No), the memory management device 1 determines whether data corresponding to the logical address is present in the nonvolatile semiconductor memories 9 , 10 by referencing the address conversion information 13 again.
• In step S 306 , if a determination is made in step S 305 that corresponding data is present in the nonvolatile semiconductor memories 9 , 10 (Yes), the operating system OS reads the data at the physical address in the nonvolatile semiconductor memories 9 , 10 corresponding to the logical address via the memory management device 1 .
• In step S 307 , if a determination is made in step S 305 that corresponding data is not present in the nonvolatile semiconductor memories 9 , 10 (No), the operating system OS sends data zero-cleared by the memory management device 1 to the processor 3 a side, and proceeds to the next step S 308 .
• In step S 308 , the memory management device 1 writes the data read from the nonvolatile semiconductor memory or the zero-cleared data into the volatile semiconductor memory 8 .
• In step S 309 , the memory management device 1 sets the physical address in the volatile semiconductor memory 8 of the corresponding entry of the address conversion information (table) 13 , sets the valid/invalid bit in the address conversion information 13 to “1”, and sets the dirty bit to “0”.
• In step S 310 , the memory management device 1 writes the updated data into the volatile semiconductor memory 8 and terminates the operation (End).
• data present in the nonvolatile semiconductor memories 9 , 10 is first read into the volatile semiconductor memory 8 and then overwritten with the updated data. This is intended to prevent the number of times of access from increasing (because reading + writing would be needed) if the data were rewritten in the nonvolatile semiconductor memories 9 , 10 when the data is written.
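The matching write path (steps S301 to S310) follows the same pattern: updates always land in the DRAM, and a NAND copy is staged in first so the NAND is not rewritten directly. The entry layout is the same illustrative assumption as in the read sketch, and the flag settings mirror the steps exactly as described above.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PAGE 4096

struct entry {
    bool dram_present, nand_valid, dirty;
    uint8_t dram[PAGE], nand[PAGE];
};

static void mm_write(struct entry *e, const uint8_t in[PAGE])
{
    if (e->dram_present) {                        /* S302: hit in the DRAM */
        memcpy(e->dram, in, PAGE);                /* S303 */
        e->dirty = true;                          /* S304: dirty bit <- 1 */
        return;
    }
    if (e->nand_valid)                            /* S305: data in the NAND */
        memcpy(e->dram, e->nand, PAGE);           /* S306: stage the NAND copy */
    else
        memset(e->dram, 0, PAGE);                 /* S307: zero-cleared data */
    e->dram_present = true;                       /* S308-S309: table entry updated */
    e->dirty = false;                             /* S309: dirty bit <- 0, per the text */
    memcpy(e->dram, in, PAGE);                    /* S310: then write the update */
}
```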
• In step S 012 , the operating system OS that receives the memory acquisition request from an application secures a portion of free logical address space. Then, when a writing request occurs, the operating system OS secures a memory region in the more appropriate memory of the volatile semiconductor memory 8 and the nonvolatile semiconductor memories 9 , 10 in accordance with the coloring information accompanying the logical address, the writing count, or the like.
  • the memory management device 1 creates explicit free space in the volatile semiconductor memory 8 and thus, data in the volatile semiconductor memory 8 to be written into the nonvolatile semiconductor memories 9 , 10 is reduced so that the number of times of accessing the nonvolatile semiconductor memories including NAND flash memories whose accessible count is limited can be reduced.
  • the memory life of the nonvolatile semiconductor memories 9 , 10 including NAND flash memories can advantageously be prolonged.
  • the volatile semiconductor memory 8 and the nonvolatile semiconductor memories 9 , 10 are managed, as shown in FIG. 4 , by a common address conversion table, but the present embodiment is not limited to such an example and the volatile semiconductor memory 8 and the nonvolatile semiconductor memories 9 , 10 may be managed separately.
  • the volatile semiconductor memory 8 may be managed by a cache tag (table).
  • the cache tag does not necessarily need to include coloring information.
  • a NAND flash memory is generally used as a secondary storage device.
  • data stored in a NAND flash memory mostly has a data size equal to or more than a physical block size.
• when a NAND flash memory is used as a secondary storage device, one block region rarely has a plurality of pieces of data with different erasure frequencies.
  • the size of data read from the NAND flash memory and the size of data written into the NAND flash memory are frequently estimated to be less than the physical block size of the NAND flash memory.
• the present embodiment is a modification of the first embodiment; described below is the memory management device 1 that classifies a plurality of pieces of write target data into a plurality of groups (color groups) based on coloring information and configures data of the block size by putting the pieces of write target data belonging to the same group together.
• as coloring information, for example, at least one of the static color information and the dynamic color information described in the first embodiment may be adopted.
• as static color information, for example, at least one of “importance”, “reading frequency/writing frequency”, and “data life” may be adopted.
• as dynamic color information, for example, at least one of the numbers of times of reading and writing data and the frequencies of reading and writing data may be adopted.
• in the following, the nonvolatile semiconductor memories 9 , 10 are assumed to be NAND flash memories, but the type of the nonvolatile semiconductor memories 9 , 10 is not limited to this example.
  • An overwrite method will briefly be described here.
  • the overwrite method is one writing method of a memory system using a NAND flash memory.
  • a page once written cannot be rewritten unless a whole block including the page is erased.
  • the same physical address (the physical address of the NAND flash memory, hereinafter, referred to as the NAND physical address) cannot be overwritten unless the physical address is erased.
  • the correspondence relationship between the logical address (the logical address of the NAND flash memory, hereinafter, referred to as the NAND logical address) and the NAND physical address is managed by a logical/physical conversion table and the correspondence relationship can dynamically be changed. If the overwrite method is adopted, a memory system including a NAND flash memory behaves as if any logical address were overwritable from an upper layer.
• the correspondence relationship between the NAND logical address in units of blocks (hereinafter, referred to as the NLBA) and the NAND physical address in units of blocks (hereinafter, referred to as the NPBA) is managed. Because the logical/physical conversion table of a NAND flash memory is managed in units of blocks, even if only data of a size equal to or less than the block size, for example, data for one page is updated, erasure processing of the whole block including the data is needed.
• when data is updated, a new NPBA is allocated to the NLBA. Update data is written into the region corresponding to the new NPBA and at this point, non-updated data stored in the old NPBA is copied to the region corresponding to the new NPBA (involved relocation).
  • a plurality of NPBAs may be allocated to one NLBA for data exchange to execute the data exchanging involved in updating therebetween.
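The involved relocation described in the two bullets above can be illustrated in a few lines of C. Block geometry, the table size, and the function shape are assumptions; the essential moves are copying the non-updated pages, writing the update, and remapping the NLBA to the new NPBA.

```c
#include <stdint.h>
#include <string.h>

#define PAGES_PER_BLOCK 64
#define PAGE 4096

typedef uint8_t block_t[PAGES_PER_BLOCK][PAGE];

/* logical-to-physical block map: nlba -> npba (assumed small table) */
static int l2p[16];

static void update_page(block_t *blocks, int nlba, int free_npba,
                        int page, const uint8_t data[PAGE])
{
    int old_npba = l2p[nlba];
    for (int p = 0; p < PAGES_PER_BLOCK; p++)       /* involved relocation: */
        if (p != page)                              /* copy the non-updated pages */
            memcpy(blocks[free_npba][p], blocks[old_npba][p], PAGE);
    memcpy(blocks[free_npba][page], data, PAGE);    /* write the update */
    l2p[nlba] = free_npba;                          /* remap NLBA -> new NPBA */
    /* old_npba can now be erased and recycled */
}
```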
• in the present embodiment, write target data is grouped based on, for example, the static writing frequency SW_color as coloring information.
• write target data may also be grouped based on various criteria, for example, the static reading frequency SR_color, the dynamic writing frequency DW_color, or the dynamic reading frequency DR_color, or further a combination of a plurality of criteria.
  • the management size to group a plurality of pieces of write target data is less than the block size of a NAND flash memory.
  • a page equal to the management unit of the coloring table 14 in size is used as a unit of the management size.
  • FIG. 43 is a block diagram showing an example of principal portions of a functional configuration of the memory management device 1 according to the present embodiment.
  • the coloring information management unit 21 includes, in addition to the access frequency calculation unit 24 and the dynamic color information management unit 25 described with reference to FIG. 2 , a group value calculation unit 201 and a reservation list management unit 202 .
  • the memory management device 1 further includes the writing management unit 20 , the coloring table 14 stored in the information storage unit 17 , and a reservation list 32 stored in the working memory 16 .
  • Other functional blocks contained in the memory management device 1 are the same as those described with reference to FIG. 2 and thus, an illustration and description thereof are omitted.
  • the group value calculation unit 201 references the coloring table 14 to calculate a color group value based on the static writing frequency SW_color of write target data.
  • the color group value is a value indicating to which color group of color groups determined in accordance with the static writing frequency SW_color the write target data belongs.
  • the color group value is calculated based on coloring information of the coloring table 14 and shows a grouping result of the write target data.
  • the group value calculation unit 201 calculates a color group value by using coloring information for each piece of data as an input value, but the calculation method can be changed in various ways.
  • the group value calculation unit 201 may use the static writing frequency SW_color or the dynamic writing frequency DW_color of data directly as a color group value.
• the group value calculation unit 201 divides color groups so that the number of color groups does not become too large. For example, the group value calculation unit 201 may calculate a color group value based on at least one of the static writing frequency SW_color or the dynamic writing frequency DW_color of data.
  • the reservation list management unit 202 manages the reservation list 32 indicating a reservation state of write target data into a block allocated to each color group.
  • the reservation list 32 is stored in, for example, the working memory 16 , but may also be stored in another storage unit, for example, the information storage unit 17 . Details of the reservation list management unit 202 and the reservation list 32 will be described later.
• the writing management unit 20 references the reservation list 32 and writes data of the block size, configured by putting a plurality of pieces of write target data together, into the block corresponding to the reservation node in the nonvolatile semiconductor memories 9 , 10 .
• Differences between writing to a common NAND flash memory and writing by the memory management device 1 according to the present embodiment will be described using FIGS. 44 and 45 .
  • FIG. 44 is a diagram showing an example of a data configuration of the block size when write target data is not classified based on coloring information.
• the erasure frequency of a block is proportional to the highest access frequency (for example, the static writing frequency SW_color) of the data in the block.
  • FIG. 45 is a diagram showing an example of a data configuration of the block size when write target data is classified based on coloring information.
  • coloring information can be obtained based on the coloring table 14 and thus, write target data can be grouped in accordance with the access frequency (for example, the static writing frequency SW_color).
  • the group value calculation unit 201 classifies write target data less than the block size of a NAND flash memory as a color group having a comparable access frequency based on the coloring table 14 .
• the reservation list management unit 202 puts together write target data belonging to the same color group until the block size is reached, packaging the write target data for a block.
  • data with a high access frequency can be concentrated in a portion of blocks. Then, it becomes possible to decrease the number of blocks with a high erasure frequency and prolong the life of the NAND flash memory.
  • FIG. 46 is a diagram showing an example of a relationship between the address conversion information 13 according to the present embodiment and the physical address space of the nonvolatile semiconductor memories 9 , 10 , that is, the NAND logical address.
  • the address conversion information 13 includes the logical address, the physical address of the volatile semiconductor memory 8 , the physical addresses (NAND logical addresses) of the nonvolatile semiconductor memories 9 , 10 , and valid/invalid flag as items.
  • the physical address of the volatile semiconductor memory 8 is stored by associating with the logical address of the data in the address conversion information 13 .
  • the valid/invalid flag is a flag indicating whether or not each entry is valid.
  • write target data D 1 of a color group G 2 is first stored in the nonvolatile semiconductor memories 9 , 10 .
  • one block of a physical address (NAND logical address) region of the nonvolatile semiconductor memories 9 , 10 is reserved for the color group G 2 .
• the logical address L 1 of the write target data D 1 , a physical address (NAND logical address) P 1 of the physical address (NAND logical address) region reserved for the color group G 2 , and the valid/invalid flag 1 indicating validity are stored in the address conversion information 13 .
  • write target data D 2 of a color group G 4 is stored in the nonvolatile semiconductor memories 9 , 10 .
  • one block of a physical address region in the nonvolatile semiconductor memories 9 , 10 is reserved for the color group G 4 .
  • write target data D 3 belonging to the same color group G 2 as the write target data D 1 previously stored in the physical address space of the nonvolatile semiconductor memories 9 , 10 is stored in the nonvolatile semiconductor memories 9 , 10 .
  • the logical address of the write target data D 3 , another physical address P 2 of the physical address region reserved for the color group G 2 , and the valid/invalid flag 1 indicating validity are stored in the address conversion information 13 .
  • FIG. 47 is a diagram showing an example of a logical/physical conversion table (NAND logical/physical conversion table) 13 a of the nonvolatile semiconductor memories 9 , 10 .
  • the NAND logical/physical conversion table 13 a is stored in, for example, the information storage unit 17 .
  • the NAND logical/physical conversion table 13 a shows the correspondence between the NAND logical block address NLBA and the NAND physical block address NPBA.
  • NPBA 2 is allocated to NLBA 0
  • NPBA 1 is allocated to NLBA 1
  • NPBA 0 is allocated to NLBA 2
  • NLBA 0 corresponds to, for example, physical addresses P 1 , P 2 , . . . , Pn in the nonvolatile semiconductor memories 9 , 10 .
  • FIG. 48 is a data structure diagram showing an example of the reservation list 32 .
  • the reservation list 32 manages reservation nodes 321 to 326 representing physical address regions in units of reserved block regions.
  • the reservation list 32 has a management section structure to prevent data with a high access frequency and data with a low access frequency from being included in the same block.
  • a reservation node is managed by, for example, a list structure so that an increase/decrease of the number thereof can be handled flexibly.
  • Each of the reservation nodes 321 to 326 includes the color group value allocated to the respective reservation node, the reserved physical address (reserved NAND logical address), and the free space size.
  • the reserved physical address is, among physical addresses (NAND logical addresses) allocated to reservation nodes, a physical address (NAND logical address) that is not used and in which data is next to be arranged.
  • the free space size indicates the size of an unused region of physical address (NAND logical address) regions allocated to reservation nodes.
• when new data is to be written, the reservation list management unit 202 scans the reservation list 32 and searches for a reservation node having the same color group value as the color group value of the new data and whose free space size is larger than the size of the new data.
  • the reserved physical address of the searched reservation node is used as the physical address of the new data.
  • the reservation list management unit 202 selects an unused address region from the physical address region allocated to the searched reservation node to update the reserved physical address of the searched reservation node.
  • the reservation list management unit 202 also reduces the free space size by the size of the new data to update the free space size of the searched reservation node.
• if no such reservation node is found, the reservation list management unit 202 secures a new physical address region of the block size and adds a new reservation node to the reservation list 32 .
  • the reservation list management unit 202 sets the color group value of the new data as the color group value of the new reservation node, sets an unused physical address of the newly secured physical address region as the reserved physical address of the new reservation node, and sets the size of free space of the newly secured physical address region as the free space size of the new reservation node.
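The search-then-consume behavior of the reservation list can be sketched as a short C routine. The node layout and the byte-granular accounting are illustrative assumptions; the text's reserved-address and free-space-size updates correspond to the two assignments in the loop.

```c
#include <stddef.h>
#include <stdint.h>

struct resv_node {
    int      color_group;          /* color group value served by this block */
    uint64_t next_addr;            /* reserved physical (NAND logical) address */
    size_t   free_size;            /* unused bytes left in the block region */
    struct resv_node *next;
};

/* Returns the write address for `size` bytes of `group` data, or 0 if a new
 * block-sized region (and a new reservation node) must be reserved first. */
static uint64_t reserve_space(struct resv_node *list, int group, size_t size)
{
    for (struct resv_node *n = list; n; n = n->next) {
        if (n->color_group == group && n->free_size >= size) {
            uint64_t addr = n->next_addr;
            n->next_addr += size;      /* advance to the next unused address */
            n->free_size -= size;      /* shrink the remaining free space */
            return addr;
        }
    }
    return 0;                          /* caller adds a new reservation node */
}
```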
  • FIG. 49 is a flow chart showing an example of processing of the group value calculation unit 201 and the reservation list management unit 202 according to the present embodiment.
• In step A 1 , the group value calculation unit 201 calculates a color group value of the write target data.
• In step A 2 , the reservation list management unit 202 searches the reservation list 32 based on the color group value of the write target data.
  • the reservation list management unit 202 determines whether or not there is an appropriate reservation node having the color group value of the write target data and having free space equal to or more than the size of the write target data.
• if no appropriate reservation node is found, the reservation list management unit 202 references the memory usage information 11 , the memory specific information 12 , and the coloring table 14 to reserve a new physical address region of the block size from the physical address (NAND logical address) space.
  • the reservation list management unit 202 also updates the address conversion information 13 by associating the logical address of the write target data with one of the physical addresses (for example, the top physical address) of the reserved physical address region via the address management unit 18 .
• In step A 5 , the reservation list management unit 202 adds a reservation node for the reserved one block region to the reservation list 32 and sets the color group value, reservation address, and free space size in the reservation node. Then, the processing proceeds to step A 8 .
• In step A 6 , if an appropriate reservation node is found, the reservation list management unit 202 sets the reservation address of the appropriate reservation node as the physical address and updates the address conversion information 13 by associating the logical address of the write target data with the physical address via the address management unit 18 .
• In step A 7 , the reservation list management unit 202 updates the reservation address of the appropriate reservation node and the free space size. Then, the processing proceeds to step A 8 .
• In step A 8 , the reservation list management unit 202 determines whether or not the updated free space size of the appropriate reservation node is smaller than an arbitrary size.
• If it is not smaller, the processing ends.
• In step A 9 , if the free space size is smaller, the reservation list management unit 202 discards the appropriate reservation node from the reservation list 32 , and then the processing ends.
  • FIG. 50 is a diagram showing an example of a state transition of the address conversion information 13 in the present embodiment.
• the group value calculation unit 201 references the coloring table 14 based on the logical address “0x0010_0000” to calculate a color group value for the logical address “0x0010_0000”.
  • the reservation list management unit 202 searches the reservation list 32 based on the color group value.
• the reservation list management unit 202 determines a physical address “0x0030_0000” for the logical address “0x0010_0000” based on the memory usage information 11 , the memory specific information 12 , and the coloring table 14 .
• the group value calculation unit 201 reserves an address region for one block region from the physical address “0x0030_0000”.
  • the group value calculation unit 201 adds a reservation node corresponding to the reserved address region to the reservation list 32 .
  • the group value calculation unit 201 sets the color group value calculated in state 1 to the reservation node.
• the group value calculation unit 201 references the coloring table 14 based on the logical address “0x0030_0000” to calculate a color group value for the logical address “0x0030_0000”.
  • the reservation list management unit 202 searches the reservation list 32 based on the color group value. In this example, it is assumed that a reservation node corresponding to the color group value is detected.
• the reservation list management unit 202 determines the reserved physical address “0x0040_0000” of the detected reservation node as the physical address for the logical address “0x0030_0000”.
  • data of the block size is configured by a plurality of pieces of write target data belonging to a group of the same access frequency based on coloring information of the plurality of pieces of write target data.
  • data with a high access frequency can be concentrated in a specific block so that in the memory management device 1 adopting the overwrite method, it becomes possible to decrease the number of blocks with a high erasure frequency and prolong the life of the nonvolatile semiconductor memories 9 , 10 .
• in a conventional system, the MPU uses a DRAM as a main memory. If such a system is shut down, execution code and data in the main memory and a context of a process are stored in the secondary storage device. Thus, when the system is reactivated, it is necessary to reload necessary execution code and data into the memory from the secondary storage device via an I/O interface. Further, each program is initialized again. Thus, the activation time of the system is frequently long.
  • the memory management device capable of reducing the time needed for shutdown and activation and storing data with a high level of safety in consideration of properties of a nonvolatile memory will be described.
  • the fifth embodiment relates to data movement from the volatile semiconductor memory 8 to the nonvolatile semiconductor memories 9 , 10 when the information processing device 100 is shut down.
  • the memory map of the mixed main memory 2 is as shown in FIG. 3 .
• the volatile semiconductor memory 8 corresponds to the DRAM region.
• in the volatile semiconductor memory 8 , dirty data that is updated in the volatile semiconductor memory 8 and is not updated in the nonvolatile semiconductor memories 9 , 10 may be present.
  • FIG. 51 shows an example of a dirty bit field DBF of the volatile semiconductor memory 8 provided in the information storage unit 17 .
  • Each column of the dirty bit field DBF corresponds to index information set based on a physical address and has flag data indicating whether data thereof is dirty set thereto.
  • Flag data “0” indicates that data corresponding to the entry thereof has been erased or data thereof has been read into the volatile semiconductor memory 8 , but has not yet been updated (synchronized) and flag data “1” indicates that the corresponding data is updated in the volatile semiconductor memory 8 and is not updated in the nonvolatile semiconductor memories 9 , 10 (not synchronized).
• data corresponding to an entry with the data “1” needs to be transferred to the nonvolatile semiconductor memories 9 , 10 at shutdown, and data corresponding to an entry with the data “0” need not be transferred to the nonvolatile semiconductor memories 9 , 10 .
• when data is erased from the volatile semiconductor memory 8 , or when data is read into the volatile semiconductor memory 8 and synchronized with the nonvolatile semiconductor memories 9 , 10 , the memory management device 1 sets the flag data of the corresponding entry to “0”.
• when data is updated in the volatile semiconductor memory 8 , the memory management device 1 sets the flag data of the corresponding entry to “1”.
  • FIG. 52 shows an example of processing when the information processing device 100 is shut down. This processing is performed by, for example, the processing unit 15 .
  • a total size SA of data not updated in the nonvolatile semiconductor memories 9 , 10 is calculated (step IS 31 ). That is, entries of the dirty bit field DBF are searched to detect data whose flag data is “1”. The size of the detected data whose flag data is “1” is totaled to calculate a non-updated data size SA.
  • Each entry of the dirty bit field DBF is set, as described above, for each page size of the nonvolatile semiconductor memory.
• the non-updated data size SA can be determined by counting the number of entries whose flag data is “1” and multiplying the counted value by the page size.
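As code, the computation of SA is a single counting loop; the field names and the page-size parameter are illustrative.

```c
#include <stddef.h>
#include <stdint.h>

/* SA = (number of entries whose flag data is 1) x page size */
static size_t non_updated_size(const uint8_t *dbf, size_t entries, size_t page)
{
    size_t dirty = 0;
    for (size_t i = 0; i < entries; i++)
        if (dbf[i] == 1)              /* flag 1: updated in DRAM, not in NAND */
            dirty++;
    return dirty * page;              /* SA */
}
```

With a 4,096-byte page and, say, 12 entries flagged “1”, SA = 12 × 4096 = 49,152 bytes.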
  • a free space size SB of the nonvolatile semiconductor memory is calculated (step IS 32 ).
• when data in the volatile semiconductor memory 8 is written into the nonvolatile semiconductor memory at shutdown, the data is written into an SLC region of the nonvolatile semiconductor memory 9 in consideration of faster writing and reading and the possibility that the data may be stored for a long period of time. More specifically, the data is preferentially written into, for example, a B region of the SLC region shown in FIG. 3 .
  • the memory management device 1 manages writing into the nonvolatile semiconductor memories 9 , 10 based on information of the coloring table 14 .
  • shutdown processing according to the present embodiment ignores this principle and preferentially stores data in, for example, the B region of the nonvolatile semiconductor memory 9 , thereby maintaining high speed and high reliability for the stored data.
  • more specifically, the free space size of the B region is calculated.
  • the free space size is calculated based on, for example, the content of the memory usage information 11 .
  • in step IS 33, the calculated non-updated data size SA is compared with the free space size SB of the B region. If the non-updated data size SA is equal to or less than the free space size SB of the B region, the non-updated data in the volatile semiconductor memory 8 is written into the B region of the nonvolatile semiconductor memory (step IS 34). Next, based on the writing into the B region, the address management information shown in FIG. 4 is updated (step IS 35).
  • if, in step IS 33, the calculated non-updated data size SA is determined to be larger than the free space size SB of the B region, normal write processing is performed. That is, according to the principle, data is written by referencing the coloring table 14 (step IS 36). Then, the address management information is updated (step IS 37).
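  • the branch in steps IS 32 to IS 37 might then be sketched as follows; the helper functions are hypothetical stand-ins for the processing unit 15 and are not part of the embodiment:

      #include <stddef.h>

      extern size_t dbf_non_updated_size(void);            /* step IS31 */
      extern size_t b_region_free_size(void);              /* step IS32 */
      extern void   write_dirty_data_to_b_region(void);    /* step IS34 */
      extern void   write_dirty_data_via_coloring(void);   /* step IS36 */
      extern void   update_address_management_info(void);  /* IS35/IS37 */

      void shutdown_flush(void)
      {
          size_t sa = dbf_non_updated_size();  /* non-updated data size SA  */
          size_t sb = b_region_free_size();    /* free space SB of B region */

          if (sa <= sb)                        /* step IS33                 */
              write_dirty_data_to_b_region();  /* fast, reliable SLC region */
          else
              write_dirty_data_via_coloring(); /* normal, principled path   */

          update_address_management_info();
      }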
  • the dirty bit field DBF is provided in the information storage unit 17 , whether or not data in the volatile semiconductor memory 8 is updated is managed based on flag data, and data in the volatile semiconductor memory 8 is transferred to the nonvolatile semiconductor memory 9 based on flag data of the dirty bit field DBF when the information processing device 100 is shut down. Therefore, non-updated data can reliably be transferred to the nonvolatile semiconductor memory 9 when the shutdown is executed.
  • non-updated data output from the volatile semiconductor memory 8 when the shutdown is executed is written into the SLC region of the nonvolatile semiconductor memory 9 .
  • the fifth embodiment is intended to make activation of the information processing device 100 faster.
  • if, for example, an animation player and a browser are operating at shutdown and the priority is set so that the animation player is likely to be scheduled next, the information processing device 100 can operate faster when the code of the higher-priority animation player has already been transferred to the volatile semiconductor memory 8 after the information processing device 100 is activated.
  • pre-reading (look-ahead) hint information is added to the coloring table 14 for the purpose of making activation faster and the information processing device 100 is activated by using the pre-reading hint information.
  • the pre-reading hint information is set to the coloring table 14 when the shutdown is executed. That is, the operating system 27 can reduce memory access overheads at activation to enable faster activation by storing the pre-reading hint information in the coloring table 14 in a shutdown process.
  • FIG. 53 shows an example of the coloring table 14 applied in the present embodiment.
  • a field of pre-reading hint information is added to each entry of the coloring table 14 shown in FIG. 8 .
  • the pre-reading hint information is, for example, flag data provided in a field of the static color information.
  • flag data "0" indicates that data corresponding to the entry is not read ahead, and flag data "1" indicates that data corresponding to the entry is read ahead.
  • the flag data is not limited to binary data and may be multi-valued data.
  • the flag data as the pre-reading hint information is set to the coloring table 14 in, for example, a shutdown process of the operating system 27 .
  • FIG. 54 shows setting processing of pre-reading hint information. This processing is performed by, for example, the processing unit 15 .
  • pre-reading hint information is first added to the address at which code data needed for activation is stored (step IS 41 ). That is, the flag data “1” is set to the corresponding entry of the coloring table 14 as the pre-reading hint information.
  • the pre-reading hint information is added to the context of the process with the highest priority (step IS 42 ). That is, the flag data “1” is set to the entry corresponding to the context of the process with the highest priority of the coloring table 14 as the pre-reading hint information.
  • Data with a high priority includes, for example, initialization code data of a device, the context of a process with a high priority when shut down or the like.
  • the flag data "0" is set as pre-reading hint information for data whose static color information, for example, the static reading frequency (SR_color), is low, even if the data is related to a process with a high priority.
  • for example, an address space to which MPEG data is mapped corresponds to such data, and the address space is set so that no pre-reading occurs.
  • in step IS 43, whether pre-reading hint information has been added up to the set data size is determined. That is, whether the pre-read data exceeds the size of the region of the volatile semiconductor memory 8 in which the pre-read data is to be stored is determined.
  • the usage size of the volatile semiconductor memory 8 is set by, for example, the user. Thus, whether the set size is exceeded is determined. If the set size is not exceeded, the processing returns to step IS 42 to perform the above operation. If the set size is determined to be exceeded, the processing ends. In this manner, pre-reading hint information is set to the coloring table 14 at shutdown, as sketched below.
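  • a minimal sketch of steps IS 41 to IS 43, assuming a hypothetical view of a coloring table entry and assuming the entries are already ordered by priority (activation code first, then contexts of high-priority processes):

      #include <stddef.h>

      /* hypothetical view of one coloring table entry (only fields used here) */
      struct ct_entry {
          unsigned prefetch_hint;  /* pre-reading hint flag: 0 or 1          */
          unsigned sr_color;       /* static reading frequency SR_color      */
          size_t   size;           /* size of the data covered by this entry */
      };

      /* flag entries until the DRAM usage size set by the user is reached */
      void set_prefetch_hints(struct ct_entry *ct, size_t n,
                              size_t budget, unsigned min_sr)
      {
          size_t flagged = 0;
          for (size_t i = 0; i < n; i++) {
              if (flagged >= budget)        /* step IS43: set size reached */
                  break;
              if (ct[i].sr_color < min_sr)  /* skip rarely read data such  */
                  continue;                 /* as streamed MPEG data       */
              ct[i].prefetch_hint = 1;      /* steps IS41/IS42             */
              flagged += ct[i].size;
          }
      }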
  • execution code that is always executed and data that is always read are present in the activation process of the information processing device 100 .
  • the operating system 27 can know the execution code executed in an early stage of activation and the corresponding data regions.
  • data is transferred from the nonvolatile semiconductor memory to the volatile semiconductor memory in parallel with the activation process by using pre-reading hint information set to the coloring table 14 .
  • FIG. 55 shows processing of the operating system 27 at activation.
  • the coloring table 14 is searched (step IS 51) and the flag data serving as pre-reading hint information of each entry is read (step IS 52).
  • in step IS 53, whether the flag data is "1" is determined. If the flag data is "1", data corresponding to the entry is read from the nonvolatile semiconductor memories 9 , 10 (step IS 54). That is, data to which pre-reading hint information is attached is transferred, with priority over other data, from the nonvolatile semiconductor memories 9 , 10 to the volatile semiconductor memory 8 .
  • if the flag data is "0" in the determination in step IS 53, data corresponding to the entry is not read.
  • in step IS 55, whether the next entry is present in the coloring table 14 is determined. If the next entry is present, the processing returns to step IS 51 to repeat the above operation. If the next entry is not present, the processing ends.
  • the end condition of the processing is not limited to the absence of a next entry; if a write size for the volatile semiconductor memory 8 at activation is preset, the processing can be set to end when that write size is reached. By setting the write size in this manner, free space can be secured in the volatile semiconductor memory 8 . A sketch of this loop follows.
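  • the activation-side loop of steps IS 51 to IS 55 might be sketched as follows, reusing the hypothetical struct ct_entry above; read_entry_from_nand stands in for the actual transfer from the nonvolatile semiconductor memories 9 , 10 to the volatile semiconductor memory 8 :

      #include <stddef.h>

      extern void read_entry_from_nand(size_t index);   /* hypothetical */

      void prefetch_at_activation(const struct ct_entry *ct, size_t n,
                                  size_t write_limit)
      {
          size_t written = 0;
          for (size_t i = 0; i < n; i++) {   /* steps IS51, IS55        */
              if (ct[i].prefetch_hint != 1)  /* step IS53               */
                  continue;
              read_entry_from_nand(i);       /* step IS54               */
              written += ct[i].size;
              if (written >= write_limit)    /* optional end condition: */
                  break;                     /* keep DRAM free space    */
          }
      }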
  • as described above, when the information processing device 100 is shut down, pre-reading hint information is added to the entry of the coloring table corresponding to data likely to be executed immediately after activation, and at activation the pre-reading hint information is searched to preferentially transfer that data from the nonvolatile semiconductor memories 9 , 10 to the volatile semiconductor memory 8 .
  • the operating system 27 sets static color information of the coloring table 14 shown in FIGS. 5 and 8 to each piece of data.
  • Setting methods of static color information for the coloring table 14 include [1] a setting based on an extension or a name of a file, [2] a setting based on a name of a directory, [3] a setting based on a shadow file, [4] a setting using an extended attribute of a file system, [5] a setting based on a header attached to a file of software (for example, an application) or data (for example, video compressed data of MPEG2 or the like), [6] a setting based on attribute information of a virtual address space, [7] a setting based on a usage frequency of a dynamic link library, [8] a setting based on a compiler, [9] a setting based on a dynamically generated memory region, and [10] a setting using a profiler.
  • Each of the setting methods will be described below.
  • the operating system 27 receives, from the user (including the program developer), a setting of the relationship between the extension of a file and static color information via a kernel command line.
  • for example, static color information "1" and "2" is set to the extensions "jpeg" and "mpeg", respectively.
  • the relationship between the extension of the file and the static color information is set to the operating system 27 .
  • the operating system 27 determines the static color information of data based on the extension of the file corresponding to the data (the file in which the data is arranged) and sets the static color information to the coloring table 14 .
  • the operating system 27 manages mapping data associating the data with the file.
  • the operating system 27 may reference a table associating the extension of the file with the static color information.
  • in such a table, the relationship between the extension of the file and the static color information may be set in advance.
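  • one hedged illustration: if the relationship were passed on a kernel command line in a hypothetical syntax such as "s_color=jpeg:1,mpeg:2", the operating system could parse it into a lookup table roughly as follows (the syntax and all names here are assumptions, not the embodiment's):

      #include <stdlib.h>
      #include <string.h>

      struct ext_color { char ext[16]; int color; };

      /* parse "jpeg:1,mpeg:2" into an extension-to-color table */
      int parse_s_color(char *arg, struct ext_color *tbl, int max)
      {
          int n = 0;
          for (char *tok = strtok(arg, ","); tok != NULL && n < max;
               tok = strtok(NULL, ",")) {
              char *colon = strchr(tok, ':');
              if (colon == NULL)
                  continue;                 /* malformed entry: skip */
              *colon = '\0';
              strncpy(tbl[n].ext, tok, sizeof(tbl[n].ext) - 1);
              tbl[n].ext[sizeof(tbl[n].ext) - 1] = '\0';
              tbl[n].color = atoi(colon + 1);
              n++;
          }
          return n;                         /* number of mappings parsed */
      }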
  • the operating system 27 receives a setting of the relationship between the name of the directory and the static color information using a kernel command line from the user.
  • for example, static color information "3" and "4" is specified for the directories "/tmp" and "/var/log", respectively.
  • the relationship between the name of the directory and the static color information is set to the operating system 27 .
  • the operating system 27 determines the static color information of data based on the name of the directory in which the file corresponding to the data is arranged and sets the static color information to the coloring table 14 .
  • the operating system 27 may reference a table associating the name of the directory with the static color information.
  • the relationship between the static color information and the file or the relationship between the static color information and the directory may be individually set by the user in the file system.
  • for example, the user generates a shadow file for a file.
  • the shadow file is named by modifying the name and extension of the file to which it corresponds.
  • for example, for a file "foo.ext", a shadow file ".foo.ext.s_color" is generated in the same directory.
  • the user causes the shadow file to hold the relationship between the static color information and the file.
  • for example, the static color information of the file "foo.ext" is set into the shadow file ".foo.ext.s_color".
  • the operating system 27 determines the static color information of data based on the shadow file of the file corresponding to the data and sets the static color information to the coloring table 14 .
  • the shadow file may also be generated for a directory so that the relationship between the static color information and the directory is held in the shadow file.
  • the relationship between the static color information and the file or the relationship between the static color information and the directory set by the user in the file system is set by using, for example, the extended attribute of the file system.
  • the extended attribute is a function that allows the user to attach metadata not interpreted by the file system to a file or directory.
  • the static color information of the file or directory is set into metadata connected with the file or directory.
  • the operating system 27 determines the static color information of the data based on the metadata connected with the file corresponding to the data and sets the static color information to the coloring table 14 .
  • the operating system 27 also determines the static color information of the data based on the metadata connected with the directory in which the data is arranged and sets the static color information to the coloring table 14 .
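  • on Linux, the extended-attribute approach of [4] could be realized with the standard setxattr( )/getxattr( ) calls; the attribute name "user.s_color" is a hypothetical convention, not part of the embodiment:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <sys/xattr.h>

      /* attach static color information to a file as metadata */
      int set_static_color(const char *path, int color)
      {
          char buf[16];
          snprintf(buf, sizeof(buf), "%d", color);
          return setxattr(path, "user.s_color", buf, strlen(buf), 0);
      }

      /* read it back; returns -1 if no static color information is set */
      int get_static_color(const char *path)
      {
          char buf[16] = { 0 };
          if (getxattr(path, "user.s_color", buf, sizeof(buf) - 1) < 0)
              return -1;
          return atoi(buf);
      }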
  • the user modifies the header of a software file or data file and sets the static color information to the header of the file.
  • the operating system 27 determines the static color information of the data based on the header of the file corresponding to the data and sets the static color information to the coloring table 14 .
  • the static color information may also be set by using the above shadow file or extended attribute.
  • An application file may be divided into a plurality of sections to set static color information to each of the plurality of sections.
  • Control similar to that of the memory management device 1 can also be realized for an SSD by defining a SATA vendor-specific command for the SSD and delivering data and static color information to the SSD.
  • FIG. 56 is a block diagram showing an example of a relationship between a virtual address region in a virtual address space and attribute information.
  • an application uses virtual address regions J 34 a to J 34 f in a virtual address space J 32 .
  • the operating system 27 includes a virtual storage function.
  • the operating system 27 manages each of the virtual address regions J 34 a to J 34 f by using virtual address region data corresponding to each of the virtual address regions J 34 a to J 34 f .
  • Information J 33 is information about the virtual address space J 32 and includes the virtual address region data.
  • the virtual address region data corresponding to each of the virtual address regions J 34 a to J 34 f has a data structure including the start address, end address, first attribute information, and second attribute information. For example, at least one piece of virtual address region data is used for one process.
  • the start address and end address of each piece of virtual address region data show the start address and end address of the corresponding virtual address region.
  • the first attribute information of each piece of virtual address region data indicates whether the corresponding virtual address region is readable "r", writable "w", or executable "x", and whether it is an occupied region "p" or a shared region "s".
  • the second attribute information of each piece of virtual address region data indicates whether the corresponding virtual address region is a heap region, stack region, or file map region.
  • the virtual address region data J 35 c , J 35 d corresponding to the virtual address regions J 34 c , J 34 d will be described representatively; the other virtual address region data has similar features.
  • the virtual address region J 34 c is readable, writable, and an occupied region and thus, the operating system 27 stores “r”, “w”, and “p” in the first attribute information of the virtual address region data J 35 c.
  • the virtual address region J 34 c is a heap region and thus, the operating system 27 stores “1” indicating the heap region in the second attribute information of the virtual address region data J 35 c.
  • the virtual address region J 34 d is readable, executable, and an occupied region and thus, the operating system 27 stores “r”, “x”, and “p” in the first attribute information of the virtual address region data J 35 d.
  • the virtual address region J 34 d is a file map region and thus, the operating system 27 stores “4” indicating the file map region in the second attribute information of the virtual address region data J 35 d.
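  • a hypothetical C rendering of one piece of virtual address region data; the field layout is an assumption that merely mirrors the description above:

      /* one piece of virtual address region data */
      struct va_region_data {
          unsigned long start;     /* start address of the region           */
          unsigned long end;       /* end address of the region             */
          unsigned read   : 1;     /* first attribute: "r"                  */
          unsigned write  : 1;     /* first attribute: "w"                  */
          unsigned exec   : 1;     /* first attribute: "x"                  */
          unsigned shared : 1;     /* first attribute: "s" (0 means "p")    */
          unsigned kind;           /* second attribute: 1 = heap region,    */
                                   /* 2 = stack region, 4 = file map region */
      };

      /* e.g. J35c would be { start, end, .read = 1, .write = 1,
         .exec = 0, .shared = 0, .kind = 1 } */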
  • FIG. 57 is a flow chart showing an example of setting processing of the second attribute information of virtual address region data by the operating system 27 .
  • in step SE 1, the operating system 27 fetches the virtual address region to be set.
  • in step SE 3, the operating system 27 determines whether or not the virtual address region is a heap region.
  • if it is, in step SE 4, the operating system 27 sets "1" to the second attribute information.
  • in step SE 5, the operating system 27 determines whether or not the virtual address region is a stack region.
  • if it is, in step SE 6, the operating system 27 sets "2" to the second attribute information.
  • in step SE 7, the operating system 27 determines whether or not the virtual address region is a file map region.
  • if it is, in step SE 8, the operating system 27 sets "4" to the second attribute information.
  • in step SE 9, the operating system 27 determines whether or not to set the second attribute information for another virtual address region.
  • if the second attribute information should be set for another virtual address region, the processing returns to step SE 1.
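  • the flow of steps SE 1 to SE 8 might be condensed into a classification function like the following, reusing the hypothetical struct va_region_data above; the predicates are assumed to be supplied by the virtual memory subsystem:

      extern int is_heap_region(const struct va_region_data *r);
      extern int is_stack_region(const struct va_region_data *r);
      extern int is_file_map_region(const struct va_region_data *r);

      /* derive the second attribute information for one region */
      void set_second_attribute(struct va_region_data *r)
      {
          if (is_heap_region(r))           /* step SE3 */
              r->kind = 1;                 /* step SE4 */
          else if (is_stack_region(r))     /* step SE5 */
              r->kind = 2;                 /* step SE6 */
          else if (is_file_map_region(r))  /* step SE7 */
              r->kind = 4;                 /* step SE8 */
      }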
  • FIG. 58 is a diagram showing an example of a setting of static color information based on the virtual address region data J 35 c.
  • FIG. 58 shows a case when static color information of the data arranged in the virtual address region J 34 c is set to the coloring table 14 based on the virtual address region data J 35 c managed by the operating system 27 .
  • the operating system 27 generates the static writing frequency SW_color, the static reading frequency SR_color, and the data life SL_color for the data in the virtual address region J 34 c based on the first attribute information and the second attribute information of the virtual address region data J 35 c , and sets them to the coloring table 14 .
  • if the data in the virtual address region J 34 c is allocated to the logical address space, which is a real memory, due to a page fault, the operating system 27 generates a data generation time ST_color for the data in the virtual address region J 34 c and sets the data generation time ST_color to the coloring table 14 .
  • the writing count and reading count for the data in the virtual address region J 34 c are updated by the memory management device 1 .
  • Commands and libraries have dependence relationships. For example, when a command is executed, the libraries on which the command depends are used.
  • the score of a command is determined in advance and the score of a (dynamically linked) library used by the command is determined based on the score of the command.
  • the score is assumed to be a value determined based on the usage frequency. In the example in FIGS. 59 and 60 described later, for example, the value of the score increases with an increasing usage frequency.
  • the static writing frequency SW_color and the static reading frequency SR_color for the data contained in a library are set based on the score of the library.
  • the score may be determined by using a dynamic linker that dynamically links a library.
  • the score of each library is incremented each time the library is linked by the dynamic linker. More specifically, if the dynamic linker is used, the score of a library is initialized to 0 and then, each time the library is linked, the score of the linked library is incremented. As a result, the more often a library is linked, the higher its score.
  • FIG. 59 is a diagram showing an example of the dependence relationships between commands and libraries.
  • a command uses at least one library.
  • the score of a command is preset.
  • the score of a library is the sum of scores of commands using the library or libraries using the library.
  • the score of a command “cp” is set to “5”.
  • the command “cp” uses libraries “libacl.so.1” and “libselenux.so.1”.
  • the scores of the libraries "libacl.so.1" and "libselenux.so.1" are each set to "5", the score of the command "cp" that uses them.
  • the score of a command “bash” is set to “10”.
  • the command “bash” uses a library “libncurses.so.5”.
  • the score of the library "libncurses.so.5" is set to "10", the score of the command "bash" that uses it.
  • a library “libdl.so.2” is used by the libraries “libselenux.so.1” and “libncurses.so.5”.
  • the score of the library "libdl.so.2" is set to "15", the sum of the scores of the libraries "libselenux.so.1" and "libncurses.so.5" that use it.
  • the scores are set to other commands and libraries according to similar rules.
  • the score of each command can be modified.
  • the method of inheriting a score can also be modified in various ways. If, for example, a parent library branches to a plurality of child libraries (for example, when the parent library selects and uses one of the plurality of child libraries), the score of a child library may be the score of the parent library divided by the number of child libraries. If the parent library needs the plurality of child libraries simultaneously, the same score as that of the parent library may be set to each child library.
  • FIG. 60 is a diagram showing an example of the scores of commands and the scores of libraries.
  • the scores of libraries calculated following the dependence relationships in FIG. 59 are shown.
  • FIG. 61 is a diagram showing another calculation example of the scores of libraries based on the scores of commands.
  • in this example, the dependence relationships between libraries are not used, and the score of each library is calculated as the sum of the scores of the commands using the library, as sketched below.
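  • a minimal sketch of this simpler calculation (library score = sum of the scores of the commands using it); the struct layout is hypothetical, and the dynamic-linker variant would instead increment a library's score by one on every link:

      #define MAX_LIBS 8

      struct lib { const char *name; int score; };

      struct cmd {
          const char *name;
          int         score;             /* preset, e.g. "cp" = 5         */
          struct lib *libs[MAX_LIBS];    /* libraries used by the command */
          int         nlibs;
      };

      /* every command adds its own score to each library it uses */
      void score_libraries(struct cmd *cmds, int ncmds)
      {
          for (int i = 0; i < ncmds; i++)
              for (int j = 0; j < cmds[i].nlibs; j++)
                  cmds[i].libs[j]->score += cmds[i].score;
      }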
  • FIG. 62 is a diagram showing an example of a setting of static color information using a score of a library.
  • FIG. 62 shows a case in which static color information of the data arranged in the virtual address region J 34 d is set to the coloring table 14 based on the virtual address region data J 35 d managed by the operating system 27 .
  • if the data in the virtual address region J 34 d is allocated to a logical address space due to a page fault, the operating system 27 generates the data generation time ST_color for the data in the virtual address region J 34 d and sets the data generation time ST_color to the coloring table 14 .
  • the writing count and reading count for the data in the virtual address region J 34 d are updated by the memory management device 1 .
  • a compiler has a function capable of predicting the usage frequency of a variable or a function.
  • the user sets static color information to data containing a variable or function based on the frequency of the variable or the frequency of the function predicted by the function of the compiler. Accordingly, the static color information can be set more finely than in units of files.
  • the compiler can bring user-specified variables or functions together in a specific section at compile time.
  • the user sets static color information to data containing variables and functions brought together by the function of the compiler. Accordingly, variables and functions with a comparable frequency can be brought together in the same write unit.
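  • with GCC or Clang, such grouping can be expressed with section attributes; the section names ".hot_read" and ".cold_text" are hypothetical, and assigning static color information per section is an assumption in the spirit of the text:

      /* variables the developer predicts to be read often, written rarely */
      __attribute__((section(".hot_read")))
      const int lookup_table[256] = { 0 };

      /* code the developer predicts to run rarely */
      __attribute__((section(".cold_text")))
      void rare_error_path(void)
      {
          /* ... */
      }

      /* static color information (e.g. SW_color, SR_color) could then be
         assigned per section rather than per file */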
  • FIG. 64 is a diagram showing an example of a setting of static color information using a compiler.
  • the user predicts the frequency of a variable and the frequency of a function by using a compiler and divides the compiled software into sections to set static color information to each section.
  • the operating system 27 sets “low” to the static writing frequency SW_color and “high” to the static reading frequency SR_color for the section containing “exception handler”.
  • the operating system 27 sets “low” to the static writing frequency SW_color and “low” to the static reading frequency SR_color for the section containing “exception handler”.
  • the user sets static color information to a dynamically generated (allocated and released) memory region based on the usage frequency obtained from a profiler described later or on a predicted usage frequency.
  • static color information is made settable to data arranged in a dynamically generated memory region.
  • FIG. 65 is a diagram showing an example of a setting of static color information based on the usage frequency of a dynamically generated memory region.
  • the operating system 27 sets “low” to the static writing frequency SW_color and “high” to the static reading frequency SR_color for data arranged in a memory region “kernel page table”.
  • the operating system 27 sets “high” to the static writing frequency SW_color and “high” to the static reading frequency SR_color for data arranged in a memory region “kernel stack”.
  • the operating system 27 sets “high” to the static writing frequency SW_color and “high” to the static reading frequency SR_color for data arranged in a buffer region of an animation player.
  • the madvise( ) system call advises the kernel how to handle paging input/output of a memory block of length bytes starting at an address addr.
  • the kernel can accordingly select an appropriate technique such as read-ahead or caching.
  • a function to set static color information of the specified memory region may be added to the system call.
  • a new system call to set static color information of the specified memory region may be added.
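  • a short illustration: the first function below uses the real madvise( ) advice MADV_WILLNEED; the second shows what a color-setting extension might hypothetically look like (MADV_S_COLOR_BASE and its encoding are inventions for illustration, not a real kernel API):

      #include <sys/mman.h>

      /* standard: hint that the block will be needed soon (read-ahead) */
      void advise_prefetch(void *addr, size_t length)
      {
          madvise(addr, length, MADV_WILLNEED);
      }

      /* hypothetical extension carrying static color information */
      #define MADV_S_COLOR_BASE 100   /* NOT a real advice value */
      void advise_static_color(void *addr, size_t length, int color)
      {
          madvise(addr, length, MADV_S_COLOR_BASE + color);
      }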
  • a profiler has a function to acquire, for example, performance information of an application.
  • the performance information contains statistical information such as the usage frequency.
  • the user sets static color information to an application based on performance information generated by a profiler.
  • in this case, static color information is set not based on a usage frequency predicted in advance but in accordance with the actual usage state.
  • as described above, static color information used by the memory management device 1 is set to the coloring table 14 , and based on the static color information, the life of the nonvolatile semiconductor memories 9 , 10 can be prolonged.
  • FIG. 66 is a block diagram showing an example of the configuration of the memory management device 1 , the information processing device 100 , and memory devices H 32 a , H 32 b , H 32 c according to the present embodiment.
  • the same reference numerals are attached to the same or similar elements to those in the first embodiment and the description thereof is omitted.
  • the processor 3 b of the processors 3 a , 3 b , 3 c will representatively be described, but the other processors 3 a , 3 c can also be described in the same manner.
  • the processing unit 15 included in the memory management device 1 includes the memory usage information management unit 22 , a connection detection unit H 33 , a determination unit H 34 , a notification unit H 35 , and a replacement control unit H 36 .
  • the memory usage information 11 , the memory specific information 12 , the address conversion information 13 , and the coloring table 14 described above are stored in the information storage unit 17 included in the memory management device 1 . Further, the processing unit 15 of the memory management device 1 is connected to a plurality of connector units H 44 a , H 44 b , H 44 c .
  • the memory devices H 32 a , H 32 b , H 32 c include memory units H 37 a , H 37 b , H 37 c , normal notification units H 38 a , H 38 b , H 38 c , warning notification units H 39 a , H 39 b , H 39 c , usage stop notification units H 40 a , H 40 b , H 40 c , and connection operation units H 41 a , H 41 b , H 41 c respectively.
  • the memory devices H 32 a , H 32 b , H 32 c include connector units H 42 a , H 42 b , H 42 c respectively.
  • Management information H 43 a , H 43 b , H 43 c is stored in the memory units H 37 a , H 37 b , H 37 c respectively. Details of the management information H 43 a , H 43 b , H 43 c will be described later.
  • the connector units H 42 a , H 42 b , H 42 c included in the memory devices H 32 a , H 32 b , H 32 c are connected to connector units H 44 a , H 44 b , H 44 c respectively.
  • the configuration of the memory management device 1 will be described in more detail.
  • the memory device H 32 a of the memory devices H 32 a , H 32 b , H 32 c will representatively be described, but the other memory devices H 32 b , H 32 c can also be described in the same manner.
  • the connection detection unit H 33 detects connection between the memory management device 1 and the memory device H 32 a .
  • the connection detection unit H 33 detects that the memory device H 32 a is electrically connected to the memory management device 1 (a “connected state” is detected).
  • the connection detection unit H 33 detects that the memory device H 32 a is electrically removed from the memory management device 1 (a “removal ready state” is detected).
  • the determination unit H 34 determines the usage state of the memory device H 32 a based on the memory usage information 11 .
  • the usage state includes, for example, “normal state”, “warning state”, and “usage stopped state”.
  • the determination unit H 34 determines the usage state of the memory device H 32 a , for example, periodically.
  • the determination unit H 34 also determines the usage state of the memory device H 32 a , for example, each time the memory device H 32 a is accessed. The method of determining the usage state will be described later.
  • the notification unit H 35 notifies the memory device H 32 a of the usage state based on the usage state determined by the determination unit H 34 .
  • the replacement control unit H 36 reads, from the management information H 43 a stored in the memory unit H 37 a , the erasure count, writing occurrence count, and reading occurrence count for each predetermined region of the memory unit H 37 a , and stores them in the memory usage information 11 .
  • conversely, the replacement control unit H 36 reads, from the memory usage information 11 stored in the information storage unit 17 , the erasure count, writing occurrence count, and reading occurrence count for each predetermined region of the memory device H 32 a , and stores them in the management information H 43 a of the memory unit H 37 a . Details of the management information H 43 a will be described later.
  • the erasure count is managed in units of block regions and the writing occurrence count and reading occurrence count are managed in units of page regions.
  • the memory unit H 37 a is an SLC type NAND flash memory or an MLC type NAND flash memory and corresponds to the nonvolatile semiconductor memories 9 , 10 in the first embodiment.
  • the memory unit H 37 a may be an SLC type NAND flash memory (SLC region) in a portion of regions thereof and an MLC type NAND flash memory (MLC region) in the region excluding the SLC region.
  • when a notification of "normal state" is received from the notification unit H 35 of the memory management device 1 , the normal notification unit H 38 a displays the normal state.
  • the normal notification unit H 38 a is an emitter of the first color (blue) and displays the normal state by being lit.
  • when a notification of "warning state" is received from the notification unit H 35 , the warning notification unit H 39 a displays the warning state.
  • the warning notification unit H 39 a is an emitter of the second color (yellow) and displays the warning state by being lit.
  • when a notification of "usage stopped state" is received from the notification unit H 35 of the memory management device 1 , the usage stop notification unit H 40 a displays the usage stopped state.
  • the usage stop notification unit H 40 a is an emitter of the third color (red) and displays the usage stopped state by being lit.
  • when the memory device H 32 a is to be electrically disconnected (removed) from the memory management device 1 , the connection operation unit H 41 a notifies the memory management device 1 that the memory device H 32 a is to be removed (removal notification).
  • the connection operation unit H 41 a includes, for example, an electric or mechanical button and, when the memory device H 32 a is removed, makes a removal notification to the memory management device 1 by the button being pressed by the user.
  • similarly, when the memory device H 32 a is electrically connected (mounted) to the memory management device 1 , the connection operation unit H 41 a notifies the memory management device 1 that the memory device H 32 a has been connected (mounting notification).
  • a mounting notification is made to the memory management device 1 by the button being pressed by the user.
  • the memory device H 32 a and the memory management device 1 are electrically connected by the connector unit H 42 a being connected to the connector unit H 44 a.
  • FIG. 67 is a graph showing an example of changes of the erasure count of the memory unit H 37 a .
  • the horizontal axis thereof represents the time and the vertical axis thereof represents the erasure count.
  • the memory unit H 37 a of the memory device H 32 a is accessed (read, written, erased) by the processor 3 b .
  • the erasure count, writing occurrence count, and reading occurrence count of the memory unit H 37 a increase with the passage of time and the erasure count reaches the erasable upper limit count of the memory unit H 37 a at some time.
  • when the erasure count of the memory unit H 37 a reaches the erasable upper limit count, writing, reading, and erasure of data with respect to the memory unit H 37 a are not desirable from the viewpoint of reliability.
  • the memory management device 1 manages, as described above, the erasure count, writing occurrence count, and reading occurrence count of the nonvolatile semiconductor memories 9 , 10 (memory device H 32 a ) through the memory usage information 11 .
  • the memory management device 1 monitors the usage state of the memory device H 32 a based on the memory usage information 11 and warns the memory device H 32 a before the erasure count of the memory unit H 37 a reaches the erasable upper limit count.
  • FIG. 68 is a graph showing an example of the usage state of the memory device H 32 a based on the erasure count of the memory device H 32 a .
  • the horizontal axis thereof represents the time and the vertical axis thereof represents the erasure count.
  • writing can also be used, like the erasure, for determination of the usage state of the memory device H 32 a.
  • FIG. 68 shows an example of changes of the erasure count of the memory unit H 37 a by a broken line.
  • a regression curve ΔtERASE (for example, a linear regression curve) is calculated from the changes of the erasure count.
  • an erasure count ERASE_alert at a predetermined time (warning period) tERASE_before after the current time is predicted from the regression curve. If ERASE_alert exceeds the erasable upper limit count ERASE_max, the usage state of the memory unit H 37 a is determined to be "warning state". If ERASE_alert does not exceed the erasable upper limit count ERASE_max, the usage state of the memory unit H 37 a is determined to be "normal state". If the erasure count at the current time exceeds the erasable upper limit count ERASE_max, the usage state of the memory unit H 37 a is determined to be "usage stopped state".
  • the erasure count of the memory unit H 37 a is managed in units of block regions.
  • the memory unit H 37 a contains a plurality of block regions. Variations of the erasure count between the plurality of block regions contained in the memory unit H 37 a are small due to wear leveling.
  • the average value of the erasure count of each of the plurality of block regions contained in the memory unit H 37 a is set as the erasure count of the memory unit H 37 a .
  • the maximum erasure count of the plurality of block regions contained in the memory unit H 37 a may be set as the erasure count of the memory unit H 37 a . This also applies to the reading occurrence count and writing occurrence count.
  • FIG. 69 is a graph showing an example of the usage state of the memory device H 32 a based on the reading occurrence count of the memory device H 32 a .
  • the horizontal axis thereof represents the time and the vertical axis thereof represents the reading occurrence count.
  • FIG. 69 shows an example of changes of the reading occurrence count of the memory unit H 37 a by a broken line.
  • a regression curve ΔtREAD (for example, a linear regression curve) is calculated from the changes of the reading occurrence count.
  • a reading occurrence count READ_alert at a predetermined time (warning period) tREAD_before after the current time is predicted from the regression curve. If READ_alert exceeds the readable upper limit count READ_max, the usage state of the memory unit H 37 a is determined to be "warning state". If READ_alert does not exceed the readable upper limit count READ_max, the usage state of the memory unit H 37 a is determined to be "normal state". If the reading occurrence count at the current time exceeds the readable upper limit count READ_max, the usage state of the memory unit H 37 a is determined to be "usage stopped state".
  • FIG. 70 is a flow chart showing an example of notifying the memory device H 32 a of the usage state based on the erasure count of the memory device H 32 a.
  • in step HA 1, the memory usage information management unit 22 reads the memory usage information 11 .
  • in step HA 2, the memory usage information management unit 22 reads the erasure count of the memory device H 32 a at the current time from the memory usage information 11 .
  • in step HA 3, the determination unit H 34 calculates a new ΔtERASE based on the current time, a time prior to the current time, the erasure counts at those two times, and the past ΔtERASE stored in the memory usage information 11 .
  • alternatively, the determination unit H 34 may calculate ΔtERASE, the erasure count per unit time, based on the erasure start time, the current time, and the erasure count at the current time.
  • in step HA 4, the determination unit H 34 determines whether the erasure count at the current time is equal to or less than the erasable upper limit count ERASE_max.
  • if it is not, in step HA 5, the determination unit H 34 determines that the memory device H 32 a is in the usage stopped state and the processing proceeds to step HA 9.
  • otherwise, in step HA 6, the determination unit H 34 calculates ERASE_alert = ΔtERASE × tERASE_before + (the erasure count at the current time), that is, the erasure count predicted after the warning period tERASE_before passes from the current time.
  • in step HA 7, the determination unit H 34 determines whether the predicted value ERASE_alert is equal to or less than the erasable upper limit count ERASE_max.
  • if it is not, in step HA 8, the determination unit H 34 determines that the memory device H 32 a is in the warning state and the processing proceeds to step HA 9.
  • if the predicted value ERASE_alert is equal to or less than ERASE_max, the processing proceeds directly to step HA 9.
  • in step HA 9, the determination unit H 34 updates the memory usage information 11 by storing the erasure count at the current time and ΔtERASE.
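  • condensing steps HA 4 to HA 8, the classification might be sketched as follows (field and parameter names are hypothetical):

      enum usage_state { NORMAL, WARNING, USAGE_STOPPED };

      enum usage_state classify(double erase_now,  /* erasure count now     */
                                double d_erase,    /* dtERASE per unit time */
                                double t_before,   /* warning period        */
                                double erase_max)  /* erasable upper limit  */
      {
          if (erase_now > erase_max)               /* steps HA4/HA5 */
              return USAGE_STOPPED;

          /* step HA6: ERASE_alert = dtERASE * tERASE_before + current count */
          double erase_alert = d_erase * t_before + erase_now;
          if (erase_alert > erase_max)             /* steps HA7/HA8 */
              return WARNING;

          return NORMAL;
      }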
  • FIG. 71 is a flow chart showing an example of notifying the memory device H 32 a of the usage state based on the reading occurrence count of the memory device H 32 a.
  • Steps HB 1 to HB 9 in FIG. 71 are the same as steps HA 1 to HA 9 in FIG. 70 except that the determination object is the reading occurrence count instead of the erasure count; the description thereof is therefore omitted.
  • here too, if the predicted reading occurrence count READ_alert exceeds the readable upper limit count READ_max, the warning state is set.
  • the above determination processing can be modified in various ways. A modification of the determination processing in the present embodiment will be described below.
  • the determination unit H 34 calculates ΔtERASE.
  • the determination unit H 34 determines a time tERASE_max at which the erasure count is predicted to reach ERASE_max, based on the erasure count at the current time, ΔtERASE, and ERASE_max.
  • the determination unit H 34 determines a time tERASE_alert at which the warning state should be set by subtracting tERASE_before from tERASE_max.
  • when the current time reaches the time tERASE_alert, the determination unit H 34 determines that the usage state is the warning state. Alternatively, the determination unit H 34 determines the erasure count ERASE_alert at which the warning state should be set based on the erasure start time, ΔtERASE, and the time tERASE_alert at which a warning should be given, and determines that the usage state is the warning state when the erasure count becomes equal to or more than ERASE_alert.
  • FIG. 72 is a diagram showing an example of data included in the management information H 43 a.
  • the management information H 43 a contains the erasure count for each predetermined region of the memory unit H 37 a of the memory device H 32 a , the regression curve ΔtERASE for the erasure count, the erasable upper limit count ERASE_max, the warning period tERASE_before, and the erasure start time. Further, the management information H 43 a contains the reading occurrence count for each predetermined region of the memory unit H 37 a of the memory device H 32 a , the regression curve ΔtREAD for the reading occurrence count, the readable upper limit count READ_max, the warning period tREAD_before, and the read start time.
  • the erasure count, the reading occurrence count, and the regression curves ΔtERASE, ΔtREAD are information managed by the memory usage information 11 of the memory management device 1 and are stored, as will be described later, in the management information H 43 a when the memory device H 32 a is removed from the memory management device 1 .
  • FIG. 73 is a flow chart showing an example of processing after the memory device H 32 a is electrically connected to the memory management device 1 until access to the memory device H 32 a is started.
  • in step HC 1, the connection detection unit H 33 of the memory management device 1 detects that the memory device H 32 a is electrically connected to the memory management device 1 (connected state) by receiving a "mounting notification" from the memory device H 32 a .
  • in step HC 2, the memory management device 1 determines whether the management information H 43 a is stored in the memory device H 32 a . If the management information H 43 a is stored in the memory device H 32 a , the processing proceeds to step HC 3. If not, the processing proceeds to step HC 4.
  • in step HC 3, the memory management device 1 reads, from the management information H 43 a , the erasure count, writing occurrence count, and reading occurrence count for each predetermined region of the memory unit H 37 a , and stores them in the memory usage information 11 .
  • the memory management device 1 also reads, from the management information H 43 a , the erasable upper limit count ERASE_max, the readable upper limit count READ_max, and the warning periods tERASE_before, tREAD_before of the memory unit H 37 a , and stores them in the memory specific information 12 .
  • in step HC 4, the memory management device 1 generates new management information H 43 a , writes the new management information H 43 a into the memory unit H 37 a , and stores "0" in the memory usage information 11 as the values of the erasure count, writing occurrence count, and reading occurrence count for each predetermined region.
  • Access to the memory device H 32 a is started after the processing in step HC 3 or HC 4 . If access to the memory device H 32 a occurs, as described above, the erasure count, writing occurrence count, and reading occurrence count for each predetermined region of the memory usage information 11 corresponding to the memory device H 32 a are updated.
  • FIG. 74 is a flow chart showing processing after the memory management device 1 receives a “removal notification” from the memory device H 32 a until the memory device H 32 a becomes removable.
  • in step HD 1, the connection detection unit H 33 of the memory management device 1 receives a "removal notification" from the memory device H 32 a .
  • in step HD 2, the replacement control unit H 36 of the memory management device 1 reads data stored in the memory device H 32 a and writes the data into another memory device (for example, the memory device H 32 b ).
  • in step HD 3, the replacement control unit H 36 stores the writing occurrence count, reading occurrence count, and erasure count for each predetermined region of the memory device H 32 a managed by the memory management device 1 in the memory unit H 37 a of the memory device H 32 a as the management information H 43 a .
  • in this manner, usage information of the memory device H 32 a is carried over: the writing occurrence count, reading occurrence count, and erasure count for each predetermined region of the memory device H 32 a are stored in the memory unit H 37 a as the management information H 43 a when the memory device H 32 a is removed, and the management information H 43 a is read when the memory device H 32 a is next mounted.
  • FIG. 75 is a diagram showing an example of the replacement state of the memory device.
  • the information processing device 100 includes the processor 3 b , the memory management device 1 , and memory devices H 32 a to H 32 l .
  • the information processing device 100 applies RAID technology to the memory devices H 32 a to H 32 l .
  • the memory management device 1 that controls access to the memory devices H 32 a to H 32 l supports hot swapping of hardware.
  • the information processing device 100 is assumed to be a device that needs continuous operation, such as a server device.
  • the memory devices H 32 a to H 32 m have upper limits of the memory reading count and memory erasure count and are replaced when the end of life thereof is reached.
  • the memory devices H 32 a to H 32 m include display units H 45 a to H 45 m respectively.
  • the display units H 45 a to H 45 m emit light, for example, in green when the memory devices H 32 a to H 32 m are in a normal state and emit light, for example, in red when the memory devices H 32 a to H 32 m are in a warning state or usage stopped state.
  • Buttons H 46 a to H 46 l are allocated to the mounted memory devices H 32 a to H 32 l respectively.
  • the display unit H 45 k of the memory device H 32 k emits light in red.
  • the user presses the corresponding button H 46 k .
  • a removal notification is sent to the memory management device 1 .
  • the memory management device 1 performs processing such as saving data in the memory device H 32 k and turning off the memory device H 32 k.
  • the memory device H 32 k may immediately be replaced without the data being saved.
  • the user removes the memory device H 32 k and mounts the new memory device H 32 m.
  • the memory device H 32 k is used as a main storage device of the information processing device 100 , for example, a server device, personal computer, or game machine. Even if the memory device H 32 k enters the warning state, it can be reused, for example, as a medium such as an alternative to a CD-R or a photo-recording medium of a digital camera.
  • management information of the memory device H 32 k is stored in the memory device H 32 k and further, the display unit H 45 k is included in the memory device H 32 k.
  • Display units for electronic ink may be used as the display units H 45 a to H 45 m .
  • the determination unit H 34 of the memory management device 1 determines the access state (for example, "erasure count/erasable upper limit count", "reading occurrence count/readable upper limit count", and the like) of each of the memory devices H 32 a to H 32 l based on the memory usage information 11 and the memory specific information 12 .
  • the notification unit H 35 of the memory management device 1 controls the display of the display units H 45 a to H 45 l for electronic ink based on the access state of each of the memory devices H 32 a to H 32 l .
  • the display units H 45 a to H 45 l show the access state as a bar graph.
  • Display content of the display units H 45 a to H 45 l for electronic ink is maintained even if the memory devices H 32 a to H 32 l are removed from the memory management device 1 .
  • the user can mount the memory devices H 32 a to H 32 l on another information processing device for reuse with reference to the display content of the display units H 45 a to H 45 l for electronic ink.
  • FIG. 76 is a block diagram showing an example of the reuse of the memory device H 32 a.
  • the information processing device 100 is assumed to be a device, such as a server device or a personal computer, from which high reliability is demanded for data storage and access.
  • an information processing device 100 A is assumed to be a device, such as a digital camera, printer, or mobile phone, for which the high reliability demanded of the information processing device 100 is not required for data storage and access.
  • the memory device H 32 a can be used until a usage stop notification arises even after a warning is issued.
  • the user can remove the memory device H 32 a from the information processing device 100 and mount the memory device H 32 a on the information processing device 100 A for use. In this manner, the memory device H 32 a can be effectively utilized.
  • the writing management unit 20 exercises control so that data with high static color information or data with high dynamic color information is written into the specific memory device H 32 a of the memory devices H 32 a , H 32 b , H 32 c based on coloring information. Accordingly, the access count (the erasure count, reading occurrence count, and writing occurrence count) to the memory device H 32 a increases earlier than the other memory devices H 32 b , H 32 c.
  • accordingly, the specific memory device H 32 a enters the warning state earlier than the others, so that warnings are not concentrated in a short period and an increase in work load, such as replacing many memory devices in a short period of time, can be prevented.
  • the memory devices H 32 a to H 32 l , whose access counts have upper limits, can easily be mounted on the memory management device 1 and can easily be removed.
  • the memory devices H 32 a to H 32 l can be swapped while the information processing device 100 continues to operate.
  • the memory devices H 32 a to H 32 l that can be mounted on and removed from the memory management device 1 can be reused.
  • a high-reliability, high-speed, and large-capacity storage device combining the memory devices H 32 a to H 32 l can be realized, and the memory devices H 32 a to H 32 l can easily be replaced, so that the utilization rate of the information processing device 100 can be improved.
  • the present embodiment is a modification of the first embodiment.
  • a memory management device can dynamically switch the SLC region in the nonvolatile semiconductor memories 9 , 10 to the MLC region and further can switch the MLC region to the SLC region.
  • the SLC region refers to a memory region used as an SLC type NAND flash memory in the nonvolatile semiconductor memories 9 , 10 .
  • the MLC region refers to a memory region used as an MLC type NAND flash memory in the nonvolatile semiconductor memories 9 , 10 .
  • the nonvolatile semiconductor memories 9 , 10 may be an SLC region or an MLC region in the whole memory region of the nonvolatile semiconductor memories 9 , 10 or a portion of the memory region of the nonvolatile semiconductor memories 9 , 10 may be an SLC region and the memory region that is not the SLC region may be an MLC region.
  • information about whether the memory region of the nonvolatile semiconductor memories 9 , 10 is an SLC region or an MLC region (hereinafter referred to as "SLC/MLC region information") is managed by, for example, the memory specific information 12 .
  • the memory specific information 12 holds information about whether the memory region specified by a physical address is an SLC region or an MLC region in the nonvolatile semiconductor memories 9 , 10 . While the SLC/MLC region information for each memory region is assumed here to be managed by the memory specific information 12 , it may also be managed by the memory usage information 11 .
  • FIG. 78 is a diagram showing an example of the configuration of the memory management device according to the present embodiment.
  • a memory management device D 32 includes a processing unit D 33 , the working memory 16 , and the information storage unit 17 .
  • the processing unit D 33 includes a wear-out rate calculation unit D 34 , a switching determination unit D 35 , and a switching control unit D 36 . Further, the processing unit D 33 includes, like the processing unit 15 in the first embodiment described above, the address management unit 18 , the reading management unit 19 , the writing management unit 20 , the coloring information management unit 21 , the memory usage information management unit 22 , and the relocation unit 23 , but these units are omitted in FIG. 78 .
  • the memory management device D 32 in the present embodiment can switch the SLC region to the MLC region based on information about the wear-out rate of the SLC region in the nonvolatile semiconductor memories 9 , 10 . Further, the memory management device D 32 can switch the MLC region to the SLC region based on information about the wear-out rate of the MLC region in the nonvolatile semiconductor memories 9 , 10 .
  • the write wear-out rate is the ratio of the writing count to the writable upper limit count of the memory region.
  • the memory management device D 32 can similarly switch the SLC region and the MLC region dynamically based on an erasure wear-out rate, which is the ratio of the erasure count to the erasable upper limit count, and a read wear-out rate, which is the ratio of the reading count to the readable upper limit count. Further, the memory management device D 32 can switch the SLC and the MLC dynamically based on at least two of the write wear-out rate, erasure wear-out rate, and read wear-out rate.
  • the wear-out rate calculation unit D 34 references the memory usage information 11 and the memory specific information 12 to calculate the write wear-out rate of a memory region based on the writing count and the writable upper limit count of the memory region. Similarly, the wear-out rate calculation unit D 34 can calculate the read wear-out rate and the erasure wear-out rate by referencing the memory usage information 11 and the memory specific information 12 .
  • the write wear-out rate and the read wear-out rate are calculated, for example, in units of page region or block region.
  • the erasure wear-out rate is calculated, for example, in units of block region.
  • the write wear-out rate is calculated for each of a plurality of block regions contained in the SLC region or the MLC region. Due to wear leveling, variations of the write wear-out rate among the plurality of block regions contained in the SLC region or the MLC region are small. Thus, for example, the average value of the write wear-out rates of the plurality of block regions contained in the SLC region or the MLC region is set as the write wear-out rate of the SLC region or the MLC region.
  • the maximum write wear-out rate of the write wear-out rates of the plurality of block regions contained in the SLC region or the MLC region may be set as the write wear-out rate of the SLC region or the MLC region. This also applies to the read wear-out rate and the erasure wear-out rate.
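  • the wear-out rate calculation might be sketched as follows; the struct and its fields are hypothetical stand-ins for the memory usage information 11 and the memory specific information 12 :

      #include <stddef.h>

      struct block_stats {
          unsigned writes;        /* writing count of the block region */
          unsigned write_limit;   /* writable upper limit count        */
      };

      /* write wear-out rate of one block region */
      double block_wearout(const struct block_stats *b)
      {
          return (double)b->writes / (double)b->write_limit;
      }

      /* average over the block regions of an SLC or MLC region;
         the maximum could be used instead of the average */
      double region_wearout(const struct block_stats *blocks, size_t n)
      {
          double sum = 0.0;
          for (size_t i = 0; i < n; i++)
              sum += block_wearout(&blocks[i]);
          return sum / (double)n;
      }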
  • the switching determination unit D 35 determines whether the write wear-out rate of the SLC region exceeds the threshold (hereinafter, referred to as the “SLC threshold”) of the write wear-out rate set to the SLC region.
  • the switching determination unit D 35 also determines whether the write wear-out rate of the MLC region exceeds the threshold (hereinafter, referred to as the “MLC threshold”) of the write wear-out rate set to the MLC region.
  • Information of the SLC threshold and the MLC threshold of each memory region is managed by the memory specific information 11 .
  • the switching control unit D 36 exercises control to switch the SLC region to the MLC region. If the write wear-out rate of the MLC region exceeds the MLC threshold, the switching control unit D 36 exercises control to switch the MLC region to the SLC region. Further, the switching control unit D 36 updates “SLC/MLC region information” managed by the memory specific information 11 in accordance with switching of the SLC region and the MLC region.
  • if switching from the SLC region to the MLC region occurs, the switching control unit D 36 exercises control to switch one of the MLC regions to the SLC region. If switching from the MLC region to the SLC region occurs, the switching control unit D 36 also exercises control to switch one of the SLC regions to the MLC region. Accordingly, the switching control unit D 36 exercises control to minimize a change in the ratio of the SLC regions and MLC regions before and after switching of memory regions.
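  • A hedged Python sketch of how the threshold comparison and the ratio-preserving pairing of switches described above might look (the region records, the threshold value, and the plan_switches name are assumptions, not taken from the patent):

    def plan_switches(regions, slc_threshold):
        # regions: list of dicts such as {'name': 'MA', 'type': 'SLC',
        # 'write_wear_out_rate': 0.85}. Worn SLC regions are paired with
        # the least worn MLC regions so that the overall SLC/MLC ratio
        # is roughly unchanged before and after switching.
        worn_slc = [r for r in regions if r['type'] == 'SLC'
                    and r['write_wear_out_rate'] > slc_threshold]
        fresh_mlc = sorted((r for r in regions if r['type'] == 'MLC'),
                           key=lambda r: r['write_wear_out_rate'])
        # Each pair (s, m) means: switch s to MLC and m to SLC.
        return list(zip(worn_slc, fresh_mlc))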
  • the SLC region and the MLC region are switched by the switching control unit D 36 determining the memory regions to be switched in the nonvolatile semiconductor memories 9 , 10 and issuing a command.
  • the switching control unit D 36 moves data and updates the address conversion information 13 in accordance with the movement of data.
  • the memory usage information management unit 22 updates the memory usage information 11 (such as the writing count, erasure count, and reading count) of switched memory regions in accordance with switching of the SLC region and the MLC region by the switching control unit D 36 .
  • FIG. 79 is a schematic diagram showing a first example of dynamic switching of nonvolatile semiconductor memories according to the present embodiment.
  • the nonvolatile semiconductor memories 291 to 294 shown in FIG. 79 correspond to the nonvolatile semiconductor memories 9 , 10 and are used as the main memory of the information processing device 100 .
  • all memory regions of the nonvolatile semiconductor memories 291 to 293 are used as an SLC region (the nonvolatile semiconductor memories 291 to 293 are SLC type NAND flash memories).
  • all memory regions of the nonvolatile semiconductor memory 294 are used as an MLC region (the nonvolatile semiconductor memory 294 is an MLC type NAND flash memory).
  • the nonvolatile semiconductor memories 291 to 294 are, for example, memory cards.
  • if the write wear-out rate of the nonvolatile semiconductor memory 291 exceeds the SLC threshold, the switching control unit D 36 switches the nonvolatile semiconductor memory 291 from the SLC type to the MLC type. Further, the switching control unit D 36 switches the nonvolatile semiconductor memory 294 with a low write wear-out rate from the MLC type to the SLC type. Accordingly, the nonvolatile semiconductor memory 291 with a high write wear-out rate is used as the MLC type and data with a low access frequency is written thereinto.
  • the nonvolatile semiconductor memory 294 with a low write wear-out rate is used as the SLC type and data with a high access frequency is written thereinto.
  • the life of the MLC type nonvolatile semiconductor memory 291 (the period in which the MLC type nonvolatile semiconductor memory 291 can be used as the main memory) can be prolonged by applying strong ECC (Error-Correcting Code) to the MLC type nonvolatile semiconductor memory 291 . If strong ECC is applied, the speed at which data is read from a nonvolatile semiconductor memory generally falls; in the present embodiment, however, a lower reading speed is acceptable and thus strong ECC can be used.
  • the nonvolatile semiconductor memories 291 to 294 may be removed from the information processing device 100 to use the memories 291 to 294 for an application with a low writing frequency such as CD-R use.
  • FIG. 80 is a schematic diagram showing a second example of dynamic switching of nonvolatile semiconductor memories according to the present embodiment.
  • a nonvolatile semiconductor memory 295 shown in FIG. 80 corresponds to the nonvolatile semiconductor memories 9 , 10 and is used as the main memory of the information processing device 100 .
  • the nonvolatile semiconductor memory 295 is composed of memory regions used as an SLC region and memory regions used as an MLC region.
  • in the nonvolatile semiconductor memory 295 , the SLC region is switched to the MLC region based on wear-out rate information. Accordingly, effects similar to those of the example shown in FIG. 79 are gained.
  • processing to switch the SLC region to the MLC region when the write wear-out rate of the SLC region exceeds the SLC threshold is described, but processing to switch the MLC region to the SLC region when the write wear-out rate of the MLC region exceeds the MLC threshold is similar.
  • the MLC region has a lower writable upper limit count set thereto.
  • a higher writable upper limit count can be set by switching the MLC region to the SLC region. If, for example, the writable upper limit count of the MLC region is 1000 and the writable upper limit count of the SLC region is 10000, the MLC threshold is reached at a wear-out rate of 80% in the MLC region.
  • because the 80% wear-out rate carries over after switching, the region can be written into as an SLC region (1 − 0.8) × 10000 = 2000 times more.
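  • The arithmetic of this example, written out as a small Python check (assuming, as above, that the 80% wear-out rate carries over when the region is switched):

    mlc_limit, slc_limit = 1000, 10000
    mlc_threshold = 0.8                        # MLC threshold reached
    remaining_slc_writes = (1 - mlc_threshold) * slc_limit
    print(remaining_slc_writes)                # -> 2000.0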
  • further, by using this SLC region, like the MLC region, as a memory region into which data with a low access frequency is written, the life of the memory region can further be prolonged.
  • FIG. 81 is a state transition diagram showing a first example of switching control of memory regions by the switching control unit D 36 according to the present embodiment.
  • the processing described as steps OA 1 to OA 5 in FIG. 81 may be changed in order within the range in which switching of the SLC region and the MLC region, movement of data, and information updates are implemented normally.
  • a memory region MA of the nonvolatile semiconductor memory is an SLC region and memory regions MB, MC, MD of the nonvolatile semiconductor memory are MLC regions.
  • the memory regions MA, MB, MC store data Da, Db, Dc respectively.
  • the memory region MD is a save region.
  • in step OA 1, it is assumed that the write wear-out rate of the memory region MA exceeds the SLC threshold.
  • in step OA 2, the switching control unit D 36 selects one of the memory regions MB, MC (the memory region MB in the example of FIG. 81 ) in the MLC region and moves the data Db in the selected memory region MB to the save memory region MD.
  • the selection of the memory regions MB, MC in the MLC region may be made by preferentially selecting a memory region in the MLC region in which no data is stored, preferentially selecting a memory region in the MLC region in which data with low importance is stored based on the coloring table 14 , or preferentially selecting a memory region in the MLC region with a low write wear-out rate, read wear-out rate, or erasure wear-out rate. This selection may be modified in various ways.
  • at this time, by referencing the coloring table 14 , data with a high access frequency contained in the data Db may be saved in the SLC region and data with a low access frequency contained in the data Db may be saved in the MLC region.
  • in step OA 3, the switching control unit D 36 switches the selected memory region MB in the MLC region to the SLC region and changes the SLC/MLC region information of the memory region MB.
  • in step OA 4, the switching control unit D 36 moves the data Da of the memory region MA in the SLC region to be switched to the memory region MB newly switched to the SLC region.
  • in step OA 5, the switching control unit D 36 switches the memory region MA in the SLC region to be switched to the MLC region and changes the SLC/MLC region information of the memory region MA.
  • the address conversion information 13 is updated to associate the physical address of the movement destination of data with the logical address of the data. If data writing, reading, or erasure occurs with the movement of data, the memory usage information 11 is updated.
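  • A minimal sketch of steps OA 1 to OA 5 in Python, assuming hypothetical helpers move_data(src, dst) and set_region_type(region, kind) for the data movement and the SLC/MLC region information updates:

    def switch_worn_slc_region(ma, mb, md, move_data, set_region_type):
        # OA 1: the write wear-out rate of MA has exceeded the SLC threshold.
        move_data(mb, md)            # OA 2: save data Db of the selected MLC region
        set_region_type(mb, 'SLC')   # OA 3: switch MB to the SLC region
        move_data(ma, mb)            # OA 4: move data Da into the new SLC region
        set_region_type(ma, 'MLC')   # OA 5: switch MA to the MLC region
        # The address conversion information and memory usage information
        # are then updated to reflect the moved data.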
  • FIG. 82 is a state transition diagram showing a second example of switching control of memory regions by the switching control unit D 36 according to the present embodiment.
  • the processing described as steps OB 1 to OB 5 in FIG. 82 may be changed in order within the range in which switching of the SLC region and the MLC region, movement of data, and information updates are implemented normally.
  • the memory region MA of the nonvolatile semiconductor memory is an SLC region and the memory regions MB, MC, MD of the nonvolatile semiconductor memory are MLC regions.
  • the memory regions MA, MB, MC store the data Da, Db, Dc respectively.
  • the memory region MD is a save region.
  • in step OB 1, it is assumed that the write wear-out rate of the memory region MA exceeds the SLC threshold.
  • in step OB 2, the switching control unit D 36 moves the data Da in the memory region MA to the save memory region MD.
  • in step OB 3, the switching control unit D 36 selects one of the memory regions MB, MC (the memory region MB in the example of FIG. 82 ) in the MLC region and moves the data Db in the selected memory region MB to the save memory region MD.
  • in step OB 4, the switching control unit D 36 switches the memory region MA in the SLC region to the MLC region and the memory region MB in the MLC region to the SLC region. Further, the switching control unit D 36 changes the SLC/MLC region information of the memory regions MA, MB.
  • in step OB 5, the switching control unit D 36 moves the data Da in the save memory region MD to the memory region MB switched to the SLC region and the data Db in the save memory region MD to the memory region MA switched to the MLC region.
  • the address conversion information 13 is updated to associate the physical address of the movement destination of data with the logical address of the data. If data writing, reading, or erasure occurs with the movement of data, the memory usage information 11 is updated.
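  • The second procedure, steps OB 1 to OB 5, sketched under the same assumptions; here both data sets are parked in the save region MD before the regions are switched:

    def swap_slc_mlc_via_save(ma, mb, md, move_data, set_region_type):
        # OB 1: the write wear-out rate of MA has exceeded the SLC threshold.
        move_data(ma, md)            # OB 2: save data Da
        move_data(mb, md)            # OB 3: save data Db of the selected MLC region
        set_region_type(ma, 'MLC')   # OB 4: switch MA to MLC and MB to SLC
        set_region_type(mb, 'SLC')
        move_data(md, mb)            # OB 5: Da goes to the new SLC region MB,
        move_data(md, ma)            #       Db to the new MLC region MA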
  • the coloring table 14 is referenced to write (arrange) data with a high access frequency into the SLC region and write (arrange) data with a low access frequency into the MLC region.
  • the SLC region can dynamically be switched to the MLC region in accordance with the usage state of the nonvolatile semiconductor memories 9 , 10 and also the MLC region can dynamically be switched to the SLC region.
  • the SLC region with a high write wear-out rate can be used as the MLC region.
  • the MLC region with a low write wear-out rate can be used as the SLC region. Accordingly, the life of the nonvolatile semiconductor memories 9 , 10 can be prolonged so that the nonvolatile semiconductor memories 9 , 10 can be used efficiently.
  • the present embodiment is a modification of the first embodiment.
  • a memory expansion device that expands the address space used by the processors 3 a , 3 b , 3 c will be described.
  • FIG. 83 is a block diagram showing an example of the relationship between the memory expansion device according to the present embodiment and the address space.
  • the processor 3 b of the processors 3 a , 3 b , 3 c will representatively be described, but the other processors 3 a , 3 c can also be described in the same manner.
  • the memory management device 1 described in the first embodiment makes an address conversion between a logical address space E 32 of memory and a physical address space E 33 of memory and also determines the writing destination of data.
  • the physical address space E 33 of memory contains the physical address space of the mixed main memory 2 .
  • the physical address space E 33 of memory may further contain the physical address space of another memory.
  • the logical address space E 32 of memory corresponds to a processor physical address space E 34 for the processor 3 b .
  • in the processor physical address space E 34 , for example, data management based on file systems E 34 a , E 34 b is realized.
  • the processor 3 b includes a memory management device E 35 .
  • the processor 3 b and the memory management device E 35 may be separate structures.
  • the processor 3 b executes a plurality of processes Pc 1 to Pcn.
  • in the plurality of processes Pc 1 to Pcn, processor logical address spaces PLA 1 to PLAn are used respectively. If, for example, the processor 3 b is a CPU (Central Processing Unit), the processor logical address spaces PLA 1 to PLAn are CPU logical address spaces.
  • the processor logical address spaces PLA 1 to PLAn have memory windows MW 1 to MWn respectively. Data in a portion of the processor physical address space E 34 is mapped (that is, copied or associated) to the memory windows MW 1 to MWn.
  • the processor 3 b can access data in the memory windows MW 1 to MWn in parallel in the plurality of processes Pc 1 to Pcn so as to be able to execute the plurality of processes Pc 1 to Pcn at high speed.
  • the processor 3 b can virtually use a wide address space by using the memory windows MW 1 to MWn.
  • the processor 3 b updates the memory windows MW 1 to MWn, and the needed data is thereby newly mapped to the memory windows MW 1 to MWn.
  • the processor 3 b can access the processor physical address space E 34 via the memory windows MW 1 to MWn.
  • the memory management device E 35 has a configuration similar to that of the memory management device 1 described in the first embodiment.
  • the memory management device E 35 further realizes a function as an MMU of the processor 3 b , but the memory management device E 35 and the MMU of the processor 3 b may be separate structures.
  • a major feature of the memory management device E 35 according to the present embodiment is that address conversions and writing destination decisions of data are made between the processor logical address spaces PLA 1 to PLAn and the processor physical address space E 34 .
  • the information storage unit 17 of the memory management device E 35 stores memory usage information E 36 , memory specific information E 37 , a coloring table E 38 , and address conversion information E 39 .
  • the processing unit 15 of the memory management device E 35 references or updates the memory usage information E 36 , the memory specific information E 37 , the coloring table E 38 , and the address conversion information E 39 in the information storage unit 17 while using the working memory 16 to perform processing similar to the processing described in the first embodiment.
  • the memory usage information E 36 contains, for example, the writing occurrence count and the reading occurrence count of each address region of the processor physical address space E 34 and the erasure count of each block region.
  • the memory usage information E 36 indicating the usage state of each address region of the processor physical address space E 34 can be calculated based on, for example, the memory usage information 11 and the address conversion information 13 managed by the memory management device 1 .
  • the memory specific information E 37 contains, for example, the memory type of each address region of the processor physical address space E 34 (for example, whether it corresponds to the volatile semiconductor memory 8 , the nonvolatile semiconductor memory 9 of SLC, or the nonvolatile semiconductor memory 10 of MLC), the memory size of the volatile semiconductor memory 8 , the memory size of the nonvolatile semiconductor memories 9 , 10 , the page size and block size of the nonvolatile semiconductor memories 9 , 10 , and the accessible upper limit count (the writable upper limit count, readable upper limit count, and erasable upper limit count) of each address region.
  • the memory specific information E 37 indicating specific information of each address region of the processor physical address space E 34 can be calculated based on, for example, the memory specific information 12 and the address conversion information 13 managed by the memory management device 1 .
  • the coloring table E 38 associates a processor logical address with coloring information of data indicated by the processor logical address.
  • the address conversion information E 39 is information associating processor logical addresses with processor physical addresses. If the memory windows MW 1 to MWn are updated, the address conversion information E 39 is updated so as to represent a state after the update.
  • the processing unit 15 makes address conversions and writing destination decisions of data between the processor logical address spaces PLA 1 to PLAn and the processor physical address space E 34 based on the memory usage information E 36 , the memory specific information E 37 , the coloring table E 38 , and the address conversion information E 39 .
  • the processing unit 15 exercises control so that no write-back processing from the memory windows MW 1 to MWn to the processor physical address space E 34 is performed on read-only data whose writing frequency is 0.
  • when data in the memory windows MW 1 to MWn has been updated, the processing unit 15 writes back the value in the memory windows MW 1 to MWn into the processor physical address space E 34 .
  • for data that no longer needs to be retained, for example, data whose data life has expired, the processing unit 15 does not write back from the memory windows MW 1 to MWn into the processor physical address space E 34 even if the data is dirty data.
  • the processing unit 15 allocates data to the volatile semiconductor memory 8 and the nonvolatile semiconductor memories 9 , 10 based on the static writing frequency SW_color, the static reading frequency SR_color, the static erase frequency SE_color, the dynamic writing frequency DW_color, the dynamic reading frequency DR_color, the dynamic erase frequency DE_color, and the data type.
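  • A hedged Python sketch of an allocation rule of this kind (the thresholds and the dictionary keys are assumptions; the patent only states that the listed static and dynamic frequencies and the data type are consulted):

    HIGH, MID = 8, 4    # assumed coloring-level thresholds

    def choose_memory_region(color):
        # color: e.g. {'SW_color': 9, 'DW_color': 7,
        #              'SR_color': 3, 'DR_color': 2}
        # Static and dynamic frequencies are combined pessimistically.
        write_freq = max(color['SW_color'], color['DW_color'])
        read_freq = max(color['SR_color'], color['DR_color'])
        if write_freq >= HIGH:
            return 'volatile semiconductor memory'        # frequently rewritten
        if write_freq >= MID or read_freq >= MID:
            return 'SLC nonvolatile semiconductor memory'  # moderately hot
        return 'MLC nonvolatile semiconductor memory'      # cold data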
  • FIG. 84 is a flow chart showing an example of the write operation by the processor 3 b and the memory management device E 35 according to the present embodiment.
  • in FIG. 84 , an example of processing in which data writing to the memory windows MW 1 to MWn occurs and then changes of processor physical address regions allocated to the memory windows MW 1 to MWn occur is shown.
  • in step EM 1, the memory management device E 35 initially allocates one of the processor physical address regions to the memory windows MW 1 to MWn to generate the address conversion information E 39 .
  • the processor physical address region allocated to the memory windows MW 1 to MWn corresponds to a memory region in the volatile semiconductor memory 8 , a memory region in the nonvolatile semiconductor memory 9 of SLC, or a memory region in the nonvolatile semiconductor memory 10 of MLC.
  • in step EM 2, the processor 3 b writes data into the memory windows MW 1 to MWn.
  • the memory management device E 35 updates coloring information (for example, the writing count DWC_color, the dynamic writing frequency DW_color and the like) of the write target data.
  • in step EM 3, if the processor 3 b writes data in the memory windows MW 1 to MWn into the processor physical address space E 34 , the memory management device E 35 determines the writing destination of the write target data into the processor physical address space E 34 based on the memory usage information E 36 , the memory specific information E 37 , the coloring table E 38 , and the address conversion information E 39 and also updates the memory usage information E 36 and the address conversion information E 39 . Further, the memory management device E 35 writes the write target data into the determined processor physical address region.
  • the memory management device E 35 determines, among a memory region of the volatile semiconductor memory 8 , a memory region of the nonvolatile semiconductor memory 9 of SLC, and a memory region of the nonvolatile semiconductor memory 10 of MLC, the memory region into which the write target data should be written.
  • in step EM 4, the processor 3 b accesses data in another processor physical address region not allocated to the memory windows MW 1 to MWn.
  • in step EM 5, the memory management device E 35 changes the allocation of the processor physical address region to the memory windows MW 1 to MWn and updates the address conversion information E 39 .
  • the allocation of the processor physical address region to the memory windows MW 1 to MWn is changed by, for example, a system call of the operating system 27 .
  • by this change of allocation, page addresses are changed; actually, entries of a processor page table are changed.
  • the memory management device E 35 writes back data in the memory windows MW 1 to MWn before the change to update coloring information of the data and the memory usage information E 36 .
  • in step EM 6, the processor 3 b uses data stored in the memory windows MW 1 to MWn after the change.
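  • A self-contained Python sketch of this memory window flow (the MemoryWindow class is an illustration; steps EM 1 to EM 5 are marked in comments, and write-back of dirty data on remapping follows the description above):

    class MemoryWindow:
        def __init__(self, phys_space, size):
            # EM 1: initially allocate one processor physical address region.
            self.phys, self.size = phys_space, size
            self.base, self.dirty = 0, False
            self.buf = list(phys_space[0:size])

        def write(self, offset, value):
            # EM 2: the processor writes data into the window.
            self.buf[offset] = value
            self.dirty = True

        def remap(self, new_base):
            # EM 4/EM 5: access outside the window triggers reallocation.
            if self.dirty:
                # EM 3: write back the window contents first.
                self.phys[self.base:self.base + self.size] = self.buf
            self.base = new_base
            self.buf = list(self.phys[new_base:new_base + self.size])
            self.dirty = False

  • For example, with phys = [0] * 1024, a window w = MemoryWindow(phys, 64) can be written, remapped with w.remap(256), and used again (EM 6), the dirty contents being written back on the way.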
  • the processor physical address space E 34 larger than the processor logical address spaces PLA 1 to PLAn can be used, so that the address space usable by the processor 3 b can be expanded.
  • data can efficiently be mapped between the memory windows MW 1 to MWn and the processor physical address space E 34 by using coloring information.
  • the writing destination of data into the processor physical address space E 34 is determined based on the memory usage information E 36 , the memory specific information E 37 , and coloring information of the coloring table E 38 .
  • the writing destination of data into the processor physical address space E 34 may be determined by using, for example, at least one of the memory usage information E 36 , the memory specific information E 37 , and the coloring table E 38 .
  • the processor logical address spaces PLA 1 to PLAn are formed for each of the plurality of processes Pc 1 to Pcn and the memory windows MW 1 to MWn are used for each. Accordingly, operations such as accessing the mixed main memory 2 can be performed in parallel so that the plurality of processes Pc 1 to Pcn can be executed at high speed.
  • the present embodiment is an information processing device (server device) that shares coloring information used by the memory management device 1 in the first embodiment and sends the shared coloring information to the information processing device 100 .
  • when new data is generated by the processes 6 a , 6 b , 6 c being executed by the processors 3 a , 3 b , 3 c in the information processing device 100 respectively as described above, the operating system 27 generates static color information based on the type of the newly generated data and gives the static color information to the newly generated data. If the data should be written into the nonvolatile semiconductor memories 9 , 10 , the memory management device 1 can prolong the life of the nonvolatile semiconductor memories 9 , 10 by referencing the static color information of the data to determine the write target memory region and the like. Thus, the life of the nonvolatile semiconductor memories 9 , 10 can be made still longer by optimizing coloring information including static color information.
  • FIG. 85 is a diagram showing an example of the configuration of an information processing device and a network system according to the present embodiment.
  • a network system K 32 has a configuration in which an information processing device K 33 , a profile generation terminal K 34 , and user terminals 100 A, 100 B are communicably connected via a network K 35 .
  • the network K 35 is, for example, a variety of communication media such as the Internet and a LAN (Local Area Network) and may be a wire network or a wireless network.
  • LAN Local Area Network
  • the configuration of the profile generation terminal K 34 will be described.
  • the profile generation terminal K 34 is, for example, a terminal of a program developer or a maker.
  • the profile generation terminal K 34 includes a setting unit K 34 a , a storage device K 34 b , and a communication unit K 34 c.
  • the setting unit K 34 a generates profile information K 36 based on, for example, a setting operation of a program developer or the like and stores the profile information K 36 in the storage device K 34 b.
  • the storage device K 34 b stores the profile information K 36 generated by the setting unit K 34 a.
  • the communication unit K 34 c sends the profile information K 36 stored in the storage device K 34 b to the information processing device K 33 via the network K 35 .
  • the configuration of the user terminals 100 A, 100 B will be described.
  • the user terminals 100 A, 100 B correspond to the information processing device 100 in the first embodiment and include the memory management device 1 and the mixed main memory 2 .
  • the coloring table 14 is stored in the information storage unit 17 of the memory management device 1 and in the mixed main memory 2 included in the user terminals 100 A, 100 B.
  • the user terminals 100 A, 100 B generate profile information K 37 , K 38 automatically or according to user's instructions respectively. Details of generation of the profile information will be described later.
  • the user terminals 100 A, 100 B send the profile information K 37 , K 38 to the information processing device K 33 via the network K 35 respectively.
  • the user terminals 100 A, 100 B download (receive) profile information from the information processing device K 33 automatically or according to user's instructions.
  • the operating system 27 of the user terminals 100 A, 100 B references the downloaded profile information when generating coloring information for data.
  • the operating system 27 of the user terminals 100 A, 100 B generates static color information for data based on the profile information and stores the static color information in the coloring table 14 .
  • the configuration of the information processing device K 33 will be described.
  • the information processing device K 33 includes a communication unit K 33 a , a profile information management unit K 33 b , and a storage device K 33 c .
  • the profile information management unit K 33 b may be realized by hardware, or by cooperation of software and hardware such as a processor.
  • the communication unit K 33 a sends and receives the profile information K 36 to K 38 between the profile generation terminal K 34 and the user terminals 100 A, 100 B.
  • the profile information management unit K 33 b stores profile information received via the communication unit K 33 a in the storage device K 33 c .
  • the profile information management unit K 33 b also sends profile information to the user terminals 100 A, 100 B and the profile generation terminal K 34 via the communication unit K 33 a.
  • the storage device K 33 c stores profile information. Further, the storage device K 33 c stores service data K 40 . The service data K 40 will be described later.
  • the profile information is, as described above, information referenced by the operating system 27 in the user terminals 100 A, 100 B when static color information is given (generated) to data.
  • the profile information is information associating, for example, data identification information, coloring information, and generator identification information.
  • the data identification information corresponds to, for example, the data format of FIGS. 9 and 10 in the first embodiment.
  • as the data identification information, identification information of a file, such as the name or the extension of the file, or information about the position (for example, a directory) where the data is arranged in a file system is used.
  • the coloring information contains the static color information described above.
  • the static color information is a value set for each piece of the data identification information and contains, for example, like in FIGS. 9 and 10 in the first embodiment, the static writing frequency SW_color, the static reading frequency SR_color, and the data life SL_color.
  • the generator identification information is information to identify the generator of the profile information.
  • the generator identification information is additional information and is added if necessary.
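  • One possible shape of a profile information entry, sketched as a Python dataclass (the field names follow the coloring information described above; the record layout itself is an assumption):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ProfileEntry:
        data_id: str                        # data identification information,
                                            # e.g. a file extension or directory
        sw_color: int                       # static writing frequency SW_color
        sr_color: int                       # static reading frequency SR_color
        sl_color: int                       # data life SL_color
        dw_color: Optional[float] = None    # dynamic writing frequency DW_color
        dr_color: Optional[float] = None    # dynamic reading frequency DR_color
        generator_id: Optional[str] = None  # added if necessary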
  • the user terminals 100 A, 100 B hold data identification information and coloring information for the data identification information shown in FIGS. 9 and 10 .
  • the user terminals 100 A, 100 B generate profile information based on the held data identification information and coloring information.
  • the user terminals 100 A, 100 B may also generate profile information from the coloring table 14 managed by the memory management device 1 .
  • the coloring table 14 manages, in units of entries, an index generated based on the logical address specifying data together with the coloring information of the data.
  • the user terminals 100 A, 100 B identify data specified by the logical address from the index to extract data identification information of the data. Further, the user terminals 100 A, 100 B calculate static color information and dynamic color information (for example, the dynamic writing frequency DW_color and the dynamic reading frequency DR_color) of the data. Further, if necessary, the user terminals 100 A, 100 B change the dynamic color information to the data format similar to that of the static color information.
  • the dynamic color information represents an actual access frequency to data; for example, a temporal average value of the access frequency may be used.
  • the user terminals 100 A, 100 B generate profile information based on coloring information containing the static color information and dynamic color information, data identification information, and generator identification information.
  • the actual access frequency to data can be provided to the user terminals 100 A, 100 B as profile information. Accordingly, coloring information given to data by the operating system 27 can be optimized.
  • a software vendor that has developed a new application may register the profile information K 36 about a file dedicated to the new application with the information processing device K 33 by using the profile generation terminal K 34 .
  • the service data K 40 contains explanatory data of various kinds of the profile information K 36 to K 38 stored in the storage device K 33 c and various kinds of advertising data.
  • the service data K 40 is sent from the information processing device K 33 to the user terminals 100 A, 100 B.
  • the user terminals 100 A, 100 B display the service data K 40 by using, for example, a browser.
  • the user can determine the profile information to be downloaded by referencing the explanatory data of the service data K 40 .
  • the profile information management unit K 33 b may apply a statistical method to the profile information K 36 to K 38 stored in the storage device K 33 c to send resultant profile information to the user terminals 100 A, 100 B.
  • as a statistical method, for example, a method of calculating an average value or determining a median of coloring information associated with the same data identification information can be used.
  • the profile information management unit K 33 b generates profile information containing an average value or median of the static writing frequency SW_color, an average value or median of the static reading frequency SR_color, an average value or median of the data life SL_color, an average value or median of the dynamic writing frequency DW_color, and an average value or median of the dynamic reading frequency DR_color for a plurality of pieces of coloring information associated with the same data identification information and sends the generated profile information to the user terminals 100 A, 100 B.
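  • A minimal sketch of that statistical step in Python, assuming profile entries are dictionaries keyed by the field names above:

    from statistics import mean, median

    def merge_profiles(entries, use_median=False):
        # Group entries by data identification information, then reduce
        # each coloring field to its average (or median) across the group.
        agg = median if use_median else mean
        fields = ('SW_color', 'SR_color', 'SL_color', 'DW_color', 'DR_color')
        by_id = {}
        for e in entries:
            by_id.setdefault(e['data_id'], []).append(e)
        return {data_id: {f: agg(e[f] for e in group) for f in fields}
                for data_id, group in by_id.items()}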
  • the profile information management unit K 33 b counts the number of downloads of the profile information K 36 to K 38 by a browser of the user terminals 100 A, 100 B.
  • the profile information management unit K 33 b calculates a compensation charge for the generator of each piece of profile information K 36 to K 38 by multiplying the download count of each piece of profile information K 36 to K 38 by a download charge per download of the profile information K 36 to K 38 .
  • the profile information management unit K 33 b generates compensation information in which compensation charges are assigned for generator identification information of each piece of profile information K 36 to K 38 and stores the compensation information in the storage device K 33 c.
  • the profile information management unit K 33 b generates usage charge information in which a usage charge per download of profile information is assigned for identification information (for example, the user ID) that identifies the download request source in response to a download request from the user terminals 100 A, 100 B and stores the usage charge information in the storage device K 33 c.
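  • The charging model described above reduces to simple arithmetic; a sketch follows (the charge values and record layout are assumptions):

    def compensation_charge(download_count, charge_per_download):
        # Compensation for a generator: download count times the
        # per-download charge of the profile information.
        return download_count * charge_per_download

    usage_charges = {}    # user ID -> accumulated usage charge

    def record_download(user_id, usage_charge_per_download):
        # Usage charge assigned to the download request source.
        usage_charges[user_id] = (usage_charges.get(user_id, 0.0)
                                  + usage_charge_per_download)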
  • FIG. 86 is a flow chart showing an example of processing of the profile information management unit K 33 b according to the present embodiment.
  • in step C 1, the profile information management unit K 33 b determines whether one piece of the profile information K 36 to K 38 is received from the profile generation terminal K 34 or the user terminals 100 A, 100 B.
  • if no profile information K 36 to K 38 is received, the processing proceeds to step C 3.
  • in step C 2, the profile information management unit K 33 b stores the received profile information in the storage device K 33 c.
  • in step C 3, the profile information management unit K 33 b determines whether a download request is received from one of the user terminals 100 A, 100 B.
  • if no download request is received, the processing proceeds to step C 6.
  • in step C 4, the profile information management unit K 33 b reads profile information corresponding to the received download request from the storage device K 33 c.
  • in step C 5, the profile information management unit K 33 b sends the read profile information to the user terminal of the download request source.
  • in step C 6, the profile information management unit K 33 b determines whether the processing has ended. If the processing has not ended, the processing returns to step C 1.
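  • A hedged sketch of this loop (steps C 1 to C 6) in Python; receive_profile, receive_request, send, and should_stop stand in for the network I/O and the end condition and are not named in the patent:

    def profile_server_loop(store, receive_profile, receive_request, send,
                            should_stop):
        while True:
            profile = receive_profile()                # C 1
            if profile is not None:
                store[profile['data_id']] = profile    # C 2
            request = receive_request()                # C 3
            if request is not None:
                info = store.get(request['data_id'])   # C 4
                send(request['source'], info)          # C 5
            if should_stop():                          # C 6
                break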
  • FIG. 87 is a flow chart showing an example of upload processing of the profile information K 37 by the user terminal 100 A according to the present embodiment. Upload processing by the user terminal 100 B is almost the same as in FIG. 87 .
  • in step UL 1, the user terminal 100 A generates the profile information K 37 by combining data identification information for data, coloring information in the coloring table 14 associated with the data, and generator identification information, for example, automatically or according to user's instructions.
  • in step UL 2, the user terminal 100 A sends the generated profile information K 37 to the information processing device K 33 via the network K 35 .
  • FIG. 88 is a flow chart showing an example of download processing of profile information by the user terminal 100 A according to the present embodiment. Download processing by the user terminal 100 B is almost the same as in FIG. 88 and thus, the description thereof is omitted.
  • in step DL 1, the user terminal 100 A sends a download request containing data identification information to the information processing device K 33 via the network K 35 , for example, automatically or according to user's instructions.
  • in step DL 2, the user terminal 100 A receives profile information from the information processing device K 33 via the network K 35 as a response to the download request.
  • in step DL 3, the operating system 27 of the user terminal 100 A stores static color information contained in the received profile information for data corresponding to the data identification information of the received profile information in the coloring table 14 .
  • in step DL 4, the memory management device 1 of the user terminal 100 A determines whether dynamic color information is contained in the received profile information.
  • in step DL 5, if dynamic color information is contained, the memory management device 1 stores the dynamic color information contained in the received profile information for data corresponding to the data identification information of the received profile information in the coloring table 14 .
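  • Steps DL 1 to DL 5 sketched from the user terminal side, with request_profile standing in for the exchange of DL 1 and DL 2 (the names and dictionary layout are assumptions):

    def download_and_apply(data_id, request_profile, coloring_table):
        profile = request_profile(data_id)        # DL 1, DL 2
        coloring_table[data_id] = {               # DL 3: static color information
            k: profile[k] for k in ('SW_color', 'SR_color', 'SL_color')}
        if profile.get('DW_color') is not None:   # DL 4: dynamic color information?
            coloring_table[data_id].update(       # DL 5
                DW_color=profile['DW_color'],
                DR_color=profile.get('DR_color'))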
  • according to the present embodiment, coloring information used by the memory management device 1 is generated by many parties, such as makers and users, and the generated coloring information is shared.
  • charges can be paid to the generator of coloring information in accordance with the number of times the coloring information is browsed or downloaded.
  • the operator of the information processing device K 33 can collect many pieces of coloring information and speedily provide various services concerning coloring information.
  • coloring information can be shared, the development of the memory management device 1 and the mixed main memory 2 can be hastened, and the memory management device 1 and the mixed main memory 2 can be popularized.
  • the present embodiment is a modification of the first embodiment.
  • a memory management device that accesses a memory connected via a network will be described.
  • FIG. 89 is a block diagram showing an example of a network system according to the present embodiment.
  • a network system N 37 shown in FIG. 89 includes information processing devices N 37 A, N 37 B connected via a network N 38 . The information processing device N 37 A includes a processor 3 A, a memory management device N 32 A, a volatile semiconductor memory 8 A, a nonvolatile semiconductor memory 9 A, and a network interface device N 39 A.
  • the processor 3 A is connected to the volatile semiconductor memory 8 A, the nonvolatile semiconductor memory 9 A, and the network interface device N 39 A via the memory management device N 32 A.
  • the processor 3 A may include an internal memory cache, but a description thereof is omitted in FIG. 89 .
  • the information processing device N 37 A may include a plurality of processors 3 A.
  • the volatile semiconductor memory 8 A is similar to the volatile semiconductor memory 8 in the first embodiment.
  • the nonvolatile semiconductor memory 9 A is similar to the nonvolatile semiconductor memory 9 or the nonvolatile semiconductor memory 10 in the first embodiment.
  • the volatile semiconductor memory 8 A and the nonvolatile semiconductor memory 9 A are used as the main memory of the information processing device N 37 A.
  • the volatile semiconductor memory 8 A and the nonvolatile semiconductor memory 9 A function as cache memories in the information processing device N 37 A by storing, of the data in the other information processing device N 37 B, data with a high access frequency or data whose importance is high for the information processing device N 37 A.
  • the volatile semiconductor memory 8 A is used as the primary cache memory in the information processing device N 37 A and the nonvolatile semiconductor memory 9 A is used as the secondary cache memory in the information processing device N 37 A.
  • the network interface device N 39 A sends/receives network logical addresses or data to/from the network interface device N 39 B of the other information processing device N 37 B via the network N 38 .
  • FIG. 90 is a block diagram showing an example of the configuration of the memory management device N 32 A according to the present embodiment.
  • a processing unit N 33 A of the memory management device N 32 A includes, in addition to the address management unit 18 , the reading management unit 19 , the writing management unit 20 , the coloring information management unit 21 , the memory usage information management unit 22 , and the relocation unit 23 , a network address conversion unit N 34 and a communication unit N 35 .
  • the network address conversion unit N 34 converts a logical address of short address length used by the processor 3 A (hereinafter, referred to as a “processor logical address”) into a logical address of long address length used by a plurality of information processing devices connected by a network (hereinafter, referred to as a “network logical address”).
  • for this address conversion, for example, a hash function is used.
  • the processor logical address is a pointer stored in a register.
  • the working memory 16 has an address length conversion table AT stored therein.
  • the network address conversion unit N 34 references the address length conversion table AT to convert a processor logical address into a network logical address.
  • the address length conversion table AT is stored in the working memory 16 , but may also be stored in the information storage unit 17 .
  • the communication unit N 35 sends and receives network logical addresses and data specified by network logical addresses via the network N 38 by using the network interface device N 39 A.
  • the memory usage information 11 indicates the usage state of the memory of the whole network system N 37 (in the example of FIG. 90 , the volatile semiconductor memory 8 A, the nonvolatile semiconductor memory 9 A, a volatile semiconductor memory 8 B, and a nonvolatile semiconductor memory 9 B).
  • the memory specific information 12 indicates specific information of memory regions of the whole network system N 37 .
  • the address conversion information 13 indicates the relationship between network logical addresses and physical addresses used by the whole network system N 37 .
  • the coloring table 14 contains coloring information of each piece of data in the whole network system N 37 .
  • in the network system N 37 , unique addresses are attached to all data. If a common network logical address space is used throughout the network system N 37 , the number of bits of needed addresses increases to, for example, 128 bits. In the network system N 37 , however, the processors 3 A, 3 B are assumed to have 32-bit or 64-bit registers. In this case, it is necessary to convert a processor logical address of the number of bits of the register into a network logical address of the larger number of bits. The conversion processing is performed by the network address conversion unit N 34 included in the memory management devices N 32 A, N 32 B.
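  • A sketch of this widening: a 32-bit or 64-bit processor logical address becomes a 128-bit network logical address by attaching an upper part drawn from the address length conversion table AT (a hash function could derive the upper part instead; the table layout here is an assumption):

    def to_network_logical_address(proc_logical_addr, device_id, at_table):
        # at_table maps a device to the upper 64 bits of its addresses;
        # the lower 64 bits carry the processor logical address.
        prefix = at_table[device_id]
        return (prefix << 64) | (proc_logical_addr & 0xFFFFFFFFFFFFFFFF)

    AT = {'N37A': 0xA5A5}    # assumed address length conversion table
    print(hex(to_network_logical_address(0xDEADBEEF, 'N37A', AT)))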

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Memory System (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
US13/351,582 2009-07-17 2012-01-17 Memory management device Abandoned US20120191900A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/938,589 US10776007B2 (en) 2009-07-17 2015-11-11 Memory management device predicting an erase count

Applications Claiming Priority (23)

Application Number Priority Date Filing Date Title
JP2009-169371 2009-07-17
JP2009169371A JP2011022933A (ja) 2009-07-17 2009-07-17 メモリ管理装置を含む情報処理装置及びメモリ管理方法
JP2010-048328 2010-03-04
JP2010-048331 2010-03-04
JP2010048331A JP2011186555A (ja) 2010-03-04 2010-03-04 メモリ管理装置及び方法
JP2010048338A JP2011186562A (ja) 2010-03-04 2010-03-04 メモリ管理装置及び方法
JP2010048339A JP2011186563A (ja) 2010-03-04 2010-03-04 メモリ管理装置およびメモリ管理方法
JP2010048333A JP2011186557A (ja) 2010-03-04 2010-03-04 メモリ管理装置及び方法
JP2010-048339 2010-03-04
JP2010-048337 2010-03-04
JP2010048335A JP2011186559A (ja) 2010-03-04 2010-03-04 メモリ管理装置
JP2010048332A JP5322978B2 (ja) 2010-03-04 2010-03-04 情報処理装置及び方法
JP2010-048334 2010-03-04
JP2010048334A JP2011186558A (ja) 2010-03-04 2010-03-04 メモリ管理装置及び方法
JP2010-048333 2010-03-04
JP2010-048332 2010-03-04
JP2010-048329 2010-03-04
JP2010048328A JP2011186553A (ja) 2010-03-04 2010-03-04 メモリ管理装置
JP2010-048338 2010-03-04
JP2010048329A JP2011186554A (ja) 2010-03-04 2010-03-04 メモリ管理装置及び方法
JP2010048337A JP2011186561A (ja) 2010-03-04 2010-03-04 メモリ管理装置
JP2010-048335 2010-03-04
PCT/JP2010/053817 WO2011007599A1 (ja) 2009-07-17 2010-03-08 メモリ管理装置

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/053817 Continuation WO2011007599A1 (ja) 2009-07-17 2010-03-08 メモリ管理装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/938,589 Continuation US10776007B2 (en) 2009-07-17 2015-11-11 Memory management device predicting an erase count

Publications (1)

Publication Number Publication Date
US20120191900A1 true US20120191900A1 (en) 2012-07-26

Family

ID=43449209

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/351,582 Abandoned US20120191900A1 (en) 2009-07-17 2012-01-17 Memory management device
US14/938,589 Active 2030-06-08 US10776007B2 (en) 2009-07-17 2015-11-11 Memory management device predicting an erase count

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/938,589 Active 2030-06-08 US10776007B2 (en) 2009-07-17 2015-11-11 Memory management device predicting an erase count

Country Status (6)

Country Link
US (2) US20120191900A1 (zh)
EP (1) EP2455865B1 (zh)
KR (1) KR20120068765A (zh)
CN (1) CN102473140B (zh)
TW (1) TWI460588B (zh)
WO (1) WO2011007599A1 (zh)

Cited By (243)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130030686A1 (en) * 2010-04-05 2013-01-31 Morotomi Kohei Collision judgment apparatus for vehicle
US20130262075A1 (en) * 2012-03-27 2013-10-03 Fujitsu Limited Processor emulation device and storage medium
US20130290669A1 (en) * 2012-04-30 2013-10-31 Oracle International Corporation Physical memory usage prediction
US8612692B2 (en) 2010-07-30 2013-12-17 Kabushiki Kaisha Toshiba Variable write back timing to nonvolatile semiconductor memory
US20130346674A1 (en) * 2012-06-26 2013-12-26 Phison Electronics Corp. Data writing method, memory controller and memory storage device
US8645612B2 (en) 2010-07-30 2014-02-04 Kabushiki Kaisha Toshiba Information processing device and information processing method
US20140075100A1 (en) * 2012-09-12 2014-03-13 Kabushiki Kaisha Toshiba Memory system, computer system, and memory management method
WO2014052157A1 (en) 2012-09-28 2014-04-03 Intel Corporation Methods, systems and apparatus to cache code in non-volatile memory
JP2014078231A (ja) * 2012-10-08 2014-05-01 Hgst Netherlands B V 低電力・低遅延・大容量ストレージ・クラス・メモリのための装置および方法
US20140181457A1 (en) * 2012-12-21 2014-06-26 Advanced Micro Devices, Inc. Write Endurance Management Techniques in the Logic Layer of a Stacked Memory
US20140281581A1 (en) * 2013-03-18 2014-09-18 Genusion, Inc. Storage Device
WO2014158154A1 (en) * 2013-03-28 2014-10-02 Hewlett-Packard Development Company, L.P. Regulating memory activation rates
US8943266B2 (en) 2013-03-13 2015-01-27 Hitachi, Ltd. Storage system and method of control for storage system
US20150074339A1 (en) * 2013-09-10 2015-03-12 Hicamp Systems, Inc. Hybrid main memory using a fine-grain level of remapping
US8984251B2 (en) 2012-12-04 2015-03-17 Apple Inc. Hinting of deleted data from host to storage device
CN105027211A (zh) * 2013-01-31 2015-11-04 惠普发展公司,有限责任合伙企业 自适应粒度行缓冲器高速缓存
JP2015204118A (ja) * 2014-04-15 2015-11-16 三星電子株式会社Samsung Electronics Co.,Ltd. ストレージコントローラ及びストレージ装置
US20160004631A1 (en) * 2014-07-03 2016-01-07 Pure Storage, Inc. Profile-Dependent Write Placement of Data into a Non-Volatile Solid-State Storage
US20160077737A1 (en) * 2014-09-11 2016-03-17 Kabushiki Kaisha Toshiba Information processing apparatus and memory system
US9305616B2 (en) 2012-07-17 2016-04-05 Samsung Electronics Co., Ltd. Semiconductor memory cell array having fast array area and semiconductor memory including the same
CN105573831A (zh) * 2014-10-13 2016-05-11 龙芯中科技术有限公司 数据转移方法和装置
US20160170663A1 (en) * 2014-12-15 2016-06-16 Konica Minolta, Inc. Nonvolatile memory control device, nonvolatile memory control method and computer readable storage medium
US9396078B2 (en) 2014-07-02 2016-07-19 Pure Storage, Inc. Redundant, fault-tolerant, distributed remote procedure call cache in a storage system
JP5969130B2 (ja) * 2013-07-18 2016-08-17 株式会社日立製作所 情報処理装置
US9479466B1 (en) * 2013-05-23 2016-10-25 Kabam, Inc. System and method for generating virtual space messages based on information in a users contact list
US9477554B2 (en) 2014-06-04 2016-10-25 Pure Storage, Inc. Mechanism for persisting messages in a storage system
US9501244B2 (en) 2014-07-03 2016-11-22 Pure Storage, Inc. Scheduling policy for queues in a non-volatile solid-state storage
US9525738B2 (en) 2014-06-04 2016-12-20 Pure Storage, Inc. Storage system architecture
KR20170008153A (ko) * 2015-07-13 2017-01-23 삼성전자주식회사 비휘발성 장치에서 데이터 속성 기반 데이터 배치를 활용하기 위해 컴퓨터를 구동하는 경험적 인터페이스
US20170060698A1 (en) * 2015-08-24 2017-03-02 HGST Netherlands B.V. Methods and systems for improving storage journaling
US20170075595A1 (en) * 2015-09-11 2017-03-16 Kabushiki Kaisha Toshiba Memory system
US20170115934A1 (en) * 2014-10-23 2017-04-27 Seagate Technology Llc Logical block addresses used for executing host commands
US20170154689A1 (en) * 2015-12-01 2017-06-01 CNEXLABS, Inc. Method and Apparatus for Logically Removing Defective Pages in Non-Volatile Memory Storage Device
US9672125B2 (en) 2015-04-10 2017-06-06 Pure Storage, Inc. Ability to partition an array into two or more logical arrays with independently running software
US20170160964A1 (en) * 2015-12-08 2017-06-08 Kyocera Document Solutions Inc. Electronic device and non-transitory computer readable storage medium
US9747229B1 (en) 2014-07-03 2017-08-29 Pure Storage, Inc. Self-describing data format for DMA in a non-volatile solid-state storage
US9768953B2 (en) 2015-09-30 2017-09-19 Pure Storage, Inc. Resharing of a split secret
US9798477B2 (en) 2014-06-04 2017-10-24 Pure Storage, Inc. Scalable non-uniform storage sizes
US9817576B2 (en) 2015-05-27 2017-11-14 Pure Storage, Inc. Parallel update to NVRAM
US9818485B2 (en) 2012-07-11 2017-11-14 Samsung Electronics Co., Ltd. Nonvolatle memory device and memory system having the same, and related memory management, erase and programming methods
US9836245B2 (en) 2014-07-02 2017-12-05 Pure Storage, Inc. Non-volatile RAM and flash memory in a non-volatile solid-state storage
US9843453B2 (en) 2015-10-23 2017-12-12 Pure Storage, Inc. Authorizing I/O commands with I/O tokens
US9870836B2 (en) 2015-03-10 2018-01-16 Toshiba Memory Corporation Memory system and method of controlling nonvolatile memory
US9940234B2 (en) 2015-03-26 2018-04-10 Pure Storage, Inc. Aggressive data deduplication using lazy garbage collection
US9948615B1 (en) 2015-03-16 2018-04-17 Pure Storage, Inc. Increased storage unit encryption based on loss of trust
US9977611B2 (en) 2014-12-04 2018-05-22 Kabushiki Kaisha Toshiba Storage device, method, and computer-readable medium for selecting a write destination of target data to nonvolatile memories having different erase limits based upon a write interval
US20180150219A1 (en) * 2016-11-30 2018-05-31 Industrial Technology Research Institute Data accessing system, data accessing apparatus and method for accessing data
US10007457B2 (en) 2015-12-22 2018-06-26 Pure Storage, Inc. Distributed transactions with token-associated execution
US10013344B2 (en) 2014-01-14 2018-07-03 Avago Technologies General Ip (Singapore) Pte. Ltd. Enhanced SSD caching
US10037160B2 (en) 2014-12-19 2018-07-31 Samsung Electronics Co., Ltd. Storage device dynamically allocating program area and program method thereof
US10037271B1 (en) * 2012-06-27 2018-07-31 Teradata Us, Inc. Data-temperature-based control of buffer cache memory in a database system
CN108463811A (zh) * 2016-01-20 2018-08-28 Arm有限公司 记录组指示符
US10082985B2 (en) 2015-03-27 2018-09-25 Pure Storage, Inc. Data striping across storage nodes that are assigned to multiple logical arrays
US10108355B2 (en) 2015-09-01 2018-10-23 Pure Storage, Inc. Erase block state detection
US10114757B2 (en) 2014-07-02 2018-10-30 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US10140149B1 (en) 2015-05-19 2018-11-27 Pure Storage, Inc. Transactional commits with hardware assists in remote memory
US10141050B1 (en) 2017-04-27 2018-11-27 Pure Storage, Inc. Page writes for triple level cell flash memory
US10178169B2 (en) 2015-04-09 2019-01-08 Pure Storage, Inc. Point to point based backend communication layer for storage processing
US10180810B2 (en) 2016-03-10 2019-01-15 Kabushiki Kaisha Toshiba Memory controller and storage device which selects memory devices in which data is to be written based on evaluation values of a usable capacity of the memory devices
US10203903B2 (en) 2016-07-26 2019-02-12 Pure Storage, Inc. Geometry based, space aware shelf/writegroup evacuation
US10210926B1 (en) 2017-09-15 2019-02-19 Pure Storage, Inc. Tracking of optimum read voltage thresholds in nand flash devices
US10216411B2 (en) 2014-08-07 2019-02-26 Pure Storage, Inc. Data rebuild on feedback from a queue in a non-volatile solid-state storage
US10216420B1 (en) 2016-07-24 2019-02-26 Pure Storage, Inc. Calibration of flash channels in SSD
US10241909B2 (en) * 2015-02-27 2019-03-26 Hitachi, Ltd. Non-volatile memory device
US20190102310A1 (en) * 2017-10-02 2019-04-04 Arm Ltd Method and apparatus for control of a tiered memory system
US10255182B2 (en) 2015-02-11 2019-04-09 Samsung Electronics Co., Ltd. Computing apparatus and method for cache management
US20190107976A1 (en) * 2018-12-07 2019-04-11 Intel Corporation Apparatus and method for assigning velocities to write data
US10261690B1 (en) 2016-05-03 2019-04-16 Pure Storage, Inc. Systems and methods for operating a storage system
US20190138227A1 (en) * 2017-11-06 2019-05-09 Hitachi, Ltd. Storage system and control method thereof
US20190138226A1 (en) * 2017-11-06 2019-05-09 Toshiba Memory Corporation Memory system and method for controlling nonvolatile memory
US10303547B2 (en) 2014-06-04 2019-05-28 Pure Storage, Inc. Rebuilding data across storage nodes
US10324812B2 (en) 2014-08-07 2019-06-18 Pure Storage, Inc. Error recovery in a storage cluster
WO2019133233A1 (en) * 2017-12-27 2019-07-04 Spin Transfer Technologies, Inc. A method of writing contents in memory during a power up sequence using a dynamic redundancy register in a memory device
US10353609B2 (en) 2014-09-16 2019-07-16 Huawei Technologies Co., Ltd. Memory allocation method and apparatus
US10360964B2 (en) 2016-09-27 2019-07-23 Spin Memory, Inc. Method of writing contents in memory during a power up sequence using a dynamic redundancy register in a memory device
US10366004B2 (en) 2016-07-26 2019-07-30 Pure Storage, Inc. Storage system with elective garbage collection to reduce flash contention
US10366775B2 (en) 2016-09-27 2019-07-30 Spin Memory, Inc. Memory device using levels of dynamic redundancy registers for writing a data word that failed a write operation
US10372617B2 (en) 2014-07-02 2019-08-06 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US10372563B2 (en) * 2016-06-17 2019-08-06 Korea University Research And Business Foundation Analyzing system for managing information storage table and control method thereof
US10379763B2 (en) 2014-06-04 2019-08-13 Pure Storage, Inc. Hyperconverged storage system with distributable processing power
US10387062B2 (en) 2015-11-27 2019-08-20 Hitachi, Ltd. Storage system with cells changeable between two different level cell modes based on predicted lifetime
US10437491B2 (en) 2016-09-27 2019-10-08 Spin Memory, Inc. Method of processing incomplete memory operations in a memory device during a power up sequence and a power down sequence using a dynamic redundancy register
US10437723B2 (en) 2016-09-27 2019-10-08 Spin Memory, Inc. Method of flushing the contents of a dynamic redundancy register to a secure storage area during a power down in a memory device
US10446210B2 (en) 2016-09-27 2019-10-15 Spin Memory, Inc. Memory instruction pipeline with a pre-read stage for a write operation for reducing power consumption in a memory device that uses dynamic redundancy registers
US10454498B1 (en) 2018-10-18 2019-10-22 Pure Storage, Inc. Fully pipelined hardware engine design for fast and efficient inline lossless data compression
US10460781B2 (en) 2016-09-27 2019-10-29 Spin Memory, Inc. Memory device with a dual Y-multiplexer structure for performing two simultaneous operations on the same row of a memory bank
CN110392885A (zh) * 2017-04-07 2019-10-29 松下知识产权经营株式会社 增大了使用次数的非易失性存储器
US10467527B1 (en) 2018-01-31 2019-11-05 Pure Storage, Inc. Method and apparatus for artificial intelligence acceleration
US10496330B1 (en) 2017-10-31 2019-12-03 Pure Storage, Inc. Using flash storage devices with different sized erase blocks
US10498580B1 (en) 2014-08-20 2019-12-03 Pure Storage, Inc. Assigning addresses in a storage system
US10515701B1 (en) 2017-10-31 2019-12-24 Pure Storage, Inc. Overlapping raid groups
US10528419B2 (en) 2014-08-07 2020-01-07 Pure Storage, Inc. Mapping around defective flash memory of a storage array
US10528488B1 (en) 2017-03-30 2020-01-07 Pure Storage, Inc. Efficient name coding
US10545687B1 (en) 2017-10-31 2020-01-28 Pure Storage, Inc. Data rebuild when changing erase block sizes during drive replacement
US10546625B2 (en) 2016-09-27 2020-01-28 Spin Memory, Inc. Method of optimizing write voltage based on error buffer occupancy
US10574754B1 (en) 2014-06-04 2020-02-25 Pure Storage, Inc. Multi-chassis array with multi-level load balancing
US10579474B2 (en) 2014-08-07 2020-03-03 Pure Storage, Inc. Die-level monitoring in a storage cluster
US10592171B2 (en) 2016-03-16 2020-03-17 Samsung Electronics Co., Ltd. Multi-stream SSD QoS management
US10628316B2 (en) 2016-09-27 2020-04-21 Spin Memory, Inc. Memory device with a plurality of memory banks where each memory bank is associated with a corresponding memory instruction pipeline and a dynamic redundancy register
US20200126606A1 (en) * 2018-10-19 2020-04-23 Samsung Electronics Co., Ltd. Semiconductor device
CN111078128A (zh) * 2018-10-22 2020-04-28 Zhejiang Uniview Technologies Co., Ltd. Data management method and apparatus, and solid state drive
US10650902B2 (en) 2017-01-13 2020-05-12 Pure Storage, Inc. Method for processing blocks of flash memory
US10656838B2 (en) 2015-07-13 2020-05-19 Samsung Electronics Co., Ltd. Automatic stream detection and assignment algorithm
US10671480B2 (en) 2014-06-04 2020-06-02 Pure Storage, Inc. Utilization of erasure codes in a storage system
US10678452B2 (en) 2016-09-15 2020-06-09 Pure Storage, Inc. Distributed deletion of a file and directory hierarchy
US10684785B2 (en) 2017-02-23 2020-06-16 Hitachi, Ltd. Storage system
US10691812B2 (en) 2014-07-03 2020-06-23 Pure Storage, Inc. Secure data replication in a storage grid
US10698808B2 (en) 2017-04-25 2020-06-30 Samsung Electronics Co., Ltd. Garbage collection—automatic data placement
US10705732B1 (en) 2017-12-08 2020-07-07 Pure Storage, Inc. Multiple-apartment aware offlining of devices for disruptive and destructive operations
US10733053B1 (en) 2018-01-31 2020-08-04 Pure Storage, Inc. Disaster recovery for high-bandwidth distributed archives
US10732905B2 (en) 2016-02-09 2020-08-04 Samsung Electronics Co., Ltd. Automatic I/O stream selection for storage devices
JP2020119007A (ja) * 2019-01-18 2020-08-06 Fujitsu Limited Information processing device, storage control device, and storage control program
US10739995B2 (en) 2016-10-26 2020-08-11 Samsung Electronics Co., Ltd. Method of consolidate data streams for multi-stream enabled SSDs
US10768819B2 (en) 2016-07-22 2020-09-08 Pure Storage, Inc. Hardware support for non-disruptive upgrades
US10824353B2 (en) 2017-09-22 2020-11-03 Toshiba Memory Corporation Memory system
US10824576B2 (en) 2015-07-13 2020-11-03 Samsung Electronics Co., Ltd. Smart I/O stream detection based on multiple attributes
US10831594B2 (en) 2016-07-22 2020-11-10 Pure Storage, Inc. Optimize data protection layouts based on distributed flash wear leveling
US10853266B2 (en) 2015-09-30 2020-12-01 Pure Storage, Inc. Hardware assisted data lookup methods
US10853146B1 (en) 2018-04-27 2020-12-01 Pure Storage, Inc. Efficient data forwarding in a networked device
US10860475B1 (en) 2017-11-17 2020-12-08 Pure Storage, Inc. Hybrid flash translation layer
US10866905B2 (en) 2016-05-25 2020-12-15 Samsung Electronics Co., Ltd. Access parameter based multi-stream storage device access
US10877827B2 (en) 2017-09-15 2020-12-29 Pure Storage, Inc. Read voltage optimization
US10884919B2 (en) 2017-10-31 2021-01-05 Pure Storage, Inc. Memory management in a storage system
US10901907B2 (en) 2017-10-19 2021-01-26 Samsung Electronics Co., Ltd. System and method for identifying hot data and stream in a solid-state drive
US10929053B2 (en) 2017-12-08 2021-02-23 Pure Storage, Inc. Safe destructive actions on drives
US10929031B2 (en) 2017-12-21 2021-02-23 Pure Storage, Inc. Maximizing data reduction in a partially encrypted volume
US10931450B1 (en) 2018-04-27 2021-02-23 Pure Storage, Inc. Distributed, lock-free 2-phase commit of secret shares using multiple stateless controllers
US10944671B2 (en) 2017-04-27 2021-03-09 Pure Storage, Inc. Efficient data forwarding in a networked device
US10949087B2 (en) 2018-05-15 2021-03-16 Samsung Electronics Co., Ltd. Method for rapid reference object storage format for chroma subsampled images
US10976948B1 (en) 2018-01-31 2021-04-13 Pure Storage, Inc. Cluster expansion mechanism
US10979223B2 (en) 2017-01-31 2021-04-13 Pure Storage, Inc. Separate encryption for a solid-state drive
US10976947B2 (en) 2018-10-26 2021-04-13 Pure Storage, Inc. Dynamically selecting segment heights in a heterogeneous RAID group
US10983866B2 (en) 2014-08-07 2021-04-20 Pure Storage, Inc. Mapping defective memory in a storage system
US10983732B2 (en) 2015-07-13 2021-04-20 Pure Storage, Inc. Method and system for accessing a file
US10990566B1 (en) 2017-11-20 2021-04-27 Pure Storage, Inc. Persistent file locks in a storage system
US11003577B2 (en) * 2017-01-24 2021-05-11 Fujitsu Limited Information processing apparatus, information processing method, and non-transitory computer-readable storage medium for storing program of access control with respect to semiconductor device memory
US11010114B2 (en) * 2018-12-31 2021-05-18 Kyocera Document Solutions Inc. Read/write direction-based memory bank control for imaging
US11016667B1 (en) 2017-04-05 2021-05-25 Pure Storage, Inc. Efficient mapping for LUNs in storage memory with holes in address space
US11024390B1 (en) 2017-10-31 2021-06-01 Pure Storage, Inc. Overlapping RAID groups
US11048624B2 (en) 2017-04-25 2021-06-29 Samsung Electronics Co., Ltd. Methods for multi-stream garbage collection
US11068389B2 (en) 2017-06-11 2021-07-20 Pure Storage, Inc. Data resiliency with heterogeneous storage
US11080155B2 (en) 2016-07-24 2021-08-03 Pure Storage, Inc. Identifying error types among flash memory
US11099986B2 (en) 2019-04-12 2021-08-24 Pure Storage, Inc. Efficient transfer of memory contents
US11106574B2 (en) * 2017-06-16 2021-08-31 Oneplus Technology (Shenzhen) Co., Ltd. Memory allocation method, apparatus, electronic device, and computer storage medium
US20210342263A1 (en) * 2019-06-19 2021-11-04 Micron Technology, Inc. Garbage collection adapted to host write activity
US11190580B2 (en) 2017-07-03 2021-11-30 Pure Storage, Inc. Stateful connection resets
US11188432B2 (en) 2020-02-28 2021-11-30 Pure Storage, Inc. Data resiliency by partially deallocating data blocks of a storage device
US11194473B1 (en) * 2019-01-23 2021-12-07 Pure Storage, Inc. Programming frequently read data to low latency portions of a solid-state storage array
US11200110B2 (en) * 2018-01-11 2021-12-14 Commvault Systems, Inc. Remedial action based on maintaining process awareness in data storage management
US11232079B2 (en) 2015-07-16 2022-01-25 Pure Storage, Inc. Efficient distribution of large directories
US11256587B2 (en) 2020-04-17 2022-02-22 Pure Storage, Inc. Intelligent access to a storage device
US11281394B2 (en) 2019-06-24 2022-03-22 Pure Storage, Inc. Replication across partitioning schemes in a distributed storage system
US11294893B2 (en) 2015-03-20 2022-04-05 Pure Storage, Inc. Aggregation of queries
US11301333B2 (en) 2015-06-26 2022-04-12 Commvault Systems, Inc. Incrementally accumulating in-process performance data and hierarchical reporting thereof for a data stream in a secondary copy operation
US11301177B2 (en) * 2015-03-23 2022-04-12 Netapp, Inc. Data structure storage and data management
US11307998B2 (en) 2017-01-09 2022-04-19 Pure Storage, Inc. Storage efficiency of encrypted host system data
US11327665B2 (en) * 2019-09-20 2022-05-10 International Business Machines Corporation Managing data on volumes
US11334254B2 (en) 2019-03-29 2022-05-17 Pure Storage, Inc. Reliability based flash page sizing
US11354058B2 (en) 2018-09-06 2022-06-07 Pure Storage, Inc. Local relocation of data stored at a storage device of a storage system
US11372753B2 (en) * 2018-08-29 2022-06-28 Kioxia Corporation Memory system and method
US11399063B2 (en) 2014-06-04 2022-07-26 Pure Storage, Inc. Network authentication for a storage system
US11416338B2 (en) 2020-04-24 2022-08-16 Pure Storage, Inc. Resiliency scheme to enhance storage performance
US11416144B2 (en) 2019-12-12 2022-08-16 Pure Storage, Inc. Dynamic use of segment or zone power loss protection in a flash device
US11436023B2 (en) 2018-05-31 2022-09-06 Pure Storage, Inc. Mechanism for updating host file system and flash translation layer based on underlying NAND technology
US11438279B2 (en) 2018-07-23 2022-09-06 Pure Storage, Inc. Non-disruptive conversion of a clustered service from single-chassis to multi-chassis
US11437093B2 (en) * 2017-03-10 2022-09-06 Micron Technology, Inc. Methods for mitigating power loss events during operation of memory devices and memory devices employing the same
US11449253B2 (en) 2018-12-14 2022-09-20 Commvault Systems, Inc. Disk usage growth prediction system
US11449232B1 (en) 2016-07-22 2022-09-20 Pure Storage, Inc. Optimal scheduling of flash operations
US11449256B2 (en) 2018-05-15 2022-09-20 Samsung Electronics Co., Ltd. Method for accelerating image storing and retrieving differential latency storage devices based on access rates
US11461010B2 (en) 2015-07-13 2022-10-04 Samsung Electronics Co., Ltd. Data property-based data placement in a nonvolatile memory device
US11467913B1 (en) 2017-06-07 2022-10-11 Pure Storage, Inc. Snapshots with crash consistency in a storage system
US11474706B2 (en) 2013-04-30 2022-10-18 Hewlett Packard Enterprise Development Lp Memory access rate
US11474986B2 (en) 2020-04-24 2022-10-18 Pure Storage, Inc. Utilizing machine learning to streamline telemetry processing of storage media
US11487455B2 (en) 2020-12-17 2022-11-01 Pure Storage, Inc. Dynamic block allocation to optimize storage system performance
US11494109B1 (en) 2018-02-22 2022-11-08 Pure Storage, Inc. Erase block trimming for heterogenous flash memory storage devices
US11500570B2 (en) 2018-09-06 2022-11-15 Pure Storage, Inc. Efficient relocation of data utilizing different programming modes
US11507597B2 (en) 2021-03-31 2022-11-22 Pure Storage, Inc. Data replication to meet a recovery point objective
US11507326B2 (en) 2017-05-03 2022-11-22 Samsung Electronics Co., Ltd. Multistreaming in heterogeneous environments
US11507297B2 (en) 2020-04-15 2022-11-22 Pure Storage, Inc. Efficient management of optimal read levels for flash storage systems
US11513974B2 (en) 2020-09-08 2022-11-29 Pure Storage, Inc. Using nonce to control erasure of data blocks of a multi-controller storage system
US11520514B2 (en) 2018-09-06 2022-12-06 Pure Storage, Inc. Optimized relocation of data based on data characteristics
US20220391131A1 (en) * 2021-06-04 2022-12-08 Fujitsu Limited Computer-readable recording medium, information processing device control method and information processing device
US11544143B2 (en) 2014-08-07 2023-01-03 Pure Storage, Inc. Increased data reliability
US11550752B2 (en) 2014-07-03 2023-01-10 Pure Storage, Inc. Administrative actions via a reserved filename
US11567917B2 (en) 2015-09-30 2023-01-31 Pure Storage, Inc. Writing data and metadata into storage
US11581943B2 (en) 2016-10-04 2023-02-14 Pure Storage, Inc. Queues reserved for direct access via a user application
US20230069603A1 (en) * 2021-08-31 2023-03-02 Micron Technology, Inc. Overwriting at a memory system
US11604690B2 (en) 2016-07-24 2023-03-14 Pure Storage, Inc. Online failure span determination
US11604598B2 (en) 2014-07-02 2023-03-14 Pure Storage, Inc. Storage cluster with zoned drives
US11614893B2 (en) 2010-09-15 2023-03-28 Pure Storage, Inc. Optimizing storage device access based on latency
US11614880B2 (en) 2020-12-31 2023-03-28 Pure Storage, Inc. Storage system with selectable write paths
US11630593B2 (en) 2021-03-12 2023-04-18 Pure Storage, Inc. Inline flash memory qualification in a storage system
US11650976B2 (en) 2011-10-14 2023-05-16 Pure Storage, Inc. Pattern matching using hash tables in storage system
US11650843B2 (en) 2019-08-22 2023-05-16 Micron Technology, Inc. Hierarchical memory systems
US11652884B2 (en) 2014-06-04 2023-05-16 Pure Storage, Inc. Customized hash algorithms
US11675762B2 (en) 2015-06-26 2023-06-13 Pure Storage, Inc. Data structures for key management
US11681448B2 (en) 2020-09-08 2023-06-20 Pure Storage, Inc. Multiple device IDs in a multi-fabric module storage system
US11704192B2 (en) 2019-12-12 2023-07-18 Pure Storage, Inc. Budgeting open blocks based on power loss protection
US11714708B2 (en) 2017-07-31 2023-08-01 Pure Storage, Inc. Intra-device redundancy scheme
US11714572B2 (en) 2019-06-19 2023-08-01 Pure Storage, Inc. Optimized data resiliency in a modular storage system
US11722455B2 (en) 2017-04-27 2023-08-08 Pure Storage, Inc. Storage cluster address resolution
US11734169B2 (en) 2016-07-26 2023-08-22 Pure Storage, Inc. Optimizing spool and memory space management
US11768763B2 (en) 2020-07-08 2023-09-26 Pure Storage, Inc. Flash secure erase
US11775189B2 (en) 2019-04-03 2023-10-03 Pure Storage, Inc. Segment level heterogeneity
US11782625B2 (en) 2017-06-11 2023-10-10 Pure Storage, Inc. Heterogeneity supportive resiliency groups
US11797212B2 (en) 2016-07-26 2023-10-24 Pure Storage, Inc. Data migration for zoned drives
US11822444B2 (en) 2014-06-04 2023-11-21 Pure Storage, Inc. Data rebuild independent of error detection
US11832410B2 (en) 2021-09-14 2023-11-28 Pure Storage, Inc. Mechanical energy absorbing bracket apparatus
US11836348B2 (en) 2018-04-27 2023-12-05 Pure Storage, Inc. Upgrade for system with differing capacities
US11842053B2 (en) 2016-12-19 2023-12-12 Pure Storage, Inc. Zone namespace
US11847013B2 (en) 2018-02-18 2023-12-19 Pure Storage, Inc. Readable data determination
US11847331B2 (en) 2019-12-12 2023-12-19 Pure Storage, Inc. Budgeting open blocks of a storage unit based on power loss prevention
US11847324B2 (en) 2020-12-31 2023-12-19 Pure Storage, Inc. Optimizing resiliency groups for data regions of a storage system
US11861188B2 (en) 2016-07-19 2024-01-02 Pure Storage, Inc. System having modular accelerators
US11868309B2 (en) 2018-09-06 2024-01-09 Pure Storage, Inc. Queue management for data relocation
US11886308B2 (en) 2014-07-02 2024-01-30 Pure Storage, Inc. Dual class of service for unified file and object messaging
US11886334B2 (en) 2016-07-26 2024-01-30 Pure Storage, Inc. Optimizing spool and memory space management
US11893126B2 (en) 2019-10-14 2024-02-06 Pure Storage, Inc. Data deletion for a multi-tenant environment
US11893023B2 (en) 2015-09-04 2024-02-06 Pure Storage, Inc. Deterministic searching using compressed indexes
EP4300319A4 (en) * 2022-05-18 2024-02-28 Changxin Memory Technologies, Inc. Hot plugging method and apparatus for memory module, and memory module
US11922070B2 (en) 2016-10-04 2024-03-05 Pure Storage, Inc. Granting access to a storage device based on reservations
US11947814B2 (en) 2017-06-11 2024-04-02 Pure Storage, Inc. Optimizing resiliency group formation stability
US11955187B2 (en) 2017-01-13 2024-04-09 Pure Storage, Inc. Refresh of differing capacity NAND
US11960371B2 (en) 2014-06-04 2024-04-16 Pure Storage, Inc. Message persistence in a zoned system
US11995336B2 (en) 2018-04-25 2024-05-28 Pure Storage, Inc. Bucket views
US11995318B2 (en) 2016-10-28 2024-05-28 Pure Storage, Inc. Deallocated block determination
US11994723B2 (en) 2021-12-30 2024-05-28 Pure Storage, Inc. Ribbon cable alignment apparatus
US12001684B2 (en) 2019-12-12 2024-06-04 Pure Storage, Inc. Optimizing dynamic power loss protection adjustment in a storage system
US12001688B2 (en) 2019-04-29 2024-06-04 Pure Storage, Inc. Utilizing data views to optimize secure data access in a storage system
US12008266B2 (en) 2010-09-15 2024-06-11 Pure Storage, Inc. Efficient read by reconstruction
US12032724B2 (en) 2017-08-31 2024-07-09 Pure Storage, Inc. Encryption in a storage array
US12032848B2 (en) 2021-06-21 2024-07-09 Pure Storage, Inc. Intelligent block allocation in a heterogeneous storage system
US12039165B2 (en) 2016-10-04 2024-07-16 Pure Storage, Inc. Utilizing allocation shares to improve parallelism in a zoned drive storage system
US12038927B2 (en) 2015-09-04 2024-07-16 Pure Storage, Inc. Storage system having multiple tables for efficient searching
US12056365B2 (en) 2020-04-24 2024-08-06 Pure Storage, Inc. Resiliency for a storage system
US12061814B2 (en) 2021-01-25 2024-08-13 Pure Storage, Inc. Using data similarity to select segments for garbage collection
US12067282B2 (en) 2020-12-31 2024-08-20 Pure Storage, Inc. Write path selection
US12067274B2 (en) 2018-09-06 2024-08-20 Pure Storage, Inc. Writing segments and erase blocks based on ordering
US12079494B2 (en) 2018-04-27 2024-09-03 Pure Storage, Inc. Optimizing storage system upgrades to preserve resources
US12079125B2 (en) 2019-06-05 2024-09-03 Pure Storage, Inc. Tiered caching of data in a storage system
US12087382B2 (en) 2019-04-11 2024-09-10 Pure Storage, Inc. Adaptive threshold for bad flash memory blocks
US12093545B2 (en) 2020-12-31 2024-09-17 Pure Storage, Inc. Storage system with selectable write modes
US12099441B2 (en) 2023-07-27 2024-09-24 Pure Storage, Inc. Writing data to a distributed storage system

Families Citing this family (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8909614B2 (en) 2010-03-29 2014-12-09 Nec Corporation Data access location selecting system, method, and program
TWI490690B (zh) * 2011-04-20 2015-07-01 Taejin Infotech Co Ltd RAID controller for a semiconductor storage device
CN103946812B (zh) 2011-09-30 2017-06-09 Intel Corporation Apparatus and method for implementing a multi-level memory hierarchy
EP3451176B1 (en) 2011-09-30 2023-05-24 Intel Corporation Apparatus and method for implementing a multi-level memory hierarchy having different operating modes
EP2761467B1 (en) 2011-09-30 2019-10-23 Intel Corporation Generation of far memory access signals based on usage statistic tracking
US10412235B2 (en) 2011-09-30 2019-09-10 Hewlett-Packard Development Company, L.P. Identification bit memory cells in data storage chip
EP2761480A4 (en) 2011-09-30 2015-06-24 Intel Corp Apparatus and method for implementing a multi-level memory hierarchy on common memory channels
EP2761472B1 (en) 2011-09-30 2020-04-01 Intel Corporation Memory channel that supports near memory and far memory access
CN103999042B (zh) * 2011-10-26 2018-03-30 Hewlett-Packard Development Company, L.P. Loading boot data
JP2014530422A (ja) * 2011-10-27 2014-11-17 Huawei Technologies Co., Ltd. Method and buffer system for controlling buffer mapping
JP5735711B2 (ja) * 2012-06-26 2015-06-17 Toshiba Mitsubishi-Electric Industrial Systems Corporation Data collection device and data collection program
US9524248B2 (en) * 2012-07-18 2016-12-20 Micron Technology, Inc. Memory management for a hierarchical memory system
CN103678143B (zh) * 2012-09-25 2018-10-12 Lenovo (Beijing) Co., Ltd. File storage method, apparatus, and electronic device
TW201417102A (zh) 2012-10-23 2014-05-01 Ind Tech Res Inst Resistive memory device
KR102011135B1 (ko) * 2012-12-11 2019-08-14 Samsung Electronics Co., Ltd. Mobile device and method of managing data through swap thereof
CN103902462B (zh) * 2012-12-27 2018-03-09 Huawei Technologies Co., Ltd. Memory management method, memory management apparatus, and computer
TWI511035B (zh) * 2013-03-08 2015-12-01 Acer Inc Method for dynamically adjusting cache levels
CN104063182B (zh) * 2013-03-20 2017-04-12 Acer Inc. Method for dynamically adjusting cache levels
CN104216837A (zh) * 2013-05-31 2014-12-17 Huawei Technologies Co., Ltd. Memory system, method for processing memory access requests, and computer system
JP5950470B2 (ja) * 2014-03-24 2016-07-13 Huawei Technologies Co., Ltd. Method and buffer system for controlling buffer mapping
CN105094686B (zh) 2014-05-09 2018-04-10 Huawei Technologies Co., Ltd. Data caching method, cache, and computer system
CN104123264A (zh) * 2014-08-01 2014-10-29 Inspur (Beijing) Electronic Information Industry Co., Ltd. Cache management method and apparatus based on a heterogeneous converged architecture
EP3248106A4 (en) * 2015-01-20 2018-09-12 Ultrata LLC Distributed index for fault tolerant object memory fabric
US10007435B2 (en) 2015-05-21 2018-06-26 Micron Technology, Inc. Translation lookaside buffer in memory
CN106294202A (zh) * 2015-06-12 2017-01-04 Lenovo (Beijing) Co., Ltd. Data storage method and apparatus
CN106325764B (zh) * 2015-07-08 2021-02-26 Phison Electronics Corp. Memory management method, memory control circuit unit, and memory storage device
JP6403162B2 (ja) 2015-07-23 2018-10-10 Toshiba Memory Corporation Memory system
US9940028B2 (en) * 2015-11-13 2018-04-10 Samsung Electronics Co., Ltd Multimode storage device
TWI720086B (zh) * 2015-12-10 2021-03-01 Ascava, Inc. Reduction of audio data and of data stored on a block processing storage system
JP6115740B1 (ja) * 2015-12-17 2017-04-19 Winbond Electronics Corporation Semiconductor memory device
JP6515799B2 (ja) * 2015-12-18 2019-05-22 Kyocera Document Solutions Inc. Electronic device and memory lifetime warning program
KR102652293B1 (ko) * 2016-03-03 2024-03-29 SK Hynix Inc. Memory management method
CN107562367B (zh) * 2016-07-01 2021-04-02 Alibaba Group Holding Ltd. Method and apparatus for reading and writing data based on a software-defined storage system
JP2018049385A (ja) * 2016-09-20 2018-03-29 Toshiba Memory Corporation Memory system and processor system
TWI658405B (zh) * 2017-03-17 2019-05-01 Hefei Core Storage Electronic Ltd. Data programming method, memory storage device, and memory control circuit unit
CN107291381B (zh) * 2017-05-18 2020-04-28 Ramaxel Technology (Shenzhen) Co., Ltd. Method for implementing a dynamic acceleration area of a solid state drive, and solid state drive
CN107168654B (zh) * 2017-05-26 2019-08-13 Huazhong University of Science and Technology Heterogeneous memory allocation method and system based on data object hotness
CN107506137A (zh) * 2017-08-11 2017-12-22 Ramaxel Technology (Shenzhen) Co., Ltd. Method for improving the write performance of a solid state drive
US10545685B2 (en) * 2017-08-30 2020-01-28 Micron Technology, Inc. SLC cache management
TWI647567B (zh) * 2017-12-13 2019-01-11 National Chung Cheng University Method for locating hot and cold access regions using memory addresses
CN109684237B (zh) * 2018-11-20 2021-06-01 Huawei Technologies Co., Ltd. Data access method and apparatus based on a multi-core processor
JP7305340B2 (ja) * 2018-12-11 2023-07-10 Canon Inc. Information processing device
KR20200077276A (ko) * 2018-12-20 2020-06-30 SK Hynix Inc. Storage device and operating method thereof
US11270771B2 (en) * 2019-01-29 2022-03-08 Silicon Storage Technology, Inc. Neural network classifier using array of stacked gate non-volatile memory cells
DE102019102861A1 (de) * 2019-02-05 2020-08-06 Hyperstone GmbH Method and device for estimating the wear of a non-volatile information memory
US11113007B2 (en) * 2019-05-13 2021-09-07 Micron Technology, Inc. Partial execution of a write command from a host system
US11106595B2 (en) 2019-08-22 2021-08-31 Micron Technology, Inc. Hierarchical memory systems
CN110825662A (zh) * 2019-11-04 2020-02-21 Shenzhen Chipsbank Technology Co., Ltd. Data updating method, system, and related device
CN113467706A (zh) * 2020-03-30 2021-10-01 Huawei Technologies Co., Ltd. Solid state drive management method and solid state drive
DE102020123220A1 (de) * 2020-09-04 2022-03-10 Harman Becker Automotive Systems GmbH Memory system and method for operating the same
US20210216452A1 (en) * 2021-03-27 2021-07-15 Intel Corporation Two-level main memory hierarchy management
KR20220138759A (ko) * 2021-04-06 2022-10-13 SK Hynix Inc. Memory system and operating method thereof
US12093547B2 (en) * 2021-04-15 2024-09-17 Sk Hynix Nand Product Solutions Corp. User configurable SLC memory size
CN113176964A (zh) * 2021-04-29 2021-07-27 Shenzhen Unionmemory Information System Co., Ltd. MPU-based SSD firmware error detection method and apparatus, computer device, and storage medium
US12061551B2 (en) * 2022-08-26 2024-08-13 Micron Technology, Inc. Telemetry-capable memory sub-system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6807106B2 (en) * 2001-12-14 2004-10-19 Sandisk Corporation Hybrid density memory card
US20080114930A1 (en) * 2006-11-13 2008-05-15 Hitachi Global Storage Technologies Netherlands B.V. Disk drive with cache having volatile and nonvolatile memory
US20080215800A1 (en) * 2000-01-06 2008-09-04 Super Talent Electronics, Inc. Hybrid SSD Using A Combination of SLC and MLC Flash Memory Arrays
US20090043831A1 (en) * 2007-08-11 2009-02-12 Mcm Portfolio Llc Smart Solid State Drive And Method For Handling Critical Files
US20090144545A1 (en) * 2007-11-29 2009-06-04 International Business Machines Corporation Computer system security using file system access pattern heuristics
US20090150599A1 (en) * 2005-04-21 2009-06-11 Bennett Jon C R Method and system for storage of data in non-volatile media
US20090327586A1 (en) * 2008-06-25 2009-12-31 Silicon Motion, Inc. Memory device and data storing method
US20100281233A1 (en) * 2009-04-29 2010-11-04 Microsoft Corporation Storage optimization across media with differing capabilities
US20100293337A1 (en) * 2009-05-13 2010-11-18 Seagate Technology Llc Systems and methods of tiered caching
US8122220B1 (en) * 2006-12-20 2012-02-21 Marvell International Ltd. Memory usage in imaging devices

Family Cites Families (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07146820A (ja) 1993-04-08 1995-06-06 Hitachi Ltd Flash memory control method and information processing device using the same
JP3507132B2 (ja) * 1994-06-29 2004-03-15 Hitachi, Ltd. Storage device using flash memory and storage control method therefor
JP3270397B2 (ja) * 1998-06-08 2002-04-02 Matsushita Graphic Communication Systems, Inc. Data storage device
US7932911B2 (en) * 1998-08-24 2011-04-26 Microunity Systems Engineering, Inc. Processor for executing switch and translate instructions requiring wide operands
US6571323B2 (en) * 1999-03-05 2003-05-27 Via Technologies, Inc. Memory-access management method and system for synchronous dynamic Random-Access memory or the like
KR100383774B1 (ko) 2000-01-26 2003-05-12 Samsung Electronics Co., Ltd. System having memory devices with a common interface
JP4869466B2 (ja) * 2000-02-24 2012-02-08 Fujitsu Semiconductor Limited Storage device control method, data management system, recording medium, and storage device
JP4078010B2 (ja) * 2000-03-03 2008-04-23 Hitachi Global Storage Technologies Japan, Ltd. Magnetic disk device and information recording method
US6831865B2 (en) * 2002-10-28 2004-12-14 Sandisk Corporation Maintaining erase counts in non-volatile storage systems
US7020762B2 (en) * 2002-12-24 2006-03-28 Intel Corporation Method and apparatus for determining a dynamic random access memory page management implementation
US7174437B2 (en) * 2003-10-16 2007-02-06 Silicon Graphics, Inc. Memory access management in a shared memory multi-processor system
CN1751508A (zh) * 2003-10-20 2006-03-22 Matsushita Electric Industrial Co., Ltd. Multimedia data recording device, monitoring system, and multimedia data recording method
US7032087B1 (en) * 2003-10-28 2006-04-18 Sandisk Corporation Erase count differential table within a non-volatile memory system
US20050132128A1 (en) * 2003-12-15 2005-06-16 Jin-Yub Lee Flash memory device and flash memory system including buffer memory
US20050160188A1 (en) * 2004-01-20 2005-07-21 Zohar Bogin Method and apparatus to manage memory access requests
US20080082736A1 (en) * 2004-03-11 2008-04-03 Chow David Q Managing bad blocks in various flash memory cells for electronic data flash card
TWI253564B (en) * 2004-06-29 2006-04-21 Integrated Circuit Solution In Method of efficient data management with flash storage system
JP4066381B2 (ja) * 2005-03-01 2008-03-26 Mitsubishi Electric Corporation In-vehicle electronic control device
US7224604B2 (en) * 2005-03-14 2007-05-29 Sandisk Il Ltd. Method of achieving wear leveling in flash memory using relative grades
US7861122B2 (en) * 2006-01-27 2010-12-28 Apple Inc. Monitoring health of non-volatile memory
US7519792B2 (en) * 2006-02-21 2009-04-14 Intel Corporation Memory region access management
JP2007305210A (ja) * 2006-05-10 2007-11-22 Toshiba Corp Semiconductor memory device
US20090132621A1 (en) * 2006-07-28 2009-05-21 Craig Jensen Selecting storage location for file storage based on storage longevity and speed
JP4839164B2 (ja) * 2006-09-15 2011-12-21 Hitachi, Ltd. Performance evaluation system using a hardware monitor, and reconfigurable computer system
KR100791325B1 (ko) * 2006-10-27 2008-01-03 Samsung Electronics Co., Ltd. Apparatus and method for managing nonvolatile memory
US8135900B2 (en) 2007-03-28 2012-03-13 Kabushiki Kaisha Toshiba Integrated memory management and memory management method
JP5032172B2 (ja) 2007-03-28 2012-09-26 Kabushiki Kaisha Toshiba Integrated memory management device and method, and data processing system
KR101498673B1 (ko) * 2007-08-14 2015-03-09 Samsung Electronics Co., Ltd. Semiconductor drive, data storage method thereof, and computing system including the same
JP2009087509A (ja) * 2007-10-03 2009-04-23 Toshiba Corp Semiconductor memory device
US7849275B2 (en) * 2007-11-19 2010-12-07 Sandforce, Inc. System, method and a computer program product for writing data to different storage devices based on write frequency
CN101521039B (zh) * 2008-02-29 2012-05-23 Phison Electronics Corp. Data storage system, controller, and method
US8135907B2 (en) * 2008-06-30 2012-03-13 Oracle America, Inc. Method and system for managing wear-level aware file systems
US8082386B2 (en) * 2008-10-21 2011-12-20 Skymedi Corporation Method of performing wear leveling with variable threshold
US8283933B2 (en) 2009-03-13 2012-10-09 Qualcomm, Incorporated Systems and methods for built in self test jitter measurement
US8166232B2 (en) * 2009-04-02 2012-04-24 Hitachi, Ltd. Metrics and management for flash memory storage life
US8151137B2 (en) * 2009-05-28 2012-04-03 Lsi Corporation Systems and methods for governing the life cycle of a solid state drive
JP2012033047A (ja) 2010-07-30 2012-02-16 Toshiba Corp Information processing device, memory management device, memory management method, and program
JP2012033002A (ja) 2010-07-30 2012-02-16 Toshiba Corp Memory management device and memory management method
JP2012033001A (ja) 2010-07-30 2012-02-16 Toshiba Corp Information processing device and information processing method

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080215800A1 (en) * 2000-01-06 2008-09-04 Super Talent Electronics, Inc. Hybrid SSD Using A Combination of SLC and MLC Flash Memory Arrays
US6807106B2 (en) * 2001-12-14 2004-10-19 Sandisk Corporation Hybrid density memory card
US20090150599A1 (en) * 2005-04-21 2009-06-11 Bennett Jon C R Method and system for storage of data in non-volatile media
US20080114930A1 (en) * 2006-11-13 2008-05-15 Hitachi Global Storage Technologies Netherlands B.V. Disk drive with cache having volatile and nonvolatile memory
US8122220B1 (en) * 2006-12-20 2012-02-21 Marvell International Ltd. Memory usage in imaging devices
US20090043831A1 (en) * 2007-08-11 2009-02-12 Mcm Portfolio Llc Smart Solid State Drive And Method For Handling Critical Files
US20090144545A1 (en) * 2007-11-29 2009-06-04 International Business Machines Corporation Computer system security using file system access pattern heuristics
US20090327586A1 (en) * 2008-06-25 2009-12-31 Silicon Motion, Inc. Memory device and data storing method
US20100281233A1 (en) * 2009-04-29 2010-11-04 Microsoft Corporation Storage optimization across media with differing capabilities
US20100293337A1 (en) * 2009-05-13 2010-11-18 Seagate Technology Llc Systems and methods of tiered caching

Cited By (413)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130030686A1 (en) * 2010-04-05 2013-01-31 Morotomi Kohei Collision judgment apparatus for vehicle
US8868325B2 (en) * 2010-04-05 2014-10-21 Toyota Jidosha Kabushiki Kaisha Collision judgment apparatus for vehicle
US8612692B2 (en) 2010-07-30 2013-12-17 Kabushiki Kaisha Toshiba Variable write back timing to nonvolatile semiconductor memory
US8645612B2 (en) 2010-07-30 2014-02-04 Kabushiki Kaisha Toshiba Information processing device and information processing method
US12008266B2 (en) 2010-09-15 2024-06-11 Pure Storage, Inc. Efficient read by reconstruction
US11614893B2 (en) 2010-09-15 2023-03-28 Pure Storage, Inc. Optimizing storage device access based on latency
US11650976B2 (en) 2011-10-14 2023-05-16 Pure Storage, Inc. Pattern matching using hash tables in storage system
US9626201B2 (en) * 2012-03-27 2017-04-18 Fujitsu Limited Processor emulation device and storage medium
US20130262075A1 (en) * 2012-03-27 2013-10-03 Fujitsu Limited Processor emulation device and storage medium
US9367439B2 (en) * 2012-04-30 2016-06-14 Oracle International Corporation Physical memory usage prediction
US20130290669A1 (en) * 2012-04-30 2013-10-31 Oracle International Corporation Physical memory usage prediction
US20130346674A1 (en) * 2012-06-26 2013-12-26 Phison Electronics Corp. Data writing method, memory controller and memory storage device
US9141530B2 (en) * 2012-06-26 2015-09-22 Phison Electronics Corp. Data writing method, memory controller and memory storage device
US10037271B1 (en) * 2012-06-27 2018-07-31 Teradata Us, Inc. Data-temperature-based control of buffer cache memory in a database system
US9818485B2 (en) 2012-07-11 2017-11-14 Samsung Electronics Co., Ltd. Nonvolatle memory device and memory system having the same, and related memory management, erase and programming methods
US9305616B2 (en) 2012-07-17 2016-04-05 Samsung Electronics Co., Ltd. Semiconductor memory cell array having fast array area and semiconductor memory including the same
US20140075100A1 (en) * 2012-09-12 2014-03-13 Kabushiki Kaisha Toshiba Memory system, computer system, and memory management method
WO2014052157A1 (en) 2012-09-28 2014-04-03 Intel Corporation Methods, systems and apparatus to cache code in non-volatile memory
EP2901289A4 (en) * 2012-09-28 2016-04-13 Intel Corp Methods, systems and apparatus to cache code in non-volatile memory
US10860477B2 (en) 2012-10-08 2020-12-08 Western Digital Technologies, Inc. Apparatus and method for low power low latency high capacity storage class memory
JP2014078231A (ja) * 2012-10-08 2014-05-01 Hgst Netherlands B V Apparatus and method for low-power, low-latency, high-capacity storage class memory
US8984251B2 (en) 2012-12-04 2015-03-17 Apple Inc. Hinting of deleted data from host to storage device
US9235528B2 (en) * 2012-12-21 2016-01-12 Advanced Micro Devices, Inc. Write endurance management techniques in the logic layer of a stacked memory
US20140181457A1 (en) * 2012-12-21 2014-06-26 Advanced Micro Devices, Inc. Write Endurance Management Techniques in the Logic Layer of a Stacked Memory
US20150371689A1 * 2013-01-31 2015-12-24 Hewlett-Packard Development Company, L.P. Adaptive granularity row-buffer cache
CN105027211A (zh) * 2013-01-31 2015-11-04 Hewlett-Packard Development Company, L.P. Adaptive granularity row-buffer cache
US9620181B2 (en) * 2013-01-31 2017-04-11 Hewlett Packard Enterprise Development Lp Adaptive granularity row-buffer cache
US9529535B2 (en) 2013-03-13 2016-12-27 Hitachi, Ltd. Storage system and method of control for storage system
US8943266B2 (en) 2013-03-13 2015-01-27 Hitachi, Ltd. Storage system and method of control for storage system
US20140281581A1 (en) * 2013-03-18 2014-09-18 Genusion, Inc. Storage Device
CN105190566A (zh) * 2013-03-28 2015-12-23 Hewlett-Packard Development Company, L.P. Regulating memory activation rates
WO2014158154A1 (en) * 2013-03-28 2014-10-02 Hewlett-Packard Development Company, L.P. Regulating memory activation rates
US9804972B2 (en) 2013-03-28 2017-10-31 Hewlett-Packard Enterprise Development LP Regulating memory activation rates
US11474706B2 (en) 2013-04-30 2022-10-18 Hewlett Packard Enterprise Development Lp Memory access rate
US9479466B1 (en) * 2013-05-23 2016-10-25 Kabam, Inc. System and method for generating virtual space messages based on information in a users contact list
JP5969130B2 (ja) * 2013-07-18 2016-08-17 Hitachi, Ltd. Information processing device
US20150074339A1 (en) * 2013-09-10 2015-03-12 Hicamp Systems, Inc. Hybrid main memory using a fine-grain level of remapping
US9898410B2 (en) * 2013-09-10 2018-02-20 Intel Corporation Hybrid main memory using a fine-grain level of remapping
US10013344B2 (en) 2014-01-14 2018-07-03 Avago Technologies General Ip (Singapore) Pte. Ltd. Enhanced SSD caching
JP2015204118A (ja) * 2014-04-15 2015-11-16 Samsung Electronics Co., Ltd. Storage controller and storage device
US10838633B2 (en) 2014-06-04 2020-11-17 Pure Storage, Inc. Configurable hyperconverged multi-tenant storage system
US10671480B2 (en) 2014-06-04 2020-06-02 Pure Storage, Inc. Utilization of erasure codes in a storage system
US10303547B2 (en) 2014-06-04 2019-05-28 Pure Storage, Inc. Rebuilding data across storage nodes
US11960371B2 (en) 2014-06-04 2024-04-16 Pure Storage, Inc. Message persistence in a zoned system
US11385799B2 (en) 2014-06-04 2022-07-12 Pure Storage, Inc. Storage nodes supporting multiple erasure coding schemes
US9477554B2 (en) 2014-06-04 2016-10-25 Pure Storage, Inc. Mechanism for persisting messages in a storage system
US10379763B2 (en) 2014-06-04 2019-08-13 Pure Storage, Inc. Hyperconverged storage system with distributable processing power
US10809919B2 (en) 2014-06-04 2020-10-20 Pure Storage, Inc. Scalable storage capacities
US11593203B2 (en) 2014-06-04 2023-02-28 Pure Storage, Inc. Coexisting differing erasure codes
US10430306B2 (en) 2014-06-04 2019-10-01 Pure Storage, Inc. Mechanism for persisting messages in a storage system
US9798477B2 (en) 2014-06-04 2017-10-24 Pure Storage, Inc. Scalable non-uniform storage sizes
US11310317B1 (en) 2014-06-04 2022-04-19 Pure Storage, Inc. Efficient load balancing
US11138082B2 (en) 2014-06-04 2021-10-05 Pure Storage, Inc. Action determination based on redundancy level
US11036583B2 (en) 2014-06-04 2021-06-15 Pure Storage, Inc. Rebuilding data across storage nodes
US11057468B1 (en) 2014-06-04 2021-07-06 Pure Storage, Inc. Vast data storage system
US9525738B2 (en) 2014-06-04 2016-12-20 Pure Storage, Inc. Storage system architecture
US11399063B2 (en) 2014-06-04 2022-07-26 Pure Storage, Inc. Network authentication for a storage system
US11652884B2 (en) 2014-06-04 2023-05-16 Pure Storage, Inc. Customized hash algorithms
US11671496B2 2014-06-04 2023-06-06 Pure Storage, Inc. Load balancing for distributed computing
US11500552B2 (en) 2014-06-04 2022-11-15 Pure Storage, Inc. Configurable hyperconverged multi-tenant storage system
US12066895B2 (en) 2014-06-04 2024-08-20 Pure Storage, Inc. Heterogenous memory accommodating multiple erasure codes
US11677825B2 (en) 2014-06-04 2023-06-13 Pure Storage, Inc. Optimized communication pathways in a vast storage system
US11714715B2 (en) 2014-06-04 2023-08-01 Pure Storage, Inc. Storage system accommodating varying storage capacities
US9967342B2 (en) 2014-06-04 2018-05-08 Pure Storage, Inc. Storage system architecture
US11822444B2 (en) 2014-06-04 2023-11-21 Pure Storage, Inc. Data rebuild independent of error detection
US10574754B1 (en) 2014-06-04 2020-02-25 Pure Storage, Inc. Multi-chassis array with multi-level load balancing
US10817431B2 (en) 2014-07-02 2020-10-27 Pure Storage, Inc. Distributed storage addressing
US11079962B2 (en) 2014-07-02 2021-08-03 Pure Storage, Inc. Addressable non-volatile random access memory
US11886308B2 (en) 2014-07-02 2024-01-30 Pure Storage, Inc. Dual class of service for unified file and object messaging
US11922046B2 (en) 2014-07-02 2024-03-05 Pure Storage, Inc. Erasure coded data within zoned drives
US9836245B2 (en) 2014-07-02 2017-12-05 Pure Storage, Inc. Non-volatile RAM and flash memory in a non-volatile solid-state storage
US11604598B2 (en) 2014-07-02 2023-03-14 Pure Storage, Inc. Storage cluster with zoned drives
US10572176B2 (en) 2014-07-02 2020-02-25 Pure Storage, Inc. Storage cluster operation using erasure coded data
US10372617B2 (en) 2014-07-02 2019-08-06 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US9396078B2 (en) 2014-07-02 2016-07-19 Pure Storage, Inc. Redundant, fault-tolerant, distributed remote procedure call cache in a storage system
US11385979B2 (en) 2014-07-02 2022-07-12 Pure Storage, Inc. Mirrored remote procedure call cache
US10114757B2 (en) 2014-07-02 2018-10-30 Pure Storage, Inc. Nonrepeating identifiers in an address space of a non-volatile solid-state storage
US10114714B2 (en) 2014-07-02 2018-10-30 Pure Storage, Inc. Redundant, fault-tolerant, distributed remote procedure call cache in a storage system
US10877861B2 (en) 2014-07-02 2020-12-29 Pure Storage, Inc. Remote procedure call cache for distributed system
US9817750B2 (en) * 2014-07-03 2017-11-14 Pure Storage, Inc. Profile-dependent write placement of data into a non-volatile solid-state storage
US9747229B1 (en) 2014-07-03 2017-08-29 Pure Storage, Inc. Self-describing data format for DMA in a non-volatile solid-state storage
US10691812B2 (en) 2014-07-03 2020-06-23 Pure Storage, Inc. Secure data replication in a storage grid
US10185506B2 (en) 2014-07-03 2019-01-22 Pure Storage, Inc. Scheduling policy for queues in a non-volatile solid-state storage
US10198380B1 (en) 2014-07-03 2019-02-05 Pure Storage, Inc. Direct memory access data movement
US10853285B2 (en) 2014-07-03 2020-12-01 Pure Storage, Inc. Direct memory access data format
US20160004631A1 (en) * 2014-07-03 2016-01-07 Pure Storage, Inc. Profile-Dependent Write Placement of Data into a Non-Volatile Solid-State Storage
US11550752B2 (en) 2014-07-03 2023-01-10 Pure Storage, Inc. Administrative actions via a reserved filename
US11928076B2 (en) 2014-07-03 2024-03-12 Pure Storage, Inc. Actions for reserved filenames
WO2016004411A1 (en) * 2014-07-03 2016-01-07 Pure Storage, Inc. Profile-dependent write placement of data into a non-volatile solid-state storage
US11494498B2 (en) 2014-07-03 2022-11-08 Pure Storage, Inc. Storage data decryption
US11392522B2 (en) 2014-07-03 2022-07-19 Pure Storage, Inc. Transfer of segmented data
US9501244B2 (en) 2014-07-03 2016-11-22 Pure Storage, Inc. Scheduling policy for queues in a non-volatile solid-state storage
US11204830B2 (en) 2014-08-07 2021-12-21 Pure Storage, Inc. Die-level monitoring in a storage cluster
US10579474B2 (en) 2014-08-07 2020-03-03 Pure Storage, Inc. Die-level monitoring in a storage cluster
US10528419B2 (en) 2014-08-07 2020-01-07 Pure Storage, Inc. Mapping around defective flash memory of a storage array
US11656939B2 (en) 2014-08-07 2023-05-23 Pure Storage, Inc. Storage cluster memory characterization
US11620197B2 (en) 2014-08-07 2023-04-04 Pure Storage, Inc. Recovering error corrected data
US11080154B2 (en) 2014-08-07 2021-08-03 Pure Storage, Inc. Recovering error corrected data
US10324812B2 (en) 2014-08-07 2019-06-18 Pure Storage, Inc. Error recovery in a storage cluster
US10990283B2 (en) 2014-08-07 2021-04-27 Pure Storage, Inc. Proactive data rebuild based on queue feedback
US10216411B2 (en) 2014-08-07 2019-02-26 Pure Storage, Inc. Data rebuild on feedback from a queue in a non-volatile solid-state storage
US11544143B2 (en) 2014-08-07 2023-01-03 Pure Storage, Inc. Increased data reliability
US11442625B2 (en) 2014-08-07 2022-09-13 Pure Storage, Inc. Multiple read data paths in a storage system
US10983866B2 (en) 2014-08-07 2021-04-20 Pure Storage, Inc. Mapping defective memory in a storage system
US10498580B1 (en) 2014-08-20 2019-12-03 Pure Storage, Inc. Assigning addresses in a storage system
US11734186B2 (en) 2014-08-20 2023-08-22 Pure Storage, Inc. Heterogeneous storage with preserved addressing
US11188476B1 (en) 2014-08-20 2021-11-30 Pure Storage, Inc. Virtual addressing in a storage system
US20160077737A1 (en) * 2014-09-11 2016-03-17 Kabushiki Kaisha Toshiba Information processing apparatus and memory system
US10061515B2 (en) * 2014-09-11 2018-08-28 Toshiba Memory Corporation Information processing apparatus and memory system
US10353609B2 (en) 2014-09-16 2019-07-16 Huawei Technologies Co., Ltd. Memory allocation method and apparatus
US10990303B2 (en) 2014-09-16 2021-04-27 Huawei Technologies Co., Ltd. Memory allocation method and apparatus
CN105573831A (zh) * 2014-10-13 2016-05-11 Loongson Technology Co., Ltd. Data transfer method and apparatus
US20170115934A1 (en) * 2014-10-23 2017-04-27 Seagate Technology Llc Logical block addresses used for executing host commands
US10025533B2 (en) * 2014-10-23 2018-07-17 Seagate Technology Llc Logical block addresses used for executing host commands
US9977611B2 (en) 2014-12-04 2018-05-22 Kabushiki Kaisha Toshiba Storage device, method, and computer-readable medium for selecting a write destination of target data to nonvolatile memories having different erase limits based upon a write interval
US20160170663A1 (en) * 2014-12-15 2016-06-16 Konica Minolta, Inc. Nonvolatile memory control device, nonvolatile memory control method and computer readable storage medium
US9898211B2 (en) * 2014-12-15 2018-02-20 Konica Minolta, Inc. Nonvolatile memory control device, nonvolatile memory control method and computer readable storage medium
CN105700822A (zh) * 2014-12-15 2016-06-22 Konica Minolta, Inc. Nonvolatile memory control device and nonvolatile memory control method
US10037160B2 (en) 2014-12-19 2018-07-31 Samsung Electronics Co., Ltd. Storage device dynamically allocating program area and program method thereof
US10255182B2 (en) 2015-02-11 2019-04-09 Samsung Electronics Co., Ltd. Computing apparatus and method for cache management
US10241909B2 (en) * 2015-02-27 2019-03-26 Hitachi, Ltd. Non-volatile memory device
US9870836B2 (en) 2015-03-10 2018-01-16 Toshiba Memory Corporation Memory system and method of controlling nonvolatile memory
US9948615B1 (en) 2015-03-16 2018-04-17 Pure Storage, Inc. Increased storage unit encryption based on loss of trust
US11294893B2 (en) 2015-03-20 2022-04-05 Pure Storage, Inc. Aggregation of queries
US11301177B2 (en) * 2015-03-23 2022-04-12 Netapp, Inc. Data structure storage and data management
US11954373B2 (en) 2015-03-23 2024-04-09 Netapp, Inc. Data structure storage and data management
US11775428B2 (en) 2015-03-26 2023-10-03 Pure Storage, Inc. Deletion immunity for unreferenced data
US10853243B2 (en) 2015-03-26 2020-12-01 Pure Storage, Inc. Aggressive data deduplication using lazy garbage collection
US9940234B2 (en) 2015-03-26 2018-04-10 Pure Storage, Inc. Aggressive data deduplication using lazy garbage collection
US12086472B2 (en) 2015-03-27 2024-09-10 Pure Storage, Inc. Heterogeneous storage arrays
US11188269B2 (en) 2015-03-27 2021-11-30 Pure Storage, Inc. Configuration for multiple logical storage arrays
US10353635B2 (en) 2015-03-27 2019-07-16 Pure Storage, Inc. Data control across multiple logical arrays
US10082985B2 (en) 2015-03-27 2018-09-25 Pure Storage, Inc. Data striping across storage nodes that are assigned to multiple logical arrays
US10693964B2 (en) 2015-04-09 2020-06-23 Pure Storage, Inc. Storage unit communication within a storage system
US11722567B2 (en) 2015-04-09 2023-08-08 Pure Storage, Inc. Communication paths for storage devices having differing capacities
US11240307B2 (en) 2015-04-09 2022-02-01 Pure Storage, Inc. Multiple communication paths in a storage system
US12069133B2 (en) 2015-04-09 2024-08-20 Pure Storage, Inc. Communication paths for differing types of solid state storage devices
US10178169B2 (en) 2015-04-09 2019-01-08 Pure Storage, Inc. Point to point based backend communication layer for storage processing
US11144212B2 (en) 2015-04-10 2021-10-12 Pure Storage, Inc. Independent partitions within an array
US9672125B2 (en) 2015-04-10 2017-06-06 Pure Storage, Inc. Ability to partition an array into two or more logical arrays with independently running software
US10496295B2 (en) 2015-04-10 2019-12-03 Pure Storage, Inc. Representing a storage array as two or more logical arrays with respective virtual local area networks (VLANS)
US11231956B2 (en) 2015-05-19 2022-01-25 Pure Storage, Inc. Committed transactions in a storage system
US10140149B1 (en) 2015-05-19 2018-11-27 Pure Storage, Inc. Transactional commits with hardware assists in remote memory
US12050774B2 (en) 2015-05-27 2024-07-30 Pure Storage, Inc. Parallel update for a distributed system
US9817576B2 (en) 2015-05-27 2017-11-14 Pure Storage, Inc. Parallel update to NVRAM
US10712942B2 (en) 2015-05-27 2020-07-14 Pure Storage, Inc. Parallel update to maintain coherency
US11301333B2 (en) 2015-06-26 2022-04-12 Commvault Systems, Inc. Incrementally accumulating in-process performance data and hierarchical reporting thereof for a data stream in a secondary copy operation
US11675762B2 (en) 2015-06-26 2023-06-13 Pure Storage, Inc. Data structures for key management
US12093236B2 2015-06-26 2024-09-17 Pure Storage, Inc. Probabilistic data structure for key management
US11983077B2 (en) 2015-06-26 2024-05-14 Commvault Systems, Inc. Incrementally accumulating in-process performance data and hierarchical reporting thereof for a data stream in a secondary copy operation
KR102401596B1 (ko) 2015-07-13 2022-05-24 Samsung Electronics Co., Ltd. Heuristic interface for enabling a computer device to utilize data property-based data placement inside a nonvolatile memory device
US11461010B2 (en) 2015-07-13 2022-10-04 Samsung Electronics Co., Ltd. Data property-based data placement in a nonvolatile memory device
US11704073B2 (en) 2015-07-13 2023-07-18 Pure Storage, Inc Ownership determination for accessing a file
US11249951B2 (en) 2015-07-13 2022-02-15 Samsung Electronics Co., Ltd. Heuristic interface for enabling a computer device to utilize data property-based data placement inside a nonvolatile memory device
US10509770B2 (en) 2015-07-13 2019-12-17 Samsung Electronics Co., Ltd. Heuristic interface for enabling a computer device to utilize data property-based data placement inside a nonvolatile memory device
JP2017021805A (ja) * 2015-07-13 2017-01-26 Samsung Electronics Co., Ltd. Interface providing method and computer device for enabling data-attribute-based data placement in a nonvolatile memory device
EP3118745B1 (en) * 2015-07-13 2020-09-16 Samsung Electronics Co., Ltd. A heuristic interface for enabling a computer device to utilize data property-based data placement inside a nonvolatile memory device
US10983732B2 (en) 2015-07-13 2021-04-20 Pure Storage, Inc. Method and system for accessing a file
US10656838B2 (en) 2015-07-13 2020-05-19 Samsung Electronics Co., Ltd. Automatic stream detection and assignment algorithm
KR20170008153A (ko) * 2015-07-13 2017-01-23 Samsung Electronics Co., Ltd. Heuristic interface for enabling a computer device to utilize data property-based data placement inside a nonvolatile memory device
US10824576B2 (en) 2015-07-13 2020-11-03 Samsung Electronics Co., Ltd. Smart I/O stream detection based on multiple attributes
US11392297B2 (en) 2015-07-13 2022-07-19 Samsung Electronics Co., Ltd. Automatic stream detection and assignment algorithm
US11989160B2 (en) 2015-07-13 2024-05-21 Samsung Electronics Co., Ltd. Heuristic interface for enabling a computer device to utilize data property-based data placement inside a nonvolatile memory device
US11232079B2 (en) 2015-07-16 2022-01-25 Pure Storage, Inc. Efficient distribution of large directories
US10108503B2 (en) * 2015-08-24 2018-10-23 Western Digital Technologies, Inc. Methods and systems for updating a recovery sequence map
US20170060698A1 (en) * 2015-08-24 2017-03-02 HGST Netherlands B.V. Methods and systems for improving storage journaling
US10108355B2 (en) 2015-09-01 2018-10-23 Pure Storage, Inc. Erase block state detection
US11099749B2 (en) 2015-09-01 2021-08-24 Pure Storage, Inc. Erase detection logic for a storage system
US11740802B2 (en) 2015-09-01 2023-08-29 Pure Storage, Inc. Error correction bypass for erased pages
US11893023B2 (en) 2015-09-04 2024-02-06 Pure Storage, Inc. Deterministic searching using compressed indexes
US12038927B2 (en) 2015-09-04 2024-07-16 Pure Storage, Inc. Storage system having multiple tables for efficient searching
US20170075595A1 (en) * 2015-09-11 2017-03-16 Kabushiki Kaisha Toshiba Memory system
US9865351B2 * 2015-09-11 2018-01-09 Toshiba Memory Corporation Memory system with non-volatile memory device that is capable of single or simultaneous multiple word line selection
US9768953B2 (en) 2015-09-30 2017-09-19 Pure Storage, Inc. Resharing of a split secret
US11971828B2 (en) 2015-09-30 2024-04-30 Pure Storage, Inc. Logic module for use with encoded instructions
US10887099B2 (en) 2015-09-30 2021-01-05 Pure Storage, Inc. Data encryption in a distributed system
US10211983B2 (en) 2015-09-30 2019-02-19 Pure Storage, Inc. Resharing of a split secret
US11489668B2 (en) 2015-09-30 2022-11-01 Pure Storage, Inc. Secret regeneration in a storage system
US12072860B2 (en) 2015-09-30 2024-08-27 Pure Storage, Inc. Delegation of data ownership
US11838412B2 (en) 2015-09-30 2023-12-05 Pure Storage, Inc. Secret regeneration from distributed shares
US10853266B2 (en) 2015-09-30 2020-12-01 Pure Storage, Inc. Hardware assisted data lookup methods
US11567917B2 (en) 2015-09-30 2023-01-31 Pure Storage, Inc. Writing data and metadata into storage
US10277408B2 (en) 2015-10-23 2019-04-30 Pure Storage, Inc. Token based communication
US11070382B2 (en) 2015-10-23 2021-07-20 Pure Storage, Inc. Communication in a distributed architecture
US9843453B2 (en) 2015-10-23 2017-12-12 Pure Storage, Inc. Authorizing I/O commands with I/O tokens
US11582046B2 (en) 2015-10-23 2023-02-14 Pure Storage, Inc. Storage system communication
US10387062B2 (en) 2015-11-27 2019-08-20 Hitachi, Ltd. Storage system with cells changeable between two different level cell modes based on predicted lifetime
US20170154689A1 (en) * 2015-12-01 2017-06-01 CNEXLABS, Inc. Method and Apparatus for Logically Removing Defective Pages in Non-Volatile Memory Storage Device
US10593421B2 (en) * 2015-12-01 2020-03-17 Cnex Labs, Inc. Method and apparatus for logically removing defective pages in non-volatile memory storage device
US10437488B2 (en) * 2015-12-08 2019-10-08 Kyocera Document Solutions Inc. Electronic device and non-transitory computer readable storage medium
US20170160964A1 (en) * 2015-12-08 2017-06-08 Kyocera Document Solutions Inc. Electronic device and non-transitory computer readable storage medium
US10007457B2 (en) 2015-12-22 2018-06-26 Pure Storage, Inc. Distributed transactions with token-associated execution
US11204701B2 (en) 2015-12-22 2021-12-21 Pure Storage, Inc. Token based transactions
US12067260B2 (en) 2015-12-22 2024-08-20 Pure Storage, Inc. Transaction processing with differing capacity storage
US10599348B2 (en) 2015-12-22 2020-03-24 Pure Storage, Inc. Distributed transactions with token-associated execution
CN108463811A (zh) * 2016-01-20 2018-08-28 Arm Limited Recording set indicator
US10732905B2 (en) 2016-02-09 2020-08-04 Samsung Electronics Co., Ltd. Automatic I/O stream selection for storage devices
US10180810B2 (en) 2016-03-10 2019-01-15 Kabushiki Kaisha Toshiba Memory controller and storage device which selects memory devices in which data is to be written based on evaluation values of a usable capacity of the memory devices
US11586392B2 (en) 2016-03-16 2023-02-21 Samsung Electronics Co., Ltd. Multi-stream SSD QoS management
US12073125B2 (en) 2016-03-16 2024-08-27 Samsung Electronics Co., Ltd. Multi-stream SSD QOS management
US10592171B2 (en) 2016-03-16 2020-03-17 Samsung Electronics Co., Ltd. Multi-stream SSD QoS management
US11847320B2 (en) 2016-05-03 2023-12-19 Pure Storage, Inc. Reassignment of requests for high availability
US10261690B1 (en) 2016-05-03 2019-04-16 Pure Storage, Inc. Systems and methods for operating a storage system
US11550473B2 (en) 2016-05-03 2023-01-10 Pure Storage, Inc. High-availability storage array
US10649659B2 (en) 2016-05-03 2020-05-12 Pure Storage, Inc. Scaleable storage array
US10866905B2 (en) 2016-05-25 2020-12-15 Samsung Electronics Co., Ltd. Access parameter based multi-stream storage device access
US10372563B2 (en) * 2016-06-17 2019-08-06 Korea University Research And Business Foundation Analyzing system for managing information storage table and control method thereof
US11861188B2 (en) 2016-07-19 2024-01-02 Pure Storage, Inc. System having modular accelerators
US11886288B2 (en) 2016-07-22 2024-01-30 Pure Storage, Inc. Optimize data protection layouts based on distributed flash wear leveling
US10768819B2 (en) 2016-07-22 2020-09-08 Pure Storage, Inc. Hardware support for non-disruptive upgrades
US10831594B2 (en) 2016-07-22 2020-11-10 Pure Storage, Inc. Optimize data protection layouts based on distributed flash wear leveling
US11409437B2 (en) 2016-07-22 2022-08-09 Pure Storage, Inc. Persisting configuration information
US11449232B1 (en) 2016-07-22 2022-09-20 Pure Storage, Inc. Optimal scheduling of flash operations
US11080155B2 (en) 2016-07-24 2021-08-03 Pure Storage, Inc. Identifying error types among flash memory
US11604690B2 (en) 2016-07-24 2023-03-14 Pure Storage, Inc. Online failure span determination
US10216420B1 (en) 2016-07-24 2019-02-26 Pure Storage, Inc. Calibration of flash channels in SSD
US10366004B2 (en) 2016-07-26 2019-07-30 Pure Storage, Inc. Storage system with elective garbage collection to reduce flash contention
US11340821B2 (en) 2016-07-26 2022-05-24 Pure Storage, Inc. Adjustable migration utilization
US11886334B2 (en) 2016-07-26 2024-01-30 Pure Storage, Inc. Optimizing spool and memory space management
US10776034B2 (en) 2016-07-26 2020-09-15 Pure Storage, Inc. Adaptive data migration
US11797212B2 (en) 2016-07-26 2023-10-24 Pure Storage, Inc. Data migration for zoned drives
US10203903B2 (en) 2016-07-26 2019-02-12 Pure Storage, Inc. Geometry based, space aware shelf/writegroup evacuation
US11030090B2 (en) 2016-07-26 2021-06-08 Pure Storage, Inc. Adaptive data migration
US11734169B2 (en) 2016-07-26 2023-08-22 Pure Storage, Inc. Optimizing spool and memory space management
US11422719B2 (en) 2016-09-15 2022-08-23 Pure Storage, Inc. Distributed file deletion and truncation
US11301147B2 (en) 2016-09-15 2022-04-12 Pure Storage, Inc. Adaptive concurrency for write persistence
US11656768B2 (en) 2016-09-15 2023-05-23 Pure Storage, Inc. File deletion in a distributed system
US11922033B2 (en) 2016-09-15 2024-03-05 Pure Storage, Inc. Batch data deletion
US10678452B2 (en) 2016-09-15 2020-06-09 Pure Storage, Inc. Distributed deletion of a file and directory hierarchy
US10446210B2 (en) 2016-09-27 2019-10-15 Spin Memory, Inc. Memory instruction pipeline with a pre-read stage for a write operation for reducing power consumption in a memory device that uses dynamic redundancy registers
US10437491B2 (en) 2016-09-27 2019-10-08 Spin Memory, Inc. Method of processing incomplete memory operations in a memory device during a power up sequence and a power down sequence using a dynamic redundancy register
US10424393B2 (en) 2016-09-27 2019-09-24 Spin Memory, Inc. Method of reading data from a memory device using multiple levels of dynamic redundancy registers
US10437723B2 (en) 2016-09-27 2019-10-08 Spin Memory, Inc. Method of flushing the contents of a dynamic redundancy register to a secure storage area during a power down in a memory device
US10366775B2 (en) 2016-09-27 2019-07-30 Spin Memory, Inc. Memory device using levels of dynamic redundancy registers for writing a data word that failed a write operation
US10360964B2 (en) 2016-09-27 2019-07-23 Spin Memory, Inc. Method of writing contents in memory during a power up sequence using a dynamic redundancy register in a memory device
US10546625B2 (en) 2016-09-27 2020-01-28 Spin Memory, Inc. Method of optimizing write voltage based on error buffer occupancy
US10460781B2 (en) 2016-09-27 2019-10-29 Spin Memory, Inc. Memory device with a dual Y-multiplexer structure for performing two simultaneous operations on the same row of a memory bank
US10366774B2 (en) 2016-09-27 2019-07-30 Spin Memory, Inc. Device with dynamic redundancy registers
US10628316B2 (en) 2016-09-27 2020-04-21 Spin Memory, Inc. Memory device with a plurality of memory banks where each memory bank is associated with a corresponding memory instruction pipeline and a dynamic redundancy register
US11922070B2 (en) 2016-10-04 2024-03-05 Pure Storage, Inc. Granting access to a storage device based on reservations
US12039165B2 (en) 2016-10-04 2024-07-16 Pure Storage, Inc. Utilizing allocation shares to improve parallelism in a zoned drive storage system
US11581943B2 (en) 2016-10-04 2023-02-14 Pure Storage, Inc. Queues reserved for direct access via a user application
US10739995B2 2016-10-26 2020-08-11 Samsung Electronics Co., Ltd. Method of consolidating data streams for multi-stream enabled SSDs
US11048411B2 (en) 2016-10-26 2021-06-29 Samsung Electronics Co., Ltd. Method of consolidating data streams for multi-stream enabled SSDs
US11995318B2 (en) 2016-10-28 2024-05-28 Pure Storage, Inc. Deallocated block determination
US20180150219A1 (en) * 2016-11-30 2018-05-31 Industrial Technology Research Institute Data accessing system, data accessing apparatus and method for accessing data
US11842053B2 (en) 2016-12-19 2023-12-12 Pure Storage, Inc. Zone namespace
US11762781B2 (en) 2017-01-09 2023-09-19 Pure Storage, Inc. Providing end-to-end encryption for data stored in a storage system
US11307998B2 (en) 2017-01-09 2022-04-19 Pure Storage, Inc. Storage efficiency of encrypted host system data
US11955187B2 (en) 2017-01-13 2024-04-09 Pure Storage, Inc. Refresh of differing capacity NAND
US11289169B2 (en) 2017-01-13 2022-03-29 Pure Storage, Inc. Cycled background reads
US10650902B2 (en) 2017-01-13 2020-05-12 Pure Storage, Inc. Method for processing blocks of flash memory
US11003577B2 (en) * 2017-01-24 2021-05-11 Fujitsu Limited Information processing apparatus, information processing method, and non-transitory computer-readable storage medium for storing program of access control with respect to semiconductor device memory
US10979223B2 (en) 2017-01-31 2021-04-13 Pure Storage, Inc. Separate encryption for a solid-state drive
US10684785B2 (en) 2017-02-23 2020-06-16 Hitachi, Ltd. Storage system
US11437093B2 (en) * 2017-03-10 2022-09-06 Micron Technology, Inc. Methods for mitigating power loss events during operation of memory devices and memory devices employing the same
US10942869B2 (en) 2017-03-30 2021-03-09 Pure Storage, Inc. Efficient coding in a storage system
US10528488B1 (en) 2017-03-30 2020-01-07 Pure Storage, Inc. Efficient name coding
US11449485B1 (en) 2017-03-30 2022-09-20 Pure Storage, Inc. Sequence invalidation consolidation in a storage system
US11592985B2 (en) 2017-04-05 2023-02-28 Pure Storage, Inc. Mapping LUNs in a storage memory
US11016667B1 (en) 2017-04-05 2021-05-25 Pure Storage, Inc. Efficient mapping for LUNs in storage memory with holes in address space
CN110392885A (zh) * 2017-04-07 2019-10-29 Panasonic Intellectual Property Management Co., Ltd. Non-volatile memory with increased number of uses
US10698808B2 (en) 2017-04-25 2020-06-30 Samsung Electronics Co., Ltd. Garbage collection—automatic data placement
US11630767B2 (en) 2017-04-25 2023-04-18 Samsung Electronics Co., Ltd. Garbage collection—automatic data placement
US11048624B2 (en) 2017-04-25 2021-06-29 Samsung Electronics Co., Ltd. Methods for multi-stream garbage collection
US11194710B2 (en) 2017-04-25 2021-12-07 Samsung Electronics Co., Ltd. Garbage collection—automatic data placement
US10141050B1 (en) 2017-04-27 2018-11-27 Pure Storage, Inc. Page writes for triple level cell flash memory
US10944671B2 (en) 2017-04-27 2021-03-09 Pure Storage, Inc. Efficient data forwarding in a networked device
US11869583B2 (en) 2017-04-27 2024-01-09 Pure Storage, Inc. Page write requirements for differing types of flash memory
US11722455B2 (en) 2017-04-27 2023-08-08 Pure Storage, Inc. Storage cluster address resolution
US11847355B2 (en) 2017-05-03 2023-12-19 Samsung Electronics Co., Ltd. Multistreaming in heterogeneous environments
US11507326B2 (en) 2017-05-03 2022-11-22 Samsung Electronics Co., Ltd. Multistreaming in heterogeneous environments
US11467913B1 (en) 2017-06-07 2022-10-11 Pure Storage, Inc. Snapshots with crash consistency in a storage system
US11068389B2 (en) 2017-06-11 2021-07-20 Pure Storage, Inc. Data resiliency with heterogeneous storage
US11782625B2 (en) 2017-06-11 2023-10-10 Pure Storage, Inc. Heterogeneity supportive resiliency groups
US11947814B2 (en) 2017-06-11 2024-04-02 Pure Storage, Inc. Optimizing resiliency group formation stability
US11138103B1 (en) 2017-06-11 2021-10-05 Pure Storage, Inc. Resiliency groups
US11106574B2 (en) * 2017-06-16 2021-08-31 Oneplus Technology (Shenzhen) Co., Ltd. Memory allocation method, apparatus, electronic device, and computer storage medium
US11190580B2 (en) 2017-07-03 2021-11-30 Pure Storage, Inc. Stateful connection resets
US11689610B2 (en) 2017-07-03 2023-06-27 Pure Storage, Inc. Load balancing reset packets
US12086029B2 (en) 2017-07-31 2024-09-10 Pure Storage, Inc. Intra-device and inter-device data recovery in a storage system
US11714708B2 (en) 2017-07-31 2023-08-01 Pure Storage, Inc. Intra-device redundancy scheme
US12032724B2 (en) 2017-08-31 2024-07-09 Pure Storage, Inc. Encryption in a storage array
US10210926B1 (en) 2017-09-15 2019-02-19 Pure Storage, Inc. Tracking of optimum read voltage thresholds in nand flash devices
US10877827B2 (en) 2017-09-15 2020-12-29 Pure Storage, Inc. Read voltage optimization
US11733888B2 (en) 2017-09-22 2023-08-22 Kioxia Corporation Memory system
US12086439B2 (en) 2017-09-22 2024-09-10 Kioxia Corporation Memory storage with selected performance mode
US10824353B2 (en) 2017-09-22 2020-11-03 Toshiba Memory Corporation Memory system
US10866899B2 (en) * 2017-10-02 2020-12-15 Arm Ltd Method and apparatus for control of a tiered memory system
US20190102310A1 (en) * 2017-10-02 2019-04-04 Arm Ltd Method and apparatus for control of a tiered memory system
US10901907B2 (en) 2017-10-19 2021-01-26 Samsung Electronics Co., Ltd. System and method for identifying hot data and stream in a solid-state drive
US11704066B2 (en) 2017-10-31 2023-07-18 Pure Storage, Inc. Heterogeneous erase blocks
US10545687B1 (en) 2017-10-31 2020-01-28 Pure Storage, Inc. Data rebuild when changing erase block sizes during drive replacement
US11604585B2 (en) 2017-10-31 2023-03-14 Pure Storage, Inc. Data rebuild when changing erase block sizes during drive replacement
US11024390B1 (en) 2017-10-31 2021-06-01 Pure Storage, Inc. Overlapping RAID groups
US11086532B2 (en) 2017-10-31 2021-08-10 Pure Storage, Inc. Data rebuild with changing erase block sizes
US10496330B1 (en) 2017-10-31 2019-12-03 Pure Storage, Inc. Using flash storage devices with different sized erase blocks
US12046292B2 (en) 2017-10-31 2024-07-23 Pure Storage, Inc. Erase blocks having differing sizes
US10515701B1 (en) 2017-10-31 2019-12-24 Pure Storage, Inc. Overlapping raid groups
US11074016B2 (en) 2017-10-31 2021-07-27 Pure Storage, Inc. Using flash storage devices with different sized erase blocks
US10884919B2 (en) 2017-10-31 2021-01-05 Pure Storage, Inc. Memory management in a storage system
US20190138227A1 (en) * 2017-11-06 2019-05-09 Hitachi, Ltd. Storage system and control method thereof
US11747989B2 (en) 2017-11-06 2023-09-05 Kioxia Corporation Memory system and method for controlling nonvolatile memory
US20190138226A1 (en) * 2017-11-06 2019-05-09 Toshiba Memory Corporation Memory system and method for controlling nonvolatile memory
US11042305B2 (en) * 2017-11-06 2021-06-22 Toshiba Memory Corporation Memory system and method for controlling nonvolatile memory
US10838628B2 (en) * 2017-11-06 2020-11-17 Hitachi, Ltd. Storage system and control method of maintaining reliability of a mounted flash storage
US11275681B1 (en) 2017-11-17 2022-03-15 Pure Storage, Inc. Segmented write requests
US11741003B2 (en) 2017-11-17 2023-08-29 Pure Storage, Inc. Write granularity for storage system
US10860475B1 (en) 2017-11-17 2020-12-08 Pure Storage, Inc. Hybrid flash translation layer
US10990566B1 (en) 2017-11-20 2021-04-27 Pure Storage, Inc. Persistent file locks in a storage system
US10705732B1 (en) 2017-12-08 2020-07-07 Pure Storage, Inc. Multiple-apartment aware offlining of devices for disruptive and destructive operations
US10929053B2 (en) 2017-12-08 2021-02-23 Pure Storage, Inc. Safe destructive actions on drives
US10719265B1 (en) 2017-12-08 2020-07-21 Pure Storage, Inc. Centralized, quorum-aware handling of device reservation requests in a storage system
US10929031B2 (en) 2017-12-21 2021-02-23 Pure Storage, Inc. Maximizing data reduction in a partially encrypted volume
US11782614B1 (en) 2017-12-21 2023-10-10 Pure Storage, Inc. Encrypting data to optimize data reduction
WO2019133233A1 (en) * 2017-12-27 2019-07-04 Spin Transfer Technologies, Inc. A method of writing contents in memory during a power up sequence using a dynamic redundancy register in a memory device
US11815993B2 (en) 2018-01-11 2023-11-14 Commvault Systems, Inc. Remedial action based on maintaining process awareness in data storage management
US11200110B2 (en) * 2018-01-11 2021-12-14 Commvault Systems, Inc. Remedial action based on maintaining process awareness in data storage management
US11442645B2 (en) 2018-01-31 2022-09-13 Pure Storage, Inc. Distributed storage system expansion mechanism
US10976948B1 (en) 2018-01-31 2021-04-13 Pure Storage, Inc. Cluster expansion mechanism
US10467527B1 (en) 2018-01-31 2019-11-05 Pure Storage, Inc. Method and apparatus for artificial intelligence acceleration
US11797211B2 (en) 2018-01-31 2023-10-24 Pure Storage, Inc. Expanding data structures in a storage system
US10915813B2 (en) 2018-01-31 2021-02-09 Pure Storage, Inc. Search acceleration for artificial intelligence
US11966841B2 (en) 2018-01-31 2024-04-23 Pure Storage, Inc. Search acceleration for artificial intelligence
US10733053B1 (en) 2018-01-31 2020-08-04 Pure Storage, Inc. Disaster recovery for high-bandwidth distributed archives
US11847013B2 (en) 2018-02-18 2023-12-19 Pure Storage, Inc. Readable data determination
US11494109B1 (en) 2018-02-22 2022-11-08 Pure Storage, Inc. Erase block trimming for heterogenous flash memory storage devices
US11995336B2 (en) 2018-04-25 2024-05-28 Pure Storage, Inc. Bucket views
US10853146B1 (en) 2018-04-27 2020-12-01 Pure Storage, Inc. Efficient data forwarding in a networked device
US11836348B2 (en) 2018-04-27 2023-12-05 Pure Storage, Inc. Upgrade for system with differing capacities
US12079494B2 (en) 2018-04-27 2024-09-03 Pure Storage, Inc. Optimizing storage system upgrades to preserve resources
US10931450B1 (en) 2018-04-27 2021-02-23 Pure Storage, Inc. Distributed, lock-free 2-phase commit of secret shares using multiple stateless controllers
US10949087B2 (en) 2018-05-15 2021-03-16 Samsung Electronics Co., Ltd. Method for rapid reference object storage format for chroma subsampled images
US11449256B2 (en) 2018-05-15 2022-09-20 Samsung Electronics Co., Ltd. Method for accelerating image storing and retrieving differential latency storage devices based on access rates
US11947826B2 (en) 2018-05-15 2024-04-02 Samsung Electronics Co., Ltd. Method for accelerating image storing and retrieving differential latency storage devices based on access rates
US11436023B2 (en) 2018-05-31 2022-09-06 Pure Storage, Inc. Mechanism for updating host file system and flash translation layer based on underlying NAND technology
US11438279B2 (en) 2018-07-23 2022-09-06 Pure Storage, Inc. Non-disruptive conversion of a clustered service from single-chassis to multi-chassis
US11372753B2 (en) * 2018-08-29 2022-06-28 Kioxia Corporation Memory system and method
US11868309B2 (en) 2018-09-06 2024-01-09 Pure Storage, Inc. Queue management for data relocation
US11846968B2 (en) 2018-09-06 2023-12-19 Pure Storage, Inc. Relocation of data for heterogeneous storage systems
US12067274B2 (en) 2018-09-06 2024-08-20 Pure Storage, Inc. Writing segments and erase blocks based on ordering
US11500570B2 (en) 2018-09-06 2022-11-15 Pure Storage, Inc. Efficient relocation of data utilizing different programming modes
US11354058B2 (en) 2018-09-06 2022-06-07 Pure Storage, Inc. Local relocation of data stored at a storage device of a storage system
US11520514B2 (en) 2018-09-06 2022-12-06 Pure Storage, Inc. Optimized relocation of data based on data characteristics
US10454498B1 (en) 2018-10-18 2019-10-22 Pure Storage, Inc. Fully pipelined hardware engine design for fast and efficient inline lossless data compression
US20200126606A1 (en) * 2018-10-19 2020-04-23 Samsung Electronics Co., Ltd. Semiconductor device
US11227647B2 (en) 2018-10-19 2022-01-18 Samsung Electronics Co., Ltd. Semiconductor device
US10878873B2 (en) * 2018-10-19 2020-12-29 Samsung Electronics Co., Ltd. Semiconductor device
CN111078128A (zh) * 2018-10-22 2020-04-28 Zhejiang Uniview Technologies Co., Ltd. Data management method and device, and solid-state drive
US10976947B2 (en) 2018-10-26 2021-04-13 Pure Storage, Inc. Dynamically selecting segment heights in a heterogeneous RAID group
US12001700B2 (en) 2018-10-26 2024-06-04 Pure Storage, Inc. Dynamically selecting segment heights in a heterogeneous RAID group
US20190107976A1 (en) * 2018-12-07 2019-04-11 Intel Corporation Apparatus and method for assigning velocities to write data
US11231873B2 (en) * 2018-12-07 2022-01-25 Intel Corporation Apparatus and method for assigning velocities to write data
US11449253B2 (en) 2018-12-14 2022-09-20 Commvault Systems, Inc. Disk usage growth prediction system
US11941275B2 (en) 2018-12-14 2024-03-26 Commvault Systems, Inc. Disk usage growth prediction system
US11010114B2 (en) * 2018-12-31 2021-05-18 Kyocera Document Solutions Inc. Read/write direction-based memory bank control for imaging
JP2020119007A (ja) * 2019-01-18 2020-08-06 Fujitsu Limited Information processing device, storage control device, and storage control program
JP7219397B2 (ja) 2019-01-18 2023-02-08 Fujitsu Limited Information processing device, storage control device, and storage control program
US11194473B1 (en) * 2019-01-23 2021-12-07 Pure Storage, Inc. Programming frequently read data to low latency portions of a solid-state storage array
US11334254B2 (en) 2019-03-29 2022-05-17 Pure Storage, Inc. Reliability based flash page sizing
US11775189B2 (en) 2019-04-03 2023-10-03 Pure Storage, Inc. Segment level heterogeneity
US12087382B2 (en) 2019-04-11 2024-09-10 Pure Storage, Inc. Adaptive threshold for bad flash memory blocks
US11899582B2 (en) 2019-04-12 2024-02-13 Pure Storage, Inc. Efficient memory dump
US11099986B2 (en) 2019-04-12 2021-08-24 Pure Storage, Inc. Efficient transfer of memory contents
US12001688B2 (en) 2019-04-29 2024-06-04 Pure Storage, Inc. Utilizing data views to optimize secure data access in a storage system
US12079125B2 (en) 2019-06-05 2024-09-03 Pure Storage, Inc. Tiered caching of data in a storage system
US20210342263A1 (en) * 2019-06-19 2021-11-04 Micron Technology, Inc. Garbage collection adapted to host write activity
US11714572B2 (en) 2019-06-19 2023-08-01 Pure Storage, Inc. Optimized data resiliency in a modular storage system
US11874772B2 (en) * 2019-06-19 2024-01-16 Lodestar Licensing Group, LLC Garbage collection adapted to host write activity
US11281394B2 (en) 2019-06-24 2022-03-22 Pure Storage, Inc. Replication across partitioning schemes in a distributed storage system
US11822807B2 (en) 2019-06-24 2023-11-21 Pure Storage, Inc. Data replication in a storage system
US11650843B2 (en) 2019-08-22 2023-05-16 Micron Technology, Inc. Hierarchical memory systems
US11327665B2 (en) * 2019-09-20 2022-05-10 International Business Machines Corporation Managing data on volumes
US11893126B2 (en) 2019-10-14 2024-02-06 Pure Storage, Inc. Data deletion for a multi-tenant environment
US11416144B2 (en) 2019-12-12 2022-08-16 Pure Storage, Inc. Dynamic use of segment or zone power loss protection in a flash device
US11947795B2 (en) 2019-12-12 2024-04-02 Pure Storage, Inc. Power loss protection based on write requirements
US12001684B2 (en) 2019-12-12 2024-06-04 Pure Storage, Inc. Optimizing dynamic power loss protection adjustment in a storage system
US11704192B2 (en) 2019-12-12 2023-07-18 Pure Storage, Inc. Budgeting open blocks based on power loss protection
US11847331B2 (en) 2019-12-12 2023-12-19 Pure Storage, Inc. Budgeting open blocks of a storage unit based on power loss prevention
US11188432B2 (en) 2020-02-28 2021-11-30 Pure Storage, Inc. Data resiliency by partially deallocating data blocks of a storage device
US11656961B2 (en) 2020-02-28 2023-05-23 Pure Storage, Inc. Deallocation within a storage system
US11507297B2 (en) 2020-04-15 2022-11-22 Pure Storage, Inc. Efficient management of optimal read levels for flash storage systems
US11256587B2 (en) 2020-04-17 2022-02-22 Pure Storage, Inc. Intelligent access to a storage device
US11474986B2 (en) 2020-04-24 2022-10-18 Pure Storage, Inc. Utilizing machine learning to streamline telemetry processing of storage media
US12056365B2 (en) 2020-04-24 2024-08-06 Pure Storage, Inc. Resiliency for a storage system
US11775491B2 (en) 2020-04-24 2023-10-03 Pure Storage, Inc. Machine learning model for storage system
US12079184B2 (en) 2020-04-24 2024-09-03 Pure Storage, Inc. Optimized machine learning telemetry processing for a cloud based storage system
US11416338B2 (en) 2020-04-24 2022-08-16 Pure Storage, Inc. Resiliency scheme to enhance storage performance
US11768763B2 (en) 2020-07-08 2023-09-26 Pure Storage, Inc. Flash secure erase
US11681448B2 (en) 2020-09-08 2023-06-20 Pure Storage, Inc. Multiple device IDs in a multi-fabric module storage system
US11513974B2 (en) 2020-09-08 2022-11-29 Pure Storage, Inc. Using nonce to control erasure of data blocks of a multi-controller storage system
US11789626B2 (en) 2020-12-17 2023-10-17 Pure Storage, Inc. Optimizing block allocation in a data storage system
US11487455B2 (en) 2020-12-17 2022-11-01 Pure Storage, Inc. Dynamic block allocation to optimize storage system performance
US11847324B2 (en) 2020-12-31 2023-12-19 Pure Storage, Inc. Optimizing resiliency groups for data regions of a storage system
US12056386B2 (en) 2020-12-31 2024-08-06 Pure Storage, Inc. Selectable write paths with different formatted data
US12093545B2 (en) 2020-12-31 2024-09-17 Pure Storage, Inc. Storage system with selectable write modes
US11614880B2 (en) 2020-12-31 2023-03-28 Pure Storage, Inc. Storage system with selectable write paths
US12067282B2 (en) 2020-12-31 2024-08-20 Pure Storage, Inc. Write path selection
US12061814B2 (en) 2021-01-25 2024-08-13 Pure Storage, Inc. Using data similarity to select segments for garbage collection
US11630593B2 (en) 2021-03-12 2023-04-18 Pure Storage, Inc. Inline flash memory qualification in a storage system
US12099742B2 (en) 2021-03-15 2024-09-24 Pure Storage, Inc. Utilizing programming page size granularity to optimize data segment storage in a storage system
US12067032B2 (en) 2021-03-31 2024-08-20 Pure Storage, Inc. Intervals for data replication
US11507597B2 (en) 2021-03-31 2022-11-22 Pure Storage, Inc. Data replication to meet a recovery point objective
US20220391131A1 (en) * 2021-06-04 2022-12-08 Fujitsu Limited Computer-readable recording medium, information processing device control method and information processing device
US12032848B2 (en) 2021-06-21 2024-07-09 Pure Storage, Inc. Intelligent block allocation in a heterogeneous storage system
US11755237B2 (en) * 2021-08-31 2023-09-12 Micron Technology, Inc. Overwriting at a memory system
US20230069603A1 (en) * 2021-08-31 2023-03-02 Micron Technology, Inc. Overwriting at a memory system
US11832410B2 (en) 2021-09-14 2023-11-28 Pure Storage, Inc. Mechanical energy absorbing bracket apparatus
US11994723B2 (en) 2021-12-30 2024-05-28 Pure Storage, Inc. Ribbon cable alignment apparatus
EP4300319A4 (en) * 2022-05-18 2024-02-28 Changxin Memory Technologies, Inc. Hot plugging method and apparatus for memory module, and memory module
US12008245B2 (en) 2022-05-18 2024-06-11 Changxin Memory Technologies, Inc. Method and device for hot swapping memory, and memory
US12101379B2 (en) 2023-05-04 2024-09-24 Pure Storage, Inc. Multilevel load balancing
US12099441B2 (en) 2023-07-27 2024-09-24 Pure Storage, Inc. Writing data to a distributed storage system

Also Published As

Publication number Publication date
CN102473140A (zh) 2012-05-23
CN102473140B (zh) 2015-05-13
EP2455865A1 (en) 2012-05-23
EP2455865B1 (en) 2020-03-04
EP2455865A4 (en) 2014-12-10
TW201106157A (en) 2011-02-16
WO2011007599A1 (ja) 2011-01-20
US20160062660A1 (en) 2016-03-03
KR20120068765A (ko) 2012-06-27
US10776007B2 (en) 2020-09-15
TWI460588B (zh) 2014-11-11

Similar Documents

Publication Publication Date Title
US10776007B2 (en) Memory management device predicting an erase count
US11669444B2 (en) Computing system and method for controlling storage device
US11467955B2 (en) Memory system and method for controlling nonvolatile memory
CN101673245B (zh) Information processing device including memory management device, and memory management method
Gupta et al. Leveraging Value Locality in Optimizing NAND Flash-based SSDs
US20130124794A1 (en) Logical to physical address mapping in storage systems comprising solid state memory devices
TWI712881B (zh) Electronic device and control method thereof, computer system and control method thereof, and host control method
JP2011022933A (ja) Information processing device including memory management device, and memory management method
CN113778662B (zh) Memory reclamation method and device
JP2011186555A (ja) Memory management device and method
JP2011186561A (ja) Memory management device
JP2011186562A (ja) Memory management device and method
JP2011186553A (ja) Memory management device
US20170097897A1 (en) Information processing device, access controller, information processing method, and computer program
JP2011186563A (ja) Memory management device and memory management method
JP2011186558A (ja) Memory management device and method
JP5322978B2 (ja) Information processing device and method
JP2011186554A (ja) Memory management device and method
JP2011186559A (ja) Memory management device
JP2011186557A (ja) Memory management device and method

Legal Events

Date Code Title Description
AS Assignment

Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUNIMATSU, ATSUSHI;MIYAGAWA, MASAKI;NOZUE, HIROSHI;AND OTHERS;SIGNING DATES FROM 20120117 TO 20120328;REEL/FRAME:028019/0935

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION