CN101833516A - Storage system and method for improving access efficiency of flash memory

Storage system and method for improving access efficiency of flash memory

Info

Publication number
CN101833516A
Authority
CN
China
Prior art keywords
data
flash memory
cache
write
working area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201010161955A
Other languages
Chinese (zh)
Inventor
林金岷
林凤书
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Genesys Logic Inc
Original Assignee
Genesys Logic Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Genesys Logic Inc filed Critical Genesys Logic Inc
Priority to CN201010161955A priority Critical patent/CN101833516A/en
Publication of CN101833516A publication Critical patent/CN101833516A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)
  • Read Only Memory (AREA)

Abstract

The invention discloses a storage system and method for improving the access efficiency of a flash memory, and provides a cache unit for temporarily storing data to be written to the flash memory or data read from the flash memory. During reading, because data read from the flash memory for the first time is temporarily stored in the cache unit, the same data does not have to be read from the flash memory again on a second access, which greatly shortens the preparation time for reading data from the flash memory. During writing, small pieces of file data are first buffered in the cache working areas of the cache unit and written to the flash memory in one pass once the cache unit is full, which shortens the preparation time for writing data to the flash memory.

Description

Storage system and method for improving the access efficiency of a flash memory
This application is a divisional application of Chinese patent application No. 200710160973.1, filed on Dec. 14, 2007 and entitled "Storage system and method for improving flash memory access efficiency".
Technical field
The present invention relates to a storage system and method for accessing a flash memory, and more particularly to a storage system and method for improving the access efficiency of a flash memory.
Background art
Flash memory is a non-volatile memory that retains previously written data when power is removed. Compared with other media (such as hard disks, floppy disks, or magnetic tape), flash memory is small, lightweight, shock-resistant because it has no mechanical parts, has low access latency, and consumes little power. Because of these characteristics, it has been widely adopted in recent years as the data storage medium in consumer electronics, embedded systems, and portable computers.
Flash memory is mainly divided into two kinds: NOR flash and NAND flash. NOR flash offers low-voltage operation, fast access, and high stability, and is therefore widely used in portable electronic and communication devices such as personal computers (PC), mobile phones, personal digital assistants (PDA), and set-top boxes (STB). NAND flash is designed specifically for data storage and is usually used as the medium for storing and preserving large amounts of data, such as portable memory cards (SD Memory Card, Compact Flash Card, Memory Stick, and so on). When a flash memory performs a write, erase, or read operation, internal capacitive coupling effectively controls the movement of charge onto or off the floating gate, and this charge movement determines the threshold voltage of the underlying transistor. In other words, when electrons are injected into the floating gate, its stored state changes from 1 to 0; when electrons are removed from the floating gate, its stored state changes from 0 to 1.
Refer to Fig. 1, which is a schematic diagram of a prior-art NAND flash memory. The NAND flash memory 100 is composed of several blocks 12. Each block 12 comprises several pages 14, and each page 14 is divided into a data storage area 141 and a spare area 142. The data capacity of the data storage area 141 may be 512 bytes and is used to store user data, while the spare area 142 is used to store an error correction code (ECC). Unlike NOR flash, the unit of reading and writing in NAND flash is a page, and a read or write command must first be sent to the chip before any data access can be carried out.
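For illustration only, this block/page layout can be expressed as a small C sketch; the 16-byte spare area and the 64 pages per block are assumptions made for this example, since the patent only states that the spare area holds the ECC.
#include <stdint.h>

#define PAGE_DATA_SIZE   512   /* data storage area 141, as described above */
#define PAGE_SPARE_SIZE   16   /* spare area 142 for the ECC; size assumed  */
#define PAGES_PER_BLOCK   64   /* pages per block; count assumed            */

/* One NAND page 14: a data area plus a spare area holding the ECC. */
struct nand_page {
    uint8_t data[PAGE_DATA_SIZE];
    uint8_t spare[PAGE_SPARE_SIZE];
};

/* One NAND block 12: the smallest erasable unit, composed of pages. */
struct nand_block {
    struct nand_page pages[PAGES_PER_BLOCK];
};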
However, flash memory itself does not support update-in-place; that is, if data is to be written again to a location that already holds data, an erase operation must be performed first. In NAND flash the unit of writing is a page, while the unit of erasing is a block. Therefore, when a write request is sent to the chip, the whole block 12 must first be erased before data can be written to a page 14 of that block 12. In general, erasing a block 12 takes roughly 10 to 20 times as long as writing a page 14. Moreover, because the erase unit is larger than the write unit, before a block can be erased the valid pages in that block must first be moved to another block.
Moreover, the number of erase cycles of a flash memory is limited (limited erase counts). Because real capacitors always leak charge, after a flash memory has been written and erased repeatedly, beyond about 100,000 times, the potential difference the capacitor can hold becomes insufficient to keep enough charge on the floating gate, so the stored data may be lost; in severe cases the flash memory degrades to the point where it can no longer be read. In other words, if a certain block is erased more often than the available number of cycles, that block develops write/erase defects.
Because of the above characteristics of flash memory, a management system that can manage flash memory effectively is needed. Traditionally, file system architectures designed with flash memory as the medium include Microsoft FFS, JFFS2 and YAFFS. These file systems are efficient, but they can only be used on media built from flash memory. Another approach adopts a Flash Translation Layer (FTL) as an intermediate layer that emulates the flash memory as a block device, such as a hard disk drive. A general-purpose file system, such as FAT32 or EXT3, can then be used above the FTL; the upper layer issues sector read/write requests, and the flash contents are accessed through the FTL. The FTL maintains a logical-to-physical address mapping table that stores the correspondence between logical addresses and physical addresses; each entry maps a logical address to (flash block address, page position within the block). Refer to Fig. 2, which shows an example of stored logical and physical addresses. Assume each block holds n pages of data. When the upper-layer file system requests the data at logical address 1, the logical-to-physical mapping table 16 shows that logical address 1 corresponds to physical address (block 0, page 1), so the system fetches the data at physical address (block 0, page 1) and returns it. If the upper-layer file system requests an update of the contents of logical address 3, direct rewriting is not allowed; the system therefore first copies physical addresses (block 0, page 0) to (block 0, page 2) into (block 2, page 0) to (block 2, page 2), writes the updated data into (block 2, page 3), copies physical addresses (block 0, page 4) to (block 0, page n-1) into (block 2, page 4) to (block 2, page n-1), marks the data of physical block 0 as invalid, and finally changes the entry for logical address 3 in the address translation table 16 from (B0-P3) to (B2-P3). The next access to logical address 3 will therefore be directed to physical address (block 2, page 3). In this way the "erase before write" characteristic of flash memory is handled.
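A minimal C sketch of such a logical-to-physical table and of the update walk-through above follows; the table size, the function names, and the flash helpers passed in as function pointers are assumptions for illustration, not the FTL's actual interface.
#include <stdint.h>

#define NUM_LOGICAL_ADDRS  1024
#define PAGES_PER_BLOCK      64

/* One entry of the logical-to-physical mapping table:
 * (flash block address, page position within the block). */
struct ftl_entry {
    uint32_t block;
    uint32_t page;
};

static struct ftl_entry ftl_table[NUM_LOGICAL_ADDRS];

/* Reading: look up the physical location of a logical address. */
struct ftl_entry ftl_lookup(uint32_t logical_addr)
{
    return ftl_table[logical_addr];
}

/* Updating a logical address: because update-in-place is not allowed,
 * the untouched pages of the old block are copied to a free block, the
 * new data is written there, the old block is marked invalid (to be
 * erased later), and the table entry is redirected, e.g. B0-P3 -> B2-P3. */
void ftl_update(uint32_t la, uint32_t free_block,
                void (*copy_page)(uint32_t src_blk, uint32_t dst_blk, uint32_t pg),
                void (*write_page)(uint32_t blk, uint32_t pg, const void *buf),
                void (*mark_invalid)(uint32_t blk),
                const void *new_data)
{
    struct ftl_entry old = ftl_table[la];

    for (uint32_t pg = 0; pg < PAGES_PER_BLOCK; pg++) {
        if (pg == old.page)
            write_page(free_block, pg, new_data);   /* the updated page   */
        else
            copy_page(old.block, free_block, pg);   /* the untouched pages */
    }
    mark_invalid(old.block);                        /* erase the old block later */

    ftl_table[la].block = free_block;
    ftl_table[la].page  = old.page;
}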
Managing flash memory through an FTL lets the handling concentrate on the characteristics of the flash memory, without having to deal inside the FTL with file-system issues such as files and directories, and the file system used above the FTL can be chosen as the application requires. However, because every access must pass through the FTL layer, more processing time and more memory are consumed. For example, if the upper-layer file system writes ten consecutive 2-Kbyte pieces of data that all belong to the same block, and these ten pieces are written in ten separate operations, the whole block will be copied ten times, clearly wasting many times the necessary effort.
In addition, if a host is to read 2 Kbytes of data from the flash memory, the host first transfers the read command to the flash memory, the flash memory then locates the requested data in its blocks and transfers all of the data found back to the host, and, after the data transfer is finished, the flash memory returns a status message to the host to complete the whole read flow. Throughout this process, the time spent by the host transferring the read command to the flash memory and by the flash memory returning the status message to the host is extra preparation time caused by the design of the FTL layer. Although the data transfer time grows with the amount of data, the total preparation time does not grow with the amount of data. Thus, if 20 Kbytes of contiguous data are read with ten commands, each of which reads only 2 Kbytes, every read carries the overhead of one read command and time is wasted; if the 20 Kbytes are read in a single operation, the read time is shortened.
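As a rough illustration of this point, the preparation cost can be modelled as a fixed per-command overhead plus a size-dependent transfer term; the numeric values in this C sketch are assumptions, not measured figures.
/* Rough timing model: each command pays a fixed setup cost, while the
 * transfer term scales with the data size.  Values are assumed.        */
double total_read_time_us(unsigned commands, unsigned total_bytes)
{
    const double setup_us     = 100.0;  /* per-command overhead (assumed) */
    const double bytes_per_us = 20.0;   /* transfer rate (assumed)        */

    return commands * setup_us + total_bytes / bytes_per_us;
}

/* Ten 2-Kbyte reads: total_read_time_us(10, 20480) = 1000 + 1024 = 2024 us
 * One 20-Kbyte read: total_read_time_us(1,  20480) =  100 + 1024 = 1124 us */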
Summary of the invention
In view of this, the present invention provides a storage system and method for improving the access efficiency of a flash memory device, in which data read or written in several consecutive operations is first buffered in a cache working area and then transferred in one pass, saving data transfer time.
One object of the present invention is to provide a storage system for improving the access efficiency of a flash memory, comprising a flash memory, a cache unit and a control unit. The flash memory comprises several blocks, each block comprising several pages, for storing data. The cache unit comprises several cache working areas and is used to buffer data of the flash memory. When the control unit receives a first read request for reading first data of the flash memory and the first data is already stored in the cache working areas, it reads the first data from the cache working areas; when it receives a second read request for reading second data of the flash memory and the second data is not stored in the cache working areas, it buffers the data of the block storing the second data into the cache working areas of the cache unit.
According to an embodiment of the present invention, the data capacity of each cache working area is 64 Kbytes or 128 Kbytes; in the most preferred embodiment, the data capacity of each cache working area equals the data capacity of each block.
A further object of the present invention is to provide a method of improving the access efficiency of a flash memory, the flash memory comprising several blocks and each block comprising several pages, the method comprising: providing a cache that comprises several cache working areas; when a first read request for reading first data of the flash memory is received and the first data is already stored in the cache working areas, reading the first data from the cache working areas; and when a second read request for reading second data of the flash memory is received and the second data is not stored in the cache working areas, buffering the data of the block storing the second data into the cache working areas of the cache unit.
According to the present invention, the method further comprises the step of: when the cache working areas are all full of data and a third read request is received, reading the requested data from the flash memory into the cache working area that has been read the fewest times.
Another object of the present invention is to provide a storage system for improving the access efficiency of a flash memory device, comprising a flash memory, a cache unit and a control unit. The flash memory stores data and comprises several blocks, each block comprising several pages. The cache unit comprises several cache working areas and is used to buffer data to be written to the flash memory. When the control unit receives a first write request for writing first write data to the flash memory device, it stores the first write data in a cache working area among the cache working areas; when the cache working areas are full of data, it writes the data of the cache working areas to the flash memory device.
According to an embodiment of the present invention, the data capacity of each cache working area is 64 Kbytes or 128 Kbytes; in the most preferred embodiment, the data capacity of each cache working area is greater than or equal to the data capacity of each block.
Another object of the present invention is to provide a method of improving the access efficiency of a flash memory, the flash memory comprising several blocks and each block comprising several pages, the method comprising: providing a cache that comprises several cache working areas; when a first write request for writing first write data to the flash memory is received, storing the first write data in a cache working area among the cache working areas; and when the cache working areas are full of data, writing the data of the cache working areas to the flash memory.
According to the present invention, the method further comprises the step of: when the data volume of the first write data is greater than the data capacity of each cache working area and no part of the first write data is already buffered in the cache working areas, writing the first write data directly to the flash memory.
According to the present invention, the method further comprises the step of: when a cache working area has been idle for more than a predetermined time, writing the data of that cache working area to the flash memory.
Description of drawings
Fig. 1 is a schematic diagram of a prior-art NAND flash memory.
Fig. 2 is an example of stored logical and physical addresses.
Fig. 3 is a functional block diagram of the storage system of the present invention.
Fig. 4 is a schematic diagram of the flash memory, the control unit and the cache unit.
Fig. 5 is a flowchart of the present invention in which the host reads data from the flash memory.
Fig. 6 is a flowchart of the present invention in which the host writes data to the flash memory.
Embodiment
Refer to Fig. 3, which is a functional block diagram of the storage system 10 of the present invention. The storage system 10 comprises a host 20 and a flash memory device 50. The host 20 may be a desktop computer, a notebook computer, an industrial computer, a recordable/playable DVD device, and so on. The host 20 comprises a control unit 22 and a cache unit 24. The flash memory device 50 comprises a flash memory 52. In the present embodiment, each block inside the flash memory 52 is formed of 64 pages, and each page is 2 Kbytes or 512 bytes in size. The cache unit 24 is memory carved out of memory in the host 20, such as dynamic random access memory (DRAM) or static random access memory (SRAM), and it comprises several cache working areas (cache lines) 26. In the present embodiment, the data capacity of each cache working area 26 may be, but is not limited to, 128 Kbytes, 64 Kbytes, or another size; the capacity can be adjusted according to design requirements. The relation between the data capacity C of each cache working area and the data capacity B of a block is C = B × 2^n, where n is an integer. The cache unit 24 provides temporary cache storage for the read and write data of the flash memory device 50; it is controlled by the control unit 22, which buffers the read and write data of the flash memory device 50 so that the cached data of the flash memory device 50 can be supplied on the next read or write. The control unit 22 is a software program code stored in the memory of the host 20 and is responsible for communicating with the operating system and the bus driver interface.
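The cache unit 24 and its cache working areas 26 can be sketched in C as follows; the number of cache lines, the field names, and the choice n = 0 in C = B × 2^n are assumptions made for illustration.
#include <stdint.h>
#include <stdbool.h>

#define BLOCK_SIZE      (64 * 2048)        /* 64 pages x 2 Kbytes = 128 Kbytes     */
#define CACHE_LINE_SIZE (BLOCK_SIZE * 1)   /* C = B * 2^n with n = 0 (assumption)  */
#define NUM_CACHE_LINES 8                  /* number of working areas is assumed   */

/* One cache working area (cache line) 26 in host DRAM/SRAM. */
struct cache_line {
    uint32_t block_addr;     /* flash block currently mirrored             */
    uint32_t read_count;     /* used to pick the least-read victim line    */
    uint32_t bytes_used;     /* how much of the line currently holds data  */
    bool     valid;
    bool     dirty;          /* holds write data not yet flushed to flash  */
    uint8_t  data[CACHE_LINE_SIZE];
};

static struct cache_line cache[NUM_CACHE_LINES];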
Refer to Fig. 4 and Fig. 5. Fig. 4 is a schematic diagram of the flash memory 52, the control unit 22 and the cache unit 24. Fig. 5 is a flowchart of the present invention in which the host 20 reads data from the flash memory 52. The read flow of the present invention comprises the following steps:
Step 400: Begin.
Step 402: The operating system issues a read request to the driver that controls the cache unit 24, to read data from the flash memory 52.
Step 404: Determine whether the data requested by the read request crosses the boundary of a cache working area 26. If so, go to step 406; if not, go to step 408.
Step 406: Split the data of the read request. If the read address specified by the operating system crosses a cache working area boundary, split the read request into several requests along the cache working area boundaries.
Step 408: Determine whether the data of the read request is stored in a cache working area. If so, go to step 410; if not, go to step 412.
Step 410: If the data of the read request is stored in a cache working area, read the requested data from that cache working area.
Step 412: Determine whether all cache working areas already store data. If so, go to step 414; if not, go to step 416.
Step 414: When all cache working areas already store data, read the requested data from the flash memory into the cache working area that has been read the fewest times, and then copy the data from the cache working area to the memory address specified by the operating system.
Step 416: When some cache working areas do not yet store data, read the requested data from the flash memory into an available cache working area, and then copy the data from the cache working area to the memory address specified by the operating system.
Step 418: End.
After the host 20 is connected to the flash memory device 50, if the host 20 wants to read first data of the flash memory device 50 and the size of the first data is 24 Kbytes, the host 20 transmits a first read request to the control unit 22. The first read request includes the logical block address (LBA) corresponding to the first data and the size of the first data. Next, the control unit 22 determines whether the first data exceeds the boundary of a cache working area 26 (step 404). For example, if the size of a cache working area 26 is 128 Kbytes and the size of the data requested by the first read request were 256 Kbytes, the control unit 22 would split the first read request into two read requests of 128 Kbytes each (step 406). Next, the control unit 22 determines whether the first data is already stored in the cache working areas 26 of the cache unit 24 (step 408). Because the cache unit 24 has not yet buffered any data, the control unit 22 determines that the first data is not stored in the cache working areas 26. The control unit 22 then determines whether all cache working areas 26 already store data, in order to confirm whether an unused cache working area is still available. At this moment no cache working area holds any data, so the control unit 22 buffers the first data in a cache working area 26 (step 416). Next, when the control unit 22 receives a second read request for reading second data located in the flash memory 52, because the second data is not buffered in the cache working areas and an unused cache working area is still available, the control unit 22 buffers the second data in a cache working area 26.
When the control unit 22 receives a third read request for reading third data located in the flash memory 52, because the third data has already been buffered in a cache working area 26, the control unit 22 can read the third data directly from the cache unit (step 410) and no longer has to read it from the flash memory. Note that when the control unit 22 receives a fourth read request for reading fourth data of the flash memory 52, if the fourth data is not buffered in the cache working areas and all cache working areas already store data, the control unit 22 checks how many times each cache working area 26 has been read, buffers the fourth data into the cache working area that has been read the fewest times so as to refresh that working area, and then copies the fourth data from the cache working area 26 to the memory address specified by the operating system. With this read mechanism, if the host frequently reads the flash memory and the data corresponding to each read request is small, the host does not need to search the flash memory for the required data every time; it can find the data in the cache unit, which significantly improves the time spent frequently reading small pieces of data. For example, in the prior art, reading 20 Kbytes of contiguous data with ten commands of 2 Kbytes each means every read carries the overhead of one read command and wastes time; in the present invention the 20 Kbytes of data are first stored in the cache unit and then read out in one pass, so the data read time is shortened.
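The read flow of Fig. 5 can be sketched in C roughly as follows; the helper functions (cache_find, cache_get_free, cache_least_read, flash_read_block) and the cache_line fields are assumed names for illustration, not the actual implementation.
#include <stdint.h>
#include <string.h>
#include <stdbool.h>
#include <stddef.h>

struct cache_line {            /* mirrors the sketch given earlier (sizes assumed) */
    uint32_t block_addr;
    uint32_t read_count;
    bool     valid;
    uint8_t  data[128 * 1024];
};

/* Assumed helpers: look up a line, pick an unused line, pick the line that
 * has been read the fewest times, and fill a line's buffer from flash.     */
extern struct cache_line *cache_find(uint32_t block_addr);
extern struct cache_line *cache_get_free(void);
extern struct cache_line *cache_least_read(void);
extern void flash_read_block(uint32_t block_addr, uint8_t *buf);

/* Serve one read request that has already been split so that it does not
 * cross a cache-line boundary (steps 404/406 of Fig. 5).                   */
void cached_read(uint32_t block_addr, uint32_t offset,
                 uint32_t length, uint8_t *dest)
{
    struct cache_line *line = cache_find(block_addr);        /* step 408 */

    if (line == NULL) {
        line = cache_get_free();                             /* step 412 */
        if (line == NULL)
            line = cache_least_read();                       /* step 414: evict least-read */
        flash_read_block(block_addr, line->data);            /* steps 414/416: fill line   */
        line->block_addr = block_addr;
        line->read_count = 0;
        line->valid      = true;
    }

    line->read_count++;                                      /* step 410: serve from cache */
    memcpy(dest, line->data + offset, length);               /* copy to the OS buffer      */
}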
Note that if the read data size specified by the operating system is the maximum amount, the control unit 22 delivers the read request directly to the flash memory 52 without going through the cache unit 24, so as not to spend unnecessary time moving data in and out of the cache.
Refer to Fig. 4 and Fig. 6. Fig. 6 is a flowchart of the present invention in which the host 20 writes data to the flash memory 52. The write flow of the present invention comprises the following steps:
Step 500: Begin.
Step 502: The host 20 issues a write request to the flash memory 52 for writing data to the flash memory 52.
Step 504: Determine whether the data of the write request is larger than the data capacity of a cache working area. If so, go to step 506; if not, go to step 512.
Step 506: When the data of the write request is larger than the data capacity of a cache working area, determine whether part of the data of the write request is already buffered in the cache unit. If so, go to step 508; if not, go to step 510.
Step 508: When part of the data of the write request is already buffered in the cache unit, determine whether the unused cache working areas in the cache unit are enough to store all the data of the write request. If so, go to step 512; if not, go to step 510.
Step 510: When none of the data of the write request is buffered in the cache unit, write the data of the write request directly to the flash memory.
Step 512: When the data of the write request is smaller than the data capacity of a cache working area, write the data of the write request into an unused cache working area of the cache unit.
Step 514: Determine whether all cache working areas of the cache unit store data. If so, go to step 518; if not, go to step 516.
Step 516: Determine whether the idle time of the cache unit exceeds a predetermined time. If so, go to step 518; if not, go to step 500.
Step 518: When all cache working areas of the cache unit store data, or the idle time of the cache unit exceeds the predetermined time, write all the data of the cache working areas to the flash memory in one pass.
After the host 20 is electrically connected to the flash memory device 50, if the host 20 is to write first data to the flash memory device 50, where the size of the first data is 24 Kbytes, the host 20 transmits a first write request to the control unit 22 (step 502). The first write request includes the logical block address (LBA) corresponding to the first data and the size of the first data. The control unit 22 determines whether the first data is larger than the data capacity of a cache working area 26 (step 504). Because the data capacity of a cache working area 26 (assumed to be 128 Kbytes) is larger than the size of the first data (24 Kbytes), the control unit 22 first buffers the first data in cache working area 26a (step 512). Afterwards, suppose the control unit 22 receives a second write request that carries second data of 10 Kbytes. Once the control unit 22 determines that the second data is smaller than the data capacity of a cache working area 26, it buffers the second data in a cache working area; preferably the second data is buffered in cache working area 26a, so that cache working area 26a now holds both the first data and the second data. Next, suppose the control unit 22 receives a third write request that carries third data of 256 Kbytes. Because the data capacity of a cache working area 26 (128 Kbytes) is smaller than the size of the third data (256 Kbytes), the control unit 22 determines whether part of the third data is already buffered in a cache working area. At this moment the first data has been buffered in cache working area 26a, so the control unit 22 checks whether the first data and the third data overlap. If the third data does not overlap the first data, the third data is written directly to the flash memory 52 and is not buffered in the cache unit 24. Otherwise, if the third data overlaps the first data, the control unit 22 determines whether the unused cache working areas 26 of the cache unit 24 are enough to store the whole third data. If the unused cache working areas are enough to hold the third data, the third data is buffered in the cache working areas 26 of the cache unit 24; otherwise, the third data is written directly to the flash memory 52 and is not buffered in the cache working areas 26.
After writing the data of a write request into the cache unit 24, the control unit 22 also checks whether all cache working areas 26 of the cache unit 24 store data (step 514). When all cache working areas 26 of the cache unit 24 store data, the control unit 22 writes the data of the whole cache unit 24 to the flash memory 52 in one pass. Alternatively, when the control unit 22 determines that the idle time of the cache unit 24 exceeds a predetermined time (step 516), the control unit 22 writes the data of the cache unit 24 to the flash memory 52.
In brief, with this write mechanism, when the control unit 22 receives a write request it first determines the size of the data of the write request; if the data is smaller than the size of a cache working area, the small data is first buffered in the cache unit. Only when the cache unit is full of data, or the idle time of the cache unit exceeds a predetermined time, is the data written to the flash memory in one pass. Therefore, when write requests for small files are received repeatedly, the storage system of the present invention does not have to write data to the flash memory on every write request, as in the prior art; instead it waits until the cache unit is full or has been idle longer than the predetermined time and then writes the data to the flash memory in one pass, which significantly reduces the time spent repeatedly writing small files. For example, in the prior art, if the upper-layer file system writes ten consecutive 2-Kbyte pieces of data that all belong to the same block, and these ten pieces are written in ten separate operations, the whole block will be copied ten times. In the present invention these ten pieces of data are merged into a single write, and the whole block is copied only once, which significantly shortens the data write time.
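The write mechanism of Fig. 6 can be sketched in C roughly as follows; the line count, the idle threshold, the simplified handling of large writes (the patent additionally checks whether part of the data is already buffered before deciding), and the flash_write_block helper are assumptions for illustration.
#include <stdint.h>
#include <string.h>
#include <stdbool.h>
#include <time.h>

#define CACHE_LINE_SIZE (128u * 1024u)   /* assumed capacity of one working area */
#define NUM_CACHE_LINES 8                /* assumed number of working areas      */
#define IDLE_FLUSH_SECS 2                /* the predetermined idle time; assumed */

struct write_line {
    uint32_t block_addr;
    uint32_t bytes_used;
    bool     dirty;
    uint8_t  data[CACHE_LINE_SIZE];
};

static struct write_line wcache[NUM_CACHE_LINES];
static time_t last_write_time;

/* Assumed low-level writer. */
extern void flash_write_block(uint32_t block_addr, const uint8_t *buf, uint32_t len);

/* Step 518: flush every buffered working area to the flash memory in one pass. */
static void flush_all(void)
{
    for (int i = 0; i < NUM_CACHE_LINES; i++) {
        if (wcache[i].dirty) {
            flash_write_block(wcache[i].block_addr, wcache[i].data, wcache[i].bytes_used);
            wcache[i].dirty = false;
            wcache[i].bytes_used = 0;
        }
    }
}

/* Handle one write request (steps 504-518).  Small writes are buffered;
 * large writes are written directly in this simplified sketch.            */
void cached_write(uint32_t block_addr, const uint8_t *buf, uint32_t len)
{
    if (len > CACHE_LINE_SIZE) {                 /* steps 504-510 (simplified) */
        flash_write_block(block_addr, buf, len);
        return;
    }

    for (int i = 0; i < NUM_CACHE_LINES; i++) {  /* step 512: use a free line  */
        if (!wcache[i].dirty) {
            wcache[i].block_addr = block_addr;
            memcpy(wcache[i].data, buf, len);
            wcache[i].bytes_used = len;
            wcache[i].dirty = true;
            break;
        }
    }
    last_write_time = time(NULL);

    bool all_full = true;                        /* step 514: all lines full?  */
    for (int i = 0; i < NUM_CACHE_LINES; i++)
        if (!wcache[i].dirty) { all_full = false; break; }

    if (all_full)
        flush_all();                             /* step 518 */
}

/* Step 516: called periodically; flush if the cache has been idle too long. */
void cache_idle_tick(void)
{
    if (time(NULL) - last_write_time > IDLE_FLUSH_SECS)
        flush_all();
}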
Compared with the prior art, the storage system of the present invention provides a cache unit for buffering data to be written to the flash memory device or data read from the flash memory device. In the read process, particularly for data of small files that are read frequently, because data read from the flash memory for the first time is buffered in the cache unit, it no longer has to be read from the flash memory when the same data is read a second time, which significantly shortens the preparation time for reading data from the flash memory device. In the write process, particularly when the data of small files is written to the flash memory repeatedly, because the small file data to be written is first stored in the cache working areas of the cache unit and written to the flash memory in one pass only after the cache unit is full, the preparation time for writing to the flash memory is reduced.
In summary, although the present invention is disclosed above with a preferred embodiment, the preferred embodiment is not intended to limit the present invention. Those of ordinary skill in the art may make various changes and modifications without departing from the spirit and scope of the present invention, and the protection scope of the present invention is therefore defined by the scope of the claims.

Claims (3)

1. A method of improving the access efficiency of a flash memory, the flash memory comprising several blocks, each block comprising several pages, characterized in that the method comprises:
providing a cache unit comprising several cache working areas;
when a first write request for writing first write data to the flash memory is received, storing the first write data in a cache working area among the cache working areas; and
when the cache working areas are full of data, writing the data of the cache working areas to the flash memory.
2. The method of improving the access efficiency of a flash memory according to claim 1, characterized in that the method further comprises:
when the data volume of the first write data is greater than the data capacity of each cache working area and none of the first write data is buffered in the cache working areas, writing the first write data to the flash memory.
3. The method of improving the access efficiency of a flash memory according to claim 1, characterized in that the method further comprises:
when the idle time of the cache unit exceeds a predetermined time, writing the data of the cache unit to the flash memory.
CN201010161955A 2007-12-14 2007-12-14 Storage system and method for improving access efficiency of flash memory Pending CN101833516A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010161955A CN101833516A (en) 2007-12-14 2007-12-14 Storage system and method for improving access efficiency of flash memory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010161955A CN101833516A (en) 2007-12-14 2007-12-14 Storage system and method for improving access efficiency of flash memory

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CNA2007101609731A Division CN101458662A (en) 2007-12-14 2007-12-14 Storage system and method for improving flash memory access efficiency

Publications (1)

Publication Number Publication Date
CN101833516A true CN101833516A (en) 2010-09-15

Family

ID=42717592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010161955A Pending CN101833516A (en) 2007-12-14 2007-12-14 Storage system and method for improving access efficiency of flash memory

Country Status (1)

Country Link
CN (1) CN101833516A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109213692A (en) * 2017-07-06 2019-01-15 慧荣科技股份有限公司 storage device management system and storage device management method
CN109213692B (en) * 2017-07-06 2022-10-21 慧荣科技股份有限公司 Storage device management system and storage device management method
CN109426623A (en) * 2017-08-29 2019-03-05 深圳市中兴微电子技术有限公司 Method and device for reading data
CN110489054A (en) * 2018-05-14 2019-11-22 慧荣科技股份有限公司 Method for accessing flash memory module, related flash memory controller and electronic device
CN110489054B (en) * 2018-05-14 2022-09-23 慧荣科技股份有限公司 Method for accessing flash memory module, related flash memory controller and electronic device
WO2021082109A1 (en) * 2019-10-31 2021-05-06 江苏华存电子科技有限公司 Hybrid mapping table on static random access memory
TWI749490B (en) * 2020-03-25 2021-12-11 慧榮科技股份有限公司 Computer program product and method and apparatus for programming flash administration tables
US11307766B2 (en) 2020-03-25 2022-04-19 Silicon Motion, Inc. Apparatus and method and computer program product for programming flash administration tables

Similar Documents

Publication Publication Date Title
CN101458662A (en) Storage system and method for improving flash memory access efficiency
US11520697B2 (en) Method for managing a memory apparatus
US8055873B2 (en) Data writing method for flash memory, and controller and system using the same
US8386698B2 (en) Data accessing method for flash memory and storage system and controller using the same
US8364931B2 (en) Memory system and mapping methods using a random write page mapping table
US7529879B2 (en) Incremental merge methods and memory systems using the same
US8180955B2 (en) Computing systems and methods for managing flash memory device
CN102292711B (en) Solid state memory formatting
US8001317B2 (en) Data writing method for non-volatile memory and controller using the same
TWI635392B (en) Information processing device, storage device and information processing system
US20050021904A1 (en) Mass memory device based on a flash memory with multiple buffers
US20090222643A1 (en) Block management method for flash memory and controller and storage sysetm using the same
TWI385667B (en) Block accessing method for flash memory and storage system and controller using the same
US20090132757A1 (en) Storage system for improving efficiency in accessing flash memory and method for the same
US20100057979A1 (en) Data transmission method for flash memory and flash memory storage system and controller using the same
US20190384681A1 (en) Data storage device and operating method thereof
US20110145481A1 (en) Flash memory management method and flash memory controller and storage system using the same
CN101833516A (en) Storage system and method for improving access efficiency of flash memory
US8423707B2 (en) Data access method for flash memory and storage system and controller using the same
KR20090046568A (en) Flash memory system and writing method of thereof
CN111610929A (en) Data storage device and non-volatile memory control method
CN102023925A (en) A solid state disk and the application method thereof
JP2009265839A (en) Storage device
KR100490603B1 (en) Control method and apparatus for operations of flash memory system
CN110609817A (en) File storage system capable of preventing file fragmentation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20100915