CN1818887A - Built-in file system realization based on SRAM - Google Patents
- Publication number
- CN1818887A
- Authority
- CN
- China
- Prior art keywords
- data
- sram
- file system
- loading
- speed cache
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
A method for implementing an embedded file system based on SRAM. The invention uses the SRAM that serves as the data cache in an embedded system, builds a miniature file system on the SRAM using a hashing algorithm, establishes an index into the data cache for frequently used data files, and operates on the data cache directly. The advantages are: faster file input/output operations in the embedded system, higher throughput of input/output operations, and better utilization of the data cache.
Description
Technical field
The present invention relates to the field of embedded file systems, and in particular to a method for implementing an embedded file system based on SRAM.
Background technology
At present, external storage media of many kinds, capacities, access speeds, and prices have appeared in embedded environments; in particular, high-capacity Flash memory is now applied in the mobile embedded field, and the capacity of the cache integrated in the CPU has also improved markedly. Combining the advantages of these advances in hardware manufacturing makes it possible to improve the performance of embedded systems.
Memory, or internal storage, also called primary memory, is one of the key components determining a computer's runtime performance, and is undoubtedly very important. To speed up the system and improve its overall performance, computers are being configured with ever more memory, and the kinds of memory are also multiplying.
The access time of computer instructions depends primarily on memory. For most computer systems today, memory access time is a major factor limiting system performance. Therefore, when judging the performance of a system, one cannot look only at the amount of memory; the kind of memory used and its operating speed must also be considered.
DRAM, dynamic random access memory, is mainly used as primary memory. For a long time, the dynamic RAM in use was PM RAM, followed somewhat later by FPM RAM. To keep pace with ever-faster CPUs, new types of primary memory were developed: EDO RAM, BEDO RAM, SDRAM, and so on.
A DRAM chip is designed as a matrix of bits, each bit position having a row address and a column address. The memory controller must supply the chip with an address before a specific bit can be read from the chip. A chip marked 70 ns reads one bit of data in 70 ns; obtaining address information from the CPU and setting up the next instruction takes additional time. Continuous advances in chip manufacturing technology have made this processing ever more efficient.
FPM RAM, fast page mode random access memory: the "page" here refers to a 2048-bit segment of the storage array in a DRAM chip. FPM RAM is the earliest such random access memory; 60 ns FPM RAM can be used in Pentium systems with a 66 MHz (megahertz) bus (CPU frequencies of 100, 133, 166, and 200 MHz).
Fast-page-mode memory is often used on video cards and is commonly referred to simply as "DRAM". One specially designed variant, VRAM, has an access time of only 48 ns. This specially designed memory is "dual-ported": one port can be accessed directly by the CPU, while the other can be accessed independently through the memory's "direct access channel". The direct access channel can thus work without waiting for the CPU to finish its access, making VRAM faster than ordinary DRAM.
EDO RAM, extended data output random access memory. Besides the storage cells, a DRAM chip also contains auxiliary logic circuits. By adding a small amount of extra logic to the RAM chip, the data traffic per unit time can be increased, i.e. the so-called bandwidth can be raised. EDO is precisely such an attempt. EDO works much like FPM DRAM, but it also has a faster, more idealized burst read-cycle timing than FPM DRAM. This saves three clock cycles when reading a block of four data elements from DRAM on a 66 MHz bus.
BEDO RAM, burst extended data output random access memory, reads data in "bursts": after a memory address is supplied, the chip assumes the addresses of the data that follow and, in effect, fetches them automatically in advance. In this way, reading each of the next three data items takes only a single clock cycle; with burst-mode reads (52 ns BEDO on a 66 MHz bus) the transfer rate improves greatly, and the processor's instruction queue can be filled effectively. This RAM is supported only by the VIA chipsets 580VP, 590VP, and 860VP. This genuinely fast BEDO RAM is nevertheless flawed: it cannot be matched with buses running faster than 66 MHz.
SDRAM, synchronous DRAM, is synchronized with the system clock and uses a pipelined processing mode: given a single specified address, SDRAM can read multiple data items, i.e. it performs burst transfers. Specifically: first, the address is supplied; second, data is passed from the memory address to the output circuit; third, the data is output. The key is that these three steps are carried out independently of one another and in synchrony with the CPU, whereas older memory could only output data after executing all three steps from start to finish. This is the secret of SDRAM's speed. The read/write cycle of SDRAM is 10 to 15 ns. SDRAM contains two interleaved storage arrays in a two-bank structure; while the CPU is accessing data in one bank or array, the other readies itself for reading and writing. By switching tightly between the two storage arrays, read efficiency is significantly improved.
SDRAM is not only used as main memory but is also widely used as dedicated memory on display cards. For a display card, the wider the data bandwidth, the more data can be processed at once, the more information can be displayed, and the higher the display quality.
SDRAM is also applied in the unified memory architecture (UMA), a structure that integrates main memory and display memory. This structure greatly reduces system cost: many high-performance display cards are expensive precisely because their dedicated display memory is costly, whereas UMA uses main memory as display memory, eliminating the need for dedicated display memory and thereby reducing cost.
SRAM, static random access memory, is divided by generation and working mode into asynchronous and synchronous variants. Static RAM is mostly used for high-speed caches (Cache).
The main method now used to match the speeds of the CPU and main memory is to add an SRAM-based second-level cache between the CPU and memory. Such a memory system can serve about 85% of memory requests without requiring the CPU to insert extra wait states. Configuring a larger cache improves system performance further.
Most current file systems are built on external storage, where file access is slow and becomes a bottleneck in the data flow. By building the file system on SRAM, this problem can be solved, while also making full use of the ever-growing SRAM capacity available today.
Summary of the invention
The object of the present invention is to provide a method for implementing an embedded file system based on SRAM.
The technical scheme adopted by the present invention to solve its technical problem is as follows:
1) Scanning of the data cache by the SRAM file system
Before loading, the SRAM file system (SRAM: static random access memory, Static Random Access Memory) starts a scan of the data cache to obtain the currently available data-cache capacity;
2) Loading of the SRAM file system
Loading begins from the SRAM file system control block SRAMFSCB. The most recent SRAMFSCB is found in the SRAMFSCB data area; according to the available data-cache capacity obtained in the previous step, the file system is loaded and the file system data is initialized, completing the initialization work;
3) Block allocation in the SRAM file system
The SRAM file system combines associated data, that is, data with continuity in order of use or in location, into one block, which is allocated to a contiguous region;
4) Data replacement in the SRAM file system
When data needed by the system is temporarily not in the SRAM file system, the least recently used replacement policy (LRU, Least Recently Used) is adopted: data in the SRAM file system is replaced out of the data cache and the needed data is loaded in.
Compared with the background art, the present invention has the following beneficial effects:
The present invention is a method for implementing an embedded file system based on SRAM. The method makes full use of the SRAM used as a data cache in an embedded system: a miniature file system is built on the SRAM using a hashing method, an index into the data cache is established for frequently used data files, and the data cache is operated on directly.
(1) High efficiency. The method speeds up file input/output operations in the embedded system and improves the throughput of input/output operations.
(2) Practicality. The capacity of SRAM in embedded systems keeps improving, and the method makes full use of this fact. It markedly improves the match between CPU speed and the input/output speed of the file system, and has proven practical in use.
Description of drawings
The accompanying drawing is a schematic diagram of the process of the present invention.
Embodiment
The present invention is further illustrated below in conjunction with the drawing and an embodiment.
The specific implementation of the method for an embedded file system based on SRAM is as follows:
1) Scanning of the data cache by the SRAM file system
In the SRAM data cache, in order to support data access properly, the embedded operating system provides flags that support scanning of the data cache's usage state, as shown in the following table:
Flag | Purpose |
---|---|
Block address | Locates the block address |
Tag | Indicates whether the block is in use |
Index | Identifies the index position |
Block offset | Locates the offset within the block |
When the SRAM file system starts, it scans the data cache. By inspecting the Tag bit listed in the flag table, the SRAM file system obtains valid data on the usage of the data cache, and thus the currently available data-cache capacity.
If the current data cache has enough capacity to offer to the SRAM file system, this result is saved; otherwise the system is in a busy phase, and the loading process of the SRAM file system is stopped.
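The scan described above can be sketched as follows. The field names, widths, and block size are assumptions modeled on the flag table, not the actual on-chip layout:

```c
#include <assert.h>
#include <stdint.h>

#define NBLOCKS    4
#define BLOCK_SIZE 256  /* bytes per cache block -- illustrative value */

/* One descriptor per cache block, mirroring the flag table above. */
struct cache_tag {
    uint32_t block_addr;   /* home block address */
    uint8_t  tag;          /* 1 = block in use, 0 = free */
    uint16_t index;        /* index position */
    uint16_t block_offset; /* offset within the block */
};

/* Example tag array: blocks 1 and 3 are free. */
static const struct cache_tag sample_tags[NBLOCKS] = {
    {0x100, 1, 0, 0}, {0x200, 0, 1, 0}, {0x300, 1, 2, 0}, {0x400, 0, 3, 0},
};

/* Scan the Tag bits and return the available capacity in bytes. */
unsigned scan_free_capacity(const struct cache_tag tags[], int n) {
    unsigned free_blocks = 0;
    for (int i = 0; i < n; i++)
        if (tags[i].tag == 0)
            free_blocks++;
    return free_blocks * BLOCK_SIZE;
}
```

If the returned capacity meets the file system's requirement, loading proceeds; otherwise it is deferred, as described above.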
2) Loading of the SRAM file system
Loading begins from the SRAM file system control block SRAMFSCB. The SRAMFSCB resides in on-chip memory. When the SRAM file system starts, the most recent SRAMFSCB is found in the SRAMFSCB data area; according to the available data-cache capacity from the previous step, the file system is loaded and the file system data initialized, completing the initialization work.
The directory structure of the SRAM file system is relatively simple and the number of files is limited, so files and directories are managed uniformly as one kind of object. When a file is accessed, the file's index number is computed by hashing the file path directly, and the index node is then visited. This clearly speeds up path lookup.
At the same time, during loading, the index is used for initialization; the SRAM file system therefore does not need to load by scanning all files, which simplifies the process of building the file structure and shortens the loading time.
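The direct path-to-index lookup can be illustrated with a minimal sketch. The patent does not name a particular hash function or table size, so the djb2 hash and `TABLE_SIZE` here are assumptions for illustration only:

```c
#include <assert.h>

#define TABLE_SIZE 64  /* assumed size of the index-node table */

/* djb2 string hash -- a common choice, assumed here since the
   text does not specify the hashing method. */
static unsigned long hash_path(const char *path) {
    unsigned long h = 5381;
    for (; *path; path++)
        h = h * 33 + (unsigned char)*path;
    return h;
}

/* Map a file path directly to an index number, avoiding a
   component-by-component directory walk. */
unsigned int index_of(const char *path) {
    return (unsigned int)(hash_path(path) % TABLE_SIZE);
}
```

A single hash of the full path replaces a traversal of each directory level, which is why lookup is faster in such a flat, bounded namespace.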
3) Block allocation in the SRAM file system
The principle of locality of computer instructions shows that, within a short interval of time, the addresses generated by a program tend to concentrate in a very small range of the memory's logical address space. Instruction addresses are laid out sequentially to begin with, and loop segments and subroutine segments are executed repeatedly. Accesses to these addresses therefore naturally tend to cluster in time.
This clustering tendency is less obvious for data than for instructions, but the storage and access of arrays and the choice of working cells can still make storage addresses relatively concentrated. This phenomenon, in which addresses in a small range are accessed frequently while addresses outside that range are accessed rarely, is called the locality of program access.
According to the principle of locality, the SRAM file system combines associated data, that is, data with continuity in order of use or in location, into one block, which is allocated to a contiguous region. A portion of the data file near the executing instruction address is called from main memory into the SRAM cache for the CPU to use over a period of time. This has a great effect on program running speed: the system's file access speed increases greatly, and the throughput of input/output operations improves.
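One minimal way to realize the contiguous allocation described above is a first-fit search over a free-block map; the map contents and block counts below are illustrative assumptions, not details from the text:

```c
#include <assert.h>

#define NBLOCKS 16

/* Free-block map: 1 = free, 0 = used (contents are illustrative). */
static int free_map[NBLOCKS] = {1,1,0,1,1,1,1,0,1,1,1,1,1,1,1,1};

/* First-fit search for `need` contiguous free blocks. Returns the
   starting block index and marks the run used, or returns -1.
   Placing associated data in one such run keeps it in a single
   contiguous region, as the scheme above requires. */
int alloc_contiguous(int need) {
    int run = 0;
    for (int i = 0; i < NBLOCKS; i++) {
        run = free_map[i] ? run + 1 : 0;
        if (run == need) {
            int start = i - need + 1;
            for (int j = start; j <= i; j++)
                free_map[j] = 0;
            return start;
        }
    }
    return -1;
}
```

With the map above, a request for four blocks skips the two-block run at the start and lands on the first run long enough, keeping the combined data physically adjacent.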
4) Data replacement in the SRAM file system
When data needed by the system is temporarily not in the SRAM file system, the least recently used (LRU) replacement policy is adopted: the least recently used data file is found in the SRAM data cache, its data is replaced out of the data cache, and at the same time the needed data is read from storage and loaded into the data cache.
LRU is a general and efficient algorithm, widely adopted by operating systems, database management systems, and dedicated file systems. The page this algorithm evicts is the page that has gone unaccessed for the longest recent period. It is based on the locality property of program execution: pages that have just been used are likely to be used again soon, while pages that have not been used for a long time are, in general, unlikely to be used immediately.
To evict the least recently used page accurately, a special queue (called here the elimination queue) must be maintained for the pages. This queue stores the numbers of the pages currently in main memory and is adjusted on every page access, so that the rear of the queue always points to the most recently accessed page, and the head of the queue is the least recently used page. Clearly, on a page fault the page indicated by the head of the queue is always evicted, and after a page access the accessed page must be moved to the rear of the queue.
Example: a process has been allocated three main memory frames, and the page numbers it accesses in turn are 4, 3, 0, 4, 1, 1, 2, 3, 2. As these pages are accessed, the elimination queue changes as follows:
Accessed page | Elimination queue | Evicted page |
---|---|---|
4 | 4 | |
3 | 4, 3 | |
0 | 4, 3, 0 | |
4 | 3, 0, 4 | |
1 | 0, 4, 1 | 3 |
1 | 0, 4, 1 | |
2 | 4, 1, 2 | 0 |
3 | 1, 2, 3 | 4 |
2 | 1, 3, 2 | |
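The elimination-queue behavior traced above can be reproduced with a short sketch (three frames; the queue head at index 0 is the least recently used page):

```c
#include <assert.h>

#define FRAMES 3

/* Elimination queue: head (index 0) is the least recently used page,
   the rear is the most recently accessed page. */
static int queue[FRAMES];
static int count = 0;

/* Access page p; returns the evicted page number, or -1 if none. */
int lru_access(int p) {
    for (int i = 0; i < count; i++) {
        if (queue[i] == p) {              /* hit: move p to the rear */
            for (int j = i; j < count - 1; j++)
                queue[j] = queue[j + 1];
            queue[count - 1] = p;
            return -1;
        }
    }
    if (count < FRAMES) {                 /* miss with a free frame */
        queue[count++] = p;
        return -1;
    }
    int victim = queue[0];                /* miss, all frames full */
    for (int j = 0; j < FRAMES - 1; j++)
        queue[j] = queue[j + 1];
    queue[FRAMES - 1] = p;
    return victim;
}
```

Feeding the reference string 4, 3, 0, 4, 1, 1, 2, 3, 2 from the example evicts pages 3, 0, and 4 at the fifth, seventh, and eighth accesses, matching the table.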
From an implementation standpoint, the operations of the LRU algorithm are complex and therefore expensive, so approximate methods are often adopted in practice.
The first approximation uses flag bits. A reference bit R is set up for each page; each time a page is accessed, the hardware sets that page's R bit to 1, and every interval t the R bits of all pages are cleared to 0. On a page fault, a page whose R bit is 0 is chosen for eviction; after choosing the page to evict, the R bits of all pages are cleared again. This implementation is cheap, but the size of t is hard to choose and the accuracy is poor: if t is large, the R bits of all pages may be 1 at the time of the fault; if t is small, the R bits of all pages may be 0, making it equally hard to pick the page to evict.
The second approximation sets up a multi-bit register r for each page. When a page is accessed, the leftmost bit of its register is set to 1; every interval t, the r registers are shifted right by one bit; on a page fault, the page whose r register has the smallest value is evicted.
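The second approximation can be sketched as follows; the 8-bit register width and the page count are illustrative choices, not fixed by the text:

```c
#include <assert.h>
#include <stdint.h>

#define NPAGES 4

/* One aging register per page. */
static uint8_t age_reg[NPAGES];

/* On access, set the leftmost bit of the page's register. */
void on_access(int page) { age_reg[page] |= 0x80u; }

/* Every interval t, shift all registers right by one bit. */
void on_tick(void) {
    for (int i = 0; i < NPAGES; i++)
        age_reg[i] >>= 1;
}

/* On a page fault, evict the page with the smallest register value. */
int pick_victim(void) {
    int v = 0;
    for (int i = 1; i < NPAGES; i++)
        if (age_reg[i] < age_reg[v])
            v = i;
    return v;
}
```

Recently accessed pages carry high register values, while the values of untouched pages decay toward zero tick by tick, so the minimum register identifies an approximately least recently used page.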
This algorithm is general and can be implemented in a general-purpose programming language such as C, C++, or Java.
Claims (1)
1. A method for implementing an embedded file system based on SRAM, characterized in that:
1) Scanning of the data cache by the SRAM file system
Before loading, the SRAM file system (SRAM: static random access memory, Static Random Access Memory) starts a scan of the data cache to obtain the currently available data-cache capacity;
2) Loading of the SRAM file system
Loading begins from the SRAM file system control block SRAMFSCB. The most recent SRAMFSCB is found in the SRAMFSCB data area; according to the available data-cache capacity obtained in the previous step, the file system is loaded and the file system data is initialized, completing the initialization work;
3) Block allocation in the SRAM file system
The SRAM file system combines associated data, that is, data with continuity in order of use or in location, into one block, which is allocated to a contiguous region;
4) Data replacement in the SRAM file system
When data needed by the system is temporarily not in the SRAM file system, the least recently used replacement policy (LRU, Least Recently Used) is adopted: data in the SRAM file system is replaced out of the data cache and the needed data is loaded in.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2006100498756A CN100377118C (en) | 2006-03-16 | 2006-03-16 | Built-in file system realization based on SRAM |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1818887A true CN1818887A (en) | 2006-08-16 |
CN100377118C CN100377118C (en) | 2008-03-26 |
Family
ID=36918906
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2006100498756A Expired - Fee Related CN100377118C (en) | 2006-03-16 | 2006-03-16 | Built-in file system realization based on SRAM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100377118C (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6449690B1 (en) * | 1999-06-25 | 2002-09-10 | Hewlett-Packard Company | Caching method using cache data stored in dynamic RAM embedded in logic chip and cache tag stored in static RAM external to logic chip |
US6876557B2 (en) * | 2001-06-12 | 2005-04-05 | Ibm Corporation | Unified SRAM cache system for an embedded DRAM system having a micro-cell architecture |
CN1328662C (en) * | 2003-09-28 | 2007-07-25 | 中兴通讯股份有限公司 | Fault-tolerant processing method for embedding device file system |
CN1632745A (en) * | 2003-12-22 | 2005-06-29 | 中国电子科技集团公司第三十研究所 | Operating method for filing system using multi-equipment with IDE interface |
- 2006-03-16 CN CNB2006100498756A patent/CN100377118C/en not_active Expired - Fee Related
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101833559B (en) * | 2009-11-05 | 2012-07-04 | 北京炬力北方微电子有限公司 | Method and device for reading FAT ( disk files |
CN103019955A (en) * | 2011-09-28 | 2013-04-03 | 中国科学院上海微系统与信息技术研究所 | Memory management method based on application of PCRAM (phase change random access memory) main memory |
CN103019955B (en) * | 2011-09-28 | 2016-06-08 | 中国科学院上海微系统与信息技术研究所 | The EMS memory management process of PCR-based AM main memory application |
CN102902748A (en) * | 2012-09-18 | 2013-01-30 | 上海移远通信技术有限公司 | Establishing method and managing method for file systems and random access memory (RAM) and communication chip of file systems |
CN105094827A (en) * | 2015-07-24 | 2015-11-25 | 上海新储集成电路有限公司 | Method for starting processor |
CN105094827B (en) * | 2015-07-24 | 2018-08-28 | 上海新储集成电路有限公司 | A kind of method that processor starts |
WO2017107414A1 (en) * | 2015-12-25 | 2017-06-29 | 百度在线网络技术(北京)有限公司 | File operation method and device |
US11003625B2 (en) | 2015-12-25 | 2021-05-11 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for operating on file |
CN108958813A (en) * | 2018-06-13 | 2018-12-07 | 北京无线电测量研究所 | file system construction method, device and storage medium |
CN109857573A (en) * | 2018-12-29 | 2019-06-07 | 深圳云天励飞技术有限公司 | A kind of data sharing method, device, equipment and system |
Also Published As
Publication number | Publication date |
---|---|
CN100377118C (en) | 2008-03-26 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2008-03-26; Termination date: 2012-03-16