CN100346318C - System and method for dynamically adjusting read ahead values based upon memory usage - Google Patents
- Publication number
- CN100346318C CNB2005100657692A CN200510065769A
- Authority
- CN
- China
- Prior art keywords
- read
- free
- page
- sequential access
- virtual memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Storage Device Security (AREA)
Abstract
A system and method for dynamically altering a Virtual Memory Manager (VMM) Sequential-Access Read Ahead settings based upon current system memory conditions is provided. Normal VMM operations are performed using the Sequential-Access Read Ahead values set by the user. When low memory is detected, the system either turns off Sequential-Access Read Ahead operations or decreases the maximum page ahead (maxpgahead) value based upon whether the amount of free space is simply low or has reached a critically low level. The altered VMM Sequential-Access Read Ahead state remains in effect until enough free space is available so that normal VMM Sequential-Access Read Ahead operations can be performed (at which point the altered Sequential-Access Read Ahead values are reset to their original levels).
Description
Technical field
The present invention relates generally to systems and methods for adjusting an operating system's read-ahead values according to memory usage. More specifically, the present invention relates to systems and methods for monitoring virtual memory conditions and adjusting the read-ahead values associated with sequential file access.
Background art
Virtual memory is very important in many modern, complex operating systems. Virtual memory is an imaginary memory area supported by some operating systems (for example, IBM's AIX™ operating system) in cooperation with hardware. Virtual memory provides an alternative set of storage addresses. Programs store instructions and data using these virtual addresses rather than real addresses. When the program actually executes, the virtual addresses are translated into real storage addresses.
The purpose of virtual memory is to enlarge the address space, that is, the set of addresses a program can use. For example, virtual memory might contain twice as many addresses as main memory. A program using all of virtual memory therefore cannot fit in main memory at once. Nevertheless, the computer can execute such a program by copying into main memory those portions of the program that are needed at any given point during execution.
To facilitate copying virtual memory into real memory, the operating system divides virtual memory into pages, each of which contains a fixed number of addresses. Each page is stored on disk until it is needed. When a page is needed, the operating system copies it from disk to main memory, translating the virtual addresses into real addresses.
In AIX™, virtual memory segments are partitioned into units called pages of 4K (4096) bytes, and real memory is divided into 4K-byte page frames. The VMM manages the allocation of page frames and resolves references to virtual-memory pages that are not currently in RAM (i.e., stored in paging space) or do not yet exist. To accomplish these tasks, the VMM maintains a "free list" of available page frames and uses a page-replacement algorithm to determine which virtual-memory pages currently in RAM will have their page frames returned (i.e., stolen) to the free list. The page-replacement algorithm used by the AIX VMM distinguishes between "persistent" pages and "working" pages. As the name implies, persistent segments have permanent storage locations on disk. Data files or executable programs are typically mapped to persistent segments. Working segments, in contrast, are transitory and exist only while a program is using them; they have no fixed disk storage location. When working pages are stolen, they are written to disk paging space. When a program exits, all of its working pages are immediately placed back on the free list. Because a working page must first be written back to disk before its page frame can be reused, it is usually preferable to steal persistent-segment pages before working-segment pages.
Modern operating systems, such as IBM's AIX™ operating system, typically manage virtual memory with a Virtual Memory Manager (VMM), which services memory requests received from the operating system and from applications. Many VMMs attempt to predict when a program is reading a file sequentially from disk so that pages can be prefetched, loading subsequent pages into memory just before the program requests them. This prediction performed by the VMM is commonly called "Sequential-Access Read Ahead."
In AIX™, the VMM attempts to anticipate the future demand for pages of a sequential file by detecting the pattern with which a program accesses the file. When a program accesses two successive pages of a file, the VMM assumes that the program will continue to access the file sequentially. The VMM therefore schedules additional reads of the file, so that file data becomes available to the program sooner than if the VMM waited for the program to request the next page before initiating further file I/O.
In AIX™, Sequential-Access Read Ahead can be turned on and off and is tuned with two VMM thresholds. First, when the VMM initially detects sequential access to a file, the number of pages read ahead is set to minpgahead. The second tuning threshold, maxpgahead, sets the maximum number of pages the VMM will read ahead in a sequential file. When sequential access to a file is first detected, minpgahead pages are read. As subsequent requests access further pages of the file sequentially, the number of pages prefetched increases until maxpgahead pages are being prefetched.
Fig. 1 is a diagram illustrating a prior-art implementation of Sequential-Access Read Ahead. VMM Sequential-Access Read Ahead processing commences at step 100, whereupon, at step 120, the first access to file 110 causes the first page (page 0) of the file to be read. At this point, the VMM makes no assumption about whether the access pattern is random or sequential. At step 130, the program accesses the first byte of the next page (page 1) without having accessed any other pages of the file in between. The VMM now concludes that the program is accessing the file sequentially. It schedules a number of additional pages corresponding to the current minpgahead value (for example, two additional pages). In this example, two additional pages (pages 2 and 3) are read. Thus, a total of three pages are read as a result of the program's second read request.
At step 140, the program accesses the first byte of the first page that has been read ahead (page 2). The VMM doubles the page-ahead value to 4 and schedules pages 4 through 7 to be read from file 110.
At step 150, the program accesses the first byte of the first page that has been read ahead (page 4). The VMM again doubles the page-ahead value, to 8, and schedules pages 8 through 15 to be read. This doubling continues until the amount of data being read reaches maxpgahead or the end of the file is reached.
At step 160, maxpgahead has been reached. The VMM continues reading maxpgahead pages each time the program accesses the first byte of a group of read-ahead pages, until the end of the file is reached.
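The ramp-up behavior described in steps 120 through 160 can be sketched in a few lines of Python. This is a simplified illustration of the doubling scheme with hypothetical names, not AIX source code:

```python
def readahead_schedule(minpgahead: int, maxpgahead: int, steps: int) -> list[int]:
    """Pages prefetched at each confirmed sequential access: the count starts
    at minpgahead and doubles until it is capped at maxpgahead."""
    schedule = []
    pgahead = minpgahead
    for _ in range(steps):
        schedule.append(pgahead)
        pgahead = min(pgahead * 2, maxpgahead)  # double, capped at maxpgahead
    return schedule

# With minpgahead=2 and maxpgahead=8, successive sequential accesses prefetch
# 2, 4, 8, 8, ... pages, mirroring pages 2-3, 4-7, and 8-15 in Fig. 1.
```

Under these assumed values, maxpgahead is reached after two doublings, matching the progression shown in steps 130 through 160.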
As Fig. 1 shows, a high maxpgahead value improves the efficiency and speed of performing large numbers of sequential read operations. The problem with prefetching a large number of sequential pages, however, is that memory sometimes becomes constrained. When memory becomes constrained, the VMM determines which virtual-memory pages currently in RAM will have their page frames returned to the free list. In the case of persistent segments, the stolen pages can be reallocated quickly. If working-segment pages must be reallocated, however, the working-segment data must first be written to paging space on disk.
What is needed, therefore, is a system and method that recognizes memory-constrained conditions and adjusts the VMM's sequential-access read-ahead thresholds accordingly. What is further needed is a system and method that automatically turns off the VMM's Sequential-Access Read Ahead when memory is severely constrained, and that turns Sequential-Access Read Ahead back on in a manner that reduces thrashing between the on and off states.
Summary of the invention
It has been discovered that the aforementioned problems can be solved by dynamically altering the VMM's Sequential-Access Read Ahead settings based upon current system memory conditions. In one embodiment, normal VMM operations are performed using the Sequential-Access Read Ahead values set by the user. When low free space is detected on the system's free list, the system either turns off Sequential-Access Read Ahead operations or decreases the maximum page-ahead (maxpgahead) value, depending on whether the amount of free space is merely low or critically low (Sequential-Access Read Ahead is turned off when memory is critically low, and maxpgahead is reduced when memory is low but not critically so). The altered VMM Sequential-Access Read Ahead state remains in effect until enough free space is available for normal VMM Sequential-Access Read Ahead operations to resume (at which point the altered Sequential-Access Read Ahead values are reset to their original levels).
In another embodiment, the Sequential-Access Read Ahead values are set dynamically with an algorithm that decreases and increases the maxpgahead value based upon thresholds. In addition, a sudden drastic drop in available pages causes Sequential-Access Read Ahead to be turned off. The current maximum page-ahead setting (CurPgAhead) is computed with an algorithm that considers the difference between the minimum free pages setting (minfree) and the amount of free space on the free list, as well as the difference between minfree and a low-free-list threshold set by the operator. In this way, the operator can tune the minfree and threshold settings according to the workload running on the computer system.
In one embodiment, after Sequential-Access Read Ahead has been turned off in response to a detected drastic drop in free space, it is turned back on only after free space has increased to or above its former level. In this way, Sequential-Access Read Ahead seldom thrashes back and forth between the on and off states in a manner that would affect system performance.
The foregoing is a summary and thus contains, by necessity, simplifications, generalizations, and omissions of detail. Those skilled in the art will appreciate that the summary is illustrative only and is not intended to be in any way limiting. Other aspects, inventive features, and advantages of the present invention, as defined solely by the claims, will become apparent in the non-limiting detailed description set forth below.
Description of drawings
The present invention may be better understood, and its numerous objects, features, and advantages made apparent to those skilled in the art, by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items. In these drawings:
Fig. 1 is a diagram illustrating a prior-art implementation of the VMM Sequential-Access Read Ahead function;
Fig. 2 is a flowchart illustrating one implementation of altering the VMM's sequential-access read-ahead thresholds;
Fig. 3 is a flowchart illustrating a second implementation of dynamically altering the VMM's sequential-access read-ahead thresholds;
Fig. 4 is a diagram showing a table of exemplary sequential-access read-ahead thresholds dynamically altered using the logic shown in Fig. 3; and
Fig. 5 is a block diagram of an information handling system capable of implementing the present invention.
Detailed description
The following is intended to provide a detailed description of an example of the invention and should not be taken to be limiting of the invention itself. Rather, any number of variations may fall within the scope of the invention, which is defined in the claims following the description.
Fig. 1 is a diagram illustrating a prior-art implementation of the VMM Sequential-Access Read Ahead function. The details of Fig. 1 are described in the Background section above.
Fig. 2 is a flowchart illustrating one implementation of altering the VMM's sequential-access read-ahead thresholds. Processing commences at 200, whereupon, at step 210, the minimum page-ahead value (minpgahead) and the maximum page-ahead value (maxpgahead) are set. The minpgahead setting determines the minimum number of pages retrieved by the VMM's sequential-access read-ahead process. Conversely, the maxpgahead setting determines the maximum number of pages that the VMM's sequential-access read-ahead process can retrieve. As explained in the Background section, the VMM's Sequential-Access Read Ahead uses the minpgahead and maxpgahead settings to prefetch pages of a file that is being accessed sequentially.
At step 220, "normal" VMM sequential-access read-ahead operations are performed while the amount of free page space (the number of pages available on free list 290) is monitored. For a more detailed description of normal VMM sequential-access read-ahead operations, see the Background section above. The system monitors the amount of free space (free pages) in system memory to ensure that enough free space is available to satisfy memory requests. A determination is made as to whether enough free space is available (decision 225). If enough free space is available, decision 225 branches to "yes" branch 226, and normal VMM sequential-access read-ahead operations continue. If, however, free space is constrained, decision 225 branches to "no" branch 228 in order to handle the low-free-space condition.
A determination is made as to whether the free-space shortage has reached a critical level (decision 230). If the amount of free space is critically low, decision 230 branches to "yes" branch 235, whereupon, at step 240, the VMM's Sequential-Access Read Ahead is turned off. If, however, the amount of free space is low but not critically low, decision 230 branches to "no" branch 245, whereupon, at step 250, the maximum number of pages that the VMM's sequential-access read-ahead process can retrieve (maxpgahead) is reduced. In one embodiment, maxpgahead is reduced by an amount that depends on the amount of free space. That is, when a larger amount of free space is available, maxpgahead is reduced by a smaller amount than when less free space is available.
At step 260, VMM operations continue using the settings provided in step 240 or step 250. If Sequential-Access Read Ahead was turned off, the VMM continues operating without performing sequential-access read-ahead operations. On the other hand, if maxpgahead was reduced, the VMM's sequential-access read-ahead operations continue to provide read-ahead service, but with a smaller maximum page-ahead value. In this way, either no pages or fewer pages are used to service sequential-access read ahead. The VMM periodically checks the amount of free space available in system memory to determine whether enough free space is available (decision 270). If memory is still constrained, decision 270 branches to "no" branch 275, and processing loops back to determine whether memory is critically constrained and to set the sequential-access read-ahead settings accordingly. On the other hand, if enough free space is available (i.e., memory is no longer constrained), decision 270 branches to "yes" branch 280, whereupon, at step 285, the VMM's Sequential-Access Read Ahead is turned on (if it was previously turned off) and the maxpgahead value is reset to its original value. Processing continues to adjust the VMM's sequential-access read-ahead settings according to the amount of memory currently available on the system.
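A minimal sketch of the Fig. 2 decision logic follows. The two cutoff levels for "low" and "critically low" free space are illustrative assumptions (the patent does not fix concrete numbers), and the proportional-reduction scheme is only one possibility consistent with step 250:

```python
def adjust_read_ahead(free_pages: int, low: int, critical: int,
                      maxpgahead: int) -> tuple[bool, int]:
    """Return (read_ahead_enabled, effective maxpgahead) per Fig. 2."""
    if free_pages <= critical:
        return False, 0                      # step 240: turn read ahead off
    if free_pages <= low:
        # step 250: reduce maxpgahead by an amount depending on free space --
        # the less free space remains, the larger the reduction.
        fraction = (free_pages - critical) / (low - critical)
        return True, max(1, int(maxpgahead * fraction))
    return True, maxpgahead                  # step 285: original settings apply
```

Calling this periodically from the decision 270 loop, with the original maxpgahead passed in each time, yields the restore-on-recovery behavior of step 285 automatically.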
Fig. 3 is a flowchart illustrating a second implementation of dynamically altering the VMM's sequential-access read-ahead thresholds. Processing commences at 300, whereupon, at step 310, the number of free page frames is retrieved from free list 315. In this embodiment, the VMM's current maximum sequential-access read-ahead value is denoted "CurPgAhead" (current maximum page ahead), so that the maximum page-ahead (maxpgahead) value set by the user/operator is not altered. At initialization, CurPgAhead is set equal to maxpgahead.
A determination is made as to whether the number of free pages currently available on the free list is less than or equal to the current maximum page-ahead (CurPgAhead) value (decision 320). If the current number of free pages is less than or equal to the current maximum page-ahead value, decision 320 branches to "yes" branch 325, whereupon, at step 330, the VMM's Sequential-Access Read Ahead is turned off in order to handle the critical memory constraint. If, however, the current number of free pages is greater than the current maximum page-ahead value, decision 320 branches to "no" branch 335, whereupon, at step 340, the VMM's sequential-access read-ahead operations are turned on. Note that, upon reaching decision 320, the VMM's sequential-access read-ahead process may already be either off or on, depending on the memory conditions encountered during previous memory analyses.
With the VMM's sequential-access read-ahead operations turned on, a determination is made as to whether memory pressure exists (decision 350). Decision 350 is based on whether the current number of free pages is less than the minimum free space (minfree) setting required by the VMM. If memory is constrained (i.e., current free space < minimum required free space), decision 350 branches to "yes" branch 355 in order to handle the memory pressure. If, however, memory is not constrained, decision 350 branches to "no" branch 362, whereupon, at step 370, the current maximum page-ahead value (CurPgAhead) is set equal to the maxpgahead value set by the user/operator.
Returning to decision 350, if memory is constrained, decision 350 branches to "yes" branch 355, whereupon a determination is made as to whether the user-set threshold is set to zero (i.e., dynamic alteration of the maxpgahead value is disabled) (decision 360). If dynamic alteration of maxpgahead is disabled, decision 360 branches to "no" branch 366, whereupon, at step 370, the current maximum page-ahead value (CurPgAhead) is set equal to the maxpgahead value set by the user/operator. If, however, dynamic alteration of the sequential-access read-ahead value is enabled, decision 360 branches to "yes" branch 375, and the maximum read-ahead value is dynamically altered according to current memory conditions.
Dynamic alteration of the maximum read-ahead value commences at step 380, where a shift-page (ShiftPg) value is computed with the following formula:

ShiftPg = integer((minfree - freelist) / (minfree - threshold))

where minfree is the minimum required number of free pages, freelist is the current number of free pages, threshold is the low-memory threshold at which dynamic alteration of the maximum sequential-access read-ahead value commences, and integer() truncates the quotient to a whole number. At step 390, the current maximum page-ahead value (CurPgAhead) is set equal to maxpgahead shifted right by the number of bit positions resulting from the ShiftPg computation above. In this embodiment, maxpgahead remains unchanged, and CurPgAhead is the dynamic maximum page-ahead value used by the VMM's sequential-access read-ahead process (for example, the sequential-access read-ahead operations shown in Fig. 1 would use the CurPgAhead setting, rather than the maxpgahead value, to determine the maximum number of pages to prefetch). Fig. 4, described in detail below, shows a table listing the resulting effective current maximum page-ahead values (CurPgAhead) based on the threshold value, the minimum required free-space value (minfree), and the maximum page-ahead value (maxpgahead).
At step 395, the VMM's sequential-access read-ahead process executes using the current maximum page-ahead (CurPgAhead) setting and the other settings established in the preceding steps. Periodically (i.e., after an interval of time), processing loops back to retrieve the current free list and readjust the CurPgAhead setting and the other sequential-access read-ahead settings as needed.
Fig. 4 is a diagram showing a table of exemplary sequential-access read-ahead thresholds dynamically altered using the logic shown in Fig. 3. In the example shown in Fig. 4, the minimum required free space (minfree) is set to 100 pages, the threshold is set to 90 (i.e., 90% of minfree), and the maximum sequential-access read-ahead value (maxpgahead) is set to 64.
Table 450 is populated using the algorithm of Fig. 3, with minfree, threshold, and maxpgahead held constant at 100, 90, and 64, respectively:
When the number of free pages exceeds 90, the resulting ShiftPg value is 0. Applying a shift of 0 to maxpgahead (64) yields a CurPgAhead identical to maxpgahead (64). When the number of free pages is less than or equal to 90 but greater than 80, the resulting ShiftPg value is 1; maxpgahead (64) is shifted right one position, yielding a CurPgAhead of 32.
When the number of free pages is less than or equal to 80 but greater than 70, the resulting ShiftPg value is 2; maxpgahead (64) is shifted right two positions, yielding a CurPgAhead of 16. When the number of free pages is less than or equal to 70 but greater than 60, the resulting ShiftPg value is 3; maxpgahead (64) is shifted right three positions, yielding a CurPgAhead of 8. When the number of free pages is less than or equal to 60 but greater than 50, the resulting ShiftPg value is 4; maxpgahead (64) is shifted right four positions, yielding a CurPgAhead of 4.
When the number of free pages is less than or equal to 50 but greater than 40, the resulting ShiftPg value is 5; maxpgahead (64) is shifted right five positions, yielding a CurPgAhead of 2. Finally, when the number of free pages is less than or equal to 40, the resulting ShiftPg value is 6; maxpgahead (64) is shifted right six positions, yielding a CurPgAhead of 1 (i.e., read ahead is effectively off).
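Applied with the Fig. 4 settings (minfree = 100, threshold = 90, maxpgahead = 64), the step 380 formula reproduces the CurPgAhead values of table 450. The function name and the short-circuit for free-page counts at or above minfree are illustrative assumptions, not taken from the patent text:

```python
def cur_pg_ahead(freelist: int, minfree: int = 100,
                 threshold: int = 90, maxpgahead: int = 64) -> int:
    """Effective maximum page-ahead value (CurPgAhead) per steps 380-390."""
    if freelist >= minfree:
        return maxpgahead                # no memory pressure: operator's value
    shiftpg = (minfree - freelist) // (minfree - threshold)  # integer quotient
    return maxpgahead >> shiftpg         # shift maxpgahead right by ShiftPg bits

# free pages 95 -> 64, 85 -> 32, 75 -> 16, 65 -> 8, 55 -> 4, 45 -> 2, 40 -> 1
```

Below about 40 free pages the shift continues toward zero; in practice decision 320 turns Sequential-Access Read Ahead off outright before the free list falls that far.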
Fig. 5 illustrates information handling system 501, which is a simplified example of a computer system capable of performing the computing operations described herein. Computer system 501 includes one or more processors 500 coupled to host bus 502. A level-two (L2) cache 504 is also coupled to host bus 502. Host-to-PCI bridge 506 is coupled to main memory 508, includes cache and main-memory control functions, and provides bus control to handle transfers among PCI bus 510, processor 500, L2 cache 504, main memory 508, and host bus 502. Main memory 508 is coupled to Host-to-PCI bridge 506 as well as to host bus 502. Devices used solely by host processor 500, such as LAN card 530, are coupled to PCI bus 510. Service Processor Interface and ISA Access Pass-through 512 provides an interface between PCI bus 510 and PCI bus 514. In this manner, PCI bus 514 is insulated from PCI bus 510. Devices such as flash memory 518 are coupled to PCI bus 514. In one implementation, flash memory 518 includes BIOS code that incorporates the necessary processor-executable code for a variety of low-level system functions and system boot functions.
PCI bus 514 provides an interface for a variety of devices that are shared by host processor 500 and service processor 516, including, for example, flash memory 518. PCI-to-ISA bridge 535 provides bus control to handle transfers between PCI bus 514 and ISA bus 540, universal serial bus (USB) functionality 545, and power management functionality 555, and can include other functional elements not shown, such as a real-time clock (RTC), DMA control, interrupt support, and system management bus support. Nonvolatile RAM 520 is attached to ISA bus 540. Service processor 516 includes JTAG and I2C buses 522 for communication with processor(s) 500 during initialization steps. JTAG/I2C buses 522 are also coupled to L2 cache 504, Host-to-PCI bridge 506, and main memory 508, providing a communications path between the processor, the service processor, the L2 cache, the Host-to-PCI bridge, and main memory. Service processor 516 also has access to system power resources for powering down information handling device 501.
Peripheral devices and input/output (I/O) devices can be attached to various interfaces (e.g., parallel interface 562, serial interface 564, keyboard interface 568, and mouse interface 570) coupled to ISA bus 540. Alternatively, many I/O devices can be accommodated by a super I/O controller (not shown) attached to ISA bus 540.
In order to attach computer system 501 to another computer system to copy files over a network, LAN card 530 is coupled to PCI bus 510. Similarly, to connect computer system 501 to an ISP to receive Internet access using a telephone line connection, modem 575 is connected to serial port 564 and to PCI-to-ISA bridge 535.
While the computer system shown in Fig. 5 is capable of executing the processes described herein, it is simply one example of a computer system. Those skilled in the art will appreciate that many other computer system designs are capable of performing the processes described herein.
One of preferred realization of the present invention is an application program, i.e. one group of instruction (program code) of the code module in random access memory that can for example reside in computing machine.Before computing machine needs, this group instruction can be stored in another computer memory, for example be stored on the hard disk drive or be stored in, perhaps download by internet or other computer networks such as in CD (for finally in CD ROM, using) or the floppy disk (for finally in floppy disk, using).Therefore, the present invention can be implemented as a computer program that supplies computing machine to use.In addition, though these illustrated methods easily realize in a multi-purpose computer that is activated selectively or reconfigured by software, the clear such method of personnel of generally being familiar with this technical field also can with the hardware that is configured to carry out these essential method steps, firmware or more specialized apparatus realize.
While particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from this invention; therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of this invention. Furthermore, it is to be understood that the invention is defined solely by the appended claims. It will be understood by those with skill in the art that if a specific number of an introduced claim element is intended, such intent will be explicitly recited in the claim, and in the absence of such recitation no such limitation is present. As a non-limiting example, as an aid to understanding, the following appended claims contain usage of the introductory phrases "at least one" and "one or more" to introduce claim elements. However, the use of such phrases should not be construed to imply that the introduction of a claim element by the indefinite articles "a" or "an" limits any particular claim containing such an introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an"; the same holds true for the use of definite articles in the claims.
Claims (28)
1. A computer-implemented method of managing memory, said method comprising:
detecting that memory managed by a virtual memory manager is constrained; and
in response to detecting that the memory is constrained, dynamically altering a setting used by a sequential-access read-ahead process, wherein the altered setting is adapted to conserve memory used by the sequential-access read-ahead process.
2. The method of claim 1, wherein the altering further comprises:
reducing a maximum page-ahead value, wherein said maximum page-ahead value determines the maximum number of pages that the sequential-access read-ahead process can read.
3. the method for claim 2, described method also comprises:
The corresponding value of free page frame number of retrieval and the current management of virtual memory manager; And
Calculate the poor of free page frame number and constant minimum required free number of pages, a wherein said maximum page read ahead values reduces an amount according to the difference of being calculated.
4. the method for claim 2, described method also comprises:
Reduce to carry out virtual memory manager a period of time at interval behind the maximum page or leaf read ahead values;
Determine the described time interval later storer be limited whether alleviate; And
Response is determined that storer is limited and is alleviated, increases maximum page or leaf read ahead values.
5. the process of claim 1 wherein that described change further comprises:
The pre-read procedure of forbidding sequential access.
6. the method for claim 5, described method also comprises:
Carry out virtual memory manager a period of time at interval behind the pre-read procedure of forbidding sequential access;
Determine the described time interval later storer be limited whether alleviate; And
Response is determined that storer is limited and is alleviated, enables the pre-read procedure of sequential access.
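Claims 1 through 6 above describe a policy that conserves memory either by disabling sequential-access read-ahead outright or by reducing the maximum page read-ahead value, and that restores the original settings once the constraint eases. A minimal sketch of that policy in C follows; the threshold names (`MINFREE`, `CRITICAL_FREE`), their values, and the halving step are illustrative assumptions, not figures taken from the patent.

```c
#include <stdbool.h>

/* Hypothetical tuning values -- the patent leaves these user-settable. */
#define ORIG_MAXPGAHEAD 8    /* user-configured maximum pages to read ahead */
#define MINFREE         128  /* free-page level considered "low"            */
#define CRITICAL_FREE   32   /* free-page level considered "critically low" */

struct readahead_state {
    bool enabled;     /* sequential-access read-ahead on/off */
    int  maxpgahead;  /* current maximum page read-ahead     */
};

/* Re-evaluate the read-ahead settings against the current free-page count. */
static void tune_readahead(struct readahead_state *ra, int free_pages)
{
    if (free_pages <= CRITICAL_FREE) {
        ra->enabled = false;                   /* critically low: turn read-ahead off */
    } else if (free_pages < MINFREE) {
        ra->enabled = true;
        ra->maxpgahead = ORIG_MAXPGAHEAD / 2;  /* merely low: shrink the window */
    } else {
        ra->enabled = true;                    /* pressure relieved: restore defaults */
        ra->maxpgahead = ORIG_MAXPGAHEAD;
    }
}
```

In a real VMM this re-evaluation would run periodically (the "time interval" of claims 4 and 6), so an altered setting persists only until free space recovers.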
7. the method for a computer implemented managing storage page, wherein said storer comprise a plurality of page or leaf and a plurality of free pages of having taken, and described method comprises:
Retrieval and the corresponding free tabular value of current free number of pages;
Determine that whether free tabular value is less than predetermined minimum value; And
Response determines that free tabular value less than predetermined minimum value, dynamically changes the used setting of the pre-read procedure of sequential access, and being provided with of wherein changing is fit to reduce the free page of distributing to the pre-read procedure of sequential access.
8. the method for claim 7, wherein said change further comprises:
Reduce current maximum page or leaf read ahead values, described current maximum page or leaf read ahead values is determined the maximum number of pages that the pre-read procedure of sequential access can read.
9. the method for claim 8, wherein said reducing further comprises:
Difference according to free tabular value and predetermined minimum value is calculated shift value; And
To a constant maximum page or leaf read ahead values described shift value that is shifted, described displacement obtains current maximum page or leaf read ahead values.
10. the method for claim 9, wherein said calculating further comprises:
The difference of free tabular value and predetermined minimum value is adjusted the poor of thresholding divided by predetermined minimum value with predetermined, and described being divided by, obtain quotient and the remainder; And
Shift value is set to resulting merchant.
11. the method for claim 8, described method also comprises:
Carry out virtual memory manager a period of time at interval in the described back that reduces, wherein said virtual memory manager managing storage page, and described virtual memory manager comprises the pre-read procedure of sequential access;
The retrieval and the available corresponding follow-up free tabular value of free number of pages of the described time interval later;
Follow-up free tabular value is compared with current maximum page or leaf read ahead values; And
Respond the comparative result of follow-up free tabular value, the pre-read procedure of forbidding sequential access less than current maximum page or leaf read ahead values.
12. the method for claim 8, described method also comprises:
Virtual memory manager a period of time is carried out at interval in the described back that reduces, wherein said virtual memory manager managing storage page, and described virtual memory manager comprises the pre-read procedure of sequential access;
The retrieval and the available corresponding follow-up free tabular value of free number of pages of the described time interval later;
Determine that whether follow-up free tabular value is less than predetermined minimum value;
Difference according to follow-up free tabular value and predetermined minimum value is calculated second shift value; And
To a constant maximum page or leaf read ahead values second shift value that is shifted, described displacement obtains current maximum page or leaf read ahead values.
13. the method for claim 8, described method also comprises:
Virtual memory manager a period of time is carried out at interval in the described back that reduces, wherein said virtual memory manager managing storage page, and described virtual memory manager comprises the pre-read procedure of sequential access;
The retrieval and the available corresponding follow-up free tabular value of free number of pages of the described time interval later;
Determine that whether follow-up free tabular value is greater than predetermined minimum value; And
Response is determined follow-up free tabular value greater than predetermined minimum value, and current maximum read ahead values is set to equal constant maximum page or leaf read ahead values.
14. the method for claim 7, wherein said change further comprises:
The pre-read procedure of forbidding sequential access.
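Claims 8 through 13 above compute the current maximum page read-ahead value by right-shifting a constant maximum by an amount derived from how far the free list has fallen below the predetermined minimum. The sketch below illustrates one way that computation could look; the identifiers, the adjustment step of 32 pages, and the clamping of the deficit to zero are illustrative assumptions, with only the quotient of the division retained, as claim 10 indicates.

```c
#define CONST_MAXPGAHEAD 8    /* constant (user-set) maximum page read-ahead */
#define MINFREE          128  /* predetermined minimum free-list level       */
#define ADJ_STEP         32   /* illustrative adjustment step (threshold)    */

/* Derive the shift from the free-list deficit: the quotient of the
 * deficit divided by the adjustment step; the remainder is discarded. */
static int shift_value(int free_list)
{
    int deficit = MINFREE - free_list;
    if (deficit <= 0)
        return 0;               /* free list at or above minimum: no shift */
    return deficit / ADJ_STEP;  /* integer division keeps only the quotient */
}

/* Right-shift the constant maximum to obtain the current maximum. */
static int current_maxpgahead(int free_list)
{
    return CONST_MAXPGAHEAD >> shift_value(free_list);
}
```

With these example values, a free list of 96 pages (deficit 32) shifts the maximum from 8 down to 4, and a severe deficit drives it to 0, which corresponds to the disabled state of claims 11 and 14.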
15. An information handling system comprising:
one or more processors;
a memory accessible by the processors;
an operating system that controls the processors;
a virtual memory manager, included in the operating system, for managing use of the memory;
a nonvolatile storage area managed by the operating system and including a disk swap area used by the virtual memory manager;
a sequential-access read-ahead process, executed by the operating system, for prefetching data read sequentially from files stored in the nonvolatile storage area; and
a memory-conservation software module used by the virtual memory manager, the software module operable to:
detect whether the memory managed by the virtual memory manager is constrained; and
in response to detecting that the memory is constrained, dynamically alter a setting used by the sequential-access read-ahead process, wherein the altered setting is adapted to conserve memory used by the sequential-access read-ahead process.
16. The information handling system of claim 15, wherein the software module is further operable to:
reduce a maximum page read-ahead value, the maximum page read-ahead value determining the maximum number of pages that the sequential-access read-ahead process can read.
17. The information handling system of claim 16, wherein the software module is further operable to:
retrieve a value corresponding to the number of free page frames currently managed by the virtual memory manager; and
calculate a difference between the number of free page frames and a constant minimum required number of free pages, the maximum page read-ahead value being reduced by an amount based on the calculated difference.
18. The information handling system of claim 16, wherein the software module is further operable to:
execute the virtual memory manager for a time interval after reducing the maximum page read-ahead value;
determine whether the memory constraint has eased after the time interval; and
in response to determining that the memory constraint has eased, increase the maximum page read-ahead value.
19. The information handling system of claim 15, wherein the software module is further operable to:
disable the sequential-access read-ahead process.
20. The information handling system of claim 19, wherein the software module is further operable to:
execute the virtual memory manager for a time interval after disabling the sequential-access read-ahead process;
determine whether the memory constraint has eased after the time interval; and
in response to determining that the memory constraint has eased, enable the sequential-access read-ahead process.
21. An information handling system comprising:
one or more processors;
a memory accessible by the processors;
an operating system that controls the processors;
a virtual memory manager, included in the operating system, for managing use of the memory;
a nonvolatile storage area managed by the operating system and including a disk swap area used by the virtual memory manager;
a sequential-access read-ahead process, executed by the operating system, for prefetching data read sequentially from files stored in the nonvolatile storage area; and
a memory-conservation software module used by the virtual memory manager, the software module operable to:
retrieve a free list value corresponding to the current number of free pages;
determine whether the free list value is less than a predetermined minimum; and
in response to determining that the free list value is less than the predetermined minimum, dynamically alter a setting used by the sequential-access read-ahead process,
wherein the altered setting is adapted to reduce the free pages allocated to the sequential-access read-ahead process.
22. The information handling system of claim 21, wherein the software module is further operable to:
reduce a current maximum page read-ahead value, wherein the current maximum page read-ahead value determines the maximum number of pages that the sequential-access read-ahead process can read.
23. The information handling system of claim 22, wherein the reducing further comprises:
calculating a shift value based on the difference between the free list value and the predetermined minimum; and
shifting a constant maximum page read-ahead value by the shift value, the shifting yielding the current maximum page read-ahead value.
24. The information handling system of claim 23, wherein the calculating further comprises:
dividing the difference between the free list value and the predetermined minimum by the difference between the predetermined minimum and a predetermined adjustment threshold, the dividing yielding a quotient and a remainder; and
setting the shift value to the resulting quotient.
25. The information handling system of claim 22, further comprising:
executing the virtual memory manager for a time interval after the reducing, wherein the virtual memory manager manages the memory pages and includes the sequential-access read-ahead process;
retrieving a subsequent free list value corresponding to the number of free pages available after the time interval;
comparing the subsequent free list value with the current maximum page read-ahead value; and
in response to the comparison indicating that the subsequent free list value is less than the current maximum page read-ahead value, disabling the sequential-access read-ahead process.
26. The information handling system of claim 22, further comprising:
executing the virtual memory manager for a time interval after the reducing, wherein the virtual memory manager manages the memory pages and includes the sequential-access read-ahead process;
retrieving a subsequent free list value corresponding to the number of free pages available after the time interval;
determining whether the subsequent free list value is less than the predetermined minimum;
calculating a second shift value based on the difference between the subsequent free list value and the predetermined minimum; and
shifting the constant maximum page read-ahead value by the second shift value, the shifting yielding the current maximum page read-ahead value.
27. The information handling system of claim 22, further comprising:
executing the virtual memory manager for a time interval after the reducing, wherein the virtual memory manager manages the memory pages and includes the sequential-access read-ahead process;
retrieving a subsequent free list value corresponding to the number of free pages available after the time interval;
determining whether the subsequent free list value is greater than the predetermined minimum; and
in response to determining that the subsequent free list value is greater than the predetermined minimum, setting the current maximum read-ahead value equal to the constant maximum page read-ahead value.
28. The information handling system of claim 27, wherein the altering further comprises:
disabling the sequential-access read-ahead process.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/828,455 | 2004-04-20 | ||
US10/828,455 US7120753B2 (en) | 2004-04-20 | 2004-04-20 | System and method for dynamically adjusting read ahead values based upon memory usage |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1707449A CN1707449A (en) | 2005-12-14 |
CN100346318C true CN100346318C (en) | 2007-10-31 |
Family
ID=35097664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2005100657692A Expired - Fee Related CN100346318C (en) | 2004-04-20 | 2005-04-15 | System and method for dynamically adjusting read ahead values based upon memory usage |
Country Status (3)
Country | Link |
---|---|
US (2) | US7120753B2 (en) |
CN (1) | CN100346318C (en) |
TW (1) | TWI354894B (en) |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060004977A1 (en) * | 2004-06-30 | 2006-01-05 | Joefon Jann | Autonomically tuning the virtual memory subsystem of a computer operating system |
US7552303B2 (en) * | 2004-12-14 | 2009-06-23 | International Business Machines Corporation | Memory pacing |
US9189291B2 (en) * | 2005-12-12 | 2015-11-17 | International Business Machines Corporation | Sharing a kernel of an operating system among logical partitions |
US9201703B2 (en) | 2006-06-07 | 2015-12-01 | International Business Machines Corporation | Sharing kernel services among kernels |
US20090083391A1 (en) * | 2007-09-20 | 2009-03-26 | Oh Yang Jen-Hsueh | Automatic control system with network gateway and method for operating the same |
US8352940B2 (en) * | 2008-06-09 | 2013-01-08 | International Business Machines Corporation | Virtual cluster proxy to virtual I/O server manager interface |
TWI384365B (en) * | 2009-01-19 | 2013-02-01 | Asustek Comp Inc | Control system and control method of virtual memory |
US8612374B1 (en) | 2009-11-23 | 2013-12-17 | F5 Networks, Inc. | Methods and systems for read ahead of remote data |
CN101976182A (en) * | 2010-11-15 | 2011-02-16 | 记忆科技(深圳)有限公司 | Solid state disk prereading method and device |
US8886880B2 (en) | 2012-05-29 | 2014-11-11 | Dot Hill Systems Corporation | Write cache management method and apparatus |
US8930619B2 (en) | 2012-05-29 | 2015-01-06 | Dot Hill Systems Corporation | Method and apparatus for efficiently destaging sequential I/O streams |
US9053038B2 (en) | 2013-03-05 | 2015-06-09 | Dot Hill Systems Corporation | Method and apparatus for efficient read cache operation |
US9684455B2 (en) | 2013-03-04 | 2017-06-20 | Seagate Technology Llc | Method and apparatus for sequential stream I/O processing |
US9552297B2 (en) | 2013-03-04 | 2017-01-24 | Dot Hill Systems Corporation | Method and apparatus for efficient cache read ahead |
US20140223108A1 (en) * | 2013-02-07 | 2014-08-07 | International Business Machines Corporation | Hardware prefetch management for partitioned environments |
US9152563B2 (en) | 2013-03-04 | 2015-10-06 | Dot Hill Systems Corporation | Method and apparatus for processing slow infrequent streams |
US9158687B2 (en) | 2013-03-04 | 2015-10-13 | Dot Hill Systems Corporation | Method and apparatus for processing fast asynchronous streams |
US9465555B2 (en) | 2013-08-12 | 2016-10-11 | Seagate Technology Llc | Method and apparatus for efficient processing of disparate data storage commands |
US9235511B2 (en) * | 2013-05-01 | 2016-01-12 | Globalfoundries Inc. | Software performance by identifying and pre-loading data pages |
US9547510B2 (en) * | 2013-12-10 | 2017-01-17 | Vmware, Inc. | Tracking guest memory characteristics for memory scheduling |
US9529609B2 (en) * | 2013-12-10 | 2016-12-27 | Vmware, Inc. | Tracking guest memory characteristics for memory scheduling |
US20160098203A1 (en) * | 2014-12-18 | 2016-04-07 | Mediatek Inc. | Heterogeneous Swap Space With Dynamic Thresholds |
KR20160075174A (en) * | 2014-12-19 | 2016-06-29 | 에스케이하이닉스 주식회사 | Memory system and operation method for the same |
KR20160104387A (en) * | 2015-02-26 | 2016-09-05 | 에스케이하이닉스 주식회사 | Data storage device and operating method thereof |
US20170116127A1 (en) * | 2015-10-22 | 2017-04-27 | Vormetric, Inc. | File system adaptive read ahead |
CN109471671B (en) * | 2017-09-06 | 2023-03-24 | 武汉斗鱼网络科技有限公司 | Program cold starting method and system |
US11855898B1 (en) | 2018-03-14 | 2023-12-26 | F5, Inc. | Methods for traffic dependent direct memory access optimization and devices thereof |
JP6838029B2 (en) * | 2018-10-31 | 2021-03-03 | ファナック株式会社 | Numerical control device |
US11275691B2 (en) * | 2019-04-11 | 2022-03-15 | EMC IP Holding Company LLC | Intelligent control of cache |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6408313B1 (en) * | 1998-12-16 | 2002-06-18 | Microsoft Corporation | Dynamic memory allocation based on free memory size |
US20030105940A1 (en) * | 2001-11-30 | 2003-06-05 | Cooksey Robert N. | Method and apparatus for reinforcing a prefetch chain |
CN1470019A (en) * | 2000-08-21 | 2004-01-21 | Intel Corporation | Method and apparatus for pipelining ordered input/output transactions in a cache coherent multi-processor system
CN1484788A (en) * | 2000-12-29 | 2004-03-24 | Intel Corporation | System and method for prefetching data into a cache based on miss distance
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
USRE36462E (en) * | 1986-01-16 | 1999-12-21 | International Business Machines Corporation | Method to control paging subsystem processing in virtual memory data processing system during execution of critical code sections |
US5606685A (en) * | 1993-12-29 | 1997-02-25 | Unisys Corporation | Computer workstation having demand-paged virtual memory and enhanced prefaulting |
US6167486A (en) * | 1996-11-18 | 2000-12-26 | Nec Electronics, Inc. | Parallel access virtual channel memory system with cacheable channels |
US7406547B2 (en) * | 2000-08-09 | 2008-07-29 | Seagate Technology Llc | Sequential vectored buffer management |
US7017025B1 (en) * | 2002-06-27 | 2006-03-21 | Mips Technologies, Inc. | Mechanism for proxy management of multiprocessor virtual memory |
US7336283B2 (en) * | 2002-10-24 | 2008-02-26 | Hewlett-Packard Development Company, L.P. | Efficient hardware A-buffer using three-dimensional allocation of fragment memory |
US20040268124A1 (en) * | 2003-06-27 | 2004-12-30 | Nokia Corporation, Espoo, Finland | Systems and methods for creating and maintaining a centralized key store |
- 2004-04-20: US application US10/828,455 filed; patent US7120753B2 (not active: Expired - Fee Related)
- 2005-04-14: TW application TW094111852A filed; patent TWI354894B (not active: IP Right Cessation)
- 2005-04-15: CN application CNB2005100657692A filed; patent CN100346318C (not active: Expired - Fee Related)
- 2006-08-08: US application US11/463,100 filed; patent US7318142B2 (not active: Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
TWI354894B (en) | 2011-12-21 |
TW200604809A (en) | 2006-02-01 |
US7120753B2 (en) | 2006-10-10 |
US7318142B2 (en) | 2008-01-08 |
US20050235125A1 (en) | 2005-10-20 |
CN1707449A (en) | 2005-12-14 |
US20060288186A1 (en) | 2006-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100346318C (en) | System and method for dynamically adjusting read ahead values based upon memory usage | |
CN100345124C (en) | Method and system for reduction of cache miss rates using shared private caches | |
US9235531B2 (en) | Multi-level buffer pool extensions | |
US8166326B2 (en) | Managing power consumption in a computer | |
CA2511752C (en) | Method and apparatus for morphing memory compressed machines | |
CN100442249C (en) | System and method for dynamic sizing of cache sequential list | |
CN100555257C (en) | The memory controller of the dma operation between the processing page replicative phase and method | |
US5895488A (en) | Cache flushing methods and apparatus | |
CN100458738C (en) | Method and system for management of page replacement | |
US7024512B1 (en) | Compression store free-space management | |
CN100573477C (en) | The system and method that group in the cache memory of managing locks is replaced | |
JPH0775004B2 (en) | Memory control method | |
US8019939B2 (en) | Detecting data mining processes to increase caching efficiency | |
US20050268052A1 (en) | System and method for improving performance of dynamic memory removals by reducing file cache size | |
US7143242B2 (en) | Dynamic priority external transaction system | |
CN1196997C (en) | Load/load detection and reorder method | |
JP3262519B2 (en) | Method and system for enhancing processor memory performance by removing old lines in second level cache | |
CN1286006C (en) | Cache system and method for managing cache system | |
US7080206B2 (en) | System and method for adaptively loading input data into a multi-dimensional clustering table | |
US6463515B1 (en) | System and method for recovering physical memory locations in a computer system | |
CN107408060B (en) | Data processing method and device | |
Baek et al. | Matrix-stripe-cache-based contiguity transform for fragmented writes in RAID-5 | |
US20090132780A1 (en) | Cache line reservations | |
CN117093508B (en) | Memory resource management method and device, electronic equipment and storage medium | |
JPH0363741A (en) | Disk cache device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2007-10-31; Termination date: 2010-04-15 |