CN103250142B - System and method for managing cache destage scan times - Google Patents

System and method for managing cache destage scan times

Info

Publication number
CN103250142B
CN103250142B CN201180059215.5A
Authority
CN
China
Prior art keywords
scan time
cache
scan
time
destage
Prior art date
Legal status
Active
Application number
CN201180059215.5A
Other languages
Chinese (zh)
Other versions
CN103250142A (en)
Inventor
M·T·本哈斯
B·C·比尔兹利
S·E·威廉姆斯
L·M·古普塔
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date
Filing date
Publication date
Application filed by International Business Machines Corp
Publication of CN103250142A
Application granted
Publication of CN103250142B
Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0804 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
    • G06F12/0844 Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F12/0855 Overlapped cache accessing, e.g. pipeline
    • G06F12/0859 Overlapped cache accessing, e.g. pipeline with reload from main memory
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Systems and methods for managing destage scan times in a cache are provided. One system comprises a cache and a processor. The processor is configured to use a first thread to continuously determine a desired scan time for scanning a plurality of storage tracks in the cache, and to use a second thread to continuously control an actual scan time of the plurality of storage tracks in the cache based on the continuously determined desired scan time. One method comprises using a first thread to continuously determine a desired scan time for scanning a plurality of storage tracks in the cache, and using a second thread to continuously control an actual scan time of the plurality of storage tracks in the cache based on the continuously determined desired scan time. Physical computer storage media comprising a computer program product for performing the above method are also provided.

Description

System and method for managing cache destage scan times
Technical Field
The present invention relates generally to computing systems, and more particularly to systems and methods for managing destage scan times in a cache.
Background
One goal of computer storage systems is to reduce the number of destage conflicts that occur when storage tracks are destaged from a write cache, so that the storage system operates more efficiently and/or faster. A destage conflict can occur when a storage track is being destaged from the write cache at the same time that a host is attempting to write data to that storage track. This situation can arise because contemporary storage systems typically destage a storage track immediately after the storage track is written, and the host must then wait until the storage track has been destaged from the write cache before it can write to the storage track again. One technique for reducing the number of destage conflicts is to keep storage tracks in the write cache for a longer period of time before destaging them, so that a storage track can be written multiple times before it is destaged. While this is an effective technique for reducing destage conflicts, it is also desirable that storage tracks not reside in the write cache for so long that the cache becomes completely full. It is further desirable that the cache not experience an undue amount of fluctuation between being nearly full and nearly empty, a fluctuation referred to as oscillation.
Published U.S. patent application US 2010/0037226 A1 discloses grouping multiple cache directory scans for discarding tracks from a cache, thereby minimizing the number of scans performed at any one time to destage tracks. During a scan of the cache, the tracks associated with the cache are examined and appropriate action is taken. The appropriate action may include destaging data from the cache or discarding data from the cache, and may vary according to the type of scan performed on the cache.
Published U.S. patent application US 2003/0225948 A1 discloses a pre-configured, predefined number of scan processes used to destage data from tracks in a cache, where the number of scan processes may also vary over time.
U.S. Patent 7,085,892 discloses delaying a scan request for destaging data from a cache after determining that the time for which the scan request has been queued is less than a time period.
Summary of the Invention
Various embodiments provide systems for managing destage scan times in a cache. One system comprises a cache and a processor coupled to the cache. In one embodiment, the processor is configured to use a first thread to continuously determine a desired scan time for scanning a plurality of storage tracks in the cache, and to use a second thread to continuously control an actual scan time of the plurality of storage tracks in the cache based on the continuously determined desired scan time.
Methods for managing destage scan times in a cache are also provided. One method comprises using a first thread to continuously determine a desired scan time for scanning a plurality of storage tracks in the cache, and using a second thread to continuously control an actual scan time of the plurality of storage tracks in the cache based on the continuously determined desired scan time.
Physical computer storage media (for example, an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing) comprising a computer program product for managing destage scan times in a cache are also provided. One physical computer storage medium comprises computer code for using a first thread to continuously determine a desired scan time for scanning a plurality of storage tracks in the cache, and computer code for using a second thread to continuously control an actual scan time of the plurality of storage tracks in the cache based on the continuously determined desired scan time.
Brief Description of the Drawings
In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
FIG. 1 is a block diagram of one embodiment of a system for managing destage scan times in a cache;
FIG. 2 is a flow diagram of one embodiment of a method for determining a desired scan time for the cache in FIG. 1; and
FIG. 3 is a flow diagram of one embodiment of a method for controlling the actual scan time of the cache in FIG. 1.
Detailed Description
Various embodiments provide systems and methods for managing destage scan times in a cache. Also provided are physical computer storage media (for example, an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing) comprising a computer program product for managing destage scan times in a cache.
Turning now to the figures, FIG. 1 is a block diagram of one embodiment of a system 100 for managing destage scan times in a cache. At least in the illustrated embodiment, system 100 comprises a memory 110 coupled to a cache 120 and a processor 130 via a bus 140 (e.g., a wired and/or wireless bus).
Memory 110 may be any type of storage device known in the art or developed in the future. Examples of memory 110 include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In various embodiments of memory 110, storage tracks are capable of being stored in memory 110. Furthermore, each storage track, when data is written to it, can be destaged to memory 110 from cache 120.
In one embodiment, cache 120 comprises a write cache partitioned into one or more ranks, where each rank includes one or more storage tracks. Cache 120 may be any cache known in the art or developed in the future.
During operation, the storage tracks in each rank are destaged to memory 110 in a "foreground" destaging process after the storage tracks have been written. That is, the foreground destaging process destages storage tracks from the ranks to memory 110 while a host (not shown) is actively writing to various storage tracks in the ranks of cache 120. Ideally, a particular storage track is not being destaged at the time one or more hosts desire to write to it, a situation referred to as a destage conflict, and cache 120 does not experience large fluctuations between being nearly full and nearly empty, which is referred to as oscillation. To reduce the number of destage conflicts and reduce the amount of time that storage tracks reside in cache 120, processor 130 is configured to execute a method for managing destage scan times in cache 120.
In various embodiments, processor 130 comprises or has access to a destage scan time management module 1310, which comprises computer-readable code that, when executed by processor 130, causes processor 130 to perform a method for managing destage scan times in cache 120. When executing the computer-readable code in destage scan time management module 1310, processor 130 is configured to use a first thread to determine a desired scan time for scanning the storage tracks in cache 120 and to use a second thread to control the actual scan time of the storage tracks in cache 120 based on the determined desired scan time. In one embodiment, the first thread and the second thread operate continuously or substantially continuously to determine the desired scan time and to control the actual scan time, respectively.
When using the first thread to determine the desired scan time, processor 130 is configured to monitor cache 120 for destage conflicts continuously, substantially continuously, or at predetermined intervals, and, if cache 120 is experiencing destage conflicts, to increase the desired scan time by a predetermined amount of time. In one embodiment, the predetermined amount of increase is 100 milliseconds for each second in which processor 130 detects destage conflicts in cache 120. In other embodiments, the predetermined amount of increase is an amount of time less than 100 milliseconds (e.g., about 10 milliseconds to about 99 milliseconds) for each second in which processor 130 detects destage conflicts in cache 120. In still other embodiments, the predetermined amount of increase is an amount of time greater than 100 milliseconds (e.g., 101 milliseconds to about 1 second) for each second in which processor 130 detects destage conflicts in cache 120. As such, the predetermined amount of increase may be adjusted as needed for the particular application of system 100.
Processor 130 is also configured to monitor cache 120 for oscillation continuously, substantially continuously, or at predetermined intervals, and, if cache 120 is experiencing oscillation, to decrease the desired scan time by a predetermined amount. In one embodiment, when processor 130 determines that cache 120 is experiencing oscillation, processor 130 is configured to decrease the desired scan time by fifty percent (50%). In another embodiment, when processor 130 determines that cache 120 is experiencing oscillation, processor 130 is configured to decrease the desired scan time by an amount less than fifty percent (e.g., about 10% to about 49%). In yet another embodiment, when processor 130 determines that cache 120 is experiencing oscillation, processor 130 is configured to decrease the desired scan time by an amount greater than fifty percent (e.g., about 51% to about 90%).
In another embodiment, processor 130 is configured to monitor the amount of data stored in cache 120 continuously, substantially continuously, or at predetermined intervals, and, if cache 120 is storing an amount of data greater than a predetermined amount, to decrease the desired scan time by a predetermined amount. In one embodiment, processor 130 is configured to decrease the desired scan time when processor 130 determines that cache 120 is storing an amount of data equal to seventy percent (70%) of the total storage capacity of cache 120. In another embodiment, processor 130 is configured to decrease the desired scan time when processor 130 determines that cache 120 is storing an amount of data greater than seventy percent of the total storage capacity of cache 120 (e.g., about 71% to about 95%). In yet another embodiment, processor 130 is configured to decrease the desired scan time when processor 130 determines that cache 120 is storing an amount of data less than seventy percent of the total storage capacity of cache 120 (e.g., about 10% to about 69%). In each of these embodiments, when processor 130 determines that cache 120 is storing more than the predetermined amount of data, processor 130 is configured to decrease the desired scan time by fifty percent (50%), by an amount less than fifty percent (e.g., about 10% to about 49%), or by an amount greater than fifty percent (e.g., about 51% to about 90%).
In various embodiments, processor 130 is configured to maintain the current desired scan time if the cache is not experiencing destage conflicts and is not experiencing a significant amount of oscillation or storing more than the predetermined amount of data. As is apparent from the above, processor 130 is configured to increase the desired scan time by a small amount of time when processor 130 detects destage conflicts in cache 120, and to decrease the desired scan time by a relatively large amount when processor 130 determines that cache 120 is experiencing oscillation and/or is storing more than the predetermined amount of data. In various embodiments, processor 130 is configured to make the desired scan time available to the second thread for use in controlling the actual scan time of cache 120.
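To make the first thread's behavior concrete, the following is a minimal sketch in Python of the desired-scan-time loop described above. It assumes the 100 ms increase, the 50% decrease, and the 70% fill threshold mentioned in the embodiments; the cache-probing calls (detects_destage_conflict, is_oscillating, fill_ratio), the starting value, and the one-second polling interval are hypothetical placeholders rather than part of the described embodiment.

```python
import threading
import time

class DesiredScanTimeThread(threading.Thread):
    """First thread: continuously adjusts the desired scan time (in seconds)."""

    def __init__(self, cache, initial_desired_scan_time=1.0):
        super().__init__(daemon=True)
        self.cache = cache                        # hypothetical cache object
        self.desired_scan_time = initial_desired_scan_time
        self.lock = threading.Lock()              # shared with the second thread

    def run(self):
        while True:
            with self.lock:
                if self.cache.detects_destage_conflict():
                    # Destage conflicts: raise the desired scan time by 100 ms.
                    self.desired_scan_time += 0.100
                elif self.cache.is_oscillating() or self.cache.fill_ratio() > 0.70:
                    # Oscillation, or the cache holding more than 70% of its
                    # capacity: cut the desired scan time in half so tracks
                    # are destaged sooner.
                    self.desired_scan_time *= 0.5
                # Otherwise the current desired scan time is maintained.
            time.sleep(1.0)                       # poll roughly once per second
```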
When using the second thread to control the actual scan time of cache 120, processor 130 is configured to determine the amount of time that the previous scan of cache 120 performed by processor 130 took. In addition, processor 130 is configured to determine the amount of time spent on the completed portion of the current scan of cache 120, and to add this amount of time to the amount of time of the previous scan of cache 120. For example, if the previous scan of cache 120 took 750 milliseconds and the completed portion of the current scan of cache 120 has taken 260 milliseconds, the total time from the beginning of the previous scan to the current time is 1010 milliseconds, or 1.01 seconds.
Processor 130 is also configured to estimate the amount of time the remaining portion of the current scan of cache 120 will take, and to add this time to the amount of time from the beginning of the previous scan to the current time. In one embodiment, when estimating the amount of time the remaining portion of the current scan of cache 120 needs, processor 130 is configured to subtract the completed portion of the current scan from the estimated total time of the current scan. The estimated total time may be based on estimates such as an estimate of the number of storage tracks remaining to be scanned or an estimate of the remaining percentage of cache 120 to be scanned, each of which may in turn be based on, for example, an estimate of the total number of storage tracks scanned in the previous scan. For example, if processor 130 estimates that the 260 milliseconds in the previous example represent one third of the total time the current scan will take, processor 130 will estimate that the remaining portion of the current scan of cache 120 will take 520 milliseconds (i.e., 260 milliseconds x 2 = 520 milliseconds). In this example, the first factor is 1530 milliseconds, or 1.53 seconds (i.e., 750 ms + 260 ms + 520 ms = 1530 ms).
In another embodiment, when estimating the amount of time the remaining portion of the current scan of cache 120 needs, processor 130 is configured to subtract from the first factor the amount of time from the beginning of the previous scan to the current time (i.e., the 1010 milliseconds in the example above), and to divide this amount by the first factor (i.e., (1530 ms - 1010 ms) / 1530 ms). The result of this calculation is then multiplied by the current desired scan time obtained from the first thread (i.e., desired scan time x [(1530 ms - 1010 ms) / 1530 ms]). This result is then added to the total time from the beginning of the previous scan to the current time to obtain the first factor.
Processor 130 is then configured to divide the first factor by twice the current desired scan time (i.e., current desired scan time x 2) to determine a reference scan factor, which can be used to predict whether the next actual scan time will be too fast. If the reference scan factor is less than one (1), processor 130 will determine that the next actual scan time is likely to be too fast, and processor 130 will increase the next actual scan time for the next scan. In one embodiment, to increase the next actual scan time, processor 130 will reduce the number of destage tasks that scan and/or destage storage tracks in cache 120. Reducing the number of destage tasks in cache 120 has the effect of reducing the rate at which storage tracks are scanned and/or destaged, which increases the amount of time it takes to perform the actual scan of cache 120.
If the reference scan factor is greater than one (1), processor 130 will determine that the next actual scan time is likely to take longer than the desired scan time, and processor 130 will not modify the scan time for the next scan. That is, processor 130 will neither reduce nor increase the number of destage tasks that scan and/or destage storage tracks in cache 120, which causes the amount of time for the next scan to remain substantially the same as the amount of time for the current scan.
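The arithmetic above can be condensed into a short sketch of the second thread's decision step. It is illustrative only, assuming the simple proportional estimate of the remaining scan time and a destage task count as the mechanism for slowing a scan; the function names and the 1000 ms desired scan time used in the worked example are assumptions, since the description does not give a desired scan time for that example.

```python
def reference_scan_factor(prev_scan_ms, elapsed_current_ms,
                          estimated_total_current_ms, desired_scan_ms):
    """Predict whether the next scan will finish too quickly (first embodiment)."""
    # Estimated time still needed for the current scan.
    remaining_ms = estimated_total_current_ms - elapsed_current_ms
    # First factor: previous scan + completed portion + estimated remainder.
    first_factor_ms = prev_scan_ms + elapsed_current_ms + remaining_ms
    # Compare against twice the desired scan time.
    return first_factor_ms / (2 * desired_scan_ms)

def adjust_destage_tasks(factor, destage_task_count):
    """Slow the next scan by removing a destage task when it would be too fast."""
    if factor < 1.0:
        # Next scan predicted to be too fast: fewer tasks, slower scan.
        return max(1, destage_task_count - 1)
    # Factor >= 1: leave the scan rate (and task count) unchanged.
    return destage_task_count

# Worked example from the description: previous scan 750 ms, 260 ms elapsed in
# the current scan, estimated to be one third of the scan (total 780 ms), and
# an assumed desired scan time of 1000 ms (not stated in the description).
factor = reference_scan_factor(750, 260, 780, desired_scan_ms=1000)
# first factor = 750 + 260 + 520 = 1530 ms; 1530 / (2 * 1000) = 0.765 < 1,
# so one destage task would be removed to slow the next scan down.
print(factor, adjust_destage_tasks(factor, destage_task_count=4))
```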
Turning now to FIG. 2, FIG. 2 is a flow diagram of one embodiment of a method 200 for determining a desired scan time for a cache (e.g., cache 120). At least in the illustrated embodiment, method 200 begins by monitoring the cache (block 205) and determining whether the cache is experiencing destage conflicts (block 210).
If the cache is experiencing destage conflicts, method 200 comprises increasing the desired scan time by a predetermined amount of time (block 215). In one embodiment, the predetermined amount of increase is 100 milliseconds each time destage conflicts are detected in the cache. In other embodiments, the predetermined amount of increase is an amount of time less than 100 milliseconds (e.g., about 10 milliseconds to about 99 milliseconds) each time destage conflicts are detected in the cache. In still other embodiments, the predetermined amount of increase is an amount of time greater than 100 milliseconds (e.g., 101 milliseconds to about 1 second) each time destage conflicts are detected in the cache. Method 200 then supplies the increased scan time to method 300 (see below) and returns to monitoring the cache (block 205).
If the cache is not experiencing destage conflicts, method 200 further comprises determining whether the cache is experiencing oscillation (block 220). If the cache is experiencing oscillation, method 200 comprises decreasing the desired scan time by a predetermined amount (block 225). In one embodiment, the desired scan time is decreased by fifty percent (50%) when the cache is experiencing oscillation. In another embodiment, the desired scan time is decreased by an amount less than fifty percent (e.g., about 10% to about 49%) when the cache is experiencing oscillation. In yet another embodiment, the desired scan time is decreased by an amount greater than fifty percent (e.g., about 51% to about 90%) when the cache is experiencing oscillation. Method 200 then supplies the decreased scan time to method 300 (see below) and returns to monitoring the cache (block 205).
If the cache is not experiencing oscillation, method 200 comprises maintaining the current desired scan time (block 230). That is, if the cache is not experiencing destage conflicts and is not experiencing a significant amount of oscillation, the current desired scan time is maintained. Method 200 then returns to monitoring the cache (block 205).
Turning now to FIG. 3, FIG. 3 is a flow diagram of one embodiment of a method 300 for controlling the actual scan time of a cache (e.g., cache 120). At least in the illustrated embodiment, method 300 begins by determining the amount of time that the previous scan of the cache performed by a processor (e.g., processor 130) took (block 305).
Method 300 continues by determining the amount of time spent on the completed portion of the current scan of the cache (block 310), and adding this amount of time to the amount of time the previous scan of the cache took (block 315). Method 300 also comprises estimating the amount of time the remaining portion of the current scan of the cache will take (block 320), and adding this time to the amount of time from the beginning of the previous scan to the current time calculated at block 315 (block 325). In one embodiment, estimating the amount of time the remaining portion of the current scan of the cache needs comprises subtracting the completed portion of the current scan from the estimated total time of the current scan to produce a first factor. The estimated total time may be based on estimates such as an estimate of the number of storage tracks remaining to be scanned in the cache or an estimate of the remaining percentage of the cache to be scanned, each of which may in turn be based on, for example, an estimate of the total number of storage tracks scanned in the previous scan.
In another embodiment, estimating the amount of time the remaining portion of the current scan of the cache needs comprises subtracting from the first factor the amount of time from the beginning of the previous scan to the current time, and dividing this amount by the first factor. The result of this calculation is then multiplied by the current desired scan time obtained from method 200, as discussed above. This result is then added to the total time from the beginning of the previous scan to the current time to obtain the first factor.
Method 300 also comprises dividing the first factor by twice the current desired scan time (i.e., current desired scan time x 2) to determine a reference scan factor (block 330). Method 300 then comprises predicting whether the next actual scan time will be too fast by determining whether the reference scan factor is less than one (1) (block 335). If the reference scan factor is less than one (1), method 300 comprises determining that the next actual scan time is likely to be too fast (block 340) and increasing the next actual scan time for the next scan (block 345). In one embodiment, increasing the next actual scan time comprises reducing the number of destage tasks that scan and/or destage storage tracks in the cache, because reducing the number of destage tasks in the cache has the effect of reducing the rate at which storage tracks are scanned and/or destaged, which increases the amount of time it takes to perform the actual scan of the cache. Method 300 then returns to determining the amount of time that the previous scan of the cache performed by the processor (e.g., processor 130) took (block 305).
If the reference scan factor is greater than or equal to one (1), method 300 comprises determining that the next actual scan time is likely to take longer than the desired scan time (block 350), and not modifying (or declining to modify) the scan time for the next scan (block 355). That is, the number of destage tasks that scan and/or destage storage tracks in the cache is neither increased nor reduced, which causes the amount of time for the next scan to remain substantially the same as the amount of time for the current scan. Method 300 then returns to determining the amount of time that the previous scan of the cache performed by the processor (e.g., processor 130) took (block 305).
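For comparison, the sketch below walks the same worked numbers through the blocks of FIG. 3 using the second estimation embodiment (blocks 320-325), in which the remaining fraction of the first factor is scaled by the desired scan time. It is again only an illustration; the helper name and the assumed 1000 ms desired scan time are not taken from the description.

```python
def reference_factor_alt(prev_scan_ms, elapsed_current_ms,
                         estimated_total_current_ms, desired_scan_ms):
    """Blocks 305-335 of FIG. 3, using the second remainder estimate."""
    # Block 315: time elapsed since the previous scan started.
    elapsed_since_prev_start = prev_scan_ms + elapsed_current_ms
    # First embodiment's first factor: previous scan + completed portion
    # + estimated remainder of the current scan.
    remaining_ms = estimated_total_current_ms - elapsed_current_ms
    first_factor = prev_scan_ms + elapsed_current_ms + remaining_ms
    # Second embodiment (block 325): scale the remaining fraction of the
    # first factor by the desired scan time, then add the elapsed time.
    remaining_fraction = (first_factor - elapsed_since_prev_start) / first_factor
    reference_ms = elapsed_since_prev_start + desired_scan_ms * remaining_fraction
    # Block 330: compare against twice the desired scan time.
    return reference_ms / (2 * desired_scan_ms)

# Worked numbers from the description, with an assumed 1000 ms desired scan time:
# elapsed = 750 + 260 = 1010 ms, first factor = 1530 ms,
# remaining fraction = (1530 - 1010) / 1530, roughly 0.34,
# reference is roughly 1010 + 1000 * 0.34 = 1350 ms, factor roughly 0.68 < 1,
# so block 345 would slow the next scan by removing a destage task.
print(reference_factor_alt(750, 260, 780, 1000))
```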
While at least one exemplary embodiment has been presented in the foregoing detailed description of the invention, it should be appreciated that a vast number of variations exist.
As will be appreciated by one of ordinary skill in the art, aspects of the present invention may be embodied as a system, method, or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects, all of which may generally be referred to herein as a "circuit," "module," or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.
Any combination of one or more computer-readable media may be utilized. A computer-readable medium may be a computer-readable signal medium or a physical computer-readable storage medium. A physical computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, crystal, polymer, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Examples of a physical computer-readable storage medium include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, RAM, ROM, EPROM, flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program or data for use by or in connection with an instruction execution system, apparatus, or device.
Computer code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the foregoing. Computer code for carrying out operations for aspects of the present invention may be written in any static language, such as the "C" programming language or other similar programming languages. The computer code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network or communication system, including, but not limited to, a local area network (LAN), a wide area network (WAN), or a converged network, or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
These computer program instructions may also be stored in a computer-readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the above figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.

Claims (12)

1. A system (100) for managing cache destage scan times, comprising:
a cache (120); and
a processor (130) coupled (140) to the cache, wherein the processor is configured to:
use a first thread (200) to continuously determine a desired scan time for scanning a plurality of storage tracks in the cache, and
use a second thread (300) to continuously control an actual scan time of the plurality of storage tracks in the cache based on the continuously determined desired scan time;
wherein, when continuously determining (200) the desired scan time, the processor (130) is configured to:
continuously monitor (205) the cache (120) for destage conflicts (210), and
if the cache is experiencing destage conflicts, increase (215) the desired scan time by a predetermined amount of time;
continuously monitor (205) the cache for oscillation (220); and
if the cache is experiencing oscillation, decrease (225) the desired scan time by a predetermined percentage.
2. The system (100) of claim 1, wherein the processor (130) is configured to:
if the cache (120) is experiencing destage conflicts, increase (215) the desired scan time by 100 ms;
if the cache is experiencing oscillation, decrease (225) the desired scan time by 50%; and
if the cache is not experiencing destage conflicts and is not experiencing oscillation, maintain (230) the current desired scan time.
3. The system (100) of claim 1, wherein, when continuously controlling the actual scan time, the processor (130) is configured to:
determine (305) a previous scan time;
determine (310) a completed portion of a current scan;
estimate (320) a remaining portion of the current scan that is needed;
sum (325) the previous scan time, the completed portion of the current scan, and the estimated remaining portion of the current scan to produce a reference time;
divide (330) the reference time by twice the desired scan time to produce a reference scan factor; and
modify (345) a next scan time for a next scan based on the reference scan factor.
4. The system (100) of claim 3, wherein, when estimating (320) the remaining portion of the current scan that is needed, the processor (130) is configured to:
subtract the completed portion of the current scan from an estimated total time of the current scan to produce a first factor;
divide the first factor by the estimated total time of the current scan to produce a second factor; and
multiply the second factor by the desired scan time.
5. The system (100) of claim 4, wherein, when modifying (345) the next scan time for the next scan based on the reference scan factor, the processor (130) is further configured to:
if (335) the reference scan factor is less than one, increase (345) the next scan time for the next scan; and
if the reference scan factor is greater than or equal to one, maintain (355) the number of destage tasks in the cache.
6. The system (100) of claim 5, wherein, when increasing (345) the next scan time for the next scan, the processor (130) is configured to reduce the number of destage tasks in the cache.
7. A method for managing destage scan times in a cache (120), the method comprising:
using a first thread (200) to continuously determine a desired scan time for scanning a plurality of storage tracks in the cache; and
using a second thread (300) to continuously control an actual scan time of the plurality of storage tracks in the cache based on the continuously determined desired scan time;
wherein continuously determining (200) the desired scan time comprises:
continuously monitoring (205) the cache (120) for destage conflicts (210), and
if the cache is experiencing destage conflicts, increasing (215) the desired scan time by a predetermined amount of time;
continuously monitoring (205) the cache for oscillation (220); and
if the cache is experiencing oscillation, decreasing (225) the desired scan time by a predetermined percentage.
8. The method of claim 7, further comprising:
if the cache (120) is experiencing destage conflicts, increasing (215) the desired scan time by 100 ms;
if the cache is experiencing oscillation, decreasing (225) the desired scan time by 50%; and
if the cache is not experiencing destage conflicts and is not experiencing oscillation, maintaining (230) the current desired scan time.
9. The method of claim 7, wherein continuously controlling the actual scan time comprises:
determining (305) a previous scan time;
determining (310) a completed portion of a current scan;
estimating (320) a remaining portion of the current scan that is needed;
summing (325) the previous scan time, the completed portion of the current scan, and the estimated remaining portion of the current scan to produce a reference time;
dividing (330) the reference time by twice the desired scan time to produce a reference scan factor; and
modifying (345) a next scan time for a next scan based on the reference scan factor.
10. The method of claim 9, wherein estimating the remaining portion of the current scan that is needed comprises:
subtracting the completed portion of the current scan from an estimated total time of the current scan to produce a first factor;
dividing the first factor by the estimated total time of the current scan to produce a second factor; and
multiplying the second factor by the desired scan time.
11. The method of claim 10, wherein modifying (345) the next scan time for the next scan based on the reference scan factor comprises:
if (335) the reference scan factor is less than one, increasing (345) the next scan time for the next scan; and
if the reference scan factor is greater than or equal to one, maintaining (355) the number of destage tasks in the cache.
12. The method of claim 11, wherein increasing (345) the next scan time for the next scan comprises reducing the number of destage tasks in the cache.
CN201180059215.5A 2010-12-10 2011-11-29 System and method for managing cache destage scan times Active CN103250142B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US12/965,131 2010-12-10
US12/965,131 US8639888B2 (en) 2010-12-10 2010-12-10 Systems and methods for managing cache destage scan times
PCT/EP2011/071263 WO2012076362A1 (en) 2010-12-10 2011-11-29 Managing cache destage scan times

Publications (2)

Publication Number Publication Date
CN103250142A CN103250142A (en) 2013-08-14
CN103250142B true CN103250142B (en) 2015-11-25

Family

ID=45047802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201180059215.5A Active CN103250142B (en) 2010-12-10 2011-11-29 The system and method for a kind of management of cache destage sweep time

Country Status (6)

Country Link
US (2) US8639888B2 (en)
JP (1) JP5875056B2 (en)
CN (1) CN103250142B (en)
DE (1) DE112011104314B4 (en)
GB (1) GB2499968B (en)
WO (1) WO2012076362A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8719504B2 (en) * 2012-09-14 2014-05-06 International Business Machines Corporation Efficient processing of cache segment waiters

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7085892B2 (en) * 2003-06-17 2006-08-01 International Business Machines Corporation Method, system, and program for removing data in cache subject to a relationship
CN101563677A (en) * 2006-12-20 2009-10-21 国际商业机器公司 System, method and computer program product for managing data using a write-back cache unit

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0219942A (en) 1988-07-08 1990-01-23 Canon Inc Cache subsystem
JPH0594373A (en) * 1991-09-30 1993-04-16 Nec Corp Data flash interval control system
JPH05303528A (en) * 1992-04-27 1993-11-16 Oki Electric Ind Co Ltd Write-back disk cache device
DE69616148T2 (en) 1995-05-22 2002-03-07 Lsi Logic Corp Method and device for data transmission
US6412045B1 (en) * 1995-05-23 2002-06-25 Lsi Logic Corporation Method for transferring data from a host computer to a storage media using selectable caching strategies
US7096320B2 (en) * 2001-10-31 2006-08-22 Hewlett-Packard Development Company, Lp. Computer performance improvement by adjusting a time used for preemptive eviction of cache entries
US6948009B2 (en) 2002-06-04 2005-09-20 International Business Machines Corporation Method, system, and article of manufacture for increasing processor utilization
US7191207B2 (en) 2003-06-11 2007-03-13 International Business Machines Corporation Apparatus and method to dynamically allocate bandwidth in a data storage and retrieval system
US7356651B2 (en) 2004-01-30 2008-04-08 Piurata Technologies, Llc Data-aware cache state machine
US7805574B2 (en) 2005-02-09 2010-09-28 International Business Machines Corporation Method and cache system with soft I-MRU member protection scheme during make MRU allocation
WO2007068122A1 (en) 2005-12-16 2007-06-21 Univ Western Ontario System and method for cache management
US7523271B2 (en) 2006-01-03 2009-04-21 International Business Machines Corporation Apparatus, system, and method for regulating the number of write requests in a fixed-size cache
US7577787B1 (en) 2006-12-15 2009-08-18 Emc Corporation Methods and systems for scheduling write destages based on a target
US7721043B2 (en) 2007-01-08 2010-05-18 International Business Machines Corporation Managing write requests in cache directed to different storage groups
US7793049B2 (en) 2007-10-30 2010-09-07 International Business Machines Corporation Mechanism for data cache replacement based on region policies
US9430395B2 (en) 2008-08-11 2016-08-30 International Business Machines Corporation Grouping and dispatching scans in cache


Also Published As

Publication number Publication date
US20120151151A1 (en) 2012-06-14
GB2499968A (en) 2013-09-04
US8639888B2 (en) 2014-01-28
JP5875056B2 (en) 2016-03-02
US20120254539A1 (en) 2012-10-04
CN103250142A (en) 2013-08-14
US8589623B2 (en) 2013-11-19
GB201312203D0 (en) 2013-08-21
DE112011104314B4 (en) 2022-02-24
WO2012076362A1 (en) 2012-06-14
JP2014500553A (en) 2014-01-09
DE112011104314T5 (en) 2013-09-26
GB2499968B (en) 2014-01-29

Similar Documents

Publication Publication Date Title
US10659552B2 (en) Device and method for monitoring server health
CN105512251B (en) A kind of page cache method and device
CN102971717B (en) Memory access table is preserved and recovery system and method
NL2011914B1 (en) Mobile device and method of managing data using swap thereof.
CN101510167B (en) A kind of method of plug-in component operation, Apparatus and system
CN104584650A (en) Managing power consumption in mobile devices
CN105554544B (en) A kind of data processing method and system
CN104808952A (en) Data caching method and device
US9875517B2 (en) Data processing method, data processing apparatus, and storage medium
CN110888704A (en) High-concurrency interface processing method, device, equipment and storage medium
CN103250142B (en) The system and method for a kind of management of cache destage sweep time
CN103119567B (en) For the system and method in managing virtual tape pool territory
US20160266808A1 (en) Information processing device, information processing method, and recording medium
CN104067214A (en) Increased destaging efficiency
US9959839B2 (en) Predictive screen display method and apparatus
US20200278810A1 (en) Method for Mitigating Writing-Performance Variation and Preventing IO Blocking in a Solid-State Drive
CN111414227A (en) Method and device for reading mirror image data and computing equipment
CN109376878A (en) A kind of acquisition method and relevant apparatus of SOE event
US9798664B2 (en) Method and apparatus for correcting cache profiling information in multi-pass simulator
JP5617586B2 (en) Information processing program, relay device, and relay management device
CN105721531B (en) message synchronization method and device
JP7048890B2 (en) Information processing equipment, information collection program and information collection method
CN112925474A (en) Terminal device control method, storage medium and terminal device
CN116701793A (en) Page rendering method, computer device and computer readable storage medium
CN112835598A (en) Automobile ECU (electronic control Unit) flashing method and system and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant