CN106557430A - Cached-data flushing method and device - Google Patents

Cached-data flushing method and device

Info

Publication number
CN106557430A
CN106557430A (application CN201510601253.9A)
Authority
CN
China
Prior art keywords
flush
LUN
cached data
unit
priority
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510601253.9A
Other languages
Chinese (zh)
Other versions
CN106557430B (en)
Inventor
李关强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Huawei Technology Co Ltd
Original Assignee
Chengdu Huawei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Huawei Technology Co Ltd
Priority claimed from CN201510601253.9A
Publication of CN106557430A
Application granted
Publication of CN106557430B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

An embodiment of the present invention discloses a cached-data flushing method and device. The method includes: when cached data exists, obtaining data flush parameters of at least two logical unit numbers (LUNs) included in a disk; calculating a priority number for each LUN according to its data flush parameters; determining a flush page count for each LUN according to the priority numbers of the LUNs and a concurrent flush page count, where the concurrent flush page count is the number of pages written to the disk per flush, and the flush page count of a LUN is the number of pages written to that LUN per flush; and flushing all or part of the cached data to the at least two LUNs according to the flush page count of each LUN. Implementing the embodiment allows the concurrent flush page count to be distributed reasonably among the multiple LUNs in the disk, improving the flexibility of flushing.

Description

Cached-data flushing method and device
Technical field
The present invention relates to the field of computer technology, and in particular to a cached-data flushing method and device.
Background technology
A disk can include multiple logical unit numbers (LUNs). When a memory needs to write cached data to the disk, it flushes the cached data into the multiple LUNs included in the disk according to a flushing policy. Because the concurrent flush page count, that is, the number of pages written to the disk per flush, is fixed for each flush by the memory, while the performance of the multiple LUNs may differ, how to distribute the concurrent flush page count reasonably among the multiple LUNs is a problem that urgently needs to be solved.
Summary of the invention
Embodiments of the present invention disclose a cached-data flushing method and device for distributing the concurrent flush page count reasonably among the multiple LUNs in a disk, so as to improve the flexibility of flushing.
A first aspect of the embodiments of the present invention discloses a cached-data flushing method, including:
when cached data exists, obtaining data flush parameters of at least two LUNs included in a disk;
calculating a priority number for each of the LUNs according to its data flush parameters;
determining a flush page count for each LUN according to the priority numbers of the LUNs and a concurrent flush page count, where the concurrent flush page count is the number of pages written to the disk per flush, and the flush page count of a LUN is the number of pages written to that LUN per flush;
flushing all or part of the cached data to the at least two LUNs according to the flush page count of each LUN.
With reference to the first aspect, in a first possible implementation of the first aspect, the data flush parameters of each LUN include the capacity of cached data written to the LUN within a preset time and the average time the LUN needs per flush within the preset time;
the calculating a priority number for each LUN according to its data flush parameters includes:
dividing the capacity of cached data written to each LUN within the preset time by the average time the LUN needs per flush within the preset time, to obtain the priority number of the LUN.
With reference to the first aspect, in a second possible implementation of the first aspect, the determining the flush page count of each LUN according to the priority numbers of the LUNs and the concurrent flush page count includes:
adding the priority numbers of all the LUNs, to obtain a total priority number;
dividing the priority number of each LUN by the total priority number, to obtain a flush allocation ratio of the LUN;
multiplying the flush allocation ratio of each LUN by the concurrent flush page count, to obtain the flush page count of the LUN.
With reference to the first aspect, in a third possible implementation of the first aspect, the method further includes:
obtaining the capacity of the cached data;
obtaining the concurrent flush page count corresponding to the capacity of the cached data according to a preset correspondence between concurrent flush page counts and cached-data capacity intervals.
With reference to the first aspect, in a fourth possible implementation of the first aspect, after the calculating the priority number of each LUN according to its data flush parameters, the method further includes:
judging whether the absolute value of the difference between the newly calculated priority number of a target LUN and its previously calculated priority number is greater than a preset value, and if so, performing the step of determining the flush page count of each LUN according to the priority numbers of the LUNs and the concurrent flush page count;
where the target LUN is any one of the at least two LUNs.
A second aspect of the embodiments of the present invention discloses a cached-data flushing device, including:
a first acquiring unit, configured to obtain, when cached data exists, data flush parameters of at least two LUNs included in a disk;
a calculating unit, configured to calculate a priority number for each LUN according to the data flush parameters obtained by the first acquiring unit;
a determining unit, configured to determine a flush page count for each LUN according to the priority numbers calculated by the calculating unit and a concurrent flush page count, where the concurrent flush page count is the number of pages written to the disk per flush, and the flush page count of a LUN is the number of pages written to that LUN per flush;
a flushing unit, configured to flush all or part of the cached data to the at least two LUNs according to the flush page count of each LUN determined by the determining unit.
With reference to the second aspect, in a first possible implementation of the second aspect, the data flush parameters of each LUN include the capacity of cached data written to the LUN within a preset time and the average time the LUN needs per flush within the preset time;
the calculating unit is specifically configured to divide the capacity of cached data written to each LUN within the preset time by the average time the LUN needs per flush within the preset time, to obtain the priority number of the LUN.
With reference to the second aspect, in a second possible implementation of the second aspect, the determining unit includes:
an addition subunit, configured to add the priority numbers of the LUNs calculated by the calculating unit, to obtain a total priority number;
a division subunit, configured to divide the priority number of each LUN by the total priority number obtained by the addition subunit, to obtain a flush allocation ratio of the LUN;
a multiplication subunit, configured to multiply the flush allocation ratio of each LUN obtained by the division subunit by the concurrent flush page count, to obtain the flush page count of the LUN.
With reference to the second aspect, in a third possible implementation of the second aspect, the device further includes:
a second acquiring unit, configured to obtain the capacity of the cached data;
a third acquiring unit, configured to obtain, according to a preset correspondence between concurrent flush page counts and cached-data capacity intervals, the concurrent flush page count corresponding to the capacity obtained by the second acquiring unit.
With reference to the second aspect, in a fourth possible implementation of the second aspect, the device further includes:
a judging unit, configured to judge whether the absolute value of the difference between the priority number of a target LUN calculated by the calculating unit and its previously calculated priority number is greater than a preset value, and, when the judgment result is yes, trigger the determining unit to perform the step of determining the flush page count of each LUN according to the priority numbers of the LUNs and the concurrent flush page count;
where the target LUN is any one of the at least two LUNs.
In the embodiments of the present invention, when cached data exists, a priority number can be calculated for each LUN according to its data flush parameters, a flush page count can be determined for each LUN according to the priority numbers and the concurrent flush page count, and all or part of the cached data can then be flushed according to the per-LUN flush page counts. The concurrent flush page count can therefore be distributed reasonably among the multiple LUNs in the disk, improving the flexibility of flushing.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and persons of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a structural diagram of a network architecture for cached-data flushing disclosed in an embodiment of the present invention;
Fig. 2 is a flowchart of a cached-data flushing method disclosed in an embodiment of the present invention;
Fig. 3 is a flowchart of another cached-data flushing method disclosed in an embodiment of the present invention;
Fig. 4 is a structural diagram of a cached-data flushing device disclosed in an embodiment of the present invention;
Fig. 5 is a structural diagram of another cached-data flushing device disclosed in an embodiment of the present invention;
Fig. 6 is a structural diagram of yet another cached-data flushing device disclosed in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiments of the present invention disclose a cached-data flushing method and device for distributing the concurrent flush page count reasonably among the multiple LUNs in a disk, so as to improve the flexibility of flushing. Each is described in detail below.
To better understand the cached-data flushing method and device disclosed in the embodiments of the present invention, the network architecture used by the embodiments is described first. Referring to Fig. 1, Fig. 1 is a structural diagram of a network architecture for cached-data flushing disclosed in an embodiment of the present invention. As shown in Fig. 1, the network architecture for cached-data flushing may include a server 101, a cache (buffer) device 102 and a disk 103, where:
the server 101 is configured to communicate with the buffer 102 and write stored data into the buffer 102;
the buffer 102 is configured to communicate with the disk 103 and flush the cached data written by the server 101 into the disk 103;
the disk 103 includes at least two LUNs, and the data flushed by the buffer 102 to the disk 103 is flushed into these LUNs respectively.
Based on the network architecture for cached-data flushing shown in Fig. 1, refer to Fig. 2, which is a flowchart of a cached-data flushing method disclosed in an embodiment of the present invention. The method is described from the perspective of the buffer. As shown in Fig. 2, the cached-data flushing method may include the following steps.
S201: when cached data exists, obtain data flush parameters of at least two LUNs included in the disk.
In this embodiment, when cached data exists, the data flush parameters of the at least two LUNs included in the disk may be obtained continuously or periodically. The data flush parameters of a LUN include the capacity of cached data written to the LUN within a preset time and the average time the LUN needs per flush within the preset time, so the service load of the LUN and the time its flushes consume can be learned from its data flush parameters.
S202: calculate a priority number for each LUN according to its data flush parameters.
In this embodiment, after the data flush parameters of the at least two LUNs included in the disk are obtained, a priority number is calculated for each LUN according to its data flush parameters: the capacity of cached data written to the LUN within the preset time is divided by the average time the LUN needs per flush within the preset time, to obtain the priority number of the LUN. It can be seen that the shorter the average time a LUN needs per flush within the preset time, the higher its priority number; and the greater the capacity of cached data written to a LUN within the preset time, the higher its priority number.
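The priority calculation in this step can be sketched as follows. This is a minimal illustration, not the patent's implementation: the names `FlushStats` and `priority_number` and the units (bytes, seconds) are assumptions; the patent only specifies the ratio of capacity written to average flush time.

```python
from dataclasses import dataclass

@dataclass
class FlushStats:
    """Per-LUN data flush parameters gathered over the preset time window."""
    bytes_written: int        # capacity of cached data written to the LUN
    avg_flush_seconds: float  # average time one flush to the LUN took

def priority_number(stats: FlushStats) -> float:
    """Priority number = capacity written / average flush time.

    A LUN that absorbed more data, or that flushes faster,
    receives a higher priority number."""
    return stats.bytes_written / stats.avg_flush_seconds

# A LUN with a heavy load and fast flushes outranks a light, slow one.
fast_lun = FlushStats(bytes_written=800, avg_flush_seconds=2.0)
slow_lun = FlushStats(bytes_written=200, avg_flush_seconds=4.0)
print(priority_number(fast_lun))  # 400.0
print(priority_number(slow_lun))  # 50.0
```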
S203: determine the flush page count of each LUN according to the priority numbers of the LUNs and the concurrent flush page count.
In this embodiment, a higher priority number indicates that a LUN processes data more efficiently, so more data can be allocated to LUNs with high processing efficiency and less data to LUNs with low processing efficiency, thereby distributing the cached data to be flushed to the disk reasonably among the at least two LUNs. Therefore, after the priority number of each LUN is calculated from its data flush parameters, the flush page count of each LUN is determined from the priority numbers and the concurrent flush page count: the priority numbers of all the LUNs are added to obtain a total priority number; the priority number of each LUN is divided by the total priority number to obtain its flush allocation ratio; and the flush allocation ratio of each LUN is multiplied by the concurrent flush page count to obtain its flush page count. The concurrent flush page count is the number of pages written to the disk per flush, and the flush page count of a LUN is the number of pages written to that LUN per flush.
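The proportional split described above can be sketched as follows, assuming fractional pages are simply truncated; the patent does not say how rounding is handled, so the `int(...)` step is an assumption.

```python
def allocate_flush_pages(priorities, concurrent_pages):
    """Split the concurrent flush page count across LUNs in proportion
    to their priority numbers: ratio = priority / total priority,
    pages = ratio * concurrent flush page count."""
    total = sum(priorities)
    return [int(concurrent_pages * p / total) for p in priorities]

# Three LUNs with priority numbers 400, 50 and 50 share 100 pages per flush.
print(allocate_flush_pages([400.0, 50.0, 50.0], 100))  # [80, 10, 10]
```

Truncation can leave a few pages unassigned when the ratios are not exact; a real implementation would need a policy for the remainder.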
S204: flush all or part of the cached data to the at least two LUNs according to the flush page count of each LUN.
In this embodiment, after the flush page count of each LUN is determined from the priority numbers of the LUNs and the concurrent flush page count, all or part of the cached data is flushed to the at least two LUNs according to the flush page count of each LUN.
In the cached-data flushing method described in Fig. 2, when cached data exists, a priority number can be calculated for each LUN according to its data flush parameters, the flush page count of each LUN can be determined from the priority numbers and the concurrent flush page count, and all or part of the cached data can then be flushed to the LUNs according to the per-LUN flush page counts. The concurrent flush page count can therefore be distributed reasonably among the multiple LUNs in the disk, improving the flexibility of flushing.
Based on the network architecture for cached-data flushing shown in Fig. 1, refer to Fig. 3, which is a flowchart of another cached-data flushing method disclosed in an embodiment of the present invention. The method is described from the perspective of the buffer. As shown in Fig. 3, the cached-data flushing method may include the following steps.
S301: when cached data exists, obtain data flush parameters of at least two LUNs included in the disk.
In this embodiment, when cached data exists, the data flush parameters of the at least two LUNs included in the disk may be obtained continuously or periodically. The data flush parameters of a LUN include the capacity of cached data written to the LUN within a preset time and the average time the LUN needs per flush within the preset time, so the service load of the LUN and the time its flushes consume can be learned from its data flush parameters.
S302: calculate a priority number for each LUN according to its data flush parameters.
In this embodiment, after the data flush parameters of the at least two LUNs included in the disk are obtained, a priority number is calculated for each LUN according to its data flush parameters: the capacity of cached data written to the LUN within the preset time is divided by the average time the LUN needs per flush within the preset time, to obtain the priority number of the LUN. It can be seen that the shorter the average time a LUN needs per flush within the preset time, the higher its priority number; and the greater the capacity of cached data written to a LUN within the preset time, the higher its priority number.
S303: obtain the capacity of the cached data, and obtain the concurrent flush page count corresponding to that capacity according to the preset correspondence between concurrent flush page counts and cached-data capacity intervals.
In this embodiment, the concurrent flush page count is the number of pages written to the disk per flush, and it can be tied to the capacity of the cached data. For example, when the capacity of the cached data is large, the concurrent flush page count can be increased so that the cached data is flushed to the disk faster; when the capacity is small, the concurrent flush page count can be reduced to lighten the load on the disk.
In this embodiment, the cached-data capacity can be divided into multiple intervals in advance, with one concurrent flush page count assigned to each interval. The capacity of the cached data can then be obtained, and the concurrent flush page count corresponding to the current capacity looked up in the preset correspondence between concurrent flush page counts and capacity intervals. Step S303 may be performed in parallel or in series with steps S301 and S302, between step S302 and step S304, in parallel with step S304, after the judgment result of step S304 is yes and before step S305, or not at all; this embodiment imposes no limitation.
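The interval lookup can be sketched as follows. The capacity thresholds and page counts here are entirely hypothetical; the patent leaves the actual intervals and their assigned concurrent flush page counts to the implementation.

```python
import bisect

# Hypothetical capacity interval boundaries (bytes) and the concurrent
# flush page count assigned to each interval (one more entry than bounds).
CAPACITY_BOUNDS = [1 << 20, 16 << 20, 256 << 20]   # 1 MiB, 16 MiB, 256 MiB
CONCURRENT_PAGES = [8, 32, 128, 512]

def concurrent_flush_pages(cached_bytes: int) -> int:
    """Map the current cached-data capacity to the concurrent flush
    page count of the interval it falls into."""
    return CONCURRENT_PAGES[bisect.bisect_right(CAPACITY_BOUNDS, cached_bytes)]

print(concurrent_flush_pages(4 << 20))  # 32  (4 MiB falls in the 1-16 MiB interval)
print(concurrent_flush_pages(1 << 30))  # 512 (1 GiB is above 256 MiB)
```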
S304: judge whether the absolute value of the difference between the newly calculated priority number of a target LUN and its previously calculated priority number is greater than a preset value; if so, perform step S305; if not, end.
In this embodiment, it can first be judged whether the absolute value of the difference between the newly calculated priority number of the target LUN and its previously calculated priority number is greater than the preset value. If it is, the service load of some LUN among the at least two LUNs and/or the time its flushes consume has changed considerably, and step S305 is performed; if it is not, no LUN's service load or flush time has changed considerably, and the procedure ends. The target LUN is any one of the at least two LUNs.
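The threshold test above amounts to a one-line comparison; this sketch names it `needs_reallocation` for illustration (the name and argument order are not from the patent).

```python
def needs_reallocation(new_priority: float, old_priority: float,
                       threshold: float) -> bool:
    """Step S304: only redistribute flush pages when the target LUN's
    priority number has moved by more than the preset value."""
    return abs(new_priority - old_priority) > threshold

print(needs_reallocation(400.0, 380.0, 10.0))  # True: big shift, recompute
print(needs_reallocation(400.0, 395.0, 10.0))  # False: stable, keep current split
```

Gating the recomputation this way avoids redoing the S305 arithmetic when the load on the LUNs is stable.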
S305: determine the flush page count of each LUN according to the priority numbers of the LUNs and the concurrent flush page count.
In this embodiment, a higher priority number indicates that a LUN processes data more efficiently, so more data can be allocated to LUNs with high processing efficiency and less data to LUNs with low processing efficiency, thereby distributing the cached data to be flushed to the disk reasonably among the at least two LUNs. Therefore, after the priority number of each LUN is calculated from its data flush parameters, or after the absolute value of the difference between the newly and previously calculated priority numbers of the target LUN is found to be greater than the preset value, or after the concurrent flush page count corresponding to the cached-data capacity is obtained from the preset correspondence between concurrent flush page counts and capacity intervals, the flush page count of each LUN is determined from the priority numbers and the concurrent flush page count: the priority numbers of all the LUNs are added to obtain a total priority number; the priority number of each LUN is divided by the total priority number to obtain its flush allocation ratio; and the flush allocation ratio of each LUN is multiplied by the concurrent flush page count to obtain its flush page count. The flush page count of a LUN is the number of pages written to that LUN per flush.
S306: flush all or part of the cached data to the at least two LUNs according to the flush page count of each LUN.
In this embodiment, after the flush page count of each LUN is determined from the priority numbers of the LUNs and the concurrent flush page count, all or part of the cached data is flushed to the at least two LUNs according to the flush page count of each LUN.
In the cached-data flushing method described in Fig. 3, when cached data exists, a priority number can be calculated for each LUN according to its data flush parameters, the flush page count of each LUN can be determined from the priority numbers and the concurrent flush page count, and all or part of the cached data can then be flushed to the LUNs according to the per-LUN flush page counts. The concurrent flush page count can therefore be distributed reasonably among the multiple LUNs in the disk, improving the flexibility of flushing.
Based on the network architecture for cached-data flushing shown in Fig. 1, refer to Fig. 4, which is a structural diagram of a cached-data flushing device disclosed in an embodiment of the present invention. The cached-data flushing device may be a buffer. As shown in Fig. 4, the cached-data flushing device 400 may include:
an acquiring unit 401, configured to obtain, when cached data exists, data flush parameters of at least two LUNs included in a disk;
a calculating unit 402, configured to calculate a priority number for each LUN according to the data flush parameters obtained by the acquiring unit 401;
a determining unit 403, configured to determine a flush page count for each LUN according to the priority numbers calculated by the calculating unit 402 and a concurrent flush page count, where the concurrent flush page count is the number of pages written to the disk per flush, and the flush page count of a LUN is the number of pages written to that LUN per flush;
a flushing unit 404, configured to flush all or part of the cached data to the at least two LUNs according to the flush page count of each LUN determined by the determining unit 403.
In the cached-data flushing device described in Fig. 4, when cached data exists, a priority number can be calculated for each LUN according to its data flush parameters, the flush page count of each LUN can be determined from the priority numbers and the concurrent flush page count, and all or part of the cached data can then be flushed to the LUNs according to the per-LUN flush page counts. The concurrent flush page count can therefore be distributed reasonably among the multiple LUNs in the disk, improving the flexibility of flushing.
Based on the network architecture for cached-data flushing shown in Fig. 1, refer to Fig. 5, which is a structural diagram of another cached-data flushing device disclosed in an embodiment of the present invention. The cached-data flushing device may be a buffer. As shown in Fig. 5, the cached-data flushing device 500 may include:
a first acquiring unit 501, configured to obtain, when cached data exists, data flush parameters of at least two LUNs included in a disk;
a calculating unit 502, configured to calculate a priority number for each LUN according to the data flush parameters obtained by the first acquiring unit 501;
a determining unit 503, configured to determine a flush page count for each LUN according to the priority numbers calculated by the calculating unit 502 and a concurrent flush page count, where the concurrent flush page count is the number of pages written to the disk per flush, and the flush page count of a LUN is the number of pages written to that LUN per flush;
a flushing unit 504, configured to flush all or part of the cached data to the at least two LUNs according to the flush page count of each LUN determined by the determining unit 503.
In a possible implementation, the data flush parameters of each LUN include the capacity of cached data written to the LUN within a preset time and the average time the LUN needs per flush within the preset time;
the calculating unit 502 is specifically configured to divide the capacity of cached data written to each LUN within the preset time by the average time the LUN needs per flush within the preset time, to obtain the priority number of the LUN.
In a possible implementation, the determining unit 503 may include:
an addition subunit 5031, configured to add the priority numbers of the LUNs calculated by the calculating unit 502, to obtain a total priority number;
a division subunit 5032, configured to divide the priority number of each LUN by the total priority number obtained by the addition subunit 5031, to obtain a flush allocation ratio of the LUN;
a multiplication subunit 5033, configured to multiply the flush allocation ratio of each LUN obtained by the division subunit 5032 by the concurrent flush page count, to obtain the flush page count of the LUN.
In a possible implementation, the cached-data flushing device 500 may further include:
a second acquiring unit 505, configured to obtain the capacity of the cached data;
a third acquiring unit 506, configured to obtain, according to a preset correspondence between concurrent flush page counts and cached-data capacity intervals, the concurrent flush page count corresponding to the capacity obtained by the second acquiring unit 505.
Specifically, the determining unit 503 determines the flush page count of each LUN according to the priority numbers calculated by the calculating unit 502 and the concurrent flush page count obtained by the third acquiring unit 506.
In a possible implementation, the cached-data flushing device 500 may further include:
a judging unit 507, configured to judge whether the absolute value of the difference between the priority number of a target LUN calculated by the calculating unit 502 and its previously calculated priority number is greater than a preset value, and, when the judgment result is yes, trigger the determining unit 503 to perform the step of determining the flush page count of each LUN according to the priority numbers of the LUNs and the concurrent flush page count;
where the target LUN is any one of the at least two LUNs.
Specifically, the calculating unit 502 calculates the priority number of each LUN according to the data flush parameters obtained by the first acquiring unit 501, and triggers the judging unit 507 to judge whether the absolute value of the difference between the newly calculated priority number of the target LUN and its previously calculated priority number is greater than the preset value.
In the cached-data flushing device described in Fig. 5, when cached data exists, a priority number can be calculated for each LUN according to its data flush parameters, the flush page count of each LUN can be determined from the priority numbers and the concurrent flush page count, and all or part of the cached data can then be flushed to the LUNs according to the per-LUN flush page counts. The concurrent flush page count can therefore be distributed reasonably among the multiple LUNs in the disk, improving the flexibility of flushing.
Based on the network architecture for cached data flushing shown in Fig. 1, refer to Fig. 6, which is a structural diagram of another cached data flushing device disclosed in an embodiment of the present invention. The cached data flushing device may be a storage device. As shown in Fig. 6, the cached data flushing device 600 may include a processor 601, a memory 602, an input device 603, and an output device 604. The processor 601 may be connected to the memory 602, the input device 603, and the output device 604 by a bus or in other manners; in this embodiment, connection by a bus is taken as an example. Specifically:
the input device 603 is configured to receive cached data written by a server and send the cached data to the processor 601;
the memory 602 stores a set of program code, and the processor 601 is configured to call the program code stored in the memory 602 to perform the following operations:
when cached data exists, obtaining data flush parameters of at least two LUNs included in a disk;
calculating the priority number of each LUN according to the data flush parameters of that LUN;
determining the flush page count of each LUN according to the priority number of each LUN and a concurrent flush page count, where the concurrent flush page count is the number of pages written to the disk per flush, and the flush page count of a LUN is the number of pages written to that LUN per flush.
The output device 604 is configured to flush all or part of the cached data to the at least two LUNs included in the disk according to the flush page count of each LUN.
As a possible embodiment, the data flush parameters of each LUN include the cached data capacity written to the LUN within a preset period and the average time the LUN requires per flush within the preset period;
the processor 601 calculates the priority number of each LUN according to these data flush parameters specifically by:
dividing the cached data capacity written to the LUN within the preset period by the average time the LUN requires per flush within the preset period, to obtain the priority number of the LUN.
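The priority calculation described above reduces to a single division; a minimal sketch, with hypothetical function and parameter names (the patent gives no code):

```python
def priority_number(capacity_written, avg_flush_time):
    """Priority number of a LUN: cached-data capacity written to the LUN
    within the preset period, divided by the LUN's average time per flush
    within the same period. A LUN with more pending data and faster
    flushes thus receives a higher priority number."""
    return capacity_written / avg_flush_time
```

For example, a LUN that received 120 MB of cached data and averages 3 s per flush would get a priority number of 40, while one that received 30 MB at the same 3 s per flush gets 10.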
As a possible embodiment, the processor 601 determines the flush page count of each LUN according to its priority number and the concurrent flush page count specifically by:
adding the priority numbers of all the LUNs to obtain a total priority number;
dividing the priority number of each LUN by the total priority number to obtain the flush quota of the LUN;
multiplying the flush quota of each LUN by the concurrent flush page count to obtain the flush page count of the LUN.
As a possible embodiment, the processor 601 is further configured to call the program code stored in the memory 602 to perform the following operations:
obtaining the capacity of the cached data;
obtaining the concurrent flush page count corresponding to the cached data capacity according to a preset correspondence between cached data capacity intervals and concurrent flush page counts.
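The interval-based correspondence can be sketched as a simple ordered table scan; the thresholds below are purely illustrative assumptions, not values from the patent:

```python
# Hypothetical mapping: (upper bound of cached-data capacity interval in MB,
# concurrent flush page count), ordered by ascending upper bound.
CAPACITY_INTERVALS = [
    (64, 8),             # up to 64 MB cached   -> 8 pages per flush
    (256, 32),           # up to 256 MB cached  -> 32 pages per flush
    (float("inf"), 128), # more than 256 MB     -> 128 pages per flush
]

def concurrent_flush_pages(capacity, intervals=CAPACITY_INTERVALS):
    """Return the concurrent flush page count whose capacity interval
    contains the given cached-data capacity."""
    for upper_bound, pages in intervals:
        if capacity <= upper_bound:
            return pages
```

A cache holding 100 MB of dirty data falls into the second interval and is flushed 32 pages at a time; a fuller cache is drained more aggressively.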
As a possible embodiment, after the processor 601 calculates the priority number of each LUN according to its data flush parameters, the processor 601 is further configured to call the program code stored in the memory 602 to perform the following operations:
judging whether the absolute value of the difference between the calculated priority number of a target LUN and the previously calculated priority number of the target LUN is greater than a preset value, and if so, performing the step of determining the flush page count of each LUN according to the priority number of each LUN and the concurrent flush page count;
where the target LUN is any one of the at least two LUNs.
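The threshold check that gates redistribution can be sketched as follows; a hypothetical illustration in which the function name and preset value are assumptions:

```python
def should_redistribute(new_priority, previous_priority, preset_value):
    """Trigger recomputation of the flush page counts only when the target
    LUN's priority number has moved by more than the preset value since the
    previous calculation, avoiding redistribution on small fluctuations."""
    return abs(new_priority - previous_priority) > preset_value
```

With a preset value of 5, a priority number moving from 40 to 48 triggers redistribution, while a move from 40 to 43 leaves the current flush page counts in place.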
In the cached data flushing device described in Fig. 6, when cached data exists, the priority number of each LUN can be calculated from the data flush parameters of that LUN, and the flush page count of each LUN is then determined from its priority number and the concurrent flush page count; afterwards, all or part of the cached data is flushed to the LUNs according to their flush page counts. The concurrent flush page count can therefore be distributed reasonably among the multiple LUNs in the disk, improving the flexibility of flushing.
It should be noted that, for brevity, each of the foregoing method embodiments is expressed as a series of action combinations. However, those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention, some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The cached data flushing method and device provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A cached data flushing method, characterized by comprising:
when cached data exists, obtaining data flush parameters of at least two logical unit numbers (LUNs) included in a disk;
calculating a priority number of each LUN according to the data flush parameters of that LUN;
determining a flush page count of each LUN according to the priority number of each LUN and a concurrent flush page count, wherein the concurrent flush page count is the number of pages written to the disk per flush, and the flush page count of a LUN is the number of pages written to that LUN per flush;
flushing all or part of the cached data to the at least two LUNs according to the flush page count of each LUN.
2. The method according to claim 1, characterized in that the data flush parameters of each LUN comprise the cached data capacity written to the LUN within a preset period and the average time the LUN requires per flush within the preset period;
the calculating the priority number of each LUN according to the data flush parameters of that LUN comprises:
dividing the cached data capacity written to the LUN within the preset period by the average time the LUN requires per flush within the preset period, to obtain the priority number of the LUN.
3. The method according to claim 1, characterized in that the determining the flush page count of each LUN according to the priority number of each LUN and the concurrent flush page count comprises:
adding the priority numbers of all the LUNs to obtain a total priority number;
dividing the priority number of each LUN by the total priority number to obtain a flush quota of the LUN;
multiplying the flush quota of each LUN by the concurrent flush page count to obtain the flush page count of the LUN.
4. The method according to claim 1, characterized in that the method further comprises:
obtaining the capacity of the cached data;
obtaining the concurrent flush page count corresponding to the cached data capacity according to a preset correspondence between cached data capacity intervals and concurrent flush page counts.
5. The method according to claim 1, characterized in that after the calculating the priority number of each LUN according to the data flush parameters of that LUN, the method further comprises:
judging whether the absolute value of the difference between the calculated priority number of a target LUN and the previously calculated priority number of the target LUN is greater than a preset value, and if so, performing the step of determining the flush page count of each LUN according to the priority number of each LUN and the concurrent flush page count;
wherein the target LUN is any one of the at least two LUNs.
6. A cached data flushing device, characterized by comprising:
a first acquiring unit, configured to obtain, when cached data exists, data flush parameters of at least two LUNs included in a disk;
a computing unit, configured to calculate a priority number of each LUN according to the data flush parameters obtained by the first acquiring unit;
a determining unit, configured to determine a flush page count of each LUN according to the priority number of each LUN calculated by the computing unit and a concurrent flush page count, wherein the concurrent flush page count is the number of pages written to the disk per flush, and the flush page count of a LUN is the number of pages written to that LUN per flush;
a flushing unit, configured to flush all or part of the cached data to the at least two LUNs according to the flush page count of each LUN determined by the determining unit.
7. The device according to claim 6, characterized in that the data flush parameters of each LUN comprise the cached data capacity written to the LUN within a preset period and the average time the LUN requires per flush within the preset period;
the computing unit is specifically configured to divide the cached data capacity written to the LUN within the preset period by the average time the LUN requires per flush within the preset period, to obtain the priority number of the LUN.
8. The device according to claim 6, characterized in that the determining unit comprises:
an addition subunit, configured to add the priority numbers of the LUNs calculated by the computing unit, to obtain a total priority number;
a division subunit, configured to divide the priority number of each LUN by the total priority number obtained by the addition subunit, to obtain a flush quota of the LUN;
a multiplication subunit, configured to multiply the flush quota of each LUN obtained by the division subunit by the concurrent flush page count, to obtain the flush page count of the LUN.
9. The device according to claim 6, characterized in that the device further comprises:
a second acquiring unit, configured to obtain the capacity of the cached data;
a third acquiring unit, configured to obtain, according to a preset correspondence between cached data capacity intervals and concurrent flush page counts, the concurrent flush page count corresponding to the cached data capacity obtained by the second acquiring unit.
10. The device according to claim 6, characterized in that the device further comprises:
a judging unit, configured to judge whether the absolute value of the difference between the priority number of a target LUN calculated by the computing unit and the previously calculated priority number of the target LUN is greater than a preset value; when the judgment result of the judging unit is yes, the determining unit is triggered to perform the step of determining the flush page count of each LUN according to the priority number of each LUN and the concurrent flush page count;
wherein the target LUN is any one of the at least two LUNs.
CN201510601253.9A 2015-09-19 2015-09-19 A kind of data cached brush method and device Active CN106557430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510601253.9A CN106557430B (en) 2015-09-19 2015-09-19 A kind of data cached brush method and device


Publications (2)

Publication Number Publication Date
CN106557430A (en) 2017-04-05
CN106557430B (en) 2019-06-21



Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103229136A (en) * 2012-12-26 2013-07-31 华为技术有限公司 Disk writing method for disk arrays and disk writing device for disk arrays
CN104461936A (en) * 2014-11-28 2015-03-25 华为技术有限公司 Cached data disk brushing method and device
CN105095112A (en) * 2015-07-20 2015-11-25 华为技术有限公司 Method and device for controlling caches to write and readable storage medium of non-volatile computer


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110275670A (en) * 2018-03-16 2019-09-24 华为技术有限公司 Method, apparatus, storage equipment and the storage medium of data flow in control storage equipment
US11734183B2 (en) 2018-03-16 2023-08-22 Huawei Technologies Co., Ltd. Method and apparatus for controlling data flow in storage device, storage device, and storage medium
CN109739441A (en) * 2019-01-02 2019-05-10 郑州云海信息技术有限公司 Data cached method is brushed under a kind of storage system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant