CN107025223B - Multi-tenant-oriented buffer management method and server - Google Patents
Multi-tenant-oriented buffer management method and server
- Publication number
- CN107025223B (application CN201610064482.6A / CN201610064482A)
- Authority
- CN
- China
- Prior art keywords
- hit rate
- buffer area
- preset
- tenant
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
- G06F16/21—Design, administration or maintenance of databases
- G06F16/217—Database tuning
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Computer And Data Communications (AREA)
Abstract
The invention discloses a multi-tenant-oriented buffer management method and server. The method comprises: predicting, according to a preset rule, the future buffer hit rate of a target tenant among the multiple tenants from the tenant's historical buffer hit rate over a preset historical period; comparing the future buffer hit rate with the target tenant's expected buffer hit rate; if the future buffer hit rate is greater than the expected buffer hit rate, computing a first capacity with a first preset algorithm from the difference between the future buffer hit rate and a first preset buffer hit rate, and releasing data pages of the first capacity from the target tenant's target buffer; if the future buffer hit rate is less than the expected buffer hit rate, computing a second capacity with a second preset algorithm from the difference between a second preset buffer hit rate and the future buffer hit rate, and adding data pages of the second capacity to the target buffer. The invention thus adjusts the capacity of the target tenant's buffer dynamically.
Description
Technical field
The present invention relates to the field of database technology, and in particular to a multi-tenant-oriented buffer management method and server.
Background art
In the database (DB) field, a tenant is a "user" of a database who "rents" part of the database's resources. Multi-tenancy means that multiple tenants share a single database instance. With the rise and rapid growth of cloud computing, more and more cloud service providers deploy multi-tenant applications on the cloud. Multiple tenants share the same system or program components on the cloud and share the cloud's resources (hardware resources, computing resources, memory resources, and so on), while data isolation between tenants is guaranteed. For example, when several enterprises open corporate mailboxes through the same service site, each enterprise logs in to the site through the same interface, fills in its own authentication information (enterprise ID, password, and so on), and goes through the same service flow, but the information the site keeps for each enterprise is isolated: one enterprise cannot see another enterprise's information. Here each enterprise is a tenant, and a tenant contains multiple personal accounts for its administrator and ordinary employees.
When multiple tenants access data stored on the cloud, the traditional buffer-resource management scheme has an obvious shortcoming: when a tenant is created on the cloud, the cloud service system allocates it a buffer of fixed size on demand. Because each tenant's actual buffer usage at run time is uncertain, an initially configured buffer that is too large wastes buffer resources, while one that is too small fails to meet the tenant's demand.
Summary of the invention
Embodiments of the invention provide a multi-tenant-oriented buffer management method and server.
In a first aspect, a multi-tenant-oriented buffer management method is provided. The method applies to, but is not limited to, a server, for example a database server, and can be used in large-scale database processing systems, including but not limited to distributed disk-based database systems, clustered database systems, and MPP (Massively Parallel Processing) database systems. The server may be an ordinary server or a cloud server (also called a cloud computing server or cloud host). The method includes:
predicting, according to a preset rule, the future buffer hit rate of a target tenant from the tenant's historical buffer hit rate over a preset historical period, the target tenant being one of the multiple tenants;
comparing the future buffer hit rate with the target tenant's preset expected buffer hit rate, the expected buffer hit rate being the target tenant's average demand for buffer hit rate, which the tenant may configure when registering on the server;
if the future buffer hit rate is greater than the expected buffer hit rate, computing a first capacity with a first preset algorithm from the difference between the future buffer hit rate and the target tenant's first preset buffer hit rate, and releasing data pages of the first capacity from the target tenant's target buffer, where the first preset buffer hit rate is less than the future buffer hit rate;
if the future buffer hit rate is less than the expected buffer hit rate, computing a second capacity with a second preset algorithm from the difference between the target tenant's second preset buffer hit rate and the future buffer hit rate, and adding data pages of the second capacity to the target buffer, where the second preset buffer hit rate is greater than the future buffer hit rate.
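The two branches of the method can be sketched as a small dispatch function. The patent does not disclose the first or second preset algorithm, so `release_fn` and `add_fn` below stand in for them, and all names are illustrative:

```python
def adjust_buffer(future_rate, expected_rate, buffer_pages, release_fn, add_fn):
    """Dispatch on predicted vs. expected hit rate, per the method's two branches.

    release_fn/add_fn map a hit-rate difference and the current buffer
    capacity (in pages) to a page count; they model the undisclosed
    'first preset algorithm' and 'second preset algorithm'."""
    if future_rate > expected_rate:
        # Surplus: the tenant is predicted to exceed its demand, so shrink.
        return ("release", release_fn(future_rate - expected_rate, buffer_pages))
    if future_rate < expected_rate:
        # Deficit: the tenant is predicted to fall short, so grow.
        return ("add", add_fn(expected_rate - future_rate, buffer_pages))
    return ("keep", 0)
```

A trivially proportional `release_fn`/`add_fn` (e.g. `lambda diff, cap: round(diff * cap)`) already reproduces the shrink/grow behavior described above.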
By implementing the method of the first aspect, the capacity of the target tenant's target buffer can be adjusted dynamically according to how the tenant's future buffer hit rate compares with its preset buffer hit rate demands.
In some implementations, computing the first capacity with the first preset algorithm from the difference between the future buffer hit rate and the target tenant's first preset buffer hit rate comprises: taking the expected buffer hit rate as the first preset buffer hit rate, and computing the first capacity with the first preset algorithm from the difference between the future buffer hit rate and the expected buffer hit rate, together with the preset buffer capacity.
These steps determine, while still meeting the target tenant's expected buffer hit rate demand, the capacity of data pages the tenant has to spare, using the difference between the future and expected buffer hit rates; the surplus data pages can then be released, avoiding resource waste.
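The concrete first preset algorithm is left open by the patent. Assuming, purely for illustration, that hit rate scales roughly in proportion to buffer size, one plausible surplus computation (all names hypothetical) is:

```python
def surplus_pages(future_rate, ref_rate, buffer_pages):
    """Pages the tenant can release while still meeting ref_rate.

    Under the rough proportionality assumption hit_rate ~ buffer_size,
    the share (future_rate - ref_rate) / future_rate of the buffer is
    no longer needed. This is one illustrative instantiation of the
    undisclosed 'first preset algorithm', not the patented formula."""
    if future_rate <= ref_rate:
        return 0  # no surplus predicted, nothing to release
    return int(buffer_pages * (future_rate - ref_rate) / future_rate)
```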
In some implementations, computing the second capacity with the second preset algorithm from the difference between the target tenant's second preset buffer hit rate and the future buffer hit rate comprises: taking the expected buffer hit rate as the second preset buffer hit rate, and computing the second capacity with the second preset algorithm from the difference between the expected buffer hit rate and the future buffer hit rate, together with the preset buffer capacity.
These steps determine, in order to guarantee the target tenant's expected buffer hit rate demand, the capacity of data pages the tenant currently needs, using the difference between the expected and future buffer hit rates; data pages are then added for the tenant, guaranteeing its access-performance demand.
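The second preset algorithm is likewise undisclosed. Assuming, for illustration, that hit rate grows roughly in proportion to buffer size, a hypothetical sketch of the pages to add is:

```python
def deficit_pages(ref_rate, future_rate, buffer_pages):
    """Pages to add so the predicted hit rate can be lifted up to ref_rate.

    Under the rough proportionality assumption hit_rate ~ buffer_size,
    growing the buffer by (ref_rate - future_rate) / future_rate of its
    current size closes the gap. One illustrative instantiation of the
    undisclosed 'second preset algorithm', not the patented formula."""
    if future_rate >= ref_rate:
        return 0  # predicted rate already meets the reference, nothing to add
    return int(buffer_pages * (ref_rate - future_rate) / future_rate)
```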
In some implementations, computing the first capacity with the first preset algorithm from the difference between the future buffer hit rate and the target tenant's first preset buffer hit rate comprises: judging whether the future buffer hit rate is greater than the target tenant's preset maximum buffer hit rate, the maximum buffer hit rate being greater than the expected buffer hit rate and representing the tenant's maximum demand for buffer hit rate, which the tenant may configure when registering on the server; and if so, taking the maximum buffer hit rate as the first preset buffer hit rate, and computing the first capacity with the first preset algorithm from the difference between the future buffer hit rate and the maximum buffer hit rate, together with the preset buffer capacity.
These steps determine, while meeting the target tenant's maximum buffer hit rate demand, the capacity of data pages the target buffer has to spare, using the difference between the future and maximum buffer hit rates; the surplus pages can then be released, avoiding resource waste.
In some implementations, releasing data pages of the first capacity from the target tenant's target buffer comprises: releasing data pages of the first capacity from the target buffer and adding the released pages to a free buffer, where the data pages in the free buffer are available to any tenant among the multiple tenants that needs to add data pages.
By these steps, the data pages released from the target tenant's buffer are added to the free buffer. This both guarantees the target tenant's access performance and frees the tenant's spare buffer resources for other consumers: tenants among the multiple tenants that need to grow their buffers (including the target tenant itself, other tenants among the multiple tenants, and tenants newly added by the server) or other services on the server. This improves the overall performance of the multi-tenant system and the resource utilization of the buffers.
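The free-buffer mechanism is described only behaviorally; a minimal sketch with illustrative names is a shared pool that pages are released into and borrowed from:

```python
class FreeBufferPool:
    """Shared pool of spare data pages, as described behaviorally above.

    Pages released by one tenant are parked here; tenants (or other
    services) that need to grow borrow from it. Names and granularity
    (a bare page count) are assumptions for illustration."""

    def __init__(self):
        self.pages = 0

    def release(self, n):
        # A tenant returns n surplus pages to the shared pool.
        self.pages += n

    def borrow(self, wanted):
        # A tenant takes up to `wanted` pages; never more than available.
        granted = min(wanted, self.pages)
        self.pages -= granted
        return granted
```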
In some implementations, computing the first capacity with the first preset algorithm from the difference between the future buffer hit rate and the target tenant's first preset buffer hit rate comprises: judging whether the future buffer hit rate is greater than the target tenant's preset maximum buffer hit rate, the maximum buffer hit rate being greater than the expected buffer hit rate; and if not, taking the expected buffer hit rate as the first preset buffer hit rate, and computing the first capacity with the first preset algorithm from the difference between the future buffer hit rate and the expected buffer hit rate, together with the preset buffer capacity.
These steps determine, while still meeting the target tenant's expected buffer hit rate demand, the capacity of data pages the tenant has to spare, using the difference between the future and expected buffer hit rates; the surplus pages can then be released, avoiding resource waste.
In some implementations, releasing data pages of the first capacity from the target tenant's target buffer comprises: releasing data pages of the first capacity from the target buffer, the released pages being for use by other tenants among the multiple tenants that need to add data pages, the first capacity being less than or equal to the capacity of data pages those tenants need to add.
By these steps, when other tenants among the multiple tenants need data pages, the pages released from the target tenant's buffer are allocated to them. This both guarantees the target tenant's access performance and frees the tenant's spare buffer resources for other tenants that need pages, improving the overall performance of the multi-tenant system and the resource utilization of the buffers.
In some implementations, computing the second capacity with the second preset algorithm from the difference between the target tenant's second preset buffer hit rate and the future buffer hit rate comprises: judging whether the future buffer hit rate is less than the target tenant's preset minimum buffer hit rate, the minimum buffer hit rate being less than the expected buffer hit rate and representing the tenant's minimum demand for buffer hit rate, which the tenant may configure when registering on the server; and if so, taking the minimum buffer hit rate as the second preset buffer hit rate, and computing the second capacity with the second preset algorithm from the difference between the minimum buffer hit rate and the future buffer hit rate, together with the preset buffer capacity.
These steps determine, in order to guarantee the target tenant's minimum buffer hit rate demand, that the tenant needs at least data pages of the second capacity to be added, using the difference between the minimum and future buffer hit rates; data pages are then added for the tenant, guaranteeing its access-performance demand.
In some implementations, computing the second capacity with the second preset algorithm from the difference between the target tenant's second preset buffer hit rate and the future buffer hit rate comprises: judging whether the future buffer hit rate is less than the target tenant's preset minimum buffer hit rate, the minimum buffer hit rate being less than the expected buffer hit rate; and if not, taking the expected buffer hit rate as the second preset buffer hit rate, and computing the second capacity with the second preset algorithm from the difference between the expected buffer hit rate and the future buffer hit rate, together with the preset buffer capacity.
These steps determine, in order to guarantee the target tenant's expected buffer hit rate demand, the capacity of data pages the tenant currently needs, using the difference between the expected and future buffer hit rates; data pages are then added for the tenant, guaranteeing its access-performance demand.
In some implementations, comparing the future buffer hit rate with the target tenant's preset expected buffer hit rate comprises: judging whether the future buffer hit rate minus the expected buffer hit rate exceeds a first preset threshold, or whether the expected buffer hit rate minus the future buffer hit rate exceeds a second preset threshold. If the future buffer hit rate minus the expected buffer hit rate exceeds the first preset threshold, the future buffer hit rate counts as greater than the expected buffer hit rate; if the expected buffer hit rate minus the future buffer hit rate exceeds the second preset threshold, the future buffer hit rate counts as less than the expected buffer hit rate. Setting these float values (the first and second preset thresholds) allows a more flexible judgment of whether the target buffer's capacity actually needs adjusting.
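The two slack thresholds form a dead band around the expected hit rate, so small fluctuations do not trigger a resize. A sketch of that comparison, with illustrative names:

```python
def compare_with_thresholds(future_rate, expected_rate, up_thr, down_thr):
    """Three-way comparison with a dead band, per the two preset thresholds.

    Only a surplus larger than up_thr counts as 'greater' (release
    branch) and only a deficit larger than down_thr counts as 'less'
    (add branch); anything in between leaves the buffer unchanged."""
    if future_rate - expected_rate > up_thr:
        return "greater"       # triggers the release branch
    if expected_rate - future_rate > down_thr:
        return "less"          # triggers the add branch
    return "within-band"       # no adjustment needed
```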
In some implementations, the step of predicting the target tenant's future buffer hit rate according to the preset rule from the tenant's historical buffer hit rate over the preset historical period is triggered by at least one of the following events: a preset period elapses; a preset time point is reached; or the number of tenants whose buffer hit rate is below a preset buffer hit rate threshold reaches or exceeds a preset count threshold. When a trigger event occurs, the server can learn the target tenant's buffer hit rate situation in time and adjust the target buffer's capacity promptly.
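The three trigger events combine with a logical OR; any one of them starts a new prediction round. A sketch with illustrative parameter names (times as plain numbers, e.g. seconds):

```python
def should_trigger(now, last_run, period, scheduled_times,
                   tenant_rates, rate_threshold, count_threshold):
    """True if any of the three listed trigger events has occurred.

    - the preset period has elapsed since the last run,
    - `now` is one of the preset time points, or
    - at least count_threshold tenants sit below rate_threshold."""
    if now - last_run >= period:
        return True
    if now in scheduled_times:
        return True
    low = sum(1 for r in tenant_rates if r < rate_threshold)
    return low >= count_threshold
```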
In a second aspect, a server is provided, comprising:
a prediction unit, configured to predict, according to a preset rule, the future buffer hit rate of a target tenant from the tenant's historical buffer hit rate over a preset historical period, the target tenant being one of the multiple tenants;
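The patent leaves the prediction unit's "preset rule" open. One simple instantiation, sketched here with illustrative names, is an exponential moving average over the hit rates recorded in the historical period:

```python
def predict_future_hit_rate(history, alpha=0.5):
    """Predict the next-interval buffer hit rate from observed history.

    `history` is ordered oldest to newest; `alpha` weights recent
    samples more heavily. This exponential moving average is only one
    plausible 'preset rule'; the patent does not specify one."""
    if not history:
        raise ValueError("need at least one observed hit rate")
    estimate = history[0]
    for rate in history[1:]:
        estimate = alpha * rate + (1 - alpha) * estimate
    return estimate
```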
a comparison unit, configured to compare the future buffer hit rate predicted by the prediction unit with the target tenant's preset expected buffer hit rate, the expected buffer hit rate being the tenant's average demand for buffer hit rate, which the tenant may configure when registering on the server;
a first management unit, configured to, when the comparison unit finds the future buffer hit rate greater than the expected buffer hit rate, compute a first capacity with a first preset algorithm from the difference between the future buffer hit rate and the target tenant's first preset buffer hit rate, and release data pages of the first capacity from the target tenant's target buffer, where the first preset buffer hit rate is less than the future buffer hit rate;
a second management unit, configured to, when the comparison unit finds the future buffer hit rate less than the expected buffer hit rate, compute a second capacity with a second preset algorithm from the difference between the target tenant's second preset buffer hit rate and the future buffer hit rate, and add data pages of the second capacity to the target buffer, where the second preset buffer hit rate is greater than the future buffer hit rate.
With this server, the capacity of the target tenant's target buffer can be adjusted dynamically according to how the tenant's future buffer hit rate compares with its preset buffer hit rate demands.
In some implementations, the first management unit is specifically configured to: take the expected buffer hit rate as the first preset buffer hit rate, and compute the first capacity with the first preset algorithm from the difference between the future buffer hit rate and the expected buffer hit rate, together with the preset buffer capacity.
With this server, while the target tenant's expected buffer hit rate demand is still met, the difference between the future and expected buffer hit rates yields the capacity of data pages the tenant has to spare; the surplus pages can then be released, avoiding resource waste.
In some implementations, the second management unit is specifically configured to: take the expected buffer hit rate as the second preset buffer hit rate, and compute the second capacity with the second preset algorithm from the difference between the expected buffer hit rate and the future buffer hit rate, together with the preset buffer capacity.
With this server, in order to guarantee the target tenant's expected buffer hit rate demand, the difference between the expected and future buffer hit rates yields the capacity of data pages the tenant currently needs; data pages are then added for the tenant, guaranteeing its access-performance demand.
In some implementations, the first management unit comprises:
a first judgment unit, configured to, when the comparison unit finds the future buffer hit rate greater than the expected buffer hit rate, judge whether the future buffer hit rate is greater than the target tenant's preset maximum buffer hit rate, the maximum buffer hit rate being greater than the expected buffer hit rate and representing the tenant's maximum demand for buffer hit rate, which the tenant may configure when registering on the server;
a first computing unit, configured to, when the first judgment unit finds the future buffer hit rate greater than the maximum buffer hit rate, take the maximum buffer hit rate as the first preset buffer hit rate, compute the first capacity with the first preset algorithm from the difference between the future and maximum buffer hit rates together with the preset buffer capacity, and release data pages of the first capacity from the target tenant's target buffer.
With this server, while the target tenant's maximum buffer hit rate demand is met, the difference between the future and maximum buffer hit rates yields the capacity of data pages the target buffer has to spare; the surplus pages can then be released, avoiding resource waste.
In some implementations, the first management unit is specifically configured to: release data pages of the first capacity from the target tenant's target buffer and add the released pages to a free buffer, where the data pages in the free buffer are available to any tenant among the multiple tenants that needs to add data pages.
With this server, the data pages released from the target tenant's buffer are added to the free buffer. This both guarantees the target tenant's access performance and frees the tenant's spare buffer resources for other consumers: tenants among the multiple tenants that need to grow their buffers (including the target tenant itself, other tenants among the multiple tenants, and tenants newly added by the server) or other services on the server. This improves the overall performance of the multi-tenant system and the resource utilization of the buffers.
In some implementations, the first management unit comprises:
a second judgment unit, configured to, when the comparison unit finds the future buffer hit rate greater than the expected buffer hit rate, judge whether the future buffer hit rate is greater than the target tenant's preset maximum buffer hit rate, the maximum buffer hit rate being greater than the expected buffer hit rate;
a second computing unit, configured to, when the second judgment unit finds the future buffer hit rate less than or equal to the maximum buffer hit rate, take the expected buffer hit rate as the first preset buffer hit rate, compute the first capacity with the first preset algorithm from the difference between the future and expected buffer hit rates together with the preset buffer capacity, and release data pages of the first capacity from the target tenant's target buffer.
With this server, while the target tenant's expected buffer hit rate demand is still met, the difference between the future and expected buffer hit rates yields the capacity of data pages the tenant has to spare; the surplus pages can then be released, avoiding resource waste.
In some implementations, the first management unit is specifically configured to: release data pages of the first capacity from the target tenant's target buffer, the released pages being for use by other tenants among the multiple tenants that need to add data pages, the first capacity being less than or equal to the capacity of data pages those tenants need to add.
With this server, when other tenants among the multiple tenants need data pages, the pages released from the target tenant's buffer are allocated to them. This both guarantees the target tenant's access performance and frees the tenant's spare buffer resources for other tenants that need pages, improving the overall performance of the multi-tenant system and the resource utilization of the buffers.
In some implementations, the second management unit comprises:
a third judgment unit, configured to, when the comparison unit finds the future buffer hit rate less than the expected buffer hit rate, judge whether the future buffer hit rate is less than the target tenant's preset minimum buffer hit rate, the minimum buffer hit rate being less than the expected buffer hit rate and representing the tenant's minimum demand for buffer hit rate, which the tenant may configure when registering on the server;
a third computing unit, configured to, when the third judgment unit finds the future buffer hit rate less than the minimum buffer hit rate, take the minimum buffer hit rate as the second preset buffer hit rate, compute the second capacity with the second preset algorithm from the difference between the minimum and future buffer hit rates together with the preset buffer capacity, and add data pages of the second capacity to the target buffer.
With this server, in order to guarantee the target tenant's minimum buffer hit rate demand, the difference between the minimum and future buffer hit rates shows that the tenant needs at least data pages of the second capacity to be added; data pages are then added for the tenant, guaranteeing its access-performance demand.
In some implementations, the second management unit comprises:
a fourth judgment unit, configured to, when the comparison unit finds the future buffer hit rate less than the expected buffer hit rate, judge whether the future buffer hit rate is less than the target tenant's preset minimum buffer hit rate, the minimum buffer hit rate being less than the expected buffer hit rate;
a fourth computing unit, configured to, when the fourth judgment unit finds the future buffer hit rate greater than or equal to the minimum buffer hit rate, take the expected buffer hit rate as the second preset buffer hit rate, compute the second capacity with the second preset algorithm from the difference between the expected and future buffer hit rates together with the preset buffer capacity, and add data pages of the second capacity to the target buffer.
With this server, in order to guarantee the target tenant's expected buffer hit rate demand, the difference between the expected and future buffer hit rates yields the capacity of data pages the tenant currently needs; data pages are then added for the tenant, guaranteeing its access-performance demand.
In some implementations, the comparing unit is specifically configured to:
judge whether the difference obtained by subtracting the expected buffer hit rate from the future buffer hit rate is greater than a first preset threshold, or judge whether the difference obtained by subtracting the future buffer hit rate from the expected buffer hit rate is greater than a second preset threshold;
if the difference obtained by subtracting the expected buffer hit rate from the future buffer hit rate is greater than the first preset threshold, the future buffer hit rate is considered greater than the expected buffer hit rate; if the difference obtained by subtracting the future buffer hit rate from the expected buffer hit rate is greater than the second preset threshold, the future buffer hit rate is considered less than the expected buffer hit rate. By setting floating margins (the first preset threshold and the second preset threshold), whether the capacity of the target buffer actually needs to be adjusted can be judged more flexibly.
In some implementations, the operation in which the predicting unit predicts the target tenant's future buffer hit rate according to preset rules, based on the target tenant's historical buffer hit rates in a preset historical time period, is triggered by at least one of the following events: a preset period elapses; a preset time point is reached; the number of tenants whose buffer hit rate is below a preset buffer hit rate threshold reaches a preset count threshold. When a trigger event occurs, the server can learn the target tenant's buffer hit rate situation in time and adjust the capacity of the target buffer accordingly.
A third aspect provides a server including a processor and a memory. The server may be, for example, a database server, and may be applied to a large-scale database processing system, including but not limited to a distributed disk database system, a clustered database system, or an MPP database system. The server may be a general-purpose server or a cloud server. The memory stores program code for multi-tenant-oriented buffer management, and the processor calls the program code stored in the memory to perform the multi-tenant-oriented buffer management method described in the first aspect or any implementation of the first aspect.
In some implementations of the invention, the first preset algorithm is C1 = (A - B) * D, where C1 is the first capacity, A is the future buffer hit rate, B is the first preset buffer hit rate, and D is the preset buffer capacity. The first preset algorithm calculates the data page capacity that the target buffer can release.
In some implementations of the invention, the first preset algorithm is C1 = min((A - B) * D, E), where C1 is the first capacity, A is the future buffer hit rate, B is the first preset buffer hit rate, D is the preset buffer capacity, and E is the data page capacity that the other tenants among the multiple tenants are waiting to add. By comparing the data page capacity the target tenant can release with the data page capacity the other tenants are waiting to add, the data page capacity actually released from the target buffer can be determined.
In some implementations of the invention, the second preset algorithm is C2 = min((F - A) * D, G), where C2 is the second capacity, F is the second preset buffer hit rate, A is the future buffer hit rate, D is the preset buffer capacity, and G is the sum of the data page capacity that the other tenants among the multiple tenants are waiting to release and the data page capacity in the free buffer. By comparing the data page capacity currently available on the server with the data page capacity the target tenant is waiting to add, the data page capacity actually added to the target buffer can be determined.
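As an illustration only (the source gives no implementation; function and parameter names are invented, capacities here in MB), the two capacity formulas above can be sketched as:

```python
def release_capacity(future_hit, preset_hit, buffer_capacity_mb, pending_add_mb):
    """First preset algorithm: C1 = min((A - B) * D, E)."""
    return min((future_hit - preset_hit) * buffer_capacity_mb, pending_add_mb)

def add_capacity(second_preset_hit, future_hit, buffer_capacity_mb, available_mb):
    """Second preset algorithm: C2 = min((F - A) * D, G)."""
    return min((second_preset_hit - future_hit) * buffer_capacity_mb, available_mb)

# Future hit rate 95%, first preset hit rate 90%, buffer 2GB (2048 MB),
# other tenants waiting to add only 80 MB -> only 80 MB is released.
print(release_capacity(0.95, 0.90, 2048, 80))         # 80
# Second preset hit rate 90%, future 85%, 200 MB available -> ~102.4 MB added.
print(round(add_capacity(0.90, 0.85, 2048, 200), 1))  # 102.4
```

The min() in both formulas is the clamping described in the text: a tenant never releases more than other tenants demand, and never adds more than the server can currently supply.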
In some implementations of the invention, the preset buffer capacity includes at least one of: the target tenant's preset minimum buffer capacity, the target tenant's preset expected buffer capacity, the target tenant's preset maximum buffer capacity, and the current capacity of the target buffer. When the target tenant registers on the server, the minimum buffer capacity, the expected buffer capacity, and the maximum buffer capacity can each be configured.
In some implementations of the invention, the preset historical time period may be a single past time period or at least two past time periods. For example, the preset historical time period may be the period between the server's current prediction time point and the historical prediction time point closest in time to the current prediction time point; it may be the P (P is a positive integer, P >= 1) periods formed by every two adjacent prediction time points between the current prediction time point and a preset prediction time point; or it may be any Q (Q is a positive integer, 1 <= Q <= P) of those P periods. The preset prediction time point may include, but is not limited to, the first prediction time point after the server starts running.
In some implementations of the invention, the preset rules include: calculating, from N (N is a positive integer, N >= 1) historical buffer hit rates of the target tenant, the average of the N historical buffer hit rates, and determining that average as the target tenant's future buffer hit rate; or predicting the target tenant's future buffer hit rate from the variation trend of M (M is a positive integer, M >= 2) historical buffer hit rates of the target tenant.
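The two prediction rules can be sketched as follows. This is a hedged illustration: the source fixes no implementation, and the trend rule admits several readings, so a least-squares extrapolation one period ahead is used here as one plausible choice.

```python
def predict_by_average(history):
    """Rule 1: mean of N historical buffer hit rates."""
    return sum(history) / len(history)

def predict_by_trend(history):
    """Rule 2 (one possible reading): least-squares trend over M samples,
    extrapolated one period ahead."""
    m = len(history)
    x_mean = (m - 1) / 2
    y_mean = sum(history) / m
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(history))
    den = sum((x - x_mean) ** 2 for x in range(m))
    return y_mean + (num / den) * (m - x_mean)

# The six historical hit rates used in the examples later in the text:
history = [0.95, 0.90, 0.85, 0.95, 0.85, 0.90]
print(round(predict_by_average(history), 2))  # 0.9
```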
By implementing the embodiments of the present invention, the capacity of the target buffer corresponding to a target tenant among the multiple tenants is dynamically adjusted according to the predicted future buffer hit rate of the target tenant and the target tenant's preset buffer hit rates, which both avoids degrading a tenant's access performance through an undersized buffer and avoids wasting resources on an oversized buffer.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below.
Fig. 1 is an architecture diagram of database buffers in a server provided in an embodiment of the present invention;
Fig. 2 is a kind of flow diagram of buffer management method towards multi-tenant provided in an embodiment of the present invention;
Fig. 3 is a kind of structural schematic diagram of server provided in an embodiment of the present invention;
Fig. 4 is the structural schematic diagram of another server provided in an embodiment of the present invention;
Fig. 5 is the structural schematic diagram of another server provided in an embodiment of the present invention;
Fig. 6 is the structural schematic diagram of another server provided in an embodiment of the present invention;
Fig. 7 is the structural schematic diagram of another server provided in an embodiment of the present invention;
Fig. 8 is the structural schematic diagram of another server provided in an embodiment of the present invention.
Specific embodiments
The embodiments of the present invention are described below with reference to the accompanying drawings.
It should be noted that the terms used in the embodiments of the present invention are merely for the purpose of describing specific embodiments and are not intended to limit the invention. The singular forms "a", "said", and "the" used in the embodiments of the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to and includes any or all possible combinations of one or more of the associated listed items. Although the terms "first", "second", "third", and so on may be used to describe various functional modules or units in the embodiments of the present invention, the functional modules or units should not be limited by these terms; the terms are only used to distinguish the functional modules or units from one another and imply no ordering. In addition, "multi-tenant" in the embodiments of the present invention means at least two tenants.
The server described in the following embodiments may have database management system software installed; it may specifically be a database server and may be applied to a large-scale database processing system, including but not limited to a distributed disk database system, a clustered database system, or an MPP database system. The server may be a general-purpose server or a cloud server. A general-purpose server here means a standalone server, specifically one or more computers, which can be deployed in a local area network to provide services for multiple users (who can be regarded as tenants); the general-purpose server stores the respective user data of the multiple users, and the users access it through the local area network. A cloud server is a server cluster based on a cloud architecture: multiple servers are deployed in the cloud, and multiple tenants rent computing resources, storage resources, and so on from the cloud. The cloud server stores the respective tenant data of the multiple tenants, and the tenants access it through the Internet; the services include but are not limited to querying, updating, and storing data.
The general concept of the buffer management scheme provided by the present invention is introduced first; the detailed steps are described afterwards. Referring to Fig. 1, an architecture diagram of database buffers in a server according to an embodiment of the present invention, the general concept of the multi-tenant-oriented buffer management method is as follows. In Fig. 1, the server serves 4 tenants, identified as T1, T2, T3, and T4, each of which registered on the server by application. When the 4 tenants were created, the server allocated each of them a private buffer of a certain capacity from the operating system memory; for ease of distinction, the 4 private buffers are named Buffer1, Buffer2, Buffer3, and Buffer4. The capacities of the 4 private buffers may be the same or different; assume they are 2GB (gigabytes), 2GB, 1GB, and 3GB respectively. In addition, the server sets aside a certain shared resource, namely the free buffer shown in Fig. 1.
The free buffer consists of a number of idle data pages, which may be data pages reserved by the server or empty data pages released from the buffers of certain tenants. The present invention uses buffers in a manner combining static and dynamic allocation: initially the server allocates each tenant a buffer of a certain capacity, and during operation the server dynamically adjusts (reduces or increases) each tenant's buffer capacity according to each tenant's preset performance requirement (specifically, each tenant's preset buffer hit rate) and each tenant's actual performance (specifically, the future buffer hit rate currently predicted for each tenant). Part of the buffer of a tenant whose performance exceeds its requirement is released into the free buffer, and part of the free buffer is allocated to tenants whose performance falls short, thereby improving buffer resource utilization while meeting the overall performance requirements of the multiple tenants.
It should be noted that a buffer consists of a number of data pages (also called data blocks). The capacity of a data page is usually 4KB (kilobytes) or 8KB; contiguous data pages form a buffer, and the capacity of a buffer is the sum of the capacities of the data pages in that buffer.
It should also be noted that the data pages in the buffers of the 4 tenants shown in Fig. 1 are drawn with different patterns from the data pages in the free buffer merely for distinction. The data pages in the tenants' buffers store each tenant's own data; such pages temporarily hold data the server reads from hardware storage nodes such as disks, or data to be written to such storage nodes. The data pages in the free buffer are idle; in general, no tenant data is stored in them, which prevents one tenant from reading the previous tenant's data when an idle page is reused, thereby enhancing the data security of each tenant.
The following embodiments are described by taking a target tenant among the multiple tenants as an example; the multi-tenant-oriented buffer management method provided by the embodiments of the present invention can likewise be used to dynamically adjust the buffer capacities of all the other tenants.
A specific embodiment of the present invention is described in detail below with reference to Fig. 2.
Fig. 2 is a flow diagram of a multi-tenant-oriented buffer management method according to an embodiment of the present invention. The method can be applied to, but is not limited to, a server such as a database server, where the server may be a general-purpose server or a cloud server. The multi-tenant-oriented buffer management method includes, but is not limited to, the following steps.
S201: predict, according to preset rules, the future buffer hit rate of a target tenant based on the target tenant's historical buffer hit rates in a preset historical time period.
Specifically, the target tenant is one of the multiple tenants. During operation, the server predicts the target tenant's future buffer hit rate from the target tenant's historical buffer hit rates according to the preset rules. A historical buffer hit rate is the ratio, over a preset historical time period, of the quantity of data the server read from the database buffer to the total quantity of data the server read, where the total quantity is the sum of the quantity read from the data pages of the database buffer and the quantity read from hardware storage nodes such as disks.
Optionally, the step in which the server predicts the target tenant's future buffer hit rate from its historical buffer hit rates according to the preset rules can be triggered by at least one of the following events: a preset period elapses (periodic prediction); a preset time point is reached (timed prediction); the number of tenants whose buffer hit rate is below a preset buffer hit rate threshold reaches a preset count threshold. The preset period, the preset time point, the preset buffer hit rate threshold, and the preset count threshold can be set by default by the server's system or manually by the server's administrator; the embodiments of the present invention impose no specific limitation. When a trigger event occurs, the server can learn the target tenant's future buffer hit rate situation in time and adjust the target tenant's buffer capacity accordingly.
For example, if the preset period is 30 minutes, the server predicts the target tenant's future buffer hit rate once every 30 minutes during operation. Alternatively, the preset time points may be every hour and half-hour of the day (e.g., 3:00, 7:00, 8:00, 3:30, 7:30, 8:30), and the server predicts the target tenant's future buffer hit rate once whenever such a time point is reached. Alternatively, if the preset buffer hit rate threshold is 80% and the preset count threshold is 5, the server predicts the target tenant's future buffer hit rate when it detects that the number of tenants whose buffer hit rate is below 80% is greater than or equal to 5.
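The three trigger events in the example can be sketched as a single check; all names and default values here are illustrative, not taken from the source:

```python
def should_predict(minutes_since_last, now_hhmm, tenant_hit_rates,
                   period_min=30, preset_points=("11:00",),
                   hit_threshold=0.80, count_threshold=5):
    if minutes_since_last >= period_min:   # periodic prediction
        return True
    if now_hhmm in preset_points:          # timed prediction
        return True
    # too many tenants below the hit-rate threshold
    low = sum(1 for r in tenant_hit_rates.values() if r < hit_threshold)
    return low >= count_threshold

print(should_predict(30, "10:47", {}))                       # True (period elapsed)
print(should_predict(5, "10:47", {"T1": 0.75, "T2": 0.70}))  # False (only 2 low)
```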
Optionally, the preset historical time period may be a single past time period or at least two past time periods; the embodiments of the present invention impose no specific limitation. For example, the preset historical time period may be the period between the server's current prediction time point and the historical prediction time point closest in time to the current prediction time point; it may be the P (P is a positive integer, P >= 1) periods formed by every two adjacent prediction time points between the current prediction time point and a preset prediction time point; or it may be any Q (Q is a positive integer, 1 <= Q <= P) of those P periods. The preset prediction time point may include, but is not limited to, the first prediction time point after the server starts running. For example, if the server's current prediction of the target tenant's future buffer hit rate takes place at 11:00 a.m. and the historical prediction time point closest to it is 10:30 a.m., the preset historical time period is 10:30:00-11:00:00. Alternatively, if the server's current prediction time point is 11:00 a.m., the preset prediction time point is 8:00 a.m., and the server's preset period is half an hour, the preset historical time periods are 08:00:00-08:30:00, 08:30:00-09:00:00, 09:00:00-09:30:00, 09:30:00-10:00:00, 10:00:00-10:30:00, and 10:30:00-11:00:00. Alternatively, the preset historical time period may be any three of those 6 periods, e.g., 08:00:00-08:30:00, 09:00:00-09:30:00, and 10:00:00-10:30:00.
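The splitting of the span between the preset prediction time point and the current prediction time point into P adjacent periods of one cycle each can be sketched as follows (times as "HH:MM" strings; the helper name is an assumption):

```python
from datetime import datetime, timedelta

def history_periods(first_point, current_point, cycle_min):
    """Split [first_point, current_point] into adjacent periods of cycle_min minutes."""
    fmt = "%H:%M"
    t0 = datetime.strptime(first_point, fmt)
    end = datetime.strptime(current_point, fmt)
    step = timedelta(minutes=cycle_min)
    periods = []
    while t0 + step <= end:
        periods.append((t0.strftime(fmt), (t0 + step).strftime(fmt)))
        t0 += step
    return periods

# 8:00 a.m. to 11:00 a.m. with a 30-minute cycle gives the P = 6 periods
# listed in the example above.
periods = history_periods("08:00", "11:00", 30)
print(len(periods), periods[0], periods[-1])
# 6 ('08:00', '08:30') ('10:30', '11:00')
```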
Optionally, the preset rules include, but are not limited to: calculating, from N (N is a positive integer, 1 <= N <= P) historical buffer hit rates of the target tenant, the average of the N historical buffer hit rates, and determining that average as the target tenant's future buffer hit rate; or predicting the target tenant's future buffer hit rate from the variation trend of M (M is a positive integer, 2 <= M <= P) historical buffer hit rates of the target tenant.
For example, if the preset historical time period is 10:30:00-11:00:00 and the server calculates at 11:00 a.m. that the target tenant's historical buffer hit rate over 10:30:00-11:00:00 is 95%, then 95% is determined as the target tenant's future buffer hit rate. Alternatively, if the preset historical time periods are 08:00:00-08:30:00, 08:30:00-09:00:00, 09:00:00-09:30:00, 09:30:00-10:00:00, 10:00:00-10:30:00, and 10:30:00-11:00:00, and at 11:00 a.m. the server obtains the target tenant's historical buffer hit rates over those 6 periods as 95%, 90%, 85%, 95%, 85%, and 90% respectively, then the average of the 6 historical buffer hit rates, 90%, is determined as the target tenant's future buffer hit rate. Alternatively, with the same 6 periods and the same hit rates 95%, 90%, 85%, 95%, 85%, and 90%, the target tenant's future buffer hit rate may instead be predicted as 95% from the variation trend of the 6 historical buffer hit rates.
S202: compare the future buffer hit rate with the target tenant's preset expected buffer hit rate.
Specifically, the expected buffer hit rate is the target tenant's average demand on the buffer hit rate. The target tenant can configure the expected buffer hit rate when registering on the server. The target tenant may set only an expected buffer hit rate, or may set three buffer hit rates at once: a maximum buffer hit rate, an expected buffer hit rate, and a minimum buffer hit rate, where maximum buffer hit rate > expected buffer hit rate > minimum buffer hit rate; the maximum and minimum buffer hit rates are described in detail later. The server determines, from the relationship between the target tenant's future buffer hit rate and the expected buffer hit rate, whether the capacity of the target buffer corresponding to the target tenant needs to be adjusted. If the future buffer hit rate is greater than or less than the expected buffer hit rate, the judgment result is that the capacity of the target buffer needs to be adjusted; if the future buffer hit rate equals the expected buffer hit rate, the capacity of the target buffer may be left unchanged, or whether it needs adjusting may be determined according to actual needs.
Optionally, comparing the future buffer hit rate with the target tenant's preset expected buffer hit rate includes:
judging whether the difference obtained by subtracting the expected buffer hit rate from the future buffer hit rate is greater than a first preset threshold, or judging whether the difference obtained by subtracting the future buffer hit rate from the expected buffer hit rate is greater than a second preset threshold; if the difference obtained by subtracting the expected buffer hit rate from the future buffer hit rate is greater than the first preset threshold, the future buffer hit rate is considered greater than the expected buffer hit rate; if the difference obtained by subtracting the future buffer hit rate from the expected buffer hit rate is greater than the second preset threshold, the future buffer hit rate is considered less than the expected buffer hit rate.
The first preset threshold and the second preset threshold can be set by default by the server's operating system or manually by the server's administrator, and they may be the same or different; the embodiments of the present invention impose no specific limitation. By setting floating margins (the first preset threshold and the second preset threshold), whether the capacity of the target buffer actually needs to be adjusted can be judged more flexibly. For example, if the first preset threshold is 2%, the future buffer hit rate is 95%, and the expected buffer hit rate is 90%, then the difference obtained by subtracting the expected buffer hit rate from the future buffer hit rate is greater than 2%, and the judgment result is that the future buffer hit rate is greater than the expected buffer hit rate. Alternatively, if the second preset threshold is 2%, the future buffer hit rate is 85%, and the expected buffer hit rate is 90%, then the difference obtained by subtracting the future buffer hit rate from the expected buffer hit rate is greater than 2%, and the judgment result is that the future buffer hit rate is less than the expected buffer hit rate.
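The floating-margin comparison can be sketched as follows (a minimal illustration; the 2% thresholds come from the example above, the names are invented):

```python
def compare_hit_rates(future, expected, first_threshold=0.02, second_threshold=0.02):
    if future - expected > first_threshold:
        return "greater"   # future hit rate exceeds expectation: release pages
    if expected - future > second_threshold:
        return "less"      # future hit rate below expectation: add pages
    return "equal"         # within the floating margin: no adjustment needed

print(compare_hit_rates(0.95, 0.90))  # greater
print(compare_hit_rates(0.85, 0.90))  # less
print(compare_hit_rates(0.91, 0.90))  # equal
```

The third case shows the point of the margins: a 1% overshoot is treated as "equal", so small fluctuations do not trigger a capacity adjustment.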
S203: if the future buffer hit rate is greater than the expected buffer hit rate, calculate a first capacity according to a first preset algorithm from the difference between the future buffer hit rate and the target tenant's preset first preset buffer hit rate, and release data pages from the target buffer corresponding to the target tenant according to the first capacity, where the first preset buffer hit rate is less than the future buffer hit rate.
Specifically, if the target tenant's future buffer hit rate is greater than the expected buffer hit rate, the difference between the target tenant's future buffer hit rate and the target tenant's preset first preset buffer hit rate is used to calculate the capacity of data pages the target tenant has to spare, and data pages of the first capacity are released from the target tenant accordingly, avoiding resource waste. The data pages released from the target buffer can be used by tenants that need to increase their buffer capacity (including the target tenant itself, other tenants among the multiple tenants, and tenants newly added by the server) or by other services on the server.
Optionally, the first preset algorithm is C1 = (A - B) * D, where C1 is the first capacity, A is the future buffer hit rate, B is the first preset buffer hit rate, and D is the preset buffer capacity. The preset buffer capacity includes at least one of: the target tenant's preset minimum buffer capacity, the target tenant's preset expected buffer capacity, the target tenant's preset maximum buffer capacity, and the current capacity of the target buffer. When the target tenant registers on the server, the minimum buffer capacity, the expected buffer capacity, and the maximum buffer capacity can each be configured, or only one of them may be set. For example, the target tenant may set a minimum buffer capacity of 1GB, an expected buffer capacity of 1.5GB, and a maximum buffer capacity of 2GB.
Optionally, the first preset algorithm is C1 = min((A - B) * D, E), where C1 is the first capacity, A is the future buffer hit rate, B is the first preset buffer hit rate, D is the preset buffer capacity, the formula (A - B) * D gives the data page capacity the target tenant can spare, and E is the data page capacity the other tenants among the multiple tenants are waiting to add. If the target tenant's spare data page capacity is greater than or equal to the data page capacity the other tenants are waiting to add, the first capacity equals the data page capacity the other tenants are waiting to add; if the target tenant's spare data page capacity is less than the data page capacity the other tenants are waiting to add, the first capacity equals the target tenant's spare data page capacity. The data page capacity the target buffer actually needs to release can thus be determined according to the demand of the other tenants, which both guarantees the target tenant's performance requirement and supplies data pages to other tenants that need them, improving the overall performance of the multiple tenants and the utilization of the buffers.
Optionally, calculating the first capacity according to the first preset algorithm from the difference between the future buffer hit rate and the target tenant's preset first preset buffer hit rate includes:
using the expected buffer hit rate as the target tenant's preset first preset buffer hit rate, and calculating the first capacity according to the first preset algorithm from the difference between the future buffer hit rate and the expected buffer hit rate and the preset buffer capacity.
Specifically, if the target tenant set only an expected buffer hit rate in the initial configuration, and the target tenant's future buffer hit rate is judged to be greater than the expected buffer hit rate, this indicates that the server can meet the target tenant's expected buffer hit rate requirement in the coming period of time. In this case, the difference between the target tenant's future buffer hit rate and the expected buffer hit rate can be used to obtain the capacity of data pages the target tenant has to spare, i.e., the first capacity, and the spare data pages of the first capacity are released, avoiding resource waste.
For example, if the future buffer hit rate is 95%, the expected buffer hit rate is 90%, and the current capacity of the target buffer is 2GB, then according to the first preset algorithm C1 = (A - B) * D, the first capacity is C1 = (95% - 90%) * 2GB = 102.4MB (megabytes).
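The worked example can be checked numerically; this is only a sketch, with capacities expressed in MB (1GB = 1024MB):

```python
A = 0.95          # future buffer hit rate
B = 0.90          # first preset (here: expected) buffer hit rate
D_mb = 2 * 1024   # current target buffer capacity, 2GB in MB

C1 = (A - B) * D_mb
print(round(C1, 1))  # 102.4
```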
Optionally, calculating the first capacity according to the first preset algorithm from the difference between the future buffer hit rate and the target tenant's preset first preset buffer hit rate includes:
judging whether the future buffer hit rate is greater than the target tenant's preset maximum buffer hit rate, where the maximum buffer hit rate is greater than the expected buffer hit rate;
if so, using the maximum buffer hit rate as the target tenant's preset first preset buffer hit rate, and calculating the first capacity according to the first preset algorithm from the difference between the future buffer hit rate and the maximum buffer hit rate and the preset buffer capacity.
Specifically, the maximum buffer hit rate is the target tenant's maximum demand on the buffer hit rate; the target tenant can configure it when registering on the server. If the target tenant set both an expected buffer hit rate and a maximum buffer hit rate in the initial configuration, and the target tenant's future buffer hit rate is judged to be greater than the expected buffer hit rate, it must further be judged whether the future buffer hit rate is greater than the maximum buffer hit rate. If so, this indicates that the server can meet the target tenant's maximum buffer hit rate requirement in the coming period of time. In this case, the difference between the target tenant's future buffer hit rate and the maximum buffer hit rate can be used to obtain the capacity of data pages the target tenant has to spare, i.e., the first capacity; this first capacity is the capacity of data pages the target tenant can at least release at present, and the spare data pages are released, avoiding resource waste.
For example, the future buffer hit rate is 95%, the expected buffer hit rate is 90%, the maximum buffer hit rate is 94%, and the minimum buffer capacity preset by the target tenant is 1 GB. Then, according to the first preset algorithm C1 = (A - B) * D, the first capacity is C1 = (95% - 94%) * 1 GB = 10.24 MB.
Optionally, if the future buffer hit rate of the target tenant is judged to be greater than the maximum buffer hit rate, which indicates that the server can satisfy the maximum buffer hit rate requirement of the target tenant over the coming period of time, then, while still meeting the expected buffer hit rate requirement of the target tenant, the difference between the future buffer hit rate and the expected buffer hit rate of the target tenant may also be used in the calculation to obtain the capacity of data pages the target tenant can release as the first capacity; in this case the first capacity indicates the maximum capacity of data pages the target tenant can currently release. The surplus data pages of the target tenant are released, avoiding resource waste.
For example, the future buffer hit rate is 95%, the expected buffer hit rate is 90%, the maximum buffer hit rate is 94%, and the minimum buffer capacity preset by the target tenant is 1 GB. Then, according to the first preset algorithm C1 = (A - B) * D, the first capacity is C1 = (95% - 90%) * 1 GB = 51.2 MB.
Optionally, releasing data pages from the target buffer corresponding to the target tenant according to the first capacity includes:

Releasing data pages from the target buffer corresponding to the target tenant according to the first capacity, and adding the released data pages of the first capacity to a free buffer, where the data pages in the free buffer are available to any tenant among the multiple tenants that needs additional data pages.
Specifically, after the first capacity is calculated, data pages of the target buffer are released according to the calculated first capacity, and the released data pages are added to the free buffer of the server. The data pages in the free buffer are available to tenants that need additional data pages, or to other services of the server, improving the overall performance of the multiple tenants and the utilization of the buffer. The tenants that need additional data pages include, but are not limited to: the target tenant, other tenants among the multiple tenants, and tenants newly added to the server.
For example, if the first capacity is 10.2 MB and the capacity of each data page is 8 KB, the target buffer needs to release 10.2 MB / 8 KB ≈ 1306 data pages, and the 1306 data pages released from the target buffer are added to the free buffer.
Optionally, if only the expected buffer hit rate was set when the target tenant was initialized, then when the future buffer hit rate is judged to be greater than the expected buffer hit rate, the data pages of the first capacity released from the target buffer can be added to the free buffer; alternatively, if both the expected buffer hit rate and the maximum buffer hit rate were set when the target tenant was initialized, then when the future buffer hit rate is judged to be greater than the maximum buffer hit rate, the data pages of the first capacity released from the target buffer can be added to the free buffer. The surplus data pages of the target tenant are thereby released, avoiding resource waste.
Optionally, calculating the first capacity according to the first preset algorithm based on the difference between the future buffer hit rate and the first preset buffer hit rate preset by the target tenant includes:

Judging whether the future buffer hit rate is greater than the maximum buffer hit rate preset by the target tenant, where the maximum buffer hit rate is greater than the expected buffer hit rate;

If not, taking the expected buffer hit rate as the first preset buffer hit rate of the target tenant, and calculating the first capacity according to the first preset algorithm based on the difference between the future buffer hit rate and the expected buffer hit rate and the preset buffer capacity.
Specifically, if the target tenant set both the expected buffer hit rate and the maximum buffer hit rate during initial setup, and the future buffer hit rate of the target tenant is judged to be greater than the expected buffer hit rate, it is further judged whether the future buffer hit rate is greater than the maximum buffer hit rate. If not, this indicates that, over the coming period of time, the server cannot satisfy the maximum buffer hit rate requirement of the target tenant but can still satisfy its expected buffer hit rate requirement. In this case, the difference between the future buffer hit rate and the expected buffer hit rate of the target tenant can be used in the calculation to obtain the capacity of the surplus data pages of the target tenant, i.e., the first capacity. The surplus data pages of the target tenant are released, avoiding resource waste.
For example, the future buffer hit rate is 95%, the expected buffer hit rate is 90%, the maximum buffer hit rate is 96%, and the expected buffer capacity preset by the target tenant is 1.5 GB. Then, according to the first preset algorithm C1 = (A - B) * D, the first capacity is C1 = (95% - 90%) * 1.5 GB = 76.8 MB.
Optionally, releasing data pages from the target buffer corresponding to the target tenant according to the first capacity includes:

Releasing data pages from the target buffer corresponding to the target tenant according to the first capacity, where the released data pages are for use by other tenants among the multiple tenants that need additional data pages, and the first capacity is less than or equal to the capacity of data pages to be added by those other tenants.
Specifically, after the first capacity is calculated, data pages of the target buffer are released according to the calculated first capacity, and the released data pages are added to the buffers corresponding to the other tenants that need additional data pages, for use by those tenants. Part of the data pages are thereby provided to other tenants that need them while the access performance of the target tenant is guaranteed, improving the utilization of the buffer.
It should be noted that when it is judged that other tenants need additional data pages and the data pages remaining in the free buffer are insufficient, the first capacity is calculated and data pages in the target buffer are released according to the first capacity, and the data pages released from the target buffer are added to the buffers of the other tenants. Data pages in the target buffer can thus be released according to the demand of other tenants, dynamically allocating data pages among different tenants, which helps improve the utilization of the buffer.
For example, the future buffer hit rate is 95%, the expected buffer hit rate is 90%, the maximum buffer hit rate is 96%, the expected buffer capacity preset by the target tenant is 1.5 GB, and two tenants among the multiple tenants need additional data pages: the first tenant needs 10 MB of data pages and the second tenant needs 20 MB of data pages, while no data pages remain in the free buffer at this time. Then, according to the first preset algorithm C1 = min((A - B) * D, E), the data page capacity the target tenant can provide is (A - B) * D = (95% - 90%) * 1.5 GB = 76.8 MB, and E = 10 MB + 20 MB = 30 MB, so the first capacity is C1 = min(76.8 MB, 30 MB) = 30 MB. Then 30 MB of data pages are released from the target buffer, 10 MB of which are added to the buffer corresponding to the first tenant and the other 20 MB to the buffer corresponding to the second tenant.
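The capped variant of the first preset algorithm, C1 = min((A - B) * D, E), can be sketched as follows; the function name, the tenant names, and the MB units are illustrative assumptions:

```python
# Sketch of C1 = min((A - B) * D, E): the target tenant releases no more
# than the capacity other tenants actually request (E).

def capped_first_capacity_mb(future_hit_rate, preset_hit_rate,
                             buffer_capacity_mb, requested_mb):
    # (A - B) * D: surplus capacity the target tenant could provide
    providable = (future_hit_rate - preset_hit_rate) * buffer_capacity_mb
    return min(providable, requested_mb)

# Example from the text: A = 95%, B = 90%, D = 1.5 GB = 1536 MB,
# E = 10 MB (first tenant) + 20 MB (second tenant) = 30 MB
requests = {"tenant_1": 10, "tenant_2": 20}
c1 = capped_first_capacity_mb(0.95, 0.90, 1536, sum(requests.values()))
assert c1 == 30  # the 76.8 MB providable is capped by the 30 MB requested
```

The cap prevents the target buffer from shrinking further than the demand of the other tenants justifies.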
It should be noted that the step of releasing the data pages of the target buffer described in the embodiments of the present invention is specifically: first clearing the data in the data pages to be released, and then adding the pages to the free buffer (or to the buffer corresponding to another tenant that needs additional data pages, or to another region of the server memory). Before the data in a given data page is cleared, it may first be judged whether the data in the data page has been updated (modified) or whether new data has been added to the data page; if so, the data in the data page needs to be written to a hardware storage node such as a disk first, so that the data in the hardware storage node is updated, and only then is the data in the data page cleared. This prevents other tenants from accessing the data of the target tenant when subsequently using the data page, improving the data security of the multiple tenants.
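The release step above (write back a modified page before clearing it) can be sketched as follows; the `Page` class and the `write_to_storage` stand-in are illustrative assumptions, not part of the patent:

```python
# Sketch of releasing a data page safely: a modified ("dirty") page is
# written back to the hardware storage node before its contents are
# cleared, so no other tenant can later read the target tenant's data.

from dataclasses import dataclass, field

@dataclass
class Page:
    data: bytearray = field(default_factory=lambda: bytearray(8192))
    dirty: bool = False  # set when the page is updated or extended

def write_to_storage(page: Page) -> None:
    page.dirty = False   # stand-in for an actual disk write

def release_page(page: Page) -> Page:
    if page.dirty:                         # data updated or newly added?
        write_to_storage(page)             # persist before clearing
    page.data = bytearray(len(page.data))  # clear: zero the contents
    return page                            # safe to hand to another tenant

p = Page(); p.data[:5] = b"hello"; p.dirty = True
released = release_page(p)
assert not released.dirty and released.data[:5] == bytearray(5)
```

Clearing after write-back preserves both durability and the inter-tenant isolation the paragraph describes.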
S204: if the future buffer hit rate is less than the expected buffer hit rate, calculating a second capacity according to a second preset algorithm based on the difference between a second preset buffer hit rate preset by the target tenant and the future buffer hit rate, and adding data pages to the target buffer according to the second capacity; where the second preset buffer hit rate is greater than the future buffer hit rate.
Specifically, if the future buffer hit rate of the target tenant is less than the expected buffer hit rate, the difference between the expected buffer hit rate and the future buffer hit rate is used in the calculation to obtain the capacity of data pages the target tenant currently needs, so that data pages are added for the target tenant and the access performance requirement of the target tenant is guaranteed.
It can be understood that the total capacity of data pages in the target buffer has a crucial influence on access performance: the larger the total capacity of data pages in the target buffer, the more data can be cached, and when the target tenant accesses data through the server, the higher the probability that the server finds the data in the buffer, the faster the response, and the better the access performance.
Specifically, the second preset algorithm is: C2 = min((F - A) * D, G), where C2 is the second capacity, F is the second preset buffer hit rate, A is the future buffer hit rate, D is the preset buffer capacity, (F - A) * D is the data page capacity to be added for the target tenant, and G is the sum of the data page capacity to be released by other tenants among the multiple tenants and the data page capacity in the free buffer. If the data page capacity to be added for the target tenant is greater than or equal to that sum, the second capacity equals the sum; if the data page capacity to be added for the target tenant is less than that sum, the second capacity equals the data page capacity to be added for the target tenant.
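The second preset algorithm can be sketched symmetrically to the first; the function name and MB units are illustrative assumptions:

```python
# Sketch of the second preset algorithm C2 = min((F - A) * D, G), where
# F is the second preset buffer hit rate, A the future buffer hit rate,
# D the preset buffer capacity, and G the pages other tenants can release
# plus the pages already in the free buffer.

def second_capacity_mb(preset_hit_rate, future_hit_rate,
                       buffer_capacity_mb, available_mb):
    # (F - A) * D: capacity the target tenant needs to gain
    needed = (preset_hit_rate - future_hit_rate) * buffer_capacity_mb
    return min(needed, available_mb)

# One example from the text: F = 90%, A = 85%, D = 2 GB = 2048 MB,
# G = 20 MB releasable + 1 GB free = 1044 MB
c2 = second_capacity_mb(0.90, 0.85, 2048, 1044)
assert abs(c2 - 102.4) < 1e-6  # the need, fully covered by what is available
```

When G is the binding term, the tenant receives only what the free buffer and the other tenants can supply.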
The preset buffer capacity includes at least one of: the minimum buffer capacity preset by the target tenant, the expected buffer capacity preset by the target tenant, the maximum buffer capacity preset by the target tenant, and the current capacity of the target buffer. When the target tenant registers with the server, the minimum buffer capacity, the expected buffer capacity, and the maximum buffer capacity can each be configured.
Optionally, calculating the second capacity according to the second preset algorithm based on the difference between the second preset buffer hit rate preset by the target tenant and the future buffer hit rate includes:

Taking the expected buffer hit rate as the second preset buffer hit rate of the target tenant, and calculating the second capacity according to the second preset algorithm based on the difference between the expected buffer hit rate and the future buffer hit rate and the preset buffer capacity.
Specifically, if the target tenant set only the expected buffer hit rate during initial setup, and the future buffer hit rate of the target tenant is judged to be less than the expected buffer hit rate, this indicates that the server cannot satisfy the expected buffer hit rate requirement of the target tenant over the coming period of time. To guarantee the expected buffer hit rate requirement of the target tenant, the difference between the expected buffer hit rate and the future buffer hit rate of the target tenant can be used in the calculation to obtain the capacity of data pages that needs to be added for the target tenant, i.e., the second capacity, so that data pages of the second capacity are added for the target tenant and the access performance requirement of the target tenant is guaranteed.
For example, the future buffer hit rate is 85%, the expected buffer hit rate is 90%, the current capacity of the target buffer is 2 GB, and the sum of the total data page capacity to be released by other tenants among the multiple tenants and the data page capacity in the free buffer is 20 MB + 1 GB = 1044 MB. Then, according to the second preset algorithm C2 = min((F - A) * D, G), the data page capacity to be added for the target tenant is (F - A) * D = (90% - 85%) * 2 GB = 102.4 MB, so the second capacity is C2 = min(102.4 MB, 1044 MB) = 102.4 MB, and 102.4 MB of data pages are added to the target buffer. Since the data pages in the free buffer are sufficient for the target tenant, 102.4 MB of data pages from the free buffer are added to the target buffer.
Optionally, calculating the second capacity according to the second preset algorithm based on the difference between the second preset buffer hit rate preset by the target tenant and the future buffer hit rate includes:

Judging whether the future buffer hit rate is less than a minimum buffer hit rate preset by the target tenant, where the minimum buffer hit rate is less than the expected buffer hit rate;

If so, taking the minimum buffer hit rate as the second preset buffer hit rate of the target tenant, and calculating the second capacity according to the second preset algorithm based on the difference between the minimum buffer hit rate and the future buffer hit rate and the preset buffer capacity.
Specifically, the minimum buffer hit rate is the minimum requirement that the target tenant places on the buffer hit rate; the target tenant may configure the minimum buffer hit rate when registering with the server. If the target tenant set both the expected buffer hit rate and the minimum buffer hit rate during initial setup, and the future buffer hit rate of the target tenant is judged to be less than the expected buffer hit rate, it is further judged whether the future buffer hit rate is less than the minimum buffer hit rate. If so, this indicates that the server cannot satisfy the minimum buffer hit rate requirement of the target tenant over the coming period of time. To guarantee the minimum buffer hit rate requirement of the target tenant, the difference between the minimum buffer hit rate and the future buffer hit rate of the target tenant can be used in the calculation to conclude that the target tenant needs at least the second capacity of additional data pages, so that data pages are added for the target tenant and the access performance requirement of the target tenant is guaranteed.
For example, the future buffer hit rate is 80%, the expected buffer hit rate is 90%, the minimum buffer hit rate is 85%, the preset minimum buffer capacity is 1 GB, and the sum of the total data page capacity to be released by other tenants among the multiple tenants and the data page capacity in the free buffer is 50 MB + 20 MB = 70 MB. Then, according to the second preset algorithm C2 = min((F - A) * D, G), the data page capacity to be added for the target tenant is (85% - 80%) * 1 GB = 51.2 MB, so the second capacity is C2 = min(51.2 MB, 70 MB) = 51.2 MB. The 20 MB of data pages remaining in the free buffer can then preferentially be added to the target buffer, 31.2 MB of data pages are released from the buffers corresponding to the tenants that can release data pages, and the released 31.2 MB of data pages are added to the target buffer.
Optionally, calculating the second capacity according to the second preset algorithm based on the difference between the second preset buffer hit rate preset by the target tenant and the future buffer hit rate includes:

Judging whether the future buffer hit rate is less than the minimum buffer hit rate preset by the target tenant, where the minimum buffer hit rate is less than the expected buffer hit rate;

If not, taking the expected buffer hit rate as the second preset buffer hit rate of the target tenant, and calculating the second capacity according to the second preset algorithm based on the difference between the expected buffer hit rate and the future buffer hit rate and the preset buffer capacity.
Specifically, if the target tenant set both the expected buffer hit rate and the minimum buffer hit rate during initial setup, and the future buffer hit rate of the target tenant is judged to be less than the expected buffer hit rate, it is further judged whether the future buffer hit rate is less than the minimum buffer hit rate. If not, this indicates that, over the coming period of time, the server can satisfy the minimum buffer hit rate of the target tenant but cannot satisfy its expected buffer hit rate requirement. To guarantee the expected buffer hit rate requirement of the target tenant, the difference between the expected buffer hit rate and the future buffer hit rate of the target tenant can be used in the calculation to conclude that the target tenant needs at least the second capacity of additional data pages, so that data pages are added for the target tenant and the access performance requirement of the target tenant is guaranteed.
For example, the future buffer hit rate is 85%, the expected buffer hit rate is 90%, the minimum buffer hit rate is 80%, the expected buffer capacity preset by the target tenant is 1.5 GB, and the sum of the total data page capacity to be released by other tenants among the multiple tenants and the data page capacity in the free buffer is 50 MB + 0 MB = 50 MB. Then, according to the second preset algorithm C2 = min((F - A) * D, G), the data page capacity to be added for the target tenant is (F - A) * D = (90% - 85%) * 1.5 GB = 76.8 MB, so the second capacity is C2 = min(76.8 MB, 50 MB) = 50 MB. Since no data pages remain in the free buffer, 50 MB of data pages are released from the buffers corresponding to the tenants that can release data pages, and the released 50 MB of data pages are added to the target buffer.
Optionally, if the future buffer hit rate of the target tenant is less than the minimum buffer hit rate, the target tenant belongs to a first-priority tenant; if the future buffer hit rate of the target tenant is greater than the minimum buffer hit rate and less than the expected buffer hit rate, the target tenant belongs to a second-priority tenant. A first-priority tenant has the privilege, compared with a second-priority tenant, of being allocated data pages in the buffer first. When the data pages in the free buffer are insufficient, data pages can preferentially be provided to the tenants with the highest demand, guaranteeing the overall performance of the multiple tenants as far as possible. For example, if the future buffer hit rate corresponding to tenant T1 is less than the minimum buffer hit rate corresponding to tenant T1, and the future buffer hit rate corresponding to tenant T2 is greater than the minimum buffer hit rate corresponding to tenant T2 but less than the expected buffer hit rate corresponding to tenant T2, the server first adds data pages to the buffer for tenant T1 and then adds data pages to the buffer for tenant T2, preferentially guaranteeing the buffer requirement of the higher-priority tenant.
It can be seen that, in the method depicted in Fig. 2, the capacity of the target buffer corresponding to the target tenant among the multiple tenants is adjusted dynamically according to the predicted future buffer hit rate of the target tenant and the buffer hit rate preset by the target tenant, which both avoids degrading the tenant's access performance because the buffer is too small and avoids wasting resources because the buffer is too large.
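The two-level priority rule described above (tenants predicted to fall below their minimum hit rate are served before tenants who merely miss their expected rate) can be sketched as follows; the tuple layout and tenant records are illustrative assumptions:

```python
# Sketch of the priority rule: first-priority tenants (future hit rate
# below their minimum) are granted pages before second-priority tenants
# (between their minimum and expected hit rates).
# Each tenant is (name, future, minimum, expected) -- an assumed layout.

def priority(tenant):
    name, future, minimum, expected = tenant
    if future < minimum:
        return 1            # first priority: below the minimum hit rate
    if future < expected:
        return 2            # second priority: below the expected rate
    return 3                # no extra pages needed

tenants = [("T2", 0.87, 0.85, 0.90),   # above minimum, below expected
           ("T1", 0.80, 0.85, 0.90)]   # below its minimum hit rate
order = [t[0] for t in sorted(tenants, key=priority)]
assert order == ["T1", "T2"]  # T1 is served first, as in the T1/T2 example
```

Sorting by this key reproduces the T1-before-T2 ordering of the example in the text.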
To facilitate better implementation of the above multi-tenant-oriented buffer management method of the embodiments of the present invention, the present invention further provides a server for implementing the above method.
Referring to Fig. 3, a structural schematic diagram of a server provided by an embodiment of the present invention. The server 30 shown in Fig. 3 may take the form of a general-purpose computing device. The components of the server 30 may include, but are not limited to: one or more processors or processing units 301, a memory 302, and a bus 303 connecting the different system components (including the processor 301 and the memory 302). Wherein,
The bus 303 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus structures. By way of example, these architectures include, but are not limited to, the industry standard architecture (Industry Standard Architecture, ISA) bus, the micro channel architecture (Micro Channel Architecture, MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the peripheral component interconnect (Peripheral Component Interconnect, PCI) bus.
The server 30 typically comprises a variety of computer-system-readable media. These media may be any usable media that can be accessed by the server 30, including volatile and non-volatile media, and removable and non-removable media.
The memory 302 may include computer-system-readable media in the form of volatile memory, such as a random access memory (Random Access Memory, RAM) 3021 and/or a cache memory 3022. The server 30 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, the storage system 3023 may be used to read and write non-removable, non-volatile magnetic media (not shown in Fig. 3, commonly referred to as a "hard disk drive"). Although not shown in Fig. 3, a disk drive for reading and writing a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disc drive for reading and writing a removable non-volatile optical disc (such as a CD-ROM, DVD-ROM, or other optical media) may also be provided. In these cases, each drive may be connected to the bus 303 through one or more data media interfaces. The memory 302 is used to store data, specifically including application data, result data produced by the processor, and data the processor needs for execution. The memory 302 may include at least one program product having a set of (for example, at least one) program modules configured to perform the multi-tenant-oriented buffer management method described in the various embodiments of the present invention.
The server 30 may also have a program/utility 3024 having a set of (at least one) program modules 3025, which may be stored, for example, in the memory 302. Such program modules 3025 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 3025 generally perform the multi-tenant-oriented buffer management method described in the present invention.
The server 30 may also be connected to one or more external devices 32 (input devices such as a keyboard, a mouse, a voice input device, or a touch input device, and output devices such as a display 31, a loudspeaker, or a printer). The server 30 may also include a communication interface 304 that allows communication with other computer devices 34, for example over a wireless network in a distributed computing environment via a local area network (LAN), a wide area network (WAN), and/or a public network (such as the Internet). The other computer devices 34 may include servers and client computers that execute application programs associated with data access and directory services (including but not limited to desktop computers, laptops, tablet computers, mobile phones, and other mobile terminal devices); a client computer may specifically be the client computer corresponding to each tenant, and the client computers of multiple tenants may access the server 30 over, for example, a local area network or the Internet, so that each tenant obtains the data it needs. The communication interface 304 is an example of a communication medium. Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired media, such as a wired network or a direct-wired connection, and wireless media, such as acoustic, RF (Radio Frequency), infrared, and other wireless media. The term computer-readable media as used herein includes both storage media and communication media.
The server 30 may also connect, through the system bus, to at least one hardware database facility 33 (including databases 1 to N). The data of multiple tenants can be stored on the database facilities, with the data of each tenant isolated from the others, ensuring the security of the data of the multiple tenants. When the server 30 receives a request from a tenant, via a client computer, to access target data, the server 30 first checks the buffer of the operating system to determine whether the target data the tenant wants to query has already been read in. If the target data was previously read in, the server 30 takes the target data out of the buffer and provides it to the tenant that made the access request; if the target data has not yet been read in, the server 30 accesses the associated database, takes out the target data, and provides it to the tenant that made the access request.
It will be understood by those skilled in the art that the structure of the server 30 shown in Fig. 3 does not constitute a limitation on the server, which may include more or fewer components than illustrated, combine certain components, or arrange components differently. It should be understood that, although not shown in the drawings, other hardware and/or software modules can be used in conjunction with the server 30, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID (Redundant Arrays of Independent Disks) systems, tape drives, and data backup storage systems.
The processor 301 is used to read the multi-tenant-oriented buffer management code stored in the memory 302 and to perform the following operations:
The processor 301 predicts, according to the history buffer hit rate of a target tenant within a preset historical time period and according to preset rules, the future buffer hit rate of the target tenant, the target tenant being one of the multiple tenants;

The processor 301 compares the magnitude of the future buffer hit rate with the expected buffer hit rate preset by the target tenant;

If the future buffer hit rate is greater than the expected buffer hit rate, the processor 301 calculates a first capacity according to a first preset algorithm based on the difference between the future buffer hit rate and the first preset buffer hit rate preset by the target tenant, and releases data pages from the target buffer corresponding to the target tenant according to the first capacity, where the first preset buffer hit rate is less than the future buffer hit rate;

If the future buffer hit rate is less than the expected buffer hit rate, the processor 301 calculates a second capacity according to a second preset algorithm based on the difference between the second preset buffer hit rate preset by the target tenant and the future buffer hit rate, and adds data pages to the target buffer according to the second capacity, where the second preset buffer hit rate is greater than the future buffer hit rate.
The server 30 can thus dynamically adjust the capacity of the target buffer corresponding to the target tenant according to the size relation between the future buffer hit rate requirement of the target tenant and the buffer hit rate requirement preset by the target tenant.
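Taken together, the processor's comparison and the two preset algorithms form a simple control step; the sketch below uses a single preset capacity D and MB units as simplifying assumptions (the patent allows several capacity presets):

```python
# Sketch of the overall adjustment: compare the predicted future hit rate
# with the expected hit rate, then release (first preset algorithm) or
# add (second preset algorithm) data pages. Negative result = release,
# positive = add. Names and MB units are illustrative.

def adjust_mb(future, expected, capacity_mb, available_mb):
    if future > expected:     # surplus: release C1 = (A - B) * D
        return -((future - expected) * capacity_mb)
    if future < expected:     # shortfall: add C2 = min((F - A) * D, G)
        return min((expected - future) * capacity_mb, available_mb)
    return 0.0                # prediction matches the requirement

assert adjust_mb(0.95, 0.90, 2048, 0) < 0      # release surplus pages
assert adjust_mb(0.85, 0.90, 2048, 1044) > 0   # add pages to the buffer
assert adjust_mb(0.90, 0.90, 2048, 0) == 0.0   # no adjustment needed
```

Running this step per tenant per prediction interval reproduces the dynamic resizing behavior the claims describe.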
Optionally, the processor 301 calculating the first capacity according to the first preset algorithm based on the difference between the future buffer hit rate and the first preset buffer hit rate preset by the target tenant includes:

The processor 301 taking the expected buffer hit rate as the first preset buffer hit rate of the target tenant, and calculating the first capacity according to the first preset algorithm based on the difference between the future buffer hit rate and the expected buffer hit rate and the preset buffer capacity.

While the expected buffer hit rate requirement of the target tenant is met, the server 30 can use the difference between the future buffer hit rate of the target tenant and the expected buffer hit rate to calculate the capacity of the surplus data pages of the target tenant, so that the surplus data pages of the target tenant are released and resource waste is avoided.
Optionally, when processor 301 calculates the second capacity according to the second preset algorithm from the difference between the second preset buffer area hit rate of the target tenant and the future buffer area hit rate, this includes: processor 301 takes the expected buffer area hit rate as the second preset buffer area hit rate of the target tenant, and calculates the second capacity according to the second preset algorithm from the difference between the expected buffer area hit rate and the future buffer area hit rate together with a preset buffer area capacity.
To guarantee the target tenant's expected hit-rate demand, server 30 uses the difference between the target tenant's expected buffer area hit rate and the future buffer area hit rate to determine the capacity of the data pages the target tenant currently requires, and adds data pages for the target tenant accordingly, ensuring the tenant's access-performance demand.
Optionally, when processor 301 calculates the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the first preset buffer area hit rate of the target tenant, this includes: processor 301 judges whether the future buffer area hit rate is greater than a preset maximum buffer area hit rate of the target tenant, where the maximum buffer area hit rate is greater than the expected buffer area hit rate; if so, processor 301 takes the maximum buffer area hit rate as the first preset buffer area hit rate of the target tenant, and calculates the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the maximum buffer area hit rate together with a preset buffer area capacity.
While still meeting the target tenant's maximum hit-rate demand, server 30 can thus use the difference between the future buffer area hit rate and the maximum buffer area hit rate to determine the capacity of the data pages the target buffer area has to spare, and release those data pages, avoiding a waste of resources.
Optionally, when processor 301 releases data pages of the first capacity from the target buffer area corresponding to the target tenant, this includes: processor 301 releases data pages of the first capacity from the target buffer area corresponding to the target tenant, and adds the released data pages of the first capacity to a free buffer; the data pages in the free buffer are available to any tenant among the multiple tenants that needs to add data pages.
By adding the data pages released from the target tenant's buffer area to the free buffer, server 30 can both guarantee the target tenant's access-performance demand and release the buffer resources the target tenant has to spare, making them available to other tenants among the multiple tenants that need to increase buffer capacity (including the target tenant itself, the other existing tenants, and tenants newly added by the server) or to other services on the server, improving both the overall performance of the multiple tenants and the resource utilization of the buffer area.
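The free-buffer mechanism described above can be sketched as a shared pool that accepts pages released by one tenant and later hands them to any tenant that needs to grow. This is an illustrative model only; the class and method names are hypothetical:

```python
class FreeBufferPool:
    """Shared pool of released data pages, usable by any tenant."""

    def __init__(self):
        self.pages = []

    def release(self, pages):
        # Pages released from a tenant's target buffer area join the free pool.
        self.pages.extend(pages)

    def allocate(self, count):
        # Hand out up to `count` pages to a tenant that needs to add pages.
        granted, self.pages = self.pages[:count], self.pages[count:]
        return granted
```

A tenant shrinking its buffer would call `release`, and a tenant (or newly added tenant, or another server service) growing its buffer would call `allocate`.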
Optionally, when processor 301 calculates the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the first preset buffer area hit rate of the target tenant, this includes: processor 301 judges whether the future buffer area hit rate is greater than the preset maximum buffer area hit rate of the target tenant, where the maximum buffer area hit rate is greater than the expected buffer area hit rate; if not, processor 301 takes the expected buffer area hit rate as the first preset buffer area hit rate of the target tenant, and calculates the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the expected buffer area hit rate together with a preset buffer area capacity.
While still meeting the target tenant's expected hit-rate demand, server 30 can thus use the difference between the target tenant's future buffer area hit rate and the expected buffer area hit rate to determine the capacity of the data pages the target tenant has to spare, and release those data pages, avoiding a waste of resources.
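The two optional release branches above amount to choosing which preset hit rate to shrink toward: the tenant's maximum hit rate when the prediction exceeds even that, and the expected hit rate otherwise. A minimal, hypothetical sketch of that selection:

```python
def first_preset_hit_rate(future_rate, max_rate, expected_rate):
    """Choose the first preset buffer area hit rate.

    Per the specification, max_rate > expected_rate. If the predicted
    future rate exceeds the maximum target, release down to the maximum;
    otherwise release down to the expected rate.
    """
    return max_rate if future_rate > max_rate else expected_rate
```

The chosen rate then plays the role of B in the first preset algorithm's difference (A - B).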
Optionally, when processor 301 releases data pages of the first capacity from the target buffer area corresponding to the target tenant, this includes: processor 301 releases data pages of the first capacity from the target buffer area corresponding to the target tenant, the released data pages being used by other tenants among the multiple tenants that need to add data pages, where the first capacity is less than or equal to the capacity of the data pages those other tenants intend to add.
When other tenants among the multiple tenants need data pages, server 30 can allocate the data pages released from the target tenant's buffer area to those tenants. This both guarantees the target tenant's access-performance demand and releases the buffer resources the target tenant has to spare for the tenants that need to add data pages, improving both the overall performance of the multiple tenants and the resource utilization of the buffer area.
Optionally, when processor 301 calculates the second capacity according to the second preset algorithm from the difference between the second preset buffer area hit rate of the target tenant and the future buffer area hit rate, this includes: processor 301 judges whether the future buffer area hit rate is less than a preset minimum buffer area hit rate of the target tenant, where the minimum buffer area hit rate is less than the expected buffer area hit rate; if so, processor 301 takes the minimum buffer area hit rate as the second preset buffer area hit rate of the target tenant, and calculates the second capacity according to the second preset algorithm from the difference between the minimum buffer area hit rate and the future buffer area hit rate together with a preset buffer area capacity.
To guarantee the target tenant's minimum hit-rate demand, server 30 uses the difference between the target tenant's minimum buffer area hit rate and the future buffer area hit rate to determine that at least data pages of the second capacity need to be added for the target tenant, and adds data pages accordingly, ensuring the tenant's access-performance demand.
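Symmetrically to the release case, the growth branches choose which preset hit rate to grow toward: the minimum hit rate when even that is not predicted to be met, and the expected hit rate otherwise. A hypothetical sketch:

```python
def second_preset_hit_rate(future_rate, min_rate, expected_rate):
    """Choose the second preset buffer area hit rate.

    Per the specification, min_rate < expected_rate. If the predicted
    future rate falls below even the minimum target, grow at least to the
    minimum; otherwise grow to the expected rate.
    """
    return min_rate if future_rate < min_rate else expected_rate
```

The chosen rate then plays the role of F in the second preset algorithm's difference (F - A).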
Optionally, when processor 301 calculates the second capacity according to the second preset algorithm from the difference between the second preset buffer area hit rate of the target tenant and the future buffer area hit rate, this includes: processor 301 judges whether the future buffer area hit rate is less than the preset minimum buffer area hit rate of the target tenant, where the minimum buffer area hit rate is less than the expected buffer area hit rate; if not, processor 301 takes the expected buffer area hit rate as the second preset buffer area hit rate of the target tenant, and calculates the second capacity according to the second preset algorithm from the difference between the expected buffer area hit rate and the future buffer area hit rate together with a preset buffer area capacity.
To guarantee the target tenant's expected hit-rate demand, server 30 uses the difference between the target tenant's expected buffer area hit rate and the future buffer area hit rate to determine the capacity of the data pages the target tenant currently requires, and adds data pages for the target tenant accordingly, ensuring the tenant's access-performance demand.
Optionally, the first preset algorithm is: C1 = (A - B) * D, where C1 is the first capacity, A is the future buffer area hit rate, B is the first preset buffer area hit rate, and D is the preset buffer area capacity.
Optionally, the first preset algorithm is: C1 = min((A - B) * D, E), where C1 is the first capacity, A is the future buffer area hit rate, B is the first preset buffer area hit rate, D is the preset buffer area capacity, and E is the capacity of the data pages the other tenants among the multiple tenants intend to add.
Optionally, the second preset algorithm is: C2 = min((F - A) * D, G), where C2 is the second capacity, F is the second preset buffer area hit rate, A is the future buffer area hit rate, D is the preset buffer area capacity, and G is the sum of the capacity of the data pages the other tenants among the multiple tenants intend to release and the capacity of the data pages in the free buffer.
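Transcribed directly into code, the three preset algorithms above are one-liners; the variable letters follow the specification (A, B, F are hit rates, D a page capacity, E and G page-capacity caps), and the function names are illustrative:

```python
def c1_basic(A, B, D):
    # C1 = (A - B) * D: pages to release, proportional to the hit-rate surplus.
    return (A - B) * D

def c1_capped(A, B, D, E):
    # C1 = min((A - B) * D, E): never release more than other tenants want to add.
    return min((A - B) * D, E)

def c2_capped(F, A, D, G):
    # C2 = min((F - A) * D, G): never add more than is available from
    # pages being released plus pages already in the free buffer.
    return min((F - A) * D, G)
```

Note that the min() caps tie one tenant's release directly to the demand of others, and one tenant's growth to the actually available supply.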
Optionally, the preset buffer area capacity includes at least one of: the preset minimum buffer area capacity of the target tenant, the preset expected buffer area capacity of the target tenant, the preset maximum buffer area capacity of the target tenant, and the current capacity of the target buffer area.
Optionally, the operation in which processor 301 predicts, according to the preset rule, the future buffer area hit rate of the target tenant from the historical buffer area hit rate within the preset historical time period is triggered by at least one of the following events: a preset period elapses; a preset time point arrives; or the number of tenants whose buffer area hit rate falls below a preset hit-rate threshold reaches a preset number threshold. When a trigger event occurs, the server can learn the target tenant's hit-rate situation in time and adjust the capacity of the target buffer area accordingly.
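The three trigger conditions listed above can be checked with a simple predicate; any one being true starts a prediction round. A purely illustrative sketch, with hypothetical parameter names:

```python
def should_predict(now, next_period_at, preset_time_point,
                   low_hit_tenants, number_threshold):
    """True if any trigger event for re-prediction has occurred."""
    return (now >= next_period_at                          # periodic trigger
            or now >= preset_time_point                    # fixed time point reached
            or len(low_hit_tenants) >= number_threshold)   # too many tenants below threshold
```

In practice the periodic and time-point triggers would come from a scheduler, and `low_hit_tenants` from per-tenant hit-rate monitoring.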
It should be noted that, in the server 30 described in this embodiment of the present invention, the functions of the functional modules may be implemented according to the method in the method embodiment shown in Fig. 2; details are not described herein again.
Refer to Fig. 4, which is a structural schematic diagram of another server provided in an embodiment of the present invention. As shown in Fig. 4, server 40 may include a predicting unit 401, a comparing unit 403, a first management unit 405, and a second management unit 407, where:
the predicting unit 401 is configured to predict, according to a preset rule, a future buffer area hit rate of a target tenant from the target tenant's historical buffer area hit rate within a preset historical time period, the target tenant being one of the multiple tenants;
the comparing unit 403 is configured to compare the future buffer area hit rate obtained by the predicting unit 401 with a preset expected buffer area hit rate of the target tenant;
the first management unit 405 is configured to: if the comparing unit 403 finds that the future buffer area hit rate is greater than the expected buffer area hit rate, calculate a first capacity according to a first preset algorithm from the difference between the future buffer area hit rate and a first preset buffer area hit rate of the target tenant, and release data pages of the first capacity from a target buffer area corresponding to the target tenant, where the first preset buffer area hit rate is less than the future buffer area hit rate; and
the second management unit 407 is configured to: if the comparing unit 403 finds that the future buffer area hit rate is less than the expected buffer area hit rate, calculate a second capacity according to a second preset algorithm from the difference between a second preset buffer area hit rate of the target tenant and the future buffer area hit rate, and add data pages of the second capacity to the target buffer area, where the second preset buffer area hit rate is greater than the future buffer area hit rate.
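The specification leaves the "preset rule" used by the predicting unit open. As one plausible, purely illustrative choice (not stated in the patent), the prediction could be a simple average of the hit rates observed within the preset historical time period:

```python
def predict_future_hit_rate(history):
    """Predict the future buffer area hit rate as the mean of the
    hit rates sampled within the preset historical time period.

    This moving-average rule is an assumption for illustration only;
    any other preset rule (e.g. weighted or trend-based) would fit the
    same interface.
    """
    return sum(history) / len(history)
```

More elaborate rules (exponential smoothing, regression on recent samples) would slot into the same place in the predicting unit.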
Optionally, the first management unit 405 is specifically configured to: take the expected buffer area hit rate as the first preset buffer area hit rate of the target tenant, and calculate the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the expected buffer area hit rate together with a preset buffer area capacity.
Optionally, the second management unit 407 is specifically configured to: take the expected buffer area hit rate as the second preset buffer area hit rate of the target tenant, and calculate the second capacity according to the second preset algorithm from the difference between the expected buffer area hit rate and the future buffer area hit rate together with a preset buffer area capacity.
Optionally, the first management unit 405 is specifically configured to: release data pages of the first capacity from the target buffer area corresponding to the target tenant, and add the released data pages of the first capacity to a free buffer; the data pages in the free buffer are available to any tenant among the multiple tenants that needs to add data pages.
Optionally, the first management unit 405 is specifically configured to: release data pages of the first capacity from the target buffer area corresponding to the target tenant, the released data pages being used by other tenants among the multiple tenants that need to add data pages, where the first capacity is less than or equal to the capacity of the data pages those other tenants intend to add.
In this embodiment of the present invention, server 40 can dynamically adjust the capacity of the target buffer area corresponding to the target tenant according to how the target tenant's future buffer-area hit-rate demand compares with the tenant's preset hit-rate targets.
It should be noted that, in the server 40 described in this embodiment of the present invention, the functions of the functional units may be implemented according to the method in the method embodiment shown in Fig. 2; details are not described herein again.
Refer to Fig. 5, which is a structural schematic diagram of another server provided in an embodiment of the present invention. The server 40 shown in Fig. 5 is an optimization of the server 40 shown in Fig. 4. Compared with Fig. 4, the server 40 shown in Fig. 5 likewise includes the predicting unit 401, the comparing unit 403, the first management unit 405, and the second management unit 407, but the first management unit 405 specifically includes a first judging unit 4051 and a first calculating unit 4053, where:
the first judging unit 4051 is configured to: if the comparing unit 403 finds that the future buffer area hit rate is greater than the expected buffer area hit rate, judge whether the future buffer area hit rate is greater than a preset maximum buffer area hit rate of the target tenant, where the maximum buffer area hit rate is greater than the expected buffer area hit rate; and
the first calculating unit 4053 is configured to: if the first judging unit 4051 judges that the future buffer area hit rate is greater than the preset maximum buffer area hit rate of the target tenant, take the maximum buffer area hit rate as the first preset buffer area hit rate of the target tenant, calculate the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the maximum buffer area hit rate together with a preset buffer area capacity, and release data pages of the first capacity from the target buffer area corresponding to the target tenant.
It should be noted that, in the server 40 described in this embodiment of the present invention, the functions of the functional units may be implemented according to the method in the method embodiment shown in Fig. 2; details are not described herein again.
Refer to Fig. 6, which is a structural schematic diagram of another server provided in an embodiment of the present invention. The server 40 shown in Fig. 6 is an optimization of the server 40 shown in Fig. 4. Compared with Fig. 4, the server 40 shown in Fig. 6 likewise includes the predicting unit 401, the comparing unit 403, the first management unit 405, and the second management unit 407, but the first management unit 405 specifically includes a second judging unit 4055 and a second calculating unit 4057, where:
the second judging unit 4055 is configured to: if the comparing unit 403 finds that the future buffer area hit rate is greater than the expected buffer area hit rate, judge whether the future buffer area hit rate is greater than a preset maximum buffer area hit rate of the target tenant, where the maximum buffer area hit rate is greater than the expected buffer area hit rate; and
the second calculating unit 4057 is configured to: if the second judging unit 4055 judges that the future buffer area hit rate is less than or equal to the preset maximum buffer area hit rate of the target tenant, take the expected buffer area hit rate as the first preset buffer area hit rate of the target tenant, calculate the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the expected buffer area hit rate together with a preset buffer area capacity, and release data pages of the first capacity from the target buffer area corresponding to the target tenant.
It should be noted that, in the server 40 described in this embodiment of the present invention, the functions of the functional units may be implemented according to the method in the method embodiment shown in Fig. 2; details are not described herein again.
Refer to Fig. 7, which is a structural schematic diagram of another server provided in an embodiment of the present invention. The server 40 shown in Fig. 7 is an optimization of the server 40 shown in Fig. 4. Compared with Fig. 4, the server 40 shown in Fig. 7 likewise includes the predicting unit 401, the comparing unit 403, the first management unit 405, and the second management unit 407, but the second management unit 407 specifically includes a third judging unit 4071 and a third calculating unit 4073, where:
the third judging unit 4071 is configured to: if the comparing unit 403 finds that the future buffer area hit rate is less than the expected buffer area hit rate, judge whether the future buffer area hit rate is less than a preset minimum buffer area hit rate of the target tenant, where the minimum buffer area hit rate is less than the expected buffer area hit rate; and
the third calculating unit 4073 is configured to: if the third judging unit 4071 judges that the future buffer area hit rate is less than the preset minimum buffer area hit rate of the target tenant, take the minimum buffer area hit rate as the second preset buffer area hit rate of the target tenant, calculate the second capacity according to the second preset algorithm from the difference between the minimum buffer area hit rate and the future buffer area hit rate together with a preset buffer area capacity, and add data pages of the second capacity to the target buffer area.
It should be noted that, in the server 40 described in this embodiment of the present invention, the functions of the functional units may be implemented according to the method in the method embodiment shown in Fig. 2; details are not described herein again.
Refer to Fig. 8, which is a structural schematic diagram of another server provided in an embodiment of the present invention. The server 40 shown in Fig. 8 is an optimization of the server 40 shown in Fig. 4. Compared with Fig. 4, the server 40 shown in Fig. 8 likewise includes the predicting unit 401, the comparing unit 403, the first management unit 405, and the second management unit 407, but the second management unit 407 specifically includes a fourth judging unit 4075 and a fourth calculating unit 4077, where:
the fourth judging unit 4075 is configured to: if the comparing unit 403 finds that the future buffer area hit rate is less than the expected buffer area hit rate, judge whether the future buffer area hit rate is less than a preset minimum buffer area hit rate of the target tenant, where the minimum buffer area hit rate is less than the expected buffer area hit rate; and
the fourth calculating unit 4077 is configured to: if the fourth judging unit 4075 judges that the future buffer area hit rate is greater than or equal to the preset minimum buffer area hit rate of the target tenant, take the expected buffer area hit rate as the second preset buffer area hit rate of the target tenant, calculate the second capacity according to the second preset algorithm from the difference between the expected buffer area hit rate and the future buffer area hit rate together with a preset buffer area capacity, and add data pages of the second capacity to the target buffer area.
Optionally, the first management unit 405 described above may simultaneously include the first judging unit 4051, the first calculating unit 4053, the second judging unit 4055, and the second calculating unit 4057; for the specific implementation of each unit, refer to the description of the same unit in the foregoing solutions, and details are not described herein again.
Optionally, the second management unit 407 described above may simultaneously include the third judging unit 4071, the third calculating unit 4073, the fourth judging unit 4075, and the fourth calculating unit 4077; for the specific implementation of each unit, refer to the description of the same unit in the foregoing solutions, and details are not described herein again.
It should be noted that, in the server 40 described in this embodiment of the present invention, the functions of the functional units may be implemented according to the method in the method embodiment shown in Fig. 2; details are not described herein again.
In conclusion implementing the embodiment of the present invention, the following of the target tenant in multi-tenant obtained according to prediction is buffered
Area's hit rate and the preset buffer area hit rate of the target tenant, it is corresponding dynamically to adjust the target tenant in multi-tenant
The capacity of destination buffer, not only can to avoid the access performance for influencing tenant because of out of buffers, but also can to avoid because buffering
Area is excessive and results in waste of resources.
Further, the data pages released from the target buffer area corresponding to the target tenant are made available to other tenants among the multiple tenants that need to add data pages, which improves both the overall performance of the multiple tenants and the resource utilization of the buffer area.
A person of ordinary skill in the art will understand that all or part of the processes of the foregoing method embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the foregoing method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory, or the like.
The above disclosure describes merely preferred embodiments of the present invention, which certainly cannot be used to limit the scope of the claims; equivalent changes made in accordance with the claims of the present invention therefore still fall within the scope of the present invention.
Claims (27)
1. A multi-tenant-oriented buffer management method, comprising:
predicting, according to a preset rule, a future buffer area hit rate of a target tenant from the target tenant's historical buffer area hit rate within a preset historical time period, the target tenant being one of the multiple tenants;
comparing the future buffer area hit rate with a preset expected buffer area hit rate of the target tenant;
if the future buffer area hit rate is greater than the expected buffer area hit rate, calculating a first capacity according to a first preset algorithm from the difference between the future buffer area hit rate and a first preset buffer area hit rate of the target tenant, and releasing data pages of the first capacity from a target buffer area corresponding to the target tenant, wherein the first preset buffer area hit rate is less than the future buffer area hit rate; and
if the future buffer area hit rate is less than the expected buffer area hit rate, calculating a second capacity according to a second preset algorithm from the difference between a second preset buffer area hit rate of the target tenant and the future buffer area hit rate, and adding data pages of the second capacity to the target buffer area, wherein the second preset buffer area hit rate is greater than the future buffer area hit rate.
2. The method according to claim 1, wherein calculating the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the first preset buffer area hit rate of the target tenant comprises:
taking the expected buffer area hit rate as the first preset buffer area hit rate of the target tenant, and calculating the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the expected buffer area hit rate together with a preset buffer area capacity.
3. The method according to claim 1, wherein calculating the second capacity according to the second preset algorithm from the difference between the second preset buffer area hit rate of the target tenant and the future buffer area hit rate comprises:
taking the expected buffer area hit rate as the second preset buffer area hit rate of the target tenant, and calculating the second capacity according to the second preset algorithm from the difference between the expected buffer area hit rate and the future buffer area hit rate together with a preset buffer area capacity.
4. The method according to claim 1, wherein calculating the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the first preset buffer area hit rate of the target tenant comprises:
judging whether the future buffer area hit rate is greater than a preset maximum buffer area hit rate of the target tenant, wherein the maximum buffer area hit rate is greater than the expected buffer area hit rate; and
if so, taking the maximum buffer area hit rate as the first preset buffer area hit rate of the target tenant, and calculating the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the maximum buffer area hit rate together with a preset buffer area capacity.
5. The method according to claim 1, 2 or 4, wherein releasing data pages of the first capacity from the target buffer area corresponding to the target tenant comprises:
releasing data pages of the first capacity from the target buffer area corresponding to the target tenant, and adding the released data pages of the first capacity to a free buffer, the data pages in the free buffer being available to any tenant among the multiple tenants that needs to add data pages.
6. The method according to claim 1, wherein calculating the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the first preset buffer area hit rate of the target tenant comprises:
judging whether the future buffer area hit rate is greater than a preset maximum buffer area hit rate of the target tenant, wherein the maximum buffer area hit rate is greater than the expected buffer area hit rate; and
if not, taking the expected buffer area hit rate as the first preset buffer area hit rate of the target tenant, and calculating the first capacity according to the first preset algorithm from the difference between the future buffer area hit rate and the expected buffer area hit rate together with a preset buffer area capacity.
7. The method according to claim 2 or 6, wherein releasing data pages of the first capacity from the target buffer area corresponding to the target tenant comprises:
releasing data pages of the first capacity from the target buffer area corresponding to the target tenant, the released data pages being used by other tenants among the multiple tenants that need to add data pages, wherein the first capacity is less than or equal to the capacity of the data pages those other tenants intend to add.
8. The method according to claim 1, wherein calculating the second capacity according to the second preset algorithm from the difference between the second preset buffer area hit rate of the target tenant and the future buffer area hit rate comprises:
judging whether the future buffer area hit rate is less than a preset minimum buffer area hit rate of the target tenant, wherein the minimum buffer area hit rate is less than the expected buffer area hit rate; and
if so, taking the minimum buffer area hit rate as the second preset buffer area hit rate of the target tenant, and calculating the second capacity according to the second preset algorithm from the difference between the minimum buffer area hit rate and the future buffer area hit rate together with a preset buffer area capacity.
9. The method according to claim 1, wherein calculating the second capacity according to the second preset algorithm from the difference between the second preset buffer area hit rate of the target tenant and the future buffer area hit rate comprises:
judging whether the future buffer area hit rate is less than a preset minimum buffer area hit rate of the target tenant, wherein the minimum buffer area hit rate is less than the expected buffer area hit rate; and
if not, taking the expected buffer area hit rate as the second preset buffer area hit rate of the target tenant, and calculating the second capacity according to the second preset algorithm from the difference between the expected buffer area hit rate and the future buffer area hit rate together with a preset buffer area capacity.
10. The method according to claim 1, 2, 4 or 6, wherein the first preset algorithm is: C1 = (A - B) * D, wherein C1 is the first capacity, A is the future buffer area hit rate, B is the first preset buffer area hit rate, and D is the preset buffer area capacity.
11. The method according to claim 1, 2 or 6, wherein the first preset algorithm is: C1 = min((A - B) * D, E), wherein C1 is the first capacity, A is the future buffer area hit rate, B is the first preset buffer area hit rate, D is the preset buffer area capacity, and E is the capacity of the data pages the other tenants among the multiple tenants intend to add.
12. The method according to claim 1, 3, 8 or 9, wherein the second preset algorithm is: C2 = min((F - A) * D, G), wherein C2 is the second capacity, F is the second preset buffer area hit rate, A is the future buffer area hit rate, D is the preset buffer area capacity, and G is the sum of the capacity of the data pages the other tenants among the multiple tenants intend to release and the capacity of the data pages in the free buffer.
13. The method according to any one of claims 2, 3, 4, 6, 8 and 9, wherein the preset buffer area capacity comprises at least one of: the preset minimum buffer area capacity of the target tenant, the preset expected buffer area capacity of the target tenant, the preset maximum buffer area capacity of the target tenant, and the current capacity of the target buffer area.
14. A server, comprising:
a predicting unit, configured to predict a future buffer hit rate of a target tenant according to preset rules, based on the historical buffer hit rate of the target tenant within a preset historical time period, the target tenant being one of multiple tenants;
a comparing unit, configured to compare the future buffer hit rate predicted by the predicting unit with a preset expected buffer hit rate of the target tenant;
a first management unit, configured to: if the comparing unit determines that the future buffer hit rate is greater than the expected buffer hit rate, calculate a first capacity according to a first preset algorithm based on the difference between the future buffer hit rate and a first preset buffer hit rate of the target tenant, and release data pages from the target buffer corresponding to the target tenant according to the first capacity, wherein the first preset buffer hit rate is less than the future buffer hit rate;
a second management unit, configured to: if the comparing unit determines that the future buffer hit rate is less than the expected buffer hit rate, calculate a second capacity according to a second preset algorithm based on the difference between a second preset buffer hit rate of the target tenant and the future buffer hit rate, and add data pages to the target buffer according to the second capacity, wherein the second preset buffer hit rate is greater than the future buffer hit rate.
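The predict/compare/resize flow of claim 14's units can be sketched as a toy model. The claim leaves the "preset rules" for prediction unspecified, so a simple mean of the historical hit rates stands in for them here; all names are illustrative:

```python
def manage_buffer(history_hit_rates, expected_hit_rate,
                  first_preset_hit_rate, second_preset_hit_rate,
                  buffer_capacity):
    """Sketch of claim 14: predicting unit, comparing unit, and the two
    management units, for one target tenant."""
    # Predicting unit: forecast the future buffer hit rate from history
    # (mean used here as a stand-in for the unspecified preset rules).
    future = sum(history_hit_rates) / len(history_hit_rates)

    # Comparing unit: compare the forecast with the expected hit rate.
    if future > expected_hit_rate:
        # First management unit: release C1 = (A - B) * D pages;
        # per the claim, first_preset_hit_rate < future here.
        return 'release', (future - first_preset_hit_rate) * buffer_capacity
    if future < expected_hit_rate:
        # Second management unit: add C2 = (F - A) * D pages;
        # per the claim, second_preset_hit_rate > future here.
        return 'add', (second_preset_hit_rate - future) * buffer_capacity
    return 'keep', 0.0
```

A tenant forecast to over-hit its expectation gives pages back; one forecast to under-hit gets pages added, with the preset hit rates setting how aggressive each adjustment is.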
15. The server according to claim 14, wherein the first management unit is specifically configured to:
take the expected buffer hit rate as the first preset buffer hit rate of the target tenant, and calculate the first capacity according to the first preset algorithm based on the difference between the future buffer hit rate and the expected buffer hit rate and a preset buffer capacity.
16. The server according to claim 14, wherein the second management unit is specifically configured to:
take the expected buffer hit rate as the second preset buffer hit rate of the target tenant, and calculate the second capacity according to the second preset algorithm based on the difference between the expected buffer hit rate and the future buffer hit rate and a preset buffer capacity.
17. The server according to claim 14, wherein the first management unit comprises:
a first judging unit, configured to: if the comparing unit determines that the future buffer hit rate is greater than the expected buffer hit rate, judge whether the future buffer hit rate is greater than a preset maximum buffer hit rate of the target tenant, the maximum buffer hit rate being greater than the expected buffer hit rate;
a first computing unit, configured to: if the first judging unit determines that the future buffer hit rate is greater than the preset maximum buffer hit rate of the target tenant, take the maximum buffer hit rate as the first preset buffer hit rate of the target tenant, calculate the first capacity according to the first preset algorithm based on the difference between the future buffer hit rate and the maximum buffer hit rate and a preset buffer capacity, and release data pages from the target buffer corresponding to the target tenant according to the first capacity.
18. The server according to claim 14, 15 or 17, wherein the first management unit is specifically configured to:
release data pages from the target buffer corresponding to the target tenant according to the first capacity, and add the released data pages of the first capacity to a free buffer, the data pages in the free buffer being available to tenants in the multi-tenant system that need to add data pages.
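The free buffer in claim 18 acts as a shared pool: capacity released by one tenant is held there until a page-hungry tenant draws on it. A toy sketch (class and method names are illustrative, not from the patent):

```python
class FreeBufferPool:
    """Shared pool of data-page capacity released by tenants (claim 18 sketch)."""

    def __init__(self):
        self.capacity = 0  # data-page capacity currently held in the free buffer

    def release(self, pages):
        # A shrinking tenant moves `pages` of capacity from its target
        # buffer into the shared pool.
        self.capacity += pages

    def acquire(self, pages):
        # A growing tenant takes at most what the pool currently holds.
        granted = min(pages, self.capacity)
        self.capacity -= granted
        return granted
```

Keeping releases and grants in one pool is what lets the min() caps in the preset algorithms bound additions by globally available capacity rather than by per-tenant bookkeeping.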
19. The server according to claim 14, wherein the first management unit comprises:
a second judging unit, configured to: if the comparing unit determines that the future buffer hit rate is greater than the expected buffer hit rate, judge whether the future buffer hit rate is greater than a preset maximum buffer hit rate of the target tenant, the maximum buffer hit rate being greater than the expected buffer hit rate;
a second computing unit, configured to: if the second judging unit determines that the future buffer hit rate is less than or equal to the preset maximum buffer hit rate of the target tenant, take the expected buffer hit rate as the first preset buffer hit rate of the target tenant, calculate the first capacity according to the first preset algorithm based on the difference between the future buffer hit rate and the expected buffer hit rate and a preset buffer capacity, and release data pages from the target buffer corresponding to the target tenant according to the first capacity.
20. The server according to claim 15 or 19, wherein the first management unit is specifically configured to:
release data pages from the target buffer corresponding to the target tenant according to the first capacity, the released data pages being available to other tenants in the multi-tenant system that need to add data pages, wherein the first capacity is less than or equal to the capacity of data pages to be added by those other tenants.
21. The server according to claim 14, wherein the second management unit comprises:
a third judging unit, configured to: if the comparing unit determines that the future buffer hit rate is less than the expected buffer hit rate, judge whether the future buffer hit rate is less than a preset minimum buffer hit rate of the target tenant, the minimum buffer hit rate being less than the expected buffer hit rate;
a third computing unit, configured to: if the third judging unit determines that the future buffer hit rate is less than the preset minimum buffer hit rate of the target tenant, take the minimum buffer hit rate as the second preset buffer hit rate of the target tenant, calculate the second capacity according to the second preset algorithm based on the difference between the minimum buffer hit rate and the future buffer hit rate and a preset buffer capacity, and add data pages to the target buffer according to the second capacity.
22. The server according to claim 14, wherein the second management unit comprises:
a fourth judging unit, configured to: if the comparing unit determines that the future buffer hit rate is less than the expected buffer hit rate, judge whether the future buffer hit rate is less than a preset minimum buffer hit rate of the target tenant, the minimum buffer hit rate being less than the expected buffer hit rate;
a fourth computing unit, configured to: if the fourth judging unit determines that the future buffer hit rate is greater than or equal to the preset minimum buffer hit rate of the target tenant, take the expected buffer hit rate as the second preset buffer hit rate of the target tenant, calculate the second capacity according to the second preset algorithm based on the difference between the expected buffer hit rate and the future buffer hit rate and a preset buffer capacity, and add data pages to the target buffer according to the second capacity.
23. The server according to claim 14, 15, 17 or 19, wherein the first preset algorithm is: C1 = (A - B) * D, where C1 is the first capacity, A is the future buffer hit rate, B is the first preset buffer hit rate, and D is the preset buffer capacity.
24. The server according to claim 14, 15 or 19, wherein the first preset algorithm is: C1 = min((A - B) * D, E), where C1 is the first capacity, A is the future buffer hit rate, B is the first preset buffer hit rate, D is the preset buffer capacity, and E is the capacity of data pages to be added by other tenants in the multi-tenant system.
25. The server according to claim 14, 16, 21 or 22, wherein the second preset algorithm is: C2 = min((F - A) * D, G), where C2 is the second capacity, F is the second preset buffer hit rate, A is the future buffer hit rate, D is the preset buffer capacity, and G is the sum of the capacity of data pages to be released by other tenants in the multi-tenant system and the capacity of data pages in the free buffer.
26. The server according to any one of claims 15, 16, 17, 19, 21 or 22, wherein the preset buffer capacity comprises at least one of: a preset minimum buffer capacity of the target tenant, a preset expected buffer capacity of the target tenant, a preset maximum buffer capacity of the target tenant, and the current capacity of the target buffer.
27. A server, comprising a processor and a memory, wherein the memory is configured to store multi-tenant-oriented buffer management program code, and the processor is configured to call the program code stored in the memory to perform the multi-tenant-oriented buffer management method according to any one of claims 1 to 13.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610064482.6A CN107025223B (en) | 2016-01-29 | 2016-01-29 | A kind of buffer management method and server towards multi-tenant |
PCT/CN2016/090281 WO2017128641A1 (en) | 2016-01-29 | 2016-07-18 | Multi-tenant buffer management method and server |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610064482.6A CN107025223B (en) | 2016-01-29 | 2016-01-29 | A kind of buffer management method and server towards multi-tenant |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107025223A CN107025223A (en) | 2017-08-08 |
CN107025223B true CN107025223B (en) | 2019-11-22 |
Family
ID=59397289
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610064482.6A Active CN107025223B (en) | 2016-01-29 | 2016-01-29 | A kind of buffer management method and server towards multi-tenant |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107025223B (en) |
WO (1) | WO2017128641A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11163887B2 (en) * | 2018-02-14 | 2021-11-02 | Microsoft Technology Licensing, Llc | Clearance of bare metal resource to trusted state usable in cloud computing |
CN109960591B (en) * | 2019-03-29 | 2023-08-08 | 神州数码信息系统有限公司 | Cloud application resource dynamic scheduling method for tenant resource encroachment |
CN110221989A (en) * | 2019-06-20 | 2019-09-10 | 北京奇艺世纪科技有限公司 | A kind of data cache method, device, storage medium and computer equipment |
CN112737975B (en) * | 2020-12-25 | 2023-05-09 | 珠海西山居数字科技有限公司 | Buffer capacity adjustment method and device |
CN114490749A (en) * | 2021-12-28 | 2022-05-13 | 珠海大横琴科技发展有限公司 | Resource access method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1996935A (en) * | 2006-01-04 | 2007-07-11 | 华为技术有限公司 | A dynamic adjusting method for data packets in the buffer at the streaming receiving end |
CN102238294A (en) * | 2010-04-23 | 2011-11-09 | 鸿富锦精密工业(深圳)有限公司 | User terminal device and method for dynamically regulating size of shake buffer area |
CN102622441A (en) * | 2012-03-09 | 2012-08-01 | 山东大学 | Automatic performance identification tuning system based on Oracle database |
CN102880678A (en) * | 2012-09-11 | 2013-01-16 | 哈尔滨工程大学 | Embedded real-time memory database |
US8943126B1 (en) * | 2012-08-21 | 2015-01-27 | Google Inc. | Rate limiter for push notifications in a location-aware service |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0391871A3 (en) * | 1989-04-03 | 1992-05-27 | International Business Machines Corporation | Method for managing a prioritized cache |
CN101493821A (en) * | 2008-01-25 | 2009-07-29 | 中兴通讯股份有限公司 | Data caching method and device |
US20120151479A1 (en) * | 2010-12-10 | 2012-06-14 | Salesforce.Com, Inc. | Horizontal splitting of tasks within a homogenous pool of virtual machines |
US20140289332A1 (en) * | 2013-03-25 | 2014-09-25 | Salesforce.Com, Inc. | System and method for prefetching aggregate social media metrics using a time series cache |
CN103778071A (en) * | 2014-01-20 | 2014-05-07 | 华为技术有限公司 | Cache space distribution method and device |
CN104407986B (en) * | 2014-10-27 | 2018-03-13 | 华为技术有限公司 | The method, apparatus and controller of allocating cache in storage device |
2016
- 2016-01-29 CN CN201610064482.6A patent/CN107025223B/en active Active
- 2016-07-18 WO PCT/CN2016/090281 patent/WO2017128641A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
CN107025223A (en) | 2017-08-08 |
WO2017128641A1 (en) | 2017-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107025223B (en) | A kind of buffer management method and server towards multi-tenant | |
US10057187B1 (en) | Dynamic resource creation to connect client resources in a distributed system | |
CN1921449B (en) | Stable, minimal skew resource flow control method and system | |
US10817427B2 (en) | Headless resilient backup and restore software ecosystem and with a first backup server and a second backup server in a plurality of backup servers, with the first backup server selecting a the second backup server to start a job where selection is based on historical client latency, scheduled backup server workload, and whether metadata is cached in any of the plurality of backup servers | |
CN112532632B (en) | Resource allocation method and device for multi-level cloud platform and computer equipment | |
US10616134B1 (en) | Prioritizing resource hosts for resource placement | |
CN103368867A (en) | Method and system of cached object communicating with secondary site through network | |
US10725660B2 (en) | Policy-based optimization of cloud resources on tiered storage operations | |
WO2015149644A1 (en) | Intelligent file pre-fetch based on access patterns | |
US8700932B2 (en) | Method for on-demand energy savings by lowering power usage of at least one storage device in a multi-tiered storage system | |
US20190155511A1 (en) | Policy-based optimization of cloud resources on tiered storage operations | |
US11907766B2 (en) | Shared enterprise cloud | |
US10659531B2 (en) | Initiator aware data migration | |
US11347413B2 (en) | Opportunistic storage service | |
CN114070847B (en) | Method, device, equipment and storage medium for limiting current of server | |
WO2019097363A1 (en) | Policy-based optimization of cloud resources on tiered storage operations | |
US11042660B2 (en) | Data management for multi-tenancy | |
US20200174670A1 (en) | Reducing write collisions in data copy | |
US11442629B2 (en) | I/O performance in a storage system | |
US11747978B2 (en) | Data compaction in distributed storage system | |
US11036430B2 (en) | Performance capability adjustment of a storage volume | |
Hsu et al. | Effective memory reusability based on user distributions in a cloud architecture to support manufacturing ubiquitous computing | |
KR102543689B1 (en) | Hybrid cloud management system and control method thereof, node deployment apparatus included in the hybrid cloud management system and control method thereof | |
US11966338B2 (en) | Prefetching management in database system based on number of pages being prefetched | |
US20220398134A1 (en) | Allocation of services to containers |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
2021-12-22 | TR01 | Transfer of patent right | Effective date of registration: 2021-12-22. Patentee after: Super Fusion Digital Technology Co., Ltd., 450046 Floor 9, Building 1, Zhengshang Boya Plaza, Longzihu Wisdom Island, Zhengdong New Area, Zhengzhou City, Henan Province. Patentee before: HUAWEI TECHNOLOGIES Co., Ltd., 518129 Bantian Huawei headquarters office building, Longgang District, Shenzhen, Guangdong.