CN103226607B - Description and transformation method supporting metadata I/O service quality performance requirements in a parallel file system

Description and transformation method supporting metadata I/O service quality performance requirements in a parallel file system

Info

Publication number
CN103226607B
CN103226607B (application CN201310156737.8A; publication CN103226607A)
Authority
CN
China
Prior art keywords
load
time
child
client
performance requirement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310156737.8A
Other languages
Chinese (zh)
Other versions
CN103226607A (en)
Inventor
肖利民
谢柯
李秀桥
霍志胜
阮利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201310156737.8A priority Critical patent/CN103226607B/en
Publication of CN103226607A publication Critical patent/CN103226607A/en
Application granted granted Critical
Publication of CN103226607B publication Critical patent/CN103226607B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention relates to a description and transformation method supporting metadata I/O service quality performance requirements in a parallel file system. The method comprises the following steps: each client periodically sends its loads' performance requirement descriptions and execution history to a controller; the controller collects the performance requirements and execution-status information of the different loads, generates the average-delay requirement and the service-time requirement for the next stage, and sends this information back to the clients; each client receives the next-stage average-delay and service-time requirements, analyses the operations of its loads to obtain their child-operation information, and derives the delay requirements of the child-operations from the next-stage average-delay target. The method avoids the resource waste or resource shortage caused by unreasonable static targets and meets load-level performance requirements more flexibly.

Description

Description and transformation method supporting metadata I/O service quality performance requirements in a parallel file system
Technical field
The present invention relates to the description and transformation of load service quality requirements in a file system under a multi-load environment, and in particular to a performance requirement description and transformation method supporting metadata I/O service quality in a parallel file system; it belongs to the field of computer technology.
Background technology
With the development of hardware technologies such as servers, interconnection networks and storage devices, the large-scale file systems in today's large information infrastructures increasingly provide shared storage access to different types of applications, such as Internet applications, data mining, collaborative research and scientific computing, so as to make full use of their large-scale computing and storage capacity. The loads of different application types exhibit diverse I/O access patterns and storage access requirements, and different applications usually have different priorities; research shows that the I/O performance demands of different applications can differ by up to seven orders of magnitude. Because data access depends on metadata, and metadata access performance is increasingly becoming the bottleneck, only by meeting an application's metadata access performance requirements can the application's overall performance requirements be further guaranteed.
Existing research typically describes and satisfies the performance requirements of application loads with I/O service quality, and uses different scheduling algorithms to allocate computing, storage and other resources to different loads in order to guarantee their I/O performance: the performance requirements of a load are first described, these requirements are then converted into resource demands, and different methods are finally used to guarantee the load's performance. Existing research focuses mainly on guaranteeing data access performance, and usually describes the performance requirements of a load's data accesses directly with metrics such as bandwidth (the amount of data accessed per unit time, e.g. MB/s) and/or delay (the response delay of a single I/O request). The metadata performance of a load, however, is usually measured by throughput (the number of operations completed per unit time, e.g. IOPS) and/or delay (the delay of a single operation, e.g. the delay of a create operation), and metadata access differs considerably from data access in operation type, accessed data volume and interaction protocol. A description and transformation method dedicated to metadata I/O service quality performance requirements is therefore needed.
Specifically, metadata operation types include file creation/deletion, directory creation/deletion, lookup, read/write of file/directory metadata files, directory listing, and so on. Because different operations involve different client-server interaction protocols during execution, they consist of different child-operations, and the data volume involved in each child-operation also differs. For example, a create-file operation includes two child-operations on the metadata server: creating the metadata file and adding a directory entry to the parent directory; creating the metadata file must record the various attributes of the file, whereas a directory entry is only a mapping from a file name to a metadata file, so the first child-operation accesses more data than the second. A list-directory operation also includes two child-operations: reading the directory and reading the metadata files of the directory entries; the data volume of both child-operations depends on the number of entries contained in the directory. A small illustrative mapping is sketched below.
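For illustration, the decomposition of the two operation types discussed above can be tabulated as a simple lookup; the Python names below are assumptions for the sketch, and further entries for other operation types are not implied by the patent:

CHILD_OPERATIONS = {
    # create-file: create the metadata file, then add a directory entry in the parent
    "create_file": ["create_metadata_file", "add_directory_entry"],
    # list-directory: read the directory, then read the metadata files of its entries
    "readdir":     ["read_directory", "read_entry_metadata_files"],
}

def child_operation_count(op_type: str) -> int:
    """Number of child-operations a metadata operation of this type decomposes into."""
    return len(CHILD_OPERATIONS[op_type])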
In summary, existing description and transformation methods aimed at data I/O performance requirements cannot properly characterize metadata I/O performance requirements, and can hardly satisfy the metadata performance guarantee demands of concurrent, multi-type loads in large-scale file systems. Without a suitable description and transformation method for metadata I/O performance requirements, no appropriate resource allocation measures can be taken to guarantee the metadata I/O performance requirements of the loads.
Content of the invention
1. Purpose
The purpose of the present invention is to propose a performance requirement description and transformation method supporting metadata I/O service quality in a parallel file system, so that in a multi-load environment the load-level performance requirements of different loads can be specified conveniently, while these load-level performance requirements can be converted into fine-grained performance targets understandable to the file system, which in turn makes it convenient to take corresponding measures to allocate resources and guarantee the metadata I/O performance requirements of the loads.
2. Technical scheme
Based on an analysis of metadata I/O performance requirements and of the diversity and stage-wise characteristics of metadata operations, this scheme adopts a two-layer performance requirement description and transformation scheme, as shown in Fig. 1. Specifically, the upper layer uses the load performance requirement description specified by the user, and a series of transformations then convert the load-level service quality requirements into the service quality requirement parameters of the lower-layer metadata child-operation level.
Based on the analysis of metadata I/O load performance requirements, in this design the upper layer describes the performance requirement of a load with the average throughput R (the average number of operations completed per unit time during the execution of the load) and the average delay D (the average delay of all operations during the execution of the load) as the performance metrics of the load, while a load identifier W distinguishes different loads and a priority P indicates the priority of each load. The metadata I/O performance requirement description of a load can therefore be uniquely identified by a four-tuple <W, R, D, P>, where W, R, D and P have the meanings given above; a minimal record representation is sketched below.
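For illustration, the four-tuple <W, R, D, P> maps naturally onto a small record; the class name and field types in this Python sketch are assumptions, while the symbols are those defined above (all sketches in this document use Python):

from dataclasses import dataclass

@dataclass
class LoadRequirement:
    """Upper-layer metadata I/O performance requirement of one load: <W, R, D, P>."""
    W: int    # load identifier, distinguishes different loads
    R: float  # average throughput target (operations completed per unit time)
    D: float  # average delay target over all operations of the load
    P: int    # priority of the load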
Based on the analysis of metadata operations, in this design the upper-layer average throughput requirement R is dynamically converted into the service time required by the load in each stage, and the upper-layer average delay requirement D is dynamically converted into fine-grained delay requirements for the child-operations of all operations in the load. The per-stage service time demand of a load is the storage-system service time the load needs in order to meet its throughput demand. The transformation process is shown in Fig. 1, which mainly involves clients and a controller: a client runs on a compute node of the cluster, while the controller can run on a dedicated compute node or on one elected from the existing compute nodes. The controller and the clients are connected through the network. One or more clients serve the metadata I/O load produced by an application, whose performance requirement is uniquely identified by a four-tuple <W, R, D, P>. A time interval value is preset, and each such interval is called a stage. At the end of every stage, each client sends the performance requirement descriptions and the execution history of the requests in its loads to the controller, and analyses the operations in its loads to obtain their child-operation information; the controller collects and collates the history fed back by all clients, its average delay controller determines the average delay performance requirement of each load for the next stage, and its service time allocation controller determines the service time assigned to each load for the next stage; the child-operation delay controller at the client then converts the average delay fed back by the controller into delay requirements for the child-operations.
The specific steps of the technical scheme are as follows:
Step 201. Each client sends the performance requirement descriptions of its loads and the loads' execution history to the controller at the end of every predetermined time period.
Step 202. The controller collects and collates the received performance requirements and execution-status information of the different loads, produces the average delay demand and the service time demand of the next stage, and sends these demands to the clients.
Step 203. Each client receives the average delay demand and the service time demand of the next stage; the client analyses the operations of its loads to obtain their child-operation information and, based on the next-stage average delay target, further produces the corresponding delay requirements of the child-operations.
The performance requirement description in step 201 is the four-tuple <W, R, D, P>, denoting respectively the load identifier, the average throughput, the average operation delay and the priority of the load. During the execution of a load the client records the start time, end time, operation number and child-operation numbers of every operation, and further aggregates them into the execution history of the load on that client, including the number of requests the load has completed so far, its average delay, average throughput, total execution time, client number and load number. The clients synchronize their time at initialization; after every time interval Tp, all clients send the performance requirement descriptions and the execution history of their loads to the controller in a unified manner, and the interval between two transmissions is called a stage. A client collects and collates the execution history of this stage together with earlier history belonging to the same load, and sends it to the controller at the end of the stage; a sketch of such a per-stage history record follows.
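A per-stage history record of the kind described above might look as follows; the field names and types are assumptions for the sketch:

from dataclasses import dataclass

@dataclass
class StageHistory:
    """Execution history one client reports for one load at the end of a stage."""
    client_id: int             # client number
    load_id: int               # load number W
    completed_requests: int    # ReqNum: requests completed so far
    average_delay: float       # AverageD over the completed requests
    average_throughput: float  # AverageR
    total_exec_time: float     # total execution time of the load on this client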
In step 202 the controller receives the load performance requirement information and execution history sent by the clients. It collates the history belonging to the same load according to the load number and the client number, obtaining for each load the number of requests completed so far ReqNum, the average delay AverageD, the average throughput AverageR and the load number W, and stores the performance requirement information of the load. The average delay controller and the service time controller then calculate, from the performance requirement targets and the execution history of each load, the load's average delay target and service time target for the next stage; a merging sketch is given below.
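As a minimal sketch of the controller-side collation of per-client StageHistory records, keyed by load number; weighting the per-client delays by their completed request counts and summing the per-client throughputs are assumed details, since the patent only states that the controller collects and collates the histories:

from collections import defaultdict

def merge_client_reports(reports):
    """Merge per-client StageHistory records that belong to the same load."""
    by_load = defaultdict(list)
    for r in reports:
        by_load[r.load_id].append(r)

    merged = {}
    for load_id, rs in by_load.items():
        total_req = sum(r.completed_requests for r in rs)
        avg_delay = (sum(r.average_delay * r.completed_requests for r in rs) / total_req
                     if total_req else 0.0)
        merged[load_id] = {
            "ReqNum": total_req,                                   # requests completed so far
            "AverageD": avg_delay,                                 # merged average delay
            "AverageR": sum(r.average_throughput for r in rs),     # aggregate throughput
        }
    return merged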
In step 203 the client obtains the child-operation information of the metadata I/O operations in the load by analysing them; the child-operation information may include the number of child-operations, the data volume corresponding to each child-operation, and so on. The child-operation delay controller further converts the received next-stage average delay target into the delay targets of the corresponding child-operations.
3. Advantages and effects
The present invention proposes a performance requirement description and transformation method supporting metadata I/O service quality in a parallel file system. Compared with conventional methods, its main advantages are: (1) it targets the characteristics of metadata I/O and comprises two layers, an upper layer that makes it convenient to specify load-level performance requirements and a lower layer that makes fine-grained performance requirement metrics recognizable to the storage system; (2) it is flexible: the lower-layer service times and child-operation delay targets are adjusted dynamically through feedback, which avoids the resource waste or resource shortage caused by unreasonable static targets and meets load-level performance requirements more flexibly.
Description of the drawings
Fig. 1 shows the two-layer performance requirement description and transformation scheme supporting metadata I/O service quality.
Fig. 2 shows the flow of the performance requirement description and transformation method supporting metadata I/O service quality.
Fig. 3 shows the flow for determining the next-stage average delay of a load.
Fig. 4 shows the flow for allocating the next-stage service time.
Fig. 5 shows the flow in which the client determines the child-operation delay targets.
Specific embodiment
To make the object, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
On the software side, the present invention runs on a cluster equipped with a parallel file system (such as PVFS or Lustre); the cluster is configured with multiple clients (compute nodes) and multiple (more than one) servers, at least one of which is a metadata server while the others are data servers. The performance requirement description and transformation method supporting metadata I/O service quality in a parallel file system designed by the present invention is implemented on the clients, and the controller may run on a dedicated client or on a client elected from the existing clients.
The overall structure of the performance requirement description and transformation method supporting metadata I/O service quality in a parallel file system designed by the present invention is shown in Fig. 1, i.e. a two-layer performance requirement description and transformation scheme. The upper layer uses the load performance requirement description specified by the user, and a series of transformations then convert the user-level service quality requirements into the service quality requirement metrics of the lower-layer metadata child-operation level.
The upper layer describes the performance requirement of a load with the average throughput R (the average number of operations completed per unit time during the execution of the load) and the average delay D (the average delay of all operations during the execution of the load) as the performance metrics of the load, while a load identifier W distinguishes different loads and a priority P indicates the priority of each load. The metadata I/O performance requirement description of a load can therefore be uniquely identified by a four-tuple <W, R, D, P>.
The present invention dynamically converts the upper-layer average throughput demand into the service time required by the load in each stage, and dynamically converts the upper-layer average delay demand into fine-grained delay demands for the child-operations of all operations in the load. The service time demand of each stage is the storage-system service time the load needs in order to meet its throughput demand, including the time the load spends in the network, in server-side CPU processing and in the various storage services within the storage system.
The specific steps of the technical scheme are as follows:
Step 201. Each client sends the performance requirement descriptions of its loads and the loads' execution history to the controller at the end of every predetermined time period.
Step 202. The controller collects and collates the received performance requirements and execution-status information of the different loads, produces the average delay demand and the service time demand of the next stage, and sends these demands to the clients.
The next-stage average delay conversion process is shown in Fig. 3:
(1) Every Tp time interval the controller collects the load information sent by the clients, including, for every active load Wi, the number of requests completed so far ReqNumi, the average delay AverageDi and the average throughput AverageRi, together with the performance requirement description <Wi, Ri, Di, Pi> of load Wi;
(2) A next-stage average delay is assigned to every load, distinguishing two cases:
(a) If the information of load Wi is received for the first time, i.e. load Wi has just started running, then NextDi = Di;
(b) Otherwise, whether the average delay of the load can still be brought to Di by adjusting the next-stage average delay is judged from the value of Delta = (AverageRi + ReqNumi) × Di − ReqNumi × AverageDi, distinguishing two cases:
(b1) If the average delay can still reach Di, i.e. Delta > 0, then the next-stage average delay NextDi is derived from the remaining delay budget Delta;
(b2) If the average delay can no longer reach Di, i.e. Delta ≤ 0, the next-stage average delay target is set to a small preset value MinorTime.
(3) The converted next-stage average delay of each load is sent to the clients;
(4) If all loads have finished executing, the procedure terminates; otherwise execution continues from step (1). A sketch of this procedure follows.
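As a rough illustration of the procedure above, the following sketch computes the next-stage average delay target for one load. The closed form used when Delta > 0 (dividing the remaining delay budget by AverageRi) and the default MinorTime value are assumptions that are consistent with the Delta definition, not formulas taken from the patent:

def next_stage_average_delay(D_i, ReqNum_i, AverageD_i, AverageR_i,
                             first_report=False, MinorTime=0.001):
    """Next-stage average delay target NextD_i for one load (assumed closed form)."""
    if first_report:                 # case (a): load Wi has just started running
        return D_i
    # Delta: remaining delay budget if the load is to end the next stage with average delay D_i
    Delta = (AverageR_i + ReqNum_i) * D_i - ReqNum_i * AverageD_i
    if Delta > 0:                    # case (b1): target D_i is still reachable
        return Delta / AverageR_i    # assumed: spread the budget over the expected next-stage requests
    return MinorTime                 # case (b2): unreachable, fall back to a small preset value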
The next-stage service time conversion process is shown in Fig. 4:
Assume the length of a stage is Tp and that a total of n loads have declared metadata I/O performance requirements; the service time allocated to load Wi in this stage is ServiceTi, and the service time to be allocated to load Wi in the next stage is NextServiceTi. The specific steps are as follows:
(1) Every Tp time interval the controller collects the load information sent by the clients, including, for every active load Wi, the number of requests completed so far ReqNumi, the average delay AverageDi and the average throughput AverageRi, together with the performance requirement description <Wi, Ri, Di, Pi> of load Wi;
(2) A next-stage service time NextServiceTi is pre-allocated to every load that is currently executing: if load Wi has just started executing, or its previous-stage service time ServiceTi is 0, NextServiceTi is set to an initial pre-allocated value; otherwise NextServiceTi is obtained by adjusting the previous-stage service time ServiceTi;
(3) The pre-allocated next-stage service times NextServiceTi are then re-adjusted according to Delta, the difference between the sum of the pre-allocated service times and the stage length Tp, distinguishing three cases:
(a) If Delta = 0, the time allocated to the different loads exactly fills the time interval Tp, and the NextServiceTi of each load need not be adjusted;
(b) If Delta > 0, the service time pre-allocated to the loads exceeds the time interval Tp, so the service times allocated to the loads are reduced in order of the priorities in their performance requirement descriptions, from low to high, until the sum of the service times allocated to all loads equals Tp, i.e. Delta = 0;
(c) If Delta < 0, there is still unallocated time within the time interval Tp, and this time is distributed to all loads in proportion to the R values in their performance requirement descriptions.
(4) The adjusted next-stage service time NextServiceTi of each load is sent to the corresponding clients, and at the same time each load's NextServiceTi is stored in ServiceTi;
(5) If all loads have finished executing, the procedure terminates; otherwise execution continues from step (1). A sketch of this allocation follows.
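The following sketch illustrates the re-balancing rules above, reusing the LoadRequirement record from the earlier sketch. The initial pre-allocation (keep the previous-stage service time, or an equal share Tp/n for a load without one), the definition of Delta as the over/under-allocation relative to Tp, and the assumption that a smaller P value means a lower priority are illustrative choices; the patent describes the three cases qualitatively, and the concrete pre-allocation formulas are not reproduced here:

def allocate_service_time(Tp, loads, prev_service, requirements):
    """Re-balance per-load next-stage service time so the total fits the stage length Tp.

    loads: iterable of load identifiers W; prev_service: dict W -> ServiceT of the
    previous stage; requirements: dict W -> LoadRequirement.
    """
    n = len(loads)
    # Assumed pre-allocation: previous-stage service time, or an equal share if none / zero.
    nxt = {w: (prev_service.get(w) or Tp / n) for w in loads}

    Delta = sum(nxt.values()) - Tp                 # assumed: over/under-allocation w.r.t. Tp
    if Delta > 0:
        # Over-allocated: shave service time from loads in priority order, lowest first.
        for w in sorted(loads, key=lambda u: requirements[u].P):
            cut = min(nxt[w], Delta)
            nxt[w] -= cut
            Delta -= cut
            if Delta <= 0:
                break
    elif Delta < 0:
        # Spare time left: hand it out in proportion to each load's throughput target R.
        total_R = sum(requirements[w].R for w in loads)
        for w in loads:
            nxt[w] += -Delta * requirements[w].R / total_R
    return nxt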
Step 203. Each client receives the next-stage average delay target and the next-stage service time target; the client analyses the operations of its loads to obtain their child-operation information and, based on the next-stage average delay target, further produces the corresponding delay requirements of the child-operations.
The analysis of metadata operations proceeds as follows:
(1) The types of metadata I/O operations in the parallel file system are analysed first, and the method for determining the number of child-operations is fixed for each operation type;
(2) When an operation of a load is about to be scheduled, the type of the operation is determined first, and the number of its child-operations is then determined according to the analysis of step (1).
The child-operation delay conversion process is shown in Fig. 5. Assume operation i includes m child-operations, let tij denote the time actually spent by the j-th child-operation of operation i, let Dij denote the delay target of the j-th child-operation of operation i, and let Di denote the average delay target of the load's operations received from the controller.
The specific steps are as follows:
(1) Analysis determines the operation type, the number j of the child-operation (j denotes the j-th child-operation) and the number m of child-operations corresponding to this operation type;
(2) The delay target of the child-operation is determined, distinguishing two cases:
(a) If the child-operation is the first child-operation of operation i, its delay target is derived from the operation-level delay target Di;
(b) If it is not the first child-operation of operation i, the time already spent by the completed child-operations of operation i is needed; because this time may exceed the delay target Di of the operation, two cases are distinguished:
(b1) If the time spent by the completed child-operations is less than the delay target Di of the operation, the delay target of the child-operation is derived from the remaining delay budget;
(b2) If the time spent by the completed child-operations is greater than or equal to the delay target Di of the operation, i.e. the delay of the operation already exceeds Di, the delay targets of the subsequent child-operations are set to the small preset value MinorTime. A sketch of this rule follows.
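The following sketch illustrates the case analysis above. The even split Di/m for the first child-operation and the division of the remaining budget by the number of remaining child-operations are assumptions; the text above states the cases qualitatively, and the concrete formulas are not reproduced here:

def child_operation_delay_target(D_i, m, j, elapsed, MinorTime=0.001):
    """Delay target D_ij for the j-th child-operation (1-based) of operation i.

    D_i: operation-level delay target; m: number of child-operations of this
    operation type; elapsed: time already spent by child-operations 1..j-1
    (the sum of their t_ij values).
    """
    if j == 1:                              # case (a): first child-operation of operation i
        return D_i / m                      # assumed even split of the operation budget
    if elapsed < D_i:                       # case (b1): budget not yet exhausted
        return (D_i - elapsed) / (m - j + 1)  # assumed: spread the remainder over remaining child-ops
    return MinorTime                        # case (b2): already over budget, small preset value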
Finally it should be noted that the above embodiments merely illustrate and do not limit the technical solution of the present invention. Although the present invention has been described in detail with reference to the above embodiments, those skilled in the art will understand that the present invention may still be modified or equivalently substituted, and any modification or partial substitution that does not depart from the spirit and scope of the present invention shall be covered by the scope of the claims of the present invention.

Claims (4)

1. A performance requirement description and transformation method supporting metadata I/O service quality in a parallel file system, which, through a two-layer description and transformation, converts the upper-layer load-level performance requirements into fine-grained performance requirement parameters understandable to the underlying file system, characterized in that the method comprises the following steps:
Step 1: each client sends the performance requirement descriptions of its loads and the loads' execution history to a controller at the end of every fixed time period;
Step 2: the controller collects and collates the received performance requirements and execution-status information of the different loads, produces the average delay demand and the service time demand of the next stage, and sends these demands to the clients;
Step 3: each client receives the average delay demand and the service time demand of the next stage; the client analyses the metadata operations of its loads to obtain their child-operation information and, based on the next-stage average delay target, further produces the corresponding delay requirements of the child-operations.
2. The performance requirement description and transformation method supporting metadata I/O service quality in a parallel file system according to claim 1, characterized in that: in step 1 the performance requirement of a load is described with the average number R of operations completed per unit time during the execution of the load and the average delay D of all operations during the execution of the load as the performance metrics of the load, while a load identifier W distinguishes different loads and a priority P indicates the priority of each load, so that the metadata I/O performance requirement description of a load can be uniquely identified by a four-tuple <W, R, D, P>.
3. The performance requirement description and transformation method supporting metadata I/O service quality in a parallel file system according to claim 1, characterized in that: in step 2 the controller receives the load performance requirement information and execution history sent by the clients, collates the history belonging to the same load according to the load number and the client number, obtains the number of requests the load has completed so far ReqNum, the average delay AverageD, the average throughput AverageR and the load number W, and stores the performance requirement information of the load; an average delay controller and a service time controller then calculate, from the performance requirement targets and the execution history of each load, the load's average delay target and service time target for the next stage,
wherein the next-stage average delay target is determined as follows:
(1) every Tp time interval the controller collects the load information sent by the clients, including, for every active load Wi, the number of requests completed so far ReqNumi, the average delay AverageDi and the average throughput AverageRi, together with the performance requirement description <Wi, Ri, Di, Pi> of load Wi,
(2) a next-stage average delay is assigned to every load, distinguishing two cases:
(a) if the information of load Wi is received for the first time, i.e. load Wi has just started running, then NextDi = Di;
(b) otherwise, whether the average delay of the load can still be brought to Di by adjusting the next-stage average delay is judged from the value of Delta = (AverageRi + ReqNumi) × Di − ReqNumi × AverageDi, distinguishing two cases:
(b1) if the average delay can still reach Di, i.e. Delta > 0, then the next-stage average delay NextDi is derived from the remaining delay budget Delta;
(b2) if the average delay can no longer reach Di, i.e. Delta < 0, the next-stage average delay target is set to a preset small value MinorTime,
(3) the converted next-stage average delay of each load is sent to the clients,
(4) if all loads have finished executing, the procedure terminates; otherwise execution continues from step (1),
and the next-stage service time is converted as follows:
every Tp time interval the controller collects the load information sent by the clients, including, for every active load Wi, the number of requests completed so far ReqNumi, the average delay AverageDi and the average throughput AverageRi, together with the performance requirement description <Wi, Ri, Di, Pi> of load Wi,
a next-stage service time NextServiceTi is pre-allocated to every load that is currently executing: if load Wi has just started executing, or its previous-stage service time ServiceTi is 0, NextServiceTi is set to an initial pre-allocated value, otherwise NextServiceTi is obtained by adjusting the previous-stage service time ServiceTi,
the pre-allocated next-stage service times NextServiceTi are then re-adjusted according to Delta, the difference between the sum of the pre-allocated service times and the stage length Tp, distinguishing three cases:
(a) if Delta = 0, the time allocated to the different loads exactly fills the time interval Tp, and the NextServiceTi of each load need not be adjusted;
(b) if Delta > 0, the service time pre-allocated to the loads exceeds the time interval Tp, and the service times allocated to the loads are reduced in order of the priorities in their performance requirement descriptions, from low to high, until the sum of the service times allocated to all loads equals Tp, i.e. Delta = 0;
(c) if Delta < 0, there is still unallocated time within the time interval Tp, and this time is distributed to all loads in proportion to the R values in their performance requirement descriptions,
the adjusted next-stage service time NextServiceTi of each load is sent to the corresponding clients, and each load's NextServiceTi is stored in ServiceTi,
and if all loads have finished executing, the procedure terminates; otherwise execution continues from step (1).
4. The performance requirement description and transformation method supporting metadata I/O service quality in a parallel file system according to claim 1, characterized in that: in step 3 the client obtains the child-operation information of the metadata I/O operations in the load by analysing them, the child-operation information including the number of child-operations and the data volume corresponding to each child-operation, and a child-operation delay controller further converts the received next-stage operation average delay target into the delay targets of the different child-operations,
wherein the child-operation delay targets are determined as follows:
(1) analysis determines the operation type, the number j of the child-operation and the number m of child-operations corresponding to this operation type;
(2) the delay target of the child-operation is determined, distinguishing two cases:
(a) if the child-operation is the first child-operation of operation i, its delay target is derived from the operation-level delay target Di;
(b) if it is not the first child-operation of operation i, the time already spent by the completed child-operations of operation i is needed; because this time may exceed the delay target Di of the operation, two cases are distinguished:
(b1) if the time spent by the completed child-operations is less than the delay target Di of the operation, the delay target of the child-operation is derived from the remaining delay budget, where tij denotes the time actually spent by the j-th child-operation of operation i;
(b2) if the time spent by the completed child-operations is greater than or equal to the delay target Di of the operation, i.e. the delay of the operation already exceeds Di, the delay targets of the subsequent child-operations are set to the preset small value MinorTime.
CN201310156737.8A 2013-04-28 2013-04-28 Description and transformation method supporting metadata I/O service quality performance requirement in parallel file system Active CN103226607B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310156737.8A CN103226607B (en) 2013-04-28 2013-04-28 Description and transformation method supporting metadata I/O service quality performance requirement in parallel file system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310156737.8A CN103226607B (en) 2013-04-28 2013-04-28 Description and transformation method supporting metadata I/O service quality performance requirement in parallel file system

Publications (2)

Publication Number Publication Date
CN103226607A CN103226607A (en) 2013-07-31
CN103226607B true CN103226607B (en) 2017-04-26

Family

ID=48837052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310156737.8A Active CN103226607B (en) 2013-04-28 2013-04-28 Description and transformation method supporting metadata I/O service quality performance requirement in parallel file system

Country Status (1)

Country Link
CN (1) CN103226607B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100583065C (en) * 2006-11-22 2010-01-20 国际商业机器公司 System and method for providing high performance scalable file I/O
US20090019052A1 (en) * 2007-07-12 2009-01-15 International Business Machines, Corporation Providing file system availability during local path failure of a non-server node
CN102932424A (en) * 2012-09-29 2013-02-13 浪潮(北京)电子信息产业有限公司 Method and system for synchronizing data caching of distributed parallel file system
CN102929958A (en) * 2012-10-10 2013-02-13 无锡江南计算技术研究所 Metadata processing method, agenting and forwarding equipment, server and computing system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Bin Dong et al., "A dynamic and adaptive load balancing strategy for parallel file system with large-scale I/O servers", Journal of Parallel and Distributed Computing, vol. 72, no. 10, Oct. 2012, pp. 1254-1268 *
Bin Dong et al., "A New File-Specific Stripe Size Selection Method for Highly Concurrent Data Access", 2012 ACM/IEEE 13th International Conference on Grid Computing, Nov. 2012, pp. 22-30 *
Bin Dong et al., "Self-acting Load Balancing with Parallel Sub File Migration for Parallel File System", Computational Science and Optimization (CSO), 2010 Third International Joint Conference on, May 2010, vol. 2, pp. 317-321 *

Also Published As

Publication number Publication date
CN103226607A (en) 2013-07-31

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant