CN102681889B - Scheduling method of cloud computing open platform

Info

Publication number: CN102681889B
Application number: CN201210128627.6A
Authority: CN (China)
Inventors: 唐雪飞, 陈科, 王威
Assignee: University of Electronic Science and Technology of China
Filing date: 2012-04-27
Publication date: 2015-01-07
Other versions: CN102681889A
Legal status: Active (granted)
Prior art keywords: thread, node, server, round, service
Abstract

The invention discloses a scheduling method for a cloud computing open platform. A central server is maintained to monitor the call requests of a large number of users; the business services and data services of a cluster of servers are scheduled dynamically; and detachable service components are invoked while the multithreading capacity of multi-core processors is scheduled and allocated according to each user's usage level. The method effectively overcomes the traditional open platform's lack of extensibility and flexibility, achieves the two goals of rapidly constructing and deploying an application runtime environment and dynamically adjusting it, and makes maximal use of existing equipment to build services. At the same time, by adopting multi-core computing and multithreading techniques, the scheduling speed and response capability of system services are improved.

Description

A scheduling method for a cloud computing open platform
Technical field
The invention belongs to the technical field of computerized information analysis and data processing, and specifically relates to a scheduling method for a cloud computing open platform.
Background art
Cloud computing is one of the current research focuses of businesses and research institutions at home and abroad. It is the development of grid computing, parallel computing and distributed computing, and is an emerging commercial computation model. It employs mature virtualization technology to package the resources of data centers as on-demand services for users on the Internet. Job scheduling and resource allocation are two key technologies of cloud computing: the commercial nature of cloud computing makes it pay close attention to the quality of service delivered to users, while its virtualization technology makes resource allocation and job scheduling different from previous parallel and distributed computation. Job scheduling and resource allocation have been extensively studied in grid computing, distributed computing and parallel computing, but traditional research focuses on efficiency; the fairness of resource allocation is also an important aspect affecting the quality of service for users, the load balance of the system, and the efficiency of task completion. In particular, cloud computing is a commercial implementation whose purpose is to provide different users with services, computing power, storage and so on, so it needs to focus more on satisfying users' demands.
Developing together with cloud computing is the Web open platform. As a new network service mode, a Web open platform first provides a basic service and then, by opening its own interfaces, helps third-party developers produce new applications by using and assembling those interfaces together with other third-party service interfaces, ensuring that such applications run uniformly on the platform. When users use the open platform, they can complete multiple network activities more intensively, and the platform provides various services and related guarantees for them. The basic service of an open platform can be an existing one, such as a portal or a blog, or an emerging one, such as customer relationship management.
1. Multi-threaded parallelism and mutual exclusion techniques
Each running program in the system is a process, and each process comprises one or more threads. A process may also be the dynamic execution of a whole program or of a subprogram. A thread is a set of instructions, or a particular segment of a program, that can execute independently within the program; it can also be interpreted as the context in which code runs. A thread is thus essentially a lightweight process responsible for executing multiple tasks within a single program, and the scheduling and execution of multiple threads is usually the responsibility of the operating system.
When multiple threads run in parallel, a thread pool is adopted to manage them. A thread pool is a form of multi-thread processing: tasks are added to a queue and started automatically once threads have been created, and pool threads are all background threads. Each thread uses the default stack size, runs at the default priority, and resides in the multithreaded apartment. If a thread is idle in managed code (for example, waiting for an event), the thread pool inserts another worker thread to keep all processors busy. If all pool threads remain busy while pending work accumulates in the queue, the pool will, after some time, create another worker thread, but the number of threads never exceeds the maximum; tasks beyond the maximum queue up and wait until other threads finish. A minimal sketch of this behaviour follows.
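As a concrete illustration of the pooling behaviour just described, the following minimal Python sketch queues more tasks than there are pool threads; the queued tasks wait until a worker becomes free, so the thread count never exceeds the configured maximum. The handler and pool size are illustrative, not part of the invention.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(request_id: int) -> str:
    # Simulate servicing one user request.
    time.sleep(0.1)
    return f"request {request_id} done"

# A pool with a fixed maximum: the 10 submitted tasks are queued and
# executed by at most 4 background worker threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(handle_request, i) for i in range(10)]
    for f in futures:
        print(f.result())
```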
Under a multi-thread environment, semaphores are adopted to complete mutual exclusion and synchronization between threads. A semaphore is a shared global data structure used under a multi-thread environment, which can ensure that two or more critical code sections are not invoked concurrently. Before entering a critical code section, a thread must obtain a semaphore; once the critical section completes, the thread must release the semaphore, and any other thread wanting to enter that critical section must wait until the first thread releases it. To complete this process, the semaphore is created first, and then the Acquire Semaphore and Release Semaphore operations are placed at the beginning and end of each critical code section respectively, making sure that they refer to the semaphore initially created, as sketched below.
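The following sketch shows this acquire/release discipline, using Python's threading.Semaphore as a stand-in for the platform's semaphore primitive; the shared counter and thread counts are illustrative.

```python
import threading

counter = 0
sem = threading.Semaphore(1)  # binary semaphore guarding the critical section

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        sem.acquire()          # Acquire Semaphore: block while another thread holds it
        try:
            counter += 1       # critical code section: shared data access
        finally:
            sem.release()      # Release Semaphore: let a waiting thread proceed

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 40000: the semaphore prevents lost updates
```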
2. Multiprocessor task allocation and load-balancing techniques
Multiprocessor task allocation and load balancing dynamically assign and adjust processing tasks in an environment with multiple processor cores, thereby improving processor utilization and realizing the scheduling response of multi-threaded parallelism.
The scheduling model of a multiprocessor system mainly comprises three parts: the processor model, the task model and the scheduling algorithm. The processor model describes information such as processor structure and processing capacity, and the task model describes the information required by the tasks awaiting scheduling. Suppose a multiprocessor system consists of m processors (P_1, P_2, ..., P_m) and K resources Res = {r_1, r_2, ..., r_K}. The processing capacity C_i (i = 1, 2, ..., m) represents the amount of work processor P_i can process within unit time, and the processing capacity of the multiprocessor system is defined as: TPC = Σ_{i=1}^{m} C_i.
The multiprocessor system adopts a centralized scheduling mechanism: a scheduler assigns all tasks to the processors; each processor has its own scheduling queue and selects tasks from that queue for processing; communication between the scheduler and each processor is realized through the scheduling queues.
For a multiprocessor system, the scheduling success rate, average response time, processor utilization and schedule length are used as the performance indexes for evaluating a scheduling algorithm:
1) Scheduling success rate (Success Rate, SR): the ratio of the number of task sets N' successfully scheduled by the algorithm to the number of task sets N submitted for scheduling: SR = N'/N.
2) Average response time (Average Response Time, ART): the mean difference between the time at which each task in the set starts being processed and its arrival time. If task S_i arrives at time t_i and starts being processed at time st_i, its response time is rt_i = st_i - t_i. If the task set contains n tasks, the average response time is ART = (1/n) Σ_{i=1}^{n} rt_i. The smaller the ART of a task set, i.e. the shorter the waiting time, the better the performance of the scheduling algorithm.
3) Schedule length (Schedule Length, SL): the difference between the finish time of the last task and the earliest task arrival time. The finish time is ft(i) = st(i) + pt(i), where pt(i) is the processing time of task i; the schedule length is then SL = max(ft(i)) - min(t(i)).
Under the premise of successful scheduling, a smaller schedule length means that tasks are finished earlier and the average utilization of the processors is higher.
4) Processor utilization (Utilization Rate, UR): the ratio of the time a processor spends executing tasks to the schedule length. If S_j tasks in task set S are responded to by processor P_j (j = 1, 2, ..., m), the utilization of P_j is: UR_j = (Σ_{i∈S_j} pt(i)) / SL.
The average utilization of all processors (Average Utilization Rate, AUR) can then be expressed as: AUR = (1/m) Σ_{j=1}^{m} UR_j. A sketch computing these indexes for a single processor follows.
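A small sketch, assuming a single processor and made-up task data, that computes ART, SL and UR exactly as defined above:

```python
# Each task is (arrival time t, start time st, processing time pt).
tasks = [(0.0, 0.5, 2.0), (1.0, 2.5, 1.0), (2.0, 3.5, 3.0)]

n = len(tasks)
art = sum(st - t for (t, st, pt) in tasks) / n        # ART: mean of rt_i = st_i - t_i
sl = (max(st + pt for (t, st, pt) in tasks)
      - min(t for (t, st, pt) in tasks))              # SL = max(ft(i)) - min(t(i))
busy = sum(pt for (t, st, pt) in tasks)               # time spent executing tasks
ur = busy / sl                                        # UR for this single processor
print(f"ART={art:.2f}  SL={sl:.2f}  UR={ur:.2f}")
```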
Generally, a cloud computing task scheduling system is similar to a grid scheduling system: according to resource status information and predictions, the set of user tasks is mapped onto the set of resources and executed under a certain scheduling strategy, and the results are returned to the user. Although such methods can use multiple servers for task processing, they do not take into account the running state and performance parameters of each server, and therefore cannot make full use of the processing capacity of individual servers. Moreover, no single scheduling method can make all of the above indexes optimal at the same time, so under the premise of guaranteeing the scheduling success rate, the other indexes should be improved as much as possible so that the overall performance of the system is optimized.
Summary of the invention
In order to overcome the shortcomings of existing scheduling algorithms in processing efficiency, the present invention proposes a scheduling method for a cloud computing open platform.
The technical scheme of the present invention is a scheduling method for a cloud computing open platform, comprising the following steps:
Step 1: the central server constructs a number of parallel threads, constructs a thread pool, and then puts the constructed threads into the thread pool;
Step 2: the registration center server creates and starts a daemon thread, and the main thread initializes global variables; the global variables should at least comprise a semaphore, a child-thread state set and a result set, used for child-thread synchronization and for recording query records;
Step 3: the load balancer running on the central server initializes its communication port and then waits for connections from front-end web proxy server processes; when a server connection request arrives, the load balancer generates a thread to communicate with that server and continues waiting for connection requests from other servers; when a new user requests service, the load balancer selects from the information table the server with the smallest load, i.e. the largest load weight, to serve it;
Step 4: the central server processes the requests in the thread pool and calls the services of the application servers; it calls the multi-core parallel processors of the application server cluster according to user rights, completing the distribution of processes and threads to processor nodes;
Step 5: the central server collects the data of the database servers and processes the distributed data;
Step 6: the central server calls the cluster services, integrates the mass data and returns it to the web proxy server for the caller; at the same time it gives the thread serving the current request back to the thread pool, continues to listen for user requests, and calls idle threads from the thread pool.
The above Step 1 specifically comprises the following substeps:
1) Create a concurrent thread model, taking the FORK-JOIN structure as the model;
2) Put the threads FORKed by the daemon thread into the thread pool, which is responsible for the life-cycle management of the threads;
3) Under the multi-threaded parallel processing environment, adopt a variable mutex as the mutual exclusion quantity, for realizing the synchronization and mutual exclusion of resource access under the multi-thread environment. The working process of this mutex is as follows: when requesting a resource represented by the mutex, the process first reads the value of the mutex to judge whether the corresponding resource is available; when the value of the mutex is greater than 0, there are resources available to request; when it equals 0, there are no available resources and the process enters a sleep state until resources become available; when a process no longer uses a shared resource controlled by the semaphore, the value of the mutex is increased by 1.
The load balancing of the load balancer in the above Step 3 is specifically: when a node of the cloud computing open platform comes into operation for the first time, an initial weight SW(N_i) is set; as the node load changes, the balancer adjusts the weight, which is calculated dynamically at run time from the parameters of each aspect of the node. The weight of node N_i is described by the following formula:
SW(N_i) = K_0*L_CPU(N_i) + K_1*L_Memory(N_i) + K_2*L_Process(N_i) + K_3*L_IO(N_i) + K_4*L_Response(N_i)
where K_0, K_1, K_2, K_3 and K_4 are constant coefficients, L_CPU(N_i) is the CPU usage of node N_i, L_Memory(N_i) is the memory usage of node N_i, L_Process(N_i) is the access rate of node N_i, L_IO(N_i) is the disk I/O occupancy of node N_i, and L_Response(N_i) is the process response time of node N_i; L_CPU(N_i) = 1 - P_CPU(N_i), where P_CPU(N_i) represents the CPU utilization of node N_i.
The above Step 4 specifically comprises the following substeps:
1) Establishment of the task assignment model:
Suppose the application server cluster comprises N_node processing nodes, the concurrent program to be allocated has N_proc processes, and process P_i comprises M_i threads; the total number of threads of the concurrent program is N_thread = Σ_{k=0}^{N_proc-1} M_k.
The concurrent program to be allocated is expressed as a task relation graph, specifically an undirected graph G = (V, E), where V is the set {V_i} of application server nodes in the application server cluster, and node V_i corresponds to the two-tuple <T_i, P_i>; E is the set {E_ij} of undirected edges; an edge E_ij ∈ E connecting nodes V_i and V_j represents communication or shared data between threads T_i and T_j, and the edge weight W_ij represents how frequently the two threads communicate or share data.
2) Carry out two rounds of operation. The first round completes the distribution by the central server of the responding processes to the cluster servers; the second round completes the distribution of the threads within each processing server node to the processor cores. Each round comprises several iterations, and the concrete process is as follows: starting from the input initial task relation graph, each iteration selects the edge with the maximum weight and merges the two vertices of that edge; the number of threads contained in the newly generated node must be less than or equal to a threshold. This process is repeated until the number of nodes in the task relation graph equals the number of unallocated threads in the central server's current thread pool (a sketch of this select-merge heuristic is given after the substeps below). The thresholds used by the first and second rounds are calculated respectively as:
Threshold_first_round = Max([N_thread / N_proc], M_max) × (1 + α), 0 < α < 1
Threshold_second_round = [Threshold_first_round / N_core]
where Threshold_first_round represents the threshold on the processing node count for the first-round operation, Threshold_second_round represents the threshold on the composite node count for the second-round operation, [] denotes the rounding operation, M_max is the maximum number of threads a process owns, N_core is the number of processor cores, Max() compares the parameters passed in and takes the maximum, and α is a percentage value representing the trade-off between balancing load and reducing communication. This is specifically divided into the following substeps:
2a) The first round of operation: the initial task relation graph is preliminarily divided in units of individual servers, i.e. the number of composite nodes in the graph equals the number of processes, each composite node corresponds to one process, and the threads contained in a composite node all belong to that process; the termination condition of the first round is that the number of composite nodes in the graph is ≤ Threshold_first_round; at the end, each composite node in the graph is a subgraph corresponding to one processing node, and the threads contained in the subgraph are assigned to that processing node;
2b) Each subgraph produced by the first-round division undergoes the second round of operation, i.e. the initial task relation graph of the second round is a subgraph obtained by the first round; the termination condition of the second round is that the number of composite nodes to be scheduled in the graph's resource pool is ≤ Threshold_second_round; at the end, each composite node in the graph corresponds to one processor core, and the threads of the central server's thread pool contained in it are assigned to that processor core;
2c) The target IP address of the request message is analyzed and load balancing is performed accordingly: when the servers' load is balanced, requests with the same target IP address are scheduled to the same node. Specifically, first find the node that most recently used the target IP address; if this service node is available and not overloaded, the load-balancing server sends the user request to this service node; if the service node does not exist, or it is overloaded while some service node is at half of its workload, then each service node is polled, the service node with the fewest links is selected, and the request is sent to that node.
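The select-merge heuristic of step 2) can be sketched as follows; the graph, weights and threshold figures are illustrative assumptions, and Python's round() stands in for the patent's rounding operation [].

```python
from collections import defaultdict

def merge_round(nodes, edges, target_count, threshold):
    """Repeatedly merge the two endpoints of the heaviest edge while the
    merged node stays within `threshold` threads, until only
    `target_count` composite nodes remain."""
    # nodes: {node_id: thread_count}; edges: {(a, b): weight} with a < b
    while len(nodes) > target_count:
        candidates = [(w, a, b) for (a, b), w in edges.items()
                      if nodes[a] + nodes[b] <= threshold]
        if not candidates:
            break                              # no admissible merge left
        _, a, b = max(candidates)              # edge with the maximum weight
        nodes[a] += nodes.pop(b)               # merge vertex b into vertex a
        new_edges = defaultdict(int)
        for (x, y), w in edges.items():
            x, y = (a if x == b else x), (a if y == b else y)
            if x != y:
                new_edges[tuple(sorted((x, y)))] += w   # accumulate parallel edges
        edges = dict(new_edges)
    return nodes, edges

# Hypothetical figures: 8 threads across 2 processes, M_max = 5, 4 cores, alpha = 0.25.
n_thread, n_proc, m_max, n_core, alpha = 8, 2, 5, 4, 0.25
thr1 = max(round(n_thread / n_proc), m_max) * (1 + alpha)   # first-round threshold
thr2 = round(thr1 / n_core)                                 # second-round threshold

nodes = {i: 1 for i in range(8)}                            # one thread per initial vertex
edges = {(0, 1): 9, (1, 2): 4, (2, 3): 7, (3, 4): 1,
         (4, 5): 8, (5, 6): 3, (6, 7): 6}
print(thr1, thr2, merge_round(nodes, edges, target_count=n_proc, threshold=thr1))
```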
Beneficial effects of the present invention: the scheduling method of the invention maintains a central server to monitor the call requests of a large number of users and dynamically schedules the business services and data services of the cluster servers, while calling detachable service components and scheduling and allocating the multithreading capacity of multi-core processors according to each user's usage level. The method effectively overcomes the traditional open platform's lack of extensibility and flexibility, achieves the two goals of rapidly constructing and deploying an application runtime environment and dynamically adjusting it, and makes maximal use of existing equipment to build services. Through multi-core computing and multithreading, the scheduling speed and response capability of system services are improved; in addition, modular detachable services are provided, so that the services of users with different rights can be isolated and allocated corresponding computing and processing capacity, supporting diversified services for different users.
Description of the drawings
Fig. 1 is a schematic diagram of the platform on which the scheduling method of the present invention is based.
Embodiment
The present invention is further described below in conjunction with the drawings and specific embodiments.
The platform on which the scheduling method of the present invention is based is shown in Fig. 1 and described as follows:
Web proxy server: collects the requests of users on the Internet and provides a queue-scheduling buffer for services.
Central server: this module manages user registration in a unified way, allocates the server executive processes used to respond to user requests, assigns rights to service processes, and coordinates the calling of services and data.
Database server cluster: comprises multiple database servers for storing the mass data produced by users; it possesses high egress bandwidth and large memory, is equipped with high-capacity external storage, and supports large-scale data storage.
Application server cluster: comprises multiple application servers for storing the service images of each system, with high egress bandwidth, to be dispatched by the central server.
The method of the invention comprises the following steps:
Step 1: the registration center server constructs a number of parallel threads, constructs a thread pool, and then puts the constructed threads into the thread pool; this construction can be understood as initialization performed when the system starts.
This step can be realized by the following substeps:
1) Create a concurrent thread model, taking the FORK-JOIN structure as the model. In the FORK-JOIN concurrency model, a FORK statement spawns a new concurrent thread path, which is ended with a JOIN statement. After the original thread and the newly spawned threads have all reached the JOIN statement, the code continues to execute in a sequential manner. If more concurrent threads are needed, more FORK statements are executed (a sketch follows these substeps).
2) Put the threads FORKed by the daemon thread into the thread pool, which is responsible for the life-cycle management of the threads.
3) Under the multi-threaded parallel processing environment, adopt a variable mutex as the mutual exclusion quantity to realize the synchronization and mutual exclusion of resource access under the multi-thread environment. The working principle of this mutex is as follows: when requesting a resource represented by the mutex, a process needs to first read the value of the mutex to judge whether the corresponding resource is available. When the value of the mutex is greater than 0, there are resources available to request. When it equals 0, there are no available resources and the process enters a sleep state until resources become available. When a process no longer uses a shared resource controlled by the semaphore, the value of the mutex is increased by 1. Increment and decrement operations on the semaphore's value are atomic, which guarantees mutual exclusion and synchronization of access to the maintained resources among multiple processes.
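A minimal FORK-JOIN sketch in Python, with threading.Thread standing in for the FORK statement and Thread.join() for the JOIN statement; the worker function is illustrative.

```python
import threading

def fork(target, *args) -> threading.Thread:
    # FORK: spawn a new concurrent thread path.
    t = threading.Thread(target=target, args=args)
    t.start()
    return t

def work(name: str) -> None:
    print(f"thread path {name} running")

# FORK two new paths; the original thread continues in parallel.
t1 = fork(work, "A")
t2 = fork(work, "B")
work("main")

# JOIN: only after the original thread and the forked threads all
# reach this point does the code continue sequentially.
t1.join()
t2.join()
print("all paths joined; sequential execution resumes")
```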
Step 2: the registration center server creates and starts a daemon thread, and the main thread initializes the global variables. The global variables should at least comprise a semaphore, a child-thread state set and a result set, used for child-thread synchronization and for recording query records.
Step 3: the load balancer LoadBalance waits for connections from front-end proxy server (Proxy Server) processes after initializing its communication port. When a server connection request arrives, it generates a thread to communicate with that server, and LoadBalance continues to wait for connection requests from other servers. Each thread receives and records the performance and load information of one Web server; in the cluster environment, LoadBalance generates as many threads as there are Web server nodes to communicate with them. In each computation period, the service information list therefore accurately reflects the load and performance information of all Web servers. When a new user requests service, LoadBalance selects from the information table the server with the smallest load, i.e. the largest load weight, to serve it.
The concrete load balancing is as follows:
When a node of the system is put into operation for the first time, an initial weight SW(N_i) is set; as the node load changes, the balancer adjusts the weight, which is calculated dynamically at run time from the parameters of each aspect of the node. Combining the current weight of each node, the new weight can be calculated. The purpose of the dynamic weight is to reflect the node load situation correctly, in order to predict the possible future load change of the node.
To make it convenient to adjust the proportions of the parameters for different applications during system operation, a constant coefficient K_i is set for each parameter to represent the importance of each load parameter, where ΣK_i = 1. Therefore, the weight formula of any node N_i can be described as:
SW(N_i) = K_0*L_CPU(N_i) + K_1*L_Memory(N_i) + K_2*L_Process(N_i) + K_3*L_IO(N_i) + K_4*L_Response(N_i)
where K_0, K_1, K_2, K_3 and K_4 are constant coefficients used to distinguish the performance of servers of different models, and can be decided concretely according to the configuration of the servers: K_0 corresponds to the server's CPU, K_1 to its memory, K_2 to its number of network connection processes, K_3 to its disk I/O access speed, and K_4 to its number of processes. These constant coefficients are relative quantities. For example, if the CPU frequency of server A is 2 GHz and that of server B is dual-core 3 GHz (6 GHz in total), then the K_0 of server A : the K_0 of server B = 1:3; K_1, K_2, K_3 and K_4 are set similarly to K_0 and are not illustrated again. Setting K_0, K_1, K_2, K_3 and K_4 in this way according to the concrete situation of the servers achieves maximized dispatching under the premise of guaranteeing a high utilization rate.
L_CPU(N_i) is the CPU usage of node N_i, L_Memory(N_i) is the memory usage of node N_i, L_Process(N_i) is the access rate of node N_i, L_IO(N_i) is the disk I/O occupancy of node N_i, and L_Response(N_i) is the process response time of node N_i; L_CPU(N_i) = 1 - P_CPU(N_i), where P_CPU(N_i) represents the CPU utilization of node N_i. A sketch of the weight computation follows.
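A sketch of the weight computation under assumed coefficients and metric values (the patent leaves their calibration to the server configuration); the balancer then picks the node with the maximum weight:

```python
# Constant coefficients K0..K4; illustrative values summing to 1.
K = [0.3, 0.2, 0.2, 0.15, 0.15]

def weight(node: dict) -> float:
    # SW(Ni) = K0*L_CPU + K1*L_Memory + K2*L_Process + K3*L_IO + K4*L_Response
    metrics = [
        1.0 - node["p_cpu"],   # L_CPU = 1 - P_CPU
        node["l_memory"],
        node["l_process"],
        node["l_io"],
        node["l_response"],
    ]
    return sum(k * m for k, m in zip(K, metrics))

nodes = {
    "N1": {"p_cpu": 0.7, "l_memory": 0.5, "l_process": 0.4, "l_io": 0.3, "l_response": 0.6},
    "N2": {"p_cpu": 0.2, "l_memory": 0.3, "l_process": 0.2, "l_io": 0.1, "l_response": 0.9},
}
best = max(nodes, key=lambda n: weight(nodes[n]))   # node with the maximum weight
print(best, {n: round(weight(m), 3) for n, m in nodes.items()})
```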
Step 4: the central server processes the requests in the thread pool and calls the service interfaces of the application servers. The multi-core parallel processors of the cluster servers are called according to user rights, to complete the distribution of processes and threads to processor nodes:
1) Establishment of the task assignment model:
Suppose the server cluster comprises N_node processing nodes, the concurrent program to be allocated has N_proc processes, and process P_i comprises M_i threads; the total number of threads of the concurrent program is N_thread = Σ_{k=0}^{N_proc-1} M_k.
The concurrent program to be allocated can be expressed as a task relation graph, which is an undirected graph G = (V, E), where V is the set {V_i} of application server nodes in the application server cluster, and node V_i corresponds to the two-tuple <T_i, P_i>, in which T_i is the thread number corresponding to the node and P_i is the process ID number, i.e. the pid used in the operating system to identify the process; the pid is used here to represent the instantiated process. E is the set {E_ij} of undirected edges; an edge E_ij ∈ E connecting nodes V_i and V_j represents communication or shared data between threads T_i and T_j, and the edge weight W_ij represents how frequently the two threads communicate or share data.
2) Carry out two rounds of operation. The first round completes the distribution by the central server of the responding processes to the cluster servers; the second round completes the distribution of the threads within each child-server processing node to the processor cores. Each round comprises several iterations, whose process is: starting from the input initial task relation graph, each iteration selects the edge with the maximum weight and merges the two vertices of that edge, i.e. merges two composite nodes into a new composite node. To satisfy the load balancing condition, the number of threads contained in the newly generated composite node must be less than or equal to a threshold. This "select-merge" process is repeated until the number of composite nodes in the task relation graph equals the number of threads to be distributed from the central server's thread pool. The thresholds used by the first and second rounds are calculated respectively as:
Threshold_first_round = Max([N_thread / N_proc], M_max) × (1 + α), 0 < α < 1
Threshold_second_round = [Threshold_first_round / N_core]
where Threshold_first_round represents the threshold on the processing node count for the first-round operation, Threshold_second_round represents the threshold on the composite node count for the second-round operation, [] denotes the rounding operation, M_max is the maximum number of threads a process owns, N_core is the number of processor cores, Max() compares the parameters passed in and takes the maximum, and α is a percentage value representing the trade-off between balancing load and reducing communication. This is specifically divided into the following substeps:
2a) The first round of operation: the initial task relation graph is preliminarily divided in units of individual servers, i.e. the number of composite nodes in the graph equals the number of processes, each composite node corresponds to one process, and the threads contained in a composite node all belong to that process; the termination condition of the first round is that the number of composite nodes in the graph is ≤ Threshold_first_round; at the end, each composite node in the graph is a subgraph corresponding to one processing node, and the threads contained in the subgraph are assigned to that processing node;
2b) Each subgraph produced by the first-round division undergoes the second round of operation, i.e. the initial task relation graph of the second round is a subgraph obtained by the first round; the termination condition of the second round is that the number of composite nodes to be scheduled in the graph's resource pool is ≤ Threshold_second_round; at the end, each composite node in the graph corresponds to one processor core, and the threads of the central server's thread pool contained in it are assigned to that processor core. Here the threads of the central server's thread pool include the created processes, and the processes are distributed according to the calculated node threshold Threshold_second_round.
2c) The target IP address of the request message is analyzed and load balancing is performed accordingly. When the servers' load is balanced, requests with the same target IP address are scheduled to the same node, which improves the access locality and main-memory cache hit rate of each service node, thereby improving the processing capacity of the whole cluster system. The concrete implementation steps are: first find the node that most recently used the target IP address; if this service node is available and not overloaded, the load-balancing server sends the user request to this service node; if the service node does not exist, or it is overloaded while some service node is at half of its workload, then each service node is polled, the service node with the fewest links is selected, and the request is sent to that node (a sketch follows).
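The IP-affinity dispatch of substep 2c) can be sketched as below; the overload test is a stand-in for the patent's half-workload rule, and all node figures are illustrative.

```python
ip_affinity: dict[str, str] = {}   # target IP -> node that served it most recently

def pick_node(target_ip: str, nodes: dict[str, dict]) -> str:
    """Reuse the node that last served this target IP unless it is gone or
    overloaded; otherwise poll all nodes and pick the least-connected one."""
    node = ip_affinity.get(target_ip)
    if node in nodes:
        n = nodes[node]
        # Stand-in for 'overloaded while some node sits at half its workload'.
        others_light = any(m["conns"] < n["conns"] / 2
                           for k, m in nodes.items() if k != node)
        if n["conns"] < n["capacity"] and not others_light:
            return node
    node = min(nodes, key=lambda k: nodes[k]["conns"])  # least-connection polling
    ip_affinity[target_ip] = node
    return node

nodes = {"s1": {"conns": 10, "capacity": 100}, "s2": {"conns": 2, "capacity": 100}}
print(pick_node("10.0.0.7", nodes))   # least-connection pick: s2
print(pick_node("10.0.0.7", nodes))   # affinity: s2 again
```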
Step 5: the central server collects the data of the database servers and processes the distributed data.
The concrete steps are as follows: suppose the database server cluster comprises N_node sub-processing nodes. The central server polls each server and assigns each a number, D_0 through D_{N_node-1}; each odd-numbered server transfers its data to the even-numbered server preceding it, and the data of the two servers are then merged, e.g. merging <D_0, D_1>, <D_2, D_3>, and so on. The servers that received data are then renumbered and polled again, with odd-numbered servers again transferring their data to the preceding even-numbered server, followed by merging of the two servers' data. This continues round after round until all the data are integrated on D_0, as sketched below.
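A sketch of this pairwise merge, with lists standing in for each server's data partition and sorting standing in for the merge operation:

```python
def gather(servers: list[list[int]]) -> list[int]:
    """In each polling round, every odd-numbered server hands its data to the
    preceding even-numbered server, which merges the two; the survivors are
    renumbered and the round repeats until D0 holds all the data."""
    data = [list(s) for s in servers]
    while len(data) > 1:
        merged = []
        for i in range(0, len(data), 2):
            pair = data[i] + (data[i + 1] if i + 1 < len(data) else [])
            merged.append(sorted(pair))    # merge <D_i, D_{i+1}>
        data = merged                      # renumber: survivors become D0, D1, ...
    return data[0]

# Four database servers D0..D3 holding partitioned data.
print(gather([[3, 9], [1, 4], [7], [2, 8]]))  # -> [1, 2, 3, 4, 7, 8, 9]
```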
Step 6: the central server calls the cluster service (Service Handle), integrates the mass data and returns it to the web proxy server for the caller, while giving the thread serving the current request back to the thread pool, continuing to listen for user requests, and calling idle threads from the thread pool.
The scheduling method of the present invention maintains a user center and a central server to monitor the call requests of a large number of users; when facing large-scale user requests it dynamically adjusts the load strategy, achieving dynamic task scheduling and improving the response time of the whole system. At the same time it dynamically schedules the business services and data services of the cluster servers, calls detachable service components and schedules and allocates the multithreading capacity of multi-core processors according to each user's usage level. In this way the method effectively overcomes the traditional open platform's lack of extensibility and flexibility while achieving the two goals of rapidly constructing and deploying an application runtime environment and dynamically adjusting it; through multi-core computing and multithreading, the scheduling speed and response capability of system services are improved; in addition, modular detachable services are provided, so that the services of users with different rights can be isolated and allocated corresponding computing and processing capacity, supporting diversified services for different users.
Those of ordinary skill in the art will appreciate that the embodiments described here are intended to help the reader understand the principle of the present invention, and it should be understood that the protection scope of the invention is not limited to these particular statements and embodiments. Those of ordinary skill in the art can make various other concrete variations and combinations that do not depart from the essence of the invention according to the technical teachings disclosed herein, and such variations and combinations remain within the protection scope of the invention.

Claims (3)

1. A scheduling method for a cloud computing open platform, characterized by comprising the following steps:
Step 1: the central server constructs a number of parallel threads, constructs a thread pool, and then puts the constructed threads into the thread pool;
Step 2: the registration center server creates and starts a daemon thread, and the main thread initializes global variables, which should at least comprise a semaphore, a child-thread state set and a result set, used for child-thread synchronization and for recording query records;
Step 3: the load balancer running on the central server initializes its communication port and then waits for connections from front-end web proxy server processes; when a server connection request arrives, the load balancer generates a thread to communicate with that server and continues waiting for connection requests from other servers; when a new user requests service, the load balancer selects from the information table the server with the smallest load, i.e. the largest load weight, to serve it;
the load balancing of the load balancer is specifically: when a node of the cloud computing open platform comes into operation for the first time, an initial weight SW(N_i) is set; as the node load changes, the balancer adjusts the weight, which is calculated dynamically at run time from the parameters of each aspect of the node; the weight of node N_i is described by the following formula:
SW(N_i) = K_0*L_CPU(N_i) + K_1*L_Memory(N_i) + K_2*L_Process(N_i) + K_3*L_IO(N_i) + K_4*L_Response(N_i)
where K_0, K_1, K_2, K_3 and K_4 are constant coefficients, L_CPU(N_i) is the CPU usage of node N_i, L_Memory(N_i) is the memory usage of node N_i, L_Process(N_i) is the access rate of node N_i, L_IO(N_i) is the disk I/O occupancy of node N_i, and L_Response(N_i) is the process response time of node N_i; L_CPU(N_i) = 1 - P_CPU(N_i), where P_CPU(N_i) represents the CPU idleness of node N_i;
Step 4: the central server processes the requests in the thread pool and calls the services of the application servers; it calls the multi-core parallel processors of the application server cluster according to user rights, completing the distribution of processes and threads to processor nodes;
Step 5: the central server collects the data of the database servers and processes the distributed data;
Step 6: the central server calls the cluster services, integrates the mass data and returns it to the web proxy server for the caller, while giving the thread serving the current request back to the thread pool, continuing to listen for user requests and calling idle threads from the thread pool.
2. The scheduling method of a cloud computing open platform according to claim 1, characterized in that Step 1 specifically comprises the following substeps:
1) creating a concurrent thread model, taking the FORK-JOIN structure as the model;
2) putting the threads FORKed by the daemon thread into the thread pool, which is responsible for the life-cycle management of the threads;
3) under the multi-threaded parallel processing environment, adopting a variable mutex as the mutual exclusion quantity, for realizing the synchronization and mutual exclusion of resource access under the multi-thread environment; the working process of this mutex is as follows: when requesting a resource represented by the mutex, a thread first reads the value of the mutex to judge whether the corresponding resource is available; when the value of the mutex is greater than 0, there are resources available to request; when it equals 0, there are no available resources and the thread enters a sleep state until resources become available; when a thread no longer uses a shared resource controlled by the semaphore, the value of the mutex is increased by 1.
3. The scheduling method of a cloud computing open platform according to claim 1, characterized in that Step 4 specifically comprises the following substeps:
1) establishment of the task assignment model:
suppose the application server cluster comprises N_node processing nodes D_0, D_1, ..., the concurrent program to be allocated has N_proc processes P_0, P_1, ..., and process P_i comprises M_i threads T_0, T_1, ...; the total number of threads of the concurrent program is N_thread = Σ_{k=0}^{N_proc-1} M_k;
the concurrent program to be allocated is expressed as a task relation graph, specifically an undirected graph G = (V, E), where V is the set {V_i} of application server nodes in the application server cluster, and node V_i corresponds to the two-tuple <T_i, P_i>; E is the set {E_ij} of undirected edges; an edge E_ij ∈ E connecting nodes V_i and V_j represents communication or shared data between threads T_i and T_j, and the edge weight W_ij represents how frequently the two threads communicate or share data;
2) carrying out two rounds of operation: the first round completes the distribution by the central server of the responding processes to the application server cluster; the second round completes the distribution of the threads within each processing server node to the processor cores; each round comprises several iterations, and the concrete process is as follows: starting from the input initial task relation graph, each iteration selects the edge with the maximum weight and merges the two vertices of that edge, where the number of threads contained in the newly generated node must be less than or equal to a threshold; this process is repeated until the number of nodes in the task relation graph equals the number of unallocated threads in the central server's current thread pool; the thresholds used by the first and second rounds are calculated respectively as:
Threshold_first_round = Max([N_thread / N_proc], M_max) × (1 + α), 0 < α < 1
Threshold_second_round = [Threshold_first_round / N_core]
where Threshold_first_round represents the threshold on the processing node count for the first-round operation, Threshold_second_round represents the threshold on the composite node count for the second-round operation, [] denotes the rounding operation, M_max is the maximum number of threads a process owns, N_core is the number of processor cores, Max() compares the parameters passed in and takes the maximum, and α is a percentage value representing the trade-off between balancing load and reducing communication; this is specifically divided into the following substeps:
2a) the first round of operation: the initial task relation graph is preliminarily divided in units of individual servers, i.e. the number of composite nodes in the graph equals the number of processes, each composite node corresponds to one process, and the threads contained in a composite node all belong to that process; the termination condition of the first round is that the number of composite nodes in the graph is ≤ Threshold_first_round; at the end, each composite node in the graph is a subgraph corresponding to one processing node, and the threads contained in the subgraph are assigned to that processing node;
2b) each subgraph produced by the first-round division undergoes the second round of operation, i.e. the initial task relation graph of the second round is a subgraph obtained by the first round; the termination condition of the second round is that the number of composite nodes to be scheduled in the graph's resource pool is ≤ Threshold_second_round; at the end, each composite node in the graph corresponds to one processor core, and the threads of the central server's thread pool contained in it are assigned to that processor core;
2c) the target IP address of the request message is analyzed and load balancing is performed accordingly: when the servers' load is balanced, requests with the same target IP address are scheduled to the same node; specifically, first find the node that most recently used the target IP address; if this service node is available and not overloaded, the load-balancing server sends the user request to this service node; if the service node does not exist, or it is overloaded while some service node is at half of its workload, then each service node is polled, the service node with the fewest links is selected, and the request is sent to that node.
CN201210128627.6A 2012-04-27 2012-04-27 Scheduling method of cloud computing open platform Active CN102681889B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201210128627.6A | 2012-04-27 | 2012-04-27 | Scheduling method of cloud computing open platform


Publications (2)

Publication Number | Publication Date
CN102681889A (en) | 2012-09-19
CN102681889B (en) | 2015-01-07




Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256515A (en) * 2008-03-11 2008-09-03 浙江大学 Method for implementing load equalization of multicore processor operating system
CN101546277A (en) * 2009-04-27 2009-09-30 华为技术有限公司 Multiple core processor platform and multiple core processor synchronization method
CN101916209A (en) * 2010-08-06 2010-12-15 华东交通大学 Cluster task resource allocation method for multi-core processor
CN102129390A (en) * 2011-03-10 2011-07-20 中国科学技术大学苏州研究院 Task scheduling system of on-chip multi-core computing platform and method for task parallelization
CN102184125A (en) * 2011-06-02 2011-09-14 首都师范大学 Load balancing method based on program behaviour online analysis under heterogeneous multi-core environment



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant