CN102681889A - Scheduling method of cloud computing open platform - Google Patents


Info

Publication number
CN102681889A
Authority
CN
China
Prior art keywords: node, thread, server, service, load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012101286276A
Other languages
Chinese (zh)
Other versions
CN102681889B (en)
Inventor
唐雪飞
陈科
王威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201210128627.6A
Publication of CN102681889A
Application granted
Publication of CN102681889B
Legal status: Active
Anticipated expiration

Landscapes

  • Computer And Data Communications (AREA)

Abstract

The invention discloses a scheduling method for a cloud computing open platform. In particular, a central server is maintained to monitor the call requests of a massive user population; the business services and data services of a cluster of servers are scheduled dynamically; and detachable service components are invoked while the multithreading capacity of multi-core processors is scheduled and allocated according to each user's usage level. The method effectively overcomes the traditional open platform's lack of extensibility and flexibility, achieves the two goals of rapidly constructing and deploying an application runtime environment and dynamically adjusting it, makes maximal use of existing equipment, and maximizes the services that can be built. At the same time, by adopting multi-core computing and multithreading techniques, it improves the scheduling speed and responsiveness of system services.

Description

A scheduling method for a cloud computing open platform
Technical field
The invention belongs to the technical field of computer information analysis and data processing, and specifically relates to a scheduling method for a cloud computing open platform.
Background technology
Cloud computing is currently one of the research focuses of commercial and scientific research institutions at home and abroad. It is a development of grid computing, parallel computing, and distributed computing, and is an emerging commercial computation model. It adopts mature virtualization technology to package the resources of data centers as on-demand services provided to users over the Internet. Job scheduling and resource allocation are two key technologies of cloud computing: the commercial nature of cloud computing makes it attend to users' quality of service, and its virtualization makes resource allocation and job scheduling different from the parallel distributed computing of the past. Job scheduling and resource allocation have been widely studied in grid computing, distributed computing, and parallel computing, but traditional research emphasizes efficiency, whereas the fairness of resource allocation also bears on users' quality of service, the system's load balance, and the efficiency with which tasks are completed. Cloud computing in particular is a commercial realization whose purpose is to provide services, computing power, storage, and so on to different users, so the satisfaction of user requests deserves greater attention.
The Web open platform has developed together with cloud computing. As a new mode of network service, a Web open platform first provides a basic service; then, by opening its own interfaces, it helps third-party developers produce new applications by using and assembling those interfaces together with other third-party service interfaces, and guarantees that such applications run uniformly on the platform. When users use an open platform, they can complete multiple network activities more intensively, while the platform provides various services and the related guarantees. The basic service of an open platform can be an existing one, such as a portal or a blog, or an emerging one, such as customer relationship management.
1. Multi-threaded parallelism and mutual exclusion techniques
Every program running in the system is a process, and each process comprises one or more threads. A process may also be the dynamic execution of a whole program or of a subprogram. A thread is a set of instructions, or a particular segment of a program, that can execute independently within the program; it can also be understood as the context in which code runs. A thread is thus essentially a lightweight process responsible for carrying out multiple tasks within a single program; the scheduling and execution of multiple threads is usually the responsibility of the operating system.
When multiple threads run in parallel, a thread pool is used to manage them. A thread pool is a form of multithreaded processing: tasks are added to a queue and started automatically once threads have been created. Thread pool threads are all background threads; each runs with the default stack size and default priority and belongs to the multithreaded apartment. If a thread is idle in managed code (for example, waiting for an event), the thread pool inserts another worker thread to keep all processors busy. If all thread pool threads remain busy while the queue still contains pending work, the thread pool creates another worker thread after a time, but the number of threads never exceeds the maximum value. Threads beyond the maximum queue up and start only after other threads finish.
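As a concrete illustration of the queue-plus-worker behavior described above, here is a minimal fixed-size thread pool sketched in Python; the class name SimpleThreadPool and the max_workers parameter are illustrative assumptions, not part of the patent.

```python
import queue
import threading

class SimpleThreadPool:
    """Minimal thread pool: tasks wait in a queue, background worker threads
    drain it, and the worker count never exceeds max_workers."""

    def __init__(self, max_workers=4):
        self.tasks = queue.Queue()
        for _ in range(max_workers):
            # thread pool threads are background (daemon) threads
            threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            func, args = self.tasks.get()   # idle workers block here
            try:
                func(*args)
            finally:
                self.tasks.task_done()

    def submit(self, func, *args):
        # excess tasks queue up and start only after other threads finish
        self.tasks.put((func, args))

pool = SimpleThreadPool(max_workers=4)
for i in range(10):
    pool.submit(print, "task", i)
pool.tasks.join()                           # wait until the queue drains
```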
In a multithreaded environment, semaphores are used to achieve mutual exclusion and synchronization between threads. A semaphore is a global data structure shared across a multithreaded environment, used to guarantee that two or more critical code sections are not invoked concurrently. Before entering a critical code section, a thread must obtain a semaphore; once the critical code section completes, the thread must release the semaphore, while any other thread that wants to enter the critical code section waits until the first thread releases it. To implement this process, one semaphore is created, and the acquire-semaphore and release-semaphore operations are placed at the beginning and end of each critical code section, respectively, making sure that these operations refer to the initially created semaphore.
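A minimal sketch of the acquire/release protocol just described, using Python's threading.Semaphore; initializing the count to 1 makes the semaphore admit one thread at a time into the critical section.

```python
import threading

sem = threading.Semaphore(1)   # value 1: one thread at a time in the critical section
counter = 0

def critical_work():
    global counter
    sem.acquire()              # obtain the semaphore before the critical code section
    try:
        counter += 1           # critical section: access to shared data
    finally:
        sem.release()          # release it so waiting threads may enter

threads = [threading.Thread(target=critical_work) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                 # always 8, thanks to mutual exclusion
```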
2. Multiprocessor task allocation and load balancing techniques
Multiprocessor task allocation and load balancing dynamically assign and adjust processing tasks in a multi-core processor environment, thereby improving processor utilization and achieving responsive scheduling of parallel threads.
The scheduling model of a multiprocessor system mainly comprises three parts: a processor model, a task model, and a scheduling algorithm. The processor model describes information such as the processor structure and processing capacity; the task model describes the information needed to schedule the tasks waiting to be processed. Suppose a multiprocessor system consists of m processors (P_1, P_2, ..., P_m) and K resources (Res = r_1, r_2, ..., r_K), and let the processing capacity C_i (i = 1, 2, ..., m) denote the amount of work processor P_i can process per unit time. The processing capacity of the multiprocessor system is then defined as TPC = Σ_{i=1}^{m} C_i.
The multiprocessor system adopts a centralized scheduling mechanism: a scheduler assigns all tasks to the processors; each processor has its own scheduling queue and selects tasks from that queue for processing; communication between the scheduler and each processor is realized through the scheduling queues. A minimal sketch of this mechanism follows.
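The sketch below assumes one scheduling queue per processor; the placement rule shown (shortest queue relative to capacity C_i) is an illustrative policy, not one prescribed by the text.

```python
import queue

class CentralScheduler:
    """Centralized scheduling: one scheduler assigns all tasks; each processor
    P_i owns a private scheduling queue and consumes only from that queue."""

    def __init__(self, capacities):
        self.capacities = capacities                      # C_i for each processor
        self.tpc = sum(capacities)                        # TPC = sum of the C_i
        self.queues = [queue.Queue() for _ in capacities]

    def dispatch(self, task):
        # Scheduler-to-processor communication goes through the queues.
        # Illustrative rule: place the task on the shortest queue relative
        # to the processor's capacity.
        i = min(range(len(self.queues)),
                key=lambda j: self.queues[j].qsize() / self.capacities[j])
        self.queues[i].put(task)

sched = CentralScheduler([2.0, 1.0, 1.0])   # m = 3 processors, TPC = 4.0
for task_id in range(6):
    sched.dispatch(task_id)
print([q.qsize() for q in sched.queues])    # [3, 2, 1]
```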
For a multiprocessor system, the scheduling success rate, average response time, processor utilization, and schedule length are used as the performance indices for evaluating a scheduling algorithm.
1) Scheduling success rate (Success Rate, SR): the ratio of the number of tasks successfully scheduled by the algorithm, N′, to the number of tasks submitted for scheduling, N: SR = N′ / N.
2) Average response time (Average Response Time, ART): the mean difference between the time at which each task in the set begins processing and its arrival time. If task S_i arrives at time t_i and begins processing at time st_i, its response time is rt_i = st_i - t_i. If the task set contains n tasks, the average response time is ART = (1/n) Σ_{i=1}^{n} rt_i. The smaller the ART of a task set, i.e., the shorter the waiting time, the better the performance of the scheduling algorithm.
3) Schedule length (Schedule Length, SL): the difference between the completion time of the last task and the earliest task arrival time. The completion (finish) time is ft(i) = st(i) + pt(i), where pt(i) is the processing time of task i, so the schedule length is SL = max(ft(i)) - min(t(i)).
Given that scheduling succeeds, a smaller schedule length means tasks are completed earlier and the average utilization of the processors is higher.
4) Processor utilization (Utilization Rate, UR): the ratio of the time a processor spends executing tasks to the schedule length. If, within task set S, S_j tasks are served by processor P_j (j = 1, 2, ..., m), the utilization of P_j is UR_j = (Σ_{i∈S_j} pt(i)) / SL, and the average utilization of all processors (Average Utilization Rate, AUR) can be expressed as AUR = (1/m) Σ_{j=1}^{m} UR_j.
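The success rate, average response time, and schedule length defined above can be computed directly from a schedule trace. The sketch below assumes each successfully scheduled task is recorded as an (arrival, start, processing-time) triple; this representation is a hypothetical one for illustration.

```python
def scheduling_metrics(scheduled, submitted_total):
    """scheduled: list of (t_i, st_i, pt_i) triples for tasks that were scheduled.
    submitted_total: N, the number of tasks submitted for scheduling."""
    n = len(scheduled)
    sr = n / submitted_total                               # SR = N' / N
    art = sum(st - t for t, st, _ in scheduled) / n        # ART: mean of st_i - t_i
    finish = [st + pt for _, st, pt in scheduled]          # ft(i) = st(i) + pt(i)
    sl = max(finish) - min(t for t, _, _ in scheduled)     # SL = max ft - min t
    # Per-processor utilization UR_j would additionally need the task-to-
    # processor mapping: UR_j = (sum of pt over P_j's tasks) / SL.
    return sr, art, sl

print(scheduling_metrics([(0, 1, 3), (1, 2, 2), (2, 5, 1)], submitted_total=4))
# SR = 0.75, ART = 5/3, SL = 6
```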
Generally speaking, cloud computing task scheduling systems resemble grid scheduling systems: according to resource state information and prediction, a user's task set is mapped onto a resource set for execution under some scheduling strategy, and the results are returned to the user. Although such methods can use multiple servers to process tasks, they do not take into account the running state and performance parameters of each server, and therefore cannot make full use of the processing performance of the individual servers. No single scheduling method can optimize all of the above indices at once, so, under the premise of guaranteeing the scheduling success rate, the other indices should be improved as far as possible to optimize the overall performance of the system.
Summary of the invention
In order to overcome the shortcomings of existing scheduling algorithms and their processing efficiency, the present invention proposes a scheduling method for a cloud computing open platform.
The technical scheme of the present invention is a scheduling method for a cloud computing open platform comprising the following steps:
Step 1: the central server constructs a number of parallel threads and builds a process pool, then places the constructed processes into the process pool.
Step 2: the registration center server creates and starts a daemon, and the main thread initializes the global variables; the global variables should at least comprise the semaphore, the child-thread state set, and the result set, which are used for child-thread synchronization and for recording query records.
Step 3: a load balancer running in the central server initializes its communication port and then waits for connections from the front-end web proxy server processes. When a server connection request arrives, the load balancer spawns a thread to communicate with that server and continues waiting for connection requests from other servers. When a new client requests service, the load balancer selects from its information table the least loaded server, i.e., the server with the maximum load weight, to serve it.
Step 4: the central server processes the requests in the thread pool and invokes the services of the application servers, calling the multi-core parallel processors of the application server cluster according to the user's privileges and completing the assignment of processes and threads to processor nodes.
Step 5: the central server collects the data of the database servers and processes the distributed data.
Step 6: the central server calls the cluster services, integrates the mass data, and returns it to the web proxy server; at the same time it returns the current request's thread to the thread pool, continues listening for user requests, and calls idle threads from the thread pool.
Step 1 above specifically comprises the following sub-steps:
1) create a concurrent threading model based on the FORK-JOIN structure;
2) put the threads forked by the daemon thread into the thread pool, which is responsible for managing the threads' life cycles;
3) in the multithreaded parallel processing environment, use a variable mutex as the mutual exclusion quantity to realize synchronization and mutual exclusion of resource access in the multithreaded environment. The mutex works as follows: when a resource represented by mutex is requested, the process first reads the value of mutex to judge whether the corresponding resource is available; when the value of mutex is greater than 0, resources can be requested; when it equals 0, there are no available resources, and the process enters a sleep state until resources become available; when a process no longer uses a shared resource controlled by the semaphore, the value of mutex is increased by 1.
The load balancing strategy of the load balancer in Step 3 above is: when a node of the cloud computing open platform is put into use for the first time, an initial weight SW(N_i) is set; as the node's load changes, the balancer adjusts the weight, which is computed dynamically from the node's runtime parameters. The weight of node N_i is described by the following formula:
SW(N_i) = K_0*L_CPU(N_i) + K_1*L_Memory(N_i) + K_2*L_Process(N_i) + K_3*L_IO(N_i) + K_4*L_Response(N_i)
where K_0, K_1, K_2, K_3, and K_4 are constant coefficients; L_CPU(N_i) is the CPU load parameter of node N_i, L_Memory(N_i) is the memory usage rate of node N_i, L_Process(N_i) is the access rate of node N_i, L_IO(N_i) is the disk I/O occupancy rate of node N_i, and L_Response(N_i) is the process response time of node N_i, with L_CPU(N_i) = 1 - P_CPU(N_i), where P_CPU(N_i) denotes the current CPU utilization.
Step 4 above specifically comprises the following sub-steps:
1) Establishing the task assignment model:
Suppose the application server cluster comprises N_node processing nodes D_0, D_1, ..., D_{N_node-1}; the concurrent program to be allocated has N_proc processes P_0, P_1, ..., P_{N_proc-1}; process P_i comprises M_i threads T_0, T_1, ..., T_{M_i-1}; the total thread count of the concurrent program is N_thread = Σ_{k=0}^{N_proc-1} M_k.
The concurrent program to be allocated is expressed as a task relation graph, specifically an undirected graph G = (V, E), where V is the set {V_i} of application server nodes in the application server cluster and node V_i corresponds to the 2-tuple <T_i, P_i>. E is the set {E_ij} of undirected edges; an edge E_ij ∈ E connecting nodes V_i and V_j indicates communication or shared data between threads T_i and T_j, and the edge weight W_ij represents how frequently the two threads communicate or share data.
2) Two rounds of operations are performed: the first round completes the central server's assignment of processes to the cluster servers, and the second round completes the assignment of threads within a processing server node to processor cores. Each round comprises multiple iterations, processed as follows: starting from the input initial task relation graph, each iteration selects the edge with the maximum weight and merges its two endpoints, where the number of threads contained in the newly generated node must be less than or equal to a threshold; this process repeats until the number of nodes in the task relation graph equals the number of unassigned threads in the central server's current thread pool. The thresholds used by the first and second rounds are calculated by the following formulas, respectively:
Threshold_first_round = Max([N_thread / N_proc], M_max) × (1 + α), 0 < α < 1
Threshold_second_round = [Threshold_first_round / N_core]
where Threshold_first_round denotes the threshold on the processing-node count for the first round, Threshold_second_round denotes the threshold on the composite-node count for the second round, [·] denotes the rounding operation, M_max is the maximum number of threads a process has, N_core is the number of processor cores, Max() takes the maximum of the arguments passed in, and α is a percentage expressing the trade-off between balancing load and reducing communication. The operations are specifically divided into the following sub-steps:
2a) For the first round, the initial task relation graph is preliminarily partitioned with each individual server as a unit, i.e., the number of composite nodes in the graph equals the number of processes, each composite node corresponds to one process, and the threads contained in a composite node all belong to that process; the termination condition of the first round is that the number of composite nodes in the graph is ≤ Threshold_first_round; when the round ends, each composite node in the graph is a subgraph corresponding to a processing node, and the threads contained in a subgraph are assigned to that processing node.
2b) The second round is carried out on each subgraph produced by the first-round partition, i.e., the initial task relation graph of the second round is a subgraph obtained by the first round; the termination condition of the second round is that the number of processes scheduled in the resource pool in the graph is ≤ Threshold_second_round; when the round ends, each composite node in the graph corresponds to one process, and the threads contained in the central server's thread pool are assigned to that processor core.
2c) Analyze the target IP address of the request message and balance load accordingly: under the condition that server load is balanced, requests with the same target IP address are scheduled to the same node. Specifically: first find the node that most recently used the target IP address; if that service node is available and not overloaded, the load-balancing server sends the user's request to it; if the node does not exist, or it is overloaded while some service node carries half of its workload, then poll the service nodes, select the one with the fewest connections, and send the request to it.
Beneficial effects of the present invention: the scheduling method of the invention monitors the call requests of a massive user population by maintaining a central server; it dynamically schedules the business services and data services of the cluster servers, and at the same time invokes detachable service components and schedules and allocates the multithreading capacity of the multi-core processors according to each user's usage level. The method effectively overcomes the traditional open platform's lack of extensibility and flexibility and achieves the two goals of rapidly constructing and deploying an application runtime environment and dynamically adjusting it, making maximal use of existing equipment to build services. Through multi-core computing and multithreading, it improves the scheduling speed and responsiveness of system services; moreover, it provides modular, detachable services, so that the services of users with different privileges can be isolated and allocated corresponding computing and processing capacity, supporting differentiated service among users.
Description of drawings
Fig. 1 is a schematic diagram of the platform on which the scheduling method of the present invention is based.
Embodiment
Below, the present invention is further explained in conjunction with the accompanying drawing and a specific embodiment.
The platform on which the scheduling method of the present invention is based is shown in Fig. 1; its components are specified as follows:
Web proxy server: collects the requests of users across the Internet and provides a queue-scheduling buffer for the services.
Central server: this module manages user registration in a unified way, allocates server execution processes in response to user requests, assigns privileges to service processes, and at the same time calls pooled services and data.
Database server cluster: comprises multiple database servers used to store the mass data produced by users; it possesses high outbound bandwidth and large memory, is equipped with high-capacity external storage, and supports the characteristics of big data storage.
Application server cluster: comprises multiple application servers used to store the images of each system service, with high outbound bandwidth, available for the central server to schedule.
The method of the invention comprises the following steps:
Step 1: the registration center server constructs a number of parallel threads and builds a process pool, then places the constructed processes into the process pool. The construction of the processes can be understood as initialization performed at system startup.
This step can be realized through the following sub-steps:
1) Create a concurrent threading model based on the FORK-JOIN structure. In the FORK-JOIN concurrency model, a FORK statement spawns a new concurrent thread path, and a JOIN statement is used when it ends; after the original thread and the newly spawned threads have all reached the JOIN statement, the code continues to execute sequentially. If more concurrent threads are needed, more FORK statements are executed (see the sketch after these sub-steps).
2) Put the threads forked by the daemon thread into the thread pool, which is responsible for managing the threads' life cycles.
3) In the multithreaded parallel processing environment, use a variable mutex as the mutual exclusion quantity to realize synchronization and mutual exclusion of resource access. The mutex works as follows: when a resource represented by mutex is requested, the process first reads the value of mutex to judge whether the corresponding resource is available. When the value of mutex is greater than 0, resources can be requested; when it equals 0, there are no available resources, and the process sleeps until resources become available. When a process no longer uses the shared resource controlled by the semaphore, the value of mutex is increased by 1. Increment and decrement operations on the semaphore's value are atomic, which guarantees mutually exclusive maintenance of the resource and synchronized access by multiple processes.
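A minimal sketch of the FORK-JOIN pattern of sub-step 1, assuming Python threads: each start() call plays the role of a FORK statement and join() is the JOIN, after which the code continues sequentially.

```python
import threading

def subtask(name):
    print("concurrent thread path", name)

# FORK: each start() spawns a new concurrent thread path
forked = [threading.Thread(target=subtask, args=(i,)) for i in range(3)]
for t in forked:
    t.start()

# JOIN: the original thread waits until every forked path reaches this point
for t in forked:
    t.join()

print("sequential execution resumes")  # code continues sequentially after JOIN
```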
Step 2: the registration center server creates and starts a daemon, and the main thread initializes the global variables. The global variables should at least comprise the semaphore, the child-thread state set, and the result set, which are used for child-thread synchronization and for recording query records.
Step 3: the load balancer (LoadBalance) initializes its communication port and then waits for connections from the front-end proxy server (Proxy Server) processes. When a server connection request arrives, a thread is spawned to communicate with that server, and LoadBalance continues to wait for connection requests from other servers. Each such thread receives and records the performance and load information of one web server; if the cluster environment contains several web server nodes, LoadBalance spawns that many threads to communicate with them. In each computation period, the service information list thus accurately reflects the load and performance information of all web servers. When a new client requests service, LoadBalance selects from the information table the least loaded server, i.e., the server with the maximum load weight, to serve it.
The concrete load balancing strategy is:
When a node is put into the system for the first time, an initial weight SW(N_i) is set; as the node's load changes, the balancer adjusts the weight, computing it dynamically from the node's runtime parameters. Combining the current weight of each node, the new weight can be calculated. The purpose of dynamic weights is to reflect the node's load correctly and to predict its possible future load changes.
To make it easy to adjust the proportions of the parameters appropriately for different applications during system operation, a constant coefficient K_i is set for each parameter to represent the importance of each load parameter, with Σ K_i = 1. The weight formula of any node N_i can therefore be described as:
SW(N_i) = K_0*L_CPU(N_i) + K_1*L_Memory(N_i) + K_2*L_Process(N_i) + K_3*L_IO(N_i) + K_4*L_Response(N_i)
where K_0, K_1, K_2, K_3, and K_4 are constant coefficients used to distinguish the performance of different server models; they can be decided according to the servers' configurations: K_0 corresponds to the server's CPU, K_1 to its memory, K_2 to its number of network connection processes, K_3 to its disk I/O access speed, and K_4 to its number of processes. These constant coefficients are relative quantities. For example, if the CPU frequency of server A is 2 GHz and the CPU of server B is dual-core 3 GHz, i.e., 6 GHz in total, then server A's K_0 : server B's K_0 = 1 : 3. K_1, K_2, K_3, and K_4 are handled analogously to K_0 and need no further explanation. Setting K_0 through K_4 according to the concrete conditions of the servers in this way achieves maximal dispatch while guaranteeing high utilization.
L_CPU(N_i) is the CPU load parameter of node N_i, L_Memory(N_i) is the memory usage rate of node N_i, L_Process(N_i) is the access rate of node N_i, L_IO(N_i) is the disk I/O occupancy rate of node N_i, and L_Response(N_i) is the process response time of node N_i, with L_CPU(N_i) = 1 - P_CPU(N_i), where P_CPU(N_i) denotes the current CPU utilization.
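A sketch of the dynamic weight computation and of picking the maximum-weight (least loaded) server. Since the balancer selects the maximum SW(N_i) as the least loaded node and L_CPU(N_i) = 1 - P_CPU(N_i), the L_* inputs are treated here as availability-style measures in [0, 1]; the coefficient values and node readings are hypothetical.

```python
# Constant coefficients K0..K4; they sum to 1 and are set per server hardware
K = (0.35, 0.25, 0.15, 0.15, 0.10)

def node_weight(p_cpu, l_memory, l_process, l_io, l_response):
    """SW(Ni) = K0*L_CPU + K1*L_Memory + K2*L_Process + K3*L_IO + K4*L_Response,
    with L_CPU(Ni) = 1 - P_CPU(Ni). A higher weight marks a less loaded node."""
    l_cpu = 1.0 - p_cpu
    return sum(k * l for k, l in zip(K, (l_cpu, l_memory, l_process, l_io, l_response)))

nodes = {
    "A": node_weight(p_cpu=0.80, l_memory=0.40, l_process=0.50, l_io=0.30, l_response=0.20),
    "B": node_weight(p_cpu=0.30, l_memory=0.60, l_process=0.40, l_io=0.50, l_response=0.30),
}
best = max(nodes, key=nodes.get)   # the maximum-weight server gets the request
print(best, nodes)                 # B {'A': 0.31, 'B': 0.56}
```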
Step 4: the central server processes the requests in the thread pool and calls the application servers' service interfaces. The multi-core parallel processors of the cluster servers are called according to user privileges to complete the assignment of processes and threads to processor nodes:
1) Establishing the task assignment model:
Suppose the server cluster comprises N_node processing nodes D_0, D_1, ..., D_{N_node-1}; the concurrent program to be allocated has N_proc processes P_0, P_1, ..., P_{N_proc-1}; process P_i comprises M_i threads T_0, T_1, ..., T_{M_i-1}; the total thread count of the concurrent program is N_thread = Σ_{k=0}^{N_proc-1} M_k.
The concurrent program to be allocated can be expressed as a task relation graph, namely an undirected graph G = (V, E), where V is the set {V_i} of application server nodes in the application server cluster, and node V_i corresponds to the 2-tuple <T_i, P_i>, in which T_i is the thread number corresponding to the node and P_i is the process ID number, i.e., the pid used in the operating system to identify the process; the pid stands here for the instantiated process. E is the set {E_ij} of undirected edges; an edge E_ij ∈ E connecting nodes V_i and V_j indicates communication or shared data between threads T_i and T_j, and the edge weight W_ij represents how frequently the two threads communicate or share data.
2) Two rounds of operations are performed: the first round completes the central server's assignment of processes to the cluster servers, and the second round completes the assignment of the threads within a child server's processing node to the processor cores. Each round comprises multiple iterations, processed as follows: starting from the input initial task relation graph, each iteration selects the edge with the maximum weight and merges its two endpoints, that is, merges two composite nodes into a new composite node. To satisfy the load balancing condition, the number of threads contained in the newly generated composite node must be less than or equal to a threshold. This "select and merge" process repeats until the number of composite nodes in the task relation graph equals the number of threads that the central server's thread pool can assign. The thresholds used by the first and second rounds are calculated as follows:
Threshold_first_round = Max([N_thread / N_proc], M_max) × (1 + α), 0 < α < 1
Threshold_second_round = [Threshold_first_round / N_core]
where Threshold_first_round denotes the threshold on the processing-node count for the first round, Threshold_second_round denotes the threshold on the composite-node count for the second round, [·] denotes the rounding operation, M_max is the maximum number of threads a process has, N_core is the number of processor cores, Max() takes the maximum of the arguments passed in, and α is a percentage expressing the trade-off between balancing load and reducing communication. The operations are specifically divided into the following sub-steps:
2a) For the first round, the initial task relation graph is preliminarily partitioned with each individual server as a unit, i.e., the number of composite nodes in the graph equals the number of processes, each composite node corresponds to one process, and the threads contained in a composite node all belong to that process. The termination condition of the first round is that the number of composite nodes in the graph is ≤ Threshold_first_round; when the round ends, each composite node in the graph is a subgraph corresponding to a processing node, and the threads contained in a subgraph are assigned to that processing node.
2b) The second round is carried out on each subgraph produced by the first-round partition, i.e., the initial task relation graph of the second round is a subgraph obtained by the first round. The termination condition of the second round is that the number of processes scheduled in the resource pool in the graph is ≤ Threshold_second_round; when the round ends, each composite node in the graph corresponds to one process, and the threads contained in the central server's thread pool are assigned to that processor core. Here, the threads of the thread pool inside the central server include the created processes, and the processes are distributed according to the calculated node threshold Threshold_second_round.
2c) Analyze the target IP address of the request message and balance load accordingly. Under the condition that server load is balanced, requests with the same target IP address are scheduled to the same node, which improves the locality of reference and the main-memory cache hit rate of each service node and thereby the processing capacity of the whole cluster system. The concrete implementation steps are: first find the node that most recently used the target IP address; if that service node is available and not overloaded, the load-balancing server sends the user's request to it; if the node does not exist, or it is overloaded while some service node carries half of its workload, then poll the service nodes, select the one with the fewest connections, and send the request to it.
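The "select the maximum-weight edge and merge its endpoints unless the merged node would exceed the threshold" iteration used in both rounds can be sketched as follows. The graph encoding (per-node thread counts plus a dict of edge weights), the target_nodes termination argument, and the numeric threshold example are assumptions for illustration; [·] is taken here as rounding.

```python
def merge_round(threads_per_node, edge_weights, threshold, target_nodes):
    """One round of iterative merging over the task relation graph.
    threads_per_node: {node_id: thread_count}; edge_weights: {(u, v): W_uv}.
    Repeatedly merges the endpoints of the heaviest edge while the merged
    composite node stays within `threshold` threads."""
    nodes = dict(threads_per_node)
    edges = dict(edge_weights)
    while len(nodes) > target_nodes and edges:
        # only merges that respect the load-balance threshold are candidates
        candidates = [(w, e) for e, w in edges.items()
                      if nodes[e[0]] + nodes[e[1]] <= threshold]
        if not candidates:
            break
        _, (u, v) = max(candidates)         # heaviest admissible edge
        nodes[u] += nodes.pop(v)            # merge v into u (composite node)
        rewired = {}
        for (a, b), w in edges.items():     # re-wire v's edges onto u
            a, b = (u if a == v else a), (u if b == v else b)
            if a != b:                      # drop self-loops, sum parallel edges
                key = tuple(sorted((a, b)))
                rewired[key] = rewired.get(key, 0) + w
        edges = rewired
    return nodes

# Hypothetical numbers: N_thread = 8, N_proc = 2, M_max = 5, alpha = 0.25, N_core = 2
t_first = max(round(8 / 2), 5) * (1 + 0.25)   # first-round threshold = 6.25
t_second = round(t_first / 2)                  # second-round threshold = 3

# Four single-thread nodes; threads 0-1 and 2-3 communicate most heavily
print(merge_round({0: 1, 1: 1, 2: 1, 3: 1},
                  {(0, 1): 9, (1, 2): 4, (2, 3): 7, (0, 3): 1},
                  threshold=2, target_nodes=2))   # -> {0: 2, 2: 2}
```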
Step 5: the central server collects the data of the database servers and processes the distributed data.
The concrete steps are as follows: suppose the database server cluster comprises N_node child processing nodes D_0, D_1, ..., D_{N_node-1}. Server D_0 also acts as the central server, polls each server, and assigns each server a number from D_0 to D_{N_node-1}. Each odd-numbered server transmits its data to the preceding even-numbered server, which then merges the two servers' data, e.g., <D_0, D_1>, <D_2, D_3>, and so on. The servers that received data are then polled and numbered again, and likewise each odd-numbered server transmits its data to the preceding even-numbered server, which merges the two servers' data. This cycle repeats until all data has been integrated onto D_0.
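The odd-to-even pairwise merging of Step 5 is effectively a binary reduction: each cycle halves the number of data holders, so all data reaches D_0 after about log2(N_node) cycles. A sketch, assuming each server's data is a list that can be concatenated:

```python
def gather_to_d0(server_data):
    """server_data: list where index j holds server D_j's local data.
    Odd-numbered servers send to the preceding even-numbered server,
    survivors are renumbered, and the cycle repeats until D_0 holds everything."""
    data = list(server_data)
    while len(data) > 1:
        merged = []
        for j in range(0, len(data), 2):
            # <D_j, D_j+1>: even-numbered node merges its odd neighbor's data
            pair = data[j] + data[j + 1] if j + 1 < len(data) else data[j]
            merged.append(pair)
        data = merged                      # renumber the survivors D_0..D_k
    return data[0]

print(gather_to_d0([["a"], ["b"], ["c"], ["d"], ["e"]]))
# ['a', 'b', 'c', 'd', 'e'] gathered on D_0 after 3 cycles (ceil(log2(5)) = 3)
```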
Step 6: the central server calls the cluster services and performs service processing, integrates the mass data, and returns it to the web proxy server. At the same time it returns the current request's thread to the thread pool, continues listening for user requests, and calls idle threads from the thread pool.
The scheduling method of the present invention monitors the call requests of a massive user population by maintaining a user center, i.e., the central server. When facing large-scale user requests, it can adjust the load strategy dynamically, achieving dynamic task scheduling and improving the response time of the whole system. At the same time it dynamically schedules the business services and data services of the cluster servers, invoking detachable service components and scheduling and allocating the multithreading capacity of the multi-core processors according to each user's usage level. In this way the method effectively overcomes the traditional open platform's lack of extensibility and flexibility while achieving the two goals of rapidly constructing and deploying an application runtime environment and dynamically adjusting it; through multi-core computing and multithreading it improves the scheduling speed and responsiveness of system services; and it provides modular, detachable services, so that the services of users with different privileges can be isolated and allocated corresponding computing and processing capacity, supporting differentiated service among users.
Those of ordinary skill in the art will appreciate that the embodiment described here is intended to help readers understand the principle of the present invention, and it should be understood that the protection scope of the invention is not limited to such special statements and embodiments. Those of ordinary skill in the art can, according to the teachings disclosed herein, make various other concrete variations and combinations that do not depart from the essence of the invention, and these variations and combinations remain within the protection scope of the invention.

Claims (4)

1. A scheduling method for a cloud computing open platform, characterized by comprising the following steps:
Step 1: the central server constructs a number of parallel threads and builds a process pool, then places the constructed processes into the process pool;
Step 2: the registration center server creates and starts a daemon, and the main thread initializes the global variables; the global variables should at least comprise the semaphore, the child-thread state set, and the result set, which are used for child-thread synchronization and for recording query records;
Step 3: a load balancer running in the central server initializes its communication port and then waits for connections from the front-end web proxy server processes; when a server connection request arrives, the load balancer spawns a thread to communicate with that server and continues waiting for connection requests from other servers; when a new client requests service, the load balancer selects from its information table the least loaded server, i.e., the server with the maximum load weight, to serve it;
Step 4: the central server processes the requests in the thread pool and invokes the services of the application servers, calling the multi-core parallel processors of the application server cluster according to the user's privileges and completing the assignment of processes and threads to processor nodes;
Step 5: the central server collects the data of the database servers and processes the distributed data;
Step 6: the central server calls the cluster services, integrates the mass data, and returns it to the web proxy server; at the same time it returns the current request's thread to the thread pool, continues listening for user requests, and calls idle threads from the thread pool.
2. The scheduling method of a cloud computing open platform according to claim 1, characterized in that Step 1 specifically comprises the following sub-steps:
1) create a concurrent threading model based on the FORK-JOIN structure;
2) put the threads forked by the daemon thread into the thread pool, which is responsible for managing the threads' life cycles;
3) in the multithreaded parallel processing environment, use a variable mutex as the mutual exclusion quantity to realize synchronization and mutual exclusion of resource access in the multithreaded environment; the mutex works as follows: when a resource represented by mutex is requested, the process first reads the value of mutex to judge whether the corresponding resource is available; when the value of mutex is greater than 0, resources can be requested; when it equals 0, there are no available resources, and the process enters a sleep state until resources become available; when a process no longer uses a shared resource controlled by the semaphore, the value of mutex is increased by 1.
3. The scheduling method of a cloud computing open platform according to claim 1, characterized in that the load balancing strategy of the load balancer in Step 3 is: when a node of the cloud computing open platform is put into use for the first time, an initial weight SW(N_i) is set; as the node's load changes, the balancer adjusts the weight, which is computed dynamically from the node's runtime parameters; the weight of node N_i is described by the following formula:
SW(N_i) = K_0*L_CPU(N_i) + K_1*L_Memory(N_i) + K_2*L_Process(N_i) + K_3*L_IO(N_i) + K_4*L_Response(N_i)
where K_0, K_1, K_2, K_3, and K_4 are constant coefficients, L_CPU(N_i) is the CPU load parameter of node N_i, L_Memory(N_i) is the memory usage rate of node N_i, L_Process(N_i) is the access rate of node N_i, L_IO(N_i) is the disk I/O occupancy rate of node N_i, and L_Response(N_i) is the process response time of node N_i, with L_CPU(N_i) = 1 - P_CPU(N_i), where P_CPU(N_i) denotes the current CPU utilization.
4. The scheduling method of a cloud computing open platform according to claim 1, characterized in that Step 4 specifically comprises the following sub-steps:
1) establishing the task assignment model:
suppose the application server cluster comprises N_node processing nodes D_0, D_1, ..., D_{N_node-1}; the concurrent program to be allocated has N_proc processes P_0, P_1, ..., P_{N_proc-1}; process P_i comprises M_i threads T_0, T_1, ..., T_{M_i-1}; the total thread count of the concurrent program is N_thread = Σ_{k=0}^{N_proc-1} M_k;
the concurrent program to be allocated is expressed as a task relation graph, specifically an undirected graph G = (V, E), where V is the set {V_i} of application server nodes in the application server cluster and node V_i corresponds to the 2-tuple <T_i, P_i>; E is the set {E_ij} of undirected edges; an edge E_ij ∈ E connecting nodes V_i and V_j indicates communication or shared data between threads T_i and T_j, and the edge weight W_ij represents how frequently the two threads communicate or share data;
2) performing two rounds of operations: the first round completes the central server's assignment of processes to the cluster servers, and the second round completes the assignment of threads within a processing server node to processor cores; each round comprises multiple iterations, processed as follows: starting from the input initial task relation graph, each iteration selects the edge with the maximum weight and merges its two endpoints, where the number of threads contained in the newly generated node must be less than or equal to a threshold; this process repeats until the number of nodes in the task relation graph equals the number of unassigned threads in the central server's current thread pool; the thresholds used by the first and second rounds are calculated by the following formulas, respectively:
Threshold_first_round = Max([N_thread / N_proc], M_max) × (1 + α), 0 < α < 1
Threshold_second_round = [Threshold_first_round / N_core]
where Threshold_first_round denotes the threshold on the processing-node count for the first round, Threshold_second_round denotes the threshold on the composite-node count for the second round, [·] denotes the rounding operation, M_max is the maximum number of threads a process has, N_core is the number of processor cores, Max() takes the maximum of the arguments passed in, and α is a percentage expressing the trade-off between balancing load and reducing communication; the operations are specifically divided into the following sub-steps:
2a) for the first round, the initial task relation graph is preliminarily partitioned with each individual server as a unit, i.e., the number of composite nodes in the graph equals the number of processes, each composite node corresponds to one process, and the threads contained in a composite node all belong to that process; the termination condition of the first round is that the number of composite nodes in the graph is ≤ Threshold_first_round; when the round ends, each composite node in the graph is a subgraph corresponding to a processing node, and the threads contained in a subgraph are assigned to that processing node;
2b) the second round is carried out on each subgraph produced by the first-round partition, i.e., the initial task relation graph of the second round is a subgraph obtained by the first round; the termination condition of the second round is that the number of processes scheduled in the resource pool in the graph is ≤ Threshold_second_round; when the round ends, each composite node in the graph corresponds to one process, and the threads contained in the central server's thread pool are assigned to that processor core;
2c) analyzing the target IP address of the request message and balancing load accordingly: under the condition that server load is balanced, requests with the same target IP address are scheduled to the same node; specifically: first find the node that most recently used the target IP address; if that service node is available and not overloaded, the load-balancing server sends the user's request to it; if the node does not exist, or it is overloaded while some service node carries half of its workload, then poll the service nodes, select the one with the fewest connections, and send the request to it.
CN201210128627.6A 2012-04-27 2012-04-27 Scheduling method of cloud computing open platform Active CN102681889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210128627.6A CN102681889B (en) 2012-04-27 2012-04-27 Scheduling method of cloud computing open platform


Publications (2)

Publication Number Publication Date
CN102681889A true CN102681889A (en) 2012-09-19
CN102681889B CN102681889B (en) 2015-01-07

Family

ID=46813858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210128627.6A Active CN102681889B (en) 2012-04-27 2012-04-27 Scheduling method of cloud computing open platform

Country Status (1)

Country Link
CN (1) CN102681889B (en)

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103197968A (en) * 2013-03-18 2013-07-10 焦点科技股份有限公司 Thread pool processing method and system capable of fusing synchronous and asynchronous features
CN103294623A (en) * 2013-03-11 2013-09-11 浙江大学 Configurable multi-thread dispatch circuit for SIMD system
CN103425536A (en) * 2013-08-26 2013-12-04 中国科学院软件研究所 Test resource management method oriented towards distributed system performance tests
CN103513940A (en) * 2013-10-21 2014-01-15 北京华胜天成科技股份有限公司 Method for on-line extension of disk size of virtual machine and virtual system console
CN103780646A (en) * 2012-10-22 2014-05-07 中国长城计算机深圳股份有限公司 Cloud resource scheduling method and system
CN103810444A (en) * 2012-11-15 2014-05-21 中兴通讯股份有限公司 Method and system for multi-tenant application isolation in cloud computing platform
CN103927225A (en) * 2014-04-22 2014-07-16 浪潮电子信息产业股份有限公司 Multi-core framework Internet information processing and optimizing method
CN104572881A (en) * 2014-12-23 2015-04-29 国家电网公司 Method for importing distribution network graph model based on multi-task concurrency
CN104714847A (en) * 2013-12-13 2015-06-17 国际商业机器公司 Dynamically Change Cloud Environment Configurations Based on Moving Workloads
CN104811503A (en) * 2015-05-21 2015-07-29 龙信数据(北京)有限公司 R statistical modeling system
CN105183824A (en) * 2015-08-28 2015-12-23 重庆简悉大数据科技有限公司 Data integration method and apparatus
CN105528669A (en) * 2015-11-26 2016-04-27 国网北京市电力公司 Load prediction system for power system
CN106612310A (en) * 2015-10-23 2017-05-03 腾讯科技(深圳)有限公司 A server scheduling method, apparatus and system
CN106648872A (en) * 2016-12-29 2017-05-10 深圳市优必选科技有限公司 Method and device for multithread processing and server
CN106775608A (en) * 2015-11-24 2017-05-31 腾讯科技(深圳)有限公司 The implementation method and device of autonomous system process
CN106776075A (en) * 2016-12-27 2017-05-31 北京五八信息技术有限公司 Message treatment method and equipment
CN106844037A (en) * 2017-02-22 2017-06-13 郑州云海信息技术有限公司 A kind of method of testing and system based on KNL
CN107003887A (en) * 2014-12-22 2017-08-01 英特尔公司 Overloaded cpu setting and cloud computing workload schedules mechanism
CN107111785A (en) * 2014-03-04 2017-08-29 迈克尔·曼希 Space-like computation in a computing device
CN107102901A (en) * 2016-02-23 2017-08-29 华为技术有限公司 A kind of task processing method and device
CN107450968A (en) * 2016-05-31 2017-12-08 华为技术有限公司 Load restoring method, device and equipment
CN107707672A (en) * 2017-10-31 2018-02-16 郑州云海信息技术有限公司 A kind of method, apparatus and equipment of the code refactoring of load balancing
CN107800768A (en) * 2017-09-13 2018-03-13 平安科技(深圳)有限公司 Open platform control method and system
CN107872539A (en) * 2017-12-15 2018-04-03 安徽长泰信息安全服务有限公司 A kind of data handling system and method based on cloud computing platform
CN107943561A (en) * 2017-12-14 2018-04-20 长春工程学院 A kind of scientific workflow method for scheduling task towards cloud computing platform
CN107979629A (en) * 2016-10-25 2018-05-01 北京京东尚科信息技术有限公司 Distributed cache system and its data cache method and device
CN108268314A (en) * 2016-12-31 2018-07-10 北京亿阳信通科技有限公司 A kind of method of multithreading task concurrent processing
CN108471457A (en) * 2018-06-16 2018-08-31 温州职业技术学院 Based on distributed node dynamic memory load-balancing method
CN108633311A (en) * 2017-01-26 2018-10-09 华为技术有限公司 A kind of method, apparatus and control node of the con current control based on call chain
CN109040206A (en) * 2018-07-09 2018-12-18 广东工业大学 A kind of computing resource long-term allocation method based on mobile device cloud
WO2019000597A1 (en) * 2017-06-28 2019-01-03 深圳市欧乐在线技术发展有限公司 Ip address hiding method and device
CN109308219A (en) * 2018-08-23 2019-02-05 阿里巴巴集团控股有限公司 Task processing method, device and Distributed Computer System
CN109637278A (en) * 2019-01-03 2019-04-16 青岛萨纳斯智能科技股份有限公司 Big data teaching experiment training platform
CN109660569A (en) * 2017-10-10 2019-04-19 武汉斗鱼网络科技有限公司 A kind of Multi-task Concurrency executes method, storage medium, equipment and system
CN109804351A (en) * 2016-10-11 2019-05-24 微软技术许可有限责任公司 The enhancing of asynchronous computing operation is administered
CN109960570A (en) * 2017-12-14 2019-07-02 北京图森未来科技有限公司 A kind of multimode dispatching method, device and system
CN110554923A (en) * 2019-09-09 2019-12-10 吕春燕 Optimization method and system for distributed chained computing resources for cloud computing
CN110990165A (en) * 2019-11-15 2020-04-10 北京连山科技股份有限公司 Method for simultaneously working multiple users in multi-channel concurrent transmission system and transmission server
CN111108483A (en) * 2017-09-29 2020-05-05 西门子股份公司 Method, device and test program for identifying vulnerabilities in an original program
CN111124651A (en) * 2019-12-27 2020-05-08 中通服公众信息产业股份有限公司 Method for multithreading concurrent scheduling in distributed environment
CN111124690A (en) * 2020-01-02 2020-05-08 哈尔滨理工大学 Secure distribution method of E-mail server based on OpenMP thread optimization
CN111431743A (en) * 2020-03-18 2020-07-17 中南大学 Data analysis-based method and system for constructing edge resource pool in large-scale WiFi system
CN111966472A (en) * 2020-07-02 2020-11-20 佛山科学技术学院 Process scheduling method and system for industrial real-time operating system
CN113132192A (en) * 2021-03-02 2021-07-16 西安电子科技大学 Massive Internet of things equipment access and management method
CN113423025A (en) * 2021-06-22 2021-09-21 烟台东方智能技术有限公司 Data management terminal with artificial intelligence
CN113904443A (en) * 2021-09-28 2022-01-07 国网江苏省电力有限公司连云港供电分公司 Multidimensional space visual field transformer equipment monitoring and early warning system
CN114116068A (en) * 2021-12-02 2022-03-01 重庆紫光华山智安科技有限公司 Service starting optimization method and device, electronic equipment and readable storage medium
CN117792860A (en) * 2022-12-29 2024-03-29 乔羽 Big data communication analysis management method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256515A (en) * 2008-03-11 2008-09-03 浙江大学 Method for implementing load equalization of multicore processor operating system
CN101546277A (en) * 2009-04-27 2009-09-30 华为技术有限公司 Multiple core processor platform and multiple core processor synchronization method
CN101916209A (en) * 2010-08-06 2010-12-15 华东交通大学 Cluster task resource allocation method for multi-core processor
CN102129390A (en) * 2011-03-10 2011-07-20 中国科学技术大学苏州研究院 Task scheduling system of on-chip multi-core computing platform and method for task parallelization
CN102184125A (en) * 2011-06-02 2011-09-14 首都师范大学 Load balancing method based on program behaviour online analysis under heterogeneous multi-core environment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256515A (en) * 2008-03-11 2008-09-03 浙江大学 Method for implementing load equalization of multicore processor operating system
CN101546277A (en) * 2009-04-27 2009-09-30 华为技术有限公司 Multiple core processor platform and multiple core processor synchronization method
CN101916209A (en) * 2010-08-06 2010-12-15 华东交通大学 Cluster task resource allocation method for multi-core processor
CN102129390A (en) * 2011-03-10 2011-07-20 中国科学技术大学苏州研究院 Task scheduling system of on-chip multi-core computing platform and method for task parallelization
CN102184125A (en) * 2011-06-02 2011-09-14 首都师范大学 Load balancing method based on program behaviour online analysis under heterogeneous multi-core environment

Cited By (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103780646A (en) * 2012-10-22 2014-05-07 中国长城计算机深圳股份有限公司 Cloud resource scheduling method and system
CN103780646B (en) * 2012-10-22 2017-04-12 中国长城计算机深圳股份有限公司 Cloud resource scheduling method and system
CN103810444B (en) * 2012-11-15 2018-08-07 南京中兴软件有限责任公司 The method and system of multi-tenant application isolation in a kind of cloud computing platform
CN103810444A (en) * 2012-11-15 2014-05-21 中兴通讯股份有限公司 Method and system for multi-tenant application isolation in cloud computing platform
CN103294623B (en) * 2013-03-11 2016-04-27 浙江大学 A kind of multi-thread dispatch circuit of configurable SIMD system
CN103294623A (en) * 2013-03-11 2013-09-11 浙江大学 Configurable multi-thread dispatch circuit for SIMD system
CN103197968A (en) * 2013-03-18 2013-07-10 焦点科技股份有限公司 Thread pool processing method and system capable of fusing synchronous and asynchronous features
CN103425536A (en) * 2013-08-26 2013-12-04 中国科学院软件研究所 Test resource management method oriented towards distributed system performance tests
CN103425536B (en) * 2013-08-26 2017-03-15 中国科学院软件研究所 A kind of test resource management method of Based on Distributed system performance testing
CN103513940A (en) * 2013-10-21 2014-01-15 北京华胜天成科技股份有限公司 Method for on-line extension of disk size of virtual machine and virtual system console
CN104714847A (en) * 2013-12-13 2015-06-17 国际商业机器公司 Dynamically Change Cloud Environment Configurations Based on Moving Workloads
CN107111785A (en) * 2014-03-04 2017-08-29 迈克尔·曼希 Space-like computation in a computing device
CN103927225A (en) * 2014-04-22 2014-07-16 浪潮电子信息产业股份有限公司 Multi-core framework Internet information processing and optimizing method
CN103927225B (en) * 2014-04-22 2018-04-10 浪潮电子信息产业股份有限公司 A kind of internet information processing optimization method of multi-core framework
CN107003887A (en) * 2014-12-22 2017-08-01 英特尔公司 Overloaded cpu setting and cloud computing workload schedules mechanism
CN104572881A (en) * 2014-12-23 2015-04-29 国家电网公司 Method for importing distribution network graph model based on multi-task concurrency
CN104811503A (en) * 2015-05-21 2015-07-29 龙信数据(北京)有限公司 R statistical modeling system
CN105183824A (en) * 2015-08-28 2015-12-23 重庆简悉大数据科技有限公司 Data integration method and apparatus
CN105183824B (en) * 2015-08-28 2020-03-17 重庆简悉大数据科技有限公司 Data integration method and device
CN106612310A (en) * 2015-10-23 2017-05-03 腾讯科技(深圳)有限公司 A server scheduling method, apparatus and system
CN106775608B (en) * 2015-11-24 2020-09-04 腾讯科技(深圳)有限公司 Method and device for realizing independent system process
CN106775608A (en) * 2015-11-24 2017-05-31 腾讯科技(深圳)有限公司 The implementation method and device of autonomous system process
CN105528669B (en) * 2015-11-26 2019-07-16 国网北京市电力公司 Load prediction system for electric system
CN105528669A (en) * 2015-11-26 2016-04-27 国网北京市电力公司 Load prediction system for power system
CN107102901A (en) * 2016-02-23 2017-08-29 华为技术有限公司 A kind of task processing method and device
CN107102901B (en) * 2016-02-23 2020-07-14 华为技术有限公司 Task processing method and device
CN107450968A (en) * 2016-05-31 2017-12-08 华为技术有限公司 Load restoring method, device and equipment
CN107450968B (en) * 2016-05-31 2020-09-08 华为技术有限公司 Load reduction method, device and equipment
CN109804351B (en) * 2016-10-11 2023-07-14 微软技术许可有限责任公司 Enhanced governance of asynchronous computational operations
CN109804351A (en) * 2016-10-11 2019-05-24 微软技术许可有限责任公司 Enhanced governance of asynchronous computing operations
CN107979629B (en) * 2016-10-25 2021-05-25 北京京东尚科信息技术有限公司 Distributed cache system and data cache method and device thereof
CN107979629A (en) * 2016-10-25 2018-05-01 北京京东尚科信息技术有限公司 Distributed cache system and its data cache method and device
CN106776075A (en) * 2016-12-27 2017-05-31 北京五八信息技术有限公司 Message processing method and device
WO2018121696A1 (en) * 2016-12-29 2018-07-05 深圳市优必选科技有限公司 Multi-thread processing method and device, and server
CN106648872A (en) * 2016-12-29 2017-05-10 深圳市优必选科技有限公司 Method and device for multithread processing and server
CN108268314A (en) * 2016-12-31 2018-07-10 北京亿阳信通科技有限公司 Method for concurrent processing of multithreaded tasks
CN108633311A (en) * 2017-01-26 2018-10-09 华为技术有限公司 Method, apparatus and control node for call-chain-based concurrency control
CN106844037B (en) * 2017-02-22 2021-06-29 郑州云海信息技术有限公司 KNL-based test method and system
CN106844037A (en) * 2017-02-22 2017-06-13 郑州云海信息技术有限公司 KNL-based testing method and system
WO2019000597A1 (en) * 2017-06-28 2019-01-03 深圳市欧乐在线技术发展有限公司 IP address hiding method and device
CN107800768A (en) * 2017-09-13 2018-03-13 平安科技(深圳)有限公司 Open platform control method and system
CN107800768B (en) * 2017-09-13 2020-01-10 平安科技(深圳)有限公司 Open platform control method and system
WO2019052225A1 (en) * 2017-09-13 2019-03-21 平安科技(深圳)有限公司 Open platform control method and system, computer device, and storage medium
CN111108483B (en) * 2017-09-29 2023-11-03 西门子股份公司 Method, apparatus and test program for identifying vulnerabilities in original program
CN111108483A (en) * 2017-09-29 2020-05-05 西门子股份公司 Method, device and test program for identifying vulnerabilities in an original program
CN109660569B (en) * 2017-10-10 2021-10-15 武汉斗鱼网络科技有限公司 Multitask concurrent execution method, storage medium, device and system
CN109660569A (en) * 2017-10-10 2019-04-19 武汉斗鱼网络科技有限公司 Multi-task concurrent execution method, storage medium, device and system
CN107707672B (en) * 2017-10-31 2021-01-08 苏州浪潮智能科技有限公司 Method, device and equipment for reconstructing code with balanced load
CN107707672A (en) * 2017-10-31 2018-02-16 郑州云海信息技术有限公司 Method, apparatus and device for load-balanced code refactoring
CN109960570B (en) * 2017-12-14 2021-09-03 北京图森智途科技有限公司 Multi-module scheduling method, device and system
CN107943561A (en) * 2017-12-14 2018-04-20 长春工程学院 Scientific workflow task scheduling method for cloud computing platforms
CN109960570A (en) * 2017-12-14 2019-07-02 北京图森未来科技有限公司 Multi-module scheduling method, device and system
CN107872539B (en) * 2017-12-15 2021-01-15 安徽长泰信息安全服务有限公司 Data processing system and method based on cloud computing platform
CN107872539A (en) * 2017-12-15 2018-04-03 安徽长泰信息安全服务有限公司 Data processing system and method based on a cloud computing platform
CN108471457A (en) * 2018-06-16 2018-08-31 温州职业技术学院 Dynamic memory load balancing method based on distributed nodes
CN109040206B (en) * 2018-07-09 2020-12-08 广东工业大学 Computing resource long-term allocation method based on mobile device cloud
CN109040206A (en) * 2018-07-09 2018-12-18 广东工业大学 Computing resource long-term allocation method based on mobile device cloud
CN109308219B (en) * 2018-08-23 2021-08-10 创新先进技术有限公司 Task processing method and device and distributed computer system
CN109308219A (en) * 2018-08-23 2019-02-05 阿里巴巴集团控股有限公司 Task processing method and device, and distributed computer system
CN109637278A (en) * 2019-01-03 2019-04-16 青岛萨纳斯智能科技股份有限公司 Big data teaching experiment training platform
CN110554923A (en) * 2019-09-09 2019-12-10 吕春燕 Optimization method and system for distributed chained computing resources for cloud computing
CN110990165A (en) * 2019-11-15 2020-04-10 北京连山科技股份有限公司 Method for multiple users to work simultaneously in a multi-channel concurrent transmission system, and transmission server
CN110990165B (en) * 2019-11-15 2020-10-09 北京连山科技股份有限公司 Method for multiple users to work simultaneously in a multi-channel concurrent transmission system, and transmission server
CN111124651A (en) * 2019-12-27 2020-05-08 中通服公众信息产业股份有限公司 Method for multithreading concurrent scheduling in distributed environment
CN111124651B (en) * 2019-12-27 2023-05-23 中通服公众信息产业股份有限公司 Method for concurrently scheduling multiple threads in distributed environment
CN111124690A (en) * 2020-01-02 2020-05-08 哈尔滨理工大学 Secure distribution method of E-mail server based on OpenMP thread optimization
CN111431743A (en) * 2020-03-18 2020-07-17 中南大学 Data analysis-based method and system for constructing edge resource pool in large-scale WiFi system
CN111966472A (en) * 2020-07-02 2020-11-20 佛山科学技术学院 Process scheduling method and system for industrial real-time operating system
CN111966472B (en) * 2020-07-02 2023-09-26 佛山科学技术学院 Process scheduling method and system of industrial real-time operating system
CN113132192A (en) * 2021-03-02 2021-07-16 西安电子科技大学 Access and management method for massive Internet of Things devices
CN113423025A (en) * 2021-06-22 2021-09-21 烟台东方智能技术有限公司 Data management terminal with artificial intelligence
CN113423025B (en) * 2021-06-22 2024-02-13 烟台东方智能技术有限公司 Data management terminal with artificial intelligence
CN113904443A (en) * 2021-09-28 2022-01-07 国网江苏省电力有限公司连云港供电分公司 Multidimensional space visual field transformer equipment monitoring and early warning system
CN113904443B (en) * 2021-09-28 2023-01-06 国网江苏省电力有限公司连云港供电分公司 Multidimensional space visual field transformer equipment monitoring and early warning system
CN114116068A (en) * 2021-12-02 2022-03-01 重庆紫光华山智安科技有限公司 Service starting optimization method and device, electronic equipment and readable storage medium
CN114116068B (en) * 2021-12-02 2023-06-02 重庆紫光华山智安科技有限公司 Service start optimization method and device, electronic equipment and readable storage medium
CN117792860A (en) * 2022-12-29 2024-03-29 乔羽 Big data communication analysis management method

Also Published As

Publication number Publication date
CN102681889B (en) 2015-01-07

Similar Documents

Publication Title
CN102681889B (en) Scheduling method of cloud computing open platform
Salot A survey of various scheduling algorithm in cloud computing environment
Van den Bossche et al. Online cost-efficient scheduling of deadline-constrained workloads on hybrid clouds
Buttazzo et al. Partitioning real-time applications over multicore reservations
CN103324525B (en) Task scheduling method for a cloud computing environment
CN104580306B (en) Multi-terminal backup service system and its task scheduling method
CN103064744B (en) SLA-based resource optimization method for multi-tier Web applications
De Assuncao et al. Impact of user patience on auto-scaling resource capacity for cloud services
CN111026519B (en) Distributed task priority scheduling method and system and storage medium
CN104820616B (en) Task scheduling method and device
Liu et al. Dynamically negotiating capacity between on-demand and batch clusters
CN106201681B (en) Task scheduling method based on a pre-released resource list under the Hadoop platform
Singh et al. A comparative study of various scheduling algorithms in cloud computing
Zhang et al. Multi-resource fair allocation for cloud federation
CN116048721A (en) Task allocation method and device for GPU cluster, electronic equipment and medium
Teng et al. Scheduling real-time workflow on MapReduce-based cloud
Hung et al. Task scheduling for optimizing recovery time in cloud computing
Xu et al. Intelligent scheduling for parallel jobs in big data processing systems
Ogawa et al. Cloud bursting approach based on predicting requests for business-critical web systems
Buttazzo et al. Partitioning parallel applications on multiprocessor reservations
CN112988363B (en) Resource scheduling method, device, server and storage medium
Bagga et al. Moldable load scheduling using demand adjustable policies
Sharma Task migration with EDF-RM scheduling algorithms in distributed system
Saha et al. Tromino: Demand and DRF aware multi-tenant queue manager for Apache Mesos cluster
Shao et al. Fairness scheduling for tasks with different real-time level on heterogeneous systems

Legal Events

Code Title
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant