CN109416646A - Optimization method and processing device for container allocation - Google Patents
- Publication number: CN109416646A (application CN201680086973.9)
- Authority: CN (China)
- Prior art keywords: task, group, server, container, nexus
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
Abstract
An optimization method and processing device for container allocation. The optimization method comprises: a cognitive computing server obtains N tasks, where N is an integer greater than 1 and the N tasks are obtained by decomposing an application program; the server performs call-relationship analysis on the N tasks and determines at least one task relationship group, where each task relationship group contains at least two tasks, a call relationship exists between any two of those tasks, and a call relationship is a dependency of one task on another task's execution result; the server generates container allocation adjustment information according to the determined task relationship groups, the adjustment information containing the mapping between the task relationship groups and containers; and the server sends the container allocation adjustment information to the selected slave server. Embodiments of the present invention help reduce resource overhead and improve memory resource utilization.
Description
The present invention relates to the field of cloud computing technology, and in particular to an optimization method and processing device for container allocation.
Amid the big data wave sweeping the globe, the open-source distributed storage and processing framework Hadoop has continually driven leading-edge applications and practice of big data technology. The architecture of the distributed file system Hadoop 2.0 introduces the concept of a container: in the Hadoop resource management system, a container is the unit in which resources (memory, processor, disk, and so on) are allocated. When a container starts, it launches a corresponding Java Virtual Machine (JVM) in which a task is executed. For example, offline computing frameworks (such as MapReduce) and the in-memory computing framework Spark can run in containers managed by the Hadoop resource manager Yarn (Yet Another Resource Negotiator).
Fig. 1 is the system architecture diagram of a Spark on Yarn system, i.e. the in-memory computing framework Spark running on Hadoop's distributed resource manager Yarn. The Spark on Yarn system comprises a client, a master server cluster, and a slave server cluster; in Fig. 1 the clusters are illustrated with one master server and one slave server. The master server runs a Resource Manager (RM) module and an Application Manager (AM) module; the slave server runs a Node Manager (NM) module, a container agent (Agent) module, the Hadoop Distributed File System (HDFS), the Hadoop Database (HBase), and a MapReduce (MR) module. The container allocation process of this system is as follows: the client first submits a job request containing the target application to the master server and uploads the target application's data files to HDFS. After the Resource Manager module on the master server receives the client's job request, it selects a slave server for container allocation and notifies that slave server's Node Manager module to allocate one container as a resident container and to create and run a Spark Application Master (SAM) module in the resident container. The SAM module registers the mapping between the resident container and itself with the Application Manager module, obtains the target application's data files from HDFS, and, according to a preset container allocation policy, parses the target application into multiple tasks belonging to multiple different execution stages. It then allocates an independent container for each task and creates and runs a Spark Executor in each independent container; the task in each container is executed by its Spark Executor.
It was found in the course of research that when the existing Spark on Yarn system allocates containers for the multiple tasks of an application, one container is allocated per task, i.e. tasks and containers are in one-to-one correspondence. Because the creation, destruction, and recycling of each container all incur time overhead, especially during the execution of a large-scale application, this allocation policy can make the accumulated time overhead reach 30% of the target application's total execution time. The time overhead during application execution is therefore large and seriously degrades the application's execution efficiency.
Summary of the invention
The present invention provides an optimization method and processing device for container allocation, which optimize the container allocation policy for multiple tasks with call relationships in order to reduce resource overhead and improve resource utilization.
In a first aspect, an embodiment of the present invention provides an optimization method for container allocation, applied to a cognitive computing server in a Spark on Yarn system. The Spark on Yarn system includes a slave server selected for container allocation, and that slave server has a communication connection with the cognitive computing server. The method comprises the following steps:
The cognitive computing server obtains N tasks, where N is an integer greater than 1 and the N tasks are obtained by decomposing an application program;
The cognitive computing server performs call-relationship analysis on the N tasks and determines at least one task relationship group, where each task relationship group contains at least two tasks and a call relationship exists between any two of those tasks, a call relationship being a dependency between two tasks on each other's execution results;
The cognitive computing server generates container allocation adjustment information according to the determined task relationship groups, the adjustment information containing the mapping between the task relationship groups and containers;
The cognitive computing server sends the container allocation adjustment information to the selected slave server.
In this embodiment of the optimization method for container allocation, the cognitive computing server of the Spark on Yarn system analyzes the call relationships among the multiple tasks of the target application, determines at least one task relationship group, establishes the mapping between the task relationship groups and containers, generates container allocation adjustment information containing that mapping, and sends the adjustment information to the slave server, prompting the slave server to optimize the container allocation policy for the multiple tasks according to the mapping. Because a task relationship group contains at least two tasks, the container corresponding to a task relationship group no longer corresponds to just one task but to the at least two tasks in the group. Compared with the existing scheme in which one container corresponds to only one task, this helps reduce the number of containers allocated, thereby reducing resource overhead and improving resource utilization.
With reference to the first aspect, in some possible implementations, the task relationship groups include a first task relationship group and a second task relationship group, and before the cognitive computing server generates the container allocation adjustment information according to the determined task relationship groups, the method further includes:
The cognitive computing server performs call-relationship completeness analysis on the first task relationship group and the second task relationship group;
If a call relationship exists between a task in the first task relationship group and a task in the second task relationship group, the cognitive computing server merges the first and second task relationship groups into one independent task relationship group;
The cognitive computing server adjusts the mapping between the first and second task relationship groups and containers accordingly.
It can be seen that in this optional embodiment the cognitive computing server performs completeness analysis on multiple task relationship groups and merges groups linked by call relationships into independent task relationship groups. Because an independent task relationship group contains more tasks, the container corresponding to it is associated with more tasks, which further increases the number of tasks per container and reduces the total number of containers needed for all tasks, improving container execution efficiency while further reducing resource overhead and improving resource utilization.
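The merge step of the completeness analysis above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; task identifiers, the pair-based call-relation representation, and the function name are all assumptions.

```python
def merge_related_groups(group_a, group_b, call_relations):
    """Merge two task relationship groups into one independent task
    relationship group if any call relation links a task in one group
    to a task in the other; otherwise keep them separate.

    group_a, group_b: sets of task identifiers.
    call_relations: iterable of (caller, callee) task pairs.
    """
    crosses = any(
        (u in group_a and v in group_b) or (u in group_b and v in group_a)
        for u, v in call_relations
    )
    if crosses:
        return [group_a | group_b]   # one independent task relationship group
    return [group_a, group_b]        # no cross relation: groups stay separate

groups = merge_related_groups({"t1", "t2"}, {"t3", "t4"},
                              [("t2", "t3"), ("t1", "t2")])
# ("t2", "t3") crosses the two groups, so they merge into one group of four
```

Merging is what lets a single container later cover all four tasks instead of two containers covering two tasks each.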
With reference to the first aspect, in some possible implementations, the cognitive computing server performing call-relationship analysis on the N tasks and determining at least one task relationship group comprises:
The cognitive computing server determines X tasks from the N tasks, where a call relationship exists between any one of the X tasks and at least one other task, the other tasks being the tasks among the X tasks other than that one task, and X is a positive integer less than or equal to N;
The cognitive computing server determines the at least one task relationship group from the X tasks.
It can be seen that in this optional embodiment the cognitive computing server first filters out the X tasks that have call relationships, discarding the isolated tasks among the N tasks that have no call relationship with any other task, and determines the task relationship groups only from the X tasks, which helps improve the execution efficiency of the algorithm.
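The filtering step above can be sketched as follows, under the same assumed representation of call relations as (caller, callee) pairs; the function name is illustrative.

```python
def filter_related_tasks(tasks, call_relations):
    """Return the X tasks that participate in at least one call relation,
    discarding isolated tasks that neither call nor are called by any
    other task."""
    involved = set()
    for caller, callee in call_relations:
        involved.add(caller)
        involved.add(callee)
    return [t for t in tasks if t in involved]

x_tasks = filter_related_tasks(["t1", "t2", "t3", "t4"], [("t1", "t2")])
# "t3" and "t4" have no call relation, so only ["t1", "t2"] remain
```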
With reference to the first aspect, in some possible implementations, the cognitive computing server determining at least one task relationship group from the X tasks comprises:
The cognitive computing server analyzes the call relationships among the X tasks and determines Y task relationship groups contained in the X tasks, where each of the Y task relationship groups contains at least two tasks, a call relationship exists between any two tasks in a group, and Y is a positive integer less than X.
With reference to the first aspect, in some possible implementations, the cognitive computing server analyzing the call relationships among the X tasks and determining the Y task relationship groups comprises:
The cognitive computing server models the X tasks and the call relationships among them as a directed graph in the graph-theoretic sense, where the X tasks correspond to the vertices of the digraph and the call relationships among the X tasks correspond to directed edges between those vertices;
The cognitive computing server solves for Y independent sets of the digraph to obtain the Y task relationship groups corresponding to those Y independent sets.
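A sketch of the digraph-based grouping follows. The patent's "independent sets" appear to mean groups that are independent of each other (no call relation crosses group boundaries), which corresponds to the weakly connected components of the digraph rather than independent sets in the classical graph-theory sense; the code below is a sketch under that reading, with illustrative names throughout.

```python
from collections import defaultdict

def task_relationship_groups(x_tasks, call_relations):
    """Model the X tasks as digraph vertices and the call relations as
    directed edges, then return the weakly connected components; each
    component is one task relationship group with no call relation to
    any other group."""
    neighbors = defaultdict(set)
    for u, v in call_relations:
        neighbors[u].add(v)
        neighbors[v].add(u)   # edge direction is irrelevant for grouping
    seen, groups = set(), []
    for task in x_tasks:
        if task in seen:
            continue
        stack, component = [task], set()
        while stack:           # depth-first traversal of one component
            t = stack.pop()
            if t in component:
                continue
            component.add(t)
            stack.extend(neighbors[t] - component)
        seen |= component
        groups.append(component)
    return groups

groups = task_relationship_groups(
    ["t1", "t2", "t3", "t4", "t5"],
    [("t1", "t2"), ("t2", "t3"), ("t4", "t5")])
# two groups: {t1, t2, t3} and {t4, t5}
```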
With reference to the first aspect, in some possible implementations, the cognitive computing server sending the container allocation adjustment information to the selected slave server comprises:
The cognitive computing server detects the available container resources on the selected slave server;
If it detects that the available container resources on the selected slave server are less than the resources required by the Y task relationship groups, it sends to the selected slave server container allocation indication information for Y1 of the Y task relationship groups, where Y1 is a positive integer less than Y.
It can be seen that in this optional embodiment the cognitive computing server can dynamically adjust the container allocation indication information according to the available container resources on the selected slave server, avoiding the situation in which the slave server cannot allocate containers because its available container resources are insufficient, which helps improve the stability of container allocation in the system.
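The resource-constrained selection of Y1 groups out of Y can be sketched as below. The patent does not specify which Y1 groups are chosen, so a simple greedy first-fit pass is assumed here; the resource unit and the names are illustrative.

```python
def select_groups_within_resources(group_demands, available):
    """Pick the Y1 task relationship groups whose combined resource
    requirement fits the slave server's available container resources.

    group_demands: list of (group, resource_need) pairs, one per group.
    available: total container resources available on the slave server.
    Groups that do not fit are skipped (assumed deferred to a later
    adjustment round).
    """
    selected, used = [], 0
    for group, need in group_demands:
        if used + need <= available:
            selected.append(group)
            used += need
    return selected

chosen = select_groups_within_resources(
    [({"t1", "t2"}, 4), ({"t3", "t4", "t5"}, 6), ({"t6", "t7"}, 3)], 8)
# 4 fits, 4 + 6 exceeds 8 so the second group is skipped, then 4 + 3 fits
```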
In a second aspect, an embodiment of the present invention provides an optimization method for container allocation, applied to the slave server in a Spark on Yarn system that has been selected for container allocation. The Spark on Yarn system includes a cognitive computing server communicatively connected with the slave server. The method comprises:
The slave server obtains the container allocation adjustment information sent by the cognitive computing server, the adjustment information containing the mapping between at least one task relationship group and containers, where the at least one task relationship group is determined by performing call-relationship analysis on the N tasks of the target application, each independent task relationship group contains at least two tasks, a call relationship exists between any two of those tasks, and a call relationship is a dependency between two tasks on each other's execution results;
The slave server allocates containers for the N tasks according to the container allocation adjustment information.
It can be seen that in this embodiment of the present invention the slave server of the Spark on Yarn system obtains the container allocation adjustment information sent by the cognitive computing server, which contains the mapping between at least one task relationship group and containers. Because a task relationship group contains at least two tasks, the container corresponding to a task relationship group no longer corresponds to just one task but to the at least two tasks in the group. Compared with the existing scheme in which one container corresponds to only one task, this helps reduce the number of containers allocated, reduce resource overhead, and improve resource utilization.
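The slave server's application of the adjustment information can be sketched as follows. The treatment of tasks outside any task relationship group is an assumption (the patent only specifies the grouped case; individual containers for the rest are assumed, consistent with the original one-to-one policy), and all names are illustrative.

```python
import itertools

def allocate_containers(tasks, groups):
    """Apply the container allocation adjustment information: every task
    in a task relationship group shares that group's container; each
    remaining task gets a container of its own (assumed behavior).

    Returns a task -> container-id mapping."""
    container_ids = itertools.count(1)
    allocation = {}
    for group in groups:
        cid = next(container_ids)     # one container per relationship group
        for task in group:
            allocation[task] = cid
    for task in tasks:
        if task not in allocation:    # isolated task: own container
            allocation[task] = next(container_ids)
    return allocation

alloc = allocate_containers(["t1", "t2", "t3", "t4"], [{"t1", "t2"}])
# t1 and t2 share container 1; t3 and t4 get containers 2 and 3
```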
With reference to the second aspect, in some possible implementations, before the slave server obtains the container allocation adjustment information sent by the cognitive computing server, the method further includes:
The slave server obtains the data files of the target application;
The slave server parses the target application to obtain the N tasks;
The slave server sends a container allocation request to the cognitive computing server, the request asking the cognitive computing server to perform call-relationship analysis on the application program's N tasks in order to determine the container allocation adjustment information for the N tasks.
With reference to the second aspect, in some possible implementations, the at least one task relationship group is Y task relationship groups, where Y is a positive integer less than N; and if the available container resources on the selected slave server are less than the resources required by the Y task relationship groups, the container allocation adjustment information is container allocation adjustment information for Y1 of the Y task relationship groups, where Y1 is a positive integer less than Y.
It can be seen that in this optional embodiment the container allocation adjustment information received by the slave server of the Spark on Yarn system can be dynamically adjusted according to the slave server's available container resources, avoiding the situation in which the slave server cannot allocate containers because its available container resources are insufficient, which helps improve the stability of container allocation in the system.
Further, with reference to the second aspect, in some possible implementations, after the slave server allocates containers for the N tasks according to the container allocation adjustment information, the method further includes:
The slave server obtains the n tasks of the task relationship group corresponding to a target container, where n is a positive integer greater than 1;
The slave server obtains the execution stage parameters of the n tasks and the call relationships among the n tasks, the execution stage parameters including at least two execution stages;
The slave server determines the execution order of the n tasks according to the execution stage parameters and the call relationships among the n tasks;
The slave server runs the n tasks in the target container in that execution order.
It can be seen that in this optional embodiment, after the slave server allocates the target container for a task relationship group, it can run the group's n tasks consecutively in that container. Compared with the prior-art scheme in which a container's life cycle serves the execution of only one task, this helps improve container execution efficiency.
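One natural way to realize the execution-order determination above, using both the execution stage parameters and the call relationships, is a topological sort with the stage as tie-break; the patent does not name its exact algorithm, so this is a hedged sketch with illustrative names.

```python
import heapq
from collections import defaultdict

def execution_order(tasks, stage, depends_on):
    """Order the n tasks of one container's task relationship group.

    stage: task -> execution stage parameter (smaller runs earlier).
    depends_on: (task, prerequisite) pairs - the task needs the
    prerequisite's execution result, so the prerequisite runs first.
    Uses Kahn's algorithm with the stage parameter as the tie-break."""
    indegree = {t: 0 for t in tasks}
    successors = defaultdict(list)
    for task, prereq in depends_on:
        indegree[task] += 1
        successors[prereq].append(task)
    ready = [(stage[t], t) for t in tasks if indegree[t] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)    # lowest stage among runnable tasks
        order.append(t)
        for s in successors[t]:
            indegree[s] -= 1
            if indegree[s] == 0:
                heapq.heappush(ready, (stage[s], s))
    return order

order = execution_order(
    ["t1", "t2", "t3"],
    {"t1": 0, "t2": 1, "t3": 1},
    [("t2", "t1"), ("t3", "t1")])
# "t1" (stage 0, no prerequisites) runs first, then "t2" and "t3"
```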
Further, with reference to the second aspect, in some possible implementations, after the slave server runs the n tasks in the target container in the execution order, the method further includes:
The slave server destroys the target container and reclaims the Java Virtual Machine resources corresponding to the target container.
It can be seen that in this optional embodiment the slave server runs the n tasks of the task relationship group consecutively in the target container, and only after the n tasks have finished does it destroy the target container and reclaim its Java Virtual Machine resources. That is, there is no need to perform container creation, destruction, and resource reclamation for every task; the container is reused, which helps reduce resource overhead and improve container execution efficiency.
In a third aspect, an embodiment of the present invention provides a cognitive computing server with the functionality of realizing the behavior of the cognitive computing server in the method designs above. The functionality may be realized by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functionality above.
In one possible design, the cognitive computing server includes a processor configured to support the cognitive computing server in executing the corresponding functions of the methods above. Further, the cognitive computing server may also include a receiver and a transmitter for supporting communication between the cognitive computing server and devices such as the slave server. Further, the cognitive computing server may also include a memory coupled with the processor that stores the program instructions and data necessary for the cognitive computing server.
In a fourth aspect, an embodiment of the present invention provides a slave server with the functionality of realizing the behavior of the slave server in the method designs above. The functionality may be realized by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functionality above.
In one possible design, the slave server includes a processor configured to support the slave server in executing the corresponding functions of the methods above. Further, the slave server may also include a receiver and a transmitter for supporting communication between the slave server and devices such as the cognitive computing server. Further, the slave server may also include a memory coupled with the processor that stores the program instructions and data necessary for the slave server.
In a fifth aspect, an embodiment of the present invention provides a container processing system for use with a Spark on Yarn system. The container processing system includes the cognitive computing server provided in the third aspect and the slave server provided in the fourth aspect of the embodiments of the present invention.
In a sixth aspect, an embodiment of the present invention provides a computer-readable storage medium storing program code, the program code including instructions for executing some or all of the steps of any method described in the first aspect of the embodiments of the present invention.
In a seventh aspect, an embodiment of the present invention provides a computer-readable storage medium storing program code, the program code including instructions for executing some or all of the steps of any method described in the second aspect of the embodiments of the present invention.
In combination with any of the aspects above, in some possible implementations, a call relationship is a call relationship between the data files of the tasks and comprises at least one of the following: a direct call relationship and an indirect call relationship. A direct call relationship comprises at least one of a one-way direct call relationship and a two-way direct call relationship; an indirect call relationship comprises at least one of a transitive indirect call relationship and an indirect call relationship dependent on a third party.
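The taxonomy above can be sketched as a classifier over (caller, callee) pairs. This is an illustrative sketch: only the transitive variant of indirect relations is modeled (third-party-dependent indirect relations are omitted), and the names are assumptions.

```python
def classify_call_relations(direct_calls):
    """Classify call relations among tasks' data files into one-way
    direct, two-way direct, and transitive indirect relations
    (a calls b and b calls c, so a indirectly calls c)."""
    direct = set(direct_calls)
    two_way = {(a, b) for a, b in direct if (b, a) in direct}
    one_way = direct - two_way
    indirect = {(a, c)
                for a, b in direct for b2, c in direct
                if b == b2 and a != c and (a, c) not in direct}
    return one_way, two_way, indirect

one_way, two_way, indirect = classify_call_relations(
    [("t1", "t2"), ("t2", "t1"), ("t2", "t3")])
# t1 and t2 call each other (two-way); t2 -> t3 is one-way;
# t1 -> t3 is a transitive indirect relation through t2
```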
The accompanying drawings required for describing the embodiments of the present invention are briefly described below.
Fig. 1 is a system architecture diagram of the current operating mechanism of the in-memory computing framework Spark in Yarn containers, as disclosed in the prior art;
Fig. 2 is a system architecture diagram of a Spark on Yarn system 100 provided in an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a container allocation process provided in an embodiment of the present invention;
Fig. 4 is a schematic flowchart of an optimization method for container allocation provided in an embodiment of the present invention;
Fig. 5A is a schematic diagram of call relationships between tasks provided in an embodiment of the present invention;
Fig. 5B is a schematic diagram of the task parsing of a target application provided in an embodiment of the present invention;
Fig. 5C is an example diagram of container allocation according to a resource balancing policy provided in an embodiment of the present invention;
Fig. 6A is a unit composition block diagram of a cognitive computing server provided in an embodiment of the present invention;
Fig. 6B is a structural schematic diagram of a cognitive computing server provided in an embodiment of the present invention;
Fig. 7A is a unit composition block diagram of a slave server provided in an embodiment of the present invention;
Fig. 7B is a structural schematic diagram of a slave server provided in an embodiment of the present invention.
The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 2 is a system architecture diagram of a Spark on Yarn system 100 provided in an embodiment of the present invention. The Spark on Yarn system 100 specifically includes: a client, a master server in a master server cluster, a slave server in a slave server cluster, and a cognitive computing server in a cognitive computing server cluster.
The client is used to obtain a target application and generate the target application's data files and application requests.
The master server runs a Resource Manager (RM) module and an Applications Manager (AM) module, where the RM module is used to receive an application request, query the AM module to determine a slave server with available container resources as the selected slave server, and send a resource allocation request to that slave server.
The slave server runs a Node Manager (NM) module, a container agent (Agent) module, the Hadoop Distributed File System (HDFS, which includes the slave server's storage resources, such as its hard disk resources), the Hadoop Database (HBase), and a MapReduce module. The HDFS comprises the storage resources of all slave servers in the slave server cluster of the Spark on Yarn system, and each slave server can obtain data files from HDFS or upload data files to HDFS.
The slave server is used to receive the resource allocation request, process the target application's data files, and send a container allocation request to the cognitive computing server. The NM module is used to manage the slave server's resources, for example creating the resident container and managing the containers allocated to tasks. A Spark Application Master (SAM) module runs in the resident container created by the NM module and is used to allocate containers for the target application's multiple tasks according to the container allocation result. The Agent module is a custom-designed software module used to collect information such as the container pre-allocation result generated by the SAM module and the tasks' data files, and, using the pipeline write mode, to store the tasks' data files to HDFS and write the container pre-allocation result to HBase. HDFS is used to store the target application's data files; HBase is used to store the container pre-allocation result and the container allocation result sent by the cognitive computing server. The MapReduce module provides the corresponding data analysis capability for the cognitive computing server when the latter analyzes the call relationships of the multiple tasks.
The cognitive computing server runs an Administration Platform (AP), a Cognitive Center module, an Analysis Scripts module, and a Web UI module. The cognitive computing server is used to receive a container allocation request, obtain the data files of the target application's multiple tasks and perform call-relationship analysis, generate the container allocation result for the multiple tasks according to the call relationships obtained from the analysis, and send the container allocation result to the slave server. The Analysis Scripts module stores the preset resource allocation policies; the Web UI module provides a human-machine interface and outputs information such as the operating information of the containers running tasks in the Spark on Yarn system and the real-time monitoring status.
The container allocation process of the Spark on Yarn system 100 is described in detail below, taking as an example a client that obtains a target application entered by a user. As shown in Fig. 3, the example container allocation process includes the following steps:
S301: the client obtains the target application entered by the user, generates the target application's application request and data files, and sends the data files to HDFS.
The HDFS here includes the storage resources (for example, hard disk resources) of all slave servers in the slave server cluster of the Spark on Yarn system, and each slave server shares the data files in the HDFS.
S302: the client sends the application request to the RM module of the master server.
S303: the RM module of the master server receives the application request, queries the AM module, and determines a slave server with available container resources as the selected slave server for container allocation.
S304: the RM module sends a resource allocation request to the NM module of the slave server.
S305: the NM module of the slave server receives the resource allocation request, allocates a resident container, and creates and runs a SAM module in the resident container.
S306: the SAM module obtains the target application's data files from HDFS and, according to the pre-stored original container allocation policy, parses the target application into multiple tasks belonging to multiple different execution stages and generates the tasks' data files and a container pre-allocation result.
The container pre-allocation result includes the mapping between tasks and containers, with one task corresponding to one container.
S307: the SAM module sends the data files of the multiple tasks to HDFS through the Agent module, and sends to HBase the mapping between the slave server and the target application, the mapping between the target application and the multiple tasks, the execution stage parameters of the multiple tasks, and the container pre-allocation result.
S308: after the container agent module has finished the operations above, it sends a container allocation request to the AP of the cognitive computing server.
S309: the AP of the cognitive computing server receives the container allocation request and sends a cognitive computation request for the target application's multiple tasks to the Cognitive Center module.
S310: after the Cognitive Center module receives the cognitive computation request, it obtains the data files of the multiple tasks from HDFS and the preset container allocation policy from the scripts module, invokes the MapReduce module to perform call-relationship analysis on the target application's multiple tasks according to that policy and the obtained data files, determines at least one task relationship group, generates container allocation adjustment information according to the task relationship groups, and sends the container allocation adjustment information to the slave server's HBase.
The container allocation adjustment information includes the mapping between the at least one task relationship group and containers.
S311: The container proxy module reads from HBase the mapping relations between the at least one task nexus group and containers, and returns the mapping relations to the SAM module.
S312: The SAM module allocates containers according to the mapping relations between the at least one task nexus group and containers.
It can be seen that, compared with the prior art, the Spark on Yarn system provided in this embodiment of the present invention includes a cognition calculation server and a slave server selected for allocating containers. The cognition calculation server can establish the mapping relations between containers and the at least one task nexus group corresponding to the target application, and send the container allocation adjustment information containing those mapping relations to the slave server. The slave server optimizes, according to the mapping relations, the original container allocation strategy in which one container corresponds to one task, so that one container corresponds to one task nexus group. Because a task nexus group includes at least two tasks and a call relation exists between any two of those tasks, the slave server can, according to the optimized container allocation strategy, allocate one container for multiple tasks. The container can then run all tasks in the task nexus group, and is destroyed only after all the tasks have finished running, whereupon the resources of the container are reclaimed. The operations of container creation, destruction and resource reclamation no longer need to be performed for each task, so container reuse is realized, which helps reduce resource overhead and improve resource utilization and container execution efficiency.
With reference to Fig. 4, the following describes the optimization method of container allocation provided in an embodiment of the present invention. Fig. 4 shows an optimization method of container allocation, applied to the cognition calculation server in a distributed resource management in-memory computing framework Spark on Yarn system, where the Spark on Yarn system includes a slave server selected for allocating containers and the slave server communicates with the cognition calculation server. The method includes parts S401 to S406, as follows:
S401: The cognition calculation server obtains N tasks, where N is an integer greater than 1 and the N tasks are obtained by decomposing a target application.
The N tasks are obtained by the cognition calculation server from the Hadoop distributed file system (HDFS) associated with the slave server, and the concrete form of the N tasks may be the N data files corresponding to the N tasks. The HDFS includes the storage resources, for example hard disk resources, of all slave servers in the slave-server cluster of the Spark on Yarn system, and each slave server shares the data files in the HDFS.
In one example, before the cognition calculation server obtains the N tasks, the following operation is also performed: the cognition calculation server receives a container allocation request, sent by the slave server, for the N tasks of the target application.
S402: The cognition calculation server performs call-relationship analysis on the N tasks to determine at least one task nexus group, where each task nexus group includes at least two tasks, a call relation exists between any two tasks in each task nexus group, and a call relation between two tasks is a dependence of one task on the execution result of the other.
In a specific implementation, the cognition center module of the cognition calculation server obtains a preset call-relationship analysis strategy from the script module, and performs the operations of steps S402 to S404 according to that strategy.
The call relation includes at least one of the following: a direct call relation and an indirect call relation, as illustrated in the call-relation schematic diagram of Fig. 5A.
The direct call relation includes at least one of the following: a unidirectional direct call relation (see (1) in Fig. 5A) and a bidirectional direct call relation (see (2) in Fig. 5A). The indirect call relation includes at least one of the following: a transitive indirect call relation (see (3) in Fig. 5A) and an indirect call relation depending on a third party (see (4) in Fig. 5A).
In one example, the cognition calculation server performs call-relationship analysis on the N tasks and determines the at least one task nexus group as follows:
The cognition calculation server determines X tasks from the N tasks, where a call relation exists between any one of the X tasks and at least one other task among the X tasks, and X is a positive integer less than or equal to N;
the cognition calculation server then determines the at least one task nexus group from the X tasks.
It can be seen that, in this example, the cognition calculation server first filters out the X tasks that have call relations, promptly discarding the isolated tasks among the N tasks, and determines task nexus groups only from the X tasks, which helps improve the execution efficiency of the algorithm.
In this example, the cognition calculation server determines the at least one task nexus group from the X tasks as follows:
The cognition calculation server analyzes the call relations among the X tasks and determines Y task nexus groups contained in the X tasks, where each of the Y task nexus groups includes at least two tasks, a call relation exists between any two of those tasks, and Y is a positive integer less than X.
In this example, the cognition calculation server analyzes the call relations among the X tasks and determines the Y task nexus groups as follows:
The cognition calculation server maps the X tasks and the call relations among them to a directed graph in graph theory, where the X tasks correspond to the vertices of the directed graph and the call relations among the X tasks correspond to the directed edges between those vertices;
the cognition calculation server then solves for Y independent sets of the directed graph, obtaining the Y task nexus groups corresponding to the Y independent sets.
In graph theory, an independent set is a subset of the vertex set of a graph whose induced subgraph contains no edge. An independent set that is not a proper subset of any other independent set is called a maximal independent set, and an independent set containing the largest number of vertices in a graph is called a maximum independent set.
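The standard graph-theory definitions above can be demonstrated with a short sketch. This illustrates only the definitions of independent and maximal independent sets on a tiny hypothetical undirected graph; it is not the patent's grouping algorithm, and the brute-force enumeration is only practical for small graphs.

```python
from itertools import combinations

def is_independent(edges, subset):
    """An independent set induces no edge: no two of its vertices are adjacent."""
    return not any((u, v) in edges or (v, u) in edges
                   for u, v in combinations(subset, 2))

def maximal_independent_sets(vertices, edges):
    """Independent sets that are not proper subsets of any other independent set."""
    ind_sets = [set(s) for r in range(1, len(vertices) + 1)
                for s in combinations(vertices, r)
                if is_independent(edges, s)]
    return [s for s in ind_sets if not any(s < t for t in ind_sets)]

# Tiny hypothetical graph: a path 1 - 2 - 3 - 4
V = [1, 2, 3, 4]
E = {(1, 2), (2, 3), (3, 4)}
print(maximal_independent_sets(V, E))
# -> [{1, 3}, {1, 4}, {2, 4}]; the maximum independent sets have 2 vertices
```

Here {2} alone is independent but not maximal, since it extends to {2, 4}; all three printed sets cannot be extended further.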
As an example, assume the slave server parses the target application into 10 tasks, and the directed graph corresponding to the 10 tasks is as shown in Fig. 5B. From the above definition of independent sets, the independent sets in this directed graph include {task 2, task 4}, {task 2, task 4, task 7}, {task 2, task 4, task 7, task 8}, {task 2, task 4, task 7, task 9}, {task 3, task 5, task 7, task 9}, and so on, among which the maximum independent sets are {task 2, task 4, task 7, task 8}, {task 2, task 4, task 7, task 9} and {task 3, task 5, task 7, task 9}.
S403: The cognition calculation server generates container allocation adjustment information according to the determined at least one task nexus group, where the container allocation adjustment information includes the mapping relations between the at least one task nexus group and containers.
In one example, the at least one task nexus group includes a first task nexus group and a second task nexus group, and before the cognition calculation server generates the container allocation adjustment information according to the determined at least one task nexus group, the following operations are also performed:
The cognition calculation server performs call-relation completeness analysis on the first task nexus group and the second task nexus group;
if a call relation exists between a task in the first task nexus group and a task in the second task nexus group, the cognition calculation server merges the first task nexus group and the second task nexus group into one independent task nexus group;
the cognition calculation server adjusts the mapping relations between the first and second task nexus groups and containers accordingly.
It can be seen that, in this example, by performing completeness analysis on multiple task nexus groups, the cognition calculation server merges task nexus groups between which call relations exist into one independent task nexus group. Because the independent task nexus group contains more tasks, the container corresponding to it is associated with more tasks, which helps further increase the number of tasks corresponding to a single container and thereby reduce the total number of containers for all tasks, improving container execution efficiency while further reducing resource overhead and improving resource utilization.
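The merge step described above can be sketched as a repeated pairwise union of groups that share a call relation. This is a simplified illustration under an assumed data model (groups as sets of task IDs, call relations as ordered pairs); it is not the patented implementation.

```python
def merge_related_groups(groups, call_relations):
    """Merge task nexus groups linked by a call relation into independent groups."""
    rel = set(call_relations) | {(b, a) for a, b in call_relations}
    merged = [set(g) for g in groups]
    changed = True
    while changed:                      # repeat until no two groups are linked
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if any((a, b) in rel for a in merged[i] for b in merged[j]):
                    merged[i] |= merged.pop(j)   # union the two groups
                    changed = True
                    break
            if changed:
                break
    return merged

# Hypothetical example: task 4 (first group) calls task 7 (second group)
print(merge_related_groups([{2, 4}, {7, 8, 9}], [(4, 7)]))
# -> [{2, 4, 7, 8, 9}]
```

After the merge, the single independent task nexus group maps to one container instead of two.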
S404: The cognition calculation server sends the container allocation adjustment information to the selected slave server.
In one example, the cognition calculation server sends the container allocation adjustment information to the selected slave server as follows:
The cognition calculation server detects the available container resources of the selected slave server;
if it detects that the available container resources of the selected slave server are less than the resources required by the Y task nexus groups, it sends to the selected slave server container allocation adjustment information for Y1 of the Y task nexus groups, where Y1 is a positive integer less than Y.
The available container resources include software and hardware resources of the slave server, such as memory, CPU and hard disk.
It can be seen that, in this example, the cognition calculation server can dynamically adjust the container allocation adjustment information according to the available container resources of the selected slave server, which avoids the situation in which the slave server cannot allocate containers because its available container resources are insufficient, and helps improve the stability of container allocation in the system.
In one embodiment, if it is detected that the available container resources of the selected slave server are less than the resources required by the Y task nexus groups, the cognition calculation server also performs the following operations:
The cognition calculation server selects another server in the Spark on Yarn system as an alternative slave server, where the alternative slave server is the slave server with the smallest data transmission distance to the selected slave server;
the cognition calculation server sends to the alternative slave server container allocation adjustment information for Y2 of the Y task nexus groups, where Y2 is a positive integer less than Y and Y1 + Y2 = Y.
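The split of the Y task nexus groups between the selected slave server (Y1 groups) and the alternative slave server (Y2 groups) can be sketched with a simple first-fit pass. The cost model (one resource unit per task) and all figures are hypothetical assumptions for illustration only.

```python
def split_allocation(groups, group_cost, available):
    """Keep as many groups as fit within `available` for the chosen slave
    server (Y1); the overflow (Y2 = Y - Y1) goes to the alternative server."""
    fit, overflow, used = [], [], 0
    for g in groups:
        cost = group_cost(g)
        if used + cost <= available:
            fit.append(g)
            used += cost
        else:
            overflow.append(g)
    return fit, overflow

groups = [{2, 4}, {3, 5}, {7, 8, 9}]   # Y = 3 hypothetical task nexus groups
y1, y2 = split_allocation(groups, len, available=4)  # assume 1 unit per task
print(y1, y2)
# -> [{2, 4}, {3, 5}] [{7, 8, 9}]   (Y1 = 2 groups fit; Y2 = 1 group overflows)
```

A real scheduler would weigh memory, CPU and disk separately; the single scalar here only illustrates the Y1 + Y2 = Y split.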
It can be seen from steps S401 to S404 and the related steps that the cognition calculation server of the Spark on Yarn system determines at least one task nexus group by analyzing the call relations among the multiple tasks of the target application, establishes the mapping relations between the at least one task nexus group and containers, generates the container allocation adjustment information containing those mapping relations, and sends it to the slave server, prompting the slave server to optimize the container allocation strategy of the multiple tasks according to the mapping relations. Because a task nexus group includes at least two tasks, the container corresponding to a task nexus group no longer corresponds to only one task but to the at least two tasks of the group. Compared with the existing scheme in which one container corresponds to only one task, this helps reduce the number of allocated containers, reduce resource overhead, and improve resource utilization.
S405: The slave server obtains the container allocation adjustment information sent by the cognition calculation server, where the container allocation adjustment information includes the mapping relations between at least one task nexus group and containers, the at least one task nexus group is determined by performing call-relationship analysis on the N tasks of the target application, each task nexus group includes at least two tasks, a call relation exists between any two of the at least two tasks, and a call relation between two tasks is a dependence of one task on the execution result of the other.
In one example, before the slave server obtains the container allocation adjustment information sent by the cognition calculation server, the slave server also performs the following operations:
The slave server obtains the data file of the target application;
the slave server parses the target application to obtain the N tasks;
the slave server sends a container allocation request to the cognition calculation server, where the container allocation request is used to request the cognition calculation server to perform call-relationship analysis on the N tasks of the application so as to determine the container allocation adjustment information for the N tasks.
For example, assume the slave server parses target application 1, submitted by a user through a client, into 10 tasks. After the slave server writes the N tasks into HBase through the container proxy module, the application task index relation table shown in Table 1, which contains the call relations among the tasks, can be obtained.
Table 1
Job ID | Task ID | Stage | Caller/Callee |
1 | 1 | 1 | null |
1 | 2 | 1 | 4 |
1 | 3 | 1 | 5 |
1 | 4 | 1 | 7 |
1 | 5 | 1 | 7 |
1 | 6 | 2 | null |
1 | 7 | 2 | 8,9 |
1 | 8 | 3 | 7 |
1 | 9 | 3 | 7 |
1 | 10 | 3 | null |
In Table 1, Job ID is the application identifier, Task ID is the task identifier, and Stage identifies the operation phase of a task: for example, 1 indicates that the corresponding task is in the first operation phase and 2 indicates the second operation phase. Caller/Callee indicates the tasks that have a call relation with the corresponding task: the Caller/Callee of task 1 is null, meaning no task has a call relation with task 1, while the Caller/Callee of task 2 is 4, meaning the task that has a call relation with task 2 is task 4. From this index relation table it can be determined that 7 of the 10 tasks have call relations, namely task 2, task 3, task 4, task 5, task 7, task 8 and task 9.
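Reading Table 1 as records makes the determination above mechanical: any task whose Caller/Callee field is null has no call relation, and every other task does. The row encoding below is a sketch of one possible in-memory form of Table 1, not the HBase schema used by the system.

```python
# (Task ID, Stage, Caller/Callee) rows of Table 1, Job ID 1
rows = [
    (1, 1, "null"), (2, 1, "4"),   (3, 1, "5"),  (4, 1, "7"),  (5, 1, "7"),
    (6, 2, "null"), (7, 2, "8,9"), (8, 3, "7"),  (9, 3, "7"),  (10, 3, "null"),
]

# A "null" Caller/Callee marks a task with no call relation at all.
related = [task for task, _stage, peers in rows if peers != "null"]
print(related)  # -> [2, 3, 4, 5, 7, 8, 9]
```

These seven tasks are exactly the candidates for task nexus groups; tasks 1, 6 and 10 fall through to the resource-balancing strategy described later.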
In one example, the at least one task nexus group is Y task nexus groups, where Y is a positive integer less than N.
If the available container resources of the selected slave server are less than the resources required by the Y task nexus groups, the container allocation adjustment information is the container allocation adjustment information for Y1 of the Y task nexus groups, where Y1 is a positive integer less than Y.
It can be seen that, in this example, the container allocation adjustment information obtained by the slave server of the Spark on Yarn system can be dynamically adjusted according to the available container resources of the slave server, which avoids the situation in which the slave server cannot allocate containers because its available container resources are insufficient, and helps improve the stability of container allocation in the system.
S406: The slave server allocates containers for the N tasks according to the container allocation adjustment information.
In one example, after the slave server allocates containers for the N tasks according to the container allocation adjustment information, the cognition calculation server also performs the following operation:
The cognition calculation server allocates containers, according to a preset resource-balancing strategy, for the tasks among the N tasks that do not belong to the determined task nexus groups.
In this example, the cognition calculation server allocates containers, according to the preset resource-balancing strategy, for the tasks among the N tasks that do not belong to the determined task nexus groups as follows:
The cognition calculation server obtains the available container resources of each slave server in the slave-server cluster of the Spark on Yarn system;
the cognition calculation server allocates containers for those tasks on the slave servers with the most available resources.
For example, still taking the target application corresponding to Fig. 5B and Table 1, task 1, task 6 and task 10 are individual tasks that have no call relation with any other task. Assuming the slave servers with the most available resources in the cluster are slave server 1, slave server 2 and slave server 3, then, as shown in Fig. 5C, the cognition calculation server, following the preset resource-balancing strategy, can allocate slave server 1 for task 1, slave server 2 for task 6 and slave server 3 for task 10.
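A greedy form of this resource-balancing strategy, where each isolated task goes to the slave server with the most free container resources at that moment, can be sketched as below. The server names, resource units and one-unit-per-container assumption are all hypothetical.

```python
def balance_isolated_tasks(tasks, servers):
    """Assign each isolated task to the slave server with the most available
    container resources (`servers`: name -> free resource units)."""
    placement = {}
    free = dict(servers)
    for task in tasks:
        best = max(free, key=free.get)   # server with the most free resources
        placement[task] = best
        free[best] -= 1                  # assume one unit per single-task container
    return placement

servers = {"slave-1": 3, "slave-2": 3, "slave-3": 3}
print(balance_isolated_tasks([1, 6, 10], servers))
# -> {1: 'slave-1', 6: 'slave-2', 10: 'slave-3'}
```

With equal starting resources, the three isolated tasks spread across the three servers, matching the Fig. 5C example.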
In one example, after allocating containers for the N tasks according to the container allocation adjustment information, the slave server also performs the following operations:
The slave server obtains the n tasks of the task nexus group corresponding to a target container among the containers, where n is a positive integer greater than 1;
the slave server obtains the execution-stage parameters of the n tasks and the call relations of the n tasks, where the execution-stage parameters include at least two execution stages;
the slave server determines the execution sequence of the n tasks according to the execution-stage parameters and the call relations of the n tasks;
the slave server runs the n tasks in the target container according to the execution sequence.
It can be seen that, in this example, after allocating the target container for a task nexus group, the slave server can run the n tasks of the task nexus group consecutively in the target container. Compared with the prior art, in which the life cycle of a container serves the running of only one task, this helps improve container execution efficiency.
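One plausible way to combine the two ordering inputs above is to sort by execution stage and then resolve dependencies within that order. This is a sketch under an assumed interpretation of the Caller/Callee field (a task depends on the execution results of the tasks it lists); the patent does not fix this data model.

```python
def execution_order(tasks, stage, depends_on):
    """Order the n tasks of a nexus group: a task runs only after the tasks
    whose execution results it depends on, with stage order as a tiebreaker."""
    done = []
    pending = sorted(tasks, key=lambda t: stage[t])   # earlier stages first
    while pending:
        for t in pending:
            deps = depends_on.get(t, [])
            if all(d in done or d not in tasks for d in deps):
                done.append(t)
                pending.remove(t)
                break
        else:
            raise ValueError("cyclic call relation")
    return done

stage = {2: 1, 4: 1, 7: 2, 8: 3, 9: 3}
deps = {2: [4], 4: [7], 7: [8, 9]}   # task 2 needs task 4's result, etc.
print(execution_order([2, 4, 7, 8, 9], stage, deps))  # -> [8, 9, 7, 4, 2]
```

Tasks 8 and 9 carry no dependencies, so they run first; task 7 follows once both complete, then tasks 4 and 2.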
In one example, after the slave server runs the n tasks in the target container according to the execution sequence, the following operation is also performed:
The slave server destroys the target container and reclaims the Java virtual machine resources corresponding to the target container.
It can be seen that, in this example, the slave server runs the n tasks of the task nexus group consecutively in the target container, and destroys the target container and reclaims its Java virtual machine resources only after the n tasks have finished running. That is, the operations of container creation, destruction and resource reclamation do not need to be performed for each task; container reuse is realized, which helps reduce resource overhead and improve container execution efficiency.
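The container life cycle described here, one creation and one destruction for the whole nexus group rather than per task, can be illustrated with a toy stand-in. The `Container` class and its methods are purely hypothetical; real containers in this system hold JVM processes managed by the slave server.

```python
class Container:
    """Toy stand-in for a container reused across the tasks of a nexus group."""
    def __init__(self):
        self.log = []
    def run(self, task):
        self.log.append(f"run task {task}")
    def destroy(self):
        self.log.append("destroy container, reclaim JVM resources")

def run_nexus_group(tasks):
    container = Container()      # one creation for n tasks, not n creations
    for t in tasks:
        container.run(t)         # tasks run consecutively in the same container
    container.destroy()          # destroyed only after all tasks complete
    return container.log

print(run_nexus_group([8, 9, 7, 4, 2])[-1])
# -> destroy container, reclaim JVM resources
```

The one-task-per-container scheme would instead pay the create/destroy/reclaim cost five times for this group.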
It can be seen from steps S405 to S406 and the related steps that the slave server of the Spark on Yarn system obtains the container allocation adjustment information sent by the cognition calculation server, which contains the mapping relations between at least one task nexus group and containers. Because a task nexus group includes at least two tasks, the container corresponding to a task nexus group no longer corresponds to only one task but to the at least two tasks of the group. Compared with the existing scheme in which one container corresponds to only one task, this helps reduce the number of allocated containers, reduce resource overhead, and improve resource utilization.
The foregoing mainly describes the solutions of the embodiments of the present invention from the perspective of the interaction between the cognition calculation server and the slave server. It can be understood that, to realize the above functions, each server, for example the cognition calculation server and the slave server, includes the corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should readily appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Professional technicians may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments of the present invention, the cognition calculation server and the slave server may be divided into functional units according to the above method examples. For example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present invention is schematic and is only a logical function division; there may be other division manners in actual implementation.
When integrated units are used, Fig. 6A shows a possible structural schematic diagram of the cognition calculation server involved in the above embodiments. The cognition calculation server 600 includes a processing unit 602 and a communication unit 603. The processing unit 602 is used to control and manage the actions of the cognition calculation server; for example, the processing unit 602 is used to support the cognition calculation server in performing steps S402, S403 and S404 in Fig. 4 and/or other processes of the techniques described herein. For another example, the processing unit 602 is also used to support the management platform, the cognition center module and the networking-product interface module in the cognition calculation server in Fig. 2 in performing the corresponding operations. The communication unit 603 is used to support communication between the cognition calculation server and other devices, for example the communication with the slave server shown in Fig. 1, and specifically to support the cognition calculation server in performing step S401 in Fig. 4. The cognition calculation server may also include a storage unit 601 for storing the program code and data of the cognition calculation server, specifically to support the script module in the cognition calculation server in Fig. 2 in storing the preset container allocation strategy.
The processing unit 602 may be a processor or controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logic blocks, modules and circuits described in connection with the present disclosure. The processor may also be a combination that realizes computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The communication unit 603 may be a communication interface, a transceiver, a transceiver circuit, or the like, where "communication interface" is a collective term that may include one or more interfaces, for example the interface between the cognition calculation server and the slave server and/or other interfaces. The storage unit 601 may be a memory.
When the processing unit 602 is a processor, the communication unit 603 is a communication interface, and the storage unit 601 is a memory, the cognition calculation server involved in the embodiments of the present invention may be the cognition calculation server shown in Fig. 6B.
Referring to Fig. 6B, the cognition calculation server 610 includes a processor 612, a communication interface 613 and a memory 611. Optionally, the cognition calculation server 610 may also include a bus 614, through which the communication interface 613, the processor 612 and the memory 611 can be connected to each other. The bus 614 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in Fig. 6B, but this does not mean there is only one bus or one type of bus.
It can be understood that the embodiments of the present invention do not limit the device used as the cognition calculation server to the cognition calculation server shown in Fig. 6A or Fig. 6B.
When integrated units are used, Fig. 7A shows a possible structural schematic diagram of the slave server involved in the above embodiments. The slave server 700 includes a processing unit 702 and a communication unit 703. The processing unit 702 is used to control and manage the actions of the slave server; for example, the processing unit 702 is used to support the slave server in performing step S406 in Fig. 4 and/or other processes of the techniques described herein. For another example, the processing unit 702 is also used to support the node management module, the in-memory computing framework operation main-control module, the container proxy module and the map-reduce computing module in the slave server in Fig. 2 in performing the corresponding operations. The communication unit 703 is used to support communication between the slave server and other network entities, for example the communication with the client and the primary server shown in Fig. 1 and with the cognition calculation server, and specifically to support the slave server in Fig. 2 in performing step S405 in Fig. 4. The slave server may also include a storage unit 701 for storing the program code and data of the slave server, specifically to support the slave server in Fig. 2 in storing, in the storage resources it contributes to the HDFS, the data files, and to support HBase in storing the container pre-allocation result, the mapping relation between the slave server and the target application, and the mapping relations between the target application and the multiple tasks.
The processing unit 702 may be a processor or controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logic blocks, modules and circuits described in connection with the present disclosure. The processor may also be a combination that realizes computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication unit 703 may be a communication interface, a transceiver, a transceiver circuit, or the like, where "communication interface" is a collective term that may include one or more interfaces, for example the interface between the cognition calculation server and the slave server and/or other interfaces. The storage unit 701 may be a memory.
When the processing unit 702 is a processor, the communication unit 703 is a communication interface, and the storage unit 701 is a memory, the slave server involved in the embodiments of the present invention may be the slave server shown in Fig. 7B.
Referring to Fig. 7B, the slave server 710 includes a processor 712, a communication interface 713 and a memory 711. Optionally, the slave server 710 may also include a bus 714, through which the communication interface 713, the processor 712 and the memory 711 can be connected to each other. The bus 714 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in Fig. 7B, but this does not mean there is only one bus or one type of bus.
It can be understood that the embodiments of the present invention do not limit the device used as the slave server to the slave server shown in Fig. 7A or Fig. 7B.
Furthermore, the embodiments of the present invention also provide a container processing system, applied to the distributed resource management in-memory computing framework Spark on Yarn system shown in Fig. 2. The container processing system includes the cognition calculation server described in any of the above embodiments and the slave server described in any of the above embodiments.
The steps of the methods or algorithms described in the embodiments of the present invention may be implemented in hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or a storage medium of any other form well known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be a component of the processor. The processor and the storage medium may be located in an ASIC, and the ASIC may be located in a gateway or a mobility management network element. Of course, the processor and the storage medium may also exist in the gateway or mobility management network element as discrete components.
Those skilled in the art will appreciate that, in one or more of the above examples, the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on, or transmitted as one or more instructions or code over, a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium accessible by a general-purpose or special-purpose computer.
The specific implementations described above further explain the purposes, technical solutions and beneficial effects of the embodiments of the present invention. It should be understood that the foregoing is merely a specific implementation of the embodiments of the present invention and is not intended to limit their protection scope; any modification, equivalent replacement, improvement and the like made on the basis of the technical solutions of the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
Claims (21)
- A method for optimizing container allocation, characterized in that it is applied to a cognitive computing server in a Spark on Yarn distributed resource management in-memory computing framework system, the Spark on Yarn system comprising a slave server selected for container allocation, the slave server being communicatively connected to the cognitive computing server, the method comprising: the cognitive computing server obtaining N tasks, where N is an integer greater than 1 and the N tasks are obtained by decomposing an application program; the cognitive computing server performing call relationship analysis on the N tasks to determine at least one task relationship group, each task relationship group comprising at least two tasks, where a call relationship exists between any two tasks of the at least two tasks and the call relationship is a dependency between the two tasks on each other's execution results; the cognitive computing server generating container allocation adjustment information according to the determined at least one task relationship group, the container allocation adjustment information comprising a mapping relationship between the at least one task relationship group and containers; and the cognitive computing server sending the container allocation adjustment information to the selected slave server.
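As a rough illustration of the flow claim 1 describes (decompose an application into tasks, group tasks that share call relationships, then map each group to a container), the following Python sketch may help. All names (`group_tasks`, `build_adjustment`) are hypothetical; this is not the patented implementation, only a minimal union-find reading of the grouping step.

```python
# Illustrative sketch only: group tasks linked by call relationships
# (a dependency on another task's execution result) and map each
# resulting task relationship group to one container id.
from itertools import count

def group_tasks(tasks, calls):
    """Union-find grouping: tasks joined by a call relation share a group."""
    parent = {t: t for t in tasks}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path compression
            t = parent[t]
        return t

    for caller, callee in calls:           # each call relation links two tasks
        parent[find(caller)] = find(callee)

    groups = {}
    for t in tasks:
        groups.setdefault(find(t), []).append(t)
    # A task relationship group needs at least two tasks.
    return [g for g in groups.values() if len(g) >= 2]

def build_adjustment(groups):
    """Container allocation adjustment info: container id -> task group."""
    ids = count(1)
    return {next(ids): sorted(g) for g in groups}

tasks = ["t1", "t2", "t3", "t4", "t5"]
calls = [("t1", "t2"), ("t2", "t3")]       # t4, t5 have no call relations
groups = group_tasks(tasks, calls)
print(groups)                              # [['t1', 't2', 't3']]
print(build_adjustment(groups))            # {1: ['t1', 't2', 't3']}
```

Tasks without any call relationship (t4, t5 above) are left outside every group, matching the claim's requirement that a group contains at least two related tasks.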
- The method according to claim 1, characterized in that the at least one task relationship group comprises a first task relationship group and a second task relationship group, and before the cognitive computing server generates the container allocation adjustment information according to the determined at least one task relationship group, the method further comprises: the cognitive computing server performing call relationship integrity analysis on the first task relationship group and the second task relationship group; if a call relationship exists between a task in the first task relationship group and a task in the second task relationship group, the cognitive computing server merging the first task relationship group and the second task relationship group into an independent task relationship group; and the cognitive computing server adjusting the mapping relationship between the first and second task relationship groups and containers.
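The merging condition in claim 2 (two groups collapse into one when any call relationship crosses them) can be sketched as below. The function name `merge_if_related` is hypothetical and the sketch ignores the subsequent container-mapping adjustment.

```python
# Illustrative sketch only: merge two task relationship groups when a
# call relation connects a task in one group with a task in the other.
def merge_if_related(group_a, group_b, calls):
    """Return one merged group if a call relation crosses the groups,
    otherwise return both groups unchanged."""
    a, b = set(group_a), set(group_b)
    crosses = any(
        (x in a and y in b) or (x in b and y in a)
        for x, y in calls
    )
    return [sorted(a | b)] if crosses else [group_a, group_b]

g1, g2 = ["t1", "t2"], ["t3", "t4"]
print(merge_if_related(g1, g2, [("t2", "t3")]))  # [['t1', 't2', 't3', 't4']]
print(merge_if_related(g1, g2, [("t1", "t2")]))  # [['t1', 't2'], ['t3', 't4']]
```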
- The method according to claim 1 or 2, characterized in that the cognitive computing server performing call relationship analysis on the N tasks to determine at least one task relationship group comprises: the cognitive computing server determining X tasks from the N tasks, where a call relationship exists between any task in the X tasks and at least one other task, the other task being a task in the X tasks other than the any task, and X is a positive integer less than or equal to N; and the cognitive computing server determining the at least one task relationship group from the X tasks.
- The method according to claim 3, characterized in that the cognitive computing server determining the at least one task relationship group from the X tasks comprises: the cognitive computing server analyzing the call relationships contained in the X tasks to determine Y task relationship groups contained in the X tasks, where each of the Y task relationship groups comprises at least two tasks, a call relationship exists between any two tasks of the at least two tasks, and Y is a positive integer less than X.
- The method according to claim 4, characterized in that the cognitive computing server analyzing the call relationships contained in the X tasks to determine the Y task relationship groups contained in the X tasks comprises: the cognitive computing server modeling the X tasks and the call relationships among the X tasks as a digraph in graph theory, where the X tasks correspond to the vertices of the digraph and the call relationships among the X tasks correspond to directed edges between the vertices of the digraph; and the cognitive computing server solving for Y independent sets of the digraph to obtain the Y task relationship groups corresponding to the Y independent sets.
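The graph equivalence in claim 5 can be sketched as follows. Caveat: the claim speaks of "solving Y independent sets"; this sketch reads the Y mutually independent task groups as the weakly connected components of the digraph, which is one plausible interpretation, not necessarily the patent's. All names are hypothetical.

```python
# Illustrative sketch only: X tasks become vertices of a digraph, each
# call relation becomes a directed edge, and the Y task relationship
# groups are extracted as weakly connected components with >= 2 vertices.
from collections import defaultdict

def task_groups_from_digraph(vertices, edges):
    """Return groups of tasks linked (directly or transitively) by edges."""
    adj = defaultdict(set)
    for u, v in edges:                 # treat directed edges as links
        adj[u].add(v)
        adj[v].add(u)
    seen, components = set(), []
    for start in vertices:
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            comp.append(node)
            stack.extend(adj[node] - seen)
        if len(comp) >= 2:             # a group needs at least two tasks
            components.append(sorted(comp))
    return components

verts = ["t1", "t2", "t3", "t4", "t5", "t6"]
edges = [("t1", "t2"), ("t3", "t4")]   # t5, t6 are isolated vertices
print(task_groups_from_digraph(verts, edges))
# [['t1', 't2'], ['t3', 't4']]  -> Y = 2 task relationship groups
```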
- The method according to claim 4 or 5, characterized in that the cognitive computing server sending the container allocation adjustment information to the selected slave server comprises: the cognitive computing server detecting the available container resources of the selected slave server; and if it detects that the available container resources of the selected slave server are less than the resources required by the Y task relationship groups, sending to the selected slave server container allocation indication information for Y1 task relationship groups of the Y task relationship groups, where Y1 is a positive integer less than Y.
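The resource check in claim 6 (allocate only Y1 of the Y groups when the slave server's available container resources fall short) might look roughly like the greedy sketch below. The function name, the demand units, and the greedy selection order are all assumptions for illustration; the claim does not specify how the Y1 groups are chosen.

```python
# Illustrative sketch only: pick a subset of task relationship groups
# whose total resource demand fits the slave server's available
# container resources, when not all Y groups fit.
def groups_that_fit(groups, demand, available):
    """Greedily select Y1 <= Y groups whose combined demand fits."""
    chosen, used = [], 0
    for g in groups:
        need = demand[g]
        if used + need <= available:
            chosen.append(g)
            used += need
    return chosen

groups = ["g1", "g2", "g3"]
demand = {"g1": 4, "g2": 3, "g3": 2}   # e.g. memory units per group
print(groups_that_fit(groups, demand, available=6))  # ['g1', 'g3']
```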
- A method for optimizing container allocation, characterized in that it is applied to a slave server in a Spark on Yarn distributed resource management in-memory computing framework system, the slave server being a slave server selected for container allocation and the Spark on Yarn system comprising a cognitive computing server communicatively connected to the slave server, the method comprising: the slave server obtaining container allocation adjustment information sent by the cognitive computing server, the container allocation adjustment information comprising a mapping relationship between at least one task relationship group and containers, where the at least one task relationship group is determined by performing call relationship analysis on N tasks of a target application, each independent task relationship group comprises at least two tasks, a call relationship exists between any two tasks of the at least two tasks, and the call relationship is a dependency between the two tasks on each other's execution results; and the slave server allocating containers for the N tasks according to the container allocation adjustment information.
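The slave-server side of claim 7 (apply the received mapping when allocating containers to the N tasks) can be sketched as below. The fallback of one container per ungrouped task is an assumption for illustration, consistent with the abstract's goal of reducing resource overhead for related tasks; the function and variable names are hypothetical.

```python
# Illustrative sketch only: place each task named in the adjustment
# information into its mapped container; give each remaining task its
# own container (assumed fallback, not specified by the claim).
def allocate_containers(tasks, adjustment):
    """adjustment maps container id -> list of related tasks."""
    placement = {}
    for cid, group in adjustment.items():
        for t in group:
            placement[t] = cid
    next_cid = max(adjustment, default=0) + 1
    for t in tasks:                    # ungrouped tasks: one container each
        if t not in placement:
            placement[t] = next_cid
            next_cid += 1
    return placement

tasks = ["t1", "t2", "t3"]
adjustment = {1: ["t1", "t2"]}         # related tasks share container 1
print(allocate_containers(tasks, adjustment))
# {'t1': 1, 't2': 1, 't3': 2}
```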
- The method according to claim 7, characterized in that before the slave server obtains the container allocation adjustment information sent by the cognitive computing server, the method further comprises: the slave server obtaining a data file of the target application; the slave server parsing the target application to obtain the N tasks; and the slave server sending a container allocation request to the cognitive computing server, the container allocation request requesting the cognitive computing server to perform call relationship analysis on the N tasks of the application program so as to determine the container allocation adjustment information for the N tasks.
- The method according to claim 7 or 8, characterized in that the at least one task relationship group is Y task relationship groups, Y being a positive integer less than N; and if the available container resources of the selected slave server are less than the resources required by the Y task relationship groups, the container allocation adjustment information is container allocation adjustment information for Y1 task relationship groups of the Y task relationship groups, Y1 being a positive integer less than Y.
- A cognitive computing server, characterized in that it is applied to a Spark on Yarn distributed resource management in-memory computing framework system, the Spark on Yarn system comprising a slave server selected for container allocation, the slave server being communicatively connected to the cognitive computing server, and the cognitive computing server comprising a processing unit and a communication unit, where: the communication unit is configured to obtain N tasks, N being an integer greater than 1 and the N tasks being obtained by decomposing an application program, and to send container allocation adjustment information to the selected slave server; and the processing unit is configured to perform call relationship analysis on the N tasks to determine at least one task relationship group, each task relationship group comprising at least two tasks, where a call relationship exists between any two tasks of the at least two tasks and the call relationship is a dependency between the two tasks on each other's execution results, and to generate the container allocation adjustment information according to the determined at least one task relationship group, the container allocation adjustment information comprising a mapping relationship between the at least one task relationship group and containers.
- The cognitive computing server according to claim 10, characterized in that the at least one task relationship group comprises a first task relationship group and a second task relationship group, and the processing unit is further configured to: before the container allocation adjustment information is generated according to the determined at least one task relationship group, perform call relationship integrity analysis on the first task relationship group and the second task relationship group; if a call relationship exists between a task in the first task relationship group and a task in the second task relationship group, merge the first task relationship group and the second task relationship group into an independent task relationship group; and adjust the mapping relationship between the first and second task relationship groups and containers.
- The cognitive computing server according to claim 11, characterized in that, when performing call relationship analysis on the N tasks to determine the at least one task relationship group, the processing unit is specifically configured to: determine X tasks from the N tasks, where a call relationship exists between any task in the X tasks and at least one other task, the other task being a task in the X tasks other than the any task, and X is a positive integer less than or equal to N; and determine the at least one task relationship group from the X tasks.
- The cognitive computing server according to claim 12, characterized in that, when determining the at least one task relationship group from the X tasks, the processing unit is specifically configured to: analyze the call relationships contained in the X tasks to determine Y task relationship groups contained in the X tasks, where each of the Y task relationship groups comprises at least two tasks, a call relationship exists between any two tasks of the at least two tasks, and Y is a positive integer less than X.
- The cognitive computing server according to claim 12 or 13, characterized in that, when analyzing the call relationships contained in the X tasks to determine the Y task relationship groups contained in the X tasks, the processing unit is specifically configured to: model the X tasks and the call relationships among the X tasks as a digraph in graph theory, where the X tasks correspond to the vertices of the digraph and the call relationships among the X tasks correspond to directed edges between the vertices of the digraph; and solve for Y independent sets of the digraph to obtain the Y task relationship groups corresponding to the Y independent sets.
- The cognitive computing server according to any one of claims 11 to 14, characterized in that, when sending the container allocation adjustment information to the selected slave server, the processing unit is specifically configured to: detect the available container resources of the selected slave server; and if it is detected that the available container resources of the selected slave server are less than the resources required by the Y task relationship groups, send to the selected slave server, through the communication unit, container allocation indication information for Y1 task relationship groups of the Y task relationship groups, where Y1 is a positive integer less than Y.
- A slave server, characterized in that it is applied to a Spark on Yarn distributed resource management in-memory computing framework system, the slave server being a slave server selected for container allocation, the Spark on Yarn system comprising a cognitive computing server communicatively connected to the slave server, and the slave server comprising a processing unit and a communication unit, where: the communication unit is configured to obtain container allocation adjustment information sent by the cognitive computing server, the container allocation adjustment information comprising a mapping relationship between at least one task relationship group and containers, where the at least one task relationship group is determined by performing call relationship analysis on N tasks of a target application, each independent task relationship group comprises at least two tasks, a call relationship exists between any two tasks of the at least two tasks, and the call relationship is a dependency between the two tasks on each other's execution results; and the processing unit is configured to allocate containers for the N tasks according to the obtained container allocation adjustment information.
- The slave server according to claim 16, characterized in that the processing unit is further configured to: before the container allocation adjustment information sent by the cognitive computing server is obtained through the communication unit, obtain a data file of the target application through the communication unit; parse the target application to obtain the N tasks; and send a container allocation request to the cognitive computing server through the communication unit, the container allocation request requesting the cognitive computing server to perform call relationship analysis on the N tasks of the application program so as to determine the container allocation adjustment information for the N tasks.
- The slave server according to claim 16 or 17, characterized in that the at least one task relationship group is Y task relationship groups, Y being a positive integer less than N; and if the available container resources of the selected slave server are less than the resources required by the Y task relationship groups, the container allocation adjustment information is container allocation adjustment information for Y1 task relationship groups of the Y task relationship groups, Y1 being a positive integer less than Y.
- A cognitive computing server, characterized by comprising: a processor, a memory, and a communication interface, the processor being connected to the memory and the communication interface; the memory stores executable program code and the communication interface is used for wireless communication; and the processor is configured to call the executable program code in the memory to perform the method according to any one of claims 1 to 6.
- A slave server, characterized by comprising: a processor, a memory, and a communication interface, the processor being connected to the memory and the communication interface; the memory stores executable program code and the communication interface is used for wireless communication; and the processor is configured to call the executable program code in the memory to perform the method according to any one of claims 7 to 9.
- A container processing system, characterized in that it is applied to a Spark on Yarn distributed resource management in-memory computing framework system, the container processing system comprising the cognitive computing server according to claim 19 and the slave server according to claim 20.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/098495 WO2018045541A1 (en) | 2016-09-08 | 2016-09-08 | Optimization method for container allocation and processing device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109416646A true CN109416646A (en) | 2019-03-01 |
CN109416646B CN109416646B (en) | 2022-04-05 |
Family
ID=61561644
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680086973.9A Active CN109416646B (en) | 2016-09-08 | 2016-09-08 | Optimization method for container allocation and processing equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109416646B (en) |
WO (1) | WO2018045541A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112882818A (en) * | 2021-03-30 | 2021-06-01 | 中信银行股份有限公司 | Task dynamic adjustment method, device and equipment |
WO2021249118A1 (en) * | 2020-06-12 | 2021-12-16 | 华为技术有限公司 | Method and device for generating and registering ui service package and loading ui service |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110427263B (en) * | 2018-04-28 | 2024-03-19 | 深圳先进技术研究院 | Spark big data application program performance modeling method and device for Docker container and storage device |
CN113452727B (en) * | 2020-03-24 | 2024-05-24 | 北京京东尚科信息技术有限公司 | Service processing method and device for equipment clouding and readable medium |
CN113568599B (en) * | 2020-04-29 | 2024-05-31 | 伊姆西Ip控股有限责任公司 | Method, electronic device and computer program product for processing a computing job |
CN112114944A (en) * | 2020-09-04 | 2020-12-22 | 武汉旷视金智科技有限公司 | Task scheduling method and device, task scheduling platform and computer storage medium |
CN114579183B (en) * | 2022-04-29 | 2022-10-18 | 之江实验室 | Job decomposition processing method for distributed computation |
US11907693B2 (en) | 2022-04-29 | 2024-02-20 | Zhejiang Lab | Job decomposition processing method for distributed computing |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101478499A (en) * | 2009-01-08 | 2009-07-08 | 清华大学深圳研究生院 | Flow allocation method and apparatus in MPLS network |
CN103034475A (en) * | 2011-10-08 | 2013-04-10 | 中国移动通信集团四川有限公司 | Distributed parallel computing method, device and system |
CN104657214A (en) * | 2015-03-13 | 2015-05-27 | 华存数据信息技术有限公司 | Multi-queue multi-priority big data task management system and method for achieving big data task management by utilizing system |
CN105512083A (en) * | 2015-11-30 | 2016-04-20 | 华为技术有限公司 | YARN based resource management method, device and system |
WO2016077367A1 (en) * | 2014-11-11 | 2016-05-19 | Amazon Technologies, Inc. | System for managing and scheduling containers |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8230070B2 (en) * | 2007-11-09 | 2012-07-24 | Manjrasoft Pty. Ltd. | System and method for grid and cloud computing |
CN101615159B (en) * | 2009-07-31 | 2011-03-16 | 中兴通讯股份有限公司 | Off-line test system, local data management method thereof and corresponding device |
CN105897826A (en) * | 2015-11-24 | 2016-08-24 | 乐视云计算有限公司 | Cloud platform service creating method and system |
2016
- 2016-09-08 WO PCT/CN2016/098495 patent/WO2018045541A1/en active Application Filing
- 2016-09-08 CN CN201680086973.9A patent/CN109416646B/en active Active
Non-Patent Citations (1)
Title |
---|
JIN Jintao, "Optimization Methods and Tools for Space-Resource-Constrained Project Scheduling in Shipbuilding", China Master's Theses Full-text Database, Information Science and Technology * |
Also Published As
Publication number | Publication date |
---|---|
CN109416646B (en) | 2022-04-05 |
WO2018045541A1 (en) | 2018-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109416646A (en) | A kind of optimization method and processing equipment of container allocation | |
CN108304473B (en) | Data transmission method and system between data sources | |
CN103064960B (en) | Data base query method and equipment | |
CN110677462B (en) | Access processing method, system, device and storage medium for multi-block chain network | |
CN111538605B (en) | Distributed data access layer middleware and command execution method and device | |
CN112463761B (en) | Cross-chain collaborative platform construction method and system for dynamic unbalanced application environment | |
CN111625585B (en) | Access method, device, host and storage medium of hardware acceleration database | |
CN110658794A (en) | Manufacturing execution system | |
CN101957778B (en) | Software continuous integration method, device and system | |
CN104199740B (en) | The no tight coupling multinode multicomputer system and method for shared system address space | |
CN111861481A (en) | Block chain account checking method and system | |
CN106161520A (en) | Big market demand platform and exchange method based on it | |
Anisetti et al. | Qos-aware deployment of service compositions in 5g-empowered edge-cloud continuum | |
CN103001962A (en) | Business support method and system | |
CN117667451A (en) | Remote procedure call method oriented to data object and related equipment | |
CN112799908A (en) | Intelligent terminal safety monitoring method, equipment and medium based on edge calculation | |
CN117519972A (en) | GPU resource management method and device | |
CN109783141A (en) | Isomery dispatching method | |
CN104239222A (en) | Memory access method, device and system | |
KR102221925B1 (en) | Method for performing mining in parallel with machine learning and method for supproting the mining, in a distributed computing resource shring system based on block chain | |
CN101247309B (en) | System for universal accesses to multi-cell platform | |
CN114221971B (en) | Data synchronization method, device, server, storage medium and product | |
CN104092731A (en) | Cloud computing system | |
CN109343875A (en) | Application program update processing method, device, automatic driving vehicle and server | |
CN108628893A (en) | Metadata access method and storage device in a kind of storage device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||