CN105426247A - HLA federate planning and scheduling method - Google Patents
- Publication number: CN105426247A (application CN201510753942.1A)
- Authority: CN (China)
- Prior art keywords: computing node, federate, planning, scheduling
- Prior art date: 2015-11-09
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
Abstract
The invention discloses an HLA federate planning and scheduling method. The method comprises the following steps: initially configuring the computing capability of each computing node, the number of computing nodes and the number of federates; obtaining a planning and scheduling scheme, wherein at the first simulation the RTI server distributes the federates to the computing nodes in an equal-quantity, random-type manner; and, during simulation, recording the computation and memory resources consumed by each computing node and, taking that consumption as the scheduling basis, judging whether the planning and scheduling scheme reaches the expected target or the predetermined planning count. The method adjusts the federates among the computing nodes and balances the simulation load of each node, so that a single simulation can be completed in a relatively short time and system operating efficiency is optimized; the effect is especially remarkable when the computing nodes differ in performance, the federates differ in efficiency and the simulation scale is relatively large.
Description
Technical field
The present invention relates to HLA-based distributed simulation methods, and in particular to an HLA federate planning and scheduling method.
Background art
HLA, short for High Level Architecture, is the mainstream standard for current distributed simulation. In the HLA distributed simulation framework, a simulation system that completes a particular simulation task is called a federation, and a federation is made up of federates distributed on different computers.
Under the HLA system, the underlying support system on which a federation runs is the RTI. There are generally two ways to implement an RTI: one uses CORBA programming, the other uses sockets directly. Because a CORBA-based RTI uses a local server to manage the local federates and coordinate external RTI services, managing the underlying services in a unified way, it is convenient to apply and efficient to run, and is therefore the more widely applicable approach, as in KD-XSRFrame and the like.
Although CORBA-based RTIs are widely applied, a fairly large federation under the HLA system comprises many federates. In most cases each federate needs a different amount of computation, and current technology cannot reasonably distribute the federates to the computing nodes, which finally causes unevenly distributed node loads and reduced system operating efficiency.
Therefore, reasonably distributing federates to computing nodes and thereby improving the operating efficiency of the system has long been a goal pursued by those skilled in the art.
The Chinese patent with application number 201410035347.X, entitled "A load-balancing method for computation-intensive simulation tasks", discloses the following:
The concrete steps of the balancing method are:
1) the input of the simulation task is completed through a human-machine interface, the load-balancing module is called to generate federate configuration parameters, and the federate scheduling module completes the construction of the distributed simulation system with load balancing;
2) the simulation-task description module generates a simulation-task description file from the input model quantity of the simulation instance and the parameters of each model;
3) the load-balancing control module reads the simulation-task description file, performs task distribution in combination with current computer resource utilization, generates a federate configuration file, and distributes the intensive simulation computing tasks to multiple federates for processing;
4) the federate scheduling module uses the federate configuration file as the input parameter of the created process, starts the federate executive process, and completes the loading of the configuration parameters of the federate's simulation computing task.
Simulation-task description module: receives input through the human-machine interface and forms the simulation-task description file;
Load-balancing control module: generates the federate configuration parameters according to the current computer CPU utilization and memory configuration, and calls the federate scheduling module to dynamically instantiate the intensive federate instances;
Federate scheduling module: completes the start-up and parameter loading of the federates according to the federate configuration parameters generated by the load-balancing control module.
The above prior art performs task distribution according to the number of models, the parameters of each model and the utilization of computer resources to achieve load balancing. However, when the number of federates (the "models" above) increases, the parameters of all federates become complicated; when the model parameters are difficult or impossible to obtain, the time actually consumed by simulation preparation and simulation running exceeds the theoretical simulation time, and as the number of federates grows the actually consumed time becomes longer and longer, finally dragging down the simulation process of the whole system and reducing its operating efficiency.
So when the number of federates increases, how to improve the operating efficiency of the whole system has become a goal pursued by those skilled in the art.
Summary of the invention
To solve the problems in the prior art that system operating efficiency decreases as the number of federates grows and that model running demands are difficult to predict accurately under different experimental backgrounds, the present invention provides an HLA federate planning and scheduling method that determines the final distribution result through repeated rounds of planning, makes full use of all computing nodes, enables a single simulation to be completed in the shortest time, and ultimately optimizes the computation of the whole system.
To achieve the above object, the invention discloses an HLA federate planning and scheduling method comprising the following steps:
S1: initially configure the computing capability of each computing node, the number of computing nodes and the number of federates;
S2: obtain a planning and scheduling scheme;
S20: at the first simulation, the RTI server distributes the federates to the computing nodes in an equal-quantity, random-type manner and carries out a complete simulation;
S21: record the computation and memory resources consumed by each computing node; the consumed computation and memory resources of each node and the distribution of all federates serve as the basis for the next scheduling;
S3: judge whether the planning and scheduling scheme reaches the preset target or the predetermined planning count. Specifically, judge whether the scheme of step S21 achieves the planning objective of completing a single simulation in the shortest time; the criterion is that the computation and memory resources consumed by each node in the next simulation are close or identical. If so, the current scheme is the optimal planning scheme; if not, return to step S21.
The above method obtains a planning and scheduling scheme from a single simulation and then judges its feasibility by whether the resource consumption of the nodes is identical or close. If feasible, the resulting federate distribution is a reasonable planning and scheduling scheme; if not, planning and scheduling is performed again until the preset target is reached. For system developers and operators this is very convenient: only the number of computing nodes needs to be set, the remaining work such as counting the federates is completed automatically, development and operating efficiency are significantly improved, and the method is easy to master.
Further, in step S21, planning and scheduling is carried out in the following way. Let H_ij denote the computational resources consumed by the i-th computer during its j-th run, with H = w1·x + w2·y, where H is the resource-consumption rate of a computing node, x is the CPU usage, y is the memory usage, and w1 and w2 are weight coefficients. For the j-th run (j > 1), the nodes are ranked by the magnitude of H_ij; several nodes with the highest consumption are chosen (taking three as an example, in descending order computing node C1 > computing node C2 > computing node C3), and the three nodes with the lowest consumption are chosen (in ascending order computing node D1 < computing node D2 < computing node D3). Then several simulation models randomly selected from computing node C_k are moved to computing node D_l (1 ≤ k ≤ 3, 1 ≤ l ≤ 3) in the following way:
If H_k - H_l > 0.3, 4 federates randomly selected from node k are moved to node l;
If 0.2 < H_k - H_l ≤ 0.3, 3 federates randomly selected from node k are moved to node l;
If 0.1 < H_k - H_l ≤ 0.2, 2 federates randomly selected from node k are moved to node l;
If 0.05 < H_k - H_l ≤ 0.1, 1 federate randomly selected from node k is moved to node l;
If H_k - H_l ≤ 0.05, no adjustment is made.
In the above summary, the method balances the load of the computing nodes relatively quickly by choosing the three nodes with the highest consumption and the three with the lowest. In actual use the number is of course not limited to three; it should be judged according to the numbers of federates and computing nodes, and the invention merely provides one feasible scheme for adjusting the federate distribution.
Further, in step S3, the criterion is judged by the following formula:
V = max{H_ij} - min{H_ij}, where V is the judgment value.
If V ≤ 0.05, planning and scheduling ends, and the scheduling scheme obtained at the j-th run is the optimal scheme.
Judging the feasibility of the scheme by the difference between the highest and the lowest node load consumption is the point of the invention: when the computing capabilities of the nodes are identical, once the loads they consume are identical or close, their simulation times are necessarily identical or close, each node's waiting time is necessarily short, and the overall operating efficiency naturally improves. From the viewpoint of the whole system simulation, this is the best way to make full use of the nodes' computing capability and complete the simulation task in the shortest time.
Further, in step S1, identical hardware and identical software are configured for each computing node.
Configuring identical software and hardware for each computing node ensures that their computing capabilities are identical; after the federates are evenly distributed at first, the known equal capability of the nodes helps reduce the number of planning and scheduling rounds.
The beneficial effects of the invention are as follows: after obtaining the resource-consumption situation of the computing nodes, the invention adjusts the federates among the nodes and balances their simulation loads, so that each simulation can be completed in a relatively short time and system operating efficiency is optimized; the effect is especially obvious when the nodes differ in performance, the federates differ in efficiency and the simulation scale is large.
For operators and simulation-system developers, only the numbers of computing nodes and federates need to be set; the remaining work such as counting the federates is completed automatically. The method has the advantages of simple deployment, strong adaptability, fast convergence and wide applicability.
Brief description of the drawings
Fig. 1 is a schematic diagram of the RTI server distributing federates to the computing nodes.
In the figure: 1, RTI server; 2, computing node.
Embodiment
Embodiments of the present invention are explained in detail below with reference to the accompanying drawing.
To enable the RTI server 1 to make full use of the computing power of the computing nodes 2 and reach the target of completing a single simulation in the shortest time, the invention is realized by the following technical means:
S1: initially configure the computing capability of each computing node 2, the number of computing nodes 2 and the number of federates. The RTI server 1 in the present invention is an RTI server using CORBA technology, and a computing node 2 is simply a computer.
In the present invention, the computational resources consumed by a computing node 2 are measured by CPU usage and memory usage, i.e. H = w1·x + w2·y, where H is the computational resources consumed by the node, x is the CPU usage, y is the memory usage, and w1 and w2 are the weights between CPU usage and memory usage. The sizes of w1 and w2 are set according to the configuration of the computing node 2: if the CPU configuration of the computer is higher, w2 is set larger; if the memory configuration is higher, w1 is set larger. For example, w1 = CPU frequency × number of physical CPU cores / 10G, and w2 = memory size / 1G.
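As a sketch, the consumption metric above can be computed directly. The node specifications, usage figures and function name below are hypothetical illustrations, not part of the patent:

```python
def consumption(cpu_usage: float, mem_usage: float, w1: float, w2: float) -> float:
    """H = w1*x + w2*y: weighted resource consumption of one computing node."""
    return w1 * cpu_usage + w2 * mem_usage

# Hypothetical node: 3.0 GHz CPU, 4 physical cores, 16 GB RAM.
# Example weight rule from the description: w1 = frequency * cores / 10G,
# w2 = memory size / 1G.
w1 = 3.0 * 4 / 10      # 1.2
w2 = 16.0 / 1          # 16.0
h = consumption(cpu_usage=0.50, mem_usage=0.30, w1=w1, w2=w2)
print(round(h, 6))     # approximately 5.4
```

With these example weights, memory usage dominates H, which matches the description's rule of weighting memory more heavily on nodes whose CPU is comparatively strong.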
S2: obtain a planning and scheduling scheme.
S20: at the first simulation, assuming that the computing resources needed by each federate are identical, the RTI server 1 distributes the federates to the computing nodes 2 in an equal-quantity, random-type manner and carries out one complete simulation.
S21: record the computation and memory resources consumed by each computing node 2, and compile and analyze the obtained data, namely the consumed computation and memory resources of each node 2 and the distribution of all federates, as the basis for the next scheduling.
There is one special case: if the computation demanded by every federate is identical, the first simulation already reaches the preset target; however, the probability of this case is very small.
The statistics and analysis may be carried out in the following way. Let H_ij denote the computational resources consumed by the i-th computer during its j-th run; then
H_ij = w1·x_ij + w2·y_ij.
Let H_i0 = 1, where i = 1, 2, 3, …, m.
For the j-th run (j > 1), according to the magnitude of the H_ij values, several nodes 2 with the highest consumption are chosen; in this embodiment three are chosen, in descending order computing node C1 > computing node C2 > computing node C3. The nodes 2 with the lowest consumption are then chosen; in this embodiment their number may be one to three, and when three are chosen their ascending order is computing node D1 < computing node D2 < computing node D3. Three highest-consumption nodes 2 are selected in this embodiment, but in practical operation the number can be changed as needed. Then several simulation models randomly selected from computing node C_k are moved to computing node D_l (1 ≤ k ≤ 3, 1 ≤ l ≤ 3) in the following way:
If H_k - H_l > 0.3, 4 federates randomly selected from computing node k are moved to computing node l;
If 0.2 < H_k - H_l ≤ 0.3, 3 federates randomly selected from computing node k are moved to computing node l;
If 0.1 < H_k - H_l ≤ 0.2, 2 federates randomly selected from computing node k are moved to computing node l;
If 0.05 < H_k - H_l ≤ 0.1, 1 federate randomly selected from computing node k is moved to computing node l;
If H_k - H_l ≤ 0.05, no adjustment is made.
The federate distribution adjusted in the above manner serves as the basis on which the RTI server 1 distributes federates to the computing nodes 2 in the next simulation.
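The adjustment pass of step S21 can be sketched as follows. This is a non-authoritative illustration: the data structures `load` (node to H value) and `assignment` (node to its federates), and the assumption of at least `2 * pairs` nodes, are mine, not the patent's:

```python
import random

def federates_to_move(h_src: float, h_dst: float) -> int:
    """Threshold table from the description: number of federates to move
    from a heavily loaded node to a lightly loaded one."""
    d = h_src - h_dst
    if d > 0.3:
        return 4
    if d > 0.2:
        return 3
    if d > 0.1:
        return 2
    if d > 0.05:
        return 1
    return 0                      # difference <= 0.05: no adjustment

def rebalance(load, assignment, pairs=3):
    """One adjustment pass: pair the `pairs` highest-consumption nodes
    (C1 > C2 > C3) with the `pairs` lowest (D1 < D2 < D3) and migrate
    randomly chosen federates within each pair."""
    ranked = sorted(load, key=load.get)              # ascending by H
    light = ranked[:pairs]                           # D1 < D2 < D3
    heavy = list(reversed(ranked[-pairs:]))          # C1 > C2 > C3
    for src, dst in zip(heavy, light):
        n = min(federates_to_move(load[src], load[dst]), len(assignment[src]))
        for fed in random.sample(assignment[src], n):
            assignment[src].remove(fed)
            assignment[dst].append(fed)
    return assignment
```

Pairing the most-loaded node with the least-loaded one (C1 with D1, and so on) moves the largest batches where the imbalance is greatest, which is why the description expects the loads to converge quickly.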
S3: judge whether the planning and scheduling scheme reaches the preset target.
According to the adjustment scheme obtained in step S2, task distribution is performed again according to the computing resources each federate actually needs, and one complete simulation is carried out. The preset target of completing a single simulation in the shortest time is reached when the resources consumed by the nodes during the simulation are identical or close; the criterion is judged by the following formula:
V = max{H_ij} - min{H_ij}, where V is the judgment value.
If V ≤ 0.05, planning and scheduling ends, and the scheduling scheme obtained at the j-th run is the optimal planning scheme.
If the target is not reached, return to step S21 until the planning objective or the predetermined planning count is reached. The federates are then rearranged according to the optimal deployment scheme obtained from the planning, and the simulation is run.
System developers can also set the planning count and planning time. The more planning rounds, the better the optimization; the longer each planning round, the better its result. The predetermined planning count can be set reasonably by those skilled in the art according to actual simulation needs.
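The overall S2/S3 iteration can be sketched as a loop over complete simulations. This is a minimal illustration; both callbacks are assumptions (`run_simulation` is assumed to return a mapping from node to its consumption H, and `rebalance` is the adjustment pass of step S21), not an API defined by the patent:

```python
def plan(run_simulation, rebalance, assignment, tol=0.05, max_rounds=10):
    """Iterative planning and scheduling: after each complete simulation,
    compute the judgment value V = max{H_ij} - min{H_ij}; stop once
    V <= tol or the predetermined planning count is reached, otherwise
    adjust the federate distribution and simulate again."""
    v = float("inf")
    for _ in range(max_rounds):
        load = run_simulation(assignment)              # {node: consumption H}
        v = max(load.values()) - min(load.values())    # judgment value V
        if v <= tol:                                   # planning target reached
            break
        assignment = rebalance(load, assignment)       # adjust federates
    return assignment, v
```

For example, with a stubbed `run_simulation` whose second round reports loads of 0.66 and 0.63, the loop stops after two rounds because V = 0.03 ≤ 0.05.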
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or simple improvement made within the substance of the present invention shall fall within its scope of protection.
Claims (4)
1. An HLA federate planning and scheduling method, characterized in that the method comprises the following steps:
S1: initially configuring the computing capability of each computing node (2), the number of computing nodes and the number of federates;
S2: obtaining a planning and scheduling scheme;
S20: at the first simulation, the RTI server (1) distributing the federates to the computing nodes in an equal-quantity, random-type manner and carrying out a complete simulation;
S21: recording the computation and memory resources consumed by each computing node, the consumed computation and memory resources of each node and the distribution of all federates serving as the basis for the next scheduling;
S3: judging whether the planning and scheduling scheme reaches the preset target or the predetermined planning count; the specific process is judging whether the scheme of step S21 achieves the planning objective of completing a single simulation in the shortest time, the criterion being that the resources consumed by the nodes in the next simulation are close or identical; if so, the current scheme is the optimal planning scheme; if not, returning to step S21.
2. The HLA federate planning and scheduling method according to claim 1, characterized in that in step S21 planning and scheduling is carried out in the following way: let H_ij denote the computational resources consumed by the i-th computer during its j-th run, with H = w1·x + w2·y, where H is the resource-consumption rate of a computing node, x is the CPU usage, y is the memory usage, and w1 and w2 are weight coefficients; for the j-th run (j > 1), according to the magnitude of the H_ij values, the three computing nodes with the highest consumption are chosen, in descending order computing node C1 > computing node C2 > computing node C3, and the three computing nodes with the lowest consumption are chosen, in ascending order computing node D1 < computing node D2 < computing node D3; then several simulation models randomly selected from computing node C_k are moved to computing node D_l (1 ≤ k ≤ 3, 1 ≤ l ≤ 3) in the following way:
if H_k - H_l > 0.3, 4 federates randomly selected from computing node k are moved to computing node l;
if 0.2 < H_k - H_l ≤ 0.3, 3 federates randomly selected from computing node k are moved to computing node l;
if 0.1 < H_k - H_l ≤ 0.2, 2 federates randomly selected from computing node k are moved to computing node l;
if 0.05 < H_k - H_l ≤ 0.1, 1 federate randomly selected from computing node k is moved to computing node l;
if H_k - H_l ≤ 0.05, no adjustment is made.
3. The HLA federate planning and scheduling method according to claim 2, characterized in that in step S3 the criterion is judged by the following formula:
V = max{H_ij} - min{H_ij}, where V is the judgment value;
if V ≤ 0.05, planning and scheduling ends, and the scheduling scheme obtained at the j-th run is the optimal scheme.
4. The HLA federate planning and scheduling method according to claim 1, characterized in that in step S1 identical hardware and identical software are configured for each computing node.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510753942.1A CN105426247B (en) | 2015-11-09 | 2015-11-09 | HLA federate planning and scheduling method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105426247A true CN105426247A (en) | 2016-03-23 |
CN105426247B CN105426247B (en) | 2018-11-06 |
Family
ID=55504471
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510753942.1A Expired - Fee Related CN105426247B (en) | 2015-11-09 | 2015-11-09 | HLA federate planning and scheduling method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105426247B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106020977A (en) * | 2016-05-16 | 2016-10-12 | 深圳市中业智能系统控制有限公司 | Distributed task scheduling method and apparatus for a monitoring system |
CN108762892A (en) * | 2018-06-07 | 2018-11-06 | 北京仿真中心 | Resource allocation method for a cloud-simulation collaborative simulation mode |
CN109800054A (en) * | 2018-12-24 | 2019-05-24 | 四川知周科技有限责任公司 | Distributed parallel real-time simulation scheduling implementation method |
CN110990329A (en) * | 2019-12-09 | 2020-04-10 | 杭州趣链科技有限公司 | Method, device and medium for high availability of federated computing |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101442555A (en) * | 2008-12-19 | 2009-05-27 | 中国运载火箭技术研究院 | Artificial resource proxy service system facing HLA |
CN102299820A (en) * | 2011-08-26 | 2011-12-28 | 于辉 | Federate node device and implementation method of high level architecture (HLA) system framework |
US8601477B2 (en) * | 2008-06-24 | 2013-12-03 | International Business Machines Corporation | Reducing instability within a heterogeneous stream processing application |
CN103442038A (en) * | 2013-08-12 | 2013-12-11 | 北京理工大学 | Master-slave distributed cooperation type HLA simulation management and control system |
CN103793281A (en) * | 2014-01-24 | 2014-05-14 | 北京仿真中心 | Load balancing method of compute-intensive simulation task |
CN104750593A (en) * | 2013-12-31 | 2015-07-01 | 中国人民解放军军械工程学院 | Method for monitoring loads on HLA (high level architecture) simulation nodes on basis of time advance performance |
- 2015-11-09: application CN201510753942.1A filed; granted as CN105426247B, now expired (fee-related)
Non-Patent Citations (2)
Title |
---|
王琼, 艾丽蓉, 龚爱珍: "Research on Load Balancing in HLA-Based Distributed Simulation", Computer Technology and Development * |
翁超: "Research on Load Balancing of HLA Distributed Simulation for LP Networks", China Master's Theses Full-text Database, Information Science and Technology * |
Also Published As
Publication number | Publication date |
---|---|
CN105426247B (en) | 2018-11-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| TR01 | Transfer of patent right | Effective date of registration: 2019-01-24; Patentee after: Chinese People's Liberation Army Unit 91776, Xinghai Building, No. 9 Xili, Lianhuachi, Fengtai District, Beijing 100161; Patentee before: Zhang Bo (same address) |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2018-11-06; Termination date: 2019-11-09 |