CN107633125A - Simulation system parallelism identification method based on a weighted directed graph - Google Patents
Simulation system parallelism identification method based on a weighted directed graph
- Publication number
- CN107633125A CN107633125A CN201710826327.8A CN201710826327A CN107633125A CN 107633125 A CN107633125 A CN 107633125A CN 201710826327 A CN201710826327 A CN 201710826327A CN 107633125 A CN107633125 A CN 107633125A
- Authority
- CN
- China
- Prior art keywords
- component model
- weight
- simulation component
- node
- directed graph
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The present invention discloses a simulation system parallelism identification method based on a weighted directed graph, comprising: S1, constructing, from the connection relations between the simulation component models in the simulation system and the temporal order in which state transfers are produced, a directed graph whose nodes are the simulation component models and whose directed edges are the connections between them; S2, calculating the computation overhead of each node and the communication overhead of each directed edge in the directed graph, and constructing a weighted directed graph in which a node's weight is its computation overhead and a directed edge's weight is its communication overhead; S3, identifying all parallel branches in the weighted directed graph, pruning the graph according to the edge weights, applying a fusion operation to all the weights contained in each parallel branch of the pruned graph and taking the result as the branch's weight, grouping the parallel branches by weight, and allocating computing resources to each group. The present invention balances the computation overhead across groups.
Description
Technical field
The present invention relates to the field of simulation technology, and more particularly to a simulation system parallelism identification method based on a weighted directed graph.
Background art
The purpose of a simulation system is, given a system description (model), initial conditions and an input time sequence, to generate the corresponding state transfers and output trajectories, so as to simulate and observe a real-world system (which may be a natural, social or engineering system). The description of a simulation system is typically divided into several parts according to the composition of the system, each part described with a different method; that is, a simulation system generally comprises multiple heterogeneous sub-models. Current practice is to encapsulate the heterogeneous sub-models as standardized simulation component models (comprising indivisible atomic component models and composite component models, where a composite component model is formed by connecting finer-grained atomic and/or composite component models through the models' interfaces), and to realize the rapid construction of complex simulation systems through the composition of simulation component models.
The operation of a simulation system is constrained by time: driven by its inputs, the state of each simulation component model advances over time. Parallel operation of a simulation system computes the state changes of each simulation component model separately, and then ensures that the state transfers between the component models respect objective causality through information exchanges ordered in time. Traditional parallel operation of simulation systems, however, is coarse-grained: coarse-grained composite component models are assigned to different computing nodes for parallel execution. For the latest multi-core or many-core computing resources, such parallelization is far from exploiting the full capability of the hardware; finer-grained simulation component models need to be assigned to CPUs, or even to individual cores, for parallel execution.
Current parallelization techniques cannot provide decision support for resource allocation in fine-grained parallel operation. Within a time period, the computation overhead of a simulation component model is determined both by the computational complexity of its state transfers and by the temporal relations between simulation component models (the internal or external information that triggers a model's state transfer drives the computation of that transfer).
Accordingly, there is a need for a simulation system parallelism identification method based on a weighted directed graph that achieves a reasonable grouping of fine-grained simulation component models such that the computation overhead of each group is balanced within a time period.
Summary of the invention
An object of the present invention is to provide a simulation system parallelism identification method based on a weighted directed graph.
To achieve the above object, the present invention adopts the following technical solution:
A simulation system parallelism identification method based on a weighted directed graph, comprising:
S1, obtaining the connection relations between the simulation component models from their information exchange relations, and constructing, from these connection relations and the temporal order in which state transfers are produced, a directed graph whose nodes are the simulation component models in the simulation system and whose directed edges are the connections between them;
S2, calculating the computation overhead of each node and the communication overhead of each directed edge in the directed graph, and constructing a weighted directed graph in which a node's weight is its computation overhead and a directed edge's weight is its communication overhead;
S3, identifying all parallel branches in the weighted directed graph, pruning the graph according to the edge weights, applying a fusion operation to all the weights contained in each parallel branch of the pruned graph and taking the result as the branch's weight, grouping the parallel branches by weight, and allocating computing resources to each parallel branch group.
Preferably, step S1 further comprises:
S1.1, obtaining the connection relations between the simulation component models from their information exchange relations, and taking the simulation component models as nodes;
S1.2, taking the simulation component model that first produces a state transfer as the start point;
S1.3, taking each simulation component model connected to the start point's model as an end point, and generating a directed edge from the start point to that end point;
S1.4, taking the simulation component models that served as end points as new start points;
S1.5, judging whether every simulation component model has served as a start point: if not, returning to step S1.3; if so, the directed graph of nodes and directed edges is complete.
Preferably, step S2 further comprises:
S2.1, for each node, taking the weighted average of the state-transfer computation overhead driven by external information (arriving over the connections represented by the directed edges ending at the node) and the state-transfer computation overhead driven by the simulation component model's internal information, to obtain the computation overhead of the node represented by the model; the node weights are obtained by normalizing these computation overheads;
S2.2, collecting statistics on the information exchanges produced over each connection, including the exchange frequency and the amount of data per exchange, to obtain the communication overhead of the directed edge representing that connection; the edge weights are obtained by normalizing these communication overheads;
S2.3, constructing the weighted directed graph comprising the node weights and the directed edge weights.
Preferably, step S3 further comprises:
S3.1, identifying all parallel branches in the weighted directed graph from the temporal order in which the simulation component models (the nodes) produce state transfers and from the connections (the directed edges);
S3.2, pruning the weighted directed graph according to the edge weights, applying a fusion operation to all the weights contained in each parallel branch of the pruned graph and taking the result as the branch's weight, then placing branches with large weights in groups of their own and merging branches with small weights into groups, so that the weights of the parallel branch groups are approximately equal;
S3.3, allocating computing resources to each parallel branch group.
The beneficial effects of the present invention are as follows:
Building on traditional coarse-grained parallelism identification based on composite component models, the technical solution of the present invention constructs a weighted directed graph that takes into account, within a time period, both the computation overheads of the different simulation component models and the communication overheads between them. It thereby achieves a reasonable grouping of fine-grained simulation component models in which the computation overhead of each group is balanced within a time period and the communication overhead between groups is small (simulation component models that exchange information frequently or in large volumes are placed in the same group wherever possible). Under the latest multi-core or many-core computing conditions, this provides decision support for assigning fine-grained simulation component models to CPUs, or even to individual cores, for parallel execution.
Brief description of the drawings
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of a simulation system.
Fig. 2 shows a flow chart of the simulation system parallelism identification method based on a weighted directed graph.
Fig. 3 shows a schematic diagram of the weighted directed graph.
Fig. 4 shows a schematic diagram of the weighted directed graph after pruning.
Detailed description of the embodiments
To illustrate the present invention more clearly, it is further described below with reference to preferred embodiments and the accompanying drawings. Like parts are denoted by like reference numerals throughout the drawings. Those skilled in the art will appreciate that the content described below is illustrative rather than restrictive, and should not be taken to limit the scope of the invention.
As shown in Fig. 2, the simulation system parallelism identification method based on a weighted directed graph provided by the present embodiment comprises:
S1, obtaining the connection relations between the simulation component models from their information exchange relations, and constructing, from these connection relations and the temporal order in which state transfers are produced, a directed graph whose nodes are the simulation component models in the simulation system and whose directed edges are the connections between them; the directed graph generally contains closed loops, because the output signal of a simulation component model, after being transmitted and transformed by other simulation component models, is quite likely to be passed back eventually to an input interface of that same model;
S2, calculating the computation overhead of each node and the communication overhead of each directed edge in the directed graph, and constructing a weighted directed graph in which a node's weight is its computation overhead and a directed edge's weight is its communication overhead;
S3, identifying all parallel branches in the weighted directed graph, pruning the graph according to the edge weights, applying a fusion operation to all the weights contained in each parallel branch of the pruned graph and taking the result as the branch's weight, grouping the parallel branches by weight, and allocating computing resources to each parallel branch group.
In a specific implementation, step S1 further comprises:
S1.1, obtaining the connection relations between the simulation component models from their information exchange relations, and taking the simulation component models as nodes;
S1.2, taking the simulation component model that first produces a state transfer as the start point;
S1.3, taking each simulation component model connected to the start point's model as an end point, and generating a directed edge from the start point to that end point;
S1.4, taking the simulation component models that served as end points as new start points;
S1.5, judging whether every simulation component model has served as a start point: if not, returning to step S1.3; if so, the directed graph of nodes and directed edges is complete.
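The traversal in steps S1.1 to S1.5 can be sketched as follows. This is not part of the patent; it is a minimal Python sketch in which the `connections` mapping and the component names are illustrative assumptions:

```python
from collections import deque

def build_digraph(components, connections, first_active):
    """Build a directed graph with simulation component models as nodes.

    components  : iterable of component-model names (the nodes, step S1.1)
    connections : dict mapping each component to the components its output
                  interfaces connect to
    first_active: the component that first produces a state transfer
                  (the initial start point, step S1.2)
    """
    nodes = set(components)
    edges = set()
    visited = set()
    queue = deque([first_active])
    # Steps S1.3-S1.5: take a start point, create a directed edge to every
    # connected component, then reuse those end points as new start points
    # until every component has served as a start point.
    while queue:
        start = queue.popleft()
        if start in visited:
            continue
        visited.add(start)
        for end in connections.get(start, ()):
            edges.add((start, end))
            queue.append(end)
    return nodes, edges
```

Note that a cyclic `connections` mapping naturally yields the closed loops the embodiment mentions, since a model's output can be routed back to its own input.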
In a specific implementation, step S2 further comprises:
S2.1, for each node, taking the weighted average of the state-transfer computation overhead driven by external information (arriving over the connections represented by the directed edges ending at the node) and the state-transfer computation overhead driven by the simulation component model's internal information, to obtain the computation overhead of the node; the node weights are obtained by normalizing these computation overheads;
S2.2, collecting statistics on the information exchanges produced over each connection to obtain the communication overhead of the directed edge representing that connection, including, for each directed edge, the exchange frequency between the connected nodes and the data volume of a single exchange; the exchange-frequency overhead and the single-exchange data-volume overhead of each directed edge are then normalized separately to obtain the edge weights;
S2.3, constructing the weighted directed graph comprising the node weights and the directed edge weights.
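A minimal sketch of the weighting in steps S2.1 and S2.2. The patent specifies a weighted average and normalization but not the exact formulas, so sum-normalization and the averaging coefficient `alpha` are illustrative assumptions:

```python
def node_weights(ext_cost, int_cost, alpha=0.5):
    """Step S2.1: weighted average of externally- and internally-driven
    state-transfer costs per node, normalized to give the node weights.
    `alpha` is an assumed averaging coefficient."""
    raw = {n: alpha * ext_cost[n] + (1 - alpha) * int_cost[n] for n in ext_cost}
    total = sum(raw.values())
    return {n: c / total for n, c in raw.items()}

def edge_weights(freq, volume):
    """Step S2.2: normalize exchange frequency and single-exchange data
    volume separately; each directed edge (i, j) gets a pair
    (w_ij, q_ij), matching the notation of the worked example."""
    f_total = sum(freq.values())
    v_total = sum(volume.values())
    return {e: (freq[e] / f_total, volume[e] / v_total) for e in freq}
```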
In a specific implementation, step S3 further comprises:
S3.1, identifying all parallel branches in the weighted directed graph from the temporal order in which the simulation component models (the nodes) produce state transfers and from the connections (the directed edges);
S3.2, since state transfers between simulation component models are produced according to causality and temporal order, suitable grouping and merging can resolve the mutual waiting between groups (and the resulting waste of idle resources). Consider the weighted directed graph shown in Fig. 3, where the computation overhead weight of each node is c_i with node index i ∈ {1, ..., 11}, the exchange-frequency overhead is w_ij (the subscripts indicating an exchange from node i to node j), and the single-exchange data-volume overhead is q_ij. Taking both the exchange frequency and the data-volume overhead of every directed edge into account, if w_75·q_75 is much smaller than the corresponding weighted value of the remaining edges, the edge c7-c5 is pruned, yielding the weighted directed graph shown in Fig. 4.
For the pruned weighted directed graph, the weight of every through-path is calculated to obtain its equivalent overhead value. The nodes of parallel branches with large overhead values are placed in groups of their own, and branches with small weights are merged into one group, so that the weights of the parallel branch groups are balanced. For the branches c7-c8-c6, c4-c5-c6 and c7-c8-c9 in Fig. 4: if the combined overhead of c7-c8-c6 is large, then c4-c5 forms one group and c7-c8-c6 another; if the overhead of c8-c9 is then also large, it forms a group of its own, and otherwise it is merged into the c7-c8-c6 group. If the overhead of c7-c8-c6 is small while those of c4-c5-c6 and c7-c8-c9 are large, it may be merged into either of the two groups at random. If the overhead of c7-c8-c6 is small and, of c4-c5-c6 and c7-c8-c9, one is large and the other small, then c7-c8-c6 is merged into the group with the smaller overhead. If the overheads of all three are small, they may be placed in the same group. These overhead evaluations must take the available computing resources into account.
S3.3, allocating computing resources to each parallel branch group: simulation component models with large communication overhead (i.e. frequent state transfers and large exchanged data volumes) and large computation overhead may be allocated a dedicated resource.
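The pruning and grouping of step S3.2 can be sketched as follows. The pruning threshold and the large/small-weight cutoff are illustrative assumptions: the patent only requires that edges with much smaller w_ij·q_ij be pruned and that the group weights end up balanced:

```python
def prune_edges(edges, threshold):
    """Drop directed edges whose combined communication weight
    w_ij * q_ij falls below `threshold` (the c7-c5 edge in the
    worked example). The threshold choice is an assumption."""
    return {e: (w, q) for e, (w, q) in edges.items() if w * q >= threshold}

def group_branches(branch_weights, big_cutoff):
    """Branches with fused weight >= big_cutoff get their own group;
    smaller branches are merged into the currently lightest group so
    that the group weights balance."""
    groups, small = [], []
    for branch, w in sorted(branch_weights.items(), key=lambda kv: -kv[1]):
        if w >= big_cutoff:
            groups.append(([branch], w))
        else:
            small.append((branch, w))
    for branch, w in small:
        if not groups:
            groups.append(([branch], w))
            continue
        idx = min(range(len(groups)), key=lambda i: groups[i][1])
        members, gw = groups[idx]
        groups[idx] = (members + [branch], gw + w)
    return groups
```

Merging each small branch into the lightest group is a simple greedy balancing choice; any scheme that keeps the group weights approximately equal would satisfy the step.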
The simulation system parallelism identification method based on a weighted directed graph provided by this embodiment addresses the fact that existing coarse-grained parallel operation of simulation systems does not take into account, within a time period, the computation overheads of the different simulation component models or the communication overheads between them. It proposes an identification method by which fine-grained simulation component models can be executed in parallel, achieving a grouping of fine-grained simulation component models in which the computation overhead of each group is balanced within a time period and the communication overhead between groups is small (simulation component models that exchange information frequently or in large volumes are placed in the same group wherever possible), thereby providing decision support, under the latest multi-core or many-core computing conditions, for assigning fine-grained simulation component models to CPUs, or even to individual cores, for parallel execution.
Taking a multi-disciplinary virtual prototype simulation system built on high-level modelling theory as an example, the simulation system parallelism identification method based on a weighted directed graph provided by this embodiment is further described below.
First step: obtain the connection relations between the simulation component models from their information exchange relations, and construct, from these connection relations and the temporal order in which state transfers are produced, a directed graph whose nodes are the simulation component models in the simulation system and whose directed edges are the connections between them.
Atomic component model: an atomic component model consists of a parameter interface, an initialization interface, data input/output interfaces, event input/output interfaces, states and their transfers, and a time base:
EM=<Para,Init,iPd,iPe,oPd,oPe,STATE,StatTF,T>
Wherein,
Para:Parameter interface;
Init:Initialization interface;
iPd:Data Input Interface;
iPe:Event input interface;
oPd:Data output interface;
oPe:Event output interface;
STATE:The states presented on the interfaces;
StatTF:State transfer;
T:Time base.
Composite component model: a composite component model consists of a parameter interface, an initialization interface, data input/output interfaces, event input/output interfaces, internal connections, exchange situations, an event flow, and a time base:
CM=<Para,Init,iPd,iPe,oPd,oPe,ID,{Mx|x∈ID∪{CM}},CPLs,SITUA,EvntFL,T>
Wherein,
Para:Parameter interface;
Init:Initialization interface;
iPd:Data Input Interface;
iPe:Event input interface;
oPd:Data output interface;
oPe:Event output interface;
ID:The set of model indices;
Mx:A sub-component model contained in CM, or CM itself;
CPLs:The set of connections inside the composite model;
SITUA:The set of exchange situations;
EvntFL:The event flow inside the composite model;
T:Time base.
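The two tuples above can be sketched as data structures. This is a hypothetical rendering for illustration only: the field names come from the text, while the Python types are assumptions, not part of the patent:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Tuple

@dataclass
class AtomicComponent:
    """EM = <Para, Init, iPd, iPe, oPd, oPe, STATE, StatTF, T>."""
    Para: Dict[str, Any]    # parameter interface
    Init: Dict[str, Any]    # initialization interface
    iPd: List[str]          # data input interfaces
    iPe: List[str]          # event input interfaces
    oPd: List[str]          # data output interfaces
    oPe: List[str]          # event output interfaces
    STATE: Dict[str, Any]   # states presented on the interfaces
    StatTF: Callable        # state-transfer function
    T: float                # time base

@dataclass
class CompositeComponent:
    """CM = <Para, Init, iPd, iPe, oPd, oPe, ID, {Mx}, CPLs, SITUA, EvntFL, T>."""
    Para: Dict[str, Any]
    Init: Dict[str, Any]
    iPd: List[str]
    iPe: List[str]
    oPd: List[str]
    oPe: List[str]
    ID: List[str]                 # set of model indices
    Mx: Dict[str, Any]            # sub-component models keyed by index
    CPLs: List[Tuple[str, str]]   # connections inside the composite model
    SITUA: List[Any]              # exchange situations
    EvntFL: List[Any]             # internal event flow
    T: float
```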
Taking the simulation component model (some EM) that first produces a state transfer as the source, generate the initial start point of the directed graph. Along the connection relations (CPLs) of that simulation component model, map each connection at an output interface (oPd or oPe) to a directed edge starting from that start point; the simulation component model (the corresponding EM) that the connection points to serves as the end point of the generated directed edge. The end-point simulation component models then serve as new start points, and this process is repeated until all simulation component models have been traversed, completing the construction of the directed graph, which generally contains closed loops (because the output signal of a simulation component model, after being transmitted and transformed by other simulation component models, may eventually be passed back to an input interface of that same model).
Second step: calculate the computation overhead of each node and the communication overhead of each directed edge in the directed graph, and construct a weighted directed graph in which a node's weight is its computation overhead and a directed edge's weight is its communication overhead.
The computation overhead of the simulation component model represented by each node consists mainly of the state-transfer computation overhead driven by external information arriving over the connections represented by the directed edges ending at the node, and the state-transfer computation overhead driven by the model's internal information (i.e. the computation overhead of StatTF: when iPe is non-empty the state transfer is driven by external information; when iPe is empty it is driven by internal information). The weighted average of these two classes of overhead, normalized over all nodes, gives the node's weight. Likewise, the communication overhead of the connection represented by each directed edge is obtained by collecting statistics on the information exchanges over that connection (the oPd and oPe produced during simulation) and, after normalization over all directed edges, gives the edge's weight.
Third step: identify all parallel branches in the weighted directed graph, prune the graph according to the edge weights computed from the communication overheads, apply a fusion operation to all the weights contained in each parallel branch of the pruned graph and take the result as the branch's weight, group the parallel branches by weight, and allocate computing resources to each parallel branch group.
Traverse the weighted directed graph and identify all parallel branches; these form the maximal parallelizable grouping scheme. Since state transfers between simulation component models are produced according to causality and temporal order, suitable grouping and merging can resolve the mutual waiting between groups (and the resulting waste of idle resources). The weighted directed graph is pruned according to the directed-edge weights, and a fusion operation is applied to all the weights contained in each parallel branch of the pruned graph; branches with larger results are placed in groups of their own, while branches with smaller results are merged (ensuring that the totals of the merged groups remain roughly comparable to those of the individual groups). Finally, computing resources are allocated to each parallel branch group: simulation component models with large communication overhead (i.e. frequent state transfers and large exchanged data volumes) and large computation overhead may be allocated a dedicated resource.
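As a worked illustration of the branch weights discussed above, assuming the fusion operation is a sum of node weights along a through-path (the text does not reproduce the exact formula, so summation is purely an illustrative assumption):

```python
def path_weight(path, node_weight):
    """Equivalent overhead of one through-path (parallel branch) in the
    pruned graph, e.g. the branch c7-c8-c6 of Fig. 4. The patent applies
    an unspecified 'fusion operation' to the weights along a branch;
    summing the node weights is an assumption made for illustration."""
    return sum(node_weight[n] for n in path)
```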
Obviously, the embodiments described above are merely examples given for the sake of clarity, and are not a limitation on the modes of implementing the present invention. Those of ordinary skill in the art may make other changes in different forms on the basis of the above description; the embodiments cannot be exhaustively enumerated here, and all obvious changes or variations derived from the technical solution of the present invention remain within its scope of protection.
Claims (4)
1. A simulation system parallelism identification method based on a weighted directed graph, characterised by comprising: S1, obtaining the connection relations between the simulation component models from their information exchange relations, and constructing, from these connection relations and the temporal order in which state transfers are produced, a directed graph whose nodes are the simulation component models in the simulation system and whose directed edges are the connections between them; S2, calculating the computation overhead of each node and the communication overhead of each directed edge in the directed graph, and constructing a weighted directed graph in which a node's weight is its computation overhead and a directed edge's weight is its communication overhead; S3, identifying all parallel branches in the weighted directed graph, pruning the graph according to the edge weights, applying a fusion operation to all the weights contained in each parallel branch of the pruned graph and taking the result as the branch's weight, grouping the parallel branches by weight, and allocating computing resources to each parallel branch group.
2. The simulation system parallelism identification method based on a weighted directed graph according to claim 1, characterised in that step S1 further comprises: S1.1, obtaining the connection relations between the simulation component models from their information exchange relations, and taking the simulation component models as nodes; S1.2, taking the simulation component model that first produces a state transfer as the start point; S1.3, taking each simulation component model connected to the start point's model as an end point, and generating a directed edge from the start point to that end point; S1.4, taking the simulation component models that served as end points as new start points; S1.5, judging whether every simulation component model has served as a start point: if not, returning to step S1.3; if so, the directed graph of nodes and directed edges is complete.
3. The simulation system parallelism identification method based on a weighted directed graph according to claim 2, characterised in that step S2 further comprises: S2.1, for each node, taking the weighted average of the state-transfer computation overhead driven by external information arriving over the connections represented by the directed edges ending at the node and the state-transfer computation overhead driven by the simulation component model's internal information, to obtain the computation overhead of the node; the node weights are obtained by normalizing these computation overheads; S2.2, collecting statistics on the information exchanges produced over each connection, including the exchange frequency and the amount of data per exchange, to obtain the communication overhead of the directed edge representing that connection; the edge weights are obtained by normalizing these communication overheads; S2.3, constructing the weighted directed graph comprising the node weights and the directed edge weights.
4. The simulation system parallelism identification method based on a weighted directed graph according to claim 1, characterised in that step S3 further comprises: S3.1, identifying all parallel branches in the weighted directed graph from the temporal order in which the simulation component models (the nodes) produce state transfers and from the connections (the directed edges); S3.2, pruning the weighted directed graph according to the edge weights, applying a fusion operation to all the weights contained in each parallel branch of the pruned graph and taking the result as the branch's weight, then placing branches with large weights in groups of their own and merging branches with small weights into groups, so that the weights of the parallel branch groups are approximately equal; S3.3, allocating computing resources to each parallel branch group.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710826327.8A CN107633125B (en) | 2017-09-14 | 2017-09-14 | Simulation system parallelism identification method based on weighted directed graph |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107633125A true CN107633125A (en) | 2018-01-26 |
CN107633125B CN107633125B (en) | 2021-08-31 |
Family
ID=61101923
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710826327.8A Active CN107633125B (en) | 2017-09-14 | 2017-09-14 | Simulation system parallelism identification method based on weighted directed graph |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107633125B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108563984A (en) * | 2018-03-02 | 2018-09-21 | 山东科技大学 | A kind of automatic identification and understanding method of procedural model figure |
CN110366205A (en) * | 2019-08-05 | 2019-10-22 | 北京邮电大学 | The selection method and device of initial source node in moving machine meeting network flow unloading |
CN110610227A (en) * | 2018-06-15 | 2019-12-24 | 北京深鉴智能科技有限公司 | Artificial neural network adjusting method and neural network computing platform |
CN110865875A (en) * | 2018-08-27 | 2020-03-06 | 阿里巴巴集团控股有限公司 | DAG task relationship graph processing method and device and electronic equipment |
CN111046575A (en) * | 2019-12-23 | 2020-04-21 | 中国航空工业集团公司沈阳飞机设计研究所 | Method and system for ensuring simulation consistency |
CN111245662A (en) * | 2020-03-09 | 2020-06-05 | 杭州迪普科技股份有限公司 | Method and device for displaying network topology |
WO2020232951A1 (en) * | 2019-05-17 | 2020-11-26 | 深圳前海微众银行股份有限公司 | Task execution method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103605560A (en) * | 2013-11-25 | 2014-02-26 | State Grid Corporation of China | Parallel simulation method for cascading failures of relay protection and security automatic devices |
CN103885839A (en) * | 2014-04-06 | 2014-06-25 | Sun Lingyu | Cloud computing task scheduling method based on multilevel partitioning and weighted directed hypergraphs |
FR3009406A1 (en) * | 2013-08-02 | 2015-02-06 | Commissariat Energie Atomique | Method for determining a ray path between two points, modeling the propagation of an ultrasonic wave through a structure with diffractive interfaces |
CN104391889A (en) * | 2014-11-11 | 2015-03-04 | Xi'an Jiaotong University | Community structure discovery method for directed weighted networks |
CN105786626A (en) * | 2016-04-11 | 2016-07-20 | Nanjing University of Posts and Telecommunications | Storm platform thread allocation method based on K-partitioning |
CN106096159A (en) * | 2016-06-20 | 2016-11-09 | North China Electric Power University (Baoding) | Implementation method of a distributed system behavior simulation and analysis system on a cloud platform |
CN106407007A (en) * | 2016-08-31 | 2017-02-15 | Shanghai Jiao Tong University | Cloud resource allocation optimization method for elastic analysis processes |
CN106682369A (en) * | 2017-02-27 | 2017-05-17 | Changzhou Engipower Technology Co., Ltd. | Heating pipe network hydraulic simulation model identification and correction method and system, and operation method |
Non-Patent Citations (3)
Title |
---|
Wu Yue: "Mathematical Model and Algorithm Description for the Parallelization of Serial Algorithms", Computer Technology and Development * |
Song Yankan et al.: "Parallel Algorithm for Transient Simulation of Control Systems Based on Directed-Graph Layering and Its GPU Implementation", Automation of Electric Power Systems * |
Xu Jinfeng et al.: "Survey of Partitioning Algorithms for Large-Scale Graph Data", Telecommunications Science * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108563984A (en) * | 2018-03-02 | 2018-09-21 | Shandong University of Science and Technology | Automatic recognition and understanding method for process model diagrams |
CN110610227A (en) * | 2018-06-15 | 2019-12-24 | Beijing Deephi Intelligent Technology Co., Ltd. | Artificial neural network tuning method and neural network computing platform |
CN110610227B (en) * | 2018-06-15 | 2022-07-26 | Xilinx Electronic Technology (Beijing) Co., Ltd. | Artificial neural network tuning method and neural network computing platform |
CN110865875A (en) * | 2018-08-27 | 2020-03-06 | Alibaba Group Holding Ltd. | DAG task relationship graph processing method and device, and electronic equipment |
CN110865875B (en) * | 2018-08-27 | 2023-04-11 | Alibaba Group Holding Ltd. | DAG task relationship graph processing method and device, and electronic equipment |
WO2020232951A1 (en) * | 2019-05-17 | 2020-11-26 | WeBank Co., Ltd. | Task execution method and device |
CN110366205A (en) * | 2019-08-05 | 2019-10-22 | Beijing University of Posts and Telecommunications | Method and device for selecting the initial source node in mobile opportunistic network traffic offloading |
CN111046575A (en) * | 2019-12-23 | 2020-04-21 | AVIC Shenyang Aircraft Design and Research Institute | Method and system for ensuring simulation consistency |
CN111245662A (en) * | 2020-03-09 | 2020-06-05 | Hangzhou DPtech Technologies Co., Ltd. | Method and device for displaying network topology |
CN111245662B (en) * | 2020-03-09 | 2023-01-24 | Hangzhou DPtech Technologies Co., Ltd. | Method and device for displaying network topology |
Also Published As
Publication number | Publication date |
---|---|
CN107633125B (en) | 2021-08-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107633125A (en) | Simulation system parallelism identification method based on weighted directed graph | |
Liu et al. | Adaptive asynchronous federated learning in resource-constrained edge computing | |
CN106297774B (en) | Distributed parallel training method and system for neural network acoustic models | |
CN110490738A (en) | Hybrid federated learning method and architecture | |
CN103294521B (en) | Method for reducing data center traffic load and energy consumption | |
CN104636197B (en) | Evaluation method for data center virtual machine migration scheduling strategies | |
CN110515739A (en) | Deep learning neural network model load calculation method, apparatus, equipment, and medium | |
CN102176723A (en) | Manufacturing cloud system supporting on-demand use and dynamic collaboration of manufacturing resources and manufacturing capabilities | |
CN109492774A (en) | Cloud resource scheduling method based on deep learning | |
CN108490893A (en) | Industrial control method, device, and equipment | |
CN108932588A (en) | Hydropower station group optimal scheduling system and method with front-end/back-end separation | |
CN104053179B (en) | C-RAN system integration design platform | |
CN109656544A (en) | Cloud service API adaptation method based on execution path similarity | |
CN108092804A (en) | Q-learning-based utility-maximizing resource allocation policy generation method for power telecommunication networks | |
CN105550159A (en) | Power allocation method for the network-on-chip of a multi-core processor | |
Ni et al. | Model transformation and formal verification for Semantic Web Services composition | |
CN110351145A (en) | Economic-benefit-based orchestration method for virtualized radio network functions | |
CN107566204A (en) | Stimulus packet generation control method, device, and logic detection equipment | |
CN103605573B (en) | Reconfigurable architecture mapping decision method based on cost calculation | |
CN103853559B (en) | Automatic verification method and system for Semantic Web service composition | |
CN107547256A (en) | Hardware-in-the-loop simulation method and system for power telecommunication networks | |
CN107453786A (en) | Method and device for establishing a power communication network model | |
CN107528731A (en) | Network partition optimization algorithm for NS3 parallel simulation | |
Lu et al. | Lotus: A new topology for large-scale distributed machine learning | |
CN111539685A (en) | Ship design and manufacturing collaborative management platform and method based on private cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||