CN116302573A - Core allocation method and device in parallel computing - Google Patents

Core allocation method and device in parallel computing

Info

Publication number
CN116302573A
Authority
CN
China
Prior art keywords: vertex, constraint condition, edge, target, original
Prior art date
Legal status: Granted
Application number
CN202310580093.9A
Other languages
Chinese (zh)
Other versions
CN116302573B (en)
Inventor
吴蕴超 (Wu Yunchao)
Current Assignee
Beijing Yundao Zhizao Technology Co., Ltd.
Original Assignee
Beijing Yundao Zhizao Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Yundao Zhizao Technology Co., Ltd.
Priority to CN202310580093.9A
Publication of CN116302573A
Application granted
Publication of CN116302573B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU], to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 2111/00 Details relating to CAD techniques
    • G06F 2111/04 Constraint-based CAD
    • G06F 2119/00 Details relating to the type or aim of the analysis or the optimisation
    • G06F 2119/14 Force analysis or force optimisation, e.g. static or dynamic forces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application provides a core allocation method and device in parallel computing. The method comprises the following steps: acquiring grid information of an explicit dynamics simulation and each constraint condition, wherein the grid information comprises the grids related to each constraint condition and other grids, the other grids being grids not related to any constraint condition; determining a target weighted graph corresponding to all grids of the explicit dynamics simulation according to the type of each constraint condition; and transmitting the target weighted graph to a graph partitioning software package, so that the graph partitioning software package allocates the grids related to each constraint condition and the other grids to the corresponding cores. The method and the device solve the technical problem that, in the prior art, the random assignment of constraint-related grids to the cores causes low parallel simulation efficiency, and achieve the technical effect of improving parallel simulation efficiency.

Description

Core allocation method and device in parallel computing
Technical Field
The application relates to the technical field of explicit dynamics simulation, in particular to a core allocation method and device in parallel computing.
Background
Explicit dynamics simulation is applied in scenarios such as collision analysis of automobiles and aircraft, stamping analysis in manufacturing, explosion analysis in civil engineering, drop analysis of electronic products, and ballistic and underwater impact analysis. Compared with implicit dynamics, explicit dynamics is more advantageous for large deformation, complex contact and material nonlinearity, and its robustness is also higher. When facing large simulation problems, parallel execution of explicit dynamics simulation is realized mainly by exchanging data among multiple cores through a message passing interface.
In parallel computing for explicit dynamics, the existing practice is as follows. First, the geometric information of the problem to be simulated is read in. The geometric information consists of grids and the connections between them; a connection exists between two grids if they are physically connected.
Parallel computing requires that all grids be assigned to different computing cores, and each computing core is responsible for the computation on its own grids. The computation on a grid typically requires information from the grids connected to it, and if a connected grid is on another computing core, communication between computing cores is required. Therefore, to maximize the efficiency of parallel computing, grid allocation has two main goals. Goal one is to keep the number of grids obtained by each computing core as even as possible, so that the computational load of each core is approximately the same. Goal two is to minimize the number of grid connections crossing the boundaries between computing cores, because the number of such connections approximately represents the amount of communication needed for parallel computation; keeping the number of cross-core grid connections as small as possible reduces the communication volume and increases parallel efficiency.
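These two goals correspond to the standard balanced graph-partitioning objective. The following formulation is a hedged restatement of them in our own notation (it is not given in the application): with p computing cores, vertex weights w(v), edge weights w(u,v) and a core assignment π, one seeks
\[
\min_{\pi}\ \sum_{\substack{(u,v)\in E \\ \pi(u)\neq\pi(v)}} w(u,v)
\qquad \text{subject to} \qquad
\sum_{v:\,\pi(v)=k} w(v)\ \le\ (1+\varepsilon)\,\frac{1}{p}\sum_{v\in V} w(v),
\quad k=1,\dots,p,
\]
where ε is the allowed load-imbalance tolerance; the constraint expresses goal one and the minimized cut weight expresses goal two.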
Grid allocation first converts the grids into a weighted graph. As shown in fig. 1, fig. 1 is a schematic diagram of an original weighted graph provided in an embodiment of the present application; it is the weighted graph corresponding to a mesh of four grids. In the prior art, each grid corresponds to one vertex in the weighted graph, each connection between two grids corresponds to an edge between the two corresponding vertices, and the weight of every vertex and every edge is set to 1 by default. The total number of computing cores and the weighted graph are then input into a graph partitioning software package, e.g., METIS, Scotch, etc. The graph partitioning software package divides the weighted graph into a plurality of sub-graphs and assigns each sub-graph to a corresponding computing core, ensuring that the sums of vertex weights on the sub-graphs obtained by the computing cores are as close as possible and that the sum of the weights of edges between sub-graphs is as small as possible. If the total number of computing cores is 2, the graph partitioning software package divides the weighted graph into a first sub-graph and a second sub-graph; as shown in fig. 1, they are separated by the dashed line of fig. 1. The first sub-graph contains vertex A and vertex B, and the second sub-graph contains vertex C and vertex D. The edge weight between the first sub-graph and the second sub-graph is the sum of the weights of the edges crossing the cut, namely the edge between the grid corresponding to vertex A and the grid corresponding to vertex C, the edge between the grid corresponding to vertex B and the grid corresponding to vertex C, and the edge between the grid corresponding to vertex B and the grid corresponding to vertex D. The corresponding grids are then allocated to the corresponding computing cores according to the vertices of each sub-graph, so that the grid allocation goals can be met.
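As an illustration only (not part of the application), the following minimal Python sketch builds the kind of unit-weight graph described above for the four grids of fig. 1, using the edge set A-B, A-C, B-C, B-D, C-D implied by the later figure descriptions, and hands it to a partitioner. The pymetis binding of METIS is assumed to be available, and its part_graph function is assumed to accept CSR arrays and weight lists as keyword arguments.

```python
import pymetis  # assumed Python binding of the METIS graph partitioner

# CSR adjacency for vertices A=0, B=1, C=2, D=3 (each edge stored in both directions)
xadj   = [0, 2, 5, 8, 10]
adjncy = [1, 2,        # A: B, C
          0, 2, 3,     # B: A, C, D
          0, 1, 3,     # C: A, B, D
          1, 2]        # D: B, C
vweights = [1, 1, 1, 1]          # default vertex weight 1 per grid
eweights = [1] * len(adjncy)     # default edge weight 1 per connection

n_cores = 2
cut_cost, membership = pymetis.part_graph(
    n_cores, xadj=xadj, adjncy=adjncy, vweights=vweights, eweights=eweights)

# membership[i] is the computing core to which grid i is allocated;
# cut_cost is the total weight of the edges crossing core boundaries.
print(cut_cost, membership)
```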
However, this prior approach yields low parallel computing efficiency when dealing with explicit dynamics simulations that have many constraints. This is because computing almost every constraint requires a reduction operation (e.g., computing an average stress or an average displacement) over the grid data it involves. In parallel computing, this corresponds to an all-reduce (AllReduce) operation across multiple computing cores. Because the grid allocation process described above gives no special treatment to the grids involved in constraints, their allocation is random, and the grids involved in a constraint may end up scattered over all computing cores. In the prior art, all computing cores therefore participate in the all-reduce, i.e., the communicator is the global communicator containing every computing core, so as to guarantee that all grids involved in the constraint are covered.
However, an all-reduce over the global communicator effectively places a barrier on all cores: computation can only continue once every core has reached the blocking point, which increases the waiting time of each core and reduces simulation efficiency. At the same time, an all-reduce over the global communicator is itself very time-consuming, because it requires many message transmissions and the communication speed between cores is usually far lower than the computation speed, especially when the number of cores is very large. As a result, parallel execution of explicit dynamics simulation is inefficient when there are many constraint conditions.
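For orientation only, the following sketch (using mpi4py; the variable names are placeholders and not taken from the application) shows this prior-art pattern, in which every core joins the all-reduce over the global communicator even if it holds no grid of the constraint:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD   # global communicator: every computing core participates

# Hypothetical per-core partial results over whichever constraint-related
# grids happened to land on this core (placeholders for illustration).
local_stress_sum = 0.0
local_grid_count = 0

# Every core must reach these calls before any core may proceed.
total_stress = comm.allreduce(local_stress_sum, op=MPI.SUM)
total_count = comm.allreduce(local_grid_count, op=MPI.SUM)
average_stress = total_stress / total_count if total_count else 0.0
```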
Disclosure of Invention
In order to improve the parallel execution efficiency of explicit dynamics simulation, the applicant found that the all-reduce required by any one constraint only needs the participation of the computing cores that contain the grids involved in that constraint, while the other computing cores can continue their own computation unaffected. Efficiency can therefore be improved by constructing a local communicator, consisting only of the computing cores that contain the grids involved in the constraint, to replace the global communicator. In this case the all-reduce is confined to the local communicator, and because the local communicator is smaller than the global communicator, the communication time is reduced. Meanwhile, the computing cores outside the local communicator are not affected by its blocking and can carry out their own computation, which further improves efficiency. Evidently, the fewer computing cores the local communicator contains, i.e., the fewer computing cores the constraint-related grids are allocated to, the greater the efficiency gain. In the most extreme case, if all grids related to a constraint are allocated to the same computing core, the all-reduce in that constraint's computation can be avoided entirely and no communication is needed. However, the existing grid allocation method gives no special treatment to the grids involved in constraints, so their allocation is random; in the worst case the grids involved in each constraint are spread over all computing cores, resulting in poor parallel computing efficiency. In view of this, an object of the present application is to provide a core allocation method and apparatus in parallel computing that allocate the grids related to each constraint to as few computing cores as possible while still meeting grid allocation goals one and two as far as possible. For constraint types involving few grids, the related grids are allocated to the same core, which eliminates the all-reduce communication; for constraint types involving many grids, allocating all related grids to the same core would give that core a huge number of grids and violate grid allocation goal one, so the related grids are instead allocated to as few cores as possible.
In this application, the grids involved in the explicit dynamics simulation are classified according to whether they are related to a constraint condition. The original weighted graph is then processed according to the type of each constraint condition to generate a new weighted graph, the new weighted graph is divided by the graph partitioning software, and the corresponding grid division is obtained from the division result. If all grids related to a constraint then lie on the same computing core, no all-reduce is needed for that constraint's computation; otherwise a local communicator is generated from the computing cores on which the grids related to the constraint reside, and during parallel computing this newly generated local communicator replaces the global communicator used in the existing method. This solves the technical problem of low parallel simulation efficiency caused in the prior art by the random allocation of constraint-related grids, and achieves the technical effect of improving parallel simulation efficiency.
The application mainly comprises the following aspects:
in a first aspect, an embodiment of the present application provides a core allocation method in parallel computing, where the core allocation method in parallel computing includes: acquiring grid information of an explicit dynamics simulation and each constraint condition, wherein the grid information comprises the grids related to each constraint condition and other grids, the other grids being grids not related to any constraint condition; determining a target weighted graph corresponding to all grids of the explicit dynamics simulation according to the type of each constraint condition; and transmitting the target weighted graph to a graph partitioning software package, so that the graph partitioning software package allocates the grids related to each constraint condition and the other grids to the corresponding cores.
Optionally, the types of constraint condition include a first constraint condition and a second constraint condition; and determining the target weighted graph corresponding to all grids of the explicit dynamics simulation according to the type of each constraint condition comprises: determining an original weighted graph corresponding to the grid information of the explicit dynamics simulation, the original weighted graph comprising the vertex weight of the original vertex corresponding to each grid in the grid information and the edge weights of the original edges corresponding to that grid; determining, for each first constraint condition and/or each second constraint condition among all constraint conditions of the explicit dynamics simulation, the original vertices in the original weighted graph corresponding to that constraint condition and the original edges corresponding to those vertices; determining the vertex weight of the single first target vertex corresponding to each first constraint condition and the edge weights of the edges corresponding to that first target vertex based on the original vertices related to the first constraint condition and the original edges related to those vertices, and/or determining the vertex weight of each second target vertex corresponding to each second constraint condition and the edge weight of the edge between every two such second target vertices based on the original vertices related to the second constraint condition and the original edges between every two of those vertices; and constructing the target weighted graph based on the vertex weight of the first target vertex corresponding to each first constraint condition and the edge weights of the edges corresponding to that first target vertex, and/or the vertex weights of the second target vertices corresponding to each second constraint condition and the edge weights of the edges corresponding to those second target vertices, together with the edges between the other original vertices in the original weighted graph, i.e., those other than the original vertices corresponding to each first constraint condition and/or each second constraint condition.
Optionally, determining the vertex weight of the first target vertex corresponding to each first constraint condition and the edge weights of the edges corresponding to that first target vertex based on the original vertices related to the first constraint condition and the original edges related to those vertices comprises: for each first constraint condition, merging the original vertices related to the first constraint condition into one first target vertex, and adding the vertex weights of the original vertices related to the first constraint condition to obtain the vertex weight of the first target vertex corresponding to the first constraint condition; determining at least one other original vertex that has an original edge to an original vertex related to the first constraint condition; for each such other original vertex, adding the edge weights of the original edges between that other original vertex and the original vertices related to the first constraint condition to obtain the edge weight of the edge between the first target vertex and that other original vertex; and taking the edge weights of the edges between the first target vertex and each such other original vertex as the edge weights of the edges corresponding to the first target vertex.
Optionally, determining the vertex weight of each second target vertex corresponding to each second constraint condition and the edge weight of the edge between every two such second target vertices based on the original vertices related to the second constraint condition and the original edges between every two of those vertices comprises: taking each original vertex related to the second constraint condition as the second target vertex corresponding to that original vertex, and increasing the edge weight of the original edge between two original vertices related to the second constraint condition by a preset edge weight to obtain the edge weight of the edge between the two corresponding second target vertices.
Optionally, the method further comprises: determining whether all constraint conditions of the explicit dynamics simulation contain a second constraint condition; if all constraint conditions of the explicit dynamics simulation contain at least one second constraint condition, determining the computing cores, allocated by the graph partitioning software package, corresponding to the second constraint condition; and constructing a target local communicator from the computing cores corresponding to the second constraint condition.
Optionally, the first constraint condition comprises welding, binding, shell-to-solid connection and other constraint types that involve a small number of grids.
Optionally, the second constraint condition comprises surface-to-surface contact, general contact, rigid body motion and other constraint types that involve a large number of grids.
In a second aspect, embodiments of the present application further provide a core allocation apparatus in parallel computing, where the core allocation apparatus in parallel computing includes: an acquisition module, configured to acquire grid information of an explicit dynamics simulation and each constraint condition, where the grid information comprises the grids related to each constraint condition and other grids, the other grids being grids not related to any constraint condition; a determining module, configured to determine a target weighted graph corresponding to all grids of the explicit dynamics simulation according to the type of each constraint condition; and a transmission module, configured to transmit the target weighted graph to a graph partitioning software package, so that the graph partitioning software package allocates the grids related to each constraint condition and the other grids to the corresponding cores.
In a third aspect, embodiments of the present application further provide an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the core allocation method in parallel computing as described in the first aspect or any of the possible implementation manners of the first aspect.
In a fourth aspect, the embodiments of the present application further provide a computer readable storage medium, on which a computer program is stored, which when executed by a processor performs the steps of the core allocation method in parallel computing as described in the first aspect or any of the possible implementation manners of the first aspect.
The embodiments of the present application provide a core allocation method and device in parallel computing. The method comprises: acquiring grid information of an explicit dynamics simulation and each constraint condition, wherein the grid information comprises the grids related to each constraint condition and other grids, the other grids being grids not related to any constraint condition; determining a target weighted graph corresponding to all grids of the explicit dynamics simulation according to the type of each constraint condition; and transmitting the target weighted graph to a graph partitioning software package, so that the graph partitioning software package allocates the grids related to each constraint condition and the other grids to the corresponding cores. In this application, the grids involved in the explicit dynamics simulation are classified according to whether they are related to a constraint condition, the weighted-graph information corresponding to the grids related to constraint conditions is modified while the weighted-graph information corresponding to the grids not related to any constraint condition is kept unchanged, and the modified weighted graph is sent to the graph partitioning software package so that it partitions the grids onto the corresponding cores. This solves the technical problem of low simulation efficiency caused in the prior art by the graph partitioning software package randomly allocating constraint-related grids, and achieves the technical effect of improving simulation efficiency.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a schematic diagram of an original weighted graph provided in an embodiment of the present application.
Fig. 2 is a flowchart of a core allocation method in parallel computing according to an embodiment of the present application.
Fig. 3 shows a schematic diagram of a target weighted graph according to an embodiment of the present application.
Fig. 4 shows a schematic diagram of another target weighted graph provided in an embodiment of the present application.
Fig. 5 is a schematic diagram of a core allocation method in parallel computing according to an embodiment of the present application.
FIG. 6 is a functional block diagram of a core allocation apparatus in parallel computing according to an embodiment of the present application.
Fig. 7 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the accompanying drawings in the present application are only for the purpose of illustration and description, and are not intended to limit the protection scope of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this application, illustrates operations implemented according to some embodiments of the present application. It should be appreciated that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to the flow diagrams and one or more operations may be removed from the flow diagrams as directed by those skilled in the art.
In addition, the described embodiments are only some, but not all, of the embodiments of the present application. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
In the prior art, under parallel computing, the process of allocating the grids related to constraint conditions to computing cores is random; in each constraint computation, all cores need to communicate to complete an all-reduce, and the large number of cores leads to excessive communication traffic and communication waiting among cores, so simulation efficiency is low.
Based on this, the embodiments of the present application provide a core allocation method and device in parallel computing. The grids related to the explicit dynamics simulation are classified according to whether they are related to a constraint condition; the weights of the vertices and edges in the weighted graph corresponding to the grids related to constraint conditions are modified, while the weights of the vertices and edges corresponding to the grids not related to any constraint condition are kept unchanged; and the modified weighted graph is sent to the graph partitioning software package so that it partitions the grids onto the corresponding cores. This solves the technical problem of low simulation efficiency caused in the prior art by the graph partitioning software package randomly distributing constraint-related grids over the cores, and achieves the technical effect of improving simulation efficiency. The method comprises the following steps:
referring to fig. 2, fig. 2 is a flowchart of a core allocation method in parallel computing according to an embodiment of the present application. As shown in fig. 2, the core allocation method in parallel computing provided in the embodiment of the present application includes the following steps:
S101: acquiring the grid information of the explicit dynamics simulation and each constraint condition.
If explicit dynamics simulation does not involve constraints, the solution of the present application need not be considered. The present application is directed to explicit dynamics simulation involving constraints only. Explicit dynamics simulation in the present application corresponds to one or more constraints.
The grid information contains the grids related to each constraint condition and other grids, the other grids being grids not related to any constraint condition. A grid here refers to one of the cells of different sizes into which the object undergoing the explicit dynamics simulation is divided in order to represent its geometry during the simulation.
The types of constraint condition include a first constraint condition and a second constraint condition. The first constraint condition includes welding, binding, and shell-to-solid connection. The second constraint condition includes surface-to-surface contact, general contact, and rigid body motion. The first constraint condition involves a smaller number of grids, while the second constraint condition involves a larger number of grids.
For example, suppose the present explicit dynamics simulation involves three rods, where the constraint condition between one end of the first rod and one end of the second rod is binding, and the constraint condition between the other end of the second rod and one end of the third rod is also binding. Then the grids corresponding to the bound portion between the first rod and the second rod are taken as the grids related to first constraint condition number 1 of the simulation, the grids corresponding to the bound portion between the second rod and the third rod are taken as the grids related to first constraint condition number 2 of the simulation, and the remaining grids of the three rods are taken as the other grids.
As another example, suppose the present explicit dynamics simulation involves three rods, where the constraint condition between one end of the first rod and one end of the second rod is binding, and the constraint condition between the other end of the second rod and one end of the third rod is surface-to-surface contact. Then the grids corresponding to the bound portion between the first rod and the second rod are taken as the grids related to the first constraint condition of the simulation, the grids corresponding to the contact portion between the second rod and the third rod are taken as the grids related to the second constraint condition of the simulation, and the remaining grids of the three rods are taken as the other grids.
S102: determining the target weighted graph corresponding to all grids of the explicit dynamics simulation according to the type of each constraint condition.
Determining the target weighted graph corresponding to all grids of the explicit dynamics simulation according to the type of each constraint condition comprises: determining an original weighted graph corresponding to the grid information of the explicit dynamics simulation, the original weighted graph comprising the vertex weight of the original vertex corresponding to each grid in the grid information and the edge weights of the original edges corresponding to that grid; determining, for each first constraint condition and/or each second constraint condition among all constraint conditions of the explicit dynamics simulation, the original vertices in the original weighted graph corresponding to that constraint condition and the original edges corresponding to those vertices; determining the vertex weight of the single first target vertex corresponding to each first constraint condition and the edge weights of the edges corresponding to that first target vertex based on the original vertices related to the first constraint condition and the original edges related to those vertices, and/or determining the vertex weight of each second target vertex corresponding to each second constraint condition and the edge weight of the edge between every two such second target vertices based on the original vertices related to the second constraint condition and the original edges between every two of those vertices; and constructing the target weighted graph based on the vertex weight of the first target vertex corresponding to each first constraint condition and the edge weights of the edges corresponding to that first target vertex, and/or the vertex weights of the second target vertices corresponding to each second constraint condition and the edge weights of the edges corresponding to those second target vertices, together with the edges between the other original vertices in the original weighted graph, i.e., those other than the original vertices corresponding to each first constraint condition and/or each second constraint condition.
That is, each grid in the grid information corresponds to one original vertex, with its vertex weight, in the original weighted graph, and each vertex has an original edge, with its edge weight, to one or more other vertices. For each vertex, if there is a connection between it and one other vertex, the two vertices share one original edge; if there are connections between it and several other vertices, there is one original edge between it and each of those vertices.
Determining the vertex weight of the first target vertex corresponding to each first constraint condition and the edge weights of the edges corresponding to that first target vertex based on the original vertices related to the first constraint condition and the original edges related to those vertices comprises the following steps: for each first constraint condition, merging the original vertices related to the first constraint condition into one first target vertex, and adding the vertex weights of the original vertices related to the first constraint condition to obtain the vertex weight of the first target vertex corresponding to the first constraint condition; determining at least one other original vertex that has an original edge to an original vertex related to the first constraint condition; for each such other original vertex, adding the edge weights of the original edges between that other original vertex and the original vertices related to the first constraint condition to obtain the edge weight of the edge between the first target vertex and that other original vertex; and taking the edge weights of the edges between the first target vertex and each such other original vertex as the edge weights of the edges corresponding to the first target vertex.
That is, the original vertices related to each first constraint condition in the original weighted graph are merged into one first target vertex, whose vertex weight is the sum of the vertex weights of all original vertices corresponding to that constraint condition; and for every other original vertex (i.e., one not belonging to the constraint condition) that has original edges to the original vertices of the constraint condition, the sum of the edge weights of those original edges is taken as the edge weight of the edge between that other original vertex and the first target vertex.
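Before the fig. 3 walk-through below, the following is a purely illustrative Python sketch of this merging rule (the dictionary-based graph representation and the function name are our own assumptions, not the application's code):

```python
def merge_first_constraint(vertex_w, edge_w, constraint_vertices):
    """Merge the original vertices of one first-type constraint into a single
    target vertex, following the rule described above.

    vertex_w: dict vertex -> vertex weight
    edge_w:   dict frozenset({u, v}) -> edge weight
    constraint_vertices: set of original vertices related to the constraint
    """
    merged = "+".join(sorted(map(str, constraint_vertices)))        # e.g. "A+C"
    new_vw = {v: w for v, w in vertex_w.items() if v not in constraint_vertices}
    # Vertex weight of the target vertex = sum of the merged vertices' weights.
    new_vw[merged] = sum(vertex_w[v] for v in constraint_vertices)

    new_ew = {}
    for pair, w in edge_w.items():
        inside = pair & constraint_vertices
        if len(inside) == 2:
            continue                       # edge internal to the constraint disappears
        if len(inside) == 1:
            (outside,) = pair - constraint_vertices
            key = frozenset({merged, outside})
            # Edge weights from the merged vertices to the same outside vertex add up.
            new_ew[key] = new_ew.get(key, 0) + w
        else:
            new_ew[pair] = w               # edge untouched by the constraint
    return new_vw, new_ew, merged


# Applied to the fig. 1 graph with unit weights and constraint vertices {A, C}:
vertex_w = {"A": 1, "B": 1, "C": 1, "D": 1}
edge_w = {frozenset(p): 1
          for p in [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")]}
vw, ew, ac = merge_first_constraint(vertex_w, edge_w, {"A", "C"})
# vw["A+C"] == 2 and ew[frozenset({"A+C", "B"})] == 2, matching the fig. 3 description.
```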
Referring to fig. 3, fig. 3 is a schematic diagram of a target weighted graph according to an embodiment of the present application. As shown in fig. 1 and fig. 3, the target weighted graph provided in this embodiment corresponds to the original weighted graph of fig. 1. Suppose vertex A and vertex C in fig. 1 are the original vertices corresponding to the grids related to the only constraint condition, the constraint condition is a first constraint condition, and the total number of computing cores is 2. Then vertex A and vertex C are merged into a first target vertex ac, and the vertex weight of vertex A plus the vertex weight of vertex C gives the vertex weight of the first target vertex ac. The vertex weight of vertex B is used as the vertex weight of vertex b1, and the vertex weight of vertex D is used as the vertex weight of vertex d1. The other original vertices having original edges to vertex A or vertex C are vertex B and vertex D. For vertex B, there are original edges from both vertex A and vertex C to vertex B, so the edge weight of the original edge between vertex A and vertex B is added to the edge weight of the original edge between vertex C and vertex B to give the edge weight of the edge between the first target vertex ac and vertex b1. Since this edge weight increases, the edge between vertex ac and vertex b1 in fig. 3 is drawn with a bold line. The edge weight of the original edge between vertex C and vertex D is taken as the edge weight of the edge between the first target vertex ac and vertex d1, and the edge weight of the original edge between vertex B and vertex D is taken as the edge weight of the edge between vertex b1 and vertex d1. That is, the edge weight of the edge between the first target vertex ac and vertex b1 and the edge weight of the edge between the first target vertex ac and vertex d1 are the edge weights of the edges corresponding to the first target vertex ac. The target weighted graph is constructed from the vertex weight of the first target vertex ac, the vertex weight of vertex b1, the vertex weight of vertex d1, the edge weight of the edge between the first target vertex ac and vertex b1, the edge weight of the edge between the first target vertex ac and vertex d1, and the edge weight of the edge between vertex b1 and vertex d1.
Determining the vertex weight of each second target vertex corresponding to each second constraint condition and the edge weight of the edge between every two such second target vertices based on the original vertices related to the second constraint condition and the original edges between every two of those vertices comprises: taking each original vertex related to the second constraint condition as the second target vertex corresponding to that original vertex, and increasing the edge weight of the original edge between two original vertices related to the second constraint condition by a preset edge weight to obtain the edge weight of the edge between the two corresponding second target vertices.
That is, each original vertex related to each second constraint condition in the original weighted graph is taken as the second target vertex corresponding to that original vertex, and the vertex weight of the original vertex is taken as the vertex weight of that second target vertex; for every pair of original vertices related to the second constraint condition that are joined by an original edge, a preset weight is added to the edge weight of that original edge, and the result is taken as the edge weight of the edge between the two corresponding second target vertices.
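A minimal illustrative sketch of this edge-weight boost is given below as a companion to the merging sketch above, shown before the fig. 4 walk-through; the preset increment of 10 is an assumption for illustration, not a value given in the application:

```python
def boost_second_constraint(edge_w, constraint_vertices, preset=10):
    """Add a preset edge weight to every original edge whose two endpoints are
    both original vertices related to the second constraint condition.

    edge_w: dict frozenset({u, v}) -> edge weight (same layout as the merging sketch)
    constraint_vertices: set of original vertices related to the second constraint
    preset: assumed preset edge-weight increment
    """
    boosted = dict(edge_w)
    for pair, w in edge_w.items():
        if pair <= constraint_vertices:    # both endpoints belong to the constraint
            boosted[pair] = w + preset
    return boosted
```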
Referring to fig. 4, fig. 4 is a schematic diagram of another target weighted graph according to an embodiment of the present application. As shown in fig. 1 and fig. 4, the target weighted graph provided in this embodiment corresponds to the original weighted graph of fig. 1. Suppose vertex A and vertex B in fig. 1 are the original vertices corresponding to the grids related to the only constraint condition, the constraint condition is a second constraint condition, and the total number of computing cores is 2. Then vertex A is taken as a second target vertex a2, vertex B is taken as a second target vertex b2, and the edge weight of the original edge between vertex A and vertex B, increased by the preset edge weight, is taken as the edge weight of the edge between the second target vertex a2 and the second target vertex b2. Since this edge weight increases, the edge between vertex a2 and vertex b2 in fig. 4 is drawn with a bold line. The edge weight of the edge between the second target vertex a2 and the second target vertex b2 is the edge weight of the edge corresponding to the second constraint condition. The target weighted graph is constructed from the vertex weight of the second target vertex a2, the vertex weight of the second target vertex b2, the vertex weight of vertex c2, the vertex weight of vertex d2, the edge weight of the edge between the second target vertex a2 and the second target vertex b2, the edge weight of the edge between the second target vertex a2 and vertex c2, the edge weight of the edge between the second target vertex b2 and vertex c2, the edge weight of the edge between vertex d2 and vertex c2, and the edge weight of the edge between vertex d2 and the second target vertex b2. That is, the vertex weight of vertex A in the original weighted graph is set as the vertex weight of the second target vertex a2, the vertex weight of vertex B is set as the vertex weight of the second target vertex b2, the vertex weight of vertex C is set as the vertex weight of vertex c2, and the vertex weight of vertex D is set as the vertex weight of vertex d2.
The edge weight of the original edge between vertex A and vertex C in the original weighted graph is taken as the edge weight of the edge between the second target vertex a2 and vertex c2, the edge weight of the original edge between vertex B and vertex C is taken as the edge weight of the edge between the second target vertex b2 and vertex c2, the edge weight of the original edge between vertex D and vertex C is taken as the edge weight of the edge between vertex d2 and vertex c2, and the edge weight of the original edge between vertex D and vertex B is taken as the edge weight of the edge between vertex d2 and the second target vertex b2. Further, because the edge weight of the edge between vertex a2 and vertex b2 has been increased, when the computing-core division is performed, vertex a2 and vertex b2 are divided into one computing core and vertex c2 and vertex d2 into the other computing core.
By merging the vertices corresponding to each first constraint condition into one first target vertex, the graph partitioning software package is guaranteed to place the vertices corresponding to each first constraint condition in the same core when partitioning, which avoids the all-reduce during that constraint's computation and improves parallel efficiency. Meanwhile, because the weight of the first target vertex corresponding to a first constraint condition is the sum of the weights of all vertices before merging, and the graph partitioning software package tries to keep the sums of vertex weights on the sub-graphs of the cores as equal as possible, the numbers of grids finally allocated to the cores remain as consistent as possible; the vertex merging therefore does not cause an uneven grid distribution among the computing cores.
The graph partitioning software package also tries to keep the edge weights between cores as small as possible during the division. By increasing the edge weights between the vertices corresponding to each second constraint condition, the software, in order to avoid placing heavily weighted edges on core boundaries, partitions the vertices corresponding to each second constraint condition onto as few cores as possible, or even onto a single core, thereby reducing or eliminating communication between cores and improving parallel efficiency.
S103: transmitting the target weighted graph to the graph partitioning software package, so that the graph partitioning software package allocates the grids related to each constraint condition and the other grids to the corresponding cores.
That is, for each first constraint condition, the vertices corresponding to it have already been merged into one vertex in the weighted graph, so they are necessarily divided into one computing core. For the vertices related to each second constraint condition, since the edge weights among them are larger than the other edge weights, the graph partitioning software package, in order to keep the edge weights between computing cores small, divides them onto as few computing cores as possible, or even onto a single computing core, thereby reducing or eliminating communication between computing cores.
The core allocation method in parallel computing provided by the embodiment of the present application further comprises: determining whether all constraint conditions of the explicit dynamics simulation contain a second constraint condition; if they contain at least one second constraint condition, determining the computing cores, allocated by the graph partitioning software package, corresponding to the second constraint condition; and constructing a target local communicator from the computing cores corresponding to the second constraint condition. That is, the computing cores corresponding to the second constraint condition are used to construct the target local communicator, which replaces the global communicator of the existing method.
That is, for the vertices related to each second constraint condition, if the graph partitioning software package does not place them on a single computing core, the computing cores corresponding to the second constraint condition are grouped to construct a target local communicator, so that these cores communicate within the target local communicator. The computing cores not related to the second constraint condition are then unaffected by the reduction operation: the target local communicator does not need to communicate with, or wait for, any computing core outside the cores corresponding to the second constraint condition, which improves parallel efficiency.
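As an illustration only (mpi4py is assumed, and the rank list and variable names are hypothetical, not taken from the application), constructing such a target local communicator and confining the all-reduce to it could look like the following sketch; MPI_Comm_create_group (MPI-3) is used so that only the member cores participate in creating the communicator:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Hypothetical: ranks of the computing cores that received grids of one
# second-type constraint, as read from the graph partitioning result.
constraint_ranks = [0, 2, 3]

if rank in constraint_ranks:
    # Build the target local communicator from exactly those cores.
    group = comm.Get_group().Incl(constraint_ranks)
    local_comm = comm.Create_group(group)

    local_stress_sum = 0.0      # placeholder partial result on this core
    local_grid_count = 0

    # The all-reduce now blocks only the cores inside the local communicator;
    # all other cores keep computing without waiting.
    total_stress = local_comm.allreduce(local_stress_sum, op=MPI.SUM)
    total_count = local_comm.allreduce(local_grid_count, op=MPI.SUM)

    local_comm.Free()
    group.Free()
```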
If all constraint conditions of the explicit dynamics simulation contain no second constraint condition, the vertices related to each first constraint condition have been merged into one first target vertex, so when the graph partitioning software package performs the division, the first target vertex corresponding to each first constraint condition is guaranteed to lie on a single computing core. The computation related to a first constraint condition is then performed only on the computing core corresponding to that constraint, no communication with other computing cores is needed, and no target local communicator needs to be constructed.
Referring to fig. 5, fig. 5 is a schematic diagram of a core allocation method in parallel computing according to an embodiment of the present application. For example, as shown in fig. 5, the upper left part of fig. 5 is a grid diagram of an explicit dynamics simulation, the lower part of fig. 5 is the original weighted graph corresponding to that grid diagram, and the upper right part of fig. 5 is the target weighted graph obtained after the grid diagram is processed by the scheme of the present application. Grid 1 and grid 2 in the grid diagram are the grids related to a first constraint condition, grid 7, grid 8, grid 11 and grid 12 are the grids corresponding to a second constraint condition, and the total number of computing cores is 2. When allocation is done by the prior art, it is possible that grid 1 to grid 8 are assigned to one computing core and grid 9 to grid 16 to the other. Grid 1 to grid 16 correspond one-to-one to vertices D1 to D16 in the original weighted graph. In the present scheme, vertex D1 corresponding to grid 1 and vertex D2 corresponding to grid 2 are merged into a first target vertex D1,2, whose vertex weight is the sum of the vertex weights of vertex D1 and vertex D2. The vertex weights of vertices D3 to D16 remain the vertex weights of the corresponding original vertices. The edge weight of the edge between vertex D2 and vertex D3 is taken as the edge weight between the first target vertex D1,2 and D3, the edge weight of the edge between vertex D1 and vertex D5 is taken as the edge weight between D1,2 and D5, and the edge weight of the edge between vertex D2 and vertex D6 is taken as the edge weight between D1,2 and D6. The increased edge weight between vertex D7 and vertex D8 is taken as the edge weight between the second target vertices D7 and D8, the increased edge weight between D7 and D11 as the edge weight between D7 and D11, the increased edge weight between D11 and D12 as the edge weight between D11 and D12, and the increased edge weight between D12 and D8 as the edge weight between D12 and D8 (the bold lines in fig. 5 represent the increased edge weights). Then, from the vertex weight of the first target vertex D1,2, the vertex weights of vertices D3 to D16, the edge weights between D7 and D8, D7 and D11, D11 and D12, and D12 and D8, the edge weights between D1,2 and D3, D1,2 and D5, and D1,2 and D6, and the edge weights between D3 and D4, D3 and D7, D8 and D4, D5 and D6, D6 and D7, D5 and D9, D6 and D10, D9 and D10, D10 and D11, D9 and D13, D10 and D14, D13 and D14, D11 and D15, D14 and D15, D15 and D16, and D12 and D16, the target weighted graph is constructed and the division is performed by the graph partitioning software package. As indicated by the dashed line in the target weighted graph in the upper right part of fig. 5, the grids corresponding to the vertices on the left of the dashed line are divided onto one computing core, the grids corresponding to the vertices on the right of the dashed line are divided onto the other computing core, and the latter computing core is used to construct the target local communicator.
Based on the same inventive concept, the embodiments of the present application also provide a core allocation apparatus in parallel computing corresponding to the core allocation method in parallel computing provided by the above embodiments. Because the principle by which the apparatus in the embodiments of the present application solves the problem is similar to that of the core allocation method in parallel computing described above, the implementation of the apparatus can refer to the implementation of the method, and repetition is omitted.
As shown in fig. 6, fig. 6 is a functional block diagram of a core allocation apparatus in parallel computing according to an embodiment of the present application. The core allocation apparatus 10 in parallel computing includes: an acquisition module 101, a determination module 102 and a transmission module 103.
An acquisition module 101, configured to acquire grid information of an explicit dynamics simulation and each constraint condition, where the grid information includes the grids related to each constraint condition and other grids, the other grids being grids not related to any constraint condition;
a determining module 102, configured to determine a target weighted graph corresponding to all grids of the explicit dynamics simulation according to the type of each constraint condition;
and a transmission module 103, configured to transmit the target weighted graph to a graph partitioning software package, so that the graph partitioning software package allocates the grids related to each constraint condition and the other grids to the corresponding cores.
Based on the same application concept, referring to fig. 7, which is a schematic structural diagram of an electronic device provided in an embodiment of the present application, the electronic device 20 includes: a processor 201, a memory 202 and a bus 203, said memory 202 storing machine readable instructions executable by said processor 201, said processor 201 and said memory 202 communicating via said bus 203 when said electronic device 20 is running, said machine readable instructions being executed by said processor 201 to perform the steps of the core allocation method in parallel computing as described in any of the above embodiments.
In particular, the machine readable instructions, when executed by the processor 201, may perform the following: acquiring grid information of an explicit dynamics simulation and each constraint condition, wherein the grid information comprises the grids related to each constraint condition and other grids, the other grids being grids not related to any constraint condition; determining a target weighted graph corresponding to all grids of the explicit dynamics simulation according to the type of each constraint condition; and transmitting the target weighted graph to a graph partitioning software package, so that the graph partitioning software package allocates the grids related to each constraint condition and the other grids to the corresponding cores.
Based on the same application concept, the embodiment of the present application further provides a computer readable storage medium, where a computer program is stored, where the computer program is executed by a processor to perform the steps of the core allocation method in parallel computing provided in the foregoing embodiment.
Specifically, the storage medium may be a general storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is executed, the core allocation method in parallel computing may be performed: the grids involved in the explicit dynamics simulation are classified according to whether they are related to a constraint condition, the original weighted graph is processed according to the type of each constraint condition to generate the target weighted graph, the target weighted graph is partitioned by the graph partitioning software package, the corresponding grid partition is obtained from the partitioning result, and the computing cores to which the grids related to a constraint condition are assigned are used to construct a local communicator, which replaces the global communicator used by existing methods during parallel computing. This solves the technical problem in the prior art that grids related to constraint conditions are allocated randomly, resulting in low parallel simulation efficiency, and achieves the technical effect of improving parallel simulation efficiency.
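Where the above mentions replacing the global communicator with a newly generated local communicator, the following is a minimal mpi4py sketch of how such a local communicator could be created from the partition result; the variable contact_ranks and the choice of color are assumptions for illustration, not part of the present application.

from mpi4py import MPI

world = MPI.COMM_WORLD
rank = world.Get_rank()

# Ranks whose computing cores received grids related to a second constraint condition
# (e.g. contact); in practice this set would be derived from the partitioning result.
contact_ranks = {1}

# Ranks involved in the constraint computation join the local communicator; all other
# ranks pass MPI.UNDEFINED and receive MPI.COMM_NULL instead of a communicator.
color = 0 if rank in contact_ranks else MPI.UNDEFINED
local_comm = world.Split(color, key=rank)

if local_comm != MPI.COMM_NULL:
    # Collective operations for the constraint now involve only these ranks,
    # rather than a global reduction over MPI.COMM_WORLD.
    local_sum = local_comm.allreduce(1, op=MPI.SUM)

Because only the ranks that actually own constraint-related grids participate in such collectives, the synchronization cost of the constraint computation no longer scales with the total number of cores.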
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedures of the system and apparatus described above may refer to the corresponding procedures in the foregoing method embodiments and are not repeated here. In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling or communication connection shown or discussed between components may be an indirect coupling or communication connection through some communication interfaces, devices or units, and may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes or substitutions are covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of core allocation in parallel computing, the method comprising:
acquiring grid information of an explicit dynamics simulation and each constraint condition, wherein the grid information comprises grids related to each constraint condition and other grids, and the other grids refer to grids not related to any constraint condition;
determining a target weighted graph corresponding to all grids of the explicit dynamics simulation according to the type of each constraint condition;
and transmitting the target weighted graph to a graph partitioning software package, so that the graph partitioning software package distributes grids related to each constraint condition and other grids to corresponding cores.
2. The method of claim 1, wherein the type of the constraint condition comprises: a first constraint condition and a second constraint condition;
and the determining a target weighted graph corresponding to all grids of the explicit dynamics simulation according to the type of each constraint condition comprises:
determining an original weighted graph corresponding to the grid information of the explicit dynamics simulation; the original weighted graph comprises the vertex weight of an original vertex corresponding to each grid in the grid information and the edge weight of an original edge corresponding to the grid;
determining, for each first constraint condition and/or each second constraint condition among all constraint conditions of the explicit dynamics simulation, the original vertices related to the constraint condition in the original weighted graph and the original edges related to those original vertices;
determining the vertex weight of one first target vertex corresponding to each first constraint condition and the edge weights of the edges corresponding to the first target vertex based on the original vertices related to each first constraint condition and the original edges related to those original vertices, and/or determining the vertex weight of each second target vertex corresponding to each second constraint condition and the edge weight of the edge between every two second target vertices based on the original vertices related to each second constraint condition and the original edges between every two of those original vertices;
and constructing the target weighted graph based on the vertex weight of the first target vertex corresponding to each first constraint condition and the edge weights of the edges corresponding to the first target vertex, and/or the vertex weights of the second target vertices corresponding to each second constraint condition and the edge weights of the edges corresponding to the second target vertices, and the edges between the other original vertices in the original weighted graph other than the original vertices corresponding to each first constraint condition and/or each second constraint condition.
3. The method according to claim 2, wherein determining the vertex weight of the first target vertex corresponding to each first constraint and the edge weight of the edge corresponding to the first target vertex based on the original vertex related to each first constraint and the original edge related to the original vertex comprises:
for each first constraint condition, merging the original vertices related to the first constraint condition into one first target vertex, and adding the vertex weights of the original vertices related to the first constraint condition as the vertex weight of the first target vertex corresponding to the first constraint condition;
determining at least one other original vertex having an original edge with an original vertex related to the first constraint condition;
adding the edge weights of the original edges between each such other original vertex and the original vertices related to the first constraint condition, to obtain the edge weight of the edge between the first target vertex and that other original vertex;
and taking the edge weight of the edge between the first target vertex and each other original vertex as the edge weight of the edge corresponding to the first target vertex.
4. The method according to claim 2, wherein determining the vertex weight of each second target vertex and the edge weight of the edge between the two second target vertices corresponding to each second constraint based on each original vertex and the original edge between the two original vertices related to each second constraint comprises:
and taking each original vertex related to each second constraint condition as the second target vertex corresponding to that original vertex, increasing the edge weight of the original edge between every two original vertices related to the second constraint condition by a preset edge weight, and taking the increased edge weight as the edge weight of the edge between the two corresponding second target vertices.
5. The method according to claim 2, wherein the method further comprises:
determining whether all constraint conditions of the explicit dynamics simulation contain a second constraint condition;
if all constraint conditions of the explicit dynamics simulation contain at least one second constraint condition, determining a computing core distributed by the graph partitioning software package and corresponding to the second constraint condition;
and constructing a target local communicator from the computing core corresponding to the second constraint condition.
6. The method of claim 2, wherein the first constraint condition comprises: welding, binding, and shell-to-solid connection.
7. The method of claim 2, wherein the second constraint condition comprises: surface-to-surface contact, general contact, and rigid body motion.
8. A core allocation apparatus in parallel computing, the apparatus comprising:
an acquisition module, used for acquiring grid information of an explicit dynamics simulation and each constraint condition, wherein the grid information comprises grids related to each constraint condition and other grids, and the other grids refer to grids not related to any constraint condition;
a determining module, used for determining a target weighted graph corresponding to all grids of the explicit dynamics simulation according to the type of each constraint condition;
and a transmission module, used for transmitting the target weighted graph to a graph partitioning software package, so that the graph partitioning software package distributes the grids related to each constraint condition and the other grids to corresponding cores.
9. An electronic device, comprising: a processor, a memory and a bus, said memory storing machine readable instructions executable by said processor, said processor and said memory communicating via said bus when the electronic device is running, said machine readable instructions when executed by said processor performing the steps of the core allocation method in parallel computing according to any of claims 1 to 7.
10. A computer readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the steps of the core allocation method in parallel computing according to any of claims 1 to 7.
CN202310580093.9A 2023-05-23 2023-05-23 Core allocation method and device in parallel computing Active CN116302573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310580093.9A CN116302573B (en) 2023-05-23 2023-05-23 Core allocation method and device in parallel computing

Publications (2)

Publication Number Publication Date
CN116302573A true CN116302573A (en) 2023-06-23
CN116302573B CN116302573B (en) 2023-08-18

Family

ID=86801806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310580093.9A Active CN116302573B (en) 2023-05-23 2023-05-23 Core allocation method and device in parallel computing

Country Status (1)

Country Link
CN (1) CN116302573B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006023828A (en) * 2004-07-06 2006-01-26 Casio Comput Co Ltd Figure drawing control apparatus and program
CN101630273A (en) * 2009-08-06 2010-01-20 中国电力科学研究院 Small interference stability simulation method of electric power system
US20140320497A1 (en) * 2013-04-29 2014-10-30 Microsoft Corporation Graph partitioning for massive scale graphs
CN104933225A (en) * 2015-05-25 2015-09-23 中国科学院过程工程研究所 Method for realizing computational fluid dynamics large-scale real-time simulation
CN113377523A (en) * 2021-01-13 2021-09-10 绍兴文理学院 Heterogeneous sensing stream graph partitioning method
CN115659843A (en) * 2022-11-30 2023-01-31 哈尔滨工业大学人工智能研究院有限公司 Method for constructing display dynamics model by using artificial intelligence technology
CN115809530A (en) * 2022-12-06 2023-03-17 北京谋先飞技术有限公司 Entity network distributed physical simulation method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN116302573B (en) 2023-08-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant