CN115509737A - Three-dimensional CAD model parallel driving method based on process scheduling - Google Patents

Three-dimensional CAD model parallel driving method based on process scheduling

Info

Publication number
CN115509737A
CN115509737A
Authority
CN
China
Prior art keywords
model
file
task
dependency
storing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211028056.9A
Other languages
Chinese (zh)
Inventor
易平
朱凌穹
胡建平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Eman Technology Co ltd
Original Assignee
Wuhan Eman Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Eman Technology Co ltd filed Critical Wuhan Eman Technology Co ltd
Priority to CN202211028056.9A priority Critical patent/CN115509737A/en
Publication of CN115509737A publication Critical patent/CN115509737A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources where the resource is a machine, e.g. CPUs, servers, terminals
    • G06F 9/5038: Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F 9/54: Interprogram communication
    • G06F 9/544: Buffers; shared memory; pipes

Abstract

The invention discloses a three-dimensional CAD model parallel driving method based on process scheduling, comprising the following steps: acquiring and identifying the sub-parts in the assembly model; creating a task queue Q; creating task scheduling threads and initializing sub-processes; having the sub-processes parse the dependency relationships of the model expressions; obtaining the dependency analysis of each model file and storing it in a file dependency table; splicing the per-file dependency relationships into a model dependency graph, analyzing the graph, computing the maximum depth of each file node, and storing the depth information in the file dependency table; storing the files into Q in order of increasing depth to form a model depth queue; updating and saving the model of each file in the model depth queue in sequence; and, after updating is finished, adding the main model file. The invention decomposes a complex assembly structure into multiple subtasks, each solving the model driving of a single part, and dispatches the model-driving work to background processes according to resource-consumption prediction and dependency relationships.

Description

Three-dimensional CAD model parallel driving method based on process scheduling
Technical Field
The invention belongs to the field of mold manufacturing digitization, and particularly relates to a three-dimensional CAD model parallel driving method based on process scheduling.
Background
The current standardized three-dimensional design modeling method first builds basic part model structures such as screws, pressing plates and push rods; then creates special structures such as slider seats and A/B plates; and finally combines the parts into a whole through assembly technology, constraining the size and position of each assembly sub-part through expression relations between assemblies. Because the kernel of mainstream three-dimensional CAD software (CATIA, NX, SolidWorks, etc.) does not support multithreaded operation, model-driving computation relies only on the single-core performance of the CPU: driving updates of large models are slow and cannot exploit the multi-core parallelism of modern CPUs. Moreover, the assembly structure of a three-dimensional CAD model is fixed at modeling time; when sub-parts of different brands must be interchangeable within an assembly model, all of the sub-parts have to be assembled into the model, with part visibility controlled through constraints. This makes the assembly model structure exceptionally complex: it increases the disk space the model occupies after import, and the expressions of irrelevant parts are still solved during model driving, increasing the time model driving consumes.
Disclosure of Invention
The invention aims to provide a three-dimensional CAD model parallel driving method based on process scheduling. It decomposes a complex assembly structure into multiple single-part model-driving subtasks and, based on a process-scheduling task mechanism, dispatches the model-driving work to background processes according to resource-consumption prediction and dependency relationships. By adopting a user-defined replaceable-assembly-part structure, the resource model of the standard part library needs to maintain only one model structure, and sub-assemblies for parts of different brands no longer have to be created and suppressed within the assembly.
In order to solve the technical problems, the technical scheme of the invention is as follows: a three-dimensional CAD model parallel driving method based on process scheduling comprises the following steps:
s1, acquiring and identifying a sub-part in an assembly model;
s2, creating an empty task queue Q;
s3, creating a task scheduling thread and initializing a sub-process;
s4, analyzing the dependency relationship of the model expression by the subprocess; obtaining the dependency analysis of the model file and storing the dependency analysis into a file dependency table; splicing the dependency relationship of the model files to form a model dependency graph according to a preset structure, analyzing the model dependency graph, calculating the maximum depth of each file node to obtain depth information, and storing the depth information into a file dependency table;
s5, storing the files in the file dependency table into a task queue Q from small to large according to the depths of the files to form a model depth queue, and creating an empty global expression list G for storing expression values of all parts which are driven by the completed model;
s6, updating and storing the models of the files in the model depth queue in sequence;
S7, adding the main model file after the updating is finished.
The specific steps of S1 are as follows:
S11, matching according to the possible sub-parts in the resource-file main model;
S12, reading the binary file of the main model as text;
S13, retrieving, by text matching, all character positions N1, N2, …, Nn that contain the keyword, where n is a positive integer;
S14, taking each Nn as a starting position, searching backwards for the key character that marks the start position Mn of the file-name description string, and extracting the substring (Mn, Nn + 4) to obtain the complete file name.
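A minimal sketch of S12-S14 in Python. Following the detailed description, the delimiter is assumed to be the '\0' byte and the keyword a 4-character extension such as ".prt"; the function name and defaults are illustrative, not from the patent:

```python
def extract_filenames(data: bytes, keyword: bytes = b".prt", delim: bytes = b"\x00") -> list[str]:
    """Scan a main-model binary for sub-part file names (S12-S14).

    For every occurrence Nn of the extension keyword, search backwards
    for the delimiter byte that marks the start Mn of the file-name
    string, then slice (Mn, Nn + len(keyword)) as the complete name.
    """
    names = []
    pos = data.find(keyword)
    while pos != -1:
        start = data.rfind(delim, 0, pos) + 1   # Mn: just after the delimiter
        end = pos + len(keyword)                # Nn + 4 for a 4-char ".prt"
        names.append(data[start:end].decode("ascii", errors="ignore"))
        pos = data.find(keyword, end)
    return names
```

Scanning the raw bytes this way avoids loading the assembly through the CAD platform at all, which is what the description credits for the speed-up.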
The method further comprises the following step: comparing and storing all files in the assembly structure, specifically:
S15, searching the folder containing the main model for files whose names match the complete file names obtained above, and collecting them in a model list H;
S16, adding the main model file itself to the model list H;
S17, copying all files in the model list H into a temporary folder.
The specific steps of S3 are as follows:
S31, obtaining the current CPU core count C through the Windows API GetSystemInfo;
S32, creating (C-1) task management threads Th, and creating within each Th a model-driving process that runs in the background; creating, in each thread Th via CreateEvent, a main-process event Em and a sub-process event Es bound to the background model-driving process; creating, via CreateFileMapping, a shared memory block T bound to the background model-driving process; then waiting for the Em event to be triggered by the sub-process. The shared memory block is divided into two parts: the head part holds the task-information keywords, is 1024 bytes in size, and stores the task type and task state; the body part holds the task parameters and return results, is 10 MB in size, and stores the parameters and return results of the model-driving process;
S33, after the background model-driving process starts, it first performs its initialization work and synchronizes with the management thread in the task-scheduling main process, opening the main-process event, sub-process event and shared memory block created by the main process through OpenEvent and OpenFileMapping; it then fills the head part of the shared memory block with the initialization keyword and fills the body part with true to indicate that initialization succeeded, and triggers the main-process event Em through SetEvent; the sub-process then enters a task-waiting state;
S34, the main process reads the head content of T, confirms that the result is the initialization keyword with a body of true, and enters a loop waiting to obtain tasks from the task queue Q.
The specific steps of S4 are as follows:
S41, loading the model files in the model list H into the task queue Q of the task-scheduling module in sequence;
S42, each task management thread Th in turn obtains the model name of a task from the task queue Q, sets the head part of the corresponding shared memory block T to "dependency analysis", sets the body part to the file name of the model to analyze obtained from the task queue Q, and triggers the sub-process event Es through SetEvent;
S43, when the Es event is signaled, the sub-process obtains the task type from the head part of T and the file name F from the body part, sets the assembly loading option to structure-only loading, opens the file F, obtains all expressions in the file, and, via the expression-reference keywords, identifies the externally dependent expressions and parses out the names of the files they depend on;
S44, the sub-process stores the parsed dependent file names into the body part of T, and triggers the main-process event Em through SetEvent;
S45, the Em event unblocks Th, which obtains the dependency analysis of the current model file and stores it into the file dependency table DAM;
S46, Th continues to obtain tasks from the task queue Q, repeating S43-S45 until the task queue is empty and the tasks of all sub-processes have finished executing;
S47, splicing the dependency relationships of the model files into a model dependency graph with the preset structure and analyzing it;
S48, computing the maximum depth of each file node, and storing the depth information of the file nodes into the DAM.
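The depth computation of S47-S48 can be sketched as follows (a minimal version assuming the dependency table maps each file to the files it depends on; the function and variable names are illustrative, not from the patent):

```python
def node_depths(deps: dict[str, list[str]]) -> dict[str, int]:
    """Compute the maximum depth of every file node (S47-S48).

    A file with no dependencies has depth 0; otherwise its depth is
    1 + the maximum depth among the files it depends on.
    """
    depths: dict[str, int] = {}

    def depth(node: str) -> int:
        if node not in depths:
            children = deps.get(node, [])
            # max over children, or -1 when there are none, so leaves get 0
            depths[node] = 1 + max((depth(c) for c in children), default=-1)
        return depths[node]

    for node in deps:
        depth(node)
    return depths
```

On the worked example from the description (A depends on B and C, B on C, C on D) this yields D=0, C=1, B=2, A=3, matching FIG. 5.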
The specific steps of S5 are as follows:
S51, storing the files in the DAM into the task queue Q in order of increasing depth, and creating an empty global expression list G for storing the expression values of all parts whose model driving has completed;
S52, each task management thread Th in turn obtains the model name of a task from the task queue Q, sets the head part of the corresponding shared memory block T to "update", stores in the body part the file name of the model to drive obtained from the task queue Q, the model expression parameters E obtained from the interface and the database, and the global expression list G, and triggers the sub-process event Es through SetEvent;
when the Es event is signaled, the sub-process obtains the task type from the head part of T and the file name F and expression parameters E from the body part, sets the assembly loading option to structure-only loading, and opens the file F;
the sub-process updates the model expressions in sequence according to the parameters E, saves the file after the model is updated, sets the return result in the body part to true, and appends the expression values Gn produced by the model driving; if any abnormal state occurs during model driving, the return result in the body part is set to false, the error information is appended, and the main-process event Em is triggered through SetEvent.
The main process obtains the part expression values Gn of this drive from the body part of T and stores Gn, together with the file name, into G.
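The depth-ordered driving of S51/S6 and the accumulation of the global expression list G can be sketched as follows (illustrative names; in the patent each file is dispatched through a shared memory block to a background CAD process rather than called directly):

```python
def drive_models(dam: dict[str, int], drive) -> dict[str, dict]:
    """Drive all model files in order of increasing depth (S51/S6).

    `dam` maps file name -> node depth; `drive(name)` performs the
    model update for one file and returns its expression values Gn.
    The returned dict plays the role of the global expression list G.
    """
    queue = sorted(dam, key=dam.get)          # model depth queue, shallow first
    G: dict[str, dict] = {}
    for name in queue:
        G[name] = drive(name)                 # store Gn together with the file name
    return G
```

Driving shallow nodes first guarantees that, by the time a file is driven, the expression values of everything it depends on are already present in G.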
The method further comprises the following step: assembly replacement, specifically:
the files to be replaced are placed in the main-model parameter table under the keyword REPLACE, and replacement proceeds according to a preset replacement rule, for example: when the user selects TYPE = A, partA is replaced with partA1 and partB with partB1; when TYPE = B, partA is replaced with partA1 and partB with partB2; when TYPE = C, partA is replaced with partA2 and partB with partB1; when TYPE = D, partA is replaced with partA2 and partB with partB2;
the names of the files to be replaced are computed from the parameters selected by the user, and the replacement files are copied and renamed into the temporary folder.
The method further comprises the following step: dependent-expression updating, specifically: during the model driving performed by a sub-process, if there is a dependent expression Po = "partX",X (where X is an arbitrary expression name), the sub-process creates a temporary file in the process, names it partX, and assigns the expression parameters corresponding to partX in G to the temporary file partX, so that Po can correctly reference its dependency.
There is also provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method as claimed in any one of the above when executing the computer program.
There is also provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method according to any one of the preceding claims.
Compared with the prior art, the invention has the following beneficial effects:
1. Through the process-scheduling mechanism, multi-process model driving is realized, the multi-core computing performance of the CPU is fully utilized, and the time consumed by driving large model structures is ultimately reduced severalfold.
2. Through the assembly-replacement technique, the repeated assembly of sub-parts of different brands in the resource model is avoided, the disk space consumed by the model is reduced, and the model driving speed is increased.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of creating a task scheduling thread and initializing a subprocess according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a step of analyzing a dependency relationship of a model expression according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a step of updating a queue driven by a model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the dependency splicing structure in the embodiment of the present invention.
Detailed Description
The invention is further illustrated by the following specific examples and figures.
1. Assembly part parsing
1.1 Identifying the possible sub-parts of the resource-file main model
The invention locates the associated sub-files by searching the binary content of the resource-file main model directly; compared with a search that traverses the original assembly structure through the three-dimensional CAD platform, search performance is improved by 1-2 orders of magnitude:
Step 1, read the binary file of the main model as text;
Step 2, search for all character positions N1, N2, …, Nn containing the keyword ".prt"/".CATProduct";
Step 3, taking each character position Nn found in step 2 as a starting position, search backwards for the '\0' key character, which marks the start position Mn of the file-name description string, and extract the substring (Mn, Nn + 4) to obtain the complete file name.
1.2 Comparing and storing all files in the assembly
Step 1, using the file names found in step 3 of 1.1, search the folder containing the main model for files with the corresponding names, and collect them in a model list H.
Step 2, add the main model file itself to the model list H.
Step 3, copy all files in the model list H into a temporary folder temp.
2. Task scheduling module
2.1 Model-driving sub-process initialization
Step 1, the task-scheduling service obtains the current CPU core count C through the Windows API GetSystemInfo, and creates an empty thread-safe queue Q.
Step 2, the task-scheduling service creates C-1 task management threads Th (C-1 so that one core is reserved for running the main process). A model-driving process running in the background is created in each Th. A main-process event Em and a sub-process event Es bound to the background model-driving process are created in each thread Th through CreateEvent, and a shared memory block T bound to the background model-driving process is created through CreateFileMapping. Events and memory blocks are named by the rule: key + process ID. Th then waits for the Em event to be triggered by the sub-process.
The shared memory block is divided into two parts: the head part holds the task-information keywords, is 1024 bytes in size, and stores the type and state of the task; the body part holds the task parameters and return results, is 10 MB in size, and stores the parameters and return results of the model-driving process.
Step 3, after the background model-driving process starts, it first performs its initialization work and synchronizes with the management thread in the task-scheduling main process, opening the main-process event, sub-process event and shared memory block created by the main process through OpenEvent and OpenFileMapping. It then fills the head of the memory block with the initialization keyword and fills the body with true to indicate that initialization succeeded, and triggers the main-process event Em through SetEvent. The sub-process enters a task-waiting state.
Step 4, the main process reads the head content of T, confirms the result is the initialization keyword with a body of true, and enters a loop waiting to obtain tasks from the task queue Q.
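The head/body split of the shared block can be sketched as follows. The 1024-byte head and the task type/state fields follow the description; the struct layout and field widths are illustrative assumptions, not from the patent:

```python
import struct

HEAD_SIZE = 1024  # head: task-information keywords (type + state)

def pack_block(task_type: str, state: str, body: bytes) -> bytes:
    """Serialize one task into the head/body layout of shared block T."""
    head = struct.pack("64s64s", task_type.encode(), state.encode())
    head = head.ljust(HEAD_SIZE, b"\x00")       # pad head to exactly 1024 bytes
    return head + body

def unpack_block(block: bytes) -> tuple[str, str, bytes]:
    """Recover task type, task state and body from a shared block."""
    t, s = struct.unpack("64s64s", block[:128])
    return t.rstrip(b"\x00").decode(), s.rstrip(b"\x00").decode(), block[HEAD_SIZE:]
```

In the patent the bytes live in a CreateFileMapping region shared with the background CAD process; here ordinary `bytes` stand in for that region.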
2.2 Model-expression dependency parsing task
Step 1, load the model files from the model list of 1.2 into the task queue Q of the task-scheduling module in sequence.
Step 2, each task management thread Th in turn obtains the model name of a task from the task queue Q, sets the head of the corresponding shared memory block T to "dependency analysis", and sets the body to the file name of the model to analyze obtained from the task queue Q, then triggers the sub-process event Es through SetEvent.
Step 3, when the Es event is signaled, the sub-process obtains the task type from the head of T and the file name F from the body, sets the assembly loading option to structure-only loading, opens the file F, obtains all expressions in the file, and, via the expression-reference keywords, identifies all externally dependent expressions and parses out the names of the files they depend on.
Step 4, the sub-process stores the parsed dependent file names into the body of T and triggers the main-process event Em through SetEvent.
Step 5, the Em event unblocks thread Th, which obtains the dependency analysis DA of the current model file and stores it into the file dependency table DAM.
Step 6, thread Th continues to obtain tasks from the task queue Q, repeating steps 3-5 until the task queue is empty and the tasks of all sub-processes have finished executing.
Step 7, splice all file dependency relationships into a graph structure. For example: A depends on B and C, B depends on C, and C depends on D; the spliced graph then has the pattern shown in FIG. 5.
Step 8, compute the maximum depth of each file node, as shown in FIG. 5: the depth of D is 0 because it has no dependencies; the depth of C is 1 because it depends on D; the depth of B is 2 because it depends on C, which has depth 1; the depth of A is 3 because it depends on B (depth 2) and C (depth 1), and the maximum, 2, is taken. The file-node depth information is stored in the DAM.
2.3 Model-expression driving task
Step 1, store the files in the DAM into the task queue Q in order of increasing depth, and create an empty global expression list G for storing the expression values of all parts whose model driving has completed.
Step 2, each task management thread Th in turn obtains the model name of a task from the task queue Q, sets the head of the corresponding shared memory block T to "update", and stores in the body the file name of the model to drive obtained from the task queue Q, the model expression parameters E obtained from the interface and the database, and the global expression list G, then triggers the sub-process event Es through SetEvent.
Step 3, when the Es event is signaled, the sub-process obtains the task type from the head of T and the file name F and expression parameters E from the body, sets the assembly loading option to structure-only loading, and opens the file F.
Step 4, update the model expressions in sequence according to the parameters E, and save the file after the model is updated. Set the return result in the body to true and append the expression values Gn produced by the model driving. If any abnormal state occurs during model driving, set the body return result to false and append the error information. Then trigger the main-process event Em through SetEvent.
Step 5, the main process obtains the part expression values Gn of this drive from the body of T and stores Gn, together with the file name, into G.
Step 6, after all model driving has completed, the main model file is added directly to complete the driving work of the whole model.
3. Assembly replacement and dependent-expression updating
3.1 Assembly replacement
Step 1, the files to be replaced are configured in the parameter table of the main model under the REPLACE keyword, in the following format: when the user selects TYPE = A, partA is replaced with partA1 and partB with partB1; when TYPE = B, partA is replaced with partA1 and partB with partB2; when TYPE = C, partA is replaced with partA2 and partB with partB1; when TYPE = D, partA is replaced with partA2 and partB with partB2.
Step 2, the names of the files to be replaced are computed from the parameters selected by the user, and the replacement files are copied and renamed into the temp folder.
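The preset replacement rule above amounts to a lookup table keyed by the user's TYPE selection; a minimal sketch (illustrative — in the patent the rule lives under the REPLACE keyword in the main-model parameter table):

```python
# Preset replacement rule from the description: TYPE selects which
# variant file replaces each placeholder part in the assembly.
REPLACE_RULE = {
    "A": {"partA": "partA1", "partB": "partB1"},
    "B": {"partA": "partA1", "partB": "partB2"},
    "C": {"partA": "partA2", "partB": "partB1"},
    "D": {"partA": "partA2", "partB": "partB2"},
}

def files_to_replace(type_choice: str) -> dict[str, str]:
    """Compute placeholder -> replacement file names for the selected TYPE."""
    return REPLACE_RULE[type_choice]
```

The replacement files named here would then be copied and renamed into the temp folder, as in step 2.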
3.2 Dependent-expression updating
Step 1, during the model driving performed by a sub-process, if a dependent expression exists, for example Po = "partA",A, the sub-process creates a temporary file in the process, names it partA, and assigns the expression parameters corresponding to partA in G to the temporary file partA. Po is then able to correctly reference its dependency.
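The resolution step can be simulated with a dictionary lookup (illustrative only; the real mechanism creates a temporary CAD file named after the dependency so that the CAD system itself resolves the expression reference):

```python
def resolve_dependent_expression(ref: str, G: dict[str, dict]) -> float:
    """Resolve a reference of the form '"partA",A' against the global
    expression list G, mimicking the temporary-file substitution."""
    file_part, expr_name = ref.split(",")
    file_name = file_part.strip('"')
    # In the patent, a temp file named `file_name` is created and the
    # expressions G[file_name] are assigned to it; here we just look up.
    return G[file_name][expr_name]
```

Because files are driven in depth order, G already holds the driven expression values of partA by the time any expression in a deeper file references it.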
It will be understood by those skilled in the art that the foregoing is only an exemplary embodiment of the present invention, and is not intended to limit the invention to the particular forms disclosed, since various modifications, substitutions and improvements within the spirit and scope of the invention are possible and within the scope of the appended claims.

Claims (10)

1. A three-dimensional CAD model parallel driving method based on process scheduling is characterized by comprising the following steps:
s1, acquiring and identifying a sub-part in an assembly model;
s2, creating an empty task queue Q;
s3, creating a task scheduling thread and initializing a sub-process;
s4, analyzing the dependency relationship of the model expression by the subprocess; obtaining the dependency analysis of the model file and storing the dependency analysis into a file dependency table; splicing the dependency relationship of the model files to form a model dependency graph in a preset structure, analyzing the model dependency graph, calculating the maximum depth of each file node to obtain depth information, and storing the depth information into a file dependency table;
s5, storing the files in the file dependency table into a task queue Q from small to large according to the depth of the files to form a model depth queue, and creating an empty global expression list G for storing expression values of all parts driven by the completed model;
s6, updating and storing the models of the files in the model depth queue in sequence;
S7, adding the main model file after the updating is finished.
2. The three-dimensional CAD model parallel driving method based on process scheduling according to claim 1, wherein the specific steps of S1 are:
S11, matching according to the possible sub-parts in the resource-file main model;
S12, reading the binary file of the main model as text;
S13, retrieving, by text matching, all character positions N1, N2, …, Nn that contain the keyword, where n is a positive integer;
S14, taking each Nn as a starting position, searching backwards for the key character that marks the start position Mn of the file-name description string, and extracting the substring (Mn, Nn + 4) to obtain the complete file name.
3. The three-dimensional CAD model parallel driving method based on process scheduling according to claim 2, further comprising the step of comparing and storing all files in the assembly structure, specifically:
S15, searching the folder containing the main model for files whose names match the complete file names obtained above, and collecting them in a model list H;
S16, adding the main model file itself to the model list H;
S17, copying all files in the model list H into a temporary folder.
4. The three-dimensional CAD model parallel driving method based on process scheduling according to claim 3, wherein the specific steps of S3 are:
S31, obtaining the current CPU core count C through the Windows API GetSystemInfo;
S32, creating (C-1) task management threads Th, and creating within each Th a model-driving process that runs in the background; creating, in each thread Th via CreateEvent, a main-process event Em and a sub-process event Es bound to the background model-driving process; creating, via CreateFileMapping, a shared memory block T bound to the background model-driving process; then waiting for the Em event to be triggered by the sub-process; the shared memory block is divided into two parts: the head part holds the task-information keywords, is 1024 bytes in size, and stores the task type and task state; the body part holds the task parameters and return results, is 10 MB in size, and stores the parameters and return results of the model-driving process;
S33, after the background model-driving process starts, it first performs its initialization work and synchronizes with the management thread in the task-scheduling main process, opening the main-process event, sub-process event and shared memory block created by the main process through OpenEvent and OpenFileMapping; it then fills the head part of the shared memory block with the initialization keyword and fills the body part with true to indicate that initialization succeeded, and triggers the main-process event Em through SetEvent; the sub-process then enters a task-waiting state;
S34, the main process reads the head content of T, confirms that the result is the initialization keyword with a body of true, and enters a loop waiting to obtain tasks from the task queue Q.
5. The three-dimensional CAD model parallel driving method based on process scheduling according to claim 4, wherein the specific steps of S4 are:
S41, loading the model files in the model list H into the task queue Q of the task-scheduling module in sequence;
S42, each task management thread Th in turn obtains the model name of a task from the task queue Q, sets the head part of the corresponding shared memory block T to "dependency analysis", sets the body part to the file name of the model to analyze obtained from the task queue Q, and triggers the sub-process event Es through SetEvent;
S43, when the Es event is signaled, the sub-process obtains the task type from the head part of T and the file name F from the body part, sets the assembly loading option to structure-only loading, opens the file F, obtains all expressions in the file, and, via the expression-reference keywords, identifies the externally dependent expressions and parses out the names of the files they depend on;
S44, the sub-process stores the parsed dependent file names into the body part of T, and triggers the main-process event Em through SetEvent;
S45, the Em event unblocks Th, which obtains the dependency analysis of the current model file and stores it into the file dependency table DAM;
S46, Th continues to obtain tasks from the task queue Q, repeating S43-S45 until the task queue is empty and the tasks of all sub-processes have finished executing;
S47, splicing the dependency relationships of the model files into a model dependency graph with the preset structure and analyzing it;
S48, computing the maximum depth of each file node, and storing the depth information of the file nodes into the DAM.
6. The three-dimensional CAD model parallel driving method based on process scheduling as claimed in claim 5, wherein the specific steps of S5 are:
s51, storing the files in the DAM into the task queue Q in order of file depth from small to large, and creating an empty global expression list G for storing the expression values of all parts whose model drive has completed;
s52, each task management thread Th sequentially obtains the model name corresponding to a task from the task queue Q, sets the head part of the corresponding shared memory block T to Update, stores in the body part the name of the model file to be driven as obtained from the task queue Q, the model expression parameters E obtained from the interface and the database, and the global expression list G, and triggers the sub-process event Es through SetEvent;
the sub-process is released from waiting on event Es, acquires the task type from the head part of T and the file name F and expression parameters E from the body part, sets the assembly loading option to load-structure-only, and opens the file F;
the model expressions are updated in sequence according to the parameters E, the file is saved after the model is updated, the return result in the body part is set to true, and the expression values Gn produced by the model drive are added; if any abnormal state occurs during the model drive, the return result of the body part is set to false, error information is added, and the main process event Em is triggered through SetEvent.
The main process then obtains the part expression values Gn of this drive from the body part of T, and stores Gn, together with the file name, into the global expression list G.
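The depth-ordered driving of claim 6 can be sketched as follows. `drive_model` is a hypothetical stand-in for the sub-process drive of S52 onward, and the file names, parameters, and `file::expression` key scheme are invented for illustration:

```python
from collections import deque

def build_task_queue(depths: dict[str, int]) -> deque:
    """S51 sketch: enqueue files by ascending dependency depth, so every
    part is driven before any assembly that references it."""
    return deque(sorted(depths, key=depths.get))

def drive_model(file: str, params: dict) -> dict:
    """Hypothetical stand-in for the sub-process drive: returns the
    expression values Gn produced by driving `file` with `params`."""
    return {f"{file}::{k}": v for k, v in params.items()}

G: dict = {}   # global expression list G, filled in by the main process
order = []     # record of the drive order, for inspection only
q = build_task_queue({"base": 0, "partA": 1, "asm": 2})
while q:
    f = q.popleft()
    order.append(f)
    G.update(drive_model(f, {"length": 10}))   # merge Gn into G with the file name
```

A real implementation would hand each dequeued file to a task management thread Th and its sub-process rather than driving it inline; the loop above only shows the ordering and the merge of Gn into G.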
7. The method for parallel driving of three-dimensional CAD models based on process scheduling as claimed in claim 5, further comprising the step of assembly replacement, which specifically is:
the file to be replaced is placed in the main model parameter table through the keyword REPLACE, and is replaced according to a preset replacement rule, wherein the preset replacement rule is as follows: when the user selects TYPE = A, partA is replaced with partA1 and partB with partB1; when TYPE = B, partA is replaced with partA1 and partB with partB2; when TYPE = C, partA is replaced with partA2 and partB with partB1; when TYPE = D, partA is replaced with partA2 and partB with partB2;
and the name of the replacement file is calculated according to the parameters selected by the user, and the replacement file is copied and renamed into the temporary folder.
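The TYPE-to-replacement rule of claim 7 is a plain lookup table, and the final copy-and-rename step can be sketched as below. The extension-free file names and the flat source directory are illustrative assumptions:

```python
import pathlib
import shutil
import tempfile

# The four preset replacement rules from claim 7, keyed by the user's TYPE choice.
REPLACE_RULES = {
    "A": {"partA": "partA1", "partB": "partB1"},
    "B": {"partA": "partA1", "partB": "partB2"},
    "C": {"partA": "partA2", "partB": "partB1"},
    "D": {"partA": "partA2", "partB": "partB2"},
}

def resolve_replacements(type_choice: str) -> dict[str, str]:
    """Return the original-name -> replacement-name mapping for a TYPE."""
    return REPLACE_RULES[type_choice]

def stage_replacements(type_choice: str, source_dir: str) -> pathlib.Path:
    """Copy each replacement file into a temporary folder, renamed to the
    original part name it stands in for (the copy-and-rename step of claim 7)."""
    tmp = pathlib.Path(tempfile.mkdtemp())
    for original, replacement in resolve_replacements(type_choice).items():
        src = pathlib.Path(source_dir) / replacement
        if src.exists():
            shutil.copy(src, tmp / original)   # copy under the original name
    return tmp
```

Because the staged copy carries the original part name, downstream assemblies reference it without any edit to their own structure.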
8. The method for parallel driving of three-dimensional CAD models based on process scheduling as claimed in claim 7, further comprising the step of dependent expression update, which specifically is: during the sub-process's model drive, if a dependent expression Po = 'partX' is present, wherein X is any character, a temporary file named partX is created in the process, the expression parameters corresponding to partX in G are assigned to the temporary file partX, and Po can then correctly reference its dependent item.
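A minimal sketch of the dependent-expression update in claim 8, assuming G's keys are prefixed with the owning file name and that a flat key=value placeholder file is sufficient; both are assumptions, not details from the claims:

```python
import pathlib
import tempfile

def materialize_dependency(name: str, g: dict, workdir: str) -> pathlib.Path:
    """When a dependent expression such as Po = 'partX' is encountered during
    a drive, create a placeholder file of that name in the working directory
    and write into it the expression parameters the global list G holds for
    that part, so the reference resolves without the real file."""
    path = pathlib.Path(workdir) / name
    # select only the expressions belonging to this part (key scheme assumed)
    params = {k: v for k, v in g.items() if k.startswith(f"{name}::")}
    path.write_text("\n".join(f"{k}={v}" for k, v in sorted(params.items())))
    return path
```

In the real system the placeholder would be a CAD part file the modeler can open; the text file here only demonstrates the name-and-parameter transfer.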
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any one of claims 1-8 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202211028056.9A 2022-08-25 2022-08-25 Three-dimensional CAD model parallel driving method based on process scheduling Pending CN115509737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211028056.9A CN115509737A (en) 2022-08-25 2022-08-25 Three-dimensional CAD model parallel driving method based on process scheduling

Publications (1)

Publication Number Publication Date
CN115509737A true CN115509737A (en) 2022-12-23

Family

ID=84501669

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination