CN111597035A - Simulation engine time advancing method and system based on multiple threads - Google Patents

Simulation engine time advancing method and system based on multiple threads

Info

Publication number
CN111597035A
Authority
CN
China
Prior art keywords
model
time
event
events
threads
Prior art date
Legal status
Granted
Application number
CN202010294261.4A
Other languages
Chinese (zh)
Other versions
CN111597035B (en)
Inventor
陈秋瑞
卿杜政
蔡继红
杨凯
谢宝娣
王清云
周敏
梅铮
霍达
刘晨
张晗
徐筠
杨涵博
李志平
李亚雯
Current Assignee
Beijing Simulation Center
Original Assignee
Beijing Simulation Center
Priority date
Filing date
Publication date
Application filed by Beijing Simulation Center filed Critical Beijing Simulation Center
Priority to CN202010294261.4A priority Critical patent/CN111597035B/en
Publication of CN111597035A publication Critical patent/CN111597035A/en
Application granted granted Critical
Publication of CN111597035B publication Critical patent/CN111597035B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/542Event management; Broadcasting; Multicasting; Notifications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Abstract

The invention discloses a multithreading-based simulation engine time advancing method and system. The method comprises the following steps: creating an engine service thread and a plurality of model service threads according to the number of cores of the simulation host; instantiating the simulation model to obtain a plurality of model instances, sending each model instance to one of the plurality of model service threads, and creating an event for the received model instance in the model service thread; and executing all events in sequence according to the creation time order of all events, based on communication between each of the plurality of model service threads and the engine service thread. By advancing simulation engine time in parallel across multiple threads, the invention solves the problem of low efficiency of the traditional simulation engine time advancing method.

Description

Simulation engine time advancing method and system based on multiple threads
Technical Field
The invention relates to the technical field of simulation engine time advancing, and more particularly to a multithreading-based simulation engine time advancing method and system.
Background
Time advancing is an important function of a simulation engine: it ensures that the logic of each model remains correct while the simulation runs. The traditional time advancing method is based on an event queue and is implemented as follows: an event queue ordered by time is created, events created by the model components are added to the queue, the event with the minimum time is repeatedly taken out of the queue and executed, and the simulation ends when the queue is empty; the simulation time is advanced step by step as the events are executed. Because it relies on a centralized event queue, the traditional method couples the models tightly, cannot exploit the rapidly evolving multi-core CPUs of modern computers, and is therefore difficult to improve in simulation efficiency.
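For illustration only, a minimal C++ sketch of the traditional centralized event-queue advancement described above (this code is not part of the patent; the Event structure, the lambda handlers, and the sample times are assumptions):

```cpp
// Minimal sketch of the traditional single-queue time advancement
// described above (illustrative only; names are not from the patent).
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

struct Event {
    double time;                   // simulation time at which the event fires
    std::function<void()> action;  // model-component callback
};

// Order the priority queue so that the event with the smallest time is on top.
struct LaterFirst {
    bool operator()(const Event& a, const Event& b) const { return a.time > b.time; }
};

int main() {
    std::priority_queue<Event, std::vector<Event>, LaterFirst> queue;
    double now = 0.0;              // single, global simulation clock

    // Seed the queue with events created by model components.
    queue.push({1.0, [] { std::cout << "model A step\n"; }});
    queue.push({0.5, [] { std::cout << "model B step\n"; }});

    // Take the event with the minimum time, execute it, advance the clock,
    // and stop when the queue is empty.
    while (!queue.empty()) {
        Event e = queue.top();
        queue.pop();
        now = e.time;
        e.action();
    }
    std::cout << "simulation finished at t=" << now << "\n";
}
```

Every model component shares this single queue and single clock, which is exactly the coupling the invention sets out to remove.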
Disclosure of Invention
The invention aims to provide a multithreading-based simulation engine time advancing method that solves the problem of low efficiency of the traditional simulation engine time advancing method by advancing simulation engine time in parallel across multiple threads. It is another object of the present invention to provide a multithreading-based simulation engine time advancing system.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention discloses a simulation engine time advancing method based on multithreading, which comprises the following steps:
creating an engine service thread and a plurality of model service threads according to the core number of the simulation host;
instantiating the simulation model to obtain a plurality of model instances, sending each model instance to one of a plurality of model service threads, and creating an event for the received model instance in the model service threads;
and executing all events in sequence according to the creation time sequence of all events based on the communication between the plurality of model service threads and the engine service thread respectively.
Preferably, instantiating the simulation model to obtain a plurality of model instances and sending each model instance to one of the plurality of model service threads specifically includes:
respectively forming a plurality of model instances according to each simulation model;
determining a model service thread corresponding to each model instance according to the obtained computational complexity of the plurality of model instances;
and respectively sending the plurality of model instances to the corresponding model service threads.
Preferably, the sequentially executing all events according to the creation time sequence of all events based on the communication between the plurality of model service threads and the engine service thread respectively specifically includes:
establishing an event queue in each model service thread;
arranging all events in each model service thread in the event queue according to the creation time sequence of the events;
and executing the events in sequence according to the creation time sequence of the events in all event queues based on the communication between the plurality of model service threads and the engine service threads respectively.
Preferably, the sequentially executing the events according to the creation time sequence of the events in all the event queues based on the communication between the plurality of model service threads and the engine service thread respectively specifically includes:
for each model service thread, acquiring and executing the event with the earliest creation time in the model service thread, and setting the creation time t_imin of that earliest event as the logical time t_in of the corresponding model service thread;
setting the creation time t_inext of the next-earliest event as the to-be-advanced time, forming a time advance request message according to the to-be-advanced time, and transmitting the message to the engine service thread so that the engine service thread forwards the to-be-advanced time to all other model service threads;
and if the logical time t_in of the model service thread is less than the to-be-advanced times of all other model service threads, advancing the time, acquiring and executing the earliest remaining event in the model service thread, and repeating until all events in the event queue have been executed.
The invention also discloses a simulation engine time advancing system based on multithreading, which comprises:
the multithreading establishing unit is used for establishing an engine service thread and a plurality of model service threads according to the core number of the simulation host;
the model instance creating unit is used for instantiating the simulation model to obtain a plurality of model instances, sending each model instance to one of a plurality of model service threads, and creating an event for the received model instance in the model service threads;
and the event execution unit is used for sequentially executing all events according to the creation time sequence of all events based on the communication between the plurality of model service threads and the engine service thread.
Preferably, the model instance creating unit is specifically configured to form a plurality of model instances according to each simulation model, and determine a model service thread corresponding to each model instance according to the obtained computational complexity of the plurality of model instances;
and respectively sending the plurality of model instances to the corresponding model service threads.
Preferably, the event execution unit is specifically configured to establish an event queue in each model service thread, arrange all events in each model service thread in the event queue according to a creation time order of the events, and execute the events in sequence according to the creation time order of the events in all event queues based on communication between the plurality of model service threads and the engine service threads, respectively.
Preferably, the event execution unit is specifically configured to, for each model service thread, acquire and execute the event with the earliest creation time in the model service thread, set the creation time t_imin of that earliest event as the logical time t_in of the corresponding model service thread, set the creation time t_inext of the next-earliest event as the to-be-advanced time, form a time advance request message according to the to-be-advanced time and transmit it to the engine service thread so that the engine service thread forwards the to-be-advanced time to all other model service threads, and, if the logical time t_in of the model service thread is less than the to-be-advanced times of all other model service threads, advance the time and acquire and execute the earliest remaining event in the model service thread, until all events in the event queue have been executed.
The invention also discloses a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor,
the processor, when executing the program, implements the method as described above.
The invention also discloses a computer-readable medium, having stored thereon a computer program,
which when executed by a processor implements the method as described above.
The invention creates one engine service thread and a plurality of model service threads according to the number of cores of the simulation host; because the number of model service threads is chosen with the host's core count in mind, the computing performance of the host is used to the fullest. The simulation models are instantiated and the instances are distributed to the model service threads, each model service thread creates events for its model instances, and all events are executed in order on the basis of communication among the model service threads, so that the time order of event execution is correct. Model computations for different model instances run in different model service threads while the time order remains correct, which improves simulation efficiency. The invention can fully exploit the advantages of multithreading and is scalable: model instances can be distributed according to their computational complexity, which effectively reduces the idle time of the host cores and further improves the utilization of the host's computing capacity. Within each model service thread the model instances are still advanced through an event queue, while the threads advance through inter-thread communication; this reduces the coupling between models, reduces the use of thread locks and mutexes, lowers system complexity, and reduces the overhead required for inter-thread synchronization.
Drawings
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
FIG. 1 is a flow diagram illustrating one embodiment of a multithreading-based simulation engine time advancing method of the present invention;
FIG. 2 is a second flowchart of an embodiment of a multithreading-based simulation engine time advancing method of the present invention;
FIG. 3 is a third flowchart of an embodiment of a multithreading-based simulation engine time advancing method of the present invention;
FIG. 4 is a fourth flowchart of a particular embodiment of a multithreading-based simulation engine time advancing method of the present invention;
FIG. 5 is a schematic diagram illustrating time advancing in one specific embodiment of the multithreading-based simulation engine time advancing method of the present invention;
FIG. 6 is a block diagram illustrating one embodiment of a multithreading-based simulation engine time advancing system of the present invention;
FIG. 7 illustrates a schematic diagram of a computer device suitable for use in implementing embodiments of the present invention.
Detailed Description
In order to more clearly illustrate the invention, the invention is further described below with reference to preferred embodiments and the accompanying drawings. Similar parts in the figures are denoted by the same reference numerals. It is to be understood by persons skilled in the art that the following detailed description is illustrative and not restrictive, and is not to be taken as limiting the scope of the invention.
The traditional time advancing method is based on an event queue: an event queue ordered by time is established, events created by the model components are added to the queue, the event with the minimum time is repeatedly taken out of the queue and executed, and the simulation ends when the queue is empty; the simulation time is advanced step by step as the events are executed. Because it uses a centralized event queue, the traditional method couples the models tightly, cannot exploit the rapidly evolving multi-core CPUs of modern computers, and is therefore difficult to improve in simulation efficiency. The invention provides a multithreading-based simulation engine time advancing method that advances simulation engine time in parallel across multiple threads and thus solves the problem of low efficiency of the traditional simulation engine time advancing method.
Based on this, according to one aspect of the invention, the present embodiment discloses a multithreading-based simulation engine time advancing method. As shown in fig. 1, in this embodiment, the method includes:
S100: Creating an engine service thread and a plurality of model service threads according to the number of cores of the simulation host.
S200: The simulation model is instantiated to obtain a plurality of model instances, each model instance is sent to one of the plurality of model service threads, and an event is created for the received model instance in each model service thread.
S300: Executing all events in sequence according to the creation time order of all events, based on communication between each of the plurality of model service threads and the engine service thread.
The invention creates one engine service thread and a plurality of model service threads according to the number of cores of the simulation host; because the number of model service threads is chosen with the host's core count in mind, the computing performance of the host is used to the fullest. The simulation models are instantiated and the instances are distributed to the model service threads, each model service thread creates events for its model instances, and all events are executed in order on the basis of communication among the model service threads, so that the time order of event execution is correct. Model computations for different model instances run in different model service threads while the time order remains correct, which improves simulation efficiency. The invention can fully exploit the advantages of multithreading and is scalable: model instances can be distributed according to their computational complexity, which effectively reduces the idle time of the host cores and further improves the utilization of the host's computing capacity. Within each model service thread the model instances are still advanced through an event queue, while the threads advance through inter-thread communication; this reduces the coupling between models, reduces the use of thread locks and mutexes, lowers system complexity, and reduces the overhead required for inter-thread synchronization.
In a preferred embodiment, in S100, when the number of cores of the simulation host is n, one engine service thread and at most n-1 model service threads are created. In practical applications, no more than n-1 model service threads are set; preferably, exactly n-1 model service threads are set so as to fully utilize the simulation resources of an n-core host and improve simulation efficiency.
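As a rough illustration of S100 (not taken from the patent; the thread bodies are empty stubs and all names are assumptions), the following C++ sketch creates one engine service thread and n-1 model service threads for an n-core host:

```cpp
// Illustrative sketch only: create one engine service thread and up to
// n-1 model service threads for an n-core host. Thread bodies are stubs.
#include <iostream>
#include <thread>
#include <vector>

int main() {
    unsigned n = std::thread::hardware_concurrency();  // number of host cores
    if (n < 2) n = 2;                                   // assume at least two cores

    std::thread engineThread([] {
        // ServiceThread: would coordinate time advancement (stub).
    });

    std::vector<std::thread> modelThreads;
    for (unsigned i = 0; i + 1 < n; ++i) {              // at most n-1 model threads
        modelThreads.emplace_back([i] {
            // WorkerThread[i]: would execute its own event queue (stub).
        });
    }

    engineThread.join();
    for (auto& t : modelThreads) t.join();
    std::cout << "created 1 engine thread and " << (n - 1)
              << " model service threads\n";
}
```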
In a preferred embodiment, as shown in fig. 2, instantiating the simulation model in S200 to obtain a plurality of model instances and sending each model instance to one of the plurality of model service threads specifically includes:
S210: Respectively forming a plurality of model instances according to each simulation model.
S220: Determining the model service thread corresponding to each model instance according to the obtained computational complexity of the plurality of model instances.
S230: Respectively sending the plurality of model instances to the corresponding model service threads.
It will be appreciated that multiple model instances may be formed from a single simulation model. Because the invention performs the simulation computation in multiple threads, the model instances within each thread must carry out their computations sequentially in time order. Different model instances have different computational complexity; if the instances are distributed carelessly, one thread may still be computing a complex model instance while the other model service threads sit idle, wasting resources. In the preferred embodiment, the simulation computation time required by each model instance can therefore be predicted from its computational complexity, and all model instances are distributed to the model service threads according to their predicted computation times, so as to reduce the idle time of the model service threads and improve the completion efficiency of the simulation task.
In a specific example, suppose one model instance has high computational complexity and several model instances executed after it have low computational complexity. While one model service thread executes the high-complexity instance, the low-complexity instances can be executed in other model service threads; the instances need not be allocated to the model service threads strictly in the execution order of the models, and the model service thread corresponding to each model instance can be determined according to this allocation principle. This is only one specific example of determining the model service thread for each model instance according to the obtained computational complexity of the plurality of model instances; in other embodiments, the model instances may be distributed in other manners, which is not limited here.
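A minimal sketch of one possible complexity-based assignment consistent with the allocation principle above (an assumption for illustration, not the patent's algorithm; the instance names, complexity values, and the greedy least-loaded rule are all hypothetical):

```cpp
// Illustrative sketch: assign model instances to model service threads by
// estimated computational complexity, placing each remaining heaviest
// instance onto the currently least-loaded thread.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct ModelInstance {
    std::string name;
    double complexity;  // predicted simulation computation time (assumed metric)
};

int main() {
    const int numThreads = 3;  // e.g. NProcs - 1 model service threads
    std::vector<ModelInstance> instances = {
        {"ModelA#1", 8.0}, {"ModelA#2", 2.0}, {"ModelB#1", 5.0},
        {"ModelB#2", 1.0}, {"ModelC#1", 3.0},
    };

    // Sort by decreasing complexity so heavy instances are placed first.
    std::sort(instances.begin(), instances.end(),
              [](const ModelInstance& a, const ModelInstance& b) {
                  return a.complexity > b.complexity;
              });

    std::vector<double> load(numThreads, 0.0);
    std::vector<std::vector<std::string>> assignment(numThreads);
    for (const auto& m : instances) {
        // Pick the model service thread with the smallest accumulated load.
        int t = static_cast<int>(
            std::min_element(load.begin(), load.end()) - load.begin());
        load[t] += m.complexity;
        assignment[t].push_back(m.name);
    }

    for (int t = 0; t < numThreads; ++t) {
        std::cout << "WorkerThread[" << t << "] (load " << load[t] << "):";
        for (const auto& name : assignment[t]) std::cout << ' ' << name;
        std::cout << '\n';
    }
}
```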
In a preferred embodiment, as shown in fig. 3, the S300 may specifically include:
S310: An event queue is established in each model service thread.
S320: All events in each model service thread are arranged in the event queue according to the creation time order of the events.
S330: The events are executed in sequence according to the creation time order of the events in all event queues, based on communication between each of the plurality of model service threads and the engine service thread.
It can be understood that, to keep the execution order of the model instances correct, the preferred embodiment guarantees the correct execution order of the events in each model service thread by building an event queue in the thread, inserting events into the queue in order of their creation time, and taking events out of the queue and executing them in that order. Each model instance keeps creating new events during the simulation; the events created for a model instance may be scheduled at a fixed period or at arbitrary times, i.e., the creation time sequence of the events can be preset according to actual needs. Each event carries an attribute marking its creation time, and the model service thread orders its event queue by this attribute.
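For illustration, a small C++ sketch (not from the patent; the Event fields and the queue interface are assumptions) of a per-thread event queue that always yields the event with the earliest time attribute first:

```cpp
// Illustrative sketch: a per-thread event queue kept ordered by the event
// time attribute; events are always taken out earliest-time-first.
#include <iostream>
#include <queue>
#include <string>
#include <vector>

struct Event {
    double time;       // the time attribute used to order the queue
    std::string name;
};

struct LaterFirst {
    bool operator()(const Event& a, const Event& b) const { return a.time > b.time; }
};

// One such queue would live inside each model service thread.
class EventQueue {
public:
    void insert(const Event& e) { q_.push(e); }  // insertion preserves time ordering
    bool empty() const { return q_.empty(); }
    Event popEarliest() { Event e = q_.top(); q_.pop(); return e; }
private:
    std::priority_queue<Event, std::vector<Event>, LaterFirst> q_;
};

int main() {
    EventQueue q;
    q.insert({2.0, "ModelA#1 update"});
    q.insert({1.0, "ModelB#1 update"});
    q.insert({1.0, "ModelA#2 update"});

    while (!q.empty()) {
        Event e = q.popEarliest();
        std::cout << "t=" << e.time << "  " << e.name << '\n';
    }
}
```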
In a preferred embodiment, as shown in fig. 4, the S330 may specifically include:
S331: For each model service thread, acquiring and executing the event with the earliest creation time in the model service thread, and setting the creation time t_imin of that earliest event as the logical time t_in of the corresponding model service thread.
S332: Setting the creation time t_inext of the next-earliest event as the to-be-advanced time, forming a time advance request message according to the to-be-advanced time, and transmitting the message to the engine service thread so that the engine service thread forwards the to-be-advanced time to all other model service threads.
S333: If the logical time t_in of the model service thread is less than the to-be-advanced times of all other model service threads, advancing the time, acquiring and executing the earliest remaining event in the model service thread, and repeating until all events in the event queue have been executed.
It will be appreciated that, in the preferred embodiment, each model service thread communicates with the engine service thread, and communication among the model service threads is relayed by the engine service thread. Correct time advancement across the multiple model service threads is achieved through the exchange of time information and the time-logic check among them, which guarantees that all events are executed in order of creation time.
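The time-logic check itself can be illustrated with a short helper (an assumption for illustration only; requestedTimes and the function name are hypothetical): thread j may advance only while its logical time t_jn is still below every other thread's announced to-be-advanced time t_knext.

```cpp
// Illustrative helper, not code from the patent: the advance test used by
// model service thread j. requestedTimes[k] holds the to-be-advanced time
// t_knext announced by thread k via the engine service thread.
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

bool mayAdvance(int j, double logicalTimeJ, const std::vector<double>& requestedTimes) {
    double minOther = std::numeric_limits<double>::infinity();
    for (int k = 0; k < static_cast<int>(requestedTimes.size()); ++k) {
        if (k != j) minOther = std::min(minOther, requestedTimes[k]);
    }
    // Advance only if every other thread has already asked to move at least
    // this far, i.e. t_jn is less than min over k != j of t_knext.
    return logicalTimeJ < minOther;
}

int main() {
    std::vector<double> requested = {4.0, 6.0, 5.0};    // t_0next, t_1next, t_2next
    std::cout << std::boolalpha
              << mayAdvance(0, 3.0, requested) << '\n'  // true: 3.0 < min(6.0, 5.0)
              << mayAdvance(1, 5.5, requested) << '\n'; // false: 5.5 >= min(4.0, 5.0)
}
```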
The invention will be further illustrated by means of a specific example. Specifically, as shown in fig. 5, in this specific example, the multithreading-based time advancing method may be implemented as follows:
S1001: An engine service thread, named ServiceThread, is created to coordinate the time of the entire simulation.
S1002: Model service threads are created. If the number of cores of the simulation host is NProcs, NProcs-1 model service threads are created, named WorkerThread[0] … WorkerThread[NProcs-2].
S1003: The models are instantiated to obtain model instances. The models are named Model A, Model B, and so on, and the model instances are named Model A#1, Model A#2, and so on; Model A#1 and Model A#2 are two model instances instantiated from Model A.
S1004: The model instances are delivered to the engine service thread, which in turn assigns them to the model service threads.
S1005: Each model service thread constructs an event queue, sorted from small to large by event occurrence time; the non-decreasing time order of the queue is maintained when newly generated events are inserted. Initially, a creation event is placed in the event queue for each model instance assigned to that model service thread.
S1006: The model service thread executes the event queue. Each model service thread has its own logical time; the logical time of the i-th model service thread WorkerThread[i] is defined as t_in, i = 0,1,2,3 … NProcs-2. The time of the earliest event in the thread's event queue is defined as t_imin. When the model service thread executes the event queue, it takes out and executes all events whose time equals t_imin, and updates the logical time t_in to t_imin.
S1007: The model service thread updates the time. Define t_inext as the earliest event time in WorkerThread[i] that is greater than t_imin. The model service thread sends a time advance request message to the engine service thread requesting to advance the local time to t_inext, and the engine service thread forwards the message to all other model service threads.
S1008: The model service thread advances time. After sending its time advance request message, the j-th model service thread WorkerThread[j] checks its received-message queue. When it receives the time advance request messages of the other model service threads forwarded by the engine service thread, it computes and updates the minimum of the other threads' requested times, defined as T_j = min{t_knext, k ≠ j}, j = 0,1,2,3 … NProcs-2, where t_knext is the time to which the k-th model service thread requests to advance, k = 0,1,2,3 … NProcs-2, k ≠ j. When T_j is greater than the local logical time t_jn of the j-th model service thread, the time may be advanced and S1006 continues, until the event queue is empty.
Based on the same principle, the embodiment also discloses a multithreading-based simulation engine time advancing system. As shown in fig. 6, in the present embodiment, the system includes a multithread creating unit 11, a model instance creating unit 12, and an event executing unit 13.
The multithread creating unit 11 is configured to create one engine service thread and a plurality of model service threads according to the number of cores of the simulation host.
The model instance creating unit 12 is configured to instantiate the simulation model to obtain a plurality of model instances, send each model instance to one of the plurality of model service threads, and create an event for the received model instance in the model service thread.
The event execution unit 13 is configured to sequentially execute all events in the order of creation time of all events based on communication of the plurality of model service threads with the engine service thread, respectively.
In a preferred embodiment, the model instance creating unit 12 is specifically configured to form a plurality of model instances according to each simulation model, determine a model service thread corresponding to each model instance according to the obtained computation complexity of the plurality of model instances, and send the plurality of model instances to the corresponding model service threads respectively.
In a preferred embodiment, the event execution unit 13 is specifically configured to establish an event queue in each model service thread, arrange all events in each model service thread in the event queue according to a creation time order of the events, and sequentially execute the events according to the creation time order of the events in all event queues based on communication between a plurality of model service threads and the engine service thread, respectively.
In a preferred embodiment, the event execution unit 13 is specifically configured to, for each model service thread, acquire and execute the event with the earliest creation time in the model service thread, set the creation time t_imin of that earliest event as the logical time t_in of the corresponding model service thread, set the creation time t_inext of the next-earliest event as the to-be-advanced time, form a time advance request message according to the to-be-advanced time and transmit it to the engine service thread so that the engine service thread forwards the to-be-advanced time to all other model service threads, and, if the logical time t_in of the model service thread is less than the to-be-advanced times of all other model service threads, advance the time and acquire and execute the earliest remaining event in the model service thread, until all events in the event queue have been executed.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer device, which may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
In a typical example, the computer device comprises in particular a memory, a processor and a computer program stored on the memory and executable on the processor, which when executed by the processor implements the method as described above.
Referring now to FIG. 7, shown is a schematic diagram of a computer device 600 suitable for use in implementing embodiments of the present application.
As shown in fig. 7, the computer apparatus 600 includes a central processing unit (CPU) 601 which can perform various appropriate operations and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read out from it can be installed into the storage section 608 as needed.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal or a carrier wave.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A multithreading-based simulation engine time advancing method, comprising:
creating an engine service thread and a plurality of model service threads according to the core number of the simulation host;
instantiating the simulation model to obtain a plurality of model instances, sending each model instance to one of a plurality of model service threads, and creating an event for the received model instance in the model service threads;
and executing all events in sequence according to the creation time sequence of all events based on the communication between the plurality of model service threads and the engine service thread respectively.
2. The simulation engine time-marching method of claim 1, wherein instantiating the simulation model to obtain a plurality of model instances and sending each model instance to one of the plurality of model service threads specifically comprises:
respectively forming a plurality of model instances according to each simulation model;
determining a model service thread corresponding to each model instance according to the obtained computational complexity of the plurality of model instances;
and respectively sending the plurality of model instances to the corresponding model service threads.
3. The simulation engine time advancing method according to claim 1, wherein the sequentially executing all events in the creation time order of all events based on the communication between the plurality of model service threads and the engine service threads respectively comprises:
establishing an event queue in each model service thread;
arranging all events in each model service thread in the event queue according to the creation time sequence of the events;
and executing the events in sequence according to the creation time sequence of the events in all event queues based on the communication between the plurality of model service threads and the engine service threads respectively.
4. The simulation engine time advancing method according to claim 3, wherein the sequentially executing events according to the creation time sequence of events in all event queues based on the communication between the plurality of model service threads and the engine service threads respectively comprises:
for each model service thread, acquiring and executing the event with the earliest creation time in the model service thread, and setting the creation time t_imin of that earliest event as the logical time t_in of the corresponding model service thread;
setting the creation time t_inext of the next-earliest event as the to-be-advanced time, forming a time advance request message according to the to-be-advanced time, and transmitting the message to the engine service thread so that the engine service thread forwards the to-be-advanced time to all other model service threads;
and if the logical time t_in of the model service thread is less than the to-be-advanced times of all other model service threads, advancing the time, acquiring and executing the earliest remaining event in the model service thread, and repeating until all events in the event queue have been executed.
5. A multithreading-based simulation engine time advancing system, comprising:
the multithreading establishing unit is used for establishing an engine service thread and a plurality of model service threads according to the core number of the simulation host;
the model instance creating unit is used for instantiating the simulation model to obtain a plurality of model instances, sending each model instance to one of a plurality of model service threads, and creating an event for the received model instance in the model service threads;
and the event execution unit is used for sequentially executing all events according to the creation time sequence of all events based on the communication between the plurality of model service threads and the engine service thread.
6. The simulation engine time advancing system of claim 5, wherein the model instance creating unit is specifically configured to form a plurality of model instances according to each simulation model, and determine a model service thread corresponding to each model instance according to the computational complexity of the obtained plurality of model instances;
and respectively sending the plurality of model instances to the corresponding model service threads.
7. The simulation engine time advancing system of claim 5, wherein the event execution unit is specifically configured to establish an event queue in each model service thread, arrange all events in each model service thread in the event queue according to a creation time order of the events, and execute the events in sequence according to the creation time order of the events in all event queues based on communication between the plurality of model service threads and the engine service threads, respectively.
8. The simulation engine time advancing system of claim 7, wherein the event execution unit is specifically configured to, for each model service thread, acquire and execute the event with the earliest creation time in the model service thread, set the creation time t_imin of that earliest event as the logical time t_in of the corresponding model service thread, set the creation time t_inext of the next-earliest event as the to-be-advanced time, form a time advance request message according to the to-be-advanced time and transmit it to the engine service thread so that the engine service thread forwards the to-be-advanced time to all other model service threads, and, if the logical time t_in of the model service thread is less than the to-be-advanced times of all other model service threads, advance the time and acquire and execute the earliest remaining event in the model service thread, until all events in the event queue have been executed.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor,
the processor, when executing the program, implements the method of any of claims 1-4.
10. A computer-readable medium, having stored thereon a computer program,
the program when executed by a processor implements the method of any one of claims 1 to 4.
CN202010294261.4A 2020-04-15 2020-04-15 Simulation engine time propulsion method and system based on multithreading Active CN111597035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010294261.4A CN111597035B (en) 2020-04-15 2020-04-15 Simulation engine time propulsion method and system based on multithreading

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010294261.4A CN111597035B (en) 2020-04-15 2020-04-15 Simulation engine time propulsion method and system based on multithreading

Publications (2)

Publication Number Publication Date
CN111597035A true CN111597035A (en) 2020-08-28
CN111597035B CN111597035B (en) 2024-03-19

Family

ID=72187559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010294261.4A Active CN111597035B (en) 2020-04-15 2020-04-15 Simulation engine time propulsion method and system based on multithreading

Country Status (1)

Country Link
CN (1) CN111597035B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256243A (en) * 2020-11-05 2021-01-22 苏州同元软控信息技术有限公司 Behavior customization method, behavior customization device, behavior customization equipment and storage medium
CN114757057A (en) * 2022-06-14 2022-07-15 中国人民解放军国防科技大学 Multithreading parallel combat simulation method and system based on hybrid propulsion

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102760176A (en) * 2011-04-29 2012-10-31 无锡江南计算技术研究所 Hardware transaction level simulation method, engine and system
CN104866374A (en) * 2015-05-22 2015-08-26 北京华如科技股份有限公司 Multi-task-based discrete event parallel simulation and time synchronization method
CN104915482A (en) * 2015-05-27 2015-09-16 中国科学院遥感与数字地球研究所 Satellite data receiving simulation analysis platform
US20160110209A1 (en) * 2014-10-20 2016-04-21 Electronics And Telecommunications Research Institute Apparatus and method for performing multi-core emulation based on multi-threading
CN107193639A (en) * 2017-06-05 2017-09-22 北京航空航天大学 A kind of multi-core parallel concurrent simulation engine system for supporting combined operation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102760176A (en) * 2011-04-29 2012-10-31 无锡江南计算技术研究所 Hardware transaction level simulation method, engine and system
US20160110209A1 (en) * 2014-10-20 2016-04-21 Electronics And Telecommunications Research Institute Apparatus and method for performing multi-core emulation based on multi-threading
CN104866374A (en) * 2015-05-22 2015-08-26 北京华如科技股份有限公司 Multi-task-based discrete event parallel simulation and time synchronization method
CN104915482A (en) * 2015-05-27 2015-09-16 中国科学院遥感与数字地球研究所 Satellite data receiving simulation analysis platform
CN107193639A (en) * 2017-06-05 2017-09-22 北京航空航天大学 A kind of multi-core parallel concurrent simulation engine system for supporting combined operation

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256243A (en) * 2020-11-05 2021-01-22 苏州同元软控信息技术有限公司 Behavior customization method, behavior customization device, behavior customization equipment and storage medium
CN112256243B (en) * 2020-11-05 2024-04-02 苏州同元软控信息技术有限公司 Behavior customization method, apparatus, device and storage medium
CN114757057A (en) * 2022-06-14 2022-07-15 中国人民解放军国防科技大学 Multithreading parallel combat simulation method and system based on hybrid propulsion
CN114757057B (en) * 2022-06-14 2022-08-23 中国人民解放军国防科技大学 Multithreading parallel combat simulation method and system based on hybrid propulsion

Also Published As

Publication number Publication date
CN111597035B (en) 2024-03-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant