WO2023131121A1 - Integrated circuit automated parallel simulation method and simulation device

Integrated circuit automated parallel simulation method and simulation device

Info

Publication number
WO2023131121A1
WO2023131121A1 PCT/CN2023/070124 CN2023070124W WO2023131121A1 WO 2023131121 A1 WO2023131121 A1 WO 2023131121A1 CN 2023070124 W CN2023070124 W CN 2023070124W WO 2023131121 A1 WO2023131121 A1 WO 2023131121A1
Authority
WO
WIPO (PCT)
Prior art keywords
simulation
request
target
candidate
server
Prior art date
Application number
PCT/CN2023/070124
Other languages
English (en)
French (fr)
Inventor
周承
Original Assignee
苏州贝克微电子股份有限公司
Priority date
Filing date
Publication date
Application filed by 苏州贝克微电子股份有限公司 filed Critical 苏州贝克微电子股份有限公司
Publication of WO2023131121A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/30 Circuit design
    • G06F 30/32 Circuit design at the digital level
    • G06F 30/33 Design verification, e.g. functional simulation or model checking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/18 File system types
    • G06F 16/182 Distributed file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5021 Priority

Definitions

  • the application relates to the field of electrical digital data processing, and in particular to an integrated circuit automatic parallel simulation method and a simulation device.
  • In the related art, multiple simulation servers can be set up. After an engineer completes the design of the circuit structure in EDA software, the simulation request is submitted to a simulation server, which simulates the circuit. This solves the problem of the weak computing power of the local computer, and the multiple simulation servers can schedule simulations in the first-come, first-served order of the circuit simulation requests submitted by engineers, with manual intervention used to temporarily adjust the simulation sequence when needed.
  • However, the data files produced by simulation runs are often very large, mostly occupying hundreds of gigabytes of storage space, so transferring these data files directly between simulation servers consumes a large amount of network bandwidth, which further reduces the efficiency of integrated circuit simulation.
  • This application provides an integrated circuit automatic parallel simulation method and a simulation device, which improves the efficiency of integrated circuit simulation.
  • the technical scheme is as follows:
  • In one aspect, a method for automated parallel simulation of integrated circuits is provided. The method is performed by a control server in a simulation system that further includes a plurality of simulation servers, and the method includes:
  • obtaining a circuit simulation request, where the circuit simulation request is used to request simulation resources to simulate an integrated circuit structure;
  • adding the circuit simulation request to a first queuing queue as a candidate simulation request;
  • performing a first sorting process on the candidate simulation requests in the first queuing queue based on the remaining simulation time of each candidate simulation request to obtain a second queuing queue;
  • performing a second sorting process on the second queuing queue according to the acceleration parameter size of each candidate simulation request in the second queuing queue to obtain a target queuing queue;
  • performing, according to the priority indicated by the target queuing queue, simulation processing on each of the candidate simulation requests by the respective simulation servers within a target processing period.
  • In another aspect, an integrated circuit automated parallel simulation device is provided, comprising:
  • a simulation request obtaining module configured to obtain a circuit simulation request; the circuit simulation request is used to request simulation resources to simulate the integrated circuit structure;
  • the first queue acquisition module is used to add the circuit simulation request as a candidate simulation request to the first queuing queue
  • the second queue acquisition module is configured to perform a first sorting process on each candidate simulation request in the queue based on the remaining simulation time of each candidate simulation request in the first queue to obtain a second queue;
  • a target queue acquisition module configured to perform a second sorting process on the second queue according to the acceleration parameter size of each candidate simulation request in the second queue to obtain a target queue
  • the simulation processing module is configured to perform simulation processing on each candidate simulation request through each simulation server within the target processing period according to the priority indicated by the target queuing queue.
  • the remaining simulation time is used to indicate the remaining processing progress of candidate simulation requests whose simulation has been started before the target processing cycle
  • the device further includes: a remaining time determining module, configured to determine the remaining simulation time of the newly initiated candidate simulation request obtained within the target processing period as 0.
  • The simulation processing module is further configured to take out a specified number of candidate simulation requests according to the priority indicated by the target queuing queue, and perform acceleration-point detection on the specified number of candidate simulation requests one by one in order of priority; the acceleration points are cumulative points, the control server stores attribute information of each user, and the attribute information of each user includes that user's cumulative points;
  • when it is detected that the acceleration points of a first user corresponding to a first simulation request are less than the acceleration parameter of the first simulation request, the first simulation request is skipped; the acceleration parameter of the first simulation request is the point consumption rate set in the first simulation request;
  • when it is detected that the acceleration points of a second user corresponding to a second simulation request are greater than the acceleration parameter of the second simulation request, the second simulation request is sent to the simulation server for simulation processing; the acceleration parameter of the second simulation request is the point consumption rate set in the second simulation request.
  • The simulation processing module is further configured to obtain, among the simulation servers, a target simulation server that is in an idle state and has the highest priority, where the priority of a simulation server is used to indicate the simulation processing performance of the simulation server, and to send the second simulation request to the target simulation server for processing.
  • The simulation processing module is further configured to, when it is detected that the simulation processing of the second simulation request has ended, update the acceleration points of the second user to the difference between the acceleration points of the second user and the acceleration parameter of the second simulation request.
  • the device further includes:
  • the simulation data acquisition module is used to obtain, after it is detected that the simulation processing of a target candidate simulation request is completed, the target simulation data corresponding to the target candidate simulation request from each target simulation server that has simulated the target candidate simulation request; the target simulation data further includes at least one of front-server data and post-server data;
  • the front-server data is used to indicate the server that simulated the target candidate simulation request before the target simulation data was obtained;
  • the post-server data is used to indicate the server that simulates the target candidate simulation request after the target simulation data is obtained;
  • the simulation result sending module is used for splicing each of the target simulation data into the target simulation result and sending it to the target computer device; the target computer device is the device that sends the target candidate simulation request.
  • the target simulation data further includes a circuit state of the target simulation server before performing a simulation operation on the target candidate simulation request, and a circuit state after performing the simulation operation.
  • In yet another aspect, a computer device is provided, including a processor and a memory, where at least one instruction, at least one program, a code set or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by the processor to implement the above-mentioned integrated circuit automated parallel simulation method.
  • a computer-readable storage medium wherein at least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by a processor to implement the above-mentioned automatic parallel simulation method for integrated circuits.
  • In yet another aspect, a computer program product or computer program is provided, which includes computer instructions stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the above-mentioned automatic parallel simulation method for integrated circuits.
  • Through the above solution, a computer device can send an integrated circuit structure to the simulation system through a circuit simulation request. The control server in the simulation system adds the circuit simulation request to the first queuing queue as a candidate simulation request; the control server then sorts the candidate simulation requests in the queuing queue according to their remaining simulation time to obtain the second queuing queue, and then performs the second sorting process according to the acceleration parameters of the candidate simulation requests in the second queuing queue to obtain the target queuing queue, so that the simulation servers can simulate each candidate simulation request according to the priority indicated by that queue.
  • After the simulation ends, the control server obtains from each simulation server the simulation data produced for the corresponding circuit simulation request and splices the simulation data to obtain the final simulation result. Each piece of simulation data also indicates the simulation server that executed before or after it and the circuit state before or after that piece of data was produced. Therefore, even if part of the simulation data is lost, the control server can recover the lost segment's initial simulation conditions and end-of-segment circuit state from the circuit state at the end of the simulation stored in the previous data file and the initial simulation conditions stored in the next data file, and, based on these, arrange a simulation server to re-run only the currently missing part. A complete simulation result is thus obtained without re-running the entire simulation request from beginning to end.
  • The above scheme performs intelligent queuing of simulation requests, reasonably arranges and utilizes the resources of the simulation servers, and stores and reads simulation data files in a distributed manner on the simulation servers, so that the data files produced by simulation runs do not need to be transferred between different simulation servers. This keeps the entire simulation server network running smoothly and maintains the speed of the simulation operation, thereby improving the simulation efficiency of the integrated circuit.
  • Fig. 1 is a schematic structural diagram of a simulation system according to an exemplary embodiment.
  • Fig. 2 is a method flow chart of an integrated circuit automatic parallel simulation method according to an exemplary embodiment.
  • Fig. 3 is a method flow chart of an integrated circuit automatic parallel simulation method according to an exemplary embodiment.
  • Fig. 4 is a structural block diagram of an integrated circuit automatic parallel simulation device according to an exemplary embodiment.
  • Fig. 5 is a schematic diagram of a computer device provided according to an exemplary embodiment of the present application.
  • the "indication" mentioned in the embodiments of the present application may be a direct indication, may also be an indirect indication, and may also mean that there is an association relationship.
  • A indicates B can mean that A directly indicates B, for example, B can be obtained through A; it can also mean that A indirectly indicates B, for example, A indicates C and B can be obtained through C; it can also mean that there is an association relationship between A and B.
  • the term "corresponding" may indicate that there is a direct or indirect correspondence between the two, or that there is an association between the two, or a relationship of indicating and being indicated, configuring and being configured, and the like.
  • predefinition can be realized by pre-saving corresponding codes, tables or other methods that can be used to indicate relevant information in devices (for example, including terminal devices and network devices).
  • the implementation method is not limited.
  • Fig. 1 is a schematic structural diagram of a simulation system according to an exemplary embodiment.
  • the simulation system includes a control server 110 and various simulation servers 120 .
  • data communication is performed between each simulation server 120 and the control server 110 through a communication network, and the communication network may be a wired network or a wireless network.
  • The simulation system also includes a terminal 130, which can be a computer device used by engineers to design integrated circuits.
  • The application program installed in the terminal can generate a circuit simulation request from the structural data corresponding to the integrated circuit and send it to the control server 110 of the simulation system, so that the control server 110 controls each simulation server to perform simulation processing on the circuit simulation request.
  • an application program with a circuit design function is installed in the terminal 130, and the terminal 130 can run the application program with a circuit design function, and generate corresponding integrated circuit data when receiving a specified operation from the user.
  • the embodiment does not limit this.
  • the terminal 130 may also be a terminal device with a data transmission interface, and the data transmission interface is used to receive integrated circuit data generated by other computer devices to construct a circuit simulation request.
  • The terminal 130 may be a mobile terminal such as a smartphone, a tablet computer or a portable laptop computer, or may be a terminal such as a desktop computer or a projection computer, or an intelligent terminal having a data processing component; this is not limited here.
  • the control server 110 or the simulation server 120 can be implemented as a server, which can be a physical server or a cloud server.
  • the control server 110 is a background server of an application program in the terminal 130 .
  • The control server that allocates simulation requests fetches the simulation result corresponding to a simulation request and sends it to the local computer. Specifically:
  • suppose simulation request A is executed in several segments by simulation server 1, simulation server 2 and simulation server 3 (the servers used to run the simulation); the data file from the first run on simulation server 1 is denoted A-1-1, the data file from the second run on simulation server 1 is denoted A-1-2, the data file from the first run on simulation server 2 is denoted A-2-1, the data file from the first run on simulation server 3 is denoted A-3-1, the data file from the second run on simulation server 3 is denoted A-3-2, and so on;
  • the server for allocating simulation requests records, for every simulation request, the servers that ran its simulation and the order in which they ran it, and each data file is stored on the server that ran that part of the simulation;
  • the server for distributing simulation requests then reads each data file from the server that executed it according to the recorded order, integrates them into the final simulation result, and sends it to the local computer of the engineer who issued the simulation request.
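  • As an illustration of the bookkeeping just described, the sketch below (in Python, using a hypothetical SegmentRecord/RequestLog model and a read_file callable that are not defined in the patent) shows how the allocating server might record which simulation server produced each segment and then read the files back in the recorded order to splice the final result:

```python
# Illustrative sketch only: the allocating (control) server records, per simulation
# request, which simulation server ran each segment and in what order; the data
# files themselves stay on the simulation servers.
from dataclasses import dataclass, field

@dataclass
class SegmentRecord:
    server_id: int      # simulation server that produced the segment, e.g. 1 in "A-1-1"
    run_index: int      # how many times that server has run this request, e.g. 1, 2, ...

@dataclass
class RequestLog:
    request_id: str
    segments: list[SegmentRecord] = field(default_factory=list)

    def file_names(self) -> list[str]:
        # Reproduces the naming scheme from the example above: <request>-<server>-<run>.
        return [f"{self.request_id}-{s.server_id}-{s.run_index}" for s in self.segments]

def splice_result(log: RequestLog, read_file) -> bytes:
    """Read each data file from its simulation server in the recorded order and
    concatenate them into the final simulation result."""
    return b"".join(read_file(s.server_id, name)
                    for s, name in zip(log.segments, log.file_names()))

# Request "A" executed on servers 1, 1, 2, 3, 3, as in the description above.
log = RequestLog("A", [SegmentRecord(1, 1), SegmentRecord(1, 2),
                       SegmentRecord(2, 1), SegmentRecord(3, 1), SegmentRecord(3, 2)])
assert log.file_names() == ["A-1-1", "A-1-2", "A-2-1", "A-3-1", "A-3-2"]
```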
  • The above server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, big data and artificial intelligence platforms.
  • the system may also include a management device, which is used to manage the system (such as managing the connection status between each module and the server, etc.), and the management device and the server are connected through a communication network.
  • the communication network is a wired network or a wireless network.
  • the aforementioned wireless network or wired network uses standard communication technologies and/or protocols.
  • the network is typically the Internet, but can be any other network including, but not limited to, any combination of local area networks, metropolitan area networks, wide area networks, mobile, wired or wireless networks, private networks, or virtual private networks.
  • data exchanged over a network is represented using techniques and/or formats including Hypertext Markup Language, Extensible Markup Language, and the like.
  • all or some links may be encrypted using conventional encryption techniques such as Secure Sockets Layer, Transport Layer Security, Virtual Private Network, Internet Protocol Security, etc.
  • customized and/or dedicated data communication technologies may also be used to replace or supplement the above data communication technologies.
  • Fig. 2 is a method flow chart of an integrated circuit automatic parallel simulation method according to an exemplary embodiment. The method is executed by a computer device, and the computer device may be a control server in the simulation system as shown in FIG. 1 . As shown in Figure 2, the integrated circuit automatic parallel simulation method may include the following steps:
  • Step 201 obtaining a circuit simulation request.
  • the circuit simulation request is used to request simulation resources to simulate the structure of the integrated circuit.
  • the circuit simulation request includes integrated circuit structure data.
  • After acquiring the integrated circuit structure data, the terminal can generate a corresponding circuit simulation request based on the integrated circuit structure data and send it to the control server in the simulation system.
  • Step 202 adding the circuit simulation request as a candidate simulation request to the first queuing queue.
  • When the control server obtains the circuit simulation request, it does not immediately perform simulation processing on the integrated circuit structure; instead, it first adds the request to the first queuing queue, so as to judge the simulation priority relationship between this circuit simulation request and other circuit simulation requests.
  • The computer device adds each circuit simulation request to the first queuing queue as a candidate simulation request according to the acquisition time of each circuit simulation request. That is to say, a circuit simulation request acquired earlier is placed higher in the first queuing queue, and a circuit simulation request acquired later is placed lower in the first queuing queue.
  • Step 203 Based on the remaining simulation time of each candidate simulation request in the first queue, perform a first sorting process on each candidate simulation request in the queue to obtain a second queue.
  • The computer device (that is, the control server in FIG. 1) needs to select candidate simulation requests to push to the simulation servers for simulation in the next simulation cycle. At this time, the control server can obtain the remaining simulation time of each candidate simulation request in the first queuing queue, and perform the first sorting process on the candidate simulation requests according to their remaining simulation time to obtain the second queuing queue.
  • the computer device adjusts the candidate simulation requests in the first queuing queue according to the remaining simulation time, so as to obtain the second queuing queue.
  • The sequence adjustment can proceed as follows: the remaining simulation times of adjacent pairs of candidate simulation requests are compared one by one, and requests are swapped where necessary, until in every adjacent pair the remaining simulation time of the earlier request is not greater than the remaining simulation time of the later request.
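  • The adjacent-comparison adjustment just described amounts to repeated bubble-style passes over the queue. A minimal sketch follows; the dictionary field names are illustrative assumptions, not terms from the patent:

```python
def first_sort(queue):
    """Reorder candidate simulation requests so that no request has a larger
    remaining simulation time than the request behind it (bubble-style passes,
    matching the adjacent-comparison description above)."""
    reqs = list(queue)
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(reqs) - 1):
            if reqs[i]["remaining_time"] > reqs[i + 1]["remaining_time"]:
                reqs[i], reqs[i + 1] = reqs[i + 1], reqs[i]
                swapped = True
    return reqs

second_queue = first_sort([
    {"id": "req-3", "remaining_time": 40},
    {"id": "req-1", "remaining_time": 0},    # newly initiated request
    {"id": "req-2", "remaining_time": 15},
])
# Resulting order: req-1 (0), req-2 (15), req-3 (40).
```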
  • The remaining simulation time of each candidate simulation request in the first queuing queue may be obtained by the computer device through estimation according to the circuit structure and the number of parameter values of each candidate simulation request.
  • The more complex the circuit structure and the more parameter values it contains, the more complex the function that the simulation server needs to fit when simulating the circuit structure, and the more time the simulation request requires.
  • the remaining simulation time of each candidate simulation request represents the estimated time it takes for each candidate simulation request to be processed.
  • In this way, the simulation server can process more simulation requests within a specified time, avoiding as far as possible the situation where one or a few circuit simulation requests with numerous and complicated parameters greatly slow down the processing of other circuit simulation requests.
  • the remaining simulation time is used to indicate the remaining processing progress of the candidate simulation requests whose simulation has been started before the target processing period.
  • Each simulation server processes candidate simulation requests in cycles. After each cycle, computing resources are reallocated to the candidate simulation requests in the queue that have not been completed or have not yet started processing.
  • At this time, the computer device counts the candidate simulation requests in the queue that have not been completed or have not started processing, obtains the candidate simulation requests that have started but not finished processing, and calculates the remaining simulation time required to finish them according to their processing progress and processing time in the previous cycle.
  • The computer device sorts the candidate simulation requests that have started but not finished simulation processing according to the remaining simulation time, with the request having the largest remaining simulation time placed last.
  • If the remaining simulation time is large, it means that the candidate simulation request has already gone through at least one cycle and still needs a lot of simulation resources, which may slow down the processing progress of other candidate simulation requests, so it needs to be placed later in the order.
  • the computer device determines the remaining simulation time of the newly initiated candidate simulation request obtained within the target processing period as 0.
  • the computer device sorts the first queuing queue, as a whole, it will prioritize the unprocessed candidate simulation requests at the front, so as to preferentially process the unprocessed candidate simulation requests.
  • the computer device halves the remaining simulation time of the candidate simulation requests that have not been simulated in the previous period.
  • Another situation is that a candidate simulation request was sent to a simulation server for simulation processing two cycles ago but was not completed, and it was not processed in the previous cycle because it did not get a place in the queue; in this case the remaining simulation time of that candidate simulation request is likewise halved.
  • In this way, candidate simulation requests with a relatively long processing time can also eventually be sent by the control server to a simulation server for processing.
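  • A minimal sketch of the per-cycle remaining-time rules just described (remaining time 0 for newly initiated requests, halving for started-but-skipped requests); the flags on each request are assumptions made for illustration:

```python
def update_remaining_time(req):
    """Per-cycle remaining-time rule sketched above:
    - a request newly initiated in this processing cycle gets remaining time 0,
      so it sorts toward the front of the queue;
    - a started-but-unfinished request that was *not* simulated in the previous
      cycle has its remaining time halved, so long requests are not starved."""
    if req.get("new_this_cycle"):
        req["remaining_time"] = 0
    elif req.get("started") and not req.get("ran_last_cycle"):
        req["remaining_time"] /= 2
    return req
```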
  • Step 204 Perform a second sorting process on the second queuing queue according to the acceleration parameter size of each candidate simulation request in the second queuing queue to obtain a target queuing queue.
  • The ordering of the candidate simulation requests in the second queuing queue takes into account the simulation time each candidate simulation request needs. In practical applications, however, each candidate simulation request also has an inherent priority: the integrated circuit structure in some candidate simulation requests may be very important and its simulation results urgently needed even though the simulation consumes a lot of resources. In this case, the computer device only needs to set a higher acceleration parameter when generating the candidate simulation request, so that the simulation server gives priority to the simulation processing of that candidate simulation request.
  • the authority value of the target user corresponding to each candidate simulation request is acquired; and the authority value of the target user is set as an acceleration parameter of each candidate simulation request.
  • the control server can pre-set corresponding authority values for each user who is allowed to make candidate simulation requests.
  • The larger the authority value, the greater the user's authority.
  • The control server determines the authority value of each target user as the acceleration parameter of the corresponding candidate simulation request, and then compares the execution priority of the candidate simulation requests through their acceleration parameters, so as to adjust the order of the candidate simulation requests in the queuing queue.
  • A second sorting process is performed on the second queuing queue to obtain a third queuing queue; the acceleration points of the target user corresponding to each candidate simulation request are obtained; and the target queuing queue is constructed, according to the acceleration parameter size, from those candidate simulation requests in the third queuing queue whose corresponding target user's acceleration points are greater than the acceleration parameter.
  • the acceleration parameter in the candidate simulation request may be set by the user through a computer device during the generation of the candidate simulation request.
  • After the control server obtains the acceleration parameter of each candidate simulation request in the second queuing queue, it needs to compare it with the acceleration points of each target user stored in the control server, that is, to check whether the acceleration parameter set by the target user exceeds the range allowed for that target user.
  • When it is detected that the acceleration parameter of a candidate simulation request is greater than the acceleration points of the target user of that candidate simulation request, it means that the acceleration parameter exceeds the range allowed for that target user, and the candidate simulation request is ignored, that is, it is not simulated during the current processing cycle of the simulation servers.
  • The control server can then construct the candidate simulation requests whose corresponding target users' acceleration points are greater than the acceleration parameter into a target queuing queue, which indicates the simulation processing priority of each candidate simulation request in the current processing cycle.
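  • A sketch of the second sorting process and target-queue construction described above, assuming candidate requests and user points are held in simple dictionaries (the equal-points case, which the text leaves unspecified, is treated here as eligible):

```python
def build_target_queue(second_queue, user_points):
    """Second sorting process sketched above: keep only candidate requests whose
    user's acceleration points cover the requested acceleration parameter, then
    order them by acceleration parameter, largest first."""
    eligible = [r for r in second_queue
                if user_points[r["user"]] >= r["accel_param"]]
    # sorted() is stable: ties keep the remaining-time order from the second queue.
    return sorted(eligible, key=lambda r: r["accel_param"], reverse=True)

target_queue = build_target_queue(
    [{"id": "a", "user": "u1", "accel_param": 5},
     {"id": "b", "user": "u2", "accel_param": 9},
     {"id": "c", "user": "u3", "accel_param": 3}],
    {"u1": 10, "u2": 4, "u3": 8})
# "b" is dropped (u2 has only 4 points); the order is "a" (5) then "c" (3).
```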
  • Step 205 According to the priority indicated by the target queuing queue, each candidate simulation request is simulated by each simulation server within the target processing period.
  • each simulation server in the embodiment of the present application is prioritized according to performance, and a simulation server with higher performance has a higher priority.
  • After the control server obtains the target queuing queue, it can take out the candidate simulation requests one by one according to the priority indicated by the target queuing queue and distribute them, according to the priority of the simulation servers, to simulation servers in the idle state for simulation processing, so that a simulation result corresponding to each candidate simulation request is obtained.
  • When taking out a candidate simulation request from the target queuing queue, the control server detects the status of each simulation server at that time and, among the simulation servers that are currently idle, selects the simulation server with the highest priority (that is, the highest performance) as the server that simulates the candidate simulation request.
  • Each candidate simulation request is read from the target queuing queue according to priority: the lower its priority, the later the candidate simulation request is taken out, and the lower, in theory, the processing performance of the simulation server allocated to it. In this way, the candidate simulation requests are allocated according to priority to simulation servers with different performance for simulation in the target processing cycle (that is, the current processing cycle), achieving reasonable utilization of resources.
  • To sum up, the control server sorts the candidate simulation requests in the queuing queue according to the remaining simulation time and the acceleration parameter size to form the target queuing queue, and then processes the candidate simulation requests in the target queuing queue. According to the performance of the simulation servers, the needs of the target users, the running time of the simulations and so on, it performs intelligent queuing of the simulation requests and rationally arranges and utilizes the simulation server resources.
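  • A sketch of the dispatch step described above, assuming each simulation server is represented by a dictionary with an id, a performance-based priority, and a busy flag (an illustrative assumption, not a structure from the patent):

```python
def dispatch(target_queue, servers):
    """Assign candidate requests, in target-queue priority order, to idle
    simulation servers in descending performance (priority) order.
    `servers` is a list of dicts with 'id', 'priority' and 'busy'."""
    assignments = []
    idle = sorted((s for s in servers if not s["busy"]),
                  key=lambda s: s["priority"], reverse=True)
    for req, server in zip(target_queue, idle):
        server["busy"] = True
        assignments.append((req["id"], server["id"]))
    # Requests beyond the number of idle servers simply wait for the next cycle.
    return assignments
```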
  • In summary, a computer device can send an integrated circuit structure to the simulation system through a circuit simulation request. The control server in the simulation system adds the circuit simulation request to the first queuing queue as a candidate simulation request; the control server then sorts the candidate simulation requests in the queuing queue according to their remaining simulation time to obtain the second queuing queue; the control server then performs the second sorting process on the candidate simulation requests in the second queuing queue according to the preset acceleration parameter size, so as to obtain the target queuing queue, so that the simulation servers can simulate each candidate simulation request according to the priority indicated by that queue.
  • the above scheme performs intelligent queuing simulation for each simulation request, rationally arranges and utilizes the resources of the simulation server, thereby improving the simulation efficiency of the integrated circuit.
  • Fig. 3 is a method flow chart of an integrated circuit automatic parallel simulation method according to an exemplary embodiment. The method is executed by a computer device, and the computer device may be a control server in the simulation system as shown in FIG. 1 . As shown in Figure 3, the integrated circuit automatic parallel simulation method may include the following steps:
  • Step 301 obtaining a circuit simulation request.
  • The engineer submits the simulation request generated from the circuit structure to the control server in the simulation system.
  • The simulation system includes a server for allocating simulation requests (that is, the control server in FIG. 1) and multiple servers for running simulations (that is, the simulation servers in FIG. 1); the multiple servers used to run the simulation are sorted in descending order of performance, that is, the first server at the head of the queue of simulation servers has the highest performance, and the last server at the tail has the lowest performance.
  • Step 302 adding the circuit simulation request as a candidate simulation request to the first queuing queue.
  • The control server may also perform duplicate checking and analysis on the circuit simulation request.
  • By analyzing the netlist file of the circuit structure, the control server checks and analyses the circuit structure to confirm whether a simulation request has already been submitted for this circuit structure or a similar circuit structure;
  • the netlist file is mainly divided into four parts: the types of the components, the parameter values of the components, the connection relationships between the components, and the simulation conditions. By comparing each part of the netlist file of the structure, it can be confirmed whether a simulation request has already been submitted for this circuit structure or a similar circuit structure;
  • if the circuit structure has already submitted a simulation request, the control server looks for the simulation result of the circuit structure in the cache of completed simulations. If the simulation result is found in the cache, it is obtained directly and sent to the engineer's computer device; if no simulation result is found in the cache, the cached result has been cleared, so the number of simulation repetitions is marked and the simulation request for the circuit structure is added to the queue; if the previously submitted circuit structure is still being simulated, the request is ignored and the engineer is informed that the previously submitted circuit structure is being simulated.
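  • A sketch of the duplicate-checking flow just described; the netlist representation, the comparison key, and the cache and in-progress stores are illustrative assumptions rather than structures defined in the patent:

```python
def check_duplicate(netlist, cache, in_progress, queue):
    """Duplicate-check sketch for an incoming circuit simulation request.
    `netlist` is assumed to be a dict holding the four parts named in the text:
    component types, component parameter values, connections, simulation conditions."""
    key = (frozenset(netlist["component_types"]),
           frozenset(netlist["parameter_values"].items()),
           frozenset(netlist["connections"]),
           frozenset(netlist["simulation_conditions"].items()))
    if key in in_progress:
        return ("ignore", "an identical circuit structure is already being simulated")
    if key in cache:
        return ("return_cached", cache[key])   # send the stored result directly
    # Structure is new, or its cached result was cleared: queue the request
    # (a repetition counter could be attached here for previously seen structures).
    queue.append(key)
    return ("queued", None)
```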
  • Step 303 Based on the remaining simulation time of each candidate simulation request in the first queue, perform a first sorting process on each candidate simulation request in the queue to obtain a second queue.
  • The computer device calculates the average simulation speed of each executed simulation request (a simulation request already running in a simulation server) in the last cycle, and estimates the remaining simulation time according to that simulation speed and the current progress;
  • the estimated remaining simulation time may not be consistent with the actual remaining simulation time, because the actual simulation speed varies greatly with changes in the circuit state. However, since the accurate actual remaining time cannot be obtained, the present application sorts the simulation queue based on the estimated remaining simulation time.
  • The computer device performs an advancing operation on the estimated remaining simulation time of the simulation requests that were not executed in the previous cycle, that is, it reduces their estimated remaining simulation time so that the execution order of the simulation requests that were not executed in the previous cycle is advanced in this cycle;
  • the advancing operation can be any feasible algorithm; one optional algorithm is to halve the estimated remaining simulation time of each simulation request that was not executed in the previous cycle and use the halved value as that request's estimated remaining time in this cycle.
  • the computer device sorts all the simulation requests in the queuing queue from small to large according to the estimated remaining time.
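  • A sketch of the remaining-time estimation and queue ordering described above, assuming per-request progress bookkeeping in plain dictionaries (field names are illustrative):

```python
def estimate_remaining(req):
    """Estimate remaining simulation time from last cycle's average speed and the
    current progress, as described above; apply the optional halving rule to
    started requests that were skipped last cycle."""
    if req["progress"] > 0 and req.get("ran_last_cycle"):
        speed = req["progress_last_cycle"] / req["cycle_length"]   # progress per unit time
        remaining = (1.0 - req["progress"]) / speed
    else:
        remaining = req.get("estimated_remaining", 0.0)            # new requests start at 0
        if req.get("started") and not req.get("ran_last_cycle"):
            remaining /= 2                                          # halving for skipped requests
    req["estimated_remaining"] = remaining
    return remaining

def sort_queue(queue):
    # All queued requests are ordered from smallest to largest estimated remaining time.
    return sorted(queue, key=estimate_remaining)
```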
  • Step 304 Perform a second sorting process on the second queuing queue according to the acceleration parameter size of each candidate simulation request in the second queuing queue to obtain a target queuing queue.
  • the acceleration parameter may be set as the point consumption rate of the acceleration point corresponding to the target user.
  • Each request initiator (that is, a target user, such as an engineer) obtains points every unit time (one cycle); the accumulated points O and the point acquisition rate P of each request initiator are stored in the attribute information kept by the control server;
  • the points obtained per unit time (one cycle) can be set according to the actual situation.
  • When the target user initiates a simulation request, the user can set in the simulation request how many points are consumed per unit time to use the simulation resources, that is, the point consumption rate Q (the acceleration parameter);
  • the value of the point consumption speed Q can conform to the following rules:
  • the request initiator (engineer) can set the point consumption rate Q according to its own needs; specifically, the request initiator provides the self-set point consumption rate Q from the local computer while sending the simulation request to the control server;
  • the number M of simulation requests is counted as the number of simulations actually proposed.
  • Because the point consumption rate Q can be set by the request initiator, when the request initiator (engineer) considers that the circuit only needs some of its functions verified, or needs the circuit simulated as soon as possible because of the project schedule, the request initiator can set a higher point consumption rate Q, so that the corresponding candidate simulation request has a higher point consumption rate Q (that is, a higher acceleration parameter) and is quickly queued to the front of the target queuing queue; points are thus consumed faster while the simulation runs, but the simulation queuing time is reduced.
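  • The point accounting described above can be summarized with a small data model; the class and field names below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class UserAccount:
    accumulated: float      # accumulated points O, stored in the control server
    gain_rate: float        # point acquisition rate P, points earned per cycle

    def tick(self):
        """Called once per unit time (one cycle): every request initiator earns points."""
        self.accumulated += self.gain_rate

@dataclass
class SimulationRequest:
    request_id: str
    user: str
    consume_rate: float     # point consumption rate Q set by the initiator (the acceleration parameter)

# A user who urgently needs results sets a larger Q; the request then sorts nearer
# the front of the target queue, trading accumulated points for shorter queuing time.
```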
  • Step 305 according to the priority indicated by the target queuing queue, take out a specified number of candidate simulation requests, and perform acceleration point detection on the specified number of candidate simulation requests one by one according to the order of priority.
  • a specified number of candidate simulation requests can be taken out, and accelerated point detection can be performed one by one.
  • the specified number may be determined according to the number of simulation servers, or the specified number may be determined according to the number of simulation servers currently in an idle state.
  • Step 306A when it is detected that among the specified number of candidate simulation requests, the acceleration points of the first user corresponding to the first simulation request are smaller than the acceleration parameter of the first simulation request, the first simulation request is skipped.
  • the acceleration points are accumulated points O of the first user (that is, a certain request initiator), and the acceleration parameter of the first simulation request is the point consumption speed Q corresponding to the simulation request made by the first user.
  • The control server sequentially takes out simulation requests from the head of the generated queuing queue, and compares the point consumption rate Q corresponding to each simulation request with the accumulated points O of the request initiator corresponding to that simulation request;
  • Step 306B when it is detected that among the specified number of candidate simulation requests, the acceleration points of the second user corresponding to the second simulation request are greater than the acceleration parameter of the second simulation request, send the second simulation request to the simulation server Perform simulation processing.
  • Among the simulation servers, the target simulation server that is idle and has the highest priority is obtained, and the second simulation request is sent to the target simulation server for processing.
  • When it is detected that the simulation processing of the second simulation request has ended, the difference between the acceleration points of the second user and the acceleration parameter of the second simulation request is updated as the acceleration points of the second user.
  • The processing of the candidate simulation requests may proceed as shown in the following steps:
  • the control server sequentially takes out simulation requests from the head of the queuing queue, and compares the point consumption rate Q corresponding to each simulation request with the accumulated points O of the request initiator corresponding to that simulation request;
  • when the point consumption rate Q does not exceed the accumulated points O, the server used to distribute simulation requests searches from the head of the simulation server queue for the first server that has not yet been scheduled with a simulation task (that is, the idle simulation server with the highest priority and the strongest processing capacity), arranges the simulation request taken out this time to be executed on that simulation server, calculates O-M*Q, and uses the result as the new accumulated points O;
  • If the servers used to run the simulation are not all scheduled (that is, some simulation server is still idle) and there are still unexecuted simulation requests in the target queuing queue, the following operations are performed.
  • If any simulation server is in an idle state and there is no unscheduled simulation request left in the target queuing queue, the following operations are performed.
  • the control server finds the last scheduled simulation request, and calculates the difference R between the points consumption speed Q and the points acquisition speed P corresponding to the simulation request;
  • The control server can take the calculation result out of a simulation server that has completed its simulation calculation, re-select a candidate simulation request from the target queuing queue, and send it to that simulation server for simulation.
  • The control server fetches the simulation result corresponding to a simulation request in real time.
  • The control server can wait until the end of this cycle before fetching a simulation request and its corresponding simulation result, or it can fetch them in real time; after receiving the simulation result, it arranges a new simulation request in real time for the vacated simulation server.
  • the specific steps for arranging a new simulation request in real time are as follows:
  • The operation is repeated until there is no case in which the point consumption rate Q of a simulation request nearer the head of the queue is smaller than the point consumption rate Q of the next simulation request.
  • Simulation requests are sequentially taken out from the head of the queuing queue, and the point consumption rate Q corresponding to each simulation request is compared with the accumulated points O of the request initiator corresponding to that simulation request;
  • the server used to distribute simulation requests searches from the head of the queue of servers used to run the simulation for the first server that has not yet been scheduled with a simulation task, arranges the simulation request taken out this time to be executed on that server, calculates O-M*Q, and uses the result as the new accumulated points O;
  • the operation is repeated until none of the remaining simulation requests in the queue meets the condition that the point consumption rate Q is not greater than the accumulated points O, or until all the servers used to run the simulation have been assigned simulation tasks.
  • The remaining simulation requests are sorted from large to small according to the accumulated points O of their corresponding request initiators (target users);
  • the control server fetches the simulation result corresponding to a simulation request in real time and sends the simulation result to the local computer of the engineer who sent the simulation request;
  • the control server can wait until the end of the cycle to fetch the simulation result corresponding to a simulation request and send it to the local computer of the engineer who sent the simulation request, or it can fetch the simulation result in real time and send it to the local computer of the engineer who made the simulation request.
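  • Putting the per-cycle steps above together, the sketch below walks the target queue, charges O - M*Q for each request it can schedule, and re-orders the requests it cannot charge by their initiators' accumulated points. The interpretation of M is an assumption noted in the docstring:

```python
def schedule_cycle(queue, users, servers, M=1):
    """One scheduling pass: walk the target queue from the head and, for each
    request whose point consumption rate Q does not exceed the initiator's
    accumulated points O, assign it to the highest-priority idle simulation
    server and deduct O - M*Q. The meaning of M (taken here as the number of
    simulation requests charged for) is an assumption; the patent only states
    the formula O - M*Q."""
    idle = sorted((s for s in servers if not s["busy"]),
                  key=lambda s: s["priority"], reverse=True)
    scheduled, skipped = [], []
    for req in queue:
        if not idle:
            break                           # every simulation server already has a task
        O, Q = users[req["user"]], req["consume_rate"]
        if Q <= O:
            server = idle.pop(0)
            server["busy"] = True
            users[req["user"]] = O - M * Q  # new accumulated points for the initiator
            scheduled.append((req["id"], server["id"]))
        else:
            skipped.append(req)             # points do not cover Q: skip in this cycle
    # Requests that could not be charged are re-ordered by accumulated points, large to small.
    skipped.sort(key=lambda r: users[r["user"]], reverse=True)
    return scheduled, skipped
```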
  • After it is detected that the simulation processing of a target candidate simulation request is completed, the target simulation data corresponding to the target candidate simulation request is obtained from each target simulation server that has simulated the target candidate simulation request; the pieces of target simulation data are spliced into the target simulation result and sent to the target computer device; the target computer device is the device that sent the target candidate simulation request.
  • The target simulation data also includes at least one of front-server data and post-server data; the front-server data is used to indicate the server that simulated the target candidate simulation request before the target simulation data was obtained; the post-server data is used to indicate the server that simulates the target candidate simulation request after the target simulation data is obtained.
  • A database is set in the control server (optionally, a separate database server can also be set up to store the database), which is used to record each simulation request and the segmentation information of its corresponding simulation result.
  • Each data file (that is, the simulation data obtained by simulation) records the simulation server where the previous data file is located (that is, the front-server data) and the simulation server where the next data file is located (that is, the post-server data).
  • For example, if the previous data file of data file A-2-2 is A-1-2 and the next data file is A-3-1, the location of the previous data file A-1-2 is recorded at the head of A-2-2 and the location of the next data file A-3-1 is recorded at the end of A-2-2; each piece of location information indicates the ID of the corresponding simulation server.
  • The local computer of the engineer who sends out the simulation request can access all the simulation servers, obtain the complete simulation running sequence from the data files, and thus assemble the complete simulation result.
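  • A sketch of how the front/post links just described can be followed to splice the segments of one simulation request; the dictionary layout is an illustrative assumption:

```python
def splice_by_links(files, first_name):
    """Follow the per-file links described above: each simulation data file records
    the location of its predecessor at its head and of its successor at its tail
    (e.g. A-2-2 points back to A-1-2 and forward to A-3-1). `files` maps a file
    name to a dict with 'prev', 'next', 'server_id' and 'payload'."""
    result, name = [], first_name
    while name is not None:
        entry = files[name]
        result.append(entry["payload"])   # read from the server identified by entry["server_id"]
        name = entry["next"]
    return b"".join(result)

files = {
    "A-1-2": {"prev": "A-1-1", "next": "A-2-2", "server_id": 1, "payload": b"seg2"},
    "A-2-2": {"prev": "A-1-2", "next": "A-3-1", "server_id": 2, "payload": b"seg3"},
    "A-3-1": {"prev": "A-2-2", "next": None,    "server_id": 3, "payload": b"seg4"},
}
```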
  • the target simulation data further includes the circuit state of the target simulation server before performing the simulation operation on the target candidate simulation request, and the circuit state after performing the simulation operation.
  • When a simulation server used for simulation has an accident, or the control server has an accident, the local computer of the engineer who sent the simulation request can still access all the simulation servers. From the simulation running sequence recorded in the data files, it can determine on which simulation servers the data file preceding the missing one and the data file following it are located. The circuit state after the simulation stored in the previous data file and the initial simulation conditions stored in the next data file then give the lost segment's initial simulation conditions and its circuit state after simulation, and based on these the missing part can be re-simulated on the local computer. A complete simulation result is thus obtained without having to re-run the entire simulation request from beginning to end.
  • the simulation system deployed through the above solution can quickly and effectively obtain complete simulation results even when an accident occurs on the server.
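  • A sketch of the recovery idea described above: when an intermediate data file is lost, the end state stored in the previous file and the initial conditions stored in the next file bound the segment that must be re-simulated. The rerun_simulation callable is hypothetical:

```python
def recover_missing_segment(prev_file, next_file, rerun_simulation):
    """Recovery sketch: the circuit state at the end of the previous segment
    (stored in the previous file) gives the lost segment's initial conditions,
    and the initial conditions stored in the next file give the lost segment's
    end state; only that span needs to be re-simulated."""
    start_state = prev_file["end_state"]      # circuit state after the previous segment
    end_state = next_file["initial_state"]    # where the next segment started from
    return rerun_simulation(start_state, until=end_state)
```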
  • The simulation data files are stored and read in a distributed manner on the simulation servers, so that the data files produced by the simulation calculations do not need to be transferred between different simulation servers. This keeps the entire simulation server network running smoothly and maintains the speed of the simulation operation, thereby improving simulation efficiency.
  • In summary, a computer device can send an integrated circuit structure to the simulation system through a circuit simulation request. The control server in the simulation system adds the circuit simulation request to the first queuing queue as a candidate simulation request; the control server then sorts the candidate simulation requests in the queuing queue according to their remaining simulation time to obtain the second queuing queue; the control server then performs the second sorting process on the candidate simulation requests in the second queuing queue according to the preset acceleration parameter size, so as to obtain the target queuing queue, so that the simulation servers can simulate each candidate simulation request according to the priority indicated by that queue.
  • The above scheme performs intelligent queuing of simulation requests, reasonably arranges and utilizes server resources, encourages request initiators to submit high-quality simulation requirements instead of relying on repeated trial-and-error simulation on the servers for design, and prevents some long-running simulation requirements from occupying high-quality server resources for a long time, thereby improving the simulation efficiency of integrated circuits.
  • Fig. 4 is a structural block diagram of an integrated circuit automatic parallel simulation device according to an exemplary embodiment.
  • The device includes:
  • the simulation request obtaining module 401 is used to obtain a circuit simulation request; the circuit simulation request is used to request simulation resources to simulate the integrated circuit structure;
  • the first queue acquisition module 402 is configured to add the circuit simulation request as a candidate simulation request to the first queuing queue;
  • the second queue acquisition module 403 is configured to perform a first sorting process on each candidate simulation request in the queue based on the remaining simulation time of each candidate simulation request in the first queue to obtain a second queue;
  • the target queue acquisition module 404 is configured to perform a second sorting process on the second queue according to the acceleration parameter size of each candidate simulation request in the second queue to obtain a target queue;
  • the simulation processing module 405 is configured to perform simulation processing on each candidate simulation request through each simulation server within a target processing period according to the priority indicated by the target queuing queue.
  • the remaining simulation time is used to indicate the remaining processing progress of the candidate simulation request that has started simulation before the target processing cycle
  • the device further includes: a remaining time determining module, configured to determine the remaining simulation time of the newly initiated candidate simulation request obtained within the target processing period as 0.
  • The simulation processing module is further configured to take out a specified number of candidate simulation requests according to the priority indicated by the target queuing queue, and perform acceleration-point detection on the specified number of candidate simulation requests one by one in order of priority;
  • when it is detected that the acceleration points of a first user corresponding to a first simulation request are less than the acceleration parameter of the first simulation request, the first simulation request is skipped;
  • when it is detected that the acceleration points of a second user corresponding to a second simulation request are greater than the acceleration parameter of the second simulation request, the second simulation request is sent to the simulation server for simulation processing.
  • The simulation processing module is further configured to obtain, among the simulation servers, a target simulation server that is in an idle state and has the highest priority, where the priority of a simulation server is used to indicate the simulation processing performance of the simulation server, and to send the second simulation request to the target simulation server for processing.
  • The simulation processing module is further configured to, when it is detected that the simulation processing of the second simulation request has ended, update the acceleration points of the second user to the difference between the acceleration points of the second user and the acceleration parameter of the second simulation request.
  • the device further includes:
  • the simulation data acquisition module is used to obtain, after it is detected that the simulation processing of a target candidate simulation request is completed, the target simulation data corresponding to the target candidate simulation request from each target simulation server that has simulated the target candidate simulation request; the target simulation data further includes at least one of front-server data and post-server data;
  • the front-server data is used to indicate the server that simulated the target candidate simulation request before the target simulation data was obtained;
  • the post-server data is used to indicate the server that simulates the target candidate simulation request after the target simulation data is obtained;
  • the simulation result sending module is used for splicing each of the target simulation data into the target simulation result and sending it to the target computer device; the target computer device is the device that sends the target candidate simulation request.
  • the target simulation data further includes a circuit state of the target simulation server before performing a simulation operation on the target candidate simulation request, and a circuit state after performing the simulation operation.
  • the computer device can send the integrated circuit structure to the simulation system through a circuit simulation request, and the control server in the simulation system then adds the circuit simulation request as a candidate simulation request to the first queuing queue; the control server then sorts the started candidate simulation requests in the queuing queue according to their remaining simulation time to obtain the second queuing queue; the control server then performs a second sorting process on the candidate simulation requests in the second queuing queue according to the preset acceleration parameter, so as to obtain the target queuing queue, so that the simulation servers can simulate each candidate simulation request according to the priority indicated by that order.
  • the above scheme performs intelligent queuing simulation for each simulation request, rationally arranges and utilizes the resources of the simulation server, thereby improving the simulation efficiency of the integrated circuit.
  • FIG. 5 is a schematic diagram of a computer device provided according to an exemplary embodiment of the present application
  • the computer device includes a memory and a processor, the memory is used to store a computer program, and when the computer program is executed by the processor, the above method is implemented.
  • the processor may be a central processing unit (Central Processing Unit, CPU).
  • the processor can also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other chips such as programmable logic devices, discrete gate or transistor logic devices, and discrete hardware components, or combinations of the above-mentioned types of chips.
  • the memory can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as program instructions/modules corresponding to the methods in the embodiments of the present application.
  • the processor executes the various functional applications and data processing by running the non-transitory software programs, instructions, and modules stored in the memory, that is, implements the methods in the above method embodiments.
  • the memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created by the processor, and the like.
  • the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage devices.
  • the memory may optionally include memory located remotely from the processor, and such remote memory may be connected to the processor through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • a computer-readable storage medium is provided for storing at least one computer program, and the at least one computer program is loaded and executed by a processor to implement all or part of the steps in the above method.
  • the computer-readable storage medium can be a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, etc.
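
As a concrete illustration of the two-stage sorting and point-based admission summarized in the module descriptions above, the following is a minimal Python sketch. It is not part of the original disclosure; the data structure, field names, and the use of Python's stable sort are assumptions made only for illustration.

```python
from dataclasses import dataclass

@dataclass
class CandidateRequest:
    request_id: str
    remaining_time: float   # estimated remaining simulation time; 0 for newly submitted requests
    accel_param: float      # acceleration parameter Q chosen by the user (point-consumption speed)
    user_points: float      # accumulated acceleration points O of the submitting user

def build_target_queue(first_queue):
    """First sort by remaining simulation time (ascending), then sort by the
    acceleration parameter (descending); finally keep only requests whose user
    still has enough accumulated points to cover the chosen acceleration parameter."""
    second_queue = sorted(first_queue, key=lambda r: r.remaining_time)
    third_queue = sorted(second_queue, key=lambda r: r.accel_param, reverse=True)
    return [r for r in third_queue if r.user_points >= r.accel_param]

# Example: the request with the larger acceleration parameter is simulated first,
# provided its user has enough accumulated points.
queue = [CandidateRequest("A", 120.0, 2.0, 10.0),
         CandidateRequest("B", 0.0, 4.0, 3.0),     # insufficient points: filtered out this cycle
         CandidateRequest("C", 0.0, 3.0, 8.0)]
print([r.request_id for r in build_target_queue(queue)])   # ['C', 'A']
```

Python's built-in sort is stable, so requests with equal acceleration parameters keep their remaining-time order from the first sorting pass.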

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present application relates to an integrated circuit automated parallel simulation method and simulation apparatus, and in particular to the field of electrical digital data processing. The method includes: obtaining a circuit simulation request; adding the circuit simulation request to a first queuing queue as a candidate simulation request; performing a first sorting process on the candidate simulation requests based on the remaining simulation time of each candidate simulation request, so as to obtain a second queuing queue; performing a second sorting process on the second queuing queue according to the acceleration parameter of each candidate simulation request, so as to obtain a target queuing queue; and performing, according to the priority indicated by the target queuing queue, simulation processing on each candidate simulation request through each simulation server within a target processing period. In the above scheme, the simulation requests are intelligently queued for simulation according to the performance of the simulation servers, the needs of the target users, the simulation running time, and the like, and the simulation server resources are reasonably arranged and utilized, thereby improving the simulation efficiency of the integrated circuit.

Description

集成电路自动化并行仿真方法和仿真装置
本申请要求在2022年1月6日提交中国专利局、申请号为202210007453.1、发明名称为“一种自动化集成电路并行仿真方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及电数字数据处理领域,具体涉及集成电路自动化并行仿真方法和仿真装置。
背景技术
现有技术中,工程师在EDA软件中对集成电路的电路结构设计完成后,为了验证电路的正确性,通常需要在EDA软件中继续对电路进行仿真。
而为了解决本地计算机算力有限的问题,可以设置多台仿真服务器,工程师在EDA软件中对电路结构完成设计后,将仿真请求提交到仿真服务器中,利用仿真服务器对电路进行仿真,此方法解决了本地计算机算力弱的问题,并且多台仿真服务器可以按照工程师提交电路仿真请求的先来后到顺序来进行仿真排序,并辅助以人工干预来临时调整仿真顺序。
上述方案中,人工干预具有一定的主观性,往往不能及时判断出需要进行调整的优先级顺序,从而使得长时间思考得到的优秀的电路设计无法及时获取到仿真资源,导致集成电路仿真的效率较低;
并且,仿真运算得到的数据文件往往是十分巨大的一个文件,多数要占用几百G字节的存储空间,故仿真运算得到的数据文件直接在仿真服务器之间流转需要占用较大的网络带宽,进一步影响集成电路仿真的效率。
发明内容
本申请提供了集成电路自动化并行仿真方法和仿真装置,提高了集成电路仿真效率,技术方案如下:
一方面,提供了一种集成电路自动化并行仿真方法,所述方法用于仿真系统中的控制服务器,所述仿真系统还包括各个仿真服务器,所述方法包括:
获取电路仿真请求;所述电路仿真请求用于请求仿真资源对集成电路结构进行仿真;
将所述电路仿真请求作为候选仿真请求加入第一排队队列;
基于所述第一排队队列的各个候选仿真请求的剩余仿真时间,将所述排队队列中的各个候选仿真请求进行第一次排序处理,获得第二排队队列;
按照所述第二排队队列中的各个候选仿真请求的加速参数大小,对所述第二排队队列进行第二次排序处理,获得目标排队队列;
按照所述目标排队队列所指示的优先级,在目标处理周期内通过所述各个仿真服务器对所述各个候选仿真请求进行仿真处理。
又一方面,提供了一种集成电路自动化并行仿真装置,所述装置包括:
仿真请求获取模块,用于获取电路仿真请求;所述电路仿真请求用于请求仿真资源对集成电路结构进行仿真;
第一队列获取模块,用于将所述电路仿真请求作为候选仿真请求加入第一排队队列;
第二队列获取模块,用于基于所述第一排队队列的各个候选仿真请求的剩余仿真时间,将所述排队队列中的各个候选仿真请求进行第一次排序处理,获得第二排队队列;
目标队列获取模块,用于按照所述第二排队队列中的各个候选仿真请求的加速参数大小,对所述第二排队队列进行第二次排序处理,获得目标排队队列;
仿真处理模块,用于按照所述目标排队队列所指示的优先级,在目标处理周期内通过各个仿真服务器对所述各个候选仿真请求进行仿真处理。
在一种可能的实现方式中,所述剩余仿真时间用于指示在目标处理周期之前已启动仿真的候选仿真请求的剩余处理进度;
所述装置还包括:剩余时间确定模块,用于将在目标处理周期内获取到的新发起的候选仿真请求的剩余仿真时间确定为0。
在一种可能的实现方式中,所述仿真处理模块,还用于,按照目标排队队列所指示的优先级,取出指定数量的候选仿真请求,并按照优先级顺序对所述指定数量的候选仿真请求,逐个进行加速积分检测;所述加速积分为累计积分,所述控制服务器保存有各个用户的属性信息,所述各个用户的属性信息中包含各个用户的累计积分;
当检测到所述指定数量的候选仿真请求中,第一仿真请求所对应的第一用户的加速积分,小于所述第一仿真请求的加速参数时,跳过所述第一仿真请求;所述第一仿真请求的加速参数为所述第一仿真请求中设置的积分消耗速度;
或者,当检测到所述指定数量的候选仿真请求中,第二仿真请求所对应的第二用户的加速积分,大于所述第二仿真请求的加速参数时,将所述第二仿真请求发送至仿真服务器进行仿真处理;所述第二仿真请求的加速参数为所述第二仿真请求中设置的积分消耗速度。
在一种可能的实现方式中,所述仿真处理模块,还用于,在所述各个仿真服务器中,获取处于空闲状态且优先级最高的目标仿真服务器;所述仿真服务器的优先级用于指示所述仿真服务器的仿真处理性能;将所述第二仿真请求发送至所述目标仿真服务器中进行处理。
在一种可能的实现方式中,所述仿真处理模块,还用于,当检测到针对所述第二仿真请求的仿真过程结束,将所述第二用户的加速积分与所述第二仿真请求的加速参数之间的差值,更新为所述第二用户的加速积分。
在一种可能的实现方式中,所述装置还包括:
仿真数据获取模块,用于当检测到对目标候选仿真请求的仿真处理完成后,获取对所述目标候选仿真请求进行过仿真的各个目标仿真服务器中,与所述目标候选仿真请求对应的目标仿真数据;所述目标仿真数据中还包括前置服务器数据与后置服务器数据中的至少一者;所述前置服务器数据用于指示在获取所述目标仿真数据之前对所述目标候选仿真请求进行仿真的服务器;所述后置服务器数据用于指示在获取所述目标仿真数据之后对所述目标候选仿真请求进行仿真的服务器;
仿真结果发送模块,用于将各个所述目标仿真数据拼接为所述目标仿真结果,并发送至目标计算机设备中;所述目标计算机设备为发送所述目标候选仿真请求的设备。
在一种可能的实现方式中,所述目标仿真数据中还包括所述目标仿真服务器在对所述目标候选仿真请求执行仿真操作前的电路状态,以及执行仿真操作后的电路状态。
再一方面,提供了一种计算机设备,所述计算机设备中包含处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现上述集成电路自动化并行仿真方法。
又一方面,提供了一种计算机可读存储介质,所述存储介质中存储有至少一条指令,所述至少一条指令由处理器加载并执行以实现上述的集成电路自动化并行仿真方法。
再一方面,提供了一种计算机程序产品还提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述的集成电路自动化并行仿真方法。
本申请提供的技术方案可以包括以下有益效果:
当需要对集成电路进行仿真时,计算机设备可以将集成电路结构通过电路仿真请求发送至仿真系统中,此时仿真系统中的控制服务器将该电路仿真请求作为候选仿真请求加入第一排队队列中;控制服务器再将排队队列中的候选仿真请求,按照剩余仿真时间进行排序,得到第二排队队列;控制服务器再将第二排队队列中的候选仿真请求按照预先设置的加速参数大小,进行第二次排序处理,从而获得目标排队队列,以便仿真服务器对按照顺序指示的优先级对各个候选仿真请求进行仿真。
并且,当检测到电路仿真请求的仿真处理完成后,控制服务器从各个仿真服务器中获取到对应电路仿真请求执行仿真处理后得到的仿真数据,并将仿真数据进行拼接,从而得到最终的仿真结果;并且各个仿真数据中还指示了在各个仿真数据之前或之后执行的仿真服务器,以及得到各个仿真数据之前或之后的电路状态,即使各个仿真数据中存在一部分仿真数据丢失的情况,控制服务器也可以根据上一次仿真的数据文件中存储的仿真结束时的电路状态和下一次仿真的数据文件中存储的仿真初始条件,得到丢失的本次仿真的仿真初始条件及结束时的电路状态,并根据丢失的本次仿真的仿真初始条件及结束时的电路状态,重新安排仿真服务器进行仿真计算得到当前丢失的部分,从而得到完整的仿真结果,而无需重新把整个仿真请求从头到尾运算一遍。
上述方案根据仿真服务器的性能、目标用户的需求以及仿真运行的时间等,对各个仿真请求进行智能化排队仿真,合理安排和利用仿真服务器资源,同时利用仿真服务器对仿真数据文件进行分布式存储和读取,使得仿真运算得到的数据文件无需在不同的仿真服务器之间流转,保证了整个仿真服务器网络运行通畅,确保了仿真运行的速度,从而提高了集成电路的仿真效率。
附图说明
为了更清楚地说明本申请具体实施方式或现有技术中的技术方案,下面将对具体实施方式或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施方式,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是根据一示例性实施例示出的一种仿真系统的结构示意图。
图2是根据一示例性实施例示出的集成电路自动化并行仿真方法的方法流程图。
图3是根据一示例性实施例示出的集成电路自动化并行仿真方法的方法流程图。
图4是根据一示例性实施例示出的集成电路自动化并行仿真装置的结构方框图。
图5是根据本申请一示例性实施例提供的一种计算机设备示意图。
具体实施方式
下面将结合附图对本申请的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
应理解,在本申请的实施例中提到的“指示”可以是直接指示,也可以是间接指示,还可以是表示具有关联关系。举例说明,A指示B,可以表示A直接指示B,例如B可以通过A获取;也可以表示A间接指示B,例如A指示C,B可以通过C获取;还可以表示A和B之间具有关联关系。
在本申请实施例的描述中,术语“对应”可表示两者之间具有直接对应或间接对应的关系,也可以表示两者之间具有关联关系,也可以是指示与被指示、配置与被配置等关系。
本申请实施例中,“预定义”可以通过在设备(例如,包括终端设备和网络设备)中预先保存相应的代码、表格或其他可用于指示相关信息的方式来实现,本申请对于其具体的实现方式不做限定。
图1是根据一示例性实施例示出的一种仿真系统的结构示意图。该仿真系统中包含控制服务器110以及各个仿真服务器120。其中,各个仿真服务器120与控制服务器110之间通过通信网络进行数据通信,该通信网络可以是有线网络也可以是无线网络。
可选的,该仿真系统中还包含有终端130,该终端130可以是工程师用于设计集成电路的计算机设备,当工程师通过该终端130设计出集成电路后,可以通过该终端中安装的应用程序,将该集成电路对应的结构数据生成电路仿真请求,并发送至仿真系统的控制服务器110中,以便控制服务器110控制各个仿真服务器对该电路仿真请求进行仿真处理。
可选的,终端130中安装有具有电路设计功能的应用程序,该终端130可以运行该具有电路设计功能的应用程序,并在接收到用户的指定操作时,生成对应的集成电路数据,本申请实施例对此不做限定。
该终端130还可以是具有数据传输接口的终端设备,该数据传输接口用于接收其他计算机设备所生成的集成电路数据以构建电路仿真请求。
可选的,该终端130可以是智能手机、平板电脑,膝上便携式笔记本电脑等移动终端,也可以是台式电脑、投影式电脑等终端,或是具有数据处理组件的智能终端,本申请实施例对此不设限制。
控制服务器110或仿真服务器120可以实现为一台服务器,其可以是物理服务器,也可以实现为云服务器。在一种可能的实现方式中,控制服务器110是终端130中应用程序的后台服务器。
在一种可能的实现方式中,在仿真服务器完成对应(电路)仿真请求的仿真操作后,用于分配仿真请求的控制服务器取出仿真请求对应的仿真结果,并发送至发出仿真请求的工程师的本地计算机中,具体为:
在用于分配仿真请求的服务器(即控制服务器)中建立一个数据库,记录仿真请求及对应的仿真结果分段信息,例如:
仿真请求A,被用于运行仿真的服务器(即仿真服务器)1、用于运行仿真的服务器2、用于运行仿真的服务器3分别执行了若干时间,在用于运行仿真的服务器1上第一次执行的数据文件记为A-1-1,在用于运行仿真的服务器1上第二次执行的数据文件记为A-1-2,在用于运行仿真的服务器2上第一次执行的数据文件记为A-2-1,在用于运行仿真的服务器3上第一次执行的数据文件记为A-3-1,在用于运行仿真的服务器3上第二次执行的数据文件记为A-3-2,以此类推;
用于分配仿真请求的服务器记录了所有仿真请求被执行对应的用于运行仿真的服务器及运行仿真的先后顺序,各个数据文件存储于各个用于运行仿真的服务器中;
用于分配仿真请求的服务器根据记录的顺序把各个数据文件从执行仿真的服务器中读出,并整合为最终的仿真结果,发送至发出仿真请求的工程师的本地计算机中。
可选的,上述服务器可以是独立的物理服务器,也可以是由多个物理服务器构成的服务器集群或者是分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、以及大数据和人工智能平台等技术云计算服务的云服务器。
可选的,该系统还可以包括管理设备,该管理设备用于对该系统进行管理(如管理各个模块与服务器之间的连接状态等),该管理设备与服务器之间通过通信网络相连。可选的,该通信网络是有 线网络或无线网络。
可选的,上述的无线网络或有线网络使用标准通信技术和/或协议。网络通常为因特网,但也可以是其他任何网络,包括但不限于局域网、城域网、广域网、移动、有线或无线网络、专用网络或者虚拟专用网络的任何组合。在一些实施例中,使用包括超文本标记语言、可扩展标记语言等的技术和/或格式来代表通过网络交换的数据。此外还可以使用诸如安全套接字层、传输层安全、虚拟专用网络、网际协议安全等常规加密技术来加密所有或者一些链路。在另一些实施例中,还可以使用定制和/或专用数据通信技术取代或者补充上述数据通信技术。
图2是根据一示例性实施例示出的集成电路自动化并行仿真方法的方法流程图。该方法由计算机设备执行,该计算机设备可以是如图1所示的仿真系统中的控制服务器。如图2所示,该集成电路自动化并行仿真方法可以包括如下步骤:
步骤201,获取电路仿真请求。
该电路仿真请求用于请求仿真资源对集成电路结构进行仿真。
在一种可能的实现方式中,该电路仿真请求中包含集成电路结构数据。
即终端在获取到集成电路结构数据后,可以基于该集成电路结构数据生成对应的电路仿真请求,并发送至仿真系统中的控制服务器中。
步骤202,将该电路仿真请求作为候选仿真请求加入第一排队队列。
当控制服务器获取到电路仿真请求后,先不对其中的集成电路结构进行仿真处理,而是先将其加入第一排队队列中,从而判断该电路仿真请求与其他电路仿真请求的仿真优先级关系。
在一种可能的实现方式中,计算机设备按照各个电路仿真请求的获取时间,将各个电路仿真请求作为候选仿真请求加入第一排队队列。也就是说,先获取到的电路仿真请求在第一排队队列中的顺序更靠前,而后获取到的电路仿真请求在第一排队队列中的顺序更靠后。
步骤203,基于该第一排队队列的各个候选仿真请求的剩余仿真时间,将该排队队列中的各个候选仿真请求进行第一次排序处理,获得第二排队队列。
计算机设备,即图1中的控制服务器需要在下一个仿真周期,选取候选仿真请求以推送至仿真服务器进行仿真,此时控制服务器可以对获取到的在第一排队队列中的各个候选仿真请求的剩余仿真时间进行计算,并通过候选仿真请求的剩余仿真时间,对各个候选仿真请求进行第一次排序处理,获得第二排队队列。
在本申请实施例的一种可能的实现方式中,计算机设备将第一排队队列中各个候选仿真请求,按照剩余仿真时间的大小进行顺序调整,从而获得第二排队队列。
其中,顺序调整的过程可以为,逐个比较相邻两个候选仿真请求之间的剩余仿真时间的大小关系,直至相邻两个候选仿真请求中顺序靠前的剩余仿真时间不大于顺序靠后的剩余仿真时间为止。
在一种可能的实现方式中,该第一排队队列中的各个候选仿真请求的剩余仿真时间,可以是计算机设备根据各个候选仿真请求的电路结构以及参数值数量进行模拟得到的。
例如,当候选仿真请求中的电路结构较为复杂时,参数值数量较多时,仿真服务器对该电路结构进行仿真时所需要拟合出的函数也应较复杂,仿真请求所需要花费的时间也应更多。
此时,各个候选仿真请求的剩余仿真时间,即代表着各个候选仿真请求被处理所需要花费的预计时间,当将剩余仿真时间较低的候选仿真请求的顺序摆在前面进行处理时,可以使得仿真服务器在指定时间内可以处理更多的仿真请求,尽可能避免由于参数较多较为复杂的一个或几个电路仿真请求,极大地拖慢其他电路仿真请求的处理。
在另一种可能的实现方式中,该剩余仿真时间用于指示在目标处理周期之前,已启动仿真的候选仿真请求的剩余处理进度。
在如图1所示的仿真系统中,各个仿真服务器按照周期,对各个候选仿真请求进行处理,当一个周期计算完毕后,各个仿真服务器重新分配计算资源,以便对排队队列中剩余的,未处理完或者未开始处理的候选仿真请求进行处理。
此时在执行完一个周期的仿真任务后,在后一个周期的仿真任务开始之前(即目标处理周期之前),计算机设备统计此时排队队列中未处理完的或者未开始处理的候选仿真请求,并获取其中已经开始处理但未处理完成的候选仿真请求,根据其在上一个周期的处理进度以及处理时间,计算出处理完剩余进度所需要花费的剩余仿真时间。
此时,当计算机设备对已经启动仿真处理但未处理完的候选仿真请求,按照剩余仿真时间进行排序时,剩余仿真时间大的顺序靠后。
而剩余仿真时间大,则说明已经经历过至少一个周期的候选仿真请求,仍然需要很多的仿真资源进行仿真处理,可能会拖慢其他候选仿真请求的处理进度,因此需要将其顺序放在后面。
在一种可能的实现方式中,计算机设备将在目标处理周期内获取到的新发起的候选仿真请求的剩余仿真时间确定为0。
此时计算机设备在对第一排队队列进行排序处理时,从整体来看,会优先将未被处理过的候选仿真请求排列的靠前,以便优先对未处理过的候选仿真请求进行处理。
在一种可能的实现方式中,计算机设备将上一个周期内未进行仿真处理的候选仿真请求的剩余仿真时间减半。
在上一个周期未进行仿真处理的候选仿真请求至少存在两种情况,一是该候选仿真请求一直未来得及进行仿真处理,或者是在上一周期内被加入排队队列中,并未处理,此时该候选仿真请求在控制服务器获取到该候选仿真请求的处理周期内,剩余仿真时间应该被设置为0,因此将该剩余仿真时间减半后仍然为0;
另一种情况是,当该候选仿真请求在两个周期之前,被发送至仿真服务器进行仿真处理,但并未处理完成,而在上一个周期又由于未排到队并未被处理,此时将该候选仿真请求的剩余仿真时间减半。
即当候选仿真请求在超过一个周期未被处理时,为了避免该候选仿真请求由于预计处理时间过长被长时间搁置,将该剩余仿真时间减半,从而提高该候选仿真请求的优先级,使得处理时间较长的候选仿真请求也可以被控制服务器发送至仿真服务器中处理。
步骤204,按照该第二排队队列中的各个候选仿真请求的加速参数大小,对该第二排队队列进行第二次排序处理,获得目标排队队列。
在计算机设备将第一排队队列中的各个候选仿真请求,进行第一次排序处理后,得到了第二排队队列,此时第二排队队列中的各个候选仿真请求,考虑到了候选仿真请求所需要处理的仿真时间,但在实际应用中,各个候选仿真请求理论上存在优先顺序,即有些候选仿真请求中的集成电路结构,尽管所需要消耗的资源多,但可能是非常重要的,急需仿真结果,此时计算机设备在生成候选仿真请求时,只需要设置较高的加速参数,即可以使得仿真服务器优先对该候选仿真请求进行仿真处理。
在一种可能的实现方式中,获取该各个候选仿真请求所对应的目标用户的权限值;将该目标用户的权限值设定为该各个候选仿真请求的加速参数。
为了进一步区分各个候选仿真请求的重要性,控制服务器中可以预先对允许提出候选仿真请求 的各个用户设置各自对应的权限值,当权限值较大时,则说明该用户的权限更大。目标用户在候选仿真请求中请求加速处理时,控制服务器将各个目标用户的权限值确定为该候选仿真请求的加速参数,再通过各个候选仿真请求的加速参数,比较执行各个候选仿真请求的优先级,从而实现对排队队列中的各个候选仿真请求的顺序调整。
在一种可能的实现方式中,按照该第二排队队列中的各个候选仿真请求的加速参数大小,对该第二排队队列进行第二次排序处理,获得第三排队队列;获取各个候选仿真请求分别对应的各个目标用户的加速积分;将该第三排队队列的各个候选仿真请求中,对应的目标用户的加速积分大于该加速参数的候选仿真请求,按照加速参数大小构建为该目标排队队列。
此时该候选仿真请求中的加速参数,可以是在候选仿真请求的生成过程中,用户通过计算机设备设置的。
在控制服务器获取到第二排队队列中的各个候选仿真请求的加速参数大小后,需要与控制服务器中存储的,各个目标用户的加速积分进行比较,也就是比较目标用户设置的加速参数大小是否超过了该目标用户被允许的范围。
当检测到候选仿真请求的加速参数,大于该候选仿真请求的目标用户的加速积分时,则说明该候选仿真请求的加速参数,超过了目标用户被允许的范围,此时忽略该候选仿真请求,在仿真服务器的当前处理周期内不做仿真处理。
因此控制服务器可以将对应的目标用户的加速积分大于该加速参数的候选仿真请求,构建为目标排队队列,以指示在当前处理周期内,各个候选仿真请求的仿真处理优先级。
步骤205,按照该目标排队队列所指示的优先级,在目标处理周期内通过各个仿真服务器对该各个候选仿真请求进行仿真处理。
在一种可能的实现方式中,本申请实施例中的各个仿真服务器按照性能进行优先级排序,性能越高的仿真服务器具有越高的优先级。
此时控制服务器在获取到目标排队队列后,可以按照目标排队队列所指示的优先级,逐个取出候选仿真请求,并按照仿真服务器优先级,分配至空闲状态的仿真服务器中以进行仿真处理,从而得到与该候选仿真请求对应的仿真结果。
例如,当取出目标排队队列中的某一个候选仿真请求时,此时控制服务器检测各个仿真服务器的状态,并将此时处于空闲状态的仿真服务器中,优先级最高(即性能最高)的仿真服务器作为对该候选仿真请求进行仿真处理的服务器。
因此,随着在目标排队队列,按照优先级对各个候选仿真请求进行读取,优先级越低,越后取出的候选仿真请求,分配到的仿真服务器的处理性能理论上也越低,从而实现将各个候选仿真请求按照优先级分配给不同性能的仿真服务器在目标处理周期(即当前处理周期)执行仿真操作,从而实现对资源的合理利用。
当控制服务器对排队队列中的各个候选仿真请求,分别按照剩余仿真时间以及加速参数大小进行排序,形成的目标排队队列后,控制服务器在对目标排队队列中的候选仿真请求进行处理时,根据仿真服务器的性能、目标用户的需求以及仿真运行的时间等,对各个仿真请求进行智能化排队仿真,合理安排和利用仿真服务器资源。
综上所述,当需要对集成电路进行仿真时,计算机设备可以将集成电路结构通过电路仿真请求发送至仿真系统中,此时仿真系统中的控制服务器将该电路仿真请求作为候选仿真请求加入第一排队队列中;控制服务器再将排队队列中的已启动的候选仿真请求,按照剩余仿真时间进行排序,得 到第二排队队列;控制服务器再将第二排队队列中的候选仿真请求按照预先设置的加速参数大小,进行第二次排序处理,从而获得目标排队队列,以便仿真服务器对按照顺序指示的优先级对各个候选仿真请求进行仿真。上述方案根据仿真服务器的性能、目标用户的需求以及仿真运行的时间等,对各个仿真请求进行智能化排队仿真,合理安排和利用仿真服务器资源,从而提高了集成电路的仿真效率。
图3是根据一示例性实施例示出的集成电路自动化并行仿真方法的方法流程图。该方法由计算机设备执行,该计算机设备可以是如图1所示的仿真系统中的控制服务器。如图3所示,该集成电路自动化并行仿真方法可以包括如下步骤:
步骤301,获取电路仿真请求。
在本申请实施例中,工程师在集成电路的电路结构设计完成后,将该电路结构生成的仿真请求提交至仿真系统中的控制服务器。
在本申请实施例的一种可能的实现方式中,仿真系统包括一台用于分配仿真请求的服务器(即图1中的控制服务器)和多台用于运行仿真的服务器(即图1中的仿真服务器);且多台用于运行仿真的服务器按照性能好坏从高到低依次进行排序,即位于多台用于运行仿真的服务器队列头部的第一台服务器性能最高,位于尾部的最后一台服务器性能最低。
步骤302,将该电路仿真请求作为候选仿真请求加入第一排队队列。
在一种可能的实现方式中,在将该电路仿真请求作为候选仿真请求加入第一排队队列之前,控制服务器还可以对该电路仿真请求进行查重分析处理。
在控制服务器中,通过分析电路结构的网表文件,对该电路结构进行查重分析,确认该电路结构或者与该电路结构相似的电路结构是否提交过仿真请求;
网表文件主要分四部分:元器件的类型、元器件的参数值、各元器件之间连接关系及仿真条件,通过将之前提交过仿真请求的电路结构的网表文件中各个部分与该电路结构的网表文件中的各个部分进行对比,即可以确认该电路结构或者与该电路结构相似的电路结构是否提交过仿真请求;
若该电路结构提交过仿真请求,则在已完成仿真的缓存中查找该电路结构的仿真结果,如果在已完成仿真的缓存中找到仿真结果,则直接得到仿真结果并发送至工程师的计算机设备中;如果在已完成仿真的缓存中未找到仿真结果,说明该仿真结果的缓存已经被清理,则标记仿真重复次数,并将该电路结构的仿真请求加入排队队列;若之前提交的该电路结构正在仿真,则忽略请求并告知工程师之前提交的该电路结构正在仿真。
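A minimal sketch of the netlist-based duplicate check described above, written in Python purely for illustration (it is not part of the original disclosure). The four netlist parts are represented as dictionary entries, and the cache lookup and repetition count M follow the behaviour described in the preceding paragraphs; all names are assumptions.

```python
def check_duplicate(request_netlist, history, result_cache):
    """history      : list of previously submitted netlists, each a dict with the four parts:
                      component types, component parameter values, connections, simulation conditions.
    result_cache : dict mapping a netlist key to a finished simulation result.
    Returns ("cached", result), ("repeat", M) or ("new", None)."""
    parts = ("component_types", "component_params", "connections", "sim_conditions")
    key = tuple(sorted((p, str(request_netlist.get(p))) for p in parts))
    repeats = sum(
        1 for old in history
        if all(old.get(p) == request_netlist.get(p) for p in parts)
    )
    if repeats == 0:
        return "new", None                      # first submission: repetition count M = 1
    if key in result_cache:
        return "cached", result_cache[key]      # reuse the finished simulation result directly
    return "repeat", repeats + 1                # cached result already cleared: mark M and re-queue
```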
步骤303,基于该第一排队队列的各个候选仿真请求的剩余仿真时间,将该排队队列中的各个候选仿真请求进行第一次排序处理,获得第二排队队列。
在本申请实施例的一种可能的实现方式中,计算机设备计算各个被执行的仿真请求(已经在仿真服务器中运行的仿真请求)在上一周期内的平均仿真速度,根据仿真速度和当前进度,预估剩余仿真时间;
将新发起的仿真请求(即刚加入排队队列,准备放入仿真服务器中进行仿真的仿真请求)的预估剩余仿真时间设置为0;
需要说明的是,预估的剩余仿真时间与实际仿真剩余时间可能并不一致,根据电路状态的变化,实际仿真速度会有很大的变化幅度,但是由于无法得到准确的实际仿真剩余时间,因此本申请根据预估仿真时间进行仿真队列的排序。
在一种可能的实现方式中,计算机设备对上一周期内未被执行的仿真请求的预估剩余仿真时间 (该预估剩余仿真时间可以为之前某个被执行的周期内计算出来的预估剩余仿真时间)进行提序运算,即通过提序运算减少上一周期内未被执行的仿真请求的预估剩余仿真时间,从而保证上一周期内未被执行的仿真请求在本次周期内的执行顺序提前;
提序运算可以为任意可行的算法,其中一种可选的算法为:将上一周期内未被执行的仿真请求的预估剩余仿真时间减半,并将减半后的预估剩余仿真时间作为本次周期内该仿真请求的预估剩余时间。
计算机设备根据预估的剩余时间对排队队列中所有的仿真请求进行从小到大的排序。
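The estimate of remaining simulation time and the order-promotion halving rule described above can be sketched as follows. This is an illustrative assumption of one way to compute them, not a formula prescribed by the disclosure, and the variable names are invented.

```python
def estimate_remaining_time(progress_now, progress_period_start, progress_total, period_length):
    """Average simulation speed over the previous period, extrapolated to the
    work that is still left. All progress values use the same (arbitrary) unit."""
    gained = progress_now - progress_period_start
    if gained <= 0:
        return float("inf")        # assumption: no measurable progress, so it sorts to the back
    speed = gained / period_length                  # average speed in the last period
    return (progress_total - progress_now) / speed

def promoted_remaining_time(previous_estimate, was_executed_last_period):
    """Order-promotion rule: a request that was not executed in the previous
    period has its estimated remaining time halved, so it moves forward;
    newly submitted requests are simply given an estimate of 0."""
    return previous_estimate if was_executed_last_period else previous_estimate / 2.0
```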
步骤304,按照该第二排队队列中的各个候选仿真请求的加速参数大小,对该第二排队队列进行第二次排序处理,获得目标排队队列。
在本申请实施例中,该加速参数可以被设定为目标用户所对应的加速积分的积分消耗速度。
在一种可能的实现方式中,控制服务器中存在各个请求发起方(也就是目标用户,如工程师)对应的属性信息,请求发起方无论现在是否有发起的仿真请求,在控制服务器中各个请求发起方,每隔单位时间(一个周期)都会获得积分,各个请求发起方的累计积分O,积分获得速度P,保存在控制服务器的属性信息中;
其中,单位时间(一个周期)获得的积分可以根据实际情况进行设置。
目标用户在发起仿真请求时可以在仿真请求中设置单位时间内消耗多少积分来使用仿真资源,积分消耗速度Q(也就是加速参数);
例如,其中积分消耗速度Q的取值可以符合以下规则:
(1)Q的设置范围必须大于等于P,默认使用2倍P;
(2)请求发起方(工程师)也可以根据自身需求自行设置积分消耗速度Q,具体为:请求发起方(工程师)在本地电脑中向控制服务器发出仿真请求的同时,提供自行设置的积分消耗速度Q;
(3)若不是重复仿真的仿真请求,则将仿真请求的次数M计为1;
若是重复仿真的仿真请求,则将仿真请求的次数M计为具体提出仿真的次数。
若靠近排队队列头部的仿真请求对应的积分消耗速度Q小于下一个仿真请求对应的积分消耗速度Q,则交换这两项仿真请求在排队队列中的位置;
若靠近队列头部的仿真请求对应的积分消耗速度Q不小于下一个仿真请求对应的积分消耗速度Q,则不做操作;
反复操作直至不存在靠近队列头部的仿真请求对应的积分消耗速度Q小于下一个仿真请求对应的积分消耗速度Q的情况为止;
积分消耗速度Q可以由请求发起方自行设置,因此,当请求发起方(工程师)认为自己的电路只是需要验证一下部分功能,或者由于项目进度等原因,需要尽快对电路进行仿真时,请求发起方(工程师)即可自行设置一个较高的积分消耗速度Q,从而使得对应的候选仿真请求具有较高的积分消耗速度Q(即较高的加速参数),从而使得候选仿真请求快速排到目标排队队列的前面,从而牺牲仿真运行时间,但是减少仿真排队时间。
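The repeated adjacent-swap procedure described above amounts to a bubble sort of the queue into descending order of the point-consumption speed Q. A literal Python rendering, with the queue items represented as dictionaries (an assumption made only for illustration):

```python
def reorder_by_consumption_speed(queue):
    """queue: list of dicts, each carrying its point-consumption speed under key "Q".
    Swap any adjacent pair in which the item nearer the head has a smaller Q
    than the next item; repeat until no such pair remains."""
    items = list(queue)
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(items) - 1):
            if items[i]["Q"] < items[i + 1]["Q"]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items

# Example: requests already ordered by estimated remaining time are re-ordered by Q.
print(reorder_by_consumption_speed([{"id": 1, "Q": 2}, {"id": 2, "Q": 6}, {"id": 3, "Q": 4}]))
# -> [{'id': 2, 'Q': 6}, {'id': 3, 'Q': 4}, {'id': 1, 'Q': 2}]
```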
步骤305,按照目标排队队列所指示的优先级,取出指定数量的候选仿真请求,并按照优先级顺序对该指定数量的候选仿真请求,逐个进行加速积分检测。
当通过上述步骤获取到目标排队队列后,可以取出指定数量的候选仿真请求,并逐个进行加速积分检测。例如该指定数量可以是根据仿真服务器的数量确定的,或者该指定数量可以是根据当前处于闲置状态的仿真服务器的数量确定的。
步骤306A,当检测到该指定数量的候选仿真请求中,第一仿真请求所对应的第一用户的加速积分,小于该第一仿真请求的加速参数时,跳过该第一仿真请求。
可选的,该加速积分为该第一用户(即某一请求发起方)的累计积分O,该第一仿真请求的加速参数为该第一用户提出的仿真请求所对应的积分消耗速度Q。
在本申请实施例中,控制服务器在生成的排队队列中,从该排队队列头部依次取出仿真请求,比较该仿真请求对应的积分消耗速度Q与该仿真请求对应的请求发起方对应的累计积分O;
若积分消耗速度Q大于累计积分O,则对该仿真请求不做任何操作,继续比较下一个仿真请求。
步骤306B,当检测到该指定数量的候选仿真请求中,第二仿真请求所对应的第二用户的加速积分,大于该第二仿真请求的加速参数时,将该第二仿真请求发送至仿真服务器进行仿真处理。
在一种可能的实现方式中,按照该目标排队队列所指示的优先级,在该各个仿真服务器中,获取处于空闲状态且优先级最高的目标仿真服务器,将该第二仿真请求发送至该目标仿真服务器中进行处理。
在一种可能的实现方式中,当检测到针对该第二仿真请求的仿真过程结束,将该第二用户的加速积分与该第二仿真请求的加速参数之间的差值,更新为该第二用户的加速积分。
例如,该候选仿真请求的处理过程可以如下步骤所示:
1)在生成的排队队列中,控制服务器从该排队队列头部依次取出仿真请求,比较该仿真请求对应的积分消耗速度Q与该仿真请求对应的请求发起方对应的累计积分O;
2)若积分消耗速度Q不大于累计积分O,则用于分配仿真请求的服务器从仿真服务器队列头部开始查找第一台未被安排仿真请求任务的服务器(也就是优先级最高,处理能力最强的仿真服务器),并把此次取出的仿真请求安排到该仿真服务器中执行,并计算O-M*Q,把结果作为新的累计积分O;
3)反复操作直至仿真请求队列中的所有剩余仿真请求都不符合积分消耗速度Q不大于累计积分O的条件或所有仿真服务器都被安排仿真请求任务。
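The three steps just listed can be sketched as the following dispatch loop. It is illustrative only; the dictionaries, field names, and in-place updates are assumptions rather than the disclosed implementation.

```python
def dispatch_cycle(target_queue, servers, user_points):
    """target_queue : requests in priority order, each a dict with keys
                      "id", "user", "Q" (point-consumption speed) and "M" (repetition count)
    servers      : dicts with keys "name" and "busy", ordered from the
                   highest-performance simulation server to the lowest
    user_points  : dict mapping each user to their accumulated points O
    Assign each affordable request to the first idle server and charge O - M*Q."""
    assignments = []
    for req in target_queue:
        O, Q, M = user_points[req["user"]], req["Q"], req["M"]
        if Q > O:
            continue                                  # points insufficient: skip for this cycle
        server = next((s for s in servers if not s["busy"]), None)
        if server is None:
            break                                     # every simulation server is occupied
        server["busy"] = True
        user_points[req["user"]] = O - M * Q          # new accumulated points O
        assignments.append((req["id"], server["name"]))
    return assignments
```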
在一种可能的实现方式中,如果用于运行仿真的服务器未被安排满(即仿真服务器处于空闲状态),且仿真请求的目标排队队列中存在未执行的仿真请求,则执行以下操作。
把仿真请求的目标排队队列中的所有剩余仿真请求按照对应的请求发起方(工程师)的累计积分O从大到小排序;
从排序后的队列头部开始依次取出仿真请求;
从用于运行仿真的服务器头部开始查找第一台未被安排仿真请求任务的服务器,并把此次取出的候选仿真请求发送至仿真服务器中执行,并计算O-M*Q,把结果作为新的累计积分O。
在一种可能的实现方式中,如果存在仿真服务器处于空闲状态,且仿真请求的目标排队队列中无未被安排的仿真请求,则进行以下操作。
控制服务器找到最后被安排的仿真请求,计算该仿真请求对应的积分消耗速度Q与积分获得速度P的差值R;
给本周期内所有被安排的仿真请求对应的请求发起方对应的累计积分O加上差值R为最新的累计积分O。
在一种可能的实现方式中,某些仿真服务器在处理候选仿真请求时,很快就完成了仿真运算,并不需要花费一整个处理周期,而此时该仿真服务器重新处于闲置状态。
此时控制服务器可以取出完成了仿真运算的仿真服务器中的运算结果,并在目标排队队列中重新确定出候选仿真请求并发送至该仿真服务器中进行仿真。
即若本周期结束时,某个或某些仿真服务器中的仿真请求执行完毕,则控制服务器实时取出该仿真请求对应的仿真结果。
若本周期内,某个或某些仿真服务器中的仿真请求执行完毕,控制服务器可以等到本周期结束后,再取出该仿真请求及其对应的仿真结果;也可以实时取出该仿真请求及其对应的仿真结果后,对空出来的仿真服务器实时安排新的仿真请求,实时安排新的仿真请求的具体步骤如下:
(1)判断仿真请求的排队队列中是否存在未被安排的剩余仿真请求,若不存在,则对该空出来的仿真服务器不进行任何操作;若存在,则根据预估的剩余时间对剩余仿真请求的排队队列中的所有仿真请求进行从小到大的排序;
(2)比较通过上述排序后的排队队列中任意相邻的两项仿真请求对应的积分消耗速度Q,具体为:
若靠近排队队列头部的仿真请求对应的积分消耗速度Q小于下一个仿真请求对应的积分消耗速度Q,则交换这两项仿真请求在排队队列中的位置;
若靠近队列头部的仿真请求对应的积分消耗速度Q不小于下一个仿真请求对应的积分消耗速度Q,则不做操作;
反复操作直至不存在靠近队列头部的仿真请求对应的积分消耗速度Q小于下一个仿真请求对应的积分消耗速度Q的情况为止。
在上述生成的排队队列中,从该排队队列头部依次取出仿真请求,比较该仿真请求对应的积分消耗速度Q与该仿真请求对应的请求发起方对应的累计积分O;
若积分消耗速度Q不大于累计积分O,则用于分配仿真请求的服务器从用于运行仿真的服务器队列头部开始查找第一台未被安排仿真请求任务的服务器,并把此次取出的仿真请求安排到该服务器中执行,并计算O-M*Q,把结果作为新的累计积分O;
若积分消耗速度Q大于累计积分O,则对该仿真请求不做任何操作,继续比较下一个仿真请求;
反复操作直至仿真请求队列中的所有剩余仿真请求都不符合积分消耗速度Q不大于累计积分O的条件或所有用于运行仿真的服务器都被安排仿真请求任务。
在一种可能的实现方式中,在上述对各个候选仿真请求的操作结束后,检测仿真服务器是否都处于运行状态。
如果空出来的用于运行仿真的服务器仍未被安排满,且剩余仿真请求的排队队列中仍存在未被安排的仿真请求,则将仍剩余仿真请求按照对应的请求发起方(目标用户)的累计积分O从大到小排序;
从排序后的队列头部开始依次取出仿真请求;
从仿真服务器头部开始查找第一台未被安排仿真请求任务的服务器,并把此次取出的仿真请求安排到该服务器中执行,并计算O-M*Q,把结果作为新的累计积分O。
在一种可能的实现方式中,若某个周期结束时,仿真请求执行完毕,则控制服务器实时取出该仿真请求对应的仿真结果,并将该仿真结果发送至发出仿真请求的工程师的本地计算机中;
若某个周期内,仿真请求执行完毕,控制服务器可以等到本周期结束后,取出该仿真请求对应的仿真结果,并发送至发出仿真请求的工程师的本地计算机中;也可以实时取出该仿真请求对应的仿真结果,并发送至发出仿真请求的工程师的本地计算机中。
在一种可能的实现方式中,当检测到对目标候选仿真请求的仿真处理完成后,获取对该目标候选仿真请求进行过仿真的各个目标仿真服务器中与该目标候选仿真请求对应的目标仿真数据;将各 个该目标仿真数据拼接为该目标仿真结果,并发送至目标计算机设备中;该目标计算机设备为发送该目标候选仿真请求的设备。
该目标仿真数据中还包括前置服务器数据与后置服务器数据中的至少一者;该前置服务器数据用于指示在获取该目标仿真数据之前对该目标候选仿真请求进行仿真的服务器;该后置服务器数据用于指示在获取该目标仿真数据之后对该目标候选仿真请求进行仿真的服务器。
即在本申请实施例的一种可能的实现方式中,控制服务器中存在一个数据库(可选的,也可单独设立一个数据库服务器,用于存放该数据库),用于记录仿真请求以及其对应的仿真结果分段信息。而为了防止数据库所在的控制服务器产生意外,在每个数据文件(即仿真得到的仿真数据)的中记录上一段数据文件所在的仿真服务器(也就是前置服务器数据)及下一段数据文件所在的仿真服务器(也就是后置服务器数据),例如:
数据文件A-2-2的前一个数据文件是A-1-2,下一个数据文件是A-3-1,那么在A-2-2的头部记录上一个数据文件的位置A-1-2,在A-2-2的尾部记录下一个数据文件的位置A-3-1;此时各个位置信息即指示着各个仿真服务器的ID。
因此发出仿真请求的工程师的本地计算机可以访问所有的仿真服务器,根据数据文件得到完整的仿真运行顺序,从而得到完整的仿真结果,以便发送给本地计算机。
在一种可能的实现方式中,该目标仿真数据中还包括该目标仿真服务器在对该目标候选仿真请求执行仿真操作前的电路状态,以及执行仿真操作后的电路状态。
也就是说,为防止某部分数据文件所在的仿真服务器出现意外,使得该部分数据文件丢失,从而导致整个仿真结果不完整,故此时,在所有数据文件中均存储本次仿真的仿真初始条件(即上一次仿真结束时的初始电路状态)及本次仿真操作后的电路状态(即下一次仿真的仿真初始条件);
当本次用于仿真服务器出现意外,且数据库所在的控制服务器正常工作,此时,丢失了本次仿真的数据文件后,即可通过控制服务器得到上一次仿真的数据文件和下一次仿真的数据文件分别位于哪个仿真服务器中,从而根据上一次仿真的数据文件中存储的仿真结束时的电路状态和下一次仿真的数据文件中存储的仿真初始条件,得到丢失的本次仿真的仿真初始条件及仿真操作后的电路状态,并根据丢失的本次仿真的仿真初始条件及仿真操作后的电路状态,重新安排仿真服务器进行仿真计算得到当前丢失的部分,从而得到完整的仿真结果,而无需重新把整个仿真请求从头到尾运算一遍;
当本次用于仿真的仿真服务器出现意外,控制服务器也产生意外,此时,丢失了本次仿真的数据文件后,发出仿真请求的工程师的本地计算机可以访问所有的仿真服务器,根据所有数据文件的仿真运行顺序,得到丢失的本次仿真的上一次仿真的数据文件和下一次仿真的数据文件分别位于哪个仿真服务器中,从而根据上一次仿真的数据文件中存储的仿真操作后的电路状态和下一次仿真的数据文件中存储的仿真初始条件,得到丢失的本次仿真的仿真初始条件及仿真操作后电路状态,并根据丢失的本次仿真的仿真初始条件及仿真操作后电路状态,在本地计算机中重新进行仿真计算得到当前丢失的部分,从而得到完整的仿真结果,而无需重新把整个仿真请求从头到尾运算一遍。
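To illustrate the segment chaining, stitching and recovery just described, here is a minimal Python sketch (not part of the original disclosure). The segment records, their field names and the recovery helper are assumptions made only to show the idea of prev/next links plus the stored circuit states.

```python
def stitch_result(segments):
    """segments: dict mapping a label such as "A-2-1" to a record
    {"prev": label or None, "next": label or None,
     "state_before": ..., "state_after": ..., "data": ...}.
    Follow the prev/next links recorded in every data file to rebuild the
    complete simulation result in execution order."""
    label = next(k for k, v in segments.items() if v["prev"] is None)  # head of the chain
    ordered = []
    while label is not None:
        ordered.append(segments[label]["data"])
        label = segments[label]["next"]
    return ordered

def recovery_window(prev_segment, next_segment):
    """If one segment is lost, its initial condition is the circuit state stored
    at the end of the previous segment, and the state it has to reach is the
    initial condition stored in the next segment; only this window needs to be
    re-simulated instead of rerunning the whole request."""
    return {"initial_state": prev_segment["state_after"],
            "target_state": next_segment["state_before"]}
```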
通过上述方案部署的仿真系统,可以在服务器发生意外时,也能快速有效地得到完整的仿真结果。通过仿真服务器对仿真数据文件进行分布式存储和读取,使得仿真运算得到的数据文件无需在不同的仿真服务器之间流转,从而保证了整个仿真服务器网络运行通畅,确保了仿真运行的速度,提高了仿真效率。
综上所述,当需要对集成电路进行仿真时,计算机设备可以将集成电路结构通过电路仿真请求 发送至仿真系统中,此时仿真系统中的控制服务器将该电路仿真请求作为候选仿真请求加入第一排队队列中;控制服务器再将排队队列中的已启动的候选仿真请求,按照剩余仿真时间进行排序,得到第二排队队列;控制服务器再将第二排队队列中的候选仿真请求按照预先设置的加速参数大小,进行第二次排序处理,从而获得目标排队队列,以便仿真服务器对按照顺序指示的优先级对各个候选仿真请求进行仿真。上述方案根据仿真服务器的性能、目标用户的需求以及仿真运行的时间等,对各个仿真请求进行智能化排队仿真,合理安排和利用服务器资源,鼓励请求发起方提交优质仿真需求,而不是靠反复利用服务器的仿真试错来做设计,并防止一部分需要较长时间的仿真需求长期占用优质服务器资源,从而提高了集成电路的仿真效率。
图4是根据一示例性实施例示出的集成电路自动化并行仿真装置的结构方框图。所述装置包括:
仿真请求获取模块401,用于获取电路仿真请求;所述电路仿真请求用于请求仿真资源对集成电路结构进行仿真;
第一队列获取模块402,用于将所述电路仿真请求作为候选仿真请求加入第一排队队列;
第二队列获取模块403,用于基于所述第一排队队列的各个候选仿真请求的剩余仿真时间,将所述排队队列中的各个候选仿真请求进行第一次排序处理,获得第二排队队列;
目标队列获取模块404,用于按照所述第二排队队列中的各个候选仿真请求的加速参数大小,对所述第二排队队列进行第二次排序处理,获得目标排队队列;
仿真处理模块405,用于按照所述目标排队队列所指示的优先级,在目标处理周期内通过所述各个仿真服务器对所述各个候选仿真请求进行仿真处理。
在一种可能的实现方式中,所述剩余仿真时间用于指示在目标处理周期之前,已启动仿真的候选仿真请求的剩余处理进度;
所述装置还包括:剩余时间确定模块,用于将在目标处理周期内获取到的新发起的候选仿真请求的剩余仿真时间确定为0。
在一种可能的实现方式中,所述仿真处理模块,还用于,按照目标排队队列所指示的优先级,取出指定数量的候选仿真请求,并按照优先级顺序对所述指定数量的候选仿真请求,逐个进行加速积分检测;
当检测到所述指定数量的候选仿真请求中,第一仿真请求所对应的第一用户的加速积分,小于所述第一仿真请求的加速参数时,跳过所述第一仿真请求;
或者,当检测到所述指定数量的候选仿真请求中,第二仿真请求所对应的第二用户的加速积分,大于所述第二仿真请求的加速参数时,将所述第二仿真请求发送至仿真服务器进行仿真处理。
在一种可能的实现方式中,所述仿真处理模块,还用于,在所述各个仿真服务器中,获取处于空闲状态且优先级最高的目标仿真服务器;所述仿真服务器的优先级用于指示所述仿真服务器的仿真处理性能;将所述第二仿真请求发送至所述目标仿真服务器中进行处理。
在一种可能的实现方式中,所述仿真处理模块,还用于,当检测到针对所述第二仿真请求的仿真过程结束,将所述第二用户的加速积分与所述第二仿真请求的加速参数之间的差值,更新为所述第二用户的加速积分。
在一种可能的实现方式中,所述装置还包括:
仿真数据获取模块,用于当检测到对目标候选仿真请求的仿真处理完成后,获取对所述目标候选仿真请求进行过仿真的各个目标仿真服务器中,与所述目标候选仿真请求对应的目标仿真数据;所述目标仿真数据中还包括前置服务器数据与后置服务器数据中的至少一者;所述前置服务器数据 用于指示在获取所述目标仿真数据之前对所述目标候选仿真请求进行仿真的服务器;所述后置服务器数据用于指示在获取所述目标仿真数据之后对所述目标候选仿真请求进行仿真的服务器;
仿真结果发送模块,用于将各个所述目标仿真数据拼接为所述目标仿真结果,并发送至目标计算机设备中;所述目标计算机设备为发送所述目标候选仿真请求的设备。
在一种可能的实现方式中,所述目标仿真数据中还包括所述目标仿真服务器在对所述目标候选仿真请求执行仿真操作前的电路状态,以及执行仿真操作后的电路状态。
综上所述,当需要对集成电路进行仿真时,计算机设备可以将集成电路结构通过电路仿真请求发送至仿真系统中,此时仿真系统中的控制服务器将该电路仿真请求作为候选仿真请求加入第一排队队列中;控制服务器再将排队队列中的已启动的候选仿真请求,按照剩余仿真时间进行排序,得到第二排队队列;控制服务器再将第二排队队列中的候选仿真请求按照预先设置的加速参数大小,进行第二次排序处理,从而获得目标排队队列,以便仿真服务器对按照顺序指示的优先级对各个候选仿真请求进行仿真。上述方案根据仿真服务器的性能、目标用户的需求以及仿真运行的时间等,对各个仿真请求进行智能化排队仿真,合理安排和利用仿真服务器资源,从而提高了集成电路的仿真效率。
请参阅图5,其是根据本申请一示例性实施例提供的一种计算机设备示意图,所述计算机设备包括存储器和处理器,所述存储器用于存储计算机程序,所述计算机程序被所述处理器执行时,实现上述方法。
其中,处理器可以为中央处理器(Central Processing Unit,CPU)。处理器还可以为其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等芯片,或者上述各类芯片的组合。
存储器作为一种非暂态计算机可读存储介质,可用于存储非暂态软件程序、非暂态计算机可执行程序以及模块,如本申请实施方式中的方法对应的程序指令/模块。处理器通过运行存储在存储器中的非暂态软件程序、指令以及模块,从而执行处理器的各种功能应用以及数据处理,即实现上述方法实施方式中的方法。
存储器可以包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需要的应用程序;存储数据区可存储处理器所创建的数据等。此外,存储器可以包括高速随机存取存储器,还可以包括非暂态存储器,例如至少一个磁盘存储器件、闪存器件、或其他非暂态固态存储器件。在一些实施方式中,存储器可选包括相对于处理器远程设置的存储器,这些远程存储器可以通过网络连接至处理器。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
在一示例性实施例中,还提供了一种计算机可读存储介质,用于存储有至少一条计算机程序,所述至少一条计算机程序由处理器加载并执行以实现上述方法中的全部或部分步骤。例如,该计算机可读存储介质可以是只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)、磁带、软盘和光数据存储设备等。
本领域技术人员在考虑说明书及实践这里公开的发明实施例后,将容易想到本申请的其它实施方案。本申请旨在涵盖本申请的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未公开的本技术领域中的公知常识或惯用技术手段。说明书 和实施例仅被视为示例性的,本申请的真正范围和精神由权利要求书指出。
应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围的情况下进行各种修改和改变。本申请的范围仅由所附的权利要求书来限制。

Claims (10)

  1. 一种集成电路自动化并行仿真方法,其特征在于,所述方法用于仿真系统中的控制服务器,所述仿真系统还包括各个仿真服务器,所述方法包括:
    获取电路仿真请求;所述电路仿真请求用于请求仿真资源对集成电路结构进行仿真;
    将所述电路仿真请求作为候选仿真请求加入第一排队队列;
    基于所述第一排队队列的各个候选仿真请求的剩余仿真时间,将所述第一排队队列中的各个候选仿真请求进行第一次排序处理,获得第二排队队列;
    按照所述第二排队队列中的各个候选仿真请求的加速参数大小,对所述第二排队队列进行第二次排序处理,获得目标排队队列;
    按照所述目标排队队列所指示的优先级,在目标处理周期内通过所述各个仿真服务器对所述各个候选仿真请求进行仿真处理。
  2. 根据权利要求1所述的方法,其特征在于,所述剩余仿真时间用于指示在目标处理周期之前已启动仿真的候选仿真请求的剩余处理进度;
    所述基于所述第一排队队列的各个候选仿真请求的剩余仿真时间,将所述第一排队队列中的各个候选仿真请求进行第一次排序处理之前,还包括:
    将在目标处理周期内获取到的新发起的候选仿真请求的剩余仿真时间确定为0。
  3. 根据权利要求2所述的方法,其特征在于,按照所述目标排队队列所指示的优先级,通过所述各个仿真服务器对所述各个候选仿真请求进行仿真处理,包括:
    按照目标排队队列所指示的优先级,取出指定数量的候选仿真请求,并按照优先级顺序对所述指定数量的候选仿真请求,逐个进行加速积分检测;所述加速积分为累计积分,所述控制服务器保存有各个用户的属性信息,所述各个用户的属性信息中包含各个用户的累计积分;
    当检测到所述指定数量的候选仿真请求中,第一仿真请求所对应的第一用户的加速积分,小于所述第一仿真请求的加速参数时,跳过所述第一仿真请求;所述第一仿真请求的加速参数为所述第一仿真请求中设置的积分消耗速度;
    或者,当检测到所述指定数量的候选仿真请求中,第二仿真请求所对应的第二用户的加速积分,大于所述第二仿真请求的加速参数时,将所述第二仿真请求发送至仿真服务器进行仿真处理;所述第二仿真请求的加速参数为所述第二仿真请求中设置的积分消耗速度。
  4. 根据权利要求3所述的方法,其特征在于,所述将所述第二仿真请求发送至仿真服务器进行仿真处理,包括:
    在所述各个仿真服务器中,获取处于空闲状态且优先级最高的目标仿真服务器;所述仿真服务器的优先级用于指示所述仿真服务器的仿真处理性能;
    将所述第二仿真请求发送至所述目标仿真服务器中进行处理。
  5. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    当检测到针对所述第二仿真请求的仿真过程结束,将所述第二用户的加速积分与所述第二仿真请求的加速参数之间的差值,更新为所述第二用户的加速积分。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述方法还包括:
    当检测到对目标候选仿真请求的仿真处理完成后,获取对所述目标候选仿真请求进行过仿真的各个目标仿真服务器中,与所述目标候选仿真请求对应的目标仿真数据;所述目标仿真数据中还包括前置服务器数据与后置服务器数据中的至少一者;所述前置服务器数据用于指示在获取所述目标 仿真数据之前对所述目标候选仿真请求进行仿真的服务器;所述后置服务器数据用于指示在获取所述目标仿真数据之后对所述目标候选仿真请求进行仿真的服务器;
    将各个所述目标仿真数据拼接为所述目标仿真结果,并发送至目标计算机设备中;所述目标计算机设备为发送所述目标候选仿真请求的设备。
  7. 根据权利要求6所述的方法,其特征在于,所述目标仿真数据中还包括所述目标仿真服务器在对所述目标候选仿真请求执行仿真操作前的电路状态,以及执行仿真操作后的电路状态。
  8. 一种集成电路自动化并行仿真装置,其特征在于,所述装置包括:
    仿真请求获取模块,用于获取电路仿真请求;所述电路仿真请求用于请求仿真资源对集成电路结构进行仿真;
    第一队列获取模块,用于将所述电路仿真请求作为候选仿真请求加入第一排队队列;
    第二队列获取模块,用于基于所述第一排队队列的各个候选仿真请求的剩余仿真时间,将所述排队队列中的各个候选仿真请求进行第一次排序处理,获得第二排队队列;
    目标队列获取模块,用于按照所述第二排队队列中的各个候选仿真请求的加速参数大小,对所述第二排队队列进行第二次排序处理,获得目标排队队列;
    仿真处理模块,用于按照所述目标排队队列所指示的优先级,在目标处理周期内通过各个仿真服务器对所述各个候选仿真请求进行仿真处理。
  9. 一种计算机设备,其特征在于,所述计算机设备中包含处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如权利要求1至7中任一项所述的集成电路自动化并行仿真方法。
  10. 一种计算机可读存储介质,其特征在于,所述存储介质中存储有至少一条指令,所述至少一条指令由处理器加载并执行以实现如权利要求1至7中任一项所述的集成电路自动化并行仿真方法。
PCT/CN2023/070124 2022-01-06 2023-01-03 集成电路自动化并行仿真方法和仿真装置 WO2023131121A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210007453.1 2022-01-06
CN202210007453.1A CN114021507B (zh) 2022-01-06 2022-01-06 一种自动化集成电路并行仿真方法

Publications (1)

Publication Number Publication Date
WO2023131121A1 true WO2023131121A1 (zh) 2023-07-13

Family

ID=80069748

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/070124 WO2023131121A1 (zh) 2022-01-06 2023-01-03 集成电路自动化并行仿真方法和仿真装置

Country Status (2)

Country Link
CN (1) CN114021507B (zh)
WO (1) WO2023131121A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117421324A (zh) * 2023-12-19 2024-01-19 英诺达(成都)电子科技有限公司 电源状态表的合并方法、装置、设备及存储介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114021507B (zh) * 2022-01-06 2022-04-29 苏州贝克微电子股份有限公司 一种自动化集成电路并行仿真方法
CN114664014A (zh) * 2022-03-28 2022-06-24 中国银行股份有限公司 银行网点用户排队方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10248747B1 (en) * 2017-05-05 2019-04-02 Cadence Design Systems, Inc. Integrated circuit simulation with data persistency for efficient memory usage
CN111651865A (zh) * 2020-05-12 2020-09-11 北京华如科技股份有限公司 一种并行离散事件的事件集中发射式仿真执行方法及系统
CN113190359A (zh) * 2021-07-01 2021-07-30 苏州贝克微电子有限公司 一种仿真请求处理方法、装置、电子设备及可读存储介质
CN113420520A (zh) * 2021-06-25 2021-09-21 海光信息技术股份有限公司 集成电路装置设计仿真方法、装置、设备和可读存储介质
CN114021507A (zh) * 2022-01-06 2022-02-08 苏州贝克微电子股份有限公司 一种自动化集成电路并行仿真方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508716B (zh) * 2011-09-29 2015-04-15 用友软件股份有限公司 任务控制装置和任务控制方法
US20150363229A1 (en) * 2014-06-11 2015-12-17 Futurewei Technologies, Inc. Resolving task dependencies in task queues for improved resource management
CN107145395B (zh) * 2017-07-04 2020-12-08 北京百度网讯科技有限公司 用于处理任务的方法和装置
CN113342498A (zh) * 2021-06-28 2021-09-03 平安信托有限责任公司 并发请求处理方法、装置、服务器及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10248747B1 (en) * 2017-05-05 2019-04-02 Cadence Design Systems, Inc. Integrated circuit simulation with data persistency for efficient memory usage
CN111651865A (zh) * 2020-05-12 2020-09-11 北京华如科技股份有限公司 一种并行离散事件的事件集中发射式仿真执行方法及系统
CN113420520A (zh) * 2021-06-25 2021-09-21 海光信息技术股份有限公司 集成电路装置设计仿真方法、装置、设备和可读存储介质
CN113190359A (zh) * 2021-07-01 2021-07-30 苏州贝克微电子有限公司 一种仿真请求处理方法、装置、电子设备及可读存储介质
CN114021507A (zh) * 2022-01-06 2022-02-08 苏州贝克微电子股份有限公司 一种自动化集成电路并行仿真方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117421324A (zh) * 2023-12-19 2024-01-19 英诺达(成都)电子科技有限公司 电源状态表的合并方法、装置、设备及存储介质
CN117421324B (zh) * 2023-12-19 2024-03-12 英诺达(成都)电子科技有限公司 电源状态表的合并方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN114021507B (zh) 2022-04-29
CN114021507A (zh) 2022-02-08

Similar Documents

Publication Publication Date Title
WO2023131121A1 (zh) 集成电路自动化并行仿真方法和仿真装置
Alipourfard et al. {CherryPick}: Adaptively unearthing the best cloud configurations for big data analytics
CN108776934B (zh) 分布式数据计算方法、装置、计算机设备及可读存储介质
Zhang et al. CODA: Toward automatically identifying and scheduling coflows in the dark
EP3525096A1 (en) Resource load balancing control method and cluster scheduler
WO2017124713A1 (zh) 一种数据模型的确定方法及装置
CN104038540B (zh) 一种应用代理服务器自动选择方法及系统
WO2021159638A1 (zh) 集群队列资源的调度方法、装置、设备及存储介质
TW201820165A (zh) 用於雲端巨量資料運算架構之伺服器及其雲端運算資源最佳化方法
WO2015058578A1 (zh) 一种分布式计算框架参数优化方法、装置及系统
CN104092756A (zh) 一种基于dht机制的云存储系统的资源动态分配方法
CN105940636B (zh) 用于为数据中心的工作负荷生成分析模型的方法及服务器
JP6686371B2 (ja) データステージング管理システム
CN115335821B (zh) 卸载统计收集
CN112463390A (zh) 一种分布式任务调度方法、装置、终端设备及存储介质
CN115134371A (zh) 包含边缘网络算力资源的调度方法、系统、设备及介质
CN112052082B (zh) 任务属性优化方法、装置、服务器及存储介质
Yao et al. Probabilistic consistency guarantee in partial quorum-based data store
Sreedhar et al. A survey on big data management and job scheduling
CN111090401B (zh) 存储设备性能预测方法及装置
WO2019029721A1 (zh) 任务的调度方法、装置、设备及存储介质
CN116248699B (zh) 多副本场景下的数据读取方法、装置、设备及存储介质
WO2023093194A1 (zh) 一种云监控方法和云管理平台
CN115914237A (zh) 一种边缘环境下的深度学习任务调度方法、设备及介质
WO2017085454A1 (en) Fuzzy caching mechanism for thread execution layouts

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23736986

Country of ref document: EP

Kind code of ref document: A1