CN102855339A - Parallel processing scheme for integrated circuit layout checking - Google Patents


Info

Publication number
CN102855339A
CN102855339A (application numbers CN2011101801319A / CN201110180131A)
Authority
CN
China
Prior art keywords
worker
manager
slave
result
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011101801319A
Other languages
Chinese (zh)
Inventor
宋德强
王国庆
王鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing CEC Huada Electronic Design Co Ltd
Original Assignee
Beijing CEC Huada Electronic Design Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing CEC Huada Electronic Design Co Ltd filed Critical Beijing CEC Huada Electronic Design Co Ltd
Priority to CN2011101801319A priority Critical patent/CN102855339A/en
Publication of CN102855339A publication Critical patent/CN102855339A/en
Pending legal-status Critical Current


Abstract

The invention discloses a parallel processing scheme for integrated circuit layout checking, and belongs to the technical field of computer-aided design of integrated circuits, in particular to the fields of integrated circuit layout design rule checking (DRC) and layout versus schematic (LVS) consistency checking. The scheme proposes two novel parallel processing methods: a multi-process method based on a command scheduling graph (the Manager-Worker model) and a multi-thread method based on cell call relations (the Worker-Slave model). The two methods are combined to solve the problem that a large-scale layout cannot be checked because of excessive running time and low checking efficiency. Practical application shows that the scheme can complete ultra-large-scale layout checking within a time acceptable to the user.

Description

Parallel processing solution for integrated circuit layout verification
Technical field
The present invention is a parallel processing solution applicable to integrated circuit layout verification tools. The related technical field is computer-aided design (CAD) of integrated circuits, in particular design rule checking (DRC) of integrated circuit layouts and consistency checking between the layout and the schematic (LVS).
Background technology
Over the past 30 years, integrated circuit technology has advanced according to "Moore's Law". The feature size of chips keeps shrinking, and the integration density of a single chip keeps increasing. With the expansion of chip scale, the number of design rules that must be verified at each stage of integrated circuit design also keeps growing. Design rule checking (DRC) of the layout and consistency checking between the layout and the schematic (LVS) are therefore becoming more and more important; they play a vital role in eliminating errors, reducing design cost, and lowering the risk of design failure. In very-large-scale integration (VLSI) design, the layout scale expands sharply, and how to complete verification of a design within an acceptable time has become an urgent problem for every major EDA vendor.
Parallel processing comprises two key techniques: distributed processing and multithreading. Distributed processing studies how to divide a problem that requires enormous computing power into many small parts, distribute these parts to many computers for processing, and finally integrate the partial results into the final result. Multithreading is a parallel processing mode with a finer granularity than distributed processing: the processing unit of distributed processing is a process, while that of multithreading is a thread. Each running program in the system is a process, and each process contains one or more threads; a process may also be the dynamic execution of a whole program or a subprogram. A thread is a set of instructions, or a particular segment of a program, that can execute independently within the program; it can also be understood as the execution context of the code. A thread is thus essentially a lightweight process responsible for executing multiple tasks within a single program, and the operating system is usually responsible for scheduling and executing multiple threads. A thread is a single sequential flow of control within a program; running multiple threads simultaneously in a single program to accomplish different work is called multithreading. The difference between threads and processes is that multiple processes have separate code and data spaces, whereas multiple threads share the same data space, each thread having its own execution stack and program counter as its execution context. At runtime a thread consumes memory and CPU resources.
Both process scheduling and thread scheduling can be described by a unified producer-consumer model. In this model, one (or more) producers produce a product at intervals and put it into a buffer, and one (or more) consumers take a product out of the buffer at intervals and consume it. The size of the buffer is limited: if a producer finds the buffer full, it must pause production until a consumer has consumed a product and the buffer is no longer full; if a consumer finds the buffer empty, it must wait for a producer to produce a product. A simple producer-consumer model is shown in Figure 1.
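The bounded-buffer behaviour described above can be sketched with Python's `queue.Queue`, which blocks producers while the buffer is full and consumers while it is empty. This is an illustrative sketch, not code from the patent; the names (`producer`, `consumer`, the `None` sentinel) are our own:

```python
import queue
import threading

BUF_SIZE = 4
NUM_PRODUCTS = 8
buffer = queue.Queue(maxsize=BUF_SIZE)  # the limited-size buffer
consumed = []

def producer():
    for i in range(NUM_PRODUCTS):
        buffer.put(i)        # blocks while the buffer is full

def consumer():
    while True:
        item = buffer.get()  # blocks while the buffer is empty
        if item is None:     # sentinel: stop consuming
            break
        consumed.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start()
c.start()
p.join()
buffer.put(None)             # tell the consumer to exit
c.join()
print(consumed)              # all 8 products, in production order
```

With one producer and one consumer the FIFO queue preserves production order, so `consumed` ends up as `[0, 1, ..., 7]`.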
The input of process scheduling is a command graph. A command graph is a set of commands with dependencies between them; that is, the result of one command may be the input of another command. The input of thread scheduling is a cell (Cell) topology graph: a tree in which the top-level cell is the root and the lowest-level cells are the leaves, representing the call relations from the top-level cell down to the lowest-level cells. A cell is the basic logical unit of integrated circuit design; cells may call each other, and the interior of a cell is a set of shapes belonging to different physical layers (Layer) together with instances (Instance) of other cells.
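A command graph of this kind can be stored as adjacency lists, and the commands that are ready to run are exactly the nodes with in-degree zero. A minimal sketch, using a made-up three-command graph (the names `CMD1`-`CMD3` are ours, not the patent's):

```python
# Each command maps to the commands that consume its result.
graph = {
    "CMD1": ["CMD3"],   # CMD3 needs CMD1's output
    "CMD2": ["CMD3"],   # ...and CMD2's output
    "CMD3": [],
}

def in_degrees(g):
    """Count, for every command, how many of its inputs are still unproduced."""
    indeg = {node: 0 for node in g}
    for successors in g.values():
        for s in successors:
            indeg[s] += 1
    return indeg

ready = [n for n, d in in_degrees(graph).items() if d == 0]
print(ready)  # ['CMD1', 'CMD2'] -- the commands that may run immediately
```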
Summary of the invention
The present invention addresses the excessively slow running speed and long running time encountered in very-large-scale integration layout verification, which can ultimately make layout verification infeasible, by proposing a parallel processing solution based on distributed processing and multithreading. In practical engineering applications, this solution greatly accelerates layout verification tools and increases the layout scale that can be verified.
Main technical schemes of the present invention comprises following two models:
1. The command-scheduling Manager-Worker model.
In the Manager-Worker model, the Manager is the producer and the Workers are the consumers. The Manager is responsible for producing each command, which is executed by a Worker; after a Worker finishes, it feeds the result back to the Manager, and the Manager uses the result to produce new commands, until all commands have been executed. The working procedure of the Manager comprises the following steps:
(1) Compile the rule file and generate the command scheduling graph. The command scheduling graph is a topological directed graph in which nodes represent commands and arcs represent data dependencies.
(2) Read in the GDS file and generate the cell call topology graph. GDS is a standard file format for integrated circuit layouts.
(3) Generate pending commands. A pending command is a node with in-degree 0 in the command graph.
(4) Dispatch pending commands. A command is sent to an idle Worker; if no Worker is idle, the Manager waits. Communication between the Manager and the Workers uses sockets; Workers do not communicate with each other.
(5) Receive command results. While dispatching, the Manager also receives result feedback from the Workers; whenever a result arrives, the Manager computes the command nodes whose in-degree has newly become 0.
(6) Finish. When all commands have been executed, the Manager sends an end mark to all Workers, all Workers exit, and then the Manager exits.
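Steps (1)-(6) amount to a Kahn-style topological scheduling loop. A single-process sketch under simplifying assumptions (sockets and idle-Worker bookkeeping omitted; `execute` stands in for dispatching to a Worker), using the five commands of the embodiment's Figure 4 example:

```python
from collections import deque

def run_manager(graph, indegree, execute):
    """graph: cmd -> commands that depend on it; indegree: cmd -> unmet inputs.
    execute(cmd) stands in for sending the command to an idle Worker."""
    ready = deque(c for c, d in indegree.items() if d == 0)  # step (3)
    done = []
    while ready:                      # step (4): dispatch pending commands
        cmd = ready.popleft()
        execute(cmd)                  # a real Manager sends this over a socket
        done.append(cmd)              # step (5): receive the result feedback...
        for nxt in graph[cmd]:        # ...and update dependent in-degrees
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return done                       # step (6): all commands finished

graph = {"INSIDE1": ["INTERACT"], "INSIDE2": ["CUT"],
         "INTERACT": ["ENC"], "CUT": ["ENC"], "ENC": []}
indeg = {"INSIDE1": 0, "INSIDE2": 0, "INTERACT": 1, "CUT": 1, "ENC": 2}
order = run_manager(graph, indeg, execute=lambda c: None)
print(order)  # ENC is scheduled last, after both INTERACT and CUT
```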
The working procedure of a Worker comprises the following steps:
(1) Receive a call assignment from the Manager. Apart from the cell topology graph, all data on a Worker are local; the Manager packs a command together with its input data and sends them to the Worker process.
(2) Execute the specified command. If it is an exit command, the Worker exits; otherwise it computes the command result. While executing a command, the Worker accepts no new tasks.
(3) When command execution is complete, return the result to the Manager.
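The Worker's three steps reduce to a receive-execute-reply loop. The sketch below substitutes an in-process pair of queues for the patent's sockets; all names (`worker_loop`, `inbox`, `outbox`, the `EXIT` sentinel, the toy `compute` function) are our assumptions:

```python
import queue
import threading

EXIT = "exit"

def worker_loop(inbox, outbox, compute):
    while True:
        cmd, data = inbox.get()      # (1) receive a call assignment
        if cmd == EXIT:              # (2) an exit command makes the Worker quit
            break
        result = compute(cmd, data)  # (2) execute; no new task is taken meanwhile
        outbox.put((cmd, result))    # (3) return the result to the Manager

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker_loop,
                     args=(inbox, outbox, lambda cmd, data: data * 2))
t.start()
inbox.put(("INSIDE", 21))            # the Manager dispatches a command
res = outbox.get()                   # the Manager receives the feedback
print(res)                           # ('INSIDE', 42)
inbox.put((EXIT, None))              # the Manager sends the end mark
t.join()
```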
2. The cell-scheduling Worker-Slave model.
In the Worker-Slave model, the Worker is the producer and the Slaves are the consumers. The Worker is responsible for producing each executable cell, which is executed by a Slave; after a Slave finishes, it feeds the result back to the Worker, and the Worker uses the result and the cell call graph to produce new executable cells, until all cells have been executed. The working procedure of the Worker can be described by the following steps:
(1) Receive a command from the Manager process. If it is an exit command, generate as many exit tasks as there are Slaves, put them into the TaskPool (task pool), and then the Worker thread exits. Otherwise, compute the cell nodes with in-degree 0 and put all such nodes into the TaskPool.
(2) Obtain Slave computation results from the ResultsPool (results pool), then produce new tasks (Task) and put them into the TaskPool.
The working procedure of a Slave thread is as follows:
(1) Poll the TaskPool; if it is not empty, obtain a task. If the task is an exit instruction, the Slave thread exits; otherwise it begins execution.
(2) After the task is executed, put the result into the ResultsPool and return to step (1).
In the Worker-Slave model, the TaskPool and the ResultsPool are two critical resources. Multithreaded code must lock every access to them, so that only a limited number of threads can access a critical resource at the same time; otherwise erroneous results or memory errors may occur. The lock of the TaskPool is named TaskLock, and the lock of the ResultsPool is named ResultsLock. The procedure by which the Worker puts a Task into the TaskPool can be described as follows:
(1) Lock(TaskLock);
(2) put the Task into the TaskPool;
(3) Unlock(TaskLock);
Similarly, the Worker obtains results from the ResultsPool, and the Slaves access the TaskPool and the ResultsPool following the same steps as above.
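The Lock/Unlock pseudocode above corresponds to, for example, Python's `threading.Lock`. A minimal sketch of the TaskPool guarded by its TaskLock (the pool and lock names mirror the patent; representing the pool as a plain list is our assumption):

```python
import threading

task_pool = []                   # TaskPool: a critical resource
task_lock = threading.Lock()     # TaskLock guards every access to it

def put_task(task):
    with task_lock:              # (1) Lock(TaskLock)
        task_pool.append(task)   # (2) put the Task into the TaskPool
                                 # (3) Unlock(TaskLock) on leaving the block

def get_task():
    with task_lock:              # Slaves take tasks under the very same lock
        return task_pool.pop(0) if task_pool else None

put_task("TOP")
print(get_task())  # 'TOP'
```

A ResultsPool guarded by a ResultsLock would be coded the same way; in practice Python's `queue.Queue` already bundles exactly this lock-protected put/get.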
Description of drawings
Fig. 1 The producer-consumer model;
Fig. 2 Workflow of the Manager-Worker model;
Fig. 3 Workflow of the Worker-Slave model;
Fig. 4 Example command topology graph;
Fig. 5 Example cell call topology graph.
Embodiment
Whether it is the Manager-Worker process model based on the command graph or the Worker-Slave thread model based on the cell topology graph, the dependencies between executable units (a command or a cell) must be resolved. Taking process scheduling (with a command as the unit) as an example, the necessary condition for a command to be executable is that all of its required input data are either original physical layer data or intermediate layer data that have already been produced.
An example command sequence is given below, with the corresponding command graph shown in Figure 4. O_ND, O_NODRC, O_MEMID and O_NWEL are original layers; ND, NTAP, NWEL_1 and NWEL are intermediate layers.
ND = O_ND NOT INSIDE O_NODRC
NTAP = ND NOT INTERACT O_MEMID
NWEL_1 = O_NWEL NOT INSIDE O_NODRC
NWEL = NWEL_1 NOT CUT O_MEMID
NW.a4 { @ Min. N-Well extension of N+ diffusion is 0.15um
ENC NTAP NWEL < 0.15 ABUT SINGULAR REGION
}
The example layout has 5 cells: TOP calls the three cells A, B and C, and A and C both call cell D. The cell topology graph is shown in Figure 5.
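For this 5-cell layout the call graph and the top-down in-degrees can be written out directly; D is called by both A and C, so its in-degree is 2 and it becomes executable only after both callers have been processed. (The dict representation is our own sketch of Figure 5.)

```python
calls = {"TOP": ["A", "B", "C"], "A": ["D"], "B": [], "C": ["D"], "D": []}

# In-degree = number of callers not yet processed (top-down scheduling).
indeg = {cell: 0 for cell in calls}
for callees in calls.values():
    for c in callees:
        indeg[c] += 1

print(indeg)  # {'TOP': 0, 'A': 1, 'B': 1, 'C': 1, 'D': 2}
```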
In this scheme there is exactly one Manager. Suppose there are 2 Workers, each with 2 Slaves. Then one possible execution flow is:
(1) At the initial moment, the Manager first compiles the rule file (Rule File) to generate the command graph, then reads in the GDS file to generate the cell call topology graph.
(2) After compilation, the nodes with in-degree zero are the 2 INSIDE command nodes. The Manager sends these two commands (INSIDE) and their input data (O_ND, O_NODRC and O_NWEL) to Worker1 and Worker2 respectively, then blocks and waits for result feedback.
(3) After Worker1 receives its data, it computes that the only cell node with in-degree 0 is TOP. Worker1 puts TOP into the TaskPool and then tries to take a cell out of the ResultsPool; since the ResultsPool is empty, Worker1 blocks.
(4) Slave1 takes the TOP task out of the TaskPool and begins executing it.
(5) Slave2 tries to take a task out of the TaskPool, but the pool is empty, so Slave2 blocks.
(6) Slave1 finishes the TOP task and puts TOP into the ResultsPool, at which point the blocked Worker1 is woken up. Slave1 accesses the TaskPool, finds it empty, and also blocks.
(7) Worker1 processes the result of the TOP cell and decrements by 1 the in-degrees of the three cells A, B and C that TOP calls. Since their in-degrees all become 0, they are put into the TaskPool one after another. Worker1 blocks.
(8) Slave2 and Slave1 obtain tasks A and B respectively from the TaskPool, leave the blocked state, and begin execution.
(9) Suppose Slave2 finishes first; it puts the result of A into the ResultsPool, then obtains task C and continues execution.
(10) Worker1 obtains the result of task A from the ResultsPool and decrements the in-degree of D by 1; since the in-degree of D is still 1, D cannot be put into the TaskPool. Worker1 blocks.
(11) Suppose Slave1 and Slave2 finish at the same time and put the results of B and C into the ResultsPool. Both Slaves block.
(12) Worker1 takes the results of B and C out of the ResultsPool, decrements the in-degree of D to 0, and puts D into the TaskPool. Worker1 blocks.
(13) Slave1 obtains the D task and, after finishing it, puts the result of D into the ResultsPool. Slave1 blocks.
(14) Worker1 obtains the result of D and finds that the whole command has been executed; it then feeds the result of the INSIDE command back to the Manager.
(15) The working procedure of Worker2 is identical to that of Worker1 (steps (3)-(14)). Suppose Worker2 also finishes just then and returns its result to the Manager.
(16) The Manager obtains the results of Worker1 and Worker2 and decrements by 1 the in-degrees of the INTERACT and CUT commands. At this point the in-degrees of both nodes are 0, so the two commands are sent to Worker1 and Worker2 respectively.
(17) After Worker1 and Worker2 receive the commands, they repeat the work of steps (3)-(14) and feed the command results back to the Manager.
(18) The Manager receives the results of the INTERACT and CUT commands and sends the ENC command to Worker1 for execution.
(19) After finishing the command, Worker1 returns the result to the Manager.
(20) After the Manager receives the result, it finds that all commands have finished and sends exit commands to Worker1 and Worker2. The Manager process then exits.
(21) After the two Workers receive the exit commands, they stop all their Slave threads, and then the two Worker processes exit. The whole processing procedure is complete.
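Stripped of the threading detail, the essence of steps (3)-(14) is a topological pass over the cell call graph. A deterministic, single-threaded sketch (our own names; the real scheme interleaves Worker and Slaves via the pools) confirms that D is processed only after both A and C:

```python
from collections import deque

def process_cells(calls):
    """Return the order in which cells become executable, top-down."""
    indeg = {c: 0 for c in calls}
    for callees in calls.values():
        for x in callees:
            indeg[x] += 1
    task_pool = deque(c for c, d in indeg.items() if d == 0)  # initially just TOP
    order = []
    while task_pool:
        cell = task_pool.popleft()      # a Slave takes the task...
        order.append(cell)              # ...and "executes" it
        for callee in calls[cell]:      # the Worker processes the result
            indeg[callee] -= 1
            if indeg[callee] == 0:
                task_pool.append(callee)
    return order

order = process_cells({"TOP": ["A", "B", "C"],
                       "A": ["D"], "B": [], "C": ["D"], "D": []})
print(order)  # ['TOP', 'A', 'B', 'C', 'D'] -- D last, after A and C
```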

Claims (3)

1. A parallel processing scheme for very-large-scale integration layout verification, characterized by comprising the following two models: (1) the Manager-Worker model, in which the Manager is the producer and the Workers are the consumers; the Manager is responsible for producing each command, which is executed by a Worker; after a Worker finishes, it feeds the result back to the Manager, which uses the result to produce new commands, until all commands have been executed; (2) the Worker-Slave model, in which the Worker is the producer and the Slaves are the consumers; the Worker is responsible for producing each executable cell, which is executed by a Slave; after a Slave finishes, it feeds the result back to the Worker, which uses the result and the cell call graph to produce new executable cells, until all cells have been executed.
2. The parallel processing scheme according to claim 1, characterized in that: (1) the Manager-Worker model is process-level parallelism, while the Worker-Slave model is thread-level parallelism; (2) the processing objects of the Manager-Worker model are commands, while the processing objects of the Worker-Slave model are integrated circuit cells.
3. The parallel processing scheme according to claim 1, characterized by comprising the following four workflows of the two models:
(1) Manager-Worker model.
Manager process workflow:
1. Compile the rule file and generate the command scheduling graph.
2. Read in the GDS file and generate the cell call topology graph.
3. Generate pending commands.
4. Dispatch pending commands.
5. Receive command results.
6. When execution finishes, send exit commands to the Worker processes.
Worker process workflow:
1. Receive a call assignment from the Manager.
2. Execute the specified command.
3. When command execution is complete, return the result to the Manager.
(2) Worker-Slave model.
Worker thread work flow process:
1. According to the command assigned by the Manager, generate pending cells and put them into the TaskPool.
2. Obtain Slave computation results from the ResultsPool, then produce new tasks and put them into the TaskPool.
Slave thread workflow:
1. Poll the TaskPool; if it is not empty, obtain a task. If the task is an exit instruction, the Slave thread exits; otherwise it begins execution.
2. After the task is executed, put the result into the ResultsPool and return to step 1.
CN2011101801319A 2011-06-29 2011-06-29 Parallel processing scheme for integrated circuit layout checking Pending CN102855339A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011101801319A CN102855339A (en) 2011-06-29 2011-06-29 Parallel processing scheme for integrated circuit layout checking


Publications (1)

Publication Number Publication Date
CN102855339A true CN102855339A (en) 2013-01-02

Family

ID=47401926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011101801319A Pending CN102855339A (en) 2011-06-29 2011-06-29 Parallel processing scheme for integrated circuit layout checking

Country Status (1)

Country Link
CN (1) CN102855339A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101044457A (en) * 2004-09-28 2007-09-26 英特尔公司 System, method and apparatus for dependency chain processing
US20100274549A1 (en) * 2008-03-27 2010-10-28 Rocketick Technologies Ltd. Design simulation using parallel processors
CN102089752A (en) * 2008-07-10 2011-06-08 洛克泰克科技有限公司 Efficient parallel computation of dependency problems


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106415550A (en) * 2014-04-14 2017-02-15 国家信息及自动化研究院 Method of automatic synthesis of circuits, device and computer program associated therewith
CN106415550B (en) * 2014-04-14 2020-07-28 国家信息及自动化研究院 Method for automatic circuit synthesis, associated device and computer program
CN112671900A (en) * 2020-12-23 2021-04-16 南京华大九天科技有限公司 Layout data transmission method and system and computer readable storage medium


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130102