CN112580296A - Method, apparatus and storage medium for processing a circuit layout - Google Patents


Info

Publication number
CN112580296A
CN112580296A
Authority
CN
China
Prior art keywords
sub
processing
processing devices
jobs
operations
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011491209.4A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Manufacturing EDA Co Ltd
Original Assignee
Advanced Manufacturing EDA Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Manufacturing EDA Co Ltd
Priority to CN202011491209.4A
Publication of CN112580296A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/39Circuit design at the physical level
    • G06F30/398Design verification or optimisation, e.g. using design rule check [DRC], layout versus schematics [LVS] or finite element methods [FEM]

Abstract

According to example embodiments of the present disclosure, methods, apparatuses, devices, and computer-readable storage media for processing a circuit layout are provided. A method for processing a circuit layout includes generating a plurality of sub-jobs for performing a Design Rule Check (DRC) on the circuit layout. Each sub-job corresponds to a layout cell of the circuit layout and specifies at least one or more DRC operations to be performed on the layout cell. The method also includes assigning the plurality of sub-jobs to a plurality of processing devices based on configuration information of the processing devices and the complexity of the one or more operations. At least one processing device of the plurality of processing devices is configured with accelerated processing resources. The method further includes determining a check result of performing the DRC on the circuit layout based on results of the processing of the plurality of sub-jobs by the processing devices. In this way, a fast and efficient DRC scheme can advantageously be implemented.

Description

Method, apparatus and storage medium for processing a circuit layout
Technical Field
Embodiments of the present disclosure relate generally to integrated circuits and, more particularly, relate to a method, apparatus, and computer-readable storage medium for processing a circuit layout.
Background
The circuit layout (also simply referred to as the layout) is a set of geometric figures converted from a designed, simulated, and optimized circuit, and it contains the physical information of the devices, such as the dimensions of the integrated circuit and the topology definition of each layer. The integrated circuit manufacturer fabricates the mask according to these data. The layout pattern on the mask determines the size of the devices or connection physical layers on the chip. Thus, the geometric dimensions on the layout are directly related to the dimensions of the physical layers on the chip. For this reason, the layout needs to be designed according to design rules, and a Design Rule Check (DRC) needs to be performed on the layout. However, performing DRC on a circuit layout consumes substantial computing resources and time.
Disclosure of Invention
According to an example embodiment of the present disclosure, a solution for processing a circuit layout is provided.
In a first aspect of the disclosure, a method for processing a circuit layout is provided. The method includes generating a plurality of sub-jobs for performing a design rule check on the circuit layout. Each sub-job corresponds to a layout cell of the circuit layout and specifies at least one or more operations for which the design rule check is to be performed on the layout cell. The method also includes assigning the plurality of sub-jobs to a plurality of processing devices based on configuration information of the plurality of processing devices and a complexity of the one or more operations. At least one processing device of the plurality of processing devices is configured with accelerated processing resources. The method further includes determining a check result of performing the design rule check on the circuit layout based on results of the processing of the plurality of sub-jobs by the plurality of processing devices.
In a second aspect of the disclosure, an electronic device is provided that includes one or more processors and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to perform actions. The actions include generating a plurality of sub-jobs for performing a design rule check on the circuit layout. Each sub-job corresponds to a layout cell of the circuit layout and specifies at least one or more operations for which the design rule check is to be performed on the layout cell. The actions also include assigning the plurality of sub-jobs to a plurality of processing devices based on configuration information of the plurality of processing devices and a complexity of the one or more operations. At least one processing device of the plurality of processing devices is configured with accelerated processing resources. The actions further include determining a check result of performing the design rule check on the circuit layout based on results of the processing of the plurality of sub-jobs by the plurality of processing devices.
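The three claimed steps (generating per-cell sub-jobs, assigning them by operation complexity, and merging the per-device results) can be sketched as follows. This is an illustrative, minimal model: the `SubJob`/`Device` types, the round-robin dispatch, and the pass/fail merge are assumptions for the sake of the example, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class SubJob:
    cell_id: int
    operations: list      # DRC operations to run on this layout cell

@dataclass
class Device:
    name: str
    accelerated: bool     # True if configured with accelerated processing resources

def generate_sub_jobs(layout_cells, operations):
    """One sub-job per layout cell, each carrying the DRC operations."""
    return [SubJob(cell_id=i, operations=list(operations))
            for i, _ in enumerate(layout_cells)]

def assign(sub_jobs, devices, is_complex):
    """Route sub-jobs containing any high-complexity operation to
    accelerated devices, the rest to conventional devices (round-robin)."""
    fast = [d for d in devices if d.accelerated]
    slow = [d for d in devices if not d.accelerated]
    fast = fast or slow   # fall back if either pool is empty
    slow = slow or fast
    plan = {}
    for i, job in enumerate(sub_jobs):
        pool = fast if any(is_complex(op) for op in job.operations) else slow
        plan[job.cell_id] = pool[i % len(pool)].name
    return plan

def merge(results):
    """The overall check passes only if every sub-job's check passed."""
    return all(results.values())
```

A sub-job containing a position-dependent operation (e.g. a shift) lands on an accelerated device, while one containing only position-independent operations stays on a conventional device.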
In some embodiments, the complexity depends on whether one or more operations involve processing the relative positions of the geometries in the layout cells.
In some embodiments, assigning the plurality of sub-jobs to the plurality of processing devices based on configuration information of the plurality of processing devices and complexity of the one or more operations comprises: determining, based on configuration information of a plurality of processing devices, a plurality of pairs of processing devices from the plurality of processing devices, each pair of processing devices including a first processing device not configured with accelerated processing resources and a second processing device configured with accelerated processing resources; and if the one or more operations include a first operation and a second operation of higher complexity than the first operation, wherein the first operation does not involve processing of the relative position and the second operation involves processing of the relative position, assigning each sub-job to a respective pair of the plurality of pairs of processing devices such that the first operation is performed by the first processing device and the second operation is performed by the second processing device.
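The paired-device embodiment above can be sketched as splitting the operations of a single sub-job across a (conventional, accelerated) device pair. The dictionary-based device descriptors and the routing function are hypothetical names for illustration only.

```python
def pair_devices(devices):
    """Form (conventional, accelerated) pairs from the device list.
    Each device is a dict with 'name' and 'apr' (has accelerated resources)."""
    conventional = [d for d in devices if not d["apr"]]
    accelerated = [d for d in devices if d["apr"]]
    return list(zip(conventional, accelerated))

def assign_to_pair(sub_job, pair, involves_relative_position):
    """Within one sub-job, route each operation to one side of the pair:
    position-dependent (second) operations to the accelerated device,
    position-independent (first) operations to the conventional device."""
    conv, accel = pair
    return {op: (accel["name"] if involves_relative_position(op)
                 else conv["name"])
            for op in sub_job["ops"]}
```

Here a single sub-job's bias operation would run on the conventional device while its shift operation runs on the accelerated partner.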
In some embodiments, assigning the plurality of sub-jobs to the plurality of processing devices based on configuration information of the plurality of processing devices and complexity of the one or more operations comprises: determining a first group of processing devices and a second group of processing devices from the plurality of processing devices based on configuration information of the plurality of processing devices, the first group of processing devices being a group of processing devices that are not configured with accelerated processing resources and the second group of processing devices being a group of processing devices that are configured with accelerated processing resources; assigning a first group of sub-jobs to a first group of processing devices if the first group of sub-jobs in the plurality of sub-jobs includes a first operation and does not include a second operation of higher complexity than the first operation, wherein the first operation does not involve processing of a relative position and the second operation involves processing of a relative position; and assigning a second group of sub-jobs to the second group of processing devices if the second group of sub-jobs includes the second operation and does not include the first operation.
In some embodiments, the size of the first layout cell is different from the size of the second layout cell, the first layout cell being a layout cell corresponding to each of the first set of sub-jobs, the second layout cell being a layout cell corresponding to each of the second set of sub-jobs.
In some embodiments, each sub-job further specifies a pattern search operation for determining a plurality of patterns from the layout cells, each pattern including at least one geometry of the layout cell. Generating the plurality of sub-jobs includes: each sub-job is set to perform one or more operations on a plurality of patterns determined by the pattern search operation.
In some embodiments, each sub-job further specifies a pattern classification operation for determining a set of patterns belonging to the same type from among the plurality of patterns and selecting a reference pattern from the set of patterns. Setting each sub-job to perform the one or more operations on the plurality of patterns comprises setting each sub-job to: perform the one or more operations on the reference pattern to obtain a check result for the reference pattern; and perform the one or more operations on the remaining patterns of the set by applying the check result to the remaining patterns other than the reference pattern.
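The pattern-classification optimization amounts to running the expensive check once per pattern type and reusing the result. A minimal sketch, with an assumed `pattern_type` key function and `run_drc` callback standing in for the real classification and check:

```python
from collections import defaultdict

def check_with_reference(patterns, pattern_type, run_drc):
    """Group patterns by type, run DRC only on one reference pattern per
    group, and apply that result to the remaining patterns of the group."""
    groups = defaultdict(list)
    for p in patterns:
        groups[pattern_type(p)].append(p)
    results, drc_calls = {}, 0
    for group in groups.values():
        reference = group[0]              # select a reference pattern
        ref_result = run_drc(reference)   # run DRC once per type
        drc_calls += 1
        for p in group:                   # reuse the result for the group
            results[id(p)] = ref_result
    return results, drc_calls
```

With N patterns in K types, the number of DRC evaluations drops from N to K.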
In some embodiments, the one or more operations include at least one of: a bias operation for a set of geometries in a layout cell, a combination operation for combining geometries in a layout cell, a geometry shift operation for changing relative positions, or a geometry edge shift operation for changing relative positions.
In some embodiments, the accelerated processing resources are removably configured to the at least one processing device.
In a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements a method according to the first aspect of the present disclosure.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1A shows a schematic diagram of one example operation for DRC for a layout pattern;
FIG. 1B shows a schematic diagram of another example operation for DRC for a layout pattern;
FIG. 2 illustrates a schematic diagram of an example architecture in which various embodiments of the present disclosure can be implemented;
FIG. 3 illustrates a schematic diagram of a portion of the example architecture of FIG. 2, in accordance with some embodiments of the present disclosure;
FIG. 4 illustrates an example layout cell according to some embodiments of the present disclosure;
FIG. 5 illustrates an indexing structure for the example layout cell of FIG. 4, in accordance with some embodiments of the present disclosure;
FIG. 6 illustrates a block diagram of a process of allocating sub-jobs, according to some embodiments of the present disclosure;
FIG. 7 illustrates a block diagram of a process of allocating sub-jobs, according to some embodiments of the present disclosure;
FIG. 8 illustrates a flow diagram of an example method for processing a circuit layout, according to some embodiments of the present disclosure; and
FIG. 9 illustrates a block diagram of a computing device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the term "include" and its derivatives should be interpreted as being inclusive, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like may refer to different or the same objects. Other explicit and implicit definitions may also be given below.
As used herein, the term "accelerated processing resource" and similar terms refer to hardware or software capable of faster processing than conventional processing resources such as a Central Processing Unit (CPU). Accelerated Processing Resources (APR) may include, but are not limited to, an Accelerated Processing Unit (APU) or an Artificial Intelligence (AI) chip. The AI chip may include, for example, a Tensor Processing Unit (TPU), a Neural Processing Unit (NPU), and other existing or future-developed AI chips. In some embodiments, the accelerated processing resources may also include a Graphics Processing Unit (GPU), a Field Programmable Gate Array (FPGA), or the like. It will be appreciated that such accelerated processing resources are primarily used to increase the computational speed of AI-related applications.
As mentioned briefly above, a design rule check needs to be performed on the circuit layout prior to actual production. In existing computing environments for design rule checking, the computing units typically utilize or are based on conventional processing resources such as CPUs, GPUs, FPGAs, or Cell Broadband Engine (Cell BE) processors. In a distributed processing architecture, these computing units are typically implemented as a plurality of distributed clients. These clients typically have the same configuration and are configured with one or more conventional processing resources such as a CPU, GPU, or FPGA.
However, in conventional solutions, the running time consumed by the design rule check is still long, which hinders obtaining a large number of results quickly. Therefore, a faster and more efficient solution for performing design rule checking on a circuit layout is desired.
According to an embodiment of the present disclosure, a solution for performing design rule checking on a circuit layout is presented. In this scheme, a management device generates a plurality of sub-jobs for performing the design rule check on the circuit layout. Each sub-job corresponds to one layout cell of the circuit layout and specifies at least one or more operations for which the design rule check is to be performed on the layout cell. The management device then allocates the plurality of sub-jobs to a plurality of processing devices based on configuration information of the processing devices and the complexity of the one or more operations. At least one processing device of the plurality of processing devices is configured with accelerated processing resources. The complexity of the one or more operations may depend on whether the respective operation involves processing the relative positions of the geometries in the layout cells. Finally, the management device determines a check result of performing the design rule check on the circuit layout based on results of the processing of the plurality of sub-jobs by the processing devices.
According to the design rule checking scheme for a circuit layout presented herein, sub-jobs may be assigned to appropriate processing devices according to the configuration of the processing devices and the characteristics of the plurality of sub-jobs. In this way, the processing resources of the processing device can be better utilized and the processing of the sub-jobs is accelerated. Thus, the scheme of the present disclosure can advantageously achieve fast and efficient design rule checking.
To better understand the scheme proposed herein for processing a circuit layout, the relevant principles of DRC will be described below. In general, DRC performs operations such as geometry selection, geometry shifting, edge shifting, and geometry creation, also referred to herein as "DRC operations," on layout patterns in a circuit layout based on logic conditions or a rule table.
Some DRC operations may not relate to the relative positions of different geometries in the layout pattern, while other DRC operations may relate to the relative positions of different geometries in the layout pattern. In other words, some DRC operations may not need to take into account the surroundings of the geometry being processed, while other DRC operations need to take into account the surroundings of the geometry being processed.
In DRC operations that do not involve relative positions, the same geometric rules or logical operations may be performed on a set of geometries (e.g., all geometries) in the layout pattern. Such DRC operations that do not relate to relative positions are also referred to herein as "geometric logical operations," but this is for discussion purposes only and is not intended to limit the scope of the present disclosure.
The geometry logic operation may include a bias operation for a set of geometries in the layout pattern. For example, such a biasing operation may move the edges of a set of geometries (e.g., all geometries) in the layout pattern inward or outward by a distance, such as 1 nm. Alternatively or additionally, the geometry logic operation may include a combining operation for combining geometries in the layout pattern. For example, the circuit layout may include multiple layers, AND the combining operation may combine the geometries of the different layers according to a certain logic (e.g., OR, AND). It should be appreciated that the geometric logic operations described above are merely exemplary, and that the geometric logic operations may include any suitable operations that do not involve the relative positions of the geometric figures.
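The two position-independent "geometry logical operations" above can be sketched on axis-aligned rectangles given as `(x0, y0, x1, y1)` tuples. The rectangle representation and function names are assumptions for illustration; a real DRC engine would operate on arbitrary polygons and merge overlapping shapes geometrically.

```python
def bias(rects, d):
    """Bias operation: move every edge of every geometry outward by d
    (inward if d is negative), e.g. d = 1 for a 1 nm outward bias."""
    return [(x0 - d, y0 - d, x1 + d, y1 + d) for x0, y0, x1, y1 in rects]

def combine_or(layer1, layer2):
    """Combining operation with OR logic: the combined layer contains the
    geometries of both input layers (here simply concatenated)."""
    return list(layer1) + list(layer2)
```

Note that neither function inspects a geometry's neighbors, which is precisely why such operations do not require processing of relative positions.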
In DRC operations involving relative positions, transformation operations, such as global shifts, edge movements, etc., may be performed on the whole or a portion of the geometry based on the relative positions involved. Such DRC operations involving relative positions are also referred to herein as "geometric transformation operations," but this is for discussion purposes only and is not intended to limit the scope of the present disclosure.
The geometry transformation operation may include a geometry shifting operation for changing the relative position of the geometry. Referring to fig. 1A, a schematic diagram of a geometry shifting operation for a layout pattern 110 is shown. As shown in fig. 1A, the layout pattern 110 may include a geometry 111 and a geometry 112. Fig. 1A shows a distance 101 between a corner of a geometry 111 and a corner of a geometry 112 before performing a geometry shifting operation. If, according to DRC, distance 101 is less than the allowed distance, at least one of geometry 111 and geometry 112 may be shifted to increase the distance between corners to the allowed distance. FIG. 1A shows an example of shifting the geometry 112 to the right. After the geometry shifting operation is performed, the distance between the corner of the shifted geometry 113 and the corner of the geometry 111 is changed to the distance 102, and the distance 102 is an allowable distance. Although an example of shifting the geometry 112 is shown in fig. 1A, it should be understood that the geometry 111 may also be shifted to increase the distance between corners, or both the geometry 111 and the geometry 112 may be shifted to increase the distance between corners.
Alternatively or additionally, the geometry transformation operation may comprise a geometry edge movement operation for changing the relative position of the geometry. Referring to fig. 1B, a schematic diagram of a geometry edge movement operation for a layout pattern 120 is shown. As shown in fig. 1B, the layout pattern 120 may include a geometry 121 and a geometry 122. FIG. 1B shows the distance 103 between a corner of the geometry 121 and a corner of the geometry 122 before the geometry edge movement operation is performed. If, according to DRC, the distance 103 is less than the allowed distance, the edge of at least one of the geometry 121 and the geometry 122 may be moved to increase the distance between the corners to the allowed distance. Fig. 1B shows an example of moving the left edge of the geometry 122 to the right. After the geometry edge movement operation is performed, the original geometry 122 is changed to the geometry 123, and the distance between the corner of the geometry 123 and the corner of the geometry 121 is changed to the distance 104, which is the allowed distance. Although an example of moving the left edge of the geometry 122 is shown in FIG. 1B, it should be understood that the right edge of the geometry 121 may also be moved to the left to increase the distance between the corners, or both the right edge of the geometry 121 and the left edge of the geometry 122 may be moved.
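The two transformation operations of Figs. 1A and 1B can be sketched on axis-aligned rectangles `(x0, y0, x1, y1)`: shifting a whole geometry to the right, or moving only its left edge to the right, until a spacing rule is satisfied. Reducing the corner-to-corner distance to a one-dimensional horizontal gap is a deliberate simplification for the example.

```python
def corner_gap(a, b):
    """Horizontal gap between the right edge of a and the left edge of b."""
    return b[0] - a[2]

def shift_right(rect, dx):
    """Fig. 1A style: shift the whole geometry to the right by dx."""
    x0, y0, x1, y1 = rect
    return (x0 + dx, y0, x1 + dx, y1)

def move_left_edge(rect, dx):
    """Fig. 1B style: move only the left edge to the right by dx."""
    x0, y0, x1, y1 = rect
    return (x0 + dx, y0, x1, y1)

def fix_spacing(a, b, allowed, move_whole=True):
    """If the gap between a and b violates the allowed distance, repair b
    by a whole-geometry shift or an edge move."""
    gap = corner_gap(a, b)
    if gap >= allowed:
        return b                      # rule already satisfied
    dx = allowed - gap
    return shift_right(b, dx) if move_whole else move_left_edge(b, dx)
```

Because `fix_spacing` must inspect the neighboring geometry `a` before deciding how to transform `b`, it is an operation that involves the relative positions of geometries, unlike the bias and combine operations above.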
Examples of geometric transformation operations are described above with reference to fig. 1A and 1B. The distance between the corners of the geometry is only an example of a relative position. The geometric transformation operation may involve any suitable measure of the relative position of the geometry, such as line-end to line-end distance, etc. Furthermore, the geometric transformation operations described above are merely exemplary, and may include any suitable operation involving the relative positions of geometric figures.
As can be seen from the examples of the geometry logical operations and the geometry transformation operations described above, the relative positions between the processed geometries or between the processed geometries and other geometries may not be considered in the execution of the geometry logical operations. In contrast, in the execution of the geometric transformation operation, the relative position between the processed geometric figures or between the processed geometric figures and other geometric figures needs to be considered. For example, it is necessary to determine whether the relative position satisfies the corresponding constraint condition according to DRC.
Thus, different DRC operations may have different complexities depending on whether processing of the relative positions of different geometries in the layout pattern is involved. In other words, the complexity of the DRC operation may depend on whether the context of the geometry being processed needs to be considered in performing the DRC operation. In some embodiments, a low complexity DRC operation may refer to a DRC operation that does not involve processing of the relative position of the geometry, such as the geometry logic operations described above. A high complexity DRC operation may refer to a DRC operation that involves processing of the relative position of a geometry, such as the geometry transformation operation described above. In some embodiments, the complexity partitioning of the DRC operation may be further refined based on the number of geometries related to the relative positions involved. For example, DRC operations involving relative positions of two geometries are less complex than DRC operations involving relative positions of three or more geometries.
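The complexity ranking described above (position-independent operations lowest, then position-dependent operations ordered by the number of geometries involved) can be expressed as a small scoring function. The operation descriptor format is hypothetical.

```python
def complexity(op):
    """Rank a DRC operation described by a dict with 'relative' (whether it
    involves relative positions) and 'n_geometries' (how many geometries
    the relative positions involve)."""
    if not op["relative"]:
        return 0          # geometry logical operation: lowest complexity
    if op["n_geometries"] <= 2:
        return 1          # pairwise geometry transformation operation
    return 2              # transformation involving three or more geometries
```

A scheduler could then compare these scores when deciding which sub-jobs merit accelerated processing resources.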
The determination of the complexity of the DRC operation is described above with reference to the processing of the relative position as an example. It will be appreciated that the complexity of the DRC operation may also be determined based on other criteria, such as the type of operation, the size of the rule table for the operation, and the like.
The geometric logic operations and geometric transformation operations involved in DRC jobs can consume substantial computational resources, and they are particularly well suited to processing with accelerated processing resources. Therefore, using accelerated processing resources to perform these operations can improve the efficiency of DRC.
Furthermore, although the geometric logic operation, the geometric transformation operation, and the like consume a large amount of computing resources, the geometric transformation operation involving the relative position is more complicated and requires more accelerated processing resources than the geometric logic operation not involving the relative position. Thus, in some embodiments employing a hybrid architecture, geometry transformation operations may be performed using accelerated processing resources while geometry logic operations are performed using conventional processing resources. In this way, a balance of efficiency and cost may be achieved.
Example architecture
Fig. 2 illustrates a schematic diagram of an example architecture 200 in which various embodiments of the present disclosure can be implemented. As shown in FIG. 2, the architecture 200 generally includes a management device 210 and a plurality of processing devices 220-1 through 220-6. For example, the management device 210 may be a server, and the plurality of processing devices 220-1 to 220-6 may be a plurality of clients.
The management device 210 in the architecture 200 may be any device with computing capabilities. As non-limiting examples, the management device 210 may be any type of fixed, mobile, or portable computing device, including but not limited to a desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, multimedia computer, mobile phone, and the like. In some embodiments, all or a portion of the components of the management device 210 may be distributed in the cloud.
The plurality of processing devices 220-1 through 220-6 may also be collectively referred to hereinafter as the plurality of processing devices 220, or individually as the processing devices 220. The processing device 220 and the management device 210 may communicate with each other and transmit data. Data transfer between the management device 210 and the processing device 220 may be based on any suitable form of communication connection, including but not limited to a wide area network (e.g., the internet), a local area network, a private network, a public network, a packet network, a wired network, or a wireless network, such as a connection established via bluetooth, Near Field Communication (NFC), wireless fidelity (Wi-Fi), Worldwide Interoperability for Microwave Access (WiMAX), infrared, 2G/3G/4G/5G, and other future developed technologies, among others.
At least one processing device of the plurality of processing devices 220 may be configured with accelerated processing resources. As an example, the processing devices 220-1, 220-2, and 220-3 shown in FIG. 2 are configured with accelerated processing resources. A processing device configured with accelerated processing resources is also referred to as an "accelerated processing device". In addition to accelerated processing resources, accelerated processing devices, such as the processing devices 220-1, 220-2, and 220-3, may also be configured with conventional processing resources. Thus, accelerated processing devices may include, but are not limited to: an APR, a combination of a CPU and an APR, a combination of a CPU, an APR, and a GPU, a combination of a CPU, an APR, and an FPGA, a combination of a CPU, an APR, a GPU, and an FPGA, and the like.
In some embodiments, the accelerated processing resources may be configured to the processing device in a persistent manner or in a non-removable manner. An accelerated processing resource configured in this manner may be referred to as a non-removable accelerated processing resource. For example, one or more of processing devices 220-1, 220-2, and 220-3 may be configured with accelerated processing resources, e.g., with a TPU, NPU, or APU built in, at factory.
In some embodiments, the accelerated processing resources may be removably configured to the processing device. An accelerated processing resource configured in this manner may be referred to as a removable accelerated processing resource. For example, the accelerated processing resources may be configured to the processing device in the form of an external plug-in. In this way, a processing device that would otherwise have only conventional processing resources can be provided with accelerated processing capabilities when needed. In such embodiments, the architecture 200 has greater flexibility to configure the processing devices 220 according to the DRC throughput.
The example architecture 200 shown in fig. 2 is a hybrid architecture. In addition to the processing devices 220-1, 220-2, and 220-3 configured with accelerated processing resources, such a hybrid architecture may include the processing devices 220-4, 220-5, and 220-6, which are not configured with accelerated processing resources. Processing devices that are not configured with accelerated processing resources are also referred to as "conventional processing devices". A conventional processing device may be configured with any suitable conventional processing resources. Thus, a conventional processing device may include, but is not limited to, a combination of a CPU and a GPU, a combination of a CPU and an FPGA, a CPU, a GPU, an FPGA, and the like. It will be appreciated that in some embodiments, a conventional processing device may be configured with accelerated processing resources in a removable manner, thereby becoming an accelerated processing device.
In such a hybrid architecture, an accelerated processing device and a conventional processing device may be used separately for operations of different complexity as specified by the DRC job. In this way, a balance of efficiency and cost may be achieved. Such an embodiment will be described below with reference to fig. 6 and 7.
In the example architecture 200 shown in fig. 2, the number of management devices and processing devices is merely exemplary and is not intended to limit the scope of the present disclosure. For example, in embodiments consistent with the present disclosure, there may be more or fewer processing devices. Further, the number of processing devices configured with and without accelerated processing resources is merely exemplary. For example, in some embodiments, multiple processing devices 220 may each be configured with accelerated processing resources. In some embodiments, some of the plurality of processing devices 220 may be configured with removable accelerated processing resources, while other processing devices may be configured with non-removable accelerated processing resources.
The foregoing describes an example architecture 200 in which embodiments according to the present disclosure can be implemented. Fig. 3 illustrates a schematic diagram of a portion 300 of the example architecture 200 of fig. 2, in accordance with some embodiments of the present disclosure. Fig. 3 illustrates data transmission between the management device 210 and a processing device, using the processing device 220-1 as an example.
The management device 210 is configured to control job allocation and execution for DRC. As shown in fig. 3, the management device 210 may include an execution unit 311 (such as a CPU), a DRC just-in-time compiler 312, and a storage device 313. The management device 210 may receive or locally store DRC-related data as well as circuit layout data. The DRC-related data may include files, recipes (e.g., the rules used), and the like for performing DRC. For example, the DRC-related data may include DRC binaries, rule data (such as a rule table), logical conditions, and the like. The circuit layout data may include the circuit layout on which DRC is to be performed, and the like.
The management device 210 may divide a circuit layout (e.g., of a mask) into a plurality of layout cells, each of which may refer to a pattern of a certain size composed of one or more geometric figures. The management device 210 generates a plurality of sub-jobs for executing DRC on the circuit layout. Each sub-job corresponds to one layout cell, and specifies at least one or more operations for DRC, i.e., one or more DRC operations, to be performed on the layout cell. For example, as shown in fig. 3, the DRC just-in-time compiler 312 may be implemented at the management device 210 and may generate execution code for the processing device 220 to execute the corresponding sub-job.
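As an illustrative sketch only (the Python class and function names below are hypothetical and not part of this disclosure), the cell division and sub-job generation described above might look like the following, assuming square layout cells of a fixed size:

```python
from dataclasses import dataclass, field

@dataclass
class SubJob:
    cell_id: tuple          # (row, col) of the layout cell in the grid
    bounds: tuple           # (x0, y0, x1, y1) region covered by the cell
    operations: list = field(default_factory=list)  # DRC operations to run

def generate_sub_jobs(layout_bounds, cell_size, operations):
    """Divide a layout into fixed-size cells, generating one sub-job per cell."""
    x0, y0, x1, y1 = layout_bounds
    jobs = []
    for row, cy in enumerate(range(y0, y1, cell_size)):
        for col, cx in enumerate(range(x0, x1, cell_size)):
            bounds = (cx, cy, min(cx + cell_size, x1), min(cy + cell_size, y1))
            jobs.append(SubJob((row, col), bounds, list(operations)))
    return jobs

# A 40x20 layout divided into 10x10 cells yields 4 x 2 = 8 sub-jobs.
jobs = generate_sub_jobs((0, 0, 40, 20), 10, ["bias", "combine"])
print(len(jobs))  # 8
```

In practice the cell size would be chosen per the computing power of the target processing devices, as discussed later in this disclosure.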
The one or more DRC operations specified by the sub-job may include geometric transformation operations that involve relative positions and geometric logical operations that do not involve relative positions. As an example, the specified one or more DRC operations may comprise a bias operation for a set of geometries in a layout cell, such as moving the edges of all geometries in a layout cell inward or outward by a distance. As another example, the specified one or more DRC operations may include a combining operation for combining geometries in layout cells, e.g., combining geometries in layer 1 of layout cells with geometries in layer 2. As yet another example, the specified one or more DRC operations may include a geometry shift operation for changing the relative position of a geometry in a layout cell, such as described above with reference to fig. 1A. As yet another example, the specified one or more DRC operations may include a geometry edge movement operation for changing the relative position of a geometry in a layout cell.
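For illustration only, the bias and combine operations described above can be sketched on axis-aligned rectangles. Real DRC engines operate on arbitrary polygons; the function names here are hypothetical:

```python
def bias(rect, delta):
    """Move all four edges of an axis-aligned rectangle outward (delta > 0)
    or inward (delta < 0), as in the bias operation described above."""
    x0, y0, x1, y1 = rect
    return (x0 - delta, y0 - delta, x1 + delta, y1 + delta)

def combine(rect_a, rect_b):
    """Bounding-box combination of two rectangles, e.g. merging a geometry
    on layer 1 with an overlapping geometry on layer 2."""
    return (min(rect_a[0], rect_b[0]), min(rect_a[1], rect_b[1]),
            max(rect_a[2], rect_b[2]), max(rect_a[3], rect_b[3]))

print(bias((10, 10, 20, 20), 2))            # (8, 8, 22, 22)
print(combine((0, 0, 5, 5), (3, 3, 8, 8)))  # (0, 0, 8, 8)
```

Note that neither of these sketches needs the relative positions of different geometries, which is why such operations are classed as geometric logic operations of low complexity.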
Additionally, in some embodiments, the generated sub-jobs may also specify pattern analysis operations for layout cells. A pattern analysis operation is performed prior to the DRC operation to determine the geometry to which the DRC operation is directed. Such an example embodiment will be described below with reference to fig. 4 and 5.
To allocate the plurality of sub-jobs to the plurality of processing devices 220, the management device 210 may determine a configuration of each of the plurality of processing devices 220 to determine whether the respective processing device is configured with accelerated processing resources. For example, the management device 210 may query the plurality of processing devices 220 for their configurations, respectively. As another example, the processing device 220 may send a message to the management device 210 to notify of the configuration change when its configuration changes. For example, the processing device 220 may notify the management device 210 when configured with removable accelerated processing resources.
Next, the management apparatus 210 can allocate a plurality of sub-jobs to the plurality of processing apparatuses 220 based on the configuration of the plurality of processing apparatuses 220 and the complexity of DRC operations. An example embodiment of allocating a sub-job will be described below with reference to fig. 6 and 7.
As shown in fig. 3, if it is determined that a certain sub-job is assigned to the processing device 220-1, the management device 210 may transmit data related to the sub-job to the processing device 220-1. The data related to the sub-jobs may include: geometric data of the layout cell corresponding to the sub-job, logic conditions, rules, functions, and the like for executing DRC.
The memory 322 of the processing device 220-1 may send the data related to the sub-job to the execution unit 321. As shown in fig. 2, the processing device 220-1 is configured with accelerated processing resources, so the execution unit 321 can process the data using one or more of the vector mode 331, the matrix mode 332, and the tensor mode 333.
After processing the sub-job, the processing device 220-1 transmits the DRC result of the sub-job to the management device 210. The DRC result may include geometry data of the layout cell that has passed the DRC and DRC flag data. Although only the processing device 220-1 is shown, it should be understood that multiple sub-jobs may be processed in parallel at the multiple processing devices 220.
The management device 210 receives the results of processing the plurality of sub-jobs, that is, the results of DRC on the plurality of layout cells, from the plurality of processing devices 220. The management device 210 generates the DRC result for the entire circuit layout based on these results. As shown in fig. 3, the management device 210 may output or store the DRC result for the entire circuit layout, which may include the DRC-passed circuit layout, DRC flag information for the entire circuit layout, and the like.
Example of generating a sub-job
Before performing DRC on the layout cells, pattern analysis needs to be performed on the layout cells. The operation for performing pattern analysis on the layout cells is also referred to as a pattern analysis operation. The pattern analysis operation may include a pattern search operation for determining a plurality of patterns from the layout cells on which the DRC operation is to be performed.
In pattern searching, it may be necessary to obtain geometry information for a layout cell. Such geometry information may indicate the individual geometries included in the layout cell and the relationships among combinations of those geometries. One fast and efficient way to obtain the geometry information is to index each geometry in the layout cell. In this way, pattern analysis can be performed quickly.
As an example, an R-tree or a binary tree may be used to construct an index structure for a layout cell (or even for the entire mask layout) as the geometry information. The R-tree or binary tree may form a single tree, or may form a forest while maintaining the hierarchy of the layout. The R-tree described herein refers to a tree-like data structure built for a circuit layout to index the geometries in the layout. The R-tree may include a root node, intermediate nodes, and leaf nodes, with indexes established between the root node and a plurality of intermediate nodes, and between each intermediate node and a plurality of leaf nodes.
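A minimal sketch of such an index node follows, assuming axis-aligned bounding boxes; the class and attribute names are hypothetical, and a production R-tree would additionally balance node fan-out and support spatial queries:

```python
class IndexNode:
    """A node in a simple R-tree-like index: a leaf holds a single geometry
    (as an axis-aligned bounding box), while an internal node holds child
    nodes and the minimum bounding rectangle (MBR) enclosing them."""
    def __init__(self, name, box=None, children=()):
        self.name = name
        self.children = list(children)
        self.box = box if box is not None else self._mbr()

    def _mbr(self):
        # Minimum bounding rectangle over the children's boxes.
        boxes = [c.box for c in self.children]
        return (min(b[0] for b in boxes), min(b[1] for b in boxes),
                max(b[2] for b in boxes), max(b[3] for b in boxes))

# Leaves R8-R10 grouped under intermediate node R3, as in figs. 4 and 5.
r8 = IndexNode("R8", (0, 0, 2, 2))
r9 = IndexNode("R9", (3, 0, 5, 2))
r10 = IndexNode("R10", (0, 3, 2, 5))
r3 = IndexNode("R3", children=[r8, r9, r10])
print(r3.box)  # (0, 0, 5, 5)
```

The coordinates above are invented for illustration and do not correspond to the actual geometries of fig. 4.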
FIG. 4 illustrates an example layout cell 400 according to some embodiments of the present disclosure. The example layout cell 400 shown in fig. 4 may be considered one example of a layout cell of a circuit layout. Fig. 4 shows geometries indexed R8-R19 as well as patterns indexed R1-R7, each of which comprises a plurality of geometries and may be considered a combination of geometries. It should be understood that the number and relative positions of the geometries shown in fig. 4 are merely exemplary and are not intended to limit the scope of the present disclosure. FIG. 5 illustrates an indexing structure 500 for the example layout cell of FIG. 4, according to some embodiments of the present disclosure.
The geometries R8-R19 each correspond to a single geometry and may constitute leaf nodes in the index structure 500. By analyzing the geometries R8-R19 as leaf nodes, patterns comprising a plurality of geometries, which form the other nodes in the index structure 500, may be determined. By way of example, by calculating the distances between the geometries R8-R19, it may be determined that the geometries R8-R10 are close to each other (e.g., less than a threshold distance apart) but do not overlap each other. Thus, the geometries R8-R10 may be grouped into a pattern R3. As another example, the geometry R11 overlaps the geometry R12, so they may be grouped into the pattern R4.
Similarly, the geometries R13-R14 may be grouped into a pattern R5; the geometries R15-R16 may be grouped into a pattern R6; the geometries R17-R19 may be grouped into a pattern R7. Unlike the geometries R8-R19, the patterns R3-R7 include a plurality of geometries. As shown in FIG. 5, patterns R3-R7 may constitute intermediate nodes in index structure 500.
By analyzing the patterns R3-R7, the patterns R6 and R7 may be grouped into the pattern R1, and the patterns R3, R4, and R5 may be grouped into the pattern R2. As shown in FIG. 5, the patterns R1 and R2 may form the root nodes of the indexing structure 500.
Example implementations of the pattern search operation are described above. As can be seen, by constructing the index structure 500, it can be determined that the layout cell 400 includes a plurality of patterns of geometries, such as the patterns R3-R7 or the patterns R1-R2. From these combinations of geometries, a plurality of patterns on which the DRC operation is to be performed, for example the patterns R3-R7, can be determined.
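The proximity-and-overlap grouping described with reference to figs. 4 and 5 can be sketched as follows. This illustrative Python sketch uses a Chebyshev (max-axis) gap and a union-find structure; both are implementation choices assumed here, not specified by this disclosure:

```python
def overlaps(a, b):
    """True if two axis-aligned boxes (x0, y0, x1, y1) overlap."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def gap(a, b):
    """Chebyshev gap between two boxes (0 if they touch or overlap)."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return max(dx, dy)

def group_geometries(boxes, threshold):
    """Union-find grouping: boxes that overlap or lie within the threshold
    distance of each other end up in the same pattern."""
    parent = list(range(len(boxes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if overlaps(boxes[i], boxes[j]) or gap(boxes[i], boxes[j]) < threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(boxes)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Three nearby boxes form one pattern; the distant box forms its own.
boxes = [(0, 0, 2, 2), (3, 0, 5, 2), (1, 3, 3, 5), (20, 20, 22, 22)]
print(group_geometries(boxes, threshold=2))  # [[0, 1, 2], [3]]
```

The pairwise loop is quadratic; the index structure 500 exists precisely so that such neighbor queries can be answered without comparing every pair.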
In some embodiments, the pattern search operation may be performed in whole or in part by the management device 210. In such an embodiment, the management device 210 may set the sub-job corresponding to the layout cell 400 to perform the DRC operation on each of the plurality of patterns (e.g., the patterns R3-R7). The management device 210 may transmit the data of the index structure 500 to the corresponding processing device 220 as a part of the data related to the sub-job shown in fig. 3. Thus, the index structure for the circuit layout may be used in parallel across the multiple processing devices 220. In this way, the geometric operations involved in the sub-jobs can be accelerated, thereby improving the processing efficiency of the entire DRC job.
In some embodiments, the pattern search operation may be performed by the processing device 220. That is, each sub-job may specify, in addition to one or more DRC operations, a pattern search operation for layout cells, e.g., the construction and search of an index structure. The management apparatus 210 may set each sub-job to perform DRC operations on a plurality of patterns determined by the pattern search operation. In such an embodiment, for the assigned sub-job, the processing device 220 may first perform the specified pattern search operation on the layout cells to determine the pattern on which the DRC operation is to be performed, and then perform the DRC operation on the determined pattern.
As used herein, the term "set a sub-job to …" and variations thereof refer to generating executable instructions for a sub-job, when generating the sub-job, such that the corresponding operation or action is performed when the sub-job is processed by a processing device.
Additionally, in some embodiments, the pattern analysis operation may further include a pattern classification operation for determining a set of patterns belonging to the same type from the determined plurality of patterns and selecting a reference pattern from the set of patterns. Multiple patterns in a layout cell may be classified to determine a set of patterns that are of the same type. For example, the same pattern may be classified into the same type. As another example, scaled patterns may be classified as the same type. If it is determined that a group of patterns belonging to the same type is included in the plurality of patterns, a reference pattern may be determined from the group of patterns. The reference pattern may be any pattern of the set of patterns.
In some embodiments, the pattern classification operation may be performed by the management device 210 (e.g., a pattern analyzer). For example, in the case where the pattern search operation is performed by the management apparatus 210, the pattern classification operation may also be performed by the management apparatus 210. In such embodiments, the management device 210 may send the results of the pattern search (e.g., the constructed index structure) and the results of the pattern classification (e.g., the determined grouping of patterns and the selected reference pattern) to the processing device 220. The management device 210 may set each sub-job to: the DRC operation is performed on the reference pattern to obtain an inspection result of the reference pattern, and the inspection result of the reference pattern is applied to the other patterns in the group of patterns other than the reference pattern.
In some embodiments, the pattern classification operation may be performed by the processing device 220. For example, in the case where the pattern search operation is performed by the processing device 220, the pattern classification operation may also be performed by the processing device 220. In such embodiments, each sub-job may specify a pattern classification operation for a layout cell in addition to one or more DRC operations. The management device 210 may set each sub-job to: performing a pattern classification operation to determine a reference pattern in a set of patterns of the same type; performing a DRC operation on the reference pattern to obtain a result of checking the reference pattern; and applying the inspection result of the reference pattern to the other patterns in the group of patterns except for the reference pattern.
In this way, for each type of pattern, a reference pattern can be determined therefrom as a seed. In executing the sub job, the processing device 220 may perform a DRC operation on each type of reference pattern, and the result of checking the reference pattern may be applied to other patterns of that type. In this manner, repeated DRC operations may be reduced, e.g., repeated application of geometric rules or logical conditions to the same pattern may be avoided.
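The seed-based reuse described above can be sketched as follows. The function names and the toy "minimum width" check are illustrative assumptions, not part of this disclosure:

```python
def run_drc_with_classification(patterns, classify, check):
    """Run the expensive check once per pattern type (on a reference
    pattern used as the seed) and reuse its result for the rest."""
    results = {}
    cache = {}   # pattern type -> check result of the reference pattern
    for name, pattern in patterns.items():
        ptype = classify(pattern)
        if ptype not in cache:          # first pattern of this type: the seed
            cache[ptype] = check(pattern)
        results[name] = cache[ptype]    # apply the seed's result to the rest
    return results

calls = []
def check(pattern):                     # stand-in for a real DRC operation
    calls.append(pattern)
    return min(pattern) >= 2            # e.g. a toy "minimum width >= 2" rule

# Two patterns of type (2, 3) share one check; (1, 4) is checked separately.
patterns = {"R3": (2, 3), "R5": (2, 3), "R4": (1, 4)}
out = run_drc_with_classification(patterns, classify=lambda p: p, check=check)
print(out)         # {'R3': True, 'R5': True, 'R4': False}
print(len(calls))  # 2 (the check ran once per type, not once per pattern)
```

A real classifier would treat identical or scaled patterns as the same type, as described above, rather than comparing raw tuples.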
Furthermore, additional advantages may also be realized in embodiments in which pattern analysis operations (e.g., pattern search operations, pattern classification operations) are performed by the processing device 220. In conventional DRC schemes, it is generally necessary to send DRC-related files to a dedicated pattern analysis tool (e.g., pattern analysis software) and receive the results of the analysis from the pattern analysis tool. Thus, such conventional DRC schemes involve a large number of file input/output (I/O) actions. In contrast, in embodiments where the pattern analysis operations are performed by the processing device 220, file I/O may be avoided. In this way, communication bandwidth can be saved and the efficiency of DRC can be further improved.
Example of allocating a sub-job
After generating the plurality of sub-jobs, the management apparatus 210 allocates the sub-jobs to the processing apparatuses 220 based on the configurations of the plurality of processing apparatuses 220 and the complexity of DRC operations specified by the sub-jobs. In some embodiments, multiple processing devices 220 may each be configured with accelerated processing resources. In such an embodiment, the management apparatus 210 may assign each sub-job to a corresponding one of the processing apparatuses.
In some embodiments, some processing devices of the plurality of processing devices 220 may be configured with accelerated processing resources, while other processing devices may not be configured with accelerated processing resources. In such embodiments, the management device 210 may pair or group the plurality of processing devices 220.
In some embodiments, the management device 210 may pair an accelerated processing device with a conventional processing device. Each pair of processing devices may be used to process a respective one or more sub-jobs. The accelerated processing device and the conventional processing device of each pair may cooperatively perform the one or more operations in the assigned sub-job.
FIG. 6 illustrates a block diagram of a process 600 for allocating sub-jobs, according to some embodiments of the present disclosure. As shown in FIG. 6, the management device 210 may organize the plurality of processing devices 220 shown in FIG. 2 into pairs of processing devices, where a first pair 611 of the processing devices may include a processing device 220-1 configured with accelerated processing resources and a processing device 220-4 not configured with accelerated processing resources; the second pair 612 of processing devices may include a processing device 220-2 configured with accelerated processing resources and a processing device 220-5 not configured with accelerated processing resources; the third pair 613 of processing devices may include a processing device 220-3 configured with accelerated processing resources and a processing device 220-6 not configured with accelerated processing resources.
If each sub-job includes operations of different complexity, the management device 210 may assign each sub-job to a pair of processing devices such that operations of high complexity are performed by the accelerated processing device of the pair and operations of low complexity are performed by the conventional processing device of the pair. For example, the management device 210 may specify, in a file or an instruction for job assignment, that operations with high complexity be performed by the accelerated processing device and operations with low complexity be performed by the conventional processing device.
As shown in FIG. 6, sub-job 601 may be assigned to a first pair 611 of processing devices, sub-job 602 may be assigned to a second pair 612 of processing devices, and sub-job 603 may be assigned to a third pair 613 of processing devices. As an example, the geometric transformation operations in sub-job 601 may be performed by processing device 220-1, while the geometric logic operations may be performed by processing device 220-4. For example, the geometry shifting operation and the geometry edge moving operation may be performed by processing device 220-1, and the biasing operation and the combining operation may be performed by processing device 220-4. Management device 210 may specify, in a file or instructions for job assignment, that geometric transformation operations be performed by processing device 220-1 and that geometric logic operations be performed by processing device 220-4.
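A sketch of such a split follows, with an assumed complexity table mapping the operations named above to "low" (geometric logic) or "high" (geometric transformation); the table, operation names, and function name are illustrative only:

```python
OPERATION_COMPLEXITY = {
    # Geometric logic operations (no relative positions): low complexity.
    "bias": "low", "combine": "low",
    # Geometric transformation operations (relative positions): high.
    "shift": "high", "edge_move": "high",
}

def split_for_pair(sub_job_ops):
    """Split a sub-job's operations between the conventional device and
    the accelerated device of a processing-device pair."""
    conventional = [op for op in sub_job_ops if OPERATION_COMPLEXITY[op] == "low"]
    accelerated = [op for op in sub_job_ops if OPERATION_COMPLEXITY[op] == "high"]
    return conventional, accelerated

conv, accel = split_for_pair(["bias", "shift", "combine", "edge_move"])
print(conv)   # ['bias', 'combine']    -> e.g. processing device 220-4
print(accel)  # ['shift', 'edge_move'] -> e.g. processing device 220-1
```

In the described embodiment this split would be encoded in the file or instructions for job assignment rather than computed at the processing devices.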
The results of each pair of processing devices processing the corresponding sub-job may be sent back to the management device 210. The management device 210 may determine the DRC-passed circuit layout based on the results from each pair of processing devices.
Although accelerated processing resources can greatly improve processing speed, they are expensive compared to conventional processing resources. In such embodiments, the flexibility of the hybrid architecture may be exploited by pairing an accelerated processing device with a conventional processing device. In this way, a balance of efficiency and cost may be achieved.
In other embodiments, the management device 210 may group the plurality of processing devices 220 into a group of accelerated processing devices and a group of conventional processing devices. Accordingly, the plurality of sub-jobs for performing DRC on the circuit layout may also be grouped. Each group of sub-jobs may include operations of the same complexity and be assigned to a respective group of processing devices.
FIG. 7 illustrates a block diagram of a process 700 for allocating sub-jobs, according to some embodiments of the present disclosure. As shown in FIG. 7, the management device 210 may organize the plurality of processing devices 220 shown in FIG. 2 into multiple groups of processing devices, where a first group 730 of processing devices may include processing devices 220-4, 220-5, and 220-6 that are not configured with accelerated processing resources and a second group 740 of processing devices may include processing devices 220-1, 220-2, and 220-3 that are configured with accelerated processing resources.
The first group 710 of sub-jobs generated by the management device 210 may include sub-jobs 711, 712, and 713. Each sub-job in the first group 710 may include only low-complexity operations, e.g., only geometric logic operations. The management device 210 may assign the first group 710 of sub-jobs to the first group 730 of processing devices, i.e., the processing devices that are not configured with accelerated processing resources. For example, as shown in FIG. 7, the sub-jobs 711, 712, and 713 are assigned to the processing devices 220-4, 220-5, and 220-6, respectively.
The management device 210 may receive the processing results of the first group 710 of sub-jobs from the first group 730 of processing devices. The management device 210 may also reconstruct or reorganize the data for subsequent DRC steps based on these processing results.
The management device 210 may generate a second group 720 of sub-jobs that includes sub-jobs 721, 722, and 723. Each sub-job in the second group 720 may include only high-complexity operations, for example only geometric transformation operations. The management device 210 may assign the second group 720 of sub-jobs to the second group 740 of processing devices, i.e., the processing devices configured with accelerated processing resources. For example, as shown in FIG. 7, the sub-jobs 721, 722, and 723 may be allocated to the processing devices 220-1, 220-2, and 220-3, respectively.
The management device 210 may in turn receive the processing results of the second group 720 of sub-jobs from the second group 740 of processing devices. Management device 210 may determine the DRC-passed mask layout based on these processing results and the previously received processing results of the first group 710 of sub-jobs.
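The two-group allocation of fig. 7 can be sketched as a round-robin assignment. The device identifiers reuse the reference numerals above; the round-robin policy and function name are illustrative assumptions:

```python
def allocate_in_groups(sub_jobs, conventional_group, accelerated_group):
    """Assign each group of sub-jobs round-robin onto the matching group of
    processing devices: low-complexity jobs to conventional devices,
    high-complexity jobs to accelerated devices."""
    assignment = {}
    low = [job for job, complexity in sub_jobs if complexity == "low"]
    high = [job for job, complexity in sub_jobs if complexity == "high"]
    for i, job in enumerate(low):
        assignment[job] = conventional_group[i % len(conventional_group)]
    for i, job in enumerate(high):
        assignment[job] = accelerated_group[i % len(accelerated_group)]
    return assignment

sub_jobs = [("711", "low"), ("712", "low"), ("713", "low"),
            ("721", "high"), ("722", "high"), ("723", "high")]
plan = allocate_in_groups(sub_jobs, ["220-4", "220-5", "220-6"],
                          ["220-1", "220-2", "220-3"])
print(plan["711"], plan["721"])  # 220-4 220-1
```

In the described embodiment the second group of sub-jobs may only be generated after the first group's results are received, so the two assignment loops would run in separate phases.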
The number of processing devices and sub-jobs shown in fig. 7 is merely illustrative and not intended to be limiting. In some embodiments, the number of sub-jobs of the first group 710 may be the same as the number of sub-jobs of the second group 720. In some embodiments, the number of sub-jobs of the first group 710 may be different from the number of sub-jobs of the second group 720.
Additionally, the size of the layout cell corresponding to each of the first group 710 of sub-jobs may be different from the size of the layout cell corresponding to each of the second group 720 of sub-jobs. For example, the management device 210 may optimize the size of the layout cell corresponding to each group of sub-jobs based on the computing power of the corresponding processing device.
The example process 700 described above constitutes one sub-job cycle. In the sub-job cycle shown in fig. 7, the management device 210 first allocates the sub-jobs with low complexity and then allocates the sub-jobs with high complexity, but it should be understood that this is merely exemplary. In some embodiments, for one sub-job cycle, the management device 210 may first allocate the sub-jobs with high complexity and then allocate the sub-jobs with low complexity. In other embodiments, the management device 210 may allocate sub-jobs with high complexity and sub-jobs with low complexity simultaneously.
It will be appreciated that in such embodiments, the sub-jobs with low complexity (e.g., the first group 710 of sub-jobs) and the sub-jobs with high complexity (e.g., the second group 720 of sub-jobs) form a batch of sub-jobs. The management device 210 may keep the different groups of processing devices continuously busy by allocating different batches of sub-jobs. For example, after the first group 730 of processing devices has completed processing the first group 710 of sub-jobs, the management device 210 may assign the next batch of sub-jobs with low complexity to the first group 730 of processing devices.
By grouping processing devices according to accelerated processing devices and conventional processing devices, and grouping sub-jobs according to complexity, the flexibility of the hybrid architecture can be leveraged. In this way, the processing device can continuously process the sub-job for DRC. Therefore, in such an embodiment, the processing efficiency of DRC can be further improved.
Example methods and example embodiments
FIG. 8 illustrates a flow diagram of an example method 800 for processing a circuit layout, according to some embodiments of the present disclosure. The method 800 may be implemented by the management device 210 shown in fig. 2. For ease of discussion, the method 800 will be described in conjunction with FIG. 2.
At block 810, the management device 210 generates a plurality of sub-jobs for performing DRC on a circuit layout (e.g., a mask layout). Each sub-job corresponds to one layout cell of the circuit layout, and specifies at least one or more operations (also referred to as DRC operations) for which DRC is to be performed on the layout cell. For example, the one or more DRC operations may comprise the geometric logic operations and geometric transformation operations described above.
In some embodiments, each sub-job may further specify a pattern search operation for determining a plurality of patterns from the layout cell, each pattern including at least one geometry of the layout cell. For example, for each sub-job, the pattern search operation may obtain geometry information for the layout cell. Such geometry information may indicate the individual geometries included in the layout cell and the relationships among combinations of those geometries. For example, the index structure 500 may be obtained as the geometry information by indexing each geometry included in the layout cell. Further, a plurality of patterns of the layout cell, such as the patterns R3-R7 shown in fig. 5, may be determined from the geometry combinations based on the geometry information. According to the geometry information, each pattern includes at least one geometry belonging to that pattern. The management device 210 may set each sub-job to perform the one or more DRC operations on the plurality of patterns determined by the pattern search operation.

In some embodiments, each sub-job may further specify a pattern classification operation for determining a group of patterns belonging to the same type from among the plurality of patterns and selecting a reference pattern from the group of patterns. The reference pattern may be any pattern of the group. The management device 210 may set each sub-job to: perform the one or more DRC operations on the reference pattern to obtain an inspection result of the reference pattern; and perform the one or more DRC operations on the remaining patterns of the group, excluding the reference pattern, by applying the inspection result to those remaining patterns.
At block 820, the management device 210 allocates the plurality of sub-jobs to the plurality of processing devices 220 based on the configuration information of the plurality of processing devices 220 and the complexity of the one or more DRC operations. At least one processing device of the plurality of processing devices 220 is configured with accelerated processing resources. For example, the processing devices 220-1, 220-2, and 220-3 shown in FIG. 2 are configured with accelerated processing resources. The accelerated processing resources may include, but are not limited to, a GPU, an FPGA, an APU, an AI chip, etc. The AI chips may include, for example, TPUs, NPUs, and other existing or future-developed AI chips.
In some embodiments, the accelerated processing resources are removably configured to the at least one processing device. For example, a GPU, FPGA, TPU, NPU, or APU is configured in the form of an external plug-in to one or more of the processing devices 220-1, 220-2, and 220-3.
In some embodiments, the accelerated processing resources are non-removably configured to the at least one processing device. For example, one or more of the processing devices 220-1, 220-2, and 220-3 have built-in accelerated processing resources, such as a GPU, FPGA, TPU, NPU, or APU.
In some embodiments, the complexity may depend on whether one or more operations involve processing the relative positions of the geometries in the layout cells.
In some embodiments, the management device 210 may determine a plurality of pairs of processing devices from the plurality of processing devices 220 based on configuration information of the plurality of processing devices 220. Each pair of processing devices includes a first processing device that is not configured with accelerated processing resources and a second processing device that is configured with accelerated processing resources. For example, the management device 210 may determine a plurality of pairs of processing devices shown in fig. 6.
If the one or more operations include a first operation and a second operation of higher complexity than the first operation (for example, the first operation is a geometric logic operation that does not involve processing of the relative positions of geometries, and the second operation is a geometric transformation operation that involves processing of relative positions), the management device 210 may assign each sub-job to a corresponding pair of the plurality of pairs of processing devices such that the first operation is performed by the first processing device and the second operation is performed by the second processing device. For example, each of the sub-jobs 601, 602, and 603 illustrated in fig. 6 includes a geometric transformation operation of high complexity and a geometric logic operation of low complexity. The sub-jobs 601, 602, and 603 may be respectively assigned to the plurality of pairs of processing devices.
In some embodiments, the management device 210 may determine a first group of processing devices and a second group of processing devices from the plurality of processing devices 220 based on the configuration information of the plurality of processing devices 220. The first group of processing devices may be a group of processing devices that are not configured with accelerated processing resources, and the second group of processing devices may be a group of processing devices that are configured with accelerated processing resources. If a first group of sub-jobs of the plurality of sub-jobs includes a first operation and does not include a second operation of higher complexity than the first operation (e.g., the first operation is a geometric logic operation that does not involve processing of the relative positions of geometries, and the second operation is a geometric transformation operation that involves processing of relative positions), the management device 210 may assign the first group of sub-jobs to the first group of processing devices. If a second group of sub-jobs of the plurality of sub-jobs includes the second operation and does not include the first operation, the management device 210 may assign the second group of sub-jobs to the second group of processing devices.
As an example, the management device 210 may determine a first group 730 of processing devices and a second group 740 of processing devices. The plurality of sub-jobs may include a first group 710 of sub-jobs and a second group 720 of sub-jobs. The first group 710 of sub-jobs may include only geometric logic operations and the second group 720 of sub-jobs may include only geometric transformation operations. The management device 210 may assign a first group 710 of sub-jobs to a first group 730 of processing devices and a second group 720 of sub-jobs to a second group 740 of processing devices.
In some embodiments, the second set of sub-jobs may be generated after the first set of sub-jobs is executed. In some embodiments, the dimensions of the layout cells corresponding to each of the first set of sub-jobs may be different from the dimensions of the layout cells corresponding to each of the second set of sub-jobs.
At block 830, the management device 210 determines a check result of performing the design rule check on the circuit layout based on the results of the processing of the plurality of sub-jobs by the plurality of processing devices 220. For example, the management device 210 may generate DRC results as shown in fig. 3.
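One way the management device could combine per-sub-job results into a single check result is sketched below. The `(cell, violations)` tuple format is an assumption for illustration; the application does not specify the result representation.

```python
def merge_results(sub_job_results):
    """Each processing device reports (cell, violations) for its sub-job;
    concatenate them into one DRC check result keyed by layout cell."""
    report = {}
    for cell, violations in sub_job_results:
        report.setdefault(cell, []).extend(violations)
    return report
```

Keying the report by layout cell preserves traceability: a violation found by any device can be located in the original circuit layout by its cell.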
In another aspect of the present disclosure, a method for performing DRC is also provided. The method may be implemented by processing device 220 and may include the actions described above with respect to processing device 220.
Example apparatus
Fig. 9 illustrates a schematic block diagram of an example device 900 that may be used to implement embodiments of the present disclosure. The device 900 may be used to implement the management device 210 or the processing device 220 of fig. 1. As shown, the device 900 includes a Central Processing Unit (CPU) 901 that can perform various appropriate actions and processes in accordance with computer program instructions stored in a Read-Only Memory (ROM) 902 or loaded from a storage unit 908 into a Random Access Memory (RAM) 903. The RAM 903 can also store various programs and data required for the operation of the device 900. The CPU 901, the ROM 902, and the RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
A number of components in the device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The CPU 901 performs the various methods and processes described above, such as the method 800. For example, in some embodiments, the method 800 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the CPU 901, one or more steps of the method 800 described above may be performed. Alternatively, in other embodiments, the CPU 901 may be configured to perform the method 800 by any other suitable means (e.g., by way of firmware).
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a System on a Chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (11)

1. A method of processing a circuit layout, comprising:
generating a plurality of sub-jobs for performing design rule checking on a circuit layout, wherein each sub-job corresponds to a layout cell of the circuit layout and specifies at least one or more operations for which design rule checking is to be performed on the layout cell;
allocating the plurality of sub-jobs to a plurality of processing devices based on configuration information of the plurality of processing devices and a complexity of the one or more operations, at least one of the plurality of processing devices configured with accelerated processing resources; and
determining a checking result of performing design rule checking on the circuit layout based on results of the processing of the plurality of sub-jobs by the plurality of processing devices.
2. The method according to claim 1, wherein the complexity depends on whether the one or more operations involve processing of relative positions of geometries in the layout cell.
3. The method of claim 2, wherein assigning the plurality of sub-jobs to a plurality of processing devices based on the configuration information of the plurality of processing devices and a complexity of the one or more operations comprises:
determining, based on the configuration information for the plurality of processing devices, a plurality of pairs of processing devices from the plurality of processing devices, each pair of processing devices comprising a first processing device not configured with the accelerated processing resources and a second processing device configured with the accelerated processing resources; and
assigning each sub-job to a respective pair of the plurality of pairs of processing devices, if the one or more operations include a first operation and a second operation of higher complexity than the first operation, such that the first operation is performed by the first processing device and the second operation is performed by the second processing device, wherein the first operation does not involve processing of the relative position and the second operation involves processing of the relative position.
4. The method of claim 2, wherein assigning the plurality of sub-jobs to a plurality of processing devices based on the configuration information of the plurality of processing devices and a complexity of the one or more operations comprises:
determining, based on the configuration information for the plurality of processing devices, a first set of processing devices and a second set of processing devices from the plurality of processing devices, the first set of processing devices being a set of processing devices that are not configured with the accelerated processing resources and the second set of processing devices being a set of processing devices that are configured with the accelerated processing resources;
assigning a first group of sub-jobs of the plurality of sub-jobs to the first group of processing devices if the first group of sub-jobs includes a first operation and does not include a second operation of higher complexity than the first operation, wherein the first operation does not involve processing of the relative position and the second operation involves processing of the relative position; and
assigning a second group of sub-jobs of the plurality of sub-jobs to the second group of processing devices if the second group of sub-jobs includes the second operation and does not include the first operation.
5. The method according to claim 4, wherein a size of a first layout cell is different from a size of a second layout cell, the first layout cell being a layout cell corresponding to each of the first set of sub-jobs and the second layout cell being a layout cell corresponding to each of the second set of sub-jobs.
6. The method according to claim 1, wherein each sub-job further specifies a pattern search operation for determining a plurality of patterns from the layout cell, each pattern comprising at least one geometry of the layout cell, and generating the plurality of sub-jobs comprises:
setting each sub-job to perform the one or more operations on the plurality of patterns determined by the pattern search operation.
7. The method of claim 6, wherein each sub-job further specifies a pattern classification operation for determining a set of patterns belonging to the same type from the plurality of patterns and selecting a reference pattern from the set of patterns, and setting each sub-job to perform the one or more operations on the plurality of patterns comprises:
setting each sub-job to:
performing the one or more operations on the reference pattern to obtain an inspection result of the reference pattern; and
performing the one or more operations on the remaining patterns of the set of patterns other than the reference pattern by applying the inspection result to the remaining patterns.
8. The method of claim 1, wherein the one or more operations comprise at least one of:
a bias operation for a set of geometries in the layout cell,
a combining operation for combining the geometries in the layout cells,
a geometric shift operation for changing the relative position, or
a geometric edge movement operation for changing the relative position.
9. The method of claim 1, wherein the accelerated processing resources are removably configured to the at least one processing device.
10. An electronic device, the device comprising:
one or more processors; and
a storage device for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to perform actions comprising:
generating a plurality of sub-jobs for performing design rule checking on a circuit layout, wherein each sub-job corresponds to a layout cell of the circuit layout and specifies at least one or more operations for which design rule checking is to be performed on the layout cell;
allocating the plurality of sub-jobs to a plurality of processing devices based on configuration information of the plurality of processing devices and a complexity of the one or more operations, at least one of the plurality of processing devices configured with accelerated processing resources; and
determining a checking result of performing design rule checking on the circuit layout based on results of the processing of the plurality of sub-jobs by the plurality of processing devices.
11. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the method according to any one of claims 1-9.
CN202011491209.4A 2020-12-16 2020-12-16 Method, apparatus and storage medium for processing a circuit layout Pending CN112580296A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011491209.4A CN112580296A (en) 2020-12-16 2020-12-16 Method, apparatus and storage medium for processing a circuit layout

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011491209.4A CN112580296A (en) 2020-12-16 2020-12-16 Method, apparatus and storage medium for processing a circuit layout

Publications (1)

Publication Number Publication Date
CN112580296A true CN112580296A (en) 2021-03-30

Family

ID=75135637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011491209.4A Pending CN112580296A (en) 2020-12-16 2020-12-16 Method, apparatus and storage medium for processing a circuit layout

Country Status (1)

Country Link
CN (1) CN112580296A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6543039B1 (en) * 1998-09-29 2003-04-01 Kabushiki Kaisha Toshiba Method of designing integrated circuit and apparatus for designing integrated circuit
CN1633658A (en) * 2001-08-29 2005-06-29 英芬能技术公司 Integrated circuit chip design
US7913206B1 (en) * 2004-09-16 2011-03-22 Cadence Design Systems, Inc. Method and mechanism for performing partitioning of DRC operations
CN102368276A (en) * 2011-09-14 2012-03-07 天津蓝海微科技有限公司 Flow method for automatically verifying correctness of electric rule file
CN111309491A (en) * 2020-05-14 2020-06-19 北京并行科技股份有限公司 Operation cooperative processing method and system
CN111339724A (en) * 2020-02-21 2020-06-26 全芯智造技术有限公司 Method, apparatus and storage medium for generating data processing model and layout
CN111611766A (en) * 2020-05-15 2020-09-01 全芯智造技术有限公司 Method, apparatus and storage medium for determining circuit layout constraints


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIN Bin: "Research on Optical Proximity Correction Technology and Layout Hotspot Management Technology", China Doctoral Dissertations Full-text Database, Information Science and Technology, 15 July 2014 (2014-07-15), pages 135 - 23 *

Similar Documents

Publication Publication Date Title
CN112579286B (en) Method, apparatus and storage medium for light source mask optimization
US10108458B2 (en) System and method for scheduling jobs in distributed datacenters
US20200192880A1 (en) Optimal dynamic shard creation in storage for graph workloads
CN112559163A (en) Method and device for optimizing tensor calculation performance
US20190146837A1 (en) Distributed real-time computing framework using in-storage processing
US20110131554A1 (en) Application generation system, method, and program product
CN112560392B (en) Method, apparatus and storage medium for processing a circuit layout
Tan et al. Serving DNN models with multi-instance gpus: A case of the reconfigurable machine scheduling problem
WO2021202011A1 (en) Partitioning for an execution pipeline
CN116011562A (en) Operator processing method, operator processing device, electronic device and readable storage medium
Goudarzi et al. Design of a universal logic block for fault-tolerant realization of any logic operation in trapped-ion quantum circuits
CN112559181A (en) Hot spot detection method and device for circuit layout and storage medium
Adoni et al. DHPV: a distributed algorithm for large-scale graph partitioning
Er et al. Parallel genetic algorithm to solve traveling salesman problem on mapreduce framework using hadoop cluster
US10198293B2 (en) Distributed real-time computing framework using in-storage processing
Montone et al. Wirelength driven floorplacement for FPGA-based partial reconfigurable systems
KR102238600B1 (en) Scheduler computing device, data node of distributed computing system having the same, and method thereof
Mollajafari An efficient lightweight algorithm for scheduling tasks onto dynamically reconfigurable hardware using graph-oriented simulated annealing
Li et al. Performance optimization algorithm of radar signal processing system
CN112580296A (en) Method, apparatus and storage medium for processing a circuit layout
Gallet et al. Efficient scheduling of task graph collections on heterogeneous resources
WO2017104072A1 (en) Stream data distribution processing method, stream data distribution processing system and storage medium
Bengre et al. A learning-based scheduler for high volume processing in data warehouse using graph neural networks
Zhou et al. Multi-shape task placement algorithm based on low fragmentation resource management on 2D heterogeneous dynamic partial reconfigurable devices
Karanik et al. Edge Service Allocation Based on Clustering Techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination