CN115270686A - Chip layout method based on graph neural network


Info

Publication number
CN115270686A
Authority
CN
China
Prior art keywords: model, data, chip, neural network, graph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210726916.XA
Other languages
Chinese (zh)
Inventor
王玉莹
郝沁汾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Core Optical Interconnect Technology Research Institute Co ltd
Original Assignee
Wuxi Core Optical Interconnect Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Core Optical Interconnect Technology Research Institute Co ltd filed Critical Wuxi Core Optical Interconnect Technology Research Institute Co ltd
Priority to CN202210726916.XA priority Critical patent/CN115270686A/en
Publication of CN115270686A publication Critical patent/CN115270686A/en
Pending legal-status Critical Current

Classifications

    • G06F30/392 Floor-planning or layout, e.g. partitioning or placement (under G06F30/00 Computer-aided design [CAD]; G06F30/39 Circuit design at the physical level)
    • G06F30/323 Translation or migration, e.g. logic to logic, hardware description language [HDL] translation or netlist translation (under G06F30/32 Circuit design at the digital level)
    • G06N3/04 Architecture, e.g. interconnection topology (under G06N3/02 Neural networks)
    • G06N3/08 Learning methods (under G06N3/02 Neural networks)


Abstract

The invention discloses a chip layout method based on a graph neural network, comprising the following steps: S1, converting a circuit into graph data, establishing a two-dimensional coordinate system x-y, labeling a data set, and dividing it into a training set and a verification set; S2, constructing a graph neural network model, setting loss functions and a loss threshold; S3, training the graph neural network model with the training set as input data and updating the model, to obtain a first model for predicting the x coordinate and a second model for predicting the y coordinate; and S4, preprocessing the data of each unit in the chip to be laid out and the graph data of the chip, inputting the preprocessed data into the first model and the second model to obtain the x and y coordinate values of each unit on the chip, and outputting the final layout result. Compared with traditional chip layout techniques, the method converts circuit netlist data into graph data and trains a graph neural network model under the guidance of a loss function, thereby achieving intelligent chip layout while saving time and labor cost.

Description

Chip layout method based on graph neural network
Technical Field
The invention relates to the technical field of chip layout, in particular to a chip layout method based on a graph neural network.
Background
Chip layout mainly refers to the process of determining the placement position, on the chip, of each unit or module of the chip to be designed. Layout is one of the key links in integrated circuit physical design, and is the most complex and time-consuming step in the whole chip design process.
In the last decade, advances in systems and hardware have greatly accelerated the development of machine learning, so realizing a fast and accurate chip layout method with machine learning algorithms has practical significance and value. Existing chip layout methods mainly rely on electronic design automation layout tools, most of which require manual intervention and demand extensive layout experience from their users; other chip layout methods, such as simulated annealing, suffer from a low level of intelligence and a large amount of computation, while reinforcement learning methods require many iterations and long training times.
The prior art discloses a chip layout optimization system and method based on deep reinforcement learning. In that method, a data preprocessing module reads and parses pl and net files and converts the netlist information therein into the initial state and reward function of an agent; a policy network module obtains global embedding features and node embedding features, of coarse and fine granularity respectively, through a convolutional neural network and a graph neural network, fuses the feature vectors obtained by the two networks, and finally predicts the current behavior, i.e., the probability distribution over possible placement positions of an element. This scheme is based on the idea of reinforcement learning and has the defects of requiring large-scale training data, many iterations, and a long training time.
Therefore, in combination with the above requirements and the defects of the prior art, the present application provides a chip layout method based on a graph neural network.
Disclosure of Invention
The invention provides a chip layout method based on a graph neural network, which can save time input and labor cost while realizing intelligent chip layout.
The primary objective of the present invention is to solve the above technical problems, and the technical solution of the present invention is as follows:
the invention provides a chip layout method based on a graph neural network, which comprises the following steps:
s1, converting a circuit into graph data according to different resource types, establishing a two-dimensional coordinate system x-y on the surface of a chip to be laid out, labeling a data set, and dividing the data set into a training set and a verification set according to a set proportion.
S2, constructing a graph neural network model, respectively setting loss functions related to the x coordinate and the y coordinate, and setting a loss threshold.
And S3, respectively training the graph neural network model with the training set as input data and updating the model, and calculating the value of the loss function with the verification set as input data until the loss value is smaller than the set threshold, to obtain a first model for predicting the x coordinate and a second model for predicting the y coordinate.
And S4, preprocessing the data of each unit in the chip to be laid out and the graph data of the chip, inputting the preprocessed data into the first model and the second model, combining the prediction results of the two models to obtain the x and y coordinate values of each unit on the chip to be laid out, and outputting the final layout result.
Further, the graph data in step S1 is a heterogeneous graph, and different types of nodes in the heterogeneous graph correspond to different types of chip resources.
The chip resources mainly comprise modules such as configurable logic units, storage units, multifunctional high-performance IO (input/output) modules and clock resources, and each type of resource has its own placement rule, so the circuit netlist data can be mapped into a heterogeneous graph, i.e., different types of resources correspond to different types of nodes in the heterogeneous graph.
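As a minimal sketch of this mapping, the following pure-Python function groups netlist cells into typed node sets and derives typed edges from net connectivity. The data format and the resource-type names (CLB, RAM, IO) are illustrative assumptions, not the patent's actual file format.

```python
# Sketch (assumed format): map a circuit netlist to a heterogeneous graph
# whose node types follow the chip resource types.

def netlist_to_heterograph(cells, nets):
    """cells: {cell_id: resource_type}; nets: list of tuples of connected cell_ids.
    Returns nodes grouped by resource type and typed edges from net connectivity."""
    nodes_by_type = {}
    for cell, rtype in cells.items():
        nodes_by_type.setdefault(rtype, []).append(cell)
    edges = []
    for net in nets:
        # connect every pair of cells that share a net
        for i in range(len(net)):
            for j in range(i + 1, len(net)):
                a, b = net[i], net[j]
                edges.append(((cells[a], a), (cells[b], b)))
    return {"nodes": nodes_by_type, "edges": edges}

graph = netlist_to_heterograph(
    {"u1": "CLB", "u2": "CLB", "u3": "RAM", "u4": "IO"},
    [("u1", "u2", "u3"), ("u3", "u4")],
)
```

Each edge carries the resource types of its endpoints, which is what lets the model route node information to the GNN group of the matching type.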
Further, labeling the data set in step S1 specifically includes: dividing all resources of the programmable logic device chip into N classes, where N is a natural number, and marking, on the two-dimensional plane coordinate system, the positions of the different units included in each class of resources, each unit being labeled with its type n and its position:

(x_i^n, y_i^n)

where n = 1, 2, …, N and i = 1, 2, …, M_n, and M_n is the total number of units contained in the n-th class of resource.
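A small sketch of this labeling and of the subsequent training/validation split. The 0.8 split ratio is an illustrative assumption for the "set proportion" mentioned in step S1.

```python
# Sketch: label each unit of resource class n with its type and position
# (x_i, y_i), then split the labeled set into training and validation sets.
import random

def label_and_split(units, ratio=0.8, seed=0):
    """units: list of (resource_class_n, x, y) triples.
    ratio: assumed train/validation proportion."""
    labeled = [{"type": n, "pos": (x, y)} for n, x, y in units]
    rng = random.Random(seed)      # fixed seed for a reproducible split
    rng.shuffle(labeled)
    cut = int(len(labeled) * ratio)
    return labeled[:cut], labeled[cut:]

units = [(1, 0.0, 1.0), (1, 2.0, 3.0), (2, 4.0, 5.0), (2, 6.0, 7.0), (3, 8.0, 9.0)]
train, val = label_and_split(units)
```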
Further, the main body of the graph neural network model in step S2 consists of N groups of heterogeneous graph neural network (HGNN) layers, grouped by resource type, with 3 HGNN layers per group.
Further, the graph neural network model comprises two parts:
The first part comprises N parallel groups, each consisting of a GNN network layer and an MLP layer connected in sequence. Input node information enters the GNN network of the group corresponding to its resource type; the output of that GNN network is then input to the MLP layer, yielding the node embedding for that resource type.
The second part comprises an MLP layer and an FC layer connected in sequence. The node embeddings of the multiple resource types output by the first part are dimension-transformed by the MLP layer and input to the FC layer, which outputs the final coordinate values.
Further, the mathematical expression of the loss function in step S2 is:
[Formulas for the loss functions L_1 and L_2, rendered as images in the original document]
where L_1 is the loss function related to the x coordinate and x_i is the output value of the graph neural network model related to x; L_2 is the loss function related to the y coordinate and y_i is the output value of the graph neural network model related to y.
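The exact loss expressions appear only as images in the source. A plain mean-squared error between predicted and labeled coordinates is a common choice for this kind of coordinate regression and is assumed here purely for illustration; it is not confirmed by the patent text.

```python
# Assumed loss sketch: mean squared error over all labeled units,
# used once for the x coordinates (L_1) and once for the y coordinates (L_2).

def coord_loss(pred, target):
    """MSE between predicted and labeled coordinate values."""
    assert len(pred) == len(target)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

l1 = coord_loss([1.0, 2.0], [1.0, 4.0])  # (0 + 4) / 2 = 2.0
```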
Further, the graph neural network model is updated through an Adam optimizer.
The graph neural network model completes its training by applying gradient descent under the guidance of the loss function.
Further, the training process of step S3 is as follows:
S31, input the training-set data into the graph neural network model related to the x coordinate, calculate the loss function L_1 from the output values, and update the model.
S32, input the validation-set data into the updated model, calculate the loss value of the model from the resulting validation-set outputs, and judge whether the loss value is smaller than the set threshold; if so, training is finished and the model is saved as the first model for predicting the x coordinate; otherwise, repeat training steps S31-S32.
S33, input the training-set data into the graph neural network model related to the y coordinate, calculate the loss function L_2 from the output values, and update the model.
S34, input the validation-set data into the updated model, calculate the loss value of the model from the resulting validation-set outputs, and judge whether the loss value is smaller than the set threshold; if so, training is finished and the model is saved as the second model for predicting the y coordinate; otherwise, repeat training steps S33-S34.
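The train-until-threshold loop of steps S31-S34 can be sketched as follows. The `train_step`/`val_loss` interface and the toy model whose loss halves each step are hypothetical stand-ins, not the patent's API.

```python
# Sketch of the S31-S34 loop: update on the training set, check the
# validation loss against the set threshold, stop once it drops below it.

def train_until_threshold(model, train_data, val_data, threshold, max_iters=1000):
    """Runs S31-S32 (x model) or S33-S34 (y model) until convergence."""
    for _ in range(max_iters):
        model.train_step(train_data)              # S31/S33: loss-guided update
        if model.val_loss(val_data) < threshold:  # S32/S34: threshold check
            return model
    raise RuntimeError("validation loss did not reach the threshold")

class ToyModel:
    """Stand-in model whose validation loss halves every step."""
    def __init__(self):
        self.loss = 1.0
    def train_step(self, data):
        self.loss /= 2
    def val_loss(self, data):
        return self.loss

trained = train_until_threshold(ToyModel(), None, None, threshold=0.1)
```

The same loop is run twice in the method: once to produce the first (x) model and once to produce the second (y) model.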
Further, step S4 specifically includes: preprocessing the attribute data of each unit in the chip to be laid out and the netlist data of the chip, and inputting the preprocessed data into the first model to obtain the x coordinate value of each unit on the chip to be laid out; preprocessing the attribute data of each unit in the chip to be laid out and the netlist data of the chip, and inputting the preprocessed data into the second model to obtain the y coordinate value of each unit on the chip to be laid out; and combining the prediction results of the first model and the second model to obtain the final layout result.
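Combining the two models' predictions in step S4 amounts to zipping the per-unit x and y outputs into placements. The model callables below are hypothetical stand-ins for the trained first and second models.

```python
# Sketch of step S4: run each preprocessed unit through the x model and the
# y model, then combine the two predictions into the final layout.

def predict_layout(units, model_x, model_y):
    """units: preprocessed per-unit inputs; returns {unit: (x, y)}."""
    return {u: (model_x(u), model_y(u)) for u in units}

layout = predict_layout(
    ["u1", "u2"],
    model_x=lambda u: {"u1": 1.0, "u2": 3.0}[u],
    model_y=lambda u: {"u1": 2.0, "u2": 4.0}[u],
)
```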
The second aspect of the present invention provides a chip layout system based on a graph neural network, including: the device comprises a data preprocessing and data marking module, a graph neural network initialization module, a model training and storing module and a model layout prediction module; the data preprocessing and data marking module reads and analyzes a circuit netlist data file, converts the netlist data of a circuit into graph data according to different resource types, constructs a coordinate system, marks the positions of all units contained in each type of resource on a chip and divides a data set; the graph neural network initialization module builds a model architecture taking a heterogeneous graph neural network as a main body according to the total number of resource types on a chip, initializes the parameters of the whole model and sets a loss function; the model training and storing module completes the training of the model under the guidance of the loss function and stores the trained model; the model layout prediction module calculates the x and y coordinate values of each unit on the chip to be laid out by applying the trained model, and outputs the final layout result.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention provides a chip layout method based on a graph neural network, which realizes intelligent chip layout by converting circuit netlist data into graph data and training a graph neural network model by means of a loss function, and has the characteristics of saving time input and labor cost.
Drawings
FIG. 1 is a flow chart of a chip layout method based on a graph neural network according to the present invention.
Fig. 2 is a schematic diagram of the main structure of the HGNN model of the chip layout method based on the graph neural network of the present invention.
FIG. 3 is a schematic diagram of HGNN model weight updating in the chip layout method based on the graph neural network of the present invention.
Fig. 4 is a schematic structural diagram of a chip layout system based on a graph neural network according to the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and thus the scope of the present invention is not limited by the specific embodiments disclosed below.
Example 1
As shown in fig. 1-2, the present invention provides a chip layout method based on a graph neural network, which specifically includes the following steps:
s1, converting a circuit into graph data according to different resource types, establishing a two-dimensional coordinate system x-y on the surface of a chip to be laid out, labeling a data set, and dividing the data set into a training set and a verification set according to a set proportion.
S2, constructing a graph neural network model, respectively setting loss functions related to the x coordinate and the y coordinate, and setting a loss threshold.
And S3, respectively training the graph neural network model with the training set as input data and updating the model, and calculating the value of the loss function with the verification set as input data until the loss value is smaller than the set threshold, to obtain a first model for predicting the x coordinate and a second model for predicting the y coordinate.
And S4, preprocessing the data of each unit in the chip to be laid out and the graph data of the chip, inputting the preprocessed data into the first model and the second model, combining the prediction results of the two models to obtain the x and y coordinate values of each unit on the chip to be laid out, and outputting the final layout result.
Further, the graph data in step S1 is a heterogeneous graph, and different types of nodes in the heterogeneous graph correspond to different types of chip resources.
The chip resources mainly comprise modules such as configurable logic units, storage units, multifunctional high-performance IO (input/output) modules and clock resources, and each type of resource has its own placement rule, so the circuit netlist data can be mapped into a heterogeneous graph, i.e., different types of resources correspond to different types of nodes in the heterogeneous graph.
Further, labeling the data set in step S1 specifically includes: dividing all resources of the programmable logic device chip into N classes, where N is a natural number, and marking, on the two-dimensional plane coordinate system, the positions of the different units included in each class of resources, each unit being labeled with its type n and its position:

(x_i^n, y_i^n)

where n = 1, 2, …, N and i = 1, 2, …, M_n, and M_n is the total number of units contained in the n-th class of resource.
Wherein, in a specific embodiment, the proportion of the training set and the validation set divided is 8.
Further, the main body of the graph neural network model in step S2 consists of N groups of heterogeneous graph neural network (HGNN) layers, grouped by resource type, with 3 HGNN layers per group.
Further, the graph neural network model comprises two parts:
The first part comprises N parallel groups, each consisting of a GNN network layer and an MLP layer connected in sequence. Input node information enters the GNN network of the group corresponding to its resource type; the output of that GNN network is then input to the MLP layer, yielding the node embedding for that resource type.
The second part comprises an MLP layer and an FC layer connected in sequence. The node embeddings of the multiple resource types output by the first part are dimension-transformed by the MLP layer and input to the FC layer, which outputs the final coordinate values.
Further, the mathematical expression of the loss function in step S2 is:
[Formulas for the loss functions L_1 and L_2, rendered as images in the original document]
where L_1 is the loss function related to the x coordinate and x_i is the output value of the graph neural network model related to x; L_2 is the loss function related to the y coordinate and y_i is the output value of the graph neural network model related to y.
Further, the graph neural network model is updated through an Adam optimizer.
The graph neural network model completes its training by applying gradient descent under the guidance of the loss function.
Further, the training process of step S3 is as follows:
S31, input the training-set data into the graph neural network model related to the x coordinate, calculate the loss function L_1 from the output values, and update the model.
S32, input the validation-set data into the updated model, calculate the loss value of the model from the resulting validation-set outputs, and judge whether the loss value is smaller than the set threshold; if so, training is finished and the model is saved as the first model for predicting the x coordinate; otherwise, repeat training steps S31-S32.
S33, input the training-set data into the graph neural network model related to the y coordinate, calculate the loss function L_2 from the output values, and update the model.
S34, input the validation-set data into the updated model, calculate the loss value of the model from the resulting validation-set outputs, and judge whether the loss value is smaller than the set threshold; if so, training is finished and the model is saved as the second model for predicting the y coordinate; otherwise, repeat training steps S33-S34.
Further, step S4 specifically includes: preprocessing the attribute data of each unit in the chip to be laid out and the netlist data of the chip, and inputting the preprocessed data into the first model to obtain the x coordinate value of each unit on the chip to be laid out; preprocessing the attribute data of each unit in the chip to be laid out and the netlist data of the chip, and inputting the preprocessed data into the second model to obtain the y coordinate value of each unit on the chip to be laid out; and combining the prediction results of the first model and the second model to obtain the final layout result.
Example 2
Based on the foregoing embodiment 1, with reference to fig. 3, this embodiment describes in detail the process of obtaining the layout coordinates of a node in the graph data by using the trained graph neural network models.
In a specific embodiment, a hexagram (six-pointed star) node in the graph data is selected as an example. As shown in fig. 3, this node has two types of neighbors: circular nodes and triangular nodes. The GNN network of the trained model 1 obtains the embedded representations of the two types of neighbor nodes and the embedded representation e_1 of the hexagram node itself. The representations of neighbor nodes of the same type are then spliced and input into the MLP network of model 1, yielding the MLP-layer outputs e_2 and e_3 for the two types of neighbors. Next, e_1, e_2 and e_3 are spliced and input into the next MLP network to obtain the final representation emb_1 of the hexagram node. Finally, emb_1 is input to the fully connected (FC) layer of model 1 to obtain the final predicted abscissa x_1 of the node; similarly, applying the trained model 2 yields the final predicted ordinate y_1. The predicted two-dimensional coordinate (x_1, y_1) is the specific position of the node after layout.
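The fig. 3 walk-through can be traced in code: per-type neighbor representations are spliced (concatenated) and transformed, then combined with the node's own embedding and passed through an FC layer to produce x_1. All vectors and the toy "MLP" (halving every component) are illustrative assumptions.

```python
# Sketch of the hexagram-node example: splice per-type neighbor embeddings,
# transform them, splice with e_1, transform again, then apply an FC layer.

def concat(vectors):
    """Splice a list of vectors into one flat vector."""
    out = []
    for v in vectors:
        out.extend(v)
    return out

def mlp(x):
    # toy "MLP": halve every component (stands in for a learned transform)
    return [xi / 2 for xi in x]

e1 = [1.0, 1.0]                      # embedding of the hexagram node itself
circle = [[2.0, 0.0], [0.0, 2.0]]    # circle-type neighbor embeddings
triangle = [[4.0, 4.0]]              # triangle-type neighbor embeddings

e2 = mlp(concat(circle))             # spliced circle neighbors -> MLP
e3 = mlp(concat(triangle))           # spliced triangle neighbors -> MLP
emb1 = mlp(concat([e1, e2, e3]))     # splice e1, e2, e3 -> next MLP
x1 = sum(emb1)                       # toy FC layer: sum of components
```

Running the same data through a second set of (trained) transforms would give the ordinate y_1, completing the predicted placement (x_1, y_1).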
Example 3
As shown in fig. 4, the present invention further provides a chip layout system based on a graph neural network, including: the device comprises a data preprocessing and data marking module, a graph neural network initialization module, a model training and storing module and a model layout prediction module; the data preprocessing and data marking module reads and analyzes a circuit netlist data file, converts the circuit netlist data into graph data according to different resource types, constructs a coordinate system, marks the positions of all units contained in each type of resource on a chip and divides a data set; the graph neural network initialization module builds a model architecture taking a heterogeneous graph neural network as a main body according to the total number of resource types on the chip, initializes the parameters of the whole model and sets a loss function; the model training and storing module completes the training of the model under the guidance of the loss function and stores the trained model; and the model layout prediction module calculates the x and y coordinate values of each unit on the chip to be laid by applying the trained model and outputs the final layout result.
The specific working principle of each module is as follows:
the data preprocessing and data marking module converts the circuit into graph data according to different resource types, establishes a two-dimensional coordinate system x-y on the surface of the chip to be laid out, marks the data set and divides the data set into a training set and a verification set according to a set proportion.
Further, the graph data is a heterogeneous graph, and different types of nodes in the heterogeneous graph correspond to different types of chip resources.
Further, labeling the data set specifically includes: dividing all resources of the programmable logic device chip into N classes, where N is a natural number, and marking, on the two-dimensional plane coordinate system, the positions of the different units included in each class of resources, each unit being labeled with its type n and its position:

(x_i^n, y_i^n)

where n = 1, 2, …, N and i = 1, 2, …, M_n, and M_n is the total number of units contained in the n-th class of resource.
The graph neural network initialization module constructs a graph neural network model, sets loss functions related to an x coordinate and a y coordinate respectively, and sets a loss threshold.
Furthermore, the main body of the graph neural network model consists of N groups of heterogeneous graph neural network (HGNN) layers, grouped by resource type, with 3 HGNN layers per group.
Further, the graph neural network model comprises two parts:
The first part comprises N parallel groups, each consisting of a GNN network layer and an MLP layer connected in sequence. Input node information enters the GNN network of the group corresponding to its resource type; the output of that GNN network is then input to the MLP layer, yielding the node embedding for that resource type.
The second part comprises an MLP layer and an FC layer connected in sequence. The node embeddings of the multiple resource types output by the first part are dimension-transformed by the MLP layer and input to the FC layer, which outputs the final coordinate values.
Further, the mathematical expression of the loss function is:
[Formulas for the loss functions L_1 and L_2, rendered as images in the original document]
where L_1 is the loss function related to the x coordinate and x_i is the output value of the graph neural network model related to x; L_2 is the loss function related to the y coordinate and y_i is the output value of the graph neural network model related to y.
Further, the graph neural network model is updated through an Adam optimizer.
The graph neural network model completes its training by applying gradient descent under the guidance of the loss function.
And the model training and storing module respectively trains the graph neural network model by taking the training set as input data and updates the model, and calculates the value of the loss function by taking the verification set as the input data until the loss value is smaller than a set threshold value, so as to obtain a first model for predicting the x coordinate and a second model for predicting the y coordinate.
Further, the training process is as follows:
S31, input the training-set data into the graph neural network model related to the x coordinate, calculate the loss function L_1 from the output values, and update the model.
S32, input the validation-set data into the updated model, calculate the loss value of the model from the resulting validation-set outputs, and judge whether the loss value is smaller than the set threshold; if so, training is finished and the model is saved as the first model for predicting the x coordinate; otherwise, repeat training steps S31-S32.
S33, input the training-set data into the graph neural network model related to the y coordinate, calculate the loss function L_2 from the output values, and update the model.
S34, input the validation-set data into the updated model, calculate the loss value of the model from the resulting validation-set outputs, and judge whether the loss value is smaller than the set threshold; if so, training is finished and the model is saved as the second model for predicting the y coordinate; otherwise, repeat training steps S33-S34.
The model layout prediction module inputs the preprocessed data of each unit in the chip to be laid out and the preprocessed graph data of the chip into the first model and the second model, combines the prediction results of the two models to obtain the x and y coordinate values of each unit on the chip to be laid out, and outputs the final layout result.
Further, the layout prediction process specifically includes: preprocessing the attribute data of each unit in the chip to be laid out and the netlist data of the chip, and inputting the preprocessed data into the first model to obtain the x coordinate value of each unit on the chip to be laid out; preprocessing the attribute data of each unit in the chip to be laid out and the netlist data of the chip, and inputting the preprocessed data into the second model to obtain the y coordinate value of each unit on the chip to be laid out; and combining the prediction results of the first model and the second model to obtain the final layout result.
The drawings depicting the positional relationship of the structures are for illustrative purposes only and are not to be construed as limiting the present patent.
It should be understood that the above-described examples are merely illustrative of the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the claims of the present invention.

Claims (10)

1. A chip layout method based on a graph neural network is characterized by comprising the following steps:
s1, converting a circuit into graph data according to different resource types, establishing a two-dimensional coordinate system x-y on the surface of a chip to be laid out, labeling a data set, and dividing the data set into a training set and a verification set according to a set proportion;
s2, constructing a graph neural network model, respectively setting loss functions related to an x coordinate and a y coordinate, and setting a loss threshold;
s3, respectively training the graph neural network model with the training set as input data and updating the model, and calculating the value of the loss function with the verification set as input data, until the loss value is smaller than the set threshold, so as to obtain a first model for predicting the x coordinate and a second model for predicting the y coordinate;
and S4, preprocessing the data of each unit in the chip to be laid out and the graph data of the chip, inputting the preprocessed data into the first model and the second model, combining the prediction results of the first model and the second model to obtain the x and y coordinate values of each unit on the chip to be laid out, and outputting the final layout result.
2. The method according to claim 1, wherein the graph data in step S1 is a heterogeneous graph, and different types of nodes in the heterogeneous graph correspond to different types of chip resources.
3. The method according to claim 1, wherein the labeling of the data set in step S1 is specifically: dividing all resources of the programmable logic device chip into N classes, where N is a natural number, and marking the positions of the different units included in each class of resources on the two-dimensional plane coordinate system, the label consisting of the type n and the position:

(x_i^n, y_i^n)

where n = 1, 2, …, N, i = 1, 2, …, M_n, and M_n is the total number of units contained in the nth class of resources.
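The labeling scheme in this claim, where each placed unit is labeled with its resource class n and its position (x, y), can be sketched as follows; the resource classes, coordinates, and the 80/20 train/verification split ratio are illustrative assumptions, not values from the patent:

```python
import random

# Sketch of the labeling in claim 3: each placed unit gets a label
# (n, (x, y)) where n is its resource class and (x, y) its position
# on the chip's two-dimensional coordinate system.
# Resource names, positions, and the 80/20 split are illustrative.

units = [
    ("LUT",  1, (3.0, 4.0)),
    ("LUT",  1, (5.0, 4.0)),
    ("FF",   2, (3.0, 6.0)),
    ("DSP",  3, (8.0, 2.0)),
    ("BRAM", 4, (1.0, 9.0)),
]

# labels: (resource type n, position (x, y))
dataset = [(n, pos) for _name, n, pos in units]

random.seed(0)
random.shuffle(dataset)
split = int(0.8 * len(dataset))          # illustrative 80/20 split
train_set, val_set = dataset[:split], dataset[split:]
```

The split ratio is the "set proportion" of step S1; the patent leaves its value open.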
4. The method as claimed in claim 1, wherein the graph neural network model in step S2 is composed of N groups of heterogeneous graph neural network (HGNN) layers grouped by resource type, and each group contains 3 HGNN layers.
5. The method of claim 4, wherein the graph neural network model comprises two parts:
the first part comprises N parallel groups, each group consisting of a GNN network layer and an MLP layer connected in sequence; the node information input to the first part enters the GNN network of the group corresponding to its resource type, and the output value of the GNN network is then input to the MLP layer to obtain the node embedding corresponding to that resource type;
the second part comprises one MLP layer and one FC layer connected in sequence; the node embeddings of the multiple resource types output by the first part are dimension-transformed by the MLP layer and input into the FC layer, and the FC layer outputs the final coordinate value.
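A highly simplified structural sketch of the two-part model in this claim, with toy linear maps standing in for the real HGNN, MLP, and FC layers (all weights, features, and node names below are illustrative):

```python
# Simplified structural sketch of claim 5's two-part model:
# part 1: one per-resource-type branch (here, mean-over-neighbours
#         aggregation followed by a scalar "MLP"), giving a node embedding;
# part 2: a shared head mapping each embedding to a coordinate value.
# Real HGNN/MLP/FC layers are replaced by toy linear maps (illustrative).

def branch(features, neighbours, weight):
    # "GNN" step: average each node with its neighbours, then scale ("MLP")
    out = {}
    for node, feat in features.items():
        agg = feat + sum(features[m] for m in neighbours[node])
        agg /= 1 + len(neighbours[node])
        out[node] = weight * agg          # embedding for this node
    return out

def shared_head(embedding, w, b):
    # "MLP + FC" step: map an embedding to a single coordinate value
    return w * embedding + b

# one resource type with two nodes; each type would get its own branch weight
features = {"a": 1.0, "b": 3.0}
neighbours = {"a": ["b"], "b": ["a"]}
emb = branch(features, neighbours, weight=2.0)
coords = {n: shared_head(e, w=1.0, b=0.5) for n, e in emb.items()}
```

In the claimed model there would be N such branches, one per resource type, all feeding the same shared head.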
6. The method of claim 1, wherein the mathematical expression of the loss function in step S2 is as follows:
L_1 = (1/M) Σ_{i=1}^{M} (x_i − x̂_i)²

L_2 = (1/M) Σ_{i=1}^{M} (y_i − ŷ_i)²

where L_1 is the loss function related to the x coordinate, x_i is the output value of the x-related graph neural network model, and x̂_i is the labeled x coordinate of the ith unit; L_2 is the loss function related to the y coordinate, y_i is the output value of the y-related graph neural network model, and ŷ_i is the labeled y coordinate of the ith unit; M is the total number of units.
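The loss formulas in this claim appear in the original only as images; assuming a mean-squared-error form between predicted and labeled coordinates, the per-axis loss can be computed as:

```python
# Assumed MSE form of the per-axis losses L1 / L2 (the patent's exact
# formulas are image placeholders; MSE is an assumption, not confirmed).

def mse(pred, target):
    # mean squared error over all units on one axis
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

xs_pred = [1.0, 2.0, 3.0]   # model outputs x_i
xs_true = [1.0, 2.0, 5.0]   # labeled coordinates
l1 = mse(xs_pred, xs_true)  # loss for the x-coordinate model
```

The same function applied to the y outputs and labels gives L_2.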
7. The chip layout method based on the graph neural network according to claim 6, wherein the graph neural network model is updated through an Adam optimizer.
8. The method of claim 7, wherein the training process of step S3 is as follows:
s31, inputting the data of the training set into the graph neural network model related to the x coordinate, calculating the loss function L_1 from the output value, and updating the model;
s32, inputting the data of the verification set into the updated model, calculating the loss value of the model from the obtained verification-set output, and judging whether the loss value is smaller than the set threshold; if so, the training is finished and the model is saved as the first model for predicting the x coordinate; otherwise, the training steps S31-S32 are repeated;
s33, inputting the data of the training set into the graph neural network model related to the y coordinate, calculating the loss function L_2 from the output value, and updating the model;
and S34, inputting the data of the verification set into the updated model, calculating the loss value of the model from the obtained verification-set output, and judging whether the loss value is smaller than the set threshold; if so, the training is finished and the model is saved as the second model for predicting the y coordinate; otherwise, the training steps S33-S34 are repeated.
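The S31–S34 loop can be sketched with a toy one-parameter model; plain gradient descent stands in for the Adam optimizer of claim 7, and the data, learning rate, and threshold values are illustrative:

```python
# Sketch of the S31/S32 loop: update the model on the training set, then
# check the verification-set loss against the threshold. A one-parameter
# linear model with plain gradient descent stands in for the HGNN + Adam
# optimizer (both are simplifications of the claimed method).

train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, labeled x coordinate)
val   = [(4.0, 8.0)]                           # verification set
threshold = 1e-4                               # illustrative loss threshold

w = 0.0                                        # model parameter
for step in range(1000):
    # S31: one training update (gradient of MSE with respect to w)
    grad = sum(2 * (w * a - t) * a for a, t in train) / len(train)
    w -= 0.01 * grad
    # S32: verification loss; stop once it falls below the threshold
    val_loss = sum((w * a - t) ** 2 for a, t in val) / len(val)
    if val_loss < threshold:
        break

first_model_weight = w   # would be saved as the "first model"
```

The y-coordinate model (S33/S34) repeats the same loop with L_2 and the y labels.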
9. The method according to claim 1, wherein the step S4 is specifically: preprocessing the attribute data of each unit in the chip to be laid out and the netlist data of the chip and inputting them into the first model to obtain the x coordinate value of each unit on the chip to be laid out; preprocessing the attribute data of each unit in the chip to be laid out and the netlist data of the chip and inputting them into the second model to obtain the y coordinate value of each unit on the chip to be laid out; and combining the prediction results of the first model and the second model to obtain the final layout result.
10. A chip layout system based on a graph neural network, comprising: a data preprocessing and data labeling module, a graph neural network initialization module, a model training and saving module, and a model layout prediction module; the data preprocessing and data labeling module reads and parses a circuit netlist data file, converts the circuit netlist data into graph data according to different resource types, constructs a coordinate system, labels the positions on the chip of all units contained in each type of resource, and divides the data set; the graph neural network initialization module builds a model architecture with a heterogeneous graph neural network as its main body according to the total number of resource types on the chip, initializes the parameters of the whole model, and sets the loss function; the model training and saving module completes the training of the model under the guidance of the loss function and saves the trained model; and the model layout prediction module applies the trained model to calculate the x and y coordinate values of each unit on the chip to be laid out and outputs the final layout result.
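The four modules of this claim, wired as one pipeline, can be sketched as follows; every function body is a placeholder (illustrative stubs, not the patent's implementation):

```python
# Thin sketch of the four modules in claim 10 wired as one pipeline.
# Each function body is an illustrative stand-in for the real module.

def preprocess_and_label(netlist):
    # data preprocessing and labeling module: netlist -> graph data + labels
    graph = {"nodes": netlist["cells"], "edges": netlist["nets"]}
    labels = {c: (0.0, 0.0) for c in netlist["cells"]}   # placeholder labels
    return graph, labels

def init_model(num_resource_types):
    # graph neural network initialization module: one branch per type
    return {"weights": [1.0] * num_resource_types}

def train_and_save(model, graph, labels):
    # model training and saving module: a no-op stand-in for real training
    model["trained"] = True
    return model

def predict_layout(model, graph):
    # model layout prediction module: emits an (x, y) per cell
    return {c: (float(i), float(i)) for i, c in enumerate(graph["nodes"])}

netlist = {"cells": ["u0", "u1"], "nets": [("u0", "u1")]}
graph, labels = preprocess_and_label(netlist)
model = train_and_save(init_model(num_resource_types=3), graph, labels)
layout = predict_layout(model, graph)
```

The module boundaries mirror the claim: data preparation, model construction, training, and prediction each sit behind one call.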
CN202210726916.XA 2022-06-24 2022-06-24 Chip layout method based on graph neural network Pending CN115270686A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210726916.XA CN115270686A (en) 2022-06-24 2022-06-24 Chip layout method based on graph neural network


Publications (1)

Publication Number Publication Date
CN115270686A (en) 2022-11-01

Family

ID=83761966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210726916.XA Pending CN115270686A (en) 2022-06-24 2022-06-24 Chip layout method based on graph neural network

Country Status (1)

Country Link
CN (1) CN115270686A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115688670A (en) * 2022-12-29 2023-02-03 全芯智造技术有限公司 Integrated circuit layout method and device, storage medium and terminal equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241478A (en) * 2020-11-12 2021-01-19 广东工业大学 Large-scale data visualization dimension reduction method based on graph neural network
CN114154412A (en) * 2021-11-25 2022-03-08 上海交通大学 Optimized chip layout system and method based on deep reinforcement learning
WO2022057310A1 (en) * 2020-09-15 2022-03-24 华为技术有限公司 Method, apparatus and system for training graph neural network
US20220101103A1 (en) * 2020-09-25 2022-03-31 Royal Bank Of Canada System and method for structure learning for graph neural networks
CN114297493A (en) * 2021-12-28 2022-04-08 北京三快在线科技有限公司 Object recommendation method, object recommendation device, electronic equipment and storage medium
CN114372438A (en) * 2022-01-12 2022-04-19 广东工业大学 Chip macro-unit layout method and system based on lightweight deep reinforcement learning


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
AZALIA MIRHOSEINI et al.: "A graph placement methodology for fast chip design", Nature, vol. 594, pages 207-229 *
WALTER LAU NETO et al.: "LSOracle: a Logic Synthesis Framework Driven by Artificial Intelligence", 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pages 1-6 *
LI Han et al.: "A Survey of Graph Neural Network Acceleration Architectures", Journal of Computer Research and Development, vol. 58, no. 06, pages 1204-1229 *


Similar Documents

Publication Publication Date Title
US20200364389A1 (en) Generating integrated circuit floorplans using neural networks
CN111612252B (en) Automatic site selection method and device for large-scale emergency facilities and readable storage medium
CN114896937A (en) Integrated circuit layout optimization method based on reinforcement learning
CN110276442A (en) A kind of searching method and device of neural network framework
CN112685504B (en) Production process-oriented distributed migration chart learning method
CN115907436B (en) Quality coupling prediction-based water resource water environment regulation and control method and system
Su et al. Algorithms for solving assembly sequence planning problems
CN112445876A (en) Entity alignment method and system fusing structure, attribute and relationship information
CN113343427B (en) Structural topology configuration prediction method based on convolutional neural network
CN107392307A (en) The Forecasting Methodology of parallelization time series data
CN109145342A (en) Automatic wiring system and method
CN116151324A (en) RC interconnection delay prediction method based on graph neural network
CN115270686A (en) Chip layout method based on graph neural network
CN115146580A (en) Integrated circuit path delay prediction method based on feature selection and deep learning
CN117114250A (en) Intelligent decision-making system based on large model
CN113505560B (en) FPGA wiring congestion prediction method and system
CN115455899A (en) Analytic layout method based on graph neural network
CN113240219A (en) Land utilization simulation and prediction method
JP2019194765A (en) Optimization device and method of controlling the same
CN110705650B (en) Sheet metal layout method based on deep learning
CN107491841A (en) Nonlinear optimization method and storage medium
CN114662204B (en) Elastic bar system structure system data processing method and device based on graph neural network
CN115600421A (en) Construction method and device and medium of autonomous traffic system evolution model based on improved Petri network
CN112613830B (en) Material reserve center site selection method
CN112434817B (en) Method, apparatus and computer storage medium for constructing communication algorithm database

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination