WO2023211471A1 - Generation of supplemental test programs for integrated circuit design testing - Google Patents

Generation of supplemental test programs for integrated circuit design testing Download PDF

Info

Publication number
WO2023211471A1
WO2023211471A1 (PCT/US2022/027099)
Authority
WO
WIPO (PCT)
Prior art keywords
test templates
design
integrated circuit
templates
new test
Prior art date
Application number
PCT/US2022/027099
Other languages
French (fr)
Inventor
Ning Yan
Masood Mortazavi
Original Assignee
Futurewei Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Futurewei Technologies, Inc. filed Critical Futurewei Technologies, Inc.
Priority to PCT/US2022/027099 priority Critical patent/WO2023211471A1/en
Publication of WO2023211471A1 publication Critical patent/WO2023211471A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/30 Circuit design
    • G06F30/32 Circuit design at the digital level
    • G06F30/33 Design verification, e.g. functional simulation or model checking
    • G06F30/3308 Design verification, e.g. functional simulation or model checking using simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2115/00 Details relating to the type of the circuit
    • G06F2115/10 Processors

Definitions

  • a method of generating, by a computing device, supplemental test programs for an integrated circuit includes: obtaining a design for an integrated circuit and receiving a plurality of initial test templates for testing of the design of the integrated circuit.
  • the method also includes: analyzing the initial test templates by a graph convolutional network to extract a program representation of a structure of programs for executing the initial test templates; generating, by a graph convolutional policy network, a plurality of new test templates for the design of the integrated circuit from the program representation; and performing a plurality of parallel simulations of the design of the integrated circuit using the plurality of new test templates.
  • the method may also include: subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide accurate simulation results; and in response to the new test templates not providing accurate simulation results, generating a revised set of test templates.
  • the method may also include: subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide sufficient test coverage of a selected set of characteristics for the design of the integrated circuit; and in response to the new test templates not providing sufficient test coverage of the selected set of characteristics for the design of the integrated circuit, generating additional new test templates.
  • the method may also include: subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide accurate simulation results; and in response to the new test templates providing accurate simulation results, testing the design of the integrated circuit using the new test templates.
  • the method may also include: in response to the design of the integrated circuit not passing the testing, modifying the design of the integrated circuit; and testing the modified design of the integrated circuit using the new test templates.
  • the method may also include: in response to the design of the integrated circuit passing the testing, fabricating the design of the integrated circuit.
  • analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates includes building an abstract syntax tree from the test templates.
  • analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates further comprises: learning program representations from the abstract syntax tree.
  • generating, by the graph convolutional policy network, the plurality of new test templates for the design of the integrated circuit from the program representation comprises: generating the new test templates from the learnt program representations.
  • the method further includes training the graph convolutional policy network based on results of the plurality of parallel simulations.
  • a system includes one or more interfaces and one or more processors coupled to the one or more interfaces.
  • the one or more interfaces are configured to: receive a plurality of initial test templates for testing of the design of the integrated circuit.
  • the one or more processors are configured to: obtain a design for an integrated circuit; analyze the initial test templates by a graph convolutional network to extract a program representation of a structure of programs for executing the initial test templates; generate, by a graph convolutional policy network, a plurality of new test templates for the design of the integrated circuit from the program representation; and perform a plurality of parallel simulations of the design of the integrated circuit using the plurality of new test templates.
  • the one or more processors are further configured to: subsequent to performing the plurality of parallel simulations, determine whether the new test templates provide accurate simulation results; and in response to the new test templates not providing accurate simulation results, generate a revised set of test templates.
  • the one or more processors are further configured to: subsequent to performing the plurality of parallel simulations, determine whether the new test templates provide sufficient test coverage of a selected set of characteristics for the design of the integrated circuit; and in response to the new test templates not providing sufficient test coverage of the selected set of characteristics for the design of the integrated circuit, generate additional new test templates.
  • the one or more processors are further configured to: subsequent to performing the plurality of parallel simulations, determine whether the new test templates provide accurate simulation results; and in response to the new test templates providing accurate simulation results, test the design of the integrated circuit using the new test templates.
  • the one or more processors are further configured to: in response to the design of the integrated circuit not passing the testing, modify the design of the integrated circuit; and test the modified design of the integrated circuit using the new test templates.
  • the one or more processors are further configured to: build an abstract syntax tree from the test templates.
  • the one or more processors are further configured to: learn program representations from the abstract syntax tree.
  • the one or more processors are further configured to generate the new test templates from the learnt program representations.
  • a method includes: obtaining a design for an integrated circuit; receiving a plurality of initial test templates for testing of the design of the integrated circuit; analyzing the initial test templates by a graph convolutional network to extract a program representation of a structure of programs for executing the initial test templates; generating, by a graph convolutional policy network, a plurality of additional test templates for the design of the integrated circuit from the program representation; testing the design of the integrated circuit using the additional test templates; and, in response to the design of the integrated circuit passing the testing, fabricating the design of the integrated circuit.
  • the method may also include: subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide accurate simulation results; and in response to the new test templates not providing accurate simulation results, generating a revised set of test templates.
  • the method may also include: subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide sufficient test coverage of a selected set of characteristics for the design of the integrated circuit; and in response to the new test templates not providing sufficient test coverage of the selected set of characteristics for the design of the integrated circuit, generating additional new test templates.
  • the method may also include: subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide accurate simulation results; and in response to the new test templates providing accurate simulation results, testing the design of the integrated circuit using the new test templates.
  • the method may also include: in response to the design of the integrated circuit not passing the testing, modifying the design of the integrated circuit; and testing the modified design of the integrated circuit using the new test templates.
  • analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates includes building an abstract syntax tree from the test templates.
  • analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates further comprises: learning program representations from the abstract syntax tree.
  • generating, by the graph convolutional policy network, the plurality of new test templates for the design of the integrated circuit from the program representation comprises: generating the new test templates from the learnt program representations.
  • the method may also include training the graph convolutional policy network based on results of the plurality of parallel simulations.
  • FIG. 1 is a flowchart of an embodiment for the design and manufacture of an integrated circuit design.
  • FIG. 2 is a flowchart of an overview for one embodiment of system workflow.
  • FIG. 3 is a block diagram of an embodiment of the system components to perform the process illustrated in FIG. 2.
  • FIG. 4 is a schematic representation of several layers of a neural network in more detail.
  • FIG. 5 is a flowchart describing one embodiment of a process for training a neural network to generate a set of weights.
  • FIG. 6 is a flowchart describing a process for the inference phase of supervised learning using a neural network.
  • FIG. 7 is a schematic representation of an input and first hidden layer of a graph convolutional network.
  • FIG. 8 is a high-level block diagram of a more general computing system that can be used to implement various embodiments described in the preceding figures.
  • the following presents techniques for supplementing the test programs generated for testing designs for integrated circuits, such as central processing units (CPUs).
  • the design under test is typically tested using high-level templates provided by verification engineers based on years of experience and expertise in related fields.
  • the manually written test templates might not be comprehensive, and it is cumbersome and time-consuming to manually compose additional test templates.
  • the following presents a system and methods that automatically generate more test templates based on existing templates and provided template components.
  • One set of embodiments is based on use of a graph convolutional policy network for program generation and a reinforcement learning algorithm for coverage optimization.
  • FIG. 1 is a flowchart of an embodiment for the design and fabrication of an integrated circuit design.
  • a design under test is generated.
  • this can include the typical steps in a CPU or other integrated circuit design, such as: system specification, such as a feasibility study and functional analysis; architectural or system level design; logic design, such as analogue and digital design and simulation and system level simulation and verification; and circuit design, such as digital design synthesis and determining the design for test.
  • test templates are received, where these are high-level templates usually provided by verification engineers based on years of experience and expertise in related fields. These manually written templates are generally not comprehensive as it is cumbersome and time-consuming to manually compose additional test templates.
  • the design under test is tested by applying pseudo random stimuli at 105 in order to simulate the operation of the design.
  • Constrained random verification (CRV) is one standard method in industrial design verification for chips such as CPUs. Central to this process is the design of an elaborate testbench that applies pseudorandom stimulus to the design-under-test (DUT) downstream. If the design does not meet the requirements of the test, the design is revised at 109 and re-tested. Once the tests are passed, the design can move to be manufactured at a fabrication facility at step 111, where there may be additional processes performed first.
  • test code generation is a complex task, which is why people often use high-level templates for guiding the test generation program, with the test templates composed using a domain specific language (DSL) or some high-level language, such as Python.
  • GCPN graph convolutional policy network
  • the system can run multiple simulations, each with one set of new programs generated, in parallel and obtain the code coverage results from simulations, with the coverage results used as rewards for a reinforcement learning algorithm for optimizing the graph convolutional policy network.
  • the systems used in previous approaches have lacked automatic optimization of code coverage in constrained random verification and instead have mostly relied on manual effort from verification engineers with substantial expertise in related fields.
  • the automatic approach presented here can save significant amounts of time for configuring the test program generators by using graph convolutional networks to abstract information from the provided templates.
  • FIG. 2 is a flowchart of an overview for one embodiment of system workflow, starting at 201.
  • One or more initial templates are received at 203, where these can be the same as the sort of manually generated templates of 103 of FIG. 1.
  • at 205, the design or designs under test are received or, more generally, obtained, such as when the same processing or computing device also generates the design.
  • although 205 is shown as following 203 in FIG. 2, 205 can occur earlier or later in the flow, as long as the design is available when subsequently used.
  • the flow of FIG. 2 departs from that of FIG. 1 at 207.
  • at 207, the initial test templates are analyzed to build the abstract syntax tree for use in generating the additional templates by considering the variables of the test templates and their relationships within the structure of the programs’ language, where a system for 207 and subsequent processes is discussed in more detail with respect to FIG. 3.
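The abstract syntax tree construction at 207 can be sketched as follows, assuming a test template written in Python (one of the high-level languages the document mentions); the template source and the DSL helpers it calls (choose, random_register, emit) are made-up illustrations, not part of the document:

```python
import ast

# Hypothetical test template; the DSL helpers it references are assumptions.
template_source = """
opcode = choose(["ADD", "SUB", "XOR"])
rs1 = random_register()
rs2 = random_register()
emit(opcode, rs1, rs2)
"""

# Build the abstract syntax tree and collect (parent, child) edges, which
# capture the template's variables and their structural relationships.
tree = ast.parse(template_source)
edges = []
for parent in ast.walk(tree):
    for child in ast.iter_child_nodes(parent):
        edges.append((type(parent).__name__, type(child).__name__))

print(edges[:4])
```

These (parent, child) pairs are the kind of structural relationships the graph convolutional network can take as input.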
  • at 209, a graph convolutional network can then learn representations of the structure of the programs (i.e., the node embeddings of the test templates) from the abstract syntax tree built in 207, using these program representations as the inputs to the graph convolutional network.
  • Using a graph generative module for generating new test templates follows at 211 by propagating the inputs from 209 through the graph convolutional network.
  • the graph generative module takes the learnt program representations from 209 as the input, with the graph generative module utilizing a graph convolutional policy network to generate augmented graphs, which are further converted to new test templates.
  • These newly generated test templates are then used by a test generation program for generating new test sets at 213 that, much as a set of manually generated templates is used at 105 of FIG. 1, can be used to run tests on the design under test by applying pseudo-random stimuli according to the test sets at 215.
  • at 215, the tests for the newly generated test sets are performed.
  • multiple simulations for the design under test from 205 can then be run in parallel using the newly generated test sets, with the coverage used as the loss function for optimizing the graph generative module.
  • a determination is made at 217 on whether a certain coverage threshold is met: if so, the system stops at 219; if not, the flow loops back to 211 for generating more test templates.
  • the determination can be made by comparing the simulation results of 215 with the results of the initial test templates from 203 to see whether they agree within a limit on the amount of acceptable error and whether the tests cover an acceptable number of circuit properties and features. For example, the determination can be made on whether the design meets benchmark values on a selected set of circuit characteristics, such as speed, accuracy, power consumption, or other important metrics of the design under test.
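The coverage-driven loop of FIG. 2 (generate at 211, simulate at 215, check the threshold at 217, loop back if unmet) can be sketched as follows; the function names, the threshold value, and the stand-in coverage model are all assumptions for illustration:

```python
# Illustrative sketch of the coverage loop; names and values are assumptions.
COVERAGE_THRESHOLD = 0.90

def run_parallel_simulations(templates):
    # Stand-in for 215: pretend each template covers one more feature of ten.
    return min(1.0, len(templates) / 10.0)

def generate_templates(existing):
    # Stand-in for the graph generative module at 211: add one new template.
    return existing + [f"template_{len(existing)}"]

templates = ["template_0"]                      # initial templates from 203
coverage = run_parallel_simulations(templates)
while coverage < COVERAGE_THRESHOLD:            # determination at 217
    templates = generate_templates(templates)   # loop back to 211
    coverage = run_parallel_simulations(templates)

print(len(templates), coverage)
```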
  • FIG. 3 is a block diagram of an embodiment of a system component to perform the process illustrated in FIG. 2.
  • 203 can correspond to the input of the initial test templates into block 311 of graph generator 301, with block 311 performing 207 by extracting program representations using a graph convolutional network.
  • Block 315 performs 209 by using a graph generative module for generation of new templates, with 211 implemented through the graph convolutional policy network of block 317.
  • Test program generator 303 uses the test templates 331 from block 313 of graph generator 301 to perform 213.
  • the parallel simulation environments 305 receive the design under test 351 as an input, corresponding to 205, and also receive the test program sets 353, which are then used to perform the parallel simulation of 215.
  • the code coverage from the parallel simulations can then be supplied to block 315 to make the determination of 217.
  • the graph generator 301 extracts program representations using a graph convolutional network.
  • Graph convolutional network models are a type of neural network architecture that can leverage the graph structure and aggregate node information in a convolutional fashion. For example, molecular structures can be analyzed by treating the atoms as nodes of a graph whose edges correspond to the bonds between these atoms.
  • the general idea of a neural network can be illustrated with respect to FIGs. 4-6, with an application of a graph convolutional network illustrated with respect to FIG. 7.
  • Convolutional neural networks are a type of network that employ a mathematical operation called convolution, which is a specialized kind of linear operation.
  • Convolutional networks are neural networks that use convolution in place of general matrix multiplication in at least one of their layers.
  • a CNN is formed of an input and an output layer, with a number of intermediate hidden layers in between.
  • the hidden layers of a CNN are typically a series of convolutional layers that “convolve” with a multiplication or other dot product.
  • Each “neuron” in a neural network computes an output value by applying a specific function to the input values coming from the receptive field in the previous layer. The function that is applied to the input values is determined by a vector of weights and a bias.
  • the vector of weights and the bias are called filters and represent particular features of the input (e.g., a particular shape).
  • a distinguishing feature of CNNs is that many neurons can share the same filter.
  • an initial input (such as an image represented as an array of pixel values) is followed by a number of convolutional layers and other types of neural network layers, the last of which provides the output.
  • Each neuron in the first convolutional layer takes as input data a portion of the input.
  • the neuron’s learned weights, which are collectively referred to as its convolution filter, determine the neuron’s single-valued output in response to the input.
  • a neuron’s filter is applied to the input by sliding it along the full input’s values to generate the values of the convolutional layer.
  • the equivalent convolution is normally implemented by applying identical copies of the neuron’s filter to different input regions.
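The sliding-filter operation just described can be sketched as a minimal 1-D convolution; the input values, weights, and bias below are arbitrary illustrations:

```python
# A single neuron's filter (weights plus bias) slid across a 1-D input:
# each position applies the same filter to a different receptive field.
def conv1d(inputs, weights, bias):
    k = len(weights)
    out = []
    for i in range(len(inputs) - k + 1):
        region = inputs[i:i + k]                 # current receptive field
        out.append(sum(w * x for w, x in zip(weights, region)) + bias)
    return out

values = conv1d([1, 2, 3, 4, 5], weights=[1, 0, -1], bias=0)
print(values)  # each value comes from one placement of the shared filter
```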
  • FIG. 4 is a schematic representation of several layers of a neural network in more detail.
  • the shown three layers of the artificial neural network are represented as an interconnected group of nodes or artificial neurons, represented by the circles, and a set of connections from the output of one artificial neuron to the input of another.
  • the example shows three input nodes (I1, I2, I3) and two output nodes (O1, O2), with an intermediate layer of four hidden or intermediate nodes (H1, H2, H3, H4).
  • the inputs to the input nodes are not shown, but may be the initial inputs to the network if this is the first layer of the network or may be from a preceding layer if it is itself a hidden layer.
  • outputs of the output nodes are also not shown, but these may be the final output of the network or serve as the input to a subsequent layer.
  • the nodes, or artificial neurons/synapses, of the artificial neural network are implemented by logic elements of a CPU or other processing system as a mathematical function that receives one or more inputs and sums them to produce an output. Usually, each input is separately weighted and the sum is passed through the node’s mathematical function to provide the node’s output.
  • the signal at a connection between nodes is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs.
  • Nodes and their connections typically have a weight value that is adjusted as a learning process proceeds. The weight increases or decreases the strength of the signal at a connection.
  • Nodes may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold.
  • the nodes are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.
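The weighted-sum-through-a-nonlinearity behaviour described above can be sketched for the topology of FIG. 4 (three inputs, four hidden nodes, two outputs); every weight, bias, and input value below is an illustrative assumption, and the sigmoid stands in for the non-linear function:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One layer: each node weights its inputs, sums them, adds its bias, and
# passes the sum through the non-linear function.
def layer(inputs, weights, biases):
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

inputs = [0.5, -1.0, 0.25]                       # I1, I2, I3
hidden = layer(inputs,                           # H1..H4
               weights=[[0.1, 0.2, 0.3], [0.4, -0.5, 0.6],
                        [-0.7, 0.8, 0.9], [1.0, 1.1, -1.2]],
               biases=[0.0, 0.1, -0.1, 0.2])
outputs = layer(hidden,                          # O1, O2
                weights=[[0.3, -0.3, 0.5, 0.1], [-0.2, 0.4, 0.6, -0.4]],
                biases=[0.05, -0.05])
print(outputs)
```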
  • while FIG. 4 shows only a single intermediate or hidden layer, a complex deep neural network (DNN) can have many such intermediate layers.
  • a supervised artificial neural network is “trained” by supplying inputs and then checking and correcting the outputs. For example, a neural network that is trained to recognize dog breeds will process a set of images and calculate the probability that the dog in an image is a certain breed. A user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer, and complex neural networks have many layers. Due to the depth provided by a large number of intermediate or hidden layers, neural networks can model complex non-linear relationships as they are trained.
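The train, check, and correct cycle described above can be sketched as a toy example with a single linear neuron; the data set, learning rate, and accuracy threshold are all assumptions, and gradient descent stands in for the weight-adjustment step:

```python
# Toy supervised training loop: fit one weight to labelled examples,
# stopping once the outputs are sufficiently accurate.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, correct label)
weight, rate = 0.0, 0.05

for _ in range(200):                              # repeated training passes
    error = sum((weight * x - y) ** 2 for x, y in examples)
    if error < 1e-6:                              # sufficiently accurate: done
        break
    for x, y in examples:                         # adjust the weight
        weight -= rate * 2 * (weight * x - y) * x

print(round(weight, 3))
```

With these examples the weight converges toward 2, the underlying input-to-label relationship.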
  • FIG. 5 is a flowchart describing one embodiment of a process for training a neural network to generate a set of weights.
  • the training process is often performed in the cloud, allowing additional or more powerful processing to be accessed.
  • at 501, the input is received.
  • at 503, the input is propagated through the layers connecting the input to the next layer using the current filter, or set of weights.
  • the neural network’s output is then received at the next layer in 505, so that the values received as output from one layer serve as the input to the next layer.
  • the inputs from the first layer are propagated in this way through all of the intermediate or hidden layers until they reach the output.
  • in the dog breed example above, the input would be the image data of a number of dogs, and the intermediate layers use the current weight values to calculate the probability that the dog in an image is a certain breed, with the proposed dog breed label returned at 505.
  • a person can then review the results at 507 to select which probabilities the neural network should return and decide whether the current set of weights supply a sufficiently accurate labelling by comparing the proposed label with the actual label and, if so, the training is complete (511). If the result is not sufficiently accurate, the neural network adjusts the weights at 509 based on the probabilities the user selected, followed by looping back to 503 to run the input data again with the adjusted weights.
  • once the neural network’s set of weights has been determined, the weights can be used to “inference,” which is the process of using the determined weights to generate an output result from data input into the neural network. Once the weights are determined at 511, they can then be stored in memory for later use.
  • FIG. 6 is a flowchart describing a process for the inference phase of supervised learning using a neural network to predict the “meaning” of the input data using an estimated accuracy.
  • the neural network may be inferenced both in the cloud and by an edge device’s (e.g., smart phone, automobile processor, hardware accelerator) processor.
  • the input is received, such as the image of a dog in the example used above. If the previously determined weights are not present in the device running the neural network application, they are loaded at 622. For example, on a host processor executing the neural network, the weights could be read out of an SSD in which they are stored and loaded into RAM on the host device.
  • the input data is then propagated through the neural network’s layers, where 623 will be similar to 503 of FIG. 5, but now using the weights established at the end of the training process at 511. After propagating the input through the intermediate layers, the output is then provided at 625.
  • in a graph convolutional network, neural networks are generalized to work on arbitrary structured graphs and are here applied to the building of a program representation, such as the node embeddings of the program in the domain specific language, to use in building the additional test templates.
  • the graph convolutional network can capture the graph node information as well as neighboring nodes information by iteratively aggregating node embedding. Each aggregation then forms a new layer in the graph convolutional network model, which represents the node and its connections to neighboring nodes information in a different multi-dimensional space. This can be illustrated with respect to FIG. 7.
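One aggregation step of the kind described above can be sketched as follows; mean aggregation is used for simplicity (real graph convolutional layers also apply a learned weight matrix and nonlinearity), and the embeddings and adjacency below are made-up illustrations:

```python
# One graph-convolutional aggregation: each node's new embedding combines
# its own embedding with those of its neighbors (mean aggregation here).
def gcn_aggregate(embeddings, adjacency):
    new = {}
    for node, emb in embeddings.items():
        group = [emb] + [embeddings[n] for n in adjacency[node]]
        new[node] = [sum(vals) / len(group) for vals in zip(*group)]
    return new

embeddings = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
adjacency = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
layer1 = gcn_aggregate(embeddings, adjacency)   # one new hidden layer
print(layer1)
```

Applying the step again would aggregate information from neighbors of neighbors, which is how each additional layer widens the neighborhood each node captures.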
  • FIG. 7 is a schematic representation of an input and first hidden layer of a graph convolutional network.
  • the input of FIG. 7 is a structured graph made up of multiple nodes (the open circles) and the connections between pairs of these nodes.
  • the graphs can be the node embeddings for syntax trees of the programs of test templates for testing of the design of the integrated circuit.
  • the input structured graph is then propagated through the hidden layers, where a first hidden layer is shown.
  • connection information to neighboring nodes is explicitly shown for three of the nodes, where the capture of the graph node information as well as neighboring node information in the hidden layer is represented by the darkened graph nodes and bolded connections.
  • at top, the node at center-left of the input is connected to three nodes; at center, the node in the middle of the input graph is connected to five surrounding nodes; and at bottom, the node at bottom-right of the input graph has only a single connection.
  • This layer is then propagated through the subsequent hidden layers to strengthen or weaken the connections and add or delete nodes to generate supplemental test programs.
  • Embodiments for the system of FIG. 3 presented here use such a graph convolutional approach for building program representations.
  • the graph generator 301 can form the backbone of a program graph by the abstract syntax tree (AST), where graph nodes are formed by the syntax tokens and edges are formed by the relationships between these tokens. Additional information can be added to the abstract syntax tree, such as adding edges of last lexical usage for one variable.
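The addition of last-lexical-usage edges mentioned above can be sketched with Python's ast module: walk the syntax tree and, for each variable, link each use back to its previous use. The template source is a made-up example:

```python
import ast

# Augment the AST backbone with last-lexical-usage edges per variable.
source = "x = 1\ny = x + 2\nz = x + y\n"
tree = ast.parse(source)

last_use = {}
extra_edges = []          # (variable, previous-use line, current-use line)
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if node.id in last_use:
            extra_edges.append((node.id, last_use[node.id].lineno, node.lineno))
        last_use[node.id] = node

print(extra_edges)
```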
  • the graph generator 301 can use the graph convolutional network techniques for building program graph representations, which is further used in the graph generative module for program graph generation.
  • the graph convolutional policy network is designed as a reinforcement learning agent that operates within a test template aware graph generation environment. In the case of the graph convolutional policy network 317, the network is trained by use of a policy gradient to optimize a reward composed of test template property objective and adversarial loss provided by the graph convolutional network.
  • Embodiments for the process of FIG. 2 and system of FIG. 3 can use a graph generative module 315 that models the program graph augmentation task (formed by an abstract syntax tree) as a Markov decision process (MDP).
  • under the Markov decision process, the procedure to augment a program graph can be described as a trajectory of states and actions.
  • the states represent the initial graph, intermediate augmented graphs, and final augmented graph.
  • the actions represent how the program graphs are augmented step by step based on a state transition distribution, which is further represented by a graph convolutional policy network.
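The trajectory of states and actions described above can be sketched as follows; the starting graph and the action sequence are invented for illustration only:

```python
# States: the initial graph, intermediate augmented graphs, final graph.
state = {"nodes": {"assign", "call"}, "edges": set()}
trajectory = [dict(nodes=set(state["nodes"]), edges=set(state["edges"]))]

# Actions: step-by-step augmentations (node or edge additions).
actions = [("add_node", "loop"), ("add_edge", ("assign", "call")),
           ("add_edge", ("call", "loop"))]

for kind, payload in actions:
    if kind == "add_node":
        state["nodes"].add(payload)
    else:
        state["edges"].add(payload)
    # Record the intermediate state after each action.
    trajectory.append(dict(nodes=set(state["nodes"]), edges=set(state["edges"])))

print(len(trajectory))
```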
  • the graph generative module 315 takes the program abstract syntax graph formed by block 311 of an existing test template to be augmented as one input and computes the embedding of the input graph.
  • the graph generative module 315 also takes the learnt program representation (i.e., the program embeddings) as input.
  • the graph convolutional policy network 317 predicts actions to augment the input graph during the generation process, where each action samples a probabilistic graph for selecting which nodes and edges to be added to the graph, where the nodes and edges can be from a pre-defined operation set provided by domain experts.
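One such action can be sketched as sampling from a probability distribution over a pre-defined operation set; the candidate operations and the scores standing in for the policy network's outputs are assumptions:

```python
import math
import random

# Candidate additions from an assumed pre-defined operation set, with
# arbitrary scores standing in for the policy network's outputs.
candidates = ["add_node:loop", "add_node:branch", "add_edge:assign->call"]
scores = [2.0, 0.5, 1.0]

def softmax(xs):
    m = max(xs)                       # shift for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(scores)
random.seed(0)                        # deterministic for illustration
action = random.choices(candidates, weights=probs, k=1)[0]
print(action, [round(p, 3) for p in probs])
```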
  • the program representation extraction 311 uses a graph convolutional network.
  • the graph generative module 315 uses a graph convolutional policy network 317.
  • the graph convolutional network and the graph convolutional policy network have different usages here: the graph convolutional network is used for extracting program or graph representations, while the graph convolutional policy network is used for predicting the probability of adding a particular node or edge.
  • the graph convolutional policy network 317 is used by the graph generative module 315 to augment the program abstracted graph, and the generator of the new set of test templates 313 then transforms these into program test templates provided to the test program generator 303. These templates are loaded into the simulation environments 305, which provide the code coverage result back to the graph generative module 315 for the reinforcement learning algorithm to train the graph convolutional policy network model based on rewards returned from the simulation environment.
  • the graph generative module 315 establishes a probabilistic graph over the embeddings of the nodes and edges in the graph. After each action, a new augmented graph is formed and, based on this, a new supplemental test template can be generated by the generator of the new set of test templates 313.
  • the new test templates 331 will be used by the test program generator 303 to generate new test sets.
  • the new test sets 353 can be loaded into parallel running simulators 305 for testing, and the final coverage results can be aggregated and returned to the graph generative module 315 as the rewards for training the policy network.
  • the policy network 317 can use an existing policy gradient-based method, such as Proximal Policy Optimization (PPO), for optimization of the policy network.
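As a rough, non-authoritative illustration of the PPO optimization named above, the clipped surrogate objective at the heart of PPO can be computed as follows; the ratios and advantages are made-up placeholder values, where in the described system the advantages would derive from the code coverage rewards:

```python
# Hedged sketch of PPO's clipped surrogate objective. A real system would
# compute probability ratios from the graph convolutional policy network.
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """ratio = pi_new(a|s) / pi_old(a|s); advantage from coverage rewards."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # PPO maximizes the elementwise minimum, which bounds each policy update.
    return np.minimum(unclipped, clipped).mean()

ratios = np.array([0.9, 1.5, 1.0])       # hypothetical probability ratios
advantages = np.array([1.0, 1.0, -1.0])  # hypothetical coverage advantages
obj = ppo_clip_objective(ratios, advantages)
```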
  • the graph convolutional policy network 317 can be pre-trained using existing test templates provided by domain experts.
  • the system can: 1) abstract the program template as a high-level graph of pre-defined components (i.e., obtaining the embeddings); and 2) derive the probabilistic graph using node and edge embeddings. With such a probabilistic graph, the policy model can sample it and derive actions from the sampling results.
  • With respect to abstracting the program template to compute the embedding or program representation, there are many methods that could potentially be used. One approach makes use of syntax-level abstractions such as variables, expressions, or statements as nodes, and can also use the relationships between these nodes as edges. In an example embodiment for the template generation task, the system of FIG. 3 can alternately make use of much bigger syntax components such as API calls, classes, or even pre-defined code blocks as graph nodes in graph augmentation, where the edges can also be established between these high-level syntax components.
  • a simple way to add edges is to arrange some code blocks sequentially or to put them in different orders, while more complicated methods involve nested blocks or function calls.
  • experts can define a set of fixed components and fixed attach points for connecting them. This approach constrains the possibility of having too many combinations; for example, this approach could have some base class which includes several pre-defined APIs, and use subclasses to arrange them.
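As one hedged illustration of the syntax-level abstraction discussed above, Python's built-in ast module can turn a small (hypothetical) test-template fragment into graph nodes and parent-child edges:

```python
# Sketch: parse a made-up test-template fragment and derive a program graph
# whose nodes are syntax elements and whose edges are parent-child relations.
import ast

template = """
def gen_test(cpu):
    op = choose(["ADD", "SUB"])
    cpu.issue(op)
"""

tree = ast.parse(template)
nodes, edges = [], []
for parent in ast.walk(tree):
    nodes.append(type(parent).__name__)
    for child in ast.iter_child_nodes(parent):
        edges.append((type(parent).__name__, type(child).__name__))
```

Since an abstract syntax tree is a tree, each node except the root contributes exactly one parent-child edge.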
  • one embodiment is to generate a probabilistic graph that uses the embeddings of the nodes at both ends of an edge, together with the edge embedding, as inputs.
  • the system can concatenate all the embedding inputs, pass them through a simple neural model (such as a Multi-Layer Perceptron (MLP) model), and then apply a Soft-Max function over the output of the neural model to obtain a probability between 0 and 1 for selecting these nodes and the edge.
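A minimal numpy sketch of this scoring step, under the assumption of a single hidden layer and randomly initialized illustrative weights, might look like:

```python
# Hedged sketch: concatenate two node embeddings and an edge embedding,
# pass them through a small MLP, and apply a softmax over two logits
# (don't-select / select) to obtain a selection probability in (0, 1).
import numpy as np

rng = np.random.default_rng(0)

def mlp_select_probability(src_emb, dst_emb, edge_emb, w1, w2):
    x = np.concatenate([src_emb, dst_emb, edge_emb])
    hidden = np.tanh(w1 @ x)          # single hidden layer
    logits = w2 @ hidden              # two logits: [not-select, select]
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()           # softmax
    return probs[1]                   # probability of adding the node/edge

d = 4                                  # illustrative embedding dimension
w1 = rng.normal(size=(8, 3 * d))       # placeholder weights
w2 = rng.normal(size=(2, 8))
p = mlp_select_probability(rng.normal(size=d), rng.normal(size=d),
                           rng.normal(size=d), w1, w2)
```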
  • An alternative approach is to use multiple normal distribution vectors to predict the selected nodes and edge based on those embedding inputs.
  • An alternative reward method is to use an adversarial reward in addition to the external rewards from a simulator.
  • the system can make use of a network model, like the one that produces graph embeddings, for both the original graph and the augmented graph.
  • the network model can map the new graph into its own embedding.
  • the embeddings are further mapped into scalar values for comparison, which serves as the loss of the adversarial reward of the reinforcement learning system.
  • An intermediate reward could also be used for verifying program correctness in the middle of the augmentation progress. Such a reward could be obtained in a similar manner to that described above.
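The adversarial reward idea sketched in the bullets above can be illustrated as follows; the linear "critic" weights and embeddings are placeholder values, not parameters from the disclosure:

```python
# Hedged sketch: a shared network maps both the original and the augmented
# graph embeddings to scalar scores; their difference serves as an
# adversarial loss/reward signal for the reinforcement learning system.
import numpy as np

def embedding_to_scalar(graph_embedding, critic_weights):
    # Stand-in for the network that maps an embedding to a scalar value.
    return float(critic_weights @ graph_embedding)

def adversarial_reward(original_emb, augmented_emb, critic_weights):
    # Reward the augmented graph for scoring close to (or above) the original.
    return (embedding_to_scalar(augmented_emb, critic_weights)
            - embedding_to_scalar(original_emb, critic_weights))

w = np.array([0.5, -0.25, 1.0])      # illustrative critic weights
orig = np.array([1.0, 0.0, 0.0])     # illustrative original-graph embedding
aug = np.array([1.0, 0.0, 0.5])      # illustrative augmented-graph embedding
r = adversarial_reward(orig, aug, w)
```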
  • the system described above provides an automatic approach to test template augmentation for the set of test generation programs used in random verification of a CPU or other integrated circuit, by utilizing the graph convolutional policy network to generate new test templates that optimize the code coverage for random verification of the CPU or other integrated circuit. This allows for the automatic optimization of code coverage in constrained random verification, unlike previous approaches that rely on manual effort from verification engineers.
  • FIG. 8 is a high-level block diagram of one embodiment of a more general computing system 800 that can be used to implement various embodiments of the systems described above.
  • computing system 800 is a network system 800.
  • Specific devices may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device.
  • a device may contain multiple instances of a component, such as multiple processing units, processors, memories, etc.
  • the network system may comprise a computing system 801 equipped with one or more input/output devices, such as network interfaces, storage interfaces, and the like.
  • the computing system 801 may include one or more central processing units (CPUs) 810 and/or other processors (e.g., graphics processing units (GPUs), tensor processing units (TPUs)), a memory 820, a mass storage device 830, and an I/O interface 860 connected to a bus 870.
  • the computing system 801 is configured to connect to various input and output devices (keyboards, displays, etc.) through the I/O interface 860, such as can be used to receive the initial test templates and circuit design inputs.
  • the bus 870 may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus or the like.
  • the microprocessor 810 may comprise any type of electronic data processor, including CPUs, GPUs, TPUs, and so on.
  • the microprocessor 810 may be configured to implement any of the schemes described herein with respect to the generation of supplemental test programs for integrated circuit design testing systems of FIGs. 1-7, using any one or combination of elements described in the embodiments.
  • the memory 820 may comprise any type of system memory such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), a combination thereof, or the like.
  • the memory 820 may include ROM for use at boot-up, and DRAM for program and data storage for use while executing programs.
  • the mass storage device 830 may comprise any type of storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus 870.
  • the mass storage device 830 may comprise, for example, one or more of a solid-state drive, hard disk drive, a magnetic disk drive, an optical disk drive, or the like.
  • the computing system 801 also includes one or more network interfaces 850, which may comprise wired links, such as an Ethernet cable or the like, and/or wireless links to access nodes or one or more networks 880.
  • the network interface 850 allows the computing system 801 to communicate with remote units via the network 880.
  • the network interface 850 may provide wireless communication via one or more transmitters/transmit antennas and one or more receivers/receive antennas.
  • the computing system 801 is coupled to a local-area network or a wide-area network for data processing and communications with remote devices, such as other processing units, the Internet, remote storage facilities, or the like.
  • the network interface 850 may be used to receive and/or transmit interest packets and/or data packets in an ICN.
  • the term “network interface” will be understood to include a port.
  • the components depicted in the computing system of FIG. 8 are those typically found in computing systems suitable for use with the technology described herein, and are intended to represent a broad category of such computer components that are well known in the art. Many different bus configurations, network platforms, and operating systems can be used.
  • the technology described herein can be implemented using hardware, firmware, software, or a combination of these.
  • these elements of the embodiments described above can include hardware only or a combination of hardware and software (including firmware).
  • logic elements programmed by firmware to perform the functions described herein are one example of elements of the described systems.
  • a CPU, GPU, or other microprocessor 810 can include a processor, FPGA, ASIC, integrated circuit or other type of circuit.
  • the software used is stored on one or more of the processor readable storage devices described above to program one or more of the processors to perform the functions described herein.
  • the processor readable storage devices can include computer readable media such as volatile and non-volatile media, removable and nonremovable media.
  • Computer readable media may comprise computer readable storage media and communication media.
  • Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
  • a computer readable medium or media does (do) not include propagated, modulated or transitory signals.
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as RF and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
  • some or all of the software can be replaced by dedicated hardware logic components.
  • illustrative types of hardware logic components include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc.
  • some of the elements used to execute the processes illustrated in FIG. 2 can use specific hardware elements.
  • software stored on a storage device
  • the one or more processors can be in communication with one or more computer readable media/ storage devices, peripherals and/or communication interfaces.
  • each process associated with the disclosed technology may be performed continuously and by one or more computing devices.
  • Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.

Abstract

Techniques for augmentation of the set of test programs (331) generated for testing designs of integrated circuits (351), such as central processing units (CPUs), are presented. A system and methods are described that can automatically generate more test templates (331) based on existing templates and provided template components. One set of examples is based on a graph convolutional policy network (317) for program generation and a reinforcement learning algorithm for coverage optimization.

Description

GENERATION OF SUPPLEMENTAL TEST PROGRAMS FOR INTEGRATED CIRCUIT DESIGN TESTING
Inventors:
Ning Yan Masood Mortazavi
FIELD
[0001] The following is related generally to the field of design and testing of integrated circuits, such as CPUs.
BACKGROUND
[0002] Central processing units and many other integrated circuits are highly complex structures that can involve thousands or even millions of individual logic elements and other circuit components. It is important to thoroughly test such circuits prior to fabrication, but this is itself a complex process and relies upon a set of test programs that can validate the many different aspects of these circuits. These test programs are traditionally produced manually and may not be as comprehensive as desired, so it would be beneficial if there were a better way to augment the available collection of test patterns by generating supplemental test patterns.
SUMMARY
[0003] According to one aspect of the present disclosure, a method of generating by a computing device of supplemental test programs for an integrated circuit includes: obtaining a design for an integrated circuit and receiving a plurality of initial test templates for testing of the design of the integrated circuit. The method also includes: analyzing the initial test templates by a graph convolutional network to extract a program representation of a structure of programs for executing of the initial test templates; generating by a graph convolutional policy network of a plurality of new test templates for the design of the integrated circuit from the program representation; and performing a plurality of parallel simulations of the design of the integrated circuit using the plurality of new test templates.
[0004] Optionally, in the preceding aspect, the method may also include: subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide accurate simulation results; and in response to the new test templates not providing accurate simulation results, generating a revised set of test templates.
[0005] Optionally, in either of the preceding aspects, the method may also include: subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide sufficient test coverage of a selected set of characteristics for the design of the integrated circuit; and in response to the new test templates not providing sufficient test coverage of the selected set of characteristics for the design of the integrated circuit, generating additional new test templates.
[0006] Optionally, in any of the preceding aspects, the method may also include: subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide accurate simulation results; and in response to the new test templates providing accurate simulation results, testing the design of the integrated circuit using the new test templates.
[0007] Optionally, in the preceding aspect, the method may also include: in response to the design of the integrated circuit not passing the testing, modifying the design of the integrated circuit; and testing the modified design of the integrated circuit using the new test templates.
[0008] Optionally, in either of the preceding two aspects, the method may also include: in response to the design of the integrated circuit passing the testing, fabricating the design of the integrated circuit.
[0009] Optionally, in any of the preceding aspects, analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates includes building an abstract syntax tree from the test templates.
[0010] Optionally, in the preceding aspect, analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates further comprises: learning program representations from the abstract syntax tree.
[0011] Optionally, in the preceding aspect, generating by the graph convolutional policy network of the plurality of new test templates for the design of the integrated circuit from the program representation comprises: generating the new test templates from the learnt program representations.
[0012] Optionally, in any of the preceding aspects, the method further includes training the graph convolutional policy network based on results of the plurality of parallel simulations.
[0013] According to an additional aspect of the present disclosure, a system includes one or more interfaces and one or more processors coupled to the one or more interfaces. The one or more interfaces are configured to: receive a plurality of initial test templates for testing of the design of the integrated circuit. The one or more processors are configured to: obtain a design for an integrated circuit; analyze the initial test templates by a graph convolutional network to extract a program representation of a structure of programs for executing of the initial test templates; generate by a graph convolutional policy network of a plurality of new test templates for the design of the integrated circuit from the program representation; and perform a plurality of parallel simulations of the design of the integrated circuit using the plurality of new test templates.
[0014] Optionally, in the preceding aspect, the one or more processors are further configured to: subsequent to performing the plurality of parallel simulations, determine whether the new test templates provide accurate simulation results; and in response to the new test templates not providing accurate simulation results, generate a revised set of test templates.
[0015] Optionally, in either of the two preceding aspects, the one or more processors are further configured to: subsequent to performing the plurality of parallel simulations, determine whether the new test templates provide sufficient test coverage of a selected set of characteristics for the design of the integrated circuit; and in response to the new test templates not providing sufficient test coverage of the selected set of characteristics for the design of the integrated circuit, generate additional new test templates.
[0016] Optionally, in any of the preceding aspects for a system, the one or more processors are further configured to: subsequent to performing the plurality of parallel simulations, determine whether the new test templates provide accurate simulation results; and in response to the new test templates providing accurate simulation results, test the design of the integrated circuit using the new test templates.
[0017] Optionally, in the preceding aspect, the one or more processors are further configured to: in response to the design of the integrated circuit not passing the testing, modify the design of the integrated circuit; and test the modified design of the integrated circuit using the new test templates.
[0018] Optionally, in any of the preceding aspects for a system, in analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates, the one or more processors are further configured to: build an abstract syntax tree from the test templates.
[0019] Optionally, in the preceding aspect, in analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates, the one or more processors are further configured to: learn program representations from the abstract syntax tree.
[0020] Optionally, in the preceding aspect, in generating by the graph convolutional policy network of the plurality of new test templates for the design of the integrated circuit from the program representation, the one or more processors are further configured to generate the new test templates from the learnt program representations.
[0021] Optionally, in any of the preceding aspects for a system, the one or more processors are further configured to train the graph convolutional policy network based on results of the plurality of parallel simulations.
[0022] According to other aspects, a method includes: obtaining a design for an integrated circuit; receiving a plurality of initial test templates for testing of the design of the integrated circuit; analyzing the initial test templates by a graph convolutional network to extract a program representation of a structure of programs for executing of the initial test templates; generating by a graph convolutional policy network of a plurality of additional test templates for the design of the integrated circuit from the program representation; testing the design of the integrated circuit using the additional test templates; and, in response to the design of the integrated circuit passing the testing, fabricating the design of the integrated circuit.
[0023] Optionally, in the preceding aspect, the method may also include: subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide accurate simulation results; and in response to the new test templates not providing accurate simulation results, generating a revised set of test templates.
[0024] Optionally, in either of the preceding aspects, the method may also include: subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide sufficient test coverage of a selected set of characteristics for the design of the integrated circuit; and in response to the new test templates not providing sufficient test coverage of the selected set of characteristics for the design of the integrated circuit, generating additional new test templates.
[0025] Optionally, in any of the preceding three aspects, the method may also include: subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide accurate simulation results; and in response to the new test templates providing accurate simulation results, testing the design of the integrated circuit using the new test templates.
[0026] Optionally, in the preceding aspect, the method may also include: in response to the design of the integrated circuit not passing the testing, modifying the design of the integrated circuit; and testing the modified design of the integrated circuit using the new test templates.
[0027] Optionally, in any of the five preceding aspects, analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates includes building an abstract syntax tree from the test templates.
[0028] Optionally, in the preceding aspect, analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates further comprises: learning program representations from the abstract syntax tree.
[0029] Optionally, in the preceding aspect, generating by the graph convolutional policy network of the plurality of new test templates for the design of the integrated circuit from the program representation comprises: generating the new test templates from the learnt program representations.
[0030] Optionally, in any of the preceding eight aspects, the method may also include training the graph convolutional policy network based on results of the plurality of parallel simulations.
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures, in which like references indicate like elements.
[0032] FIG. 1 is a flowchart of an embodiment for the design and manufacture of an integrated circuit design.
[0033] FIG. 2 is a flowchart of an overview for one embodiment of system workflow.
[0034] FIG. 3 is a block diagram of an embodiment of the system components to perform the process illustrated in FIG. 2.
[0035] FIG. 4 is a schematic representation of several layers of a neural network in more detail.
[0036] FIG. 5 is a flowchart describing one embodiment of a process for training a neural network to generate a set of weights.
[0037] FIG. 6 is a flowchart describing a process for the inference phase of supervised learning using a neural network.
[0038] FIG. 7 is a schematic representation of an input and first hidden layer of a graph convolutional network.
[0039] FIG. 8 is a high-level block diagram of a more general computing system that can be used to implement various embodiments described in the preceding figures.
DETAILED DESCRIPTION
[0040] The following presents techniques for supplementing the test programs generated for testing designs for integrated circuits, such as central processing units (CPUs). The design under test is typically tested using high-level templates provided by verification engineers based on years of experience and expertise in related fields. The manually written test templates might not be comprehensive, and it is cumbersome and time-consuming to manually compose additional test templates. The following presents a system and methods that automatically generate more test templates based on existing templates and provided template components. One set of embodiments is based on use of a graph convolutional policy network for program generation and a reinforcement learning algorithm for coverage optimization.
[0041] It is understood that the present embodiments of the disclosure may be implemented in many different forms and that scopes of the claims should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the inventive embodiment concepts to those skilled in the art. Indeed, the disclosure is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present embodiments of the disclosure, numerous specific details are set forth in order to provide a thorough understanding. However, it will be clear to those of ordinary skill in the art that the present embodiments of the disclosure may be practiced without such specific details.
[0042] Central processing units and other modern integrated circuits are highly complex systems that are difficult to design and expensive to fabricate. Because of this, an integrated circuit design undergoes extensive testing and revision before being manufactured. Constrained random verification (CRV) is a standard method in industrial design verification for integrated circuit chips such as CPUs. Central to this process is the design of an elaborate testbench that applies pseudorandom stimuli to the design-under-test (DUT) downstream. Due to the complexity of test code generation, people often use high-level templates for guiding the test generation program. The test templates are composed using a domain specific language (DSL) or some high-level language, such as Python. Each test template represents some aspects of testing targets.
[0043] FIG. 1 is a flowchart of an embodiment for the design and fabrication of an integrated circuit design. Beginning at 101, a design under test is generated. For example, this can include the typical steps in a CPU or other integrated circuit design, such as: system specification, such as a feasibility study and functional analysis; architectural or system level design; logic design, such as analogue and digital design and simulation and system level simulation and verification; and circuit design, such as digital design synthesis and determining the design for test.
[0044] At 103 the test templates are received, where these are high-level templates usually provided by verification engineers based on years of experience and expertise in related fields. These manually written templates are generally not comprehensive, as it is cumbersome and time-consuming to manually compose additional test templates. Using these test templates, the design under test is tested by applying pseudo-random stimuli at 105 in order to simulate the operation of the design. Constrained random verification (CRV) is one standard method in industrial design verification for chips such as CPUs. Central to this process is the design of an elaborate testbench that applies pseudorandom stimulus to the design-under-test (DUT) downstream. If the design does not meet the requirements of the test, the design is revised at 109 and re-tested. Once the tests are passed, the design can move to be manufactured at a fabrication facility at step 111, where there may be additional processes performed first.
[0045] The greater the number and variety of test templates that are available for 105, the more accurate the testing can be, as each test template can represent different aspects of testing targets. However, as noted, test code generation is a complex task, which is why people often use high-level templates for guiding the test generation program, with the test templates composed using a domain specific language (DSL) or some high-level language, such as Python. To improve upon this situation, the following presents a system and methods that automatically augment the received set of test templates, generating additional templates based on existing templates and the provided template components. Example embodiments are based on a graph convolutional policy network for program generation and a reinforcement learning algorithm for coverage optimization.
[0046] More specifically, techniques and a system are presented that can analyze test templates written in a certain domain specific language (DSL) and learn the representation of these programs in terms of variables and their relationships within the programs’ syntax using a graph convolutional network. The learnt program representation of the structure of the received test program templates (i.e., node embeddings of programs for executing the test program templates) can be used for generation of new test program templates. The system can use a pre-defined operation set, such as user provided high-level components of a test template, for selection of valid graph augmentations, and a graph convolutional policy network (GCPN) for graph augmentation, which can form new test templates for test case generation. The system can run multiple simulations, each with one set of new programs generated, in parallel and obtain the code coverage results from the simulations, with the coverage results used as rewards for a reinforcement learning algorithm for optimizing the graph convolutional policy network. This leads to a system that can provide an automatic approach to test template supplementation for the test generation program for CPU or other integrated circuit design random verification, by utilizing the graph convolutional policy network to generate new test templates that optimize the code coverage for CPU random verification.
[0047] As noted, the systems used in previous approaches have lacked automatic optimization of code coverage in constrained random verification and instead have mostly relied on manual effort from verification engineers with substantial expertise in related fields. The automatic approach presented here can save significant amounts of time in configuring the test program generators by using graph convolutional networks to abstract information from the provided templates. The embodiments described in the following use multiple parallel simulations to speed up the evaluation of the supplemental test templates generated by the system, and use graph convolutional networks for modelling program features and for generating new programs, which improves the code coverage. Reinforcement learning and graph convolutional policy networks allow the system to continuously generate new sets of test templates, thus continuously improving the code coverage.
[0048] FIG. 2 is a flowchart of an overview for one embodiment of system workflow, starting at 201. One or more initial templates are received at 203, where these can be the same as the sort of manually generated templates of 103 of FIG. 1. At 205 the design or designs under test are received or, more generally, obtained, such as when the same processing or computing device also generates the design. Although 205 is shown as following 203 in FIG. 2, 205 can occur earlier or later in the flow, as long as the design is available when subsequently used. The flow of FIG. 2 departs from that of FIG. 1 at 207.
[0049] At 207, the initial test templates are analyzed to build the abstract syntax tree for use in generating the additional templates by considering the variables of the test templates and their relationships within the structure of the programs’ language, where a system for 207 and subsequent processes is discussed in more detail with respect to FIG. 3. At 209 a graph convolutional network can then learn representations of the structure of the programs (i.e., the node embeddings of the test templates) from the abstract syntax tree built in 207 by using these program representations as the inputs to the graph convolutional network. Using a graph generative module for generating new test templates follows at 211 by propagating the inputs from 209 through the graph convolutional network. In this process, the graph generative module takes the learnt program representations from 209 as the input, with the graph generative module utilizing a graph convolutional policy network to generate augmented graphs, which are further converted to new test templates. These newly generated test templates are then used by a test generation program for generating new test sets in 213 that, much as a set of manually generated templates is used at 105 of FIG. 1, can be used to run tests on the design under test by applying pseudo-random stimuli according to the test sets at 215.
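As an illustration of the syntax-tree construction of 207, Python’s standard `ast` module can stand in for the DSL parser; the tiny template text below is invented for the example, and a real embodiment would parse the domain specific language instead.

```python
import ast

# A toy "test template"; a real embodiment would parse its DSL instead.
TEMPLATE = """
op = "ADD"
src = reg(1)
dst = reg(2)
issue(op, src, dst)
"""

def build_graph(source):
    """Walk the abstract syntax tree, collecting node types and parent-child edges."""
    tree = ast.parse(source)
    nodes, edges = [], []
    for parent in ast.walk(tree):
        nodes.append(type(parent).__name__)
        for child in ast.iter_child_nodes(parent):
            edges.append((type(parent).__name__, type(child).__name__))
    return nodes, edges

nodes, edges = build_graph(TEMPLATE)
```

The resulting node and edge lists form the backbone of the program graph from which embeddings can be learned.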
[0050] At 215, the tests for the newly generated test sets are performed. To improve efficiency, multiple simulations for the design under test from 205 can then be run in parallel using the newly generated test sets, with the coverage serving as the loss function for optimizing the graph generative module. A determination is made at 217 on whether a certain coverage threshold is met: if so, the system stops at 219; if not, the flow loops back to 211 for generating more test templates. The determination can be made by comparing the simulation results of 215 with the results of the initial test templates from 203 to see whether they agree within a limit on the amount of acceptable error and whether the tests cover an acceptable number of circuit properties and features. For example, the determination can be made on whether the design meets benchmark values on a selected set of circuit characteristics, such as speed, accuracy, power consumption, or other important metrics of the design under test.
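The fan-out of test sets to parallel simulation runs and the threshold check of 217 can be sketched as follows; `simulate` and the coverage model are invented stand-ins for an actual simulator invocation.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(test_set):
    # Stand-in for one simulator run: pretend coverage grows with
    # test-set size, capped at full coverage.
    return min(1.0, 0.2 + 0.1 * len(test_set))

def parallel_coverage(test_sets, threshold=0.5):
    """Run each test set in its own worker and aggregate the coverage."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(simulate, test_sets))
    merged = max(results)              # simple aggregation of coverage results
    return merged, merged >= threshold

cov, met = parallel_coverage([["a"], ["a", "b"], ["a", "b", "c"]])
```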
[0051] FIG. 3 is a block diagram of an embodiment of a system component to perform the process illustrated in FIG. 2. Relative to FIG. 2, 203 can correspond to the input of the initial test templates into block 311 of graph generator 301, with block 311 performing 207 by extracting program representations using a graph convolutional network. Block 315 performs 209 by using a graph generative module for generation of new templates, with 211 implemented through the graph convolutional policy network of block 317. Test program generator 303 uses the test templates 331 from block 313 of graph generator 301 to perform 213. The parallel simulation environments 305 receive the design under test 351 as an input, corresponding to 205, and also receive the test program sets 353, which are then used to perform the parallel simulation of 215. The code coverage from the parallel simulations can then be supplied to block 315 to make the determination of 217.
[0052] At block 311, the graph generator 301 extracts program representations using a graph convolutional network. Graph convolutional network models are a type of neural network architecture that can leverage the graph structure and aggregate node information in a convolutional fashion. For example, molecular structures can be analyzed by treating the atoms as nodes of a graph whose edges correspond to the bonds between these atoms. The general idea of a neural network can be illustrated with respect to FIGs. 4-6, with an application of a graph convolutional network illustrated with respect to FIG. 7.
[0053] Convolutional neural networks, or CNNs, are a type of network that employs a mathematical operation called convolution, which is a specialized kind of linear operation. Convolutional networks are neural networks that use convolution in place of general matrix multiplication in at least one of their layers. A CNN is formed of an input and an output layer, with a number of intermediate hidden layers in between. The hidden layers of a CNN are typically a series of convolutional layers that “convolve” with a multiplication or other dot product. Each “neuron” in a neural network computes an output value by applying a specific function to the input values coming from the receptive field in the previous layer. The function that is applied to the input values is determined by a vector of weights and a bias. Learning, in a neural network, progresses by making iterative adjustments to these biases and weights. The vector of weights and the bias are called filters and represent particular features of the input (e.g., a particular shape). A distinguishing feature of CNNs is that many neurons can share the same filter.
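The convolution operation itself, a filter slid along an input with a dot product taken at each position, can be shown with a minimal one-dimensional NumPy example (a generic illustration, not part of the described system):

```python
import numpy as np

def conv1d(signal, kernel):
    """Slide the kernel along the signal; each output value is a dot product."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

# A difference filter responds only where the signal changes:
edges = conv1d(np.array([0.0, 0.0, 1.0, 1.0, 1.0]), np.array([-1.0, 1.0]))
```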
[0054] In a neural network, an initial input (such as an image of an array of pixel values) is followed by a number of convolutional layers and other types of neural network layers, the last of which provides the output. Each neuron in the first convolutional layer takes as input data a portion of the input. The neuron’s learned weights, which are collectively referred to as its convolution filter, determine the neuron’s single-valued output in response to the input. In convolutional layers, a neuron’s filter is applied to the input by sliding it along the full input’s values to generate the values of the convolutional layer. In practice, the equivalent convolution is normally implemented by applying statically identical copies of the neuron to different input regions. The process is repeated through each of the convolutional layers using each layer’s learned weights, after which it may be propagated through additional layers using their learned weights. [0055] FIG. 4 is a schematic representation of several layers of a neural network in more detail. In FIG. 4, the three layers of the artificial neural network shown are represented as an interconnected group of nodes or artificial neurons, represented by the circles, and a set of connections from the output of one artificial neuron to the input of another. The example shows three input nodes (I1, I2, I3) and two output nodes (O1, O2), with an intermediate layer of four hidden or intermediate nodes (H1, H2, H3, H4). The inputs to the input nodes (I1, I2, I3) are not shown, but may be the initial inputs to the network if this is the first layer of the network or may be from a preceding layer if it is itself a hidden layer. Similarly, outputs of the output nodes (O1, O2) are also not shown, but these may be the final output of the network or serve as the input to a subsequent layer.
The nodes, or artificial neurons/synapses, of the artificial neural network are implemented by logic elements of a CPU or other processing system as a mathematical function that receives one or more inputs and sums them to produce an output. Usually, each input is separately weighted and the sum is passed through the node’s mathematical function to provide the node’s output.
[0056] In common artificial neural network implementations, the signal at a connection between nodes (artificial neurons/synapses) is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. Nodes and their connections typically have a weight value that is adjusted as a learning process proceeds. The weight increases or decreases the strength of the signal at a connection. Nodes may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, the nodes are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. Although FIG. 4 shows only a single intermediate or hidden layer, a complex deep neural network (DNN) can have many such intermediate layers.
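The weighted-sum-plus-nonlinearity behavior of a single node described above can be written out directly (a generic illustration; the sigmoid is one common choice of non-linear function):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through a
    # sigmoid (one common choice of non-linear function).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)   # z = 0.4
```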
[0057] A supervised artificial neural network is “trained” by supplying inputs and then checking and correcting the outputs. For example, a neural network that is trained to recognize dog breeds will process a set of images and calculate the probability that the dog in an image is a certain breed. A user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer, and complex neural networks have many layers. Due to the depth provided by a large number of intermediate or hidden layers, neural networks can model complex non-linear relationships as they are trained.
[0058] FIG. 5 is a flowchart describing one embodiment of a process for training a neural network to generate a set of weights. The training process is often performed in the cloud, allowing additional or more powerful processing to be accessed. At 501, the input is received. At 503 the input is propagated through the layers connecting the input to the next layer using the current filter, or set of weights. The neural network’s output is then received at the next layer in 505, so that the values received as output from one layer serve as the input to the next layer. The inputs from the first layer are propagated in this way through all of the intermediate or hidden layers until they reach the output. In the dog breed example of the preceding paragraph, the input would be the image data of a number of dogs, and the intermediate layers use the current weight values to calculate the probability that the dog in an image is a certain breed, with the proposed dog breed label returned at 505. A person can then review the results at 507 to select which probabilities the neural network should return and decide whether the current set of weights supplies a sufficiently accurate labelling by comparing the proposed label with the actual label and, if so, the training is complete (511). If the result is not sufficiently accurate, the neural network adjusts the weights at 509 based on the probabilities the user selected, followed by looping back to 503 to run the input data again with the adjusted weights. Once the neural network’s set of weights has been determined, they can be used to “inference,” which is the process of using the determined weights to generate an output result from data input into the neural network. Once the weights are determined at 511, they can then be stored in memory for later use.
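The propagate, check, and adjust loop of FIG. 5 follows the pattern below, shown here for a toy one-weight model rather than the described network; the numbered comments map to the flowchart steps.

```python
def train(samples, lr=0.1, tolerance=1e-4, max_iters=1000):
    """Fit y = w * x by iterative weight adjustment."""
    w = 0.0
    for _ in range(max_iters):
        # 503/505: propagate the inputs with the current weight
        error = sum((w * x - y) * x for x, y in samples) / len(samples)
        if abs(error) < tolerance:   # 507: sufficiently accurate?
            break                    # 511: training complete
        w -= lr * error              # 509: adjust the weight
    return w

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])   # converges toward w = 2
```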
[0059] FIG. 6 is a flowchart describing a process for the inference phase of supervised learning using a neural network to predict the “meaning” of the input data using an estimated accuracy. Depending on the case, the neural network may be inferenced both in the cloud and by an edge device’s (e.g., smart phone, automobile processor, hardware accelerator) processor. At 621, the input is received, such as the image of a dog in the example used above. If the previously determined weights are not present in the device running the neural network application, they are loaded at 622. For example, on a host processor executing the neural network, the weights could be read out of an SSD in which they are stored and loaded into RAM on the host device. At 623, the input data is then propagated through the neural network’s layers, where 623 will be similar to 503 of FIG. 5, but now using the weights established at the end of the training process at 511. After propagating the input through the intermediate layers, the output is then provided at 625.
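Continuing the toy one-weight example, the inference phase reduces to loading the stored weights and performing a single forward pass; the file format and function names here are hypothetical stand-ins for the storage described.

```python
import json, os, tempfile

def save_weights(path, w):
    with open(path, "w") as f:
        json.dump({"w": w}, f)       # stand-in for storing weights on an SSD

def infer(path, x):
    with open(path) as f:
        w = json.load(f)["w"]        # 622: load the previously determined weights
    return w * x                     # 623/625: propagate the input and output

path = os.path.join(tempfile.mkdtemp(), "weights.json")
save_weights(path, 2.0)
y = infer(path, 3.0)
```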
[0060] In a graph convolutional network, neural networks are generalized to work on arbitrary structured graphs and are here applied to the building of a program representation, such as the node embeddings of the program in the domain specific language, to use in building the additional test templates. The graph convolutional network can capture the graph node information as well as neighboring node information by iteratively aggregating node embeddings. Each aggregation then forms a new layer in the graph convolutional network model, which represents the node and its connections to neighboring nodes in a different multi-dimensional space. This can be illustrated with respect to FIG. 7.
[0061] FIG. 7 is a schematic representation of an input and first hidden layer of a graph convolutional network. Relative to the more familiar application of a neural network to image data, the input of FIG. 7 is a structured graph made up of multiple nodes (the open circles) and the connections between pairs of these nodes. As applied here, the graphs can be the node embeddings for syntax trees of the programs of test templates for testing of the design of the integrated circuit. The input structured graph is then propagated through the hidden layers, where a first hidden layer is shown. In the aggregation of the first hidden layer, the connection information to neighboring nodes is explicitly shown for three of the nodes, where the capture of the graph node information as well as neighboring node information in the hidden layer is represented by the darkened graph nodes and bolded connections. For example, at top in the representation of the hidden layer, the node at center left of the input is connected to three nodes; at center, the node in the middle of the input graph is connected to five surrounding nodes; and at bottom, the node at bottom right of the input graph has only a single connection. This layer is then propagated through the subsequent hidden layers to strengthen or weaken the connections and add or delete nodes to generate supplemental test programs.
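One common form of graph-convolutional aggregation (the normalized-adjacency formulation, offered as a generic sketch rather than the exact model of the described embodiments) computes a hidden layer by mixing each node’s embedding with its neighbors’ and applying a learned linear map and activation:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One aggregation step: each node mixes its own and its neighbors' embeddings."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)            # symmetric degree normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation

A = np.array([[0.0, 1.0, 0.0],                 # adjacency of a 3-node path graph
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
H = np.eye(3)                                  # one-hot initial node embeddings
W = np.ones((3, 2))                            # toy weight matrix
H1 = gcn_layer(A, H, W)                        # new embeddings, shape (3, 2)
```

Stacking such layers corresponds to the repeated aggregation that places each node and its neighborhood in a different multi-dimensional space.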
[0062] Embodiments for the system of FIG. 3 presented here use such a graph convolutional approach for building program representations. The graph generator 301 can form the backbone of a program graph from the abstract syntax tree (AST), where graph nodes are formed by the syntax tokens and edges are formed by the relationships between these tokens. Additional information can be added to the abstract syntax tree, such as adding edges for the last lexical usage of a variable. The graph generator 301 can use graph convolutional network techniques for building program graph representations, which are further used in the graph generative module for program graph generation. The graph convolutional policy network is designed as a reinforcement learning agent that operates within a test template aware graph generation environment. In the case of the graph convolutional policy network 317, the network is trained by use of a policy gradient to optimize a reward composed of a test template property objective and an adversarial loss provided by the graph convolutional network.
[0063] Embodiments for the process of FIG. 2 and system of FIG. 3 can use a graph generative module 315 that models the program graph augmentation task (over a graph formed from an abstract syntax tree) as a Markov decision process (MDP). The procedure of augmenting a program graph can be described as a trajectory of states and actions. The states represent the initial graph, intermediate augmented graphs, and final augmented graph. The actions represent how the program graphs are augmented step by step based on a state transition distribution, which is further represented by a graph convolutional policy network. The graph generative module 315 takes the program abstract syntax graph formed by block 311 of an existing test template to be augmented as one input and computes the embedding of the input graph. The graph generative module 315 also takes the learnt program representation (i.e., the program embeddings) as input. The graph convolutional policy network 317 predicts actions to augment the input graph during the generation process, where each action samples a probabilistic graph for selecting which nodes and edges are to be added to the graph, where the nodes and edges can be from a pre-defined operation set provided by domain experts. [0064] Note that the program representation extraction 311 uses a graph convolutional network, while the graph generative module 315 uses a graph convolutional policy network 317. The graph convolutional network and the graph convolutional policy network have different usages here: the graph convolutional network is used for extracting program or graph representations, while the graph convolutional policy network is used for predicting the probability of adding a particular node or edge.
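The Markov decision process view can be sketched as a short trajectory of states and sampled actions; the operation set and the uniform policy below are invented placeholders for the expert-provided operations and the graph convolutional policy network.

```python
import random

# Invented placeholder for the expert-provided pre-defined operation set.
OPERATION_SET = ["add_load", "add_store", "add_branch"]

def policy(state):
    # Stand-in for the graph convolutional policy network: here simply a
    # uniform distribution over the candidate augmentation actions.
    return {op: 1.0 / len(OPERATION_SET) for op in OPERATION_SET}

def augment(initial_graph, steps=3, seed=0):
    """A trajectory of (state, action) pairs: state -> sampled action -> next state."""
    random.seed(seed)
    state, trajectory = list(initial_graph), []
    for _ in range(steps):
        probs = policy(state)
        action = random.choices(list(probs), weights=list(probs.values()))[0]
        trajectory.append((tuple(state), action))
        state = state + [action]     # apply the augmentation
    return state, trajectory

final, traj = augment(["root"])
```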
[0065] The graph convolutional policy network 317 is used by the graph generative module 315 to augment the program abstracted graph, which the generator of the new set of test templates 313 then transforms into program test templates provided to the test program generator 303. These templates are loaded into the simulation environments 305, which provide the code coverage result back to the graph generative module 315 for the reinforcement learning algorithm to train the graph convolutional policy network model based on rewards returned from the simulation environment.
[0066] More specifically, in the example embodiments, the graph generative module 315 establishes a probabilistic graph over the embeddings of the nodes and edges in the graph. After each action, a new augmented graph is formed and, based on this, a new supplemental test template can be generated by the generator of the new set of test templates 313. The new test templates 331 will be used by the test program generator 303 for generating new test sets. The new test sets 353 can be loaded into parallel running simulators 305 for testing, and the final coverage results can be aggregated and returned to the graph generative module 315 as the rewards for training the policy network. The policy network 317 can use an existing policy gradient-based method, such as Proximal Policy Optimization (PPO), for optimization of the policy network. The graph convolutional policy network 317 can be pre-trained using existing test templates provided by domain experts.
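For context on the PPO mention, the core of Proximal Policy Optimization is a clipped surrogate objective that bounds how far one update can move the policy; the scalar version below is a generic sketch of that formula, not the system’s training code.

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """min(r * A, clip(r, 1 - eps, 1 + eps) * A), the PPO surrogate for one sample."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

# A large probability ratio is clipped when the advantage is positive:
gain = ppo_clip_objective(ratio=1.5, advantage=2.0)
# ...and a collapsing ratio is clipped when the advantage is negative:
loss = ppo_clip_objective(ratio=0.5, advantage=-1.0)
```

Here the aggregated coverage result would play the role of the reward from which the advantage is computed.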
[0067] To construct the probabilistic graph, the system can: 1) abstract the program template as a high-level pre-defined components graph (i.e., obtain the embeddings); and 2) derive the probabilistic graph using node and edge embeddings. With such a probabilistic graph, the policy model can sample it and derive actions from the sampling results. [0068] With respect to abstracting the program template to compute the embedding or program representation, there are many methods that could potentially be used. One approach makes use of syntax-level abstractions such as variables, expressions, or statements as nodes, where the relationships between these nodes can serve as edges. In an example embodiment for the template generation task, the system of FIG. 3 can alternately make use of much bigger syntax components such as API calls, classes, or even pre-defined code blocks as graph nodes in graph augmentation, where the edges can also be established between these high-level syntax components. For example, a simple way to add edges is to arrange some code blocks sequentially or put them in different orders, while more complicated methods involve nested blocks or function calls. In an example embodiment, experts can define a set of fixed components and fixed attach points for connecting them. This approach constrains the possibility of having too many combinations; for example, this approach could have some base class that includes several pre-defined APIs, and use subclasses to arrange them.
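An expert-defined set of fixed components with fixed attach points might be represented as follows; the component names and constraints are invented for illustration.

```python
# Hypothetical pre-defined high-level components with fixed attach points;
# real components would be supplied by domain experts.
COMPONENTS = {
    "init_regs":  {"attach_after": []},                      # must come first
    "mem_stress": {"attach_after": ["init_regs"]},
    "branch_mix": {"attach_after": ["init_regs", "mem_stress"]},
}

def legal_extensions(sequence):
    """Components whose attach-point constraints the current sequence satisfies."""
    return [name for name, spec in COMPONENTS.items()
            if name not in sequence
            and all(dep in sequence for dep in spec["attach_after"])]

exts = legal_extensions(["init_regs"])
```

Restricting augmentation actions to such legal extensions is one way to keep the combination space manageable.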
[0069] With respect to deriving the probabilistic graph, one embodiment is to generate a probabilistic graph that uses the embeddings of the nodes at both ends and of the edge as inputs. The system can concatenate all the embedding inputs, use a simple neural model (such as a Multi-Layer Perceptron (MLP) model), and then apply a Soft-Max function over the output of the neural model to obtain a probability between 0 and 1 for selecting these nodes and the edge. An alternative approach is to use multiple normal distribution vectors to predict the nodes and edge selected based on those embedding inputs.
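A sketch of the described selection head: the two node embeddings and the edge embedding are concatenated, pushed through a small MLP, and normalized with a softmax. The dimensions and random weights are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(12, 8))    # arbitrary MLP weights for the sketch
W2 = rng.normal(size=(8, 4))     # four candidate node/edge selections

def selection_probs(node_a, node_b, edge):
    """Concatenate the embeddings, run a small MLP, and softmax the output."""
    x = np.concatenate([node_a, node_b, edge])   # shape (12,)
    h = np.maximum(x @ W1, 0.0)                  # hidden layer with ReLU
    logits = h @ W2
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e / e.sum()

p = selection_probs(rng.normal(size=4), rng.normal(size=4), rng.normal(size=4))
```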
[0070] An alternative reward method is to use an adversarial reward in addition to the external rewards from a simulator. The system can make use of a network model, like the one described above, that produces graph embeddings for both the original graph and the augmented graph. When a new augmented graph is generated, the network model can map the new graph into its own embedding. The embeddings are further mapped into scalar values for comparison, which serves as the loss of the adversarial reward of the reinforcement learning system. An intermediate reward could also be used for verifying program correctness in the middle of the augmentation progress. Such a reward could be obtained in an approach similar to that described above. [0071] The system described above provides an automatic approach to test template augmentation for the set of test generation programs for CPU or other integrated circuit random verification by utilizing the graph convolutional policy network for generating new test templates that optimize the code coverage for random verification of the CPU or other integrated circuit. This allows for the automatic optimization of code coverage in constrained random verification, unlike previous approaches that rely on manual effort from verification engineers.
[0072] In terms of implementing the system of FIG. 3, FIG. 8 is a high-level block diagram of one embodiment of a more general computing system 800 that can be used to implement various embodiments of the systems described above. In one example, computing system 800 is a network system 800. Specific devices may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device. Furthermore, a device may contain multiple instances of a component, such as multiple processing units, processors, memories, etc.
[0073] The network system may comprise a computing system 801 equipped with one or more input/output devices, such as network interfaces, storage interfaces, and the like. The computing system 801 may include one or more central processing units (CPUs) 810 and/or other processors (e.g., graphics processing units (GPUs), tensor processing units (TPUs)), a memory 820, a mass storage device 830, and an I/O interface 860 connected to a bus 870. The computing system 801 is configured to connect to various input and output devices (keyboards, displays, etc.) through the I/O interface 860, such as can be used to receive the initial test templates and circuit design inputs. The bus 870 may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus or the like. As noted, the microprocessor 810 may comprise any type of electronic data processor, including CPUs, GPUs, TPUs, and so on. The microprocessor 810 may be configured to implement any of the schemes described herein with respect to the generation of supplemental test programs for integrated circuit design testing systems of FIGs. 1-7, using any one or combination of elements described in the embodiments. The memory 820 may comprise any type of system memory such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), a combination thereof, or the like. In an embodiment, the memory 820 may include ROM for use at boot-up, and DRAM for program and data storage for use while executing programs.
[0074] The mass storage device 830 may comprise any type of storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus 870. The mass storage device 830 may comprise, for example, one or more of a solid-state drive, hard disk drive, a magnetic disk drive, an optical disk drive, or the like.
[0075] The computing system 801 also includes one or more network interfaces 850, which may comprise wired links, such as an Ethernet cable or the like, and/or wireless links to access nodes or one or more networks 880. The network interface 850 allows the computing system 801 to communicate with remote units via the network 880. For example, the network interface 850 may provide wireless communication via one or more transmitters/transmit antennas and one or more receivers/receive antennas. In an embodiment, the computing system 801 is coupled to a local-area network or a wide-area network for data processing and communications with remote devices, such as other processing units, the Internet, remote storage facilities, or the like. In one embodiment, the network interface 850 may be used to receive and/or transmit interest packets and/or data packets in an ICN. Herein, the term “network interface” will be understood to include a port.
[0076] The components depicted in the computing system of FIG. 8 are those typically found in computing systems suitable for use with the technology described herein, and are intended to represent a broad category of such computer components that are well known in the art. Many different bus configurations, network platforms, and operating systems can be used.
[0077] The technology described herein can be implemented using hardware, firmware, software, or a combination of these. Depending on the embodiment, these elements of the embodiments described above can include hardware only or a combination of hardware and software (including firmware). For example, logic elements programmed by firmware to perform the functions described herein are one example of elements of the described systems. A CPU, GPU, or other microprocessor 810 can include a processor, FPGA, ASIC, integrated circuit or other type of circuit. The software used is stored on one or more of the processor readable storage devices described above to program one or more of the processors to perform the functions described herein. The processor readable storage devices can include computer readable media such as volatile and non-volatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer readable storage media and communication media. Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. A computer readable medium or media does (do) not include propagated, modulated or transitory signals.
[0078] Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as RF and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
[0079] In alternative embodiments, some or all of the software can be replaced by dedicated hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc. For example, some of the elements used to execute the instructions issued in FIG. 2 can use specific hardware elements. In one embodiment, software (stored on a storage device) implementing one or more embodiments is used to program one or more processors. The one or more processors can be in communication with one or more computer readable media/storage devices, peripherals and/or communication interfaces.
[0080] It is understood that the present subject matter may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this subject matter will be thorough and complete and will fully convey the disclosure to those skilled in the art. Indeed, the subject matter is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the subject matter as defined by the appended claims. Furthermore, in the following detailed description of the present subject matter, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be clear to those of ordinary skill in the art that the present subject matter may be practiced without such specific details.
[0081] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0082] The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.
[0083] For purposes of this document, each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.
[0084] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims

What is claimed is:
1. A method of generating, by a computing device, supplemental test programs for an integrated circuit, comprising:
obtaining a design for an integrated circuit;
receiving a plurality of initial test templates for testing of the design of the integrated circuit;
analyzing the initial test templates by a graph convolutional network to extract a program representation of a structure of programs for executing the initial test templates;
generating, by a graph convolutional policy network, a plurality of new test templates for the design of the integrated circuit from the program representation; and
performing a plurality of parallel simulations of the design of the integrated circuit using the plurality of new test templates.
2. The method of claim 1, further comprising:
subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide accurate simulation results; and
in response to the new test templates not providing accurate simulation results, generating a revised set of test templates.
3. The method of any of claims 1-2, further comprising:
subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide sufficient test coverage of a selected set of characteristics for the design of the integrated circuit; and
in response to the new test templates not providing sufficient test coverage of the selected set of characteristics for the design of the integrated circuit, generating additional new test templates.
4. The method of any of claims 1-3, further comprising:
subsequent to performing the plurality of parallel simulations, determining whether the new test templates provide accurate simulation results; and
in response to the new test templates providing accurate simulation results, testing the design of the integrated circuit using the new test templates.
5. The method of claim 4, further comprising:
in response to the design of the integrated circuit not passing the testing, modifying the design of the integrated circuit; and
testing the modified design of the integrated circuit using the new test templates.
6. The method of any of claims 4-5, further comprising:
in response to the design of the integrated circuit passing the testing, fabricating the design of the integrated circuit.
7. The method of any of claims 1-6, wherein analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates comprises:
building an abstract syntax tree from the test templates.
8. The method of claim 7, wherein analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates further comprises:
learning program representations from the abstract syntax tree.
9. The method of claim 8, wherein generating, by the graph convolutional policy network, the plurality of new test templates for the design of the integrated circuit from the program representation comprises:
generating the new test templates from the learned program representations.
10. The method of any of claims 1-9, further comprising:
training the graph convolutional policy network based on results of the plurality of parallel simulations.
11. A system, comprising:
one or more interfaces configured to:
receive a plurality of initial test templates for testing of a design of an integrated circuit; and
one or more processors and circuitry coupled to the one or more interfaces and configured to:
obtain the design of the integrated circuit;
analyze the initial test templates by a graph convolutional network to extract a program representation of a structure of programs for executing the initial test templates;
generate, by a graph convolutional policy network, a plurality of new test templates for the design of the integrated circuit from the program representation; and
perform a plurality of parallel simulations of the design of the integrated circuit using the plurality of new test templates.
12. The system of claim 11, wherein the one or more processors are further configured to:
subsequent to performing the plurality of parallel simulations, determine whether the new test templates provide accurate simulation results; and
in response to the new test templates not providing accurate simulation results, generate a revised set of test templates.
13. The system of any of claims 11-12, wherein the one or more processors are further configured to:
subsequent to performing the plurality of parallel simulations, determine whether the new test templates provide sufficient test coverage of a selected set of characteristics for the design of the integrated circuit; and
in response to the new test templates not providing sufficient test coverage of the selected set of characteristics for the design of the integrated circuit, generate additional new test templates.
14. The system of any of claims 11-13, wherein the one or more processors are further configured to:
subsequent to performing the plurality of parallel simulations, determine whether the new test templates provide accurate simulation results; and
in response to the new test templates providing accurate simulation results, test the design of the integrated circuit using the new test templates.
15. The system of claim 14, wherein the one or more processors are further configured to:
in response to the design of the integrated circuit not passing the testing, modify the design of the integrated circuit; and
test the modified design of the integrated circuit using the new test templates.
16. The system of any of claims 11-15, wherein, in analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates, the one or more processors are further configured to:
build an abstract syntax tree from the test templates.
17. The system of claim 16, wherein, in analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates, the one or more processors are further configured to:
learn program representations from the abstract syntax tree.
18. The system of claim 17, wherein, in generating, by the graph convolutional policy network, the plurality of new test templates for the design of the integrated circuit from the program representation, the one or more processors are further configured to:
generate the new test templates from the learned program representations.
19. The system of any of claims 11-18, wherein the one or more processors are further configured to:
train the graph convolutional policy network based on results of the plurality of parallel simulations.
20. A method, comprising:
obtaining a design for an integrated circuit;
receiving a plurality of initial test templates for testing of the design of the integrated circuit;
analyzing the initial test templates by a graph convolutional network to extract a program representation of a structure of programs for executing the initial test templates;
generating, by a graph convolutional policy network, a plurality of additional test templates for the design of the integrated circuit from the program representation;
testing the design of the integrated circuit using the additional test templates; and
in response to the design of the integrated circuit passing the testing, fabricating the design of the integrated circuit.
21. The method of claim 20, further comprising:
subsequent to performing the plurality of parallel simulations, determining whether the additional test templates provide accurate simulation results; and
in response to the additional test templates not providing accurate simulation results, generating a revised set of test templates.
22. The method of any of claims 20-21, further comprising:
subsequent to performing the plurality of parallel simulations, determining whether the additional test templates provide sufficient test coverage of a selected set of characteristics for the design of the integrated circuit; and
in response to the additional test templates not providing sufficient test coverage of the selected set of characteristics for the design of the integrated circuit, generating further test templates.
23. The method of any of claims 20-22, further comprising:
subsequent to performing the plurality of parallel simulations, determining whether the additional test templates provide accurate simulation results; and
in response to the additional test templates providing accurate simulation results, testing the design of the integrated circuit using the additional test templates.
24. The method of claim 23, further comprising:
in response to the design of the integrated circuit not passing the testing, modifying the design of the integrated circuit; and
testing the modified design of the integrated circuit using the additional test templates.
25. The method of any of claims 20-24, wherein analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates comprises:
building an abstract syntax tree from the test templates.
26. The method of claim 25, wherein analyzing the initial test templates by a graph convolutional network to extract the program representation of the initial test templates further comprises:
learning program representations from the abstract syntax tree.
27. The method of claim 26, wherein generating, by the graph convolutional policy network, the plurality of additional test templates for the design of the integrated circuit from the program representation comprises:
generating the additional test templates from the learned program representations.
28. The method of any of claims 20-27, further comprising:
training the graph convolutional policy network based on results of the plurality of parallel simulations.
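The claims above first build an abstract syntax tree from the test templates and then learn program representations from that tree. As an illustrative sketch only, and not the claimed implementation, the following Python fragment parses a toy test template with the standard-library `ast` module and flattens the tree into the node list and edge list that a graph convolutional network would consume; the template source line and the function name `template_to_graph` are hypothetical examples, not names from the disclosure.

```python
# Illustrative sketch (assumed details, not the claimed implementation):
# parse a toy test template into an abstract syntax tree, then flatten it
# into (node types, parent->child edges) for a graph convolutional network.
import ast

def template_to_graph(source: str):
    """Walk the AST depth-first, assigning an index to each node occurrence
    and recording one edge per parent->child relationship."""
    tree = ast.parse(source)
    nodes, edges = [], []
    stack = [(tree, None)]  # (AST node, parent index); root has no parent
    while stack:
        node, parent = stack.pop()
        idx = len(nodes)
        nodes.append(type(node).__name__)  # node feature: AST node type name
        if parent is not None:
            edges.append((parent, idx))
        for child in ast.iter_child_nodes(node):
            stack.append((child, idx))
    return nodes, edges

# Hypothetical one-line test template: an instruction with two operands.
nodes, edges = template_to_graph("result = rand_instr(reg_a, reg_b)")
print(nodes[0])                      # the root of a parsed module is "Module"
print(len(edges) == len(nodes) - 1)  # a tree has exactly n-1 edges
```

In a full pipeline the node-type names would be embedded as feature vectors and the edge list assembled into an adjacency structure, which is the input format typical graph neural network libraries expect.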
PCT/US2022/027099 2022-04-29 2022-04-29 Generation of supplemental test programs for integrated circuit design testing WO2023211471A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2022/027099 WO2023211471A1 (en) 2022-04-29 2022-04-29 Generation of supplemental test programs for integrated circuit design testing


Publications (1)

Publication Number Publication Date
WO2023211471A1 true WO2023211471A1 (en) 2023-11-02

Family

ID=81748233

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/027099 WO2023211471A1 (en) 2022-04-29 2022-04-29 Generation of supplemental test programs for integrated circuit design testing

Country Status (1)

Country Link
WO (1) WO2023211471A1 (en)

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
AUSTIN P WRIGHT ET AL: "Comparison of Syntactic and Semantic Representations of Programs in Neural Embeddings", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 24 January 2020 (2020-01-24), XP081585800 *
GUYUE HUANG ET AL: "Machine Learning for Electronic Design Automation: A Survey", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 10 January 2021 (2021-01-10), XP081875281 *
LORENZO FERRETTI ET AL: "A Graph Deep Learning Framework for High-Level Synthesis Design Space Exploration", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 29 November 2021 (2021-11-29), XP091106099 *
YASAEI ROZHIN ET AL: "GNN4TJ: Graph Neural Networks for Hardware Trojan Detection at Register Transfer Level", 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE), EDAA, 1 February 2021 (2021-02-01), pages 1504 - 1509, XP033941348, DOI: 10.23919/DATE51398.2021.9474174 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22724199

Country of ref document: EP

Kind code of ref document: A1