WO2020234685A1 - Method for predicting electrical characteristics of a semiconductor element - Google Patents
Method for predicting electrical characteristics of a semiconductor element
- Publication number
- WO2020234685A1 (PCT/IB2020/054411)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- learning model
- semiconductor element
- electrical characteristics
- learning
- transistor
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R31/00—Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
- G01R31/26—Testing of individual semiconductor devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R31/00—Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
- G01R31/28—Testing of electronic circuits, e.g. by signal tracer
- G01R31/2832—Specific tests of electronic circuits not provided for elsewhere
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/042—Knowledge-based neural networks; Logical representations of neural networks
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L21/00—Processes or apparatus adapted for the manufacture or treatment of semiconductor or solid state devices or of parts thereof
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L21/00—Processes or apparatus adapted for the manufacture or treatment of semiconductor or solid state devices or of parts thereof
- H01L21/02—Manufacture or treatment of semiconductor devices or of parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L22/00—Testing or measuring during manufacture or treatment; Reliability measurements, i.e. testing of parts without further processing to modify the parts as such; Structural arrangements therefor
- H01L22/10—Measuring as part of the manufacturing process
- H01L22/14—Measuring as part of the manufacturing process for electrical parameters, e.g. resistance, deep-levels, CV, diffusions by electrical means
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L29/00—Semiconductor devices specially adapted for rectifying, amplifying, oscillating or switching and having potential barriers; Capacitors or resistors having potential barriers, e.g. a PN-junction depletion layer or carrier concentration layer; Details of semiconductor bodies or of electrodes thereof ; Multistep manufacturing processes therefor
- H01L29/66—Types of semiconductor device ; Multistep manufacturing processes therefor
- H01L29/68—Types of semiconductor device ; Multistep manufacturing processes therefor controllable by only the electric current supplied, or only the electric potential applied, to an electrode which does not carry the current to be rectified, amplified or switched
- H01L29/76—Unipolar devices, e.g. field effect transistors
- H01L29/772—Field effect transistors
- H01L29/78—Field effect transistors with field effect produced by an insulated gate
- H01L29/786—Thin film transistors, i.e. transistors with a channel being at least partly a thin film
- H01L29/78645—Thin film transistors, i.e. transistors with a channel being at least partly a thin film with multiple gate
- H01L29/78648—Thin film transistors, i.e. transistors with a channel being at least partly a thin film with multiple gate arranged on opposing sides of the channel
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L29/00—Semiconductor devices specially adapted for rectifying, amplifying, oscillating or switching and having potential barriers; Capacitors or resistors having potential barriers, e.g. a PN-junction depletion layer or carrier concentration layer; Details of semiconductor bodies or of electrodes thereof ; Multistep manufacturing processes therefor
- H01L29/66—Types of semiconductor device ; Multistep manufacturing processes therefor
- H01L29/68—Types of semiconductor device ; Multistep manufacturing processes therefor controllable by only the electric current supplied, or only the electric potential applied, to an electrode which does not carry the current to be rectified, amplified or switched
- H01L29/76—Unipolar devices, e.g. field effect transistors
- H01L29/772—Field effect transistors
- H01L29/78—Field effect transistors with field effect produced by an insulated gate
- H01L29/786—Thin film transistors, i.e. transistors with a channel being at least partly a thin film
- H01L29/7869—Thin film transistors, i.e. transistors with a channel being at least partly a thin film having a semiconductor body comprising an oxide semiconductor material, e.g. zinc oxide, copper aluminium oxide, cadmium stannate
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L29/00—Semiconductor devices specially adapted for rectifying, amplifying, oscillating or switching and having potential barriers; Capacitors or resistors having potential barriers, e.g. a PN-junction depletion layer or carrier concentration layer; Details of semiconductor bodies or of electrodes thereof ; Multistep manufacturing processes therefor
- H01L29/66—Types of semiconductor device ; Multistep manufacturing processes therefor
- H01L29/68—Types of semiconductor device ; Multistep manufacturing processes therefor controllable by only the electric current supplied, or only the electric potential applied, to an electrode which does not carry the current to be rectified, amplified or switched
- H01L29/76—Unipolar devices, e.g. field effect transistors
- H01L29/772—Field effect transistors
- H01L29/78—Field effect transistors with field effect produced by an insulated gate
- H01L29/786—Thin film transistors, i.e. transistors with a channel being at least partly a thin film
- H01L29/78696—Thin film transistors, i.e. transistors with a channel being at least partly a thin film characterised by the structure of the channel, e.g. multichannel, transverse or longitudinal shape, length or width, doping structure, or the overlap or alignment between the channel and the gate, the source or the drain, or the contacting structure of the channel
Definitions
- One aspect of the present invention relates to a method of training a multimodal learning model using any one or more of process recipes, electrical characteristics, and image data. Further, one aspect of the present invention relates to a method of predicting the electrical characteristics of a semiconductor device with a trained multimodal model using any one or more of process recipes, electrical characteristics, and image data. One aspect of the present invention also relates to a method for predicting the electrical characteristics of a semiconductor device using a computer.
- a semiconductor element refers to an element that can function by utilizing semiconductor characteristics.
- examples of semiconductor elements include active elements such as transistors, diodes, light-emitting elements, and light-receiving elements.
- a semiconductor element also includes passive elements formed from a conductive film, an insulating film, or the like, such as capacitors, resistors, and inductors.
- a semiconductor device refers to a device including a circuit that has a semiconductor element or a passive element.
- AI (Artificial Intelligence) is being applied in fields such as robotics and in energy fields that handle high power, such as power ICs.
- these applications involve problems such as an increase in the amount of calculation or an increase in power consumption.
- to address such problems, the development of new semiconductor devices is underway. While the integrated circuits required by the market, and the semiconductor elements used in those circuits, are becoming more complicated, integrated circuits with new functions must be brought up early. However, process design, device design, and circuit design in the development of semiconductor devices require the knowledge, know-how, and experience of skilled engineers.
- Patent Document 1 discloses a parameter adjusting device that uses a genetic algorithm to adjust parameters of a physical model of a transistor.
- Process design, device design, and circuit design are required to develop semiconductor devices.
- the semiconductor element is formed by combining a plurality of process steps.
- a semiconductor element has the problem that its electrical characteristics differ when the order of the process steps changes. Even for the same step, different manufacturing apparatuses or process conditions result in different electrical characteristics.
- as miniaturization progresses, a semiconductor element can also exhibit different electrical characteristics even when it is formed using the same steps, the same conditions, and different apparatuses having the same function.
- in some cases the cause is the film-thickness accuracy or processing accuracy of the manufacturing apparatus, and in other cases the cause is that the applicable physical model changes with miniaturization.
- one aspect of the present invention is to provide a simple method for predicting electrical characteristics of a semiconductor device.
- one aspect of the present invention is to provide a simple method for predicting the electrical characteristics of a semiconductor element using a computer.
- one aspect of the present invention is to provide a neural network that learns a process list of a semiconductor element and outputs a first feature amount.
- one aspect of the present invention is to provide a neural network that learns the electrical characteristics of the semiconductor element generated by the process list of the semiconductor element and outputs a second feature amount.
- one aspect of the present invention is to provide a neural network that learns a schematic cross-sectional view or a cross-sectional observation image of a semiconductor device generated by the process list of the semiconductor device and outputs a third feature amount.
- one aspect of the present invention is to provide a neural network for multimodal learning using the first to third feature quantities.
- one aspect of the present invention is to output the value of a variable used in a calculation formula representing the electrical characteristics of a semiconductor element by a neural network that performs multimodal learning.
- One aspect of the present invention is a method for predicting electrical characteristics of a semiconductor device having a feature amount calculation unit and a characteristic prediction unit.
- the feature amount calculation unit has a first learning model and a second learning model
- the characteristic prediction unit has a third learning model.
- the first learning model has a step of learning a process list for producing a semiconductor device. Further, the first learning model has a step of generating a first feature quantity.
- the second learning model has a step of learning the electrical properties of the semiconductor device generated by the process list. Further, the second learning model has a step of generating a second feature quantity.
- the third learning model has a step of performing multimodal learning using the first feature amount and the second feature amount. Further, the method for predicting the electrical characteristics of a semiconductor element includes a step in which the third learning model outputs the value of a variable used in a calculation formula representing the electrical characteristics of the semiconductor element.
- the feature amount calculation unit has a fourth learning model.
- the fourth learning model has a step of learning a schematic cross-sectional view generated using a process list. Further, the fourth learning model has a step of generating a third feature quantity.
- the third learning model has a step of performing multimodal learning using the first feature amount, the second feature amount, and the third feature amount.
- a method for predicting the electrical characteristics of a semiconductor device in which the third learning model has a step of outputting the value of a variable used in a calculation formula representing the electrical characteristics of the semiconductor element is preferable.
- the first learning model has a first neural network
- the second learning model has a second neural network.
- a method for predicting the electrical characteristics of a semiconductor device in which the first feature amount generated by the first neural network is used to update the weighting coefficients of the second neural network is preferable.
- a method for predicting the electrical characteristics of a semiconductor device is preferable in which, when the first learning model is given a process list for inference and the second learning model is given the value of a voltage applied to the terminals of the semiconductor element, the second learning model outputs a current value corresponding to the voltage value.
- a method for predicting the electrical characteristics of a semiconductor device in which the third learning model has a step of outputting the value of a variable used in the calculation formula for the electrical characteristics of the semiconductor element is preferable.
- a method for predicting electrical characteristics in which the semiconductor element is a transistor is preferable.
- the transistor preferably contains a metal oxide in the semiconductor layer.
- One aspect of the present invention can provide a simple method for predicting electrical characteristics of a semiconductor device.
- one aspect of the present invention can include a neural network that learns a process list of semiconductor elements and outputs a first feature amount.
- one aspect of the present invention can include a neural network that learns the electrical characteristics of the semiconductor element generated by the process list of the semiconductor element and outputs a second feature amount.
- one aspect of the present invention can include a neural network that learns a schematic cross-sectional view or a cross-sectional image of the semiconductor device generated by the process list of the semiconductor device and outputs a third feature amount.
- one aspect of the present invention can include a neural network that performs multimodal learning using the first to third features.
- one aspect of the present invention can output the value of a variable used in a calculation formula representing the electrical characteristics of a semiconductor element by a neural network that performs multimodal learning.
- the effect of one aspect of the present invention is not limited to the effects listed above.
- the effects listed above do not preclude the existence of other effects.
- the other effects are effects not mentioned in this section; they are described below. Effects not mentioned here can be derived by those skilled in the art from the description or the drawings, and can be extracted from those descriptions as appropriate.
- one aspect of the present invention has at least one of the effects listed above and/or the other effects. Accordingly, one aspect of the present invention may not have the effects listed above in some cases.
- FIG. 1 is a diagram illustrating a method for predicting electrical characteristics of a semiconductor element.
- 2A, 2B, 2C, and 2D are tables for explaining the process list.
- 3A and 3B are diagrams for explaining a process list.
- FIG. 3C is a diagram illustrating a neural network for learning a process list.
- 4A and 4B are diagrams for explaining the electrical characteristics of the semiconductor element.
- FIG. 4C is a diagram illustrating a neural network for learning electrical characteristics.
- FIG. 5 is a diagram illustrating a method for predicting electrical characteristics of a semiconductor element.
- FIG. 6A is a diagram illustrating a neural network for learning image data.
- FIG. 6B is a diagram illustrating a schematic cross-sectional view of the semiconductor element.
- FIG. 6C is a diagram illustrating a cross-sectional observation image of the semiconductor element.
- FIG. 7 is a diagram illustrating a method for predicting electrical characteristics of a semiconductor element.
- FIG. 8 is a diagram illustrating a method for predicting electrical characteristics of a semiconductor element.
- FIG. 9 is a diagram illustrating a computer that operates a program.
- a method for predicting electrical characteristics of a semiconductor element will be described.
- a feature amount calculation unit and a characteristic prediction unit are used in the method for predicting the electrical characteristics of a semiconductor element.
- the feature amount calculation unit has a first learning model and a second learning model, and the characteristic prediction unit has a third learning model.
- the first learning model has a first neural network
- the second learning model has a second neural network
- the third learning model has a third neural network. It is preferable that the first to third neural networks are different from each other.
- the first learning model learns a process list for producing a semiconductor element.
- the first learning model updates the weighting factor of the first neural network by being given a process list for generating the semiconductor element. That is, the first neural network is a neural network that learns the process list as teacher data.
- in the following description, a transistor is used as a representative example of the semiconductor element.
- the semiconductor element is not limited to the transistor.
- the transistor is an example, and the semiconductor element may be a diode, a thermistor, a gyro sensor, an acceleration sensor, a light emitting element, a light receiving element, or the like.
- the semiconductor element may include a resistor, a capacitance, or the like.
- the above-mentioned process list is information in which a plurality of processes necessary for forming a transistor are combined.
- the process item preferably includes at least the process ID, the device ID, and the conditions.
- the types of steps include one or more of a film formation step, a cleaning step, a resist coating step, an exposure step, a development step, a processing step, a baking step, a peeling step, a doping step, and the like.
- the conditions include setting conditions of each device and the like.
- the process content represented by each process ID may be performed by devices having different functions.
- the film formation step includes metal-organic chemical vapor deposition (MOCVD), chemical vapor deposition (CVD), sputtering, and the like. Therefore, for the information given to the first learning model, two-dimensional information can be managed as one-dimensional information by expressing the process ID and the device ID with a single code. Expressing the process ID and the device ID as one code reduces the number of learning items and the amount of calculation.
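As an illustration of packing the two-dimensional (process ID, device ID) pair into a single code, here is a minimal sketch; the ID tables and the base-100 packing scheme are hypothetical examples, not taken from the patent:

```python
# Minimal sketch of combining a process ID and a device ID into one code,
# so two-dimensional (step, apparatus) information becomes one-dimensional.
# The ID tables and base-100 packing below are hypothetical examples.

PROCESS_IDS = {"film_formation": 1, "cleaning": 2, "resist_coating": 3}
DEVICE_IDS = {"MOCVD": 1, "CVD": 2, "sputtering": 3}

def encode_step(process: str, device: str) -> int:
    """Pack (process ID, device ID) into a single integer code."""
    return PROCESS_IDS[process] * 100 + DEVICE_IDS[device]

def decode_step(code: int) -> tuple:
    """Recover the (process ID, device ID) pair from a code."""
    return code // 100, code % 100

# A process list then becomes a one-dimensional sequence of codes.
process_list = [("film_formation", "MOCVD"), ("film_formation", "sputtering")]
codes = [encode_step(p, d) for p, d in process_list]
print(codes)  # [101, 103]
```

Replacing each (process, device) pair with one token shrinks the vocabulary the first neural network has to learn, which is the reduction in learning items and calculation described above.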
- the code generation method will be described in detail with reference to FIG.
- the first learning model generates the first feature quantity by the first neural network learned by the process list.
- the second learning model learns the electrical characteristics of the transistor produced according to the process list. More specifically, the second learning model learns the electrical characteristics of the transistors produced by the process list given to the first learning model.
- the second learning model updates the weighting factor of the second neural network given the electrical properties of the transistor. That is, the second neural network is a neural network that learns the electrical characteristics of the transistor as teacher data.
- as the electrical characteristics of the transistor, an Id-Vgs characteristic for evaluating the temperature characteristics or the threshold voltage of the transistor and an Id-Vds characteristic for evaluating the saturation characteristics of the transistor can be used.
- the drain current Id indicates the magnitude of the current flowing through the drain terminal when a voltage is applied to the gate terminal, drain terminal, and source terminal of the transistor.
- the Id-Vgs characteristic is the change in the drain current Id as the voltage applied to the gate terminal of the transistor is varied. The Id-Vds characteristic is the change in the drain current Id as the voltage applied to the drain terminal of the transistor is varied.
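To illustrate what the Id-Vgs and Id-Vds curves used as teacher data look like, the sketch below generates them with the idealized square-law (gradual channel approximation) FET model; all parameter values are arbitrary examples, not taken from the patent:

```python
# Illustrative sketch: Id-Vgs and Id-Vds sweeps from the idealized
# square-law transistor model. Parameter values are arbitrary examples.

def drain_current(vgs, vds, vth=1.0, mu_cox=1e-4, w_over_l=10.0):
    """Drain current Id of an idealized n-channel FET."""
    vov = vgs - vth                       # overdrive voltage
    if vov <= 0:
        return 0.0                        # cut-off
    if vds < vov:                         # linear (triode) region
        return mu_cox * w_over_l * (vov * vds - 0.5 * vds ** 2)
    return 0.5 * mu_cox * w_over_l * vov ** 2   # saturation region

# Id-Vgs characteristic: sweep the gate voltage at a fixed drain voltage.
id_vgs = [drain_current(vg, vds=5.0) for vg in (0.0, 1.0, 2.0, 3.0)]

# Id-Vds characteristic: sweep the drain voltage at a fixed gate voltage.
id_vds = [drain_current(3.0, vd) for vd in (0.0, 1.0, 2.0, 5.0)]
```

Curves of this shape (sampled at many bias points) are the kind of electrical-characteristic data the second neural network learns as teacher data.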
- the second learning model generates a second feature amount by a second neural network that learns the electrical characteristics of the transistor generated by the process list.
- the third learning model performs multimodal learning using the first feature amount and the second feature amount.
- the third learning model updates the weighting coefficient of the third neural network by being given the first feature amount and the second feature amount. That is, the third neural network is a neural network that learns the process list and the electrical characteristics of the transistors corresponding to the process list as teacher data.
- here, multimodal learning means learning using different forms of information: the first feature quantity generated from the process list for producing the semiconductor element, and the second feature quantity generated from the electrical characteristics of the semiconductor element produced by that process list.
- a neural network that uses features generated from a plurality of different types of information as input information can be called a neural network having a multimodal interface.
- the third neural network corresponds to a neural network having a multimodal interface.
- the third learning model outputs the values of variables used in the calculation formula representing the electrical characteristics of the transistor. That is, the value of the variable is a value predicted by the method of predicting the electrical characteristics of the semiconductor element.
- the gradual channel approximation formula of the transistor is used as the calculation formula representing the electrical characteristics of the transistor.
- Equation (1) represents the electrical characteristics of the saturation region of the transistor.
- Equation (2) represents the electrical characteristics of the linear region of the transistor.
- the variables predicted by the method for predicting the electrical characteristics of the transistor include the drain current Id used in equation (1) or (2), the field-effect mobility μFE, the capacitance per unit area Cox of the gate insulating film, the channel length L, the channel width W, and the threshold voltage Vth. The gate voltage Vg applied to the gate terminal and the drain voltage Vd applied to the drain terminal are preferably supplied as the inference data described later.
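The bodies of equations (1) and (2) are not reproduced in this text. For reference, the conventional gradual channel approximation forms implied by the variables listed above are (a reconstruction from the surrounding description, not copied from the patent figures):

```latex
% Saturation region (equation (1)):
I_d = \frac{1}{2}\,\mu_{FE} C_{ox} \frac{W}{L} \left(V_g - V_{th}\right)^2
% Linear region (equation (2)):
I_d = \mu_{FE} C_{ox} \frac{W}{L} \left[\left(V_g - V_{th}\right) V_d - \frac{V_d^2}{2}\right]
```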
- the third learning model can output all the values of the above-mentioned variables, or may output the values of any one or a plurality of variables.
- since the method of predicting the electrical characteristics of a semiconductor element uses supervised learning (learning with teacher data), the output result of the third learning model is fed back to the first to third neural networks.
- the first to third neural networks update their weighting coefficients so that the output approaches the result calculated from equation (1) or (2) using the electrical characteristics of the transistor.
- the feature amount calculation unit further has a fourth learning model.
- the fourth learning model learns a schematic cross-sectional view of a transistor generated using a process list. Alternatively, the fourth learning model learns the cross-sectional SEM image of the transistor generated using the process list.
- the fourth learning model generates the third feature amount by learning a schematic cross-sectional view or a cross-sectional SEM image of the transistor. It is preferable that, while the fourth learning model generates the third feature amount, the first learning model generates the first feature amount and the second learning model generates the second feature amount in parallel.
- the third learning model performs multimodal learning using the first feature amount, the second feature amount, and the third feature amount, and then outputs the value of the variable used in the calculation formula representing the electrical characteristics of the transistor.
- the first feature updates the weighting factor of the second neural network.
- the first feature quantity is the output of the first learning model that learned the process list. That is, the first feature quantity is related to the electrical characteristics of the transistor produced by the process list.
- the third learning model outputs the values of the variables used in the calculation formula for the electrical characteristics of the transistor.
- the first learning model is given a process list for inference
- the second learning model is given the value of the voltage given to the terminals (gate terminal, drain terminal, source terminal) of the transistor.
- the second learning model outputs the value of the current flowing through the drain terminal according to the value of the voltage as a predicted value.
- the transistor electrical characteristic prediction method described with reference to FIG. 1 includes a feature amount calculation unit 110 and a characteristic prediction unit 120.
- the feature amount calculation unit 110 has a learning model 210 and a learning model 220
- the characteristic prediction unit 120 has a learning model 230.
- the learning model 210 has a neural network 211 and a neural network 212.
- the neural network 211 and the neural network 212 will be described in detail with reference to FIG. 3C.
- the learning model 220 has a neural network 221 and an activation function 222.
- the neural network 221 preferably has an input layer, an intermediate layer, and an output layer.
- the neural network 221 will be described in detail with reference to FIG. 4C.
- the learning model 230 has a neural network composed of a coupling layer 231, a fully connected layer 232, and a fully connected layer 233.
- the coupling layer 231 has a multimodal interface.
- the coupling layer 231 combines the first feature quantity generated from the process list with the second feature quantity generated from the electrical characteristics of the transistor produced by the process list, and generates output data to be given to the fully connected layer 232.
- the fully connected layer 233 outputs predicted values of electrical characteristics (for example, the drain current) to the output terminals OUT_1 to OUT_w.
- the values of the variables included in the above-mentioned equation (1) or (2) correspond to the output terminals OUT_1 to OUT_w.
- w is an integer of 1 or more.
- FIGS. 2A to 2D are tables for explaining the process list given to the learning model 210.
- FIG. 2A is a table for explaining the process items of the smallest unit included in the process list.
- the process list is composed of a plurality of process items.
- the process item is composed of a process ID, an apparatus ID, an apparatus setting condition, and the like. Although not shown in FIG. 2A, it may be described which part of the transistor each process item forms.
- examples of the items included in a process item are a process ID, an apparatus ID, a condition, and a formation location.
- the formation location includes an oxide film, an electrode (gate, source, drain, etc.), a semiconductor layer, and the like.
- the actual semiconductor element forming step further includes a plurality of steps such as contact forming and wiring forming.
- FIG. 2B is a table for explaining the process items of the semiconductor element as an example.
- the process ID includes a film forming process, a cleaning process, a resist coating process, an exposure process, a developing process, a processing process 1, a processing process 2, a baking process, a peeling process, a doping process, and the like.
- the device ID is assigned to the device used in each process.
- the setting conditions of the device are items to be set for the device used in each process. Even in the same process, if the device IDs are different, different device setting conditions may be given to each device.
- the device IDs used in the processes can be set as follows: for example, film forming process: CVD1, cleaning process: WAS1, resist coating process: REG1, exposure process: PAT1, developing process: DEV1, processing process 1: ETC1, processing process 2: CMP1, baking process: OVN1, peeling process: PER1, and doping process: DOP1. It is preferable that the process ID is always managed in association with the device ID.
- the process ID can be represented by one code in combination with the device ID. As an example, when the process ID is the film forming process and the device ID is CVD1, the code is 0011. However, the code to be assigned is managed as a unique number. Further, the conditions set for each device have a plurality of setting items.
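As a concrete illustration of this code management, a minimal Python sketch is given below; only the pairing of the film forming process with device ID CVD1 and the code 0011 comes from the text, and the other entries and the helper function are hypothetical.

```python
# Hypothetical sketch of managing unique codes for (process ID, device ID)
# pairs. Only ("film forming", "CVD1") -> "0011" appears in the text; the
# other entries are illustrative assumptions.
PROCESS_DEVICE_CODES = {
    ("film forming", "CVD1"): "0011",
    ("film forming", "SPT1"): "0012",  # same process, different device -> different code
    ("cleaning", "WAS1"): "0021",
}

def encode_process_item(process_id, device_id):
    """Return the unique code managed for a process/device pair."""
    return PROCESS_DEVICE_CODES[(process_id, device_id)]
```

Managing the pair as a single code keeps each process step and its apparatus inseparable, which matches the requirement that the process ID always be associated with the device ID.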
- j, k, m, n, p, r, s, t, u, and v in FIG. 2B are integers of 1 or more.
- FIG. 2C is a table for explaining that even if the process items are the same, the code will be changed if the equipment used is different.
- even if the process ID is the same film forming process, a method of forming a film by chemical vapor deposition and a method of forming a film by sputtering (device ID: SPT1) are given different codes.
- among chemical vapor deposition apparatuses, an apparatus that forms a film using plasma (device ID: CVD1) and an apparatus that forms a film using heat (device ID: CVD2) are likewise distinguished.
- when a plurality of identical devices are provided, a different code may be used for each device.
- because the quality of the formed film may differ from device to device, each device needs to be managed as a separate unit.
- the electrical characteristics of a transistor may be affected by the device ID in the process list.
- FIG. 2D is a table for explaining the process items included in the process list given to the learning model 210.
- Code: 0011 means a process ID: a film forming process and an apparatus ID: CVD1.
- the film forming conditions given to code 0011 include the film thickness, temperature, pressure, electric power, gas 1, and the flow rate of gas 1. More specifically, the film forming conditions given to code 0011 are film thickness: 5 nm, temperature: 500°C, pressure: 200 Pa, electric power: 150 W, gas 1: SiH, and flow rate of gas 1: 2000 sccm. It is preferable that the conditions that can be set as process items can be set differently depending on the apparatus.
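Before such conditions can be learned, they must be expressed numerically. A hedged sketch of one way to do this is shown below; the key names, their ordering, and the choice to drop units are assumptions, while the values (5 nm, 500°C, 200 Pa, 150 W, 2000 sccm) come from the text.

```python
# Sketch: turning the film-forming conditions of code 0011 into a fixed-order
# numeric vector that a neural network can take as input. Key names and
# ordering are illustrative assumptions.
CONDITION_KEYS = ("thickness_nm", "temperature_c", "pressure_pa",
                  "power_w", "gas1_flow_sccm")

def conditions_to_vector(conditions):
    """Missing setting items default to 0.0 so every item has equal length."""
    return [float(conditions.get(key, 0.0)) for key in CONDITION_KEYS]

code_0011_conditions = {"thickness_nm": 5, "temperature_c": 500,
                        "pressure_pa": 200, "power_w": 150,
                        "gas1_flow_sccm": 2000}
```

A fixed key order gives every process item a vector of the same length, so items from different apparatuses with partially overlapping setting items remain comparable.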
- FIG. 3A and 3B are diagrams illustrating a part of the process list.
- FIG. 3C is a diagram illustrating a neural network for learning a process list.
- the process of processing the film formed by the film forming process will be described using a part of the process list shown in FIG. 3A.
- a film specified by the film forming process is formed.
- the film formation conditions and the like are omitted for the sake of simplicity.
- it can be seen from the code 0011 that the apparatus with device ID CVD1 is used (see FIG. 2B and the like).
- in the resist coating step, photoresist is applied onto the formed film.
- in the exposure step, the mask pattern of the film is transferred to the photoresist.
- in the developing step, the photoresist other than the transferred mask pattern is removed with a developing solution to form a mask pattern of the photoresist.
- the developing step may include a step of baking the photoresist.
- in the processing step 1, the film is processed using the mask pattern formed in the photoresist.
- in the peeling step, the photoresist is peeled off.
- a cleaning step is added after the film forming step, and a baking step is added after the peeling step.
- by adding a cleaning step after the film forming step, impurities remaining on the formed film are removed, or the unevenness of the surface formed on the upper part of the film is made uniform.
- by adding a baking step after the peeling step, impurities (organic solvent, water, etc.) remaining on the processed film are removed, or the reaction of elements contained in the film is stimulated by baking, so that the film quality can be changed. Baking the film increases its density and can harden the film quality.
- FIG. 3B shows that, by adding steps different from those in FIG. 3A, the film formed in the film forming step comes to have different characteristics. The process list therefore affects the electrical characteristics of the transistors produced by it.
- FIG. 3C is a diagram for explaining a learning model 210 that learns a process list as learning data.
- the learning model 210 has a neural network 211 and a neural network 212.
- Process items are given to the neural network 211 in the order of processes according to the process list.
- in a process item, as shown in FIG. 2D, the process and the name of the device used in the process are given by one code. Each code is given a plurality of conditions to set for the device to be used. Each condition is given in the form of a number or a number with a unit.
- the neural network 211 may be provided with a file in which a plurality of process items are described in process order.
- the neural network 211 vectorizes the process items using Word2Vec (W2V).
- instead of Word2Vec, GloVe (Global Vectors for Word Representation), Bag-of-Words, and the like can also be used.
- Vectorizing text data can be rephrased as converting it into a distributed representation.
- the distributed representation can be rephrased as an embedded representation (feature vector or embedded vector).
- the condition of the process item is treated as a set of words, not as a sentence. Therefore, it is preferable to treat the process list as a set of words.
- the neural network 211 has an input layer 211a, a hidden layer 211b, and a hidden layer 211c.
- the neural network 211 outputs a feature vector generated from the process list. It should be noted that a plurality of the feature vectors can be output, or they may be aggregated into one. Hereinafter, a case where the neural network 211 outputs a plurality of feature vectors will be described.
- the neural network may have one or a plurality of hidden layers.
- the neural network 212 is given a plurality of feature vectors generated by the neural network 211. It is preferable to use DAN (Deep Averaging Network) for the neural network 212.
- the neural network 212 has an AGGREGATE layer 212a, a fully connected layer 212b, and a fully connected layer 212c.
- the AGGREGATE layer 212a can collectively handle a plurality of feature vectors output by the neural network 211.
- the fully connected layer 212b and the fully connected layer 212c have a sigmoid function, a step function, a ramp function (rectified linear unit, ReLU), or the like as an activation function.
- a non-linear activation function is effective for converting complex training data into feature vectors. Therefore, the neural network 212 can average the feature vectors of the process items constituting the process list and aggregate them into one feature vector. The aggregated feature vector is given to the learning model 230.
- the fully connected layer may be one layer or a plurality of layers.
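The averaging-plus-fully-connected structure of the neural network 212 described above can be sketched as follows; the weights, layer sizes, and the sigmoid choice are illustrative placeholders, not the trained network.

```python
import math

# Sketch of the DAN-style neural network 212: the AGGREGATE layer averages
# the per-process-item feature vectors, and two fully connected layers with
# a non-linear (sigmoid) activation map the average to one aggregated
# feature vector. All weights are placeholders.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fully_connected(vec, weights, biases):
    """out_j = sigmoid(sum_i vec_i * weights[j][i] + biases[j])"""
    return [sigmoid(sum(v * w for v, w in zip(vec, row)) + b)
            for row, b in zip(weights, biases)]

def aggregate_features(feature_vectors, w1, b1, w2, b2):
    n = len(feature_vectors)
    mean = [sum(col) / n for col in zip(*feature_vectors)]  # AGGREGATE layer 212a
    hidden = fully_connected(mean, w1, b1)                  # fully connected layer 212b
    return fully_connected(hidden, w2, b2)                  # fully connected layer 212c
```

Averaging first makes the aggregation independent of how many process items the process list contains, which is why a DAN suits variable-length process lists.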
- FIG. 4A or FIG. 4B is a diagram illustrating the electrical characteristics of the transistor generated by the process list used by the learning model 210 for learning.
- FIG. 4C is a diagram illustrating a neural network for learning the electrical characteristics of a transistor.
- FIG. 4A is a diagram showing the Id-Vds characteristics used for evaluating the saturation characteristics of the transistor.
- the Id-Vds characteristic indicates the current flowing through the drain terminal when a voltage is applied to the gate terminal, drain terminal, and source terminal of the transistor. That is, the Id-Vds characteristic is the value Id of the drain current when a different voltage is applied to the drain terminal of the transistor.
- FIG. 4A is a diagram plotting the drain current Id when the potentials A1 to A10 are applied to the drain terminal of the transistor.
- FIG. 4B is a diagram showing the Id-Vgs characteristics used for evaluating the linear characteristics of the transistor.
- the Id-Vgs characteristic indicates the current flowing through the drain terminal when a voltage is applied to the gate terminal, drain terminal, and source terminal of the transistor. That is, the Id-Vgs characteristic is the value Id of the drain current when a different voltage is applied to the gate terminal of the transistor.
- FIG. 4B is a diagram plotting the drain current Id when the potentials A1 to A10 are applied to the gate terminal of the transistor.
- FIG. 4C is a diagram illustrating a neural network 221 that learns the electrical characteristics of a transistor using the data of FIG. 4A or FIG. 4B.
- a voltage Vd given to the drain terminal of the transistor, a voltage Vg given to the gate terminal of the transistor, and a voltage Vs given to the source terminal of the transistor are given to the input layer. Further, under the above-mentioned conditions, the current Id flowing through the drain terminal of the transistor may be given.
- the input layer has neurons X1 to X4, the hidden layer has neurons Y1 to Y10, and the output layer has neurons Z1.
- the neuron Z1 converts the electrical characteristics into a feature value, and the activation function 222 outputs a predicted value.
- the number of neurons in the hidden layer is preferably equal to the number of plots given as training data. Alternatively, it is more preferable that the number of neurons in the hidden layer is larger than the number of plots given as training data. When the number of neurons in the hidden layer is larger than the number of plots given as training data, the learning model 220 learns the electrical characteristics of the transistor in detail.
- the neuron Z1 has a function of an activation function 222.
- the neural network 221 learns the electrical characteristics of a transistor.
- the voltage Vd given to the drain terminal of the transistor is given to the neuron X1
- the voltage Vg given to the gate terminal of the transistor is given to the neuron X2
- the voltage Vs given to the source terminal of the transistor is given to the neuron X3.
- the neuron X4 is given a drain current Id that flows through the drain terminal of the transistor.
- the drain current Id is given as teacher data.
- the weighting coefficient of the hidden layer is updated so that the output of the neuron Z1 or the output of the activation function 222 approaches the drain current Id.
- even when the drain current Id is not given as learning data, learning is performed so that the output of the neuron Z1 or the output of the activation function 222 approaches the drain current Id.
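The training loop described above can be sketched as a small regression network; this is not the patent's trained network, and the learning rate, epoch count, and weight initialization are assumptions. The hidden size of 10 matches neurons Y1 to Y10, and the optional fourth input (Id itself) is omitted here.

```python
import numpy as np

# Sketch of how the neural network 221 could learn the drain current: the
# inputs are (Vd, Vg, Vs), the teacher data is Id, and the weighting
# coefficients are updated by gradient descent so the output approaches Id.
rng = np.random.default_rng(0)

def train_id_regressor(X, id_teacher, hidden=10, lr=0.1, epochs=5000):
    """Single-hidden-layer MLP regression trained with plain backpropagation."""
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                           # hidden layer (Y1..Y10)
        y = (h @ W2 + b2).ravel()                          # output neuron Z1
        grad_y = 2.0 * (y - id_teacher)[:, None] / len(X)  # d(MSE)/dy
        grad_h = (grad_y @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
        W2 -= lr * h.T @ grad_y; b2 -= lr * grad_y.sum(axis=0)
        W1 -= lr * X.T @ grad_h; b1 -= lr * grad_h.sum(axis=0)
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()
```

Under these assumptions, a handful of plotted Id-Vg points like those in FIG. 4B is enough for the sketch to fit the characteristic curve.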
- the learning model 220 learns in parallel with the learning model 210.
- the process list given to the learning model 210 is highly relevant to the electrical characteristics given to the learning model 220. Therefore, in order to learn to predict the electrical characteristics of the transistor, it is effective to learn the learning model 220 and the learning model 210 in parallel.
- the characteristic prediction unit 120 has a learning model 230.
- the learning model 230 is a neural network having a fully connected layer 231 and a fully connected layer 232, and a fully connected layer 233.
- the fully connected layer may be one layer or a plurality of layers.
- the coupling layer 231 combines the feature vectors output by the different learning models (learning model 210 and learning model 220) into another feature vector. That is, by providing the coupling layer 231, the characteristic prediction unit 120 functions as a neural network having a multimodal interface.
- the fully connected layer 233 outputs the predicted values of the electrical characteristics to the output terminals OUT_1 to OUT_w.
- the predicted values of the electrical characteristics that are output correspond to the field-effect mobility μFE of the above-mentioned equation (1) or (2), the capacitance per unit area Cox of the gate insulating film, the channel length L, the channel width W, the threshold voltage Vth, and the like. Further, it is preferable to also output the drain voltage Vd, the gate voltage Vg, or the like.
- the value of each variable calculated from the electrical characteristics of the transistor may be given to the coupling layer 231 as teacher data. In the learning model 230, the weighting coefficient is updated by being given the teacher data.
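The coupling described above amounts to concatenating the two feature quantities and passing the result through fully connected layers. A hedged sketch follows; the layer sizes, weights, and function names are placeholders, not the patent's implementation.

```python
# Sketch of the coupling layer 231 followed by a fully connected layer: the
# first feature quantity (from the process list) and the second (from the
# electrical characteristics) are concatenated, and a linear layer maps the
# result to w outputs OUT_1..OUT_w.
def coupling_layer(feat_process, feat_electrical):
    """Multimodal interface: simple concatenation of the two modalities."""
    return list(feat_process) + list(feat_electrical)

def linear_layer(vec, weights, biases):
    return [sum(v * w for v, w in zip(vec, row)) + b
            for row, b in zip(weights, biases)]
```

Because the concatenated vector preserves both modalities unchanged, the subsequent fully connected layers can learn weightings that relate process-list features to electrical-characteristic features.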
- FIG. 5 is a diagram illustrating a method of predicting electrical characteristics of a semiconductor element different from that of FIG. FIG. 5 has a feature amount calculation unit 110A.
- the feature amount calculation unit 110A is different from the feature amount calculation unit 110 shown in FIG. 1 in that it has a learning model 240.
- the learning model 240 is a neural network that learns image data.
- the image data learned by the learning model 240 is a schematic cross-sectional view of a transistor formed by a process list, a cross-sectional observation image observed using a scanning electron microscope (SEM), or the like.
- the coupling layer 231A included in the characteristic prediction unit 120 combines the feature vector generated from the process list, the feature vector generated from the electrical characteristics of the transistor produced by the process list, and the feature vector generated from the schematic cross-sectional view or the actual cross-sectional observation image of the completed transistor, and generates output data to be given to the fully connected layer 232.
- FIG. 6A is a diagram for explaining the learning model 240 in detail.
- the learning model 240 has a convolutional neural network 241 and a fully connected layer 242.
- the convolutional neural network 241 has a convolutional layer 241a to a convolutional layer 241e.
- the number of convolution layers is not limited and may be an integer of 1 or more. Note that FIG. 6A shows a case where five convolution layers are provided as an example.
- the fully connected layer 242 has a fully connected layer 242a to a fully connected layer 242c. Therefore, the learning model 240 can be called a CNN (convolutional neural network).
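One convolution layer of the kind stacked in the convolutional neural network 241 can be sketched as below; this is a generic "valid" cross-correlation as used in most CNN implementations, with a hand-written kernel standing in for the learned weights.

```python
import numpy as np

# Minimal sketch of a single convolution layer like those stacked five times
# in the convolutional neural network 241 (followed by the fully connected
# layers 242a..242c). The kernel would normally be a trained weight.
def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

Applied to a grayscale cross-sectional image, a vertical-difference kernel such as [[1], [-1]] would respond at horizontal layer boundaries, for example between the gate oxide film and the gate electrode.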
- FIG. 6B shows a schematic cross-sectional view of the transistor generated by the process list given to the learning model 210.
- FIG. 6C shows a cross-sectional observation image of the transistor generated by the process list given to the learning model 210.
- a learning model different from the learning model 240 that learns the schematic cross-sectional view of the transistor may be used to learn the cross-sectional observation image of the transistor.
- FIG. 6B shows a semiconductor layer, a gate oxide film, and a gate electrode
- FIG. 6C shows a semiconductor layer, a gate oxide film, and a gate electrode corresponding to those in FIG. 6B.
- in the cross-sectional observation image, the gate oxide film of the transistor may be difficult to recognize because it is a thin film.
- in the schematic cross-sectional view, a thin film that might otherwise be erroneously detected can be drawn so that it is recognizable. Therefore, by learning the schematic cross-sectional view, the cross-sectional observation image can be learned more accurately, the process list becomes more closely related to the electrical characteristics of the transistor and to the actual cross-sectional observation image, and it becomes easy to predict the electrical characteristics of the semiconductor element.
- FIGS. 6B and 6C show an example of a transistor having a metal oxide in the semiconductor layer.
- the method for predicting the electrical characteristics of a semiconductor element can also be applied to a transistor containing silicon in the semiconductor layer. Alternatively, it can also be applied to a transistor containing a compound semiconductor or an oxide semiconductor.
- the semiconductor element is not limited to the transistor.
- the method for predicting the electrical characteristics of a semiconductor element can also be applied to resistors, capacitors, diodes, thermistors, gyro sensors, accelerometers, light-emitting elements, light-receiving elements, and the like.
- FIG. 7 is a diagram illustrating a method of predicting electrical characteristics of a semiconductor element different from that of FIG.
- the feature amount calculation unit 110B is provided.
- the feature amount calculation unit 110B is different in that the output of the learning model 210 updates the weighting coefficient of the neural network 221.
- the neural network 221 improves the prediction of the electrical characteristics of the transistor.
- FIG. 7 describes a method for predicting the electrical characteristics of a transistor using a method for predicting the electrical characteristics of a semiconductor element.
- the learning model 210, the learning model 220, and the learning model 230 have already been trained.
- the neural network 211 is provided with a process list having a new configuration as inference data 1.
- the neural network 221 is given a drain voltage given to the drain terminal of the transistor, a gate voltage given to the gate terminal of the transistor, a source voltage given to the source terminal of the transistor, and the like as inference data 2.
- the characteristic prediction unit 120 predicts the values of the variables of the above-mentioned equation (1) or (2) by using the feature vector generated from the inference data 1 and the feature vector generated from the inference data 2. Further, the activation function 222 can output the inference result 1 from the inference data 2.
- the inference result 1 is the drain current Id predicted from the drain voltage given to the drain terminal of the transistor, the gate voltage given to the gate terminal of the transistor, the source voltage given to the source terminal of the transistor, and the like.
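The inference flow just described can be sketched as a simple composition of the trained models; the callables below are placeholders standing in for the trained networks, and the function name is hypothetical.

```python
# Sketch of the inference flow of FIG. 7: a new process list (inference data
# 1) and terminal voltages (inference data 2) are converted into feature
# vectors by the trained learning models, and the characteristic prediction
# unit maps the coupled vector to the variable values of equation (1) or (2).
def predict_characteristics(process_list, voltages,
                            model_210, model_220, model_230):
    feat1 = model_210(process_list)   # feature vector from inference data 1
    feat2 = model_220(voltages)       # feature vector from inference data 2
    return model_230(feat1 + feat2)   # predicted values for OUT_1..OUT_w
```

The point of the composition is that only the inputs change at inference time: the same trained models that consumed learning data now consume a new process list and new voltage conditions.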
- FIG. 8 is a diagram illustrating a method of predicting electrical characteristics of a semiconductor element, which is different from that of FIG. FIG. 8 has a feature amount calculation unit 110C.
- the feature amount calculation unit 110C is different from the feature amount calculation unit 110A shown in FIG. 5 in that the output of the learning model 210 updates the weighting coefficient of the neural network 221.
- FIG. 8 describes a method for predicting the electrical characteristics of a transistor using a method for predicting the electrical characteristics of a semiconductor element.
- the learning model 210, the learning model 220, the learning model 230, and the learning model 240 have already been trained.
- the neural network 211 is provided with a process list having a new configuration as inference data 1.
- the neural network 221 is given a drain voltage given to the drain terminal of the transistor, a gate voltage given to the gate terminal of the transistor, a source voltage given to the source terminal of the transistor, and the like as inference data 2.
- the neural network 241 is provided with a sectional schematic diagram or a sectional observation image having a new configuration as inference data 3.
- the characteristic prediction unit 120 predicts the values of the variables of the above-mentioned equation (1) or (2) by using the feature vector generated from the inference data 1, the feature vector generated from the inference data 2, and the feature vector generated from the inference data 3. Further, the activation function 222 can output the inference result 1 from the inference data 2. The inference result 1 is the drain current Id predicted from the drain voltage given to the drain terminal of the transistor, the gate voltage given to the gate terminal of the transistor, the source voltage given to the source terminal of the transistor, and the like.
- the fully connected layer 233 of FIG. 7 or FIG. 8 outputs the predicted values of the electrical characteristics to the output terminals OUT_1 to OUT_w.
- these correspond to the field-effect mobility μFE of the above-mentioned equation (1) or (2), the capacitance per unit area Cox of the gate insulating film, the channel length L, the channel width W, the threshold voltage Vth, and the like.
- FIG. 9 is a diagram illustrating a computer that operates the program.
- the computer 10 connects to the database 21, the remote computer 22, or the remote computer 23 via a network.
- the computer 10 includes an arithmetic unit 11, a memory 12, an input / output interface 13, a communication device 14, and a storage 15.
- the computer 10 is electrically connected to the display device 16a and the keyboard 16b via the input / output interface 13. Further, the computer 10 is electrically connected to the network interface 17 via the communication device 14, and the network interface 17 is electrically connected to the database 21, the remote computer 22, and the remote computer 23 via the network.
- the network includes a local area network (LAN) and the Internet.
- the network can use either wired or wireless communication, or both.
- for wireless communication, in addition to short-range communication means such as Wi-Fi (registered trademark) and Bluetooth (registered trademark), various communication means can be used, such as means compliant with the third-generation mobile communication system (3G), LTE (sometimes called 3.9G), the fourth-generation mobile communication system (4G), or the fifth-generation mobile communication system (5G).
- the method for predicting the electrical characteristics of a semiconductor element uses a computer 10 to predict the electrical characteristics of the semiconductor element.
- the program included in the computer 10 is stored in the memory 12 or the storage 15.
- the program uses the arithmetic unit 11 to generate a learning model.
- the program can be displayed on the display device via the input / output interface 13.
- the user can give learning data such as a process list, electrical characteristics, a schematic cross-sectional view, or a cross-sectional observation image to the program from the keyboard for the program displayed on the display device 16a.
- the display device 16a converts the electrical characteristics of the semiconductor element predicted by the method for predicting the electrical characteristics of the semiconductor element into numbers, mathematical formulas, or graphs and displays them.
- the program can also be used by the remote computer 22 or the remote computer 23 via the network.
- the program stored in the memory or storage of the database 21, the remote computer 22, or the remote computer 23 can be used to operate the computer 10.
- the remote computer 22 may be a mobile information terminal or a mobile terminal such as a tablet computer or a notebook computer. In the case of a mobile information terminal, a mobile terminal, or the like, communication can be performed using wireless communication.
- one aspect of the present invention can provide a method for predicting the electrical characteristics of a semiconductor element using a computer.
- the method for predicting the electrical characteristics of a semiconductor element can perform multimodal learning by being given, as learning data, a process list, the electrical characteristics of the semiconductor element produced by the process list, or a schematic cross-sectional view or a cross-sectional observation image of the semiconductor element produced by the process list.
- by being given a new process list, voltage conditions applied to the semiconductor element, a schematic cross-sectional view, or a cross-sectional observation image as inference data, the method can predict the electrical characteristics of the semiconductor element or the values of the variables of the formula expressing those electrical characteristics.
- the method for predicting the electrical characteristics of a semiconductor element can reduce the number of confirmation experiments in the development of the semiconductor element and can make effective use of information from past experiments.
- This embodiment can be implemented in appropriate combination with the other embodiments.
- OUT_w Output terminal, OUT_1: Output terminal, 10: Computer, 11: Arithmetic device, 12: Memory, 13: Input / output interface, 14: Communication device, 15: Storage, 16a: Display device, 16b: Keyboard, 17: Network Interface, 21: Database, 22: Remote computer, 23: Remote computer, 110: Feature amount calculation unit, 110A: Feature amount calculation unit, 110B: Feature amount calculation unit, 110C: Feature amount calculation unit, 120: Characteristic prediction unit, 210: Learning model, 211: Neural network, 211a: Input layer, 211b: Hidden layer, 211c: Hidden layer, 212: Neural network, 212a: AGGREGATE layer, 212b: Fully connected layer, 212c: Fully connected layer, 220: Learning Model, 221: Neural network, 230: Learning model, 231: Bonding layer, 231A: Bonding layer, 232: Fully coupled layer, 233: Fully coupled layer, 240: Learning model, 241: Neural network, 24
Abstract
Description
FIGS. 2A, 2B, 2C, and 2D are tables explaining a process list.
FIGS. 3A and 3B are diagrams explaining a process list. FIG. 3C is a diagram explaining a neural network that learns a process list.
FIGS. 4A and 4B are diagrams explaining the electrical characteristics of a semiconductor element. FIG. 4C is a diagram explaining a neural network that learns the electrical characteristics.
FIG. 5 is a diagram explaining a method for predicting the electrical characteristics of a semiconductor element.
FIG. 6A is a diagram explaining a neural network that learns image data. FIG. 6B is a diagram explaining a schematic cross-sectional view of a semiconductor element. FIG. 6C is a diagram explaining a cross-sectional observation image of a semiconductor element.
FIG. 7 is a diagram explaining a method for predicting the electrical characteristics of a semiconductor element.
FIG. 8 is a diagram explaining a method for predicting the electrical characteristics of a semiconductor element.
FIG. 9 is a diagram explaining a computer that runs the program.
In one embodiment of the present invention, a method for predicting the electrical characteristics of a semiconductor element is described. As an example, the method uses a feature amount calculation unit and a characteristic prediction unit. The feature amount calculation unit has a first learning model and a second learning model, and the characteristic prediction unit has a third learning model. The first learning model has a first neural network, the second learning model has a second neural network, and the third learning model has a third neural network. It is preferable that the first to third neural networks are different from one another.
Claims (7)
- A method for predicting the electrical characteristics of a semiconductor element, comprising a feature amount calculation unit and a characteristic prediction unit, wherein
the feature amount calculation unit has a first learning model and a second learning model,
the characteristic prediction unit has a third learning model,
the first learning model has a step of learning a process list for producing the semiconductor element,
the second learning model has a step of learning the electrical characteristics of the semiconductor element produced by the process list,
the first learning model has a step of generating a first feature amount,
the second learning model has a step of generating a second feature amount,
the third learning model has a step of performing multimodal learning using the first feature amount and the second feature amount, and
the third learning model has a step of outputting the values of variables used in a calculation formula expressing the electrical characteristics of the semiconductor element,
the method for predicting the electrical characteristics of the semiconductor element. - In claim 1,
the feature amount calculation unit has a fourth learning model,
the fourth learning model has a step of learning a schematic cross-sectional view generated using the process list,
the fourth learning model has a step of generating a third feature amount,
the third learning model has a step of performing multimodal learning using the first feature amount, the second feature amount, and the third feature amount, and
the third learning model has a step of outputting the values of the variables used in the calculation formula expressing the electrical characteristics of the semiconductor element,
the method for predicting the electrical characteristics of the semiconductor element. - In claim 1 or claim 2,
the first learning model has a first neural network,
the second learning model has a second neural network, and
the method for predicting the electrical characteristics of the semiconductor element has a step in which the first feature amount generated by the first neural network updates the weighting coefficients of the second neural network. - In any one of claims 1 to 3,
when the first learning model is given a process list for inference and the second learning model is given the values of voltages applied to the terminals of the semiconductor element, the method for predicting the electrical characteristics of the semiconductor element has a step in which the second learning model outputs the value of a current corresponding to the values of the voltages. - In any one of claims 1 to 3,
when the first learning model is given a process list for inference and the second learning model is given the values of voltages applied to the terminals of the semiconductor element, the method for predicting the electrical characteristics of the semiconductor element has a step in which the third learning model outputs the values of the variables used in the calculation formula for the electrical characteristics of the semiconductor element. - In any one of claims 1 to 5,
the method for predicting the electrical characteristics of the semiconductor element, wherein the semiconductor element is a transistor. - In claim 6,
the method for predicting the electrical characteristics of the semiconductor element, wherein the transistor has a metal oxide in a semiconductor layer.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021520483A JP7519996B2 (ja) | 2019-05-23 | 2020-05-11 | Method for predicting electrical characteristics of semiconductor element |
KR1020217040751A KR20220012269A (ko) | 2019-05-23 | 2020-05-11 | Method for predicting electrical characteristics of semiconductor element |
US17/611,987 US20220252658A1 (en) | 2019-05-23 | 2020-05-11 | Method for predicting electrical characteristics of semiconductor element |
CN202080036592.6A CN113841222A (zh) | 2019-05-23 | 2020-05-11 | Method for predicting electrical characteristics of semiconductor element |
JP2024110187A JP2024124548A (ja) | 2019-05-23 | 2024-07-09 | Method for predicting electrical characteristics of semiconductor element |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019-096919 | 2019-05-23 | | |
JP2019096919 | 2019-05-23 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020234685A1 true WO2020234685A1 (ja) | 2020-11-26 |
Family
ID=73458401
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2020/054411 WO2020234685A1 (ja) | 2019-05-23 | 2020-05-11 | Method for predicting electrical characteristics of semiconductor element |
Country Status (5)
Country | Link |
---|---|
US (1) | US20220252658A1 (ja) |
JP (2) | JP7519996B2 (ja) |
KR (1) | KR20220012269A (ja) |
CN (1) | CN113841222A (ja) |
WO (1) | WO2020234685A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023091313A3 (en) * | 2021-11-19 | 2023-08-03 | The Government Of The United States Of America, As Represented By The Secretary Of The Navy | Neural network-based prediction of semiconductor device response |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102512102B1 (ko) * | 2022-05-24 | 2023-03-21 | Alsemy Inc. | Compact modeling method and system using multiple artificial neural networks specialized for each operation region of a semiconductor device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008021805A (ja) * | 2006-07-12 | 2008-01-31 | Sharp Corp | Test result prediction device, test result prediction method, semiconductor test device, semiconductor test method, system, program, and recording medium |
JP2020043270A (ja) * | 2018-09-12 | 2020-03-19 | Tokyo Electron Ltd. | Learning device, inference device, and trained model |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005038216A (ja) | 2003-07-16 | 2005-02-10 | Shinka System Sogo Kenkyusho:Kk | Parameter adjustment device |
US20170337482A1 (en) | 2016-05-20 | 2017-11-23 | Suraj Sindia | Predictive system for industrial internet of things |
KR101917006B1 (ko) * | 2016-11-30 | 2018-11-08 | SK Inc. | Machine learning-based semiconductor manufacturing yield prediction system and method |
US10319743B2 (en) * | 2016-12-16 | 2019-06-11 | Semiconductor Energy Laboratory Co., Ltd. | Semiconductor device, display system, and electronic device |
US11537841B2 (en) * | 2019-04-08 | 2022-12-27 | Samsung Electronics Co., Ltd. | System and method for compact neural network modeling of transistors |
2020
- 2020-05-11 KR KR1020217040751A patent/KR20220012269A/ko unknown
- 2020-05-11 CN CN202080036592.6A patent/CN113841222A/zh active Pending
- 2020-05-11 US US17/611,987 patent/US20220252658A1/en active Pending
- 2020-05-11 JP JP2021520483A patent/JP7519996B2/ja active Active
- 2020-05-11 WO PCT/IB2020/054411 patent/WO2020234685A1/ja active Application Filing
2024
- 2024-07-09 JP JP2024110187A patent/JP2024124548A/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
JP2024124548A (ja) | 2024-09-12 |
KR20220012269A (ko) | 2022-02-03 |
US20220252658A1 (en) | 2022-08-11 |
JP7519996B2 (ja) | 2024-07-22 |
JPWO2020234685A1 (ja) | 2020-11-26 |
CN113841222A (zh) | 2021-12-24 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20810822; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 2021520483; Country of ref document: JP; Kind code of ref document: A
| NENP | Non-entry into the national phase | Ref country code: DE
| ENP | Entry into the national phase | Ref document number: 20217040751; Country of ref document: KR; Kind code of ref document: A
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20810822; Country of ref document: EP; Kind code of ref document: A1