CN104915515A - BP neural network based GFET modeling method - Google Patents
- Publication number
- CN104915515A (application number CN201510364868.4A)
- Authority
- CN
- China
- Prior art keywords
- model
- gfet
- data
- network
- formula
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Abstract
The present invention discloses a BP neural network based GFET modeling method. The method comprises six steps: collecting data; preprocessing the data; determining the BP network GFET model structure to be used; building and training the GFET model; denormalizing the output current predicted by the GFET model to obtain the model output value; and implementing the GFET model in Verilog-A. The method has the advantages of short computation time and high accuracy, requires no complete theoretical knowledge of the device, and is applicable to circuit design and simulation.
Description
Technical field
The invention belongs to the field of device modeling, and specifically relates to a modeling method for the graphene field-effect transistor (GFET).
Background art
A device model is the tool that describes device performance; to apply a new device at scale in circuit design, an accurate device model is essential. Traditional device models are mainly numerical models and compact (analytical) models.
Based on first-principles or non-equilibrium numerical computation, numerical models can accurately simulate the characteristics of a GFET, but their computation time is long and they cannot be used in circuit simulation tools. Compact models describe GFET characteristics with analytical expressions; their computation time is short and they can be integrated into circuit simulation tools for circuit design, but they require a deep understanding of the GFET's operating mechanism, reduced to suitable analytical expressions, which is often difficult to achieve for novel GFET devices.
A neural network is a modeling method based on machine learning and statistical theory. By learning the relation between the input and output data of training samples, it adjusts its internal parameters to describe the behavior of the target object. It has the advantages of simple structure, fast learning, and no need for a complete theoretical description of the device. Neural networks can therefore model new devices efficiently and accurately.
Summary of the invention
To overcome the deficiencies of existing modeling methods, the present invention proposes a GFET modeling method based on a BP (back-propagation) neural network. Its computation time is short, its accuracy is high, and it requires no complete theoretical device knowledge. From the GFET input data, comprising the gate-source voltage V_gs, the drain-source voltage V_ds, the channel width W and the channel length L, it accurately calculates the GFET output channel current I_d.
The technical solution adopted by the present invention is as follows: a GFET modeling method based on a BP neural network, characterized in that it comprises the following steps:
Step 1. Data acquisition: collect a certain amount of GFET input-output data for training and testing the BP network model; the network model obtained after training and testing can then predict GFET performance under other input parameters. The input data comprise the gate-source voltage V_gs, the drain-source voltage V_ds, the channel width W and the channel length L; the output data are the channel current I_d. After collection, all data are randomly divided into training data and test data in a certain proportion.
Step 2. Data preprocessing: normalize the data obtained in step 1 to a uniform range.
Step 3. Determine the BP network GFET model structure to be used, including the number of hidden layers and the number of neurons in each hidden layer; the number of input-layer neurons equals the number of input parameters, and the number of output-layer neurons equals the number of output parameters.
Step 4. GFET model construction and training.
Step 5. Denormalize the output current predicted by the GFET model to obtain the model output value.
Step 6. Implement the GFET model in Verilog-A.
Preferably, the data normalization of step 2 is performed as

m = 2(a - a_min)/(a_max - a_min) - 1 (formula one);

where a_max and a_min are, respectively, the maximum and minimum of the data a before normalization, and m is the normalized value.
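As an illustration outside the patent text, formula one can be sketched in Python (the function name and the sample voltage sweep are ours, not the patent's):

```python
def normalize(data):
    # Formula one: m = 2*(a - a_min)/(a_max - a_min) - 1, mapping data onto [-1, 1].
    a_min, a_max = min(data), max(data)
    return [2 * (a - a_min) / (a_max - a_min) - 1 for a in data]

# Hypothetical gate-source voltage sweep before training.
vgs = [0.0, 0.5, 1.0, 1.5, 2.0]
print(normalize(vgs))  # [-1.0, -0.5, 0.0, 0.5, 1.0]
```

The extrema of each input quantity map exactly to -1 and 1, which matches the range of the hidden-layer transfer function.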
Preferably, the hidden-layer structure of the BP network GFET model in step 3 is determined mainly by trial and error, through the following sub-steps:
Step 3.1: select a two-hidden-layer structure.
Step 3.2: after the number of hidden layers is fixed, set two criteria for judging whether the BP network GFET model output is good; then choose a range of neuron numbers and scan over it in a loop, selecting the network structure with the best result. The two criteria are:
Criterion 1: errorsum, the sum of the absolute values of the errors between the test-sample target currents and the currents the GFET model predicts for the test samples; the output is judged good if its value is less than 0.01.
Criterion 2: errormse, the mean squared error between the test-sample target vector and the currents the GFET model predicts for the test samples; the output is judged good if its value is less than 10e-10.
In the loop scan over neuron numbers, select the BP network structure with the smaller errorsum and errormse; if several network structures have similar errorsum and errormse values, select the one with the fewest neurons.
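The two criteria and the tie-breaking rule can be sketched as follows (an editorial illustration; the candidate structures and their error values are invented for the example):

```python
def errorsum(target, predicted):
    # Criterion 1: sum of absolute errors over the test samples.
    return sum(abs(t - p) for t, p in zip(target, predicted))

def errormse(target, predicted):
    # Criterion 2: mean squared error over the test samples.
    return sum((t - p) ** 2 for t, p in zip(target, predicted)) / len(target)

# Invented scan results: hidden-layer sizes -> (errorsum, errormse).
results = {
    (10, 8):  (0.020, 5e-9),
    (15, 12): (0.008, 8e-11),
    (20, 16): (0.008, 8e-11),  # ties (15, 12) on error but uses more neurons
}
# Prefer the smaller errors; break ties by the fewest total neurons.
best = min(results, key=lambda k: (results[k], sum(k)))
print(best)  # (15, 12)
```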
Preferably, the GFET model construction and training of step 4 comprise the following sub-steps:
Step 4.1: the transfer function of the hidden layers of the BP network GFET model is the tangent sigmoid function

y = (e^x - e^(-x))/(e^x + e^(-x)) (formula two);

and the transfer function of the output layer is the linear function

y = x (formula three);

in both formulas x is the transfer-function input and y is the transfer-function output, i.e. the neuron output. The input x of a neuron's transfer function is the weighted sum of all the neuron's inputs, i.e. of the outputs of all neurons in the previous layer:

x = Σ_{i=1}^{N} W_i·O_i + b (formula four);

where N is the number of inputs, O_i is the i-th input of the neuron, i.e. the output of the i-th neuron of the previous layer, W_i is the corresponding weight, and b is the neuron's threshold.
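The forward propagation of formulas two, three and four can be sketched as below (an editorial illustration; the tiny 2-2-1 network and its placeholder weights are ours, a real model would use the trained values):

```python
import math

def tansig(x):
    # Formula two: tangent sigmoid transfer function of the hidden layers.
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

linear = lambda x: x  # formula three: output-layer transfer function

def neuron(inputs, weights, b, transfer):
    # Formula four: weighted sum of previous-layer outputs plus threshold b.
    return transfer(sum(w * o for w, o in zip(weights, inputs)) + b)

def forward(inputs, layers):
    # layers: list of (weight rows, thresholds, transfer function) per layer.
    out = inputs
    for W, B, f in layers:
        out = [neuron(out, w, b, f) for w, b in zip(W, B)]
    return out

# Placeholder 2-2-1 network; the patent's model uses a 4-neuron input layer.
layers = [
    ([[0.5, -0.3], [0.1, 0.8]], [0.0, 0.1], tansig),
    ([[1.0, -1.0]], [0.0], linear),
]
print(forward([0.2, -0.4], layers))
```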
Step 4.2: the BP network model is trained with the Levenberg-Marquardt algorithm, through the following sub-steps:
Step 4.2.1: calculate the error between the target current and the model-predicted current,

e_p = T_p - y_p (formula five);

where T_p and y_p are, respectively, the target current of the p-th group of training data and the corresponding model-predicted current.
Step 4.2.2: take the partial derivative of e_p with respect to each neuron weight and threshold of step 4.1, and assemble the resulting values into the Jacobian matrix J, whose element in row p and column j is ∂e_p/∂x_j (formulas six and seven); here M is the total number of training samples, D is the number of all weights of the BP network model, F is the number of all thresholds of the network, J has size M × (D + F), and x_j denotes the j-th weight or threshold.
Step 4.2.3: update the weights and thresholds as

x_new = x_old - (J^T·J + λ·I)^(-1)·J^T·E (formula eight);

E = [e_1 … e_p … e_M]^T (formula nine);

where x_new denotes the updated weights and thresholds, x_old the corresponding weights and thresholds before the update, λ is the damping factor, and I is the identity matrix of order D + F; in formulas eight and nine, E is the error matrix composed of the e_p of formula five.
Step 4.2.4: repeat the above process until the sum of squared errors of the BP network model is smaller than the network's set goal or the number of iterations reaches the set maximum.
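One Levenberg-Marquardt update per formulas five through nine can be sketched as follows (an editorial illustration using NumPy and a toy linear model in place of the BP network; a finite-difference Jacobian stands in for the analytic derivatives of step 4.2.2):

```python
import numpy as np

def lm_step(w, X, T, predict, lam=1e-3, eps=1e-6):
    """One Levenberg-Marquardt update: w - (J^T J + lam*I)^-1 J^T E (formula eight)."""
    E = T - predict(w, X)          # formula five, stacked per formula nine
    # Formulas six/seven: Jacobian of the errors w.r.t. each parameter,
    # estimated here by finite differences for brevity.
    J = np.zeros((len(T), len(w)))
    for j in range(len(w)):
        dw = np.zeros_like(w); dw[j] = eps
        J[:, j] = ((T - predict(w + dw, X)) - E) / eps
    # Damped Gauss-Newton step over all weights and thresholds.
    delta = np.linalg.solve(J.T @ J + lam * np.eye(len(w)), J.T @ E)
    return w - delta

# Toy model y = w0*x + w1 stands in for the BP network's forward pass.
predict = lambda w, X: w[0] * X + w[1]
X = np.array([0.0, 1.0, 2.0, 3.0])
T = 2.0 * X + 1.0                  # targets generated with w = (2, 1)
w = np.array([0.0, 0.0])
for _ in range(20):                # step 4.2.4: iterate until the error is small
    w = lm_step(w, X, T, predict)
print(w)                           # approaches (2, 1)
```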
Preferably, the output-current denormalization of step 5 is performed as

c = (y_p + 1)(c_max - c_min)/2 + c_min (formula ten);

where c_max and c_min are, respectively, the maximum and minimum of the output current before normalization, y_p is the current predicted by the GFET model, and c is the model output current after denormalization.
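Formula ten is the exact inverse of formula one; a small sketch (editorial illustration with a hypothetical current range):

```python
def denormalize(y_p, c_min, c_max):
    # Formula ten: c = (y_p + 1)*(c_max - c_min)/2 + c_min.
    return (y_p + 1) * (c_max - c_min) / 2 + c_min

def normalize_value(c, c_min, c_max):
    # Formula one applied to a single current sample.
    return 2 * (c - c_min) / (c_max - c_min) - 1

# Hypothetical drain-current range in amperes.
c_min, c_max = 0.0, 4e-3
print(denormalize(-1.0, c_min, c_max), denormalize(1.0, c_min, c_max))  # 0.0 0.004
```

A predicted value of -1 recovers c_min and +1 recovers c_max, so the model output lands back on the physical current scale.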
Preferably, the Verilog-A implementation of the GFET model in step 6 proceeds as follows: first, before the data are input, process them with the normalization method of step 2; then, according to the BP network topology of step 3 and the forward-propagation formulas of step 4 (formulas two, three and four), express the weights connecting the layers, the thresholds of each layer and the transfer functions in Verilog-A; finally, denormalize the output data with the method of step 5 to obtain the output channel current. The normalization and denormalization functions are likewise expressed in Verilog-A, yielding the GFET model for circuit design and simulation.
Compared with the prior art, the beneficial effect of the invention is a new GFET modeling method: by training a BP neural network on a finite amount of data, it builds a fast and accurate circuit-level GFET model that does not rely on analytical theoretical formulas.
Brief description of the drawings
Fig. 1 is a schematic of the topology of the BP network model of the embodiment of the present invention;
Fig. 2 is a flowchart of the GFET neural network model training of the embodiment of the present invention;
Fig. 3 shows the output characteristic curves of the GFET modeling example of the embodiment of the present invention.
Embodiment
To make the present invention easy for those of ordinary skill in the art to understand and implement, it is described in further detail below with reference to the drawings and embodiments. It should be understood that the embodiments described here serve only to illustrate and explain the present invention and are not intended to limit it.
The GFET modeling method based on a BP neural network provided by the present invention comprises the following steps:
Step 1. Data acquisition: collect a certain amount of GFET input-output data for training and testing the BP network model. The network model obtained after training and testing can then predict GFET performance under other input parameters. The input data comprise the gate-source voltage V_gs, the drain-source voltage V_ds, the channel width W and the channel length L; the output data are the channel current I_d. After collection, all data are randomly divided into training data and test data in a certain proportion.
Step 2. Data preprocessing: normalize the data obtained in step 1 to a uniform range. The normalization is performed as

m = 2(a - a_min)/(a_max - a_min) - 1 (formula one);

where a_max and a_min are, respectively, the maximum and minimum of the data a before normalization, and m is the normalized value.
Step 3. Determine the BP network GFET model structure to be used, including the number of hidden layers and the number of neurons in each hidden layer; in this example the input layer has 4 neurons and the output layer has 1 neuron.
The hidden-layer structure of the BP network GFET model is determined mainly by trial and error, through the following sub-steps:
Step 3.1: a BP network with one hidden layer can approximate any continuous function, but for practical modeling problems a two-hidden-layer structure can sometimes reduce the total number of neurons, speed up training and improve accuracy. This embodiment selects a two-hidden-layer structure, as shown in Fig. 1.
Step 3.2: after the number of hidden layers is fixed, set two criteria for judging whether the BP network GFET model output is good; then choose a range of neuron numbers and scan over it in a loop, selecting the network structure with the best result. This embodiment selects 15 neurons for the first hidden layer and 12 for the second. The two criteria are:
Criterion 1: errorsum, the sum of the absolute values of the errors between the test-sample target currents and the currents the GFET model predicts for the test samples; in this example the output is judged good if its value is less than 0.01.
Criterion 2: errormse, the mean squared error between the test-sample target vector and the currents the GFET model predicts for the test samples; in this example the output is judged good if its value is less than 10e-10.
In the loop scan over neuron numbers, select the BP network structure with the smaller errorsum and errormse; if several network structures have similar errorsum and errormse values, select the one with the fewest neurons.
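For the embodiment's 4-15-12-1 structure, the quantities D (total weights) and F (total thresholds) that appear in the training formulas can be counted directly; a small editorial sketch:

```python
def count_parameters(layer_sizes):
    # D: weights between consecutive layers; F: one threshold per non-input neuron.
    D = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    F = sum(layer_sizes[1:])
    return D, F

# Embodiment structure: 4 inputs, hidden layers of 15 and 12 neurons, 1 output.
D, F = count_parameters([4, 15, 12, 1])
print(D, F, D + F)  # 252 28 280; the identity matrix I in formula eight has order 280
```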
Step 4. GFET model construction and training; see Fig. 2. The specific implementation comprises the following sub-steps:
Step 4.1: the transfer function of the hidden layers of the BP network GFET model is the tangent sigmoid function

y = (e^x - e^(-x))/(e^x + e^(-x)) (formula two);

and the transfer function of the output layer is the linear function

y = x (formula three);

in both formulas x is the transfer-function input and y is the transfer-function output, i.e. the neuron output. The input x of a neuron's transfer function is the weighted sum of all the neuron's inputs, i.e. of the outputs of all neurons in the previous layer:

x = Σ_{i=1}^{N} W_i·O_i + b (formula four);

where N is the number of inputs, O_i is the i-th input of the neuron, i.e. the output of the i-th neuron of the previous layer, W_i is the corresponding weight, and b is the neuron's threshold.
Step 4.2: the BP network model is trained with the Levenberg-Marquardt algorithm, through the following sub-steps:
Step 4.2.1: calculate the error between the target current and the model-predicted current,

e_p = T_p - y_p (formula five);

where T_p and y_p are, respectively, the target current of the p-th group of training data and the corresponding model-predicted current.
Step 4.2.2: take the partial derivative of e_p with respect to each neuron weight and threshold of step 4.1, and assemble the resulting values into the Jacobian matrix J, whose element in row p and column j is ∂e_p/∂x_j (formulas six and seven); here M is the total number of training samples, D is the number of all weights of the BP network model, F is the number of all thresholds of the network, J has size M × (D + F), and x_j denotes the j-th weight or threshold.
Step 4.2.3: update the weights and thresholds as

x_new = x_old - (J^T·J + λ·I)^(-1)·J^T·E (formula eight);

E = [e_1 … e_p … e_M]^T (formula nine);

where x_new denotes the updated weights and thresholds, x_old the corresponding weights and thresholds before the update, λ is the damping factor, and I is the identity matrix of order D + F; in formulas eight and nine, E is the error matrix composed of the e_p of formula five.
Step 4.2.4: repeat the above process until the sum of squared errors of the BP network model is smaller than the network's set goal or the number of iterations reaches the set maximum.
Step 5. Denormalize the output current predicted by the GFET model to obtain the model output value. The denormalization is performed as

c = (y_p + 1)(c_max - c_min)/2 + c_min (formula ten);

where c_max and c_min are, respectively, the maximum and minimum of the output current before normalization, y_p is the current predicted by the GFET model, and c is the model output current after denormalization.
Step 6. Implement the GFET model in Verilog-A. The specific method is: first, before the data are input, process them with the normalization method of step 2; then, according to the BP network topology of step 3 and the forward-propagation formulas of step 4 (formulas two, three and four), express the weights connecting the layers, the thresholds of each layer and the transfer functions in Verilog-A; finally, denormalize the output data with the method of step 5 to obtain the output channel current. The normalization and denormalization functions are likewise expressed in Verilog-A. The result is the GFET model for circuit design and simulation.
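As an editorial illustration of step 6 (not the patent's actual code), a short Python helper can emit the Verilog-A expression for a single neuron; the tanh form of formula two and all identifier names here are our assumptions:

```python
def neuron_expr(inputs, weights, b, hidden=True):
    # Emit the Verilog-A right-hand side of one neuron:
    # transfer(sum of W_i*O_i plus threshold b), per formulas two, three and four.
    s = " + ".join(f"({w:+g})*{o}" for w, o in zip(weights, inputs))
    x = f"{s} + ({b:+g})"
    return f"tanh({x})" if hidden else f"({x})"

# Hypothetical first hidden neuron fed by the normalized terminal voltages.
print("h1 = " + neuron_expr(["n_Vgs", "n_Vds"], [0.41, -0.27], 0.05) + ";")
# h1 = tanh((+0.41)*n_Vgs + (-0.27)*n_Vds + (+0.05));
```

Repeating this per neuron and per layer, with trained weights substituted, yields the analog-block assignments of the Verilog-A model.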
See Fig. 3. In this embodiment, 3922 groups of data were collected in total, of which 3000 groups were used for training and 922 groups for testing. The modeling achieved good results: the average relative error on the test data is less than 1%. The example shows that modeling a GFET with a BP neural network yields good results, with the advantages of short computation time and no need for a complete GFET device theory.
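The accuracy figure above is an average relative error; as an editorial illustration, the metric can be computed as follows (the sample currents are invented):

```python
def mean_relative_error(target, predicted):
    # Average of |predicted - target| / |target| over the test samples.
    return sum(abs(p - t) / abs(t) for t, p in zip(target, predicted)) / len(target)

# Invented test currents in amperes: predictions within about 0.5% of the targets.
target    = [1.0e-3, 2.0e-3, 4.0e-3]
predicted = [1.004e-3, 1.99e-3, 4.02e-3]
print(mean_relative_error(target, predicted))  # roughly 0.0047, i.e. under 1%
```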
It should be understood that the parts of this specification not elaborated in detail belong to the prior art.
It should be understood that the above description of the preferred embodiment is relatively detailed and therefore should not be regarded as limiting the scope of patent protection of the present invention. Under the inspiration of the present invention, those of ordinary skill in the art may make substitutions or variations without departing from the scope protected by the claims of the present invention, and these all fall within the protection scope of the present invention; the requested scope of protection shall be determined by the appended claims.
Claims (6)
1. A GFET modeling method based on a BP neural network, characterized in that it comprises the following steps:
Step 1. Data acquisition: collect a certain amount of GFET input-output data for training and testing the BP network model; the network model obtained after training and testing can then predict GFET performance under other input parameters. The input data comprise the gate-source voltage V_gs, the drain-source voltage V_ds, the channel width W and the channel length L; the output data are the channel current I_d. After collection, all data are randomly divided into training data and test data in a certain proportion.
Step 2. Data preprocessing: normalize the data obtained in step 1 to a uniform range.
Step 3. Determine the BP network GFET model structure to be used, including the number of hidden layers and the number of neurons in each hidden layer; the number of input-layer neurons equals the number of input parameters, and the number of output-layer neurons equals the number of output parameters.
Step 4. GFET model construction and training.
Step 5. Denormalize the output current predicted by the GFET model to obtain the model output value.
Step 6. Implement the GFET model in Verilog-A.
2. The GFET modeling method based on a BP neural network according to claim 1, characterized in that the data normalization of step 2 is performed as

m = 2(a - a_min)/(a_max - a_min) - 1 (formula one);

where a_max and a_min are, respectively, the maximum and minimum of the data a before normalization, and m is the normalized value.
3. The GFET modeling method based on a BP neural network according to claim 1, characterized in that the hidden-layer structure of the BP network GFET model in step 3 is determined mainly by trial and error, through the following sub-steps:
Step 3.1: select a two-hidden-layer structure.
Step 3.2: after the number of hidden layers is fixed, set two criteria for judging whether the BP network GFET model output is good; then choose a range of neuron numbers and scan over it in a loop, selecting the network structure with the best result. The two criteria are:
Criterion 1: errorsum, the sum of the absolute values of the errors between the test-sample target currents and the currents the GFET model predicts for the test samples; the output is judged good if its value is less than 0.01.
Criterion 2: errormse, the mean squared error between the test-sample target vector and the currents the GFET model predicts for the test samples; the output is judged good if its value is less than 10e-10.
In the loop scan over neuron numbers, select the BP network structure with the smaller errorsum and errormse; if several network structures have similar errorsum and errormse values, select the one with the fewest neurons.
4. The GFET modeling method based on a BP neural network according to claim 1, characterized in that the GFET model construction and training of step 4 comprise the following sub-steps:
Step 4.1: the transfer function of the hidden layers of the BP network GFET model is the tangent sigmoid function

y = (e^x - e^(-x))/(e^x + e^(-x)) (formula two);

and the transfer function of the output layer is the linear function

y = x (formula three);

in both formulas x is the transfer-function input and y is the transfer-function output, i.e. the neuron output. The input x of a neuron's transfer function is the weighted sum of all the neuron's inputs, i.e. of the outputs of all neurons in the previous layer:

x = Σ_{i=1}^{N} W_i·O_i + b (formula four);

where N is the number of inputs, O_i is the i-th input of the neuron, i.e. the output of the i-th neuron of the previous layer, W_i is the corresponding weight, and b is the neuron's threshold.
Step 4.2: the BP network model is trained with the Levenberg-Marquardt algorithm, through the following sub-steps:
Step 4.2.1: calculate the error between the target current and the model-predicted current,

e_p = T_p - y_p (formula five);

where T_p and y_p are, respectively, the target current of the p-th group of training data and the corresponding model-predicted current.
Step 4.2.2: take the partial derivative of e_p with respect to each neuron weight and threshold of step 4.1, and assemble the resulting values into the Jacobian matrix J, whose element in row p and column j is ∂e_p/∂x_j (formulas six and seven); here M is the total number of training samples, D is the number of all weights of the BP network model, F is the number of all thresholds of the network, J has size M × (D + F), and x_j denotes the j-th weight or threshold.
Step 4.2.3: update the weights and thresholds as

x_new = x_old - (J^T·J + λ·I)^(-1)·J^T·E (formula eight);

E = [e_1 … e_p … e_M]^T (formula nine);

where x_new denotes the updated weights and thresholds, x_old the corresponding weights and thresholds before the update, λ is the damping factor, and I is the identity matrix of order D + F; in formulas eight and nine, E is the error matrix composed of the e_p of formula five.
Step 4.2.4: repeat the above process until the sum of squared errors of the BP network model is smaller than the network's set goal or the number of iterations reaches the set maximum.
5. The GFET modeling method based on a BP neural network according to claim 1, characterized in that the output-current denormalization of step 5 is performed as

c = (y_p + 1)(c_max - c_min)/2 + c_min (formula ten);

where c_max and c_min are, respectively, the maximum and minimum of the output current before normalization, y_p is the current predicted by the GFET model, and c is the model output current after denormalization.
6. The GFET modeling method based on a BP neural network according to claim 4, characterized in that the Verilog-A implementation of the GFET model in step 6 proceeds as follows: first, before the data are input, process them with the normalization method of step 2; then, according to the BP network topology of step 3 and the forward-propagation formulas of step 4 (formulas two, three and four), express the weights connecting the layers, the thresholds of each layer and the transfer functions in Verilog-A; finally, denormalize the output data with the method of step 5 to obtain the output channel current; the normalization and denormalization functions are likewise expressed in Verilog-A, yielding the GFET model for circuit design and simulation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510364868.4A CN104915515A (en) | 2015-06-26 | 2015-06-26 | BP neural network based GFET modeling method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104915515A (en) | 2015-09-16 |
Family
ID=54084578
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510364868.4A Pending CN104915515A (en) | 2015-06-26 | 2015-06-26 | BP neural network based GFET modeling method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104915515A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106126815A (en) * | 2016-06-23 | 2016-11-16 | 中国科学院微电子研究所 | A kind of circuit emulation method and device |
CN108111363A (en) * | 2016-11-25 | 2018-06-01 | 厦门雅迅网络股份有限公司 | It is a kind of to analyze the method and device that whether communication linkage is abnormal in car networking system |
CN108875172A (en) * | 2018-06-05 | 2018-11-23 | 天津工业大学 | A kind of sic filed effect tube model neural network based |
CN109292567A (en) * | 2018-02-28 | 2019-02-01 | 武汉大学 | A kind of elevator faults prediction technique based on BP neural network |
CN109409014A (en) * | 2018-12-10 | 2019-03-01 | 福州大学 | The calculation method of shining time per year based on BP neural network model |
CN110647989A (en) * | 2019-09-16 | 2020-01-03 | 长春师范大学 | Graphene defect modification prediction method based on neural network |
CN110968949A (en) * | 2019-11-25 | 2020-04-07 | 北京交通大学 | Modeling method of electromagnetic sensitivity prediction model of high-speed train vehicle-mounted equipment |
WO2022100118A1 (en) * | 2020-11-13 | 2022-05-19 | 华为技术有限公司 | Model processing method and related device |
CN115345106A (en) * | 2022-07-14 | 2022-11-15 | 贝叶斯电子科技(绍兴)有限公司 | Verilog-A model construction method, system and equipment of electronic device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663495A (en) * | 2012-02-22 | 2012-09-12 | 天津大学 | Neural net data generation method for nonlinear device modeling |
CN103105246A (en) * | 2012-12-31 | 2013-05-15 | 北京京鹏环球科技股份有限公司 | Greenhouse environment forecasting feedback method of back propagation (BP) neural network based on improvement of genetic algorithm |
CN104199536A (en) * | 2014-07-23 | 2014-12-10 | 西安空间无线电技术研究所 | FPGA dynamic power consumption estimation method based on BP neural network |
- 2015-06-26: application CN201510364868.4A filed; published as CN104915515A (en); status Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663495A (en) * | 2012-02-22 | 2012-09-12 | 天津大学 | Neural net data generation method for nonlinear device modeling |
CN103105246A (en) * | 2012-12-31 | 2013-05-15 | 北京京鹏环球科技股份有限公司 | Greenhouse environment forecasting feedback method of back propagation (BP) neural network based on improvement of genetic algorithm |
CN104199536A (en) * | 2014-07-23 | 2014-12-10 | 西安空间无线电技术研究所 | FPGA dynamic power consumption estimation method based on BP neural network |
Non-Patent Citations (1)
Title |
---|
张济 et al.: "A Neural Network MOSFET Model with Good Generalization Capability" (一种具备良好泛化能力的神经网络MOSFET模型), Research and Progress of Solid State Electronics (固体电子学研究与进展) * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106126815A (en) * | 2016-06-23 | 2016-11-16 | 中国科学院微电子研究所 | A kind of circuit emulation method and device |
CN106126815B (en) * | 2016-06-23 | 2018-12-25 | 中国科学院微电子研究所 | A kind of circuit emulation method and device |
CN108111363A (en) * | 2016-11-25 | 2018-06-01 | 厦门雅迅网络股份有限公司 | It is a kind of to analyze the method and device that whether communication linkage is abnormal in car networking system |
CN109292567A (en) * | 2018-02-28 | 2019-02-01 | 武汉大学 | A kind of elevator faults prediction technique based on BP neural network |
CN108875172A (en) * | 2018-06-05 | 2018-11-23 | 天津工业大学 | A kind of sic filed effect tube model neural network based |
CN109409014A (en) * | 2018-12-10 | 2019-03-01 | 福州大学 | The calculation method of shining time per year based on BP neural network model |
CN109409014B (en) * | 2018-12-10 | 2021-05-04 | 福州大学 | BP neural network model-based annual illuminable time calculation method |
CN110647989A (en) * | 2019-09-16 | 2020-01-03 | 长春师范大学 | Graphene defect modification prediction method based on neural network |
CN110968949A (en) * | 2019-11-25 | 2020-04-07 | 北京交通大学 | Modeling method of electromagnetic sensitivity prediction model of high-speed train vehicle-mounted equipment |
WO2022100118A1 (en) * | 2020-11-13 | 2022-05-19 | 华为技术有限公司 | Model processing method and related device |
CN115345106A (en) * | 2022-07-14 | 2022-11-15 | 贝叶斯电子科技(绍兴)有限公司 | Verilog-A model construction method, system and equipment of electronic device |
CN115345106B (en) * | 2022-07-14 | 2023-10-17 | 贝叶斯电子科技(绍兴)有限公司 | Verilog-A model construction method, system and equipment for electronic device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104915515A (en) | BP neural network based GFET modeling method | |
CN108898215B (en) | Intelligent sludge bulking identification method based on two-type fuzzy neural network | |
CN107590565A (en) | A kind of method and device for building building energy consumption forecast model | |
CN106777775B (en) | Neural network method for predicting river flow based on multi-section water level | |
CN103606006B (en) | Sludge volume index (SVI) soft measuring method based on self-organized T-S fuzzy nerve network | |
CN104408562B (en) | A kind of photovoltaic system generating efficiency comprehensive estimation method based on BP neural network | |
CN106127387A (en) | A kind of platform district based on BP neutral net line loss per unit appraisal procedure | |
CN105574326A (en) | Self-organizing fuzzy neural network-based soft measurement method for effluent ammonia-nitrogen concentration | |
CN107705556A (en) | A kind of traffic flow forecasting method combined based on SVMs and BP neural network | |
CN112990500B (en) | Transformer area line loss analysis method and system based on improved weighted gray correlation analysis | |
CN108304674A (en) | A kind of railway prediction of soft roadbed settlement method based on BP neural network | |
CN105552902A (en) | Super-short-term forecasting method for load of terminal of power distribution network based on real-time measurement of feeder end | |
CN108090629A (en) | Load forecasting method and system based on nonlinear auto-companding neutral net | |
CN107688863A (en) | The short-term wind speed high accuracy combination forecasting method that adaptive iteration is strengthened | |
CN103839106A (en) | Ball grinding mill load detecting method for optimizing BP neural network based on genetic algorithm | |
CN1987477B (en) | Interlinked fitting method for heavy metals in river channel sediment | |
CN103637800A (en) | Eight-section impedance model based body composition analysis method | |
CN114626769A (en) | Operation and maintenance method and system for capacitor voltage transformer | |
CN111047476A (en) | Dam structure safety monitoring accurate prediction method and system based on RBF neural network | |
CN111914488B (en) | Data area hydrologic parameter calibration method based on antagonistic neural network | |
CN104156878A (en) | Method for determining weight of evaluation index of rural power grid renovation and upgrading project | |
CN106355273A (en) | Predication system and predication method for after-stretching performance of nuclear material radiation based on extreme learning machine | |
CN103279030A (en) | Bayesian framework-based dynamic soft measurement modeling method and device | |
CN103675010B (en) | The industrial melt index soft measurement instrument of support vector machine and method | |
Zhang et al. | Comparing BP and RBF neural network for forecasting the resident consumer level by MATLAB |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20150916 |
|
RJ01 | Rejection of invention patent application after publication |