CN111896038B - Semiconductor process data correction method based on correlation entropy and shallow neural network - Google Patents

Semiconductor process data correction method based on correlation entropy and shallow neural network

Info

Publication number
CN111896038B
CN111896038B (application CN202010591258.9A)
Authority
CN
China
Prior art keywords
layer
variable
neural network
function
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010591258.9A
Other languages
Chinese (zh)
Other versions
CN111896038A (en)
Inventor
谢磊
吴小菲
徐浩杰
陈启明
苏宏业
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010591258.9A priority Critical patent/CN111896038B/en
Publication of CN111896038A publication Critical patent/CN111896038A/en
Application granted granted Critical
Publication of CN111896038B publication Critical patent/CN111896038B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01DMEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D18/00Testing or calibrating apparatus or arrangements provided for in groups G01D1/00 - G01D15/00
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention discloses a semiconductor process data correction method based on correlation entropy and a shallow neural network, which comprises the following steps: (1) collecting the output signals of the process variable sensors corresponding to the variables to be corrected; (2) feeding each variable into the established shallow neural network model, extracting the correlation information of the variables layer by layer, and passing the output of each layer through a set function; collecting the variable output of the last layer of the model, comparing it with the input variables, and establishing a regression model; (3) saving the parameter weights of the current model and calculating the final objective function value; if the stop condition is not met, updating the parameter weights and repeating step (2) until the stop condition is reached; (4) changing the number of network layers and repeating steps (2) to (3) until the maximum number of network layers is reached; (5) selecting the number of network layers with the best correction result, storing the parameters of each layer, and recomputing new data to be corrected to obtain the corrected values. With the method, data correction results with lower errors can be obtained.

Description

Semiconductor process data correction method based on correlation entropy and shallow neural network
Technical Field
The invention relates to the field of process monitoring in industrial systems, and in particular to a semiconductor process data correction method based on correlation entropy and a shallow neural network.
Background
In recent years, data-driven methods such as process monitoring and soft sensing have become established as powerful process control tools in the semiconductor industry. The reliability and accuracy of the measured process data are therefore critical to the efficiency, profitability and safe operation of a plant in the chemical industry. However, due to process variability and the limitations of measurement technology, online measured data are often disturbed by random errors and gross errors. By improving the raw data set, process performance and maintenance efficiency can be significantly improved. Data rectification, which mitigates the effects of errors in raw data, has therefore become an important area of research in data analysis.
In the semiconductor industry, data rectification is also referred to as bias estimation. Researchers have combined statistical information (distribution, variance, etc.) with known models to select efficient estimation methods, improve the original mean-square-error objective function, and eliminate bias. Although these methods perform well in engineering processes, they are all model-based techniques, and the key to effective data correction is the adoption of a good process model. If the model does not faithfully represent the process, the corrected data will be distorted by the model mismatch, and for some real industrial processes it is difficult to obtain an accurate process model. On the other hand, gross errors are usually handled by preprocessing before the model is applied; such preprocessing, however, only considers the statistics of a single variable, ignores the relationships among the other variables in the process, and can lead to improper correction results.
Against this background, a method is sought that can mine the relationships among the variables directly from the collected raw sample data and use these relationships as the basis for correction, so that better corrected values are obtained and, in turn, the learned relationships are further refined. Such a loop helps to obtain final, relatively accurate corrected values.
Disclosure of Invention
The invention discloses a semiconductor process data correction method based on correlation entropy and a shallow neural network, which is applicable to process measurements containing random errors and gross errors, requires only routine operating data, and does not need any prior knowledge or preprocessing.
A semiconductor process data correction method based on correlation entropy and a shallow neural network comprises the following steps:
(1) for a control process in which disturbances exist, collecting the output signal of the process variable sensor corresponding to the variable to be corrected;
(2) directly inputting the collected variables into the constructed shallow neural network model, extracting the relevant information in the variables layer by layer, and transmitting the output of each layer through a set function;
collecting the variable output of the last layer of the model, comparing the variable output with the numerical value of the input variable, and establishing a regression model;
(3) saving the parameter weights of the current shallow neural network model and calculating the final objective function value, wherein the objective function adopts a correlation entropy (correntropy) function; if the stop condition is not met, updating the parameter weights and repeating step (2) until the stop condition is reached;
(4) changing the network layer number, and repeating the steps (2) to (3) until the maximum network layer number is reached;
(5) selecting the network layer number with the best correction result; and storing the parameter values of each layer, inputting the new data to be corrected into the shallow neural network model, recalculating and obtaining a variable correction value.
The invention can reduce the interference of random and gross errors, improve the raw data, and significantly improve process performance and maintenance efficiency, thereby reducing production losses; it therefore has important practical value for improving economic benefits.
Unlike traditional model-based methods, the method does not rely on the accuracy of prior knowledge: it directly builds a model that mines the relationships in the data and uses them to adjust data errors, and in turn the corrected data yield a more accurate variable model. It also differs from traditional preprocessing methods: it performs data correction while learning the relationships, introducing correntropy directly into the objective function, so that the relationships among all variables are considered rather than the characteristics of a single variable, which yields a better correction result.
In step (1), the acquired output signals contain random errors and gross errors, and can be fed to the neural network model in step (2) without any preprocessing.
In step (2), both the input and the output of the model are the measured variables themselves, so that the relationships between the variables can be obtained.
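As a concrete, purely hypothetical illustration of the kind of input assembled in step (1), the sketch below simulates a D-dimensional block of sensor readings corrupted by Gaussian random noise and a small fraction of gross errors; the signal shape, dimensions, noise levels and function names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_measurements(n_samples=500, dim=6, gross_rate=0.05):
    """Hypothetical sensor data: a smooth 'true' process signal plus Gaussian
    (random) noise, with a small fraction of readings corrupted by gross errors."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    true_vals = np.sin(2.0 * np.pi * t * np.arange(1, dim + 1))       # stand-in process values
    noisy = true_vals + rng.normal(scale=0.05, size=true_vals.shape)  # random errors
    mask = rng.random(true_vals.shape) < gross_rate                   # sparse gross errors
    noisy[mask] += rng.normal(loc=1.0, scale=0.5, size=int(mask.sum()))
    return noisy, true_vals

x0, x_true = simulate_measurements()
print(x0.shape)  # (500, 6): raw measurements fed to the model without any preprocessing
```

Such a matrix x0 (one row per sample, one column per measured variable) is what the following steps operate on.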
The specific process of the step (2) is as follows:
(2-1) x_0 ∈ R^D, x_0 = [x_{0,1}, x_{0,2}, …, x_{0,D}]^T denotes the D-dimensional input of the variables containing errors, and x_{l-1}, l = 1, 2, …, L, denotes the input of the l-th layer; the model has L layers in total, and the number of nodes in each layer equals the input variable dimension D. The weight matrices W^ψ and W^φ and the bias vectors b^ψ and b^φ respectively define the parameters of the linear and nonlinear functions in the transfer function of the network. The output of the l-th layer is then obtained as follows:
h_l^ψ = W^ψ x_{l-1} + b^ψ,
h_l^φ = W^φ x_{l-1} + b^φ,
where h_l^ψ and h_l^φ are the hidden nodes of the network; they are passed through the corresponding linear activation function (ψ) and nonlinear activation function (φ), respectively, and combined to give the output vector x_l of the l-th layer.
(2-2) After continuous iteration over multiple layers, the output of the last layer of the neural network model is the corrected value x_L:
x_1 = F(x_0),
x_2 = F(x_1),
…,
x_L = F(x_{L-1}),
where the function F represents the hidden-node computation and the corresponding linear and nonlinear activation operations described in (2-1).
(2-3) The output of the neural network model is compared with the values of the input variables to establish a regression model.
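The following Python sketch mirrors the layer structure described in (2-1) and the iteration x_l = F(x_{l-1}) in (2-2). Because the exact formula combining the linear branch ψ(h_l^ψ) and the nonlinear branch φ(h_l^φ) appears only as images in the original document, the combination used here (ψ taken as the identity, φ as a sigmoid, and the two branches added) is an assumption for illustration; the function names layer and forward are likewise illustrative.

```python
import numpy as np

def sigmoid(z):
    # nonlinear activation phi; the embodiment below states that a sigmoid is selected
    return 1.0 / (1.0 + np.exp(-z))

def layer(x, W_psi, b_psi, W_phi, b_phi):
    """One layer F(.): hidden nodes h_psi, h_phi from the shared weights/biases,
    passed through the linear (psi) and nonlinear (phi) activations and combined.
    The additive combination below is an assumption, not the patent's exact formula."""
    h_psi = x @ W_psi.T + b_psi        # hidden node of the linear branch
    h_phi = x @ W_phi.T + b_phi        # hidden node of the nonlinear branch
    return h_psi + sigmoid(h_phi)      # assumed: psi = identity, branches added

def forward(x0, params, n_layers):
    """x_l = F(x_{l-1}) repeated for L layers; the same parameters are shared by every layer.
    x0 has one row per sample and D columns, so every layer keeps dimension D."""
    x = x0
    for _ in range(n_layers):
        x = layer(x, *params)
    return x                           # x_L, the corrected values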
In step (3), the correlation entropy (correntropy) objective function is expressed as:
J = Σ_{d=1}^{D} k_{σ_d}(ε_d),
k_{σ_d}(ε_d) = exp(−ε_d² / (2σ_d²)),
ε_d = x_{0,d} − x_{L,d},
where k_{σ_d}(·) is the correntropy kernel function, σ_d denotes the adjustable parameter of the corresponding d-th variable in the correntropy function, and ε_d is the difference between the measured value and the corrected value of the d-th variable.
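A minimal sketch of the correntropy objective as reconstructed above, using the standard Gaussian kernel; the function name and the scalar-or-vector handling of σ are illustrative choices.

```python
import numpy as np

def correntropy_objective(x0, xL, sigma):
    """J = sum_d k_sigma_d(eps_d) with k_sigma(eps) = exp(-eps**2 / (2*sigma**2))
    and eps_d = x0_d - xL_d; summed over all samples and dimensions here.
    sigma may be a scalar or a length-D vector of per-variable kernel widths."""
    eps = x0 - xL
    sigma = np.asarray(sigma, dtype=float)
    return float(np.sum(np.exp(-eps**2 / (2.0 * sigma**2))))
```

Unlike a mean-square-error objective, this kernel saturates for large ε_d, so a measurement carrying a gross error contributes little to J and cannot dominate the fit.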
In step (3), a gradient descent method is adopted to train and update the parameter weights (since the correntropy objective J is maximized, each parameter moves along the gradient of J, i.e., gradient descent on −J):
W^ψ ← W^ψ + α·∂J/∂W^ψ,
b^ψ ← b^ψ + α·∂J/∂b^ψ,
W^φ ← W^φ + α·∂J/∂W^φ,
b^φ ← b^φ + α·∂J/∂b^φ,
where α denotes a fixed learning rate with α > 0. The parameter gradients ∂J/∂W^ψ, ∂J/∂b^ψ, ∂J/∂W^φ and ∂J/∂b^φ are obtained by the chain rule: the partial derivative of the objective function with respect to the output of the last layer,
∂J/∂x_{L,d} = (ε_d / σ_d²)·k_{σ_d}(ε_d),
is propagated backward through the hidden nodes h_l^ψ and h_l^φ of each layer, where the symbol ⊙ denotes element-by-element multiplication with the derivative of the corresponding activation function. Because the weight matrices and bias vectors are shared across all layers, the gradient of each parameter accumulates the contributions of all L layers; the remaining part of the iteration expresses the partial derivatives of the objective function with respect to the hidden nodes of each layer, propagated from layer l back to layer l−1.
In step (3), the stop condition is as follows: the objective function reaches its maximum value, or the number of iterations reaches the set maximum number of iterations.
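The sketch below illustrates one parameter update of step (3), reusing forward and correntropy_objective from the earlier sketches. The derivative of J with respect to the last-layer output follows directly from the kernel given above; the patent's layer-by-layer backpropagation formulas are replaced here, purely for illustration, by a slow central-difference gradient estimate so that no assumption about the exact layer equations is needed. Function names, the learning rate and the step size h are illustrative.

```python
import numpy as np

def dJ_dxL(x0, xL, sigma):
    """dJ/dx_{L,d} = (eps_d / sigma_d**2) * exp(-eps_d**2 / (2*sigma_d**2)),
    which follows from the Gaussian-kernel objective defined earlier."""
    eps = x0 - xL
    s2 = np.asarray(sigma, dtype=float) ** 2
    return (eps / s2) * np.exp(-eps**2 / (2.0 * s2))

def gradient_step(params, x0, sigma, n_layers, lr=1e-3, h=1e-6):
    """One update of the shared parameters [W_psi, b_psi, W_phi, b_phi].
    Stand-in for the patent's analytic chain-rule gradients: each entry's
    dJ/dtheta is estimated by central differences, then the parameter moves
    along the gradient of J (J is maximized)."""
    new_params = []
    for p in params:
        g = np.zeros_like(p)
        it = np.nditer(p, flags=["multi_index"])
        for _ in it:
            idx = it.multi_index
            orig = p[idx]
            p[idx] = orig + h
            j_plus = correntropy_objective(x0, forward(x0, params, n_layers), sigma)
            p[idx] = orig - h
            j_minus = correntropy_objective(x0, forward(x0, params, n_layers), sigma)
            p[idx] = orig
            g[idx] = (j_plus - j_minus) / (2.0 * h)
        new_params.append(p + lr * g)
    return new_params
```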
In step (4), for new data to be corrected, a new variable correction value is obtained according to the network iteration.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention defines both the input and the output as the measured variables themselves, and the relationships between the variables are effectively extracted by a black-box model; these learned relationships replace the original prior knowledge and act as constraints among the variables, so that the data are corrected better.
2. The partially corrected data in turn promote a more accurate expression of the model relationships, which further improves the accuracy of the correction result.
3. In the invention, the number of nodes in each layer of the neural network is the same as the dimension of the input variables, and the weight matrices and bias vectors of the linear and nonlinear functions are shared across layers, which has the advantage of reducing the model complexity.
4. When selecting a suitable model, since the weight matrices and bias vectors are shared across layers and the number of hidden nodes is the same in every layer, only the number of layers needs to be adjusted, which reduces the burden of parameter tuning.
5. The invention adopts an estimation method based on correlation entropy to optimize the objective function, so that gross errors can also be handled.
6. The invention can automatically adjust the built-in parameters through an efficient gradient-based approach.
7. The invention is entirely data-driven: no prior knowledge of the process is required, and no filter needs to be designed in advance.
Drawings
FIG. 1 is a flow chart of a semiconductor process data correction method based on correlation entropy and shallow neural network according to the present invention;
FIG. 2 is a schematic diagram of a model according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the measured process values containing errors and the corrections output by the model, in accordance with an embodiment of the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.
In the following, the estimation of the results of a deposition process at a factory in China is taken as an example, in which the height of wafers passing through a multi-stage chemical process is virtually measured.
In manufacturing, chemical vapor deposition applies a solid thin-film coating to a surface and is widely used in the semiconductor industry. The process is complex because it involves many chemical reactions, and the reactors in a multi-reactor system are independently controlled to deposit films in the process chamber under a variety of conditions. The chemical vapor deposition apparatus is equipped with a considerable number of sensors, and these measurements contain random errors and gross errors due to unstable production environments and unreliable measurement instruments. Thus, an accurate model that yields reliable measurements helps to optimize operation and the subsequent series of controls.
Step 1, for a control process with disturbance, acquiring an output signal of a process variable sensor corresponding to a variable to be corrected.
Step 2, directly inputting each variable into the established shallow neural network model, extracting correlation information in the variables layer by layer, and transmitting each layer of output through a set function; collecting the variable output of the last layer of the model, comparing the variable output with the input variable value, and establishing a regression model;
as shown in fig. 2, the whole model modeling steps are as follows:
(2-1) x_0 ∈ R^D, x_0 = [x_{0,1}, x_{0,2}, …, x_{0,D}]^T denotes the D-dimensional input of the variables containing errors, and x_{l-1}, l = 1, 2, …, L, denotes the input of the l-th layer of the L-layer model; the number of nodes in each layer equals the input variable dimension D. The weight matrices W^ψ and W^φ and the bias vectors b^ψ and b^φ respectively define the parameters of the linear and nonlinear functions in the transfer function of the network. The output of the l-th layer is obtained as
h_l^ψ = W^ψ x_{l-1} + b^ψ,
h_l^φ = W^φ x_{l-1} + b^φ,
where h_l^ψ and h_l^φ are the hidden nodes of the network; they are passed through the corresponding linear activation function (ψ) and nonlinear activation function (φ, here chosen as the sigmoid function), respectively, and combined to give the output vector x_l of the l-th layer.
(2-2) After continuous iteration over multiple layers, the last layer of the neural network outputs the corrected value x_L:
x_1 = F(x_0),
x_2 = F(x_1),
…,
x_L = F(x_{L-1}),
where the function F represents the hidden-node computation and the corresponding linear and nonlinear activation operations shown in (2-1).
(2-3) Since, in addition to random errors, gross errors arise in the industrial process due to additional disturbance variables, and the conventional mean-square-error objective is sensitive to such errors, the mean square error cannot be used as the objective function here. The objective function based on correlation entropy is therefore introduced, which can be expressed as:
J = Σ_{d=1}^{D} k_{σ_d}(ε_d),
k_{σ_d}(ε_d) = exp(−ε_d² / (2σ_d²)),
ε_d = x_{0,d} − x_{L,d},
where k_{σ_d}(·) is the correntropy kernel function, σ_d denotes the adjustable parameter of the corresponding d-th variable, and ε_d is the difference between the measured value and the corrected value of the d-th variable.
(2-4) The parameters are trained and updated according to the gradient descent method:
W^ψ ← W^ψ + α·∂J/∂W^ψ,
b^ψ ← b^ψ + α·∂J/∂b^ψ,
W^φ ← W^φ + α·∂J/∂W^φ,
b^φ ← b^φ + α·∂J/∂b^φ,
where α denotes a fixed learning rate with α > 0. The parameter gradients ∂J/∂W^ψ, ∂J/∂b^ψ, ∂J/∂W^φ and ∂J/∂b^φ are obtained by the chain rule: the partial derivative of the objective function with respect to the last-layer output,
∂J/∂x_{L,d} = (ε_d / σ_d²)·k_{σ_d}(ε_d),
is propagated backward through the hidden nodes h_l^ψ and h_l^φ of each layer, with the symbol ⊙ denoting element-by-element multiplication with the derivative of the corresponding activation function; because the parameters are shared across layers, each gradient accumulates the contributions of all L layers, and the remaining part of the iteration expresses the partial derivatives of the objective with respect to the hidden nodes of each layer, propagated from layer l back to layer l−1.
step 3, saving the parameter weight of the current model, calculating a final objective function value, if the final objective function value does not meet the stop condition, updating the parameter weight and repeating the step (2) until the stop condition is reached;
step 4, changing the network layer number, and repeating the steps (2) to (3) until the maximum network layer number is reached;
step 5, selecting the number of network layers with the best correction result; storing the parameter values of each layer, and recomputing the new data to be corrected to obtain the corrected values.
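Steps 3 to 5 can be put together as a small search over the number of layers, reusing the earlier sketches (forward, correntropy_objective, gradient_step). Picking the depth by the value of the correntropy objective on the training data is an illustrative proxy for "the best correction result"; the initialization scale, iteration counts and learning rate are likewise assumptions.

```python
import numpy as np

def init_params(dim, rng, scale=0.1):
    """Shared parameters [W_psi, b_psi, W_phi, b_phi]; every layer reuses them."""
    return [rng.normal(scale=scale, size=(dim, dim)),  # W_psi
            np.zeros(dim),                             # b_psi
            rng.normal(scale=scale, size=(dim, dim)),  # W_phi
            np.zeros(dim)]                             # b_phi

def train_for_depth(x0, sigma, n_layers, n_iters=200, lr=1e-2, seed=0):
    """Step 3: iterate gradient updates, keep the weights with the best objective,
    and stop after a fixed maximum number of iterations."""
    rng = np.random.default_rng(seed)
    params = init_params(x0.shape[1], rng)
    best_j, best_params = -np.inf, [p.copy() for p in params]
    for _ in range(n_iters):
        params = gradient_step(params, x0, sigma, n_layers, lr=lr)
        j = correntropy_objective(x0, forward(x0, params, n_layers), sigma)
        if j > best_j:
            best_j, best_params = j, [p.copy() for p in params]
    return best_j, best_params

def select_depth(x0, sigma, max_layers=5):
    """Steps 4-5: sweep the number of layers and keep the depth with the best objective."""
    results = {L: train_for_depth(x0, sigma, L) for L in range(1, max_layers + 1)}
    best_L = max(results, key=lambda L: results[L][0])
    return best_L, results[best_L][1]

# Applying the selected model to new measurements (x_new is hypothetical new data):
# best_L, params = select_depth(x0, sigma=0.5)
# x_corrected = forward(x_new, params, best_L)
```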
In this example, the results are shown in FIG. 3; the proposed method performs well: it corrects random errors, and it also detects gross errors and obtains corresponding corrected values.
The embodiments described above are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions and equivalents made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (4)

1. A semiconductor process data correction method based on correlation entropy and a shallow neural network is characterized by comprising the following steps:
(1) for a control process in which disturbances exist, the output signal of a process variable sensor corresponding to the variable to be corrected is collected:
(2) directly inputting the collected variables into the constructed shallow neural network model, extracting the relevant information in the variables layer by layer, and transmitting the output of each layer through a set function;
collecting the variable output of the last layer of the model, comparing the variable output with the numerical value of the input variable, and establishing a regression model;
(3) saving the parameter weight of the current shallow neural network model, and calculating a final objective function value, wherein the objective function adopts a related entropy function; if the stopping condition is not met, updating the parameter weight and repeating the step (2) until the stopping condition is reached;
the correlation entropy function is expressed as:
J = Σ_{d=1}^{D} k_{σ_d}(ε_d),
k_{σ_d}(ε_d) = exp(−ε_d² / (2σ_d²)),
ε_d = x_{0,d} − x_{L,d},
in the formula, k_{σ_d}(·) is the correntropy kernel function, σ_d represents the adjustable parameter of the corresponding d-th variable in the correlation entropy function, and ε_d is the difference between the measured value and the corrected value of the variable of the corresponding dimension d;
and in step (3), a gradient descent method is adopted to train and update the parameter weights, with the formulas:
W^ψ ← W^ψ + α·∂J/∂W^ψ,
b^ψ ← b^ψ + α·∂J/∂b^ψ,
W^φ ← W^φ + α·∂J/∂W^φ,
b^φ ← b^φ + α·∂J/∂b^φ,
wherein α represents a fixed learning rate and α > 0; the parameter gradients ∂J/∂W^ψ, ∂J/∂b^ψ, ∂J/∂W^φ and ∂J/∂b^φ are obtained by the chain rule: the partial derivative of the objective function with respect to the output of the last layer,
∂J/∂x_{L,d} = (ε_d / σ_d²)·k_{σ_d}(ε_d),
is propagated backward through the hidden nodes h_l^ψ and h_l^φ of each layer, the symbol ⊙ denoting element-by-element multiplication with the derivative of the corresponding activation function; because the weight matrices and bias vectors are shared across layers, each parameter gradient accumulates the contributions of all L layers, and the remaining part of the iteration expresses the partial derivatives of the objective function with respect to the hidden nodes of each layer, propagated from layer l back to layer l−1;
the stop condition is as follows: the objective function reaches its maximum value, or the number of iterations reaches the set maximum number of iterations;
(4) changing the network layer number, and repeating the steps (2) to (3) until the maximum network layer number is reached;
(5) selecting the network layer number with the best correction result; and storing the parameter values of each layer, inputting the new data to be corrected into the shallow neural network model, recalculating and obtaining a variable correction value.
2. The semiconductor process data correcting method based on the correlated entropy and the shallow neural network as claimed in claim 1, wherein in the step (1), the collected output signal contains random errors and significant errors.
3. The semiconductor process data correcting method based on the correlation entropy and the shallow neural network as claimed in claim 1, wherein the specific process of the step (2) is as follows:
(2-1) x_0 ∈ R^D, x_0 = [x_{0,1}, x_{0,2}, …, x_{0,D}]^T represents the D-dimensional input of the variables containing errors, and x_{l-1}, l = 1, 2, …, L, denotes the input of the l-th layer of the L-layer model, the number of nodes in each layer being equal to the input variable dimension D; the weight matrices W^ψ and W^φ and the bias vectors b^ψ and b^φ respectively define the parameters of the linear and nonlinear functions in the transfer function of the network, and the output of the l-th layer is obtained as
h_l^ψ = W^ψ x_{l-1} + b^ψ,
h_l^φ = W^φ x_{l-1} + b^φ,
where h_l^ψ and h_l^φ are hidden nodes in the network, which are passed through the corresponding linear activation function (ψ) and nonlinear activation function (φ), respectively, and combined to obtain the output vector x_l of the l-th layer;
(2-2) after continuous iteration over multiple layers, the output of the last layer of the neural network model is the corrected value x_L:
x_1 = F(x_0), x_2 = F(x_1), …, x_L = F(x_{L-1}),
where the function F represents the hidden-node computation and the corresponding linear and nonlinear activation operations shown in (2-1);
(2-3) the output of the neural network model is compared with the values of the input variables to establish a regression model.
4. The semiconductor process data correction method based on correlated entropy and shallow neural network as claimed in claim 1, wherein in step (4), new variable correction values are obtained according to network iteration for new data to be corrected.
CN202010591258.9A 2020-06-24 2020-06-24 Semiconductor process data correction method based on correlation entropy and shallow neural network Active CN111896038B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010591258.9A CN111896038B (en) 2020-06-24 2020-06-24 Semiconductor process data correction method based on correlation entropy and shallow neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010591258.9A CN111896038B (en) 2020-06-24 2020-06-24 Semiconductor process data correction method based on correlation entropy and shallow neural network

Publications (2)

Publication Number Publication Date
CN111896038A CN111896038A (en) 2020-11-06
CN111896038B true CN111896038B (en) 2021-08-31

Family

ID=73207074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010591258.9A Active CN111896038B (en) 2020-06-24 2020-06-24 Semiconductor process data correction method based on correlation entropy and shallow neural network

Country Status (1)

Country Link
CN (1) CN111896038B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003522577A (en) * 2000-02-18 2003-07-29 アーゴス インク Multivariate analysis of green and ultraviolet spectra of cell and tissue samples

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105488466A (en) * 2015-11-26 2016-04-13 中国船舶工业系统工程研究院 Deep neural network and underwater sound target vocal print feature extraction method
CN107145624A (en) * 2017-03-28 2017-09-08 浙江大学 Gases Dissolved in Transformer Oil online monitoring data antidote based on artificial neural network
CN107612016A (en) * 2017-08-08 2018-01-19 西安理工大学 The planing method of Distributed Generation in Distribution System based on voltage maximal correlation entropy
CN109379379A (en) * 2018-12-06 2019-02-22 中国民航大学 Based on the network inbreak detection method for improving convolutional neural networks
CN110287983A (en) * 2019-05-10 2019-09-27 杭州电子科技大学 Based on maximal correlation entropy deep neural network single classifier method for detecting abnormality

Also Published As

Publication number Publication date
CN111896038A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN109472057B (en) Product processing quality prediction device and method based on cross-process implicit parameter memory
CN109543916B (en) Model for estimating growth rate of silicon rod in polycrystalline silicon reduction furnace
CN109992921B (en) On-line soft measurement method and system for thermal efficiency of boiler of coal-fired power plant
CN109612513B (en) Online anomaly detection method for large-scale high-dimensional sensor data
CN110426956B (en) Intermittent process optimal compensation control strategy based on process migration model
CN111177970B (en) Multi-stage semiconductor process virtual metering method based on Gaussian process and convolutional neural network
CN111638707B (en) Intermittent process fault monitoring method based on SOM clustering and MPCA
CN111982302A (en) Temperature measurement method with noise filtering and environment temperature compensation
CN113722997A (en) New well dynamic yield prediction method based on static oil and gas field data
CN115828754A (en) Cutter wear state monitoring method based on multi-scale space-time fusion network model
CN110245398B (en) Soft measurement deep learning method for thermal deformation of air preheater rotor
CN106547899B (en) Intermittent process time interval division method based on multi-scale time-varying clustering center change
CN113420815B (en) Nonlinear PLS intermittent process monitoring method of semi-supervision RSDAE
CN117289652A (en) Numerical control machine tool spindle thermal error modeling method based on multi-universe optimization
CN116852665A (en) Injection molding process parameter intelligent adjusting method based on mixed model
CN111896038B (en) Semiconductor process data correction method based on correlation entropy and shallow neural network
Karim et al. Data‐based modeling and analysis of bioprocesses: some real experiences
CN111854822B (en) Semiconductor process data correction method based on correlation entropy and deep neural network
CN109858190B (en) Penicillin fermentation process soft measurement modeling method based on Drosophila algorithm optimization gradient lifting regression tree
CN109101683B (en) Model updating method for pyrolysis kettle of coal quality-based utilization and clean pretreatment system
CN114372181A (en) Intelligent planning method for equipment production based on multi-mode data
CN116052786A (en) Soft measurement method and controller for key parameters in marine alkaline protease fermentation process
CN114320266B (en) Dense oil reservoir conventional well yield prediction method based on support vector machine
CN114997486A (en) Effluent residual chlorine prediction method of water works based on width learning network
CN110045616B (en) Robust prediction control method for stirring reaction tank

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant