CN116431966A - Reactor core temperature anomaly detection method of incremental characteristic decoupling self-encoder - Google Patents
- Publication number
- CN116431966A (application CN202310255054.1A)
- Authority
- CN
- China
- Prior art keywords
- layer
- decoupling
- training
- feature
- loss
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G01K13/00 — Thermometers specially adapted for specific purposes
- G06F17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06F17/18 — Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
- G06N3/04 — Neural networks; architecture, e.g. interconnection topology
- G06N3/08 — Neural networks; learning methods
- G21C17/112 — Monitoring or testing of nuclear reactors; measuring temperature
- Y02E30/30 — Energy generation of nuclear origin; nuclear fission reactors
Abstract
The invention discloses a core temperature anomaly detection method based on an incremental feature decoupling self-encoder. To address the difficulty of specifying the feature dimension in advance in traditional decoupled representation learning, the invention designs a feature increment strategy in which the hidden-space features of the self-encoder are generated step by step and the feature dimension is determined adaptively during model training. At the same time, an iterative training strategy based on dual performance indices is proposed, so that the features extracted by the self-encoder model reconstruct the data well while meeting the hidden-space decoupling requirement. Finally, statistics are used to describe the feature space and the residual space of the core temperature data respectively, realizing comprehensive anomaly detection of the core temperature. In the anomaly detection task on multi-point nuclear reactor core temperature data, the method effectively reduces the false alarm rate and improves the fault detection rate, providing practical help for the safe, stable operation and intelligent operation and maintenance of nuclear reactors.
Description
Technical Field
The invention discloses a core temperature anomaly detection method based on an incremental feature decoupling self-encoder. The invention belongs to the field of industrial fault detection, and particularly relates to anomaly detection for the temperature of a nuclear reactor core.
Background
Nuclear power generation is a clean, economical and efficient mode of power generation, which places high demands on the continuous safe and stable operation of nuclear power equipment. The reactor core is the energy core of the whole nuclear power system, and its temperature is the most direct representation of its health state; if anomalies in the temperature distribution are not found in time, a series of major accidents such as core meltdown may well result, causing serious casualties and economic losses. Therefore, carrying out anomaly detection on the nuclear reactor core temperature, finding faults in time and avoiding major accidents is of great significance for the production safety of nuclear power stations. Traditional anomaly detection of core temperature generally follows the idea of monitoring after an accident, with operators judging and summarizing the trend of the core temperature during and after the accident by means of professional experience and mechanism knowledge; this has high labor cost, low detection efficiency and poor real-time performance. In recent years, with the development of machine learning and artificial intelligence technology, data-driven anomaly detection methods have emerged and have been widely applied to industrial process fault detection tasks with good results. It is therefore very urgent to develop nuclear reactor fault detection based on data-driven methods, combined with the characteristics of nuclear reactor core temperature data.
Data-driven anomaly detection methods do not depend on mechanism-related professional knowledge; relying only on the large amount of data collected during system operation, they can realize efficient, real-time anomaly detection by capturing information such as the potential coupling relations among variables. Commonly used approaches include multivariate statistical analysis methods such as principal component analysis (PCA), partial least squares (PLS) and independent component analysis (ICA), and deep learning methods such as the auto-encoder (AE) and the convolutional neural network (CNN). However, complex nonlinear coupling relations exist among the different measuring points of the core temperature; multivariate statistical analysis methods have difficulty capturing the nonlinear characteristics of the data effectively and thus achieving high-accuracy anomaly detection. Although deep learning methods can learn the implicit nonlinear features in the data through the nonlinear mappings between neurons, the features extracted in the model's hidden space may be strongly coupled, i.e. there is a problem of information redundancy. In an anomaly detection task, monitoring statistics calculated from redundant features may fail to describe accurately the degree to which test data deviate from the model in the feature space, resulting in missed and false alarms.
Anomaly detection methods based on decoupled representation learning overcome the feature-redundancy problem by designing decoupling constraints on the hidden-space generation of a traditional deep learning model, as in the variational auto-encoder (VAE). However, such methods usually require the dimension of the model's hidden space to be manually preset and fixed before training, and the choice of hidden-space dimension has a considerable influence on model performance, which makes anomaly detection methods based on decoupled representation learning difficult to apply in actual modeling.
Disclosure of Invention
Aiming at the problem that the feature dimension is difficult to specify in advance in traditional decoupled representation learning, the invention provides a core temperature anomaly detection method based on an incremental feature decoupling self-encoder. The features of the self-encoder model's hidden space are generated step by step and the feature dimension is determined adaptively, satisfying the dual requirements that the extracted features reconstruct the data well and that the hidden-space features are sufficiently decoupled; statistics are then constructed from the hidden space and the residual space of the core temperature data respectively for anomaly detection.
The aim of the invention is realized by the following technical scheme:
a core temperature anomaly detection method of an incremental characteristic decoupling self-encoder specifically comprises the following steps:
inputting a nuclear reactor core temperature data sample acquired in real time into a trained incremental characteristic decoupling self-encoder model to obtain a characteristic vector and a reconstructed sample, and calculating statistics based on the characteristic vector and the reconstructed sample to perform anomaly detection on the core temperature;
the incremental characteristic decoupling self-encoder model is obtained through training by the following method:
constructing a training set, wherein each sample of the training set is nuclear reactor core temperature data acquired during normal operation of a nuclear reactor;
constructing an incremental characteristic decoupling self-encoder, wherein the incremental characteristic decoupling self-encoder consists of an input layer I, an adjusting layer P, a characteristic layer F and an output layer O; setting the initial neuron numbers of an adjusting layer P and a characteristic layer F;
inputting samples of the training set to an incremental characteristic decoupling self-encoder to obtain feature vectors and reconstructed samples, and performing neuron incremental iterative training on the incremental characteristic decoupling self-encoder based on a loss function until the model performance index meets the requirement; the loss function includes: a first loss function consisting of reconstruction losses; a second loss function consisting of a sum of reconstruction loss and hidden space decoupling loss; the model performance index comprises: the reconstruction error index R is the same as the reconstruction loss in value; the hidden space characteristic correlation index C is the same as the hidden space decoupling loss in value; wherein:
if C < C_th and R < R_th, the reconstruction capability and the hidden-space decoupling degree of the current model are considered to meet the requirements, and model training is finished;
if C < C_th and R > R_th, the hidden-space decoupling degree of the current model is considered to meet the requirement but the reconstruction capability is insufficient, and neuron increment and training are required in the feature layer F;
if C > C_th and R < R_th, the reconstruction capability of the current model is considered to meet the requirement but the hidden-space decoupling degree is insufficient, and neuron increment and training are required in the adjusting layer P;
if C > C_th and R > R_th, neither the reconstruction capability nor the hidden-space decoupling degree of the current model is considered up to standard; the requirement on the hidden-space decoupling degree is met preferentially, and neuron increment and training are performed in the adjusting layer P;
wherein C_th and R_th respectively represent the thresholds of the hidden-space feature correlation index C and the reconstruction error index R.
Further, after the feature layer F performs neuron increment, during training the network parameters of the mapping between the input layer I and the adjusting layer P are unchanged; the mapping between the adjusting layer P and the feature layer F fixes the network parameters that participated in the previous round of training and updates only the parameters of the newly added neuron in the current round, which is used to generate a new feature vector f_k, where the subscript k represents the hidden-space dimension of the current model after the neuron is added; the network parameters of the mapping between the feature layer F and the output layer O are completely updated.
Further, after the adjusting layer P performs neuron increment, during training the mapping between the input layer I and the adjusting layer P fixes the network parameters that participated in the previous round of training and updates the parameters of the newly added neuron in the current round to generate a new vector p_{j+1}, where the subscript j represents the number of neurons of the adjusting layer P that participated in the previous round of training; the mapping between the adjusting layer P and the feature layer F fixes the network parameters that participated in the previous round of training and updates the remaining parameters in the current round, generating a new feature vector from the matrix augmented with the new vector p_{j+1}; the network parameters of the mapping between the feature layer F and the output layer O are completely updated.
Further, the hidden-space decoupling loss Loss_C is expressed as:

Loss_C = Σ_{i=1}^{k−1} |Cov(f_k, f_i)|, with Cov(f_k, f_i) = E(f_k·f_i) − E(f_k)E(f_i)

where k is the hidden-space dimension of the current model, f_k is the k-th dimension feature vector output by the feature layer F, f_i (i = 1, 2, ..., k−1) are the k−1 feature vectors extracted in previous rounds, and the function E(·) computes the mean of a feature vector.
Further, the reconstruction loss Loss_R is expressed as:

Loss_R = (1/n)‖X − X̂‖²_F

where ‖·‖_F represents the Frobenius norm, X is the matrix of nuclear reactor core temperature data samples input to the model, X̂ is the reconstructed sample matrix, and n is the number of samples.
Further, the second loss function is expressed as:

Loss_total = Loss_R + β·Loss_C

where β is a model hyper-parameter, Loss_R is the reconstruction loss, and Loss_C is the hidden-space decoupling loss.
Further, the statistics include the T² and SPE statistics.
Further, anomaly detection is performed on the core temperature based on statistics calculated from the feature vector and the reconstructed sample, specifically:
statistics are calculated based on the feature vector and the reconstructed sample, and if any calculated statistic exceeds its control limit, a fault is indicated in the nuclear reactor operation process.
Further, the control limits of the statistics are calculated by the method of kernel density estimation.
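The kernel-density-estimation step can be sketched as follows; this is an illustrative pure-numpy Gaussian KDE with Silverman's rule-of-thumb bandwidth, not the patent's exact procedure. The control limit is taken as the quantile of the estimated density at the chosen confidence level.

```python
import numpy as np

def kde_control_limit(stat, alpha=0.99, grid_size=2000):
    """Control limit of a monitoring statistic via Gaussian kernel density estimation."""
    stat = np.asarray(stat, dtype=float)
    n = stat.size
    h = 1.06 * stat.std() * n ** (-1 / 5)          # Silverman bandwidth
    grid = np.linspace(stat.min() - 3 * h, stat.max() + 3 * h, grid_size)
    # Unnormalized kernel sums on the grid; the constant cancels in the CDF below.
    dens = np.exp(-0.5 * ((grid[:, None] - stat[None, :]) / h) ** 2).mean(axis=1)
    cdf = np.cumsum(dens)
    cdf /= cdf[-1]
    return grid[np.searchsorted(cdf, alpha)]

rng = np.random.default_rng(1)
train_stat = rng.chisquare(6, size=2000)           # e.g. T^2 values of normal samples
limit = kde_control_limit(train_stat, alpha=0.99)
```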
Further, the nuclear reactor core temperature data comprises core temperatures of a plurality of measuring points acquired by sensors distributed at different positions of the reactor core.
The beneficial effects of the invention are as follows: aiming at the problem that the feature dimension is difficult to specify in advance in traditional decoupled representation learning, the invention provides a core temperature anomaly detection method based on an incremental feature decoupling self-encoder. The invention designs a feature increment strategy that generates the hidden-space features of the self-encoder model step by step and determines the feature dimension adaptively. At the same time, an iterative training strategy based on dual performance indices is proposed for training the self-encoder model, so that the features extracted by the model reconstruct the data well while meeting the hidden-space decoupling requirement. Finally, the invention constructs monitoring statistics based on the feature space and the residual of the data respectively, realizing comprehensive anomaly detection of core temperature data. Compared with traditional anomaly detection methods based on deep learning, the method decouples the model's hidden-space features, constructs statistics that capture data anomalies more accurately based on that hidden space, and reduces false alarms and missed alarms in fault detection. Compared with anomaly detection methods based on traditional decoupled representation learning, the method determines the hidden-space feature dimension adaptively, effectively reducing the difficulty of applying the method in actual modeling.
Drawings
FIG. 1 is a flow chart of the overall framework of the present invention;
FIG. 2 is a graph of the fault detection results of the method on the test set 1, wherein the dotted line is a control limit, and the solid line is the statistic calculation result of the test set sample;
FIG. 3 is a diagram of the results of fault detection of the principal component analysis method on the test set 1, wherein the dotted line is a control limit, and the solid line is the statistic calculation result of the test set sample;
FIG. 4 is a graph of the fault detection results of the conventional self-encoder method on the test set 1, wherein the dotted line is a control limit, and the solid line is the statistic calculation result of the test set sample;
FIG. 5 is a graph of the results of fault detection on test set 1 for the direct decoupled self-encoder method, wherein the dashed line is the control limit and the solid line is the statistic calculation result for the test set samples;
FIG. 6 is a graph of the results of fault detection on test set 2 according to the method of the present invention, wherein the dotted line is the control limit and the solid line is the statistic calculation result of test set samples;
FIG. 7 is a graph of the results of fault detection of the principal component analysis method on the test set 2, wherein the dotted line is a control limit and the solid line is the statistic calculation result of the test set sample;
FIG. 8 is a graph of the fault detection results of the conventional self-encoder method on the test set 2, wherein the dotted line is a control limit, and the solid line is the statistic calculation result of the test set sample;
FIG. 9 is a graph of the results of fault detection on test set 2 for the direct decoupled self-encoder approach, where the dashed line is the control limit and the solid line is the statistic calculation of test set samples;
Detailed Description
In this embodiment, the validity of the method is verified using actual nuclear reactor core temperature data as an example. The invention is described in further detail below with reference to the drawings and a specific example.
According to the core temperature anomaly detection method of the incremental feature decoupling self-encoder, a core temperature data sample acquired in real time is input into the trained incremental feature decoupling self-encoder model to obtain a feature vector and a reconstructed sample, and statistics calculated from the feature vector and the reconstructed sample are used for anomaly detection of the core temperature. The training of the incremental feature decoupling self-encoder model is the key point of the invention; as shown in FIG. 1, the offline modeling training comprises the following steps:
step 1: constructing a training set, wherein each sample of the training set is nuclear reactor core temperature data acquired during normal operation of a nuclear reactor;
in this embodiment, 6000 normal samples are collected as training sets to build a model, 6000 samples are taken as two test sets for each of two different fault types occurring in the process to perform an abnormality detection test, each sample contains 40 measurement point temperature variables, the temperature variables are collected by sensors axially distributed on the top layer, middle layer and bottom layer of the nuclear reactor core, and the sampling interval of the samples is 60 seconds. Faults in both test sets, which occur from sample 2000, are caused by abrupt and gradual changes in core temperature distribution, respectively.
Further, the collected data are normalized as follows:

x̃_{i,r} = (x_{i,r} − x̄_r) / s_r

where n represents the number of samples and m the number of variables; x_{i,r} is the element in the i-th row and r-th column of the data matrix X, i.e. the value of the r-th temperature variable in the i-th collected sample; x̄_r is the mean of the r-th process variable over all samples; s_r is the standard deviation of the r-th process variable over all samples; and x̃_{i,r} is the normalized value for the corresponding sample and variable. Normalizing the data X yields the normalized data matrix X̃.
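The normalization above is a standard column-wise z-score; a minimal numpy sketch follows (the simulated temperature matrix is illustrative). Note that test data must be normalized with the training mean and standard deviation, not their own.

```python
import numpy as np

def normalize(X, mean=None, std=None):
    """Column-wise z-score; reuse the training mean/std when normalizing test data."""
    if mean is None:
        mean, std = X.mean(axis=0), X.std(axis=0, ddof=1)
    return (X - mean) / std, mean, std

rng = np.random.default_rng(0)
X = 300 + 5 * rng.normal(size=(6000, 40))        # simulated 40-point core temperatures
X_norm, mean, std = normalize(X)                 # training data: fit and transform
X_test_norm, _, _ = normalize(300 + 5 * rng.normal(size=(100, 40)), mean, std)
```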
Step 2: constructing an incremental characteristic decoupling self-encoder, modeling normal data X to obtain a characteristic vector and a reconstructed sample, and performing neuron incremental iterative training on the incremental characteristic decoupling self-encoder based on a loss function until the model performance index meets the requirement; the method specifically comprises the following substeps:
step 2.1: the incremental characteristic decoupling self-encoder model consists of an input layer I, an adjustment layer P, a characteristic layer F and an output layer O, wherein the characteristic layer F is defined as a model hidden space, and the number of neurons in the layer is the characteristic dimension of the corresponding hidden space. According to the invention, the characteristic layer F is designed into a form that neurons can be increased, and when the model has poor data reconstruction capability, new neurons are required to be added to the characteristic layer F for training. In the invention, the regulating layer P is also designed into a form that neurons can be increased, and if the characteristic decoupling degree of the characteristic layer F is insufficient in the training process, new neurons are added to the regulating layer P for training.
Step 2.2: set the numbers of neurons of the feature layer F and the adjusting layer P of the incremental feature decoupling self-encoder model in the initial state. In general, the initial number of neurons of the feature layer F is set to 1, so that the increment mechanism can adapt to the greatest extent. In this embodiment, the numbers of neurons of the feature layer F and the adjusting layer P in the initial state are set to 1 and 3, respectively. Denote by g_IP, g_PF and g_FO the mappings between the input layer I and the adjusting layer P, between the adjusting layer P and the feature layer F, and between the feature layer F and the output layer O when the model generates a one-dimensional feature. The information transfer in the model can then be summarized as follows: the input data X is mapped into the j neurons of the adjusting layer P to obtain the matrix P, i.e. P = g_IP(X); the information contained in P is further compressed into the single neuron of the feature layer F to obtain the feature vector f_1, i.e. f_1 = g_PF(P); finally, the feature f_1 is mapped to the output layer O to obtain the reconstructed data X̂, i.e. X̂ = g_FO(f_1). Each mapping contains its own network parameters. The first loss function used to train the model in the initial state is the same as that of a traditional self-encoder, namely the reconstruction loss Loss_R, calculated in this embodiment as:

Loss_R = (1/n)‖X − X̂‖²_F

where ‖·‖_F denotes the Frobenius norm and n is the number of samples.
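The initial-state forward pass described above can be sketched in numpy as follows. This is an illustrative stand-in: random weights replace trained parameters, and the tanh activations and linear output layer are assumptions not stated in the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, j, k = 100, 40, 3, 1          # samples, variables, adjusting neurons, features

X = rng.normal(size=(n, m))         # stand-in for normalized core temperature data
# Random stand-ins for the parameters of the three mappings
W1 = rng.normal(scale=0.1, size=(m, j))   # input layer I  -> adjusting layer P
W2 = rng.normal(scale=0.1, size=(j, k))   # adjusting P    -> feature layer F
W3 = rng.normal(scale=0.1, size=(k, m))   # feature F      -> output layer O

P = np.tanh(X @ W1)                 # adjusting-layer activations
F = np.tanh(P @ W2)                 # one-dimensional hidden-space feature f_1
X_hat = F @ W3                      # reconstruction

loss_R = np.linalg.norm(X - X_hat, 'fro') ** 2 / n
```

Training would minimize loss_R by gradient descent over W1, W2 and W3.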
Step 2.3: let the reconstruction error index R = Loss_R and set a corresponding threshold R_th. If R > R_th after training in the initial state, a neuron is added to the feature layer F of the model and retraining is carried out to generate a second-dimension feature vector f_2; f_1 and f_2 are then combined and mapped to the output layer O. In this round of training, the network parameters of the mapping between the input layer I and the adjusting layer P are fixed, identical to those of the previous round, and are not updated. The parameters of the mapping between the adjusting layer P and the feature layer F consist of a fixed part and an updated part: the fixed part is identical to the previous-round parameters and is not updated, while the updated part comprises the parameters newly added in this round for generating f_2. The parameters of the mapping between the feature layer F and the output layer O are completely updated, i.e. retrained. When generating f_2 in this round of training, a hidden-space decoupling loss term Loss_C is added to the original reconstruction loss Loss_R; it may take the following form:

Loss_C = Σ_{i=1}^{k−1} |Cov(f_k, f_i)|, with Cov(f_k, f_i) = E(f_k·f_i) − E(f_k)E(f_i)

where k is the hidden-space dimension of the current model, f_k is the k-th dimension feature vector, f_i (i = 1, 2, ..., k−1) are the k−1 feature vectors extracted in previous rounds, and the function E(·) computes the mean of a feature vector. The second loss function Loss_total is formed by the sum of the reconstruction loss Loss_R and the hidden-space decoupling loss Loss_C:

Loss_total = Loss_R + β·Loss_C

where β is a model hyper-parameter. The model is trained along this loss function in all subsequent rounds.
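The combined objective can be sketched as follows. The aggregation of the covariance terms into Loss_C (here a sum of absolute covariances between the new feature and each earlier one) is an assumption, since only the per-pair covariance is defined in the text.

```python
import numpy as np

def decoupling_loss(f_new, F_old):
    """Sum of absolute covariances between the new feature f_k and each earlier
    feature f_i, with Cov(f_k, f_i) = E(f_k*f_i) - E(f_k)E(f_i)."""
    cov = (F_old * f_new[:, None]).mean(axis=0) - F_old.mean(axis=0) * f_new.mean()
    return np.abs(cov).sum()

def total_loss(X, X_hat, f_new, F_old, beta=1.0):
    """Loss_total = Loss_R + beta * Loss_C."""
    loss_R = np.linalg.norm(X - X_hat, 'fro') ** 2 / len(X)
    return loss_R + beta * decoupling_loss(f_new, F_old)

rng = np.random.default_rng(0)
f1 = rng.normal(size=1000)
corr_loss = decoupling_loss(f1.copy(), f1[:, None])                # correlated pair
indep_loss = decoupling_loss(rng.normal(size=1000), f1[:, None])   # independent pair
```

For a perfectly correlated pair the term approaches the feature variance, while for independent features it stays near zero, which is what drives the decoupling.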
Step 2.4: let the hidden-space feature correlation index C = Loss_C and set a corresponding threshold C_th. If C > C_th after the previous round of training, neuron increment is performed in the adjusting layer P and the model mappings are retrained, namely: the vector p_{j+1} newly generated in the adjusting layer P is combined with the matrix P to obtain an augmented matrix, which is remapped to the feature layer F to regenerate the second-dimension feature f_2. In this round of training, the parameters of the mapping between the input layer I and the adjusting layer P consist of a fixed part and an updated part: the fixed part is identical to the previous parameters and is not updated, while the updated part comprises the parameters newly added in this round for generating p_{j+1}. The parameters of the mapping between the adjusting layer P and the feature layer F likewise consist of a fixed part, identical to the previous parameters and not updated, and an updated part that is retrained to regenerate f_2 from the augmented matrix. The parameters of the mapping between the feature layer F and the output layer O are completely updated.
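The fix-and-update parameter scheme described in these steps amounts to masked gradient updates: previously trained columns of a weight matrix receive zero gradient while the newly added column is trained. An illustrative numpy sketch (the matrix, gradient and learning rate are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
W2 = rng.normal(size=(3, 2))        # adjusting-to-feature weights after one increment
W2_before = W2.copy()

mask = np.zeros_like(W2)
mask[:, -1] = 1.0                   # only the newly added feature column is trainable

grad = rng.normal(size=W2.shape)    # stand-in for a backpropagated gradient
W2 -= 0.1 * grad * mask             # masked step: previously trained column stays fixed
```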
Step 2.5: on the basis of the specific neuron increment operations in Step 2.3 and Step 2.4, the incremental feature decoupling self-encoder undergoes neuron-increment iterative training driven by the loss function until the model performance indices meet the requirements. The complete iterative training strategy is summarized as follows:
Judging condition 1: if C < C_th and R < R_th, the reconstruction capability and the hidden space decoupling degree of the current model are considered to meet the requirements, and model training is finished;
Judging condition 2: if C < C_th and R > R_th, the hidden space decoupling degree of the current model is considered to meet the requirement, but the reconstruction capability is insufficient, and neuron increment and training are needed at the feature layer F;
Judging condition 3: if C > C_th and R < R_th, the reconstruction capability of the current model is considered to meet the requirement, but the hidden space decoupling degree is insufficient, and neuron increment and training are needed at the adjustment layer P;
Judging condition 4: if C > C_th and R > R_th, neither the reconstruction capability nor the hidden space decoupling degree of the current model meets the standard; the requirement on the hidden space decoupling degree is met first, and the operation is the same as in condition 3.
Wherein C_th and R_th respectively represent the thresholds of the hidden space feature correlation index C and the reconstruction error index R. In this embodiment, they are set to 0.8 and 0.1, respectively.
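The four judging conditions above can be summarized as a small decision routine. The sketch below (the function name `increment_step` and the action strings are hypothetical) returns the action taken at each iteration; the condition-4 priority on decoupling is folded into the final branch.

```python
def increment_step(C, R, C_th=0.8, R_th=0.1):
    """Decide the next training action from the four judging conditions."""
    if C < C_th and R < R_th:
        return "stop"                     # condition 1: both requirements met
    if C < C_th:                          # condition 2: R >= R_th
        return "grow_feature_layer_F"
    # conditions 3 and 4: insufficient decoupling takes priority
    return "grow_adjustment_layer_P"
```

The default thresholds mirror the embodiment's C_th = 0.8 and R_th = 0.1.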
In this embodiment, the incremental feature decoupling self-encoder model is trained according to the above strategy, and the iteration termination condition is satisfied when the numbers of neurons in the feature layer F and the adjustment layer P reach 6 and 9, respectively. The model structure is then fixed, the mappings obtained in the final round of training are recorded, and model training is complete.
Statistics are designed on the feature space and the residual space of normal core temperature data for anomaly detection; in this embodiment, taking the T² and SPE statistics as an example, the calculation method is as follows. Each normal data sample can be expressed as a vector x_i. The incremental feature decoupling self-encoder model extracts its feature vector f_t and at the same time yields the reconstruction output x̂_i, from which the reconstructed residual term e = x_i - x̂_i is calculated. Based on these vectors, the T² and SPE statistics are constructed as follows:
T² = f_t^T Σ^(-1) f_t

SPE = e^T e
wherein Σ is the covariance matrix of the features extracted by the model from the normal data. The control limits Ctr_T² and Ctr_SPE of the T² and SPE statistics are calculated by kernel density estimation.
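A minimal sketch of the statistic construction and the kernel-density control limit might look as follows, using the feature covariance matrix Σ computed from normal data. The hand-rolled 1-D Gaussian KDE and the Silverman bandwidth rule are assumptions for illustration, since the embodiment does not specify the kernel or the bandwidth.

```python
import numpy as np

def t2_spe(f, e, sigma_inv):
    """T² and SPE for one sample: f = feature vector, e = reconstruction residual."""
    return float(f @ sigma_inv @ f), float(e @ e)

def kde_control_limit(values, alpha=0.99, grid_size=2000):
    """Control limit via 1-D Gaussian kernel density estimation (minimal sketch):
    the point below which a fraction `alpha` of the estimated density mass lies.
    """
    values = np.asarray(values, dtype=float)
    h = 1.06 * values.std() * len(values) ** (-0.2)  # Silverman's bandwidth (assumed)
    grid = np.linspace(values.min(), values.max() + 3 * h, grid_size)
    dens = np.exp(-0.5 * ((grid[:, None] - values[None, :]) / h) ** 2).sum(axis=1)
    cdf = np.cumsum(dens)
    return float(grid[np.searchsorted(cdf / cdf[-1], alpha)])
```

In use, T² and SPE would be evaluated over all normal training samples, and `kde_control_limit` applied once per statistic to obtain Ctr_T² and Ctr_SPE.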
At this point, the 2 statistics and their control limits have been obtained, and fault detection can be carried out with the trained incremental feature decoupling self-encoder model. Specifically, a nuclear reactor core temperature data sample x_new acquired in real time is input into the trained model. The trained incremental feature decoupling self-encoder extracts its feature vector f_new and at the same time yields the reconstruction output x̂_new; the reconstructed residual term e_new = x_new - x̂_new is computed, and the T² and SPE statistics are calculated to detect anomalies in the core temperature:
T² = f_new^T Σ^(-1) f_new

SPE = e_new^T e_new
If either the T² or the SPE statistic exceeds its control limit, a fault has occurred in the nuclear reactor operating process. An overall framework flow diagram of the present invention is shown in FIG. 1. The effect of the present invention is described below by performing fault detection analysis on the samples of the two test sets, respectively.
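The online decision rule, alarm if either statistic exceeds its control limit, can be sketched as below (the function name `core_temp_alarm` is hypothetical; Σ^(-1) and the two control limits are assumed to come from the offline stage described above):

```python
import numpy as np

def core_temp_alarm(f_new, e_new, sigma_inv, ctr_t2, ctr_spe):
    """Raise a fault alarm if T² or SPE exceeds its control limit."""
    t2 = float(f_new @ sigma_inv @ f_new)
    spe = float(e_new @ e_new)
    return t2 > ctr_t2 or spe > ctr_spe
```

Each real-time sample is thus reduced to a single boolean alarm decision per monitoring step.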
In terms of fault detection, the invention performs detection on test sets of two fault types, abrupt temperature change and gradual temperature change; the results are shown in FIG. 2 and FIG. 6, respectively. Under the abrupt fault, both statistics of the method clearly exceed the control limits at the moment the fault occurs, so the occurrence of the fault can be detected promptly and sensitively. Under the gradual fault, both statistics of the method capture the slowly evolving abnormality in the data and accurately detect the occurrence of the fault.
In order to more clearly demonstrate the superiority, in the anomaly detection task, of the feature increment strategy and the iterative training strategy designed by the invention, the model obtained after completely abandoning the increment strategy and the iterative training strategy of the incremental feature decoupling self-encoder (trained under the constraint of the second loss function only) is named the direct decoupling self-encoder for experimental comparison. Principal component analysis PCA (W. Svante, K. Esbensen, and P. Geladi, "Principal component analysis," Chemom. Intell. Lab. Syst., vol. 2, no. 1-3, pp. 37-52, 1987), the conventional self-encoder AE (Sakurada, Mayu, and Takehisa Yairi, "Anomaly detection using autoencoders with nonlinear dimensionality reduction," Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis, 2014) and the direct decoupling self-encoder were selected for comparison. The fault detection results of these methods on the two test sets are shown in FIG. 3 and FIG. 7, FIG. 4 and FIG. 8, and FIG. 5 and FIG. 9, respectively. In the experiments with the abrupt fault type, these methods are able to detect the occurrence of the fault relatively accurately in most cases. In the experiments with the gradual fault type, however, they produce a large number of false alarms before the fault occurs, and the time at which the fault is detected is also greatly delayed, so their fault detection rate is low.
For a more visual comparison, Table 1 and Table 2 respectively list the fault detection rate (FDR) and the fault false alarm rate (FAR) of the different methods on the test sets. FDR and FAR are calculated as:

FDR = 1 - N_MAE/N_abnormal

FAR = N_FAE/N_normal
wherein N_MAE, N_abnormal, N_FAE and N_normal respectively represent the numbers of missed-alarm events, abnormal samples, false-alarm events and normal samples.
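Under the definitions above, FDR and FAR can be computed as in the following sketch; the function name `fdr_far` is hypothetical, and each sample is treated as one event for counting purposes.

```python
def fdr_far(labels, alarms):
    """Fault detection rate and false alarm rate.

    labels: 1 = abnormal sample, 0 = normal; alarms: 1 = alarm raised.
    FDR = 1 - N_MAE / N_abnormal, FAR = N_FAE / N_normal.
    """
    labels, alarms = list(labels), list(alarms)
    n_mae = sum(1 for y, a in zip(labels, alarms) if y == 1 and a == 0)  # missed alarms
    n_fae = sum(1 for y, a in zip(labels, alarms) if y == 0 and a == 1)  # false alarms
    n_abnormal = sum(labels)
    n_normal = len(labels) - n_abnormal
    return 1 - n_mae / n_abnormal, n_fae / n_normal
```

A higher FDR and a lower FAR together indicate a better detector, which is the comparison made in Tables 1 and 2.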
It can be found that, on both test sets, the fault false alarm rate of the proposed method is the lowest of the four methods, and its fault detection rate is on the whole the highest of the four; the superiority of the proposed method over the others is most remarkable on test set 2, which contains the gradual fault. This demonstrates the feasibility and effectiveness of the proposed method.
TABLE 1. Fault detection rate (FDR) comparison (percent)
TABLE 2. Fault false alarm rate (FAR) comparison (percent)
It is apparent that the above embodiment is given merely by way of illustration and is not limiting of the embodiments. Other variations or modifications in light of the above teaching will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to enumerate all embodiments exhaustively. Obvious variations or modifications derived therefrom remain within the protection scope of the present invention.
Claims (10)
1. A reactor core temperature anomaly detection method of an incremental characteristic decoupling self-encoder, characterized by comprising the following steps:
inputting a nuclear reactor core temperature data sample acquired in real time into a trained incremental characteristic decoupling self-encoder model to obtain a characteristic vector and a reconstructed sample, and calculating statistics based on the characteristic vector and the reconstructed sample to perform anomaly detection on the core temperature;
the incremental characteristic decoupling self-encoder model is obtained through training by the following method:
constructing a training set, wherein each sample of the training set is nuclear reactor core temperature data acquired during normal operation of a nuclear reactor;
constructing an incremental characteristic decoupling self-encoder, wherein the incremental characteristic decoupling self-encoder consists of an input layer I, an adjusting layer P, a characteristic layer F and an output layer O; setting the initial neuron numbers of an adjusting layer P and a characteristic layer F;
inputting samples of the training set to an incremental characteristic decoupling self-encoder to obtain feature vectors and reconstructed samples, and performing neuron incremental iterative training on the incremental characteristic decoupling self-encoder based on a loss function until the model performance index meets the requirement; the loss function includes: a first loss function consisting of reconstruction losses; a second loss function consisting of a sum of reconstruction loss and hidden space decoupling loss; the model performance index comprises: the reconstruction error index R is the same as the reconstruction loss in value; the hidden space characteristic correlation index C is the same as the hidden space decoupling loss in value; wherein:
if C < C_th and R < R_th, the reconstruction capability and the hidden space decoupling degree of the current model are considered to meet the requirements, and model training is finished;
if C < C_th and R > R_th, the hidden space decoupling degree of the current model is considered to meet the requirement, but the reconstruction capability is insufficient, and neuron increment and training are performed at the feature layer F;
if C > C_th and R < R_th, the reconstruction capability of the current model is considered to meet the requirement, but the hidden space decoupling degree is insufficient, and neuron increment and training are performed at the adjustment layer P;
if C > C_th and R > R_th, neither the reconstruction capability nor the hidden space decoupling degree of the current model is considered up to standard, the requirement on the hidden space decoupling degree is met preferentially, and neuron increment and training are performed at the adjustment layer P;
wherein C_th and R_th respectively represent the thresholds of the hidden space feature correlation index C and the reconstruction error index R.
2. The method according to claim 1, wherein after the feature layer F performs the neuron increment, the network parameters of the mapping between the input layer I and the adjustment layer P are unchanged; the mapping between the adjustment layer P and the feature layer F fixes the network parameters that participated in the previous round of training and updates those of the neuron newly added in the current round, which generate the new feature vector f_k, the subscript k representing the hidden space dimension of the current model after the neuron is added; and the network parameters of the mapping between the feature layer F and the output layer O are completely updated.
3. The method according to claim 1, wherein after the adjustment layer P performs the neuron increment, the mapping between the input layer I and the adjustment layer P fixes the network parameters that participated in the previous round of training and updates those of the neuron newly added in the current round, which generate the new vector p_(j+1), the subscript j representing the number of neurons of the adjustment layer P that participated in the previous round of training; the mapping between the adjustment layer P and the feature layer F fixes the network parameters that participated in the previous round of training and, in the current round, updates the parameters that generate the new feature vector from the matrix combined with the new vector p_(j+1); and the network parameters of the mapping between the feature layer F and the output layer O are completely updated.
4. The method of claim 1, wherein the hidden space decoupling loss Loss_C is expressed as:
Cov(f_k, f_i) = E(f_k·f_i) - E(f_k)E(f_i)
where k is the hidden space dimension of the current model, f_k is the k-th dimension feature vector output by the feature layer F, f_i (i = 1, 2, ..., k-1) are the k-1 feature vectors extracted in previous rounds, and the function E(·) computes the mean of a feature vector.
6. The method of claim 1, wherein the second loss function is expressed as:
Loss_total = Loss_R + β·Loss_C
wherein β is a model hyper-parameter, Loss_R is the reconstruction loss, and Loss_C is the hidden space decoupling loss.
7. The method of claim 1, wherein the statistics comprise the T² and SPE statistics.
8. The method of claim 1, wherein the anomaly detection of core temperature is performed based on the eigenvector and the reconstructed sample calculation statistic, in particular:
and calculating statistics based on the feature vector and the reconstructed sample, wherein if any calculated statistic exceeds its control limit, it indicates that a fault has occurred in the nuclear reactor operation process.
9. The method of claim 8, wherein the control limits of the statistics are calculated by a method of kernel density estimation.
10. The method of claim 1, wherein the nuclear reactor core temperature data comprises core temperatures at a plurality of stations acquired by sensors distributed at different locations in a reactor core.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310255054.1A CN116431966A (en) | 2023-03-16 | 2023-03-16 | Reactor core temperature anomaly detection method of incremental characteristic decoupling self-encoder |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116431966A true CN116431966A (en) | 2023-07-14 |
Family
ID=87086363
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310255054.1A Pending CN116431966A (en) | 2023-03-16 | 2023-03-16 | Reactor core temperature anomaly detection method of incremental characteristic decoupling self-encoder |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116431966A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116738354A (en) * | 2023-08-15 | 2023-09-12 | 国网江西省电力有限公司信息通信分公司 | Method and system for detecting abnormal behavior of electric power Internet of things terminal |
CN116738354B (en) * | 2023-08-15 | 2023-12-08 | 国网江西省电力有限公司信息通信分公司 | Method and system for detecting abnormal behavior of electric power Internet of things terminal |
CN117499199A (en) * | 2023-08-30 | 2024-02-02 | 长沙理工大学 | VAE-based information enhanced decoupling network fault diagnosis method and system |
CN117150243A (en) * | 2023-10-27 | 2023-12-01 | 湘江实验室 | Fault isolation and estimation method based on fault influence decoupling network |
CN117150243B (en) * | 2023-10-27 | 2024-01-30 | 湘江实验室 | Fault isolation and estimation method based on fault influence decoupling network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116431966A (en) | Reactor core temperature anomaly detection method of incremental characteristic decoupling self-encoder | |
CN106251059B (en) | Cable state evaluation method based on probabilistic neural network algorithm | |
CN116757534B (en) | Intelligent refrigerator reliability analysis method based on neural training network | |
Lindemann et al. | Anomaly detection and prediction in discrete manufacturing based on cooperative LSTM networks | |
CN106951695A (en) | Plant equipment remaining life computational methods and system under multi-state | |
CN113642754B (en) | Complex industrial process fault prediction method based on RF noise reduction self-coding information reconstruction and time convolution network | |
CN109472097B (en) | Fault diagnosis method for online monitoring equipment of power transmission line | |
CN107807860B (en) | Power failure analysis method and system based on matrix decomposition | |
CN113094860B (en) | Industrial control network flow modeling method based on attention mechanism | |
CN112784920A (en) | Cloud-side-end-coordinated dual-anti-domain self-adaptive fault diagnosis method for rotating part | |
CN116020879B (en) | Technological parameter-oriented strip steel hot continuous rolling space-time multi-scale process monitoring method and device | |
CN110244690B (en) | Multivariable industrial process fault identification method and system | |
CN117076171A (en) | Abnormality detection and positioning method and device for multi-element time sequence data | |
CN117032165A (en) | Industrial equipment fault diagnosis method | |
CN114581699A (en) | Transformer state evaluation method based on deep learning model in consideration of multi-source information | |
CN117313796A (en) | Wind power gear box fault early warning method based on DAE-LSTM-KDE model | |
CN117833468A (en) | Maintenance method for circuit breaker unit transmission control part of power distribution ring main unit in operation | |
CN111723857B (en) | Intelligent monitoring method and system for running state of process production equipment | |
CN112380763A (en) | System and method for analyzing reliability of in-pile component based on data mining | |
CN117540309A (en) | Method, system, equipment and medium for identifying anomaly of aircraft parameters | |
CN114548701B (en) | Full-measurement-point-oriented coupling structure analysis and estimation process early warning method and system | |
CN115564075B (en) | Main and auxiliary integrated fault collaborative diagnosis method and system for urban power grid | |
CN115169426B (en) | Anomaly detection method and system based on similarity learning fusion model | |
CN114282608B (en) | Hidden fault diagnosis and early warning method and system for current transformer | |
Tian et al. | Anomaly detection with convolutional autoencoder for predictive maintenance |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |