CN116431966A - Reactor core temperature anomaly detection method of incremental characteristic decoupling self-encoder - Google Patents

Reactor core temperature anomaly detection method of incremental characteristic decoupling self-encoder

Info

Publication number
CN116431966A
CN116431966A (application CN202310255054.1A)
Authority
CN
China
Prior art keywords
feature
layer
decoupling
training
loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310255054.1A
Other languages
Chinese (zh)
Inventor
赵春晖
常俊宇
陈旭
范海东
王文海
阮伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202310255054.1A
Publication of CN116431966A
Legal status: Pending

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01KMEASURING TEMPERATURE; MEASURING QUANTITY OF HEAT; THERMALLY-SENSITIVE ELEMENTS NOT OTHERWISE PROVIDED FOR
    • G01K13/00Thermometers specially adapted for specific purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/18Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G21NUCLEAR PHYSICS; NUCLEAR ENGINEERING
    • G21CNUCLEAR REACTORS
    • G21C17/00Monitoring; Testing ; Maintaining
    • G21C17/10Structural combination of fuel element, control rod, reactor core, or moderator structure with sensitive instruments, e.g. for measuring radioactivity, strain
    • G21C17/112Measuring temperature
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02EREDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E30/00Energy generation of nuclear origin
    • Y02E30/30Nuclear fission reactors

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Analysis (AREA)
  • Software Systems (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Operations Research (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Plasma & Fusion (AREA)
  • Monitoring And Testing Of Nuclear Reactors (AREA)

Abstract

The invention discloses a reactor core temperature anomaly detection method based on an incremental feature decoupling autoencoder. To address the difficulty that traditional decoupled representation learning cannot easily pre-specify the feature dimension, the invention designs a feature increment strategy that generates the latent-space features step by step during autoencoder training and adaptively determines the feature dimension. An iterative training strategy based on dual performance indices is further proposed for model training, so that the features extracted by the autoencoder reconstruct the data well and satisfy the latent-space feature decoupling requirement. Finally, statistics are used to describe the feature space and the residual space of the core temperature data separately, realizing comprehensive anomaly detection of the reactor core temperature. In the anomaly detection task on multi-point core temperature data of a nuclear reactor core, the method effectively reduces the false alarm rate and improves the fault detection rate, providing practical support for the safe, stable operation and intelligent operation and maintenance of nuclear reactors.

Description

A Reactor Core Temperature Anomaly Detection Method Based on an Incremental Feature Decoupling Autoencoder

Technical Field

The invention discloses a reactor core temperature anomaly detection method based on an incremental feature decoupling autoencoder. The invention belongs to the field of industrial fault detection, and in particular relates to anomaly detection of the core temperature of nuclear reactors.

Background Art

Nuclear power generation is a clean, economical and efficient way of generating electricity, which places higher demands on the continuous, safe and stable operation of nuclear power equipment. The reactor core is the energy core of the entire nuclear power system, and the core temperature is the most direct indicator of the health of the core. If anomalies in the temperature distribution are not detected in time, they are very likely to trigger a series of major accidents such as core meltdown, causing serious casualties and economic losses. Carrying out anomaly detection on the nuclear reactor core temperature, finding faults in time and avoiding major accidents is therefore of great significance for the production safety of nuclear power plants. Traditional core temperature anomaly detection generally follows a post-accident monitoring approach, relying on the operators' professional experience and mechanistic knowledge to judge and summarize the core temperature trends during and after an accident; this entails high labor cost, low detection efficiency and poor real-time performance. In recent years, with the development of machine learning and artificial intelligence, data-driven anomaly detection methods have advanced considerably and have been widely applied to industrial process fault detection with good results. It is therefore urgent to develop data-driven fault detection for nuclear reactors that exploits the characteristics of core temperature data.

Data-driven anomaly detection methods do not depend on mechanism-related expertise; relying only on the large amount of data collected during system operation, they capture information such as the latent coupling relationships among variables and thereby achieve efficient, real-time anomaly detection. Commonly used approaches include multivariate statistical analysis methods such as principal component analysis (PCA), partial least squares (PLS) and independent component analysis (ICA), as well as deep learning methods such as the autoencoder (AE) and the convolutional neural network (CNN). However, the different core temperature measuring points exhibit complex nonlinear coupling relationships; multivariate statistical analysis methods generally struggle to capture the nonlinear characteristics of the data and therefore to achieve highly accurate anomaly detection. Deep learning methods can learn the nonlinear features hidden in the data through the nonlinear mappings between neurons, but the features extracted in the model's latent space may be strongly coupled, i.e. there is an information redundancy problem. Monitoring statistics computed from redundant features may fail to describe accurately how far test data deviate from the model in the feature space, leading to missed alarms and false alarms.

Anomaly detection methods based on decoupled representation learning add decoupling constraints on latent-space generation to traditional deep learning models in order to overcome the feature redundancy problem, as in the variational autoencoder (VAE). However, such methods usually require the dimension of the model's latent space to be preset manually and fixed before training, and the choice of latent dimension has a large influence on model performance; this makes it difficult to apply decoupled-representation-learning-based anomaly detection methods in practical modeling.

Summary of the Invention

The object of the present invention is to address the difficulty that traditional decoupled representation learning cannot easily pre-specify the feature dimension, by proposing a reactor core temperature anomaly detection method based on an incremental feature decoupling autoencoder. The method generates the latent-space features of the autoencoder step by step and adaptively determines the feature dimension, so that the extracted features satisfy the dual requirements of strong data reconstruction ability and fully decoupled latent-space features, and constructs statistics from the latent space and the residual space of the core temperature data for anomaly detection.

The object of the invention is achieved through the following technical solution:

A reactor core temperature anomaly detection method based on an incremental feature decoupling autoencoder, specifically:

A nuclear reactor core temperature data sample collected in real time is input into the trained incremental feature decoupling autoencoder model to obtain a feature vector and a reconstructed sample, and statistics computed from the feature vector and the reconstructed sample are used to detect anomalies in the core temperature.

The incremental feature decoupling autoencoder model is trained as follows:

A training set is constructed, each sample of which is nuclear reactor core temperature data collected during normal operation of the nuclear reactor.

An incremental feature decoupling autoencoder is constructed, consisting of an input layer I, an adjustment layer P, a feature layer F and an output layer O; the initial numbers of neurons in the adjustment layer P and the feature layer F are set.

The samples of the training set are input into the incremental feature decoupling autoencoder to obtain feature vectors and reconstructed samples, and the autoencoder is trained by neuron-incremental iteration based on the loss functions until the model performance indices meet the requirements. The loss functions include a first loss function, consisting of the reconstruction loss, and a second loss function, consisting of the sum of the reconstruction loss and the latent-space decoupling loss. The model performance indices include the reconstruction error index R, numerically equal to the reconstruction loss, and the latent-space feature correlation index C, numerically equal to the latent-space decoupling loss, where:

If C < C_th and R < R_th, the reconstruction ability and the latent-space decoupling degree of the current model both meet the requirements, and model training is complete.

If C < C_th and R > R_th, the latent-space decoupling degree of the current model meets the requirement but its reconstruction ability is insufficient, and a neuron increment and training are performed in the feature layer F.

If C > C_th and R < R_th, the reconstruction ability of the current model meets the requirement but its latent-space decoupling degree is insufficient, and a neuron increment and training are performed in the adjustment layer P.

If C > C_th and R > R_th, neither the reconstruction ability nor the latent-space decoupling degree of the current model is up to standard; the decoupling requirement is satisfied first, and a neuron increment and training are performed in the adjustment layer P.

Here C_th and R_th denote the thresholds of the latent-space feature correlation index C and the reconstruction error index R, respectively.

Further, after a neuron increment in the feature layer F, during training the network parameters of the mapping between the input layer I and the adjustment layer P are kept unchanged; in the mapping between the adjustment layer P and the feature layer F, the network parameters that participated in the previous round of training are fixed and the network parameters of the newly added neuron used to generate the new feature vector f_k are updated, where the subscript k denotes the latent-space dimension of the current model after the neuron is added; and the network parameters of the mapping between the feature layer F and the output layer O are completely updated.

Further, after a neuron increment in the adjustment layer P, during training the mapping between the input layer I and the adjustment layer P fixes the network parameters that participated in the previous round of training and updates the network parameters of the newly added neuron used to generate the new vector p_{j+1}, where the subscript j denotes the number of neurons of the adjustment layer P that participated in the previous round of training; the mapping between the adjustment layer P and the feature layer F fixes the network parameters that participated in the previous round of training and updates the network parameters that generate the new feature vector from the matrix containing the new vector p_{j+1}; and the network parameters of the mapping between the feature layer F and the output layer O are completely updated.

Further, the latent-space decoupling loss Loss_C is expressed as:

[Equation image: Loss_C, constructed from the covariances Cov(f_k, f_i) for i = 1, 2, …, k−1]

Cov(f_k, f_i) = E(f_k · f_i) − E(f_k)E(f_i)

where k is the latent-space dimension of the current model, f_k is the k-th feature vector output by the feature layer F, f_i (i = 1, 2, …, k−1) are the k−1 feature vectors extracted in previous rounds, and the function E computes the mean of a feature vector.
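For illustration only, the covariance term above can be sketched in PyTorch as follows; the way the k−1 covariance terms are aggregated into Loss_C (here, the mean absolute covariance) is an assumption, since the exact expression appears in the patent only as an equation image.

```python
import torch

def decoupling_loss(f_new: torch.Tensor, f_prev: torch.Tensor) -> torch.Tensor:
    """Latent-space decoupling penalty between the newest feature and earlier ones.

    f_new:  (n, 1) feature vector f_k produced by the newly added neuron.
    f_prev: (n, k-1) feature vectors f_1 ... f_{k-1} from previous rounds.
    """
    # Cov(f_k, f_i) = E(f_k * f_i) - E(f_k) E(f_i), estimated over the n samples
    cov = (f_new * f_prev).mean(dim=0) - f_new.mean() * f_prev.mean(dim=0)
    return cov.abs().mean()  # assumed aggregation of the k-1 covariance terms
```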

Further, the reconstruction loss Loss_R is expressed as:

[Equation image: Loss_R, the reconstruction error expressed with the Frobenius norm of X − X̂ and the sample number n]

where ‖·‖_F denotes the Frobenius norm, X is the matrix of nuclear reactor core temperature data samples input to the model, X̂ is the reconstructed sample matrix, and n is the number of samples.

Further, the second loss function is expressed as:

Loss_total = Loss_R + β·Loss_C

where β is a model hyperparameter, Loss_R is the reconstruction loss and Loss_C is the latent-space decoupling loss.
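A minimal sketch of the two loss terms combined is given below; the normalization of the reconstruction loss (squared Frobenius norm divided by the sample number n) is an assumption consistent with the symbols described above, and decoupling_loss refers to the sketch following the Loss_C definition.

```python
import torch

def reconstruction_loss(X: torch.Tensor, X_hat: torch.Tensor) -> torch.Tensor:
    """Loss_R: squared Frobenius norm of X - X_hat, averaged over the n samples (assumed form)."""
    return torch.linalg.norm(X - X_hat, ord="fro") ** 2 / X.shape[0]

def total_loss(X, X_hat, f_new, f_prev, beta: float = 1.0) -> torch.Tensor:
    """Loss_total = Loss_R + beta * Loss_C (the second loss function)."""
    # decoupling_loss: see the sketch given after the Loss_C definition above
    return reconstruction_loss(X, X_hat) + beta * decoupling_loss(f_new, f_prev)
```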

Further, the statistics include the T² statistic and the SPE statistic.

Further, anomaly detection of the core temperature based on the statistics computed from the feature vector and the reconstructed sample is performed as follows:

the statistics are computed from the feature vector and the reconstructed sample, and if any computed statistic exceeds its control limit, a fault has occurred in the nuclear reactor operating process.

Further, the control limits of the statistics are calculated by kernel density estimation.

Further, the nuclear reactor core temperature data comprise the core temperatures of multiple measuring points collected by sensors distributed at different positions of the reactor core.

The beneficial effects of the invention are as follows. Addressing the difficulty that traditional decoupled representation learning cannot easily pre-specify the feature dimension, the invention proposes a reactor core temperature anomaly detection method based on an incremental feature decoupling autoencoder. A feature increment strategy is designed that generates the latent-space features step by step during autoencoder training and adaptively determines the feature dimension. An iterative training strategy based on dual performance indices is proposed for autoencoder training, so that the extracted features reconstruct the data well and satisfy the latent-space feature decoupling requirement. Finally, monitoring statistics are constructed from the feature space and the residuals of the data, realizing comprehensive anomaly detection of the core temperature data. Compared with anomaly detection methods based on traditional deep learning, the invention decouples the latent-space features of the model and, on this basis, builds statistics that capture data anomalies more accurately, reducing false alarms and missed alarms in fault detection. Compared with anomaly detection methods based on traditional decoupled representation learning, the invention adaptively determines the latent-space feature dimension, effectively reducing the difficulty of applying the method in practical modeling.

Brief Description of the Drawings

Fig. 1 is the overall framework flowchart of the invention.

Fig. 2 shows the fault detection results of the proposed method on test set 1, where the dotted line is the control limit and the solid line is the statistic computed for the test set samples.

Fig. 3 shows the fault detection results of the principal component analysis method on test set 1, where the dotted line is the control limit and the solid line is the statistic computed for the test set samples.

Fig. 4 shows the fault detection results of the traditional autoencoder method on test set 1, where the dotted line is the control limit and the solid line is the statistic computed for the test set samples.

Fig. 5 shows the fault detection results of the directly decoupled autoencoder method on test set 1, where the dotted line is the control limit and the solid line is the statistic computed for the test set samples.

Fig. 6 shows the fault detection results of the proposed method on test set 2, where the dotted line is the control limit and the solid line is the statistic computed for the test set samples.

Fig. 7 shows the fault detection results of the principal component analysis method on test set 2, where the dotted line is the control limit and the solid line is the statistic computed for the test set samples.

Fig. 8 shows the fault detection results of the traditional autoencoder method on test set 2, where the dotted line is the control limit and the solid line is the statistic computed for the test set samples.

Fig. 9 shows the fault detection results of the directly decoupled autoencoder method on test set 2, where the dotted line is the control limit and the solid line is the statistic computed for the test set samples.

Detailed Description of the Embodiments

In this embodiment, real nuclear reactor core temperature data are used to verify the effectiveness of the method. The invention is described in further detail below with reference to the accompanying drawings and a specific embodiment.

In the reactor core temperature anomaly detection method of the incremental feature decoupling autoencoder according to the invention, a nuclear reactor core temperature data sample collected in real time is input into the trained incremental feature decoupling autoencoder model to obtain a feature vector and a reconstructed sample, and statistics computed from the feature vector and the reconstructed sample are used to detect anomalies in the core temperature. The training of the incremental feature decoupling autoencoder model is the core of the invention; as shown in Fig. 1, the offline modeling and training comprises the following steps:

Step 1: construct a training set, each sample of which is nuclear reactor core temperature data collected during normal operation of the nuclear reactor.

In this embodiment, 6000 normal samples were collected as the training set to build the model, and 6000 samples were taken for each of two different fault types occurring in the process as two test sets for anomaly detection. Each sample contains the temperature variables of 40 measuring points, collected by sensors axially distributed over the top, middle and bottom sections of the nuclear reactor core, with a sampling interval of 60 seconds. The faults in the two test sets are caused by an abrupt change and a slow drift of the core temperature distribution, respectively, and start from the 2000th sample.

进一步地,对收集的数据进行数据标准化,具体如下:Further, standardize the collected data, as follows:

针对采集到的正常堆芯温度数据

Figure SMS_4
标准化的计算公式为:For the collected normal core temperature data
Figure SMS_4
The standardized calculation formula is:

Figure SMS_5
Figure SMS_5

其中,n表示样本数,m表示变量数;xi,r表示数据矩阵X中第i行第r列的元素,即采集的第i个样本中的第r个温度变量的值;

Figure SMS_6
是所有样本中第r个过程变量的均值;sr表示所有样本中第r个过程变量的标准差;xi,r则是对应样本、对应变量经过标准化后的数值。对数据X进行标准化后可得数据矩阵/>
Figure SMS_7
Among them, n represents the number of samples, m represents the number of variables; x i, r represents the element in the i-th row and r-th column in the data matrix X, that is, the value of the r-th temperature variable in the i-th sample collected;
Figure SMS_6
is the mean value of the r-th process variable in all samples; s r represents the standard deviation of the r-th process variable in all samples; x i,r is the standardized value of the corresponding sample and corresponding variable. After normalizing the data X , the data matrix can be obtained />
Figure SMS_7
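A small NumPy sketch of the per-variable standardization described above; computing the statistics only on the normal training data and reusing them for online samples is an assumption.

```python
import numpy as np

def standardize(X: np.ndarray):
    """Z-score each temperature variable of the (n, m) sample matrix X.

    Returns the standardized matrix together with the per-variable mean and
    standard deviation, so the same transform can be applied to new samples.
    """
    mean = X.mean(axis=0)           # per-variable mean (x-bar_r)
    std = X.std(axis=0, ddof=1)     # per-variable standard deviation (s_r)
    return (X - mean) / std, mean, std
```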

Step 2: construct the incremental feature decoupling autoencoder, model the normal data X, obtain feature vectors and reconstructed samples, and train the autoencoder by neuron-incremental iteration based on the loss functions until the model performance indices meet the requirements. This comprises the following sub-steps:

Step 2.1: the incremental feature decoupling autoencoder model consists of an input layer I, an adjustment layer P, a feature layer F and an output layer O, where the feature layer F is defined as the model's latent space and the number of neurons in this layer corresponds to the latent feature dimension. The feature layer F is designed so that neurons can be added incrementally: when the model reconstructs the data poorly, new neurons are added to the feature layer F and trained. The adjustment layer P is likewise designed so that neurons can be added incrementally: if the degree of feature decoupling in the feature layer F is insufficient during training, new neurons are added to the adjustment layer P and trained.
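To make the four-layer structure concrete, the following PyTorch sketch wires the input layer I, the adjustment layer P, the feature layer F and the output layer O together. The activation functions and the default widths (3 adjustment neurons and 1 feature neuron, as in the initial state of this embodiment) are illustrative assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn

class IncrementalDecouplingAE(nn.Module):
    """I -> P -> F -> O, with P and F sized so that neurons can later be added."""

    def __init__(self, n_inputs: int, n_adjust: int = 3, n_features: int = 1):
        super().__init__()
        self.enc_ip = nn.Linear(n_inputs, n_adjust)    # input layer I  -> adjustment layer P
        self.enc_pf = nn.Linear(n_adjust, n_features)  # adjustment layer P -> feature layer F
        self.dec_fo = nn.Linear(n_features, n_inputs)  # feature layer F -> output layer O

    def forward(self, x: torch.Tensor):
        p = torch.tanh(self.enc_ip(x))   # matrix P
        f = torch.tanh(self.enc_pf(p))   # latent features (feature layer F)
        x_hat = self.dec_fo(f)           # reconstruction
        return f, x_hat
```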

Step 2.2: set the numbers of neurons of the feature layer F and the adjustment layer P of the incremental feature decoupling autoencoder in its initial state. In general, the initial number of neurons of the feature layer F is set to 1, which allows the greatest freedom for adaptive increment. In this embodiment, the initial numbers of neurons of the feature layer F and the adjustment layer P are set to 1 and 3, respectively. When the model generates a one-dimensional feature, information passes through the model as follows: all information of the input data X is first mapped to the j neurons of the adjustment layer P, giving the matrix P; the information contained in P is then further compressed into the single neuron of the feature layer F, giving the feature vector f_1; finally, the feature f_1 is mapped to the output layer O, giving the reconstructed data X̂. The three mappings (from the input layer I to the adjustment layer P, from the adjustment layer P to the feature layer F, and from the feature layer F to the output layer O) each have their own network parameters. The first loss function used to train the model in its initial state is the same as that of a conventional autoencoder, namely the reconstruction loss Loss_R, computed in this embodiment as

[Equation image: Loss_R, the Frobenius-norm reconstruction error defined above]

where ‖·‖_F denotes the Frobenius norm.

Step 2.3: define the reconstruction error index R = Loss_R and set the corresponding threshold R_th. If R > R_th after the model is trained in its initial state, a new neuron is added to the feature layer F and the model is retrained to generate the second feature vector f_2, after which f_1 and f_2 are combined and mapped to the output layer O. Consider the mappings from the input layer I to the adjustment layer P, from the adjustment layer P to the feature layer F, and from the feature layer F to the output layer O when the model generates two-dimensional features. In this round of training, the parameters of the mapping between I and P are fixed to the values of the previous round and are not updated. The parameters of the mapping between P and F consist of two parts, one fixed and one updated: the fixed part is identical to the previous-round parameters and is not updated, while the updated part comprises the parameters newly added in this round to generate f_2. The parameters of the mapping between F and O are completely updated, i.e. the previous-round parameters are discarded and retrained. When generating f_2 in this round of training, the loss function adds a latent-space decoupling loss term Loss_C to the original reconstruction loss Loss_R, which can take the following form:

[Equation image: Loss_C, constructed from the covariances Cov(f_k, f_i) for i = 1, 2, …, k−1]

Cov(f_k, f_i) = E(f_k · f_i) − E(f_k)E(f_i)

where k is the latent-space dimension of the current model, f_k is the k-th feature vector, f_i (i = 1, 2, …, k−1) are the k−1 feature vectors extracted in previous rounds, and the function E computes the mean of a feature vector. The second loss function Loss_total is the sum of the reconstruction loss Loss_R and the latent-space decoupling loss Loss_C:

Loss_total = Loss_R + β·Loss_C

where β is a model hyperparameter. The model uses this loss function in all subsequent rounds of training.
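The feature-layer growth of step 2.3 might be sketched as follows: the P-to-F weights learned for the existing features are copied and their gradients masked so that only the row generating the new feature is trained, while the F-to-O decoder is rebuilt and fully retrained; the I-to-P mapping is left untouched. The use of gradient hooks for freezing is an implementation assumption.

```python
import torch
import torch.nn as nn

def grow_feature_layer(enc_pf: nn.Linear, dec_fo: nn.Linear):
    """Add one neuron to feature layer F (step 2.3) and rebuild the decoder."""
    n_adjust, k = enc_pf.in_features, enc_pf.out_features
    new_pf = nn.Linear(n_adjust, k + 1)
    with torch.no_grad():
        new_pf.weight[:k] = enc_pf.weight   # keep parameters of features f_1 ... f_k
        new_pf.bias[:k] = enc_pf.bias

    def mask_old_rows(grad: torch.Tensor) -> torch.Tensor:
        grad = grad.clone()
        grad[:k] = 0.0                      # freeze the previously trained rows
        return grad

    new_pf.weight.register_hook(mask_old_rows)
    new_pf.bias.register_hook(mask_old_rows)

    new_dec = nn.Linear(k + 1, dec_fo.out_features)  # F -> O mapping is fully retrained
    return new_pf, new_dec
```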

Step 2.4: define the latent-space feature correlation index C = Loss_C and set the corresponding threshold C_th. If C > C_th after the previous round of training, a neuron is added to the adjustment layer P and the corresponding model mappings are retrained: the vector p_{j+1} newly generated in the adjustment layer P is combined with the matrix P and the result is mapped to the feature layer F to regenerate the second feature vector f_2. In this round of training, the parameters of the mapping between the input layer I and the adjustment layer P consist of two parts, one fixed and one updated: the fixed part is identical to the previous parameters and is not updated, while the updated part comprises the parameters newly added in this round to generate p_{j+1}. The parameters of the mapping between the adjustment layer P and the feature layer F likewise consist of a fixed part, identical to the previous parameters and not updated, and an updated part, which updates the previous-round parameters used to regenerate f_2. The parameters of the mapping between the feature layer F and the output layer O are completely updated.

Step 2.5: on the basis of the specific neuron-increment operations in steps 2.3 and 2.4, and so on, the incremental feature decoupling autoencoder is trained by neuron-incremental iteration based on the loss functions until the model performance indices meet the requirements. The complete iterative training strategy is summarized as follows:

Judgment condition 1: if C < C_th and R < R_th, the reconstruction ability and the latent-space decoupling degree of the current model both meet the requirements, and model training is complete.

Judgment condition 2: if C < C_th and R > R_th, the latent-space decoupling degree of the current model meets the requirement but its reconstruction ability is insufficient; a neuron increment and training are performed in the feature layer F.

Judgment condition 3: if C > C_th and R < R_th, the reconstruction ability of the current model meets the requirement but its latent-space decoupling degree is insufficient; a neuron increment and training are performed in the adjustment layer P.

Judgment condition 4: if C > C_th and R > R_th, neither the reconstruction ability nor the latent-space decoupling degree of the current model is up to standard; the decoupling requirement is satisfied first, and the operation is the same as for condition 3.

Here C_th and R_th denote the thresholds of the latent-space feature correlation index C and the reconstruction error index R, respectively. In this embodiment they are set to 0.8 and 0.1.
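Expressed as a decision routine, the four judgment conditions amount to the sketch below; the thresholds 0.8 and 0.1 are those reported for this embodiment, and the surrounding training loop is omitted.

```python
C_TH, R_TH = 0.8, 0.1   # thresholds used in this embodiment

def next_action(C: float, R: float) -> str:
    """Map the dual performance indices to the next training action."""
    if C < C_TH and R < R_TH:
        return "stop"                  # condition 1: training finished
    if C < C_TH:                       # condition 2: decoupling ok, reconstruction poor
        return "grow_feature_layer_F"
    return "grow_adjust_layer_P"       # conditions 3 and 4: decoupling insufficient
```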

In this embodiment, the incremental feature decoupling autoencoder is trained according to the above strategy until the iteration termination condition is satisfied, at which point the feature layer F and the adjustment layer P contain 6 and 9 neurons, respectively. The model structure is then fixed and the mappings obtained in the last round of training are retained; model training is complete.

Statistics are designed on the feature space and the residual space of the normal core temperature data for anomaly detection. In this embodiment, the T² and SPE statistics are used and computed as follows. For each sample of the normal data X, the trained incremental feature decoupling autoencoder extracts its feature vector f_t and produces the reconstructed output, and the reconstruction residual e (the difference between the sample and its reconstruction) is computed. Based on these vectors, the T² and SPE statistics are constructed as:

T² = f_tᵀ Σ⁻¹ f_t

SPE = eᵀ e

where Σ is the covariance matrix of the features extracted by the model from the normal data. The control limits Ctr_{T²} and Ctr_{SPE} of the T² and SPE statistics are computed by kernel density estimation.

At this point, the two statistics and their control limits have been obtained, and fault detection can be carried out with the trained incremental feature decoupling autoencoder. Specifically, a nuclear reactor core temperature data sample collected in real time is input into the trained model, which extracts its feature vector f_new and produces the reconstructed output; the reconstruction residual e_new is computed, and the T² and SPE statistics are calculated for anomaly detection of the core temperature:

T² = f_newᵀ Σ⁻¹ f_new

SPE = e_newᵀ e_new

If either the T² or the SPE statistic exceeds its control limit, a fault has occurred in the nuclear reactor operating process. The overall framework of the invention is shown in Fig. 1. The samples of the two test sets are used below for fault detection analysis to illustrate the effect of the invention.
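A NumPy/SciPy sketch of the monitoring statistics and their KDE-based control limits follows; the confidence level (99%) and the grid used to invert the estimated density are assumptions, since the text only states that kernel density estimation is used.

```python
import numpy as np
from scipy.stats import gaussian_kde

def t2_spe(F: np.ndarray, E: np.ndarray, sigma_inv: np.ndarray):
    """T^2 = f^T Sigma^{-1} f and SPE = e^T e for each row of F (features) and E (residuals)."""
    t2 = np.einsum("ij,jk,ik->i", F, sigma_inv, F)
    spe = np.einsum("ij,ij->i", E, E)
    return t2, spe

def kde_control_limit(stat: np.ndarray, alpha: float = 0.99) -> float:
    """Control limit as the alpha-quantile of the kernel density estimate (assumed level)."""
    kde = gaussian_kde(stat)
    grid = np.linspace(stat.min(), stat.max() * 2.0, 2000)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]
    return float(grid[np.searchsorted(cdf, alpha)])
```

On the normal training data, Σ would typically be estimated as np.cov(F, rowvar=False) and its inverse reused online (an implementation assumption); a new sample then raises an alarm when either statistic exceeds its control limit.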

For fault detection, the invention is tested on the test sets of the two fault types, abrupt temperature change and slow drift; the results are shown in Fig. 2 and Fig. 6, respectively. Under the abrupt fault, both statistics of the proposed method clearly exceed the control limits at the moment the fault occurs, so the fault is detected promptly and sensitively. Under the slowly drifting fault, the two statistics also capture the slow, gradual abnormality in the data and detect the fault fairly accurately.

To show more clearly the advantages of the feature increment strategy and the iterative training strategy designed in the invention for anomaly detection, the model obtained by discarding both the increment strategy and the iterative training strategy of the incremental feature decoupling autoencoder is named the directly decoupled autoencoder (trained only with the second loss function as a constraint) and used for experimental comparison. Principal component analysis (PCA) (W. Svante, K. Esbensen, and P. Geladi, "Principal component analysis," Chemom. Intell. Lab. Syst., vol. 2, no. 1-3, pp. 37-52, 1987), the traditional autoencoder (AE) (Sakurada, Mayu, and Takehisa Yairi, "Anomaly detection using autoencoders with nonlinear dimensionality reduction," Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis, 2014) and the directly decoupled autoencoder are selected for comparison. The fault detection results of these methods on the two test sets are shown in Fig. 3 and Fig. 7, Fig. 4 and Fig. 8, and Fig. 5 and Fig. 9, respectively. In the experiments with the abrupt fault type, most of these methods detect the fault relatively accurately. In the experiments with the slowly drifting fault, however, they produce a large number of false alarms before the fault occurs, the time at which the fault is detected is greatly delayed, and the fault detection rate is low.

For a more intuitive comparison, Table 1 and Table 2 list the fault detection rate (FDR) and the false alarm rate (FAR) of the different methods on the test sets, respectively.

[Equation images: definitions of FDR and FAR]

Here N_MAE, N_abnormal, N_FAE and N_normal denote the numbers of missed-alarm events, abnormal samples, false-alarm events and normal samples, respectively.
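Since the FDR and FAR formulas themselves appear only as equation images, the sketch below uses the conventional definitions consistent with the quantities named in the text; treat the exact forms as assumptions.

```python
import numpy as np

def fdr_far(alarm: np.ndarray, is_abnormal: np.ndarray):
    """alarm, is_abnormal: boolean arrays over the test samples."""
    n_abnormal = int(is_abnormal.sum())              # N_abnormal
    n_normal = int((~is_abnormal).sum())             # N_normal
    n_missed = int((is_abnormal & ~alarm).sum())     # N_MAE: missed-alarm events
    n_false = int((~is_abnormal & alarm).sum())      # N_FAE: false-alarm events
    fdr = 1.0 - n_missed / n_abnormal                # fault detection rate (assumed form)
    far = n_false / n_normal                         # false alarm rate (assumed form)
    return fdr, far
```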

It can be seen that the false alarm rate of the proposed method is the lowest of the four methods on both test sets, and its fault detection rate is overall the highest of the four methods; the superiority of the proposed method over the other methods is most pronounced on test set 2, which contains the slowly drifting fault. This demonstrates the feasibility and effectiveness of the proposed method.

Table 1. Comparison of fault detection rates (FDR) of the different methods (%)

[Table image]

Table 2. Comparison of false alarm rates (FAR) of the different methods (%)

[Table image]

Obviously, the above embodiment is merely an example given for clarity of description and does not limit the implementation. For those of ordinary skill in the art, other changes or variations of different forms can be made on the basis of the above description. It is neither necessary nor possible to enumerate all implementations here; obvious changes or variations derived therefrom remain within the protection scope of the invention.

Claims (10)

1. A reactor core temperature anomaly detection method of an incremental characteristic decoupling self-encoder, characterized by comprising the following steps:
inputting a nuclear reactor core temperature data sample acquired in real time into a trained incremental characteristic decoupling self-encoder model to obtain a characteristic vector and a reconstructed sample, and calculating statistics based on the characteristic vector and the reconstructed sample to perform anomaly detection on the core temperature;
the incremental characteristic decoupling self-encoder model is obtained through training by the following method:
constructing a training set, wherein each sample of the training set is nuclear reactor core temperature data acquired during normal operation of a nuclear reactor;
constructing an incremental characteristic decoupling self-encoder, wherein the incremental characteristic decoupling self-encoder consists of an input layer I, an adjusting layer P, a characteristic layer F and an output layer O; setting the initial neuron numbers of an adjusting layer P and a characteristic layer F;
inputting samples of the training set to an incremental characteristic decoupling self-encoder to obtain feature vectors and reconstructed samples, and performing neuron incremental iterative training on the incremental characteristic decoupling self-encoder based on a loss function until the model performance index meets the requirement; the loss function includes: a first loss function consisting of reconstruction losses; a second loss function consisting of a sum of reconstruction loss and hidden space decoupling loss; the model performance index comprises: the reconstruction error index R is the same as the reconstruction loss in value; the hidden space characteristic correlation index C is the same as the hidden space decoupling loss in value; wherein:
if C < C_th and R < R_th, the reconstruction capability and the hidden space decoupling degree of the current model are considered to meet the requirements, and model training is finished;
if C < C_th and R > R_th, the hidden space decoupling degree of the current model is considered to meet the requirement but the reconstruction capability is insufficient, and neuron increment and training are needed in the feature layer F;
if C > C_th and R < R_th, the reconstruction capability of the current model is considered to meet the requirement but the hidden space decoupling degree is insufficient, and neuron increment and training are needed in the adjusting layer P;
if C > C_th and R > R_th, neither the reconstruction capability nor the hidden space decoupling degree of the current model is considered to be up to standard, the requirement on the hidden space decoupling degree is met preferentially, and neuron increment and training are performed in the adjusting layer P;
wherein C_th and R_th respectively represent the thresholds of the hidden space feature correlation index C and the reconstruction error index R.
2. The method according to claim 1, wherein after the neuron increment in the feature layer F, during training the network parameters of the mapping between the input layer I and the adjustment layer P are unchanged; the mapping between the adjustment layer P and the feature layer F fixes the network parameters that participated in the previous round of training and updates the network parameters of the neurons newly added in the present round for generating the new feature vector f_k, the subscript k representing the hidden space dimension of the current model after the neurons are newly added; and the network parameters of the mapping between the feature layer F and the output layer O are completely updated.
3. The method according to claim 1, wherein after the neuron increment in the adjustment layer P, during training the mapping between the input layer I and the adjustment layer P fixes the network parameters involved in the previous round of training and updates the network parameters of the neurons newly added in the current round for generating the new vector p_{j+1}, the subscript j indicating the number of neurons of the adjustment layer P participating in the previous round of training; the mapping between the adjustment layer P and the feature layer F fixes the network parameters participating in the previous round of training and updates the network parameters that generate the new feature vector from the matrix containing the new vector p_{j+1}; and the network parameters of the mapping between the feature layer F and the output layer O are fully updated.
4. The method of claim 1, wherein the hidden space decoupling loss Loss_C is expressed as:
[Equation image]
Cov(f_k, f_i) = E(f_k · f_i) − E(f_k)E(f_i)
where k is the hidden space dimension of the current model, f_k is the k-th dimension feature vector output by the feature layer F, f_i (i = 1, 2, …, k−1) are the k−1 feature vectors extracted in previous rounds, and the function E computes the mean of a feature vector.
5. The method of claim 1, wherein the reconstruction loss Loss_R is expressed as:
[Equation image]
where ‖·‖_F represents the Frobenius norm, X is the matrix of nuclear reactor core temperature data samples input to the model, X̂ is the reconstructed sample matrix, and n is the number of samples.
6. The method of claim 1, wherein the second loss function is expressed as:
Loss_total = Loss_R + β·Loss_C
wherein β is a model hyperparameter, Loss_R is the reconstruction loss, and Loss_C is the hidden space decoupling loss.
7. The method of claim 1, wherein the statistics comprise the T² statistic and the SPE statistic.
8. The method of claim 1, wherein anomaly detection of the core temperature based on the statistics calculated from the feature vector and the reconstructed sample is performed as follows:
the statistics are calculated based on the feature vector and the reconstructed sample, and if any calculated statistic exceeds its control limit, it indicates that a fault has occurred in the nuclear reactor operating process.
9. The method of claim 8, wherein the control limit of the statistic is calculated by a kernel density estimation method.
10. The method of claim 1, wherein the nuclear reactor core temperature data comprise core temperatures at a plurality of measuring points acquired by sensors distributed at different locations in the reactor core.
CN202310255054.1A 2023-03-16 2023-03-16 Reactor core temperature anomaly detection method of incremental characteristic decoupling self-encoder Pending CN116431966A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310255054.1A CN116431966A (en) 2023-03-16 2023-03-16 Reactor core temperature anomaly detection method of incremental characteristic decoupling self-encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310255054.1A CN116431966A (en) 2023-03-16 2023-03-16 Reactor core temperature anomaly detection method of incremental characteristic decoupling self-encoder

Publications (1)

Publication Number Publication Date
CN116431966A true CN116431966A (en) 2023-07-14

Family

ID=87086363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310255054.1A Pending CN116431966A (en) 2023-03-16 2023-03-16 Reactor core temperature anomaly detection method of incremental characteristic decoupling self-encoder

Country Status (1)

Country Link
CN (1) CN116431966A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116738354A (en) * 2023-08-15 2023-09-12 国网江西省电力有限公司信息通信分公司 Method and system for detecting abnormal behavior of electric power Internet of things terminal
CN116738354B (en) * 2023-08-15 2023-12-08 国网江西省电力有限公司信息通信分公司 Method and system for detecting abnormal behavior of electric power Internet of things terminal
CN117499199A (en) * 2023-08-30 2024-02-02 长沙理工大学 VAE-based information enhanced decoupling network fault diagnosis method and system
CN117150243A (en) * 2023-10-27 2023-12-01 湘江实验室 Fault isolation and estimation method based on fault influence decoupling network
CN117150243B (en) * 2023-10-27 2024-01-30 湘江实验室 Fault isolation and estimation method based on fault influence decoupling network

Similar Documents

Publication Publication Date Title
CN112926273B (en) Method for predicting residual life of multivariate degradation equipment
CN116431966A (en) Reactor core temperature anomaly detection method of incremental characteristic decoupling self-encoder
CN109842373B (en) Photovoltaic array fault diagnosis method and device based on space-time distribution characteristics
CN109146246B (en) Fault detection method based on automatic encoder and Bayesian network
CN110738274A (en) A data-driven fault diagnosis method for nuclear power plant
CN110119854A (en) Voltage-stablizer water level prediction method based on cost-sensitive LSTM Recognition with Recurrent Neural Network
CN116381517A (en) A probabilistic prediction method for the remaining life of lithium batteries based on temporal convolutional attention mechanism
CN111881627B (en) Fault diagnosis method and system for nuclear power plant
CN107807860B (en) Power failure analysis method and system based on matrix decomposition
CN115563563A (en) Fault diagnosis method and device based on transformer oil chromatographic analysis
CN116738868B (en) Rolling bearing residual life prediction method
CN115618732A (en) Data inversion method for autonomous optimization of key parameters of nuclear reactor digital twin
CN115564310A (en) A new energy power system reliability assessment method based on convolutional neural network
CN118566770A (en) Estimation method and system for state of health value of battery system
CN117458480A (en) Photovoltaic power generation power short-term prediction method and system based on improved LOF
CN112651519A (en) Secondary equipment fault positioning method and system based on deep learning theory
CN114548701B (en) Process early warning method and system for coupling structure analysis and estimation of all measurement points
CN112380763A (en) System and method for analyzing reliability of in-pile component based on data mining
CN116125923B (en) Hybrid industrial process monitoring method and system based on mixed variable dictionary learning
CN117313796A (en) Wind power gear box fault early warning method based on DAE-LSTM-KDE model
CN114298413B (en) A method for predicting the swing trend of hydropower units
CN116702580A (en) Fermentation process fault monitoring method based on attention convolution self-encoder
CN112684778B (en) Steam generator water supply system diagnosis method based on multi-source information reinforcement learning
Li et al. A Fuzzy Reinforcement LSTM-based Long-term Prediction Model for Fault Conditions in Nuclear Power Plants
CN115357010B (en) State monitoring and fault diagnosis method for power grid safety and stability control system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination