CN116700011A - Fractional-order calculus energy-reduction guidance method with enhanced deep Transformer-Attention integrated prediction - Google Patents


Info

Publication number: CN116700011A
Authority: CN (China)
Prior art keywords: attention, energy consumption, energy, function
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202310867687.8A
Other languages: Chinese (zh)
Inventors: 殷林飞, 曹义, 胡立坤
Current Assignee: Guangxi University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Guangxi University
Application filed by Guangxi University
Priority to CN202310867687.8A
Publication of CN116700011A

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05B — CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 — Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 — Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion — electric
    • G05B13/04 — Adaptive control systems, electric, involving the use of models or simulators
    • G05B13/042 — Adaptive control systems, electric, involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 — INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S — SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00 — Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50 — Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention provides a fractional-order calculus energy-reduction guidance method with enhanced deep Transformer-Attention integrated prediction. The method considers the energy consumption of an integrated energy system, takes the factors influencing energy consumption and the energy production as inputs, and outputs an optimal energy-reduction guidance signal. The Transformer-Attention network and the efficient time-series prediction network with temporal attention units solve the problem of predicting the baseline energy consumption and output the best prediction result; the fractional-order stochastic dynamic calculus controller obtains the optimal energy-reduction guidance signal from the predicted energy consumption and the energy production. The method can reduce the overall energy consumption of the integrated energy system and improve its stability.

Description

A Fractional-Order Calculus Energy-Reduction Guidance Method with Enhanced Deep Transformer-Attention Integrated Prediction

Technical Field

The invention belongs to the fields of energy control technology for electric power systems, artificial intelligence, and the application of calculus in mathematics. It relates to a control method combining artificial intelligence with an integrated energy system and is applicable to long-term energy-reduction guidance of integrated energy systems.

Background Art

The patent application No. 2020108660490, filed on 2020-08-25 and titled "A Long-Term Price Guidance Method for Multi-Group Distributed Flexible Energy Service Providers", proposes a long-term dynamic game strategy under incomplete information; it considers only the long-term dynamic game among the studied objects and does not incorporate their influencing factors into the control strategy. The patent application No. 2021111974623, filed on 2022-01-18 and titled "A Long-Term Price Guidance Method with Dynamic Differential Control for Flexible Energy Hybrid Networks", considers stochastic dynamic differentiation only in the integer-order case; however, the relationship between energy consumption and time is nonlinear and non-Markovian, and integer-order stochastic dynamic differentiation is limited in describing non-Markovian relationships. The patent application No. 2022112767254, filed on 2022-10-18 and titled "A Fractional-Order Long-Term Price Guidance Method with Enhanced Deep-Attention Bidirectional Prediction", applies fractional-order stochastic dynamic differentiation to energy-consumption guidance for electric vehicles, but it is limited to that single object and does not consider the whole energy system; the consumption of a single electric vehicle has very little impact on the whole energy system, so the method lacks system-wide energy management.

Therefore, a fractional-order calculus energy-reduction guidance method with enhanced deep Transformer-Attention integrated prediction is proposed. The method considers the influence of the energy-consumption state, the energy-consumption coefficient, the season, the air temperature, and the access rules on energy consumption; it exploits the regulating role of energy consumption in the integrated energy system to resolve the imbalance between energy supply and energy consumption; it uses the Transformer-Attention network and an efficient time-series prediction network with temporal attention units to predict users' baseline energy consumption; it introduces the fractional-order integral of the Wiener process into the controller to better describe the random fluctuation of noise signals and the random disturbance of the system; from the perspective of the integrated energy system, it guides system energy through energy-reduction guidance signals, lowering energy consumption while preserving the experience on the consumption side; and in the long run it reduces the energy consumption of the integrated energy system and improves its stability.

Summary of the Invention

The invention proposes a fractional-order calculus energy-reduction guidance method with enhanced deep Transformer-Attention integrated prediction. The method combines a Transformer-Attention network, an efficient time-series prediction network with temporal attention units, and a fractional-order stochastic dynamic calculus controller for long-term energy-reduction guidance of an integrated energy system. It improves energy utilization, enhances the stability of the integrated energy system, promotes the integration of renewable energy, and reduces energy consumption. The method proceeds as follows:

Step (1): Establish the operational framework for energy-reduction guidance of the integrated energy system. The integrated energy system obtains energy production from power plants, boilers, and fossil fuels, and collects the expected energy consumption from the consumption side. It compares energy production with expected consumption and finds the optimal energy-reduction guidance signal through the guidance method; it then sends this signal to the consumption side to guide industrial, commercial, and residential energy use and reduce consumption.

Step (2): Propose fractional-order calculus control with enhanced deep Transformer-Attention integrated prediction, and predict the baseline energy consumption of the integrated energy system with the Transformer-Attention network and the efficient time-series prediction network with temporal attention units.

First, preprocess the data and extract features from the processed data. Then predict with the Transformer-Attention network and with the efficient time-series prediction network with temporal attention units, respectively; selecting among the prediction results yields the predicted baseline energy consumption.

The Transformer-Attention network is a deep architecture with multi-head attention as its basic computing unit, balancing the capture of long-term dependencies against low time complexity. It improves prediction accuracy through a ProbSparse self-attention mechanism, a self-attention distilling mechanism, and a generative decoder.

The Transformer-Attention network consists of an input layer, an encoder, a decoder, a fully connected layer, and an output layer.

First, the historical consumption of electricity, heat, and fuel is fed into the encoder to obtain a mapped sequence; then the target values to be predicted in the long sequence are padded with zeros and fed, together with the encoder's mapped sequence, into the decoder, which directly generates the required predicted output elements. The encoder shapes the t-th input sequence χ^t into a matrix:

X_en^t ∈ R^(L_x × d)

where X_en^t is the encoding matrix; t denotes time t; R^(L_x × d) is the set of real matrices of size L_x × d; L_x is the length of the sequence; and d is the input dimension.

The ProbSparse self-attention mechanism used by the Transformer-Attention network differs from the standard self-attention mechanism, whose scaled dot-product attention is:

A(Q, K, V) = Softmax(Q K^T / √d) V

where A(Q, K, V) is the standard self-attention value; Q is the query matrix; K is the queried (key) matrix; V is the content (value) matrix; K^T is the transpose of K; Softmax() is the normalized exponential function; L_Q is the length of Q; L_K is the length of K; Q ∈ R^(L_Q × d); and K ∈ R^(L_K × d).
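The standard mechanism above can be sketched in a few lines of NumPy; shapes and random values here are illustrative assumptions, not taken from the patent.

```python
# Minimal NumPy sketch of standard scaled dot-product attention,
# A(Q, K, V) = Softmax(Q K^T / sqrt(d)) V.
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along `axis`."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each row of Q attends over the rows of K/V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # (L_Q, L_K) scaled dot products
    return softmax(scores, axis=-1) @ V    # (L_Q, d) weighted values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # L_Q = 4, d = 8
K = rng.standard_normal((6, 8))   # L_K = 6
V = rng.standard_normal((6, 8))
A = attention(Q, K, V)
print(A.shape)  # (4, 8)
```

Each output row is a convex combination of the rows of V, since the softmax weights sum to 1.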

The ProbSparse self-attention mechanism rewrites the scaled dot-product attention of the standard mechanism as a kernel smoother in probability form:

A_i(q_i, K, V) = Σ_j [ k(q_i, k_j) / Σ_l k(q_i, k_l) ] v_j = Σ_j p(k_j | q_i) v_j

where A_i(q_i, K, V) is the self-attention value in probability form; i, j, and l are row indices; q_i is the i-th row of Q; k_j and k_l are the j-th and l-th rows of K; v_j is the j-th row of V; k() is the probability kernel; p(k_j | q_i) is the conditional probability distribution of k_j given q_i; and Σ_l k(q_i, k_l) is the normalizing sum of the conditional probabilities.

The probability kernel k(q_i, k_j) in the kernel smoother uses the asymmetric exponential kernel:

k(q_i, k_j) = exp(q_i k_j^T / √d)

where k_j^T is the transpose of k_j.

The uniform distribution against which the query's probability distribution is compared is:

q(k_j | q_i) = 1 / L_K

where q(k_j | q_i) denotes the uniform query distribution.

If p(k_j | q_i) is close to the uniform distribution q(k_j | q_i), the self-attention degenerates to a trivial average of the values V and is redundant for the prediction output. The similarity between the distributions p and q is therefore used to identify the important parts of the sequence, measured by the Kullback-Leibler divergence:

KL(q‖p) = ln Σ_l exp(q_i k_l^T / √d) − (1/L_K) Σ_j q_i k_j^T / √d − ln L_K

where KL(q‖p) is the Kullback-Leibler divergence between the distributions p and q; k_l^T is the transpose of k_l; and ln() is the logarithm with base e.

Dropping the constant ln L_K from the Kullback-Leibler divergence, the sparsity measure of the i-th query is defined as:

M(q_i, K) = ln Σ_{j=1}^{L_K} exp(q_i k_j^T / √d) − (1/L_K) Σ_{j=1}^{L_K} q_i k_j^T / √d

where M(q_i, K) is the sparsity measure of the i-th query.

Applying the sparsity measure within the standard self-attention mechanism yields the attention value of the ProbSparse self-attention mechanism:

A_s(Q, K, V) = Softmax(Q̄ K^T / √d) V

where A_s(Q, K, V) is the attention value of the ProbSparse self-attention mechanism, and Q̄ is a sparse matrix of the same size as Q that contains only the queries selected by the sparsity measure M(q_i, K).
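A hedged sketch of the query-sparsity measure follows; the top-u selection rule is an assumption based on the Informer-style formulation the text paraphrases, and all shapes are illustrative.

```python
# Sketch of the ProbSparse sparsity measure
# M(q_i, K) = ln sum_j exp(q_i k_j^T / sqrt(d)) - (1/L_K) sum_j q_i k_j^T / sqrt(d),
# used to keep only the most "distinguishable" queries.
import numpy as np

def sparsity_measure(Q, K):
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)                      # (L_Q, L_K) scaled scores
    m = S.max(axis=1, keepdims=True)              # shift for a stable log-sum-exp
    lse = np.log(np.exp(S - m).sum(axis=1)) + m[:, 0]
    return lse - S.mean(axis=1)                   # log-sum-exp minus mean, per query

rng = np.random.default_rng(1)
Q = rng.standard_normal((10, 8))
K = rng.standard_normal((12, 8))
M = sparsity_measure(Q, K)
u = 3
top_u = np.argsort(M)[-u:]   # indices of the u most informative queries
print(M.shape, top_u)
```

Because log-sum-exp always dominates the arithmetic mean, M is non-negative; queries with M near zero behave like the uniform distribution and can be dropped.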

The ProbSparse self-attention value is computed for every element of each encoding matrix. The distilling operation then extracts these attention values, privileges the dominant features, and generates a focused self-attention feature map in the next layer. The distilling step from the n-th layer to the (n+1)-th layer is:

X_{n+1}^t = MaxPool( ELU( Conv1d( [X_n^t]_AB ) ) )

where X_n^t and X_{n+1}^t are the self-attention feature maps of the n-th and (n+1)-th layers; MaxPool() is the max-pooling function; ELU() is the activation function; Conv1d() performs a 1-D convolution filter in the time dimension; and []_AB denotes the basic operations in the multi-head ProbSparse self-attention block.
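The distilling step can be sketched with plain NumPy; the kernel width of 3 with same-padding, the smoothing kernel values, and the pooling stride of 2 are all assumptions for illustration.

```python
# Minimal NumPy sketch of one self-attention distilling step,
# X_{n+1} = MaxPool(ELU(Conv1d(X_n))), halving the time length.
import numpy as np

def elu(x, alpha=1.0):
    """ELU activation: identity for x > 0, alpha*(exp(x)-1) otherwise."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def conv1d_same(x, w):
    """Shared 1-D convolution along time; x: (T, d), w: (k,)."""
    pad = len(w) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([xp[i:i + len(w)].T @ w for i in range(x.shape[0])])

def distill(x, w):
    h = elu(conv1d_same(x, w))
    T = h.shape[0] - h.shape[0] % 2           # trim to an even length
    return h[:T].reshape(T // 2, 2, -1).max(axis=1)   # max-pool, stride 2

rng = np.random.default_rng(2)
x = rng.standard_normal((16, 4))       # T = 16 time steps, d = 4 channels
w = np.array([0.25, 0.5, 0.25])        # simple smoothing kernel (assumed)
y = distill(x, w)
print(y.shape)  # (8, 4)
```

Each distilling step halves the temporal length, which is what lets the encoder stack process long input sequences cheaply.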

The generative decoder is a stack of two identical multi-head attention layers; generative prediction effectively alleviates the slowdown of long-horizon forecasting. The input vector to the decoder is:

X_de^t = Concat( X_token^t , X_0^t ) ∈ R^((L_token + L_y) × d)

where X_de^t is the vector input to the decoder; X_token^t is the start-token sequence; X_0^t is the placeholder for the target sequence, with every element set to 0; L_token is the length of the start-token sequence; L_y is the length of the target-sequence placeholder; and Concat() is the concatenation operator.
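Building the decoder input is a simple concatenation; the lengths and values below are illustrative assumptions.

```python
# Sketch of the generative decoder input X_de = Concat(X_token, X_0),
# where X_0 is a zero placeholder for the L_y target positions.
import numpy as np

L_token, L_y, d = 5, 3, 4
X_token = np.arange(L_token * d, dtype=float).reshape(L_token, d)  # start-token slice
X_zero = np.zeros((L_y, d))                                        # target placeholders
X_de = np.concatenate([X_token, X_zero], axis=0)                   # (L_token + L_y, d)
print(X_de.shape)  # (8, 4)
```

The decoder then fills the zeroed positions in a single forward pass instead of step-by-step autoregression, which is the speedup the text refers to.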

After decoding X_token^t and X_0^t, the resulting vector is passed through the fully connected layer to obtain the final energy consumption to be predicted, where ŷ^t denotes the energy consumption predicted by the Transformer-Attention network; the outputs form a set of real matrices of size d_y; d_y is the dimension of the output data; and ŷ_o^t is the o-th vector of the target output.

The efficient time-series prediction network with temporal attention units does not use a recurrent neural network; instead it uses an attention mechanism to process temporal evolution in parallel. It decomposes temporal attention into two parts: static attention and dynamic attention. Static attention uses small-kernel depthwise convolution and dilated convolution to achieve a large receptive field and capture the long-range dependencies of the sequence. Dynamic attention learns time-step weights from the differences in attention between time steps, capturing inter-sequence trends. The network uses a differential-divergence regularization method to optimize the loss function of time-series learning: the differences between predicted and true values are converted into probability distributions and the Kullback-Leibler divergence between them is computed, so that the network learns the intrinsic variation pattern of the series. The historical consumption of electricity, heat, and fuel is fed into the input matrix of the network:

X = (x_1, …, x_T) ∈ R^T

where T is the length of the input time series and R^T is the set of real matrices of size T.

The prediction mapped by the neural network is:

Ŷ = F(X)

where Ŷ is the prediction mapped by the neural-network model and F is a neural-network model.

The forward differences of the predicted and true values are:

ΔŶ_i = Ŷ_{i+1} − Ŷ_i,    ΔY_i = Y_{i+1} − Y_i

where ΔŶ is the forward difference of the predictions; Ŷ_{i+1} and Ŷ_i are the (i+1)-th and i-th predicted values; ΔY is the forward difference of the true values; and Y_{i+1} and Y_i are the (i+1)-th and i-th true data.

The forward differences are converted into probabilities through the Softmax() function:

σ(Δ)_i = exp(Δ_i / τ) / Σ_j exp(Δ_j / τ)

where σ() is the probability distribution function; the two resulting distributions correspond to the dynamic attention and the static attention; τ is the temperature coefficient; and exp() is the exponential function with base e.
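The difference-to-probability conversion can be sketched directly; the series values and τ = 0.5 are illustrative assumptions.

```python
# Sketch of turning forward differences of a series into a probability
# distribution with a temperature-scaled softmax,
# sigma(Delta)_i = exp(Delta_i / tau) / sum_j exp(Delta_j / tau).
import numpy as np

def forward_diff(y):
    """Delta y_i = y_{i+1} - y_i."""
    return y[1:] - y[:-1]

def temp_softmax(x, tau):
    """Softmax with temperature tau (numerically stabilised)."""
    e = np.exp((x - x.max()) / tau)
    return e / e.sum()

y_true = np.array([1.0, 1.5, 1.2, 2.0, 2.1])
y_pred = np.array([1.1, 1.4, 1.3, 1.9, 2.2])
p = temp_softmax(forward_diff(y_true), tau=0.5)
q = temp_softmax(forward_diff(y_pred), tau=0.5)
print(p, q)
```

A smaller τ sharpens the distribution, emphasising the steepest changes in the series; a larger τ flattens it toward uniform.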

Computing the Kullback-Leibler divergence between the two probability distributions yields the differential-divergence regularization function:

L_KL = KL( σ(ΔY/τ) ‖ σ(ΔŶ/τ) )

where L_KL is the differential-divergence regularization function and T′ is the length of the time series to be predicted.

The efficient time-series prediction network with temporal attention units is trained end-to-end in a fully unsupervised manner. The loss function for evaluating the discrepancy is composed of the mean-squared-error loss and the differential-divergence regularization weighted by a constant λ:

L = MSE(Ŷ, Y) + λ · L_KL

where L is the loss function evaluating the discrepancy and λ is a constant.
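The combined loss can be sketched as follows; λ = 0.1, τ = 0.5, and the series values are illustrative assumptions.

```python
# Hedged sketch of the combined loss L = MSE(Yhat, Y) + lambda * KL(p || q),
# where p and q are temperature-softmax distributions over forward differences.
import numpy as np

def temp_softmax(x, tau=0.5):
    e = np.exp((x - x.max()) / tau)
    return e / e.sum()

def kl_div(p, q):
    """Kullback-Leibler divergence KL(p || q) for strictly positive p, q."""
    return float(np.sum(p * np.log(p / q)))

def loss(y_pred, y_true, lam=0.1):
    mse = float(np.mean((y_pred - y_true) ** 2))
    p = temp_softmax(np.diff(y_true))   # distribution of true differences
    q = temp_softmax(np.diff(y_pred))   # distribution of predicted differences
    return mse + lam * kl_div(p, q)

y_true = np.array([1.0, 1.5, 1.2, 2.0, 2.1])
print(loss(np.array([1.1, 1.4, 1.3, 1.9, 2.2]), y_true))
```

With a perfect prediction both terms vanish, so the regulariser only penalises predictions whose pattern of change deviates from the true series.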

The weight parameters of the network are then solved through the loss function:

Θ* = argmin_Θ L(Ŷ, Y)

where Θ* is the solved value of the weight parameters; argmin returns the solution corresponding to the minimum of the objective function; and Θ denotes a weight parameter.

The network predicts the subsequent T′ steps from time t+1; it learns the mapping from the input series to the future series, and the energy consumption obtained from the network is a vector in R^{T′}, where R^{T′} is the set of real matrices of size T′ and the vector denotes the energy consumption predicted by the efficient time-series prediction network with temporal attention units.

Step (3): Apply the fractional-order calculus control with enhanced deep Transformer-Attention integrated prediction to the energy-reduction guidance of the integrated energy system. The predicted baseline energy consumption is fed into the fractional-order stochastic dynamic calculus controller, which outputs the energy-reduction guidance signal.

The energy consumption of the integrated energy system comprises electricity consumption, heat consumption, and fuel consumption. The consumption predicted by the Transformer-Attention network and by the efficient time-series prediction network with temporal attention units is passed through the baseline-consumption prediction function to obtain the baseline energy consumption D_t = forecast(·), where D_t is the predicted baseline energy consumption and forecast() is the baseline-consumption prediction function.

The differential of the energy-consumption state output by the fractional-order stochastic dynamic calculus controller is:

d^α S_t = (E_t − P_t) d^α t + N_noise d^α W_t

where α is the order of the fractional calculus; S_t is the energy-consumption state; d^α S_t is the fractional differential of the energy-consumption state; E_t is the amount of energy obtained by the integrated energy system; P_t is the energy consumption predicted by the fractional-order stochastic dynamic calculus controller; d^α t is the fractional differential with respect to time; N_noise is the noise intensity; d^α W_t denotes the fractional-order integral of the Wiener process; and W_t is the Wiener process.
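A numerical sketch of such a fractional-order stochastic state equation is shown below, discretised with a Grünwald-Letnikov scheme. The step size, horizon, order α, constant supply-demand gap, and noise level are all illustrative assumptions, not values from the patent.

```python
# Hedged sketch: simulate d^a S_t = (E_t - P_t) d^a t + N dW_t with a
# Grunwald-Letnikov discretisation of the fractional derivative.
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients w_j = (-1)^j * C(alpha, j)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (j - 1 - alpha) / j   # standard recurrence
    return w

def simulate(alpha=0.8, steps=200, h=0.05, gap=0.3, noise=0.05, seed=3):
    rng = np.random.default_rng(seed)
    w = gl_weights(alpha, steps)
    S = np.zeros(steps + 1)
    for t in range(1, steps + 1):
        dW = rng.standard_normal() * np.sqrt(h)      # Wiener increment
        rhs = gap + noise * dW / h                   # drift plus noise density
        memory = np.dot(w[1:t + 1], S[t - 1::-1])    # fractional memory term
        S[t] = h**alpha * rhs - memory               # GL update for S_t
    return S

S = simulate()
print(S.shape)
```

The memory term is what distinguishes the fractional case from an ordinary SDE: every past state contributes with a slowly decaying weight, giving the non-Markovian behaviour the text motivates.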

The integrated energy system includes the energy-production side and the energy-consumption side. The change in energy consumption output by the fractional-order stochastic dynamic calculus controller is:

ΔP_t = θ · γ( l_1 Sta(S_t, δ) + l_2 Coe(c_t, β) + l_3 Wea(w_t) + l_4 Tem(h_t) + l_5 Rul(u_t) )

where ΔP_t is the change in energy consumption; γ() is the logistic function; l_1, l_2, l_3, l_4, and l_5 are the coefficients, inside γ(), of the energy-consumption state function, the energy-consumption coefficient function, the season function, the temperature function, and the access-rule function, respectively; Sta() is the energy-consumption state function with parameter δ; Coe() is the energy-consumption coefficient function with parameter β; c_t is the energy-consumption coefficient; Wea() is the season function; w_t is the season; Tem() is the temperature function; h_t is the air temperature; Rul() is the access-rule function; u_t is the access rule; and θ is a parameter of the energy consumption.

The predicted electricity load is then obtained through the fractional-order stochastic dynamic differential controller, where p_energy is the proportion of renewable energy and S() is the sign function.

The sign function S() is:

S(x) = +1 if x > 0; 0 if x = 0; −1 if x < 0

The logistic function γ() is:

γ(x) = 1 / (1 + e^(−x/l))

where l is the parameter of the logistic function γ().
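A minimal sketch of a logistic squashing function follows; since the source does not reproduce the exact formula, the parameterisation γ(x) = 1/(1 + exp(−x/l)) with steepness parameter l is an assumption.

```python
# Sketch of a logistic function gamma(x) = 1 / (1 + exp(-x / l)),
# one plausible reading of the logic function gamma() in the text.
import math

def gamma(x, l=1.0):
    """Maps any real x into (0, 1); l controls the steepness."""
    return 1.0 / (1.0 + math.exp(-x / l))

print(gamma(0.0), gamma(2.0, l=0.5))
```

Such a squashing keeps the weighted sum of influence factors bounded, so the controller's consumption-change output cannot blow up for extreme inputs.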

Step (4): Considering the dynamic changes in energy consumption caused by the energy-consumption state, the energy-consumption coefficient, the season, the air temperature, and the access rules, the fractional-order stochastic dynamic calculus controller generates the energy-reduction guidance signal.

The influencing factors of energy consumption, the predicted baseline energy consumption, and the energy supplied by the integrated energy system serve as the input variables of the fractional-order stochastic dynamic calculus controller; the energy-reduction guidance signal is the output variable.

The energy-consumption state function Sta() is:

Sta(S_t, δ_1, δ_2, δ_3, δ_4) = (1 − 2S_t) · { δ_1 × [1 − (2S_t − 1)^2] } × [ δ_2 + δ_3 × (2S_t − 1)^2 + δ_4 × (2S_t − 1)^6 ]    (26)

where δ_1, δ_2, δ_3, and δ_4 are, respectively, the coefficients in Sta() controlling the skewness of the energy-consumption state, the constant term of the variation, the quadratic term of the variation, and the sixth-power term of the variation.
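Eq. (26) translates directly into code; the bracket grouping follows the reconstruction above of the garbled source text, and the coefficient values used are illustrative assumptions.

```python
# Direct implementation of the energy-consumption state function of Eq. (26):
# Sta(S) = (1 - 2S) * d1*[1 - (2S-1)^2] * [d2 + d3*(2S-1)^2 + d4*(2S-1)^6].
def sta(s, d1, d2, d3, d4):
    u = 2.0 * s - 1.0
    return (1.0 - 2.0 * s) * (d1 * (1.0 - u * u)) * (d2 + d3 * u * u + d4 * u**6)

# At S = 0.5 the factor (1 - 2S) vanishes, so the function is zero there.
print(sta(0.5, 1.0, 1.0, 1.0, 1.0), sta(0.2, 1.0, 0.5, 0.3, 0.1))
```

The (1 − 2S_t) factor makes the function antisymmetric about S_t = 0.5, so states above and below the midpoint push the guidance in opposite directions.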

The energy-consumption coefficient function Coe() is:

Coe(c_t, β) = Σ_{z=1}^{TN_z} β_z I_z(c_t)

where TN_z is the total number of splines; z indexes the z-th spline; and I_z() is the I-spline function.

The season function Wea() is expressed in terms of the sine function sin().

The temperature function Tem() is:

Tem(h_t) = 0.6 exp(h_t) + 8 h_t    (29)

The access-rule function Rul() is defined in terms of the access rule u_t.

The predicted baseline energy consumption and the energy supply serve as the input variables of the fractional-order stochastic dynamic calculus controller, which outputs the predicted energy consumption through the fractional-order stochastic dynamic differential equation; the objective function is then solved for the optimal energy-reduction guidance signal. In the function solved by the controller, period is the prediction period and Pre(D_t, c_t) is the predicted-energy-consumption function with the energy-consumption coefficient c_t as its variable.

Step (5): Apply the energy-reduction guidance signal to the integrated energy system to guide energy use on the consumption side, improve energy utilization, strengthen the stability of the integrated energy system, promote the integration of renewable energy, and reduce the system's energy consumption.

Compared with the prior art, the invention has the following advantages and effects:

(1) The invention uses data preprocessing, the Transformer-Attention network, and the efficient time-series prediction network with temporal attention units to predict users' baseline energy consumption, and introduces the fractional-order integral of the Wiener process into the fractional-order stochastic dynamic calculus controller, improving the accuracy of the energy-reduction guidance signal.

(2) From the perspective of the integrated energy system, the invention considers the influence of the energy-consumption state, the energy-consumption coefficient, the season, the air temperature, and the access rules on energy consumption, and guides the consumption side through the energy-reduction guidance signal, reducing the energy consumption of the integrated energy system.

(3) Compared with patent application No. 2020108660490 filed on 2020.08.25, titled "A Long-term Price Guidance Method for Multi-group Distributed Flexible Energy Service Providers", the present invention not only considers the dynamic game between energy production and energy consumption, but also incorporates the factors that influence energy consumption into the energy-reduction guidance of the integrated energy system.

(4) Compared with patent application No. 2021111974623 filed on 2022.01.18, titled "A Long-term Price Guidance Method Based on Dynamic Differential Control of a Flexible Energy Hybrid Network", the present invention adds fractional-order stochastic dynamic differentiation to the controller, which better describes the nonlinear, non-Markovian relationship between energy consumption and time.

(5) Compared with patent application No. 2022112767254 filed on 2022.10.18, titled "A Fractional-order Long-term Price Guidance Method with Enhanced Deep-Attention Bidirectional Prediction", the present invention extends the research object from a single electric vehicle to the integrated energy system, and adds the fractional-order integral of the Wiener process to the controller, which better describes the random fluctuation of the noise signal and the random disturbances of the system, improving the accuracy of the energy-consumption guidance signal.

Description of the drawings

Fig. 1 is the control framework diagram of the fractional-order calculus energy-reduction guidance method with enhanced deep Transformer-Attention integrated prediction.

Fig. 2 shows the Transformer-Attention network of the method.

Fig. 3 shows the efficient temporal prediction network with a temporal attention unit of the method.

Detailed description

The present invention proposes a fractional-order calculus energy-reduction guidance method with enhanced deep Transformer-Attention integrated prediction, described in detail below with reference to the accompanying drawings:

Fig. 1 is the control framework diagram of the fractional-order calculus energy-reduction guidance method with enhanced deep Transformer-Attention integrated prediction. First, raw energy-consumption data are obtained from industry, commerce, and residences, and preprocessed. Then, the preprocessed data are predicted separately by the Transformer-Attention network and by the efficient temporal prediction network with a temporal attention unit, and the predicted baseline energy consumption is determined. Finally, the predicted baseline energy consumption is input into the fractional-order stochastic dynamic calculus controller, which, combining the energy-consumption influence factors and the energy production, outputs the optimal energy-reduction guidance signal to guide industrial, commercial, and residential energy use.

Fig. 2 shows the Transformer-Attention network of the method. The encoder first receives a large number of long sequence inputs. The Transformer-Attention network replaces standard self-attention with ProbSparse self-attention. A distillation operation is then applied to the multi-head attention; the distilled attention forms a pyramid of dependencies, and layer-by-layer distillation extracts the dominant attention passed to the decoder, greatly reducing the network size. Finally, the decoder receives long sequence inputs, pads the target elements with zeros, computes the weighted attention composition of the feature maps, and predicts the output elements at once in a generative style.

Fig. 3 shows the efficient temporal prediction network with a temporal attention unit. Its overall structure is data input, encoder, temporal attention unit, decoder, and data output. In the temporal attention unit, the static attention produced from the actual values is first transformed by small-kernel depthwise convolution, dilated depthwise convolution, and 1×1 convolution to obtain the first output vector. The dynamic attention mapped by the neural network is then passed through an average pooling layer and a fully connected layer to obtain the second output vector. Finally, the two output vectors are combined and sent to the decoder, which generates the final data output.

The above is only a preferred embodiment of the present invention and does not thereby limit the patent scope of the present invention; any equivalent structure or equivalent process transformation made using the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the present invention.

Claims (1)

1. The fractional calculus energy reduction guiding method for enhanced deep Transformer-Attention integrated prediction is characterized by combining a Transformer-Attention network, an efficient time-series prediction network with a time-series attention unit, and a fractional-order stochastic dynamic calculus controller; it is used for long-term energy-reduction guidance of a comprehensive energy system, and serves to improve the energy utilization rate, improve the stability of the comprehensive energy system, promote the integration of renewable energy into the comprehensive energy system, and reduce energy consumption; in use, the method comprises the following steps:
step (1): establishing an operation framework for energy-reduction guidance of the comprehensive energy system; the comprehensive energy system obtains energy production from power plants, boilers, and fossil fuels, collects the expected required energy consumption from the energy consumption side, compares the energy production with the expected required energy consumption, finds the optimal energy-reduction guidance signal through the energy-reduction guidance method, and then sends the guidance signal to the energy consumption side to guide the energy consumption of industry, commerce, and residences and reduce energy consumption;
step (2): providing fractional calculus control with enhanced deep Transformer-Attention integrated prediction, and predicting the reference energy consumption of the comprehensive energy system through the Transformer-Attention network and the efficient time-series prediction network with the time-series attention unit;
firstly, the data are preprocessed and features are extracted from the processed data; then, the processed feature data are predicted separately by the Transformer-Attention network and by the efficient time-series prediction network with the time-series attention unit, and a prediction result is selected to obtain the predicted reference energy consumption;
the Transformer-Attention network is a deep network architecture with multi-head attention as its basic operation unit, balancing the capture of long-term dependencies against low time complexity; prediction accuracy is improved through a ProbSparse self-attention mechanism, a self-attention distillation mechanism, and a generative decoder;
the Transformer-Attention network consists of an input layer, an encoder, a decoder, a fully connected layer, and an output layer;
firstly, the whole historical consumption of electric energy, heat energy, and fuel is input into the encoder and encoded to obtain a mapping sequence; then the target values to be predicted in the long sequence are padded with zeros and input into the decoder together with the mapping sequence obtained by the encoder, and the prediction output elements are generated directly; the encoder shapes the t-th input sequence χ_t into a matrix:
X_en^t ∈ R^(L_x × d)
wherein X_en^t refers to the encoding matrix; t is the time; R^(L_x × d) refers to the set of real matrices of size L_x × d; R refers to the real numbers; L_x refers to the length of the sequence; d is the input dimension;
the ProbSparse self-attention mechanism employed by the Transformer-Attention network differs from the standard self-attention mechanism, which employs the scaled dot product:
A(Q, K, V) = Softmax(QK^T / √d) V
wherein A(Q, K, V) is the self-attention value; Q is the query matrix; K is the queried (key) matrix; V is the content (value) matrix; K^T is the transpose of the queried matrix; Softmax() is the normalized exponential function; L_Q is the length of Q; L_K is the length of K; Q ∈ R^(L_Q × d) and K ∈ R^(L_K × d) are the sets of real matrices of sizes L_Q × d and L_K × d;
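As an illustration of the standard scaled dot-product attention above, the following is a minimal NumPy sketch (not the patent's implementation; the array sizes are arbitrary):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # A(Q, K, V) = Softmax(Q K^T / sqrt(d)) V -- standard scaled dot-product attention.
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # L_Q = 4 queries, dimension d = 8
K = rng.standard_normal((6, 8))   # L_K = 6 keys
V = rng.standard_normal((6, 8))
out = attention(Q, K, V)          # one weighted combination of the rows of V per query
print(out.shape)  # (4, 8)
```

Each row of the Softmax output is a probability distribution over the L_K keys, which is exactly the conditional distribution p(k_j | q_i) used in the probability-form rewrite that follows.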
the ProbSparse self-attention mechanism rewrites the scaled dot product of the standard attention mechanism as a kernel smoother in probability form:
A_i(q_i, K, V) = Σ_j [ k(q_i, k_j) / Σ_l k(q_i, k_l) ] v_j = Σ_j p(k_j | q_i) v_j
wherein A_i(q_i, K, V) refers to the self-attention value in probability form; i, j, and l are row indices; q_i refers to the i-th row of Q; k_j refers to the j-th row of K; k_l refers to the l-th row of K; v_j refers to the j-th row of V; k() refers to the probability (kernel) function; p(k_j | q_i) refers to the conditional probability distribution of k_j given q_i; Σ_l k(q_i, k_l) is the sum that normalizes the conditional probabilities;
the probability function k(q_i, k_j) in the kernel smoother uses the asymmetric exponential kernel:
k(q_i, k_j) = exp(q_i k_j^T / √d)
wherein k_j^T is the transpose of k_j;
the reference probability distribution of the query vector is the uniform distribution:
q(k_j | q_i) = 1 / L_K
wherein q(k_j | q_i) refers to the uniform distribution of the query vector;
if p(k_j | q_i) is close to the uniform distribution q(k_j | q_i), the self-attention degenerates to a plain average of the values V, which is redundant for the prediction output; the similarity between the distributions p and q is therefore used to distinguish the important parts of the sequence, measured by the Kullback-Leibler divergence:
KL(q ‖ p) = ln Σ_l exp(q_i k_l^T / √d) − (1/L_K) Σ_j q_i k_j^T / √d − ln L_K
wherein KL(q ‖ p) refers to the Kullback-Leibler divergence between the distributions p and q; k_l^T is the transpose of k_l; ln() is the logarithm with base e;
removing the constant ln L_K from the Kullback-Leibler divergence, the sparsity measure of the i-th query vector is defined as:
M(q_i, K) = ln Σ_(j=1)^(L_K) exp(q_i k_j^T / √d) − (1/L_K) Σ_(j=1)^(L_K) q_i k_j^T / √d
wherein M(q_i, K) refers to the sparsity measure of the i-th query vector;
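The sparsity measure above is the log-sum-exp of a query's scaled scores minus their arithmetic mean, which can be sketched directly (a minimal NumPy illustration; the number u of "active" queries to keep is an assumption, not taken from the source):

```python
import numpy as np

def sparsity_measure(Q, K):
    # M(q_i, K) = ln sum_j exp(q_i k_j^T / sqrt(d)) - (1/L_K) sum_j q_i k_j^T / sqrt(d):
    # log-sum-exp of the scaled scores minus their arithmetic mean, per query.
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)             # (L_Q, L_K) scaled dot-product scores
    return np.log(np.exp(S).sum(axis=1)) - S.mean(axis=1)

rng = np.random.default_rng(1)
Q = rng.standard_normal((5, 8))
K = rng.standard_normal((6, 8))
M = sparsity_measure(Q, K)
u = 2                                    # how many dominant queries to keep (assumed)
top_u = np.argsort(M)[::-1][:u]          # indices of the queries kept in the sparse matrix
```

By Jensen's inequality M(q_i, K) ≥ ln L_K, with equality exactly when the scores are uniform, so a large M flags a query whose attention is far from uniform, i.e. worth keeping.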
applying this sparsity measure in the standard self-attention mechanism yields the attention value of the ProbSparse self-attention mechanism:
A_s(Q, K, V) = Softmax(Q̄ K^T / √d) V
wherein A_s(Q, K, V) refers to the attention value of the ProbSparse self-attention mechanism; Q̄ refers to a sparse matrix of the same size as Q that contains only the queries ranked top under the sparsity measure M(q_i, K);
after the ProbSparse self-attention value of each element of every encoding matrix is computed, the distillation of the attention values extracts the attention of the ProbSparse self-attention mechanism, privileges the features carrying dominant attention, and generates a focused self-attention feature map at the next layer; the forward pass from layer n to layer (n+1) by distillation is:
X_(n+1)^t = MaxPool( ELU( Conv1d( [X_n^t]_AB ) ) )
wherein X_n^t refers to the layer-n self-attention feature map; X_(n+1)^t refers to the layer-(n+1) self-attention feature map; MaxPool() refers to the max-pooling function; ELU() refers to the activation function; Conv1d() refers to a 1-dimensional convolution filter applied in the time dimension together with the activation function; [·]_AB refers to the basic operations in the multi-head ProbSparse self-attention block;
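A toy single-channel sketch of one distilling step (Conv1d → ELU → MaxPool); the kernel weights here are hypothetical, and a real implementation would operate on multi-channel feature maps:

```python
import numpy as np

def elu(x, a=1.0):
    # ELU activation: identity for positive inputs, a*(exp(x)-1) otherwise.
    return np.where(x > 0, x, a * (np.exp(x) - 1))

def conv1d(x, w):
    # 'valid' 1-D cross-correlation along time for a single channel,
    # the sliding-window operation deep-learning Conv1d layers perform.
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

def max_pool(x, stride=2):
    # Max pooling with window = stride, halving the sequence length.
    n = (len(x) // stride) * stride
    return x[:n].reshape(-1, stride).max(axis=1)

def distill(x, w):
    # One distilling step: Conv1d -> ELU -> MaxPool, shrinking the feature map.
    return max_pool(elu(conv1d(x, w)))

x = np.arange(16, dtype=float)      # toy single-channel feature map, L = 16
w = np.array([0.25, 0.5, 0.25])     # toy smoothing kernel (hypothetical weights)
y = distill(x, w)
print(len(y))  # 7: conv gives length 14, pooling halves it
```

Stacking such steps is what produces the pyramid of ever-shorter feature maps described above.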
the generative decoder is a stack of 2 identical multi-head attention layers, and generative prediction effectively alleviates the slowdown of long-horizon prediction; the vector input to the decoder is:
X_de^t = Concat(X_token^t, X_0^t) ∈ R^((L_token + L_y) × d)
wherein X_de^t refers to the vector input to the decoder; X_token^t refers to the start-token vector; X_0^t refers to the placeholder vector of the target sequence, each element of which is 0; L_token refers to the length of the start-token vector; L_y refers to the length of the placeholder vector of the target sequence; Concat() refers to the concatenation function;
after Concat(X_token^t, X_0^t) is decoded, the resulting vector is passed through the fully connected layer to obtain the final energy consumption to be predicted:
Ŷ_o^T ∈ R^(d_y)
wherein Ŷ^T is the energy consumption predicted by the Transformer-Attention network; R^(d_y) refers to the set of real output vectors of size d_y; d_y refers to the dimension of the output data; Ŷ_o^T refers to the o-th vector constituting the target output;
the efficient time-series prediction network with the time-series attention unit does not use a recurrent neural network; instead, an attention mechanism parallelizes the processing of the time evolution; the network decomposes temporal attention into two parts: static attention and dynamic attention; static attention uses small-kernel depthwise convolution and dilated convolution to achieve a large receptive field and capture the long-term dependencies of the sequence; dynamic attention learns the time-step weights from the differences of the temporal attention, thereby capturing the variation trend between sequences; the network optimizes the loss function of time-series prediction learning with a differential divergence regularization method, which converts the difference between the predicted value and the true value into probability distributions and computes the Kullback-Leibler divergence between them, so that the network learns the intrinsic variation law of the time series; the input matrix of the network, the historical consumption of electric energy, heat energy, and fuel, is:
X = {x_1, x_2, …, x_T} ∈ R^T
wherein T is the length of the input time series; R^T refers to the real matrix set of size T;
the predicted values of the neural network mapping are:
Ŷ = F_Θ(X)
wherein Ŷ refers to the predicted values mapped by the neural network model; F_Θ refers to the neural network model;
the forward differences of the predicted values and of the true values mapped by the neural network model are:
ΔŶ_i = Ŷ_(i+1) − Ŷ_i,  ΔY_i = Y_(i+1) − Y_i
wherein ΔŶ_i is the forward difference of the predicted values of the neural network mapping; Ŷ_(i+1) refers to the (i+1)-th predicted value mapped by the neural network model; Ŷ_i refers to the i-th predicted value mapped by the neural network model; ΔY_i refers to the forward difference of the true values; Y_(i+1) refers to the (i+1)-th datum of the true values Y; Y_i refers to the i-th datum of the true values Y;
the forward differences are converted into probabilities by the Softmax() function:
σ(ΔŶ)_i = exp(ΔŶ_i / τ) / Σ_j exp(ΔŶ_j / τ),  σ(ΔY)_i = exp(ΔY_i / τ) / Σ_j exp(ΔY_j / τ)
wherein σ() refers to the probability distribution function; σ(ΔŶ) refers to the dynamic attention; σ(ΔY) refers to the static attention; τ is the temperature coefficient; exp() is the exponential function with base e;
the differential divergence regularization function obtained by computing the Kullback-Leibler divergence between the probability distributions σ(ΔY) and σ(ΔŶ) is:
L_KL = Σ_(i=1)^(T′−1) σ(ΔY)_i ln( σ(ΔY)_i / σ(ΔŶ)_i )
wherein L_KL refers to the differential divergence regularization function; T′ is the length of the time series that needs to be predicted;
the efficient time-series prediction network with the time-series attention unit performs end-to-end training in a completely unsupervised manner; the loss function evaluating the difference, composed of the mean-square-error loss and the differential divergence regularization weighted by the constant λ, is:
L(Ŷ, Y) = MSE(Ŷ, Y) + λ L_KL
wherein L(Ŷ, Y) is the loss function evaluating the difference; λ is a constant;
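The loss above (mean squared error plus λ-weighted differential divergence regularization) can be sketched as follows; the KL direction and the ε smoothing constant are assumptions, since the patent's formula is reproduced only as an image:

```python
import numpy as np

def softmax(x, tau=1.0):
    # Temperature-scaled softmax over a 1-D array.
    e = np.exp((x - x.max()) / tau)
    return e / e.sum()

def kl(p, q, eps=1e-12):
    # Kullback-Leibler divergence KL(p || q) between discrete distributions.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def ddr_loss(y_pred, y_true, lam=0.1, tau=1.0):
    # MSE plus lambda-weighted differential divergence regularization:
    # forward-difference both series, Softmax the differences into
    # distributions, and penalize their KL divergence.
    mse = float(np.mean((y_pred - y_true) ** 2))
    p = softmax(np.diff(y_true), tau)   # "static" distribution of true step changes
    q = softmax(np.diff(y_pred), tau)   # "dynamic" distribution of predicted changes
    return mse + lam * kl(p, q)

y = np.array([1.0, 1.5, 1.2, 2.0, 2.4])
print(ddr_loss(y, y))  # 0.0 for a perfect prediction
```

Because the regularizer compares distributions of step-to-step changes, it penalizes a prediction that has the right values but the wrong variation pattern, which is the stated purpose of the differential divergence term.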
the weight parameters of the efficient time-series prediction network with the time-series attention unit can be solved through the loss function:
Θ* = argmin_Θ L(Ŷ, Y)
wherein Θ* is the value of the solved weight parameter; argmin is the solution corresponding to the minimum of the objective function; Θ is the weight parameter;
starting from time t+1, the efficient time-series prediction network with the time-series attention unit predicts the following T′ steps, learning the intrinsic variation of the series from X; the energy consumption obtained by the network is:
Ŷ^E = {ŷ_(t+1), …, ŷ_(t+T′)} ∈ R^(T′)
wherein Ŷ^E refers to the energy consumption predicted by the efficient time-series prediction network with the time-series attention unit; R^(T′) refers to the real matrix set of size T′;
step (3): the fractional calculus control with enhanced deep Transformer-Attention integrated prediction is used for energy-reduction guidance of the comprehensive energy system; the predicted reference energy consumption is input into the fractional-order stochastic dynamic calculus controller, which outputs the energy-reduction guidance signal;
the energy consumption of the comprehensive energy system comprises electric power consumption, heat energy consumption, and fuel consumption; the reference energy consumption obtained through the reference energy-consumption prediction function from the consumption predicted by the Transformer-Attention network and by the efficient time-series prediction network with the time-series attention unit is:
D_t = Forecast(Ŷ^T, Ŷ^E)
wherein D_t refers to the predicted reference energy consumption; Forecast() refers to the reference energy-consumption prediction function;
the differential of the energy-consumption state output by the fractional-order stochastic dynamic calculus controller is:
d^α S_t = (E_t − P_t) d^α t + N_noise d^α W_t
wherein α refers to the order of the fractional calculus; S_t refers to the energy-consumption state; d^α S_t refers to the fractional differential of the energy-consumption state; E_t refers to the energy obtained by the comprehensive energy system; P_t refers to the energy consumption predicted by the fractional-order stochastic dynamic calculus controller; d^α t refers to the fractional differential with respect to time; N_noise refers to the noise intensity; d^α W_t refers to the fractional-order integral of the Wiener process; W_t refers to the Wiener process;
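The fractional differential d^α S_t can be discretized, for illustration, with the Grünwald-Letnikov scheme, one standard numerical definition of a fractional derivative (the patent does not specify its discretization, so this is only a sketch):

```python
import numpy as np

def gl_weights(alpha, n):
    # Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), built recursively.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative(x, alpha, h=1.0):
    # Discrete fractional derivative of order alpha with step h:
    # d^alpha x[k] ~= h^(-alpha) * sum_{m=0..k} w_m * x[k-m]
    w = gl_weights(alpha, len(x))
    return np.array([w[:k + 1] @ x[k::-1] for k in range(len(x))]) / h**alpha

S = np.arange(5.0)            # toy energy-consumption state samples
print(gl_derivative(S, 1.0))  # alpha = 1 recovers the first difference: [0. 1. 1. 1. 1.]
```

For 0 < α < 1 the weights decay slowly, so every past sample contributes to the current derivative — the long-memory, non-Markovian behavior the claim attributes to fractional dynamics.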
the comprehensive energy system comprises an energy generation side and an energy consumption side; the change of the energy consumption output by the fractional-order stochastic dynamic calculus controller is:
wherein ΔP_t refers to the amount of change in energy consumption; γ() refers to the logical function; l_1, l_2, l_3, l_4, and l_5 are the weights of the energy-consumption state function, energy-consumption coefficient function, seasonal function, air-temperature function, and admission-rule function inside the logical function γ(); Sta() refers to the energy-consumption state function; δ refers to the parameters of the energy-consumption state function Sta(); Coe() refers to the energy-consumption coefficient function; c_t refers to the energy-consumption coefficient; β refers to the parameters of the energy-consumption coefficient function Coe(); Wea() refers to the seasonal function; w_t refers to the seasonal situation; Tem() refers to the air-temperature function; h_t refers to the air temperature; Rul() refers to the admission-rule function; u_t refers to the admission rule; θ refers to a parameter of the energy consumption;
the predicted electricity load is obtained through the fractional-order stochastic dynamic differential controller as:
wherein p_energy refers to the proportion of renewable energy sources; S() refers to the sign function;
the sign function S () is:
the logical function γ () is:
wherein l is the argument of the logical function γ();
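The logical function γ() appears only as an image in the source; a common choice for such a squashing function, assumed here purely for illustration, is the logistic sigmoid:

```python
import numpy as np

def gamma_logical(l):
    # Logistic sigmoid (assumed form): maps the weighted sum of
    # influence-factor terms to a value in (0, 1).
    return 1.0 / (1.0 + np.exp(-l))

print(gamma_logical(0.0))  # 0.5
```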
step (4): taking into account the dynamic changes in energy consumption caused by the energy-consumption state, energy-consumption coefficient, season, air temperature, and admission rules, the fractional-order stochastic dynamic calculus controller generates the energy-reduction guidance signal;
the influence factors of energy consumption, the predicted reference energy consumption, and the energy provided by the comprehensive energy system are the input variables of the fractional-order stochastic dynamic calculus controller, and the energy-reduction guidance signal is the output variable;
the energy-consumption state function Sta() is:
Sta(S_t, δ_1, δ_2, δ_3, δ_4) = {1 − 2S_t + δ_1 × [1 − (2S_t − 1)^2]} × [δ_2 + δ_3 × (2S_t − 1)^2 + δ_4 × (2S_t − 1)^6] (26)
wherein δ_1, δ_2, δ_3, and δ_4 are the coefficients of the energy-consumption state function Sta() that control the deflection degree of the energy-consumption state, the constant term, the quadratic term, and the sixth-power term of the variable;
the energy-consumption coefficient function Coe() is:
Coe(c_t, β) = Σ_(z=1)^(TN_z) β_z I_z(c_t)
wherein TN_z refers to the total number of splines; z refers to the z-th spline; I_z() refers to the I-spline functions;
the seasonal function Wea () is:
where sin () refers to a sine function;
the air temperature function Tem() is:
Tem(h_t) = 0.6 exp(h_t) + 8h_t (29)
the admission rule function Rul () is:
the predicted reference energy consumption and the energy supply are the input variables of the fractional-order stochastic dynamic calculus controller; the predicted energy consumption is output through the fractional-order stochastic dynamic differential equation, and the optimal energy-reduction guidance signal is then solved with the objective function; the function by which the fractional-order stochastic dynamic calculus controller solves the energy-reduction guidance signal is:
wherein period refers to the prediction period; Pre(D_t, c_t) refers to the predicted energy consumption as a function of the energy-consumption coefficient c_t;
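Since the objective-function image is not reproduced, the following sketch only illustrates the idea of searching over the energy-consumption coefficient c_t: a hypothetical predictor Pre(D_t, c_t) = c_t · D_t is assumed, and the coefficient whose predicted consumption best tracks the supply over the forecast period is selected:

```python
import numpy as np

def best_coefficient(D, supply, candidates):
    # Hypothetical predictor Pre(D_t, c_t) = c_t * D_t (assumed for illustration);
    # score each candidate c by its total absolute gap to the supply and keep the best.
    gaps = [np.abs(c * D - supply).sum() for c in candidates]
    return candidates[int(np.argmin(gaps))]

D = np.array([10.0, 12.0, 11.0, 13.0])      # predicted baseline consumption over the period
supply = np.array([9.0, 10.5, 10.0, 11.5])  # available energy supply
c_opt = best_coefficient(D, supply, np.linspace(0.5, 1.5, 101))
# c_opt lands slightly below 1, i.e. the signal asks the consumption side to reduce use
```

A real controller would replace both the grid search and the linear predictor with the patent's fractional-order stochastic dynamics; the sketch only shows the input/output shape of the decision.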
step (5): the energy-reduction guidance signal is applied to the comprehensive energy system to guide energy use on the consumption side, improve the energy utilization rate, enhance the stability of the comprehensive energy system, promote the integration of renewable energy sources, and reduce the energy consumption of the comprehensive energy system.
CN202310867687.8A 2023-07-15 2023-07-15 Fractional calculus energy reduction guiding method for enhanced deep Transformer-Attention integrated prediction Pending CN116700011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310867687.8A CN116700011A (en) Fractional calculus energy reduction guiding method for enhanced deep Transformer-Attention integrated prediction

Publications (1)

Publication Number Publication Date
CN116700011A true CN116700011A (en) 2023-09-05

Family

ID=87825917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310867687.8A Pending CN116700011A (en) 2023-07-15 2023-07-15 Fractional calculus energy reduction guiding method for enhanced depth transducer-attribute integrated prediction

Country Status (1)

Country Link
CN (1) CN116700011A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113947427A (en) * 2021-10-14 2022-01-18 广西大学 Long-term price guiding method for dynamic differential control of flexible energy hybrid network

Similar Documents

Publication Publication Date Title
CN109685252B (en) Building energy consumption prediction method based on cyclic neural network and multi-task learning model
CN104616078B (en) Photovoltaic system electricity generation power Forecasting Methodology based on Spiking neutral nets
CN112232575B (en) Comprehensive energy system regulation and control method and device based on multi-element load prediction
CN108647839A (en) Voltage-stablizer water level prediction method based on cost-sensitive LSTM Recognition with Recurrent Neural Network
CN108197751A (en) Seq2seq network Short-Term Load Forecasting Methods based on multilayer Bi-GRU
CN105913175A (en) Intelligent power grid short period load prediction method based on improved nerve network algorithm
CN109919178A (en) Fault prediction method based on feature optimization and wavelet kernel function LSSVM
CN111160620A (en) Short-term wind power prediction method based on end-to-end memory network
CN113762387B (en) Multi-element load prediction method for data center station based on hybrid model prediction
CN110598929A (en) Wind power nonparametric probability interval ultrashort term prediction method
CN116843057A (en) Wind power ultra-short-term prediction method based on LSTM-ViT
CN114022311A (en) Comprehensive energy system data compensation method for generating countermeasure network based on time sequence condition
CN115310674A (en) Long-time sequence prediction method based on parallel neural network model LDformer
CN114117852A (en) Regional heat load rolling prediction method based on finite difference working domain division
CN115615575A (en) A Boiler Wall Temperature Prediction Method Based on Multi-temporal Atlas Convolutional Attention Network
CN116700011A (en) Fractional calculus energy reduction guiding method for enhanced deep Transformer-Attention integrated prediction
CN116975645A (en) Industrial process soft measurement modeling method based on VAE-MRCNN
CN113485261A (en) CAEs-ACNN-based soft measurement modeling method
CN113393119A (en) Stepped hydropower short-term scheduling decision method based on scene reduction-deep learning
CN116050478A (en) Time sequence filling method based on attention mechanism
CN110738363A (en) A photovoltaic power generation power prediction model and its construction method and application
CN112615843B (en) Power Internet of things network security situation assessment method based on multi-channel SAE-AdaBoost
CN112232570A (en) Forward active total electric quantity prediction method and device and readable storage medium
CN110674460B (en) E-Seq2Seq technology-based data driving type unit combination intelligent decision method
CN117455536A (en) Short-term coal price prediction method and system based on error compensation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination