WO2023035727A1 - Industrial process soft-sensing method based on a federated incremental stochastic configuration network - Google Patents

Industrial process soft-sensing method based on a federated incremental stochastic configuration network Download PDF

Info

Publication number
WO2023035727A1
WO2023035727A1 PCT/CN2022/100744 CN2022100744W WO2023035727A1 WO 2023035727 A1 WO2023035727 A1 WO 2023035727A1 CN 2022100744 W CN2022100744 W CN 2022100744W WO 2023035727 A1 WO2023035727 A1 WO 2023035727A1
Authority
WO
WIPO (PCT)
Prior art keywords
hidden layer
client
parameters
nodes
central server
Prior art date
Application number
PCT/CN2022/100744
Other languages
English (en)
French (fr)
Inventor
代伟
王兰豪
董良
胡梦洁
王光辉
南静
季朗龙
敖硯驦
王殿辉
Original Assignee
中国矿业大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国矿业大学 filed Critical 中国矿业大学
Priority to AU2022343574A priority Critical patent/AU2022343574B2/en
Priority to US18/002,274 priority patent/US20240027976A1/en
Priority to JP2022570190A priority patent/JP7404559B2/ja
Publication of WO2023035727A1 publication Critical patent/WO2023035727A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/0265Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
    • G05B13/027Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • The invention relates to the technical field of soft sensing of industrial process product quality indicators, and in particular to an industrial process soft-sensing method based on a federated incremental stochastic configuration network.
  • Federated learning makes it possible to train a unified machine learning model from the local data of multiple participants while protecting data privacy. It therefore shows excellent application prospects in privacy-sensitive scenarios, including finance, industry and many other data-aware settings.
  • At present, federated learning is mainly combined with deep learning, but deep learning itself suffers from several hard bottlenecks, such as a tendency to fall into local minima, strong dependence on initial parameter settings, and vanishing or exploding gradients.
  • The stochastic configuration network is an advanced single-hidden-layer random-weight network with universal approximation capability that has emerged in recent years; a large number of regression and classification experiments have confirmed its clear advantages in compactness, fast learning and generalization performance.
  • The present invention proposes an industrial process soft-sensing method based on a federated incremental stochastic configuration network, comprising the following steps:
  • Step 1: Each factory obtains historical industrial process auxiliary data and the corresponding product quality data, and initializes the parameters required for learning its local incremental stochastic configuration network model.
  • Each factory is a client; each client places the hidden-layer nodes that satisfy its local data constraint into a candidate pool, selects the best candidate node from the pool, and uploads it to the central server;
  • Step 2: The central server performs weighted aggregation or greedy selection on the uploaded best candidate nodes to obtain global parameters, and sends the global parameters down to each client as the hidden-layer parameters of its local incremental stochastic configuration network model;
  • Step 3: After obtaining the global parameters, each client computes the newly added hidden-layer output, uploads the output weights to the central server for weighted aggregation, and the next round of training begins;
  • Step 4: When the number of hidden-layer nodes in the current network exceeds the given maximum number of hidden-layer nodes, or the residual in the current iteration meets the expected tolerance, no new nodes are added, federated training stops, and a trained global model is obtained;
  • Step 5: The server distributes the trained global model to each local factory as its soft-sensor model.
  • In step 1, a total of K factories participate in federated training. For the k-th factory, n_k groups of historical industrial process auxiliary data X_k and the corresponding product quality data T_k are obtained, denoted {X_k, T_k}.
  • The i-th group of historical industrial process auxiliary data of the k-th factory, x_i^k = [x_{i,1}^k, ..., x_{i,d}^k], contains d auxiliary process variables; the corresponding product quality data t_i contains m product quality values; i ranges from 1 to n_k; the input sample matrix is X^k = [x_1^k; x_2^k; ...; x_{n_k}^k]. The element x_{i,z}^k denotes the z-th auxiliary process variable of the i-th group of the k-th factory.
  • In step 1, all K factories implement the same industrial process; the same industrial process generally uses the same process flow and process equipment, so the factories share similar characteristics.
  • Step 1 also includes: during the construction of each client's local incremental stochastic configuration network, the hidden-layer parameters w_L^k and b_L^k are randomly generated within the adjustable symmetric interval Υ;
  • the hidden-layer output of the node is h_L^k, where the superscript T denotes the transpose of a matrix or vector;
  • μ_L = (1 - r)/(L + 1) is set, where L is the total number of hidden-layer nodes of the current local incremental stochastic configuration network model, r is a learning parameter and {μ_L} is a non-negative real sequence;
  • m denotes the output dimensionality of each training set, and the symbol ⟨·,·⟩ denotes the inner product of vectors;
  • Step 1 also includes selecting the best candidate node from the candidate pool and uploading it to the central server, with two options, weighted aggregation and greedy selection.
  • Step 2 includes: the central server performs weighted aggregation on the uploaded best candidate nodes to obtain the global parameters w_L and b_L of the L-th node of the model, where n is the sum of all clients' local historical industrial process auxiliary data counts n_k.
  • In step 2, the greedy selection of the uploaded best nodes by the central server includes: the central server compares the uploaded supervision values, chooses the largest one, and uses the corresponding client's parameters as the global parameters w_L and b_L of the L-th node of the model, where Θ is the set of optimal parameters uploaded by each client and Ξ is the set of the corresponding supervision values.
  • Step 3 includes: using the current global parameters, each client computes the newly added hidden-layer output and its local hidden-layer output matrix, solves the output weights, and uploads them to the central server for weighted aggregation.
  • The advantage of the present invention is that the method trains the model in a dynamically configured federated learning fashion and constructively builds an industrial process product quality soft-sensor model with optimal parameters and universal approximation capability; no complex retraining is required, the accuracy of the model is guaranteed, and the model has good compactness and generalization performance.
  • Figure 1 is a schematic diagram of the federated incremental stochastic configuration network model.
  • The present invention comprises the following steps:
  • Each factory selects 100 groups of historical data measured in a conventional hematite grinding process from its local grinding-process history database; each group contains data for five auxiliary process variables: ball mill current c1, spiral classifier current c2, mill ore feed rate c3, mill inlet feed water flow c4 and classifier overflow concentration c5. The normalized input data of the k-th client, together with the corresponding product quality data, i.e. the grinding particle size value t_i, form its local samples, and x_{i,c5}^k denotes the c5 auxiliary process variable of the i-th sample of the k-th client.
  • Ten factories currently participate in the training, with a total of 1000 groups of historical data, of which 800 groups are used as the training set and 200 groups as the test set; the input and output sample matrices are formed from these data.
  • The hidden-layer output of a node is h_L^k, where T denotes the transpose operation; μ_L = (1 - r)/(L + 1) is set, with L the total number of hidden-layer nodes of the current local network; the set of hidden-layer parameters that maximizes the supervision value is the best hidden-layer parameter pair satisfying the supervision mechanism.
  • Step two includes: the central server performs weighted aggregation or greedy selection on the uploaded best nodes.
  • The weighted aggregation of the uploaded best nodes by the central server includes: the central server performs weighted aggregation on the uploaded parameters to obtain the global parameters of the L-th node, where n is the total number of data samples of all clients and n_k is the number of data samples of client k.
  • The greedy selection of the uploaded best nodes by the central server includes: the central server selects the largest uploaded supervision value and takes the corresponding client's parameters as the global parameters of the L-th node, where Θ is the set of optimal parameters uploaded by each client and Ξ is the set of the corresponding supervision values.
  • Step three includes: after each client obtains the global parameters, it computes the newly added hidden-layer output and the output weights, and uploads the output-weight matrix to the central server; the server performs weighted aggregation on the uploaded matrices to obtain β_L.
  • Step 4: When the number of hidden-layer nodes of the federated incremental stochastic configuration network exceeds 100, or the residual in the current iteration meets the expected tolerance of 0.05, no new nodes are added and the modeling is complete; otherwise, return to step 1 and continue constructing the network until the preset requirements are met.
  • Each client downloads the grinding particle size soft-sensor model based on the federated incremental stochastic configuration network, collects local data online and feeds them into this global soft-sensor model.
  • Each client collects the ball mill current c1, spiral classifier current c2, mill ore feed rate c3, mill inlet feed water flow c4 and classifier overflow concentration c5 online and inputs them into the constructed grinding particle size soft-sensor model to estimate the grinding particle size online, i.e. to obtain the product quality data estimated online for client k.
  • The present invention provides an industrial process soft-sensing method based on a federated incremental stochastic configuration network.
  • The above description is only a preferred embodiment of the present invention.
  • Those of ordinary skill in the art can also make improvements and modifications without departing from the principle of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention. All components not specified in this embodiment can be implemented with existing technologies.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Automation & Control Theory (AREA)
  • Computational Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Probability & Statistics with Applications (AREA)
  • Algebra (AREA)
  • Geometry (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • General Factory Administration (AREA)

Abstract

The present invention provides an industrial process soft-sensing method based on a federated incremental stochastic configuration network, comprising: acquiring historical industrial process auxiliary data and the corresponding product quality data; finding the best hidden-layer parameters; having the central server compute global parameters and send them down to every factory as the hidden-layer parameters of its local model. The local model of each factory computes the newly added hidden-layer output and builds the hidden-layer output matrix; the output weights of the current network are obtained by an optimization algorithm and uploaded to the server for weighted aggregation. When the number of hidden-layer nodes of the current network exceeds the given maximum number of hidden-layer nodes, or the residual in the current iteration meets the expected tolerance, no new nodes are added, the modeling is complete, and the global federated incremental stochastic configuration network is obtained. The invention not only effectively improves the prediction performance of the model but also effectively protects data privacy, and thus meets the needs of industrial process soft sensing well.

Description

Industrial process soft-sensing method based on a federated incremental stochastic configuration network
Technical Field
The present invention relates to the technical field of soft sensing of industrial process product quality indicators, and in particular to an industrial process soft-sensing method based on a federated incremental stochastic configuration network.
Background Art
To reduce production cost and improve production efficiency and quality, soft-sensing technology that can accurately predict product quality indicators in real time is an important research direction in the control of complex industrial processes, with far-reaching significance and practical value. Because the complex-industry domain suffers from insufficient data, if multiple enterprises do not exchange and integrate their data, the performance obtained by training and prediction with artificial intelligence models is unsatisfactory and hard to put into practice. With the further development of big data, attention to data privacy and security has become a worldwide trend, and countries are strengthening the protection of data security and privacy; the European Union's recently introduced General Data Protection Regulation (GDPR) shows that increasingly strict management of user data privacy and security will be a global trend. This brings unprecedented challenges to the field of artificial intelligence. Federated learning, as a machine learning framework, can train a unified machine learning model from the local data of multiple participants while protecting data privacy, so it shows excellent application prospects in privacy-sensitive scenarios (including finance, industry and many other data-aware settings). At present, federated learning is mainly combined with deep learning, but deep algorithms themselves have bottlenecks that are hard to overcome, such as a tendency to fall into local minima, strong dependence on initial parameter settings, and vanishing or exploding gradients, which make it difficult to fully exploit the powerful learning ability of neural networks. The stochastic configuration network, an advanced single-hidden-layer random-weight network with universal approximation capability that has emerged in recent years, has been confirmed by a large number of regression and classification experiments to have clear advantages in compactness, fast learning and generalization performance.
Summary of the Invention
Purpose of the invention: Considering that existing industrial processes have little product data and that it is difficult to pool the data of all parties for centralized training, the present invention proposes an industrial process soft-sensing method based on a federated incremental stochastic configuration network, comprising the following steps:
Step 1: Each factory acquires historical industrial process auxiliary data and the corresponding product quality data, and initializes the parameters required for learning its local incremental stochastic configuration network model; each factory is a client, and each client places the hidden-layer nodes that satisfy its local data constraint into a candidate pool, selects the best candidate node from the pool, and uploads it to the central server;
Step 2: The central server performs weighted aggregation or greedy selection on the uploaded best candidate nodes to obtain the global parameters, and sends the global parameters down to every client as the hidden-layer parameters of its local incremental stochastic configuration network model;
Step 3: After obtaining the global parameters, each client computes the newly added hidden-layer output, uploads the output weights to the central server for weighted aggregation, and the next round of training begins;
Step 4: When the number of hidden-layer nodes of the current network exceeds the given maximum number of hidden-layer nodes, or the residual in the current iteration meets the expected tolerance, no new nodes are added, federated training stops, and the trained global model is obtained;
Step 5: The server distributes the trained global model to each local factory as its soft-sensor model.
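For readability, the interaction of steps 1 to 5 can be pictured as the following toy, single-process simulation. This is an illustrative sketch only, not the patented implementation: the synthetic data, the equal-weight aggregation (all n_k equal), the least-squares solve for the output weights, and all function and variable names are assumptions made for the example.

```python
# Illustrative sketch only: a toy, single-process simulation of steps 1-5.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Step 1 setup: K clients, each with local auxiliary data X_k and quality data T_k.
K, d, n_k = 3, 5, 100
clients = []
for _ in range(K):
    X = rng.uniform(0.0, 1.0, (n_k, d))
    clients.append((X, np.sin(X.sum(axis=1, keepdims=True))))   # toy (X_k, T_k)

L_max, T_max, eps, r = 50, 20, 0.05, 0.99
residuals = [T.copy() for _, T in clients]           # e_0^k = T_k
H = [np.empty((n_k, 0)) for _ in range(K)]           # hidden output matrices H_L^k
model = []                                           # global nodes (w_L, b_L)

def best_local_candidate(X, e, L):
    """Draw T_max random (w, b), keep those passing the supervision
    inequality for every output, return the one with the largest xi."""
    mu = (1.0 - r) / (L + 1)
    best = None
    for _ in range(T_max):
        w = rng.uniform(-1.0, 1.0, X.shape[1])
        b = rng.uniform(-1.0, 1.0)
        h = sigmoid(X @ w + b)                       # node output on local data
        xi_q = [(e[:, q] @ h) ** 2 / (h @ h) - (1.0 - r - mu) * (e[:, q] @ e[:, q])
                for q in range(e.shape[1])]
        if min(xi_q) >= 0 and (best is None or sum(xi_q) > best[0]):
            best = (sum(xi_q), w, b)
    return best

for L in range(1, L_max + 1):
    # Step 1: every client uploads its best candidate node (xi, w, b).
    uploads = [best_local_candidate(X, e, L) for (X, _), e in zip(clients, residuals)]
    if any(u is None for u in uploads):
        break                                        # no admissible node this round
    # Step 2: weighted aggregation of the candidates (all n_k equal here, so a mean).
    w_L = np.mean([u[1] for u in uploads], axis=0)
    b_L = float(np.mean([u[2] for u in uploads]))
    model.append((w_L, b_L))
    # Step 3: clients extend their hidden output matrices and solve output weights.
    betas = []
    for k, (X, T) in enumerate(clients):
        H[k] = np.column_stack([H[k], sigmoid(X @ w_L + b_L)])
        betas.append(np.linalg.pinv(H[k]) @ T)       # local least-squares beta (assumed)
    beta = np.mean(betas, axis=0)                    # server aggregates beta_L
    residuals = [T - H[k] @ beta for k, (_, T) in enumerate(clients)]
    # Step 4: stop when every local residual meets the expected tolerance.
    if max(np.sqrt(np.mean(e ** 2)) for e in residuals) < eps:
        break

# Step 5: (model, beta) is the global soft-sensor model handed back to each plant.
print(f"hidden nodes: {len(model)}, "
      f"worst RMSE: {max(np.sqrt(np.mean(e ** 2)) for e in residuals):.4f}")
```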
In step 1, a total of K factories are set to participate in federated training. For the k-th factory, n_k groups of historical industrial process auxiliary data X_k and the corresponding product quality data T_k are acquired, denoted {X_k, T_k}. The i-th group of historical industrial process auxiliary data of the k-th factory, x_i^k = [x_{i,1}^k, x_{i,2}^k, ..., x_{i,d}^k], contains d auxiliary process variables, the corresponding product quality data t_i contains m product quality values, and i ranges from 1 to n_k; the input sample matrix is then X^k = [x_1^k; x_2^k; ...; x_{n_k}^k]. The set of z auxiliary process variables of the i-th group is denoted x_i^k, and x_{i,z}^k denotes the z-th auxiliary process variable of the i-th group of the k-th factory.
In step 1, all K factories implement the same industrial process; the same industrial process generally uses the same process flow and process equipment, so the factories share similar characteristics.
In step 1, the parameters required for learning the local incremental stochastic configuration network are initialized, including: the maximum number of hidden-layer nodes L_max, the maximum number of random configurations T_max, the expected tolerance ε, the random configuration range of the hidden-layer parameters Υ = {λ_min:Δλ:λ_max}, the learning parameter r, the activation function g(·), and the initial residual e_0 = T_k, where λ_min is the lower bound of the random-parameter assignment interval, λ_max is the upper bound, and Δλ is the increment of the assignment interval.
Step 1 also includes:
During the construction of each client's local incremental stochastic configuration network, the hidden-layer parameters w_L^k and b_L^k are randomly generated within the adjustable symmetric interval Υ;
the hidden-layer output of the node is h_L^k = [g((w_L^k)^T x_1^k + b_L^k), ..., g((w_L^k)^T x_{n_k}^k + b_L^k)]^T, where the superscript T denotes the transpose of a matrix or vector;
set μ_L = (1 - r)/(L + 1), where L is the total number of hidden-layer nodes of the current local incremental stochastic configuration network model, r is the learning parameter and {μ_L} is a non-negative real sequence;
a hidden-layer node that satisfies the following inequality constraint is a candidate node:
ξ_{L,q}^k = ⟨e_{L-1,q}^k, h_L^k⟩^2 / ((h_L^k)^T h_L^k) - (1 - r - μ_L) ||e_{L-1,q}^k||^2 ≥ 0, q = 1, 2, ..., m,
where m is the output dimensionality of each training set, ⟨·,·⟩ denotes the inner product of vectors, e_{L-1,q}^k is the q-th output component of the current residual on client k, and ξ_{L,q}^k represents the supervision mechanism corresponding to the q-th output of the training set when the current number of hidden-layer nodes on client k is L. Computing ξ_L^k = Σ_{q=1}^{m} ξ_{L,q}^k for each random configuration yields the newly added candidate nodes {ξ_L^{k,j}}, j ≤ T_max, which form the candidate pool, where ξ_L^k denotes the supervision value of a node randomly configured by the k-th client in the L-th iteration, and ξ_L^{k,j} denotes the supervision value of the j-th random configuration of the k-th client in the L-th iteration.
The set of hidden-layer parameters that maximizes ξ_L^k is the best hidden-layer parameter pair satisfying the supervision mechanism, denoted w_L^{k*} and b_L^{k*}.
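The candidate-pool construction and supervision check described above can be illustrated by the following sketch; the shapes, the stand-in residual and the names (g, e_prev, pool) are assumptions of the example, not taken from the patent.

```python
# Illustrative sketch (assumed names and shapes): building the candidate pool for
# the L-th node on one client and picking the best candidate by its supervision value.
import numpy as np

rng = np.random.default_rng(1)
g = lambda x: 1.0 / (1.0 + np.exp(-x))               # activation function g(.)

n_k, d, m = 100, 5, 1                                 # local samples, inputs, outputs
X_k = rng.uniform(0.0, 1.0, (n_k, d))
e_prev = rng.normal(size=(n_k, m))                    # stand-in residual e_{L-1}^k
L, r, T_max = 3, 0.99, 20
mu_L = (1.0 - r) / (L + 1)

pool = []                                             # candidate pool for this round
for j in range(T_max):
    w = rng.uniform(-1.0, 1.0, d)                     # hidden parameters w_L^k, b_L^k
    b = rng.uniform(-1.0, 1.0)
    h = g(X_k @ w + b)                                # node output h_L^k
    xi_q = [(e_prev[:, q] @ h) ** 2 / (h @ h)
            - (1.0 - r - mu_L) * (e_prev[:, q] @ e_prev[:, q])
            for q in range(m)]                        # supervision value per output
    if min(xi_q) >= 0:                                # inequality constraint holds
        pool.append((sum(xi_q), w, b))

if pool:
    xi_best, w_best, b_best = max(pool, key=lambda c: c[0])
    print(f"{len(pool)} admissible candidates, best xi = {xi_best:.4f}")
else:
    print("no admissible node; relax the constraint (e.g. r <- r + tau) and retry")
```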
Step 1 also includes: selecting the best candidate node from the candidate pool and uploading it to the central server, with two options, weighted aggregation and greedy selection:
for weighted aggregation, upload w_L^{k*} and b_L^{k*};
for greedy selection, upload ξ_L^{k*} and the corresponding w_L^{k*} and b_L^{k*}.
Step 2 includes:
The central server performs weighted aggregation on the uploaded best candidate nodes to obtain the global parameters w_L and b_L of the L-th node of the model:
w_L = Σ_{k=1}^{K} (n_k / n) · w_L^{k*},  b_L = Σ_{k=1}^{K} (n_k / n) · b_L^{k*},
where n is the sum of the local historical industrial process auxiliary data counts n_k over all clients.
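A minimal sketch of this weighted aggregation, assuming toy values for the uploaded parameters and using the sample-count weights n_k/n described above:

```python
# Illustrative sketch: weighted aggregation of the best candidate nodes uploaded by
# K clients, using the sample-count weights n_k / n (values here are toy assumptions).
import numpy as np

rng = np.random.default_rng(2)
n_k = np.array([100, 250, 150])                        # local sample counts per client
n = n_k.sum()                                          # n = sum of all n_k
w_star = [rng.uniform(-1.0, 1.0, 5) for _ in n_k]      # uploaded w_L^{k*}
b_star = [rng.uniform(-1.0, 1.0) for _ in n_k]         # uploaded b_L^{k*}

w_L = sum((nk / n) * w for nk, w in zip(n_k, w_star))  # global hidden weight w_L
b_L = sum((nk / n) * b for nk, b in zip(n_k, b_star))  # global hidden bias b_L
print(w_L, b_L)
```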
In step 2, the greedy selection of the uploaded best nodes by the central server includes:
The central server compares the uploaded parameters ξ_L^{k*}, k = 1, ..., K, selects the largest one, and takes the corresponding client's parameters as the global parameters w_L and b_L of the L-th node of the model, i.e. (w_L, b_L) is the element of Θ corresponding to the maximum element of Ξ,
where Θ is the set of optimal parameters w_L^{k*} and b_L^{k*} uploaded by every client, and Ξ is the set of the uploaded ξ_L^{k*}.
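A minimal sketch of the greedy selection, with assumed toy values for Θ and Ξ:

```python
# Illustrative sketch: greedy selection, where the server keeps the (w, b) pair of
# the client whose uploaded supervision value xi_L^{k*} is largest (toy values assumed).
import numpy as np

rng = np.random.default_rng(3)
Theta = [(rng.uniform(-1.0, 1.0, 5), rng.uniform(-1.0, 1.0)) for _ in range(3)]
Xi = [0.41, 0.87, 0.23]                                # uploaded xi_L^{k*} per client

k_best = int(np.argmax(Xi))                            # client with the largest xi
w_L, b_L = Theta[k_best]                               # becomes the global L-th node
print(f"selected client {k_best}, xi = {Xi[k_best]}")
```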
Step 3 includes:
Based on the current global parameters w_L and b_L, each client computes the newly added hidden-layer output h_L^k = [g(w_L^T x_1^k + b_L), ..., g(w_L^T x_{n_k}^k + b_L)]^T and then the output weights β_L^k of the current local network, obtained from the local hidden-layer output matrix and the local product quality data T_k by an optimization algorithm, where the current hidden-layer output matrix is H_L^k = [h_1^k, h_2^k, ..., h_L^k].
Each local client k uploads its output-weight matrix β_L^k to the central server, and the central server performs weighted aggregation on the uploaded β_L^k to obtain the global output-weight matrix β_L = Σ_{k=1}^{K} (n_k / n) · β_L^k.
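The client-side computation in step 3 and the server-side aggregation of the output weights can be illustrated as follows; treating the local output weights as the least-squares (pseudo-inverse) solution is an assumption of this sketch, and all names and shapes are illustrative.

```python
# Illustrative sketch (assumed names and shapes): step 3 on the clients plus the
# server-side aggregation of output weights; the pseudo-inverse (least squares)
# solve for the local output weights is an assumption of this example.
import numpy as np

rng = np.random.default_rng(4)
g = lambda x: 1.0 / (1.0 + np.exp(-x))

d, L, m = 5, 8, 1                                      # inputs, hidden nodes so far, outputs
W = rng.uniform(-1.0, 1.0, (d, L))                     # columns are the global w_1..w_L
b = rng.uniform(-1.0, 1.0, L)                          # global biases b_1..b_L

def client_update(X_k, T_k):
    H_k = g(X_k @ W + b)                               # H_L^k = [h_1^k, ..., h_L^k]
    beta_k = np.linalg.pinv(H_k) @ T_k                 # local output weights beta_L^k
    return H_k, beta_k

clients = [(rng.uniform(0.0, 1.0, (n, d)), rng.normal(size=(n, m))) for n in (100, 200)]
updates = [client_update(X, T) for X, T in clients]

n_k = np.array([X.shape[0] for X, _ in clients])
n = n_k.sum()
beta_L = sum((nk / n) * bk for nk, (_, bk) in zip(n_k, updates))   # global beta_L

# residual of each client under the aggregated output weights: e_L^k = T_k - H_L^k beta_L
residuals = [T - H @ beta_L for (_, T), (H, _) in zip(clients, updates)]
print([float(np.sqrt(np.mean(e ** 2))) for e in residuals])
```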
Beneficial effects: Compared with the prior art, the advantage of the present invention is that the method trains the model in a dynamically configured federated learning fashion and constructively builds an industrial process product quality soft-sensor model with optimal parameters and universal approximation capability; no complex retraining is required, the accuracy of the model is guaranteed, and the model has good compactness and generalization performance.
Brief Description of the Drawings
The present invention is described in further detail below with reference to the drawings and specific embodiments; the above and/or other advantages of the present invention will become clearer.
Figure 1 is a schematic diagram of the federated incremental stochastic configuration network model.
Detailed Description of the Embodiments
The present invention provides an industrial process soft-sensing method based on a federated incremental stochastic configuration network. The fitted model structure used by the present invention is shown in Figure 1 and comprises an input layer, a hidden layer and an output layer; d = 5 and m = 1. The present invention comprises the following steps:
Step one: Each factory selects 100 groups of historical data measured in a conventional hematite grinding process from its local grinding-process history database; each group contains data for five auxiliary process variables, namely ball mill current c1, spiral classifier current c2, mill ore feed rate c3, mill inlet feed water flow c4 and classifier overflow concentration c5. The normalized input data of the k-th client are denoted x_i^k, with the corresponding product quality data being the grinding particle size value t_i, and x_{i,c5}^k denotes the c5 auxiliary process variable of the i-th sample of the k-th client. Ten factories currently participate in the training, giving 1000 groups of historical data in total, of which 800 groups are used as the training set and 200 groups as the test set. The input samples then form the matrix X built from the x_i^k, and the output samples form the vector T built from the t_i.
The parameters required for learning the federated incremental stochastic configuration network soft-sensor model are initialized as follows: maximum number of hidden-layer nodes L_max = 100, maximum number of configurations T_max = 20, expected tolerance ε = 0.05, random configuration range of the hidden-layer parameters Υ := {1:1:10}, learning parameter r = 0.99, initial residual e_0 = T, and the activation function is chosen as the sigmoid (S-curve) function g(x) = 1/(1 + exp(-x));
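For reference, the initialization above can be collected in one place as follows; the dictionary keys are this sketch's own naming, while the values are those stated in the text.

```python
# Illustrative sketch: the initialization above collected in one place.
import numpy as np

config = {
    "L_max": 100,                  # maximum number of hidden-layer nodes
    "T_max": 20,                   # maximum number of random configurations per node
    "epsilon": 0.05,               # expected tolerance on the residual
    "lambda_range": (1, 1, 10),    # hidden-parameter configuration range Y = {1:1:10}
    "r": 0.99,                     # learning parameter
    "g": lambda x: 1.0 / (1.0 + np.exp(-x)),   # sigmoid activation g(x)
}
# the initial residual is e_0 = T, i.e. each client's local product quality data
```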
During the construction of the federated incremental stochastic configuration network, when the k-th client adds the L-th node:
20 pairs of hidden-layer parameters, i.e. input weights w_L^k and biases b_L^k, are randomly generated within the adjustable range [-1, 1] and substituted into the activation function g(x);
the hidden-layer output of the node is h_L^k = [g((w_L^k)^T x_1^k + b_L^k), ..., g((w_L^k)^T x_{n_k}^k + b_L^k)]^T, where T denotes the transpose operation;
set μ_L = (1 - r)/(L + 1), where L is the total number of hidden-layer nodes of the current local network;
a hidden-layer node that satisfies the following inequality constraint is a candidate node:
ξ_{L,q}^k = ⟨e_{L-1,q}^k, h_L^k⟩^2 / ((h_L^k)^T h_L^k) - (1 - r - μ_L) ||e_{L-1,q}^k||^2 ≥ 0, q = 1, ..., m.
If no hidden-layer parameters satisfying the condition are found within 20 rounds, the condition of the supervision mechanism is relaxed: r is updated to r + τ, with the parameter τ ∈ (0, 1 - r), until parameters satisfying the supervision mechanism are found.
Substituting each candidate node into ξ_L^k = Σ_{q=1}^{m} ξ_{L,q}^k gives the supervision values {ξ_L^{k,j}}, j ≤ 20, where ξ_L^{k,j} denotes the supervision value of the j-th random configuration of client k in the L-th iteration; the newly added candidate nodes satisfying the incremental supervision mechanism form the candidate single-node pool of the hidden layer.
The set of hidden-layer parameters corresponding to the maximum value ξ_L^k in the node pool is the best hidden-layer parameter pair satisfying the supervision mechanism, w_L^{k*} and b_L^{k*}.
The best candidate node is uploaded to the central server; different parameters are uploaded depending on the algorithm, which may be weighted aggregation or greedy selection:
for weighted aggregation, upload w_L^{k*} and b_L^{k*};
for greedy selection, upload ξ_L^{k*} and the corresponding w_L^{k*} and b_L^{k*}.
Step two includes:
The central server performs weighted aggregation or greedy selection on the uploaded best nodes.
The weighted aggregation of the uploaded best nodes by the central server includes:
the central server performs weighted aggregation on the uploaded parameters to obtain the global parameters of the L-th node,
w_L = Σ_{k=1}^{K} (n_k / n) · w_L^{k*},  b_L = Σ_{k=1}^{K} (n_k / n) · b_L^{k*},
where n is the total number of data samples of all clients and n_k is the number of data samples of client k.
The greedy selection of the uploaded best nodes by the central server includes:
the central server compares the uploaded parameters ξ_L^{k*}, selects the largest one, and takes the corresponding client's parameters as the global parameters w_L and b_L of the L-th node, i.e. (w_L, b_L) is the element of Θ corresponding to the maximum element of Ξ, where Θ is the set of optimal parameters w_L^{k*} and b_L^{k*} uploaded by every client and Ξ is the set of the uploaded ξ_L^{k*}.
Step three includes:
After each client obtains the global parameters w_L and b_L, it computes the newly added hidden-layer output and the output weights β_L^k, and uploads β_L^k to the server for weighted aggregation, as follows:
based on the current global parameters w_L and b_L, each client computes the newly added hidden-layer output h_L^k = [g(w_L^T x_1^k + b_L), ..., g(w_L^T x_{n_k}^k + b_L)]^T;
it then computes the local hidden-layer output matrix H_L^k = [h_1^k, h_2^k, ..., h_L^k] and the output weights β_L^k of the current local network;
each client uploads its output-weight matrix β_L^k to the central server, and the server performs weighted aggregation on the uploaded β_L^k to obtain β_L, where β_L = Σ_{k=1}^{K} (n_k / n) · β_L^k.
At this point the residual of each client's incremental stochastic configuration network is e_L^k = T_k - H_L^k β_L.
Step four: When the number of hidden-layer nodes of the federated incremental stochastic configuration network exceeds 100, or the residual in the current iteration meets the expected tolerance of 0.05, no new nodes are added and the modeling is complete; otherwise, return to step one and continue constructing the network until the preset requirements are met. Each client then downloads the grinding particle size soft-sensor model based on the federated incremental stochastic configuration network, collects local data online and inputs them into this global soft-sensor model.
Step five: Each client collects the ball mill current c1, spiral classifier current c2, mill ore feed rate c3, mill inlet feed water flow c4 and classifier overflow concentration c5 online and inputs them into the constructed grinding particle size soft-sensor model for online estimation of the grinding particle size, i.e. the estimate t̂^k, where t̂^k is the product quality data estimated online for client k.
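A minimal sketch of this online use of the distributed global model; the stand-in model parameters and the sample values of c1 to c5 are assumptions of the example.

```python
# Illustrative sketch (assumed stand-in model and sample values): online estimation
# of the grinding particle size from one new sample of the five auxiliary variables.
import numpy as np

rng = np.random.default_rng(5)
g = lambda x: 1.0 / (1.0 + np.exp(-x))

L = 20                                                  # hidden nodes of the global model
W = rng.uniform(-1.0, 1.0, (5, L))                      # stand-in global w_1..w_L
b = rng.uniform(-1.0, 1.0, L)                           # stand-in global b_1..b_L
beta = rng.normal(size=(L, 1))                          # stand-in global output weights

x_online = np.array([[0.61, 0.47, 0.52, 0.38, 0.55]])   # normalized c1..c5 sample
t_hat = g(x_online @ W + b) @ beta                       # estimated particle size
print(t_hat.item())
```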
The present invention provides an industrial process soft-sensing method based on a federated incremental stochastic configuration network. There are many specific ways to implement this technical solution, and the above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principle of the present invention, and these improvements and refinements should also be regarded as falling within the protection scope of the present invention. All components not specified in this embodiment can be implemented with existing technologies.

Claims (8)

  1. An industrial process soft-sensing method based on a federated incremental stochastic configuration network, characterized by comprising the following steps:
    Step 1: each factory acquires historical industrial process auxiliary data and the corresponding product quality data, and initializes the parameters required for learning its local incremental stochastic configuration network model; each factory is a client, and each client places the hidden-layer nodes that satisfy its local data constraint into a candidate pool, selects the best candidate node from the pool, and uploads it to the central server;
    Step 2: the central server performs weighted aggregation or greedy selection on the uploaded best candidate nodes to obtain global parameters, and sends the global parameters down to every client as the hidden-layer parameters of its local incremental stochastic configuration network model;
    Step 3: after obtaining the global parameters, each client computes the newly added hidden-layer output, uploads the output weights to the central server for weighted aggregation, and the next round of training begins;
    Step 4: when the number of hidden-layer nodes of the current network exceeds the given maximum number of hidden-layer nodes, or the residual in the current iteration meets the expected tolerance, no new nodes are added, federated training stops, and the trained global model is obtained;
    Step 5: the server distributes the trained global model to each local factory as its soft-sensor model.
  2. The method according to claim 1, characterized in that, in step 1, a total of K factories are set to participate in federated training; for the k-th factory, n_k groups of historical industrial process auxiliary data X_k and the corresponding product quality data T_k are acquired, denoted {X_k, T_k}; the i-th group of historical industrial process auxiliary data of the k-th factory, x_i^k = [x_{i,1}^k, ..., x_{i,z}^k], contains z auxiliary process variables, the corresponding product quality data t_i contains m product quality values, and i ranges from 1 to n_k; the input sample matrix is then X^k = [x_1^k; x_2^k; ...; x_{n_k}^k]; the set of z auxiliary process variables of the i-th group is denoted x_i^k, and x_{i,z}^k denotes the z-th auxiliary process variable of the i-th group of the k-th factory.
  3. The method according to claim 2, characterized in that, in step 1, the parameters required for learning the local incremental stochastic configuration network are initialized, including: the maximum number of hidden-layer nodes L_max, the maximum number of random configurations T_max, the expected tolerance ε, the random configuration range of the hidden-layer parameters Υ = {λ_min:Δλ:λ_max}, the learning parameter r, the activation function g(·), and the initial residual e_0 = T_k, where λ_min is the lower bound of the random-parameter assignment interval, λ_max is the upper bound, and Δλ is the increment of the assignment interval.
  4. The method according to claim 3, characterized in that step 1 further comprises:
    during the construction of each client's local incremental stochastic configuration network, randomly generating the hidden-layer parameters w_L^k and b_L^k within the adjustable symmetric interval Υ;
    the hidden-layer output of the node is h_L^k = [g((w_L^k)^T x_1^k + b_L^k), ..., g((w_L^k)^T x_{n_k}^k + b_L^k)]^T, where the superscript T denotes the transpose of a matrix or vector;
    setting μ_L = (1 - r)/(L + 1), where L is the total number of hidden-layer nodes of the current local incremental stochastic configuration network model, r is the learning parameter and {μ_L} is a non-negative real sequence;
    a hidden-layer node that satisfies the following inequality constraint is a candidate node:
    ξ_{L,q}^k = ⟨e_{L-1,q}^k, h_L^k⟩^2 / ((h_L^k)^T h_L^k) - (1 - r - μ_L) ||e_{L-1,q}^k||^2 ≥ 0, q = 1, 2, ..., m,
    where m is the output dimensionality of each training set, ⟨·,·⟩ denotes the inner product of vectors, e_{L-1,q}^k is the q-th output component of the current residual of client k, and ξ_{L,q}^k represents the supervision mechanism corresponding to the q-th output of the training set when the current number of hidden-layer nodes on client k is L; computing ξ_L^k = Σ_{q=1}^{m} ξ_{L,q}^k yields the newly added candidate nodes {ξ_L^{k,j}}, j ≤ T_max, which form the candidate pool, where ξ_L^k denotes the supervision value of a node randomly configured by the k-th client in the L-th iteration and ξ_L^{k,j} denotes the supervision value of the j-th random configuration of the k-th client in the L-th iteration;
    the set of hidden-layer parameters that maximizes ξ_L^k is the best hidden-layer parameter pair satisfying the supervision mechanism, w_L^{k*} and b_L^{k*}.
  5. The method according to claim 4, characterized in that step 1 further comprises: selecting the best candidate node from the candidate pool and uploading it to the central server, with weighted aggregation and greedy selection:
    for weighted aggregation, uploading w_L^{k*} and b_L^{k*};
    for greedy selection, uploading ξ_L^{k*} and the corresponding w_L^{k*} and b_L^{k*}.
  6. The method according to claim 5, characterized in that step 2 comprises:
    the central server performs weighted aggregation on the uploaded best candidate nodes to obtain the global parameters w_L and b_L of the L-th node of the model:
    w_L = Σ_{k=1}^{K} (n_k / n) · w_L^{k*},  b_L = Σ_{k=1}^{K} (n_k / n) · b_L^{k*},
    where n is the sum of the local historical industrial process auxiliary data counts n_k over all clients.
  7. The method according to claim 5, characterized in that step 2 comprises the greedy selection of the uploaded best candidate nodes by the central server, which comprises:
    the central server compares the uploaded parameters ξ_L^{k*}, selects the largest one, and takes the corresponding client's parameters as the global parameters w_L and b_L of the L-th node of the model, i.e. (w_L, b_L) is the element of Θ corresponding to the maximum element of Ξ, where Θ is the set of optimal parameters w_L^{k*} and b_L^{k*} uploaded by every client and Ξ is the set of the uploaded ξ_L^{k*}.
  8. The method according to claim 6 or 7, characterized in that step 3 comprises:
    based on the current global parameters w_L and b_L, each client computes the newly added hidden-layer output h_L^k = [g(w_L^T x_1^k + b_L), ..., g(w_L^T x_{n_k}^k + b_L)]^T;
    each client computes its local hidden-layer output matrix and the output weights β_L^k of the current local network, where the current hidden-layer output matrix is H_L^k = [h_1^k, h_2^k, ..., h_L^k];
    each local client k uploads its output-weight matrix β_L^k to the central server, and the central server performs weighted aggregation on the uploaded β_L^k to obtain the global output-weight matrix β_L, where β_L = Σ_{k=1}^{K} (n_k / n) · β_L^k.
PCT/CN2022/100744 2021-09-09 2022-06-23 一种基于联邦增量随机配置网络的工业过程软测量方法 WO2023035727A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU2022343574A AU2022343574B2 (en) 2021-09-09 2022-06-23 Industrial process soft-measurement method based on federated incremental stochastic configuration network
US18/002,274 US20240027976A1 (en) 2021-09-09 2022-06-23 Industrial Process Soft Sensor Method Based on Federated Stochastic Configuration Network
JP2022570190A JP7404559B2 (ja) 2021-09-09 2022-06-23 連合増分確率的構成ネットワークに基づく工業プロセスのソフト測定方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111054478.9 2021-09-09
CN202111054478.9A CN113761748B (zh) 2021-09-09 2021-09-09 一种基于联邦增量随机配置网络的工业过程软测量方法

Publications (1)

Publication Number Publication Date
WO2023035727A1 true WO2023035727A1 (zh) 2023-03-16

Family

ID=78794162

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/100744 WO2023035727A1 (zh) 2021-09-09 2022-06-23 一种基于联邦增量随机配置网络的工业过程软测量方法

Country Status (5)

Country Link
US (1) US20240027976A1 (zh)
JP (1) JP7404559B2 (zh)
CN (1) CN113761748B (zh)
AU (1) AU2022343574B2 (zh)
WO (1) WO2023035727A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116958783A (zh) * 2023-07-24 2023-10-27 中国矿业大学 基于深度残差二维随机配置网络的轻量型图像识别方法
CN117094031A (zh) * 2023-10-16 2023-11-21 湘江实验室 工业数字孪生数据隐私保护方法及相关介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113761748B (zh) * 2021-09-09 2023-09-15 中国矿业大学 一种基于联邦增量随机配置网络的工业过程软测量方法
CN118070928B (zh) * 2024-02-18 2024-09-24 淮阴工学院 一种工业过程关键性指标软测量建模方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635337A (zh) * 2018-11-13 2019-04-16 中国矿业大学 一种基于块增量随机配置网络的工业过程软测量建模方法
CN112507219A (zh) * 2020-12-07 2021-03-16 中国人民大学 一种基于联邦学习增强隐私保护的个性化搜索系统
CN113191092A (zh) * 2020-09-30 2021-07-30 中国矿业大学 一种基于正交增量随机配置网络的工业过程产品质量软测量方法
WO2021158313A1 (en) * 2020-02-03 2021-08-12 Intel Corporation Systems and methods for distributed learning for wireless edge dynamics
CN113761748A (zh) * 2021-09-09 2021-12-07 中国矿业大学 一种基于联邦增量随机配置网络的工业过程软测量方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9038449B2 (en) * 2010-04-16 2015-05-26 Camber Ridge, Llc Tire testing systems and methods
WO2018017467A1 (en) 2016-07-18 2018-01-25 NantOmics, Inc. Distributed machine learning systems, apparatus, and methods
US10957442B2 (en) 2018-12-31 2021-03-23 GE Precision Healthcare, LLC Facilitating artificial intelligence integration into systems using a distributed learning platform
CN110807510B (zh) * 2019-09-24 2023-05-09 中国矿业大学 面向工业大数据的并行学习软测量建模方法
CN111914492B (zh) * 2020-04-28 2022-09-13 昆明理工大学 一种基于进化优化的半监督学习工业过程软测量建模方法
CN112989711B (zh) * 2021-04-25 2022-05-20 昆明理工大学 基于半监督集成学习的金霉素发酵过程软测量建模方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635337A (zh) * 2018-11-13 2019-04-16 中国矿业大学 一种基于块增量随机配置网络的工业过程软测量建模方法
WO2021158313A1 (en) * 2020-02-03 2021-08-12 Intel Corporation Systems and methods for distributed learning for wireless edge dynamics
CN113191092A (zh) * 2020-09-30 2021-07-30 中国矿业大学 一种基于正交增量随机配置网络的工业过程产品质量软测量方法
CN112507219A (zh) * 2020-12-07 2021-03-16 中国人民大学 一种基于联邦学习增强隐私保护的个性化搜索系统
CN113761748A (zh) * 2021-09-09 2021-12-07 中国矿业大学 一种基于联邦增量随机配置网络的工业过程软测量方法

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116958783A (zh) * 2023-07-24 2023-10-27 中国矿业大学 基于深度残差二维随机配置网络的轻量型图像识别方法
CN116958783B (zh) * 2023-07-24 2024-02-27 中国矿业大学 基于深度残差二维随机配置网络的轻量型图像识别方法
CN117094031A (zh) * 2023-10-16 2023-11-21 湘江实验室 工业数字孪生数据隐私保护方法及相关介质
CN117094031B (zh) * 2023-10-16 2024-02-06 湘江实验室 工业数字孪生数据隐私保护方法及相关介质

Also Published As

Publication number Publication date
AU2022343574B2 (en) 2024-06-20
US20240027976A1 (en) 2024-01-25
CN113761748B (zh) 2023-09-15
JP2023544935A (ja) 2023-10-26
JP7404559B2 (ja) 2023-12-25
CN113761748A (zh) 2021-12-07
AU2022343574A1 (en) 2023-06-08

Similar Documents

Publication Publication Date Title
WO2023035727A1 (zh) 一种基于联邦增量随机配置网络的工业过程软测量方法
Wang et al. Prediction of bending force in the hot strip rolling process using artificial neural network and genetic algorithm (ANN-GA)
CN106570597B (zh) 一种sdn架构下基于深度学习的内容流行度预测方法
CN112966954B (zh) 一种基于时间卷积网络的防洪调度方案优选方法
CN106503867A (zh) 一种遗传算法最小二乘风电功率预测方法
CN109462520A (zh) 基于lstm模型的网络流量资源态势预测方法
CN110427654A (zh) 一种基于敏感状态的滑坡预测模型构建方法及系统
Niu et al. Model turbine heat rate by fast learning network with tuning based on ameliorated krill herd algorithm
CN109543916A (zh) 一种多晶硅还原炉内硅棒生长速率预估模型
CN107463993A (zh) 基于互信息‑核主成分分析‑Elman网络的中长期径流预报方法
CN114168845B (zh) 一种基于多任务学习的序列化推荐方法
CN106371316A (zh) 基于pso‑lssvm的水岛加药在线控制方法和装置
CN110347192A (zh) 基于注意力机制和自编码器的玻璃炉温智能预测控制方法
CN113191092A (zh) 一种基于正交增量随机配置网络的工业过程产品质量软测量方法
CN111709519A (zh) 一种深度学习并行计算架构方法及其超参数自动配置优化
CN111130909B (zh) 基于自适应储备池esn的网络流量预测方法
CN109493921B (zh) 一种基于多代理模型的常压精馏过程建模方法
Vakhshouri et al. Application of adaptive neuro-fuzzy inference system in high strength concrete
CN107729988A (zh) 基于动态深度置信网络的蓝藻水华预测方法
Xia et al. A multiswarm competitive particle swarm algorithm for optimization control of an ethylene cracking furnace
CN112183721B (zh) 一种基于自适应差分进化的组合水文预测模型的构建方法
Chi et al. Application of BP neural network based on genetic algorithms optimization in prediction of postgraduate entrance examination
Qiu et al. Air traffic flow of genetic algorithm to optimize wavelet neural network prediction
Prakash et al. Speculation of compressive strength of concrete in real-time
CN107025497A (zh) 一种基于Elman神经网络的电力负荷预警方法及装置

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022570190

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 18002274

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22866202

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022343574

Country of ref document: AU

Date of ref document: 20220623

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22866202

Country of ref document: EP

Kind code of ref document: A1