CN116894163A - Method and device for generating load prediction information for charging and discharging facilities based on information security - Google Patents


Info

Publication number
CN116894163A
Authority
CN
China
Prior art keywords
model file
model
load prediction
prediction information
local terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311160057.3A
Other languages
Chinese (zh)
Other versions
CN116894163B (en)
Inventor
付昀夕
赵永生
刘泽三
戚艳
高紫婷
任博强
张文娟
张帅
文爱军
吴俊峰
张磐
闫晨阳
闫廷廷
刘振圻
宫晓辉
肖松宇
Current Assignee
State Grid Information and Telecommunication Co Ltd
State Grid Tianjin Electric Power Co Ltd
Original Assignee
State Grid Information and Telecommunication Co Ltd
State Grid Tianjin Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by State Grid Information and Telecommunication Co Ltd, State Grid Tianjin Electric Power Co Ltd filed Critical State Grid Information and Telecommunication Co Ltd
Priority to CN202311160057.3A
Publication of CN116894163A
Application granted
Publication of CN116894163B
Legal status: Active

Classifications

    • G06F18/20 Pattern recognition; Analysing
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F21/602 Protecting data; Providing cryptographic facilities or services
    • G06N3/0464 Neural networks; Convolutional networks [CNN, ConvNet]
    • G06N3/08 Neural networks; Learning methods
    • H02J3/003 Load forecast, e.g. methods or systems for forecasting future load demand


Abstract

An embodiment of the invention discloses a method and device for generating charging and discharging facility load prediction information based on information security. One embodiment of the method comprises the following steps: collecting a target local data group sequence; performing initial model training on an initial charging load prediction information generation model using each target local data group; splitting the resulting set of trained model files to obtain a set of trained model file groups; performing a first model file aggregation on each trained model file group to obtain a first aggregated model file; performing a second model file aggregation on the set of first aggregated model files to obtain a second aggregated model file; and generating charging load prediction information from real-time operating data and the charging load prediction information generation model corresponding to the second aggregated model file. This embodiment avoids the problem that, when a large number of new energy vehicles await charging at the same time, the power grid corresponding to the energy storage site becomes overloaded, causing voltage instability that may damage vehicle charging equipment.

Description

Method and device for generating load prediction information for charging and discharging facilities based on information security

Technical Field

Embodiments of the present disclosure relate to the field of computer technology, and in particular to a method and device for generating load prediction information for charging and discharging facilities based on information security.

Background

With the development and popularization of new energy vehicles, improving the flexibility and convenience of energy storage for the rapidly growing fleet of new energy vehicles has become particularly important. At present, the usual approach is to set up energy storage sites for new energy vehicles and to charge the vehicles through the vehicle charging equipment installed at those sites.

However, the inventors found that this approach often suffers from the following technical problems:

First, when a large number of new energy vehicles await charging at the same time, the power grid corresponding to the energy storage site becomes overloaded, causing voltage instability that may damage the vehicle charging equipment;

Second, because the amount of data in the local data group of each local terminal varies widely, a local charging load prediction information generation model trained on a small local data group tends to have poor prediction accuracy; when the predicted charging load differs substantially from the actual charging load, the vehicle charging equipment may be damaged;

Third, as the number of layers in a traditional, linearly connected convolutional neural network increases, features may be forgotten, which degrades the accuracy of the generated charging load prediction information; again, a large difference between the predicted and actual charging load may damage the vehicle charging equipment.

The information disclosed in this Background section is provided only to enhance understanding of the background of the inventive concept, and may therefore contain information that does not form part of the prior art already known in this country to a person of ordinary skill in the art.

Summary

This Summary is provided to introduce, in simplified form, concepts that are described in detail in the Detailed Description below. It is not intended to identify key or essential features of the claimed technical solutions, nor to limit their scope.

Some embodiments of the present disclosure propose a method and device for generating charging and discharging facility load prediction information based on information security, to solve one or more of the technical problems mentioned in the Background section above.

In a first aspect, some embodiments of the present disclosure provide a method for generating charging and discharging facility load prediction information based on information security. The method includes: collecting a target local data group sequence, where the target local data group sequence consists of the local data groups corresponding to at least one local terminal, the terminal type of each local terminal is a vehicle charging equipment type, and the target local data is operating data generated by a local terminal while charging a vehicle; for each target local data group in the target local data group sequence, performing initial model training, using the target local data group, on the initial charging load prediction information generation model corresponding to an initial model file to obtain a trained model file, where the initial model file is a model file sent by the edge server associated with the local terminal corresponding to the target local data group, and the initial charging load prediction information generation model is a charging load prediction information generation model awaiting training; splitting the resulting set of trained model files into a set of trained model file groups, where the set of trained model file groups corresponds to at least one edge server; for each trained model file group in the set, performing a first model file aggregation on the trained model file group via its corresponding edge server to obtain a first aggregated model file; performing a second model file aggregation on the resulting set of first aggregated model files to obtain a second aggregated model file; and, for each of the at least one local terminals, generating the charging load prediction information corresponding to that local terminal from its real-time operating data and the charging load prediction information generation model corresponding to the second aggregated model file.
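The hierarchical flow described above — local training, a first aggregation per edge server, then a second, central aggregation — can be sketched as follows. This is an illustrative simplification, not the patented implementation: model files are represented as plain parameter lists, and the aggregation rule is assumed to be an unweighted element-wise average (a FedAvg-style choice; the disclosure does not fix a particular rule here).

```python
def aggregate(model_files):
    """Element-wise average of a group of model parameter vectors.

    Assumed aggregation rule; the disclosure only says "model file
    aggregation" without fixing the operator.
    """
    n = len(model_files)
    return [sum(params) / n for params in zip(*model_files)]


def two_tier_aggregation(trained_file_groups):
    """trained_file_groups: one group of trained model files per edge server."""
    # First aggregation: each edge server averages its terminals' files.
    first_aggregated = [aggregate(group) for group in trained_file_groups]
    # Second aggregation: a single global model file from the edge results.
    return aggregate(first_aggregated)


# Two hypothetical edge servers, each holding trained files from two terminals.
groups = [
    [[1.0, 2.0], [3.0, 4.0]],   # edge server A
    [[5.0, 6.0], [7.0, 8.0]],   # edge server B
]
global_file = two_tier_aggregation(groups)
print(global_file)  # [4.0, 5.0]
```

Averaging at the edge first means only one file per edge server travels upward, which is where the computing-cost saving of hierarchical aggregation comes from.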

In a second aspect, some embodiments of the present disclosure provide a device for generating charging and discharging facility load prediction information based on information security. The device includes: a collection unit configured to collect a target local data group sequence, where the target local data group sequence consists of the local data groups corresponding to at least one local terminal, the terminal type of each local terminal is a vehicle charging equipment type, and the target local data is operating data generated by a local terminal while charging a vehicle; a model training unit configured, for each target local data group in the target local data group sequence, to perform initial model training, using the target local data group, on the initial charging load prediction information generation model corresponding to an initial model file to obtain a trained model file, where the initial model file is a model file sent by the edge server associated with the local terminal corresponding to the target local data group, and the initial charging load prediction information generation model is a charging load prediction information generation model awaiting training; a splitting unit configured to split the resulting set of trained model files into a set of trained model file groups, where the set of trained model file groups corresponds to at least one edge server; a first aggregation unit configured, for each trained model file group in the set, to perform a first model file aggregation on the trained model file group via its corresponding edge server to obtain a first aggregated model file; a second aggregation unit configured to perform a second model file aggregation on the resulting set of first aggregated model files to obtain a second aggregated model file; and a generation unit configured, for each of the at least one local terminals, to generate the charging load prediction information corresponding to that local terminal from its real-time operating data and the charging load prediction information generation model corresponding to the second aggregated model file.

In a third aspect, some embodiments of the present disclosure provide an electronic device comprising one or more processors and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, they cause the one or more processors to implement the method described in any implementation of the first aspect.

In a fourth aspect, some embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored; when the program is executed by a processor, it implements the method described in any implementation of the first aspect.

The above embodiments of the present disclosure have the following beneficial effects. The information-security-based method for generating charging and discharging facility load prediction information avoids the problem that, when a large number of new energy vehicles await charging at the same time, the power grid corresponding to the energy storage site becomes overloaded, causing voltage instability that may damage vehicle charging equipment. Specifically, the damage arises because a large number of vehicles awaiting charging can rapidly increase the load on the grid serving the energy storage site, overloading it and potentially damaging the charging equipment.

Based on this, the method of some embodiments proceeds as follows. First, a target local data group sequence is collected, where the sequence consists of the local data groups corresponding to at least one local terminal, the terminal type of each local terminal is a vehicle charging equipment type, and the target local data is operating data generated by a local terminal while charging a vehicle. Second, for each target local data group in the sequence, initial model training is performed, using the target local data group, on the initial charging load prediction information generation model corresponding to an initial model file, yielding a trained model file; the initial model file is sent by the edge server associated with the local terminal corresponding to the target local data group, and the initial charging load prediction information generation model is a charging load prediction information generation model awaiting training. By using each target local data group as training samples, a personalized local model corresponding to that data group can be generated. Moreover, for each initial model, the entire training process involves only the target local data group, and the local data never leaves its domain (that is, different target local data groups remain isolated from one another), so models are generated while data privacy is protected.

Next, the resulting set of trained model files is split into a set of trained model file groups corresponding to at least one edge server. In practice, different trained model files often correspond to different edge servers, and each edge server must perform its own model file aggregation, so the trained model files must first be split. Further, for each trained model file group, the corresponding edge server performs a first model file aggregation to obtain a first aggregated model file. This aggregation indirectly achieves data fusion and sharing while the underlying data remains isolated, improving the robustness of the model while protecting data privacy. In addition, a second model file aggregation over the set of first aggregated model files yields the second aggregated model file; such hierarchical aggregation reduces computing cost while continuing to protect data privacy.

Finally, for each of the at least one local terminals, the charging load prediction information corresponding to that terminal is generated from its real-time operating data and the charging load prediction information generation model corresponding to the second aggregated model file. In this way, accurate charging load prediction information can be generated, and adjusting the charging load according to that information avoids the voltage instability caused by grid overload when many new energy vehicles await charging, thereby reducing the probability that vehicle charging equipment is damaged.

Brief Description of the Drawings

The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that components are not necessarily drawn to scale.

Figure 1 is a flowchart of some embodiments of the method for generating charging and discharging facility load prediction information based on information security according to the present disclosure;

Figure 2 is a schematic structural diagram of some embodiments of the device for generating charging and discharging facility load prediction information based on information security according to the present disclosure;

Figure 3 is a schematic structural diagram of an electronic device suitable for implementing some embodiments of the present disclosure.

Detailed Description

Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although certain embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit its scope of protection.

It should also be noted that, for convenience of description, only the parts related to the invention are shown in the drawings. The embodiments of the present disclosure, and the features within them, may be combined with one another provided they do not conflict.

Note that concepts such as "first" and "second" mentioned in this disclosure are used only to distinguish different devices, modules, or units, and do not limit the order of, or interdependence between, the functions these devices, modules, or units perform.

Note that the modifiers "a" and "a plurality of" in this disclosure are illustrative rather than restrictive; those skilled in the art will understand that, unless the context clearly indicates otherwise, they should be read as "one or more".

The names of the messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only and do not limit the scope of those messages or information.

The present disclosure is described in detail below with reference to the accompanying drawings and in conjunction with embodiments.

Referring to Figure 1, a process 100 of some embodiments of the method for generating charging and discharging facility load prediction information based on information security according to the present disclosure is shown. The method includes the following steps:

Step 101: collect a target local data group sequence.

In some embodiments, the execution subject of the method (for example, a computing device) may collect the target local data group sequence, where the sequence consists of the local data groups corresponding to at least one local terminal. The terminal type of each local terminal is a vehicle charging equipment type, and the target local data is operating data generated by a local terminal while charging a vehicle. In practice, a local terminal may be a piece of vehicle charging equipment; for example, a charging pile used to charge electric and/or hybrid vehicles.

As an example, the execution subject may obtain the local data groups corresponding to the at least one local terminal through a wired or wireless connection, taking them as the target local data group sequence.

It should be noted that the wireless connection may include, but is not limited to, 3G/4G/5G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra-wideband), and other wireless connection methods now known or developed in the future.

It should also be noted that the computing device may be hardware or software. As hardware, it may be implemented as a distributed cluster of multiple servers or terminal devices, or as a single server or terminal device. As software, it may be installed in the hardware devices listed above and implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or a single software module. No specific limitation is made here. It should be understood that there may be any number of computing devices, as required by the implementation.

Step 102: for each target local data group in the target local data group sequence, perform initial model training, using the target local data group, on the initial charging load prediction information generation model corresponding to the initial model file, obtaining a trained model file.

In some embodiments, for each target local data group in the target local data group sequence, the execution subject may use the target local data group to perform initial model training on the initial charging load prediction information generation model corresponding to the initial model file, obtaining a trained model file. The initial model file is a model file sent by the edge server associated with the local terminal corresponding to the target local data group, and the initial charging load prediction information generation model is a charging load prediction information generation model awaiting training. A charging load prediction information generation model may be a model for predicting the real-time charging load information of a local terminal while it charges a vehicle. In practice, the charging load prediction information generation model may be a GAN (Generative Adversarial Network) model.

In practice, the execution subject may use the target local data group as training samples and perform unsupervised model training on the initial charging load prediction information generation model corresponding to the initial model file, obtaining the trained model file.

In some optional implementations of some embodiments, performing initial model training on the initial charging load prediction information generation model corresponding to the initial model file, using the target local data group, to obtain the trained model file may include the following steps:

The first step is to perform data preprocessing on the target local data group to obtain a preprocessed target local data group.

In practice, the execution subject may apply decimal-scaling normalization to the target local data group to obtain the preprocessed target local data group.
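Decimal-scaling normalization divides every value by a power of ten chosen so that all scaled magnitudes fall below 1. A minimal sketch (the function name and sample data are illustrative, not from the disclosure):

```python
import math

def decimal_scaling(values):
    """Decimal-scaling normalization: divide every value by 10**k,
    where k is the smallest integer making all scaled magnitudes < 1."""
    peak = max(abs(v) for v in values)
    if peak == 0:
        return list(values)
    k = math.ceil(math.log10(peak)) if peak >= 1 else 0
    # When peak is an exact power of ten (e.g. 100), log10 is an integer
    # and the scaled peak would equal 1.0, so bump k by one.
    if peak / (10 ** k) >= 1:
        k += 1
    return [v / (10 ** k) for v in values]

scaled = decimal_scaling([732, -95, 18])
# All magnitudes are now below 1: [0.732, -0.095, 0.018]
```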

The second step is to perform model training on the initial charging load prediction information generation model corresponding to the initial model file, based on the preprocessed target local data group, to generate a candidate model file.

Here, the candidate model file is the model file of the initial charging load prediction information generation model whose number of completed training rounds equals a preset number of training rounds.

In practice, the execution subject may use the preprocessed target local data group as training samples to perform unsupervised model training on the initial charging load prediction information generation model corresponding to the initial model file, and take the model file of the initial charging load prediction information generation model after the preset number of training rounds has been reached as the candidate model file.
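The round-count stopping rule above can be sketched as follows; the helper names, the JSON serialization, and the toy update rule are illustrative assumptions, since the disclosure does not specify a file format or training algorithm:

```python
import json
import os
import tempfile

def train_to_candidate_file(samples, init_params, preset_rounds, update_step):
    """Run exactly `preset_rounds` training passes over the local samples,
    then serialize the resulting parameters as the candidate model file."""
    params = dict(init_params)
    for _ in range(preset_rounds):
        for sample in samples:
            params = update_step(params, sample)
    fd, path = tempfile.mkstemp(suffix=".json")
    with os.fdopen(fd, "w") as f:
        json.dump({"rounds": preset_rounds, "params": params}, f)
    return path

# Toy update rule: nudge a single weight toward each sample's mean.
def toy_update(params, sample):
    mean = sum(sample) / len(sample)
    return {"w": params["w"] + 0.1 * (mean - params["w"])}

path = train_to_candidate_file([[1.0, 3.0]], {"w": 0.0},
                               preset_rounds=5, update_step=toy_update)
```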

The third step is to perform parameter encryption on the candidate model file to generate the trained model file.

In practice, the execution subject may encrypt the parameters of the candidate model file with a symmetric encryption algorithm to generate the trained model file.
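As a loose illustration of symmetric parameter encryption, the sketch below XORs a serialized parameter blob with a hash-derived keystream. This is a stand-in only: the disclosure does not name a cipher, and a real deployment would use a vetted symmetric cipher such as AES rather than a hand-rolled keystream.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key with a hash counter.
    Illustrative only, not a production-grade cipher."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def sym_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the serialized model parameters with the keystream; calling it
    again with the same key decrypts, since XOR is its own inverse."""
    ks = _keystream(key, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

blob = b'{"layer0.weight": [0.12, -0.34]}'   # hypothetical parameter file
cipher = sym_encrypt(b"shared-secret", blob)
assert sym_encrypt(b"shared-secret", cipher) == blob  # round-trip check
```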

Step 103: split the obtained set of trained model files into model file groups, obtaining a set of trained model file groups.

In some embodiments, the execution subject may split the obtained set of trained model files into model file groups, obtaining a set of trained model file groups. Here, the set of trained model file groups corresponds to at least one edge server.

In practice, the execution subject may split the obtained set of trained model files according to the terminal types of the local terminals, obtaining the set of trained model file groups.

In some optional implementations of some embodiments, the execution subject splitting the obtained set of trained model files to obtain the set of trained model file groups may include the following steps:

The first step is to determine the edge server information set corresponding to the target local data group sequence.

Here, the number of edge server information items in the edge server information set is greater than a preset value. In practice, the preset value is greater than or equal to 2.

The second step is to determine, for each edge server information item in the edge server information set, the at least one trained model file corresponding to that item in the set of trained model files, as a trained model file group, thereby obtaining the set of trained model file groups.

Here, the execution subject may take the trained model files corresponding to the at least one local terminal that has a communication relationship with the edge server identified by the edge server information as a trained model file group.
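The grouping rule above, in which each trained model file goes to the group of the edge server that communicates with its local terminal, can be sketched as a simple dictionary grouping (all identifiers are illustrative):

```python
from collections import defaultdict

def split_into_groups(trained_files, terminal_to_edge):
    """Group trained model files by the edge server that serves the
    local terminal which produced each file."""
    groups = defaultdict(list)
    for terminal_id, model_file in trained_files:
        groups[terminal_to_edge[terminal_id]].append(model_file)
    return dict(groups)

groups = split_into_groups(
    [("t1", "m1.bin"), ("t2", "m2.bin"), ("t3", "m3.bin")],
    {"t1": "edge-A", "t2": "edge-A", "t3": "edge-B"},
)
# → {"edge-A": ["m1.bin", "m2.bin"], "edge-B": ["m3.bin"]}
```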

Step 104: for each trained model file group in the set of trained model file groups, perform a first model file aggregation on the trained model file group through the edge server corresponding to that group, to obtain a first aggregated model file.

In some embodiments, for each trained model file group in the set of trained model file groups, the execution subject may perform a first model file aggregation on the trained model file group through the edge server corresponding to that group, to obtain the first aggregated model file.

In practice, the execution subject may perform Bayesian aggregation on each trained model file group in the set of trained model file groups to obtain the first aggregated model file.
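The disclosure does not spell out the Bayesian aggregation rule it uses, but one closed-form special case is instructive: if each terminal's posterior over a shared parameter is Gaussian, multiplying the densities yields a Gaussian whose precision is the sum of the precisions and whose mean is precision-weighted. A sketch under that assumption:

```python
def aggregate_gaussians(posteriors):
    """Aggregate per-terminal Gaussian posteriors (mean, variance) for one
    shared parameter by multiplying the densities: the result is Gaussian
    with the precision-weighted mean and the summed precision. This is a
    simple closed-form special case of Bayesian model aggregation."""
    precision = sum(1.0 / var for _, var in posteriors)
    mean = sum(mu / var for mu, var in posteriors) / precision
    return mean, 1.0 / precision

mu, var = aggregate_gaussians([(1.0, 0.5), (3.0, 0.5)])
# Equal variances: the aggregated mean is the midpoint and variance shrinks.
```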

In some optional implementations of some embodiments, the execution subject performing a first model file aggregation on the trained model file group through the edge server corresponding to the trained model file group, to obtain the first aggregated model file, may include the following steps:

The first step is to determine, from the trained model file group, the first Bayesian prior probability and the first sample distribution of the trained model file group.

Here, the first Bayesian prior probability is:

$$p(\theta_j, w_{1:M}) = p(\theta_j)\prod_{i=1}^{M} p(w_i \mid \theta_j)$$

In practice, $\theta$ is the edge parameter corresponding to an edge server, $j$ is an index, and $\theta_j$ is the edge parameter corresponding to the $j$-th edge server. $M$ is the number of local terminals corresponding to the trained model file group. $w$ is a local terminal parameter, $i$ is an index, and $w_i$ is the $i$-th local terminal parameter; $w_{1:M}$ denotes the 1st through $M$-th local terminal parameters, that is, $w_1, \dots, w_M$. $p(\theta_j, w_{1:M})$ is the first Bayesian prior probability, $p(\theta_j)$ is the prior probability of the edge parameter corresponding to the $j$-th edge server, and $p(w_i \mid \theta_j)$ is the probability of $w_i$ occurring given that $\theta_j$ holds.

Here, the first sample distribution is:

$$p(D_{1:M} \mid w_{1:M}) = \prod_{i=1}^{M} p(D_i \mid w_i), \qquad p(D_i \mid w_i) = f(D_i; w_i)$$

In practice, $D$ is a target local data group, $i$ is an index, and $D_i$ is the $i$-th target local data group. $f$ is a conventional neural network model; for example, $f$ may be a convolutional neural network model. $p(D_{1:M} \mid w_{1:M})$ is the first sample distribution.

The second step is to determine the first log-likelihood function of the edge parameter corresponding to the edge server.

Here, the first log-likelihood function is:

$$\log p(D_{1:M}) = \mathrm{KL}\big(q(\theta_j)\,\|\,p(\theta_j \mid D_{1:M})\big) + \mathcal{L}_{\mathrm{ELBO}}$$

where $q(\theta_j)$ is the approximate posterior probability of the edge parameter corresponding to the $j$-th edge server, $\log p(D_{1:M})$ is the first log-likelihood function, and $\log$ denotes the logarithmic function. $\mathrm{KL}$ is the KL divergence, which measures the difference between the approximate posterior probability and the true posterior probability; $\mathrm{KL}\big(q(\theta_j)\,\|\,p(\theta_j \mid D_{1:M})\big)$ denotes the KL divergence between $q(\theta_j)$ and $p(\theta_j \mid D_{1:M})$. $\mathcal{L}_{\mathrm{ELBO}}$ is the evidence lower bound (ELBO) of $\log p(D_{1:M})$.

The third step is to perform approximation processing on the first Bayesian prior probability and the first sample distribution to generate a first approximate Bayesian posterior probability.

In practice, the execution subject uses variational inference to approximate the product of the first Bayesian prior probability and the first sample distribution, obtaining the first approximate Bayesian posterior probability.

In practice, the first approximate Bayesian posterior probability obtained by variational inference is:

$$q(\theta_j, w_{1:M}) := q_{\lambda}(\theta_j)\prod_{i=1}^{M} q_{\varphi_i}(w_i \mid \theta_j)$$

In practice, $:=$ is the assignment symbol. $q(\theta_j, w_{1:M})$ is the first approximate Bayesian posterior probability. $\lambda$ is the variational parameter of $q_{\lambda}(\theta_j)$, the approximate posterior probability of the $j$-th edge parameter $\theta_j$, and $\varphi_i$ is the variational parameter of $q_{\varphi_i}(w_i \mid \theta_j)$. A variational parameter is an adaptive parameter that remains fixed while the arguments (here, $\theta_j$ and $w_i$) vary; it is used in the mathematical analysis. $q_{\varphi_i}(w_i \mid \theta_j)$ is the approximate posterior probability of the $i$-th local terminal parameter $w_i$ given $\theta_j$.

The fourth step is to obtain a first objective function based on the first log-likelihood function and the first approximate Bayesian posterior probability.

Here, the first objective function involves an edge server parameter and a local terminal parameter set; the edge server parameter represents the edge parameter corresponding to the edge server, and each local terminal parameter represents a parameter of a local terminal corresponding to the edge server.

In practice, the execution subject obtains the first objective function using standard variational inference techniques.

Here, the first objective function is:

$$\mathcal{F}_1(\lambda, \varphi_{1:M}) = \mathrm{KL}\big(q_{\lambda}(\theta_j)\,\|\,p(\theta_j \mid D_{1:M})\big) + \sum_{i=1}^{M}\mathrm{KL}\big(q_{\varphi_i}(w_i)\,\|\,p(w_i \mid D_i)\big)$$

In practice, $\mathcal{F}_1$ is the first objective function. $\mathrm{KL}\big(q_{\lambda}(\theta_j)\,\|\,p(\theta_j \mid D_{1:M})\big)$ denotes the KL divergence between $q_{\lambda}(\theta_j)$ and $p(\theta_j \mid D_{1:M})$. $\log p(D_{1:M})$ is the first log-likelihood function, and $\log p(D_i)$ is the log-likelihood function of $w_i$, given by:

$$\log p(D_i) = \mathrm{KL}\big(q_{\varphi_i}(w_i)\,\|\,p(w_i \mid D_i)\big) + \mathcal{L}^{(i)}_{\mathrm{ELBO}}$$

where $p(w_i \mid D_i)$ is the true posterior probability of the $i$-th local terminal parameter $w_i$, $\mathcal{L}^{(i)}_{\mathrm{ELBO}}$ is the evidence lower bound (ELBO) of $\log p(D_i)$, and $\mathrm{KL}\big(q_{\varphi_i}(w_i)\,\|\,p(w_i \mid D_i)\big)$ denotes the KL divergence between $q_{\varphi_i}(w_i)$ and $p(w_i \mid D_i)$.

The fifth step, in response to determining that the edge server parameter has converged, is to perform parameter optimization processing on the local terminal parameter set to obtain a first model file.

In practice, in response to determining that the edge server parameter has converged, the execution subject performs parameter optimization processing on the local terminal parameter set by minimizing the first objective function, obtaining the first model file.

Here, when the edge server parameter has converged, minimizing the first objective function reduces to:

$$\min_{\varphi_{1:M}} \mathcal{F}_1 \;\Longleftrightarrow\; \max_{\varphi_i} \mathcal{L}^{(i)}_{\mathrm{ELBO}}(\varphi_i), \quad i = 1, \dots, M$$

where:

$$\mathcal{L}^{(i)}_{\mathrm{ELBO}}(\varphi_i) = \mathbb{E}_{q_{\varphi_i}}\!\big[\log p(D_i \mid w_i)\big] - \mathrm{KL}\big(q_{\varphi_i}(w_i)\,\|\,p(w_i \mid \theta_j)\big)$$

$\mathbb{E}_{q_{\varphi_i}}$ denotes the expectation under $q_{\varphi_i}$.

The sixth step, in response to determining that the local terminal parameters in the local terminal parameter set have converged, is to perform parameter optimization processing on the edge server parameter to obtain a second model file.

In practice, in response to determining that the local terminal parameters in the local terminal parameter set have converged, the execution subject performs parameter optimization processing on the edge server parameter by minimizing the first objective function, obtaining the second model file.

Here, when the local terminal parameters in the local terminal parameter set have converged, minimizing the first objective function reduces to:

$$\min_{\lambda} \mathcal{F}_1 \;\Longleftrightarrow\; \max_{\lambda} \mathcal{L}_{\mathrm{ELBO}}(\lambda)$$

where:

$$\mathcal{L}_{\mathrm{ELBO}}(\lambda) = \mathbb{E}_{q}\!\big[\log p(D_{1:M} \mid \theta_j, w_{1:M})\big] - \mathrm{KL}\big(q_{\lambda}(\theta_j)\,\|\,p(\theta_j)\big)$$

$\mathcal{L}_{\mathrm{ELBO}}(\lambda)$ is the evidence lower bound (ELBO) of $\log p(D_{1:M})$, $\mathbb{E}_{q}\!\big[\log p(D_{1:M} \mid \theta_j, w_{1:M})\big]$ is the log-likelihood term parameterized by $\lambda$ and $\varphi$, and $\mathrm{KL}\big(q_{\lambda}(\theta_j)\,\|\,p(\theta_j)\big)$ is the KL divergence between $q_{\lambda}(\theta_j)$ and $p(\theta_j)$.
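Steps five and six together form an alternating (block-coordinate) optimization: one block of parameters is frozen while the other is minimized over, and vice versa. A generic numerical sketch on a toy two-variable objective (the objective, step size, and sweep count are illustrative, not the disclosure's):

```python
def alternate_minimize(f, x, y, step=0.2, sweeps=100):
    """Block-coordinate descent on f(x, y) with central-difference
    gradients: each sweep first updates x with y frozen, then updates y
    with x frozen, mirroring the fix-edge/optimize-local alternation."""
    eps = 1e-6
    for _ in range(sweeps):
        gx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
        x = x - step * gx
        gy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
        y = y - step * gy
    return x, y

# Toy coupled quadratic; its stationary point is (1.6, -2.4).
x_opt, y_opt = alternate_minimize(
    lambda x, y: (x - 1) ** 2 + (y + 2) ** 2 + 0.5 * x * y, 0.0, 0.0)
```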

The seventh step is to determine the first model file and the second model file as the first aggregated model file.

Step 105: perform a second model file aggregation on the obtained set of first aggregated model files to obtain a second aggregated model file.

In some embodiments, the execution subject may perform a second model file aggregation on the obtained set of first aggregated model files to obtain the second aggregated model file.

In practice, the execution subject performs Bayesian aggregation on the obtained set of first aggregated model files to obtain the second aggregated model file.

In some optional implementations of some embodiments, performing a second model file aggregation on the obtained set of first aggregated model files to obtain the second aggregated model file may include the following steps:

The first step is to determine, from the set of first aggregated model files, the second Bayesian prior probability and the second sample distribution of the set of first aggregated model files.

Here, the second Bayesian prior probability is:

$$p(\beta, \theta_{1:E}) = p(\beta)\prod_{j=1}^{E} p(\theta_j \mid \beta)$$

In practice, $\beta$ is the cloud aggregation server parameter. $E$ is the number of edge servers. $\theta_{1:E}$ denotes the edge parameters corresponding to the 1st through $E$-th edge servers, that is, $\theta_1, \dots, \theta_E$. $p(\beta, \theta_{1:E})$ is the second Bayesian prior probability, $p(\beta)$ is the prior probability of the cloud parameter through which the cloud aggregation server connects to the individual edge servers, and $p(\theta_j \mid \beta)$ is the probability of $\theta_j$ occurring given that $\beta$ holds.

Here, the second sample distribution is:

$$p(\tilde{D}_{1:E} \mid \theta_{1:E}) = \prod_{j=1}^{E} p(\tilde{D}_j \mid \theta_j), \qquad p(\tilde{D}_j \mid \theta_j) = f(\tilde{D}_j; \theta_j)$$

In practice, $\tilde{D}_j$ is the sum of the target local data groups of the local terminals connected to the $j$-th edge server. $f$ is a conventional neural network model; for example, $f$ may be a convolutional neural network model. $p(\tilde{D}_{1:E} \mid \theta_{1:E})$ is the second sample distribution.

The second step is to determine the second log-likelihood function of the edge server parameter corresponding to each edge server among the at least one edge server, obtaining a set of second log-likelihood functions.

Here, the second log-likelihood function is:

$$\log p(\tilde{D}_j) = \mathrm{KL}\big(q(\beta)\,\|\,p(\beta \mid \tilde{D}_j)\big) + \mathcal{L}^{(j)}_{\mathrm{ELBO}}$$

In practice, $\log p(\tilde{D}_j)$ is the second log-likelihood function, $q(\beta)$ is the approximate posterior probability of the cloud aggregation server parameter, $\mathrm{KL}\big(q(\beta)\,\|\,p(\beta \mid \tilde{D}_j)\big)$ denotes the KL divergence between $q(\beta)$ and $p(\beta \mid \tilde{D}_j)$, and $\mathcal{L}^{(j)}_{\mathrm{ELBO}}$ is the evidence lower bound (ELBO) of $\log p(\tilde{D}_j)$.

The third step is to generate a second approximate Bayesian posterior probability based on the second Bayesian prior probability and the second sample distribution.

In practice, the execution subject uses variational inference to approximate the product of the second Bayesian prior probability and the second sample distribution, obtaining the second approximate Bayesian posterior probability.

Therefore, the second approximate Bayesian posterior probability obtained by variational inference is:

$$q(\beta, \theta_{1:E}) := q_{\psi}(\beta)\prod_{j=1}^{E} q_{\xi_j}(\theta_j \mid \beta)$$

In practice, $q(\beta, \theta_{1:E})$ is the second approximate Bayesian posterior probability. Combining the first approximate Bayesian posterior probability with the second, it can be seen that the variational parameters consist of $\lambda$, $\varphi$ and $\psi$, $\xi$. $\psi$ is the variational parameter of $q_{\psi}(\beta)$, the approximate posterior probability of $\beta$, and $q_{\xi_j}(\theta_j \mid \beta)$ is the approximate posterior probability of the $j$-th edge parameter $\theta_j$ given $\beta$.

The fourth step is to obtain a second objective function based on the set of second log-likelihood functions and the second approximate Bayesian posterior probability, where the second objective function involves a cloud aggregation server parameter and a target edge server parameter set; the cloud aggregation server parameter represents the cloud parameter between the cloud aggregation server and its edge servers, and the target edge server parameter set represents the edge parameter set of the at least one edge server.

In practice, the execution subject obtains the second objective function using standard variational inference techniques.

Here, the second objective function is:

$$\mathcal{F}_2(\psi, \xi_{1:E}) = \mathrm{KL}\big(q_{\psi}(\beta)\,\|\,p(\beta \mid \tilde{D}_{1:E})\big) + \sum_{j=1}^{E}\mathrm{KL}\big(q_{\xi_j}(\theta_j)\,\|\,p(\theta_j \mid \tilde{D}_j)\big)$$

In practice, $\mathcal{F}_2$ is the second objective function. $\mathrm{KL}\big(q_{\psi}(\beta)\,\|\,p(\beta \mid \tilde{D}_{1:E})\big)$ denotes the KL divergence between $q_{\psi}(\beta)$ and $p(\beta \mid \tilde{D}_{1:E})$. $\log p(\tilde{D}_j)$ is the second log-likelihood function, and it is also the log-likelihood function of $\theta_j$, given by:

$$\log p(\tilde{D}_j) = \mathrm{KL}\big(q_{\xi_j}(\theta_j)\,\|\,p(\theta_j \mid \tilde{D}_j)\big) + \mathcal{L}^{(j)}_{\mathrm{ELBO}}$$

where $p(\theta_j \mid \tilde{D}_j)$ is the true posterior probability of the edge parameter of the $j$-th edge server, $\mathcal{L}^{(j)}_{\mathrm{ELBO}}$ is the evidence lower bound (ELBO) of $\log p(\tilde{D}_j)$, and $\mathrm{KL}\big(q_{\xi_j}(\theta_j)\,\|\,p(\theta_j \mid \tilde{D}_j)\big)$ denotes the KL divergence between $q_{\xi_j}(\theta_j)$ and $p(\theta_j \mid \tilde{D}_j)$.

The fifth step, in response to determining that the cloud aggregation server parameter has converged, is to perform parameter optimization processing on the target edge server parameter set to obtain a third model file.

In practice, in response to determining that the cloud aggregation server parameter has converged, the execution subject performs parameter optimization processing on the target edge server parameter set by minimizing the second objective function, obtaining the third model file.

Here, minimizing the second objective function reduces to:

$$\min_{\xi_{1:E}} \mathcal{F}_2 \;\Longleftrightarrow\; \max_{\xi_j} \mathcal{L}^{(j)}_{\mathrm{ELBO}}(\xi_j), \quad j = 1, \dots, E$$

where:

$$\mathcal{L}^{(j)}_{\mathrm{ELBO}}(\xi_j) = \mathbb{E}_{q_{\xi_j}}\!\big[\log p(\tilde{D}_j \mid \theta_j)\big] - \mathrm{KL}\big(q_{\xi_j}(\theta_j)\,\|\,p(\theta_j \mid \beta)\big)$$

$\mathbb{E}_{q_{\xi_j}}$ denotes the expectation under $q_{\xi_j}$.

The sixth step, in response to determining that the target edge server parameters in the target edge server parameter set have converged, is to perform parameter optimization processing on the cloud aggregation server parameter to obtain a fourth model file.

In practice, in response to determining that the target edge server parameters in the target edge server parameter set have converged, the execution subject performs parameter optimization processing on the cloud aggregation server parameter by minimizing the second objective function, obtaining the fourth model file.

Here, minimizing the second objective function reduces to:

$$\min_{\psi} \mathcal{F}_2 \;\Longleftrightarrow\; \max_{\psi} \mathcal{L}_{\mathrm{ELBO}}(\psi)$$

where:

$$\mathcal{L}_{\mathrm{ELBO}}(\psi) = \mathbb{E}_{q}\!\big[\log p(\tilde{D}_{1:E} \mid \beta, \theta_{1:E})\big] - \mathrm{KL}\big(q_{\psi}(\beta)\,\|\,p(\beta)\big)$$

$\mathcal{L}_{\mathrm{ELBO}}(\psi)$ is the evidence lower bound (ELBO) of $\log p(\tilde{D}_{1:E})$, $\mathbb{E}_{q}\!\big[\log p(\tilde{D}_{1:E} \mid \beta, \theta_{1:E})\big]$ is the log-likelihood term parameterized by $\psi$ and $\xi$, and $\mathrm{KL}\big(q_{\psi}(\beta)\,\|\,p(\beta)\big)$ is the KL divergence between $q_{\psi}(\beta)$ and $p(\beta)$.

The seventh step is to determine the third model file and the fourth model file as the second aggregated model file.

The content in the above "some optional implementations of some embodiments" of steps 104 and 105, as one inventive point of the present disclosure, solves the second technical problem mentioned in the background art, namely: "because the amount of data in the local data group corresponding to each local terminal varies greatly, when the amount of data in a local data group is small, the prediction accuracy of the local charging load prediction information generation model trained on that local data group will be poor; when there is a large difference between the predicted charging load prediction information and the actual charging load, vehicle charging equipment may be damaged."

In practice, the amount of data in the local data group corresponding to each local terminal varies greatly, and the number of training samples determines, to a certain extent, the prediction accuracy of the local charging load prediction information generation model. When the amount of data in a local data group is small, the prediction accuracy of the locally trained model is poor, and a large difference between the predicted charging load prediction information and the actual charging load may damage the vehicle charging equipment. Based on this, first, for each trained model file group in the set of trained model file groups, the present disclosure performs a first model file aggregation on the trained model file group through the edge server corresponding to that group, obtaining the first aggregated model file. Aggregating the trained model file groups of the local terminals served by the same edge server protects data privacy while indirectly achieving fusion and sharing among the local data groups of the individual local terminals, improving the robustness of the model. Finally, the present disclosure performs a second model file aggregation on the obtained set of first aggregated model files, obtaining the second aggregated model file. Through layered model aggregation, the trained model files of the individual local terminals are indirectly integrated while the underlying data remain isolated. In summary, by aggregating the models twice, the local data groups of the individual local terminals are fused and indirectly shared while data privacy is protected, which improves the accuracy of the charging load prediction information, avoids the damage to vehicle charging equipment that a large difference between the predicted charging load prediction information and the actual charging load could cause, and thereby ensures the safety of the vehicle charging equipment.

Step 106: for each local terminal among the at least one local terminal, generate the charging load prediction information corresponding to the local terminal based on the real-time operating data corresponding to the local terminal and the charging load prediction information generation model corresponding to the second aggregated model file.

In some embodiments, for each local terminal among the at least one local terminal, the execution subject may generate the charging load prediction information corresponding to the local terminal based on the real-time operating data corresponding to the local terminal and the charging load prediction information generation model corresponding to the second aggregated model file. Here, the real-time operating data are the real-time operating data of the local terminal while it is running (for example, during vehicle charging). In practice, the charging load prediction information may represent the predicted charging load of the local terminal over a future period of time.

In practice, the execution subject may input the real-time operating data corresponding to the local terminal into the charging load prediction information generation model corresponding to the second aggregated model file, to generate the charging load prediction information corresponding to the local terminal.

Optionally, the charging load prediction information generation model corresponding to the second aggregated model file may include a data feature extraction model and an information prediction model.

In some optional implementations of some embodiments, the execution subject generating the charging load prediction information corresponding to the local terminal, based on the real-time operating data corresponding to the local terminal and the charging load prediction information generation model corresponding to the second aggregated model file, may include the following steps:

The first step is to input the real-time operating data corresponding to the local terminal into the data feature extraction model to generate feature-extracted data information.

Here, the data feature extraction model may include K serially connected convolutional residual blocks based on an attention mechanism. In practice, each convolutional residual block consists of 2 convolutional layers, 1 residual module, and 1 content-based attention (CBA) layer.
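The attention-gated residual connection can be illustrated on plain feature vectors: the transformed features are reweighted by softmax attention and added back onto the identity path, so the input always survives the block. The transform below stands in for the two convolutional layers and is purely illustrative:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_residual_block(x, transform):
    """Sketch of one attention-gated residual block: the transformed
    features are reweighted by content-based (softmax) attention and
    added back onto the input, preserving the identity path."""
    h = transform(x)                 # stand-in for the two conv layers
    weights = softmax(h)             # content-based attention weights
    return [xi + wi * hi for xi, wi, hi in zip(x, weights, h)]

out = attention_residual_block([1.0, 2.0], lambda v: [2 * vi for vi in v])
```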

In practice, the execution subject may perform feature extraction on the real-time operating data according to the data feature extraction model, to generate the feature-extracted data information.

The second step is to perform data cleaning on the feature-extracted data information to obtain cleaned data information.

In practice, the execution subject may remove outliers from the feature-extracted data information to generate the cleaned data information.
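A common concrete choice for the outlier elimination mentioned above is a z-score filter; the threshold used here is an assumed default, as the disclosure does not fix one:

```python
import statistics

def remove_outliers(values, z_max=3.0):
    """Drop points whose z-score magnitude exceeds z_max, one common
    outlier-elimination rule; the threshold is an assumed default."""
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    if sigma == 0:
        return list(values)
    return [v for v in values if abs(v - mu) / sigma <= z_max]

clean = remove_outliers([10, 11, 9, 10, 12, 11, 500], z_max=2.0)
# The extreme reading 500 is dropped; the six typical readings remain.
```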

The third step is to perform information prediction on the cleaned data information through the information prediction model, obtaining the charging load prediction information corresponding to the local terminal.

In practice, the execution subject may input the cleaned data information into the information prediction model (for example, a fully connected layer) to generate the charging load prediction information corresponding to the local terminal.

The content in the above "some optional implementations of some embodiments", as one inventive point of the present disclosure, solves the third technical problem mentioned in the background art, namely: "when the number of layers of a conventional, linearly connected convolutional neural network is increased, the problem of feature forgetting may arise, affecting the accuracy of the generated charging load prediction information; when there is a large difference between the predicted charging load prediction information and the actual charging load, vehicle charging equipment may be damaged."

In practice, when the number of layers of a conventional, linearly connected convolutional neural network is increased, gradients tend to vanish, which may cause feature forgetting and thereby reduce the accuracy of the generated charging load prediction information, so that a large difference between the predicted charging load prediction information and the actual charging load may damage the vehicle charging equipment. Based on this, the present disclosure designs a data feature extraction model and an information prediction model. First, features are extracted from the real-time operating data through the data feature extraction model; the residual modules included in the data feature extraction model avoid the performance degradation caused by deepening the network and alleviate the vanishing gradient problem. Next, data cleaning is performed on the feature-extracted data information to obtain the cleaned data information; cleaning the data reduces computational cost and improves data quality. Finally, the cleaned data information is input into the information prediction model to generate the charging load prediction information corresponding to the local terminal. In this way, the feature forgetting that deepening a conventional, linearly connected convolutional neural network may cause is avoided, and the accuracy of the charging load prediction information is improved.

The above embodiments of the present disclosure have the following beneficial effects. The information security-based method for generating load prediction information for charging and discharging facilities of some embodiments avoids the situation in which a large number of new energy vehicles waiting to charge at the same time overload the power grid corresponding to the energy storage site, causing voltage instability that may damage vehicle charging equipment. Specifically, the cause of such damage is that a large number of new energy vehicles waiting to charge at the same time may make the grid load corresponding to the energy storage site rise rapidly, overloading the grid and potentially damaging the charging equipment.

Based on this, the method of some embodiments proceeds as follows. First, a target local data group sequence is collected, where the target local data group sequence consists of local data groups corresponding to at least one local terminal, the terminal type of the local terminal is the vehicle charging equipment type, and the target local data is the operating data generated by the local terminal while charging a vehicle. Second, for each target local data group in the sequence, initial model training is performed, using the target local data group, on the initial charging load prediction information generation model corresponding to an initial model file, to obtain a trained model file; the initial model file is a model file sent by the edge server associated with the local terminal corresponding to the target local data group, and the initial charging load prediction information generation model is the charging load prediction information generation model to be trained. By using each target local data group in the sequence as training samples, a personalized local model corresponding to that data group can be generated. Moreover, for each initial charging load prediction information generation model, the entire training process involves only its own target local data group, and local data never leaves its domain (that is, different target local data groups are isolated from one another), so models are generated while data privacy is protected.

Next, the obtained set of trained model files is split into a set of trained model file groups, where the set of trained model file groups corresponds to at least one edge server. In practice, different trained model files usually correspond to different edge servers, and the corresponding edge server must perform the model file aggregation; the trained model files therefore need to be split into groups. Further, for each trained model file group in the set, the edge server corresponding to that group performs a first model file aggregation on the group to obtain a first aggregated model file. Model file aggregation indirectly achieves data fusion and sharing while the underlying data remain isolated, improving the robustness of the model while protecting data privacy. In addition, a second model file aggregation is performed on the obtained set of first aggregated model files to obtain a second aggregated model file. Hierarchical aggregation reduces computing costs while protecting data privacy.

Finally, for each local terminal among the at least one local terminal, charging load prediction information corresponding to that terminal is generated from the terminal's real-time operating data and the charging load prediction information generation model corresponding to the second aggregated model file. In this way, accurate charging load prediction information can be generated and used to adjust the charging load, which avoids the voltage instability caused by grid overload at the energy storage site when many new energy vehicles are waiting to charge, and thus reduces the probability of vehicle charging equipment being damaged.
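The two-level flow summarized above (local training, per-edge-server aggregation, then a second global aggregation) can be sketched compactly. The function names and the simple element-wise averaging rule (FedAvg-style) are illustrative assumptions; the disclosure does not mandate this exact training or aggregation rule.

```python
def train_local(initial_weights, local_data):
    # Stand-in for local training: nudge each weight toward the local data
    # mean. The local data itself is never shared, only the resulting weights.
    mean = sum(local_data) / len(local_data)
    return [w + 0.5 * (mean - w) for w in initial_weights]

def aggregate(model_files):
    # Element-wise average of a list of weight vectors.
    n = len(model_files)
    return [sum(ws) / n for ws in zip(*model_files)]

initial = [0.0, 0.0]  # initial model file distributed by the edge servers

# One trained model file per local terminal, grouped by its edge server
# (this grouping is the "model file splitting" step).
groups = {
    "edge_server_a": [train_local(initial, [1.0, 3.0]),
                      train_local(initial, [2.0, 2.0])],
    "edge_server_b": [train_local(initial, [4.0, 6.0])],
}

# First aggregation: each edge server aggregates only its own group.
first_aggregated = {srv: aggregate(files) for srv, files in groups.items()}

# Second aggregation: the first aggregated model files are aggregated once
# more to yield the single second aggregated model file.
second_aggregated = aggregate(list(first_aggregated.values()))
print(second_aggregated)  # -> [1.75, 1.75]
```

Note how the raw data groups never meet: only model files cross the terminal, edge, and global boundaries, which is the privacy property the description emphasizes.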

With further reference to Figure 2, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an apparatus for generating load prediction information for charging and discharging facilities based on information security. These apparatus embodiments correspond to the method embodiments shown in Figure 1, and the apparatus can be applied to various electronic devices.

As shown in Figure 2, the information security-based charging and discharging facility load prediction information generation apparatus 200 of some embodiments includes: an acquisition unit 201, a model training unit 202, a splitting unit 203, a first aggregation unit 204, a second aggregation unit 205, and a generation unit 206. The acquisition unit 201 is configured to collect a target local data group sequence, where the target local data group sequence consists of local data groups corresponding to at least one local terminal, the terminal type of the local terminal is the vehicle charging equipment type, and the target local data is the operating data generated by the local terminal while charging a vehicle. The model training unit 202 is configured to, for each target local data group in the sequence, perform initial model training, using that data group, on the initial charging load prediction information generation model corresponding to an initial model file, to obtain a trained model file; the initial model file is a model file sent by the edge server associated with the local terminal corresponding to the target local data group, and the initial charging load prediction information generation model is the charging load prediction information generation model to be trained. The splitting unit 203 is configured to split the obtained set of trained model files into a set of trained model file groups, where the set of trained model file groups corresponds to at least one edge server. The first aggregation unit 204 is configured to, for each trained model file group in the set, perform a first model file aggregation on the group through the edge server corresponding to that group, to obtain a first aggregated model file. The second aggregation unit 205 is configured to perform a second model file aggregation on the obtained set of first aggregated model files to obtain a second aggregated model file. The generation unit 206 is configured to, for each local terminal among the at least one local terminal, generate the charging load prediction information corresponding to that terminal according to the terminal's real-time operating data and the charging load prediction information generation model corresponding to the second aggregated model file.

It can be understood that the units recorded in the apparatus 200 correspond to the steps of the method described with reference to Figure 1. Therefore, the operations, features, and beneficial effects described above for the method also apply to the apparatus 200 and the units it contains, and are not repeated here.

Referring now to Figure 3, a schematic structural diagram of an electronic device (e.g., a computing device) 300 suitable for implementing some embodiments of the present disclosure is shown. The electronic device shown in Figure 3 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.

As shown in Figure 3, the electronic device 300 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 301, which can perform various appropriate actions and processes according to a program stored in a read-only memory 302 or a program loaded from a storage device 308 into a random access memory 303. The random access memory 303 also stores various programs and data required for the operation of the electronic device 300. The processing device 301, the read-only memory 302, and the random access memory 303 are connected to one another via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.

Generally, the following devices may be connected to the I/O interface 305: an input device 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; an output device 307 including, for example, a liquid crystal display (LCD), speaker, or vibrator; a storage device 308 including, for example, a magnetic tape or hard disk; and a communication device 309. The communication device 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. Although Figure 3 shows the electronic device 300 with various devices, it should be understood that implementing or providing all of the devices shown is not required; more or fewer devices may alternatively be implemented or provided. Each block shown in Figure 3 may represent one device or, as needed, multiple devices.

In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, installed from the storage device 308, or installed from the read-only memory 302. When the computer program is executed by the processing device 301, the above-described functions defined in the methods of some embodiments of the present disclosure are performed.

It should be noted that the computer-readable medium described in some embodiments of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In some embodiments of the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to: an electrical wire, an optical cable, RF (radio frequency), or any suitable combination of the above.

In some embodiments, the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (Hypertext Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.

The above computer-readable medium may be included in the above electronic device, or it may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: collect a target local data group sequence, where the target local data group sequence consists of local data groups corresponding to at least one local terminal, the terminal type of the local terminal is the vehicle charging equipment type, and the target local data is the operating data generated by the local terminal while charging a vehicle; for each target local data group in the sequence, perform initial model training, using that data group, on the initial charging load prediction information generation model corresponding to an initial model file, to obtain a trained model file, where the initial model file is a model file sent by the edge server associated with the local terminal corresponding to the target local data group, and the initial charging load prediction information generation model is the charging load prediction information generation model to be trained; split the obtained set of trained model files into a set of trained model file groups, where the set of trained model file groups corresponds to at least one edge server; for each trained model file group in the set, perform a first model file aggregation on the group through the edge server corresponding to that group, to obtain a first aggregated model file; perform a second model file aggregation on the obtained set of first aggregated model files to obtain a second aggregated model file; and, for each local terminal among the at least one local terminal, generate the charging load prediction information corresponding to that terminal according to the terminal's real-time operating data and the charging load prediction information generation model corresponding to the second aggregated model file.

Computer program code for performing the operations of some embodiments of the present disclosure may be written in one or more programming languages, or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).

The flowcharts and block diagrams in the figures illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

The units described in some embodiments of the present disclosure may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquisition unit, a model training unit, a splitting unit, a first aggregation unit, a second aggregation unit, and a generation unit. In some cases, the names of these units do not limit the units themselves; for example, the acquisition unit may also be described as "a unit that collects a target local data group sequence".

The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.

The above description is merely of some preferred embodiments of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features; without departing from the above inventive concept, it also covers other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present disclosure.

Claims (7)

1. A charge and discharge facility load prediction information generation method based on information security comprises the following steps:
collecting a target sample local data set sequence, wherein the target sample local data set sequence is a local data set corresponding to at least one local terminal, the terminal type of the local terminal is a vehicle charging equipment type, and the target sample local data is operation data generated when the local terminal charges a vehicle;
for each target sample local data set in the target sample local data set sequence, performing initial model training on an initial charge load prediction information generation model corresponding to an initial model file through the target sample local data set to obtain a trained model file, wherein the initial model file is a model file sent by an edge server associated with a local terminal corresponding to the target sample local data set, and the initial charge load prediction information generation model is a charge load prediction information generation model to be subjected to model training;
splitting the obtained set of trained model files to obtain a set of trained model file groups, wherein the set of trained model file groups corresponds to at least one edge server;
For each trained model file group in the trained model file group set, performing primary model file aggregation on the trained model file group through an edge server corresponding to the trained model file group to obtain a first aggregated model file;
performing secondary model file aggregation on the obtained first aggregated model file set to obtain a second aggregated model file;
and for each local terminal in the at least one local terminal, generating the charge load prediction information corresponding to the local terminal according to the real-time operation data corresponding to the local terminal and the charge load prediction information generation model corresponding to the second aggregated model file.
2. The method according to claim 1, wherein the performing initial model training on the initial charge load prediction information generation model corresponding to the initial model file through the target local data set to obtain a trained model file includes:
performing data preprocessing on the target sample local data set to obtain a preprocessed target sample local data set;
performing model training on the initial charge load prediction information generation model corresponding to the initial model file according to the preprocessed target sample local data set to generate a candidate model file, wherein the candidate model file is a model file corresponding to an initial charge load prediction information generation model whose number of completed training iterations is consistent with a preset number of training iterations;
And carrying out parameter encryption on the candidate model file to generate the trained model file.
3. The method of claim 2, wherein the splitting the obtained set of trained model files to obtain a set of trained model file groups includes:
determining an edge server information set corresponding to the target local data set sequence, wherein the number of the edge server information in the edge server information set is larger than a preset value;
and determining at least one trained model file corresponding to each piece of edge server information in the set of trained model files as a trained model file group, to obtain the set of trained model file groups.
4. The method of claim 3, wherein the performing, through the edge server corresponding to the trained model file group, primary model file aggregation on the trained model file group to obtain a first aggregated model file includes:
determining a first Bayesian prior probability and a first sample distribution of the trained model file group according to the trained model file group;
Determining a first log likelihood function of an edge parameter corresponding to the edge server;
performing approximate processing on the first Bayesian prior probability and the first sample distribution to generate a first approximate Bayesian posterior probability;
obtaining a first objective function based on the first log-likelihood function and the first approximate Bayesian posterior probability, wherein the first objective function comprises an edge server parameter and a local terminal parameter set, the edge server parameter represents the edge parameter corresponding to the edge server, and a local terminal parameter represents a parameter of a local terminal corresponding to the edge server;
in response to determining that the edge server parameter converges, performing parameter optimization processing on a local terminal parameter set to obtain a first model file;
in response to determining that the local terminal parameters in the local terminal parameter set converge, performing parameter optimization processing on the edge server parameter to obtain a second model file;
and determining the first model file and the second model file as the first aggregated model file.
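As an illustration outside the claim language, the alternating scheme of claim 4 — optimizing one parameter block while the other is held fixed, and switching once it has converged — can be sketched on a toy objective. The quadratic objective, learning rate, and convergence test below are assumptions for illustration, not the Bayesian objective function of the claim.

```python
def gradient_step(value, grad, lr=0.25):
    # One gradient-descent step on a single scalar parameter.
    return value - lr * grad

# Toy separable objective standing in for the first objective function:
#   J(e, p) = e**2 + sum((p_i - t_i)**2)
# with edge server parameter e and local terminal parameters p_i
# (the targets t_i are assumed data).
targets = [1.0, -2.0]
edge_param = 4.0
local_params = [0.0, 0.0]

# Phase 1: optimize the edge server parameter until it converges,
# holding the local terminal parameters fixed.
while abs(2 * edge_param) > 1e-9:          # gradient of e**2 is 2e
    edge_param = gradient_step(edge_param, 2 * edge_param)

# Phase 2: with the edge parameter converged, optimize each local
# terminal parameter (yielding the counterpart model file).
for _ in range(100):
    local_params = [gradient_step(p, 2 * (p - t))
                    for p, t in zip(local_params, targets)]

print(round(edge_param, 6), [round(p, 6) for p in local_params])
```

On this separable toy objective the two phases decouple completely; in the claimed scheme the blocks are coupled through the shared objective, which is why convergence of one block gates optimization of the other.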
5. An information security-based charge and discharge facility load prediction information generation device, comprising:
an acquisition unit configured to collect a target local data set sequence, wherein the target local data set sequence is a local data set corresponding to at least one local terminal, the terminal type of the local terminal is a vehicle charging equipment type, and the target local data is operation data generated when the local terminal charges a vehicle;
a model training unit configured to, for each target local data set in the target local data set sequence, perform initial model training on an initial charge load prediction information generation model corresponding to an initial model file through the target local data set to obtain a trained model file, wherein the initial model file is a model file sent by an edge server associated with a local terminal corresponding to the target local data set, and the initial charge load prediction information generation model is a charge load prediction information generation model to be subjected to model training;
a splitting unit configured to split the obtained set of trained model files to obtain a set of trained model file groups, wherein the set of trained model file groups corresponds to at least one edge server;
a first aggregation unit configured to, for each trained model file group in the set of trained model file groups, perform primary model file aggregation on the trained model file group through the edge server corresponding to the trained model file group to obtain a first aggregated model file;
the second aggregation unit is configured to perform secondary model file aggregation on the first aggregated model file set to obtain a second aggregated model file;
and a generating unit configured to, for each local terminal in the at least one local terminal, generate the charge load prediction information corresponding to the local terminal according to the real-time operation data corresponding to the local terminal and the charge load prediction information generation model corresponding to the second aggregated model file.
6. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 4.
7. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1 to 4.
CN202311160057.3A 2023-09-11 2023-09-11 Charging and discharging facility load prediction information generation method and device based on information security Active CN116894163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311160057.3A CN116894163B (en) 2023-09-11 2023-09-11 Charging and discharging facility load prediction information generation method and device based on information security

Publications (2)

Publication Number Publication Date
CN116894163A true CN116894163A (en) 2023-10-17
CN116894163B CN116894163B (en) 2024-01-16

Family

ID=88312411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311160057.3A Active CN116894163B (en) 2023-09-11 2023-09-11 Charging and discharging facility load prediction information generation method and device based on information security

Country Status (1)

Country Link
CN (1) CN116894163B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010027726A1 (en) * 2010-04-14 2012-05-10 Bayerische Motoren Werke Aktiengesellschaft Electrical power providing method for electrical energy network for battery electric vehicle, involves determining driving profile of vehicle, and providing electrical power to network as function of energy requirement prediction
CN113610303A (en) * 2021-08-09 2021-11-05 北京邮电大学 Load prediction method and system
US20220300858A1 (en) * 2020-10-14 2022-09-22 Ennew Digital Technology Co., Ltd Data measurement method and apparatus, electronic device and computer-readable medium
CN115563859A (en) * 2022-09-26 2023-01-03 国电南瑞南京控制系统有限公司 Power load prediction method, device and medium based on layered federal learning
CN115907136A (en) * 2022-11-16 2023-04-04 北京国电通网络技术有限公司 Electric vehicle scheduling method, device, equipment and computer readable medium
CN116050557A (en) * 2021-10-28 2023-05-02 新智我来网络科技有限公司 Power load prediction method, device, computer equipment and medium
CN116111579A (en) * 2022-12-13 2023-05-12 广西电网有限责任公司 Electric automobile access distribution network clustering method
CN116546567A (en) * 2023-07-06 2023-08-04 深圳市大数据研究院 Data processing method and system based on Bayesian federal learning and electronic equipment
CN116562476A (en) * 2023-07-12 2023-08-08 北京中电普华信息技术有限公司 Charging load information generation method and device applied to electric automobile
CN116596105A (en) * 2023-03-13 2023-08-15 燕山大学 Charging station load prediction method considering power distribution network development

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AFAF TAIK et al.: "Electrical load forecasting using edge computing and federated learning", ICC 2020 - 2020 IEEE International Conference on Communications (ICC), pages 1-7 *
TANG Lingtao et al.: "Research progress on privacy issues in federated learning", Journal of Software (软件学报), vol. 34, no. 1, pages 197-229 *

Also Published As

Publication number Publication date
CN116894163B (en) 2024-01-16

Similar Documents

Publication Publication Date Title
CN115085196B (en) Power load predicted value determination method, device, equipment and computer readable medium
CN111985831A (en) Scheduling method and device of cloud computing resources, computer equipment and storage medium
CN116512980B (en) Power distribution method, device, equipment and medium based on internal resistance of power battery
CN116562476B (en) Charging load information generation method and device applied to electric automobile
CN116703131A (en) Power resource allocation method, device, electronic device and computer readable medium
CN115759444B (en) Power equipment distribution methods, devices, electronic equipment and computer-readable media
CN112907942A (en) Vehicle scheduling method, device, equipment and medium based on edge calculation
CN118035893A (en) A rolling bearing fault diagnosis method and system based on cloud-edge collaborative federated model migration
CN115640285A (en) Power abnormality information transmission method, device, electronic apparatus, and medium
CN116894163B (en) Charging and discharging facility load prediction information generation method and device based on information security
CN118449279A (en) Smart and safe electricity management system based on cloud service
CN118432871A (en) Distributed anomaly detection method, device, equipment and medium
CN116862118B (en) Carbon emission information generation method, device, electronic equipment and computer-readable medium
CN115907136B (en) Electric vehicle dispatching method, device, equipment and computer-readable medium
CN117236805A (en) Power equipment control method, device, electronic equipment and computer readable medium
CN117522169A (en) Wind power prediction method, device, equipment and medium
CN115577980B (en) Power equipment regulation and control method and device, electronic equipment and medium
CN116995784B (en) Ship energy storage and discharge control method and device, electronic equipment and readable medium
CN118228200B (en) Method, device and equipment for identifying abnormality of power equipment based on multimodal model
CN116307998B (en) Power equipment material transportation method, device, electronic equipment and computer medium
CN112364284B (en) Method and device for detecting abnormality based on context and related product
CN116757443B (en) Novel power line loss rate prediction method and device for power distribution network, electronic equipment and medium
CN117639042B (en) Loss-reduction power supply control method and device for medium-low voltage distribution network, electronic equipment and medium
CN115689210A (en) Water and electricity adjustment method, device, and electronic equipment based on water and electricity privacy data
CN117040135B (en) Power equipment power supply method, device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant