CN116894163B - Charging and discharging facility load prediction information generation method and device based on information security - Google Patents


Info

Publication number
CN116894163B
CN116894163B
Authority
CN
China
Prior art keywords
model file
model
local terminal
load prediction
edge server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311160057.3A
Other languages
Chinese (zh)
Other versions
CN116894163A (en)
Inventor
付昀夕
赵永生
刘泽三
戚艳
高紫婷
任博强
张文娟
张帅
文爱军
吴俊峰
张磐
闫晨阳
闫廷廷
刘振圻
宫晓辉
肖松宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Information and Telecommunication Co Ltd
State Grid Tianjin Electric Power Co Ltd
Original Assignee
State Grid Information and Telecommunication Co Ltd
State Grid Tianjin Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Information and Telecommunication Co Ltd, State Grid Tianjin Electric Power Co Ltd filed Critical State Grid Information and Telecommunication Co Ltd
Priority to CN202311160057.3A priority Critical patent/CN116894163B/en
Publication of CN116894163A publication Critical patent/CN116894163A/en
Application granted granted Critical
Publication of CN116894163B publication Critical patent/CN116894163B/en

Classifications

    • G06F18/214 — Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F21/602 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity; Protecting data; Providing cryptographic facilities or services
    • G06N3/0464 — Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology; Convolutional networks [CNN, ConvNet]
    • G06N3/08 — Computing arrangements based on biological models; Neural networks; Learning methods
    • H02J3/003 — Electricity; Circuit arrangements for AC mains or AC distribution networks; Load forecast, e.g. methods or systems for forecasting future load demand


Abstract

The embodiment of the invention discloses a method and device for generating charging and discharging facility load prediction information based on information security. One embodiment of the method comprises the following steps: collecting a target local data set sequence; performing initial model training on an initial charging load prediction information generation model with each target local data set; splitting the resulting trained model files into groups to obtain a trained model file group set; performing primary model file aggregation on each trained model file group to obtain a first aggregated model file; performing secondary model file aggregation on the resulting first aggregated model file set to obtain a second aggregated model file; and generating charging load prediction information according to real-time operation data and the charging load prediction information generation model corresponding to the second aggregated model file. This embodiment avoids the problem that, when a large number of new energy vehicles need to be charged at the same time, the power grid corresponding to the energy storage station becomes overloaded, causing unstable voltage and possible damage to the vehicle charging equipment.

Description

Charging and discharging facility load prediction information generation method and device based on information security
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a charge and discharge facility load prediction information generation method and device based on information security.
Background
With the development and popularization of new energy vehicles, improving the flexibility and convenience of their energy storage has become particularly important, given the rapidly growing number of such vehicles in service. At present, energy storage for new energy vehicles is generally handled as follows: energy storage stations are set up for new energy vehicles, and the vehicles are charged through the vehicle charging equipment installed in the stations.
However, the inventors found that the above approach often suffers from the following technical problems:
First, when a large number of new energy vehicles need to be charged at the same time, the power grid corresponding to the energy storage station becomes overloaded, causing unstable voltage and possible damage to the vehicle charging equipment.
Second, because the amounts of data in the local data sets of different local terminals vary widely, a local charging load prediction information generation model trained on a small local data set has poor prediction accuracy; when the predicted charging load differs greatly from the actual charging load, the vehicle charging equipment may be damaged.
Third, when the number of network layers of a conventional, linearly connected convolutional neural network is increased, earlier features may be forgotten, reducing the accuracy of the generated charging load prediction information; again, a large difference between the predicted and actual charging load may damage the vehicle charging equipment.
The information disclosed in this background section is only intended to enhance understanding of the background of the inventive concept and may therefore contain information that does not constitute prior art already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This summary is provided to introduce concepts in simplified form that are described further in the detailed description below. It is not intended to identify key or essential features of the claimed subject matter, nor to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose charge and discharge facility load prediction information generation methods and apparatuses based on information security to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method for generating charge and discharge facility load prediction information based on information security, the method comprising: collecting a target local data set sequence, wherein the target local data set sequence consists of the local data sets corresponding to at least one local terminal, the terminal type of a local terminal is the vehicle charging equipment type, and the target local data are operation data generated when the local terminal charges a vehicle; for each target local data set in the target local data set sequence, performing initial model training on an initial charging load prediction information generation model corresponding to an initial model file with the target local data set to obtain a trained model file, wherein the initial model file is a model file sent by the edge server associated with the local terminal corresponding to the target local data set, and the initial charging load prediction information generation model is a charging load prediction information generation model awaiting model training; splitting the resulting trained model file set into groups to obtain a trained model file group set, wherein the trained model file group set corresponds to at least one edge server; for each trained model file group in the trained model file group set, performing primary model file aggregation on the trained model file group through the edge server corresponding to the group to obtain a first aggregated model file; performing secondary model file aggregation on the resulting first aggregated model file set to obtain a second aggregated model file; and, for each local terminal of the at least one local terminal, generating charging load prediction information corresponding to the local terminal according to the real-time operation data corresponding to the local terminal and the charging load prediction information generation model corresponding to the second aggregated model file.
In a second aspect, some embodiments of the present disclosure provide an apparatus for generating charge and discharge facility load prediction information based on information security, the apparatus comprising: a collection unit configured to collect a target local data set sequence, wherein the target local data set sequence consists of the local data sets corresponding to at least one local terminal, the terminal type of a local terminal is the vehicle charging equipment type, and the target local data are operation data generated when the local terminal charges a vehicle; a model training unit configured to perform, for each target local data set in the target local data set sequence, initial model training on an initial charging load prediction information generation model corresponding to an initial model file with the target local data set to obtain a trained model file, wherein the initial model file is a model file sent by the edge server associated with the local terminal corresponding to the target local data set, and the initial charging load prediction information generation model is a charging load prediction information generation model awaiting model training; a splitting unit configured to split the resulting trained model file set into groups to obtain a trained model file group set, wherein the trained model file group set corresponds to at least one edge server; a first aggregation unit configured to perform, for each trained model file group in the trained model file group set, primary model file aggregation on the trained model file group through the edge server corresponding to the group to obtain a first aggregated model file; a second aggregation unit configured to perform secondary model file aggregation on the resulting first aggregated model file set to obtain a second aggregated model file; and a generation unit configured to generate, for each local terminal of the at least one local terminal, charging load prediction information corresponding to the local terminal according to the real-time operation data corresponding to the local terminal and the charging load prediction information generation model corresponding to the second aggregated model file.
In a third aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following beneficial effects: the method for generating charge and discharge facility load prediction information based on information security of some embodiments of the present disclosure avoids the problem that, when a large number of new energy vehicles need to be charged at the same time, the power grid corresponding to the energy storage station becomes overloaded, causing unstable voltage and possible damage to the vehicle charging equipment. Specifically, the cause of such damage is that a large number of new energy vehicles requiring charging at the same time sharply increases the load on the power grid corresponding to the energy storage station, which can overload the grid and damage the vehicle charging equipment. On this basis, in some embodiments of the present disclosure, a target local data set sequence is first collected, where the sequence consists of the local data sets corresponding to at least one local terminal, the terminal type of a local terminal is the vehicle charging equipment type, and the target local data are operation data generated when the local terminal charges a vehicle. Second, for each target local data set in the sequence, initial model training is performed on the initial charging load prediction information generation model corresponding to the initial model file, yielding a trained model file; the initial model file is sent by the edge server associated with the local terminal corresponding to the target local data set, and the initial charging load prediction information generation model is the charging load prediction information generation model awaiting training.
By using each target local data set in the sequence as training samples for the initial charging load prediction information generation model corresponding to the initial model file, a personalized local model corresponding to that data set can be generated. Moreover, for each initial charging load prediction information generation model, the entire training process involves only the corresponding target local data set, and the local data never leave their domain (that is, different target local data sets are isolated from one another), so the model is generated while protecting data privacy. Next, the resulting trained model file set is split into groups to obtain a trained model file group set corresponding to at least one edge server; in practice, different trained model files usually correspond to different edge servers, and model file aggregation must be performed by the corresponding edge server, so the trained model files must first be split into groups. Further, for each trained model file group in the group set, primary model file aggregation is performed through the edge server corresponding to the group, obtaining a first aggregated model file; aggregating model files indirectly achieves data fusion and sharing while the data remain mutually isolated, improving the robustness of the model while protecting data privacy. Then, secondary model file aggregation is performed on the resulting first aggregated model file set, obtaining a second aggregated model file; this layered aggregation reduces computing cost while preserving data privacy.
Finally, for each local terminal of the at least one local terminal, charging load prediction information corresponding to the local terminal is generated according to the real-time operation data corresponding to the local terminal and the charging load prediction information generation model corresponding to the second aggregated model file. In this way, accurate charging load prediction information can be generated and used for charging load adjustment, indirectly avoiding the voltage instability caused by grid overload when many new energy vehicles need charging at the same time, and thus reducing the probability of damage to the vehicle charging equipment.
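The pipeline described above — local training, edge-level (primary) aggregation, then cloud-level (secondary) aggregation — can be sketched as follows. This is only an illustration: the weighted-average rule, the group layout, and all names are assumptions, since the disclosure leaves the aggregation operator abstract at this point (a Bayesian variant is detailed later).

```python
# Hypothetical sketch of two-level model-file aggregation. A model "file"
# is modeled as a dict of named parameter vectors.
from typing import Dict, List

ModelFile = Dict[str, List[float]]

def aggregate(files: List[ModelFile], weights: List[float]) -> ModelFile:
    """Weighted average of each named parameter vector (FedAvg-style)."""
    total = sum(weights)
    return {
        name: [
            sum(w * f[name][j] for f, w in zip(files, weights)) / total
            for j in range(len(files[0][name]))
        ]
        for name in files[0]
    }

# Primary aggregation: each edge server averages the trained model files
# of its own local terminals (weights could reflect local data-set sizes).
edge_groups = {
    "edge-1": ([{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}], [1.0, 1.0]),
    "edge-2": ([{"w": [5.0, 6.0]}], [1.0]),
}
first_aggregated = {k: aggregate(fs, ws) for k, (fs, ws) in edge_groups.items()}

# Secondary aggregation: the cloud averages the edge-level results.
second_aggregated = aggregate(list(first_aggregated.values()), [2.0, 1.0])
print(second_aggregated)  # {'w': [3.0, 4.0]}
```

Only the aggregated parameters travel upward; the raw local data sets never leave their terminals, which is the privacy property the text emphasizes.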
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of an information security-based charge and discharge facility load prediction information generation method according to the present disclosure;
FIG. 2 is a schematic structural diagram of some embodiments of an information security-based charge and discharge facility load prediction information generation device according to the present disclosure;
Fig. 3 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring to fig. 1, a flow 100 of some embodiments of an information security-based charge and discharge facility load prediction information generation method according to the present disclosure is shown. The charge and discharge facility load prediction information generation method based on information security comprises the following steps:
Step 101, collecting a target local data set sequence.
In some embodiments, an execution body (e.g., a computing device) of the information security-based charge and discharge facility load prediction information generation method may collect a target local data set sequence. The target local data set sequence consists of the local data sets corresponding to at least one local terminal. The terminal type of a local terminal is the vehicle charging equipment type. The target local data are operation data generated when the local terminal charges a vehicle. In practice, the local terminal may be a vehicle charging device; for example, a charging pile for charging an electric vehicle and/or a hybrid vehicle.
As an example, the execution body may acquire, by means of a wired or wireless connection, the local data sets corresponding to the at least one local terminal as the target local data set sequence.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G/5G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections, and other wireless connections now known or developed in the future.
The computing device may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster of multiple servers or terminal devices, or as a single server or terminal device. When the computing device is software, it may be installed in the hardware devices listed above and implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here. It should be appreciated that there may be any number of computing devices, as required by the implementation.
Step 102, for each target local data set in the target local data set sequence, performing initial model training on the initial charging load prediction information generation model corresponding to the initial model file with the target local data set, to obtain a trained model file.
In some embodiments, for each target local data set in the target local data set sequence, the execution body may perform initial model training on the initial charging load prediction information generation model corresponding to the initial model file with the target local data set, obtaining a trained model file. The initial model file is a model file transmitted by the edge server associated with the local terminal corresponding to the target local data set. The initial charging load prediction information generation model is a charging load prediction information generation model awaiting model training. The charging load prediction information generation model may be a model for predicting real-time charging load information of the local terminal when it charges a vehicle. In practice, the charging load prediction information generation model may be a GAN (Generative Adversarial Network) model.
In practice, the execution body may use the target local data set as model training samples and perform unsupervised model training on the initial charging load prediction information generation model corresponding to the initial model file, obtaining a trained model file.
In some optional implementations of some embodiments, performing initial model training on the initial charging load prediction information generation model corresponding to the initial model file with the target local data set to obtain a trained model file may include the following steps:
First, performing data preprocessing on the target local data set to obtain a preprocessed target local data set.
In practice, the execution body may apply decimal scaling normalization to the target local data set to obtain the preprocessed target local data set.
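Decimal scaling normalization divides every value by a power of ten chosen so that all magnitudes fall below 1. A minimal sketch, with an illustrative function name and sample data (not taken from the patent):

```python
import math

def decimal_scaling(values):
    """Divide every value by 10**d, the smallest power of ten that maps
    all magnitudes into the open interval (-1, 1)."""
    peak = max(abs(v) for v in values)
    if peak == 0:
        return list(values)
    d = math.ceil(math.log10(peak))
    if peak / (10 ** d) >= 1:  # peak is an exact power of ten
        d += 1
    return [v / (10 ** d) for v in values]

print(decimal_scaling([120.0, -45.0, 7.5]))  # [0.12, -0.045, 0.0075]
```

The same scaling factor would be stored so that model outputs can later be mapped back to physical load units.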
Second, performing model training on the initial charging load prediction information generation model corresponding to the initial model file with the preprocessed target local data set, to generate a candidate model file.
The candidate model file is the model file of the initial charging load prediction information generation model whose number of completed training iterations equals a preset number of training iterations.
In practice, the execution body may perform unsupervised model training on the initial charging load prediction information generation model corresponding to the initial model file, using the preprocessed target local data set as training samples, and take the model file of the initial charging load prediction information generation model after the preset number of training iterations as the candidate model file.
Third, performing parameter encryption on the candidate model file to generate the trained model file.
In practice, the execution body may perform parameter encryption on the candidate model file with a symmetric encryption algorithm to generate the trained model file.
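The disclosure does not name the symmetric algorithm. Purely as a stand-in, the toy sketch below encrypts serialized parameter bytes with a SHA-256-based keystream, so the same shared key both encrypts and decrypts; a production system would instead use a vetted cipher such as AES-GCM, and every name and value here is an assumption:

```python
import hashlib
import struct

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand (key, nonce) into `length` pseudo-random bytes via SHA-256."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + struct.pack(">Q", counter)).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR the data against the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

key, nonce = b"shared-edge-key", b"model-file-001"
params = struct.pack(">3d", 0.12, -0.045, 0.0075)  # serialized model parameters
ciphertext = xor_crypt(key, nonce, params)
assert ciphertext != params
assert xor_crypt(key, nonce, ciphertext) == params  # same key recovers the file
```

The point being illustrated is only the symmetry: the edge server holding the same key can decrypt the uploaded model file before aggregation.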
Step 103, splitting the obtained trained model file set into groups to obtain a trained model file group set.
In some embodiments, the execution body may split the obtained trained model file set into groups, obtaining a trained model file group set. The trained model file group set corresponds to at least one edge server.
In practice, the execution body may split the obtained trained model file set into groups according to the terminal types of the local terminals, obtaining the trained model file group set.
In some optional implementations of some embodiments, splitting the obtained trained model file set into groups to obtain a trained model file group set may include the following steps:
First, determining the edge server information set corresponding to the target local data set sequence.
The number of pieces of edge server information in the edge server information set is greater than a preset number. In practice, the preset number is greater than or equal to 2.
Second, determining, for each piece of edge server information in the edge server information set, the at least one trained model file corresponding to it in the trained model file set as a trained model file group, thereby obtaining the trained model file group set.
The execution body may take the trained model files corresponding to the at least one local terminal that has a communication relationship with the edge server identified by the edge server information as the trained model file group.
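The grouping step amounts to partitioning trained model files by the edge server their local terminal communicates with. A sketch with a made-up communication map (all identifiers are illustrative):

```python
from collections import defaultdict

# Hypothetical communication map: which edge server each local terminal talks to.
terminal_to_edge = {"terminal-1": "edge-1", "terminal-2": "edge-1", "terminal-3": "edge-2"}
trained_files = {"terminal-1": "file-1", "terminal-2": "file-2", "terminal-3": "file-3"}

# Each trained model file joins the group of its terminal's edge server.
groups = defaultdict(list)
for terminal, model_file in trained_files.items():
    groups[terminal_to_edge[terminal]].append(model_file)

print(dict(groups))  # {'edge-1': ['file-1', 'file-2'], 'edge-2': ['file-3']}
```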
Step 104, for each trained model file group in the trained model file group set, performing primary model file aggregation on the trained model file group through the edge server corresponding to the group, to obtain a first aggregated model file.
In some embodiments, for each trained model file group in the trained model file group set, the execution body may perform primary model file aggregation on the trained model file group through the edge server corresponding to the group, obtaining a first aggregated model file.
In practice, the execution body may perform Bayesian aggregation on each trained model file group in the trained model file group set, obtaining the first aggregated model file.
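The disclosure does not spell out the Bayesian aggregation rule here (a variational formulation follows below). One standard instance, shown purely as an assumption, treats each terminal's trained parameter as a Gaussian estimate and fuses the group by precision weighting — the exact posterior mean for Gaussians with known variances:

```python
def precision_weighted_fuse(estimates):
    """Fuse (mean, variance) Gaussian estimates; returns fused (mean, variance).
    Precision (1/variance) weighting gives the exact Gaussian posterior mean."""
    total_precision = sum(1.0 / var for _, var in estimates)
    mean = sum(m / var for m, var in estimates) / total_precision
    return mean, 1.0 / total_precision

# Two terminals agree up to noise; the fused estimate splits the difference
# and is more certain (smaller variance) than either input.
fused_mean, fused_var = precision_weighted_fuse([(2.0, 1.0), (4.0, 1.0)])
print(fused_mean, fused_var)  # 3.0 0.5
```

Note how a terminal trained on more data (lower variance) automatically dominates the fused result, which matches the motivation given in the background section for handling unevenly sized local data sets.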
In some optional implementations of some embodiments, performing primary model file aggregation on the trained model file group through the edge server corresponding to the group to obtain a first aggregated model file may include the following steps:
First, determining a first Bayesian prior probability and a first sample distribution of the trained model file group according to the trained model file group.
Wherein, the first Bayesian prior probability is:

    p(ω_k, θ_1, …, θ_{m_k}) = p(ω_k) ∏_{i=1}^{m_k} p(θ_i | ω_k)

In practice, ω is the edge parameter corresponding to an edge server, k is a sequence number, and ω_k is the edge parameter corresponding to the k-th edge server. m_k is the number of local terminals corresponding to the trained model file group. θ is a local terminal parameter, i is a sequence number, and θ_i is the i-th local terminal parameter; θ_1, …, θ_{m_k} denotes the 1st through m_k-th local terminal parameters. p(ω_k, θ_1, …, θ_{m_k}) is the first Bayesian prior probability, p(ω_k | D) is the true posterior probability of the edge parameter corresponding to the k-th edge server, and p(x | y) denotes the probability of x occurring given y.
Wherein, the first sample distribution is:

    p(D_1, …, D_{m_k} | θ_1, …, θ_{m_k}) = ∏_{i=1}^{m_k} f(D_i; θ_i)

In practice, D is a target local data set, i is a sequence number, and D_i is the i-th target local data set. f is a conventional neural network model; for example, f may be a convolutional neural network model. p(D_1, …, D_{m_k} | θ_1, …, θ_{m_k}) is the first sample distribution.
Second, determining a first log-likelihood function of the edge parameter corresponding to the edge server.
Wherein, the first log-likelihood function is:

    L = log p(D) = KL( q(ω_k) ∥ p(ω_k | D) ) + ELBO(q)

where q(ω_k) is the approximate posterior probability of the edge parameter corresponding to the k-th edge server, L is the first log-likelihood function, log denotes the logarithmic function (the original uses base 10), KL(·∥·) is the KL divergence, which here measures the difference between the approximate posterior probability and the true posterior probability, KL(q(ω_k) ∥ p(ω_k | D)) is the KL divergence between q(ω_k) and p(ω_k | D), and ELBO(q) is the evidence lower bound (ELBO, evidence lower bound) of log p(D).
Third, performing approximation processing on the first Bayesian prior probability and the first sample distribution to generate a first approximate Bayesian posterior probability.
In practice, the execution body approximates the normalized product of the first Bayesian prior probability and the first sample distribution by variational inference, obtaining the first approximate Bayesian posterior probability.
In practice, the first approximate Bayesian posterior probability obtained by using variational reasoning is as follows:in practice, the +>Is a valuation symbol.Is a first approximate bayesian posterior probability. />Is->Is used for the variation parameters of the (a). />Is->Personal->Is a probability of approximation of a posterior. />Is->Is used for the variation parameters of the (a). />Is->Is used for the variation parameters of the (a). The variational parameter refers to an independent variable (where independent variable refers to +.>,/>,/>) The adaptive parameters which remain unchanged during transformation are applied to mathematical analysis. />Is->Personal->Is>Is->Approximate posterior probability of individual local terminal parameters. />Is->And->Approximate posterior probability between.Is->Personal->And->Approximate posterior probability between.
And step four, obtaining a first objective function based on the first log-likelihood function and the first approximate Bayesian posterior probability.
Wherein the first objective function includes: an edge server parameter and a local terminal parameter set, wherein the edge server parameter represents the edge parameter corresponding to the edge server, and each local terminal parameter represents a parameter of a local terminal corresponding to the edge server.
In practice, the execution subject uses a standard variational inference technique to obtain the first objective function.
Wherein the first objective function is the negative evidence lower bound:
F_1(λ) = KL( q(φ; λ_0) ∥ p(φ) ) + Σ_{k=1}^{K} E_{q(φ; λ_0)}[ KL( q(w_k; λ_k) ∥ p(w_k | φ) ) ] − Σ_{k=1}^{K} E_{q(w_k; λ_k)}[ log p(D_k | f(w_k)) ]
In practice, F_1(λ) is the first objective function. φ is the edge parameter corresponding to the edge server, w_k is the k-th local terminal parameter, D_k is the k-th target local data set, f is the neural network model, and λ = (λ_0, λ_1, …, λ_K) are the variational parameters of the approximate posteriors q(φ; λ_0) and q(w_k; λ_k). The first term is the KL divergence between the approximate posterior of the edge parameter and its prior p(φ); the second term is the expected KL divergence between the approximate posterior of each local terminal parameter and its conditional prior p(w_k | φ); the third term is the expected log-likelihood of each target local data set. Minimizing F_1(λ) is equivalent to maximizing the evidence lower bound (ELBO, evidence lower bound) of the first log-likelihood function, and thus to minimizing the KL divergence between the first approximate Bayesian posterior probability and the true posterior.
And fifthly, performing parameter optimization processing on the local terminal parameter set in response to determining that the edge server parameter converges, to obtain a first model file.
In practice, in response to determining that the edge server parameter converges, the execution subject performs parameter optimization processing on the local terminal parameter set by minimizing the first objective function, to obtain the first model file.
When the edge server parameter converges, its variational parameter λ_0 is held fixed, and minimizing the first objective function reduces to:
min_{λ_1, …, λ_K} Σ_{k=1}^{K} { E_{q(φ; λ_0)}[ KL( q(w_k; λ_k) ∥ p(w_k | φ) ) ] − E_{q(w_k; λ_k)}[ log p(D_k | f(w_k)) ] }
which decomposes into K independent sub-problems, one for each local terminal parameter w_k.
And step six, responding to determining that the local terminal parameters in the local terminal parameter set converge, performing parameter optimization processing on the edge server parameter to obtain a second model file.
In practice, in response to determining that the local terminal parameters in the local terminal parameter set converge, the execution subject performs parameter optimization processing on the edge server parameter by minimizing the first objective function, to obtain the second model file.
When the local terminal parameters in the local terminal parameter set converge, their variational parameters λ_1, …, λ_K are held fixed, and minimizing the first objective function reduces to:
min_{λ_0} KL( q(φ; λ_0) ∥ p(φ) ) + Σ_{k=1}^{K} E_{q(φ; λ_0)}[ KL( q(w_k; λ_k) ∥ p(w_k | φ) ) ]
since the expected log-likelihood terms of the first objective function do not depend on λ_0. This minimization is equivalent to maximizing the corresponding evidence lower bound (ELBO, evidence lower bound) with respect to λ_0.
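The fifth and sixth steps together form a block-coordinate (alternating) optimization: one parameter block is held fixed while the objective is minimized over the other, and the two minimizations are repeated until both blocks converge. The sketch below shows the same alternation pattern on a hypothetical quadratic surrogate objective, not the variational objective of the disclosure; all names and values are illustrative assumptions.

```python
def alternating_minimize(local_data_means, iters=100):
    """Block-coordinate descent on the surrogate objective
    F(phi, w) = sum_k (w_k - phi)^2 + sum_k (w_k - d_k)^2,
    mimicking the edge/local alternation: each pass fixes one
    parameter block and minimizes over the other in closed form."""
    phi = 0.0
    w = [0.0] * len(local_data_means)
    for _ in range(iters):
        # Edge parameter fixed -> optimize each local terminal parameter.
        w = [(phi + d) / 2.0 for d in local_data_means]
        # Local terminal parameters fixed -> optimize the edge parameter.
        phi = sum(w) / len(w)
    return phi, w

phi, w = alternating_minimize([1.0, 3.0, 5.0])
# The iteration is a contraction, so the fixed point is reached quickly:
# the edge parameter settles at the mean of the local data means.
assert abs(phi - 3.0) < 1e-6
```

In the disclosure, the same alternation is applied to the variational parameters of the first objective function, with each local sub-problem solvable independently at its own terminal.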
Seventh, determining the first model file and the second model file as the first aggregated model file.
And 105, performing secondary model file aggregation on the obtained first aggregated model file set to obtain a second aggregated model file.
In some embodiments, the executing body may perform secondary model file aggregation on the obtained first aggregated model file set to obtain a second aggregated model file.
In practice, the execution body performs bayesian aggregation on the first aggregated model file set to obtain a second aggregated model file.
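The overall two-level flow — each edge server aggregates the trained model files of its own local terminals, and the cloud aggregation server then aggregates the edge results — can be sketched with plain parameter averaging standing in for the Bayesian aggregation. The structure, model sizes, and values here are hypothetical:

```python
def aggregate(model_files):
    """Average a list of model files, each given as a list of parameters."""
    n = len(model_files)
    return [sum(params) / n for params in zip(*model_files)]

# Trained model file groups, one group per edge server (toy 2-parameter models).
trained_groups = [
    [[1.0, 2.0], [3.0, 4.0]],               # local terminals under edge server 1
    [[5.0, 6.0], [7.0, 8.0], [9.0, 10.0]],  # local terminals under edge server 2
]
# First aggregation: each edge server aggregates its own group.
first_aggregated = [aggregate(group) for group in trained_groups]
# Second aggregation: the cloud aggregation server aggregates the edge results.
second_aggregated = aggregate(first_aggregated)
assert first_aggregated == [[2.0, 3.0], [7.0, 8.0]]
assert second_aggregated == [4.5, 5.5]
```

A practical variant would weight each model by the size of its underlying data set; unweighted averaging is used here only to keep the two-stage structure visible, and raw local data never leaves its terminal at either stage.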
In some optional implementations of some embodiments, performing the secondary model file aggregation on the obtained first aggregated model file set to obtain a second aggregated model file may include the following steps:
the first step is to determine a second Bayesian prior probability and a second sample distribution of the first aggregated model file set according to the first aggregated model file set.
Wherein, the second Bayesian prior probability is:
p(ψ, φ_{1:M}) = p(ψ) · ∏_{m=1}^{M} p(φ_m | ψ)
In practice, ψ is the cloud aggregation server parameter, i.e., the cloud parameter of the cloud aggregation server connected to each edge server. M is the number of edge servers. m is a sequence number. φ_m is the edge parameter corresponding to the m-th edge server, and φ_{1:M} represents the 1st to M-th edge parameters, i.e., {φ_1, φ_2, …, φ_M}. p(ψ, φ_{1:M}) is the second Bayesian prior probability, and p(φ_m | ψ) is the probability of φ_m occurring in the presence of ψ.
Wherein the second sample distribution is:
p(D̃ | φ_{1:M}) = ∏_{m=1}^{M} p(D̃_m | f(φ_m))
In practice, D̃_m is the union of the target local data sets of the local terminals connected to the m-th edge server, and D̃ = {D̃_1, D̃_2, …, D̃_M}. f is a conventional neural network model; for example, f may be a convolutional neural network model. p(D̃ | φ_{1:M}) is the second sample distribution.
And a second step of determining a second log-likelihood function of the edge server parameter corresponding to each edge server in the at least one edge server, to obtain a second log-likelihood function set.
Wherein the second log-likelihood function satisfies the decomposition:
log p(D̃) = ELBO(q) + KL( q(ψ) ∥ p(ψ | D̃) )
In practice, log p(D̃) is the second log-likelihood function. q(ψ) is the approximate posterior probability of the cloud aggregation server parameter, and p(ψ | D̃) is the true posterior probability of the cloud aggregation server parameter. KL( q(ψ) ∥ p(ψ | D̃) ) represents the KL divergence between q(ψ) and p(ψ | D̃), and ELBO(q) is the evidence lower bound (ELBO, evidence lower bound) of log p(D̃).
And thirdly, generating a second approximate Bayesian posterior probability based on the second Bayesian prior probability and the second sample distribution.
In practice, the execution subject uses variational inference to approximate the posterior that is proportional to the product of the second Bayesian prior probability and the second sample distribution, so as to obtain a second approximate Bayesian posterior probability.
Therefore, the second approximate Bayesian posterior probability obtained by using variational inference takes the factorized form:
q̂(ψ, φ_{1:M}; ν) ≈ q(ψ; ν_0) · ∏_{m=1}^{M} q(φ_m; ν_m)
In practice, q̂(ψ, φ_{1:M}; ν) is the second approximate Bayesian posterior probability. The variational parameter ν is composed of ν_0 and ν_1, …, ν_M, where ν_0 is the variational parameter of the cloud aggregation server parameter ψ, and ν_m is the variational parameter of the m-th edge parameter φ_m. q(ψ; ν_0) is the approximate posterior probability of the cloud aggregation server parameter, and q(φ_m; ν_m) is the approximate posterior probability of the m-th edge parameter.
Fourth, obtaining a second objective function according to the second log-likelihood function set and the second approximate Bayesian posterior probability, wherein the second objective function includes: a cloud aggregation server parameter and a target edge server parameter set, the cloud aggregation server parameter represents the cloud parameter of the cloud aggregation server corresponding to the edge servers, and the target edge server parameter set represents the edge parameter set of the at least one edge server.
In practice, the execution subject uses a standard variational inference technique to obtain the second objective function.
Wherein the second objective function is the negative evidence lower bound:
F_2(ν) = KL( q(ψ; ν_0) ∥ p(ψ) ) + Σ_{m=1}^{M} E_{q(ψ; ν_0)}[ KL( q(φ_m; ν_m) ∥ p(φ_m | ψ) ) ] − Σ_{m=1}^{M} E_{q(φ_m; ν_m)}[ log p(D̃_m | f(φ_m)) ]
In practice, F_2(ν) is the second objective function. ψ is the cloud aggregation server parameter, φ_m is the edge parameter corresponding to the m-th edge server, D̃_m is the union of the target local data sets of the local terminals connected to the m-th edge server, f is the neural network model, and ν = (ν_0, ν_1, …, ν_M) are the variational parameters of the approximate posteriors q(ψ; ν_0) and q(φ_m; ν_m). The first term is the KL divergence between the approximate posterior of the cloud aggregation server parameter and its prior p(ψ); the second term is the expected KL divergence between the approximate posterior of each edge parameter and its conditional prior p(φ_m | ψ); the third term is the expected log-likelihood of the data under each edge server. Minimizing F_2(ν) is equivalent to maximizing the evidence lower bound (ELBO, evidence lower bound) of the second log-likelihood function.
And fifthly, performing parameter optimization processing on the target edge server parameter set in response to determining that the cloud aggregation server parameter converges, to obtain a third model file.
In practice, in response to determining that the cloud aggregation server parameter converges, the execution subject performs parameter optimization processing on the target edge server parameter set by minimizing the second objective function, so as to obtain the third model file.
Wherein, when the cloud aggregation server parameter converges, its variational parameter ν_0 is held fixed, and minimizing the second objective function reduces to:
min_{ν_1, …, ν_M} Σ_{m=1}^{M} { E_{q(ψ; ν_0)}[ KL( q(φ_m; ν_m) ∥ p(φ_m | ψ) ) ] − E_{q(φ_m; ν_m)}[ log p(D̃_m | f(φ_m)) ] }
which decomposes into M independent sub-problems, one for each edge server.
And step six, responding to determining that the target edge server parameters in the target edge server parameter set converge, performing parameter optimization processing on the cloud aggregation server parameter to obtain a fourth model file.
In practice, in response to determining that the target edge server parameters in the target edge server parameter set converge, the execution subject performs parameter optimization processing on the cloud aggregation server parameter by minimizing the second objective function, so as to obtain the fourth model file.
Wherein, when the target edge server parameters converge, their variational parameters ν_1, …, ν_M are held fixed, and minimizing the second objective function reduces to:
min_{ν_0} KL( q(ψ; ν_0) ∥ p(ψ) ) + Σ_{m=1}^{M} E_{q(ψ; ν_0)}[ KL( q(φ_m; ν_m) ∥ p(φ_m | ψ) ) ]
since the expected log-likelihood terms of the second objective function do not depend on ν_0.
And seventh, determining the third model file and the fourth model file as the second aggregated model file.
As an invention point of the present disclosure, the content of "some optional implementations of some embodiments" in steps 104 to 105 solves the second technical problem mentioned in the background art, that is, "because the data amount difference of the local data sets corresponding to the local terminals is large, when the data amount of a local data set is small, the prediction accuracy of the local charging load prediction information generation model obtained based on training on that local data set is poor, and when there is a large difference between the predicted charging load prediction information and the actual charging load, damage may be caused to the vehicle charging equipment". In practice, since the data amounts of the local data sets corresponding to the local terminals differ greatly, and the number of training samples determines the prediction accuracy of the local charging load prediction information generation model to a certain extent, when the data amount of a local data set is small, the prediction accuracy of the local charging load prediction information generation model obtained by training on that local data set is poor, and when a large difference exists between the predicted charging load prediction information and the actual charging load, damage to the vehicle charging equipment may be caused. Based on this, first, according to the present disclosure, for each trained model file group in the trained model file group set, model file aggregation is performed once on the trained model file group through the edge server corresponding to the trained model file group, so as to obtain a first aggregated model file.
The training model file group corresponding to at least one local terminal of the same edge server is subjected to model aggregation, so that the fusion and sharing among the local data groups corresponding to the local terminals are indirectly realized while the data privacy is protected, and the robustness of the model is improved. Finally, the present disclosure performs a second model file aggregation on the obtained first aggregated model file set to obtain a second aggregated model file. Model aggregation is performed in a layered mode, and the model files after training corresponding to each local terminal are indirectly integrated under the condition of data isolation. In summary, through model aggregation performed twice, under the condition of protecting data privacy, the local data sets corresponding to each local terminal are subjected to data fusion and indirect sharing, so that the accuracy of the charging load prediction information is improved, the problem that the vehicle charging equipment is possibly damaged due to large difference between the predicted charging load prediction information and the actual charging load is avoided, and the equipment safety of the vehicle charging equipment is ensured.
And 106, for each local terminal in the at least one local terminal, generating the charging load prediction information corresponding to the local terminal according to the real-time operation data corresponding to the local terminal and the charging load prediction information generation model corresponding to the second aggregated model file.
In some embodiments, the executing body may generate, for each of the at least one local terminal, the charging load prediction information corresponding to the local terminal according to the real-time operation data corresponding to the local terminal and the charging load prediction information generation model corresponding to the second aggregated model file. The real-time operation data is the operation data generated by the local terminal while it operates (for example, during vehicle charging). In practice, the charging load prediction information may characterize a predicted charging load of the local terminal over a future period of time.
In practice, the executing body may input the real-time operation data corresponding to the local terminal into the charging load prediction information generating model corresponding to the second aggregated model file, so as to generate the charging load prediction information corresponding to the local terminal.
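At inference time this amounts to a single forward pass through the model restored from the second aggregated model file. The sketch below uses a linear predictor standing in for the full charging load prediction information generation model; the parameter values and feature names are hypothetical:

```python
def predict_charging_load(model_file, real_time_data):
    """Hypothetical inference step: the model file holds the (weights, bias)
    of a linear predictor, and real_time_data is a feature vector."""
    weights, bias = model_file
    return sum(w * x for w, x in zip(weights, real_time_data)) + bias

second_aggregated_model_file = ([0.5, 0.2, 0.1], 1.0)  # toy parameters
real_time_operation_data = [10.0, 4.0, 2.0]            # e.g. voltage, current, temperature
load_forecast = predict_charging_load(second_aggregated_model_file,
                                      real_time_operation_data)
assert abs(load_forecast - 7.0) < 1e-9
```

The same aggregated model file is distributed to every local terminal, so each terminal runs inference locally on its own real-time operation data.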
Optionally, the charging load prediction information generating model corresponding to the second aggregated model file may include: a data feature extraction model and an information prediction model.
In some optional implementations of some embodiments, the executing body may generate the charging load prediction information corresponding to the local terminal according to the real-time running data corresponding to the local terminal and the charging load prediction information generation model corresponding to the second aggregated model file, and the generating may include the following steps:
and the first step is to input the real-time operation data corresponding to the local terminal into the data feature extraction model so as to generate feature extracted data information.
The data feature extraction model may include K serially connected, attention-mechanism-based convolution residual blocks. In practice, a convolution residual block consists of 2 convolutional layers, 1 residual connection, and 1 Content-Based Attention mechanism (CBA) layer.
In practice, the execution subject may perform feature extraction on the real-time operation data according to the data feature extraction model to generate feature extracted data information.
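A minimal sketch of one possible realization of such a block — two convolution layers, a content-based attention re-weighting, and a residual connection, applied K times in series on a 1-D signal. The kernel sizes, the ReLU placement, and the exact attention formula are assumptions, since the disclosure does not fix them:

```python
import math

def conv1d(x, kernel):
    """'Same'-padded 1-D convolution (zero padding, odd kernel length)."""
    k = len(kernel)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * xp[i + j] for j in range(k)) for i in range(len(x))]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_residual_block(x, kernel1, kernel2):
    """Hypothetical convolution residual block: two convolution layers,
    a content-based attention layer, and a residual (skip) connection."""
    h = conv1d(x, kernel1)
    h = [max(0.0, v) for v in h]                      # ReLU non-linearity
    h = conv1d(h, kernel2)
    weights = softmax(h)                              # content-based attention scores
    h = [w * v * len(h) for w, v in zip(weights, h)]  # re-weight, preserving scale
    return [a + b for a, b in zip(x, h)]              # residual connection

def feature_extractor(x, blocks):
    """K serially connected attention-based convolution residual blocks."""
    for kernel1, kernel2 in blocks:
        x = attention_residual_block(x, kernel1, kernel2)
    return x

features = feature_extractor([1.0, 2.0, 3.0, 2.0, 1.0],
                             blocks=[([0.2, 0.5, 0.2], [1.0, 0.0, 0.0])] * 2)
assert len(features) == 5   # 'same' padding keeps the input length
```

The residual connection lets each block learn only a correction to its input, which is what mitigates the gradient-vanishing and feature-forgetting issues discussed below for deep linearly connected networks.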
And secondly, performing data cleaning processing on the data information after the feature extraction to obtain cleaned data information.
In practice, the execution subject may perform outlier rejection processing on the feature-extracted data information to generate cleaned data information.
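One plausible form of the outlier rejection processing is a z-score filter; the threshold and the rule itself are assumptions, since the disclosure does not specify the cleaning criterion:

```python
def clean_outliers(values, z_threshold=3.0):
    """Remove values whose z-score exceeds the threshold
    (one plausible outlier-rejection rule; the disclosure does not
    fix a specific criterion)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    if std == 0.0:
        return list(values)
    return [v for v in values if abs(v - mean) / std <= z_threshold]

# Twenty normal readings and one spurious spike:
readings = [1.0] * 20 + [50.0]
cleaned = clean_outliers(readings)
assert 50.0 not in cleaned and len(cleaned) == 20
```

In practice the filter would be applied per feature channel, and the threshold tuned to the noise level of the charging equipment's sensors.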
And thirdly, carrying out information prediction on the cleaned data information through the information prediction model to obtain charging load prediction information corresponding to the local terminal.
In practice, the execution subject may input the cleaned data information into the information prediction model (for example, a fully-connected layer) to generate the charging load prediction information corresponding to the local terminal.
The content of the foregoing "in some alternative implementations of some embodiments" is taken as an invention point of the present disclosure, and solves the third technical problem mentioned in the background art, namely, "when the number of network layers of a traditional linearly connected convolutional neural network is deepened, the problem of feature forgetting may be caused, which affects the accuracy of the generated charging load prediction information, and when there is a large difference between the predicted charging load prediction information and the actual charging load, damage may be caused to the vehicle charging equipment". In practice, when the number of network layers of a traditional linearly connected convolutional neural network is deepened, gradient vanishing easily occurs and the problem of feature forgetting may be caused, so that the accuracy of the generated charging load prediction information is affected; when a large difference exists between the predicted charging load prediction information and the actual charging load, damage to the vehicle charging equipment may be caused. Based on this, the present disclosure designs a data feature extraction model and an information prediction model. First, the data feature extraction model is used to perform feature extraction on the real-time operation data, and the residual modules included in the data feature extraction model can avoid the problem of performance degradation caused by network deepening and alleviate the problem of gradient vanishing. Then, data cleaning processing is performed on the feature-extracted data information to obtain the cleaned data information. Cleaning the data can reduce computational consumption and improve data quality.
In addition, the cleaned data information is input into the information prediction model to generate charging load prediction information corresponding to the local terminal. By the method, the problem that characteristics are forgotten due to the fact that the number of network layers of a traditional linear connected convolutional neural network is deepened is avoided, and accuracy of charging load prediction information is improved.
The above embodiments of the present disclosure have the following advantageous effects: according to the charge and discharge facility load prediction information generation method based on information safety, which is disclosed by the embodiment of the invention, the problem that when a large number of new energy automobiles to be stored exist at the same time, the corresponding power grid of the energy storage station is overloaded, so that unstable voltage is caused, and the vehicle charging equipment is possibly damaged is avoided. Specifically, the cause of the damage to the vehicle charging apparatus is: meanwhile, a large number of new energy automobiles to be stored can cause rapid increase of the power grid load corresponding to the energy storage station, overload of the power grid corresponding to the energy storage station can be caused, and damage to vehicle charging equipment can be caused. Based on this, according to some embodiments of the present disclosure, a target local data set sequence is collected, where the target local data set sequence is a local data set corresponding to at least one local terminal, the terminal type of the local terminal is a vehicle charging device type, and the target local data is operation data generated by the local terminal when the vehicle is charged. And secondly, carrying out initial model training on an initial charge load prediction information generation model corresponding to an initial model file through the target local data sets for each target local data set in the target local data set sequence to obtain a trained model file, wherein the initial model file is a model file sent by an edge server associated with a local terminal corresponding to the target local data set, and the initial charge load prediction information generation model is a charge load prediction information generation model to be subjected to model training. 
By taking each target local data set in the target local data set sequence as a training sample and carrying out initial model training on the initial charge load prediction information generation model corresponding to the initial model file, a personalized local model corresponding to the target local data set can be generated. Meanwhile, for each initial charge load prediction information generation model, the whole model training process only involves the target local data sets, and the local data does not leave its domain (namely, different target local data sets are isolated from each other), so that model generation under the premise of protecting data privacy is realized. And then, model file splitting is performed on the obtained trained model files to obtain a trained model file group set, wherein the trained model file group set corresponds to at least one edge server. In practice, different trained model files often correspond to different edge servers, and model file aggregation needs to be performed by the corresponding edge servers, so model file splitting needs to be performed on the trained model files. Further, for each trained model file group in the trained model file group set, model file aggregation is performed once on the trained model file group through the edge server corresponding to the trained model file group to obtain a first aggregated model file. By means of model file aggregation, data fusion and sharing are indirectly achieved under the condition that the data are mutually isolated, improving the robustness of the model while protecting the privacy of the data. In addition, secondary model file aggregation is performed on the obtained first aggregated model file set to obtain a second aggregated model file. Through layered aggregation, the computing cost can be reduced while protecting data privacy.
And finally, for each local terminal in the at least one local terminal, the charging load prediction information corresponding to the local terminal is generated according to the real-time operation data corresponding to the local terminal and the charging load prediction information generation model corresponding to the second aggregated model file. By means of the method, accurate charging load prediction information can be generated, and charging load adjustment can be carried out in combination with the charging load prediction information; this indirectly avoids the problem of unstable voltage caused by overload of the power grid corresponding to the energy storage station when a large number of new energy automobiles to be stored exist at the same time, thereby reducing the probability of damage to the vehicle charging equipment.
With further reference to fig. 2, as an implementation of the method shown in the above-described figures, the present disclosure provides some embodiments of an information-security-based charge and discharge facility load prediction information generation apparatus, which corresponds to those shown in fig. 1, and which is particularly applicable to various electronic devices.
As shown in fig. 2, the charge and discharge facility load prediction information generating device 200 based on information security according to some embodiments includes: an acquisition unit 201, a model training unit 202, a splitting unit 203, a first aggregation unit 204, a second aggregation unit 205 and a generation unit 206. The collecting unit 201 is configured to collect a target local data set sequence, where the target local data set sequence is a local data set corresponding to at least one local terminal, the terminal type of the local terminal is a vehicle charging equipment type, and the target local data is operation data generated when the local terminal charges a vehicle; a model training unit 202 configured to perform initial model training on an initial charge load prediction information generation model corresponding to an initial model file, for each target local data set in the target local data set sequence, to obtain a trained model file, where the initial model file is a model file sent by an edge server associated with a local terminal corresponding to the target local data set, and the initial charge load prediction information generation model is a charge load prediction information generation model to be subjected to model training; a splitting unit 203, configured to split the model file from the obtained training model file set to obtain a training model file set, where the training model file set corresponds to at least one edge server; a first aggregation unit 204, configured to perform, for each trained model file group in the trained model file group set, model file aggregation on the trained model file group once through an edge server corresponding to the trained model file group, to obtain a first aggregated model file; a second aggregation unit 205 configured to perform a second model file aggregation on the obtained first aggregated model file set to obtain a second aggregated model file; and a generating unit 206 
configured to generate, for each of the at least one local terminal, charging load prediction information corresponding to the local terminal according to the real-time operation data corresponding to the local terminal and the charging load prediction information generation model corresponding to the second aggregated model file.
It will be appreciated that the elements described in the information security-based charge and discharge facility load prediction information generation apparatus 200 correspond to the respective steps in the method described with reference to fig. 1. Thus, the operations, features, and advantages described above with respect to the method are equally applicable to the information security-based charge and discharge facility load prediction information generating device 200 and the units contained therein, and are not described herein.
Referring now to fig. 3, a schematic diagram of an electronic device (e.g., computing device) 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various suitable actions and processes in accordance with programs stored in a read-only memory 302 or programs loaded from a storage 308 into a random access memory 303. In the random access memory 303, various programs and data necessary for the operation of the electronic device 300 are also stored. The processing means 301, the read only memory 302 and the random access memory 303 are connected to each other by a bus 304. An input/output interface 305 is also connected to the bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device 300 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 3 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 309, or from storage device 308, or from read only memory 302. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that, the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (Hyper Text Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device, or may exist separately without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: collect a target sample local data set sequence, wherein the target sample local data set sequence comprises local data sets corresponding to at least one local terminal, the terminal type of the local terminal is a vehicle charging equipment type, and the target sample local data is operation data generated when the local terminal charges a vehicle; for each target sample local data set in the target sample local data set sequence, perform initial model training on an initial charge load prediction information generation model corresponding to an initial model file through the target sample local data set to obtain a trained model file, wherein the initial model file is a model file sent by an edge server associated with the local terminal corresponding to the target sample local data set, and the initial charge load prediction information generation model is a charge load prediction information generation model yet to undergo model training; split the obtained trained model file set into model file groups to obtain a trained model file group set, wherein the trained model file group set corresponds to at least one edge server; for each trained model file group in the trained model file group set, perform primary model file aggregation on the trained model file group through the edge server corresponding to the trained model file group to obtain a first aggregated model file; perform secondary model file aggregation on the obtained first aggregated model file set to obtain a second aggregated model file; and for each local terminal in the at least one local terminal, generate charging load prediction information corresponding to the local terminal according to real-time operation data corresponding to the local terminal and a charging load prediction information generation model corresponding to the second aggregated model file.
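The two-level aggregation flow described above — local training on each charging terminal, primary aggregation at the associated edge server, then secondary aggregation into a global model file — can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the function names (`hierarchical_fl_round`, `train_local`, `average`) and the simple parameter averaging are placeholders for the patent's actual training and aggregation procedures.

```python
from collections import defaultdict

def hierarchical_fl_round(local_datasets, edge_of_terminal, init_params, train_local, average):
    """One round of the two-level (edge, then cloud) aggregation flow.

    local_datasets   : {terminal_id: dataset} collected from charging terminals
    edge_of_terminal : {terminal_id: edge_server_id} association map
    init_params      : initial model file (parameter vector) sent by the edge servers
    train_local      : callable(dataset, params) -> trained params (runs on the terminal)
    average          : callable([params, ...]) -> aggregated params
    """
    # 1. Each terminal trains on its own data; raw operation data never leaves the device.
    trained = {tid: train_local(ds, init_params) for tid, ds in local_datasets.items()}

    # 2. Split the trained model files into groups, one group per associated edge server.
    groups = defaultdict(list)
    for tid, params in trained.items():
        groups[edge_of_terminal[tid]].append(params)

    # 3. Primary aggregation at each edge server over its own group.
    first_level = {edge: average(ps) for edge, ps in groups.items()}

    # 4. Secondary aggregation over the edge results yields the global model file.
    return average(list(first_level.values()))
```

With a plain element-wise average such as `lambda vs: [sum(x) / len(x) for x in zip(*vs)]`, the round reduces to hierarchical federated averaging; the patent replaces this with the Bayesian aggregation of claim 1.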
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor comprising an acquisition unit, a model training unit, a splitting unit, a first aggregation unit, a second aggregation unit, and a generation unit. The names of these units do not, in some cases, limit the units themselves; for example, the acquisition unit may also be described as "a unit for collecting a target sample local data set sequence".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (6)

1. An information security-based charging and discharging facility load prediction information generation method, comprising:
collecting a target sample local data set sequence, wherein the target sample local data set sequence comprises local data sets corresponding to at least one local terminal, the terminal type of the local terminal is a vehicle charging equipment type, and the target sample local data is operation data generated when the local terminal charges a vehicle;
for each target sample local data set in the target sample local data set sequence, performing initial model training on an initial charge load prediction information generation model corresponding to an initial model file through the target sample local data set to obtain a trained model file, wherein the initial model file is a model file sent by an edge server associated with a local terminal corresponding to the target sample local data set, and the initial charge load prediction information generation model is a charge load prediction information generation model to be subjected to model training;
splitting the obtained trained model file set into model file groups to obtain a trained model file group set, wherein the trained model file group set corresponds to at least one edge server;
for each trained model file group in the trained model file group set, performing primary model file aggregation on the trained model file group through an edge server corresponding to the trained model file group to obtain a first aggregated model file;
performing secondary model file aggregation on the obtained first aggregated model file set to obtain a second aggregated model file;
for each local terminal in the at least one local terminal, generating charging load prediction information corresponding to the local terminal according to real-time operation data corresponding to the local terminal and a charging load prediction information generation model corresponding to the second aggregated model file, wherein the performing primary model file aggregation on the trained model file group through the edge server corresponding to the trained model file group to obtain a first aggregated model file comprises:
determining a first Bayesian prior probability and a first sample distribution of the trained model file group according to the trained model file group;
determining a first log likelihood function of an edge parameter corresponding to the edge server;
performing approximate processing on the first Bayesian prior probability and the first sample distribution to generate a first approximate Bayesian posterior probability;
obtaining a first objective function based on the first log-likelihood function and the first approximate Bayesian posterior probability, wherein the first objective function comprises an edge server parameter and a local terminal parameter set, the edge server parameter represents an edge parameter corresponding to the edge server, and the local terminal parameter represents a parameter of a local terminal corresponding to the edge server;
in response to determining that the edge server parameter converges, performing parameter optimization processing on the local terminal parameter set to obtain a first model file;
in response to determining that the local terminal parameters in the local terminal parameter set converge, performing parameter optimization processing on the edge server parameter to obtain a second model file; and
and determining the first model file and the second model file as the first aggregated model file.
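The alternating convergence steps at the end of claim 1 resemble coordinate-wise optimization: the edge-server parameter is driven to convergence first, the local-terminal parameters are then optimized (yielding the first model file), and the edge parameter is re-optimized once the local parameters converge (yielding the second model file). Below is a minimal gradient-descent sketch of that alternation, assuming the caller supplies gradients of a shared objective; the claim's actual objective is built from a log-likelihood and an approximate Bayesian posterior, which is not reproduced here, and the learning rate is chosen only for this small example.

```python
def alternating_optimize(grad_edge, grad_local, edge_p, local_ps, data,
                         lr=0.2, tol=1e-10, steps=10000):
    """Coordinate-descent sketch of the claimed primary aggregation.

    grad_edge  : callable(edge_p, local_ps) -> gradient w.r.t. the edge parameter
    grad_local : callable(edge_p, local_p, datum) -> gradient w.r.t. one local parameter
    Returns (local_ps, edge_p) standing in for the first and second model files.
    """
    # Phase 1: optimize the edge-server parameter until it converges,
    # holding the local-terminal parameters fixed.
    for _ in range(steps):
        g = grad_edge(edge_p, local_ps)
        edge_p -= lr * g
        if abs(g) < tol:
            break
    # Phase 2 ("first model file"): the edge parameter has converged, so run
    # parameter optimization on the local terminal parameter set.
    for _ in range(steps):
        gs = [grad_local(edge_p, l, d) for l, d in zip(local_ps, data)]
        local_ps = [l - lr * g for l, g in zip(local_ps, gs)]
        if max(abs(g) for g in gs) < tol:
            break
    # Phase 3 ("second model file"): the local parameters have converged, so
    # re-optimize the edge-server parameter against them.
    for _ in range(steps):
        g = grad_edge(edge_p, local_ps)
        edge_p -= lr * g
        if abs(g) < tol:
            break
    return local_ps, edge_p
```

For the illustrative objective sum((e - l)^2) + sum((l - d)^2), the edge parameter settles at the mean of the local parameters, and each local parameter at the midpoint of the edge parameter and its datum.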
2. The method according to claim 1, wherein the performing initial model training on the initial charge load prediction information generation model corresponding to the initial model file through the target sample local data set to obtain a trained model file comprises:
performing data preprocessing on the target sample local data set to obtain a preprocessed target sample local data set;
performing model training on the initial charge load prediction information generation model corresponding to the initial model file according to the preprocessed target sample local data set to generate a candidate model file, wherein the candidate model file is a model file corresponding to the initial charge load prediction information generation model whose number of training iterations is consistent with a preset number of training iterations; and
and carrying out parameter encryption on the candidate model file to generate the trained model file.
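Claim 2's "parameter encryption" step is not specified further in this excerpt; one common choice in federated-learning settings is additive masking in the style of secure aggregation, sketched below purely as an illustration. The mask scheme, seed handling, and function names are assumptions, not the claimed method; a real deployment would derive masks from pairwise keys rather than shared seeds.

```python
import random

def mask_parameters(params, seed):
    """Hide a terminal's trained parameters behind a seeded additive mask.

    An individual masked upload reveals nothing about `params`; a party that
    knows the seeds can cancel the masks only in the aggregate sum.
    """
    rng = random.Random(seed)
    return [p + rng.uniform(-1.0, 1.0) for p in params]

def unmask_sum(masked_sets, seeds):
    """Sum masked uploads, then subtract the reproduced masks to recover the true sum."""
    n = len(masked_sets[0])
    total = [sum(ms[i] for ms in masked_sets) for i in range(n)]
    for seed in seeds:
        rng = random.Random(seed)  # replay the same mask sequence
        masks = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        total = [t - m for t, m in zip(total, masks)]
    return total
```

The aggregator thus learns only the sum of the candidate model files, which is all that averaging-style aggregation needs.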
3. The method according to claim 2, wherein the splitting the obtained trained model file set into model file groups to obtain a trained model file group set comprises:
determining an edge server information set corresponding to the target sample local data set sequence, wherein the number of pieces of edge server information in the edge server information set is greater than a preset value; and
determining at least one trained model file corresponding to each piece of edge server information in the trained model file set as a trained model file group, thereby obtaining the trained model file group set.
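The grouping in claim 3 — one trained model file group per piece of edge server information, after checking that the number of edge servers exceeds a preset value — can be sketched as follows. The mapping `edge_info_of_file` and the threshold value are illustrative assumptions.

```python
from collections import defaultdict

PRESET_VALUE = 1  # illustrative; the claim only requires the count to exceed a preset value

def group_by_edge_server(trained_files, edge_info_of_file):
    """Group trained model files so each edge server receives its own group.

    trained_files     : {file_id: parameters} trained model file set
    edge_info_of_file : {file_id: edge server information} association (assumed shape)
    """
    edge_info_set = set(edge_info_of_file.values())
    if not len(edge_info_set) > PRESET_VALUE:
        raise ValueError("number of edge servers must exceed the preset value")
    groups = defaultdict(list)
    for file_id, params in trained_files.items():
        groups[edge_info_of_file[file_id]].append(params)
    return dict(groups)  # {edge server information: trained model file group}
```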
4. An information security-based charging and discharging facility load prediction information generation device, comprising:
an acquisition unit configured to collect a target sample local data set sequence, wherein the target sample local data set sequence comprises local data sets corresponding to at least one local terminal, the terminal type of the local terminal is a vehicle charging equipment type, and the target sample local data is operation data generated when the local terminal charges a vehicle;
a model training unit configured to, for each target sample local data set in the target sample local data set sequence, perform initial model training on an initial charge load prediction information generation model corresponding to an initial model file through the target sample local data set to obtain a trained model file, wherein the initial model file is a model file sent by an edge server associated with the local terminal corresponding to the target sample local data set, and the initial charge load prediction information generation model is a charge load prediction information generation model yet to undergo model training;
a splitting unit configured to split the obtained trained model file set into model file groups to obtain a trained model file group set, wherein the trained model file group set corresponds to at least one edge server;
a first aggregation unit configured to, for each trained model file group in the trained model file group set, perform primary model file aggregation on the trained model file group through the edge server corresponding to the trained model file group to obtain a first aggregated model file;
a second aggregation unit configured to perform secondary model file aggregation on the obtained first aggregated model file set to obtain a second aggregated model file; and
a generation unit configured to, for each local terminal in the at least one local terminal, generate charging load prediction information corresponding to the local terminal according to real-time operation data corresponding to the local terminal and a charging load prediction information generation model corresponding to the second aggregated model file, wherein the performing primary model file aggregation on the trained model file group through the edge server corresponding to the trained model file group to obtain a first aggregated model file comprises:
determining a first Bayesian prior probability and a first sample distribution of the trained model file group according to the trained model file group;
determining a first log likelihood function of an edge parameter corresponding to the edge server;
performing approximate processing on the first Bayesian prior probability and the first sample distribution to generate a first approximate Bayesian posterior probability;
obtaining a first objective function based on the first log-likelihood function and the first approximate Bayesian posterior probability, wherein the first objective function comprises an edge server parameter and a local terminal parameter set, the edge server parameter represents an edge parameter corresponding to the edge server, and the local terminal parameter represents a parameter of a local terminal corresponding to the edge server;
in response to determining that the edge server parameter converges, performing parameter optimization processing on the local terminal parameter set to obtain a first model file;
in response to determining that the local terminal parameters in the local terminal parameter set converge, performing parameter optimization processing on the edge server parameter to obtain a second model file; and
and determining the first model file and the second model file as the first aggregated model file.
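The unit structure of the claimed device (the acquisition, model training, splitting, first aggregation, second aggregation, and generation units named in claim 4 and the description) can be sketched as a simple composition of callables. The types and the `run` orchestration below are illustrative assumptions, not the claimed implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LoadPredictionDevice:
    """One callable per claimed unit; each would wrap the corresponding method step."""
    acquisition_unit: Callable        # source -> target sample local data set sequence
    model_training_unit: Callable     # local data set -> trained model file
    splitting_unit: Callable          # trained model file set -> trained model file group set
    first_aggregation_unit: Callable  # trained model file group -> first aggregated model file
    second_aggregation_unit: Callable # first aggregated model file set -> second aggregated model file
    generation_unit: Callable         # second aggregated model file -> prediction information

    def run(self, source):
        datasets = self.acquisition_unit(source)
        trained = [self.model_training_unit(ds) for ds in datasets]
        groups = self.splitting_unit(trained)
        firsts = [self.first_aggregation_unit(g) for g in groups]
        second = self.second_aggregation_unit(firsts)
        return self.generation_unit(second)
```

Wiring trivial callables through `run` exercises the claimed data flow end to end without any real model.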
5. An electronic device, comprising:
one or more processors;
A storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 3.
6. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1 to 3.
CN202311160057.3A 2023-09-11 2023-09-11 Charging and discharging facility load prediction information generation method and device based on information security Active CN116894163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311160057.3A CN116894163B (en) 2023-09-11 2023-09-11 Charging and discharging facility load prediction information generation method and device based on information security


Publications (2)

Publication Number Publication Date
CN116894163A CN116894163A (en) 2023-10-17
CN116894163B (en) 2024-01-16

Family

ID=88312411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311160057.3A Active CN116894163B (en) 2023-09-11 2023-09-11 Charging and discharging facility load prediction information generation method and device based on information security

Country Status (1)

Country Link
CN (1) CN116894163B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010027726A1 (en) * 2010-04-14 2012-05-10 Bayerische Motoren Werke Aktiengesellschaft Electrical power providing method for electrical energy network for battery electric vehicle, involves determining driving profile of vehicle, and providing electrical power to network as function of energy requirement prediction
CN113610303A (en) * 2021-08-09 2021-11-05 北京邮电大学 Load prediction method and system
CN115563859A (en) * 2022-09-26 2023-01-03 国电南瑞南京控制系统有限公司 Power load prediction method, device and medium based on layered federal learning
CN115907136A (en) * 2022-11-16 2023-04-04 北京国电通网络技术有限公司 Electric vehicle scheduling method, device, equipment and computer readable medium
CN116050557A (en) * 2021-10-28 2023-05-02 新智我来网络科技有限公司 Power load prediction method, device, computer equipment and medium
CN116111579A (en) * 2022-12-13 2023-05-12 广西电网有限责任公司 Electric automobile access distribution network clustering method
CN116546567A (en) * 2023-07-06 2023-08-04 深圳市大数据研究院 Data processing method and system based on Bayesian federal learning and electronic equipment
CN116562476A (en) * 2023-07-12 2023-08-08 北京中电普华信息技术有限公司 Charging load information generation method and device applied to electric automobile
CN116596105A (en) * 2023-03-13 2023-08-15 燕山大学 Charging station load prediction method considering power distribution network development

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114372569A (en) * 2020-10-14 2022-04-19 新智数字科技有限公司 Data measurement method, data measurement device, electronic equipment and computer readable medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Electrical load forecasting using edge computing and federated learning; Afaf Taik et al.; ICC 2020 - 2020 IEEE International Conference on Communications (ICC); pp. 1-7 *
Research progress on privacy issues in federated learning; Tang Lingtao et al.; Journal of Software; Vol. 34, No. 1; pp. 197-229 *

Also Published As

Publication number Publication date
CN116894163A (en) 2023-10-17

Similar Documents

Publication Publication Date Title
CN116562476B (en) Charging load information generation method and device applied to electric automobile
CN116512980B (en) Power distribution method, device, equipment and medium based on internal resistance of power battery
CN117236805B (en) Power equipment control method, device, electronic equipment and computer readable medium
CN116894163B (en) Charging and discharging facility load prediction information generation method and device based on information security
CN116388112A (en) Abnormal supply end power-off method, device, electronic equipment and computer readable medium
CN111898061B (en) Method, apparatus, electronic device and computer readable medium for searching network
CN111626044B (en) Text generation method, text generation device, electronic equipment and computer readable storage medium
CN111582456B (en) Method, apparatus, device and medium for generating network model information
CN111709366A (en) Method, apparatus, electronic device, and medium for generating classification information
CN112488947A (en) Model training and image processing method, device, equipment and computer readable medium
CN116995784B (en) Ship energy storage and discharge control method and device, electronic equipment and readable medium
CN116757443B (en) Novel power line loss rate prediction method and device for power distribution network, electronic equipment and medium
CN117040135B (en) Power equipment power supply method, device, electronic equipment and computer readable medium
CN117131366B (en) Transformer maintenance equipment control method and device, electronic equipment and readable medium
CN113240107B (en) Image processing method and device and electronic equipment
CN117235535B (en) Abnormal supply end power-off method and device, electronic equipment and medium
CN111582482B (en) Method, apparatus, device and medium for generating network model information
CN112070163B (en) Image segmentation model training and image segmentation method, device and equipment
WO2024007938A1 (en) Multi-task prediction method and apparatus, electronic device, and storage medium
CN117913779A (en) Method, apparatus, electronic device and readable medium for predicting electric load information
Wang et al. Predict the Remaining Useful Life of Lithium Batteries Based on EWT-Elman
CN116028819A (en) Model quantization training method, device, equipment and storage medium
CN118228200A (en) Multi-mode model-based power equipment abnormality identification method, device and equipment
CN118228031A (en) System and method for estimating capacity of distributed photovoltaic system based on prediction
CN116577684A (en) Battery remaining life prediction method, system, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant