CN116050557A - Power load prediction method, device, computer equipment and medium - Google Patents


Info

Publication number
CN116050557A
CN116050557A
Authority
CN
China
Prior art keywords
power
related data
data
power load
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111280103.4A
Other languages
Chinese (zh)
Inventor
刘国柄
刘嘉
吕宏强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinzhi I Lai Network Technology Co ltd
Original Assignee
Xinzhi I Lai Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinzhi I Lai Network Technology Co ltd filed Critical Xinzhi I Lai Network Technology Co ltd
Priority to CN202111280103.4A
Publication of CN116050557A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Energy or water supply
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Operations Research (AREA)
  • Game Theory and Decision Science (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Development Economics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the present disclosure disclose a power load prediction method, apparatus, computer device, and medium. The method comprises the following steps: acquiring operation-related data of a target date; inputting the operation-related data into a pre-trained power load prediction model to generate a power load predicted value, wherein the training sample set of the power load prediction model is obtained based on data similarity; transmitting the power load predicted value to a target device with a display function; and controlling the target device to display the power load predicted value. With the method and apparatus, the power load is accurately predicted from the operation-related data by the pre-trained power load prediction model, which makes it convenient for operators to understand the operating condition of the equipment and to overhaul it, and thereby prolongs the service life of the equipment.

Description

Power load prediction method, device, computer equipment and medium
Technical Field
The disclosure relates to the technical field of energy data prediction, and in particular to a power load prediction method, apparatus, computer device, and medium.
Background
Power load prediction is an important component of the power market and an essential link in power grid safety early warning. Prior-art power load prediction usually carries a large error, which poses serious hidden dangers to the operation of energy equipment and to the safety of operators. How to predict the power load accurately has therefore become a pressing issue.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a power load prediction method, apparatus, computer device, and medium to solve the technical problem that the power load cannot be accurately predicted in the prior art.
In a first aspect of the embodiments of the present disclosure, a power load prediction method is provided, including: acquiring operation-related data of a target date; inputting the operation-related data into a pre-trained power load prediction model to generate a power load predicted value, wherein a training sample set of the power load prediction model is obtained based on data similarity; transmitting the power load predicted value to a target device with a display function; and controlling the target device to display the power load predicted value.
In a second aspect of the embodiments of the present disclosure, a power load prediction apparatus is provided, comprising: an acquisition unit configured to acquire operation-related data of a target date; a generation unit configured to input the operation-related data into a pre-trained power load prediction model and generate a power load predicted value, wherein a training sample set of the power load prediction model is obtained based on data similarity; a transmission unit configured to transmit the power load predicted value to a target device having a display function; and a display unit configured to control the target device to display the power load predicted value.
In a third aspect of the embodiments of the present disclosure, a computer device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the above method.
In a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the steps of the above method.
One of the above embodiments of the present disclosure has the following beneficial effects: first, the operation-related data of the target date is acquired; then, the operation-related data is input into the pre-trained power load prediction model to generate a power load predicted value; finally, the obtained power load predicted value is transmitted to the target device and displayed. The method provided by the present disclosure can accurately predict the power load from the operation-related data using a pre-trained neural network model (the power load prediction model), which makes it convenient for operators to check the operating condition of the equipment and carry out maintenance, and indirectly prolongs the service life of the equipment.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of a joint learning architecture according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of an embodiment of a power load prediction method according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural view of an embodiment of a power load prediction device according to an embodiment of the present disclosure;
fig. 4 is a schematic structural view of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" or "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Joint learning refers to comprehensively utilizing multiple AI (Artificial Intelligence) technologies, on the premise of ensuring data security and user privacy, to jointly mine data value through multiparty cooperation and to promote new intelligent business states and modes based on joint modeling. Joint learning has at least the following characteristics:
(1) Participating nodes control their own data in a weakly centralized joint training mode, ensuring data privacy and security during collaborative intelligence creation.
(2) Under different application scenarios, multiple model aggregation optimization strategies are established by screening and/or combining AI algorithms and privacy-preserving computation, so as to obtain high-level, high-quality models.
(3) On the premise of ensuring data security and user privacy, methods for improving the efficiency of the joint learning engine are derived from these model aggregation optimization strategies; such methods improve the overall efficiency of the joint learning engine by solving problems of information interaction, intelligent perception, and exception handling mechanisms in large-scale cross-domain networks with parallel computing architectures.
(4) The requirements of multiparty users in various scenarios are collected, and the real contribution of each joint participant is determined and reasonably evaluated through a mutual trust mechanism, on which distribution incentives are based.
Based on this mode, an AI technology ecology based on joint learning can be established, the value of industry data can be fully exploited, and applications in vertical fields can be promoted.
A power load prediction method and apparatus according to embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a joint learning architecture according to an embodiment of the present disclosure. As shown in fig. 1, the architecture of joint learning may include a server (central node) 101, as well as participants 102, 103, and 104.
In the joint learning process, a basic model may be established by the server 101, and the server 101 transmits the model to the participants 102, 103, and 104 with which it has established communication connections. Alternatively, any participant may establish the basic model and upload it to the server 101, and the server 101 then sends the model to the other participants with which it has established communication connections. The participants 102, 103, and 104 construct the model according to the downloaded basic structure and model parameters, perform model training using local data to obtain updated model parameters, and encrypt and upload the updated model parameters to the server 101. The server 101 aggregates the model parameters sent by the participants 102, 103, and 104 to obtain global model parameters, and transmits the global model parameters back to the participants 102, 103, and 104. The participants 102, 103, and 104 iterate their respective models according to the received global model parameters until the models eventually converge, thereby completing training. In the joint learning process, the data uploaded by the participants 102, 103, and 104 are model parameters; local data are never uploaded to the server 101, and all participants share the final model parameters, so common modeling is achieved while data privacy is preserved. Each participant may consist of one or more clients.
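As a rough single-process illustration of this exchange (not part of the patent text; the linear model, learning rate, and function names are assumptions, and the encryption step is omitted), a minimal FedAvg-style sketch in Python:

```python
import numpy as np

def local_update(global_w, local_data, lr=0.1, epochs=5):
    # One participant: start from the global parameters and fit a linear
    # model to its local (X, y) data by gradient descent on squared error.
    w = global_w.copy()
    X, y = local_data
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w  # only parameters leave the participant, never the raw data

def aggregate(uploads):
    # Server side: average the parameter vectors uploaded by participants.
    return np.mean(uploads, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
participants = []
for _ in range(3):  # participants 102, 103 and 104
    X = rng.normal(size=(50, 2))
    participants.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

global_w = np.zeros(2)  # basic model established by the server
for _ in range(20):     # iterate until the model converges
    global_w = aggregate([local_update(global_w, d) for d in participants])
print(global_w)         # approaches true_w; no raw data ever left a site
```

Only the pattern of local training, parameter upload, server averaging, and broadcast reflects the architecture described above; a production deployment would add the network transport and the parameter encryption mentioned in the text.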
It should be noted that the number of participants is not limited to three as described above, but may be set as needed, and the embodiment of the present disclosure is not limited thereto.
With continued reference to fig. 2, a flow 200 of an embodiment of a power load prediction method according to the present disclosure is shown. The method may be performed by a server or a participant in fig. 1. The power load prediction method comprises the following steps:
step S201, operation-related data of the target date is acquired.
In the embodiment, the execution subject of the power load prediction method may acquire the operation-related data of the target date via sensors, from its own storage, or in other manners. The operation-related data includes at least operation state data, fuel usage data, power generation value data, and the like. The present disclosure does not limit the manner in which the operation-related data of the target date is acquired.
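As a purely illustrative sketch (the field names and values below are hypothetical; the text only requires that the data cover operation state, fuel usage, and power generation values), one day's operation-related data might be assembled like this:

```python
from dataclasses import dataclass, astuple

@dataclass
class OperationRecord:
    # Hypothetical layout of one target date's operation-related data.
    date: str
    operation_state: int     # encoded running state of the equipment
    fuel_usage: float        # fuel consumed on the target date
    power_generation: float  # power generation value

record = OperationRecord("2021-10-28", 1, 382.5, 1250.0)
features = astuple(record)[1:]  # numeric inputs handed to the prediction model
```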
Step S202, inputting the operation-related data into a pre-trained power load prediction model to generate a power load predicted value.
In an embodiment, the execution subject may input the operation-related data into a pre-trained power load prediction model to generate a power load predicted value. Here, the training sample set of the above-described power load prediction model is obtained based on the similarity of the data.
In an alternative implementation of the embodiment, the power load prediction model is trained according to the following steps:
the first step, based on the joint learning architecture, in response to determining that the data amount of the training data of the training initiator is less than the preset data amount, the executing body may receive the power-related data transmitted by at least one participant in a wired connection manner or a wireless connection manner, so as to obtain a power-related data set. Here, the above-mentioned preset data amount may be a threshold value set in advance for knowing how much training data is.
The second step: the executing body may determine the similarity between each piece of power-related data in the power-related data set and the training data to obtain a similarity set.
The third step: based on the similarity set, the execution subject may determine a training data set.
The fourth step: using the training data set as the training sample set, the execution subject may train an initial model to obtain the power load prediction model.
The second step set forth above may be accomplished by: the execution body may determine the cosine similarity between each piece of power-related data in the power-related data set and the training data to obtain a cosine similarity set, and may then determine the cosine similarities as the data similarities to obtain the similarity set. Cosine similarity evaluates the similarity of two data items by calculating the cosine of the angle between their vectors. Its value lies in [-1, 1]; a larger value indicates greater similarity, and a smaller value indicates less similarity.
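A minimal sketch of this computation (illustrative only; the sample vectors are borrowed from the worked example later in this description):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two data vectors; result lies in [-1, 1].
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

training_data = [12, 56, 98, 56, 77]  # stand-in for the initiator's data
candidate     = [55, 79, 64, 36, 19]  # one power-related data item
print(cosine_similarity(training_data, candidate))
```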
The second step set forth above may also be accomplished by: the execution body may determine a hash value between each piece of power-related data in the power-related data set and the training data using a preset algorithm to obtain a hash value set, and may then determine the hash values as the data similarities to obtain the similarity set. The preset algorithm may be a locality-sensitive hashing (LSH) algorithm, an algorithm for similarity search over massive data. A locality-sensitive hash maps highly similar data to the same hash value with high probability, and maps dissimilar data to the same hash value with very low probability, so the hash values can be used to determine the similarity of the data.
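One common construction of such a locality-sensitive hash uses random hyperplanes; the sketch below is an assumption about what the "preset algorithm" might look like, not the patent's specified implementation:

```python
import numpy as np

def lsh_signature(x, planes):
    # Sign pattern of x against random hyperplanes; similar vectors
    # produce the same bits with high probability.
    return tuple(bool(b) for b in (planes @ np.asarray(x, dtype=float) >= 0))

rng = np.random.default_rng(42)
planes = rng.normal(size=(8, 5))  # 8 hash bits for 5-dimensional data

sig_a = lsh_signature([12, 56, 98, 56, 77], planes)
sig_b = lsh_signature([13, 55, 97, 58, 75], planes)  # a near-duplicate of A
agreement = sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)
print(agreement)  # fraction of matching bits, usable as a similarity score
```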
The second step set forth above may also be accomplished by: the execution body may average the cosine similarity and the hash-based similarity between each piece of power-related data in the power-related data set and the training data, and the execution subject may determine the obtained average value as the value characterizing the similarity of that power-related data to the training data.
The second step set forth above may also be accomplished by: the execution subject may determine the Euclidean distance between each piece of power-related data in the power-related data set and the training data to obtain a Euclidean distance set, and may then determine the Euclidean distances as the data similarities to obtain the similarity set. The Euclidean distance, also called the Euclidean metric, is a commonly used distance definition: the true distance between two points in a multidimensional space, or the natural length of a vector (i.e., the distance from the point to the origin). In two- and three-dimensional spaces, the Euclidean distance is the actual distance between the two points. As an example, the Euclidean distance between two data points in a two-dimensional data space may be calculated by the following formula:
\rho = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}
wherein ρ denotes the Euclidean distance between the data point (x_2, y_2) and the data point (x_1, y_1).
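Concretely (a trivial check of the formula above; the points are made up):

```python
import numpy as np

p1 = np.array([1.0, 2.0])
p2 = np.array([4.0, 6.0])
rho = np.linalg.norm(p2 - p1)  # sqrt((4-1)**2 + (6-2)**2) = 5.0
print(rho)
```

Note that under this measure a smaller value indicates higher similarity, the opposite orientation of the cosine measure above.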
The third step set forth above may be accomplished by: based on the similarity set, the execution body may select, from the power-related data set, power-related data having a similarity exceeding a preset similarity threshold as target power-related data, to obtain a target power-related data set; the execution body may combine the training data and the target power-related data set to obtain the training data set.
As an example, the power-related data set may be "power-related data A: 12, 56, 98, 56, 77; power-related data B: 55, 79, 64, 36, 19; power-related data C: 89, 76, 36, 64, 23". The similarity set may be "power-related data A similarity: 82%; power-related data B similarity: 75%; power-related data C similarity: 53%". The preset similarity threshold may be "60%". The execution subject may then select "power-related data A" and "power-related data B" as the target power-related data, obtaining the target power-related data set.
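The selection step itself is a simple filter; here is a sketch using the numbers from the example (the dictionary keys are hypothetical labels, with the percentages written as fractions):

```python
similarities = {"A": 0.82, "B": 0.75, "C": 0.53}  # similarity set from the example
threshold = 0.60                                  # preset similarity threshold

target_ids = [k for k, s in similarities.items() if s > threshold]
print(target_ids)  # ['A', 'B'] -- then merged with the initiator's training data
```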
The fourth step set forth above comprises the sub-steps of:
the first sub-step, based on the training sample set, the execution subject may train the initial model to obtain an initial power load prediction model. The initial model may be a decision tree model (decision tree mode), an XGBoost model (Extreme Gradient Boosting), or a regression neural network model such as a LightGBM model, for example.
A second sub-step, the executive body may transmit the initial power load prediction model to the at least one participant.
And a third sub-step, wherein the executing body may receive parameters of the participant model obtained by training the initial power load prediction model by the at least one participant using local data.
And a fourth sub-step in which the execution subject updates the initial power load prediction model based on the parameters to obtain the power load prediction model. Here, the power load prediction model may be a neural network model for analyzing the power-related data and then predicting the power load.
And a fifth sub-step in which the execution subject may store the power load prediction model in a target storage database.
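A minimal sketch of the initiator-side flow across the five sub-steps (the linear model, the parameter averaging, and the JSON file standing in for the "target storage database" are all illustrative assumptions; the earlier joint-learning sketch shows the per-round exchange in more detail):

```python
import json
import numpy as np

def train_initial_model(X, y, lr=0.1, epochs=200):
    # Sub-step 1: fit an initial (here: linear) load model on the
    # similarity-filtered training sample set.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def joint_update(initial_w, participant_params):
    # Sub-steps 2-4: the initial model is sent to the participants, each
    # returns parameters trained on its local data, and the initiator folds
    # them back in -- simulated here by simple averaging.
    return np.mean([initial_w, *participant_params], axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=0.1, size=100)

w0 = train_initial_model(X, y)
w_final = joint_update(w0, [w0 + rng.normal(scale=0.01, size=3) for _ in range(2)])

with open("model_store.json", "w") as f:  # sub-step 5: target storage database
    json.dump({"power_load_model": w_final.tolist()}, f)
```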
Step S203, transmitting the power load predicted value to a target device with a display function.
In an embodiment, the execution body may transmit the above power load predicted value to the target device having the display function through a wired or wireless connection.
Step S204, controlling the target device to display the power load predicted value.
In an embodiment, the execution body may control the target device to display the power load predicted value.
One of the above embodiments of the present disclosure has the following beneficial effects: first, the operation-related data of the target date is acquired; then, the operation-related data is input into the pre-trained power load prediction model to generate a power load predicted value; finally, the obtained power load predicted value is transmitted to the target device and displayed. The method provided by the present disclosure can accurately predict the power load from the operation-related data using a pre-trained neural network model (the power load prediction model), making it convenient for operators to check the operating condition of the equipment and carry out maintenance, and indirectly prolonging the service life of the equipment. The present disclosure further provides several ways of determining data similarity, so that the target power-related data selected to build the training sample set genuinely resemble the initiator's training data and the resulting power load prediction model is accurate. In addition, the initial power load prediction model obtained by training is transmitted to the participants, and the parameters returned by the participants are used to update and iterate the parameters of the initial power load prediction model, further improving the prediction accuracy of the model.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein in detail.
With further reference to fig. 3, as an implementation of the method shown in the above figures, the present disclosure provides embodiments of a power load prediction apparatus. These apparatus embodiments correspond to the method embodiments described above for fig. 2, and the apparatus is particularly applicable to various electronic devices.
As shown in fig. 3, the power load prediction apparatus 300 of the embodiment includes: an acquisition unit 301, a generation unit 302, a transmission unit 303, and a display unit 304. Wherein the acquiring unit 301 is configured to acquire operation-related data of a target date; a generating unit 302 configured to input the operation-related data to a pre-trained power load prediction model, and generate a power load predicted value, wherein a training sample set of the power load prediction model is obtained based on similarity of data; a transmission unit 303 configured to transmit the above-described power load predicted value to a target device having a display function; and a display unit 304 configured to control the target device to display the power load predicted value.
In an alternative implementation of the embodiment, the power load prediction model is trained according to the following steps: receiving power-related data transmitted by at least one participant in response to determining that the data amount of training data of the training initiator is less than a preset data amount, and obtaining a power-related data set; determining the similarity between each piece of power related data in the power related data set and the training data to obtain a similarity set; determining a training data set based on the similarity set; and training the initial model by taking the training data set as the training sample set to obtain the power load prediction model.
In an optional implementation manner of an embodiment, the determining the similarity between each power-related data in the power-related data set and the training data to obtain a similarity set includes: determining cosine similarity between each piece of power related data in the power related data set and the training data to obtain a cosine similarity set; and determining the cosine similarity as the data similarity to obtain a similarity set.
In an optional implementation manner of an embodiment, the determining the similarity between each power-related data in the power-related data set and the training data to obtain a similarity set includes: determining a hash value between each piece of power related data in the power related data set and the training data by using a preset algorithm to obtain a hash value set; and determining the hash value as data similarity to obtain a similarity set.
In an optional implementation manner of an embodiment, the determining the similarity between each power-related data in the power-related data set and the training data to obtain a similarity set includes: averaging the cosine similarity and the hash value between each piece of power related data in the power related data set and the training data; the obtained average value is determined as a value for characterizing the similarity of the power-related data and the training data.
In an optional implementation of the embodiment, determining the training data set based on the set of similarities includes: selecting power related data with similarity exceeding a preset similarity threshold value from the power related data set as target power related data based on the similarity set to obtain a target power related data set; and combining the training data and the target power related data set to obtain the training data set.
In an optional implementation manner of the embodiment, the training the initial model to obtain the power load prediction model using the training data set as the training sample set includes: training the initial model based on the training sample set to obtain an initial power load prediction model; transmitting the initial power load prediction model to the at least one participant; receiving parameters of the participant model obtained by training the initial power load prediction model by the at least one participant by using local data; updating the initial power load prediction model based on the parameters to obtain the power load prediction model; and storing the power load prediction model into a target storage database.
It will be appreciated that the elements described in the apparatus 300 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting benefits described above with respect to the method are equally applicable to the apparatus 300 and the units contained therein, and are not described in detail herein.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation of the embodiments of the disclosure.
Fig. 4 is a schematic diagram of a computer device 4 provided by an embodiment of the present disclosure. As shown in fig. 4, the computer device 4 of this embodiment includes: a processor 401, a memory 402 and a computer program 403 stored in the memory 402 and executable on the processor 401. The steps of the various method embodiments described above are implemented by processor 401 when executing computer program 403. Alternatively, the processor 401, when executing the computer program 403, performs the functions of the modules/units in the above-described apparatus embodiments.
Illustratively, the computer program 403 may be partitioned into one or more modules/units, which are stored in the memory 402 and executed by the processor 401 to implement the present disclosure. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 403 in the computer device 4.
The computer device 4 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The computer device 4 may include, but is not limited to, a processor 401 and a memory 402. It will be appreciated by those skilled in the art that fig. 4 is merely an example of computer device 4 and is not intended to limit computer device 4, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., a computer device may also include an input-output device, a network access device, a bus, etc.
The processor 401 may be a central processing unit (Central Processing Unit, CPU) or other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 402 may be an internal storage unit of the computer device 4, for example, a hard disk or a memory of the computer device 4. The memory 402 may also be an external storage device of the computer device 4, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the computer device 4. Further, the memory 402 may also include both internal storage units and external storage devices of the computer device 4. The memory 402 is used to store computer programs and other programs and data required by the computer device. The memory 402 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/computer device and method may be implemented in other manners. For example, the apparatus/computer device embodiments described above are merely illustrative, e.g., the division of modules or elements is merely a logical functional division, and there may be additional divisions of actual implementations, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present disclosure may implement all or part of the flow of the method of the above-described embodiments, or may be implemented by a computer program to instruct related hardware, and the computer program may be stored in a computer readable storage medium, where the computer program, when executed by a processor, may implement the steps of the method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, executable file or in some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium can be appropriately increased or decreased according to the requirements of the jurisdiction's jurisdiction and the patent practice, for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunication signals according to the jurisdiction and the patent practice.
The above embodiments are merely for illustrating the technical solution of the present disclosure, and are not limiting thereof; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and are intended to be included in the scope of the present disclosure.

Claims (10)

1. A method of predicting an electrical load, comprising:
acquiring operation related data of a target date;
inputting the operation related data into a pre-trained power load prediction model to generate a power load predicted value, wherein a training sample set of the power load prediction model is obtained based on data similarity;
transmitting the power load predicted value to a target device having a display function;
and controlling the target equipment to display the power load predicted value.
2. The method of claim 1, wherein the power load prediction model is trained according to the steps of:
based on the joint learning architecture, receiving power-related data transmitted by at least one participant in response to determining that the data amount of training data of a training initiator is less than a preset data amount, and obtaining a power-related data set;
determining the similarity between each piece of power related data in the power related data set and the training data to obtain a similarity set;
determining a training dataset based on the set of similarities;
and taking the training data set as the training sample set, and training an initial model to obtain the power load prediction model.
3. The method of claim 2, wherein determining the similarity between each power-related data in the set of power-related data and the training data, resulting in a set of similarities, comprises:
determining cosine similarity between each piece of power related data in the power related data set and the training data to obtain a cosine similarity set;
and determining the cosine similarity as the data similarity to obtain a similarity set.
4. The method of claim 2, wherein determining the similarity between each power-related data in the set of power-related data and the training data, resulting in a set of similarities, comprises:
determining a hash value between each piece of power related data in the power related data set and the training data by using a preset algorithm to obtain a hash value set;
and determining the hash value as data similarity to obtain a similarity set.
5. The method of claim 3 or 4, wherein determining the similarity between each power-related data in the set of power-related data and the training data to obtain a set of similarities comprises:
averaging cosine similarity and hash values between each piece of power-related data in the power-related data set and the training data;
the resulting mean value is determined as a value characterizing the similarity of the power-related data to the training data.
6. The method of power load prediction according to claim 2, wherein the determining a training data set based on the set of similarities comprises:
selecting power related data with similarity exceeding a preset similarity threshold value from the power related data set as target power related data based on the similarity set to obtain a target power related data set;
and combining the training data and the target power related data set to obtain the training data set.
7. The method according to claim 2, wherein training an initial model to obtain the power load prediction model using the training data set as the training sample set includes:
training the initial model based on the training sample set to obtain an initial power load prediction model;
transmitting the initial power load prediction model to the at least one participant;
receiving parameters of the participant model obtained by training the initial power load prediction model by the at least one participant by utilizing local data;
updating the initial power load prediction model based on the parameters to obtain the power load prediction model;
and storing the power load prediction model in a target storage database.
8. An electrical load prediction apparatus, comprising:
an acquisition unit configured to acquire operation-related data of a target date;
a generation unit configured to input the operation-related data to a pre-trained power load prediction model, and generate a power load prediction value, wherein a training sample set of the power load prediction model is obtained based on similarity of data;
a transmission unit configured to transmit the power load predicted value to a target device having a display function;
and a display unit configured to control the target device to display the power load predicted value.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 7.
CN202111280103.4A 2021-10-28 2021-10-28 Power load prediction method, device, computer equipment and medium Pending CN116050557A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111280103.4A CN116050557A (en) 2021-10-28 2021-10-28 Power load prediction method, device, computer equipment and medium


Publications (1)

Publication Number Publication Date
CN116050557A true CN116050557A (en) 2023-05-02

Family

ID=86115126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111280103.4A Pending CN116050557A (en) 2021-10-28 2021-10-28 Power load prediction method, device, computer equipment and medium

Country Status (1)

Country Link
CN (1) CN116050557A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116894163A (en) * 2023-09-11 2023-10-17 国网信息通信产业集团有限公司 Charging and discharging facility load prediction information generation method and device based on information security
CN116894163B (en) * 2023-09-11 2024-01-16 国网信息通信产业集团有限公司 Charging and discharging facility load prediction information generation method and device based on information security


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination