CN116415744A - Power prediction method and device based on deep learning and storage medium - Google Patents


Info

Publication number
CN116415744A
Authority
CN
China
Prior art keywords
information
season
trend
module
power prediction
Prior art date
Legal status
Granted
Application number
CN202310687810.8A
Other languages
Chinese (zh)
Other versions
CN116415744B (en)
Inventor
王浩
宋晓宝
陈作胜
邓力玮
张耀安
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN202310687810.8A
Publication of CN116415744A
Application granted
Publication of CN116415744B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/14Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention discloses a power prediction method, a device and a storage medium based on deep learning, wherein the method comprises the following steps: inputting the time series data into an encoder of the power prediction model to obtain first season information; inputting the time sequence data into a first frequency dismantling module to obtain trend information and second season information; inputting the first season information, the trend information and the second season information into a decoder to obtain target season information and target trend information; inputting the target trend information into a trend module of the power prediction model to obtain trend prediction information; and determining a power prediction result based on the target season information and the trend prediction information. According to the invention, the important frequency domain Fourier components are screened through the first seasonal module and the second seasonal module of the attention mechanism, so that the information loss caused by discarding the Fourier components carrying important information can be reduced, the accuracy of model prediction is improved, and the accuracy of electric power prediction is improved.

Description

Power prediction method and device based on deep learning and storage medium
Technical Field
The invention relates to the technical field of electric power prediction based on deep learning, in particular to an electric power prediction method and device based on deep learning and a storage medium.
Background
Power prediction refers to predicting future power characteristics from given historical power characteristics, where a power characteristic is a quantity calculated from power values (e.g., peak power consumption, variance, mean). Power prediction technology is crucial to the operation and management of a power system: high-precision power prediction helps energy suppliers and grid managers plan power energy better, so as to adjust energy supply, keep the power system running stably, and improve overall energy efficiency.
Electric power prediction is affected by factors such as population, politics, economy and climate, which greatly increase the difficulty of predicting power data. Many excellent algorithms have been proposed to predict power data accurately. Currently, with the rise of the Transformer model, more and more deep learning models adopt a Transformer structure to analyze time series, of which FEDformer is the most typical. FEDformer filters the input sequence with sliding-window means of different receptive fields so as to split the input sequence into seasonal terms and trend terms, and then realizes power prediction by predicting the seasonal terms and trend terms separately. FEDformer treats the trend term in the time domain and the seasonal term in the frequency domain, respectively, and enhances the characterization capability of the seasonal term by randomly sampling its Fourier components.
Because FEDformer converts the seasonal term from the time domain to the frequency domain by the Fourier transform and randomly samples a number of frequency-domain Fourier components to enhance the frequency characteristics of the seasonal term, the length of the input vector is greatly reduced. However, the random sampling can discard Fourier components that carry important information, so some important information in the data cannot be captured by the model, resulting in lower accuracy of model prediction.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a power prediction method, device and storage medium based on deep learning, and aims to solve the technical problem that the accuracy of the existing power prediction is low.
In order to achieve the above object, the present invention provides a deep learning-based power prediction method, comprising the steps of:
inputting time sequence data corresponding to historical power characteristic data into an encoder of a power prediction model for processing to obtain first season information corresponding to the time sequence data, wherein the encoder comprises a plurality of encoding layers which are sequentially connected, and the encoding layers comprise a first season module of an attention mechanism;
Inputting the time sequence data into a first frequency dismantling module of the power prediction model to carry out frequency dismantling processing to obtain trend information and second season information;
inputting the first season information, the trend information and the second season information into a decoder of the power prediction model for processing to obtain target season information and target trend information corresponding to the time sequence data, wherein the decoder comprises a plurality of decoding layers which are connected in sequence, and the decoding layers comprise a second season module of an attention mechanism;
inputting the target trend information into a trend module of the power prediction model for processing so as to obtain trend prediction information;
and determining a power prediction result based on the target season information and the trend prediction information through the power prediction model.
Further, the step of inputting the time series data into the encoder of the power prediction model for processing to obtain the first season information corresponding to the time series data includes:
for each current coding layer, acquiring first input information through a second frequency disassembly module of the current coding layer, wherein if the current coding layer is a first coding layer of the coder, the first input information is the time sequence data, and if the current coding layer is not the first coding layer, the first input information is the output information of the last coding layer;
Performing frequency dismantling processing on the first input information through the second frequency dismantling module to obtain third season information;
inputting the third seasonal information into a first forward propagation module of the current coding layer for processing to obtain a first intermediate parameter, and determining a first intermediate hidden state parameter by the current coding layer based on the third seasonal information and the first intermediate parameter;
inputting the first intermediate hidden state parameter into a first season module of the current coding layer for processing to obtain a second intermediate parameter;
and determining, by the current coding layer, output information of the current coding layer based on the second intermediate parameter and the first intermediate hidden state parameter, where if the current coding layer is a last coding layer of the encoder, determining that the output information of the current coding layer is the first season information.
Further, the step of inputting the first season information, the trend information, and the second season information into the decoder of the power prediction model for processing to obtain target season information and target trend information corresponding to the time series data includes:
For each current decoding layer, obtaining second input information through the current decoding layer, wherein the second input information comprises season item input information, remainder item input information and trend item input information;
inputting the remainder input information into a third frequency disassembly module of the current decoding layer to perform frequency disassembly processing to obtain first remainder information and first trend information;
inputting the first remainder information and the first season information into a second season module of the current decoding layer for processing so as to obtain fourth season information;
determining a third intermediate parameter by the current decoding layer based on the first remainder information and the fourth season information;
inputting the third intermediate parameter into a second forward propagation module of the current decoding layer for processing to obtain a fourth intermediate parameter, and determining a second intermediate hidden state parameter by the current decoding layer based on the third intermediate parameter and the fourth intermediate parameter;
inputting the second intermediate hidden state parameter into a fourth frequency disassembly module of the current decoding layer to perform frequency disassembly processing to obtain second trend information and remainder output information of the current decoding layer;
Determining trend item output information of the current decoding layer based on the trend item input information, the first trend information and the second trend information by the current decoding layer;
and determining, by the current decoding layer, season term output information of the current decoding layer based on the season term input information and the fourth season term information, wherein if the current decoding layer is a last decoding layer of the decoder, the trend term output information is the target trend information, and the season term output information is the target season information.
Further, the step of inputting the first remainder information and the first season information into the second season module of the current decoding layer for processing to obtain fourth season information includes:
the first remainder information is subjected to linear mapping through the second season module so as to obtain a query vector of an attention mechanism, and the first season information is subjected to linear mapping through the second season module so as to obtain keys and values of the attention mechanism;
determining, by the second seasonal module, a weighted attention representation in a time dimension corresponding to the first remainder information based on the query vector, key, and value;
Determining, by the second seasonal module, an amplitude value corresponding to the periodic encoding function based on the weighted attention representation;
and determining the fourth season information through the second season module based on the time vector corresponding to the time sequence length of the time sequence data and the amplitude.
Further, if the current decoding layer is the first decoding layer of the decoder, determining that the season term input information is preset season information, determining remainder term input information based on the second season information, and determining trend term input information based on the trend information;
if the current decoding layer is not the first decoding layer, determining that the season item input information is the season item output information of the previous decoding layer, the remainder input information is the remainder output information of the previous decoding layer, and the trend item input information is the trend item output information of the previous decoding layer.
Further, the step of inputting the time series data into the first frequency disassembling module of the power prediction model to perform frequency disassembling processing to obtain trend information and second season information includes:
performing fast Fourier transform on the time sequence data through the first frequency disassembly module to obtain frequency domain data corresponding to the time sequence data;
Decomposing the frequency domain data through a Fourier decomposition module of the first frequency decomposition module to obtain a low-frequency component and a high-frequency component corresponding to the frequency domain data;
and performing inverse fast Fourier transform on the low-frequency component through the first frequency dismantling module to obtain the trend information, and performing inverse fast Fourier transform on the high-frequency component through the first frequency dismantling module to obtain the second season information.
Further, the step of inputting the target trend information into the trend module of the power prediction model for processing to obtain trend prediction information includes:
performing linear transformation on the target trend information through the trend module to obtain trend information after linear transformation;
and inputting a time vector corresponding to the time sequence length of the target trend information and the trend information after linear transformation into a trend function of the trend module for fitting processing so as to obtain the trend prediction information.
Further, the step of determining a power prediction result by the power prediction model based on the target season information and the trend prediction information includes:
Slicing the target season information and the trend prediction information through the electric power prediction model to obtain a predicted season item sequence and a predicted trend item sequence;
and determining the power prediction result based on the prediction season term sequence and the prediction trend term sequence through the power prediction model.
In addition, in order to achieve the above object, the present application further provides a deep learning-based power prediction apparatus, comprising: a memory, a processor, and a deep learning-based power prediction program stored on the memory and executable on the processor, wherein the deep learning-based power prediction program, when executed by the processor, implements the steps of the deep learning-based power prediction method described above.
In addition, in order to achieve the above object, the present application further provides a computer-readable storage medium having stored thereon a deep learning-based power prediction program that, when executed by a processor, implements the steps of the deep learning-based power prediction method as described above.
The method comprises the steps of inputting time sequence data corresponding to historical power characteristic data into an encoder of a power prediction model for processing to obtain first season information corresponding to the time sequence data, wherein the encoder comprises a plurality of coding layers which are sequentially connected, and the coding layers comprise a first season module of an attention mechanism; then inputting the time sequence data into a first frequency dismantling module of the electric power prediction model to carry out frequency dismantling processing to obtain trend information and second season information; inputting the first season information, the trend information and the second season information into a decoder of the power prediction model for processing to obtain target season information and target trend information corresponding to the time sequence data, wherein the decoder comprises a plurality of decoding layers which are connected in sequence, and the decoding layers comprise a second season module of an attention mechanism; inputting the target trend information into a trend module of the power prediction model for processing so as to obtain trend prediction information; and finally, determining a power prediction result based on the target season information and the trend prediction information through the power prediction model, and screening important frequency domain Fourier components through a first season module and a second season module of an attention mechanism, so that information loss caused by discarding the Fourier components carrying important information can be reduced, the accuracy of model prediction is improved, and the accuracy of power prediction is improved.
Drawings
FIG. 1 is a schematic diagram of a power prediction device based on deep learning in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a power prediction method based on deep learning according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of a power prediction model in the power prediction method based on deep learning;
FIG. 4 is a schematic diagram of a decoder of a power prediction model in a power prediction method based on deep learning according to the present invention;
FIG. 5 is a schematic diagram of a second season module of the power prediction model in the power prediction method based on deep learning according to the present invention;
FIG. 6 is a schematic diagram of an encoder of a power prediction model in a power prediction method based on deep learning according to the present invention;
FIG. 7 is a schematic structural diagram of a first frequency disassembling module of a power prediction model in the power prediction method based on deep learning according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a power prediction device based on deep learning of a hardware running environment according to an embodiment of the present invention.
The electric power prediction device based on deep learning in the embodiment of the invention can be a PC, and also can be mobile terminal equipment with a display function, such as a smart phone, a tablet personal computer, an electronic book reader, an MP3 (Moving Picture Experts Group Audio Layer III, dynamic image expert compression standard audio layer 3) player, an MP4 (Moving Picture Experts Group Audio Layer IV, dynamic image expert compression standard audio layer 4) player, a portable computer and the like.
As shown in fig. 1, the deep learning-based power prediction apparatus may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Optionally, the deep learning-based power prediction device may further include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. Among other sensors, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display screen according to the brightness of ambient light, and a proximity sensor that may turn off the display screen and/or the backlight when the mobile terminal moves to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and the direction when the mobile terminal is stationary, and the mobile terminal can be used for recognizing the gesture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described herein.
It will be appreciated by those skilled in the art that the terminal structure shown in fig. 1 does not constitute a limitation of the deep learning based power prediction device, and may include more or fewer components than shown, or certain components may be combined, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a deep learning-based power prediction program may be included in a memory 1005 as one type of computer storage medium.
In the deep learning-based power prediction apparatus shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server, and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be used to invoke the deep learning based power prediction program stored in the memory 1005.
In the present embodiment, the deep learning-based power prediction apparatus includes: a memory 1005, a processor 1001, and a deep learning-based power prediction program stored in the memory 1005; when the deep learning-based power prediction program is called by the processor 1001, the deep learning-based power prediction method in the following embodiments is executed.
The invention further provides a power prediction method based on deep learning, and referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the power prediction method based on deep learning.
In this embodiment, a power prediction result is obtained by inputting historical power feature data into a power prediction model for model training, as shown in fig. 3, where the power prediction model includes an encoder, a decoder, a frequency disassembly module and a trend module, the encoder includes a plurality of encoding layers that are sequentially connected, each encoding layer includes a first season module of an attention mechanism, the decoder includes a plurality of decoding layers that are sequentially connected, and each decoding layer includes a second season module of the attention mechanism.
In this embodiment, the power prediction method based on deep learning includes:
step S101, inputting time sequence data corresponding to historical power characteristic data into an encoder of a power prediction model for processing so as to obtain first season information corresponding to the time sequence data, wherein the encoder comprises a plurality of coding layers which are sequentially connected, and the coding layers comprise a first season module of an attention mechanism;
In the present application, when power prediction is performed, historical power characteristic data is acquired first, where a power characteristic refers to a characteristic calculated from power values, for example, a power consumption peak, a variance, a mean, and so on. The historical power characteristic data may be the power characteristic data of a province, a city, a county or a country, or the power characteristic data of a certain region (for example, a residential community, an industrial park, and the like). Then the time-series data corresponding to the historical power characteristic data is acquired.
The power prediction model performs model training according to the time-series data corresponding to the historical power characteristic data and obtains the predicted future power characteristic data. For example, if the time-series data is X_{t-I:t} ∈ R^{D×I} and the future power characteristic data (the power prediction result) is X̂_{t:t+O} ∈ R^{D×O}, the power prediction model can be regarded as a mapping from X_{t-I:t} to X̂_{t:t+O}; wherein X̂_{t:t+O}, namely X_{t:t+O}, is the prediction result, D is the number of power characteristics of the historical power characteristic data, I is the historical time length of the historical power characteristic data, and O is the prediction time length of the prediction result. The encoder includes a plurality of encoding layers connected in sequence, each encoding layer including a first season module of an attention mechanism.
In this embodiment, after time-series sequence data corresponding to historical power feature data is obtained, the time-series sequence data is input into an encoder of a power prediction model to be processed so as to obtain first season information corresponding to the time-series sequence data, specifically, in each encoding layer, a first season module of the encoding layer processes the first input information, then output information of the encoding layer is obtained through processing of the encoding layer, if the encoding layer is the first encoding layer of the encoder, the first input information is the time-series sequence data, if the encoding layer is not the first encoding layer, the first input information is output information of the last encoding layer, and if the encoding layer is the last encoding layer of the encoder, the output information of the encoding layer is determined to be the first season information.
Step S102, inputting the time sequence data into a first frequency dismantling module of the electric power prediction model for frequency dismantling processing to obtain trend information and second season information;
in this embodiment, the time series data corresponding to the historical power characteristic data may be input to the encoder of the power prediction model for processing, and simultaneously, the time series data may be input to the first frequency disassembly module of the power prediction model for frequency disassembly processing, so as to obtain trend information and second season information corresponding to the time series data.
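For illustration only, the following is a minimal sketch of this kind of FFT-based frequency disassembly (written here in PyTorch). The function name, the (D features × L time steps) tensor layout and the cumulative-energy cutoff rule are assumptions made for the example; the embodiment itself only states that the Fourier components are divided into a low-frequency (trend) group and a high-frequency (seasonal) group and transformed back by the inverse FFT, with the split determined by a threshold based on the energy proportion.

```python
import torch

def frequency_decomp(x: torch.Tensor, energy_ratio: float = 0.9):
    """Split a (D, L) series into a low-frequency trend part and a
    high-frequency seasonal part via the FFT.  The cumulative-energy
    cutoff below is an illustrative assumption."""
    L = x.shape[-1]
    spec = torch.fft.rfft(x, dim=-1)                       # Fourier components
    energy = (spec.abs() ** 2).mean(dim=0)                 # per-frequency energy, averaged over features
    cum = torch.cumsum(energy, dim=0)
    cutoff = int((cum < energy_ratio * cum[-1]).sum().item()) + 1   # number of low-frequency bins kept

    low = spec.clone()
    low[..., cutoff:] = 0                                  # low-frequency components only
    high = spec - low                                      # remaining high-frequency components

    trend = torch.fft.irfft(low, n=L, dim=-1)              # trend information
    season = torch.fft.irfft(high, n=L, dim=-1)            # seasonal information
    return trend, season

# usage: trend, season = frequency_decomp(torch.randn(3, 96))
```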
Step S103, inputting the first season information, the trend information and the second season information into a decoder of the power prediction model for processing to obtain target season information and target trend information corresponding to the time sequence data, wherein the decoder comprises a plurality of decoding layers which are connected in sequence, and the decoding layers comprise a second season module of an attention mechanism;
In this embodiment, after the first season information, the trend information and the second season information are acquired, they are input into the decoder of the power prediction model for processing, where the decoder comprises a plurality of decoding layers connected in sequence and each decoding layer comprises a second season module of an attention mechanism. For each decoding layer, the remainder input information and the first season information are processed by the second season module of the decoding layer, and the decoding layer processes the resulting data together with the trend information to obtain season term output information, remainder output information and trend term output information; if the decoding layer is the last decoding layer of the decoder, the trend term output information is the target trend information and the season term output information is the target season information.
Step S104, inputting the target trend information into a trend module of the power prediction model for processing so as to obtain trend prediction information;
in this embodiment, after the target trend information is obtained, the target trend information is input into a trend module of the power prediction model for processing, so as to obtain trend prediction information, and specifically, the trend module performs linear transformation and fitting processing on the target trend information, so as to obtain trend prediction information.
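As a rough illustration of such a trend module, the sketch below applies a learned linear transformation to the target trend information and then fits the result against a normalized time vector. The closed form of the trend function is not given in this section, so a low-order polynomial basis in t is assumed purely for illustration; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class TrendBlock(nn.Module):
    """Sketch of a trend module: linear transformation followed by a fit
    against the time vector t = [0, 1/L, ..., (L-1)/L].  The polynomial
    basis is an assumption made for this example."""

    def __init__(self, d_feat: int, degree: int = 2):
        super().__init__()
        self.degree = degree
        # linear transformation producing (degree + 1) coefficients per feature
        self.coeff = nn.Linear(d_feat, d_feat * (degree + 1))

    def forward(self, trend: torch.Tensor) -> torch.Tensor:
        # trend: (D, L) target trend information
        D, L = trend.shape
        t = torch.arange(L, dtype=trend.dtype) / L                      # normalized time vector
        basis = torch.stack([t ** k for k in range(self.degree + 1)])   # (degree+1, L)
        coeff = self.coeff(trend.transpose(0, 1))                       # (L, D*(degree+1))
        coeff = coeff.mean(dim=0).reshape(D, self.degree + 1)           # pooled coefficients per feature
        return coeff @ basis                                            # (D, L) trend prediction information

# usage: trend_pred = TrendBlock(d_feat=3)(torch.randn(3, 108))
```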
Step S105, determining, by the power prediction model, a power prediction result based on the target season information and the trend prediction information.
In this embodiment, after the trend prediction information is acquired, the power prediction model determines a power prediction result based on the target season information and the trend prediction information, and specifically, the step S105 includes:
step S1051, slicing the target season information and the trend prediction information through the power prediction model to obtain a predicted season term sequence and a predicted trend term sequence;
step S1052, determining, by the power prediction model, the power prediction result based on the prediction season term sequence and the prediction trend term sequence.
In this embodiment, after the trend prediction information is obtained, the power prediction model performs a slicing operation on the target season information and the trend prediction information to obtain a predicted season term sequence and a predicted trend term sequence. Specifically, since the time length of the target season information and of the trend prediction information is (I/2+O) while the time length of the power prediction result is O, the power prediction model slices the target season information and the trend prediction information to obtain a predicted season term sequence of time length O and a predicted trend term sequence of time length O. The specific formulas are as follows:

X̂_S = S_target[:, −O:]

X̂_T = Trendblock(T_target)[:, −O:]

wherein S_target is the target season information, T_target is the target trend information, Trendblock(T_target) is the trend prediction information, X̂_S ∈ R^{D×O} is the predicted season term sequence, X̂_T ∈ R^{D×O} is the predicted trend term sequence, D is the number of power characteristics of the historical power characteristic data, O is the prediction time length of the prediction result, and Trendblock() is the function of the trend module. In this embodiment, the data of the last O time steps of the target season information and of the trend prediction information are retained as the predicted season term sequence and the predicted trend term sequence, respectively.
Then the power prediction model determines the power prediction result based on the predicted season term sequence and the predicted trend term sequence; specifically, the power prediction model adds the predicted season term sequence and the predicted trend term sequence to obtain the power prediction result, which improves the accuracy of the power prediction result. The formula of the power prediction result is:

X̂_{t:t+O} = X̂_S + X̂_T

wherein X̂_{t:t+O} is the power prediction result.
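A minimal sketch of this final slicing-and-summation step, assuming tensors of shape (D, I/2+O):

```python
import torch

def predict_power(target_season: torch.Tensor,
                  trend_pred: torch.Tensor,
                  pred_len: int) -> torch.Tensor:
    """Slice the last O (= pred_len) steps of the target season information
    and of the trend prediction information, then add them to obtain the
    power prediction result."""
    season_seq = target_season[:, -pred_len:]   # predicted season term sequence, (D, O)
    trend_seq = trend_pred[:, -pred_len:]       # predicted trend term sequence, (D, O)
    return season_seq + trend_seq               # power prediction result, (D, O)

# usage: result = predict_power(torch.randn(3, 108), torch.randn(3, 108), pred_len=60)
```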
According to the deep learning-based power prediction method, time sequence data corresponding to historical power characteristic data are input into an encoder of a power prediction model to be processed, so that first season information corresponding to the time sequence data is obtained, wherein the encoder comprises a plurality of encoding layers which are sequentially connected, and the encoding layers comprise a first season module of an attention mechanism; then inputting the time sequence data into a first frequency dismantling module of the electric power prediction model to carry out frequency dismantling processing to obtain trend information and second season information; inputting the first season information, the trend information and the second season information into a decoder of the power prediction model for processing to obtain target season information and target trend information corresponding to the time sequence data, wherein the decoder comprises a plurality of decoding layers which are connected in sequence, and the decoding layers comprise a second season module of an attention mechanism; inputting the target trend information into a trend module of the power prediction model for processing so as to obtain trend prediction information; and finally, determining a power prediction result based on the target season information and the trend prediction information through the power prediction model, and screening important frequency domain Fourier components through a first season module and a second season module of an attention mechanism, so that information loss caused by discarding the Fourier components carrying important information can be reduced, the accuracy of model prediction is improved, and the accuracy of power prediction is improved.
Based on the first embodiment, a second embodiment of the deep learning-based power prediction method of the present invention is proposed, in which step S103 includes:
step S201, for each current decoding layer, obtaining second input information through the current decoding layer, wherein the second input information comprises season item input information, remainder item input information and trend item input information;
step S202, inputting the remainder input information into a third frequency disassembly module of the current decoding layer for frequency disassembly processing to obtain first remainder information and first trend information;
step S203, inputting the first remainder information and the first season information into a second season module of the current decoding layer for processing, so as to obtain fourth season information;
Step S204, determining a third intermediate parameter by the current decoding layer based on the first remainder information and the fourth season information;
step S205, inputting the third intermediate parameter into a second forward propagation module of the current decoding layer for processing, so as to obtain a fourth intermediate parameter, and determining, by the current decoding layer, a second intermediate hidden state parameter based on the third intermediate parameter and the fourth intermediate parameter;
Step S206, inputting the second intermediate hidden state parameter into a fourth frequency disassembly module of the current decoding layer for frequency disassembly processing to obtain second trend information and remainder output information of the current decoding layer;
step S207, determining, by the current decoding layer, trend item output information of the current decoding layer based on the trend item input information, the first trend information, and the second trend information;
step S208, determining, by the current decoding layer, season term output information of the current decoding layer based on the season term input information and the fourth season term information, where if the current decoding layer is a last decoding layer of the decoder, the trend term output information is the target trend information, and the season term output information is the target season information.
In this embodiment, the decoder includes a plurality of decoding layers sequentially connected, as shown in fig. 4, including n decoding layers from a first decoding layer to an nth decoding layer, and for each current decoding layer, the current decoding layer includes a third frequency disassembling module, a second season module, a second forward propagation module, and a fourth frequency disassembling module.
After the first season information, the trend information and the second season information are acquired, they are input into the decoder of the power prediction model for processing. Specifically, for each current decoding layer, second input information is acquired through the current decoding layer, where the second input information comprises season item input information (∈ R^{D×(I/2+O)}), remainder input information (∈ R^{D×(I/2+O)}) and trend item input information (∈ R^{D×(I/2+O)}).
Further, in one possible implementation manner, if the current decoding layer is not the first decoding layer, it is determined that the season item input information is the season item output information of the previous decoding layer, the remainder input information is the remainder output information of the previous decoding layer, and the trend item input information is the trend item output information of the previous decoding layer;
and if the current decoding layer is the first decoding layer of the decoder, determining that the season item input information is preset season information, determining remainder input information based on the second season information and determining trend item input information based on the trend information.
When the current decoding layer is the first decoding layer, the season item input information ∈ R^{D×(I/2+O)} can be initialized to 0. After the trend item data X_T and the season item data X_S obtained by the first frequency disassembly module are input into the decoder, the season item data and the trend item data are sliced respectively to obtain the seasonal sequence and the trend sequence of the second half of the input length, and the seasonal sequence is filled with 0 values to length (I/2+O) through the Padding() function to obtain the remainder input information. The specific formula is:

X_R^{de} = Padding(X_{S,I/2:I})

wherein X_R^{de} is the remainder input information and X_{S,I/2:I} is the seasonal sequence.
For the trend sequence, the decoder first obtains the mean of the trend sequence through Mean(), performs average expansion through the broadcasting mechanism Broadcast() to obtain an extended sequence, and concatenates the extended sequence with the trend sequence to obtain the trend item input information. The specific formulas are:

X_{ext} = Broadcast(Mean(X_{T,I/2:I}))

X_T^{de} = Concat(X_{T,I/2:I}, X_{ext})

wherein X_{ext} ∈ R^{D×O} is the extended sequence, X_{T,I/2:I} ∈ R^{D×(I/2)} is the trend sequence, and X_T^{de} ∈ R^{D×(I/2+O)} is the trend item input information. Padding() is a filling function in the time dimension, Broadcast() is a broadcasting function over the O length, Concat() is a concatenation function in the time dimension, Mean() is an averaging function in the time dimension, D is the number of power features of the historical power characteristic data, O is the prediction time length of the prediction result, and I is the historical time length of the historical power characteristic data.
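The construction of the three decoder inputs for the first decoding layer can be sketched as follows; the variable names and the (D features × time steps) tensor layout are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def init_decoder_inputs(season: torch.Tensor, trend: torch.Tensor, pred_len: int):
    """Build the season, remainder and trend inputs of the first decoding
    layer from the second season information and the trend information
    (both of shape (D, I)) produced by the first frequency disassembly module."""
    D, I = season.shape
    season_half = season[:, I // 2:]                       # seasonal sequence, (D, I/2)
    trend_half = trend[:, I // 2:]                         # trend sequence, (D, I/2)

    # season item input information: initialized to 0, length I/2 + O
    season_in = torch.zeros(D, I // 2 + pred_len)

    # remainder input information: seasonal sequence zero-padded to length I/2 + O
    remainder_in = F.pad(season_half, (0, pred_len))

    # trend item input information: trend sequence concatenated with its mean
    # broadcast over the O prediction steps
    mean_ext = trend_half.mean(dim=-1, keepdim=True).expand(D, pred_len)
    trend_in = torch.cat([trend_half, mean_ext], dim=-1)

    return season_in, remainder_in, trend_in

# usage: s_in, r_in, t_in = init_decoder_inputs(torch.randn(3, 96), torch.randn(3, 96), pred_len=48)
```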
For each current decoding layer, after the second input information is obtained, the remainder input information is input into the third frequency disassembly module of the current decoding layer for frequency disassembly processing to obtain the first remainder information and the first trend information. The processing procedure of the third frequency disassembly module is the same as that of the first frequency disassembly module (see the processing procedure of the first frequency disassembly module in the fourth embodiment): after the remainder input information is input into the third frequency disassembly module for frequency disassembly processing, trend item data X_T and season item data X_S are likewise obtained. In this embodiment, the trend item data X_T output by the third frequency disassembly module is the first trend information and the season item data X_S is the first remainder information. A specific formula of the third frequency disassembly module may be:

T_1^{de}, R_1^{de} = Decomp(X_R^{de})

wherein Decomp() is the frequency disassembly function of the third frequency disassembly module, X_R^{de} is the remainder input information, T_1^{de} is the first trend information, and R_1^{de} is the first remainder information (here the current decoding layer is the first decoding layer of the decoder).
After the first remainder information and the first trend information are acquired, the current decoding layer inputs the first remainder information and the first season information into the second season module for processing to obtain the fourth season information. A specific formula of the fourth season information can be:

S_4 = Season(R_1^{de}, S^{en})

wherein Season() denotes the processing of the second season module, R_1^{de} is the first remainder information, S^{en} is the first season information, and S_4 is the fourth season information.
Further, in one possible implementation manner, the step S203 includes:
step S2031, performing linear mapping on the first remainder information through the second season module to obtain a query vector of an attention mechanism, and performing linear mapping on the first season information through the second season module to obtain a key and a value of the attention mechanism;
Step S2032, determining, by the second season module, a weighted attention representation in a time dimension corresponding to the first remainder information based on the query vector, key, and value;
step S2033, determining, by the second season module, an amplitude corresponding to the periodic encoding function based on the weighted attention representation;
step S2034, determining, by the second season module, the fourth season information based on the time vector corresponding to the time sequence length of the time sequence data and the amplitude.
In this embodiment, the structure of the second season module is shown in FIG. 5. After the current decoding layer inputs the first remainder information and the first season information into the second season module, the second season module performs linear mapping on the first remainder information to obtain the query vector Q ∈ R^{D×L} of the attention mechanism; at the same time, the second season module performs linear mapping on the first season information to obtain the key K ∈ R^{D×L} and the value V ∈ R^{D×L} of the attention mechanism. The specific formulas are: Q = W_Q S + B_Q, K = W_K S' + B_K, V = W_V S' + B_V, wherein W_Q ∈ R^{D×D}, W_K ∈ R^{D×D}, W_V ∈ R^{D×D}, B_Q ∈ R^{D×L}, B_K ∈ R^{D×L}, B_V ∈ R^{D×L}; W_Q, W_K and W_V are the corresponding weight matrices, B_Q, B_K and B_V are the corresponding bias terms, L is the time-series length of the input data (the first remainder information or the first season information), D is the number of power features of the time-series data, S is the first remainder information, and S' is the first season information.
The second season module determines the weighted attention representation in the time dimension corresponding to the first remainder information based on the query vector, the key and the value:

M = Attention(Q, K, V)

wherein M ∈ R^{D×L} is the weighted attention representation, K is the key, V is the value, Q is the query vector, and D is the number of power features of the time-series data.

After the weighted attention representation is acquired, the second season module determines the amplitude corresponding to the periodic encoding function based on the weighted attention representation: θ_S = M W_{θS} + B_{θS}, wherein W_{θS} ∈ R^{D×D}, B_{θS} ∈ R^{D×L}; W_{θS} is the corresponding weight matrix, B_{θS} is the corresponding bias term, θ_S is the amplitude, and M is the weighted attention representation.
After the amplitude is obtained, the second season module determines the fourth season information based on the amplitude and the time vector corresponding to the time-series length of the time-series data; specifically, the fourth season information is obtained by applying the periodic encoding function, parameterized by the amplitude θ_S, to the time vector t, wherein θ_S is the amplitude, H is the dimension of the preset hidden state, t = [0, 1/L, …, (L−1)/L] ∈ R^L is the time vector, and L is the time-series length of the input data (the first remainder information or the first trend information).
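A minimal sketch of such an attention-based season module is given below. The Q/K/V and amplitude linear maps follow the formulas above; the attention weighting and the periodic encoding function are written as plausible assumptions (scaled dot-product attention over the time dimension and a cosine basis), since their exact closed forms are not reproduced in this text.

```python
import torch
import torch.nn as nn

class SeasonModule(nn.Module):
    """Sketch of the attention-based season module of a decoding layer.
    The attention form and the cosine periodic basis are assumptions."""

    def __init__(self, d_feat: int, seq_len: int):
        super().__init__()
        scale = d_feat ** -0.5
        self.W_q = nn.Parameter(torch.randn(d_feat, d_feat) * scale)
        self.W_k = nn.Parameter(torch.randn(d_feat, d_feat) * scale)
        self.W_v = nn.Parameter(torch.randn(d_feat, d_feat) * scale)
        self.W_a = nn.Parameter(torch.randn(d_feat, d_feat) * scale)   # amplitude weight
        self.B_q = nn.Parameter(torch.zeros(d_feat, seq_len))
        self.B_k = nn.Parameter(torch.zeros(d_feat, seq_len))
        self.B_v = nn.Parameter(torch.zeros(d_feat, seq_len))
        self.B_a = nn.Parameter(torch.zeros(d_feat, seq_len))          # amplitude bias

    def forward(self, remainder: torch.Tensor, season_enc: torch.Tensor) -> torch.Tensor:
        # remainder:  (D, L) first remainder information -> query
        # season_enc: (D, L) first season information    -> key and value
        D, L = remainder.shape
        Q = self.W_q @ remainder + self.B_q
        K = self.W_k @ season_enc + self.B_k
        V = self.W_v @ season_enc + self.B_v

        # weighted attention representation M over the time dimension (assumed form)
        attn = torch.softmax(Q.transpose(0, 1) @ K / D ** 0.5, dim=-1)    # (L, L)
        M = (attn @ V.transpose(0, 1)).transpose(0, 1)                    # (D, L)

        theta = self.W_a @ M + self.B_a                                   # amplitudes of the periodic encoding
        t = torch.arange(L, dtype=remainder.dtype) / L                    # time vector [0, 1/L, ..., (L-1)/L]
        k = torch.arange(1, L + 1, dtype=remainder.dtype)
        basis = torch.cos(2 * torch.pi * k[:, None] * t[None, :])         # assumed cosine periodic basis, (L, L)
        return theta @ basis                                              # fourth season information, (D, L)

# usage: s4 = SeasonModule(d_feat=3, seq_len=64)(torch.randn(3, 64), torch.randn(3, 64))
```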
After the fourth season information is acquired, the current decoding layer determines the third intermediate parameter based on the first remainder information and the fourth season information; specifically, the difference between the first remainder information and the fourth season information is taken as the third intermediate parameter:

R = R_1^{de} − S_4

wherein R is the third intermediate parameter, S_4 is the fourth season information, and R_1^{de} is the first remainder information.
After the third intermediate parameter is obtained, the third intermediate parameter is input into the second forward propagation module of the current decoding layer for processing to obtain the fourth intermediate parameter, where the second forward propagation module integrates the input feature information using several fully connected layers. The specific calculation formulas of the second forward propagation module are:

U_1 = ReLU(W_1 R + B_1)

U_2 = W_2 U_1 + B_2

wherein R is the third intermediate parameter, U_2 is the fourth intermediate parameter, W_1 ∈ R^{D'×D}, W_2 ∈ R^{D×D'}, B_1 ∈ R^{D'×L}, B_2 ∈ R^{D×L}; W_1, W_2, B_1 and B_2 are all learnable parameters of the second forward propagation module, ReLU() is an activation function, D' is a preset parameter (in this embodiment, D' = 4D), and D is the number of power features of the time-series data.
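The second forward propagation module can be transcribed almost directly from the two formulas above; the sketch below assumes a (D, L) tensor layout and random weight initialization for illustration:

```python
import torch
import torch.nn as nn

class ForwardModule(nn.Module):
    """U1 = ReLU(W1 R + B1), U2 = W2 U1 + B2, with D' = 4D as in this
    embodiment.  Weights act on the feature dimension."""

    def __init__(self, d_feat: int, seq_len: int):
        super().__init__()
        d_hidden = 4 * d_feat                                            # D' = 4D
        self.W1 = nn.Parameter(torch.randn(d_hidden, d_feat) / d_feat ** 0.5)
        self.B1 = nn.Parameter(torch.zeros(d_hidden, seq_len))
        self.W2 = nn.Parameter(torch.randn(d_feat, d_hidden) / d_hidden ** 0.5)
        self.B2 = nn.Parameter(torch.zeros(d_feat, seq_len))

    def forward(self, R: torch.Tensor) -> torch.Tensor:
        # R: (D, L) third intermediate parameter
        U1 = torch.relu(self.W1 @ R + self.B1)                           # (D', L)
        return self.W2 @ U1 + self.B2                                    # fourth intermediate parameter, (D, L)

# usage: U2 = ForwardModule(d_feat=3, seq_len=64)(torch.randn(3, 64))
```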
After the fourth intermediate parameter is obtained, the current decoding layer determines the second intermediate hidden state parameter based on the third intermediate parameter and the fourth intermediate parameter; specifically, the second intermediate hidden state parameter is the sum of the third intermediate parameter and the fourth intermediate parameter:

Z = R + U_2

wherein R is the third intermediate parameter, U_2 is the fourth intermediate parameter, and Z is the second intermediate hidden state parameter.
After the second intermediate hidden state parameter is obtained, the second intermediate hidden state parameter is input into the fourth frequency disassembly module of the current decoding layer for frequency disassembly processing to obtain the second trend information and the remainder output information of the current decoding layer. The processing procedure of the fourth frequency disassembly module is the same as that of the first frequency disassembly module (see the processing procedure of the first frequency disassembly module in the fourth embodiment): after the second intermediate hidden state parameter is input into the fourth frequency disassembly module for frequency disassembly processing, trend item data X_T and season item data X_S are likewise obtained. In this embodiment, the trend item data X_T output by the fourth frequency disassembly module is the second trend information and the season item data X_S is the remainder output information of the current decoding layer. The specific formula of the fourth frequency disassembly module may be:

T_2^{de}, R^{out} = Decomp(Z)

wherein R^{out} ∈ R^{D×(I/2+O)} is the remainder output information, T_2^{de} ∈ R^{D×(I/2+O)} is the second trend information, Z is the second intermediate hidden state parameter, and Decomp() is the frequency disassembly function of the fourth frequency disassembly module.
After the second trend information is obtained, the current decoding layer determines the trend item output information of the current decoding layer based on the trend item input information, the first trend information and the second trend information; specifically, the trend item output information is the sum of the trend item input information, the first trend information and the second trend information:

X_T^{out} = X_T^{de} + T_1^{de} + T_2^{de}

wherein X_T^{out} is the trend item output information, T_1^{de} is the first trend information, T_2^{de} is the second trend information, and X_T^{de} is the trend item input information.
Finally, the current decoding layer determines the season item output information of the current decoding layer based on the season item input information and the fourth season information; specifically, the season item output information is the sum of the season item input information and the fourth season information:

X_S^{out} = X_S^{de} + S_4

wherein X_S^{out} is the season item output information, S_4 is the fourth season information, and X_S^{de} is the season item input information.
And if the current decoding layer is the last decoding layer of the decoder, the trend item output information is the target trend information, and the season item output information is the target season information.
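Putting the steps of one decoding layer together, a schematic forward pass might look like the following sketch; the three callables stand in for the frequency disassembly, season and forward propagation modules sketched earlier and are passed as arguments so the example stays self-contained:

```python
def decoder_layer(season_in, remainder_in, trend_in, season_enc,
                  freq_decomp, season_module, forward_module):
    """One decoding layer (steps S201-S208), written schematically."""
    # frequency disassembly of the remainder input -> first trend / first remainder information
    trend_1, remainder_1 = freq_decomp(remainder_in)

    # attention-based season module -> fourth season information
    season_4 = season_module(remainder_1, season_enc)

    # third intermediate parameter: difference of remainder and season terms
    R = remainder_1 - season_4

    # forward propagation module plus residual connection -> second intermediate hidden state
    hidden = R + forward_module(R)

    # frequency disassembly of the hidden state -> second trend information / remainder output
    trend_2, remainder_out = freq_decomp(hidden)

    # layer outputs
    trend_out = trend_in + trend_1 + trend_2
    season_out = season_in + season_4
    return season_out, remainder_out, trend_out

# usage (with the earlier sketches):
#   s_out, r_out, t_out = decoder_layer(s_in, r_in, t_in, season_enc,
#                                       freq_decomp=frequency_decomp,
#                                       season_module=SeasonModule(3, 144),
#                                       forward_module=ForwardModule(3, 144))
```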
In this embodiment, after the first season information, the trend information and the second season information are acquired, they are input into the decoder of the power prediction model for processing, where the decoder comprises a plurality of decoding layers connected in sequence and each decoding layer comprises a second season module of an attention mechanism. For each decoding layer, the remainder input information and the first season information are processed by the second season module of the decoding layer, and the decoding layer processes the resulting data together with the trend information to obtain season term output information, remainder output information and trend term output information; if the decoding layer is the last decoding layer of the decoder, the trend term output information is the target trend information and the season term output information is the target season information.
According to the deep learning-based power prediction method, for each current decoding layer, second input information is obtained through the current decoding layer, wherein the second input information comprises season item input information, remainder input information and trend item input information; the remainder input information is input into the third frequency disassembly module of the current decoding layer for frequency disassembly processing to obtain first remainder information and first trend information; the first remainder information and the first season information are input into the second season module of the current decoding layer for processing to obtain fourth season information; a third intermediate parameter is determined by the current decoding layer based on the first remainder information and the fourth season information; the third intermediate parameter is input into the second forward propagation module of the current decoding layer for processing to obtain a fourth intermediate parameter, and a second intermediate hidden state parameter is determined by the current decoding layer based on the third intermediate parameter and the fourth intermediate parameter; the second intermediate hidden state parameter is input into the fourth frequency disassembly module of the current decoding layer for frequency disassembly processing to obtain second trend information and the remainder output information of the current decoding layer; trend item output information of the current decoding layer is determined based on the trend item input information, the first trend information and the second trend information; and season item output information of the current decoding layer is determined based on the season item input information and the fourth season information, wherein if the current decoding layer is the last decoding layer of the decoder, the trend item output information is the target trend information and the season item output information is the target season information. The target season information and the target trend information can thus be accurately obtained through the decoder, and the important frequency-domain Fourier components are screened through the second season module of the attention mechanism, so that the information loss caused by discarding Fourier components carrying important information can be reduced, the accuracy of model prediction is improved, and the accuracy of power prediction is improved. The frequency disassembly operations of the third and fourth frequency disassembly modules obtain the Fourier components of the sequence through a Fourier transform over the whole sequence and set a threshold based on the energy proportion to divide the trend components and the seasonal components, so that the seasonal term and the trend term of the sequence are separated; meanwhile, because the frequency disassembly module processes the whole sequence, the long-term dependence of the seasonal term is taken into account, achieving high-quality sequence decomposition.
Based on the first embodiment, a third embodiment of the deep learning-based power prediction method of the present invention is proposed, in which step S101 includes:
step S301, for each current coding layer, acquiring first input information through a second frequency disassembly module of the current coding layer, where if the current coding layer is a first coding layer of the encoder, the first input information is the time sequence data, and if the current coding layer is not the first coding layer, the first input information is output information of a previous coding layer;
step S302, frequency disassembly processing is carried out on the first input information through the second frequency disassembly module, and third season information is obtained;
step S303, inputting the third seasonal information into a first forward propagation module of the current coding layer for processing to obtain a first intermediate parameter, and determining a first intermediate hidden state parameter by the current coding layer based on the third seasonal information and the first intermediate parameter;
step S304, inputting the first intermediate hidden state parameter into a first season module of the current coding layer for processing to obtain a second intermediate parameter;
step S305, determining, by the current coding layer, output information of the current coding layer based on the second intermediate parameter and the first intermediate hidden state parameter, where if the current coding layer is the last coding layer of the encoder, the output information of the current coding layer is determined to be the first season information.
In this embodiment, the encoder includes a plurality of coding layers sequentially connected, as shown in fig. 6, including a first coding layer to an nth coding layer, and for each current coding layer, the current coding layer includes a second frequency disassembly module, a first forward propagation module, and a first season module. After the time sequence data is acquired, the encoder inputs the time sequence data into the first coding layer for processing, and carries out subsequent processing through the coding layers which are connected in sequence until output information of the last coding layer, namely first season information, is obtained.
For each current coding layer, if the current coding layer is the first coding layer of the encoder, the first input information is the time sequence data, and if the current coding layer is not the first coding layer, the first input information is the output information of the previous coding layer.
After the current coding layer obtains the first input information, the first input information is input into the second frequency disassembly module for frequency disassembly processing to obtain the third season information. The processing procedure of the second frequency disassembly module is the same as that of the first frequency disassembly module; referring to the processing procedure of the first frequency disassembly module in the fourth embodiment, after the first input information is input into the second frequency disassembly module for frequency disassembly processing, trend item data X_T and season item data X_S are likewise obtained. In this embodiment, the trend item data X_T output by the second frequency disassembly module is discarded, and the season item data X_S is taken as the third season information. The specific formula of the second frequency disassembly module may be:

X_S^l, X_T^l = Decomp(X^{l-1})

wherein Decomp() is the frequency disassembly function of the second frequency disassembly module, X^{l-1} is the first input information (the time sequence data when the current coding layer is the first coding layer of the encoder), X_S^l ∈ R^{I×D} is the third season information, and X_T^l ∈ R^{I×D} is the trend item data output by the second frequency disassembly module.
After the third season information is obtained, the third season information is input into the first forward propagation module of the current coding layer for processing to obtain the first intermediate parameter. The processing procedure of the first forward propagation module is the same as that of the second forward propagation module; referring to the processing procedure of the second forward propagation module in the second embodiment, the first intermediate parameter can be obtained after the third season information is input into the first forward propagation module for processing.
After the first intermediate parameter is acquired, the current coding layer determines a first intermediate hidden state parameter based on the third season information and the first intermediate parameter; specifically, the first intermediate hidden state parameter can be obtained by adding the third season information and the first intermediate parameter, and the specific formula is as follows:
H^l = X_S^l + FeedForward(X_S^l)

wherein X_S^l is the third season information, FeedForward(X_S^l) is the first intermediate parameter, and H^l is the first intermediate hidden state parameter.
After the first intermediate hidden state parameter is obtained, the first intermediate hidden state parameter is input into the first season module for processing to obtain the second intermediate parameter. The processing procedure of the first season module is the same as that of the second season module; referring to the processing procedure of the second season module in the second embodiment, the first intermediate hidden state parameter is taken as the input S of that procedure, L is the time series length of the input data (the first intermediate hidden state parameter), D is the number of power characteristics of the time sequence data, and the output of the first season module is the second intermediate parameter.
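As a rough illustration of the attention-based season module referenced here (its detailed procedure is given in the second embodiment and summarized later in claim 4: a query is derived from one input, keys and values from another, the weighted attention representation yields amplitude values, and the output is a periodic encoding evaluated on the time vector), the sketch below is one possible reading. The sinusoidal basis, the projection across the time dimension, the number of frequencies, and all layer names are assumptions, not the patent's design.

```python
from typing import Optional
import torch
import torch.nn as nn

class SeasonModule(nn.Module):
    # Hypothetical sketch: attention produces a weighted representation, which is
    # projected to amplitudes of a sinusoidal (periodic) basis evaluated on the
    # time vector t = k/L. Basis choice, projection, and names are assumptions.
    def __init__(self, seq_len: int, d_model: int, n_freq: int = 8):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.to_amp = nn.Linear(seq_len, 2 * n_freq)   # amplitude values per feature
        self.n_freq = n_freq

    def forward(self, x_q: torch.Tensor, x_kv: Optional[torch.Tensor] = None) -> torch.Tensor:
        # x_q: (L, D) query source; x_kv: key/value source. The encoder-side first
        # season module would pass only the hidden state (self-attention); the
        # decoder-side second season module passes the first season information as x_kv.
        x_kv = x_q if x_kv is None else x_kv
        q, k, v = self.q(x_q), self.k(x_kv), self.v(x_kv)
        attn = torch.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)
        weighted = attn @ v                             # weighted attention representation, (L, D)
        amp = self.to_amp(weighted.T).T                 # (2*n_freq, D) amplitude values
        L = x_q.shape[0]
        t = torch.arange(L, dtype=torch.float32) / L    # time vector
        freqs = torch.arange(1, self.n_freq + 1, dtype=torch.float32)
        basis = torch.cat([torch.cos(2 * torch.pi * t[:, None] * freqs),
                           torch.sin(2 * torch.pi * t[:, None] * freqs)], dim=-1)  # (L, 2*n_freq)
        return basis @ amp                              # (L, D) seasonal output
```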
After the second intermediate parameter is obtained, the current coding layer determines the output information of the current coding layer based on the second intermediate parameter and the first intermediate hidden state parameter, where the output information of the current coding layer is the sum of the second intermediate parameter and the first intermediate hidden state parameter. The specific formula is as follows:

X^l = Season(H^l) + H^l

wherein Season(H^l) is the second intermediate parameter, H^l is the first intermediate hidden state parameter, and X^l is the output information of the current coding layer.
And if the current coding layer is the last coding layer of the coder, determining that the output information of the current coding layer is the first season information.
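Putting steps S301 to S305 together, a minimal sketch of one coding layer could look as follows, with `decompose` and `season_module` standing in for the second frequency disassembly module and the attention-based first season module; the feed-forward width and all names are assumed for illustration.

```python
import torch
import torch.nn as nn

class CodingLayer(nn.Module):
    # Sketch of one coding layer following steps S301-S305. `decompose` returns
    # (trend, season), with the trend part discarded; `season_module` stands in
    # for the attention-based first season module.
    def __init__(self, d_model: int, decompose, season_module, d_ff: int = 256):
        super().__init__()
        self.decompose = decompose
        self.season_module = season_module
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))  # first forward propagation module

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, s = self.decompose(x)              # third season information (trend output discarded)
        h = s + self.ff(s)                    # first intermediate hidden state parameter
        return h + self.season_module(h)      # output information of the current coding layer
```

Stacking N such layers in sequence and taking the output of the last layer would then give the first season information that is passed to the decoder.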
According to the deep-learning-based power prediction method provided in this embodiment, for each current coding layer, first input information is obtained through the second frequency disassembly module of the current coding layer, where if the current coding layer is the first coding layer of the encoder, the first input information is the time sequence data, and if the current coding layer is not the first coding layer, the first input information is the output information of the previous coding layer; frequency disassembly processing is performed on the first input information through the second frequency disassembly module to obtain the third season information; the third season information is input into the first forward propagation module of the current coding layer for processing to obtain the first intermediate parameter, and the first intermediate hidden state parameter is determined by the current coding layer based on the third season information and the first intermediate parameter; the first intermediate hidden state parameter is input into the first season module of the current coding layer for processing to obtain the second intermediate parameter; and the output information of the current coding layer is determined by the current coding layer based on the second intermediate parameter and the first intermediate hidden state parameter, where if the current coding layer is the last coding layer of the encoder, the output information of the current coding layer is determined to be the first season information. In this way, the first season information can be obtained accurately through the encoder, and the first season module of the attention mechanism screens the important frequency-domain Fourier components, which reduces the information loss caused by discarding Fourier components that carry important information, improves the prediction precision of the model, and thus improves the accuracy of power prediction. The frequency disassembly operation of the second frequency disassembly module obtains the Fourier components of the sequence by applying a Fourier transform to the whole sequence, and sets a threshold based on the energy ratio to divide the trend components and the seasonal components, so that the seasonal items and trend items of the sequence are separated; because the frequency disassembly module processes the whole sequence, the long-term dependence of the seasonal items is taken into account, achieving high-quality sequence decomposition.
Based on the first embodiment, a fourth embodiment of the deep learning-based power prediction method of the present invention is proposed, in which step S104 includes:
step S401, performing fast fourier transform on the time sequence data by using the first frequency disassembly module to obtain frequency domain data corresponding to the time sequence data;
step S402, performing decomposition processing on the frequency domain data by using a Fourier decomposition module of the first frequency disassembly module, so as to obtain a low-frequency component and a high-frequency component corresponding to the frequency domain data;
step S403, performing inverse fast fourier transform on the low-frequency component by the first frequency disassembly module to obtain the trend information, and performing inverse fast fourier transform on the high-frequency component by the first frequency disassembly module to obtain the second season information.
In this embodiment, the structure of the frequency disassembly module is shown in fig. 7. After the power prediction model obtains the time sequence data, the first frequency disassembly module performs a fast Fourier transform on the time sequence data to obtain the frequency domain data corresponding to the time sequence data, i.e., F = FFT(X), where X is the time sequence data and F ∈ C^{D×L'} is the frequency domain data.
Then, the Fourier decomposition module of the first frequency disassembly module performs decomposition processing on the frequency domain data to obtain the low-frequency component and the high-frequency component corresponding to the frequency domain data. Specifically, the Fourier decomposition module first determines the number m of components, and then decomposes the frequency domain data into the low-frequency component and the high-frequency component. The specific formulas are as follows:

F_T = Padding(F[:, :m]);

F_S = Padding(F[:, m:]);

m = min{ m : ( Σ_{i=1}^{m} |F_i|² ) / ( Σ_{i=1}^{L'} |F_i|² ) ≥ α }

wherein Padding() is a filling function in the time dimension, F is the frequency domain data, m is the number of components, F_T ∈ C^{D×L'} is the low-frequency component, F_S ∈ C^{D×L'} is the high-frequency component, α is the energy-ratio threshold of the low-frequency Fourier components and is set to 0.8, F_i is the i-th component of the frequency domain data, L is the time series length of the input data (the time sequence data), L' is the number of Fourier components in the frequency domain data, and D is the number of power characteristics of the time sequence data.
Then, the low-frequency component is subjected to an inverse fast Fourier transform through the first frequency disassembly module to obtain the trend information, and the high-frequency component is subjected to an inverse fast Fourier transform to obtain the second season information. The formulas of the inverse fast Fourier transform are X_T = IFFT(F_T) and X_S = IFFT(F_S), where X_T is the trend information and X_S is the second season information.
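A minimal sketch of this frequency disassembly operation is given below. It assumes a real FFT along the time axis and zero-filling in place of the Padding() function; the function name, the per-feature handling of the energy-ratio threshold, and the use of PyTorch are illustrative assumptions rather than the patent's implementation.

```python
import torch

def frequency_decompose(x: torch.Tensor, alpha: float = 0.8):
    """Sketch of the frequency disassembly module. x: (L, D) time series.
    Returns (trend, season), both (L, D)."""
    L = x.shape[0]
    F = torch.fft.rfft(x, dim=0)                         # frequency domain data
    energy = F.abs() ** 2
    ratio = energy.cumsum(dim=0) / energy.sum(dim=0, keepdim=True).clamp_min(1e-12)
    passes = (ratio >= alpha).all(dim=1)                 # components reaching the energy ratio
    idx = torch.nonzero(passes)
    m = int(idx[0]) + 1 if idx.numel() > 0 else F.shape[0]   # smallest m with ratio >= alpha
    F_T = torch.zeros_like(F); F_T[:m] = F[:m]           # low-frequency components, zero-padded
    F_S = torch.zeros_like(F); F_S[m:] = F[m:]           # high-frequency components, zero-padded
    trend = torch.fft.irfft(F_T, n=L, dim=0)             # trend information X_T
    season = torch.fft.irfft(F_S, n=L, dim=0)            # second season information X_S
    return trend, season
```

Calling frequency_decompose(x) on an (L, D) tensor returns the trend information and the second season information described above, and the same callable can serve as the stand-in `decompose` in the coding-layer and decoding-layer sketches.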
According to the deep-learning-based power prediction method provided in this embodiment, the first frequency disassembly module is used for carrying out a fast Fourier transform on the time sequence data so as to obtain frequency domain data corresponding to the time sequence data; then, the frequency domain data is decomposed through the Fourier decomposition module of the first frequency disassembly module to obtain a low-frequency component and a high-frequency component corresponding to the frequency domain data; and then, the first frequency disassembly module performs an inverse fast Fourier transform on the low-frequency component to obtain the trend information, and performs an inverse fast Fourier transform on the high-frequency component to obtain the second season information, so that the trend information and the second season information can be accurately obtained from the time sequence data, and the accuracy and the efficiency of power prediction are further improved.
Based on the first embodiment, a fifth embodiment of the deep learning-based power prediction method of the present invention is proposed, in which step S104 includes:
step S501, performing linear transformation on the target trend information through the trend module to obtain trend information after linear transformation;
Step S502, inputting a time vector corresponding to the time sequence length of the target trend information and the trend information after linear transformation into a trend function of the trend module for fitting processing so as to obtain the trend prediction information.
In this embodiment, after target trend information is obtained, the target trend information is input into a trend module of the power prediction model for processing, and the trend module performs linear transformation of a time dimension on the target trend information to obtain trend information after linear transformation, where a specific formula is as follows:
θ_T = T · W_θT + B_θT

wherein θ_T is the trend information after linear transformation, T is the target trend information, W_θT ∈ C^{L×P} is a weight matrix, B_θT ∈ C^{D×P} is a bias term, and L is the time series length of the input data (the target trend information).
After the trend information after the linear transformation is obtained, the trend module inputs a time vector corresponding to the time sequence length of the target trend information and the trend information after the linear transformation into a trend function for fitting processing so as to obtain the trend prediction information, wherein the specific formula is as follows:
Ŷ_T = Σ_{p=0}^{P-1} t^p · θ_{T,p}

wherein Ŷ_T is the trend prediction information, t = [0, 1/L, …, (L-1)/L] ∈ R^L is the time vector, t^p denotes the element-wise p-th power of the time vector, L is the time sequence length of the input data (the target trend information), θ_{T,p} is the p-th column of the trend information θ_T after linear transformation, and P is the number of polynomial terms.
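The polynomial trend fitting can be sketched as follows, assuming the trend function is a sum of powers of the time vector weighted by the linearly transformed coefficients; the module name, the polynomial degree, and the use of nn.Linear (whose bias has a different shape from B_θT) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TrendModule(nn.Module):
    # Sketch of the polynomial trend module: a linear map over the time dimension
    # produces per-feature polynomial coefficients, evaluated on t = l/L.
    def __init__(self, seq_len: int, degree: int = 3):
        super().__init__()
        self.proj = nn.Linear(seq_len, degree + 1)        # linear transformation of T
        self.degree = degree

    def forward(self, trend: torch.Tensor) -> torch.Tensor:
        # trend: (L, D) target trend information
        L = trend.shape[0]
        theta = self.proj(trend.T)                        # (D, P) polynomial coefficients
        t = torch.arange(L, dtype=torch.float32) / L      # time vector [0, 1/L, ..., (L-1)/L]
        powers = torch.stack([t ** p for p in range(self.degree + 1)], dim=-1)  # (L, P)
        return powers @ theta.T                           # (L, D) trend prediction information
```

For example, TrendModule(seq_len=L)(target_trend) returns an (L, D) trend prediction evaluated on the same time vector as the input.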
According to the deep learning-based power prediction method provided by the embodiment, the trend module is used for carrying out linear transformation on the target trend information so as to obtain trend information after linear transformation; and then, inputting a time vector corresponding to the time sequence length of the target trend information and the trend information after linear transformation into a trend function of the trend module for fitting treatment so as to obtain the trend prediction information, and fitting the general trend characteristics of the data by the trend module based on a polynomial fitting mode, so that the learning capability of the model on the trend information is improved, and the accuracy of power prediction is further improved.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a deep learning-based power prediction program is stored on the computer-readable storage medium, and when the deep learning-based power prediction program is executed by a processor, the steps of the deep learning-based power prediction method described above are implemented.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (10)

1. A deep learning-based power prediction method, characterized by comprising the following steps:
inputting time sequence data corresponding to historical power characteristic data into an encoder of a power prediction model for processing to obtain first season information corresponding to the time sequence data, wherein the encoder comprises a plurality of encoding layers which are sequentially connected, and the encoding layers comprise a first season module of an attention mechanism;
inputting the time sequence data into a first frequency disassembly module of the power prediction model for frequency disassembly processing to obtain trend information and second season information;
inputting the first season information, the trend information and the second season information into a decoder of the power prediction model for processing to obtain target season information and target trend information corresponding to the time sequence data, wherein the decoder comprises a plurality of decoding layers which are connected in sequence, and the decoding layers comprise a second season module of an attention mechanism;
inputting the target trend information into a trend module of the power prediction model for processing so as to obtain trend prediction information;
And determining a power prediction result based on the target season information and the trend prediction information through the power prediction model.
2. The deep learning-based power prediction method of claim 1, wherein the step of inputting the time series data into an encoder of a power prediction model to process to obtain the first season information corresponding to the time series data comprises:
for each current coding layer, acquiring first input information through a second frequency disassembly module of the current coding layer, wherein if the current coding layer is a first coding layer of the encoder, the first input information is the time sequence data, and if the current coding layer is not the first coding layer, the first input information is the output information of the previous coding layer;
performing frequency dismantling processing on the first input information through the second frequency dismantling module to obtain third season information;
inputting the third seasonal information into a first forward propagation module of the current coding layer for processing to obtain a first intermediate parameter, and determining a first intermediate hidden state parameter by the current coding layer based on the third seasonal information and the first intermediate parameter;
Inputting the first intermediate hidden state parameter into a first season module of the current coding layer for processing to obtain a second intermediate parameter;
and determining, by the current coding layer, output information of the current coding layer based on the second intermediate parameter and the first intermediate hidden state parameter, where if the current coding layer is a last coding layer of the encoder, determining that the output information of the current coding layer is the first season information.
3. The deep learning-based power prediction method as claimed in claim 1, wherein the step of inputting the first season information, the trend information, and the second season information into a decoder of the power prediction model to be processed, to obtain the target season information and the target trend information corresponding to the time series data, comprises:
for each current decoding layer, obtaining second input information through the current decoding layer, wherein the second input information comprises season item input information, remainder item input information and trend item input information;
inputting the remainder input information into a third frequency disassembly module of the current decoding layer to perform frequency disassembly processing to obtain first remainder information and first trend information;
Inputting the first remainder information and the first season information into a second season module of the current decoding layer for processing so as to obtain fourth season information;
determining a third intermediate parameter by the current decoding layer based on the first remainder information and the fourth season information;
inputting the third intermediate parameter into a second forward propagation module of the current decoding layer for processing to obtain a fourth intermediate parameter, and determining a second intermediate hidden state parameter by the current decoding layer based on the third intermediate parameter and the fourth intermediate parameter;
inputting the second intermediate hidden state parameter into a fourth frequency disassembly module of the current decoding layer to perform frequency disassembly processing to obtain second trend information and remainder output information of the current decoding layer;
determining trend item output information of the current decoding layer based on the trend item input information, the first trend information and the second trend information by the current decoding layer;
and determining, by the current decoding layer, season term output information of the current decoding layer based on the season term input information and the fourth season term information, wherein if the current decoding layer is a last decoding layer of the decoder, the trend term output information is the target trend information, and the season term output information is the target season information.
4. The deep learning based power prediction method as claimed in claim 3, wherein the step of inputting the first remainder information and the first season information into the second season module of the current decoding layer to be processed to obtain fourth season information comprises:
the first remainder information is subjected to linear mapping through the second season module so as to obtain a query vector of an attention mechanism, and the first season information is subjected to linear mapping through the second season module so as to obtain keys and values of the attention mechanism;
determining, by the second seasonal module, a weighted attention representation in a time dimension corresponding to the first remainder information based on the query vector, key, and value;
determining, by the second seasonal module, an amplitude value corresponding to the periodic encoding function based on the weighted attention representation;
and determining the fourth season information through the second season module based on the time vector corresponding to the time sequence length of the time sequence data and the amplitude.
5. The deep learning-based power prediction method of claim 3, wherein if the current decoding layer is a first decoding layer of the decoder, determining the season item input information as preset season information, determining the remainder input information based on the second season information, and determining the trend item input information based on the trend information;
if the current decoding layer is not the first decoding layer, determining that the season item input information is the season item output information of the previous decoding layer, the remainder input information is the remainder output information of the previous decoding layer, and the trend item input information is the trend item output information of the previous decoding layer.
6. The deep learning-based power prediction method as claimed in claim 1, wherein the step of inputting the time series data into the first frequency disassembly module of the power prediction model to perform frequency disassembly processing, and obtaining trend information and second season information includes:
performing fast Fourier transform on the time sequence data through the first frequency disassembly module to obtain frequency domain data corresponding to the time sequence data;
decomposing the frequency domain data through a Fourier decomposition module of the first frequency disassembly module to obtain a low-frequency component and a high-frequency component corresponding to the frequency domain data;
and performing inverse fast Fourier transform on the low-frequency component through the first frequency disassembly module to obtain the trend information, and performing inverse fast Fourier transform on the high-frequency component through the first frequency disassembly module to obtain the second season information.
7. The deep learning-based power prediction method of claim 1, wherein the step of inputting the target trend information into a trend module of the power prediction model for processing to obtain trend prediction information comprises:
performing linear transformation on the target trend information through the trend module to obtain trend information after linear transformation;
and inputting a time vector corresponding to the time sequence length of the target trend information and the trend information after linear transformation into a trend function of the trend module for fitting processing so as to obtain the trend prediction information.
8. The deep learning-based power prediction method according to any one of claims 1 to 7, wherein the step of determining a power prediction result based on the target season information and the trend prediction information by the power prediction model includes:
slicing the target season information and the trend prediction information through the electric power prediction model to obtain a predicted season item sequence and a predicted trend item sequence;
and determining the power prediction result based on the prediction season term sequence and the prediction trend term sequence through the power prediction model.
9. A deep learning-based power prediction apparatus, characterized in that the deep learning-based power prediction apparatus includes: a memory, a processor, and a deep learning based power prediction program stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the deep learning based power prediction method of any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a deep learning-based power prediction program, the deep learning based power prediction program when executed by a processor implements the steps of the deep learning based power prediction method of any one of claims 1 to 8.
CN202310687810.8A 2023-06-12 2023-06-12 Power prediction method and device based on deep learning and storage medium Active CN116415744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310687810.8A CN116415744B (en) 2023-06-12 2023-06-12 Power prediction method and device based on deep learning and storage medium


Publications (2)

Publication Number Publication Date
CN116415744A true CN116415744A (en) 2023-07-11
CN116415744B CN116415744B (en) 2023-09-19

Family

ID=87049625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310687810.8A Active CN116415744B (en) 2023-06-12 2023-06-12 Power prediction method and device based on deep learning and storage medium

Country Status (1)

Country Link
CN (1) CN116415744B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220121871A1 (en) * 2020-10-16 2022-04-21 Tsinghua University Multi-directional scene text recognition method and system based on multi-element attention mechanism
CN115049169A (en) * 2022-08-16 2022-09-13 国网湖北省电力有限公司信息通信公司 Regional power consumption prediction method, system and medium based on combination of frequency domain and spatial domain
CN116187498A (en) * 2022-11-25 2023-05-30 国网山西省电力公司大同供电公司 Photovoltaic power generation power prediction method based on frequency domain decomposition
CN115936237A (en) * 2022-12-23 2023-04-07 西南科技大学 Time series prediction method, time series prediction device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FAN XINGRUI et al.: "Short-term power load forecasting based on an improved Autoformer model", Electric Power Automation Equipment, pages 1-13 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116629316A (en) * 2023-07-26 2023-08-22 无锡雪浪数制科技有限公司 Object production model training method, device, electronic equipment and storage medium
CN116629316B (en) * 2023-07-26 2024-03-08 无锡雪浪数制科技有限公司 Object production model training method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN116415744B (en) 2023-09-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant